2014-03-27

A custom keymap for Indian languages

As we saw in the last couple of posts, keying in Indian languages on a QWERTY keyboard requires keyboard/IME software as well as a standardised way to map the Latin alphabet to the characters of the Indian language du jour. As before, I use Google's Input Tools on Windows and Lipika on OS X. Unlike a representation format (which can use diacritics or other accent marks), a key-map can only employ characters that can be typed on a QWERTY keyboard. So while I use ISO-15919 as the representation format, I needed a key-map as well. As in the previous post, here were my requirements:
  1. Meaningfulness
  2. Pan‐linguistic consistency
  3. Fidelity to pronunciation
  4. Modularity and symmetry
  5. Alphabet restrictions: the scheme must use only Latin characters to represent phonemes; the scheme may use punctuation marks to represent non-phonemic punctuation-like characters in the target language.
With these requirements in hand, I set about creating a key-map I could use. If, in the end, the key-map ended up resembling an existing "standard", I'd just stick with that instead.

I started out by identifying characters in Tamil and Sanskrit (the 2 Indian languages I write in) based on phonetics and history; this identification process is important for pan-linguistic consistency.

Vowels and Dependents
| Sanskrit (Devanagari) | ISO-15919 | Tamil | Key-map |
| अ | a | அ | |
| आ | ā | ஆ | |
| इ | i | இ | |
| ई | ī | ஈ | |
| उ | u | உ | |
| ऊ | ū | ஊ | |
| ऋ | r̥ | | |
| ॠ | r̥̄ | | |
| ऌ | l̥ | | |
| ॡ | l̥̄ | | |
| | e | எ | |
| ए | ē | ஏ | |
| ऐ | ai | ஐ | |
| | o | ஒ | |
| ओ | ō | ஓ | |
| औ | au | ஔ | |
| ं (anusvāra) | ṁ | | |
| ँ (anunāsika) | m̐ | | |
| ः (visarga) | ḥ | | |
| (upadhmānīya) | | | |
| (jihvāmūlīya) | | | |
| (āythayeṛuttu) | | ஃ | |
| ऽ (avagraha) | ' | | |

Consonants
| Sanskrit (Devanagari) | ISO-15919 [1] | Tamil | Key-map |
| क् | k | க் | |
| ख् | kh | | |
| ग् | g | | |
| घ् | gh | | |
| ङ् | ṅ | ங் | |
| च् | c | ச் | |
| छ् | ch | | |
| ज् | j | | |
| झ् | jh | | |
| ञ् | ñ | ஞ் | |
| ट् | ṭ | ட் | |
| ठ् | ṭh | | |
| ड् | ḍ | | |
| ढ् | ḍh | | |
| ण् | ṇ | ண் | |
| | ṯ | ற் | |
| | ṉ | ன் | |
| त् | t | த் | |
| थ् | th | | |
| द् | d | | |
| ध् | dh | | |
| न् | n | ந் | |
| प् | p | ப் | |
| फ् | ph | | |
| ब् | b | | |
| भ् | bh | | |
| म् | m | ம் | |
| य् | y | ய் | |
| र् | r | ர் | |
| | ṛ | ழ் | |
| ळ् | ḷ | ள் | |
| | | ல் | |
| ल् | l | | |
| व् | v | வ் | |
| श् | ś | | |
| ष् | ṣ | | |
| स् | s | | |
| ह् | h | | |

The next step was filling in the key-combinations that were "natural" and "obvious".
  1. Given the existence of short and long vowels, using lower- and upper-case letters for vowels seems natural.
  2. Naturally, any unmarked consonant in ISO-15919 can be mapped to the bare letter.
  3. Representing retroflexion by upper-casing the corresponding dental consonant is standard practice. By modularity, we can do the same for liquids and sibilants too (see the sketch below).
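
To make these rules concrete, here's a small Python sketch of the fragment of the key-map they produce; the dictionary names are mine and only a handful of entries are shown:

# Fragment of the key-map implied by the three rules above (illustrative only).
# Keys are QWERTY key-combinations; values are ISO-15919.
VOWELS = {
    "a": "a", "A": "ā",   # lower-case for short vowels, upper-case for long
    "i": "i", "I": "ī",
    "u": "u", "U": "ū",
    "e": "e", "E": "ē",
    "o": "o", "O": "ō",
}
CONSONANTS = {
    "k": "k", "kh": "kh",   # unmarked ISO-15919 consonants map to the bare letters
    "t": "t", "th": "th",
    "T": "ṭ", "Th": "ṭh",   # upper-casing a dental gives the corresponding retroflex
    "n": "n", "N": "ṇ",
    "l": "l", "L": "ḷ",     # by modularity, the same rule covers liquids...
    "s": "s", "S": "ṣ",     # ...and sibilants
}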

| Sanskrit (Devanagari) | ISO-15919 | Tamil | Key-map |
| अ | a | அ | a |
| आ | ā | ஆ | A |
| इ | i | இ | i |
| ई | ī | ஈ | I |
| उ | u | உ | u |
| ऊ | ū | ஊ | U |
| ऋ | r̥ | | |
| ॠ | r̥̄ | | |
| ऌ | l̥ | | |
| ॡ | l̥̄ | | |
| | e | எ | e |
| ए | ē | ஏ | E |
| ऐ | ai | ஐ | ai |
| | o | ஒ | o |
| ओ | ō | ஓ | O |
| औ | au | ஔ | au |
| ं (anusvāra) | ṁ | | |
| ँ (anunāsika) | m̐ | | |
| ः (visarga) | ḥ | | |
| (upadhmānīya) | | | |
| (jihvāmūlīya) | | | |
| (āythayeṛuttu) | | ஃ | |
| ऽ (avagraha) | ' | | |
| क् | k | க் | k |
| ख् | kh | | kh |
| ग् | g | | g |
| घ् | gh | | gh |
| ङ् | ṅ | ங் | |
| च् | c | ச் | c |
| छ् | ch | | ch |
| ज् | j | | j |
| झ् | jh | | jh |
| ञ् | ñ | ஞ் | |
| ट् | ṭ | ட் | T |
| ठ् | ṭh | | Th |
| ड् | ḍ | | D |
| ढ् | ḍh | | Dh |
| ण् | ṇ | ண் | N |
| | ṯ | ற் | |
| | ṉ | ன் | |
| त् | t | த் | t |
| थ् | th | | th |
| द् | d | | d |
| ध् | dh | | dh |
| न् | n | ந் | n |
| प् | p | ப் | p |
| फ् | ph | | ph |
| ब् | b | | b |
| भ् | bh | | bh |
| म् | m | ம் | m |
| य् | y | ய் | y |
| र् | r | ர் | r |
| | ṛ | ழ் | |
| ळ् | ḷ | ள் | L |
| | | ல் | |
| ल् | l | | l |
| व् | v | வ் | v |
| श् | ś | | |
| ष् | ṣ | | S |
| स् | s | | s |
| ह् | h | | h |

Six issues remain: Dravidian alveolar consonants, the Dravidian approximant, Sanskrit nasals, Sanskrit sibilants, Sanskrit syllabic vowels, and miscellaneous rarely used dependents.
  1. Dravidian alveolar consonants: from the point of view of tongue-position, alveolar stops are intermediate between dental stops and retroflex stops. From this, a natural choice of key-combination for an alveolar stop is a juxtaposition of the keys for the corresponding dental and retroflex stops. Likewise for the alveolar liquid ல்.
  2. Dravidian approximant: based on usage, I picked 'z' as the key for the approximant ழ். The fact that non-native speakers mispronounce the approximant as a voiced sibilant adds credibility to this choice :-)
  3. Sanskrit nasals and sibilants: there are 2 remaining nasals, ङ् and ञ्, and one remaining sibilant, श्. The palatal nasal is both a palatal stop and a nasal; a natural representation combines the nasality of 'n' with the palatalness of 'j' or 'c'; we thus get 'nj' and 'nc' as possible key-combinations. By correspondence, the palatal sibilant श् is 'sc' or 'sj', and the velar nasal ङ् is 'nk' or 'ng' (see the sketch below).
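
Continuing the earlier sketch, the combinations derived in this list might look like the following; again just a fragment, mapped here straight to the letters they produce:

# Key-combinations derived by juxtaposition and by correspondence (illustrative).
DERIVED = {
    "tT": "ற்", "Tt": "ற்",   # dental + retroflex juxtaposed = alveolar stop
    "nN": "ன்", "Nn": "ன்",   # the same rule gives the alveolar nasal
    "lL": "ல்", "Ll": "ல்",   # ...and the alveolar liquid
    "z":  "ழ்",               # the Dravidian approximant
    "nk": "ङ्", "ng": "ङ्",   # velar nasal: nasality of 'n' + velar 'k'/'g'
    "nc": "ञ्", "nj": "ञ्",   # palatal nasal: nasality of 'n' + palatal 'c'/'j'
    "sc": "श्", "sj": "श्",   # palatal sibilant, by correspondence
}
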
Looks like the consonants are done! Here they are:
Consonants
| Sanskrit (Devanagari) | ISO-15919 | Tamil | Key-map |
| क् | k | க் | k |
| ख् | kh | | kh |
| ग् | g | | g |
| घ् | gh | | gh |
| ङ् | ṅ | ங் | nk/ng |
| च् | c | ச் | c |
| छ् | ch | | ch |
| ज् | j | | j |
| झ् | jh | | jh |
| ञ् | ñ | ஞ் | nc/nj |
| ट् | ṭ | ட் | T |
| ठ् | ṭh | | Th |
| ड् | ḍ | | D |
| ढ् | ḍh | | Dh |
| ण् | ṇ | ண் | N |
| | ṯ | ற் | tT/Tt |
| | ṉ | ன் | nN/Nn |
| त् | t | த் | t |
| थ् | th | | th |
| द् | d | | d |
| ध् | dh | | dh |
| न् | n | ந் | n |
| प् | p | ப் | p |
| फ् | ph | | ph |
| ब् | b | | b |
| भ् | bh | | bh |
| म् | m | ம் | m |
| य् | y | ய் | y |
| र् | r | ர் | r |
| | ṛ | ழ் | z |
| ळ् | ḷ | ள் | L |
| | | ல் | lL/Ll |
| ल् | l | | l |
| व् | v | வ் | v |
| श् | ś | | sc/sj |
| ष् | ṣ | | S |
| स् | s | | s |
| ह् | h | | h |
  4. Sanskrit syllabic vowels: the Sanskrit syllabic vowels (ऋ, ॠ, ऌ, ॡ – the last one not actually used) present a problem. The mid-central vowel inherent in them is absent in European languages and thus lacks a symbol; it can, however, be described as midway between 'y' and 'w'. 'y' is already used up in our scheme, but 'w' is free! Using 'w' also ensures people don't mispronounce it as a front vowel. We thus get 'rw', 'Rw', 'lw' and 'Lw' respectively.
  5. Misc. dependent letters: there are a few dependent letters that can only exist attached to a vowel: the anusvāra, the anunāsika, the visarga and its two other forms the jihvāmūlīya and the upadhmānīya, and the āythayeṛuttu. The anusvāra is traditionally represented by an 'M', and the anunāsika by 'MM'; we can stick with those. The visarga, likewise, is an 'H'. The upadhmānīya is closest to the Latin 'f', and we can use that. The jihvāmūlīya and the āythayeṛuttu are both velar/glottal, and as such 'K' is the most suitable (see the sketch below).
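
The vowel-side additions from these two points, again as an illustrative Python fragment:

# Syllabic vowels and the commoner dependents (illustrative fragment).
VOWEL_EXTRAS = {
    "rw": "r̥", "Rw": "r̥̄",   # syllabic vowels: the neutral vowel keyed as 'w'
    "lw": "l̥", "Lw": "l̥̄",
    "M":  "ṁ", "MM": "m̐",    # anusvāra and anunāsika
    "H":  "ḥ",                # visarga
    # 'f' (upadhmānīya) and 'K' (jihvāmūlīya / āythayeṛuttu) round out the set.
}
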
We finally have a complete key-map for vowels and dependents! Here it is:
Vowels and Dependents
| Sanskrit (Devanagari) | ISO-15919 | Tamil | Key-map |
| अ | a | அ | a |
| आ | ā | ஆ | A |
| इ | i | இ | i |
| ई | ī | ஈ | I |
| उ | u | உ | u |
| ऊ | ū | ஊ | U |
| ऋ | r̥ | | rw |
| ॠ | r̥̄ | | Rw |
| ऌ | l̥ | | lw |
| ॡ | l̥̄ | | Lw |
| | e | எ | e |
| ए | ē | ஏ | E |
| ऐ | ai | ஐ | ai |
| | o | ஒ | o |
| ओ | ō | ஓ | O |
| औ | au | ஔ | au |
| ं (anusvāra) | ṁ | | M |
| ँ (anunāsika) | m̐ | | MM |
| ः (visarga) | ḥ | | H |
| (upadhmānīya) | | | f |
| (jihvāmūlīya) | | | K |
| (āythayeṛuttu) | | ஃ | K |
| ऽ (avagraha) | ' | | ' |

You can download the keymap for Tamil and Sanskrit from http://code.ambari.sh/keymap.
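
To give a sense of how an IME might apply such a key-map, here's a rough longest-match sketch in Python. The function and the tiny embedded table are mine, purely for illustration; the actual scheme files used by Google Input Tools and Lipika look nothing like this.

# Greedy, longest-match application of a key-map: "kh" must be tried before "k",
# "rw" before "r", and so on. Output here is ISO-15919.
KEYMAP = {
    "a": "a", "A": "ā", "i": "i", "u": "u",
    "k": "k", "kh": "kh", "t": "t", "T": "ṭ", "Th": "ṭh",
    "r": "r", "rw": "r̥", "s": "s", "S": "ṣ", "m": "m", "M": "ṁ",
}

def transliterate(text):
    out = []
    i = 0
    longest = max(len(key) for key in KEYMAP)
    while i < len(text):
        for width in range(longest, 0, -1):   # longest match first
            chunk = text[i:i + width]
            if chunk in KEYMAP:
                out.append(KEYMAP[chunk])
                i += len(chunk)
                break
        else:
            out.append(text[i])               # pass through anything unmapped
            i += 1
    return "".join(out)

print(transliterate("saMskrwtam"))   # saṁskr̥tam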

Footnotes:

[1] Unfortunately, ISO-15919 does not distinguish between alveolar and dental liquids; Tamil has only the former, while Sanskrit has only the latter. As such, I've had to make a few minor modifications to ISO-15919 where ற and ல are concerned. Thanks to Greg for pointing this out in the comments.

2014-03-02

Representing Indian languages using the Latin script

Indian languages are written in a diverse set of scripts, most of them derived from the Brahmi script and not from the Phoenician script. These scripts neither look like Latin, nor do they have the familiar A, B, … ordering. Further, Indian languages have many phonemes not present in the languages Latin was traditionally used for; as a consequence, most of these scripts have well over 26 base glyphs (not to mention ligature forms). Mapping these to Latin characters becomes important for 2 distinct uses:
  1. storing/presenting Indian language content
  2. inputting Indian language content.
While Unicode encodes most popular scripts used for Indian languages and even some rare ones, Latin letters continue to be used to represent Indian languages everywhere. They're used in email, in SMS messages, in web-pages, in file-names; pretty much ubiquitously. But how does one map the diverse phoneme set (between 30 and 50 for most Indian languages) onto the 26 letters of the Latin alphabet?

[Likewise, physical keyboards with a QWERTY layout dominate the world; how to allow combinations of characters on the QWERTY keyboard to represent the diverse character sets in Indian languages? I'll address this in another post.]

As usual, there are many options.

"The nice thing about standards is that you have so many to choose from." Andrew S. Tanenbaum

ISO‐15919, an international scholastic standard, and a few other schemes – IAST, Hunterian, National Library of Kolkata, ALA‐LC – use diacritic (accent) marks over/under Latin characters. Harvard‐Kyoto, Velthuis, ITRANS, SLP1, WX, VedaType and ISO‐15919's limited character set option are schemes that restrict themselves to 7‐bit ASCII but use punctuation characters.

For example, here're the same Sanskrit characters in a few sample schemes:
| Devanagari | ISO-15919 | Hunterian | ISO-15919-lcs | Harvard-Kyoto | ITRANS |
| आ | ā | | aa | A | aa/A |
| ऋ | r̥ | ri | ,r | R | RRi/R^i |
| ए | ē | e | ee | e | e |
| ं (anusvāra) | ṁ | m | ;m | M | M |
| ख् | kh | kh | kh | kh | kh |
| ञ् | ñ | n | ~n | J | ~n/JN |
| ड् | ḍ | d | .d | D | D |
| श् | ś | sh | ;s | z | sh |

What a mess!

I'm going to address what I think should be the hallmarks of a good scheme for representing Indian language text using Latin characters – how one can figure out if such a scheme was thoughtfully, carefully designed and not thrown together in the middle of a Usenet discussion.

[Note: Indian languages being phonetic, I'm sometimes careless about the distinction between a phoneme and a written character, and use the word "character" for both; the meaning should be clear.]

Here's my prioritised list of features a good Indian language representation scheme should have:

  1. Unambiguity: the scheme must preserve the integrity of the script. Put another way, the mapping should be reversible from Latin back to the original script without loss; this is a property that can be checked mechanically (see the sketch after this list). [Shockingly, the Hunterian scheme fails this basic test.]
  2. Meaningfulness: the scheme must use Latin letters phonetically close to the original Indian language sound. This automatically rules out absurdities like using f for the Sanskrit velar nasal (ङ्) [vide the WX notation].
  3. Pan‐linguistic consistency: identical phonemes across Indian languages must have identical representations. Especially as Indian languages have heavily borrowed from Sanskrit, it's inconsistent if the "same" word has multiple Latin representations. Sadly, many schemes fail to satisfy this requirement: in ITRANS, the Sanskrit word केवल is represented as “kevala”, but its Tamil borrowing is represented as “kēvala”.
  4. Pan‐linguistic consistency: conversely, a single Latin representation must identify the same phoneme across Indian languages. Again, some schemes fail this requirement too — in some schemes, ḷ would mean a syllabic dental liquid in Sanskrit but could mean a retroflex liquid consonant in Tamil.
  5. Fidelity to pronunciation: the scheme should aid pronunciation, or at least, not encourage distortion. ITRANS's RRi for a syllabic alveolar trill, its x for a conjunct consonant, and GY/dny for another conjunct consonant are all misspellings (for phonetic languages, misspellings and mispronunciations go together!)
  6. Modularity and symmetry: in Indian languages, the phonemes (and thus the characters) have relationships among them: clearly, the palatal nasal stop has a relationship with the other nasals, as well as a different relationship to the other palatal stops, and yet another relationship with palatal/semi‐palatal vowels. Any scheme may have various mechanisms to represent such features: perhaps an underdot may represent retroflexion, or perhaps doubling a Latin letter may indicate vowel‐length. These mechanisms should be modular and symmetric, i.e. they should work independent of one another, and they should always mean the same thing. If doubling the vowel ‘a’ indicates a long ‘a’ sound, doubling the vowel ‘i’ should indicate a long ‘i’ sound. If adding an underdot to ‘t’ makes it retroflex, adding an underdot to ‘s’ should make that retroflex too.
  7. Alphabet restrictions: the scheme must not use punctuation-marks or mixed-casing. Punctuation‐marks in the middle of words look ugly. ;L'ik:e t"h..i;s. As does miXEd‐CaSiNg. Using punctuation marks also collides with well‐understood orthographic usages: it seems like a shame to not be able to use commas in transliterated Tamil text, just because the scheme gives the comma a special meaning. And starting every sentence with a lower‐case letter is too edgy for my taste.
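
Requirement 1, at least, can be checked mechanically: invert the mapping and see whether anything collides. A toy sketch in Python, with deliberately tiny fragments standing in for real schemes:

# A scheme is reversible only if no two source characters share a Latin form.
def is_unambiguous(scheme):
    latin_forms = list(scheme.values())
    return len(latin_forms) == len(set(latin_forms))

# Fragment of a diacritic-based scheme: dentals and retroflexes stay distinct.
diacritic_scheme = {"त्": "t", "ट्": "ṭ", "द्": "d", "ड्": "ḍ"}

# Fragment of a Hunterian-style scheme: both collapse to plain 't'/'d'.
hunterian_style = {"त्": "t", "ट्": "t", "द्": "d", "ड्": "d"}

print(is_unambiguous(diacritic_scheme))   # True
print(is_unambiguous(hunterian_style))    # False: cannot be reversed without loss
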
Only one of the standard schemes fulfils 6 of the 7 requirements completely and the 7th partially (it uses punctuation, but only to represent some combinations that cannot normally occur in the language) – ISO-15919. In addition to being well thought-out and sane, it is an international standard and is widely used in scholastic publications. Its one drawback is that the official standard is not available free of cost; instead, ISO charges more than $100 for an electronic copy. However, it's fully documented at Dr. Anthony Stone's website and is usable today. I use it every day, and so should you!

2014-01-04

On keying in Indian languages

Right from the time I started out learning Sanskrit, I've had to type in Devanagari to send email, do my assignments, write blog-posts, etc. Sometime later, I realised I'd like to be able to write in my mother-tongue, Tamil, too: I was rapidly becoming more fluent at writing in Sanskrit than in Tamil, and that bothered me! I used to use Windows and now use OS X, and have used a variety of tools to compose in Indic scripts. I thought I'd write about my journey: which tools I loved, used, hated and discarded, and what I now use.

In 2008, to begin with, my requirements were: types Devanagari, works on Windows Vista+, works across all applications (no browser plug-in for me) and has configurable key-maps. That last was very important. There were multiple competing ways to render Sanskrit in ASCII (and corresponding ways to type Devanagari on a US-ASCII keyboard): Harvard-Kyoto, IAST, ITRANS, etc. They had nothing in common, except for one thing: they all sucked. [I'll explain why in a follow-up post.] So I needed to use a keymap I liked, one I created myself.

I started out using Ajit Krishnan's wonderful Mudgala IME. It was very straightforward to use, and supported custom keyboards. Switching to and out of it was easy, and it never crashed once in more than a year of use. But 2 things made me want moar: first, Mudgala IME didn't support Tamil very well. Second, Mudgala IME worked by installing a low-level keyboard hook. Windows has a way to support software keyboards and Input Method Editors so that the keyboard/IME shows up in the Language Bar, and other Good Things™ happen. Mudgala IME instead worked at a much lower level, intercepting every key-stroke, translating it to the corresponding Devanagari code-point, and inserting that into the current window. This approach had a couple of drawbacks: as above, it didn't integrate well with Windows or with the keyboard shortcuts Windows provides to switch input keyboards. Worse, the keyboard shortcut Mudgala provided to switch to and out of it was a global shortcut and interfered with other software's use of that shortcut. If you didn't understand the above, suffice it to say that its integration with Windows was non-standard :-)

In 2009, Google came out with its Indic language input tools; these have now become Google Input Tools for Windows. You had to install a separate piece of software for each language you needed to type in. It did support custom keyboards, but you had to switch to your custom keyboard explicitly, and worse, you had to do that every time you switched languages. Worst of all, if you had multiple custom keyboards per language (say, a keyboard to output Devanagari and one to output ISO-15919), you had to use the GUI to switch keyboards each time. Kind of bad UX, but it worked OK otherwise. There were also occasional bugs involving typing too soon after switching to the Google IME: the thing needed a few seconds to initialise, and it would swallow key-strokes till then!

In 2013, Google released an update that unified the various language-specific IMEs: you now install a single piece of software and pick the languages you want at installation time. Unhappily, the documentation on setting up custom keyboards was never updated: the page refers to a Scheme directory of C:/Program Files/Google/Google [Language] Input/, but the actual directory is:

%ProgramData%/Google/Google Input Tools/com.google.input_tools.t13n.ime.{language-name}/schemes/
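
If you want to see what your installation actually contains, a throwaway Python snippet along these lines (the glob pattern is just my reading of the layout above, with a wildcard in place of the language name) will list the scheme files present:

# List custom keyboard scheme files under the Google Input Tools data directory.
import glob
import os

pattern = os.path.expandvars(
    "%ProgramData%/Google/Google Input Tools/"
    "com.google.input_tools.t13n.ime.*/schemes/*"
)
for path in glob.glob(pattern):
    print(path)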

The other bugs remain, but this is, in my experience, the most solid and clean Indic typing solution on Windows today. I use it to type Sanskrit (in Devanagari, Grantha, IPA, ISO-15919) and Tamil (in Vattu, IPA, ISO-15919), and it's never let me down.

Lately, though, I've switched from my trusty Thinkpad to my wife's old MacBook Air, and as soon as I switched, I realised there was a paucity of good IMEs on OS X. Happily, my friend Ranganath Atreya has come up with Lipika, an IME that also supports the same custom-keyboard format Google IME does (yay for compatibility!). I was one of the beta-testers, and I love it.

At work, I use Windows and don't want to install software; when I occasionally want to send email, I use Google Input Tools online. It's pretty Google-y: barebones but efficient.

On my Windows Phone, Microsoft still does not allow 3rd-party keyboards :-( I make do with Binu's Type Tamil and Type Sanskrit, which let you compose text that you then have to copy and paste wherever you want. This sucks, so I'm still looking to you, Microsoft!