Ten-Key Keyboard: A Pitch
JOSH SAND, 22 Mar 2016



In seventh grade my computer teacher made everyone learn to type with cardboard boxes over our hands so we couldn’t look down at the keyboard. Thanks to this harrowing experience I can touch-type, including on random surfaces (armrests, the sides of my legs, etc.), and can “feel” when I make typos without having to look. I frequently daydream about actually typing at a computer this way, because sometimes sitting down in front of a keyboard or thumbing away at a phone isn’t the right environment for getting the words out. Sometimes you want to write in a stream-of-consciousness style without even seeing the words on a screen. Sometimes you want to write while in bed, riding in a car, half-watching TV on a couch, or walking through a park with your hands in your pockets.


How could this be achieved? Keyboard gloves and other solutions already exist, but they throw out the muscle memory computer users already have. My proposal is a ten-key “keyboard”: small pressure-sensitive pads, one per finger, that let the user type on any surface, with predictive algorithms that guess the intended word from the fingers used (similar to the “Word” predictive typing on pre-smartphone phones). The user would type “hello” as they normally would, which the program would read as “right index, left middle, right ring, right ring, right ring.” Other words match this sequence (like “jello”, or gibberish like “ucooo”), so something like a Bayesian network or a full-on neural network is needed to predict the word from the user’s own writing or a larger corpus of English text.


While my inspiration comes from a place of indulgent daydreaming, there are more practical purposes as well. Even if the pressure-sensitive pads are removed entirely and the user types with their fingers locked on the ‘asdfjkl;’ keys of a regular keyboard, this could increase typing speed and reduce finger strain and hand movement, all without sacrificing existing muscle memory.

Someone could even re-map the corpus of English writing to their own finger-key assignments. Personally, I stay strict to Mavis Beacon-esque home row rules, but I don’t think I’ve ever used my middle finger for “c”, for example, and this would allow me to reconfigure for that. It would also allow easy configuration for international keyboard layouts, and easier typing for people who don’t have full mobility in their hands (arthritis, strain, missing digits, etc).


Analysis and obstacles

The benefit of typing this way has to outweigh normal typing at every step, so all of this work has to happen invisibly on the developer’s side if anyone is to find this useful. If a solution ever requires the user to retrain their muscle memory, the benefits are lost.

An ideal end-scenario would look like this: the user types (prose, essays, notes, stenography, etc.) using their preexisting touch-typing muscle memory in places where typing at a regular keyboard or handheld device is inconvenient. The user can read over the text later and change incorrectly guessed words to what was intended.

Let’s put aside the engineering aspect of the ten-key keyboard and focus on the programming side. Using ten keys, can someone type readable, logical, capitalized, punctuated prose?

For the sake of testing, finger presses are represented by the home row keys your fingers normally rest on (left pinky on “A”, left ring finger on “S”, etc). To guess what word the user typed, we’re working with a lookup table where the key is a word’s “home row” equivalent (“JDLLL”) and the value is the intended English word (“HELLO”). Collisions are handled by guessing which value was intended based on the context of earlier words.
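As a minimal sketch of that lookup table, here is one way it could look in Python, assuming the standard touch-typing finger assignments; the function names are my own illustration, not part of any existing program:

```python
# Sketch of the home-row lookup table, assuming the standard
# touch-typing finger assignments. Each letter maps to the home-row
# key of the finger that types it.
FINGER_HOME = {
    "a": "qaz",     # left pinky
    "s": "swx",     # left ring
    "d": "edc",     # left middle
    "f": "rtfgvb",  # left index
    "j": "yuhjnm",  # right index
    "k": "ik",      # right middle
    "l": "ol",      # right ring
    ";": "p;",      # right pinky
}
# Invert to a letter -> home-row-key map.
LETTER_TO_HOME = {
    letter: home for home, letters in FINGER_HOME.items() for letter in letters
}

def home_row_key(word: str) -> str:
    """Collapse a word to its home-row equivalent, e.g. 'hello' -> 'jdlll'."""
    return "".join(LETTER_TO_HOME[c] for c in word.lower())

def build_lookup(words):
    """Map each home-row key to the list of words that collide on it."""
    table = {}
    for w in words:
        table.setdefault(home_row_key(w), []).append(w)
    return table

table = build_lookup(["hello", "jello", "bit", "big", "rib", "bib", "fib"])
print(home_row_key("hello"))  # -> jdlll
print(table["fkf"])           # -> ['bit', 'big', 'rib', 'bib', 'fib']
```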

Here are examples of collisions the algorithm would have to deal with: “hello” and “jello” both collapse to “JDLLL”, while “bit”, “big”, “rib”, “bib”, “fib”, “fit”, and “gig” all collapse to “FKF”.

Let’s assume you want to use this software to sit down and write The Old Man and the Sea. How many words would share the same “home row” key?

Words with 1 collision : 1958
Words with 2 collisions:  143
Words with 3 collisions:   35
Words with 4 collisions:   12
Words with 5 collisions:    3
Words with 6 collisions:    1

91.0% of the words in The Old Man and the Sea were completely unique and didn’t need to use the predictive algorithm at all. Not bad! But The Old Man and the Sea is a book with a famously minimal writing style. How do these ratios look with a more complex book, like Ulysses?

Words with 1 collision  : 22112
Words with 2 collisions :  1339
Words with 3 collisions :   380
Words with 4 collisions :   144
Words with 5 collisions :    79
Words with 6 collisions :    54
Words with 7 collisions :    30
Words with 8 collisions :    13
Words with 9 collisions :     5
Words with 10 collisions:     6
Words with 11 collisions:     5
Words with 12 collisions:     4
Words with 13 collisions:     4
Words with 14 collisions:     1
Words with 15 collisions:     1

It turns out the complexity is an advantage, since an even larger share of the words (91.4%) have unique mappings in the lookup table. However, far more words collide on the same key, with one home row key matching as many as fifteen different English words.
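For context, a script along the lines of the short Python ones mentioned at the end of this piece could produce such counts. This is a rough sketch, assuming the book is available as a plain string (read from a file in practice); the tiny sample string here stands in for a full text:

```python
# Sketch of a collision-count script: how many home-row keys are
# shared by 1, 2, 3, ... distinct words in a text.
import re
from collections import Counter

LETTER_TO_HOME = {letter: home for home, letters in {
    "a": "qaz", "s": "swx", "d": "edc", "f": "rtfgvb",
    "j": "yuhjnm", "k": "ik", "l": "ol", ";": "p;",
}.items() for letter in letters}

def home_row_key(word):
    return "".join(LETTER_TO_HOME[c] for c in word)

def collision_histogram(text):
    """Count how many home-row keys hold 1 word, 2 words, and so on."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    buckets = Counter(home_row_key(w) for w in words)  # key -> word count
    return Counter(buckets.values())

sample = "he bit a big rib and ate jello, hello!"
print(collision_histogram(sample))
```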

If trained well enough, an algorithm should be able to sort out which word you want based on its part of speech (something ending in “s” probably won’t come right after “I”, for example). However, problems arise for words with similar meanings. When writing your first sentence, how would the algorithm know whether you’re writing about a dog or an elf? What if the words are synonyms, like roaring/blaring? (“The music was…”) It’s necessary to show options for incorrectly guessed words at a later point, but will the mistaken word throw off the rest of the sentence? Can any algorithm be expected to take “JD FKF A FKF FKF LFDF A FKF AF A FKF SJKDJ FKF FJD FKF” and interpret it as “He bit a big rib over a bib at a gig which fit the fib”?
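One simple way to use context, as a rough stand-in for the Bayesian idea above, is to score each collision candidate by how often it follows the previous word in a training corpus. The corpus and word lists below are made up for illustration; a real system would train on far more text:

```python
# A rough stand-in for context-based prediction: choose the collision
# candidate whose bigram with the previous word is most frequent in a
# training corpus.
from collections import Counter

def train_bigrams(corpus_words):
    """Count adjacent word pairs in a tokenized corpus."""
    return Counter(zip(corpus_words, corpus_words[1:]))

def pick_word(prev_word, candidates, bigrams):
    """Choose the candidate seen most often after prev_word."""
    return max(candidates, key=lambda w: bigrams[(prev_word, w)])

corpus = "he bit the dog and the dog bit him back".split()
bigrams = train_bigrams(corpus)
# "FKF" could be "bit", "big", "rib", ...; after "he", "bit" wins here
print(pick_word("he", ["bit", "big", "rib"], bigrams))  # -> bit
```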

Here are other possible scenarios the algorithm would have to handle:

Capitalization

Punctuation

Numbers

Deleting mistakes

Invented names/words

Short, contextless sentences

Storing all words and context (for future confirmation)
A free thumb can fill in for some of these, as can the underused right pinky. (Why does a spacebar require as much keyboard space as it does, anyway? Do people regularly alternate which thumb they space with?) Holding one pinky down while pressing another key could easily replicate capitalization, although algorithmic capitalization would not be difficult. Numbers could be written out, but careful technical writing or coding would probably never be a good fit for this typing system. A finger or the left thumb could fill in as a general punctuation key, though this could get messy.

Some method for deletion is also necessary. I frequently hit the “wrong key” when typing on an armrest and muscle-memory my way back with my right pinky. A wrong word can ruin whatever context the algorithm had built up and throw off future words.

The corpus of English words should cover all common proper nouns, but writers who frequently use invented words and names might have to add these to the program’s corpus manually, similar to telling Word to learn the spelling of an uncommon last name. And will storing all reasonably well-used English words (an unfortunately huge number for this language) and their common contexts fit in a reasonable size? Or would a service like Amazon/Google/Facebook have to host the database and require internet access?

While this seems like a long list of obstacles, I’ve learned to be optimistic about what computers can do when fed massive amounts of training data. The feasibility of this project will become clearer as early tests are built, something more elaborate than the short Python scripts I wrote to analyze word collisions in text files.