Voice recognition software has been around for at least 20 years. I first played with the technology in the 1980s but was very unimpressed by its abilities, its horrible setup process, and its general applicability as a technology of last resort for the handicapped or those who were truly keyboard allergic.
I’ve tried to use the technology to transcribe dictation made during long car commutes, but that never worked either. Between too much background noise and a lack of discipline on my part to stick with the process of correcting and training the software to recognize my voice and my peculiar way of dictating, voice-recognition software joined the heap of otherwise optimistic stuff that science fiction promised would be useful but that practice proved otherwise.
This post is being dictated with Dragon NaturallySpeaking version 11 running on a ThinkPad T410s and using a phone headset as a microphone. Since my arm surgery on Tuesday, I’ve dictated about 2000 words and so far am pretty impressed.
Dictation is a foreign mode of writing for me. I’ve used a keyboard in one form or another since I was about 10 years old, when my atrocious handwriting condemned me to a typewriter. I never learned how to touch type, but over the years got up to about 100 words per minute using a frantic index finger/thumb method that has developed a sort of muscle memory of the keyboard, which permits me to type without looking at the keys. When word processing technology first emerged in the late 1970s, some writers complained that the electronic ease of deletion, cut and paste, and general speed of composition reduced the value of the word put on the page, and led to a certain compositional laziness that had been moderated by the penalties of working with paper, white-out, carbon paper, and the other manual vestiges of writing in the early 20th century. Some writers said the same thing about the typewriter in the 19th century, claiming it made writing “too easy” compared to pen and ink on paper.
Voice technology has come a long way in recent years, especially on Android phones, where Google’s voice-recognition technology in its maps and search tools is excellent. In the pre-Android era, if I wanted to set a destination on the car’s GPS, I needed to tediously punch in numbers, cities, and states before I could put the car in motion. Attempting to set an address while underway was a recipe for a head-on collision. Now, if I want to get to my office, I simply press the microphone icon and say “go to W. 39th St., New York, NY” and Google does the rest. Voice recognition is a lifesaver, literally, when I need to respond to a text message while driving, yet my son is fond of appending the word “bitch” to my dictation.
My biggest complaint with voice recognition is that it forces me to enunciate and be choppy in my diction, whereas when typing, I am able to pound away with relatively fluid ease and no concern over misunderstandings and goofy transcriptions. That said, I am a terrible typist and spend a huge amount of time on the backspace key correcting typos and mess-ups. Another drawback of dictation is the lack of privacy. I hate it when someone looks over my shoulder while I’m writing, and now my voice bellows through the house, making me very self-conscious of whether or not I can be overheard by my wife or son. If I were in a cubicle in a typical office, I would literally be dumbstruck.
I have no choice but to continue dictating for the foreseeable future, until my doctor gives me the all-clear to start typing again. But at least I can blog and work on memos and have some productivity that otherwise would be completely lost to surgery.
(This entire post was dictated straight through with nothing corrected)