Nat Friedman describes having a student take dictation for his emails and code while he recovered from a broken wrist:
He sits at the other end of my desk on a separate computer while I conduct the machine with my left hand, jumping from mail to mail, opening buffers, reading web pages, and generally doing the interactive low-latency low-volume typing tasks myself. He can see everything I’m doing because my desktop is shared over the network. And when I need to enter a large block of text, well, I just start talking, he types, and the words appear on the screen.
If I don’t look up from the screen, I can pretend he’s not there and that I have the world’s most powerful speech recognition engine. So I have a sneak peek into what computers will be like when speech recognition works really well.
It also turns out that context is very important for accuracy — the two sentences below sound nearly identical when spoken aloud:
It’s hard to recognize speech.
It’s hard to wreck a nice beach.