You don’t need to have had personal experience with voice recognition software to know it’s still a work in progress.
I think I blame Microsoft for getting the ball rolling.
Maybe you’ve got a longer memory than I do. Near as I can tell, though, they pioneered the practice of releasing products riddled with defects, then following up with newer versions that fixed some of those defects while creating new ones, all eagerly purchased by repeat customers. That demonstrated to the rest of industry that you needn’t worry so much about perfecting your stuff before putting it on the market.
Folks in the cellphone biz paid attention, especially the smartphone sub-market. Dropped calls? Software crashes? No problem, people will eagerly buy whatever you’re selling, as long as it’s marginally better than what they were using before (buggier cells from the last generation, or God forbid payphones). Is Bluetooth ready for prime-time? Well, not really, but let’s put it on the market anyway; people will pay to do away with cords even if it means impaired signal.
So along came voice recognition. (Technically, it had been around before cellphones were in common usage, but you didn’t see too many people using it.) You don’t need to have had personal experience with it; if you know anybody who has, they’ve probably already given you an earful. Notwithstanding claims that you can train the system to respond to your input with better accuracy, and that it’ll adapt to you over time - well, let’s just say it’s still a work in progress.
The first time I worked in a hospital which had VR rather than transcriptionists, the software wasn’t the best. I struggled with it for a while, and eventually gave up trying. Unlike one of the more established guys in the department, who habitually argued with the thing, taking minutes at a time to verbally bludgeon it into submission rather than seconds to manually type a troublesome word, I stopped even turning on the microphone, instead pulling the keyboard over and typing my reports from scratch.
I admit, I had an unfair advantage in this regard. I had learned touch-typing from my father around the beginning of grade school, and so I could easily churn out reports in about the same time it would have taken me to do them via VR, since the latter required careful proofreading and multiple manual corrections anyway.
However, few if any typists can approach the speed at which folks talk (especially those of us in the New York area), and the thought kept recurring that, if only voice recognition could work as promised, healthy gains in productivity would follow. I am happy to say that, after years of resisting it, I finally found some ways to (more or less) peacefully coexist with my VR software.
I’ll cough up some of my hard-learned secrets in my next blog post.