“Suki, let’s get Mr. Jones a two-week run of clarithromycin and schedule him back here for a follow-up in two weeks.” The documentation burden is a problem that started when doctors switched from handwritten records to electronic ones. Health care organizations have tried more manual fixes: human scribes, either in the exam room or outsourced to Asia, and dictation tools that can only transcribe speech verbatim.

But these new assistants (you’ll meet Suki in a sec) go a step further. They come equipped with artificial intelligence and natural language processing algorithms, so all a doc has to do is ask them to listen.

From there, they’ll parse the conversation, structure it into medical and billing lingo, and insert it cleanly into an electronic health record (EHR). “We must reduce the burden on clinicians,” says John Halamka, chief information officer at Boston-based Beth Israel Deaconess Medical Center. He’s been conducting early research into how Alexa might be used in a hospital, for example to help patients locate their care team or request additional services.
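To make that listen-parse-structure flow concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the transcription is hard-coded rather than coming from a real speech-to-text engine, the “NLP” is a toy regex, and the output only imitates a FHIR-style MedicationRequest, not Suki’s, Sopris’s, or any vendor’s actual schema.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical ambient-scribe pipeline. None of these names come from a
# real vendor API; this is only a sketch of the transcribe -> structure ->
# file-to-EHR flow described above.

@dataclass
class MedicationOrder:
    drug: str
    duration_days: int

def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text service; returns the spoken text."""
    # A real assistant would stream exam-room audio to an ASR engine here.
    return ("Suki, let's get Mr. Jones a two-week run of clarithromycin "
            "and schedule him back here for a follow-up in two weeks.")

def extract_order(transcript: str) -> Optional[MedicationOrder]:
    """Toy NLP step: pull a drug name and duration out of the transcript."""
    m = re.search(r"a (\w+)-week run of (\w+)", transcript)
    if not m:
        return None
    weeks = {"one": 1, "two": 2, "three": 3}.get(m.group(1), 0)
    return MedicationOrder(drug=m.group(2), duration_days=weeks * 7)

def to_ehr_note(order: MedicationOrder) -> dict:
    """Shape the order into a structured record an EHR could ingest."""
    return {
        "resourceType": "MedicationRequest",  # FHIR-style, for illustration only
        "medication": order.drug,
        "dispenseRequest": {
            "expectedSupplyDuration": {"value": order.duration_days, "unit": "days"}
        },
    }

if __name__ == "__main__":
    order = extract_order(transcribe("visit.wav"))
    if order:
        print(to_ehr_note(order))
        # In production this payload would be posted to the EHR's API,
        # with authentication and HIPAA-grade audit logging around it.
```

The point of the sketch is the division of labor: speech recognition, clinical language understanding, and EHR integration are separate stages, which is why verbatim dictation tools solve only the first of the three.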

He adds: “Ambient listening—the notion that technologies like Alexa and Siri turn clinician speech and clinician-patient conversations into medical records—is a key strategy.”

Alexa and Siri might be the best-known voice assistants, but they’re not the first ones doctors are trusting with their patients. Amazon and Apple are rumored to be working on voice applications for health care, but so far they’re still piloting potential use cases with hospitals and long-term care facilities.

They don’t yet have any HIPAA-compliant products on the market. Not so for Sopris Health, a Denver-based health intelligence company that launched today after starting to roll out its app at the beginning of the year.

Read more from wired.com…
