There’s long been talk in medicine about the need to listen more to the patient voice — and now that mantra is being taken literally.
Academics and entrepreneurs are rushing to develop technology to diagnose and predict everything from manic episodes to heart disease to concussions based on an unusual source of data: how you talk.
A growing body of evidence suggests that an array of mental and physical conditions can make you slur your words, elongate sounds, or speak in a more nasal tone. They may even make your voice creak or jitter so briefly that the change is undetectable to the human ear. It's still not clear that analyzing speech patterns can generate accurate — or useful — diagnoses. But the race is on to try.
The latest player to enter the arena is Sonde Health, a Boston company launched Tuesday by the venture capital firm PureTech, based on technology licensed from researchers at the Massachusetts Institute of Technology. Sonde wants to develop software for consumers that can screen for depression as well as respiratory and cardiovascular conditions.
“Speaking is something that we do naturally every day,” Sonde COO Jim Harper said.
The company will start by analyzing audio clips of patients reading aloud, but aims to develop technology that can extract vocal features without actually having to record the words. The goal, Harper said, is to “move the monitoring into the background and to collect some of that with devices that people already own.”
Read the full article by Rebecca Robbins at STAT News.