Machine Learning
Posted November 23, 2020 by Abridge

How machine learning supports understanding and follow-through in healthcare

Everything we do at Abridge, whether it’s our focus on user experience, our dedication to security and privacy, or our approach to customer support, is ultimately in service of our core mission: to help people understand and follow through on their health. But when it comes to the rawest building blocks of understanding and follow-through—those key insights, transcribed, highlighted, and defined—it’s our trailblazing work in machine learning research that makes the rest of the Abridge experience possible.

Previously, we spoke a bit about how we use machine learning to demystify health conversations, and shared some basics about our dataset, annotations, and research focus areas. We’ve made a lot of progress in the months since, so we wanted to share an update on what we’ve learned and what it all means for people who use Abridge!

Everything begins with the free-flowing conversations between people and their doctors. This type of speech tends to be really tricky for machines to follow, for a whole host of reasons: 

  • There are several people talking.
  • When people speak naturally, there are disfluencies: people don’t talk the way they write. They don’t finish their thoughts. They interrupt each other. They mumble.
  • Health conversations are full of complex medical terminology. 

Abridge algorithms need to accurately capture the words in each conversation before they can even begin to determine which moments are important to people’s health. That’s why we contribute to research in Automatic Speech Recognition (ASR), the field of machine learning dedicated to the transcription of speech. We also adapt, or correct, off-the-shelf ASR systems to improve the transcription accuracy of medical terminology. We’ve trained our algorithms to focus more on medical concepts than many other ASR systems do, and to understand relevant bits of context that might be spread across each conversation. 
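
To give a concrete, heavily simplified sense of what terminology correction can look like, here’s a minimal Python sketch that snaps likely mis-transcriptions to the closest entry in a small medical lexicon. The lexicon, the fuzzy-matching approach, and the cutoff value are illustrative assumptions, not a description of our production system.

```python
import difflib

# Illustrative medical lexicon; a real system would draw on a much larger
# vocabulary of drug names, diagnoses, and procedures.
MEDICAL_LEXICON = ["metformin", "lisinopril", "hypertension", "atorvastatin"]

def correct_medical_terms(transcript: str, cutoff: float = 0.8) -> str:
    """Snap likely mis-transcriptions to the closest lexicon entry."""
    corrected = []
    for word in transcript.split():
        match = difflib.get_close_matches(word.lower(), MEDICAL_LEXICON, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

# "metforman" is close enough to "metformin" to be corrected.
print(correct_medical_terms("start metforman twice a day"))
```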

The output of our ASR system — the transcript — is passed through our clinical concept extraction pipeline, which highlights medications, diagnoses, and procedures. Some of these key medical terms are then linked with concise explanations from our trusted content partners, including the National Library of Medicine and the Mayo Clinic.  
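
As a rough illustration of what concept extraction and definition linking involve, here’s a small sketch that tags terms from a hand-written dictionary and attaches a definition when one is available. The categories, terms, and definition text are placeholders; the real pipeline relies on learned models and far broader medical vocabularies.

```python
import re

# Illustrative concept dictionary; placeholder terms only.
CONCEPTS = {
    "medication": ["metformin", "lisinopril"],
    "diagnosis": ["type 2 diabetes", "hypertension"],
    "procedure": ["a1c test", "echocardiogram"],
}

# Placeholder definitions; in the app, terms link to explanations from content
# partners such as the National Library of Medicine and the Mayo Clinic.
DEFINITIONS = {"metformin": "A medication used to help control blood sugar."}

def extract_concepts(transcript: str):
    """Return (category, term, definition) for each concept found in the transcript."""
    text = transcript.lower()
    found = []
    for category, terms in CONCEPTS.items():
        for term in terms:
            if re.search(r"\b" + re.escape(term) + r"\b", text):
                found.append((category, term, DEFINITIONS.get(term)))
    return found

print(extract_concepts("Let's start metformin for your type 2 diabetes."))
```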

The Abridge app puts a bow on all this behind-the-scenes work, delivering abridged transcripts with color-coded medical concepts, and definitions that help people actually understand what’s going on with their health. 

The primary reason for helping people understand their health, however, is so they can take the right next steps and follow through on their doctors’ advice. That’s why we’re going one step further than just extracting ‘important’ pieces from each conversation, and beginning to look at how we might sort those pieces into relevant buckets, or topics.

To that end, we built a machine learning model that can classify utterances from medical conversations according to (i) whether each was more likely spoken by a doctor or a patient, and (ii) where each might belong in the notes doctors write after each appointment. It turns out that most medical documentation follows the SOAP format, short for:

  • Subjective: The ‘story’ from the patient about why they are visiting. 
  • Objective: The (often quantitative) record of the doctor’s examination. 
  • Assessment: A summary of the doctor’s decision-making process and diagnoses. 
  • Plan: The doctor’s next steps for the patient based upon their Assessment.

If Abridge knows that a sentence was spoken by a doctor, and that it relates to their Plan, then that sentence is likely a key takeaway. This is how our “starring” feature works, and it’s only the tip of the iceberg when it comes to the work we plan to leverage to help people stay on top of their health.
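
Here’s a minimal sketch of how those two predictions could combine into a starred takeaway. The UtterancePrediction structure and the decision rule are simplified stand-ins for the actual models, not their real interfaces.

```python
from dataclasses import dataclass

@dataclass
class UtterancePrediction:
    """Illustrative model output for one utterance (names are placeholders)."""
    text: str
    speaker: str        # "doctor" or "patient"
    soap_section: str   # "subjective", "objective", "assessment", or "plan"

def is_key_takeaway(pred: UtterancePrediction) -> bool:
    """Star utterances the model attributes to the doctor's Plan."""
    return pred.speaker == "doctor" and pred.soap_section == "plan"

predictions = [
    UtterancePrediction("My knee has been hurting for about a week.", "patient", "subjective"),
    UtterancePrediction("Take ibuprofen twice a day and ice it tonight.", "doctor", "plan"),
]

starred = [p.text for p in predictions if is_key_takeaway(p)]
print(starred)  # ['Take ibuprofen twice a day and ice it tonight.']
```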

Here’s a taste of what’s coming next:

We know that people forget a significant portion of conversations with their physicians, which can make something as seemingly basic as medication adherence extremely challenging, even with the best of intentions. Abridge can help here. By surfacing more detailed medication instructions, in context and in the doctors’ exact words, we can help people stay on track. We recently published our findings from an experiment geared towards better surfacing medication frequency, route of delivery, and changes to instructions. Some of this work has already made its way into our app, allowing us to provide people with GoodRx coupons for prescribed medications.
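
As a simplified illustration of the kinds of attributes that experiment targets, the sketch below pulls a frequency, a route of delivery, and a change-of-instruction flag out of a single utterance using small hand-written patterns. The vocabularies and rules here are hypothetical placeholders, not the approach described in the paper.

```python
import re

# Tiny illustrative vocabularies; placeholders only.
FREQUENCIES = ["once a day", "twice a day", "every morning", "as needed"]
ROUTES = ["by mouth", "topical", "injection", "inhaled"]

def extract_medication_attributes(utterance: str) -> dict:
    """Pull frequency, route of delivery, and instruction changes from one utterance."""
    text = utterance.lower()
    return {
        "frequency": next((f for f in FREQUENCIES if f in text), None),
        "route": next((r for r in ROUTES if r in text), None),
        "changed": bool(re.search(r"\b(increase|decrease|stop|switch)\b", text)),
    }

print(extract_medication_attributes("Let's increase the metformin to twice a day, by mouth."))
# {'frequency': 'twice a day', 'route': 'by mouth', 'changed': True}
```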

This work isn’t just relevant for Abridge or our users, of course. Just as we’ve benefited from meaningful advancements in research over the years, we believe our contributions to the field can drive progress elsewhere, too. That’s part of why we actively collaborate with research labs at Carnegie Mellon University and clinical experts at the University of Pittsburgh Medical Center, and why we contribute to the broader body of academic literature.

In future posts, we’ll dive deep into some of our research challenges, and share our attempts at solving them in more detail. 

Want to learn more about how Abridge can help?

Contact us