So far in this series, we’ve looked at machine learning in sports: NFL tackle predictions, Formula 1 race strategy, NBA player tracking, and MLB pitch analysis. This is the first ML in the Wild post outside of sports, and it covers machine learning in healthcare – something you’ll run into personally.
Two things in machine learning in healthcare stood out to me recently. The first is already happening in exam rooms. The second was published in NEJM AI, the New England Journal of Medicine’s AI journal, in late 2025. Both use the same machine learning concepts we’ve been exploring in sports, but instead of predicting tackles or classifying pitches, they’re writing medical notes and flagging hidden heart conditions from a 10-second EKG.
The AI Scribe Listening to Your Doctor’s Appointment
There’s a good chance that the next time you visit your doctor, an AI-powered app will be recording the conversation. It’s called an ambient AI scribe. “Ambient” means it runs in the background without anyone having to press record or dictate. Your doctor opts in, and a HIPAA-compliant app on their phone or tablet, or a microphone in the exam room, records the conversation between the two of you. Machine learning models then turn that conversation into a structured clinical note, usually within a couple of minutes after your visit ends.
Why does this exist? Physicians spend roughly two hours on electronic health record (EHR) work for every one hour of direct patient care. That’s the charting, note-taking, and data entry that happens after your appointment, often late at night after a full day of seeing patients. Doctors have a name for it: “pajama time.” It’s one of the biggest drivers of physician burnout, and it means your doctor is often splitting attention between you and a keyboard during your visit.
In 2025, ambient scribes brought in an estimated $600 million in revenue, more than double the year before. That kind of growth tells you the problem is real and health systems are willing to pay to fix it.
How Do Ambient AI Scribes Work?
These systems run on a three-stage pipeline, and it’s worth understanding because you’ll see this same pattern behind almost every ML product that processes language.
Stage 1: Sound to text. A speech recognition model converts the conversation into a written transcript. This is the same core technology behind Siri or Google Assistant, but trained specifically on clinical conversations. That matters because medical dialogue is messy. People talk over each other, switch between “I’ve been feeling short of breath” and “dyspnea,” and go on tangents. A general speech model would struggle. A model trained on thousands of doctor-patient conversations handles it.
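If you want to see what that step looks like in code, here’s a minimal sketch using an open-source speech model through Hugging Face’s transformers library. Real ambient scribes run proprietary models fine-tuned on clinical audio, and the audio file here is a hypothetical stand-in.

```python
# A minimal sketch of Stage 1 (sound to text) using an open-source
# speech recognition model. Production scribes use proprietary models
# fine-tuned on clinical audio; "visit_audio.wav" is hypothetical.
from transformers import pipeline

# Whisper is a general-purpose speech recognition model.
transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-small")

result = transcriber("visit_audio.wav")
print(result["text"])  # the raw transcript of the visit
```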
Stage 2: Text to meaning. A natural language processing model reads the transcript and figures out what’s clinically relevant. When your doctor asks about your weekend and then says “so tell me about that knee pain,” the model knows which part belongs in the clinical note and which doesn’t. This is classification at work: the same concept we covered in earlier posts, applied to sorting medical facts from small talk.
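Here’s a rough sketch of that sorting step using a general-purpose zero-shot classifier. Actual products train on annotated clinical dialogue; the transcript lines below are made up.

```python
# A sketch of Stage 2 (text to meaning): sorting transcript lines into
# clinically relevant material vs. small talk. Real systems train on
# annotated clinical dialogue; this uses a general zero-shot classifier.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

transcript_lines = [
    "How was the camping trip last weekend?",         # small talk
    "The knee pain started about three weeks ago.",   # clinical
    "It gets worse when I climb stairs.",             # clinical
]

for line in transcript_lines:
    result = classifier(line, candidate_labels=["clinical information", "small talk"])
    top_label = result["labels"][0]  # highest-scoring label
    print(f"{top_label}: {line}")
```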
Stage 3: Meaning to document. A large language model takes the classified information and writes a formatted clinical note that reads like a physician wrote it. The doctor still reviews and approves the note before it goes into your medical record. These platforms are HIPAA compliant, with signed Business Associate Agreements and encrypted data processing, and most don’t store raw audio after the note is generated.
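Vendors keep their prompts and models proprietary, but the shape of this step looks roughly like the sketch below, which uses the OpenAI Python client as a stand-in. The model name and the extracted facts are illustrative, not any vendor’s actual setup.

```python
# A sketch of Stage 3 (meaning to document): asking a large language
# model to draft a structured note from the extracted facts. The model
# name and facts are illustrative; vendors' prompts are proprietary.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

clinical_facts = (
    "- Knee pain, onset roughly three weeks ago\n"
    "- Worse with stair climbing\n"
    "- No prior injury reported"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not a vendor's actual model
    messages=[
        {"role": "system", "content": "Draft a SOAP-format clinical note from these facts."},
        {"role": "user", "content": clinical_facts},
    ],
)
print(response.choices[0].message.content)  # draft note for physician review
```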
Sound to text, text to meaning, meaning to document. If that pipeline looks familiar, it should. It’s the same pattern behind meeting transcription tools like Otter.ai, customer service chatbots, and voice assistants. The difference is what’s plugged into each stage. Once you see that pattern, you start noticing it everywhere.
At the time of this post, the biggest players are Nuance DAX, Abridge, and Ambience Healthcare. Even Doximity released a free AI scribe in 2025, which tells you basic ambient transcription is becoming expected rather than exceptional.
What This Means for You as a Patient
Next time you see your doctor, pay attention. If they seem more focused on you, with more eye contact and less typing, an AI scribe might be in the room. Some doctors report cutting documentation time in half with these tools. That doesn’t just help them. It means more of your appointment is actually spent on your care.
A 10-Second Heart Test That Finds What Other Tests Miss
While AI scribes handle the paperwork side of medicine, another machine learning application is focused on screening: flagging a heart condition that traditional tests miss entirely.
In late 2025, researchers at the University of Michigan published a study in NEJM AI describing a machine learning model that can detect coronary microvascular dysfunction (CMVD) from a standard 10-second EKG.
If you haven’t heard of CMVD, you’re not alone; most people haven’t. It affects the tiny blood vessels in your heart, not the large arteries. It causes chest pain and raises heart attack risk, but here’s the problem: standard tests like angiograms look at the big vessels. If those are clear, your test comes back “normal” even if the small vessels are in trouble. Diagnosing CMVD properly requires a PET myocardial perfusion scan, which is expensive, specialized imaging that most hospitals don’t have.
About 14 million people visit an ER or outpatient clinic in the U.S. each year with chest pain. Some of them have CMVD but get sent home because their standard tests don’t show anything wrong.
How the Michigan EKG Model Works
The approach behind this study is one worth knowing, because it’s the same technique behind ChatGPT, Claude, and most of the large language models making headlines right now. It’s called self-supervised learning, and the mental model is simple: learn the language first, then learn the task.
Phase 1: Learn the language. The Michigan team fed their model more than 800,000 EKG waveforms with no labels attached. Nobody told it which ones were healthy and which ones weren’t. The model’s only job was to process the electrical patterns in those 800,000 recordings and figure out on its own what “normal” looks like, what “abnormal” looks like, and what all the variations in between look like. Think of it like dropping someone into a foreign country for a year before asking them to do a specific job there. They don’t know the job yet, but they know the language.
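To make that concrete, here’s a minimal sketch of one common self-supervised setup for signal data: hide part of each waveform and train the model to fill in the gap. This illustrates the general technique, not the Michigan team’s actual architecture, and the random “EKGs” are synthetic stand-ins.

```python
# A sketch of self-supervised pretraining on unlabeled EKGs: mask a
# chunk of each waveform and train the model to reconstruct it. One
# common approach, not necessarily the study's; data here is random
# noise standing in for real EKG signals.
import torch
import torch.nn as nn

SIGNAL_LEN = 1000  # ~10 seconds of single-lead EKG at 100 Hz (assumed)

encoder = nn.Sequential(nn.Linear(SIGNAL_LEN, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, SIGNAL_LEN))
optimizer = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for step in range(100):  # real pretraining would loop over 800,000+ waveforms
    batch = torch.randn(32, SIGNAL_LEN)  # stand-in for a batch of real EKGs
    masked = batch.clone()
    masked[:, 400:600] = 0.0             # hide a two-second span
    reconstruction = decoder(encoder(masked))
    loss = nn.functional.mse_loss(reconstruction, batch)  # no labels needed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```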
Phase 2: Learn the task. Once the model understood the “language” of EKG signals, the team fine-tuned it on a much smaller set of EKGs that had matching PET scan results (the gold standard for CMVD diagnosis). Now the model could connect specific electrical patterns it already recognized to confirmed disease.
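And Phase 2 in sketch form: keep the pretrained encoder, attach a small classification head, and train on the much smaller labeled set. The labels below are random stand-ins for PET-confirmed results.

```python
# A sketch of Phase 2: reuse the pretrained encoder and fine-tune a
# small classification head on labeled examples. The labels here are
# random stand-ins for PET-confirmed CMVD results.
import torch
import torch.nn as nn

SIGNAL_LEN = 1000
encoder = nn.Sequential(nn.Linear(SIGNAL_LEN, 256), nn.ReLU(), nn.Linear(256, 64))
# In practice you would load the Phase 1 weights: encoder.load_state_dict(...)

head = nn.Linear(64, 2)  # two classes: CMVD vs. no CMVD
optimizer = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-4)

for step in range(50):  # far less data needed than in Phase 1
    ekgs = torch.randn(16, SIGNAL_LEN)   # stand-in labeled EKGs
    labels = torch.randint(0, 2, (16,))  # stand-in PET results
    logits = head(encoder(ekgs))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```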
This two-phase approach is why the model worked so well. Training directly on labeled data (the old way) requires huge numbers of expensive, expert-labeled examples. Self-supervised learning lets the model build a foundation from cheap, unlabeled data first, then specialize with a smaller set of labeled examples. It’s the same reason ChatGPT can write a legal memo even though it wasn’t trained specifically on legal documents. It learned language first, then applied it.
The result: the model could identify CMVD using a regular resting EKG. No expensive imaging, no exercise stress test, no specialist center required. It outperformed earlier AI models across nearly all 12 diagnostic tasks it was tested on, including several that existing EKG tools can’t handle at all.
Dr. Venkatesh Murthy, the study’s senior author, described it this way: they taught the model to understand the electrical language of the heart without human supervision.
How You Can See Machine Learning in Healthcare
At your next appointment: Ask your doctor if they use an AI scribe. Practices that use them are generally required to tell you. Knowing how the technology works makes you a more informed patient.
On your wrist: The same pattern recognition behind the Michigan EKG study already exists in consumer wearables. Apple Watch detects atrial fibrillation. Fitbit and Samsung watches track heart rate variability. These aren’t as advanced as clinical models, but they use the same machine learning fundamentals: training on large datasets of sensor data to identify unusual patterns. (There’s a toy sketch of this idea after the list.)
During telehealth visits: If you’ve done a virtual visit recently, machine learning is probably working in the background to help triage your symptoms, route you to the right provider, or flag potential drug interactions.
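For a flavor of how the wearable version works, here’s a toy sketch of the core idea: learn a baseline from your own past readings, then flag readings that deviate sharply. Real devices use far richer models trained on sensor data from millions of users; the numbers below are made up.

```python
# A toy version of wearable-style pattern recognition: learn a baseline
# from past resting heart rate, then flag readings that deviate sharply.
# Real devices use far richer models; these numbers are made up.
import statistics

resting_hr_history = [62, 64, 61, 63, 65, 62, 60, 64, 63, 61]  # beats per minute

def is_unusual(reading, history, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from baseline."""
    baseline = statistics.mean(history)
    spread = statistics.stdev(history)
    return abs(reading - baseline) > threshold * spread

print(is_unusual(63, resting_hr_history))  # False: within the normal range
print(is_unusual(95, resting_hr_history))  # True: worth a closer look
```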
Where Healthcare ML Goes From Here
The ambient scribe market still has open questions. Privacy, billing accuracy, and whether AI-written notes change how doctors think about patient care are all active debates. And research like the Michigan EKG study still needs real-world validation before it shows up in your local ER.
But the same machine learning techniques we’ve been exploring through sports (pattern recognition, classification, learning from large datasets) are already being used in medicine. The core idea is the same whether you’re predicting an NFL tackle or screening for a heart condition. The difference is what’s at stake.
