
👨🏼‍⚕️ Should doctors trust an AI that can't explain itself?

As healthcare AI explodes to a $500B+ market, the black box problem is becoming medicine's most urgent challenge


Welcome back to Healthy Innovations! 👋

In this issue, we're tackling a question that's becoming impossible to ignore: as AI systems make more clinical decisions, how much does it matter that we can't explain how they work?

With around 900 FDA-cleared AI medical devices now in use and the market racing toward $500 billion, the "black box problem" has shifted from academic debate to urgent practical challenge. These systems can diagnose diseases and guide treatment with remarkable accuracy - but when asked "why?" they often can't answer.

Let's dive in!

Black box problem (in healthcare AI): A situation in which an artificial intelligence system used for medical tasks produces predictions or recommendations, but the internal process by which it transforms clinical inputs into those outputs is not transparent or understandable to humans, hindering trust, validation, and accountability.

From diagnostic algorithms to clinical decision support, artificial intelligence is reshaping medicine at unprecedented speed. But there's a fundamental problem standing between promising technology and widespread adoption: we often can't explain how these systems reach their conclusions.

Healthcare AI is experiencing explosive growth. As of early 2024, the FDA had authorized around 900 AI-enabled medical devices, with well over 100 new approvals arriving annually in recent years. The market is projected by multiple analyses to reach several hundred billion dollars by the early-to-mid 2030s, with some forecasts exceeding $500 billion.

Yet beneath this impressive performance lies what researchers call the "black box problem."

Complex deep learning models can have millions of parameters working in intricate ways that make it nearly impossible to trace how specific inputs lead to specific outputs. An AI might correctly identify cancer in a scan 95% of the time, but when asked why it flagged a particular region as suspicious, the answer often amounts to computational silence.

When an AI recommends a treatment or flags a diagnosis, clinicians need to understand the reasoning. Patients deserve to know why an algorithm is influencing their care. And when something goes wrong, investigators need to pinpoint the failure to prevent future errors.

Why transparency matters more in healthcare than anywhere else

In most applications, AI mistakes are inconvenient. In healthcare, they can be fatal.

In a critical sector like healthcare, an erroneous AI prediction can have severe consequences, which sharpens the need for clarity and explainability. When a radiologist disagrees with an AI diagnosis, they need to understand the system's reasoning to make an informed decision. Without that understanding, the choice becomes binary: trust the AI blindly or ignore it completely.

Trust issues extend beyond individual clinical encounters. Developers and clinicians often want different things from AI explainability: developers prioritize model interpretability, while clinicians seek clinical plausibility.

This disconnect means that even when technical explanations exist, they may not address what clinicians actually need to know.

The bias problem hidden inside the black box

Perhaps the most concerning aspect of opaque AI systems is their potential to perpetuate or amplify existing healthcare disparities.

As of May 2024, FDA approvals of AI-enabled medical devices had reached 882, predominantly in radiology, followed by cardiology and neurology. However, research reveals troubling patterns in how these systems perform across different populations.

AI models learn from historical data, which often reflects existing biases in healthcare delivery. One study examining class imbalance effects on AI fairness found that imbalanced representation of racial groups in ICU mortality prediction yielded recall rates as low as 25% for underrepresented populations.
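To make that concrete, here is a minimal, hypothetical sketch of a per-group recall audit. The cohort, group labels, and model below are invented purely for illustration - the point is that an aggregate metric can look acceptable while recall for an underrepresented group is far lower.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Hypothetical cohort: group B is heavily underrepresented (5,000 vs 250 patients).
n_a, n_b = 5000, 250
X = rng.normal(size=(n_a + n_b, 5))
group = np.array(["A"] * n_a + ["B"] * n_b)

# Invented outcome whose relationship to feature 1 differs between the groups.
signal = X[:, 0] + np.where(group == "B", 1.5 * X[:, 1], -0.5 * X[:, 1])
y = (signal + rng.normal(scale=0.5, size=n_a + n_b) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# The aggregate number can hide how the minority group fares.
print("overall recall:", round(recall_score(y, pred), 2))
for g in ("A", "B"):
    mask = group == g
    print(f"recall, group {g}:", round(recall_score(y[mask], pred[mask]), 2))
```

The audit pattern itself is simple - stratify every performance metric by the demographic variables you have - but it only works when those variables are recorded, which is exactly the gap the list below points to.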

The problems compound throughout development:

  • Training data typically comes from a few large urban academic medical centers, missing rural and lower-income populations

  • Critical demographic information is often absent from patient records, making bias testing impossible

  • Emerging studies show poorer performance in some racial and ethnic minorities for specific devices, while systematic reviews highlight large reporting gaps that raise concerns about undetected disparities

The explainability toolbox is expanding

The field of Explainable AI has evolved significantly, developing techniques to peer inside previously impenetrable models:

  • Model-agnostic methods like LIME and SHAP generate explanations by analyzing how changes in inputs affect outputs, working across various machine learning models

  • Visual explanation methods create heat maps highlighting which regions of medical images most influenced an AI's decision (a minimal sketch of this idea follows the list)

  • Self-explainable AI builds interpretability directly into the model architecture rather than adding it afterward
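To give a flavour of how the perturbation-based and visual methods above work, here is a minimal occlusion-sensitivity sketch: mask one patch of an image at a time, re-score the masked copy, and record how much the prediction drops. The `predict_proba` callable, image size, and patch size are stand-ins for illustration, not a real clinical model.

```python
import numpy as np

def occlusion_heatmap(image, predict_proba, patch=8, baseline=0.0):
    """Model-agnostic saliency: how much does masking each patch hurt the score?"""
    h, w = image.shape
    reference = predict_proba(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # blank out one patch
            # A large drop in score means this region mattered to the prediction.
            heatmap[i // patch, j // patch] = reference - predict_proba(occluded)
    return heatmap

# Toy stand-in for a model: it only "looks at" the top-left corner of the image.
def toy_model(img):
    return float(img[:16, :16].mean())

scan = np.random.rand(64, 64)  # pretend this is a 64x64 medical image
print(occlusion_heatmap(scan, toy_model).round(2))
```

LIME and SHAP are more principled versions of this perturb-and-observe idea: LIME fits a simple local surrogate model around a single prediction, while SHAP attributes the prediction to input features using Shapley values.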

The regulatory landscape is shifting

The FDA released final guidance in December 2024 on predetermined change control plans for AI medical devices. These plans allow manufacturers to update systems after approval without submitting entirely new applications for each modification - but only if changes follow precisely documented procedures.

FDA draft guidance issued in August 2025 calls for comprehensive documentation including model descriptions, data lineage, performance metrics, bias analysis, human-AI workflow integration, and post-market monitoring. Regulators are starting to address bias through data and process requirements, but most jurisdictions still lack detailed, output-level performance thresholds for fairness.

Looking ahead: Building trust through transparency

Gaining the trust of healthcare professionals requires AI applications to be transparent about their decision-making processes. This means developing explanations calibrated to different audiences - technical teams need algorithmic details while clinicians need clinical reasoning and patients need accessible language.

Future AI systems will need to provide explanations in a context-dependent manner. A radiologist reviewing a lung scan needs different information than an oncologist planning treatment or a patient considering their options.

The field also needs to balance explainability with performance.

Many commonly used medicines lack a fully understood mechanism of action, yet they are deemed safe and effective on the strength of robust evidence from randomized controlled trials. This raises the question of whether AI should be held to a higher standard than pharmaceuticals.

The stakes couldn't be higher

As healthcare systems worldwide accelerate AI adoption, the black box problem moves from academic concern to urgent practical challenge. The potential benefits are enormous, but realizing them requires building systems that clinicians can trust and patients can accept.

The technology is advancing and regulatory frameworks are emerging. What remains is ensuring that as AI transforms healthcare, it does so transparently - and with clear accountability when things go wrong.

In an era where algorithms increasingly influence life-or-death decisions, understanding how they reach those decisions isn't just technically desirable - it's ethically essential.

Innovation highlights

🧠 Brain chip, hold the wires. Researchers from Columbia, Stanford, and Penn have developed BISC, a wireless brain-computer interface packed onto a single silicon chip. With 65,536 electrodes crammed into just 3 cubic millimeters, it's about 1,000 times smaller than standard implants. The flexible chip curves to match the brain's surface and streams neural data wirelessly. In preclinical trials with pigs and primates, BISC delivered reliable recordings for months, decoding everything from wrist movements to visual patterns. Human trials for epilepsy are next.

🧪 Hep C test goes turbo. Northwestern scientists have built the fastest hepatitis C diagnostic yet, delivering results in just 15 minutes. That's 75% faster than current rapid tests, which take 40-60 minutes and often outlast a typical clinic visit. The secret? Adapting their COVID-era DASH PCR platform for whole blood samples. When Johns Hopkins independently tested 97 specimens, accuracy hit 100%. For the 50 million people worldwide living with chronic HCV, same-day diagnosis could finally mean same-day treatment.

👯 Your digital twin awaits. Harvard researchers are building virtual copies of patients to test treatments before doctors prescribe them. The AI tool, called COMPASS, crunches health records, genetic data, and tumor biopsies to simulate how an individual might respond to specific drugs. One patient could have 100 digital twins running different scenarios. For Alzheimer's patients especially, this could finally answer a maddening question: is this medication actually helping me, or am I just another data point in someone else's clinical trial?

Company to watch

🦾 Open Bionics is transforming prosthetics with the Hero Arm, a 3D-printed bionic limb for below-elbow amputees that's lightweight, functional, and deliberately eye-catching. Founded in 2014 in Bristol, England, the company uses myoelectric technology—muscle signals from the residual limb—to control multiple programmable grips with haptic feedback.

What makes Open Bionics stand out is their philosophy: limb difference as empowerment, not something to hide. Their bold designs include branded covers from Disney and gaming companies, turning prosthetics into conversation starters rather than concealed medical devices.

The 3D printing approach cuts both weight and cost compared with traditional prosthetics. The Hero Arm is now available through clinical partners across the UK, USA, Europe, and Australia, with NHS approval in some UK cases and various insurance and funding pathways elsewhere. Open Bionics has earned multiple engineering awards while serving thousands of users worldwide.

Weird and wonderful

👵 Grandma's got an upgrade. AI company 2wai just released an ad showing a pregnant woman video-calling her dead grandmother's AI avatar for parenting advice. "Put your hand on your tummy and hum to him," the digital granny suggests. "You used to love that."

Founded by CEO Mason Geyser and Disney child star Calum Worthy, 2wai creates "HoloAvatars" - AI renditions of real people powered by large language models. Beyond deceased relatives, the company offers digital versions of fictional characters, historical figures, celebrities, and even yourself ("Is one of you really enough?" their website asks).

The internet responded predictably. "We are going to make what you're doing illegal," one viewer commented. Another noted they "missed the part where the 'grandmother' stops working and tells them to upgrade to the premium version." Geyser told The Independent the controversy was deliberate, following the playbook of AI hardware company Friend. Mission accomplished.


Simplify Training with AI-Generated Video Guides

Are you tired of repeating the same instructions to your team? Guidde revolutionizes how you document and share processes with AI-powered how-to videos.

Here’s how:

1️⃣ Instant Creation: Turn complex tasks into stunning step-by-step video guides in seconds.
2️⃣ Fully Automated: Capture workflows with a browser extension that generates visuals, voiceovers, and call-to-actions.
3️⃣ Seamless Sharing: Share or embed guides anywhere effortlessly.

The best part? The browser extension is 100% free.

Thank you for reading the Healthy Innovations newsletter!

Keep an eye out for next week’s issue, where I will highlight the healthcare innovations you need to know about.

Have a great week!

Alison ✨

P.S. If you enjoyed reading the Healthy Innovations newsletter, please subscribe so I know the content is valuable to you!

P.P.S. Healthcare is evolving at an unprecedented pace, and your unique insights could be invaluable to others in the field. If you're considering starting your own newsletter to share your expertise and build a community around your healthcare niche, check out beehiiv (affiliate link). There's never been a better time to start sharing your knowledge with the world!
