Welcome back to Healthy Innovations! 👋
This week we are diving into AI in medical imaging – one of the fastest-moving areas in healthcare right now. Landmark findings published this month in Nature Cancer showed that AI can outperform human radiologists at detecting breast cancer, catching cases that routine screening had missed entirely.
So, this is the perfect moment to explore how AI is reshaping radiology, from emergency triage to TB screening in communities that have never had a radiologist.
Let's dive in!
The findings that changed the conversation
This month, findings published in Nature Cancer drew widespread attention across radiology.
Researchers from Imperial College London, Google, the Universities of Cambridge and Surrey, and multiple NHS trusts completed the largest AI breast cancer screening study in NHS history – covering 175,000 women across three separate phases of research.
The work was published across two papers and examined AI as a second reader, as a tool for reducing radiologist workload, and – for the first time – as a participant in clinical arbitration, where readers disagree on a diagnosis.
Across all three phases, the AI detected more invasive cancers, more cancers overall, produced fewer false positives, and recalled fewer women at their first scan than human readers did. It also caught one in four of the cancers that routine screening had missed.
AI has the potential to transform how the NHS prevents, detects and treats diseases like cancer. These findings highlight how AI can support clinicians to identify more cancers earlier, reduce errors and deliver higher quality care to patients.
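The double-reading-with-arbitration workflow described above can be sketched in a few lines. This is a minimal illustration, not the study's actual decision rules: the threshold value and the idea of the AI score breaking a tie directly are assumptions for the sketch (in practice a human arbitrator makes the final call, informed by the AI).

```python
# Hypothetical threshold -- real systems calibrate this against screening data.
RECALL_THRESHOLD = 0.5

def double_read(reader1_recall: bool, reader2_recall: bool, ai_score: float) -> str:
    """Double reading: if the two readers agree, that decision stands.
    If they disagree, the case goes to arbitration, where the AI's
    suspicion score (0 to 1) is used here as the tie-breaker."""
    if reader1_recall == reader2_recall:
        return "recall" if reader1_recall else "no recall"
    # Disagreement: arbitration step.
    return "recall" if ai_score >= RECALL_THRESHOLD else "no recall"

print(double_read(True, True, 0.1))   # both readers agree -> recall
print(double_read(True, False, 0.8))  # disagreement, high AI score -> recall
```

The interesting design question the NHS study probed is exactly this third branch: what happens when the AI participates in resolving human disagreement, rather than simply acting as one of the two readers.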
A specialty under pressure
To understand why AI matters so much in radiology right now, you need to understand the workforce crisis the specialty is navigating.
In the UK, there is a 29% shortfall of clinical radiologists – almost 2,000 posts – projected to reach 39% by 2029. The US picture is comparable, with vacancy rates at all-time highs. In some countries, fewer than five radiologists serve entire national populations. For many patients, AI is not a convenience. It is the only way a scan gets read at all.
By mid-2025, the FDA had cleared approximately 873 AI algorithms for medical imaging, making diagnostic imaging the single largest AI target among all medical specialties. The technology now spans virtually every scan type – mammography, CT, MRI, chest X-ray, ultrasound – each addressing a different clinical problem.
What AI actually does in clinical practice
AI in radiology shows up at different points in the diagnostic journey, solving different problems. There are three main roles it plays today.
1. Detection and triage
Busy radiology departments receive hundreds of scans a day. Without AI, an urgent case – a brain bleed, a pulmonary embolism, a stroke – can sit in a queue behind routine work. AI triage systems flag critical findings the moment a scan is processed, so the case reaches a radiologist immediately.
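Under the hood, this kind of triage is a reprioritized worklist. Here is a minimal sketch of the idea using a priority queue – the class, priority levels, and flag names are invented for illustration, not taken from any vendor's product:

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical priority levels -- real triage products use richer schemes.
CRITICAL = 0  # AI-flagged: suspected bleed, embolism, stroke
ROUTINE = 1

@dataclass(order=True)
class Study:
    priority: int
    arrival_order: int                     # tie-breaker: first in, first read
    description: str = field(compare=False)

class Worklist:
    """A reading worklist where AI-flagged critical studies jump the queue."""
    def __init__(self):
        self._heap = []
        self._counter = 0

    def add(self, description: str, ai_flagged_critical: bool = False):
        priority = CRITICAL if ai_flagged_critical else ROUTINE
        heapq.heappush(self._heap, Study(priority, self._counter, description))
        self._counter += 1

    def next_study(self) -> str:
        return heapq.heappop(self._heap).description

worklist = Worklist()
worklist.add("routine chest X-ray")
worklist.add("head CT", ai_flagged_critical=True)  # AI suspects hemorrhage
worklist.add("routine knee MRI")
print(worklist.next_study())  # the flagged head CT is read first
```

The point is not the data structure but the workflow change: the scan no longer waits its turn, which is where the minutes saved in stroke care come from.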
Viz.ai's stroke detection platform is deployed in over 1,600 hospitals, and patients reach treatment approximately 66 minutes faster when its alert system is in use. At one trauma center, 30-day mortality for brain hemorrhage patients fell from 27.7% to 17.5% after an AI triage tool was introduced.
2. Screening support
High-volume screening programs generate enormous caseloads that stretch radiologist capacity. AI acts as a second reader, catching findings that might otherwise be missed and reducing unnecessary recalls.
Key players include Lunit and iCAD, focused on mammography accuracy and cutting false positives, and Kheiron Medical Technologies, whose Mia system showed meaningful cancer detection gains across NHS screening sites in Scotland in a separate 2026 Nature Cancer study.
3. Report drafting
Writing up findings is one of the most time-consuming parts of a radiologist's day. Generative AI tools now produce a structured first draft from image findings – which the radiologist reviews, edits, and signs off on.
At one large health system using Rad AI's reporting platform, 79% of radiologists improved their efficiency, measured by median time spent per report. A prospective study from a2z Radiology AI at RSNA 2025 showed AI-assisted drafting reduced reporting time by 17.8% and mental demand by 22.4%, without increasing false positives.
Beyond high-income health systems
One of the most significant – and least discussed – dimensions of AI in radiology is what it is doing where there are virtually no radiologists to begin with.
Qure.ai, named among Time Magazine's 100 Most Influential Companies in 2025, has built a chest X-ray AI that identifies over 35 abnormalities, has been evaluated by the WHO for tuberculosis (TB) screening in settings without human readers, and is deployed in more than 100 countries.
In Mali, a mother with a persistent cough received a TB diagnosis in seconds at a community health center with no doctor present – just a mobile X-ray machine and an AI algorithm. In refugee camps in Chad, AI reads X-rays where no radiologists exist. Over 80 low- and middle-income countries now use AI to screen for tuberculosis. AI performance for TB detection has approximated that of human experts, offering a practical solution to the shortage of radiograph readers in high-burden, low-resource settings.
Companies shaping the field
At RSNA 2025, more than 100 companies filled the AI showcase, promising improvements across every imaging modality. The established imaging hardware giants – GE HealthCare (96 FDA-cleared tools), Siemens Healthineers (80), and Philips (42) – have all built AI deeply into their platforms, with Philips unveiling its Advanced Visualization Workspace at RSNA 2025, pulling multi-modality imaging into a single workflow.
Among the specialists:
Aidoc operates an orchestration platform that works like an app store for radiology AI, letting third-party tools plug into existing hospital workflows; the platform analyzes approximately 3 million patients each month
Viz.ai focuses on time-critical conditions – stroke, hemorrhage, pulmonary embolism – with more than 50 FDA-cleared algorithms and care coordination tools that alert clinical teams within seconds of a scan being read
Subtle Medical, recognized by TIME as one of the world's top healthcare companies in 2025, has its image-quality enhancement software installed in over 1,000 scanners worldwide, improving MRI and PET clarity without additional scan time or radiation dose
What still needs work
The progress is real, but so are the obstacles.
Bias in training data remains the most fundamental concern.
Philips' 2025 Future Health Index found that 63% of radiologists are worried about bias in AI algorithms. Most systems were trained on datasets skewed toward patients from high-income countries and specific scanner manufacturers. A tool validated at one hospital can perform quite differently at another.
Trust between clinicians and AI is still being earned.
The Imperial-Google study found that radiologists sometimes rejected correct AI findings during arbitration – overruling a tool that had caught a cancer a human had missed. The reverse risk is equally real: automation bias can lead clinicians to place undue trust in AI outputs and set aside their own clinical judgement, particularly under time pressure.
On regulation, the EU AI Act – whose high-risk obligations phase in through 2026 and 2027 – classifies medical AI as high-risk, requiring rigorous documentation of training data, bias checks, and human oversight policies. In many lower-income countries, no equivalent framework exists – creating both an access opportunity and a patient safety gap that need addressing in parallel.
The patient view
The benefit shows up in concrete ways: a stroke patient reaching treatment an hour sooner, a woman leaving her first mammogram with a biopsy referral rather than a missed diagnosis, a community in rural Nigeria receiving TB screening that geography had previously made impossible.
While 85% of radiologists in Philips' 2025 survey express optimism about AI in healthcare, only 59% of patients feel similarly confident. Closing that gap will take demonstrated outcomes and honest clinical communication.
The tools now in use are finding cancers, preventing strokes, and reading scans in places that have never had a radiologist.
The work now is making sure that progress reaches every patient who needs it.
Join over 800 healthcare innovators who get these insights delivered straight to their inbox! Healthy Innovations is read weekly by top pharma executives, leading academics, startup founders, agency teams, and investors.
Subscribe now to stay ahead of industry trends!
Innovation highlights
🥽 VR calms pre-op nerves. Before surgery, most patients get a leaflet – but around six in 10 adults in England struggle with complex medical information. A new study tested VR as part of the consent process for kidney stone treatment. 150 patients aged 22–80 explored a virtual operating room, watching a 3D demonstration of shockwave lithotripsy. Afterward, patients reported better understanding and less anxiety – with the biggest impact seen in those aged 65 and above.
🧠 Blood makes brains see-through. Scientists have found a way to make living brain tissue transparent without altering its biology – a world first. The key ingredient? Albumin, a common blood protein. When added to the culture medium, it makes brain slices transparent within an hour. In living mice, fluorescence signals from deep neurons became three times brighter, for the first time revealing activity in the deeper layers of the cerebral cortex.
🎓 AI upgrades med school. Medical education just got a serious upgrade. Elsevier's new Osmosis AI combines AI-powered answers with award-winning Osmosis videos and 140 years of clinical expertise – giving medical students a study companion they can actually trust. Every response is cited and verified against Elsevier's evidence-based content, sharply reducing the risk of hallucinated answers. With 74% of med students already using AI tools, this purpose-built solution fills a real gap – helping students prep for USMLE exams anytime, anywhere.
Cool tool
🔬 DermaSensor is a handheld AI device that gives primary care clinicians a fast, evidence-based second opinion on suspicious skin lesions. Press the probe against a lesion, and it uses spectroscopy to analyze cellular signatures beneath the surface, returning a clear "evaluate further" or "likely benign" recommendation in seconds. It's the first FDA-cleared AI device for point-of-care skin cancer evaluation in primary care.
A pivotal study across 22 US centers found 96% sensitivity across 224 confirmed skin cancers, and negative results carried a 97% probability of being benign. In a companion study, using DermaSensor cut missed skin cancers in half, from 18% down to 9%.
For GPs and internists without dermatology training, it's a practical tool that builds confidence and supports smarter referral decisions, without replacing biopsy or specialist review.
Weird and wonderful
🐱 Cats: Our cancer colleagues. Your cat isn't just stealing your pillow – it's also quietly advancing cancer research. Scientists just completed the largest-ever cancer DNA sequencing study of cat tumors, mapping 1,000 genes across 500 cats and 13 tumor types. Turns out, feline and human cancers are strikingly similar at the genetic level. The most commonly mutated gene in cat cancers is TP53 – the exact same gene most commonly mutated in human cancers.
The gene PIK3CA, mutated in about 40% of human breast cancers, showed up altered in roughly 50% of cat mammary cancers, meaning drugs already developed for human PIK3CA mutations could now be tested on cats too. Cats also share many of the same underlying conditions as humans – obesity, diabetes, kidney disease – making your napping, indifferent housemate a surprisingly reliable research partner.

Image created using Google Nano Banana Pro
Thank you for reading the Healthy Innovations newsletter!
Keep an eye out for next week’s issue, where I will highlight the healthcare innovations you need to know about.
Have a great week!
Alison ✨





