
These medical X-rays are all deepfakes — and they fool even radiologists

Source: NatureView Original
Science · March 24, 2026


AI-generated X-rays, such as these examples, could distort AI tools used to analyse medical data if included in their training data sets. Credit: Radiological Society of North America (RSNA)

Most radiologists struggle to identify X‑ray scans that are generated by artificial intelligence: fewer than half spotted synthetic images hidden in real medical data, according to research published today in Radiology1. Large language models (LLMs) also struggled to distinguish real from synthetic medical images.

The study offers guidance to help radiologists improve their skills in detecting AI-generated X-rays. The researchers also warn that synthetic data could creep into the scientific literature and into medical litigation, with harmful consequences.

“The results from this study are both disturbing and not very surprising to me,” says Elisabeth Bik, a microbiologist and image-integrity specialist based in San Francisco, California. “This raises concerns not only for research integrity, but also for clinical workflows, insurance claims and legal contexts where imaging evidence is used.”

Overly smooth bones

In the study, 17 radiologists from 12 research centres were presented with X-ray scans, half of which were real and half generated by AI. Without being told the purpose of the study, participants were asked about the technical quality of the images and whether they noticed anything unusual; 41% raised concerns that AI-generated scans might have infiltrated the data set.
