
Would AI be any better (more likely) to pick up this diagnostic error, considering how rare (less likely) the cancer condition was, since AIs are normally programmed to yield the most likely answer?

Case in point

BBC News

'Our daughter's cancer symptoms were dismissed because sh...

Isla Sneddon died in March 2025 aged 17, just six months after she was diagnosed with cancer.

The parents of a teenage girl who died from breast cancer say their only daughter could still be alive now if she had been treated the same as an adult.

Isla Sneddon, from Airdrie, died in March 2025 aged 17, just six months after she was diagnosed with cancer.

Her parents say doctors had downgraded her referral for biopsies to a routine one because of her age, meaning her cancer went undetected until it was too late.

7 minutes ago, studiot said:

Would AI be any better (more likely) to pick up this diagnostic error, considering how rare (less likely) the cancer condition was, since AIs are normally programmed to yield the most likely answer?

Case in point

BBC News

'Our daughter's cancer symptoms were dismissed because sh...

Isla Sneddon died in March 2025 aged 17, just six months after she was diagnosed with cancer.

Hmm, I think this may be misunderstanding how diagnostic AI works. These are, to my understanding, not LLMs.

Just now, exchemist said:

Hmm, I think this may be misunderstanding how diagnostic AI works. These are, to my understanding, not LLMs.

Yes, that is what I wanted to discuss.

Computer programs are very good at pattern recognition when given a particular configuration to search for.

So, for instance, they are good for fingerprints, X-ray defects, etc.
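For illustration, a minimal sketch of that kind of search-for-a-known-configuration matching, using OpenCV template matching. The file names are hypothetical, and real fingerprint or defect-detection systems are far more sophisticated; this only shows the basic idea of sliding a known pattern over an image and scoring each position:

```python
# A minimal sketch of "search for a known configuration" pattern matching,
# using OpenCV template matching. File names are hypothetical; real
# fingerprint or X-ray defect systems are far more sophisticated.
import cv2
import numpy as np

scan = cv2.imread("xray_scan.png", cv2.IMREAD_GRAYSCALE)            # image to search
template = cv2.imread("defect_template.png", cv2.IMREAD_GRAYSCALE)  # known defect shape

# Slide the template across the scan and score the similarity at each position.
scores = cv2.matchTemplate(scan, template, cv2.TM_CCOEFF_NORMED)

# Report every location whose similarity exceeds a chosen threshold.
threshold = 0.8
ys, xs = np.where(scores >= threshold)
for x, y in zip(xs, ys):
    print(f"possible defect at ({x}, {y}), score {scores[y, x]:.2f}")
```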

18 minutes ago, studiot said:

Yes, that is what I wanted to discuss.

Computer programs are very good at pattern recognition when given a particular configuration to search for.

So, for instance, they are good for fingerprints, X-ray defects, etc.

Indeed. I don't know how diagnostic AI works, but I imagine it may look for patterns in the data: X-ray pictures, blood analyses, physical examinations and so forth, and then provide the doctor with an assessment of the probabilities of different conditions, or something like that.

In this sad case, I imagine the doctor would then have had to make a decision to dismiss cancer from the list of possible conditions presented to him or her in black and white. This would be psychologically hard to do - and to justify in retrospect - if the AI came up with a probability of, say, over 20% for cancer. So it might have prompted an intervention.
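As a rough sketch of that workflow, a small classifier that reports a probability per condition and flags anything above a review threshold might look like the following. The features, toy data, and the 20% threshold are all hypothetical; the point is presenting the full set of probabilities rather than only the single most likely answer:

```python
# A rough sketch of "probabilities per condition, with a review threshold".
# Features, toy data, and the 20% threshold are all hypothetical; the point
# is flagging every condition above the threshold for the doctor, rather
# than reporting only the single most likely answer.
import numpy as np
from sklearn.linear_model import LogisticRegression

CONDITIONS = ["benign", "infection", "cancer"]

# Toy training data: rows are (age, lump_size_mm, marker_level) -- made up.
X_train = np.array([
    [17,  5, 0.2], [45, 20, 3.1], [30,  8, 1.5],
    [22,  6, 0.3], [60, 25, 4.0], [35, 10, 1.8],
])
y_train = np.array([0, 2, 1, 0, 2, 1])  # indices into CONDITIONS

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A new patient: young, but with a sizeable lump and an elevated marker.
patient = np.array([[17, 12, 2.0]])
probs = model.predict_proba(patient)[0]

# Flag anything over 20% for human review, not just the argmax.
for name, p in zip(CONDITIONS, probs):
    flag = "  <-- flag for review" if p > 0.20 else ""
    print(f"{name}: {p:.0%}{flag}")
```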

2 hours ago, studiot said:

Would AI be any better (more likely) to pick up this diagnostic error

Details matter here. It depends. Which AI? How was it trained? What parameters were given for the analysis? Would humans have done any better / worse? Probably the latter.

16 minutes ago, iNow said:

Would humans have done any better / worse? Probably the latter.

Details matter here. It depends. Which humans? How were they trained? What parameters were given for the analysis?

Just now, Genady said:

Details matter here. It depends. Which humans? How were they trained? What parameters were given for the analysis?

Indeed, it depends: what algorithm, and how does it relate to human thinking?

48 minutes ago, Genady said:

Details matter here. It depends. Which humans? How were they trained? What parameters were given for the analysis?

Bravo. Agreed. /chefskiss

I had a discussion recently with folks from health authorities who were testing a chatbot for patient interactions and diagnostics. It was specifically trained on medical data, and what they wanted to use it for was initial interactions and preliminary diagnoses. I don't know the specific model they tested, but they did a comparative study with health care providers.

The interesting bit is that in the patient cohort, folks significantly preferred it over interactions with real family doctors, to a large degree because they didn't feel rushed and could chat at length about their issues. And on the diagnostic side, it outperformed humans, because it was able to pick up things that were not mentioned to, or were missed by, the human providers.

That being said, I think medicine is a great place for AI, as in many cases the way a healthcare provider works is based on existing diagnoses and there is comparatively little room (or allowance) for creative assessments or trying out new ideas. I think there was one area where AI underperformed by a little bit, but I cannot recall what it was. It is possible that it was related to rare diseases, where overall detection was low to begin with.

I think there are a few things one could glean from those tests (unfortunately the paper is not written yet). The first is the benefit to patient satisfaction. Even though the interaction is virtual, the fact that things go at the patient's pace, and that the AI has unlimited patience, makes them feel taken seriously. The second is that for routine things it performs better, as it is less likely to dismiss things. For rare or very difficult diagnoses, it would depend a bit.

On the human side, the variance is huge. Some specialists get to the right diagnosis just because it happens to be in their wheelhouse. Also, in my experience, MDs with an active research program tend to pick up non-regular things, as they are more used to thinking in an analytical way, as opposed to going through checklists. I have had cases where I had to explain to family doctors the etiology of certain diseases and their molecular mechanisms, because they either got it wrong or the references they used (in one case, a wiki) were off.

I assume an AI system (based on current capacities) will have less variance but will be more likely to miss the outliers, though that can be tweaked, of course. But given the system in which healthcare currently operates, AI models are almost certain to have a serious impact here, including on the patient-facing side.
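One hypothetical form such a tweak could take is per-condition escalation thresholds, so that a rare but serious condition is escalated to a human at a much lower model probability than a common benign one, trading extra false alarms for fewer missed outliers. The condition names and numbers below are made up:

```python
# A hypothetical sketch of such a tweak: per-condition escalation thresholds.
# A rare but serious condition escalates to a human at a much lower model
# probability than a common benign one, trading extra false alarms for fewer
# missed outliers. Names and numbers are made up.
ESCALATION_THRESHOLDS = {
    "common_benign": 0.50,  # act only when the model is fairly confident
    "rare_cancer":   0.05,  # escalate on even a small probability
}
DEFAULT_THRESHOLD = 0.50

def triage(probabilities: dict[str, float]) -> list[str]:
    """Return the conditions that should be escalated to a human."""
    return [
        condition
        for condition, p in probabilities.items()
        if p >= ESCALATION_THRESHOLDS.get(condition, DEFAULT_THRESHOLD)
    ]

# The model thinks cancer is unlikely, but it is still escalated.
print(triage({"common_benign": 0.45, "rare_cancer": 0.07}))  # -> ['rare_cancer']
```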

Edit: On the diagnostic side the implementation is probably seamless: basically AI-enhanced tools with human oversight. The main issue I see there is that these conveniences often lead to a drop in human capacity, especially as trust in the tools themselves increases. As those tools might not be static, it is unclear to me what happens if human capacity decreases.
