- cross-posted to:
- technology@lemmy.zip
TL;DR: yes
It’s unfortunate that LLMs are the only thing that comes to mind when AI is mentioned, though. Something that can do pattern recognition better than a human can is good for this application.
Even if it did pattern recognition only as well as, or slightly worse than, a human, it would still be worthwhile. As the article points out, it’s basically a non-tiring, always-ready second opinion. That alone helps a lot.
One issue I could see is using it not as a second opinion, but the only opinion. That doesn’t mean this shouldn’t be pursued, but the incentives toward laziness and cost-cutting are obvious.
EDIT: Another potential issue is the AI detection being more accurate for certain groups (e.g. white Europeans), which could result in underdiagnosis in minority groups if the training data set doesn’t include sufficient data for those groups. I’m not sure how likely that is with breast cancer detection, however.
Also if it’s integrated poorly. If the human only serves as a secondary check on the AI, which is mostly right, you condition the human to just click through and defer to the AI. The better way would be to have the human and the AI judge each case independently and carefully review the cases where they disagree, but that won’t save anyone money.
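That independent double-read idea can be sketched in a few lines. This is purely illustrative (the function name, flags, and routing labels are all made up, not from the article): each case gets two blind reads, and only discordant cases get escalated.

```python
def triage(ai_flag: bool, human_flag: bool) -> str:
    """Route a case based on two independent reads.

    Both the AI and the radiologist score the case without seeing
    the other's result; concordant reads are accepted as-is, and
    discordant reads are escalated for careful review.
    """
    if ai_flag == human_flag:
        # Concordant reads: accept the shared verdict.
        return "positive" if ai_flag else "negative"
    # Discordant reads: neither opinion automatically wins.
    return "review"

# Example: three cases, two concordant and one discordant.
print(triage(True, True))    # both flagged it
print(triage(False, False))  # both cleared it
print(triage(True, False))   # disagreement -> human review
```

The point of the structure is that the expensive step (careful review) is only triggered by disagreement, which is exactly the step a cost-cutting deployment would be tempted to drop.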
If the court system allowed assigning partial fault for “preventable” deaths to the hospital for employing practices that are not in the best interests of the patient, it might give them a financial incentive.
Definitely, here’s hoping the accountability question will prevent that, but the incentive is there, especially in systems with for-profit healthcare.
My favorite AI fact is from cancer research. The New Yorker has a great article about how an algorithm used to identify and price out pastries at a Japanese bakery found surprising success as a cancer detector. https://www.newyorker.com/tech/annals-of-technology/the-pastry-ai-that-learned-to-fight-cancer
I remember, when we were learning Prolog, that they were already experimenting with AI in the 70s or thereabouts, and it was quite good at diagnostics. However, doctors were scared of losing their jobs instead of embracing it and using it as a tool, so it was dropped at the time. Hopefully they’ll use it as an additional tool this time and everybody benefits.
If you’re going back that far, I remember hearing a story about the Australian military experimenting with an immersive AI simulation during a typical “give us money” event, where a helicopter flew over an area and the kangaroos scattered at the sound, disappearing over a hill…
Then reappeared with RPGs and fired them at the helicopter, taking it down. Lots of red faces and mumbling about working out some kinks. 😄
tl;dr: I’m old enough to remember when “AI” was a benign comic novelty. 🙃