Commentary: AI isn't ready to be your doctor yet, but will it ever be?

As almost everybody knows, the AI gold rush is upon us. And in few fields is it happening as fast and furiously as in healthcare.
That points to an important corollary: Beware. Artificial intelligence technology has helped radiologists identify anomalies in images that human users have missed. It has some evident benefits in relieving doctors of the back-office routines that consume hours better spent treating patients, such as filing insurance claims and scheduling appointments.
But it has also been accused of providing erroneous information to surgeons during operations that placed their patients at grave risk of injury, and of fomenting panic among users who take its offhand responses as serious diagnoses. The commercial direct-to-consumer applications being promoted by AI firms, such as OpenAI's ChatGPT Health and Anthropic's Claude for Healthcare — both of which were introduced in January — raise special concerns among medical professionals.
That's because they've been pitched to users who may not appreciate their tendency to output erroneous information and offer inappropriate advice.
“Eventually, a lot of this stuff is going to be great, but we’re not there yet,” says Eric Topol, a cardiologist associated with Scripps Research Institute in La Jolla. “The fact that they’re putting these out without enough anchoring in safety and quality and consistency concerns me,” Topol says.
“They need much tighter testing. The problem I have is that these efforts are largely stemming from commercial interests — there’s furious competition to be the first to come out with an app for patients, even if it’s not qu...