In medicine, cautionary tales about the unintended effects of artificial intelligence (AI) are well known. AI programs designed to predict sepsis or improve follow-up care have triggered false alarms and deepened health disparities. As a result, physicians have been hesitant to fully incorporate AI into their workflows, using it mainly as a scribe, a second opinion, or a back-office organizer. Even so, the field of AI in medicine is gaining momentum and investment.

The Food and Drug Administration (FDA), which plays a crucial role in approving new medical products, has been actively exploring the use of AI. AI has the potential to discover new drugs, identify unexpected side effects, and assist overwhelmed staff with repetitive tasks. Yet the FDA has faced criticism for not thoroughly vetting, or publicly describing, the AI programs it approves to detect a range of medical conditions.

President Biden recently issued an executive order calling for regulations to manage the security and privacy risks of AI in healthcare. The order also directs more funding toward AI research in medicine and establishes a safety program to gather reports of harm or unsafe practices.

The FDA is lagging behind rapidly evolving advances in AI, particularly in overseeing large language models and in reviewing technology that continues to learn as it processes diagnostic scans. The agency’s existing approvals typically cover tools that address one problem at a time, unlike some AI tools in Europe that scan for a range of problems.

And while the FDA’s reach is limited to products approved for sale, large health systems and insurers can develop their own AI tools with little to no government oversight. This gap, combined with the limited information available about approved AI programs, has made doctors wary: they are cautious about incorporating AI into patient care without knowing how the programs were built and tested, and whether they provide meaningful benefits.

To address these concerns, experts suggest building labs where developers can access large amounts of data to build and test AI programs. But adapting machine learning for major hospitals and health networks faces additional challenges, such as software systems that don’t communicate with each other and uncertainty about who should bear the cost.

Efforts are underway to vet AI programs and evaluate their performance. Organizations like Radiology Partners are testing approved AI programs to ensure they accurately detect abnormalities and improve patient care. These evaluations have found programs that range from impressive to flawed at detecting conditions like lung abnormalities and aneurysms.

Despite the challenges and criticism, AI in medicine has the potential to be life-changing. Cases in which AI accurately detected life-threatening conditions, such as a blood clot in the brain, have demonstrated its positive impact on patient care. But further scrutiny, regulation, and data-driven evaluation are needed to ensure that AI is used reliably and effectively in healthcare.
