
By Michelle Mello
Key Points
  • AI tools are already influencing life-and-death medical decisions in U.S. hospitals, often with little vetting or monitoring.
  • A “foundational trust deficit” among patients and physicians is slowing adoption, as most believe healthcare organizations and insurers aren’t using AI responsibly.
  • Policymakers can play a critical role by requiring risk assessment, disclosure, and accountability from AI developers and healthcare providers, helping close the trust gap and speed safe adoption.
This is a lightly edited excerpt of testimony recently provided to the U.S. House Energy and Commerce Committee's Health Subcommittee hearing "Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies."

AI is already influencing life-and-death decisions in U.S. hospitals, often without many safeguards. That creates both a historic opportunity and a serious risk. My work has led me to understand that although poorly designed regulation can hinder innovation, the government has a critical role to play in ensuring that the conditions exist for innovation to translate into greater adoption of healthcare AI. The key problem isn't a lack of innovation; it's that uptake of new innovations is low. A major reason adoption lags behind innovation and interest in AI is what experts have called "a foundational trust deficit."

Policymakers can help ensure that the entities that develop and use AI adequately assess, disclose and mitigate the risks of these tools. You may be surprised to hear that most healthcare organizations and health insurers do little vetting of AI tools before they put them into use, and often no meaningful monitoring of their impact afterwards. 

Furthermore, developers are not required to make any particular disclosures when they pitch their tools to healthcare organizations and health insurers. The law also permits developers to disclaim liability and warranties for their products when they license them to healthcare organizations and insurers. As a result, developers currently have little incentive to reveal weaknesses of their AI tools and face little consequence when things go wrong.

So what does this rule-free space mean? It means that when your mother, spouse or child seeks care at a U.S. hospital today, AI tools may strongly influence how they’re prioritized for attention and services, which tests they receive, what records are kept about their care and many other important decisions — sometimes solely on the basis of a sales pitch and monitored simply by checking that the software is on and working.

I am enthusiastic about healthcare AI. There are so many problems we’ve been trying to solve for decades that it may help with. I’m especially optimistic about the prospects for addressing the tragedy of missed and delayed diagnoses and the menace of physician burnout. But in healthcare, before we do things to patients, we test. We study. We gain physicians’ and nurses’ confidence that a new treatment or technology generates more benefit than harm. Yet most healthcare organizations don’t do that for AI and most developers don’t provide much support to those that do.

Things are moving fast in the AI space and our traditional ways of testing new treatments, such as clinical trials, often aren’t a good fit for the pace of innovation. But we still need some way of assessing and managing risk; we owe that to patients.

I interview patients about their perceptions of healthcare AI tools in my work at Stanford, and they consistently express hope about AI but send us a stern message: We expect you to keep us safe. Research suggests that patients don't think we've risen to the challenge: 60% of U.S. adults say they would be uncomfortable with their physician relying on AI, and only one-third trust healthcare organizations to use AI responsibly.

Physicians are concerned, too. They are trained to be cautious, and savvy enough to realize they're usually the ones on the hook when patients get hurt. This is a huge problem for the AI market because uncertainty and fear about the risks of healthcare AI are currently chilling adoption. Again, the trust deficit is a major reason why AI adoption isn't higher, and why the largest area of AI implementation in healthcare isn't the exciting tools that promise great leaps forward in saving lives; it's tools that perform administrative tasks like taking notes during clinic visits.

AI holds enormous promise for improving healthcare, but its adoption is being slowed by a fundamental trust deficit. By taking practical steps now, Congress can help close that gap, make healthcare providers and the public more excited to receive the products coming out of industry, and ensure that innovation truly reaches the bedside.

Read the full testimony here.

Michelle Mello is a professor of Health Policy at Stanford University.

*The opinions expressed in this column are those of the author and do not necessarily reflect the views of HealthPlatform.News.

