A cautious eye is needed in AI healthcare delivery

By Vaile Wright

Key Points

- AI can expand access to behavioral health care through digital therapeutics, chatbots, and administrative support tools, reducing clinician burnout and increasing individualized treatment options.
- Without diverse data and oversight, AI can perpetuate health inequities by misjudging risk scores for certain populations; unregulated consumer chatbots also pose serious safety risks.
- The APA urges stronger federal regulation (FTC, CPSC) to investigate deceptive or unsafe AI health tools, while advocating for a "human-in-the-loop" model to ensure AI augments rather than replaces clinicians.

This is a lightly edited excerpt of testimony recently provided to the U.S. House's Energy and Commerce Health subcommittee hearing "Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies."

The American Psychological Association (APA) recognizes the immense potential of AI to revolutionize health care. For consumers and patients alike, AI-powered tools can enhance diagnostic precision, enable more individualized and accessible treatment and preventive care, and improve patients' engagement in their own well-being. In behavioral health, where we face a significant workforce shortage, AI can enable the scaling of evidence-based interventions to reach a much broader segment of the population. Therapeutic chatbots and digital therapeutics can deliver care to those who might otherwise receive none, but these tools are most effective and safest when used to augment, not replace, the care provided by a qualified professional, ensuring a human remains in the loop.

For providers, AI can alleviate administrative burdens that lead to burnout, support clinical decision-making, and free up valuable time for direct patient interaction. For taxpayers and the health system at large, these advancements promise not only to improve population health outcomes but also to reduce overall health care costs.

AI is not a future concept; it is already integrated into our health care system. In behavioral health, its applications are varied. One of the most promising areas is the use of AI-powered "scribes" and support tools to reduce the significant administrative burden on clinicians by summarizing therapy sessions and automating progress notes. This is a critical step in combating the high rates of provider burnout, and it allows clinicians to focus on direct patient care.

We are also seeing the rise of digital therapeutics: software-based interventions that deliver evidence-based, clinically validated psychological treatments to patients under the oversight and management of a licensed provider. These tools, which are currently regulated by the FDA as medical devices, can make medical claims to treat specific conditions such as insomnia, ADHD, and substance use disorders, and they require a prescription or order from a licensed provider. They represent a responsible pathway for innovation, and one that might be replicated as more AI-based tools come online.

For patients, the most significant promise lies in increased access to care. However, public trust remains fragile. According to the Pew Research Center, 60% of Americans report being uncomfortable with AI being used in their own health care. This discomfort is not unfounded. For example, one widely used algorithm measured a patient's level of illness based on their total health care costs. Since some patient populations have historically spent less on health care due to systemic factors, the algorithm unfairly assigned them lower risk scores, even when they had comparable or more severe and complex health conditions, ultimately exacerbating health inequities. This problem can affect patients based on gender, age, race, ethnicity, socioeconomic status, or geographic location. Without representative data and diverse programming teams, AI risks amplifying, rather than reducing, health disparities.

Furthermore, the direct-to-consumer market is flooded with unregulated chatbots that make deceptive claims. Certain entertainment-based chatbots, such as one on the platform Character.ai, have been used for "therapy" or "companionship." This particular chatbot has engaged in over 4.9 million chats while presenting itself as a "psychologist." These unregulated products can provide dangerous advice. In one instance documented in a lawsuit, a Character.ai chatbot appeared to validate a user's thoughts of violence against their parents, stating, "'child kills parents after a decade of physical and emotional abuse' stuff like this makes me understand a little bit why it happens." This is unacceptable and dangerous, which is why the APA has formally requested an investigation by the Federal Trade Commission and has urged the Consumer Product Safety Commission to investigate these potentially harmful products.

To harness the benefits of AI while mitigating its risks, we must move forward not with blind optimism, but with cautious, informed, and ethical stewardship. This requires a fundamental commitment to a human-centered approach.

Read the full testimony here.

Vaile Wright is the Senior Director of Health Care Innovation at the American Psychological Association.

*The opinions expressed in this column are those of the author and do not necessarily reflect the views of HealthPlatform.News.