States Should Protect Patients from AI and Federal Regulators

Published October 10, 2024

Insurance firms, drug developers, and health care companies are all using AI, thanks to unprecedented new investment in the technology. Meanwhile, the federal government has begun regulating how AI should “decide” who gets care, who doesn’t, and how that care should be delivered.

States have an interest in being cautious of both. Legislators are moving quickly to catch up to an emerging technology fueled by investments chasing the tantalizing promise of a new era of medical breakthroughs.

AI can help reduce medical errors by flagging outdated practices or catching common mistakes in the delivery of medicines and treatments. A recent Canadian study estimates that as many as 26 percent of unanticipated hospital deaths could be avoided with AI. Billions of dollars could be saved by reducing hospital readmissions and eliminating unnecessary or harmful treatments; private payers alone could see savings of $80 billion to $100 billion over the next five years.

Consumer protection advocates warn the same technology could harm patients by denying them insurance coverage or by suggesting treatments based on unproven or biased AI decisions.

A court case against Humana alleges that the company used faulty AI in determining eligibility. JoAnne Barrows, a plaintiff in the case, was discharged to a rehabilitation facility in November 2021 after being hospitalized to treat injuries she sustained in a fall. She was under a non-weight-bearing order for six weeks due to a leg injury, but her insurance covered only two weeks of care. This is one of many such cases that pit patients against insurers relying on AI to reduce costs through algorithms that limit care.

AI chat services can offer medical advice. But when do these services cross the line into providing unlicensed care? A Florida study found that ChatGPT gave correct medical advice only 60 percent of the time when answering commonly searched urologic questions. A chatbot that answers questions from a teen struggling with an eating disorder could be extremely dangerous, even life-threatening, if the information is incorrect. Last year, the National Eating Disorders Association (NEDA) shut down its chatbot, “Tessa,” after the group’s executive director found the service was directing patients seeking eating disorder information to ways to lose “one to two pounds a week” by limiting calorie intake to 2,000 per day.

Utah has established the nation’s first state AI office, aimed at addressing AI’s disruptive effects across government programs and the people they serve. In 2024, the office is beginning discussions on regulating chatbots that effectively offer mental health services traditionally governed by state licensure.

In May, Colorado enacted an AI law governing “consequential decisions,” including decisions about health insurance coverage and treatment. Colorado’s SB24-205 is the first broad attempt to protect consumers in ways that recognize FDA-approved systems.

In all, more than a dozen states proposed legislation in the 2023-2024 biennium aimed at catching up to AI’s impact on health care delivery. The bills focus on consumer protection through notification requirements and limits on the scope of machine-learning-based decision making. However, states aren’t the only ones interested in putting up guardrails for AI.

The Department of Health and Human Services, through the Office of the National Coordinator for Health Information Technology (ONC), issued guidance in February 2024 that sets a wide-ranging and ambiguous roadmap for charting specific legislation. The “Health Data, Technology, and Interoperability” (HTI-1) final rule defines federal requirements for artificial intelligence (AI) and machine learning (ML)-based predictive software in health care. The rule was promulgated in response to a requirement in the federal 21st Century Cures Act.

State legislators and governors should be vigilant in protecting data privacy, the right to care, and freedom from bias under federal rules governing the treatment of patient data and decisions about care. The rule defines “predictive decision support interventions” as “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis.” State legislators are now hearing from constituents who have been affected by these interventions.
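To make that definition concrete, the sketch below shows what a minimal “predictive decision support intervention” could look like: a model whose weights were derived from training data scores a patient and turns that score into a coverage recommendation. Every detail here is hypothetical and purely illustrative; the feature names, weights, bias, and threshold are invented for this example and are not drawn from any insurer’s or vendor’s actual system.

```python
import math

# Hypothetical, illustrative parameters standing in for relationships a
# model would "derive from training data." Not any real insurer's system.
WEIGHTS = {"age": 0.04, "prior_admissions": 0.6, "mobility_score": -0.5}
BIAS = -2.0
APPROVAL_THRESHOLD = 0.5  # assumed cutoff for this sketch


def predicted_need_score(patient: dict) -> float:
    """Logistic model: turns patient features into a 0-1 score estimating
    the predicted benefit of continued care."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))


def coverage_recommendation(patient: dict) -> str:
    """The 'output that results in a ... recommendation' the rule describes:
    a score crossing a threshold becomes a coverage decision."""
    score = predicted_need_score(patient)
    if score >= APPROVAL_THRESHOLD:
        return "extend rehabilitation coverage"
    return "deny further coverage"


# Hypothetical patient resembling the scenario in the Humana case:
# elderly, recently hospitalized, with limited mobility.
patient = {"age": 78, "prior_admissions": 1, "mobility_score": 4}
print(coverage_recommendation(patient))  # -> "deny further coverage"
```

The point of the sketch is that the federal definition reaches even a model this simple: once trained parameters produce a recommendation that affects care, the software falls within the rule’s scope, and the threshold, not a clinician, effectively decides when coverage ends.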

Recently, a group of 28 private health care companies signed on to a voluntary commitment to “working with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective, and safe (FAVES) AI principles.” Those same companies operate under the threat of potential criminal sanctions if those outcomes are not met. “It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government,” President Biden contends, because “only then can Americans trust AI to advance civil rights, civil liberties, equity, and justice for all.”

State legislators and governors must protect patients and providers by understanding the rules governing AI at the federal level and by demanding transparency and fairness for their constituents. In the absence of concrete and uniform rules for the AI road ahead, states need to lead, not follow, federal action.

For additional information, please refer to the following resources:

NCSL Artificial Intelligence and Health Care: A Primer

The National Conference of State Legislatures offers an updated look at state proposals on health care AI (August 2024)

A Regulation To Promote Responsible AI In Health Care

Health Affairs examines AI opportunities and risks (February 2023)

Artificial Intelligence in the States: Emerging Legislation

The National Conference of State Legislatures reviews 2023-2024 AI state legislation (December 2023)

AI in healthcare: The future of patient care and health management

The Mayo Clinic reviews how AI can help patients

Big Tech’s A.I. Power Grab

Heritage examines the overwhelming attraction to your data (March 2024)

AI Can Hugely Improve Our Lives, but Don’t Assume It Will Automatically Boost the Public Finances

The Cato Institute writes about the folly of scoring savings with new innovations like AI

Nothing in this Research & Commentary is intended to influence the passage of legislation, and it does not necessarily represent the views of The Heartland Institute. For further information on this and other topics, visit The Heartland Institute’s website and PolicyBot, Heartland’s free online research database.

Please don’t hesitate to contact us if we can be of assistance! If you have any questions or comments, contact Heartland’s government relations team at [email protected] or 312/377-4000.