In the past decade, health insurance companies have increasingly implemented artificial intelligence algorithms to determine coverage for treatments and services recommended by physicians. While hospitals and clinicians use artificial intelligence to assist in diagnosis and care, insurers leverage these systems largely for administrative functions such as prior authorization—deciding whether a specific treatment is "medically necessary" and therefore eligible for coverage. Artificial intelligence also plays a role in limiting or defining the extent of care, such as stipulating the number of hospital days approved after surgery.
When an insurance company denies coverage based on an algorithmic assessment, patients face limited and often onerous choices: appealing the decision (a complex process that few pursue), accepting alternative care that is covered, or paying out of pocket, which is prohibitively expensive for most. The opacity of these algorithms compounds the dilemma, as insurers tightly guard the criteria and data inputs as trade secrets. Critics argue that this lack of transparency allows insurers to use artificial intelligence in ways that can delay or deny medically needed care, serving cost-cutting goals and potentially incentivizing delays that can be life-threatening to seriously ill patients.
There are growing concerns about discrimination against marginalized groups, including those with chronic illnesses, racial and ethnic minorities, and LGBTQ individuals, who research shows are more likely to have coverage claims denied. Despite mounting evidence of harm, insurance algorithms operate in a largely unregulated space. Federal oversight is limited—insurance artificial intelligence tools are not subject to Food and Drug Administration review, and new federal rules mainly affect public programs like Medicare. While a handful of states have introduced or passed legislation to require some level of oversight—such as mandating physician supervision in California—most laws still leave the definition and enforcement of "medical necessity" in insurers' hands.
Health law experts advocate for federal regulation, suggesting the Food and Drug Administration is positioned to set national standards and conduct independent evaluations of artificial intelligence tools before deployment. Yet statutory changes may be required for the agency to broaden its regulatory scope over insurance algorithms. Until then, state and federal agencies can push for independent testing of these systems for safety and fairness. The movement toward regulating health insurance uses of artificial intelligence is underway, but substantial gaps remain as patients' access to care and well-being hang in the balance.