Health insurers face growing scrutiny over artificial intelligence use

Regulators, clinicians, and patients are increasingly focused on how health insurers deploy artificial intelligence, as concerns grow about opaque decision making and potential bias in coverage determinations.

Health insurers are accelerating the use of artificial intelligence tools to make decisions about coverage, utilization management, and patient outreach, raising questions about who is responsible for overseeing those systems. As algorithms play a larger role in determining which treatments get approved and how benefits are administered, regulators, clinicians, and patient advocates are pressing for clearer guardrails to ensure that automated decisions do not unfairly restrict access to care.

Scrutiny is intensifying around the transparency of these models, since many are developed by third-party vendors whose methods and training data are kept proprietary. That opacity makes it difficult for patients and providers to understand why a claim is denied or a service is delayed, and it complicates efforts by state and federal regulators to assess whether artificial intelligence-driven tools comply with existing insurance and civil rights laws. Questions are also mounting about how to audit and validate these systems at scale, and about which entities should have the authority and technical capacity to do so.

Debate over accountability extends beyond formal regulators to accreditation bodies, professional societies, and independent researchers, which are beginning to probe the performance and fairness of insurance-focused algorithms. Health systems and clinicians are seeking more visibility into how payer models operate, especially when automated decisions conflict with clinical judgment. Patient groups, meanwhile, are calling for mechanisms to appeal artificial intelligence-powered determinations and for clearer disclosures whenever automated tools influence coverage decisions, underscoring the need for a more coordinated framework to monitor how health insurers are using artificial intelligence.

Impact Score: 70

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative artificial intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh artificial intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential artificial intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on artificial intelligence oversight.
