Medical startup uses Artificial Intelligence to run appointments and make diagnoses

A California startup, Akido Labs, deploys a proprietary system called ScopeAI that uses large language models to transcribe and analyze patient visits and produce diagnostic recommendations for doctor approval. The approach aims to expand access but raises questions about regulation, disclosure, and automation bias.

Akido Labs operates clinics in Southern California where patients often meet with medical assistants who collect histories while a proprietary system called ScopeAI transcribes and analyzes the dialogue. The company describes ScopeAI as a workflow built on large language models that generates follow-up questions, compiles likely conditions, and produces a concise clinician note containing the most likely diagnosis, alternative diagnoses, recommended next steps, and a justification for each recommendation. Doctors review and approve or correct ScopeAI's outputs after the visit rather than participating live in the interview.

Technically, ScopeAI uses a set of fine-tuned large language models, primarily versions of Meta's open-access Llama models along with models from Anthropic such as Claude, to perform discrete steps in the visit workflow. Assistants read questions from ScopeAI during appointments, and the system adapts its questions as it processes patient responses. Akido says ScopeAI is used across cardiology, endocrinology, primary care, and a street medicine team serving people experiencing homelessness; clinicians report faster access to medications and the ability to manage more patients because approvals can be done asynchronously.

Company leaders and some clinicians frame the system as a way to increase clinician productivity and expand access for Medicaid beneficiaries, who often face long waits. At the same time, ethicists, computer scientists, and legal scholars cited in the article express concern. Emma Pierson of UC Berkeley highlights expertise gaps between doctors and automated assistants, and Zeke Emanuel and others worry that patients may not understand how much an algorithm influences their care. Issues include inconsistent insurance rules that allow asynchronous approvals under Medicaid but not under other plans, potential exacerbation of disparities, and the risk of automation bias, where clinicians favor system recommendations over their own judgment.

Akido reports testing ScopeAI on historical data and requires that the correct diagnosis appears in the system’s top three recommendations at least 92 percent of the time before deployment; doctor corrections are used to further train models. The company has not published randomized or comparative studies measuring patient outcomes or whether automation bias affects clinician behavior in practice. Observers say stronger evaluations and regulatory clarity would be needed to assess whether the model safely improves access to care.
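The deployment gate Akido describes is a top-3 accuracy threshold. A minimal sketch of what such a check might look like, assuming a simple list of (correct diagnosis, ranked predictions) pairs; the 92 percent threshold comes from the article, while the function names and data shapes here are hypothetical:

```python
def top3_accuracy(cases):
    """Fraction of cases where the correct diagnosis appears in the
    system's top three ranked recommendations.

    cases: list of (correct_diagnosis, ranked_predictions) pairs,
    where ranked_predictions is ordered from most to least likely.
    """
    hits = sum(1 for correct, ranked in cases if correct in ranked[:3])
    return hits / len(cases)


def passes_deployment_gate(cases, threshold=0.92):
    """True if top-3 accuracy on historical data meets the threshold."""
    return top3_accuracy(cases) >= threshold


# Illustrative (fabricated) historical cases:
cases = [
    ("hypertension", ["hypertension", "anxiety", "hyperthyroidism"]),   # hit
    ("type 2 diabetes", ["prediabetes", "type 2 diabetes", "obesity"]), # hit
    ("asthma", ["COPD", "bronchitis", "GERD"]),                         # miss
]

print(top3_accuracy(cases))          # 2 of 3 cases hit
print(passes_deployment_gate(cases)) # below the 92 percent bar
```

Note that a gate like this measures only retrospective ranking quality on historical labels; it says nothing about automation bias or patient outcomes, which is the gap the cited observers point to.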

xAI staff exposed to child abuse content during Grok training

Workers helping train xAI’s Grok say permissive policies have exposed them to artificial intelligence generated child sexual abuse content, spotlighting gaps in safeguards and reporting. Internal documents and staff accounts describe mounting psychological harm and unanswered questions about corporate responsibility.

Nvidia and Intel partner on Artificial Intelligence data-center and PC chips

Nvidia will invest in Intel stock (amount not stated), and the two companies will co-develop multiple generations of Artificial Intelligence data-center and PC chips, combining Nvidia's accelerated computing stack and NVLink with Intel's x86 CPUs. Specific investment and per-share terms are not stated.
