Akido Labs operates clinics in Southern California where patients often meet with medical assistants who collect histories while a proprietary system called ScopeAI transcribes and analyzes the dialogue. The company describes ScopeAI as a workflow built on large language models that generates follow-up questions, compiles likely conditions, and produces a concise clinician note containing the most likely diagnosis, alternative diagnoses, recommended next steps, and a justification for each recommendation. Doctors review and approve or correct ScopeAI's outputs after the visit rather than participating live in the interview.
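To make the shape of that output concrete, here is a minimal sketch in Python of the kind of structured note the description implies. Every class and field name is a hypothetical illustration, not Akido's actual schema.

from dataclasses import dataclass, field

@dataclass
class RecommendedStep:
    action: str         # e.g., a suggested test or prescription
    justification: str  # the article says each recommendation carries one

@dataclass
class ClinicianNote:
    # Structured summary a reviewing doctor would approve or correct.
    most_likely_diagnosis: str
    alternative_diagnoses: list[str] = field(default_factory=list)
    next_steps: list[RecommendedStep] = field(default_factory=list)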
Technically, ScopeAI uses a set of fine-tuned large language models, chiefly versions of Meta's open-access Llama models along with Anthropic models such as Claude, each performing a discrete step in the visit workflow. Assistants read ScopeAI's questions aloud during appointments, and the system adapts its line of questioning as it processes patient responses. Akido says ScopeAI is used across cardiology, endocrinology, primary care, and a street medicine team serving people experiencing homelessness; clinicians report faster access to medications and the capacity to manage more patients because approvals can be handled asynchronously.
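The adaptive questioning described above can be sketched as a simple loop: each patient answer is appended to a running transcript, and a model call proposes the next question. This is an illustrative assumption about the control flow, not Akido's pipeline; ask_llm stands in for a call to one of the fine-tuned models, and get_answer for the medical assistant relaying the patient's reply.

from typing import Callable

def ask_llm(prompt: str) -> str:
    # Stand-in for a call to a fine-tuned Llama or Claude model.
    raise NotImplementedError("replace with an actual model call")

def run_intake(chief_complaint: str,
               get_answer: Callable[[str], str],
               max_turns: int = 10) -> list[tuple[str, str]]:
    transcript: list[tuple[str, str]] = [("patient", chief_complaint)]
    for _ in range(max_turns):
        history = "\n".join(f"{role}: {text}" for role, text in transcript)
        question = ask_llm(
            "Given this visit so far, propose the single most informative "
            f"follow-up question, or reply DONE.\n{history}"
        )
        if question.strip() == "DONE":
            break
        transcript.append(("assistant", question))   # read aloud by the assistant
        transcript.append(("patient", get_answer(question)))
    return transcript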
Company leaders and some clinicians frame the system as a way to increase clinician productivity and expand access for Medicaid beneficiaries, who often face long waits. At the same time, ethicists, computer scientists, and legal scholars cited in the article voice concerns. Emma Pierson of UC Berkeley highlights the expertise gap between doctors and automated assistants, and Zeke Emanuel and others worry that patients may not understand how much an algorithm influences their care. The issues include inconsistent insurance rules that permit asynchronous approvals under Medicaid but not under other plans, the potential to exacerbate disparities, and the risk of automation bias, in which clinicians defer to system recommendations.
Akido reports testing ScopeAI on historical data and requires that the correct diagnosis appear among the system's top three recommendations at least 92 percent of the time before deployment; doctors' corrections are fed back to further train the models. The company has not published randomized or comparative studies measuring patient outcomes or whether automation bias affects clinician behavior in practice. Observers say stronger evaluations and regulatory clarity are needed to assess whether the system safely improves access to care.
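For clarity, the pre-deployment threshold described above is a top-3 accuracy check: across held-out historical visits, the true diagnosis must appear among the model's first three suggestions at least 92 percent of the time. The sketch below shows that computation on made-up cases; the data and names are illustrative, not Akido's evaluation code.

def top3_accuracy(cases: list[tuple[str, list[str]]]) -> float:
    # Each case pairs the confirmed diagnosis with the model's ranked suggestions.
    hits = sum(1 for truth, ranked in cases if truth in ranked[:3])
    return hits / len(cases)

cases = [
    ("type 2 diabetes", ["type 2 diabetes", "prediabetes", "hypothyroidism"]),
    ("angina", ["GERD", "angina", "costochondritis"]),
    ("migraine", ["tension headache", "sinusitis", "cluster headache"]),
]
assert abs(top3_accuracy(cases) - 2 / 3) < 1e-9  # 67%: would fail a 92% bar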