How Anthropic’s safety-first approach won over big business, and how its own engineers use Claude

Anthropic has emerged as a preferred vendor for enterprise Artificial Intelligence by selling a safety-first pitch to corporate buyers. An internal study of 132 engineers shows heavy use of Claude but also highlights limits to delegation and worries about deskilling.

Anthropic has quietly positioned itself as a favorite in the race for enterprise Artificial Intelligence. Surveys and research cited in the article give Anthropic strong shares by model usage and spending, including a Menlo Ventures summer survey that showed Anthropic with “32%” market share by model usage compared to OpenAI’s “25%” and Google’s “20%”, and an HSBC report that attributed a “40%” share by total AI spending to Anthropic versus OpenAI’s “29%” and Google’s “22%”. OpenAI disputes those figures, pointing to its claim of “1 million” paying business customers versus Anthropic’s “330,000”. The piece frames Anthropic’s safety-first stance as a key reason corporate tech buyers have gravitated to its Claude models, while also flagging the company’s continued challenges around fundraising, burn rate, and scaling rapidly without fracturing.

On the engineering front, the article examines how Claude is actually used inside Anthropic. Dario Amodei’s earlier forecast that “90%” of enterprise software code might be written by Artificial Intelligence drew attention and some pushback; he later softened that claim and said he never meant to suggest humans would be absent from deployment decisions. Anthropic’s own study of “132” engineers combined qualitative interviews and usage data and found coders self-reporting that about “60%” of their work tasks touched Claude. More than half said they can fully delegate only between “0%” and “20%” of their work to Claude, because outputs still require human verification. Common uses were debugging, explaining parts of codebases, and, to a lesser extent, implementing new features, while high-level design, data science, and front-end work were less commonly delegated. The company also reported that without Claude roughly “27%” of the work would not have been done at all, and that “8.6%” of Claude Code tasks were small “papercut fixes”.

The human side of the study is mixed. Some engineers say Claude frees them to focus on higher-level design and product questions, and that the tool increases productivity and enables work they otherwise would not attempt. Others worry about deskilling, particularly for junior developers, and miss the intrinsic satisfaction of hand coding; some deliberately practice tasks without Claude to preserve skills. Anthropic’s transparency in publishing internal findings is credited, even as critics inside and outside the industry debate whether safety-focused advocacy slows adoption. The article leaves open whether Anthropic can sustain its lead, raise enough capital, and manage hypergrowth while keeping product quality and corporate trust intact.

Tether Data launches QVAC Fabric LLM for edge-first Artificial Intelligence inference and fine-tuning

Tether Data on December 2, 2025 released QVAC Fabric LLM, an edge-first LLM inference runtime and fine-tuning framework that runs and personalizes models on consumer GPUs, laptops, and smartphones. The open-source platform enables on-device Artificial Intelligence training and inference across iOS, Android, Windows, macOS, and Linux while avoiding cloud dependency and vendor lock-in.

French Artificial Intelligence startup Mistral unveils Mistral 3 open-source models

French Artificial Intelligence startup Mistral unveiled Mistral 3, a next-generation family of open-source models that includes small dense models at 14B, 8B, and 3B parameters and a larger sparse mixture-of-experts model called Mistral Large 3. The company said the release represents its most capable model to date and noted Microsoft backing.

Artificial Intelligence newsroom: Anthropic’s new model redefines coding

Anthropic released Claude Opus 4.5, a new large language model that scored 80% on the SWE-bench Verified benchmark and took the No. 1 spot on the ARC-AGI test. Enterprise Artificial Intelligence adoption is accelerating, with full implementation up 282%, while the U.S. Genesis Mission opens petabytes of lab data to foundation model teams.
