Anthropic has quietly positioned itself as a favorite in the race for enterprise artificial intelligence. Surveys and research cited in the article give Anthropic strong shares of model usage and spending: a Menlo Ventures summer survey put Anthropic at 32% market share by model usage, ahead of OpenAI's 25% and Google's 20%, and an HSBC report attributed 40% of total AI spending to Anthropic versus 29% for OpenAI and 22% for Google. OpenAI disputes those figures, citing its claim of 1 million paying business customers versus Anthropic's 330,000. The piece frames Anthropic's safety-first stance as a key reason corporate tech buyers have gravitated to its Claude models, while also flagging the company's continuing challenges around fundraising, burn rate, and scaling rapidly without fracturing.
On the engineering front, the article examines how Claude is actually used inside Anthropic. Dario Amodei's earlier forecast that 90% of enterprise software code might be written by artificial intelligence drew attention and some pushback; he later softened the claim, saying he never meant that humans would be absent from deployment decisions. Anthropic's own study of 132 engineers combined qualitative interviews with usage data and found that engineers self-reported Claude touching about 60% of their work tasks. More than half said they can fully delegate between none and 20% of their work to Claude, because outputs still require human verification. Common uses were debugging, explaining parts of codebases, and, to a lesser extent, implementing new features; high-level design, data science, and front-end work were delegated less often. The company also reported that roughly 27% of the work would not have been done without Claude, and that 8.6% of Claude Code tasks were small "papercut" fixes.
The human side of the study is mixed. Some engineers say Claude frees them to focus on higher-level design and product questions, increases their productivity, and enables work they otherwise would not attempt. Others worry about deskilling, particularly among junior developers, and miss the intrinsic satisfaction of hand-coding; some deliberately practice tasks without Claude to preserve their skills. Anthropic is credited for its transparency in publishing internal findings, even as critics inside and outside the industry debate whether its safety-focused advocacy slows adoption. The article leaves open whether Anthropic can sustain its lead, raise enough capital, and manage hypergrowth while keeping product quality and corporate trust intact.
