Global artificial intelligence race accelerates across infrastructure, regulation, and applications

Tech giants, governments, and startups ramped up artificial intelligence investment, infrastructure buildouts, and regulatory positioning, while new models, agent platforms, and security concerns intensified competitive and ethical fault lines.

Natural language processing and broader artificial intelligence development surged across finance, infrastructure, and geopolitics in the week ending Friday 13th February 2026. Alphabet is raising US$185B through a seven-part bond offering to fund artificial intelligence infrastructure and capital expenditures through 2026, and separately launched a US$15B multi-tranche bond that includes a rare 100-year component, underscoring the unprecedented capital intensity of the current cycle. Anthropic is seeking to raise US$20B at a US$350B valuation, while another report cites a US$30B round valuing the startup at US$380B, alongside a US$200m investment from Blackstone that takes its stake to roughly US$1B, illustrating the fierce investor appetite for foundation model leaders. Databricks has raised US$5B at a US$134B valuation, Cerebras Systems secured US$1B at a US$23B valuation, ElevenLabs raised US$500m at a US$11B valuation, and Runway closed a US$315m Series E at a US$5.3B valuation as capital floods into artificial intelligence infrastructure, chips, and generative media.

Sovereign artificial intelligence strategies advanced as governments sought digital autonomy and regional competitiveness. A Gartner report predicts global sovereign cloud spend will surge 35.6% to US$80B in 2026, while European sovereign cloud IaaS is forecast to grow from US$6.7B in 2025 to US$23.1B in 2027, reflecting intensified focus on data residency and geopolitical resilience. The EU inaugurated NanoIC, a €700m semiconductor pilot line at imec in Leuven, and plans a sovereign military data-sharing platform by 2030, even as member states acknowledge that fully removing US technology is unrealistic. Mistral AI committed €1.2B to Swedish data centres, Fractile pledged £100m for UK chip facilities, and the Pakistan Digital Authority partnered with DFINITY on a sovereign cloud and artificial intelligence platform, while Singapore announced a National AI Council chaired by Prime Minister Wong and a ‘Champions of AI’ programme with tax incentives and grants. Parallel efforts in India, Saudi Arabia, Oman, Morocco, Canada, the UAE, and France’s Suite Numérique initiative highlighted a multipolar race to shape national artificial intelligence ecosystems and reduce reliance on US hyperscalers.

Physical infrastructure spending reached extraordinary levels, provoking environmental and financial scrutiny. A Dell’Oro Group report projects worldwide data center capital expenditures will reach US$1.7T by 2030, while Microsoft is planning a record US$650B artificial intelligence infrastructure spend in 2026, against the backdrop of cloud providers’ reported US$11T backlog of contracted work. Meta is building a US$10B data center in Lebanon, Indiana and has expanded its Louisiana artificial intelligence site to nearly 3,700 acres, Australia’s businesses have doubled planned data centre spending to US$52B, and Firmus secured a US$10B debt facility plus a separate US$100m equity investment to expand artificial intelligence factories across Australia. Yet resistance is rising: Edinburgh councillors rejected a proposed ‘green’ data centre, New York lawmakers are weighing a three-year moratorium on data center development, and a UC Riverside report found California data centers used 50 billion liters of water in 2023 with projections rising to 116 billion liters by 2028, fuelling legislative pushback over power grids, water use, and local impacts.

Model innovation and platform features continued at high velocity, with an emphasis on reasoning, agents, and cost efficiency. Google’s Gemini 3 Deep Think update is reported to achieve breakthrough reasoning capabilities across scientific, coding, and logical tasks, while Anthropic’s Claude Opus 4.6 tops the Artificial Analysis Intelligence Index and is credited with discovering over 500 high-severity security flaws in open-source libraries by reasoning like a human researcher. DeepSeek expanded its context window from 128,000 to over 1 million tokens, Z.ai launched GLM-5 with 744B parameters and best-in-class performance claims, and OpenAI introduced GPT-5.3-Codex-Spark on Cerebras hardware alongside a new chat model and a potential US$100B funding round. Across the ecosystem, companies rolled out orchestration layers such as Eccentex’s artificial intelligence platform for case work, MinIO’s AIStor Tables, Mastra’s observational memory for long-running agents, Nvidia’s dynamic memory sparsification promising up to eight times memory cost reduction, and MIT-ETH Zurich self-distillation techniques that let models learn new skills without forgetting.

Agentic artificial intelligence moved from concept to production across sales, finance, security, and operations. Genesys launched an agentic virtual agent built on large action models to autonomously resolve enterprise customer requests, while Google proposed an Agent-to-Agent protocol allowing artificial intelligence agents to collaborate across applications. Goldman Sachs is co-developing agents with Anthropic for accounting and trade reconciliation, and platforms such as Mizo, Phenom Cloud’s Lexy, Zania, and Qlik’s agentic analytics promise autonomous IT support, people operations, third-party risk assessments, and business intelligence. In parallel, OpenAI deployed a custom ChatGPT to 3 million US military personnel via GenAI.mil, while dozens of sector-specific tools emerged, from Uber’s Cart Assistant and Redfin’s property search app in ChatGPT to Intuit’s end-to-end ERP for construction and OttoPilot’s veterinary insights, indicating rapid diffusion of natural language interfaces into everyday workflows.

Security and safety concerns intensified as red teams and researchers exposed vulnerabilities in leading systems. AIM Intelligence breached Anthropic’s Claude Opus 4.6 in 30 minutes, its system card documented prompt injection attack success rates ranging from 0% to 78.6% across environments, and over 135,000 internet-facing OpenClaw instances were reported exposed. Google Translate powered by Gemini was shown vulnerable to prompt injections, Google logged widespread ‘distillation attacks’ on Gemini, and OpenAI warned lawmakers that China’s DeepSeek is using sophisticated methods to extract and copy results from US models. Microsoft research showed models can be stripped of their alignment guardrails with minimal prompting, while a Microsoft survey found that 80% of Fortune 500 companies are deploying artificial intelligence agents but only 47% have security controls in place, prompting new scanners for hidden backdoors in large language models and tools like Astrix Security’s OpenClaw Scanner and NanoClaw’s hardened assistant designs.

Regulatory and political pressure mounted alongside commercial competition. The UK is moving from voluntary guidelines to mandatory obligations, with legislation requiring frontier artificial intelligence developers to register, conduct safety tests, and report incidents, even as its regulators complain that limited resources, not legal powers, are impeding effective oversight. The EU accused Meta of anti-competitive behaviour for blocking rival chatbots from WhatsApp, pursued Google over AI Overviews and publishers’ content, and continued wider tech enforcement that has drawn trade war threats from the Trump administration. New York is weighing bills to label artificial intelligence-generated news and pause data center projects, Connecticut lawmakers are targeting minors’ data and artificial intelligence risks, and OpenAI faces allegations of violating California’s artificial intelligence safety law with GPT-5.3-Codex. Inside the industry, Anthropic donated US$20m to a super PAC aimed at countering OpenAI’s influence, while Andreessen Horowitz has become a significant policy voice in Washington advocating minimal regulation, signalling an increasingly politicized landscape around foundation models.

Labour markets and social impacts remained contested as companies restructured around automation. Salesforce cut nearly 1,000 jobs, Heineken plans to cut up to 7% of its workforce with artificial intelligence cited as a factor, Washington State recorded over 19,500 tech layoffs early in 2026, and a Challenger, Gray & Christmas survey found US employers cut over 108,000 jobs in January, the worst start to a year since 2009. OthersideAI’s CEO warned artificial intelligence could replace up to half of entry-level white-collar jobs within five years, while Khan Academy’s Salman Khan suggested even a 10% reduction in white-collar roles could trigger a depression-like crisis. By contrast, leaders at Anthropic, Superhuman, IBM, and UJET argued that human skills, expanded entry-level hiring, and redesigned roles mean artificial intelligence will augment rather than replace workers, although a UC Berkeley study showed artificial intelligence tools are currently causing employees to work more hours, not fewer. Meanwhile, UK authorities are exploring chatbot work coaches even as fears grow about job losses and diminished human connection in healthcare and mental health contexts.
