Microsoft’s artificial intelligence goal is humanist superintelligence in service of people and humanity

Mustafa Suleyman, Microsoft's head of Artificial Intelligence, says the company is pursuing Humanist Superintelligence: advanced systems designed to work in service of people and humanity. The plan foregrounds domain-specific, calibrated systems while acknowledging deep uncertainty about how to guarantee safety.

Microsoft’s head of Artificial Intelligence, Mustafa Suleyman, outlined a directional shift in the company’s research toward what he calls Humanist Superintelligence (HSI). Suleyman describes HSI as “incredibly advanced Artificial Intelligence capabilities that always work for, in service of, people and humanity more generally,” and frames the approach as problem-oriented and domain-specific rather than as an unbounded, highly autonomous entity. The post emphasizes calibration, contextualization, and limits as defining features of the systems Microsoft intends to develop.

To pursue those goals, Microsoft has formed a dedicated group, the Microsoft Artificial Intelligence Superintelligence team. According to the announcement, the effort will marshal massive resources, combining human expertise, hardware, and software to build steerable systems. The team intends to prioritize ways to keep the most advanced forms of Artificial Intelligence under human control while accelerating work on pressing global challenges. The description stresses practical, bounded applications over general autonomy.

The post also highlights a central unresolved problem: how to guarantee the safety of superintelligent systems. Suleyman writes that “no Artificial Intelligence developer, no safety researcher, no policy expert, no person I’ve encountered has a reassuring answer to this question. How do we guarantee it’s safe?” That admission frames safety and control as open technical and policy questions the team must confront. Microsoft positions its Humanist Superintelligence agenda as an effort to explore those questions while steering development toward systems that serve people and broader societal goals.


