Microsoft’s artificial intelligence goal is humanist superintelligence in service of people and humanity

Mustafa Suleyman, Microsoft's head of Artificial Intelligence, says the company is pursuing Humanist Superintelligence: advanced systems designed to work in service of people and humanity. The plan foregrounds domain-specific, calibrated systems while acknowledging deep uncertainty about how to guarantee safety.

Microsoft’s head of Artificial Intelligence, Mustafa Suleyman, outlined a directional shift in the company’s research toward what he calls Humanist Superintelligence (HSI). Suleyman describes HSI as “incredibly advanced Artificial Intelligence capabilities that always work for, in service of, people and humanity more generally,” and frames the approach as problem-oriented and domain-specific rather than as an unbounded, highly autonomous entity. The post emphasizes calibration, contextualization, and limits as defining features of the systems Microsoft intends to develop.

To pursue those goals, Microsoft has formed a dedicated group, the Microsoft Artificial Intelligence Superintelligence Team. According to the announcement, the effort will marshal massive resources, combining human intelligence, hardware, software, and other forms of intelligence to build steerable systems. The team intends to prioritize ways to keep the most advanced forms of Artificial Intelligence under human control while accelerating work on pressing global challenges. The description stresses practical, bounded applications over general autonomy.

The post also highlights a central unresolved problem: how to guarantee the safety of superintelligent systems. Suleyman writes that “no Artificial Intelligence developer, no safety researcher, no policy expert, no person I’ve encountered has a reassuring answer to this question. How do we guarantee it’s safe?” That admission frames safety and control as open technical and policy questions that the team must confront. Microsoft positions its Humanist Superintelligence agenda as an effort to explore those questions while steering development toward systems that serve people and broader societal goals.

Training without consent is risky business: what business owners need to know about the proposed Artificial Intelligence Accountability and Data Protection Act

The proposed Artificial Intelligence Accountability and Data Protection Act would create a federal private right of action for the use of individuals’ personal or copyrighted data without express consent, exposing companies that train models without permission to new liability. The bill would broaden covered works beyond registered copyrights and allow substantial remedies, including compensatory, punitive, and injunctive relief.

How to create your own Artificial Intelligence performance coach

Lucas Werthein, co-founder of Cactus, describes building a personal Artificial Intelligence health coach that synthesizes MRIs, blood tests, wearables, and journals to optimize training, recovery, and injury management. Claire Vo hosts the 30- to 45-minute episode, which walks through practical steps for integrating multiple data sources and setting safety guardrails, as sketched below.
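The episode is a conversation rather than a spec, but the pattern it describes (merge heterogeneous health data into one model context, gated by explicit safety rules) can be sketched in a few lines. The sketch below is illustrative only: the record types, field names, red-flag terms, and the within_guardrails check are assumptions for demonstration, not anything taken from the episode.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record types; the fields are illustrative assumptions,
# not a schema from the episode.
@dataclass
class WearableDay:
    day: date
    resting_hr: int       # beats per minute
    sleep_hours: float
    training_load: float  # arbitrary load units

@dataclass
class BloodMarker:
    name: str
    value: float
    unit: str
    reference_range: tuple[float, float]

# Assumed examples of symptoms that should bypass the model entirely.
RED_FLAG_TERMS = ("chest pain", "dizziness", "numbness")

def within_guardrails(journal_entry: str) -> bool:
    """Crude safety gate: entries mentioning red-flag symptoms are
    routed to a human professional instead of the model."""
    entry = journal_entry.lower()
    return not any(term in entry for term in RED_FLAG_TERMS)

def build_context(wearables: list[WearableDay],
                  markers: list[BloodMarker],
                  journal: str) -> str:
    """Flatten the heterogeneous sources into one prompt context."""
    lines = ["Recent wearable data:"]
    for w in wearables:
        lines.append(f"  {w.day}: resting HR {w.resting_hr} bpm, "
                     f"sleep {w.sleep_hours} h, load {w.training_load}")
    lines.append("Blood markers:")
    for m in markers:
        lo, hi = m.reference_range
        lines.append(f"  {m.name}: {m.value} {m.unit} (ref {lo}-{hi})")
    lines.append(f"Journal: {journal}")
    lines.append("Suggest tomorrow's training intensity and recovery focus.")
    return "\n".join(lines)

if __name__ == "__main__":
    wearables = [WearableDay(date(2025, 11, 10), 52, 7.5, 310.0)]
    markers = [BloodMarker("ferritin", 35.0, "ng/mL", (30.0, 300.0))]
    journal = "Legs felt heavy on the second interval set."
    if within_guardrails(journal):
        # The assembled context would be sent to whatever model you use.
        print(build_context(wearables, markers, journal))
    else:
        print("Red-flag symptom detected; consult a professional, not a model.")
```

The design point is the gate, not its detection quality: anything that reads as a medical symptom is escalated to a person before the model ever sees it, which mirrors the safety-guardrail framing in the summary above.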

What’s next for AlphaFold

Five years after AlphaFold 2 remade protein structure prediction, Google DeepMind co-lead John Jumper reflects on practical uses, limits, and plans to combine structure models with large language models.
