Magma: Foundation Model for Multimodal AI Agents

Explore how Magma enables AI systems to navigate both digital and physical tasks, representing a significant leap for Artificial Intelligence.

Microsoft Research has unveiled a new foundation model called Magma, designed to enable artificial intelligence agents to operate seamlessly across digital and physical environments. Magma represents a leap forward through vision-language-action (VLA) modeling, allowing AI systems to understand and interact with user interfaces and physical objects alike. With the ability to suggest actions such as button clicks and to orchestrate robotic tasks, Magma positions itself as a significant advancement in AI, potentially transforming how AI assistants function in diverse settings.

The foundation of Magma is a large and diverse pretraining dataset, setting it apart from previous, task-specific models. Magma's innovation lies in its capacity to generalize across varied environments, outperforming its predecessors on tasks such as user interface navigation and robotic manipulation. One of its standout features is the use of Set-of-Mark (SoM) and Trace-of-Mark (ToM) annotations, which give the model a structured understanding of environments and tasks, enhancing its ability to plan and execute actions.
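To make the annotation ideas concrete, here is a minimal sketch of what SoM and ToM data might look like: SoM overlays numbered marks on actionable elements so the model can refer to them by index, while ToM records a mark's future positions over time. All names, fields, and the `pick_mark` helper below are illustrative assumptions, not Magma's actual API.

```python
# Illustrative sketch only: hypothetical data structures for Set-of-Mark (SoM)
# and Trace-of-Mark (ToM) annotations. Field names are assumptions.
from dataclasses import dataclass, field


@dataclass
class Mark:
    """A numbered marker overlaid on an actionable element (SoM)."""
    index: int                       # label the model can refer to, e.g. "click mark 2"
    bbox: tuple                      # (x, y, w, h) of the UI element or object


@dataclass
class Trace:
    """A Trace-of-Mark (ToM): one mark's positions over future frames."""
    mark_index: int
    path: list = field(default_factory=list)  # [(x, y), ...] per frame


def pick_mark(marks, target_index):
    """Resolve a predicted mark index back to a concrete screen region."""
    for m in marks:
        if m.index == target_index:
            return m.bbox
    raise ValueError(f"no mark with index {target_index}")


# Example: three clickable elements annotated as marks; the model predicts
# "mark 2", which is resolved to a bounding box for the click action.
marks = [
    Mark(1, (10, 10, 40, 20)),
    Mark(2, (60, 10, 40, 20)),
    Mark(3, (110, 10, 40, 20)),
]
print(pick_mark(marks, 2))  # (60, 10, 40, 20)
```

The point of the indirection is that the model only has to emit a small discrete label ("mark 2") rather than raw pixel coordinates, which simplifies action prediction across very different environments.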

Magma’s introduction is part of a larger strategy by Microsoft Research to enhance the capabilities of agentic AI systems, with potential applications in both developer tools and everyday AI assistants. By enabling AI to reason, explore, and take actions effectively, Magma could pave the way for more capable and robust AI systems in the future. It is currently available for researchers and developers on Azure AI Foundry Labs and Hugging Face, inviting experimentation with this cutting-edge technology.
