Looking ahead at artificial intelligence and work in 2026

MIT Sloan researchers expect 2026 to bring a narrowing accuracy gap between humans and large language models, with models poised to pull ahead on many enterprise tasks, a push to scale responsible artificial intelligence deployments, and new questions about creativity, safety, and data access in the workplace.

As 2026 begins, MIT Sloan faculty and researchers are watching how artificial intelligence will reshape work, from productivity and governance to creativity and data access. Several experts contributing to the school’s Artificial Intelligence at Work newsletter highlight a mix of technical, organizational, and human concerns that they believe will define the next phase of enterprise adoption. Their perspectives underscore that the technology’s impact will depend as much on implementation choices and governance as on raw model capabilities.

Professor of the practice Rama Ramakrishnan is focused on the gap between human and large language model accuracy in knowledge work. He notes that many enterprise generative artificial intelligence pilots hinge on whether large language model outputs are accurate enough for specific tasks, and he argues that companies should compare these systems not to a theoretical 100% standard but to how accurately humans perform the same work today. If humans achieve, say, 95% accuracy on a task while a large language model reaches only 90%, the relevant question is how quickly that gap closes: frontier models keep improving, while human accuracy is likely to stay roughly flat. Ramakrishnan expects large language model accuracy to surpass human accuracy on many enterprise tasks in 2026, and he is asking which tasks those are, how much business value they represent, and how much employment could be at risk.
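To make that framing concrete, here is a minimal, hypothetical sketch in Python that checks each task's measured model accuracy against a human baseline rather than against 100%. The task names and accuracy figures are illustrative placeholders, not data from Ramakrishnan's research.

```python
# Illustrative sketch: judge model accuracy against the human baseline per task,
# not against a theoretical 100%. All names and figures below are hypothetical.

human_baseline = {"invoice coding": 0.95, "claim triage": 0.92, "email drafting": 0.88}
llm_accuracy = {"invoice coding": 0.90, "claim triage": 0.93, "email drafting": 0.91}

for task, human_acc in human_baseline.items():
    model_acc = llm_accuracy[task]
    verdict = ("meets or beats the human baseline"
               if model_acc >= human_acc else "still trails humans")
    print(f"{task}: human {human_acc:.0%} vs model {model_acc:.0%} -> {verdict}")
```

On Ramakrishnan's logic, the tasks that flip from "still trails humans" to "meets or beats the human baseline" as models improve are the ones to watch for business value and employment effects.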

Principal research scientist Barbara Wixom is studying the guardrails organizations need to deploy artificial intelligence effectively and safely without undermining compliance, values, ethics, or innovation. She and her colleague Nick van der Meulen see traditional governance playbooks failing to keep pace with the technology, so they are tracking emerging practices that help companies adapt their governance so that artificial intelligence solutions can scale and endure.

Professor Roberto Rigobon raises concerns about what happens when humans outsource experimentation and creative thought to artificial intelligence. Because of brain plasticity, he warns, people forget skills they stop exercising, and he argues that human creativity in entrepreneurship, art, and music remains “infinitely better” than what artificial intelligence systems can do, making implementation choices a first-order concern.

Senior lecturer Melissa Webster is watching mechanistic interpretability research that seeks to reveal how neural networks function. She notes that generative artificial intelligence models are “grown” through training rather than explicitly built, and that a better understanding of their inner workings could improve safety, alignment, and decision-making at work.

Senior lecturer George Westerman expects a shift in 2026 from experimentation with generative artificial intelligence and agents to solutions that create real value at scale. He stresses that organizations must start with the question of what problem they are trying to solve, then find the right mix of artificial intelligence, traditional information technology, and human work for each task.

Digital fellow and assistant professor Harang Ju anticipates what he calls the “LLM-ification” of data, in which corporate data sources and private databases such as personal note-taking apps become directly accessible to agents built on large language models rather than being reachable only through human-oriented interfaces, a shift illustrated in the sketch below.

Together, these viewpoints paint a picture of workplaces where artificial intelligence systems are more capable, more deeply embedded in data and workflows, and subject to new governance and safety techniques, even as researchers urge caution about overreliance on automated creativity and the long-term effects on human skills.
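As a loose illustration of the data access pattern Ju describes, the following hypothetical Python sketch exposes a small private notes store to an agent as a structured, queryable tool instead of a human-readable screen. The data, schema, and function names are invented for the example and do not refer to any specific product or framework.

```python
# Hypothetical sketch of "LLM-ification" of a private data source: a notes store
# is exposed to an agent as a structured tool call rather than a human-facing app.
from dataclasses import dataclass


@dataclass
class Note:
    title: str
    body: str


NOTES = [
    Note("Q3 planning", "Draft goals for the data platform team."),
    Note("Vendor call", "Follow up on contract renewal terms."),
]


def search_notes(query: str) -> list[dict]:
    """Tool-style entry point an agent could call directly: returns matching
    notes as structured records instead of rendering them for a human reader."""
    q = query.lower()
    return [{"title": n.title, "body": n.body}
            for n in NOTES
            if q in n.title.lower() or q in n.body.lower()]


if __name__ == "__main__":
    print(search_notes("contract"))
```

The point the sketch highlights is that the agent receives structured records it can reason over directly, whereas a human-oriented interface would render the same notes as pages meant to be read.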

Model autophagy disorder and the risk of self-consuming artificial intelligence models

Glow New Media director Phil Blything warns that as artificial intelligence systems generate more online text, future language models risk training on their own synthetic output and degrading in quality. He draws a parallel with the early, human-driven web, arguing that machine-generated content could undermine the foundations that made resources like Wikipedia possible.

Artificial intelligence and the new great divergence

A White House research paper compares the potential impact of artificial intelligence to the Industrial Revolution and examines whether it could trigger a new great divergence among nations. The report outlines how the Trump administration aims to secure American leadership through accelerated innovation, infrastructure, and deregulation.
