Looking ahead at artificial intelligence and work in 2026

MIT Sloan researchers expect 2026 to bring a widening performance gap between humans and large language models, a push to scale responsible artificial intelligence deployments, and new questions about creativity, safety, and data access in the workplace.

As 2026 begins, MIT Sloan faculty and researchers are watching how artificial intelligence will reshape work, from productivity and governance to creativity and data access. Several experts contributing to the school’s Artificial Intelligence at Work newsletter highlight a mix of technical, organizational, and human concerns that they believe will define the next phase of enterprise adoption. Their perspectives underscore that the technology’s impact will depend as much on implementation choices and governance as on raw model capabilities.

Professor of the practice Rama Ramakrishnan is focused on what he calls the human-versus-large language model accuracy gap in knowledge work. Many enterprise generative artificial intelligence pilots hinge on whether large language model outputs are accurate enough for a given task, and he argues that companies should benchmark these systems not against a theoretical 100% standard but against how accurately humans perform the same work today. If humans achieve, say, 95% accuracy on a task while a large language model manages “only” 90%, the gap matters less than the trajectory: frontier models keep improving, while human accuracy is likely to remain flat. Ramakrishnan expects large language model accuracy to surpass human accuracy on many enterprise tasks in 2026, and he is asking which tasks those are, how much business value they represent, and how much employment could be at risk.

Principal research scientist Barbara Wixom is studying the guardrails organizations need to deploy artificial intelligence effectively and safely without undermining compliance, values, ethics, or innovation. She and her colleague Nick van der Meulen find that traditional governance playbooks are failing to keep pace with the rate of change, so they are tracking emerging practices that help companies adapt governance so artificial intelligence solutions can scale and endure.

Professor Roberto Rigobon raises concerns about what happens when humans outsource experimentation and creative thought to artificial intelligence. Because of brain plasticity, he warns, people forget skills they stop exercising, and he argues that human creativity in entrepreneurship, art, and music remains “infinitely better” than what artificial intelligence can produce, making implementation choices a first-order concern.

Senior lecturer Melissa Webster is watching mechanistic interpretability research, which seeks to reveal how neural networks actually function. She notes that generative artificial intelligence models are “grown” through training rather than explicitly built, and that a better understanding of their inner workings could improve safety, alignment, and decision-making at work.

Senior lecturer George Westerman expects 2026 to bring a shift from experimentation with generative artificial intelligence and agents toward solutions that create real value at scale. He stresses that organizations must start by asking what problem they are trying to solve, then find the right mix of artificial intelligence, traditional information technology, and human work for each task.

Digital fellow and assistant professor Harang Ju anticipates what he calls the “large language model-ification” of data, in which corporate data sources and private databases, such as personal note apps, become directly accessible to large language model-based agents rather than being reachable only through human-oriented interfaces.

Together, these viewpoints paint a picture of workplaces where artificial intelligence systems are more capable, more deeply embedded in data and workflows, and subject to new governance and safety techniques, even as researchers urge caution about overreliance on automated creativity and the long-term effects on human skills.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
