Looking ahead at artificial intelligence and work in 2026

MIT Sloan researchers expect 2026 to bring a widening performance gap between humans and large language models, a push to scale responsible artificial intelligence deployments, and new questions about creativity, safety, and data access in the workplace.

As 2026 begins, MIT Sloan faculty and researchers are watching how artificial intelligence will reshape work, from productivity and governance to creativity and data access. Several experts contributing to the school’s Artificial Intelligence at Work newsletter highlight a mix of technical, organizational, and human concerns that they believe will define the next phase of enterprise adoption. Their perspectives underscore that the technology’s impact will depend as much on implementation choices and governance as on raw model capabilities.

Professor of the practice Rama Ramakrishnan is focused on what he calls the human versus large language model accuracy gap in knowledge work. He notes that many enterprise generative artificial intelligence pilots hinge on whether large language model outputs are accurate enough for specific tasks, and he argues that companies should compare these systems not to a theoretical 100% standard but to how accurately humans perform the same work today. If humans achieve, say, 95% accuracy on a task while the large language model reaches only 90%, the gap matters less than the trajectory: frontier models are steadily improving, while human accuracy will likely remain flat. Ramakrishnan expects that large language model accuracy could surpass human accuracy for many enterprise tasks in 2026, and he is asking which tasks those are, how much business value they represent, and how much employment could be at risk.

Principal research scientist Barbara Wixom is studying the guardrails organizations need to deploy artificial intelligence effectively and safely without undermining compliance, values, ethics, or innovation. She and her colleague Nick van der Meulen observe that traditional governance playbooks are failing to keep pace with the rate of change, so they are tracking emerging practices that help companies adapt governance so artificial intelligence solutions can scale and endure.

Professor Roberto Rigobon raises concerns about what happens when humans outsource experimentation and creative thought to artificial intelligence. He warns that, because of brain plasticity, people lose skills they stop exercising, and he argues that human creativity in entrepreneurship, art, and music remains “infinitely better” than what artificial intelligence systems can produce, making implementation choices a first-order concern.

Senior lecturer Melissa Webster is watching mechanistic interpretability research, which seeks to reveal how neural networks function. She notes that generative artificial intelligence models are “grown” through training rather than explicitly built, and that a better understanding of their inner workings could improve safety, alignment, and decision-making at work.

Senior lecturer George Westerman expects 2026 to bring a shift from experimentation with generative artificial intelligence and agents to solutions that create real value at scale. He stresses that organizations must start with the question of what problem they are trying to solve, then find the right mix of artificial intelligence, traditional information technology, and human work for each task.

Digital fellow and assistant professor Harang Ju anticipates what he calls the “LLM-ification” of data, in which corporate data sources and private databases, such as personal note-taking apps, become directly accessible to agents built on large language models rather than reachable only through human-oriented interfaces.

Together, these viewpoints paint a picture of workplaces where artificial intelligence systems are more capable, more deeply embedded in data and workflows, and subject to new governance and safety techniques, even as researchers urge caution about overreliance on automated creativity and the long-term effects on human skills.


