Creating psychological safety in the artificial intelligence era

A new report from MIT Technology Review Insights argues that psychological safety is a prerequisite for successful enterprise artificial intelligence adoption, finding that cultural fears still hinder experimentation despite high self-reported safety levels. Executives link experiment-friendly environments directly to better artificial intelligence outcomes, but many organizations acknowledge their foundations remain unstable.

The report argues that deploying enterprise-grade artificial intelligence requires organizations to tackle both technical complexity and the human factors that determine whether new tools deliver value. Psychological safety, defined as feeling free to express opinions and take calculated risks without worrying about career repercussions, is presented as essential for successful artificial intelligence adoption. In workplaces where psychological safety is strong, employees can question assumptions and raise concerns about new technologies without fear of reprisal, which is critical when dealing with a powerful and still-evolving technology that lacks established best practices.

The report cites Infosys executive vice president and chief technology officer Rafee Tarafdar, who asserts that “psychological safety is mandatory in this new era of AI” because the technology is changing quickly: companies must be willing to experiment, accept that some initiatives will fail, and provide a safety net for employees. To understand how psychological safety shapes enterprise artificial intelligence outcomes, MIT Technology Review Insights surveyed 500 business leaders. The findings show high self-reported levels of psychological safety but also reveal that fear persists beneath official messaging. Experts suggest that even when organizations publicly promote a “safe to experiment” culture, deeper cultural undercurrents can undermine that message, indicating that human resources alone cannot drive the necessary transformation and that psychological safety must be embedded into core collaboration processes.

Key survey results show that companies with experiment-friendly cultures see stronger artificial intelligence results. A large majority of executives surveyed (83%) believe a company culture that prioritizes psychological safety measurably improves the success of AI initiatives, and 84% have observed connections between psychological safety and tangible AI outcomes. Even so, psychological barriers are described as greater obstacles to enterprise artificial intelligence adoption than technical ones: although nearly three-quarters (73%) of respondents say they feel safe to provide honest feedback and express opinions freely in their workplace, 22% admit they have hesitated to lead an AI project for fear of being blamed if it misfires. Achieving psychological safety is also portrayed as a moving target. Only 39% of leaders rate their organization’s current level of psychological safety as “very high,” while another 48% report a “moderate” degree, suggesting many enterprises are pursuing artificial intelligence initiatives on cultural foundations that are not yet fully stable.


What businesses need to know about the EU cyber resilience act

The EU cyber resilience act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether AI-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.

OpenAI launches AI deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out AI systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from AI data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.
