Creating psychological safety in the artificial intelligence era

A new report from MIT Technology Review Insights argues that psychological safety is a prerequisite for successful enterprise artificial intelligence adoption, finding that cultural fears still hinder experimentation despite high self-reported safety levels. Executives link experiment-friendly environments directly to better artificial intelligence outcomes, but many organizations acknowledge their foundations remain unstable.

The article argues that deploying enterprise-grade artificial intelligence requires organizations to tackle both technical complexity and the human factors that determine whether new tools deliver value. Psychological safety, defined as feeling free to express opinions and take calculated risks without worrying about career repercussions, is presented as essential for successful artificial intelligence adoption. In workplaces where psychological safety is strong, employees can question assumptions and raise concerns about new technologies without fear of reprisal, which is critical when dealing with a powerful and still-evolving technology that lacks established best practices.

The report cites Infosys executive vice president and chief technology officer Rafee Tarafdar, who asserts that “psychological safety is mandatory in this new era of AI,” because the technology is changing quickly and companies must be willing to experiment, accept that some initiatives will fail, and provide a safety net for employees. To understand how psychological safety shapes enterprise artificial intelligence outcomes, MIT Technology Review Insights surveyed 500 business leaders. The findings show high self-reported levels of psychological safety but also reveal that fear persists beneath official messaging. Experts suggest that even when organizations publicly promote a safe-to-experiment culture, deeper cultural undercurrents can undermine that message, indicating that human resources alone cannot drive the necessary transformation and that psychological safety must be embedded into core collaboration processes.

Key survey results highlight that companies with experiment-friendly cultures see stronger artificial intelligence results. A large majority of executives surveyed (83%) believe a company culture that prioritizes psychological safety measurably improves the success of AI initiatives, and 84% have observed connections between psychological safety and tangible AI outcomes. Psychological barriers are described as greater obstacles to enterprise artificial intelligence adoption than technical ones: although nearly three-quarters of respondents (73%) say they feel safe providing honest feedback and expressing opinions freely in their workplace, 22% admit they have hesitated to lead an AI project for fear of being blamed if it misfires. Achieving psychological safety is portrayed as a moving target: only 39% of leaders rate their organization’s current level of psychological safety as “very high,” while another 48% report a “moderate” degree, suggesting many enterprises are pursuing artificial intelligence initiatives on cultural foundations that are not yet fully stable.

