Inside the Artificial Intelligence divide roiling Electronic Arts

Electronic Arts is pushing nearly 15,000 employees to weave Artificial Intelligence into daily work, but many developers say the tools add errors, extra cleanup, and job anxiety. Internal training, in-house chatbots, and executive cheerleading are colliding with creative skepticism and ethical concerns.

A viral Slack meme at Electronic Arts captured a growing workplace rift: executives demanding Artificial Intelligence immediately, and employees asking to what end. That tension mirrors broader corporate trends as companies from Microsoft to Shopify encourage or track worker use of generative tools. Fresh data underline the gulf: a Dayforce-funded survey of 7,000 professionals found 87 percent of executives use Artificial Intelligence daily, versus 57 percent of managers and 27 percent of employees, while an Upwork-commissioned poll reported that 92 percent of C-suite executives expect productivity gains and 40 percent of employees cite heavier workloads.

Inside Electronic Arts, leadership has urged staff to treat generative systems as a thought partner across creative and managerial tasks. Internal materials describe required trainings, daily tool usage, and sample prompts that coach managers through delicate conversations on performance, pay, and promotions, as well as guidance for employees after a promotion denial. Some workers say the company’s in-house chatbot, ReefGPT, generates flawed code and other hallucinations that require time-consuming fixes. Creative teams report pressure to train models on their own work and fear reduced demand for roles such as character artists and level designers. A recently laid-off senior quality-assurance designer said Artificial Intelligence could handle a key part of his job, summarizing feedback from hundreds of play testers, and suspects that capability factored into about 100 cuts at Respawn Entertainment in the spring. Electronic Arts declined to comment.

The industry context is complicated. While game AIs have long powered opponents and nonplayer characters, today’s tools automate search, content generation, and coding at scale, and corporate spending on Artificial Intelligence roughly doubled in 2024, according to Bain & Co. Yet the technology carries a reputational burden, from formulaic résumés and derivative images to academic cheating and troubling chatbot overuse. Workers worry about displacement, a point echoed by Wharton’s Peter Cappelli, who cautions leaders not to expect enthusiasm for tools that may cost people their jobs. At Electronic Arts, CEO Andrew Wilson told investors the technology is the core of the business, even as the company’s 10-K warned of ethical and legal risks that could damage brands and results.

Financial pressures add urgency. Electronic Arts’ net income fell 9.4 percent in the fiscal year ended June 30, 2025, including a 28 percent drop in the final quarter. Across gaming, an estimated 14,600 jobs were cut in 2024, and the global workforce has shrunk about 9 percent since 2022, according to Aldora Intelligence, even as software spending is projected to grow. Creators are wary: in a survey of 3,000 developers by the Game Developers Conference, Omdia, and Game Developer, nearly one third said generative tools are hurting the sector and about half voiced serious ethical concerns, citing intellectual property, energy use, and bias. Analyst Doug Creutz summed up the adoption challenge: it is a problem when the dogs will not eat the dog food.

Still, there are paths to adoption. A meta-analysis led by MIT Sloan’s Jackson G. Lu finds people prefer Artificial Intelligence when it clearly outperforms humans and personalization is unnecessary, such as forecasting or numeric estimation. He advises starting there, then phasing tools into tasks involving taste, fairness, or empathy with human oversight and customization. Get the fit right, he argues, and even skeptics can become power users. Until then, the trust gap between leadership ambitions and employee realities remains a defining obstacle at Electronic Arts and beyond.

Impact Score: 52

Artificial Intelligence divides employers as hiring and headcount shift

U.S. hiring beat expectations in April, but employers remain split on whether Artificial Intelligence should drive layoffs, productivity gains, or internal redeployment. At the same time, candidate use of Artificial Intelligence is outpacing employer adoption in hiring, adding new pressure to screening and entry-level recruiting.

What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether Artificial Intelligence-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options, and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.
