EA CEO defends broader AI use in game development

EA CEO Andrew Wilson defended the company's internal use of artificial intelligence after employees claimed the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.

EA's leadership is defending a wider push into AI after internal complaints that the company's tools were creating extra work and reducing efficiency. The debate intensified after management announced an AI pivot aimed at lowering operating costs, though workers later said the company had been pushing these systems internally well before the takeover.

At the IICON gaming event in Las Vegas, Andrew Wilson said, “almost all, like 85%, of our quality assurance is done with some kind of machine learning or AI-driven algorithm.” He also said EA's QA hiring is at an all-time high. Wilson described the technology as “almost entirely augmentation” rather than a replacement for staff, and said it is being used for routine verification work such as checking whether systems boot properly, shut down properly, or crash.

That defense stands in contrast to employee claims that EA's AI push was causing more harm than good and ultimately costing them time. The tension reflects a broader question inside the company: are these tools improving productivity in practice, or adding friction to development workflows?

EA's use of AI also appears to extend beyond basic internal testing. There is evidence of AI-generated assets in Battlefield, and the company has partnered with Stability AI to develop generative AI tools. Together, those moves suggest EA is positioning AI not only as a back-end efficiency tool but also as part of its creative and production pipeline.


Generative AI is reshaping cybercrime less than feared

Research into criminal underground forums suggests generative AI is being used mainly as a productivity tool rather than as a transformative criminal breakthrough. The biggest near-term risks may come from automation, fraud support, and attackers adapting content to influence chatbot outputs.

Samsung strike threat raises chip supply risks

A possible labor strike at Samsung Electronics in South Korea is raising concerns about chip production disruptions, client defections, and pressure on its position in the global semiconductor race. The dispute centers on bonus rules, but the larger risk is damage to Samsung’s credibility as a reliable supplier for major tech customers.
