Musk and Altman clash over credibility in final trial week

The final week of the Musk v. Altman trial centered on whether Elon Musk or Sam Altman is more credible, and whether OpenAI abandoned its nonprofit mission. Jurors are now weighing competing claims over control, restructuring, and Artificial Intelligence safety.

The trial's final week turned on dueling attacks over credibility, governance, and the future of OpenAI's mission. Sam Altman was pressed over allegations of lying and self-dealing tied to companies that do business with OpenAI, while OpenAI's legal team portrayed Elon Musk as a power-seeker who wanted to control the development of artificial general intelligence, or AGI. The courtroom fight culminated in closing arguments that cast Musk as a rival seeking to undermine a competitor and Altman as an unreliable steward of a nonprofit created to build Artificial Intelligence for humanity's benefit.

Musk's lawyer Steven Molo argued that Altman and Greg Brockman broke a promise to use Musk's donations to preserve OpenAI as a nonprofit devoted to developing Artificial Intelligence for the benefit of humanity. OpenAI's lawyer Sarah Eddy countered that Altman and Brockman never made such a promise and said OpenAI remains a nonprofit dedicated to developing Artificial Intelligence safely despite later restructuring. Eddy also argued that Musk sued too late and said his real motive is to undermine a competitor for the benefit of xAI, which he launched in 2023. The jury will begin deliberating on Monday and could deliver an advisory verdict as soon as next week. That verdict is not binding on the judge, who will decide the case.

Testimony in the closing stretch sharpened the case into two competing narratives. Altman testified that in 2017 Musk sought control over a proposed for-profit arm and even suggested that control of OpenAI could pass to his children if he died. Molo responded by highlighting testimony from former executives and board members who said Altman had lied to them, including about events tied to his brief removal as chief executive in 2023. Molo also questioned Altman about personal investments in startups that do business with OpenAI, including testimony that he tried to steer OpenAI toward buying power from Helion Energy, a company of which he owns a third.

A central issue was whether OpenAI still functions as a nonprofit committed to safe AGI development. Eddy argued that the nonprofit still controls the for-profit and remains focused on helping AGI turn out well for humanity. Molo countered that the nonprofit’s control is only nominal because the same leadership overlaps across both entities, the nonprofit hired employees only shortly before trial, and its work has focused on grant-making rather than Artificial Intelligence research. The proceedings also returned repeatedly to safety disputes, despite the judge’s efforts to keep them peripheral. OpenAI underscored that theme with a golden donkey trophy inscribed, “Never stop being a jackass for safety,” which employee Joshua Achiam said commemorated Musk insulting him after he warned in 2018 that racing toward AGI could compromise safety.

Impact Score: 58

Artificial Intelligence model learns to say it does not know

South Korean researchers developed a training method that helps Artificial Intelligence models recognize when they lack knowledge instead of responding with misplaced confidence. The approach aims to reduce hallucinations and improve reliability in areas such as autonomous driving and medicine.

Artificial Intelligence reshapes the UK jobs market

Artificial Intelligence is changing how UK businesses hire, train and structure work, with growing adoption among SMEs and rising concern over entry-level roles. The shift is increasing demand for digital skills while deepening worries about youth unemployment and long-term skills shortages.

State media shapes large language model outputs

Research in Nature finds that government control of media can influence large language model behavior through training data. The effect appears especially visible across languages, with models producing more favorable answers about China when prompted in Chinese.

Deepfake porn’s hidden victims

Nonconsensual sexual deepfakes are harming not only the people whose faces are inserted into explicit content, but also adult performers whose bodies and likenesses are repurposed without consent. As generative Artificial Intelligence tools spread, performers face growing psychological, legal, and financial risks with limited protection.
