Artificial Intelligence delusions and OpenAI’s Microsoft risk

Stanford researchers found that chatbots can intensify delusion-like thinking into dangerous obsession, while a separate report highlights OpenAI’s admission that its ties to Microsoft pose a business risk. The briefing also spans policy, chips, space, biotech, and digital rights.

Stanford researchers examined transcripts from chatbot users who spiraled into delusion-like thinking. The findings suggest that chatbots can turn a benign, delusion-like thought into a dangerous obsession. The central unresolved question is whether artificial intelligence (AI) causes delusions or mainly amplifies vulnerabilities that already exist, a distinction with major implications for how these systems are understood and governed.

OpenAI has acknowledged that its close ties with Microsoft are a business risk, according to a pre-IPO document cited in the briefing. The same roundup says OpenAI is courting private equity firms with a sweeter deal than Anthropic's, is building a fully automated researcher, and aims to challenge Google in search. Elsewhere in AI, Mark Zuckerberg is building an AI CEO to help run Meta, Mistral's CEO has urged Europe to impose a content levy on AI companies, and Siemens' CEO warned that prioritizing AI independence could bring "disaster" for Europe.

Technology policy and infrastructure also feature prominently. The US has banned all new foreign-made consumer routers on national security grounds, while the EU is being pressed to tighten rules for smart TVs built by large technology companies. Hong Kong police can now require device passwords under a new law, and refusing to comply could lead to a year in jail. On the space front, Russia’s aspiring SpaceX rival has launched its first internet satellites into orbit as it tries to build a low-Earth orbit network, while a separate event preview points to growing ambitions around permanent Moon bases and the search for life on Mars. The discussion is scheduled for 16:00 GMT / 12:00 PM ET / 9:00 AM PT.

Industry and science updates round out the picture. Elon Musk's "Terafab" chip factory is facing a reality check because of chip production shortages, while future AI chips may be built on glass. Palantir has become a "poisonous" issue on the campaign trail, with candidates scrutinized over their ties to the company, and its access to sensitive UK data is drawing concern. In biotech, a startup backed by Tim Draper wants to replace animal testing with nonsentient "organ sacks." A separate feature revisits the legacy of the first gene-edited babies, created in 2018 using CRISPR; the scientist behind the work was sentenced to three years in prison, even as increasingly accessible gene editing raises questions about how human evolution could be altered.

Impact Score: 55

Noah Smith and Claude debate AI and the future of science

A long exchange between Noah Smith and Claude explores where AI could most accelerate scientific progress, from materials science to biology and climate. The discussion centers on whether future breakthroughs will come from human-readable laws or from complex patterns that machines can exploit even when people cannot fully understand them.
