Artificial Intelligence hype cools as quantum navigation targets GPS jamming

The newsletter charts how excitement around generative artificial intelligence has run into limits in 2025, while researchers push quantum navigation as a potential answer to dangerous GPS jamming in conflict zones.

The newsletter opens by reflecting on what it calls the great Artificial Intelligence hype correction of 2025. When OpenAI launched ChatGPT as a free web app in late 2022, it rapidly altered the trajectory of the tech industry and even influenced world economies, as millions of people began holding natural conversations with their computers. Those early successes created high expectations, but this year has exposed the gap between the bold promises of leading Artificial Intelligence companies and what the technology can consistently deliver. Core model updates no longer feel like dramatic step changes, and the technology’s most impressive feats still come with significant caveats, underscoring that it remains experimental despite the genuine “Wow” moments of the past few years.

This reassessment is framed as part of a broader Hype Correction package that aims to help readers reset their expectations for what Artificial Intelligence can and cannot do. The publication is actively encouraging readers to explore stories that unpack both the possibilities and the limits of the field, including deeper analysis in its weekly Algorithm newsletter devoted to Artificial Intelligence. In parallel, the issue highlights a very different kind of frontier technology: quantum navigation. Since the 2022 invasion of Ukraine, thousands of flights have been disrupted by a Russian campaign of radio transmissions that jam GPS signals, raising the risk of a serious aviation incident and underscoring how vulnerable satellite-based navigation has become to jamming and spoofing tactics.

Quantum navigation is presented as a promising alternative now emerging from research labs, using the quantum behavior of light and atoms to build ultra-sensitive sensors that allow aircraft and other vehicles to navigate without relying on satellites at all. The rest of the newsletter curates notable tech stories, including the Trump administration’s new US Tech Force program designed to attract engineers into government modernization, scrutiny of how Artificial Intelligence data centers affect electricity and water use, and shifting corporate strategies around electric vehicles and financial regulation. It also touches on Hollywood’s divided response to Artificial Intelligence, corporate America’s new obsession with hiring storytellers, and the rise of the Chinese model DeepSeek as a tool for digitally mediated fortune-telling among anxious young people. A closing section offers lighter diversions, from the long history of online chess to reflections on Jane Austen’s legacy and New England seafood.


Creating psychological safety in the artificial intelligence era

A new report from MIT Technology Review Insights argues that psychological safety is a prerequisite for successful enterprise artificial intelligence adoption, finding that cultural fears still hinder experimentation despite high self-reported safety levels. Executives link experiment-friendly environments directly to better artificial intelligence outcomes, but many organizations acknowledge their foundations remain unstable.

Should the U.S. be worried about an artificial intelligence bubble?

Harvard Business School professor Andy Wu argues that worries about an artificial intelligence bubble hinge on how much debt and risk smaller players and vendors take on, while big technology firms appear structurally insulated from a potential bust.
