Google's AI Overviews—the automated, generative search summaries now widely embedded in Google Search—have once again come under scrutiny after displaying critical factual errors in the wake of a fatal Air India crash. According to multiple user reports, including posts on Reddit, users searching for information about the incident were in some cases shown AI-generated summaries incorrectly stating that an Airbus A330-243 was involved. In reality, the crash involved a Boeing 787, not an Airbus aircraft. Not all overviews made this error, but the inconsistency underscores how unreliable such automated responses can be during rapidly evolving news events.
This latest hallucination follows earlier notorious mistakes in Google's AI Overviews, most infamously the "glue-on-pizza" suggestion. While some past hallucinations prompted amusement or disbelief, the Air India crash error carries a more severe consequence: the risk of spreading misinformation about a serious and sensitive event. Such errors could harm affected companies—like Airbus in this instance—and mislead investors, travelers, and the general public, especially since many users accept featured search snippets at face value rather than verifying them against primary news sources.
Google responded by promptly correcting the inaccurate AI Overviews and issued a statement assuring that its systems use such examples to improve accuracy. The company claims AI Overviews maintain an accuracy rate on par with other search features, such as Featured Snippets. Critics counter that disclaimers about potential mistakes are insufficient, especially when the technology is aggressively presented to users with no easy way to opt out. The episode reinforces enduring concerns about hallucination in generative AI and the difficulty of minimizing such risks at scale. Despite notable advances, ensuring factual reliability and preventing the spread of false or misleading information from AI-driven platforms remains an open challenge, and the technology's integration into everyday search remains controversial.