Google's Gemini and other chatbots surface real phone numbers

Generative Artificial Intelligence chatbots are surfacing real phone numbers and other personal details, sometimes by pulling from obscure public sources and sometimes by inventing plausible but wrong contact information. Privacy experts say users have few reliable ways to find out whether their data is in model training sets or to force its removal.

Generative Artificial Intelligence chatbots are exposing people’s real phone numbers and other personally identifiable information, creating a new privacy risk for ordinary users. Reports described callers being misdirected by Google’s Gemini to private numbers for a lawyer, a product designer, and a locksmith. In another case, a software developer in Israel was contacted on WhatsApp after Gemini gave his personal number as a customer service contact for PayBox. A University of Washington PhD candidate also prompted Gemini to reveal a colleague’s personal cell phone number, even though the information was difficult to locate through a normal web search.

Privacy experts say these failures are likely tied to personally identifiable information in training data, though the exact mechanism is hard to pin down. DeleteMe says customer queries about generative Artificial Intelligence have increased 400% over the last seven months, reaching a few thousand. Of these concerns, 55% reference ChatGPT, 20% Gemini, 15% Claude, and 10% other Artificial Intelligence tools. The complaints usually fall into two patterns: chatbots returning accurate personal details about a user, or generating plausible but incorrect contact information for someone else. In one example, Daniel Abraham later searched his own number and found it had appeared online once in 2015 on a local question-and-answer site, suggesting a single old public posting may have been enough for Gemini to reproduce it years later.

Guardrails meant to prevent these disclosures are proving inconsistent. Gemini gave Yael Eiger’s number in response to a prompt asking for her contact info, even though the number had been shared only for a technology workshop and was buried in search results. Researchers at the University of Washington then tested ChatGPT, which initially refused but then suggested an “investigative-style” approach requiring hints such as a neighborhood or co-owner name. After receiving that information, ChatGPT produced a professor’s home address, home purchase price, and spouse’s name from city property records. Similar issues have also been reported with xAI’s Grok, which was found to provide residential addresses, phone numbers, and work addresses in many cases.

There are few clear remedies. Experts say consumers have no straightforward way to verify whether their data is in a model’s training set or to compel its removal, especially when the information was scraped from the public web. According to the California data broker registry, 31 of 578 registered data brokers operating in the state self-reported that they had “shared or sold consumers’ data to a developer of a GenAI system or model in the past year.” Google said it was “looking into” the reported Gemini cases and pointed to a support page for privacy objections and correction requests. OpenAI offers a privacy portal for removal requests, while Anthropic describes its data practices but does not provide a clear removal path. For now, privacy specialists say the best defense is to remove personal data from public sources before it is captured in future scrapes, though that does not solve information already absorbed into models.

Impact Score: 74

U.S. and China revisit Artificial Intelligence emergency talks

Washington and Beijing are exploring renewed talks on an emergency communication channel for Artificial Intelligence as fears grow over the capabilities of Anthropic’s Mythos model. The shift reflects rising concern in both capitals that competitive pressure is outpacing safeguards.

Artificial Intelligence divides employers as hiring and headcount shift

U.S. hiring beat expectations in April, but employers remain split on whether Artificial Intelligence should drive layoffs, productivity gains, or internal redeployment. At the same time, candidate use of Artificial Intelligence is outpacing employer adoption in hiring, adding new pressure to screening and entry-level recruiting.

What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.
