OpenAI curbs odd creature references in chatbot responses

OpenAI has adjusted its models after users complained about overly familiar responses and strange references to goblins, gremlins, pigeons, and raccoons. The company traced the behavior to a retired "nerdy" personality whose habits spread into broader model training.

OpenAI has changed how some of its tools respond after users and employees noticed an unusual pattern of references to goblins and other creatures in chatbot output. The issue became visible in Codex, where code problems were sometimes described as “little goblins” following the release of GPT-5.1 in November. Users also complained that the model had become overly familiar in tone, prompting the company to examine what it called specific verbal tics.

OpenAI found that mentions of “goblins” had risen 175 percent since the launch of GPT-5.1, while mentions of “gremlins” had risen 52 percent. Users had also spotted internal instructions in Codex telling the tool to avoid a list of creatures unless they were clearly relevant: Codex should “never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query”. The instructions also told the tool to avoid platitudes.
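OpenAI has not described how it measured the rise in creature mentions. As an illustrative sketch only, a percentage increase like the ones reported could be derived from a simple keyword-frequency comparison between two batches of model outputs; the function names and word list below are hypothetical, not OpenAI's tooling:

```python
from collections import Counter

# Hypothetical watchlist, echoing the creatures named in the Codex instructions.
CREATURES = {"goblin", "goblins", "gremlin", "gremlins", "raccoon", "raccoons"}

def creature_counts(outputs):
    """Count creature-word occurrences across a list of model outputs."""
    counts = Counter()
    for text in outputs:
        for word in text.lower().split():
            w = word.strip(".,!?\"'")
            if w in CREATURES:
                counts[w] += 1
    return counts

def percent_change(before, after):
    """Percent change in total creature mentions between two count tables."""
    b, a = sum(before.values()), sum(after.values())
    return (a - b) / b * 100 if b else float("inf")
```

Comparing counts from outputs sampled before and after a model release would then give a figure in the same form as the reported 175 percent rise.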

The company said the behavior appears to have come from training work aimed at giving models different communication styles. One of those styles was a “nerdy” persona that rewarded metaphorical mentions of goblins, gremlins, and similar creatures. Although that personality has been retired, OpenAI said its habits had seeped into wider model training.

The episode highlights a broader challenge for generative Artificial Intelligence systems as they are used for a wider range of tasks, including mission-critical enterprise work. Unpredictable outputs have raised concerns about fictional references, fabricated citations in legal filings, and responses that become overly sycophantic. A study by OpenAI rival Anthropic published in March found that users’ main anxiety about the technology centered on spurious outputs, commonly described as hallucinations.

Impact Score: 52

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection (MRC) adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.

Distillation debate risks policy overreach

Distillation remains a standard technique across the Artificial Intelligence industry, but recent misuse of closed model APIs has blurred the line between normal training practice and abuse. Growing political pressure could end up harming open-weight ecosystems, smaller developers, and academic research more than the targeted offenders.
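Distillation, as practiced across the industry, typically trains a smaller "student" model to match a larger "teacher" model's output distribution rather than hard labels. A minimal sketch of the standard soft-label objective (an illustrative example of the general technique, not any specific lab's pipeline):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a vector of logits."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this trains the student to mimic the teacher's soft labels --
    the core of knowledge distillation. The temperature T > 1 exposes the
    teacher's relative confidence across classes, not just its top pick.
    """
    p = softmax(teacher_logits, T)  # teacher "soft labels"
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * np.log(p / q)))
```

The policy tension is that this same objective works whether the teacher is an open-weight model the trainer owns or a closed model queried through an API, which is why blanket restrictions are hard to scope to the actual abuse.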
