OpenAI has developed an experimental large language model, called a weight-sparse transformer, that is far easier to analyze than typical models. The project responds to a core problem: modern models are black boxes, and researchers cannot fully explain why they hallucinate or fail when applied to important domains. Leo Gao, a research scientist at OpenAI, said the work aims to improve safety as AI systems are integrated into high-stakes tasks.
The research sits in the field of mechanistic interpretability, which tries to map the internal circuits models use to carry out tasks. Most existing models use dense neural networks, where neurons connect broadly and learned features are spread across many units. That structure creates superposition, where individual neurons represent multiple features, making it hard to attribute behavior to specific parts. OpenAI instead constructed a weight-sparse transformer in which each neuron connects to only a few others, forcing features into localized clusters and making it easier to relate neurons or groups of neurons to concrete concepts and functions.
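To make the contrast concrete, here is a minimal sketch of what weight sparsity means in code: each output neuron keeps only a handful of incoming connections, with the rest zeroed out by a mask. This is an illustration of the general idea, not OpenAI's implementation; the layer name, the top-k rule, and the number of connections kept are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    """Linear layer whose weight matrix is masked so each output neuron
    keeps only its k largest-magnitude incoming connections (illustrative)."""
    def __init__(self, in_features, out_features, k=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.k = k

    def forward(self, x):
        # Build a binary mask that keeps the k strongest weights per output row.
        topk = self.weight.abs().topk(self.k, dim=1).indices
        mask = torch.zeros_like(self.weight)
        mask.scatter_(1, topk, 1.0)
        # Every output unit now depends on at most k inputs, so its role is
        # easier to trace than in a densely connected layer.
        return x @ (self.weight * mask).T + self.bias

layer = SparseLinear(in_features=64, out_features=64, k=4)
out = layer(torch.randn(1, 64))
```

In a dense layer, every one of the 64 outputs would mix all 64 inputs; here each output draws on only four, which is the property that lets researchers attribute behavior to specific connections.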
The resulting model is much smaller and slower than leading commercial models, and at most as capable as GPT-1, according to Gao. OpenAI has used it to trace exact chains of computation for simple tasks, such as adding a matching quotation mark to a block of text, and identified a learned circuit that mirrors an algorithm one might implement by hand. Some external researchers praised the approach as promising, while others warned it may not scale to larger, more capable models. Gao and Dan Mossing of OpenAI acknowledge the limitations but say the technique could eventually yield a fully interpretable model on the order of GPT-3, which would provide deep insight into how complex AI systems function.
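For a sense of what "an algorithm one might implement by hand" means for the quotation-mark task, a hand-written version could look like the sketch below. The function and its rule are hypothetical illustrations of the task the article describes, not code from OpenAI's work; the learned circuit is only said to mirror this kind of logic.

```python
def close_quote(text: str) -> str:
    """Append the closing quote that matches the block's opening quote."""
    for ch in text:
        if ch in ("'", '"'):       # find which quote character opened the block
            opening = ch
            break
    else:
        return text                # no opening quote, so nothing to close
    if text.count(opening) % 2 == 1:   # an odd count means the quote is still open
        return text + opening
    return text

print(close_quote('she said "hello'))   # -> she said "hello"
```

Tracing the sparse model on this task reportedly revealed a circuit that carries out the same kind of remember-the-opening-quote, emit-the-matching-close procedure, which is what makes the example a useful test case for interpretability.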
