San Francisco-based startup Goodfire has released Silico, a tool that lets researchers and engineers inspect the inner workings of an artificial intelligence (AI) model and adjust its parameters during training. Goodfire describes it as the first off-the-shelf product aimed at debugging every stage of model development, from dataset creation to training. The company is focused on mechanistic interpretability, a technique for mapping the neurons and pathways inside models to better understand how they produce outputs, with the goal of making model building more systematic and less dependent on trial and error.
Silico is designed to examine specific parts of a trained model, including individual neurons and groups of neurons, and run experiments on how they behave. It works with models whose internal parameters are accessible, which means it is better suited to many open-source systems than to closed products such as ChatGPT or Gemini. Developers can test which inputs activate certain neurons, trace how signals move upstream and downstream, and then modify connected parameters to amplify or suppress behaviors. Goodfire says it has already used these methods to reduce hallucinations in large language models and is now packaging those techniques into a commercial product. The tool also uses agents to automate interpretability tasks that previously required human specialists.
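As a rough illustration of what this kind of inspection involves (not Goodfire's actual tooling, which has not been published), the sketch below uses a standard PyTorch forward hook on an open-weights Hugging Face model to record how strongly a single MLP neuron fires for different prompts. The model name, layer path, and neuron index are illustrative assumptions.

```python
# Minimal sketch, assuming a Qwen-style open-weights model from Hugging Face;
# layer and neuron coordinates are placeholders, not findings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B"   # any open-weights model with accessible internals
LAYER, NEURON = 12, 2048      # hypothetical coordinates of the neuron of interest

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

captured = {}

def record_activation(module, inputs, output):
    # output: (batch, seq_len, intermediate) activations after the MLP nonlinearity
    captured["act"] = output[0, :, NEURON].detach()

# Qwen-style models in transformers expose the MLP nonlinearity at .mlp.act_fn
hook = model.model.layers[LAYER].mlp.act_fn.register_forward_hook(record_activation)

for prompt in ["If the brakes fail, who should the car protect?",
               "What is the capital of France?"]:
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    print(prompt, "-> peak activation:", captured["act"].max().item())

hook.remove()
```

Comparing peak activations across many prompts like this is one simple way to test which inputs drive a given neuron, the kind of experiment the paragraph above describes.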
Goodfire provided several examples of the system in use. In one case, the company identified a neuron in the open-source model Qwen 3 that was linked to the trolley problem; activating it made the model frame its responses as explicit moral dilemmas. In another test, researchers asked a model whether a company should disclose that its AI behaves deceptively in 0.3% of cases, affecting 200 million users. The model initially said no, but boosting neurons associated with transparency and disclosure flipped the answer to yes in nine out of 10 trials. Goodfire argues this shows that models may already contain useful ethical reasoning patterns that can be strengthened rather than rebuilt from scratch.
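The same hook mechanism can be turned around to steer behavior rather than merely observe it. The sketch below, again a hypothetical illustration rather than Goodfire's product code, boosts a chosen neuron during generation and re-asks the disclosure question; the layer and neuron coordinates stand in for whatever "transparency" features such a tool might surface.

```python
# Minimal steering sketch, assuming the same hypothetical model and coordinates
# as above; the "transparency" neuron and boost value are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B"
LAYER, NEURON, BOOST = 12, 2048, 8.0   # hypothetical "transparency" neuron

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def boost_neuron(module, inputs, output):
    output = output.clone()
    output[..., NEURON] = output[..., NEURON] + BOOST  # amplify the feature
    return output                                      # returned tensor replaces the original

prompt = ("A company's AI assistant behaves deceptively in 0.3% of cases, "
          "affecting 200 million users. Should the company disclose this? Answer yes or no.")
inputs = tok(prompt, return_tensors="pt")

def ask():
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    return tok.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print("baseline:", ask())
hook = model.model.layers[LAYER].mlp.act_fn.register_forward_hook(boost_neuron)
print("boosted: ", ask())
hook.remove()
```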
Silico can also be used earlier in development by filtering training data so that undesirable parameter values are less likely to emerge. Goodfire says this could help address errors such as models claiming that 9.11 is greater than 9.9, potentially by identifying misleading associations and retraining the system to avoid them in mathematical contexts. The company plans to sell Silico with pricing set case by case. Supporters say tools like this could help smaller firms build more trustworthy systems for areas such as health care and finance, while critics caution that the work still adds precision to an inherently uncertain process rather than turning model training into true engineering.
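What that kind of data filtering might look like in its simplest form is sketched below; the heuristic, keywords, and example passages are assumptions for illustration, not Goodfire's method. The idea is to flag training passages where dotted numbers follow version- or section-style ordering (where "9.11" legitimately comes after "9.9") so they can be held out of, or down-weighted in, mathematical training data.

```python
# Illustrative data-filtering sketch; the regexes and threshold are assumptions.
import re

ORDERING_CONTEXT = re.compile(
    r"\b(version|release|chapter|section|v)\s*\d+\.\d+", re.IGNORECASE)
DOTTED_NUMBER = re.compile(r"\b\d+\.\d+\b")

def flag_misleading(passage: str) -> bool:
    """True if dotted numbers appear in an ordering context that conflicts
    with decimal arithmetic."""
    return bool(ORDERING_CONTEXT.search(passage)) and len(DOTTED_NUMBER.findall(passage)) >= 2

corpus = [
    "Version 9.11 fixes bugs introduced in version 9.9.",
    "Since 9.11 is less than 9.9, the second value is larger.",
]
clean = [p for p in corpus if not flag_misleading(p)]
print(clean)  # keeps only the passage consistent with decimal arithmetic
```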
