Configuring language models in opencode

Opencode uses the AI SDK and Models.dev to connect to more than 75 large language model providers, with support for both cloud and local models. Users can choose recommended models, set defaults, configure options, and define variants through a central config file.

Popular providers come preloaded and are activated once credentials are added through the /connect command. After a provider is configured, users can list and select available models by typing /models, which surfaces a range of options that changes frequently as new models are released. The documentation suggests that only a subset of these models are strong at both code generation and tool calling, and it highlights several that currently work well with Opencode, including GPT 5.2, GPT 5.1 Codex, Claude Opus 4.5, Claude Sonnet 4.5, Minimax M2.1, and Gemini 3 Pro.

To set a default model, users specify a model key in the opencode.json configuration file, where the full identifier follows the provider_id/model_id pattern, such as lmstudio/google/gemma-3n-e4b or opencode/gpt-5.1-codex when using Opencode Zen. Custom providers follow the same structure, with provider_id taken from the provider section and model_id from provider.models; these settings determine which model Opencode prioritizes on startup. At launch, Opencode checks for models in a fixed order: first any --model or -m command line flag using the provider_id/model_id format, then the model entry in the config file, then the last used model, and finally the first model found according to an internal priority.
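As a sketch of the pattern above, a minimal opencode.json could set a default model and register a custom provider; the myprovider and my-model names here are hypothetical placeholders, not a real provider:

```json
{
  "model": "opencode/gpt-5.1-codex",
  "provider": {
    "myprovider": {
      "models": {
        "my-model": {}
      }
    }
  }
}
```

With this in place, the custom model would be addressable as myprovider/my-model, since the identifier is assembled from the key in the provider section and the key under provider.models.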

Opencode allows global configuration of model behavior through the provider block in opencode.jsonc, where options can be tuned for specific models, such as gpt-5 on the openai provider or claude-sonnet-4-5-20250929 on the anthropic provider. These include settings like reasoningEffort, textVerbosity, reasoningSummary, and thinking budgets such as budgetTokens: 16000. The same options can be overridden at the agent level, so agent-specific configs take precedence over global model settings.

Variants provide another layer of control by letting users define named configurations like high or low for the same model without duplicating entries. Opencode ships with built-in variants for providers such as anthropic, openai, and google, with presets spanning none, minimal, low, medium, high, and xhigh and different effort or thinking budgets. Users can override or disable built-in variants, add custom ones like thinking or fast, and quickly switch between them at runtime using the variant_cycle keybind.
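A provider block combining per-model options and a custom variant might look like the following sketch in opencode.jsonc. The specific option values and the exact shape of the variants entry are illustrative assumptions; only the option keys come from the list above:

```jsonc
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5": {
          // global defaults for this model
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low"
          }
        }
      }
    },
    "anthropic": {
      "models": {
        "claude-sonnet-4-5-20250929": {
          "options": {
            "thinking": {
              "type": "enabled",
              "budgetTokens": 16000
            }
          },
          // hypothetical custom variant with a larger thinking budget;
          // switched at runtime via the variant_cycle keybind
          "variants": {
            "thinking": {
              "options": {
                "thinking": { "type": "enabled", "budgetTokens": 32000 }
              }
            }
          }
        }
      }
    }
  }
}
```

Defining a variant this way avoids duplicating the whole model entry just to change one budget or effort setting.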


What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether Artificial Intelligence-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.
