Configuring language models in opencode

Opencode uses the AI SDK and Models.dev to connect to more than 75 large language model providers, with support for both cloud and local models. Users can choose recommended models, set defaults, configure options, and define variants through a central config file.

Popular providers come preloaded and are activated once credentials are added through the /connect command. After a provider is configured, users can list and select available models by typing /models, which surfaces a range of options that changes frequently as new models are released. The documentation notes that only a subset of these are strong at both code generation and tool calling, and it highlights several models that currently work well with opencode, including GPT 5.2, GPT 5.1 Codex, Claude Opus 4.5, Claude Sonnet 4.5, Minimax M2.1, and Gemini 3 Pro.

To set a default model, users specify a model key in the opencode.json configuration file, where the full identifier follows the provider_id/model_id pattern, such as lmstudio/google/gemma-3n-e4b or opencode/gpt-5.1-codex when using Opencode Zen. Custom providers follow the same structure, with provider_id taken from the provider section and model_id from provider.models, and these settings determine which model opencode will prioritize on startup. Opencode checks for models in a fixed order at launch: first any --model or -m command-line flag (using the provider_id/model_id format), then the model entry in the config file, then the last-used model, and finally the first model found according to an internal priority.
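As a sketch of the pattern described above, a minimal opencode.json that sets the default model to the GPT 5.1 Codex model on Opencode Zen might look like the following (only the model key from the text is shown; any other keys in the file are omitted):

```json
{
  "model": "opencode/gpt-5.1-codex"
}
```

The value before the slash is the provider_id and everything after it is the model_id, so the same key would hold, for example, lmstudio/google/gemma-3n-e4b for a local LM Studio model.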

Opencode allows global configuration of model behavior through the provider block in opencode.jsonc, where options can be tuned for specific models such as gpt-5 on the openai provider or claude-sonnet-4-5-20250929 on the anthropic provider, including settings like reasoningEffort, textVerbosity, reasoningSummary, and thinking budgets such as budgetTokens: 16000. These options can also be overridden at the agent level, so agent-specific configs take precedence over global model settings. Variants provide another layer of control by letting users define named configurations like high or low for the same model without duplicating entries, and Opencode ships with built-in variants for providers such as anthropic, openai, and google, with presets spanning none, minimal, low, medium, high, xhigh and different effort or thinking budgets. Users can override or disable built-in variants, add custom ones like thinking or fast, and quickly switch between them at runtime using the variant_cycle keybind.
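The provider options and variants described above can be combined in one config. The sketch below uses only the key names and values mentioned in the text (reasoningEffort, textVerbosity, budgetTokens: 16000, a custom high variant); the exact nesting under provider and models is an assumption about the schema, not a verbatim excerpt from the opencode documentation:

```jsonc
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5": {
          // Global options applied whenever this model is selected.
          "options": {
            "reasoningEffort": "medium",
            "textVerbosity": "low"
          },
          // A named variant; switchable at runtime via the variant_cycle keybind.
          "variants": {
            "high": {
              "options": { "reasoningEffort": "high" }
            }
          }
        }
      }
    },
    "anthropic": {
      "models": {
        "claude-sonnet-4-5-20250929": {
          "options": {
            // Extended-thinking budget, as cited in the text.
            "thinking": { "budgetTokens": 16000 }
          }
        }
      }
    }
  }
}
```

Because agent-level configs take precedence, an agent that sets its own options for gpt-5 would override the global block above for that agent only.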
