llm-anthropic 0.25 updates support for Anthropic models in the LLM tool ecosystem, adding one new model alongside several output and configuration changes. The new model is claude-opus-4.7, which supports thinking_effort: xhigh. The release focuses on expanding the available model options while refining how reasoning-related output is displayed and managed.
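With the LLM CLI, model options like thinking_effort are passed as -o key value pairs. The sketch below builds such an invocation as a string rather than running it; the function name build_llm_command and the prompt text are illustrative, while the -m/-o flag syntax follows the standard llm CLI conventions.

```python
import shlex

def build_llm_command(model, prompt, **options):
    # Assemble an `llm` CLI invocation; each -o key value pair sets a model option.
    cmd = ["llm", "-m", model]
    for key, value in options.items():
        cmd += ["-o", key, str(value)]
    cmd.append(prompt)
    return shlex.join(cmd)

# Request the new model with the new extra-high thinking effort level.
print(build_llm_command("claude-opus-4.7", "Summarize this file", thinking_effort="xhigh"))
```

shlex.join quotes the prompt safely, so the resulting string can be pasted into a shell as-is.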
The update also introduces two new boolean options, thinking_display and thinking_adaptive, giving users more control over how model reasoning information is presented. The summarized output produced by thinking_display is currently available only in JSON output or JSON logs; it is scoped to structured output formats rather than standard text responses.
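Since the summarized thinking only surfaces in JSON output or logs, a consumer has to parse it out of the structured record. A minimal sketch, assuming a hypothetical record shape with "response" and "thinking" fields (the real field names in llm's JSON logs may differ):

```python
import json

# Hypothetical JSON record for illustration; actual llm log field names may differ.
raw_log = """
{
  "response": "The regex matches nested quotes.",
  "thinking": "Considered two parsing strategies before choosing the simpler one."
}
"""

record = json.loads(raw_log)
# Summarized thinking appears only in JSON output/logs, not inline in text responses.
summary = record.get("thinking")  # None when no thinking was recorded
answer = record["response"]
print(summary)
```

Using .get() keeps the consumer robust for responses where no thinking summary was emitted.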
llm-anthropic 0.25 also increases the default max_tokens to the maximum allowed for each model, raising default output capacity without requiring manual adjustment for individual models. Finally, the release stops sending the obsolete structured-outputs-2025-11-13 beta header for older models, removing a legacy implementation detail from earlier behavior.
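The per-model default can be pictured as a lookup table consulted only when the user supplies no override. The token values and model IDs below are hypothetical placeholders, not Anthropic's real limits, and the function is a sketch of the behavior rather than the plugin's actual implementation:

```python
# Hypothetical per-model caps for illustration; real limits come from Anthropic's docs.
MODEL_MAX_TOKENS = {
    "claude-opus-4.7": 64000,
    "claude-3-5-haiku-latest": 8192,
}

FALLBACK_MAX_TOKENS = 4096  # hypothetical fallback for unknown model IDs

def default_max_tokens(model_id, override=None):
    # Sketch of the 0.25 behavior: default to the model's maximum unless overridden.
    if override is not None:
        return override
    return MODEL_MAX_TOKENS.get(model_id, FALLBACK_MAX_TOKENS)
```

An explicit max_tokens option still wins, so the change only affects requests that previously relied on a lower hard-coded default.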
