Large Language Models can run tools in your terminal with LLM 0.26

LLM 0.26 introduces tool support, enabling Large Language Models to execute custom Python functions and plugin-provided tools directly from the terminal or the Python library, making it far easier to extend models with custom capabilities in Artificial Intelligence workflows.

The release of LLM 0.26 is a major update to the LLM command-line tool and Python library, introducing full support for tools that let models execute Python functions and plugin-based utilities directly from the command-line interface (CLI) or from Python code. Users can now grant models from OpenAI, Anthropic, Gemini, and local models run with Ollama access to any capability that can be expressed as a Python function, bridging a critical gap between language models and bespoke computational tasks.

Key features include the ability to install tool plugins that add capabilities across supported providers, the --tool (or -T) option for invoking installed tools by name, and the --functions option for passing ad-hoc Python functions on the command line. Tools work in both synchronous and asynchronous contexts. The Python API mirrors the CLI, supporting complex tool interactions, including stepwise 'chain' execution in which a model can request tool calls, receive the results, and iterate until it reaches a final answer. Example use cases range from simple version checks and time lookups to mathematical evaluation, JavaScript execution, and direct SQL queries, offered through dedicated plugins such as llm-tools-simpleeval, llm-tools-quickjs, llm-tools-sqlite, and llm-tools-datasette.
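
To make the Python side concrete, here is a minimal sketch of passing an ad-hoc function as a tool through the chain() interface described above; the model alias, prompt, and multiply function are placeholders, and a configured key for a tool-capable model is assumed.

    import llm

    def multiply(x: int, y: int) -> int:
        """Multiply two integers and return the result."""
        return x * y

    # Placeholder model alias; any tool-capable model configured in LLM should work here.
    model = llm.get_model("gpt-4.1-mini")

    # chain() lets the model request tool calls, receive the results,
    # and keep going until it produces a final answer.
    response = model.chain(
        "What is 34234 * 213345? Use the multiply tool.",
        tools=[multiply],
    )
    print(response.text())

The same ad-hoc function could instead be supplied on the command line as a string via --functions, or a named plugin tool could be selected with --tool.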

LLM 0.26’s architecture abstracts tool use across a rapidly maturing ecosystem, aligning with developments such as the Model Context Protocol (MCP) for interoperability. Model vendors including OpenAI, Anthropic, Google, and Mistral now broadly support function calling and tool use, which makes this release particularly timely. The changelog highlights contributions such as improved plugin documentation, enhanced logging, and Python API support for both ‘single tool’ functions and grouped ‘toolbox’ constructs. The project’s roadmap includes support for more plugin types, better interfaces for inspecting tool execution logs, and full MCP client integration. The release positions LLM as a flexible, model-agnostic platform for users and developers to extend the capabilities of modern language models across Artificial Intelligence-driven workflows.
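
As a rough sketch of the ‘toolbox’ construct, the example below groups related tools in a class, modeled on the toolbox pattern in the release; the Notes class, its methods, and the model alias are illustrative assumptions, with the working assumption that public methods of an llm.Toolbox subclass are exposed to the model as tools while underscore-prefixed members stay internal.

    import llm

    class Notes(llm.Toolbox):
        # Shared private state; assumed not to be exposed as a tool because of the leading underscore.
        _notes = None

        def _get_notes(self):
            if self._notes is None:
                self._notes = []
            return self._notes

        def add_note(self, text: str) -> str:
            "Store a note and confirm it was saved."
            notes = self._get_notes()
            notes.append(text)
            return f"Saved note #{len(notes)}"

        def list_notes(self) -> str:
            "Return all stored notes, one per line."
            return "\n".join(self._get_notes()) or "No notes yet."

    model = llm.get_model("gpt-4.1-mini")  # placeholder model alias
    response = model.chain(
        "Add a note that says 'try the new toolbox support', then list my notes.",
        tools=[Notes()],
    )
    print(response.text())

A single toolbox instance supplies several related tools that share state, which is what distinguishes it from the single-function case shown earlier.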

Impact Score: 72

Nvidia chief projects chip sales growth

Nvidia’s chief executive is reported to have projected massive future Artificial Intelligence chip revenue, but the available source material provides no details beyond the headline and a brief author description.

Can world models unlock general-purpose robotics?

World models aim to help robots learn physics from large-scale video instead of relying mainly on hand-built simulators and scarce robot-specific data. Early results are promising, but major questions remain around consistency, tactile sensing, speed, and economics.

HHS weighs clinical Artificial Intelligence adoption with a focus on trust and burden

HHS is using public feedback to shape how Artificial Intelligence should be adopted in clinical care, with a focus on provider burden, patient trust, interoperability, and responsible use. The department is signaling that future changes in regulation, reimbursement, and research will reflect the themes that emerge.

Designing carbon materials with Artificial Intelligence at exascale

Argonne researchers are using supercomputers and Artificial Intelligence to predict how carbon changes under extreme heat and pressure. The work could help design nanocarbon materials for medicine, energy, and national security before they are built in the lab.

NVIDIA unveils RTX PRO 4500 Blackwell server edition GPU

NVIDIA has introduced a passively cooled, single-slot RTX PRO 4500 Blackwell Server Edition aimed at compute-dense server deployments. The card closely matches the standard RTX PRO 4500 Blackwell while lowering power and memory speed to fit hyper-dense configurations.
