Large Language Models can run tools in your terminal with LLM 0.26

LLM 0.26 introduces tool support, enabling Large Language Models to execute custom Python functions and plugin-provided tools directly from the terminal or the Python library, a significant step up in extensibility for Artificial Intelligence workflows.

The release of LLM 0.26 is a major advancement for the LLM command-line tool and Python library, introducing full support for tools that allow models to execute Python functions and plugin-based utilities directly from the command-line interface (CLI) or the Python API. Users can now grant models from OpenAI, Anthropic, and Gemini, as well as local models run via Ollama, access to any capability that can be expressed as a Python function, closing a long-standing gap between language models and bespoke computational tasks.

Key features include the ability to install tool plugins that extend model capabilities across supported providers, the --tool (or -T) option for invoking named tools directly, and the --functions option for passing ad-hoc Python functions on the command line. Tools work in both synchronous and asynchronous contexts. The Python API mirrors the CLI, supporting complex tool interactions, including stepwise 'chain' execution in which a model can request tool calls, receive their results, and iterate until it reaches a final answer. Example use cases range from simple version checks and time lookups to mathematical evaluation, JavaScript execution, and direct SQL queries, offered through dedicated plugins such as llm-tools-simpleeval, llm-tools-quickjs, llm-tools-sqlite, and llm-tools-datasette.
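To illustrate the command-line side, the following sketch uses the --tool/-T and --functions flags described above, along with the llm_version tool that ships as a built-in default in 0.26; the exact model used and its output will depend on your configuration:

```sh
# Install or upgrade LLM, then invoke the built-in llm_version tool.
# --td (tools debug) prints each tool call and its response.
pip install -U llm
llm --tool llm_version "What version of LLM is this?" --td

# Pass an ad-hoc Python function as a tool with --functions.
llm --functions '
def multiply(x: int, y: int) -> int:
    """Multiply two numbers."""
    return x * y
' 'What is 34234 * 213345?'
```

With --td enabled, the output shows the model requesting the tool, the tool's return value, and the final answer composed from it.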

LLM 0.26’s architecture abstracts tool use across a rapidly maturing ecosystem, aligning with advances such as the Model Context Protocol (MCP) for interoperability. Model vendors including OpenAI, Anthropic, Google, and Mistral now all support function calling and tool-usage patterns, making this release particularly timely. The changelog highlights improved plugin documentation, enhanced logging, and the ability of the Python API to handle both ‘single tool’ and ‘toolbox’ constructs. The project’s roadmap includes support for more plugin types, better interfaces for tool execution logs, and full MCP client integration. This release positions LLM as a flexible, model-agnostic platform for users and developers to extend the capabilities of modern language models, fostering innovation across Artificial Intelligence-driven workflows.
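To make the Python side concrete, here is a minimal sketch of the ‘chain’ tool loop described above. It assumes the llm package’s get_model() function and the chain() method added in 0.26; the upper() function is a trivial illustrative tool, and the model ID is an example that requires an OpenAI API key to be configured:

```python
import llm

def upper(text: str) -> str:
    """Convert text to uppercase."""
    return text.upper()

# chain() lets the model request tool calls, feeds the results back,
# and iterates until the model produces a final answer.
model = llm.get_model("gpt-4.1-mini")  # illustrative model ID
response = model.chain(
    "Convert 'hello world' to uppercase using your tool.",
    tools=[upper],
)
print(response.text())
```

The same tools= parameter accepts multiple functions, which is how the ‘toolbox’ construct mentioned in the changelog groups related tools together.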

Impact Score: 72

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation Artificial Intelligence and HPC infrastructure.
