Simon Willison’s LLM Tool Adds Major Features in Latest Releases

Simon Willison’s LLM Python library and CLI tool continues to advance, now supporting multimodal prompts, long context, and structured data extraction for Artificial Intelligence workflows.

Simon Willison’s LLM, a Python library and command-line interface for interacting with large language models, has seen a series of rapid developments, adding significant capabilities to better serve developers working with modern Artificial Intelligence models like GPT-4, Llama, Claude, and Gemini. The tool started as a utility for interacting with services like ChatGPT, and has grown into a feature-rich platform supporting a wide range of workflows.

Key releases include LLM 0.5, which introduced plugin support for integrating additional models, including self-hosted ones, expanding its flexibility beyond commercial API providers. By version 0.9, LLM had incorporated tools for working with embeddings, supporting enhanced search and retrieval-augmented generation (RAG) scenarios. Updates in LLM 0.13 and subsequent releases refined compatibility with both local models and service APIs, cementing its status as a versatile tool for research and production use cases.
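The plugin and embedding workflow described above can be sketched from the terminal. This is a minimal sketch, not a definitive recipe: it assumes the llm-gemini plugin, an OpenAI API key for the `3-small` embedding model, and a local `docs/` directory of Markdown files, all of which are placeholders for your own setup.

```shell
# Install a plugin to add models beyond the built-in OpenAI support
# (llm-gemini is one example; it needs its own API key configured).
llm install llm-gemini
llm keys set gemini

# Embed a single string with OpenAI's text-embedding-3-small model.
llm embed -m 3-small -c "What is retrieval-augmented generation?"

# Embed a directory of files into a named collection stored in SQLite,
# then run a similarity search against it -- a building block for RAG.
llm embed-multi docs -m 3-small --files docs/ '*.md' -d embeddings.db
llm similar docs -c "vector search" -d embeddings.db
```

Storing embeddings in a SQLite database keeps the whole retrieval pipeline local and scriptable, which is what makes the tool usable in both research and production contexts.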

Recent milestones stand out for their focus on multimodality and structured data workflows. LLM 0.17 enabled users to send image, audio, and video inputs to models directly from the terminal, pushing LLM into vision and audio Artificial Intelligence applications. Versions 0.22 and 0.23 added support for schemas, letting users extract structured information from unstructured content, alongside updates to key plugins such as llm-anthropic and llm-gemini. LLM 0.24 tackled the challenge of long context windows through fragments and template plugins, optimizing inputs for models that can process extensive documents. Version 0.25 extended video processing by allowing video files to be converted into JPEG frames for analysis by vision models, leveraging new plugin capabilities.
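The multimodal, schema, and fragment features can be illustrated with a few commands. A hedged sketch follows: the file names, URL, and schema fields are invented for illustration, and each command assumes a model with the relevant capability plus a configured API key.

```shell
# Attach an image to a prompt (attachment support arrived in LLM 0.17);
# photo.jpg is a placeholder for any local image file.
llm "Describe this photo" -a photo.jpg

# Extract structured data with the concise schema syntax (LLM 0.23+).
# The field list 'name, bio, age int' is illustrative, not prescribed.
llm --schema 'name, bio, age int' 'Invent a cool dog'

# Assemble long context from a file or URL using fragments (LLM 0.24+);
# the URL here is a placeholder for a real long document.
llm -f https://example.com/long-document.txt 'Summarize the key points'
```

Because fragments are deduplicated and logged separately, repeatedly prompting against the same long document does not bloat the tool's local log database.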

Together, these updates position LLM as a uniquely comprehensive, open-source solution for developers, journalists, and data practitioners seeking advanced control over language models, embeddings, multimodal inputs, context management, and schema-driven information extraction in the evolving Artificial Intelligence landscape.


OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.

BitUnlocker bypasses TPM-only Windows 11 BitLocker

Intrinsec disclosed BitUnlocker, a downgrade attack that can bypass TPM-only Windows 11 BitLocker protections with physical access to a machine. The technique abuses a flaw in Windows recovery and deployment components and relies on older trusted boot code.

Micron samples 256 GB DDR5 9200 MT/s RDIMM server modules

Micron has begun sampling 256 GB DDR5 RDIMM server modules built on its 1-gamma technology to key ecosystem partners. The company positions the new modules as a higher-speed, more power-efficient option for scaling next-generation Artificial Intelligence and HPC infrastructure.
