The LM Studio model catalog collects new and noteworthy local models you can run on your own machine. Each entry includes a short description, available parameter sizes, capability badges, and licensing notes. The catalog emphasizes models designed for on‑device deployment and highlights options for vision, coding, reasoning, safety classification, and Mixture‑of‑Experts architectures.
OpenAI appears with two related entries: gpt‑oss and gpt‑oss‑safeguard, presented as OpenAI's first open‑source large language models, each available in 20B and 120B sizes. gpt‑oss is described as supporting configurable reasoning effort (low, medium, or high), trained for tool use, and released under the Apache 2.0 license. gpt‑oss‑safeguard extends that family with open safety models trained to classify text content against customizable policies. The catalog also surfaces vision‑language models such as Qwen3‑VL and qwen2.5‑vl, which ship in multiple sizes and bring upgrades to visual perception, spatial reasoning, and long‑context support.
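To make the local-deployment idea concrete, here is a minimal sketch of assembling a chat request for a locally served model. LM Studio exposes an OpenAI-compatible local server; the base URL, the `openai/gpt-oss-20b` model identifier, and the convention of passing reasoning effort as a system hint are all assumptions for illustration, not a documented contract.

```python
import json

# Assumed default address of LM Studio's OpenAI-compatible local server;
# check your own server settings before using it.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_request(model: str, prompt: str, reasoning: str = "medium") -> dict:
    """Assemble a chat-completions payload.

    Passing reasoning effort ("low" / "medium" / "high") via a system
    message is one plausible convention for gpt-oss-style models; the
    exact mechanism may differ per model and server version.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"Reasoning: {reasoning}"},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }

# Hypothetical model identifier; substitute whatever name your local
# server reports for the model you downloaded from the catalog.
payload = build_request(
    "openai/gpt-oss-20b",
    "Summarize Mixture-of-Experts in one sentence.",
    reasoning="low",
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the local endpoint with any HTTP client; because the server speaks the OpenAI wire format, existing OpenAI client libraries can be pointed at it by overriding their base URL.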
Other highlights include Granite 4.0, described as lightweight, multilingual, and suited to coding, retrieval‑augmented generation, tool use, and JSON output; seed‑oss from ByteDance, a 36B advanced‑reasoning model with a flexible “thinking budget”; and Ernie‑4.5 from Baidu, a medium‑sized Mixture‑of‑Experts foundation model. The list further features the edge‑focused LFM2, Mistral and Mistral‑derived coding models such as devstral and codestral, Google's gemma family for combined image and text input, phi‑4 reasoning variants, and distilled or specialized models like deepseek‑r1. Each entry shows available parameter counts, short capability blurbs, and recent‑update indicators so users can compare sizes, modalities, and intended use cases for local AI deployments.
