Model catalog – LM Studio

A curated catalog of local models for on-device AI, listing open-source and Mixture-of-Experts foundation models from OpenAI, Baidu, and other vendors.

The LM Studio model catalog collects new and noteworthy local models you can run on your own machine. Entries include short descriptions, available parameter sizes, capability badges, and licensing notes. The catalog emphasizes models designed for on-device deployment and highlights options for vision, coding, reasoning, safety classification, and Mixture-of-Experts architectures.

OpenAI appears with two related entries: gpt-oss and gpt-oss-safeguard, presented as OpenAI's first open-source large language model offerings, available in 20B and 120B sizes. gpt-oss is described as supporting configurable reasoning effort (low, medium, high), trained for tool use, and released under the Apache 2.0 license. gpt-oss-safeguard extends that family with open safety models trained to classify text content against customizable policies. The catalog also surfaces vision-language models such as Qwen3-VL and qwen2.5-vl, which include multi-size variants and upgrades to visual perception, spatial reasoning, and long-context support.

Other highlights include Granite 4.0, described as lightweight, multilingual, and suitable for coding, retrieval-augmented generation, tool use, and JSON output; seed-oss from ByteDance, a 36B advanced reasoning model with a flexible "thinking budget"; and Ernie-4.5 from Baidu, a medium-size Mixture-of-Experts foundation model. The list further features edge-focused LFM2, Mistral and Mistral-derived coding models such as devstral and codestral, Google's gemma family for combined image and text input, phi-4 reasoning variants, and distilled or specialized models like deepseek-r1. Each entry shows available parameter counts, short capability blurbs, and recent update indicators so users can compare sizes, modalities, and intended use cases for local AI deployments.
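Models from the catalog can also be driven programmatically: once a model is downloaded, LM Studio can expose an OpenAI-compatible local server (by default at http://localhost:1234/v1). The sketch below builds a chat-completion payload and shows how it would be sent; the model identifier and the "Reasoning: …" system hint for gpt-oss's configurable reasoning effort are illustrative assumptions, not catalog specifics, and the actual request requires the local server to be running.

```python
import json
import urllib.request

# Default address of LM Studio's OpenAI-compatible local server (assumption:
# server enabled with default settings).
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model: str, prompt: str, reasoning: str = "low") -> dict:
    """Build an OpenAI-style chat completion payload.

    The 'Reasoning: <level>' system hint mirrors how gpt-oss's reasoning
    effort (low/medium/high) is commonly steered; treat it as an
    illustrative convention rather than a documented catalog feature.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": f"Reasoning: {reasoning}"},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }


def send_chat_request(payload: dict) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    payload = build_chat_request(
        "openai/gpt-oss-20b",  # hypothetical identifier for a downloaded model
        "Summarize Mixture-of-Experts in one sentence.",
        reasoning="high",
    )
    # Uncomment once LM Studio's local server is running with the model loaded:
    # print(send_chat_request(payload))
```

Because the payload follows the OpenAI chat-completions shape, the same request works against any catalog model the server has loaded; only the `model` string changes.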

AMD shortens FidelityFX Super Resolution to FSR

AMD has quietly shortened the name of FidelityFX Super Resolution to FSR on its official product page, without a formal announcement. The change surfaces ahead of the FSR Redstone demo, which will detail AI and machine learning features along with game support.

Phison expands infrastructure to accelerate AI workloads

Phison has introduced next-generation PCIe Gen 5 enterprise SSDs, the Pascari X201 and Pascari D201, and demonstrated AI agents running on an integrated GPU using its aiDAPTIV+ GPU memory extension technology. The portfolio targets IT departments, universities, hyperscalers, and enterprise data centers with low latency and scalable performance.

Intel confirms AVX10.2 and APX support in Nova Lake

The 60th edition of Intel’s architecture manual confirms that Nova Lake will support AVX-512 and the AVX10.2 superset, ending months of speculation. The move means both major x86 vendors will offer native 512-bit vector processing in desktop CPUs.
