NVIDIA updates Project G-Assist with lighter model, halves VRAM requirement

NVIDIA cut Project G-Assist's video memory requirement in half by shipping a lighter AI model and smarter tool-calling, enabling the assistant to run on many 8 GB and 6 GB GPUs.

NVIDIA used Gamescom 2025 to reveal a major update to Project G-Assist, its locally accelerated chatbot and utility for gamers. The company says the new model lowers the video memory threshold from 12 GB or higher down to just 6 GB. That change comes from a model redesign that reduces the memory footprint by about 40 percent, combined with improved tool-calling intelligence, according to NVIDIA.

The immediate hardware effect is broader compatibility. Cards that previously fell short of the minimum requirement are now supported, including 11 GB cards such as the RTX 2080 Ti, 10 GB cards such as the RTX 3080, a wide range of 8 GB models, and even 6 GB GPUs like the RTX 2060. NVIDIA stressed that the update should be especially useful for laptops and other form factors where video memory is often smaller than on desktop variants. In short, more systems can now run G-Assist locally without needing the highest-end graphics memory configurations.

Functionally, G-Assist remains an in-game helper for diagnostics and optimization. Players can invoke it during play to run system checks, optimize game performance, monitor the graphics subsystem, and change GPU or gaming-peripheral settings using simple natural language prompts. The combination of a smaller model and better tool-calling is intended to make those capabilities more responsive and less resource-intensive on supported hardware.

NVIDIA also announced a G-Assist plug-in hub built with Mod.io to encourage third-party extensions and peripheral integrations. The company recently ran a plug-in hackathon and highlighted winning projects, including Omniplay, Launchpad, and the Flux NIM microservice for G-Assist, as examples of what developers can add. Those plug-ins illustrate how the assistant can be extended beyond diagnostics into workflow and device control, and they point toward a growing developer ecosystem around the locally running assistant.

For users, the update means older or midrange systems gain access to an on-device assistant that previously required more memory. For developers and modders, the plug-in hub and hackathon show NVIDIA pushing for a platform approach in which third-party tools complement the base model's capabilities. The company did not announce any hardware-requirement changes beyond the reduced VRAM figure, but the shift signals a clear focus on wider availability and more efficient local deployment of AI helpers in games.


