How to run visual generative artificial intelligence locally on Nvidia RTX PCs

Nvidia outlines how creators can use RTX PCs and ComfyUI to run modern visual generative artificial intelligence models like FLUX.2 and LTX-2 locally for image, video and 3D-guided workflows, while managing GPU memory and performance.

The article explains how visual generative artificial intelligence has moved into mainstream creative tools such as Adobe and Canva, and how agencies and studios are increasingly adopting it for image and video production. Creators are shifting these workloads to local PCs to keep assets under direct control, avoid cloud costs and reduce iteration friction, making it easier to refine outputs quickly. Nvidia positions RTX PCs as the preferred systems for creative artificial intelligence because they offer high performance for fast iteration and let users run models locally without token-based usage limits; recent RTX optimizations and new open-weight models introduced at CES are intended to further speed up workflows and expand creative control.

To simplify running visual generative artificial intelligence locally, Nvidia highlights ComfyUI, an open source tool that provides templates and drag-and-drop node workflows. Users install ComfyUI on Windows from comfy.org, launch it, and start with the “1.1 Starter – Text to Image” template, connecting the model node to the Save Image node and pressing Run to generate their first RTX-accelerated image before experimenting with prompt changes. As projects grow, GPU VRAM becomes a key constraint: Nvidia recommends matching model size to available VRAM and using FP4 models on Nvidia GeForce RTX 50 Series GPUs and FP8 models on RTX 40 Series GPUs, since the lower-precision weights occupy less VRAM and run faster. The article walks through the FLUX.2-Dev template in ComfyUI, where the large model weights are downloaded on demand from repositories such as Hugging Face; it notes that FLUX.2 can be >30GB depending on the version and that ComfyUI places the downloaded .safetensors files into the correct folders automatically.
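As a rough illustration of the VRAM guidance above, the following Python sketch estimates how much memory a model's weights alone occupy at different precisions. The 32-billion-parameter figure is a hypothetical example chosen only to match the >30GB scale mentioned for FLUX.2, not an official parameter count; check the model card on Hugging Face for actual file sizes.

```python
# Rough VRAM estimate for diffusion model weights at different precisions.
# This ignores activations, the text encoder and VAE, so treat the numbers
# as lower bounds, not exact requirements.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weight_vram_gb(params_billion: float, precision: str) -> float:
    """Approximate GiB needed just to hold the model weights."""
    bytes_total = params_billion * 1e9 * BYTES_PER_PARAM[precision]
    return bytes_total / 2**30

# Hypothetical 32B-parameter model (illustrative, not FLUX.2's real size).
for prec in ("fp16", "fp8", "fp4"):
    print(f"{prec}: ~{weight_vram_gb(32, prec):.1f} GiB")
```

This arithmetic is why halving the precision (FP8 on RTX 40 Series, FP4 on RTX 50 Series) roughly halves the weight footprint each step, letting the same model fit on GPUs with less VRAM.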

Once the weights are downloaded, users save the loaded template as a Workflow and can reload it from the Workflows window to resume generation with FLUX.2-Dev. The piece offers prompt tips for FLUX.2-Dev, encouraging clear, concrete, short-to-medium prompts that specify subject, setting, style, framing, realism and detail level, while avoiding negative prompting and trimming adjectives if results become too busy. It also describes where ComfyUI stores outputs by default: on Windows standalone installs, C:\ComfyUI\output or a similar path; on Windows desktop installs, paths under C:\Users\%username%\AppData\Local\Programs\@comfyorg\comfyui-electron\resources\ComfyUI\output; and on Linux, typically ~/.config/ComfyUI. For video, the article introduces Lightricks’ LTX‑2 audio-video model, which uses an image plus a text prompt for storyboard-style video generation, and gives detailed guidance on writing prompts that describe shots, action, characters, camera moves, lighting, atmosphere, pacing, style, emotions and audio. Because LTX-2 consumes substantially more VRAM as resolution, frame rate, length or steps increase, ComfyUI and Nvidia have implemented a weight streaming feature that offloads data to system memory when GPU VRAM runs out, at a performance cost, and users are advised to constrain generation settings accordingly.
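The prompt guidance above can be sketched as a small helper that concatenates the recommended fields into a single prompt string. The field names and their ordering are my own illustrative convention, not an LTX-2 or ComfyUI API:

```python
# Minimal sketch: assemble a video prompt from the elements the article
# recommends (shot, action, characters, camera, lighting, atmosphere,
# pacing, style, emotion, audio). Any field may be omitted.

def build_video_prompt(**fields: str) -> str:
    order = ["shot", "action", "characters", "camera", "lighting",
             "atmosphere", "pacing", "style", "emotion", "audio"]
    parts = [fields[k] for k in order if fields.get(k)]
    return ". ".join(parts) + "."

prompt = build_video_prompt(
    shot="Medium close-up of a lighthouse at dusk",
    action="waves crash against the rocks below",
    camera="slow dolly-in",
    lighting="warm golden-hour light",
    audio="distant gulls and rolling surf",
)
print(prompt)
```

Keeping each element a short concrete phrase, as the article advises, makes it easy to drop or trim fields when a result becomes too busy.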

The walkthrough continues with instructions for building a combined workflow that links FLUX.2-Dev image generation directly into an LTX-2 Image to Video workflow, so a single pipeline can produce both the image and the video from text prompts; users copy and connect nodes between the two workflows and save the result under a new name. For more advanced pipelines, Nvidia points to its 3D-guided generative artificial intelligence blueprint, which uses 3D scenes and assets to drive more controllable, production-style image and video pipelines on RTX PCs, and notes that creators can share work and seek support in the Stable Diffusion subreddit and the ComfyUI Discord. The closing section recaps recent RTX artificial intelligence PC advancements announced at CES, including accelerated 4K artificial intelligence video generation on PCs using LTX-2, ComfyUI upgrades, and broader RTX accelerations across tools such as Llama.cpp, Ollama and Hyperlink. It also highlights Black Forest Labs’ FLUX.2 [klein] compact models, which are accelerated by NVFP4 and NVFP8 to boost speed by up to 2.5x and run efficiently across many RTX GPUs, and describes Project G-Assist’s new Reasoning Mode, which improves accuracy, accepts multiple commands at once and lets the assistant control G-SYNC monitors and Corsair peripherals and components, with support coming to Elgato Stream Decks alongside a new Cursor-based plug-in builder for developers.
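For creators who want to drive a saved combined workflow from a script rather than the UI, ComfyUI can accept a workflow graph exported in API format over HTTP (by default at http://127.0.0.1:8188/prompt). The sketch below only constructs the JSON request body; the node ids, class types and checkpoint filename are placeholders for illustration, not the real FLUX.2 + LTX-2 graph, which you would export from your saved workflow in ComfyUI itself:

```python
import json

def make_queue_payload(workflow: dict, client_id: str = "local-script") -> str:
    """Serialize a workflow graph into the JSON body ComfyUI's /prompt
    endpoint expects. Assumes a default local ComfyUI install."""
    return json.dumps({"prompt": workflow, "client_id": client_id})

# Placeholder two-node graph; replace with your exported API-format workflow.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "flux2-dev.safetensors"}},  # hypothetical name
    "2": {"class_type": "SaveImage",
          "inputs": {"images": ["1", 0], "filename_prefix": "combined"}},
}
body = make_queue_payload(workflow)
```

Posting `body` to the endpoint would queue the job; the payload construction is shown separately here so the graph can be inspected before submitting anything.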
