Nvidia and ComfyUI bring streamlined local artificial intelligence video tools to creators

Nvidia is rolling out new ComfyUI features and upscaling tools that speed up local artificial intelligence video generation and make node-based workflows more approachable for game developers and artists.

Nvidia is expanding its local artificial intelligence tooling for creators at the Game Developers Conference in San Francisco, focusing on faster, more accessible video generation on RTX GPUs and the Nvidia DGX Spark desktop supercomputer. ComfyUI, a popular node-based generative tool, now offers an App View that presents workflows through a simplified interface so users can type a prompt, tweak a few parameters and generate results without dealing with node graphs. The traditional Node View remains available, and users can switch between App View and Node View depending on their comfort level and project needs.

The ComfyUI App View builds on existing RTX optimizations: performance on RTX GPUs is reported as 40% faster since September, and ComfyUI now adds native support for the NVFP4 and FP8 data formats. Combined, these changes deliver 2.5x faster generation with 60% less VRAM using the NVFP4 format on Nvidia GeForce RTX 50 Series GPUs, and 1.7x faster generation with 40% less VRAM using FP8. New FP8 model variants are available directly in ComfyUI for LTX-2.3 (with NVFP4 support coming soon) and for FLUX.2 Klein 4B and FLUX.2 Klein 9B, which can be pulled from Hugging Face and swapped into default ComfyUI templates via the Template Browser.

To tackle the trade-offs between speed, VRAM and control in high-resolution workflows, Nvidia is also releasing RTX Video Super Resolution as a ComfyUI node for rapid 4K upscaling of generated clips. The same artificial intelligence upscaling technology is available to developers as a free Python package on the PyPI repository, supported by sample code on GitHub and a VFX Python bindings guide. Running on RTX GPU Tensor Cores, it delivers 4K upscaling 30x faster than other popular local upscalers at a fraction of the VRAM cost. Around GDC, Nvidia is also highlighting a broader RTX artificial intelligence ecosystem, including the LTX Desktop local video editor optimized for Nvidia GPUs, LM Link for remote model execution across devices, upcoming DLSS 4.5 overrides for GeForce RTX 50 Series GPUs, a forthcoming RTX Remix update with Advanced Particle VFX for modders, and Topaz Labs' NeuroStream VRAM optimization that helps complex artificial intelligence models run on consumer hardware.

Impact Score: 55

