Nvidia and ComfyUI bring streamlined local artificial intelligence video tools to creators

Nvidia is rolling out new ComfyUI features and upscaling tools that speed up local artificial intelligence video generation and make node-based workflows more approachable for game developers and artists.

Nvidia is expanding its local artificial intelligence tooling for creators at the Game Developers Conference in San Francisco, focusing on faster, more accessible video generation on RTX GPUs and the Nvidia DGX Spark desktop supercomputer. ComfyUI, a popular node-based generative tool, now offers an App View that presents workflows through a simplified interface so users can type a prompt, tweak a few parameters and generate results without dealing with node graphs. The traditional Node View remains available, and users can switch between App View and Node View depending on their comfort level and project needs.

The ComfyUI App View builds on existing RTX optimizations: performance on RTX GPUs is reported as 40% faster since September, and ComfyUI now adds native support for the NVFP4 and FP8 data formats. Combined, these changes make generation 2.5x faster while cutting VRAM use by 60% with the NVFP4 format on Nvidia GeForce RTX 50 Series GPUs, and 1.7x faster with a 40% VRAM reduction with FP8. New model variants are available directly in ComfyUI for LTX-2.3 (with NVFP4 support coming soon), as well as for FLUX.2 Klein 4B and FLUX.2 Klein 9B, which can be pulled from Hugging Face and swapped into ComfyUI's default templates via the Template Browser.
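To make the reported figures concrete, the arithmetic below applies them to a hypothetical baseline of 100 seconds per clip and 24 GB of VRAM. The baseline values are illustrative assumptions, not numbers from Nvidia; only the 2.5x/1.7x speedups and 60%/40% VRAM reductions come from the announcement.

```python
# Illustrative arithmetic for the reported speedups and VRAM savings.
# The baseline figures below are hypothetical, chosen only to make the math concrete.
baseline_time_s = 100.0   # hypothetical generation time per clip, in seconds
baseline_vram_gb = 24.0   # hypothetical VRAM footprint, in GB

# NVFP4 on GeForce RTX 50 Series: 2.5x faster, 60% less VRAM (per the announcement)
nvfp4_time_s = baseline_time_s / 2.5
nvfp4_vram_gb = baseline_vram_gb * (1 - 0.60)

# FP8: 1.7x faster, 40% less VRAM (per the announcement)
fp8_time_s = baseline_time_s / 1.7
fp8_vram_gb = baseline_vram_gb * (1 - 0.40)

print(f"NVFP4: {nvfp4_time_s:.1f} s per clip, {nvfp4_vram_gb:.1f} GB VRAM")
print(f"FP8:   {fp8_time_s:.1f} s per clip, {fp8_vram_gb:.1f} GB VRAM")
```

Under these assumed baselines, NVFP4 would bring a clip down to 40 seconds and 9.6 GB of VRAM, while FP8 would land at roughly 58.8 seconds and 14.4 GB.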

To tackle the trade-offs between speed, VRAM and control in high-resolution workflows, Nvidia is also making RTX Video Super Resolution available as a ComfyUI node for rapid 4K upscaling of generated clips. The same artificial intelligence upscaling technology is available to developers as a free Python package on PyPI, supported by sample code on GitHub and a VFX Python bindings guide; it runs on RTX GPU Tensor Cores and is claimed to deliver 4K upscaling 30x faster than popular alternative local upscalers at a fraction of the VRAM cost. Around GDC, Nvidia is also highlighting a broader RTX artificial intelligence ecosystem, including the LTX Desktop local video editor optimized for Nvidia GPUs, LM Link for remote model execution across devices, upcoming DLSS 4.5 overrides for GeForce RTX 50 Series GPUs, a forthcoming RTX Remix update with Advanced Particle VFX for modders, and Topaz Labs' NeuroStream VRAM optimization, which helps complex artificial intelligence models run on consumer hardware.


Google expands agentic enterprise push

Google used Cloud Next '26 to position itself as a more integrated enterprise artificial intelligence provider, combining models, infrastructure, security and multicloud data services. The strategy broadens its reach into enterprise software while emphasizing interoperability with rival clouds and platforms.

China still blocking Nvidia H200 chip sales

Nvidia has yet to complete H200 sales into China even after the United States reopened exports. Chinese authorities are reportedly limiting imports as Beijing pushes buyers toward domestic semiconductor suppliers.

OpenAI prepares GPT-5.5 launch

OpenAI is reportedly preparing GPT-5.5, its first fully retrained base model since GPT-4.5, as it pushes harder into enterprise software. The model is expected to bring native multimodal capabilities and stronger support for agent-based workflows.
