NVIDIA Showcases Next-Gen Multimodal Generative AI Research at ICLR 2025

NVIDIA unveils groundbreaking Artificial Intelligence models for audio, robotics, and multimodal applications at ICLR 2025, pushing generative innovation across industries.

NVIDIA Research is at the forefront of advancing Artificial Intelligence through a comprehensive approach that spans cutting-edge computing infrastructure, optimized compilers, novel algorithms, and transformative applications. Presenting over 70 papers at the International Conference on Learning Representations (ICLR) 2025 in Singapore, the company is showcasing technological strides intended to deliver new capabilities in fields such as autonomous systems, healthcare, content creation, and robotics.

Key research highlights include Fugatto, an advanced audio generative model adept at creating or transforming combinations of music, voice, and sounds from mixed text and audio prompts, redefining the possibilities in sound synthesis. Robotics development is propelled by the HAMSTER project, which leverages hierarchical designs in vision-language-action models to enable knowledge transfer from data that doesn't rely on costly real-world robot collection. Meanwhile, Hymba introduces a family of small language models built on a hybrid architecture that blends transformer and state space models, delivering higher throughput, improved recall, and efficient memory usage without compromising accuracy.

Innovations in visual understanding are advanced through LongVILA, which enables efficient training of visual-language models on long-form video data and achieves state-of-the-art results across multiple benchmarks. On the language modeling front, LLaMaFlex introduces a novel compression technique for large language models, outperforming several existing methods and significantly reducing computational costs. In computational biology, Proteina presents new capabilities for generating designable protein backbone structures using deep transformer architectures. Other notable progress includes the SRSA framework, which enhances robotic learning by enabling task adaptation from preexisting skill libraries, and STORM, capable of reconstructing dynamic 3D outdoor scenes swiftly from minimal input—vital for autonomous vehicle development.

NVIDIA Research's 400-member team continues to drive global innovation across computer architecture, generative technologies, graphics, self-driving systems, and robotics, cementing the company's role as a pivotal contributor to the next generation of Artificial Intelligence across diverse sectors.

Impact Score: 83

