NVIDIA Showcases Next-Gen Multimodal Generative AI Research at ICLR 2025

NVIDIA unveils groundbreaking Artificial Intelligence models for audio, robotics, and multimodal applications at ICLR 2025, pushing generative innovation across industries.

NVIDIA Research is at the forefront of advancing Artificial Intelligence through a comprehensive approach that spans cutting-edge computing infrastructure, optimized compilers, novel algorithms, and transformative applications. Presenting over 70 papers at the International Conference on Learning Representations (ICLR) 2025 in Singapore, the company is showcasing technological strides intended to deliver new capabilities in fields such as autonomous systems, healthcare, content creation, and robotics.

Key research highlights include Fugatto, an advanced audio generative model adept at creating or transforming combinations of music, voice, and sounds from mixed text and audio prompts, redefining the possibilities in sound synthesis. Robotics development is propelled by the HAMSTER project, which leverages hierarchical designs in vision-language-action models to transfer knowledge from data sources that avoid costly real-world robot data collection. Meanwhile, Hymba introduces a family of small language models built on a hybrid architecture that blends transformer and state space models, providing higher throughput, improved recall, and efficient memory usage without compromising accuracy.
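Hymba's actual architecture is detailed in the paper; as a rough conceptual sketch only (not NVIDIA's implementation), a hybrid block can run an attention mixer and a state-space mixer over the same tokens and combine their outputs. Attention gives all-to-all recall at quadratic cost, while the state-space recurrence carries context in a fixed-size hidden state at linear cost. All names and dimensions below are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(x, Wq, Wk, Wv):
    # Toy single-head self-attention: global token-to-token mixing.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def ssm_scan(x, a, B, C):
    # Toy diagonal state-space recurrence: h_t = a*h_{t-1} + B^T x_t,
    # y_t = C^T h_t. Constant memory per step, linear in sequence length.
    h = np.zeros(B.shape[1])
    out = []
    for x_t in x:
        h = a * h + x_t @ B
        out.append(h @ C)
    return np.stack(out)

rng = np.random.default_rng(0)
T, d, n = 6, 8, 4                       # tokens, model dim, state dim
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
B, C = rng.normal(size=(d, n)), rng.normal(size=(n, d))
a = 0.9                                 # decay of the hidden state

# Hybrid block: run both mixers on the same input and sum their outputs.
y = attention_head(x, Wq, Wk, Wv) + ssm_scan(x, a, B, C)
print(y.shape)  # (6, 8)
```

The appeal of such hybrids is that the state-space path keeps per-token cost and memory flat as sequences grow, while the (smaller) attention path preserves precise recall of earlier tokens.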

Innovations in visual understanding are advanced through LongVILA, enabling efficient training of visual-language models on long-form video data and achieving state-of-the-art results across multiple benchmarks. On the language modeling front, LLaMaFlex introduces a novel compression technique for large language models, outperforming several existing methods and significantly reducing computational costs. In computational biology, Proteina presents new capabilities for generating designable protein backbone structures using deep transformer architectures. Other notable progress includes the SRSA framework, which enhances robotic learning by enabling task adaptation from preexisting skill libraries, and STORM, capable of reconstructing dynamic 3D outdoor scenes swiftly from minimal input, a capability vital for autonomous vehicle development.
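LLaMaFlex's specific compression algorithm is described in the paper; as a generic illustration of the broader idea of shrinking model weights (explicitly not LLaMaFlex's method), the classic magnitude-pruning baseline simply zeroes the smallest-magnitude entries of a weight matrix. Every name and threshold here is illustrative:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    A classic baseline for compressing neural-network layers; modern
    LLM compression techniques are considerably more sophisticated.
    """
    k = int(W.size * sparsity)                  # number of weights to drop
    thresh = np.sort(np.abs(W), axis=None)[k]   # k-th smallest magnitude
    return np.where(np.abs(W) < thresh, 0.0, W)

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))                   # a toy dense layer
Wp = magnitude_prune(W, 0.5)                    # drop half the weights
print(float(np.mean(Wp == 0)))                  # fraction pruned, ~0.5
```

Sparse or pruned weights reduce storage and, with suitable kernels, inference cost, which is the computational saving such compression work targets.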

NVIDIA Research's 400-member team continues to drive global innovation across computer architecture, generative technologies, graphics, self-driving systems, and robotics, cementing the company's role as a pivotal contributor to the next generation of Artificial Intelligence across diverse sectors.


NVIDIA launches Nemotron 3 Nano Omni for enterprise agents

NVIDIA has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech, and language. The launch extends NVIDIA's push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.

Generative Artificial Intelligence is reshaping cybercrime less than feared

Research into criminal underground forums suggests generative Artificial Intelligence is being used mainly as a productivity tool rather than a transformative criminal breakthrough. The biggest near-term risks may come from automation, fraud support, and attackers adapting content to influence chatbot outputs.
