NVIDIA Showcases Next-Gen Multimodal Generative AI Research at ICLR 2025

NVIDIA unveils groundbreaking artificial intelligence models for audio, robotics, and multimodal applications at ICLR 2025, pushing generative innovation across industries.

NVIDIA Research is at the forefront of advancing artificial intelligence through a comprehensive approach that spans cutting-edge computing infrastructure, optimized compilers, novel algorithms, and transformative applications. Presenting more than 70 papers at the International Conference on Learning Representations (ICLR) 2025 in Singapore, the company is showcasing advances intended to deliver new capabilities in fields such as autonomous systems, healthcare, content creation, and robotics.

Key research highlights include Fugatto, an advanced generative audio model that can create or transform combinations of music, voices, and sounds from mixed text and audio prompts, redefining the possibilities of sound synthesis. Robotics development is propelled by the HAMSTER project, which uses hierarchical designs for vision-language-action models to transfer knowledge from data that doesn't require costly real-world robot collection. Meanwhile, Hymba introduces a family of small language models built on a hybrid architecture that blends transformer attention with state space models, delivering higher throughput, improved recall, and efficient memory usage without compromising accuracy.

Innovations in visual understanding are advanced through LongVILA, which enables efficient training of vision-language models on long-form video data and achieves state-of-the-art results across multiple benchmarks. On the language modeling front, LLaMaFlex introduces a novel compression technique for large language models that outperforms several existing methods and significantly reduces computational costs. In computational biology, Proteina demonstrates new capabilities for generating viable protein backbone structures using a deep transformer architecture. Other notable progress includes the SRSA framework, which enhances robotic learning by enabling adaptation to new tasks from preexisting skill libraries, and STORM, which can quickly reconstruct dynamic 3D outdoor scenes from minimal input, a capability vital for autonomous vehicle development.

NVIDIA Research's 400-member team continues to drive global innovation across computer architecture, generative technologies, graphics, self-driving systems, and robotics, cementing the company's role as a pivotal contributor to the next generation of artificial intelligence across diverse sectors.

Siemens debuts digital twin composer for industrial metaverse deployments

Siemens has introduced digital twin composer, a software tool that builds industrial metaverse environments at scale by merging comprehensive digital twins with real-time physical data, enabling faster virtual decision making. Early deployments with PepsiCo report higher throughput, shorter design cycles, and reduced capital expenditure through physics-accurate simulations and optimization driven by artificial intelligence.

Cadence builds chiplet partner ecosystem for physical artificial intelligence and data center designs

Cadence has introduced a Chiplet Spec-to-Packaged Parts ecosystem aimed at simplifying chiplet design for physical artificial intelligence, data center and high performance computing workloads, backed by a roster of intellectual property and foundry partners. The program centers on a physical artificial intelligence chiplet platform and framework that integrates prevalidated components to cut risk and speed commercial deployment.

Patch notes detail split compute and IO tiles in Intel Diamond Rapids Xeon 7

Linux kernel patch notes reveal that Intel's upcoming Diamond Rapids Xeon 7 server processors separate compute and IO tiles and add new performance monitoring features alongside PCIe 6.0 support. The changes point to a more modular architecture and a streamlined product stack focused on 16-channel memory configurations.
