AMD expands open artificial intelligence software ecosystem with Brium acquisition

AMD acquires Brium to further strengthen its open Artificial Intelligence software ecosystem, targeting performance and developer empowerment.

AMD has announced the acquisition of Brium, a firm renowned for its expertise in compiler technologies and Artificial Intelligence software development. Brium's team will bring advances in machine learning, Artificial Intelligence inference, and performance optimization to AMD's offerings, strengthening AMD's open software ecosystem. Their specialized knowledge spans compiler technology, model execution frameworks, and streamlined end-to-end inference solutions, all crucial for deploying Artificial Intelligence workloads effectively at scale.

The move is a deliberate part of AMD's broader strategy to foster long-term innovation and empower developers working on the next generation of intelligent applications. With Brium now part of its portfolio, AMD reiterates its dedication to building a high-performance open-source ecosystem that truly leverages the potential of its hardware, particularly in Artificial Intelligence development and deployment scenarios.

This transaction marks the latest in a string of targeted investments for AMD, which previously acquired companies such as Silo AI, Nod.ai, and Mipsology. Collectively, these acquisitions underscore AMD's commitment to supporting the open-source community and delivering optimized efficiency and adaptability on its hardware platforms. Brium's integration represents another decisive step toward making AMD a formidable player in the competitive landscape where Artificial Intelligence hardware and software converge.

Impact Score: 68

Intel unveils massive artificial intelligence processor test vehicle showcasing advanced packaging

Intel Foundry has revealed an experimental artificial intelligence chip test vehicle that uses an 8-reticle-sized package with multiple logic and memory tiles to demonstrate its latest manufacturing and packaging capabilities. The design highlights how Intel intends to build next-generation multi-chiplet artificial intelligence and high-performance computing processors with advanced interconnects and power delivery.

Reward models inherit value biases from large language model foundations

New research shows that reward models used to align large language models inherit systematic value biases from their pre-trained foundations, with Llama and Gemma models diverging along agency and communion dimensions. The work raises fresh safety questions about treating base model choice as a purely technical performance decision in Artificial Intelligence alignment pipelines.
