Meta Introduces Llama 4 Model with Scout and Maverick

Meta Platforms unveils Llama 4, featuring models Scout and Maverick, enhancing its Artificial Intelligence capabilities.

Meta Platforms has announced the release of Llama 4, the latest iteration in its series of large language models, designed to advance its capabilities in Artificial Intelligence. The new models, named Llama 4 Scout and Llama 4 Maverick, represent Meta's ongoing efforts to expand and refine its AI product offerings, leveraging cutting-edge computational techniques.

The unveiling took place on April 5, where Meta highlighted the transformative potential of Scout and Maverick. These models are expected to bolster Meta's existing portfolio by providing enhanced multimodal functionalities, catering to the growing demand for more sophisticated language processing and AI-driven interactions.

With Llama 4, Meta aims to set a new standard in the industry by focusing on robust performance and adaptability. By integrating new AI technologies, the company seeks to strengthen its position against competitors in a rapidly evolving tech landscape where natural language understanding and machine learning are more critical than ever.


LLM-PIEval: a benchmark for indirect prompt injection attacks in large language models

Large language models have driven renewed interest in Artificial Intelligence, and their integration with external tools introduces risks such as direct and indirect prompt injection. LLM-PIEval provides a framework and test set to measure indirect prompt injection risk, and the authors release API specifications and prompts to support wider assessment.

NVIDIA may stop bundling memory with GPU kits amid GDDR shortage

NVIDIA is reportedly considering supplying only bare silicon to its AIC partners rather than the usual GPU-and-memory kit as GDDR shortages constrain fulfillment. The move follows wider industry pressure from soaring DRAM prices and an impending price increase from AMD of about 10% across its GPU lineup.

SK Hynix to showcase 48 Gb/s 24 Gb GDDR7 for Artificial Intelligence inference

SK Hynix will present a 24 Gb GDDR7 chip rated for 48 Gb/s at ISSCC 2026, claiming a symmetric dual-channel design and updated internal interfaces that push past the expected 32 to 37 Gb/s. The paper positions the device for mid-range Artificial Intelligence inference and SK Hynix will also show LPDDR6 running at 14.4 Gb/s.
