Meta Debuts Large Concept Models for Multilingual AI

Meta introduces a novel language model architecture that enhances multilingual capabilities through concept-based reasoning.

Large Language Models (LLMs) are now fundamental tools in natural language processing, but they generate output token by token. Meta's research team proposes a paradigm shift with the introduction of Large Concept Models (LCMs), which process language at a conceptual level rather than at the token level. The model achieves substantial improvements in zero-shot generalization across languages, surpassing LLMs of similar size.

The LCM operates within a semantic embedding space named SONAR, which facilitates higher-order conceptual reasoning. This architecture marks a significant departure from traditional approaches and has shown strong performance on semantic similarity tasks and large-scale bitext mining for translation. SONAR uses an encoder-decoder architecture without the usual cross-attention mechanism, relying instead on a fixed-size bottleneck layer. The design combines machine translation objectives, denoising auto-encoding, and a mean squared error loss to enhance semantic consistency.
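To make the bottleneck idea concrete, here is a minimal sketch, not Meta's actual SONAR code: all names, shapes, and the mean-pooling choice are illustrative assumptions. The point is that a variable-length token sequence is compressed into one fixed-size "concept" vector, and the decoder works from that vector alone, with no cross-attention back to encoder states.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_concept = 16, 8                      # illustrative sizes, not SONAR's

W_enc = rng.normal(size=(d_model, d_concept))   # hypothetical encoder projection
W_dec = rng.normal(size=(d_concept, d_model))   # hypothetical decoder projection

def encode(token_embeddings):
    """Mean-pool the sequence, then project into the fixed-size concept space."""
    pooled = token_embeddings.mean(axis=0)       # (d_model,) -- the bottleneck input
    return pooled @ W_enc                        # (d_concept,) -- one vector per sentence

def decode(concept, length):
    """Expand the single concept vector into `length` output states."""
    return np.tile(concept @ W_dec, (length, 1)) # (length, d_model)

tokens = rng.normal(size=(5, d_model))           # a 5-token input sequence
concept = encode(tokens)
recon = decode(concept, length=7)                # output length need not match input

print(concept.shape, recon.shape)                # (8,) (7, 16)
```

Because the concept vector is fixed-size regardless of input length, an MSE loss between concept embeddings (one of the training signals described above) is straightforward to compute.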

LCM's design enables abstract reasoning across languages and modalities, including support for low-resource languages. The system is modular: concept encoders and decoders can be developed independently, so new languages and modalities can be added without retraining the core model. Meta's LCM shows promising results on NLP tasks such as summarization and summary expansion, generating coherent outputs across varied texts and contexts.
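The plug-in structure described above can be sketched as a registry of per-language encoders and decoders that share one concept space. This is a hypothetical illustration, not Meta's API: the registry, the `translate` routing function, and the toy "languages" are all invented here to show why adding a language leaves the concept-level core untouched.

```python
from typing import Callable, Dict, Tuple

ConceptVec = Tuple[str, ...]   # stand-in for a fixed-size concept embedding

encoders: Dict[str, Callable[[str], ConceptVec]] = {}
decoders: Dict[str, Callable[[ConceptVec], str]] = {}

def register(lang: str,
             enc: Callable[[str], ConceptVec],
             dec: Callable[[ConceptVec], str]) -> None:
    """Adding a language means registering an encoder/decoder pair; nothing else changes."""
    encoders[lang], decoders[lang] = enc, dec

def translate(text: str, src: str, tgt: str) -> str:
    """Route through the shared concept space: encode in src, decode in tgt."""
    concept = encoders[src](text)
    return decoders[tgt](concept)

# Toy "languages": concepts are just word tuples here, purely for illustration.
register("en", lambda s: tuple(s.split()), lambda c: " ".join(c))
register("shout", lambda s: tuple(s.upper().split()),
         lambda c: " ".join(w.upper() for w in c))

print(translate("hello world", "en", "shout"))  # HELLO WORLD
```

Any pair of registered languages can now interoperate through the shared space, without the routing core knowing anything language-specific.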


Most UK firms see an Artificial Intelligence training gap as shadow tool use grows

New research finds that 6 in 10 UK businesses say employees lack comprehensive Artificial Intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected Artificial Intelligence returns.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
