Post-LLM roadmap charts the future of artificial intelligence with knowledge, collaboration, and co-evolution

A new roadmap highlights how artificial intelligence will evolve beyond large language models by integrating domain knowledge, enabling collaboration, and supporting co-evolution.

A recent paper published in the journal "Engineering" examines how artificial intelligence may transcend the current limitations of large language models, identifying a roadmap focused on knowledge empowerment, model collaboration, and model co-evolution. While large language models have achieved success in processing multimodal data, they remain constrained by challenges such as reliance on outdated information, hallucinations, inefficiency, and opaque decision processes. The authors argue that overcoming these drawbacks will require systematic advancements in how artificial intelligence incorporates external knowledge, how multiple models interact, and how systems evolve together over time.

Knowledge empowerment involves embedding external or domain-specific information directly into the structure and processes of large language models. The paper details methods like designing knowledge-aware loss functions for pre-training, leveraging instruction tuning, retrieval-augmented inference, and deploying knowledge prompting. For example, retrieval-augmented generation fetches relevant external data at inference time, allowing for more accurate and contextually aware outputs. By enhancing factual accuracy and interpretability, these techniques aim to help artificial intelligence systems reason more effectively and provide answers that reflect up-to-date and relevant information.
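The retrieval-augmented generation pattern described above can be sketched in a few lines: score stored passages against the query, then prepend the best matches to the prompt so the model answers from fresh evidence. This is an illustrative toy (bag-of-words cosine similarity standing in for a real embedding retriever), not code from the paper; the `corpus` contents and function names are hypothetical.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k passages most similar to the query (toy retriever)."""
    q = Counter(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so generation is grounded in external data."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge store; in practice this would be a vector database.
corpus = [
    "The journal Engineering published a roadmap on post-LLM AI.",
    "Bananas are rich in potassium.",
]
print(build_prompt("What did the journal Engineering publish?", corpus))
```

A production system would swap the bag-of-words scorer for dense embeddings, but the control flow (retrieve at inference time, then condition generation on the result) is the same.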

Model collaboration is another cornerstone of the post-large language model era. This involves merging separate models—through methods such as ensembling, model fusion, or the mixture-of-experts approach—or creating frameworks where specialized smaller models work under the guidance of a large manager model. In practical applications, like image generation, large language models might orchestrate specialized modules to fulfill complex prompt requirements, leveraging each component's strengths for superior results.

Model co-evolution extends these ideas, enabling multiple models to adapt jointly across varying types of heterogeneity, whether in network architectures, tasks, or data environments. Techniques such as parameter sharing, dual knowledge distillation, and federated learning allow different models to learn from each other despite dissimilarities, improving resilience and generalization across domains. The authors cite broad impacts across scientific research, engineering practice, and societal-level applications such as healthcare and traffic systems. In each domain, knowledge-driven, collaborative, and co-evolving artificial intelligence promises more adaptive, insightful, and robust support for complex challenges.
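Knowledge distillation, one of the co-evolution techniques named above, lets a student model learn from a teacher with a different architecture by matching temperature-softened output distributions. The sketch below shows only that core loss term, with made-up logits; real "dual" distillation as the paper describes it would run the transfer in both directions.

```python
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Temperature-scaled softmax; higher temperature yields softer targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits: list[float],
                      student_logits: list[float],
                      temperature: float = 2.0) -> float:
    """KL divergence between softened teacher and student distributions."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s) if p > 0)

# Hypothetical logits from models of differing architectures.
teacher = [3.0, 1.0, 0.2]
student = [2.5, 1.2, 0.4]
print(distillation_loss(teacher, student))
```

Because only output distributions are exchanged, the same idea extends to federated settings where raw data (or even parameters) cannot be shared across sites.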

Finally, the roadmap looks to the emergence of embodied and brain-like artificial intelligence, non-transformer foundational models, and systems that themselves generate new models, highlighting these as promising directions for future research. The study concludes that integrating knowledge, enabling collaboration, and fostering co-evolution are central to the next generation of artificial intelligence, shaping systems that are not only more capable, but also more transparent and aligned with human values and needs.


Inside the UK’s artificial intelligence security institute

The UK’s artificial intelligence security institute has found that popular frontier models can be jailbroken at scale, exposing reliability gaps and security risks for governments and regulated industries that rely on trusted vendors.

Siemens debuts digital twin composer for industrial metaverse deployments

Siemens has introduced digital twin composer, a software tool that builds industrial metaverse environments at scale by merging comprehensive digital twins with real-time physical data, enabling faster virtual decision making. Early deployments with PepsiCo report higher throughput, shorter design cycles, and reduced capital expenditure through physics-accurate simulations and artificial intelligence driven optimization.

Cadence builds chiplet partner ecosystem for physical artificial intelligence and data center designs

Cadence has introduced a Chiplet Spec-to-Packaged Parts ecosystem aimed at simplifying chiplet design for physical artificial intelligence, data center and high performance computing workloads, backed by a roster of intellectual property and foundry partners. The program centers on a physical artificial intelligence chiplet platform and framework that integrates prevalidated components to cut risk and speed commercial deployment.
