Cohere Advances Agentic Search with New Command A Language Model

Cohere unveils Command A, its most advanced large language model, aiming to surpass OpenAI and DeepSeek in agentic search capabilities for enterprise artificial intelligence solutions.

Cohere has launched Command A, its latest and most powerful large language model, signaling a significant evolution in the company's offering for enterprise-focused artificial intelligence applications. With Command A, Cohere emphasizes a robust approach to agentic search, in which autonomous agents powered by large language models conduct complex web queries, aggregate data, and provide nuanced answers for business needs.
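The agentic search pattern described above can be sketched as a loop in which a model alternates between proposing search queries and deciding it has gathered enough evidence to answer. The sketch below is model-agnostic and illustrative only: the `llm` and `search` callables are hypothetical stand-ins, not Cohere's API, and the `SEARCH:`/`ANSWER:` action protocol is an assumption for the example.

```python
from typing import Callable, List

def agentic_search(
    question: str,
    llm: Callable[[str, List[str]], str],
    search: Callable[[str], List[str]],
    max_steps: int = 3,
) -> str:
    """Alternate between model-proposed queries and a search backend
    until the model emits a final answer or the step budget runs out."""
    evidence: List[str] = []
    for _ in range(max_steps):
        # The model either proposes the next query ("SEARCH: ...")
        # or commits to a final answer ("ANSWER: ...").
        action = llm(question, evidence)
        if action.startswith("ANSWER:"):
            return action[len("ANSWER:"):].strip()
        query = action[len("SEARCH:"):].strip()
        evidence.extend(search(query))
    # Budget exhausted: force a final synthesis from collected evidence.
    final = llm(question, evidence + ["(final answer required)"])
    return final.removeprefix("ANSWER:").strip()

# Stub model and search backend, for illustration only.
def stub_llm(question: str, evidence: List[str]) -> str:
    if not evidence:
        return "SEARCH: " + question
    return f"ANSWER: synthesized from {len(evidence)} snippets"

def stub_search(query: str) -> List[str]:
    return [f"snippet about {query}"]

print(agentic_search("Q4 revenue drivers", stub_llm, stub_search))
# → synthesized from 1 snippets
```

In practice the stubs would be replaced by a hosted model endpoint and a web search API; the loop structure itself is what distinguishes agentic search from a single retrieval-then-answer pass.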

According to Cohere, Command A delivers higher performance than competing models from OpenAI and DeepSeek while maintaining a leaner computational footprint. The company claims the model can tackle sophisticated enterprise research tasks, streamline knowledge discovery, and automate advanced workflows. As organizations seek more capable AI agents to search, reason over, and synthesize web-scale information autonomously, Command A positions Cohere at the forefront of this trend, moving beyond simple chatbots toward intelligent, multi-step search and decision-support tools.

The release of Command A aligns with Cohere's strategy to address growing demand for advanced search and reasoning capabilities across sectors such as finance, legal, and scientific research. This push includes integrating agentic search as a core part of enterprise AI infrastructure, allowing companies to extract actionable insights from vast, unstructured content repositories with greater speed and accuracy. By targeting current market leaders and optimizing for enterprise needs, Cohere aims to help its clients unlock the full potential of AI-powered search and automation, redefining how organizations access and act on information at scale.
