DeepSeek Stirs Anticipation with Rumored R2 Model Breakthrough

Chinese start-up DeepSeek fuels intense online discussion as rumors circulate about the launch and capabilities of its next open-source artificial intelligence model, R2.

Chinese start-up DeepSeek is at the center of mounting online speculation as social media buzz grows over the impending release of its next open-source artificial intelligence model, DeepSeek-R2. The company, known for its cost-efficient approach to model development, has not officially confirmed details of R2's launch, but online discussion suggests the new model could bring significant gains in performance and cost savings, raising anticipation across the tech sector against the backdrop of the ongoing US-China tech rivalry.

Interest in DeepSeek surged after the company's rapid emergence in late 2024 and early 2025, when it introduced two advanced open-source artificial intelligence models, V3 and R1. Both drew attention for being developed at a fraction of the cost, and with far fewer computing resources, than global tech giants require for comparable large language model (LLM) projects. Such LLMs underpin generative artificial intelligence applications like ChatGPT, which have become central to both industry and consumer use cases.

According to recent posts circulating on Chinese stock trading social platforms, the upcoming R2 model is reportedly based on a hybrid mixture-of-experts (MoE) architecture with a massive 1.2 trillion parameters. This architecture divides the model into specialized sub-networks, or "experts," that handle different aspects of data processing, substantially reducing computation during pre-training and speeding up inference, since only a few experts are activated for any given input. Notably, R2 is claimed to be up to 97.3 per cent less expensive to build than OpenAI's GPT-4o. If substantiated, these rumors could position DeepSeek R2 as a transformative player in global artificial intelligence competition, particularly as Chinese start-ups vie to lessen dependence on Western technologies amid ongoing international tech tensions.
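The routing idea behind a mixture-of-experts layer can be illustrated with a toy sketch. The dimensions, weights, and the simple top-k softmax gate below are illustrative assumptions for clarity, not DeepSeek's actual (unpublished) R2 design; the point is only that each token activates a small subset of the experts, so most parameters sit idle on any given input.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_FF, N_EXPERTS, TOP_K = 16, 32, 8, 2  # toy sizes, not R2's

# Each "expert" is a small two-layer feed-forward network.
W1 = rng.normal(0, 0.02, (N_EXPERTS, D_MODEL, D_FF))
W2 = rng.normal(0, 0.02, (N_EXPERTS, D_FF, D_MODEL))
W_gate = rng.normal(0, 0.02, (D_MODEL, N_EXPERTS))  # router weights

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ W_gate                              # (tokens, experts)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]   # chosen experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = topk[t]
        weights = np.exp(logits[t, chosen])
        weights /= weights.sum()                     # softmax over selected experts
        for w, e in zip(weights, chosen):
            h = np.maximum(x[t] @ W1[e], 0)          # expert FFN with ReLU
            out[t] += w * (h @ W2[e])
    return out

tokens = rng.normal(size=(4, D_MODEL))
y = moe_layer(tokens)
print(y.shape)  # (4, 16): only 2 of 8 experts run per token
```

With top-2 routing over 8 experts, each token touches roughly a quarter of the layer's parameters, which is the mechanism behind the reduced training and inference compute the rumors emphasize, scaled here down from trillions of parameters to a few thousand.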
