DeepSeek Artificial Intelligence: what to know about the ChatGPT rival

DeepSeek's R1 is an open-source Artificial Intelligence large language model that has quickly displaced ChatGPT on the App Store and claims benchmark wins versus OpenAI's o1, while offering a lower-cost API and a small reported hardware footprint.

In a matter of days after its release, DeepSeek's R1 large language model has become a focal point in the Artificial Intelligence conversation, topping Apple's App Store as the number one free app and generating market turbulence. The company released R1 as an open-source model, and Mashable reports the launch coincided with headlines about R1 displacing ChatGPT on the App Store, prompting an investor reaction and renewed debate about global competition in Artificial Intelligence.

DeepSeek published a report claiming R1 outperformed OpenAI's reasoning model o1 on several advanced math and coding benchmarks, including AIME 2024, MATH-500 and SWE-bench Verified. The company said R1 scored just below o1 on Codeforces and performed near o1 on graduate-level science and general knowledge tests (GPQA Diamond and MMLU). Mashable's Stan Schroeder tested R1 by asking it to code a fairly complex web app that parsed public data and built a dynamic travel and weather site, and reported being impressed with the model's capabilities. The article notes other competitive LLMs exist, such as Anthropic's Claude, Meta's Llama family and Google's Gemini, but frames R1's combination of performance and other attributes as a strong challenge to established models.

DeepSeek emphasized openness and cost as differentiators. Because R1 is open source, programmers can inspect and modify the model, which advocates say helps scale and democratize Artificial Intelligence work. The model is available via a free web app at chat.deepseek.com and an API that DeepSeek says is significantly cheaper than OpenAI's o1. The article lists pricing as ?.14 per one million cached input tokens for DeepSeek's reasoning model versus ?.50 per one million cached input tokens for o1, with the currency symbol not stated in the piece.

For industry observers, the other headline-grabbing claim is resource efficiency. Citing DeepSeek engineers and reporting in The New York Times, the article says R1 required only 2,000 Nvidia chips to train, compared with a reported 10,000 Nvidia GPUs for OpenAI models in 2023. That alleged efficiency contributed to a 13 percent dip in Nvidia's stock on the day. Whether R1 sustains user interest and developer momentum remains uncertain, but the release has clearly intensified competition and shifted attention across the Artificial Intelligence landscape.

AMD targets desktop Artificial Intelligence PCs with Copilot+ chips

AMD has introduced the first desktop processors certified for Microsoft Copilot+, aiming to challenge Intel in x86 PCs as demand for on-device Artificial Intelligence computing rises. The company is also balancing that push with export limits that could constrain advanced chip sales in China.

Governance risk highlights from Infosecurity Magazine

Governance and risk coverage centers on regulation, compliance, cybersecurity policy, and the growing role of Artificial Intelligence in enterprise security. Recent headlines point to pressure on critical infrastructure, standards updates, insider threat guidance, and concerns over guardrails for large language models.

Vals publishes public enterprise language model benchmarks

Vals lists a broad set of public enterprise benchmarks spanning law, finance, healthcare, math, education, academics, coding, and beta agent tasks. The index highlights which models currently lead specific enterprise-focused evaluations and how widely each benchmark has been tested.

MIT method spots overconfident Artificial Intelligence models

MIT researchers developed a way to detect when large language models are confidently wrong by comparing their answers with outputs from similar models. The combined uncertainty measure outperformed standard techniques across a range of tasks and may help reduce unreliable responses.
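The article does not specify the MIT team's exact measure, but the core idea it describes, flagging an answer as unreliable when similar models disagree, can be sketched with a simple disagreement heuristic. The function name and the entropy-based score below are illustrative assumptions, not the researchers' actual method.

```python
import math
from collections import Counter

def disagreement_uncertainty(answers):
    """Illustrative disagreement score (not the MIT method): entropy of
    answers collected from several similar models. Unanimous agreement
    yields 0.0; an even split yields the maximum for that many models."""
    counts = Counter(answers)
    total = len(answers)
    # Shannon entropy over the empirical answer distribution, in bits.
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Three of four hypothetical models agree, one dissents: moderate uncertainty.
print(disagreement_uncertainty(["Paris", "Paris", "Paris", "Lyon"]))  # ~0.811
```

A score near zero would suggest the models' consensus can be trusted; a high score would flag the kind of confidently wrong answer the summary describes.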

MEPs back delay for parts of Artificial Intelligence Act

European Parliament committees have endorsed targeted delays to parts of the Artificial Intelligence Act while adding a proposed ban on certain non-consensual image manipulation tools. The changes aim to give companies clearer deadlines, reduce overlap with other EU rules, and extend support to small mid-cap enterprises.
