Second Review of DeepSeek

Community Article · Published December 28, 2025

Lu Shouqun

Honorary Chairman of China OSS (Open Source Software) Promotion Union (COPU)

December 19, 2025

The Liang Wenfeng team at DeepSeek released DeepSeek-V3 on December 26th of last year, followed by DeepSeek-R1 on January 20th of this year, creating a sensation in Silicon Valley and around the world. At the time, some people questioned or were confused about DeepSeek's value. To address those doubts and discuss DeepSeek's development, I published an article titled "Review of DeepSeek" (on GitHub, Hugging Face, Gitee, and GitCode), which presented six arguments. Now, almost a year later, I feel it necessary to analyze DeepSeek's future prospects in light of its recent development and that of other large models, and to examine whether the arguments presented in that article still hold up.

Argument 1: Liang Wenfeng's team has opened a new path for AI development that is low-investment, low-cost, and resource-constrained, yet highly efficient and delivers cost-effective output. DeepSeek is arguably the representative work of current Chinese artificial intelligence and is changing the global landscape of AI development.

Second Review Commentary: ① Initially, this was the mainstream view that shocked Silicon Valley and the world a year ago. ② In the interim, on October 9th this year, I conducted a cross-review of the world's top ten large models with the DeepSeek app, and we reached a high degree of consensus. In particular, my argument regarding DeepSeek's large models (Argument 1 above) was endorsed by the DeepSeek app, which considered my comments insightful and grounded in objective facts. ③ Recently (December 10th), Liang Wenfeng was selected as one of Nature's "Ten People of the Year," and his achievements are as stated in Argument 1 above.

Argument 2: DeepSeek operates fully in the open and adheres to open-source innovation. Open source facilitates iterative innovation, maintenance, upgrades, and ecosystem development in artificial intelligence. DeepSeek integrates its fully open-source large models (C-end, consumer-facing) with its open-source business model (B-end, business-facing), which not only promotes open-source innovation but also supports the development of the open-source industry. This is a major innovation in DeepSeek's open-source business model.

Second Review Commentary: Little has yet been seen of the transparency of DeepSeek's open-source governance or of how it interacts with the open-source community; both seem to need improvement. During the October 9th discussion with the DeepSeek app, it noted that DeepSeek's open-source approach has also had some negative effects, such as the leakage of certain technologies and intensified competition from domestic and international peers (how to respond to this remains an open question).

Argument 3: The existing product rankings of large models seem to put DeepSeek at a disadvantage. However, DeepSeek's output performance is on par with that of other top large models; the differences are not as dramatic as the rankings suggest. If the comparison is made more scientifically, on the basis of cost-effectiveness, DeepSeek is undoubtedly number one (comparing by cost-effectiveness is also a proposition advocated by Geoffrey Hinton).
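To make the cost-effectiveness proposition concrete, here is a minimal sketch of one way such a ranking could be computed, assuming the metric is benchmark score divided by blended API price per million tokens. The metric definition, the model names, and all numbers below are illustrative placeholders, not real benchmark results or prices.

```python
# Minimal sketch of a cost-effectiveness ranking for large models.
# The metric (score per dollar per million tokens), the model names,
# and all numbers are illustrative placeholders, NOT real data.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    score: float         # aggregate benchmark score, 0-100 (placeholder)
    usd_per_mtok: float  # blended API price per million tokens (placeholder)

    @property
    def cost_effectiveness(self) -> float:
        """Benchmark points delivered per dollar per million tokens."""
        return self.score / self.usd_per_mtok


models = [
    Model("model-A", score=90.0, usd_per_mtok=10.00),
    Model("model-B", score=85.0, usd_per_mtok=0.50),
]

for m in sorted(models, key=lambda m: m.cost_effectiveness, reverse=True):
    print(f"{m.name}: {m.cost_effectiveness:.1f} points per $/Mtok")
```

Under such a metric, a model that scores slightly lower but costs far less to run can rank well above a stronger but much more expensive one, which is the substance of the argument above.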

Second Review Commentary: Other factors have also contributed to DeepSeek's decline in the rankings: it lags behind other top large models in multimodal development; it has been slower to roll out agent capabilities; its deep reasoning model is slightly inferior to GPT-5; and its progress toward Artificial General Intelligence (AGI) is also a step behind. By current estimates, GPT-4 stands at 27% progress toward AGI, while GPT-5 stands at 57%. Although Liang Wenfeng emphasized his focus on AGI in an article on Toutiao (Today's Headlines) on March 23, DeepSeek remains a step behind.

Argument 4: DeepSeek's core technologies are no longer secrets, so everyone enters the next stage of AI competition on the same footing. The launch of DeepSeek has triggered intense global competition in artificial intelligence.

Second Review Commentary: Indeed, OpenAI's GPT-5.1, Google's Gemini 3 Pro, xAI's Grok 4.1, and Anthropic's Claude Opus 4.5 have recently been competing for the top spot. It is striking how difficult DeepSeek-R2's release has proven to be!

Argument 5: The various large language models available today, including DeepSeek, are generative autoregressive language models, not true artificial intelligence. Because they do not understand the physical world, lack grounded local and global knowledge, and have weak long-term memory, they inevitably carry the limitations of large language models and flaws such as hallucination. Course correction and redirection toward the goal of Artificial General Intelligence (AGI) are necessary. Before AGI can be achieved, transitional stages such as multimodality, embodied intelligence, AI agents, and world models must be passed through (these transitional stages are themselves part of AGI).
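For readers to whom "generative autoregressive" is unfamiliar, the sketch below shows the essence of the decoding loop: each new token is sampled from a distribution conditioned only on the tokens generated so far. The function `next_token_distribution` is a hypothetical toy stand-in for a real model's forward pass, not DeepSeek's actual implementation.

```python
# Minimal sketch of autoregressive text generation. A real LLM would
# replace `next_token_distribution` with a neural network forward pass.
import random

VOCAB = ["the", "model", "predicts", "tokens", "<eos>"]


def next_token_distribution(context: list[str]) -> list[float]:
    # Toy stand-in: returns a probability distribution over VOCAB.
    # A real model conditions on the full `context` via attention.
    random.seed(len(context))  # deterministic toy behavior
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]


def generate(prompt: list[str], max_new_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(tokens)
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        if token == "<eos>":  # model chose to stop
            break
        tokens.append(token)  # the new token joins the context
    return tokens


print(" ".join(generate(["the", "model"])))
```

Nothing in this loop checks a sampled token against the external world; the model simply continues with whatever is statistically plausible, which is one intuition for why hallucinations arise.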

Second Review Commentary: DeepSeek has already begun this course correction toward AGI and is putting effort into the transitional stages listed above. To make up for its slow start, it needs to accelerate the pace of its research and development.

Argument 6: Like other general-purpose, widely accessible foundation models, DeepSeek is difficult to convert directly into high-quality productivity for enterprises and industries. It still needs to build the commercial value it currently lacks and integrate deeply into enterprises and industries to fill application gaps.

Second Review Commentary: DeepSeek's open-source advantages can be put to use by bringing together volunteer application developers from the community and cooperating enterprise organizations to target selected enterprises and industries and fill their application gaps.
