The Chinese AI race has hit an inflection point. On coding capability, the open-source LLM ecosystem now operates inside the same practical band as the leading U.S. closed models.
The "China Is 6–9 Months Behind" Narrative Has Collapsed
NIST's CAISI evaluation puts DeepSeek V4 roughly 8 months behind in the general domain — while DeepSeek's own model card shows V4-Pro at parity with Opus 4.6 and GPT-5.4. Both evaluations are correct: they measure different things. The old "China is behind" framing no longer holds; the right question is "which capability, at which cost?"
Economic Implications
For coding, arguably the economically most important capability, some of the best models are now Chinese and open-weight. Western firms must now compete with low-cost, high-performance Chinese models, and pricing discipline is tightening fast.
Pace of Progress
Zhipu's stock rose 15.92% on the day of the GLM-5.1 launch. MiniMax's rollout included 100+ rounds in which an in-house copy of M2.7 optimised its own scaffolding; recursive model improvement is now the main story on the showroom floor.
Cite this: AI Mevzuları · April 29, 2026 · aimevzulari.com/haberler/cin-acik-kaynak-modeller-glm-deepseek-minimax