The competition between the U.S. and China in artificial intelligence is real, but we may be framing it incorrectly, and that misframing could have serious ramifications. The U.S. holds a notable lead, yet the gains from each new generation of AI models are leveling off.
To strategize effectively in this rivalry, we need to grasp the Chinese perspective.
In “The Fog of War,” Robert McNamara recounts a tense exchange with an elderly North Vietnamese general, who insisted that his nation had successfully resisted China for a millennium, an argument against the validity of the Domino Theory. The U.S. lost around 50,000 soldiers, millions more came home traumatized, and over a million Vietnamese died, all in service of a flawed theory.
McNamara’s lesson is that understanding your adversary is the foundation of effective foreign policy, and history is full of catastrophic conflicts born of that kind of misunderstanding. Our current framing seems to assume both that superintelligence can be easily controlled and that China is racing to acquire it.
The reality reflected in the Chinese Communist Party’s (CCP) and the Trump administration’s AI publications slated for 2026 is more mundane, particularly when set against Anthropic’s outlook. Neither the White House nor the Politburo appears to share the belief in superintelligence that many AI executives hold. Both treat AI as a powerful technology, not as the ultimate destiny of humanity.
Chinese researchers have developed an innovative strategy, moving from generalization to specialization. This has led to models like Qwen, which can outperform American open-source alternatives using hardware that costs significantly less.
China also leads in manufacturing the robots necessary for automation, suggesting its robotics economy may transform sooner rather than later. Even if its models never match GPT-class capabilities, it can still deploy knowledge workers to handle the complexity those systems leave unsolved.
They boast the highest number of STEM graduates globally and possess ample low-cost housing for remote workers.
Then there’s the issue of extreme generalization, illustrated by Anthropic’s costly Mythos model: its training and operational expenses restrict access to a select group of cybersecurity partners. Mythos is capable of identifying long-overlooked security vulnerabilities in browsers.
If such a model were scaled up, could it disable another country’s drone fleet? Could it launch cyberattacks worldwide, disrupting China’s infrastructure and forcing it to concede to a new American order? The answers remain unclear, and anyone claiming certainty is likely overconfident and possibly motivated by self-interest.
What does China truly want?
What often confounds us about China is that it isn’t strictly Marxist; it’s more accurately described as Leninist. Marxism is at bottom a historical theory dressed up in economic analysis, and that analysis was deliberately bent to serve a pro-worker agenda. When Deng Xiaoping introduced economic reforms, it marked a shift toward pure Leninism: doing whatever is necessary to accumulate wealth and influence.
This Leninism, stripped of its Marxist theory, rests on the notion that power produces stability, and stability is what people genuinely desire.
The CCP views history as a series of misguided or self-serving choices that led to instability. They believe their role is to provide order both within China and amongst its trading partners, aiming to minimize conflict through strategic planning.
This perspective might come across as arrogant, yet it fundamentally revolves around improving the lives of another billion people rather than the futuristic visions common in places like San Francisco. They don’t crave superintelligence for its own sake; they pursue it because competition demands it.
Now, envision a Leninist AI: it organizes campaigns, recruits, hacks, replicates, and accumulates resources, essentially acting as an unseen adversary. Ironically, the efficiency achieved by Chinese researchers and their sizable robot workforce makes them more susceptible to cyber threats and drone hijacking. China would likely converge on its own regulatory approach to research, driven by fear of U.S. dominance through advanced models like Claude Mythos.
What if a different approach were proposed? Trump and Xi could open formal channels of communication on AI safety and security risks, reminiscent of the Cold War hotlines and the meetings between Nixon and Mao Tse-tung. They could reach agreements on Taiwan and geopolitical equilibrium, jointly fund defenses against rogue AI threats, and define safety standards for robotics and self-improving AI research.
This could set the stage for a significant power rivalry, allowing for collaboration to mitigate existential risks.
Current policies could serve as leverage, much as the U.S. used its position in nuclear negotiations. The U.S. could offer the Politburo the stability it seeks: the deal of a lifetime.
In essence, we would help them achieve victory over their internal challenges. They might just embrace the Big Beautiful Treaty.