The illusion of independent AI: How the US and China control the machines
The clash over Anthropic reveals a deeper reality: frontier AI is inseparable from state power. While Washington reacts and Beijing plans, both are tightening control, driving a split into rival AI ecosystems. SMU academic Liang Chen shares his analysis.
The recent decision by the Donald Trump administration to designate the artificial intelligence (AI) firm Anthropic as a “supply chain risk” has sparked important conversations across the global tech sector. The friction stemmed from Anthropic’s reluctance to grant the US defence apparatus unrestricted access to its Claude AI models, citing internal ethical guardrails regarding autonomous weapons and surveillance.
What is perhaps most revealing about this dispute is the depth of collaboration that preceded it. Anthropic had not merely provided off-the-shelf software; it had reportedly partnered with defence contractors to build an adapted, custom version of its model operating within physically isolated, classified cloud environments. This bespoke military large language model (LLM) was integrated into operational and intelligence pipelines.
While the public fallout exposes a distinct governance challenge in the West — the intersection of national security imperatives with the independent operational policies of private tech corporations — the existence of this highly adapted military model sends a significant geopolitical signal.
AI decoupling deepens: toward domestic LLM mandates
To policymakers in Beijing, the Anthropic development underscores a pragmatic reality: frontier US tech companies, regardless of their self-defined corporate missions, are inextricably linked to the US national security apparatus.
In the research and development (R&D) phases, the US government largely adopts a laissez-faire approach. Tech firms possess immense autonomy to experiment, choose their architectures (open vs closed source), and pursue their own corporate goals. This “permissionless innovation” is what generates fundamental, paradigm-shifting breakthroughs.
However, when these technologies cross the threshold from experimental software to foundational, dual-use infrastructure (the deployment phase), the US state pulls them into alignment. The government strikes this balance not through top-down state planning, but through mechanisms such as massive defence procurement budgets that financially incentivise companies to build defence-specific applications, export controls that align private corporate sales with geopolitical objectives, and executive powers such as the Defense Production Act that force companies to submit to national security reviews.
Therefore, if even an avowedly “safety-first” AI lab can be deeply embedded in defence infrastructure and face severe government pushback for attempting to set operational boundaries, Chinese regulators will naturally view all US-origin LLMs as extensions of American strategic capabilities.
This dynamic is likely to invite intense scrutiny from Beijing, accelerating the shift away from Western AI models within China’s digital borders. We should anticipate a hardening decoupling between the US and China, particularly at the application layer.
Moving forward, it is highly probable that both powers will mandate exclusive reliance on domestic LLMs for critical, commercial, and telecommunications infrastructure, viewing foreign algorithmic integration as an unacceptable vulnerability in terms of data sovereignty and national security.
China’s ‘private sector pivot’
Looking across the Pacific, Beijing offers a different paradigm for managing the delicate balance between fostering cutting-edge innovation and ensuring alignment with strategic goals.
As a recent Foreign Affairs analysis points out, China is currently executing a “private sector pivot”. After a period of stringent regulatory corrections that impacted market valuations, the Chinese leadership recognises that robust advancement in dual-use technologies — such as advanced AI and semiconductors — requires the entrepreneurial vigour of private enterprises.
This pivot, however, represents a shift in governance rather than a relinquishment of oversight. Beijing is moving toward a predictable, institutionalised regulatory framework. When it comes to AI, China employs a structured approach: algorithm registration, guidelines on cross-border data flows, and clear parameters for LLM behaviour are managed as essential components of national development.
This regulatory architecture operates in synergy with China’s broader strategy of civil-military integration. Chinese tech companies work within a clearly defined corridor: they benefit from state-backed capital, expansive domestic market access, and a supportive ecosystem for scaling. In exchange, there is an inherent expectation that their innovations in natural language processing and computer vision will be made available to support national infrastructure and defence objectives smoothly, bypassing the public friction recently seen in the US.
China’s modular, iterative approach
Unlike the EU, which attempted a broad, catch-all AI Act, China regulates AI through a modular, iterative approach, rolling out targeted regulations as the technology evolves: rules for recommendation algorithms in 2021, deep synthesis (deepfakes) in 2022, and generative AI in 2023.
The Cyberspace Administration of China (CAC) is the ultimate arbiter, working alongside the Ministry of Industry and Information Technology. The CAC mandates that developers use the Internet Information Service Algorithm Registry. Before an app is deployed to the public, companies must submit their training data sources, model logic and security assessments to the state.
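To make that gate concrete, here is a purely illustrative sketch, in Python, of the kind of disclosure record such a pre-deployment filing implies. Every field name is a hypothetical stand-in; this is not the registry’s actual schema.

```python
# Purely illustrative sketch of a pre-deployment algorithm-registry filing.
# All field names are hypothetical; this is not the CAC's actual schema.
from dataclasses import dataclass

@dataclass
class AlgorithmFiling:
    provider: str                     # legal entity deploying the model
    service_name: str                 # public-facing app or service
    algorithm_type: str               # e.g. "generative_ai", "recommendation"
    training_data_sources: list[str]  # disclosed provenance of training corpora
    model_logic_summary: str          # plain-language description of model behaviour
    security_assessment_ref: str      # pointer to the submitted security assessment
    approved: bool = False            # public deployment is gated on state review

filing = AlgorithmFiling(
    provider="ExampleTech Co.",
    service_name="example-chat-assistant",
    algorithm_type="generative_ai",
    training_data_sources=["licensed news corpus", "in-house support logs"],
    model_logic_summary="Transformer LLM fine-tuned for customer-service dialogue.",
    security_assessment_ref="SA-2024-0001",
)
```

The key structural point is the `approved` flag: in an ex-ante regime, deployment waits on the state’s review rather than following it.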
In the digital economy, competition occurs between ecosystems rather than just individual firms. In China, the state acts as the ultimate “ecosystem architect”. It aligns interests by providing vital infrastructure (e.g., the “East Data, West Compute” initiative within China offering subsidised computing power) and a shielded domestic market. In return, private firms serve to commercialise the technology while strictly adhering to state-mandated ideological and security guardrails (e.g., ensuring AI outputs reflect “Core Socialist Values”).
The primary issue with this ex-ante (preventative) governance system is the massive “compliance tax”. Requiring companies to meticulously filter training data to ensure political correctness degrades model performance and slows down deployment cycles. While this system is highly sustainable for maintaining social control and scaling “1 to 100” application-layer tech, one could argue that it hinders the unpredictable, “0 to 1” foundational breakthroughs seen in the US.
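A minimal sketch shows where that tax comes from, assuming a crude blocklist-style filter (the terms and policy here are hypothetical): every training document incurs an extra scanning pass, and false positives discard usable text along with the targeted content.

```python
# Minimal sketch of an ex-ante training-data compliance filter.
# Blocklist contents and filtering policy are hypothetical, for illustration only.

BLOCKLIST = {"placeholder_banned_term_a", "placeholder_banned_term_b"}

def passes_filter(document: str) -> bool:
    """Reject any document containing a blocklisted term (crude, false-positive-prone)."""
    tokens = set(document.lower().split())
    return tokens.isdisjoint(BLOCKLIST)

def filter_corpus(corpus: list[str]) -> list[str]:
    kept = [doc for doc in corpus if passes_filter(doc)]
    # The "compliance tax" in two parts: the scan itself scales with corpus size,
    # and every document filtered out is training signal the model never sees.
    print(f"kept {len(kept)} of {len(corpus)} documents")
    return kept
```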
US model: governance chasing tech
The US innovation ecosystem thrives on the distinct autonomy and diverse perspectives of its private sector; adopting a state-led model would undermine the very independence that made Silicon Valley a global powerhouse.
In fact, the US is the quintessential example of governance chasing tech, operating in an environment of total freedom for experimentation until a threshold is breached.
For example, for years, OpenAI, Google and Anthropic built increasingly powerful models with virtually zero federal oversight. It was only after the explosive public release of ChatGPT that the US government reacted. Because there was no existing legislative framework for AI, the Biden administration had to creatively invoke the 1950 Defense Production Act (a Cold War-era law) to mandate that companies developing foundational models share their safety test results with the government if training those models required a massive amount of computing power (greater than 10²⁶ floating-point operations). This shows that governance is entirely reactive, stepping in only when the scale of computing poses a national security risk.
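To put that threshold in perspective, a back-of-the-envelope calculation shows why only frontier-scale training runs trip the reporting requirement. The hardware and utilisation figures below are rough assumptions for illustration, not numbers from the executive order.

```python
# Back-of-the-envelope check on the 10^26-operation reporting threshold.
# All hardware figures below are rough, illustrative assumptions,
# not numbers taken from the executive order.

THRESHOLD_OPS = 1e26          # total training operations that trigger reporting

per_gpu_flops = 1e15          # assumed ~1 PFLOP/s sustained low-precision throughput
utilisation = 0.4             # assumed fraction of peak actually achieved in training
cluster_size = 10_000         # assumed accelerator count for a frontier cluster

effective_ops_per_second = per_gpu_flops * utilisation * cluster_size
days = THRESHOLD_OPS / effective_ops_per_second / 86_400
print(f"~{days:.0f} days of continuous training to cross the threshold")
# ~289 days on this hypothetical 10,000-GPU cluster: only frontier labs come close.
```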
Meta offers another example. It has aggressively pursued an “open-source” experimentation model, freely releasing the weights of its Llama models to global developers — a totally autonomous corporate decision made without government pre-approval. Originally, Meta’s acceptable use policy strictly forbade using Llama for military or warfare purposes. However, as the tech matured and the US recognised the strategic need for these models, Meta announced in 2024 that it would make Llama available to US defence contractors (like Palantir and Anduril) and government agencies. This shows how entirely free, permissionless experimentation ultimately gets co-opted and aligned with national security post-hoc.
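The practical meaning of “releasing the weights” is worth spelling out: anyone can download and run the model on their own hardware, outside any provider-controlled API. A minimal sketch using the Hugging Face transformers library illustrates this; the model identifier is an assumption, and Llama checkpoints are gated behind acceptance of Meta’s licence terms.

```python
# Minimal sketch: once weights are openly released, anyone can download and run
# the model locally via the Hugging Face transformers library, with no API gatekeeper.
# The model identifier is an assumption; Llama downloads are gated behind
# acceptance of Meta's licence terms on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open-weight models differ from API-only models because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once weights circulate this way, usage restrictions live only in licence text, which is precisely why their later extension to defence customers required a policy announcement rather than a technical change.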
Institutionalising the partnership
Thus, Washington may draw a practical lesson from the current landscape: the necessity for a cohesive, proactive governance framework. Relying on ad-hoc mandates or reacting to disputes over corporate terms of service introduces friction into the defence supply chain. Rather than navigating crises as they arise, the US would benefit from a standardised, ex-ante framework that clearly defines how dual-use AI will be deployed in defence settings, establishing shared expectations before integration begins.
The Anthropic dispute represents a defining moment in digital strategy. It signals a future where US and Chinese AI application layers will operate largely in parallel, distinct ecosystems. To maintain a strategic edge in this decoupled reality, the path forward lies not in friction, but in successfully institutionalising a predictable, working partnership between private sector innovation and national security needs.