Has China solved AI governance?

12 Jan 2026
technology
Songruowen Ma
Doctoral student, University of Oxford
Kenddrick Chan
Head of Digital International Relations project, LSE IDEAS
China’s approach to AI governance poses a challenge for policymakers worldwide: can governments keep up with technologies that evolve faster than the rules meant to govern them? Researchers Songruowen Ma and Kenddrick Chan explore China’s strategy.
AI (Artificial Intelligence) letters and robot hand are placed on computer motherboard in this illustration created on 23 June 2023. (Dado Ruvic/Reuters)

Against the backdrop of geopolitics, artificial intelligence (AI) development is often framed as a race over who builds the best AI models, deploys them fastest and captures the AI dividend. Yet in pursuing greater AI capabilities, many governments find that progress is hardly linear: acquiring compute capacity does not automatically translate into greater AI capability.

The reasons should come as no surprise. Enhancing national AI capabilities poses challenges even within government. Policymakers have finite capacity, the bureaucracies responsible for implementing policy often have competing institutional incentives, and the pace of AI development far outstrips legislative cycles.

When expectations don’t align

These issues are further compounded by differing expectations of what AI should deliver, and these competing institutional logics do not naturally align. Citizens expect AI to be human-centred in design, with privacy protection and safety as core pillars. Businesses emphasise scale, efficiency and the ability to generate returns on AI investment. Scientists and research institutes prioritise greater freedom to experiment and innovate, coupled with a deep talent pipeline. This is why AI policymaking currently falls short.

At present, policymaking tends to collapse into one of two modes: a promotional brochure (“AI will transform everything, here are our funding pots”) or a compliance manual (“AI is risky, here are our restrictions”).

As a vice-minister told one of the authors, the issue is not a lack of AI initiatives but too many disconnected ones.

China’s approach to AI is thus worth studying: not as a template to copy wholesale, but as a case study in how a major power attempts to reconcile competing logics at scale. Beijing’s stated premise is that “security” and “development” should be mutually reinforcing rather than mutually exclusive, and that domestic choices should also support its international legitimacy and influence in global AI governance. That worldview has produced a pragmatic architecture with four notable features.

AI as system, not add-on

Beijing has increasingly structured its AI policy around an “AI+” framing, which downplays the familiar notion of AI as mere augmentation and instead advances a vision in which entire sectors are reorganised around AI’s long-term capabilities. Put another way, “+AI” is the old and “AI+” is the new.

This picture taken on 13 November 2025 shows fans, influencers and media personnel attending the presentation of Rokid glasses in Hangzhou, in China’s eastern Zhejiang province. (Hector Retamal/AFP)

AI+ suggests a possible solution to a common problem faced by policymakers elsewhere: “skills here”, “safety there” and “pilots elsewhere” produce a pile-up of AI initiatives that lack a unifying logic. As a vice-minister told one of the authors, the issue is not a lack of AI initiatives but too many disconnected ones.

The strategic intent behind the AI+ framing also signals something investors often underestimate. The AI economy, much touted for its promised dividends, is as much about diffusion into ordinary industries as it is about AI hyperscalers and cutting-edge labs. Manufacturing, health administration and logistics are where productivity gains are likely to be largest and most visible once those sectors are reorganised around AI.

Rules that move with reality

The Chinese governance and regulatory model, whilst top-down, should not be read as a binary of wanton permissiveness or absolute control. The reality is far more nuanced and flexible.

Here, the concept will be familiar to international readers: while there are explicit prohibitions (red lines), there remains ample space above them for experimentation. For China, the clear red lines set out in its Cybersecurity Law, Data Security Law and Personal Information Protection Law ensure that the worst outcomes are constrained, whilst negating the need for rulebooks so detailed that they stymie innovation.

A national AI strategy should be a living document, regularly reviewed and updated to ensure it remains fit for purpose.

The rapid pace of AI development means that AI policy risks becoming outdated shortly after it is legislated. Beijing’s Five-Year Plan system allows the government to review past policies and adjust priorities accordingly. For example, the 13th Five-Year Plan (2016-2020) mentioned AI four times, focused mainly on AI technology and applications, whilst the 14th Five-Year Plan (2021-2025) mentioned AI six times and gave attention to safety issues alongside technology development. The latest, the 15th Five-Year Plan (2026-2030), mentions AI eight times and explicitly links it to a broader strategic agenda, including global initiatives (e.g. the Belt and Road Initiative), public welfare and social governance.

One need not endorse the machinery to appreciate the insight it offers: AI capabilities will evolve, and the scope and responsibilities of government regulation must evolve alongside them. Regulation should therefore be adaptive enough to keep pace.

Europe’s AI Act, for example, entered into force in 2024 but is not expected to become fully applicable until 2027, with some obligations phased in earlier. Legally elegant as it is, this staggered timetable is an invitation for reality to outrun the Act unless policymakers build faster and more regularised feedback loops. A national AI strategy should be a living document, regularly reviewed and updated to ensure it remains fit for purpose.

Predictability through structure

The multidisciplinary nature of AI often fragments responsibility for the national AI agenda, with different ministries and government bodies pushing development in line with their own institutional mandates. One of us, having previously worked alongside governments of developing countries, witnessed firsthand the consequences of weak central coordination.

By way of illustration (with all details anonymised), it is not uncommon to find an “AI for workforce productivity” initiative led by the IT ministry closely mirroring parallel efforts under the economic ministry, or “AI business innovation guidelines” being drafted by the law ministry that substantially replicate work already undertaken by the science and technology ministry.

China attempts to mitigate this fragmentation through central coordination and designated lead ministries, with cross-departmental cooperation being of a structured rather than ad-hoc, improvised nature.

People gather in Tiananmen Square to attend a flag-raising ceremony with the building of the Great Hall of the People in the background, during sunrise, in Beijing, China, on 20 November 2025. (Maxim Shemetov/Reuters)

China attempts to mitigate this fragmentation through central coordination and designated lead ministries, with cross-departmental cooperation being of a structured rather than ad-hoc, improvised nature. Specifically, the State Council sits at the top of this structure, providing overall strategic direction beyond the authority of individual ministries and coordinating major cross-sectoral science and technology policies.

At the operational level, the National Development and Reform Commission (NDRC, a ministerial-level department of the State Council), the Ministry of Science and Technology (MOST) and the Ministry of Industry and Information Technology (MIIT) serve as lead ministries, the core engine of AI policy implementation. Other specialised ministries, such as the Ministry of Ecology and Environment (MEE) and the Ministry of Education (MOE), apply AI policies within their own regulatory domains and coordinate closely with the lead agencies.

Here lies a politically transferable lesson: predictability is prized by investors and innovators alike. Clear ministerial mandates, well-defined lines of authority, and legal clarity over which rules apply in which contexts reduce regulatory uncertainty and, in turn, increase the likelihood of compliance and sustained investment.

The true AI race

The lesson then is not that China has “solved” AI governance, nor that its model should be replicated wholesale elsewhere. Rather, its experience shows that the hardest problem is not building AI, but governing it at scale, over time, and across institutions with often-divergent incentives.

The challenge over AI is thus not one of technology supremacy as measured in model benchmarks or graphics processing unit (GPU) counts. Instead, the question is whether governments can build governing systems capable of learning and adapting as quickly as the technologies they aim to shape. In that sense, the true AI race is not a technological but an institutional one.