AI as Infrastructure: Strategic Lessons from China’s Xinjiang Investment
A series of papers comparing AI in China and the West
Reading time: ~9 minutes
TL;DR:
China’s massive investment in Xinjiang shows a strategic shift: AI is no longer treated as a tool, but as core infrastructure, tightly integrated with energy, transport, and long-term planning. This approach prioritises resilience, cost stability, and systemic capability over short-term innovation wins. For Europe, the key challenge is not “catching up” in AI models, but adapting to a world where AI compute has become a strategic dependency that must be governed like infrastructure.
Thanks for reading Lucie’s Substack! Subscribe for free to receive new posts and support my work.
Introduction: Reading the Signal Beneath the Headline
When China announced a ¥3.47 trillion ($495 Bn) investment program in Xinjiang for 2025, the scale alone commanded attention. Yet the more consequential signal lies in the composition of that investment. More than one-third of the total, ¥1.21 trillion ($172 Bn), has been allocated to projects that integrate large-scale computing systems that use AI to help operate and manage core assets such as energy grids, transport networks, water systems, and pipelines. These systems rely mainly on three kinds of models: computer-vision models that inspect equipment and detect faults in assets like power lines or pipelines; forecasting and simulation models that predict grid load, energy demand, or infrastructure stress; and optimisation models that help allocate resources, route logistics, and balance electricity supply and demand in real time.
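Of the three model families, the optimisation one is the easiest to make concrete. The sketch below is a toy merit-order dispatch in Python: given forecast demand and a set of generators, it fills the load from the cheapest sources first. The generator names, capacities, and costs are invented for illustration; real grid-balancing systems solve far larger versions of this problem (unit commitment and economic dispatch), but the underlying logic is the same.

```python
def dispatch(demand_mw, generators):
    """Greedy merit-order dispatch: fill demand from the cheapest sources first.

    generators: list of (name, capacity_mw, marginal_cost) tuples.
    Returns a dict mapping generator name to megawatts dispatched.
    """
    plan = {}
    remaining = demand_mw
    # Sort by marginal cost so cheap renewables are drawn down before coal.
    for name, capacity, cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 0:
        raise ValueError(f"Unmet demand: {remaining} MW")
    return plan

# Hypothetical generator fleet: (name, capacity in MW, marginal cost in $/MWh)
generators = [
    ("solar", 300, 5),
    ("wind",  200, 8),
    ("coal",  500, 40),
]
plan = dispatch(600, generators)
# -> solar 300 MW + wind 200 MW + coal 100 MW
```

The greedy merit-order heuristic is a deliberate simplification: production systems add forecasting uncertainty, ramp-rate limits, and transmission constraints, which is precisely why they need the dedicated compute capacity the Xinjiang program is building.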
This is a deliberate shift in how China sees artificial intelligence. Instead of treating AI just as a tool or a consumer innovation, China now approaches it as basic infrastructure, something essential to the economy, like roads or power grids. The developments in Xinjiang show how this new strategy is being put into practice, with big implications for global AI competition.
Observation: What Is Happening on the Ground
For decades, China’s large-scale regional investment programs focused on physical connectivity and resource systems: highways, railways, power grids, and energy pipelines. Xinjiang’s 2025 investment program departs from this by explicitly incorporating intelligent computing into the same priority infrastructure portfolio. Within the ¥1.21 trillion allocated to 186 key projects, intelligent computing appears alongside transport corridors and energy networks as a core development pillar.
While exact funding details for computing facilities haven’t been disclosed, the policy signal is clear and particularly important in China’s context, where the state, rather than private capital markets as in the United States or Europe, plays the primary role in directing investment priorities at scale. Intelligent computing has transitioned from a peripheral technology investment to a strategic asset class.
During visits to Xinjiang, I saw a transformation underway. Desert areas and former energy towns are being turned into large AI computing hubs. New cloud and data centre facilities are being built alongside upgraded transmission lines and additional renewable energy infrastructure, demonstrating a clear effort to integrate advanced computing with older development projects. In fact, dozens of AI compute centre projects have been approved across Xinjiang and neighbouring provinces, many designed to host high-density GPU clusters capable of supporting advanced workloads. These facilities leverage abundant land and low-cost renewable energy, showing that regional characteristics are being actively harnessed to maximise efficiency and cost-effectiveness. The resulting architecture prioritises resilience, scalability, and systemic integration, positioning Xinjiang as a key backbone of China's national AI infrastructure.
This approach aligns with national programs like "East Data, West Compute," which shift data-intensive tasks from crowded eastern cities to western regions with greater energy resources. The design favours resilience, lower costs, and long-term growth, treating computing as a national resource rather than a local IT function.
What This Means
This infrastructure push is unfolding amid growing asymmetry in global AI investment. The United States continues to dominate private AI capital deployment, cumulative investment, and global supercomputing capacity, supported by deep capital markets and unrestricted access to advanced semiconductors. U.S. hyperscalers anchor much of the world’s high-performance compute infrastructure and global cloud reach.
China, by contrast, lags in absolute private investment and faces structural constraints on access to frontier chips. Rather than attempting to replicate the U.S. model, China has pursued an alternative path, one centred on state-led infrastructure investment, policy coordination, and concentrated capital deployment. National programs channel compute workloads across regions, leverage lower energy costs in western provinces, and support large-scale data centre construction through planning, subsidies, and long-term development mandates.
Constraints on frontier technologies have accelerated a broader strategic shift in China’s AI approach. Rather than focusing narrowly on singular breakthroughs in chips or models, policy and investment now focus less on individual AI products, and more on building a nationwide capability — ensuring that industry, logistics, and public services can all rely on stable, low-cost computing as part of their everyday operations. AI is treated not as a standalone sector, but as an embedded capability across manufacturing, logistics, healthcare, energy, and aerospace.
By building extensive computing infrastructure and tightly integrating it with power and data networks, China is constructing a resilient backbone capable of absorbing technological shocks and supporting incremental innovation over time. Xinjiang’s role is emblematic: its strategic value lies in hosting large-scale, energy-intensive infrastructure that underpins national productivity and technological resilience.
Europe’s Strategic Moment: Adapting to AI as Infrastructure
Across Europe, there is now broad recognition that AI will shape long-term competitiveness. What is less resolved, particularly at the level of C-suite decision-making, is how to act when the prevailing sense is that Europe has either started too late, or lacks a clear path distinct from the United States and Asia.
This creates a characteristic European tension. On the one hand, enterprises express a strong intent to “embrace AI.” On the other hand, concrete action is often limited to pilots, vendor selection, or incremental automation layered onto existing systems. The result is not inertia, but strategic ambiguity: AI is acknowledged as important, yet rarely treated as something that reshapes the firm’s core assumptions about cost, resilience, and dependency.
The Xinjiang case is instructive precisely because it removes this ambiguity. By treating AI compute as infrastructure, China collapses multiple questions (technology, energy, capital allocation, and governance) into a single planning logic. Europe, by contrast, continues to fragment these decisions across innovation teams, IT functions, procurement, and compliance, leaving no single locus of accountability for AI as a structural dependency.
For European corporate leaders, this is not primarily a question of technological sophistication. It is a question of classification. As long as AI is seen as a software tool that teams “add on” to existing operations, leaders will treat it as an IT project. But once AI is recognised as part of the organisation’s core infrastructure — alongside energy, data, and cloud — it becomes a board-level question of resilience, cost exposure, and strategic dependency.
What This Means for C-Level Leadership
At the executive level, the shift toward AI as infrastructure changes the nature of leadership responsibility. Decisions about compute, cloud architecture, and data processing are no longer neutral operational choices. They increasingly shape long-term exposure to cost volatility, regulatory scrutiny, operational resilience, and geopolitical alignment.
This introduces a mismatch between how AI decisions are made and how their consequences materialise. Technology choices are often delegated; their effects, however, surface years later in balance sheets, risk frameworks, and supervisory interactions. The danger for European firms is not underinvestment, but locking in dependencies without explicit strategic intent.
In this context, adaptation does not mean replicating either the U.S. hyperscaler model or China’s state-led infrastructure approach. It means recognising that control, optionality, and resilience are now core dimensions of AI strategy. For many enterprises, this implies reassessing where compute sits within the operating model, how dependencies have become more concentrated, and whether current governance structures are fit for an environment where AI underpins critical decisions.
Asia’s Infrastructure Path and Its Competitive Implications
Asia’s AI trajectory, particularly China’s, has direct implications for European competitiveness, even for firms that do not operate in China. By embedding AI compute into energy systems and long-term infrastructure planning, Asian economies reduce uncertainty around availability, cost, and continuity. Over time, this creates structural advantages in AI-intensive activities such as simulation, optimisation, logistics, and industrial automation.
European firms competing globally will increasingly encounter counterparts whose AI capabilities are underpinned not only by market access but also by infrastructure certainty. Translating that certainty into advantage does not require superior models or algorithms; stable, low-cost, and resilient compute is often sufficient.
The U.S. Factor and Europe’s Strategic Exposure
This challenge is sharpened by changes in the global strategic environment. Recent U.S. security and industrial policy signals suggest that Europe no longer occupies the same central position in American strategic prioritisation as in previous decades. While this does not imply imminent decoupling, it weakens a long-standing European assumption: that access to U.S.-based AI infrastructure will remain neutral, abundant, and aligned with European interests by default.
For European enterprises, reliance on this assumption constitutes a latent strategic risk. In a world where AI infrastructure is increasingly entangled with national security and industrial policy, access follows priority. Europe’s heavy dependence on non-European compute therefore carries implications that extend beyond cost or performance.
Adaptation, Not Catch-Up
One way to understand the strategic divergence between regions is the scale of investment in AI infrastructure. Europe's collective infrastructure investment has been smaller and slower to materialise. The EU has pledged around €30 billion for a network of gigawatt-class AI data centres, with an initial €10 billion already allocated for 13 facilities and a further €20 billion earmarked for future build-outs: a meaningful but comparatively modest commitment relative to the scale of China and the United States.
The strategic question facing Europe is often framed as whether it can “catch up” in AI modelling. The Xinjiang case suggests that this framing is misplaced. The more relevant question is whether Europe can adapt to a world in which AI has become infrastructure.
Adaptation begins with reframing AI from a technology agenda to an infrastructure exposure; from a variable operating cost to a long-term dependency; and from an innovation opportunity to a source of systemic risk and advantage. Firms and institutions that make this shift early retain the option to choose. Those who delay will inherit dependencies they did not explicitly select.
AI as Strategic Infrastructure: Implications and Risks
AI is increasingly not just a technology but a form of strategic infrastructure with profound implications for investment, financial stability, operational resilience, and regulatory oversight. Treating AI compute as infrastructure reframes priorities across capital allocation, governance, and risk management.
For investors, this means looking beyond frontier technologies to compute platforms, energy-compute integration, and regionally resilient infrastructure backed by policy incentives and predictable cash flows; in this asset class, long-term returns depend on operational reliability as much as technological innovation.
Financial institutions must recognise AI exposure as a systemic infrastructure risk: dependencies on compute capacity and cloud providers affect cost volatility, operational continuity, and regulatory exposure. Diversifying dependencies, embedding resilience in governance frameworks, and integrating AI proactively into enterprise risk planning are critical to managing these risks effectively.
Regulators and governments face the challenge of defining minimum resilience and operational standards, auditing reliance on foreign compute infrastructure, and incorporating AI into stress testing and enterprise risk models to prevent hidden vulnerabilities from undermining financial stability or national competitiveness.
Recasting AI compute as infrastructure also introduces distinct new risks for companies:
Technology-cycle versus asset lifecycle risk arises from the rapid evolution of compute assets, requiring modular design, phased deployment, and flexible financing to avoid obsolescence.
Energy and grid dependency is significant because compute centers are high-power loads sensitive to price fluctuations and grid reliability, necessitating integration with energy producers, hybrid generation models, and coordinated load management.
Geographic concentration risk emerges from clustering compute facilities, which improves efficiency but increases exposure to localized disruptions, making distributed architectures and regional redundancy essential.
Regulatory and policy risk is heightened because strategically significant compute is subject to industrial policy, national security oversight, and data governance, requiring scenario-based planning and continuous engagement with authorities.
Operational and cyber risks demand infrastructure-grade security, access controls, and incident response frameworks to protect critical systems.
For Europe, these risks are amplified by heavy reliance on foreign cloud and AI infrastructure. Existing frameworks such as GDPR, DORA, and emerging AI governance rules further complicate exposure for banks, insurers, and asset managers. Without coordinated management, these dependencies could create structural vulnerabilities that threaten competitiveness, financial stability, and national strategic autonomy.
The strategic imperative is therefore clear: AI must be treated as critical infrastructure. Coordinated action is required to build domestic compute capacity, diversify reliance on foreign providers, embed resilience and governance into operations, and integrate AI into regulatory and supervisory frameworks. By addressing AI as a systemic infrastructure challenge, stakeholders can ensure operational continuity, manage risks effectively, and maintain long-term strategic competitiveness in an increasingly AI-driven global economy.
Conclusion: The Strategic Message Is Structural
The global AI race’s first chapter was defined by innovation. The next will be determined by who can build, finance, govern, and manage AI as infrastructure. Xinjiang’s compute-heavy investment program demonstrates that this transition is underway.
For Europe, the imperative is clear: treat AI as strategic infrastructure, act now to invest in domestic capacity, diversify dependencies, embed resilience into operations, and integrate AI into regulatory and stress-testing frameworks to maintain competitiveness in an AI-driven global economy.