AI Power Isn’t Just About “Better Models.” It’s About Who Controls the Systems They Run On

Written by ttassos | Published 2026/02/04
Tech Story Tags: ai-infrastructure | ai-geopolitics | international-relations | ai-future | digital-sovereignty | national-security | ai-models | ai-systems

TL;DR: The AI race is no longer just about better models. In 2024–25, real-world limits such as electricity, compute supply, and governance frameworks became the biggest barriers to deployment. This article maps the overlooked infrastructure and policy layers that increasingly determine AI power and global influence.

We keep narrating the race for AI dominance as a sterile contest of algorithms and data. But the decisive story is increasingly not in the code. It is in concrete, copper, permits, and the quiet machinery of who gets to plug in.

This is not a hypothetical bottleneck. In 2024 and 2025, some of the most ambitious AI initiatives stalled, not because the models were weak, but because the systems around them failed to materialize on time: power constraints, grid queues, permitting delays, and the administrative drag that turns “we have compute” into “we have a timeline.”

In practice, the failure mode looks boring until it becomes existential. Teams sit in permitting queues. Data centers negotiate for limited electricity. “Compute access” shows up in decks and procurement memos, convinces investors to underwrite plans, then evaporates at the moment it is actually needed, because the physical and institutional prerequisites never cleared.

This essay is not here to rank models. It is a field map of the infrastructure and governance pressure points that increasingly define what AI can do in the world, and who gets to do it. The focus is scaling, stability, and the systems that support both, especially when they are invisible right up until they fail.

From Rare Earths to Real Bottlenecks

I’m trying to build something practical and public-facing: not a formal academic theory, not a complete inventory, and not a claim to have seen every corner of the field.

The goal is simpler: help technologists, policy analysts, and internationally minded readers spot repeating patterns where core AI capability depends on physical systems and governance frameworks that can be stressed, monitored, and redesigned.

At the upstream level, access to materials like critical minerals and rare earths shapes hardware supply chains and remains geopolitically significant.

The second layer is industrial capacity: semiconductor manufacturing, fabrication bottlenecks, and the export controls that govern who can produce or acquire advanced chips.

This essay focuses on a third, downstream layer: the infrastructures and governance arrangements through which AI capability is actually scaled, deployed, and made operational. Electricity access. Grid connectivity. Compliance systems. Platform environments. Organizational capacity.

That choice is deliberate. Upstream resources and midstream industrial production matter. But many of today’s most urgent constraints, and much of the emerging strategic advantage, show up downstream, where systems are powered, connected, governed, and used.

The upstream and midstream dynamics of AI geopolitics deserve their own treatment. Here, the thesis is narrower: the “AI race” is being decided by who can operationalize, not just who can invent.

Why Better Models Alone Won’t Win the AI Race

AI geopolitics is often narrated like a scoreboard: who has the most capable model, who leads on benchmarks, who is “ahead.” That framing misses the layer that determines whether AI becomes strategically meaningful at all, which is the ability to scale and deploy reliably.

Frontier AI is becoming less about breakthrough code and more about the infrastructure that supports it. Scaling depends on access to compute, stable electricity, resilient connectivity, and platforms that are ready for deployment. Those dependencies become sources of leverage, not necessarily because one actor has better researchers, but because someone controls the access points, maintains stability, and sets the terms of use.

Why do infrastructure questions become geopolitical flashpoints? Part of the answer is the size of the upside. The economic stakes make “who can scale” a political question even when it looks, at first glance, like a technical detail. Taiwan’s position in the semiconductor ecosystem is a good illustration of how industrial concentration turns into strategic dependence.

Source: McKinsey Global Institute (2023), The economic potential of generative AI: The next productivity frontier (Exhibit 2)

McKinsey’s 2023 estimate that generative AI could add $2.6 trillion to $4.4 trillion annually across 63 use cases captures why this quickly stops being a lab story and becomes a state capacity story. At that scale, the debate shifts from “can we build it?” to “can we run it, govern it, and keep it running under stress?”
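To feel the scale, simple arithmetic on McKinsey’s published range is enough (a quick sketch; the figures come from the report, the division is mine):

```python
# McKinsey's published annual value range, spread evenly across its
# 63 use cases. Illustration only; the report does not claim uniformity.
low, high = 2.6e12, 4.4e12  # USD per year
use_cases = 63

print(f"~${low / use_cases / 1e9:.0f}B to ~${high / use_cases / 1e9:.0f}B per use case per year")
# Roughly $41B to $70B per use case annually: budget-line scale for states,
# which is why "can we run it?" becomes a political question.
```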

A quiet implication follows: the contest is not only about innovation speed. It is also about infrastructure timelines. And infrastructure timelines are often political timelines in disguise.

Scaling Conditions: The Infrastructure Behind the Code

A more operational question than “who has the best model?” is this:

Who can reliably build, run, and operationalize AI at scale, and under what constraints?

That lens pulls a different set of topics into the center:

  • Energy and grid throughput (can new load connect fast enough?)
  • Compute supply and export controls (who can access cutting-edge chips and systems?)
  • Physical connectivity (routes, landing points, repair capacity)
  • Organizational integration (can institutions field AI safely, securely, and at speed?)
  • Platform governance (who sets the practical rules via APIs, compliance, and audit tooling?)

This is not a claim that these factors determine outcomes mechanically. It is a claim that they keep surfacing as the friction points where strategy, tech, and logistics collide.

What makes these constraints easy to miss is that they sit outside the usual innovation narrative. They are slow, procedural, and physical. They do not trend on social media. They do not demo well. They also have a habit of becoming binding all at once.

Here is one texture point that matters more than it gets credit for: “grid access” is not a single switch you flip. It is a chain of institutional decisions. Interconnection requests. Studies. Permits. Negotiations over upgrades. Timelines that can slip because a local authority says no, because a transformer lead time stretches, or because the queue grows faster than the system can process it. None of that changes the benchmark score of a model. It changes whether the model ever ships.
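A toy sketch makes the point. Every stage name and duration below is invented for illustration; no real utility’s process is being modeled:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    planned_months: int
    slip_months: int = 0  # delay added by queues, denials, or lead times

# Hypothetical interconnection pipeline; all numbers are made up.
pipeline = [
    Stage("interconnection request", 3),
    Stage("feasibility and system impact studies", 12, slip_months=6),
    Stage("local permits", 6, slip_months=9),       # one "no" can reset this
    Stage("network upgrades", 18, slip_months=12),  # e.g., transformer lead times
    Stage("energization", 2),
]

planned = sum(s.planned_months for s in pipeline)
actual = sum(s.planned_months + s.slip_months for s in pipeline)
print(f"planned: {planned} months, actual: {actual} months")  # 41 vs 68
# The model's benchmark score never changes. The ship date does.
```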

And the deeper you go, the more you see how scaling is a governance problem. Not only because states regulate, but because every large deployment is mediated by gatekeepers: utilities, cloud providers, compliance teams, procurement offices, standards bodies, platform policies, export licensing, and sometimes the quiet veto power of institutional risk.

What Really Gives AI Its Power, and to Whom?

Treat AI as a form of power and a different question follows: what gives it that power in practice?

One useful way to name the recurring patterns is through three kinds of leverage that show up across systems:

  • Access leverage
  • Visibility leverage
  • Architecture leverage

Access leverage is where scaling depends on scarce inputs or slow-to-substitute systems. It can be conditioned through policy, licensing, procurement, or allocation. You see it when access depends less on price and more on decisions: capacity allocation, regulatory approval, export permission, or simply being “in” the right network.

Visibility leverage is the less discussed cousin of denial. Power is not only the ability to block. It is also the ability to observe, audit, attribute, and shape behavior through standards, monitoring, and compliance architectures. In this mode, influence comes from knowing who is doing what, where, and under which constraints, and from controlling what becomes legible. Umberto Eco’s monastery in The Name of the Rose is an imperfect analogy, but it gets at the vibe: knowledge as a gate, and the right to decide who gets to see.

Architecture leverage sits beyond access and visibility. It is the ability to shape the field by setting default rules: interoperability standards, procurement templates, audit expectations, platform constraints, and the system design choices that raise switching costs and privilege certain ecosystems. As with the rules written by international financial institutions, these choices may not look political at first. Over time, they tilt the playing field, making some systems easy to build and others practically impossible.

These three are not siloed. They feed each other.

Control access, and you can influence monitoring. Control monitoring, and you can justify architectural rules. Set the architecture, and you entrench access terms. It is not addition. It is multiplication.
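A stylized illustration of that compounding (my own toy numbers, not a measured model): treat each form of leverage as a multiplier on a rival’s cost of routing around you, rather than a flat surcharge.

```python
# Toy factors only: how much each form of leverage inflates the cost
# of working around an incumbent. Nothing here is empirical.
access, visibility, architecture = 1.5, 1.3, 1.4

additive = 1 + (0.5 + 0.3 + 0.4)                     # flat surcharges
multiplicative = access * visibility * architecture  # compounding

print(f"additive: {additive:.2f}x, multiplicative: {multiplicative:.2f}x")
# 2.20x vs 2.73x even with mild factors; the gap widens as any factor grows.
```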

A small breather, because it helps to say it plainly: the future often belongs to whoever owns the bottleneck nobody wanted to talk about. In AI, those bottlenecks look increasingly like grid capacity, compliance pipelines, chip controls, and platform default settings. Not glamorous. Very real.


Infrastructure as Strategy: A New Lens on AI Geopolitics

The same lesson shows up again when you zoom out: AI scaling runs into constraints. Forecasts from Epoch AI suggest that by 2030, training scale growth could be limited by power, chip manufacturing capacity, data scarcity, or latency. These are not theoretical limits. They are the kinds of limits you discover when you try to build.

Source: Epoch AI, “Can AI scaling continue through 2030?”

Compute is the obvious example. Advanced models depend on cutting-edge chips, often sourced from a small number of countries and suppliers. Concentration creates exposure to export controls, chokepoints, and shifting geopolitical conditions, as reflected in U.S. Bureau of Industry and Security materials and CSET’s explainer of the October 2023 export control update.

But compute is not the only hard dependency. Power is the more basic one. The International Energy Agency’s work on energy supply for AI and Lawrence Berkeley National Laboratory’s reporting on U.S. data center energy use both underscore how scaling is constrained by whether data centers can secure stable electricity and connect to the grid without delay. In many cases, energy access becomes the immediate limiter, long before you hit the frontier of algorithmic ideas.
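A back-of-envelope calculation shows why electricity binds first. The cluster size, per-accelerator draw, and overhead factor below are my own illustrative assumptions, not figures from the IEA or LBNL reports:

```python
# Rough facility power demand for a hypothetical 100,000-accelerator
# training cluster. All inputs are assumptions for illustration.
accelerators = 100_000
watts_per_accelerator = 1_200  # assumed all-in draw, host included
pue = 1.3                      # assumed overhead for cooling, power delivery

it_load_mw = accelerators * watts_per_accelerator / 1e6
facility_mw = it_load_mw * pue
print(f"IT load: {it_load_mw:.0f} MW, facility demand: {facility_mw:.0f} MW")
# ~156 MW of continuous demand: the output of a mid-size power plant,
# which is why interconnection queues, not ideas, set the timeline.
```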

Connectivity is another constraint that hides in plain sight. Undersea cables carry the vast majority of global internet traffic. Their routes, landing stations, redundancy, and repair capacity shape cloud performance and cross-border throughput. The International Telecommunication Union has been explicit that submarine cable resilience is now treated as critical infrastructure. When cables are fragile, the cloud is less “everywhere” than it pretends to be.

Even when the technology exists, organizational capacity can determine whether it delivers strategic value. NATO’s 2024 AI strategy highlights how uneven institutions are in their ability to integrate AI into procurement, security, and operations. There is a difference between adopting AI as a tool and fielding it as a capability inside an institution that can procure, secure, maintain, and audit it.

Then there is the data and sensor layer. Strategic AI does not always rely on public training sets. It often depends on specific, time-sensitive data collected through sensors, fused, labeled, and deployed through tightly integrated pipelines. That pipeline is itself infrastructure. It has its own chokepoints: permissions to collect, storage and retention rules, labeling capacity, security classifications, and the governance choices that determine what can be shared, with whom, and when.

Finally, AI capability often flows through platform environments: cloud services, APIs, compliance layers, and monitoring systems. Governance frameworks like the EU AI Act (2024) and NIST’s AI Risk Management Framework do not only shape ethical use. They shape deployability. They affect which systems can be launched, scaled, and maintained, and which get blocked by design because meeting the compliance and audit burden becomes the gating function.
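A deliberately simplified sketch of that gating function (the tiers loosely echo the EU AI Act’s risk categories, but the names and logic here are mine, not the regulation’s):

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    PROHIBITED = 3

def can_deploy(tier: RiskTier, conformity_docs: bool, audit_trail: bool) -> bool:
    """Toy deployment gate: higher tiers demand more evidence before launch."""
    if tier is RiskTier.PROHIBITED:
        return False
    if tier is RiskTier.HIGH:
        return conformity_docs and audit_trail
    return True

# The same model ships or stalls depending on paperwork, not parameters:
print(can_deploy(RiskTier.HIGH, conformity_docs=True, audit_trail=False))  # False
print(can_deploy(RiskTier.HIGH, conformity_docs=True, audit_trail=True))   # True
```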

When you see these dependencies interacting, the competition looks less like a sprint to invent and more like a contest over infrastructure timelines, supply constraints, and institutional readiness.

It is possible I am over-weighting the downstream layer. Some of these bottlenecks may loosen faster than expected through technical change or major capital deployment. But right now, these are the seams where things keep tearing.

What Changes When You Treat Infrastructure as the Story

Look at AI through the lens of infrastructure and three shifts follow.

First, power becomes infrastructural. Advantage stops being only about building the most advanced model or leading in benchmarks. What matters is the ability to scale, deploy, and sustain systems under real-world constraints: electricity, compute supply, connectivity, and compliance conditions. Technical innovation still matters. It just cannot win alone.

Second, governance becomes a form of strategic influence. Rules, standards, and compliance mechanisms do not simply manage risk. They shape behavior at scale. They determine what can be deployed, who gets access, and under which operating conditions. In practice, governance also redistributes costs. Stronger oversight can reduce harm and improve resilience, but it can also slow adoption, raise compliance burdens, and concentrate deployment in actors large enough to absorb those costs. That is a tradeoff, not a footnote.

Third, resilience becomes part of grand strategy. Infrastructure that used to live in the background (grids, cables, data centers, platforms) now affects national security and international coordination. Delays in permitting or slow grid approvals may look technical, but they shape outcomes. They become strategic vulnerabilities.

That said, these pressure points do not automatically determine outcomes. Systems can be reconfigured. Dependencies can shift. Governance effectiveness depends on political dynamics, institutional capacity, and coordination across allies.

Still, the direction is clear: the geopolitics of AI is moving downstream. Any serious discussion of AI power has to include the infrastructures and governance systems that decide whether AI becomes real in the world.

What We Still Don’t Know (But Should Ask)

Which dependencies are likely to harden into long term strategic constraints, and which might fade or evolve? Not every bottleneck lasts forever. Some disappear through innovation. Others disappear because someone builds enough capacity. Some persist because the real constraint is political legitimacy, not engineering.

How can alliances coordinate on standards and resilience without fragmenting the systems they rely on? Greater control can strengthen sovereignty and security. It can also produce incompatible rules and lost interoperability.

What tradeoffs emerge between resilience, control, innovation, and openness? Oversight can protect critical systems. It can also raise the cost of collaboration and slow diffusion. How much centralization is too much, and how much openness is too risky?

For governments, the challenge is managing those tensions. Tighter control over AI infrastructure may improve national resilience. It may also raise barriers to innovation and complicate international cooperation.

Ultimately, the contest may not come down to who builds the best model, but to who governs the systems that models rely on.

Disclosure

This essay is written for a general audience. A substantially different academic article, with a narrower research question, formal conceptual framework, and systematic evidence, is under development.

References (Author–Date)

Epoch AI. 2024. “Can AI scaling continue through 2030?” https://epochai.org/blog/can-ai-scaling-continue-through-2030

McKinsey Global Institute. 2023. The economic potential of generative AI: The next productivity frontier.
Landing: https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
PDF: https://www.mckinsey.com/~/media/mckinsey/business%20functions/mckinsey%20digital/our%20insights/the%20economic%20potential%20of%20generative%20ai%20the%20next%20productivity%20frontier/the-economic-potential-of-generative-ai-the-next-productivity-frontier.pdf

International Energy Agency. 2025. “Energy Supply for AI.” https://www.iea.org/reports/energy-and-ai/energy-supply-for-ai

Shehabi, Arman, et al. 2024. 2024 United States Data Center Energy Usage Report. LBNL. https://eta-publications.lbl.gov/sites/default/files/2024-12/lbnl-2024-united-states-data-center-energy-usage-report_1.pdf

International Telecommunication Union. 2024. “Submarine Cable Resilience.” https://www.itu.int/en/mediacentre/backgrounders/Pages/submarine-cable-resilience.aspx

NATO. 2024. “Summary of NATO’s Revised Artificial Intelligence (AI) Strategy.” https://www.nato.int/en/about-us/official-texts-and-resources/official-texts/2024/07/10/summary-of-natos-revised-artificial-intelligence-ai-strategy

U.S. Department of Energy. 2024. “DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers.” https://www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers

U.S. Bureau of Industry and Security. 2023. Export controls page on advanced computing/semiconductors. https://www.bis.gov/press-release/bis-updated-public-information-page-export-controls-imposed-advanced-computing-semiconductor

Center for Security and Emerging Technology (CSET). 2023. “The Commerce Department’s October 2023 Export Control Update: An Explainer.” https://cset.georgetown.edu/article/bis-2023-update-explainer/

National Institute of Standards and Technology. 2023. AI Risk Management Framework 1.0. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

European Union. 2024. Regulation (EU) 2024/1689 (AI Act). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng


Written by ttassos | I’m Tasos Tassos, Strategic Product Lead at 7projectsAi, GM at BCLA, College Lecturer, and PhD(c) in International Relations
Published by HackerNoon on 2026/02/04