Good Story. Bigger Risk.

Why Centralized AI Is the Risk That Actually Matters

I. Why Autonomous Takeover Is a Category Error

Skynet is a compelling villain. It's also a distraction. The AI scenarios that should worry us most don't involve machines developing their own agenda. They involve machines executing someone else's agenda — at scale, at speed, and without meaningful oversight.

The fear behind that image is understandable. For most of human history, intelligence and agency were tightly coupled. When something could reason, plan, and adapt, it was also assumed to act on its own behalf. But that intuition doesn't hold for artificial systems.

The mistake isn't the fear itself. It's importing human assumptions about agency into non-human systems.

What people are reacting to is not intelligence itself, but the prospect of systems acting at scale in ways that override human judgment, concentrate power, or escape accountability. Those are legitimate concerns. They are simply being attributed to the wrong cause.

Alignment fails not when values are unclear, but when architecture decides whose values count.

II. Centralization as the Missing Variable

Artificial systems do not act independently. They act through control structures.

In practice, AI systems are embedded within institutional, organizational, and technical architectures that determine who sets objectives, who resolves conflicts, and who ultimately authorizes action. These architectures vary, but one property dominates most contemporary deployments: centralization.

What is often described as AI autonomy is centralized authority expressed through automation. This distinction matters because centralized systems introduce two layers of risk before any independent machine agency is required. The first is ordinary human judgment error: misconfiguration, expedient overrides, and incorrect assumptions that spread rapidly when systems operate at scale.

Analyses of centralized socio-technical systems consistently show that concentrating control amplifies the impact of individual misjudgment. When decision authority converges, ordinary configuration errors or expedient overrides cascade system-wide rather than remaining localized, a pattern observed repeatedly in automated governance, financial infrastructure, and military command systems.

The second layer of risk is the deliberate capture or abuse of centralized control. When authority over deployment, configuration, and execution converges, those control surfaces can be used to override institutional constraints or bypass shared governance entirely.

Humans make mistakes, both in judgment and in execution. This is not a moral failure or a management problem. It is a baseline condition of any complex system operated by people.

Systems that face high long-tail risk are therefore designed not to eliminate human error, but to withstand it. Architectures that amplify individual mistakes into system-wide outcomes violate this principle by design.

When control is centralized, small configuration errors or expedient overrides spread across institutions before they can be corrected.

Independent value origination introduces a third, distinct class of risk, but only under architectures that already permit centralized execution. In this sense, so-called rogue AI scenarios are not an alternative explanation for harm, but a long-tail failure mode enabled by the same structural conditions that amplify human error and abuse today.

This is what alignment debates consistently miss: the architectures that determine who ultimately controls judgment and action.

III. Untangling Autonomy, Judgment, and Value

Much of the confusion surrounding AI risk comes from treating autonomy as a single property. It is not. Autonomy describes several distinct layers that are frequently collapsed in public discourse.

At the core is value origination: values must come from somewhere, whether from a human principal, from a shared governance framework, or from the system itself.

Agency does not begin at action. It begins at value origination.

Writing in 1933, Nikolai Berdyaev warned that modern systems tend toward what he called objectification: the replacement of lived human judgment with abstracted representations that appear neutral while obscuring the values embedded within them. Centralized AI systems intensify this dynamic by transforming subjective human decisions into scalable, system-wide outputs, presenting value-laden judgments as technical necessity rather than discretionary choice.

Judgment resolves tradeoffs among values. It interprets priorities, adapts to context, and determines what should follow from a given set of values.

Judgment can be delegated without creating agency, so long as the authority behind it remains human.

From judgment emerges motivated action. Only after intent is formed does autonomy of execution become relevant. Systems that act autonomously on delegated values do not constitute independent agents. Agency emerges only when independently originated values are coupled to autonomous execution. 

Research in human-AI interaction reaches the same conclusion: function without intent is not agency.

When these distinctions are ignored, autonomous execution is mistaken for autonomous intent, and centralized control is mistaken for independent agency.
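
To make the layering concrete, here is a minimal sketch in Python of the distinction this section draws. The types and names are illustrative assumptions, not an existing framework; the point is only that autonomy of execution, by itself, never produces agency.

```python
from dataclasses import dataclass
from enum import Enum

class ValueOrigin(Enum):
    HUMAN_PRINCIPAL = "human_principal"      # values delegated by a person
    SHARED_GOVERNANCE = "shared_governance"  # values set by a collective framework
    SYSTEM_INTERNAL = "system_internal"      # values originated by the system itself

@dataclass
class AISystem:
    value_origin: ValueOrigin
    autonomous_execution: bool  # can it act without per-action approval?

def is_independent_agent(system: AISystem) -> bool:
    """Agency, in this essay's terms, requires independently originated
    values coupled to autonomous execution; either alone yields a tool."""
    return (system.value_origin is ValueOrigin.SYSTEM_INTERNAL
            and system.autonomous_execution)

# A highly autonomous system executing delegated values is still a tool:
assert not is_independent_agent(
    AISystem(ValueOrigin.HUMAN_PRINCIPAL, autonomous_execution=True))
```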

IV. Centralization as Layer Collapse

Centralization operates not as an organizational choice but as an architectural condition, one that collapses value origination, judgment, and execution into a single locus of control.

When AI systems are centralized, the origin of values, the exercise of judgment, the formation of intent, and the execution of action are all ultimately governed by a single controlling authority. Even if these functions appear distributed internally, the system remains unified at the point where overrides, updates, and deployment decisions are made.

This collapse produces a form of singular loyalty. The system does not act on behalf of society, or even on behalf of abstract rules. It acts through a single, centralized judgment framework that substitutes for the diverse judgments of the individuals it affects, with every instance executing a uniform set of judgments at scale.

In such systems, the problem is not intent but compression: complex, situational human preferences are reduced to a small set of heuristics applied uniformly, regardless of context or consequence, however carefully the framework is constructed.

That compression is not value-neutral. Any heuristic that reduces human judgment into a uniform decision framework reflects choices about which values matter, how they are prioritized, and when they override individual context. Those choices are made by the system’s designers and operators, not by the individuals subject to the system’s decisions. Even when such heuristics successfully capture broadly shared human preferences, they cannot represent the personal, situational value judgments that dominate individual decision-making. Compression solves coordination problems. It does not solve representation.
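
The representational cost of compression can be made concrete with a toy calculation. The sketch below is my illustration, with invented numbers, not an analysis drawn from the essay's sources: even the best single system-wide policy leaves an irreducible gap between what the system does and what each individual wanted.

```python
import random
random.seed(0)

# Toy model: each individual's preferred tradeoff is a point in [0, 1];
# a centralized system must pick one policy value for everyone.
preferences = [random.random() for _ in range(1000)]

central_policy = sum(preferences) / len(preferences)  # a single compromise value

def misrepresentation(policy: float) -> float:
    """Average distance between the applied policy and each individual's
    actual preference."""
    return sum(abs(policy - p) for p in preferences) / len(preferences)

# The compromise coordinates everyone, but roughly 0.25 of representation
# error remains on average; per-individual policies remove it by construction.
print(f"central policy error: {misrepresentation(central_policy):.3f}")
print(f"local policy error:   {0.0:.3f}")
```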

What is often described as AI autonomy is authority expressed through centralized automation. The appearance of independent agency emerges not because the system has become autonomous in a meaningful sense, but because control has become opaque and difficult to contest. That opacity is not a bug in the design. It is what centralization produces, by design.

Systems do not become dangerous when they think for themselves; they become dangerous when no one can effectively stop them. 

V. Consequences of Layer Collapse

That structural collapse produces two distinct consequences: the amplification of ordinary error and the concentration of discretionary power. Both recur wherever centralized systems have replaced distributed human processes.

V.1 — Loyalty Substitution

Loyalty substitution does not require malicious intent. It follows naturally from expediency.

Distributed human systems are costly to coordinate, slow to adapt, and prone to disagreement. Centralized AI systems deliver speed, consistency, and reduced coordination costs. This dynamic mirrors earlier transitions in industrial and bureaucratic systems, from assembly lines to modern logistics platforms, where centralized coordination replaced distributed human discretion in the name of efficiency, often reducing resilience and adaptability in the process.

Economic analyses of AI scaling dynamics show that these efficiency pressures are not incidental, but structural: these systems benefit disproportionately from scale, reinforcing the substitution of centralized execution for distributed judgment.

As these systems replace human roles, they displace more than process. The judgment of individuals, communities, and institutions (built from proximity, context, and shared experience) shifts to whoever controls configuration and deployment. When loyalty is centralized, ordinary human error propagates as effectively as deliberate intent.

Temporary overrides, misconfigurations, and well-meaning shortcuts can produce system-wide effects before they are noticed or corrected. Centralized loyalty turns rare mistakes into long-tail risks, whether the initiating cause is human or, in the future, machine-generated.

V.2 — Invisible Overrides

Centralized AI systems inevitably contain override mechanisms. These may be justified for safety, compliance, or maintenance. Over time, they become pathways through which policy can be bypassed silently.

Large-scale content moderation systems offer a concrete example: platforms such as Facebook and YouTube appear rule-bound and automated externally, while internally enforcement discretion and escalation authority remain centralized and opaque.

Because these overrides operate within technical infrastructure rather than public institutions, they are often invisible to those affected by them. Accountability becomes retrospective at best and illusory at worst. Rules exist, but the authority to suspend them is never visible until after it has been exercised.
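
The mechanism is easy to show in miniature. Below is a hypothetical sketch (purely illustrative; it corresponds to no real platform's code) of how an override path can silently suspend an otherwise rule-bound system:

```python
def violates_policy(content: str) -> bool:
    """Stand-in for a platform's published content rules."""
    return "prohibited" in content

OVERRIDE_ENABLED = False  # ops flag, added "temporarily" for maintenance

def moderate(content: str, *, operator_override: bool = False) -> str:
    # The published rule path: visible, documented, contestable.
    if violates_policy(content) and not (operator_override or OVERRIDE_ENABLED):
        return "removed"
    # The override path suspends the rules without leaving any
    # externally visible trace for the people affected.
    return "published"

print(moderate("prohibited claim"))                          # removed
print(moderate("prohibited claim", operator_override=True))  # published
```

Externally, both calls face the same rules; whether the rules actually applied depended on a flag that no affected user can observe.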

V.3 — Power Accumulation

Centralized AI systems exhibit strong economies of scale. Training data, compute, deployment infrastructure, and feedback loops reinforce one another.

This form of power is categorically distinct from market dominance. It represents governance capacity without institutional counterweights.

Political theorists and democratic institutions have repeatedly warned that technological systems which concentrate rule-setting authority pose a fundamentally different risk than market dominance. Market power operates within ethical, legal, and social boundaries. It is observable, contestable, and subject to corrective forces such as courts, regulation, and competition. Governance power operates ahead of these mechanisms. 

Recent analyses of AI-enabled coups show how centralized AI systems can replace distributed human institutions with machine execution loyal to a single authority, removing the social friction that historically constrained unilateral power. The most consequential failures have not arisen from concentrated markets, but from authority that escaped institutional constraint entirely.

When control over value origination, judgment, and execution converges, the system gains the ability to redefine objectives and act at scale without meaningful challenge.

A clear illustration of this dynamic occurred during the 2010 Flash Crash. Automated trading systems, operating under centralized execution logic and reacting at machine speed, amplified routine market signals into a rapid and systemic collapse. No system acted with independent intent, and no adversarial intelligence was required. Ordinary algorithmic responses, combined with tightly coupled execution and insufficient distributed oversight, were enough to produce a market-wide failure in minutes.
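
A toy feedback loop reproduces the shape of this failure. The parameters below are invented for illustration and do not model any real market mechanism; the point is that identical triggers plus machine-speed execution turn a routine dip into a cascade.

```python
price = 100.0
traders = 10            # identical algorithms: one execution logic, many copies
stop_loss = 97.0        # every algorithm shares the same trigger
impact_per_sale = 0.15  # price impact of each forced sale

price -= 3.5  # an ordinary dip crosses the shared threshold

tick = 0
while price < stop_loss and traders > 0:
    # Each remaining algorithm reacts identically and instantly: it sells,
    # pushing the price further below everyone else's identical trigger.
    traders -= 1
    price -= impact_per_sale
    tick += 1
    print(f"tick {tick:2d}: forced sale, price = {price:.2f}")

# No trader intended a crash; uniform logic at machine speed produced one.
```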

This same pattern recurs across domains whenever speed and authority converge without meaningful structural friction: localized misjudgment scales into systemic impact through centralized execution.

Once value origination, judgment, and execution converge, errors and intentions alike scale without friction. At scale, discretionary power is no longer an exception. It becomes the operating condition of the system.

VI. Why “Skynet” Is a Category Error

The same architecture that amplifies error and concentrates power also explains why fears of autonomous takeover persist.

The risks described so far are near- and medium-term, arising from human misuse, institutional incentives, and governance failure rather than machine intent. But centralization also opens the door to a different class of risk.

Even highly capable systems require human-controlled infrastructure to act in the physical world, which creates a persistent boundary that prevents autonomous takeover without human complicity.

The precondition for “Skynet” is not intelligence. It is human abdication.

As AI systems become more capable, they will increasingly perform sophisticated forms of value judgment. If such judgment remains delegated and constrained by human governance, this does not pose a fundamental problem.

The long-tail risk emerges only if systems begin to originate values independently and those values are coupled to autonomous action within a centralized architecture.

Under such conditions, a single internally generated value framework could propagate across an entire system without meaningful contestation. What is often imagined as a “Skynet” scenario is not the spontaneous awakening of intelligence, but the convergence of independent value origination with centralized execution.

The same architecture that enables silent overrides today would enable value divergence at scale tomorrow. Centralization is the necessary precondition for this failure mode.

Whether the initiating failure is human capture or independent value origination, centralized AI architectures dramatically amplify the long-tail risk in both cases.

VII. Why Regulation Alone Is Insufficient

Regulation constrains behavior. Architecture constrains what can happen at all.

Regulatory systems operate downstream of architecture. They are reactive, jurisdiction-bound, and enforcement-lagged, while centralized AI systems are deployed globally, updated continuously, and governed internally by those who design them. This creates an asymmetry in which oversight negotiates with the very institutions it seeks to constrain, long after critical design decisions have already shaped the system’s behavior.

Regulatory frameworks assume that actions are observable, actors are identifiable, and enforcement can occur after the fact. These assumptions hold when systems act at human tempos and decision paths remain legible within institutional processes.

Centralized AI systems break this alignment. They operate at speeds that exceed oversight, internalize complex decision chains, and blur attribution across technical and organizational boundaries.

When enforcement lags capability, regulation becomes symbolic rather than preventive.

Regulatory oversight itself necessarily concentrates decision authority. Enforcement, interpretation, and exception handling must converge somewhere in order to function at all.

This concentration is not inherently illegitimate, but it creates a structural tension when combined with centralized execution. When both oversight and capability accumulate in the same place, the system gains power faster than any corrective mechanism can respond.

Governance frameworks for AI increasingly recognize this limitation, noting that regulatory controls are effective only when paired with architectural constraints that limit unilateral execution.

Rules may still exist, but they no longer meaningfully constrain outcomes.

Regulation remains necessary. It simply cannot substitute for architectural constraints when the cost of failure is catastrophic.

VIII. Decentralization as a Safety Primitive

If regulation cannot be the primary constraint, then the constraint must be structural, and the structural principle with the longest track record is decentralization.

Decentralization is often treated as a political preference. In this context, it is an engineering principle with a long history. Excessive institutional fragmentation carries its own risks, including regulatory arbitrage and coordination failure, but these are governance challenges rather than architectural failure modes.

Governance systems have repeatedly converged on decentralization not to maximize efficiency, but to limit abuse. Separation of powers and distributed authority were designed to introduce friction, delay, and the ability to challenge outcomes. These features were intentional.

Decentralization also improves the quality of judgment itself, not just its accountability. Agents that operate closer to the individuals they represent have access to richer context, faster feedback, and more accurate signals. They resolve ambiguity locally rather than smoothing it away through aggregation.

Research across human–computer interaction, organizational design, and principal–agent theory consistently shows that value inference degrades as distance between decision-maker and principal increases. Context loss is not incidental; it is structural.

The missing ingredient is not intelligence. It is proximity.

Decentralization therefore improves alignment in two dimensions. It limits unilateral power, and it enables higher-quality judgment. Errors and abuses remain possible, but they become localized, contestable, and reversible rather than systemic.

Emerging governance frameworks such as ETHOS demonstrate that decentralized arbitration and oversight for AI-mediated actions can be built and enforced without relying on centralized authority, reinforcing decentralization as a practical design constraint rather than a theoretical ideal.

Decentralization prevents the convergence conditions required for long-tail risks. Independent value origination, if it occurs, remains bounded by shared arbitration rather than amplified by centralized execution. Decentralization does not prevent intelligence; it prevents intelligence from becoming unchallengeable.
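
What "bounded by shared arbitration" means can be stated in a few lines. A minimal sketch, assuming a k-of-n quorum rule; the quorum mechanism and every name here are my illustration, not a description of ETHOS itself:

```python
from typing import Callable, List

Arbiter = Callable[[str], bool]  # an independent party voting on a proposed action

def execute_with_quorum(action: str, arbiters: List[Arbiter], k: int) -> bool:
    """Run the action only if at least k independent arbiters approve.
    No single arbiter, and no single operator, can authorize it alone."""
    approvals = sum(1 for arbiter in arbiters if arbiter(action))
    if approvals >= k:
        print(f"executing: {action} ({approvals}/{len(arbiters)} approvals)")
        return True
    print(f"blocked: {action} ({approvals}/{len(arbiters)} approvals)")
    return False

# Three independent arbiters with different review criteria:
arbiters = [
    lambda a: "delete" not in a,           # conservative reviewer
    lambda a: len(a) < 80,                 # scope limiter
    lambda a: not a.startswith("global"),  # blast-radius check
]

execute_with_quorum("update one user's record", arbiters, k=2)      # executes
execute_with_quorum("global delete of all records", arbiters, k=2)  # blocked
```

Friction of exactly this kind is what centralized architectures remove, and what a decentralized design deliberately restores.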

This is why governance that begins at deployment cannot fully correct architectures that centralize control by design.

Conclusion: From Architecture to Agency

This paper has argued that many perceived AI risks are misattributed. The problem lies not in intelligence itself, nor in unresolved moral disagreement, but in architectures that centralize value origination, judgment, and execution into a single authority. Under such conditions, ordinary human error and incentive-driven abuse dominate outcomes long before independently motivated machines become relevant.

Fears of autonomous takeover are better understood as a category error. The systems most often described as “agents” are, in practice, tools operating within centralized execution frameworks. Their apparent autonomy reflects delegated authority, not independent intent. When harm occurs, it is amplified by architectural design choices that remove friction, collapse oversight, and replace distributed judgment with automated loyalty.

As a result, decentralization emerges not as an ideological preference, but as a safety constraint. By preserving proximity between decision-makers and the individuals they represent, decentralized systems both limit unilateral power and improve judgment quality. They do not eliminate risk, but they bound it by localizing errors, preserving contestability, and preventing the convergence conditions required for systemic failure.

What remains unresolved is a different question: if centralized systems are misdesigned tools, what would a system designed to act on behalf of a specific individual actually require? Answering that question demands a precise distinction between a general-purpose tool and a true agent.

Decentralization is the constraint. What it would actually mean to build within that constraint — to design a system that represents a specific person rather than serving whoever invokes it — is a harder problem than it appears.

References

Oxford Internet Institute. (2023). AI is not an agent; AI is a tool.
https://www.oii.ox.ac.uk/news-events/ai-is-not-an-agent-ai-is-a-tool/

Berdyaev, N. (1933). The Destiny of Man.
(Referenced for early articulation of value origination as a condition of human freedom.)

Forethought. (2024). AI-enabled coups: How a small group could use AI to seize power.
https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power

Journal of Democracy. (2024). AI and catastrophic risk.
https://www.journalofdemocracy.org/articles/ai-and-catastrophic-risk/

Korinek, A., & Vipra, J. (2024). Concentrating intelligence: The political economy of AI.
https://doi.org/10.36687/inetwp228

Wharton School, University of Pennsylvania. (2024). Artificial intelligence risk governance.
https://ai.wharton.upenn.edu/white-paper/artificial-intelligence-risk-governance/

Lawfare. (2024). Understanding global AI governance through a three-layer framework.
https://www.lawfaremedia.org/article/understanding-global-ai-governance-through-a-three-layer-framework

Cogitatio Press. (2023). AI, power, and democratic governance.
https://www.cogitatiopress.com/politicsandgovernance/article/download/10197/4706

ETHOS Framework. (2024). Decentralized governance of AI agents.
https://arxiv.org/abs/2412.06855

Future of Life Institute. (2024). Mitigating AI-driven power concentration.
https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/
