Let me ask you something personal.

When did you last make a truly free decision?

I don't mean free in the philosophical sense. I mean free from algorithmic influence.

The last time you applied for a loan — an algorithm decided if you were creditworthy.

The last time you scrolled through your news feed — an algorithm decided what you believed was happening in the world.

The last time you looked for a job — an algorithm decided whether your CV even reached a human being.

The last time you booked a flight, ordered food, watched a film, or searched for a doctor — algorithms were shaping your choices, your costs, and your options.

Now here's the question nobody is asking loudly enough:

If algorithms are making these decisions — who is actually governing you?

Because it's not your elected government. Not anymore.

The Old World vs. The New World

For most of modern history, the deal between citizens and governments was relatively simple.

You pay taxes. You follow the laws. In return, the government provides security, infrastructure, and a set of rules that apply equally to everyone.

The government had power. But that power was — at least in theory — accountable. You could vote. You could protest. You could take the government to court.

The rules were written in language humans could read, debate, and challenge.

That world is ending.

Not with a revolution. Not with a coup. But quietly, incrementally, one algorithm at a time.

The First Domino describes this shift with striking clarity:

“The entities able to command digital assets found themselves with disproportionate leverage in shaping economic outcomes. This paradigm shift sparked debates within international spheres about technology governance, equitable resource distribution, and the political economy of intelligence itself.”

The new power centres are not governments. They are the entities that control the most powerful AI.

And the rules they write are not in language you can read. They're in code. They're in training data. They're in the invisible logic of systems that make billions of decisions every day — decisions that shape your life — without any democratic mandate whatsoever.

That is the Sovereignty Trap.

What Is Sovereignty, Really?

Let's strip the word down to its core.

Sovereignty means: who has the final say?

In a democracy, the answer is supposed to be: the people, through their elected representatives.

In practice, sovereignty has always been more complicated than that. Corporations have always had power. Markets have always constrained governments. International institutions have always limited what individual nations can do.

But AI has changed the nature of this constraint in a fundamental way.

Previous constraints on government power were at least visible. You could see a corporation lobbying. You could watch a market react. You could read an IMF report.

Algorithmic power is largely invisible.

When an AI system decides your credit score, you don't see the logic.

When an algorithm shapes your news feed, you don't see the curation.

When a hiring platform filters your CV, you don't see the criteria.

The decisions are made. The outcomes are real. But the reasoning is hidden inside a black box that even its creators often can't fully explain.

This is not just a privacy problem. It's a governance problem.

Because governance — real governance — requires accountability. And you cannot hold accountable what you cannot see.

The Panopticon Goes Digital

There's a concept in political philosophy called the panopticon.

It was designed by the philosopher Jeremy Bentham in the 18th century as a prison where a single guard could watch all prisoners at all times — without the prisoners knowing when they were being watched.

The result? The prisoners would behave as if they were always being watched. Because they might be.

Bentham's panopticon was a building. It was physical. It was limited.

AI has built a panopticon with no walls and no limits.

The First Domino describes what this looks like in practice:

“Some regimes employ sophisticated AI-driven systems for extensive surveillance and social credit scoring, blending biometric identification with vast troves of personal data to monitor citizens' behaviours. As AI systems become more capable, these mechanisms could evolve beyond reactive monitoring into preemptive control, utilising predictive algorithms to identify 'risks' or 'undesirables' based on complex, often opaque decision criteria.”

The Chinese Social Credit System is the most cited example. But it would be a mistake to think this is only a problem in authoritarian states.

In democratic countries, the same technologies are being deployed — just with different branding.

  • Credit scoring algorithms that determine who gets loans and who doesn't

  • Predictive policing systems that decide which neighbourhoods get more surveillance

  • Social media algorithms that determine which political voices get amplified and which get suppressed

  • Hiring algorithms that filter candidates before a human ever sees their name

  • Insurance pricing models that charge different people different rates based on data they never consented to share

None of these systems were voted on. None of them are fully transparent. None of them can be easily challenged or appealed.

They are governance without democracy. Power without accountability.

The Crisis Accelerant

Here's where the Japan bond crash connects directly to this story.

The First Domino makes a point that most commentators miss:

“In times of crisis, governments — especially those lacking strong democratic institutions — may turn reflexively toward control mechanisms to maintain order, rationalising intrusive AI implementations as necessary stabilising measures.”

Think about what happens when an economy collapses.

Unemployment spikes. Social unrest rises. People are scared and angry. Governments face pressure to do something.

And AI offers governments something irresistible in that moment: the ability to monitor, predict, and pre-empt social instability at a scale and speed that was previously impossible.

Track who is organising protests. Identify who is spreading "misinformation." Flag who is likely to default on their loans. Predict which neighbourhoods are at risk of unrest.

In a crisis, these capabilities look like tools of stability.

But once deployed, they don't get undeployed when the crisis ends.

The surveillance infrastructure built during COVID-19 didn't disappear when the pandemic ended. The emergency powers granted to governments during financial crises rarely get fully returned.

AI-enabled control, once normalised, becomes the new baseline.

And the Sovereignty Trap closes a little tighter.

The Soft Totalitarianism Nobody Talks About

Here's the concept from The First Domino that I find most unsettling.

The book describes something called "soft totalitarianism."

Traditional totalitarianism is brutal and visible. Secret police. Gulags. Disappearances. It requires enormous resources and generates enormous resistance.

Soft totalitarianism is different. It doesn't need to threaten you. It just needs to nudge you.

“Authoritarian powers could exploit such technologies to induce conformity through subliminal conditioning or behavioural shaping, bypassing overt coercion altogether.”

“The prospect of 'soft totalitarianism' mediated by AI — where compliance is engineered rather than enforced by brute force — presents a subtle but potent form of domination that might elude traditional resistance methods.”

Think about what this looks like in practice.

You don't need to arrest someone for posting dissenting views. You just need to make sure their posts reach fewer people. Quietly. Without explanation.

You don't need to ban a political movement. You just need to make sure its members get slightly worse loan rates, slightly longer waits at government offices, slightly more scrutiny at border crossings. Nothing dramatic. Just friction.

You don't need to imprison journalists. You just need to make sure their articles don't appear in search results. Or that their social media accounts get flagged for "policy violations" at inconvenient moments.

None of these actions require a law. None of them require a court order. None of them leave a visible paper trail.

They just require control of the algorithm.

And here's the truly chilling part: you don't even need a government to do this. A private company with enough market power can achieve the same effect.

The Nation-State Is Losing the Plot

Let's zoom out to the geopolitical level.

For the past few centuries, the nation-state has been the primary unit of power in the world. Countries had armies, currencies, laws, and borders. Power was measured in territory, population, and GDP.

AI is dismantling this framework.

The First Domino is explicit:

“The entities able to marshal computational heft and data access become de facto centres of power, dwarfing nation-states anchored in legacy institutions and debt-dependent frameworks.”

A technology company with a powerful AI model can:

  • Move capital across borders faster than any government can regulate

  • Influence public opinion in multiple countries simultaneously

  • Provide services that governments depend on — and withdraw them if governments become inconvenient

  • Accumulate data about citizens that governments don't have access to

Meanwhile, governments are still operating on frameworks designed for the 20th century. Laws that take years to pass. Regulatory agencies that don't understand the technology they're supposed to oversee. International institutions built for a world of nation-states, not a world of AI platforms.

The result is a power vacuum.

And power vacuums don't stay empty.

They get filled — by whoever moves fastest. Which, right now, is the companies and governments that are building and deploying the most powerful AI.

The Disinformation Weapon

There's one more dimension of the Sovereignty Trap that deserves its own section.

AI has made it trivially easy to manufacture reality.

Deepfakes — AI-generated videos that show real people saying things they never said.

Synthetic news — AI-generated articles that look like journalism but are pure fabrication.

Coordinated influence campaigns — AI-generated social media accounts that can flood a platform with a particular narrative, making fringe views look mainstream.

The First Domino describes this as:

“AI can degrade trust in independent media, foment division within opposition groups, and manufacture the illusion of widespread popular consensus supporting authoritarian governance.”

Democracy depends on a shared reality. On the idea that citizens can access roughly the same facts and make informed decisions based on them.

AI-powered disinformation attacks that shared reality directly.

When you can't trust what you see. When you can't tell if a video is real. When you don't know if the "grassroots movement" you're reading about is genuine or manufactured — you stop trusting everything.

And when citizens stop trusting everything, they become easier to manipulate. Easier to frighten. Easier to control.

Just like during the COVID-19 lockdowns.

The Sovereignty Trap is not just about surveillance. It's about the erosion of the epistemic foundation that democracy requires to function.

But Wait — Isn't AI Also Democratising Power?

Fair question. And the honest answer is: yes, potentially.

The same AI tools that authoritarian governments use for surveillance can be used by citizens for:

  • Encrypted, decentralised communication that governments can't easily monitor

  • AI-powered fact-checking tools that can identify deepfakes and disinformation

  • Open-source AI models that give individuals access to capabilities previously reserved for large institutions

  • Transparency tools that can audit algorithmic decision-making and expose bias

The First Domino acknowledges this:

“Populations exposed to digital authoritarianism are developing adaptive strategies, such as decentralised communication platforms and AI-driven privacy tools.”

The technology is genuinely dual-use. The same capabilities that enable control also enable resistance.

But here's the asymmetry that matters:

Governments and large corporations have more resources, more data, and more computational power than individual citizens.

The tools of control scale better than the tools of resistance.

Unless — and this is the crucial "unless" — democratic societies make deliberate, sustained choices to:

  1. Regulate algorithmic decision-making with real transparency requirements

  2. Invest in open-source AI that isn't controlled by a handful of private companies

  3. Build international frameworks that constrain the use of AI for mass surveillance and political manipulation

  4. Educate citizens about how algorithmic systems work and how to navigate them

None of this is inevitable. All of it requires political will that is currently in short supply.

What Does This Mean for You?

Let me bring this back to the personal level.

You are already living inside the Sovereignty Trap. The question is whether you're aware of it.

Here are four things worth doing right now:

1. Audit your algorithmic exposure

Think about the systems that make decisions about your life. Your credit score. Your social media feeds. Your job search platforms. Your insurance. Your news sources.

Ask: Who controls these systems? What are their incentives? What data are they using? Can I challenge their decisions?

You may not be able to change all of them. But awareness is the first step to agency.

2. Diversify your information sources deliberately

If your understanding of the world comes primarily from one or two algorithmic platforms, you are seeing a curated version of reality — curated by systems optimised for engagement, not truth.

Read across sources. Seek out perspectives that challenge your existing views. Develop the habit of asking: "Why am I seeing this? What am I not seeing?"

3. Understand your digital footprint

Every interaction you have with digital systems generates data. That data is used to build models of who you are, what you want, and how you're likely to behave.

You can't opt out of this entirely. But you can be more intentional about what you share, where you share it, and with whom.

4. Pay attention to AI governance — it's the most important political issue nobody is talking about

The regulatory decisions being made right now about AI — who can use it, for what purposes, with what transparency requirements — will shape the balance of power between citizens and institutions for decades.

This is not a niche tech policy issue. It is the central political question of our time.

Who governs the algorithms governs the world.

The Bottom Line

The Sovereignty Trap is not a future risk. It is a present reality.

Algorithms are already making decisions that shape your life. Governments are already using AI to monitor, predict, and influence citizen behaviour. Private companies are already wielding more power over public discourse than most elected governments.

In the book, the Japan bond crash accelerates this dynamic. Economic crises always push governments toward more control. And AI gives them control tools that no previous generation of governments has ever had.

The question is not whether this is happening.

The question is whether enough people understand it — and care enough to demand something different.

Democracy has survived many threats. It can survive this one too.

But only if citizens understand what they're up against.

[Part 1: The Match That Could Burn the World →] [Part 2: The $1 Trillion Sell-Off →] [Part 3: The End of Free Money →] [Part 4: The Greatest Wealth Transfer in History →] [Part 5: AI Is Not Just a Tool — It's the New Central Bank →] [Part 6: Scaling Laws →] [Part 7: 5 Ways AI Could Destroy / Save the World Economy →] Next up — Part 9: Your Job Is Not Safe — But Your Wealth Can Be

Disclaimer: The Sterling Report and all associated content by Slone Sterling are for educational and informational purposes only. We do not provide investment, tax, or legal advice. All strategies and investments involve risk of loss. Please consult with a licensed professional before making any financial decisions.

A Final Note

This is Part 8 of "The First Domino" — a 10-part series explaining the biggest economic and technological shift of our lifetime in plain language. Based on my book of the same name.

If this made you think, share it with one person who needs to read it.

Sources & Further Reading
  • The First Domino by Slone Sterling — available now on Amazon

  • Shoshana Zuboff, The Age of Surveillance Capitalism (2019)

  • The Guardian — "The Rise of Algorithmic Governance"

  • MIT Technology Review — "Who Governs AI?"

  • Electronic Frontier Foundation — AI and Civil Liberties

  • Freedom House — Freedom on the Net Annual Report

  • European Parliament — EU AI Act Overview

Precision in a world of noise.

Analysis by Slone Sterling
