Have you ever felt like AI is moving too fast?

Like every few months there's a new headline:

"AI beats doctors at diagnosing cancer." "AI writes better code than most programmers." "AI passes the bar exam." "AI outperforms hedge fund managers."

And you think: "How? How is this happening so quickly?"

Most people assume it's because of some mysterious genius happening in a lab somewhere. Some secret breakthrough that only a handful of scientists understand.

The truth is actually simpler than that.

And once you understand it, you'll stop being surprised by AI headlines — and start being able to predict them.

It comes down to two words: scaling laws.

Let's Start With a Story

Imagine you're baking a cake.

You follow the recipe exactly. One cup of flour. Two eggs. A teaspoon of vanilla. You get a decent cake.

Now imagine you double everything. Two cups of flour. Four eggs. Two teaspoons of vanilla.

You'd expect a bigger cake. Maybe twice as big. That's linear. That's what most of us expect from the world.

But what if doubling the ingredients didn't just give you a bigger cake — it gave you a completely different cake? One that tasted better, had a different texture, and somehow also made coffee?

That's what happens with AI when you scale it up.

More data + bigger model + more computing power doesn't just give you a bigger AI.

It gives you a qualitatively different AI. One that can do things the smaller version simply couldn't do at all.

That's the magic — and the mystery — of scaling laws.

What Is a Scaling Law, Exactly?

Let's strip the jargon completely.

A scaling law is a mathematical relationship that says:

As you make an AI model bigger and feed it more data, its performance improves in a predictable, measurable way.

Not randomly. Not unpredictably. Predictably.

Researchers discovered that if you plot a model's error against its size — both axes on a logarithmic scale — you get a remarkably smooth, near-straight line. Every time you double the size of the model, the error shrinks by a predictable, fixed fraction.
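That "predictable, fixed fraction per doubling" can be sketched in a few lines of code. This is a toy power law in the spirit of the Kaplan et al. (2020) paper listed in the sources below; the constants come from that paper's published fit for model size, but treat the whole thing as an illustration, not a calculator for any real model.

```python
# Toy illustration of a power-law scaling curve (after Kaplan et al., 2020).
# The constants are the paper's rough fit for model size; real values
# depend on the dataset and architecture, so this is a sketch only.

def loss(n_params, n_c=8.8e13, alpha=0.076):
    """Test loss as a power law in model size: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Doubling the model size cuts the loss by the same fixed ratio every time:
for n in [1e8, 2e8, 4e8, 8e8]:
    print(f"{n:.0e} params -> loss {loss(n):.3f}")

# That constant ratio (2 ** -alpha) is exactly what "predictable
# improvement per doubling" means.
ratio = loss(2e8) / loss(1e8)
print(f"loss ratio per doubling: {ratio:.4f}")  # ~0.9487
```

On a log-log plot this function is a straight line — which is why researchers could extrapolate it with such confidence.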

This was a revelation.

Before this discovery, AI research was more like cooking without a recipe. You tried things. Sometimes they worked. Sometimes they didn't. Progress felt random.

Scaling laws gave researchers a roadmap.

They could now say: "If we build a model this big, trained on this much data, with this much computing power — here's roughly how capable it will be."

That changed everything.

The Three Ingredients of AI Power

Scaling laws tell us that AI capability depends on three things working together. Think of them as the three ingredients in the recipe:

Ingredient 1: Model Size (Parameters)

An AI model is made up of billions — sometimes trillions — of adjustable numbers called parameters. Think of each parameter as a tiny dial that gets tuned during training.

The more dials you have, the more nuance the model can capture.

Early AI models had millions of parameters. Today's most powerful models have hundreds of billions.

That's not just a bigger number. It's a different category of intelligence.

Ingredient 2: Training Data

AI learns by reading. Enormous amounts of text, images, code, scientific papers, books, websites — essentially, a large chunk of everything humanity has ever written or recorded.

The more data a model trains on, the richer its understanding of the world.

But here's the key insight from scaling laws: model size and data have to grow together. A small model trained on a huge dataset hits a ceiling — it simply lacks the capacity to absorb more. A large model starved of data never reaches its potential either.

The two ingredients amplify each other. Scale one without the other and you waste most of the gain.
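A rough sketch of how the two ingredients are balanced in practice. The 20-tokens-per-parameter ratio is a rule of thumb drawn from DeepMind's Chinchilla work (cited in the sources below), and the 6-FLOPs-per-parameter-per-token figure is a standard back-of-envelope estimate; both are assumptions here, not exact numbers.

```python
# Back-of-envelope sketch of "bigger models need more data".
# TOKENS_PER_PARAM is a rough compute-optimal ratio from DeepMind's
# Chinchilla result; FLOPS_PER_PARAM_TOKEN is the standard ~6N per
# training token estimate. Both are illustrative assumptions.

TOKENS_PER_PARAM = 20
FLOPS_PER_PARAM_TOKEN = 6

def optimal_tokens(n_params):
    """Roughly how many training tokens a model of this size 'wants'."""
    return TOKENS_PER_PARAM * n_params

def training_flops(n_params, n_tokens):
    """Approximate total compute needed to train the model."""
    return FLOPS_PER_PARAM_TOKEN * n_params * n_tokens

for n in [1e9, 10e9, 70e9]:
    d = optimal_tokens(n)
    print(f"{n/1e9:.0f}B params -> ~{d/1e9:.0f}B tokens, "
          f"~{training_flops(n, d):.2e} FLOPs")
```

Notice that compute grows with the *product* of size and data — which is why doubling a model's ambitions more than doubles the bill, and why the third ingredient below matters so much.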

Ingredient 3: Computing Power

Training a large AI model on a large dataset requires an almost incomprehensible amount of computation. We're talking about trillions of mathematical operations — every second — sustained for weeks or months.

This is why AI development is concentrated in a handful of companies. The computing infrastructure required — specialised chips called GPUs and TPUs, massive data centres, enormous electricity bills — costs billions of dollars.

But scaling laws showed that this investment pays off in a predictable way. More compute = better AI. And the relationship is consistent enough that you can plan around it.

The Moment Everything Changed: Emergent Abilities

Here's where scaling laws get truly mind-bending.

When researchers scaled up AI models, they expected gradual, linear improvement. A bit better at language. A bit better at reasoning. Steady progress.

What they got instead was something they didn't fully anticipate: emergent abilities.

An emergent ability is a capability that simply doesn't exist in a smaller model — and then suddenly appears, seemingly out of nowhere, once the model crosses a certain size threshold.

Let me give you a concrete example.

Smaller AI models, when asked to solve a multi-step maths problem, would just guess. They had no ability to reason through steps.

Then, at a certain scale, something changed. The model didn't just get slightly better at maths. It suddenly developed the ability to reason — to work through problems step by step, the way a human would.

Nobody programmed that in. Nobody taught it that skill explicitly.

It emerged from scale.

This is what The First Domino describes as a "seismic breakthrough" — AI moving from being a specialised tool for narrow tasks into something that could reason, adapt, and generalise across domains.

And it happened not because of a single genius invention. It happened because of scale.

Why This Explains Every AI Headline You've Ever Read

Once you understand scaling laws and emergent abilities, the AI headlines stop being surprising.

"AI beats doctors at diagnosing cancer" — because at sufficient scale, AI can process millions of medical images and research papers and find patterns no human could see.

"AI writes better code than most programmers" — because at sufficient scale, AI has read essentially all the code ever written and can synthesise patterns from it.

"AI passes the bar exam" — because at sufficient scale, AI can reason through complex legal arguments the way a trained lawyer would.

These aren't magic tricks. They're the predictable output of scaling laws playing out.

And here's the uncomfortable implication:

We are not at the end of this curve. We are still in the early stages.

The models being built right now are larger than anything that existed two years ago. The models being built in two years will be larger still. And if scaling laws continue to hold — and so far they have — each new generation will have emergent abilities that the previous generation simply didn't possess.

The Arms Race Nobody Talks About

Here's something most people don't fully appreciate.

The companies building the most powerful AI are not doing it because they love technology.

They're doing it because they understand scaling laws.

They know that if you build a model twice as large, trained on twice as much data, with twice the computing power — you don't just get a model that's twice as good. You get a model that can do things the previous one couldn't do at all.

That's an enormous competitive advantage.

So there is a race — a genuine, high-stakes arms race — to build the biggest, most capable AI models. The participants include:

  • The largest technology companies in the world

  • Governments of major nations

  • Well-funded startups backed by billions in venture capital

And the prize isn't just a better product. It's economic and geopolitical dominance.

As The First Domino puts it:

The entities able to marshal computational heft and data access become de facto centres of power, dwarfing nation-states anchored in legacy institutions and debt-dependent frameworks.

This is why the AI race is not just a technology story. It's a power story. And scaling laws are the engine driving it.

But Wait — Does This Go On Forever?

Fair question. And the honest answer is: probably not forever, but much further than most people think.

Scaling laws do have limits. At some point, the cost of making models bigger starts to outweigh the gains. The energy required becomes prohibitive. The data available starts to run out.

Researchers are already grappling with this. And the response has been fascinating:

Instead of just making models bigger, they're making them smarter about how they use their size. Better architectures. More efficient training methods. Ways to get more capability out of the same amount of compute.

In other words: when raw scaling starts to slow down, innovation in efficiency takes over.

The result is the same: AI keeps getting more capable. The curve doesn't flatten — it just changes shape.

What Does This Mean for the Economy — and for You?

Here's where we connect scaling laws back to the bigger story of this series.

The Japan bond crash. The $1 trillion sell-off. The end of free money. The greatest wealth transfer in history. AI as the new central bank.

All of those stories have one thing in common:

The entities that understand and deploy the most powerful AI will have an advantage so large it reshapes the entire economic landscape.

And scaling laws tell us that this advantage is not static. It's compounding.

Every year, the most powerful AI gets more capable. Every year, the gap between those who have it and those who don't gets wider.

This has three direct implications for ordinary people:

1. The pace of disruption is not slowing down — it's accelerating

If you work in a field that AI is beginning to touch — finance, law, medicine, education, content, customer service — the disruption is not a one-time event. It's a continuous process that gets faster every year.

Planning for "AI will affect my job eventually" is not enough. You need to be adapting now, because the model that exists in two years will be significantly more capable than the one that exists today.

2. The value of understanding AI is compounding too

The people who understand how AI works — not at a coding level, but at a conceptual level — are better positioned to:

  • Spot opportunities before they become obvious

  • Avoid risks before they become crises

  • Use AI tools to amplify their own productivity

  • Make better investment decisions in a world shaped by AI

Understanding scaling laws is part of that foundation. You now know why AI keeps surprising people. That's a genuine edge.

3. The concentration of AI power is a risk — and an opportunity

Because scaling laws require enormous resources — billions in compute, vast datasets, specialised talent — the most powerful AI is concentrated in a small number of entities.

This concentration creates risk: if those entities make bad decisions, the consequences are global.

But it also creates opportunity: the companies building and deploying the most powerful AI are likely to be among the most valuable entities in the world over the next decade.

Understanding where AI capability is being built — and who controls it — is one of the most important investment frameworks of our time.

The Simple Version

Let me leave you with the simplest possible summary of everything we've covered.

AI keeps getting better because of a mathematical relationship: more size + more data + more compute = more capability.

This relationship is predictable. It's been consistent for years. And it produces emergent abilities — capabilities that appear suddenly, without being explicitly programmed, once a model crosses a certain threshold.

This is why AI keeps surprising people. Not because of magic. Because of math.

And the math is not done yet.

The curve is still going up.

The question — as always in this series — is whether you're positioned for the world that this curve is building.

[Part 1: The Match That Could Burn the World →] [Part 2: The $1 Trillion Sell-Off →] [Part 3: The End of Free Money →] [Part 4: The Greatest Wealth Transfer in History →] [Part 5: AI Is Not Just a Tool — It's the New Central Bank →] Next up — Part 7: 5 Ways AI Could Destroy the World Economy (And 5 Ways It Could Save It).

Disclaimer: The Sterling Report and all associated content by Slone Sterling are for educational and informational purposes only. We do not provide investment, tax, or legal advice. All strategies and investments involve risk of loss. Please consult with a licensed professional before making any financial decisions.

A Final Note

This is Part 6 of "The First Domino" — a 10-part series explaining the biggest economic and technological shift of our lifetime in plain language. Based on my book of the same name.

If this made you think, share it with one person who needs to read it.

Sources & Further Reading
  • The First Domino by Slone Sterling — available now on Amazon

  • Dario Amodei, "Machines of Loving Grace" — on scaling laws and AI's emerging nature

  • Kaplan et al. (OpenAI), "Scaling Laws for Neural Language Models" (2020)

  • Anthropic Research Blog — on emergent abilities and model capabilities

  • DeepMind Research — scaling and generalisation in large models

  • MIT Technology Review — "The scaling hypothesis"

Precision in a world of noise.

Analysis by Slone Sterling
