Can Scaling Laws Keep AI Improving Forever? Why History Suggests Caution
Artificial intelligence has witnessed explosive growth over the past few years, driven largely by the belief that bigger models—trained with more data, powered by more chips, and housed in ever-larger data centres—will continue becoming smarter. This belief rests on what are commonly known as scaling laws: curves derived from experiments suggesting that as you increase an AI model’s size and the amount of compute behind it, its intelligence improves in a predictable manner.
OpenAI CEO Sam Altman has been among the most enthusiastic advocates of scaling laws. He argues that the intelligence of a model roughly equals the logarithm of the resources used to train and run it. In simple terms: feed an AI system exponentially more data and computing power, and its capability rises by a steady, predictable increment. This philosophy has fueled an industry-wide race to acquire GPUs, construct mega data centres, and even revive dormant nuclear plants to generate sufficient power for future AI models.
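Taken literally, a logarithmic relationship implies steep diminishing returns: each tenfold increase in compute buys roughly the same fixed bump in capability. The snippet below is a minimal sketch of that idea only; the constants `a` and `b` and the FLOP counts are made-up values for illustration, not figures from any published analysis.

```python
import math

# Toy illustration of "capability ~ log(resources)".
# The constants a and b are hypothetical; real fits vary by model and benchmark.
a, b = 10.0, 2.0

def capability(compute_flops: float) -> float:
    """Capability grows with the logarithm of training compute."""
    return a + b * math.log10(compute_flops)

for flops in (1e21, 1e22, 1e23, 1e24):
    print(f"{flops:.0e} FLOPs -> capability score {capability(flops):.1f}")

# Each 10x jump in compute adds the same fixed increment (b = 2.0),
# so exponential spending yields only linear gains in this toy model.
```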
But as history shows, scaling laws do not always hold forever. While some represent deep mathematical or physical truths, others are merely trends that hold until unforeseen constraints bring them to a halt. As AI marches into a decade defined by massive investments and equally large expectations, it is worth asking: Can scaling laws keep delivering continuous improvement, or is the world heading toward an eventual ceiling?
Where Scaling Laws Succeed: Lessons from Engineering and Computing
Scaling laws are not unique to AI. Engineers have used them for decades to model aircraft, ships, industrial fans, and turbines. Through the Buckingham π theorem, they learned that small-scale prototypes in wind tunnels could accurately predict how a full-size aircraft would behave, provided key dimensionless ratios such as the Reynolds and Mach numbers were matched. These insights helped fuel the aerospace revolution.
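As a concrete illustration of that similarity principle, the sketch below compares the Reynolds number of a full-size wing with that of a 1:10 wind-tunnel model. The speeds and lengths are made-up values chosen purely for illustration, not data from any real test programme.

```python
# Dynamic similarity: a scale model predicts full-size behaviour when
# dimensionless groups such as the Reynolds number (Re = rho * v * L / mu)
# are matched between model and prototype.
def reynolds(rho: float, v: float, length: float, mu: float) -> float:
    """Reynolds number: ratio of inertial to viscous forces in a flow."""
    return rho * v * length / mu

RHO_AIR = 1.225    # sea-level air density, kg/m^3
MU_AIR = 1.81e-5   # dynamic viscosity of air, Pa*s

full_scale = reynolds(RHO_AIR, v=70.0, length=3.0, mu=MU_AIR)  # hypothetical wing chord
model = reynolds(RHO_AIR, v=70.0, length=0.3, mu=MU_AIR)       # 1:10 tunnel model

print(f"Full-scale Re: {full_scale:.2e}")
print(f"Model Re:      {model:.2e}")
# At the same airspeed the model's Re is 10x lower, so tunnel tests must
# adjust speed, pressure, or gas properties to restore similarity.
```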
Similarly, Moore’s law, the prediction that transistor counts on chips would double roughly every two years, guided semiconductor innovation for decades. Alongside it, Dennard scaling promised that smaller transistors would switch faster while power density stayed roughly constant. These trends allowed laptops, smartphones, and supercomputers to become increasingly compact yet powerful.
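As a back-of-the-envelope sketch of what a fixed two-year doubling period implies, the snippet below projects transistor counts forward from an arbitrary starting point; the initial count is illustrative and not tied to any particular chip.

```python
def transistor_count(initial: float, years: float, doubling_period: float = 2.0) -> float:
    """Project transistor count assuming a fixed doubling period (Moore's law)."""
    return initial * 2 ** (years / doubling_period)

# Illustrative only: start from one billion transistors and project forward.
for years in (0, 2, 10, 20):
    print(f"+{years:2d} years: {transistor_count(1e9, years):.2e} transistors")

# A constant doubling period is exponential growth: twenty years of doubling
# every two years multiplies the count by 2**10, roughly a thousandfold.
```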
However, these examples also illustrate that not all scaling laws are immutable. Many eventually run into scientific, economic, or engineering barriers.
Where Scaling Laws Break: The Tacoma Narrows Warning
A powerful example of failed scaling is the 1940 collapse of the Tacoma Narrows Bridge. Engineers assumed that scaling up designs of earlier bridges would work for a longer, slimmer structure. But the aerodynamic behaviour of the larger bridge changed dramatically. Moderate winds triggered aeroelastic flutter—an oscillation that grew until the bridge tore itself apart just four months after opening.
In technology, transistor miniaturisation faced similar limits. Once transistors reached nanometre scales, quantum effects, current leakage, and noise destroyed the neat patterns predicted by Moore’s and Dennard’s laws. The industry still innovates, but through different methods—parallelism, chiplet designs, 3D stacking—rather than simple shrinkage.
Where AI Scaling Laws Stand Today
The scaling curves for large language models (LLMs) are real and have been remarkably reliable so far. When GPT-3, PaLM, LLaMA, and other early models were analysed, researchers noticed that as models grew larger and were trained on more data, their error rates fell predictably. The findings were so consistent that they provided a roadmap: “Build larger models, give them vastly more data, and performance will improve.”
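One common way researchers express such curves is a power-law fit of loss against model size and training data. The sketch below uses the functional form popularised by the Chinchilla analysis; the coefficients are illustrative placeholders in the same ballpark as published fits, not values to rely on.

```python
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style fit: L(N, D) = E + A / N**alpha + B / D**beta.
    E is the irreducible loss floor; coefficients here are illustrative."""
    E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale model size and training tokens together by 10x at each step.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e} params, D={d:.0e} tokens -> loss {predicted_loss(n, d):.3f}")

# Each 10x jump shaves a smaller, predictable slice off the loss as it
# approaches the floor E, which is what the scaling-curve roadmap captures.
```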
Yet these curves are still empirical fits to past behaviour. They lack the physical grounding of aerodynamic scaling rules or the decades-long track record of Moore’s law. More critically:
They do not account for limits in high-quality training data.
They do not incorporate energy constraints.
They ignore economic feasibility.
They assume that intelligence continues scaling without fundamental cognitive barriers.
In reality, data scarcity, power shortages, and prohibitively high costs could break the curves sooner than expected.
The Financial Reality: A Growing Funding Gap
Deutsche Bank recently highlighted an emerging “AI funding gap,” citing estimates that the industry faces a USD 800 billion mismatch between expected AI revenues and the investment needed to build chips, data centres, and power infrastructure.
Meanwhile:
JP Morgan predicts that the AI sector will need around USD 650 billion in annual revenue just to earn a modest return on the massive infrastructure investments underway.
Some analysts warn that electrical grids in the US, India, Singapore, and Europe may not keep up with the demands of new data centres.
Governments are being pulled in to facilitate new nuclear plants, renewable farms, and grid expansions.
These realities suggest that even if intelligence theoretically could keep scaling, the world might not be able to pay the energy bill required to fuel it.
What Happens Next?
There are two competing visions for the future:
1. Scaling Laws Continue to Work
If scaling curves hold, then building larger models—and the infrastructure behind them—will continue delivering predictable and dramatic advances. Under this scenario, companies like OpenAI, Google, Meta, and Anthropic are right to bet hundreds of billions on GPUs, nuclear power deals, and massive data centres.
2. Scaling Laws Hit New Bottlenecks
History shows that bottlenecks often appear suddenly:
Physical limits (power, cooling, chip supply)
Cognitive limits (difficulty solving new or abstract tasks)
Economic limits (users unwilling to pay for extremely costly AI)
Data limits (running out of high-quality training material)
Any of these could significantly slow AI progress.
Conclusion: Scaling Isn’t a Law of Nature
Scaling laws have brought AI to an astonishing point, but they are still empirical observations—not guaranteed truths. They may continue to deliver breathtaking advances, or they may break when confronted with real-world constraints such as energy availability, cost pressures, or fundamental scientific limits.
Sam Altman and other AI pioneers believe the scaling story has many more chapters. Yet the cautionary tales of the Tacoma Narrows Bridge and the limits of Moore’s law remind us that expanding a system beyond its tested boundaries can reveal hidden instabilities.

