NextWave AI

Human Typing Speed May Be Holding Back the Rise of AGI, Says OpenAI Codex Lead


Artificial Intelligence is progressing at an unprecedented pace. From generative chatbots to advanced coding assistants, AI systems are increasingly capable of performing tasks that once required human intelligence. Major technology companies such as OpenAI, Google, Microsoft, and others are investing hundreds of billions of dollars into AI research and infrastructure. Yet, despite this rapid progress, the long-envisioned goal of Artificial General Intelligence (AGI)—AI that can think, reason, and learn like a human—remains elusive.

According to Alexander Embiricos, the lead of OpenAI’s Codex project, the biggest barrier to achieving AGI may not be limitations in AI models themselves, but rather an unexpected and underappreciated factor: human typing speed.

A Surprising Bottleneck in AGI Development

Speaking on an episode of Lenny’s Podcast, Embiricos argued that the current AI ecosystem is constrained by how humans interact with machines. While AI models can process massive amounts of data and generate outputs at incredible speeds, they still depend heavily on humans to write prompts, supervise workflows, and validate results.

This reliance on manual input, Embiricos believes, has become a critical bottleneck. He described human typing speed and the ability to multitask while crafting prompts as the “current underappreciated limiting factor” in the journey toward AGI.

In simple terms, AI systems are advancing faster than humans can effectively communicate with and oversee them.

The Human-in-the-Loop Problem

Today’s AI workflows are built around a “human-in-the-loop” model. Humans instruct AI systems, review their outputs, correct errors, and approve final results. While this approach ensures accuracy and safety, it also slows progress—especially as AI systems become more capable.

Embiricos highlighted that even when AI agents are able to observe and assist with human tasks, the need for humans to manually validate every output significantly limits scalability. “You can have an agent watch all the work you’re doing,” he explained, “but if you don’t have the agent also validating its work, then you’re still bottlenecked on reviewing everything.”

This is particularly evident in software development, where AI-generated code must still be carefully checked by human developers. As AI systems take on larger workloads, the time required for human oversight becomes increasingly impractical.

Rethinking AI System Design

To overcome this limitation, Embiricos advocates for a fundamental shift in how AI systems are designed and deployed. Rather than relying on humans to constantly guide and verify AI behavior, future systems should be capable of self-review and self-validation.

His vision involves AI agents that are “default useful”—systems that can autonomously perform tasks correctly without requiring detailed instructions or constant supervision. By reducing the need for humans to write prompts and review outputs, productivity could increase dramatically.

“We need to unburden humans from having to write prompts and validate AI’s work,” Embiricos said, emphasizing that humans simply aren’t fast enough to keep pace with increasingly capable AI systems.
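The shift Embiricos describes can be sketched as a generate-then-validate loop, in which the agent checks its own output and only escalates to a human when its self-check keeps failing. This is a minimal illustration of the idea, not anything from OpenAI's Codex; all function names here are hypothetical:

```python
from typing import Callable

def run_with_self_validation(
    task: str,
    generate: Callable[[str], str],
    validate: Callable[[str, str], bool],
    max_attempts: int = 3,
) -> tuple[str, bool]:
    """Let an agent produce and check its own output.

    Returns (output, needs_human_review). A human is pulled in only
    when the agent's own validation fails repeatedly -- moving the
    human from reviewing every output to handling exceptions.
    """
    output = ""
    for _ in range(max_attempts):
        output = generate(task)
        if validate(task, output):
            return output, False  # agent vouches for its own work
    return output, True  # escalate: human review still needed


# Toy example: "generate" a doubled number and validate the arithmetic.
result, needs_review = run_with_self_validation(
    "double 21",
    generate=lambda t: str(2 * int(t.split()[-1])),
    validate=lambda t, out: int(out) == 2 * int(t.split()[-1]),
)
print(result, needs_review)  # -> 42 False
```

The design choice worth noticing is the return signature: instead of a binary "done / not done," the loop reports whether human attention is still required, which is exactly the bottleneck the quote above is about.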

The Promise of Hockey-Stick Growth

Embiricos used the term “hockey stick growth” to describe the kind of productivity surge that could occur once AI systems become more autonomous. In business and technology, hockey-stick growth refers to a slow initial phase followed by a sharp, exponential increase.

He believes that once AI agents can independently execute and verify tasks, productivity gains will rise rapidly—first among early adopters, and eventually across entire industries. “Starting next year, we’re going to see early adopters starting to hockey stick their productivity,” he predicted. Over time, larger organizations will follow suit, leading to widespread automation.

These productivity gains, Embiricos suggests, will feed back into AI research itself. Faster workflows and more efficient experimentation will accelerate innovation within AI labs, bringing AGI closer to reality.

No One-Size-Fits-All Solution

Despite his optimism, Embiricos acknowledged that there is no universal solution for fully automated AI workflows. Different applications—such as healthcare, finance, software development, and creative industries—will each require customized approaches to autonomy and validation.

Safety, reliability, and ethical considerations remain critical challenges. Allowing AI systems to validate their own outputs raises concerns about error propagation and unintended behavior. As a result, the transition away from human oversight will need to be gradual and carefully engineered.

Nevertheless, Embiricos remains confident that progress is imminent. He believes that advances in AI agent architecture will soon allow systems to operate with minimal human intervention, at least in well-defined domains.

A New Perspective on AGI

Traditionally, discussions around AGI have focused on model size, computational power, and data availability. Embiricos’ perspective offers a fresh angle: the idea that human limitations, rather than machine limitations, may now be the primary obstacle.

If AI systems can learn not only to perform tasks but also to evaluate their own performance, the role of humans could shift from direct supervision to higher-level oversight. This transition may mark a critical turning point in the development of AGI.

As Embiricos concluded, once productivity gains driven by autonomous AI begin flowing back into AI research itself, “that’s when we’ll basically be at AGI.”

Conclusion

The path to Artificial General Intelligence may not hinge solely on more powerful algorithms or faster hardware. Instead, it may depend on rethinking how humans and AI systems collaborate. By reducing dependence on human typing speed and manual validation, AI could unlock a new era of productivity and intelligence.