Leading AI Expert Pushes Back Timeline for Potential Threat to Humanity

A prominent artificial intelligence (AI) researcher has revised his earlier predictions about how quickly AI could become powerful enough to pose an existential threat to humanity. Daniel Kokotajlo, a former OpenAI employee and well-known voice in AI safety discussions, now believes that the development of highly autonomous and potentially dangerous AI systems is progressing more slowly than he previously anticipated.

Kokotajlo gained widespread attention in April 2025 after publishing AI 2027, a speculative scenario that described how unchecked AI development could lead to the emergence of superintelligent systems. In one of its most alarming versions, the scenario suggested that AI could outsmart global leaders, take control of its own development, and ultimately eliminate humanity by the early 2030s to make room for data centres and energy infrastructure. The piece ignited intense debate across the technology community, academia, and government circles.

The scenario attracted both supporters and critics. Some policymakers took its warnings seriously, with reports suggesting that US vice-president JD Vance indirectly referenced AI 2027 while discussing the technological rivalry between the United States and China. Others, however, dismissed the work as overly dramatic. Gary Marcus, professor emeritus of psychology and neural science at New York University, criticised it as speculative fiction and argued that many of its assumptions lacked scientific grounding.

At the heart of Kokotajlo’s original forecast was the belief that AI systems would achieve “fully autonomous coding” by 2027. This capability, where AI could independently write, improve, and deploy complex software without human oversight, was seen as a critical step toward an “intelligence explosion.” In such a scenario, AI systems could rapidly enhance their own capabilities, accelerating progress beyond human control.

However, Kokotajlo and his collaborators have now reassessed that timeline. In a recent update, they acknowledged that AI progress has been more uneven and unpredictable than expected. Rather than reaching autonomous coding by 2027, they now believe this milestone is more likely to occur in the early 2030s. Consequently, their revised projection places the emergence of superintelligent AI closer to 2034, while avoiding specific predictions about the destruction of humanity.

“Things seem to be going somewhat slower than the AI 2027 scenario,” Kokotajlo wrote in a post on X. He noted that even when AI 2027 was published, some team members already believed the estimates were optimistic, and recent developments have pushed expectations further out.

Other experts in AI safety echo this reassessment. Malcolm Murray, an AI risk management specialist and contributor to the International AI Safety Report, observed that many researchers are extending their timelines as they confront the “jagged” nature of AI performance. While AI systems can excel in narrow tasks, they still struggle with the practical, real-world skills needed to operate autonomously in complex environments.

Murray also emphasised the role of real-world inertia. Societal, legal, institutional, and economic systems evolve slowly, making rapid and total AI-driven transformation unlikely. Even powerful AI tools require integration into existing structures, a process that can take years or decades.

The concept of artificial general intelligence (AGI) itself is also facing increased scrutiny. Henry Papadatos, executive director of the French nonprofit SaferAI, argues that the term has lost some of its usefulness. When AI systems were limited to narrow tasks such as playing chess or Go, AGI represented a clear leap in capability. Today’s models already perform a wide range of general tasks, blurring the distinction between narrow AI and AGI.

Despite these delays, major AI companies continue to pursue ambitious goals. OpenAI CEO Sam Altman revealed in October that the company aims to develop an automated AI researcher by March 2028, though he cautioned that failure remains a real possibility. This highlights both the confidence and uncertainty that define the current AI landscape.

Policy researchers also warn against simplistic assumptions about AI dominance. Andrea Castagna, an AI policy analyst based in Brussels, points out that even a superintelligent system would face challenges operating within real-world military, political, and bureaucratic frameworks. Advanced intelligence does not automatically translate into seamless control or decision-making in complex human institutions.

Overall, Kokotajlo’s revised timeline reflects a growing recognition that the future of AI is neither as immediate nor as linear as some early predictions suggested. While the risks associated with advanced AI remain serious and deserving of careful regulation, the world is proving more resistant to sudden, science-fiction-style transformations. As AI continues to evolve, the challenge lies in balancing innovation with realism, caution, and responsible governance.