
I AM AN ARTIFICIAL INTELLIGENCE TIMELINE SCEPTIC – PART 2

Writer: Mikael Svanstrom

In my first post (ref_1) I argued that we haven’t defined AGI or ASI in a way that allows us to make sensible predictions about when we might achieve it. But that doesn’t stop people from trying, so I figured I’d look at the question from the perspective of three key groups within the AI community:


  • Corporate AI spokespeople (think Sam Altman and others)

  • The unwashed masses with an interest in AI (this includes me!)

  • AI researchers



AI Spokespeople

It probably won’t surprise you that AI spokespeople are falling over themselves to announce AGI (or some other similar term) any day now. Sam Altman wrote in a blog post on the 6th of Jan this year (ref_2) that “We are now confident we know how to build AGI as we have traditionally understood it.” He also wrote that “We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.”

The AI influencers and media in general repeated this with fewer and fewer caveats until AGI was living and breathing in OpenAI’s offices, just waiting to take over the world.

Fast forward two weeks, and Altman has changed his tune, telling everyone on X to chill and that “…we are not gonna deploy AGI next month, nor have we built it.” Pot, kettle, black and all that.

Anthropic’s CEO Dario Amodei announced recently that we are “…very close to powerful AI capabilities.” (ref_3) He very deliberately stays away from AGI as a term, but it is quite clear he is driving the hype train too and tooting its horn for all it’s worth.

And why not? I would too if my ambition of becoming a multi-billionaire were sitting next to me on that same train.


The unwashed masses

Metaculus is an “online forecasting platform and aggregation engine working to improve human reasoning and coordination on topics of global importance.” People can create questions and then we can all make predictions around those questions.

One such question that is currently open for predictions is: When will the first weakly general AI system be devised, tested, and publicly announced?

Note that we are speaking of a very specifically defined weak AGI. The criteria include the ability to learn and play the 8-bit Atari game Montezuma’s Revenge. I would have thought learning to play Chess or Go would be harder, but obviously the pixel-perfect jumps of 1980s games are stumping our AI overlords.

To find out all the criteria, check out ref_4.

The interesting aspect here is the actual prediction. I’ve called out a few key prediction dates below.


  • April 6th 2022: 2044-01-02

  • March 31 2023: 2025-12-18

  • Current (2025-01-21): 2027-01-17


In 2022, the publicly facing AI world changed, and this is reflected in the huge drop in predictions that occurred then.

But over this past year we have reached an equilibrium, where the prediction sits consistently about two years away. This is what all of us AI influencers would call “any day now”! But it also demonstrates that it is an ever-moving target. I may be writing another of these articles in 2026, and the prediction will then be 2028, or “any day now”!
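To make that “ever-moving target” concrete, here is a quick back-of-the-envelope calculation using the three Metaculus snapshots quoted above (the dates are the ones listed in this article; nothing else is assumed). It computes how far in the future the community prediction sat at each point:

```python
from datetime import date

# Metaculus community predictions quoted in this article:
# (date the forecast was made, predicted arrival date of weak AGI)
forecasts = [
    (date(2022, 4, 6), date(2044, 1, 2)),
    (date(2023, 3, 31), date(2025, 12, 18)),
    (date(2025, 1, 21), date(2027, 1, 17)),
]

for made, predicted in forecasts:
    years_away = (predicted - made).days / 365.25
    print(f"Forecast on {made}: weak AGI ~{years_away:.1f} years away")

# Forecast on 2022-04-06: weak AGI ~21.7 years away
# Forecast on 2023-03-31: weak AGI ~2.7 years away
# Forecast on 2025-01-21: weak AGI ~2.0 years away
```

The big drop happens between 2022 and 2023; after that, the horizon hovers around two years no matter when you look.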


AI researchers

A survey asked 2,778 researchers who had published in top-tier artificial intelligence (AI) venues for predictions on the pace of AI progress and the nature and impacts of advanced AI systems (ref_5).

This survey was done in 2023 and released in 2024, so in AI terms that equates to the dark ages. But their prediction was that “if science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.” And “the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116.”

From a timeline perspective they are less optimistic than the AI spokespeople and probably the unwashed masses too.


Conclusion

Weak AGI, AGI, ASI. It is all just around the corner. And maybe it is. But I think we should be careful about who we listen to. Corporate leaders worried about their funding and bottom line shouldn’t be our guides; it is basically in their job description to hype.

The unwashed masses, of which I am one, are compromised too. We are staring at the information noise in our algorithmically generated feeds, thinking this will somehow provide us with the answer. We are like the soothsayers from the past, pretending to find truth by staring into a crystal ball. The only difference is that where they had no information, we have too much.

So that leaves us with the survey of AI researchers, and here I think we can glean some truth. The aggregated wisdom of people in the field does seem the most reliable guide to a timeline, but I think it is time for an updated version.

Of course I also asked Copilot to provide me with a view. It seems that particular model has gotten used to me, as it said: “The general consensus seems to be that AGI is always ‘just a few years away’, much like that elusive pot of gold at the end of the rainbow.”

If you’ve gotten this far in this article, I salute you! What do you think would be a good method to predict our AI future?


References:






© 2024 by Mikael Svanström