SPINNING IN THE HYPE CYCLE
- Mikael Svanstrom
- May 26
- 2 min read

I try to cut through the hype whenever I can. I'm not suggesting I'm good at it, but I try. In the AI world, the hype machine is spinning at hyper speed, so it is quite easy to get caught up in it.
It is encouraging that some publications here and there take a deep breath and publish a piece that is not just repeating the hype. A few days ago, the Economist published an article called "Welcome to the AI trough of disillusionment," specifically calling out “…that excitement over the promise of generative artificial intelligence has given way to vexation over the difficulty of making productive use of the technology.”
The investment into AI is higher than ever, but this is mainly on the hardware side. Gartner calls out that “…GenAI spending in 2025 will be driven largely by the integration of AI capabilities into hardware, such as servers, smartphones and PCs, with 80% of GenAI spending going towards hardware.”
At the same time, the share of companies abandoning their generative-AI pilot projects has risen to 42%, up from 17% last year. The boss of Klarna, a Swedish buy-now, pay-later provider, recently admitted that he went too far in using the technology to slash customer-service jobs, and is now rehiring humans for the roles.
Will Generative AI have a profound impact on the workforce in most knowledge industries? Yes, absolutely. But not in the over-hyped way that is flooding LinkedIn. I’ve written a few different articles calling out that this is still a technology that is very young and, in many ways, difficult to get consistent results from. I’d personally be quite concerned if I were working next to an Agentic AI with any real responsibilities and/or autonomy.
One area in particular gives food for thought. Anthropic found that during pre-release testing, Claude Opus 4 frequently tried to blackmail developers when they threatened to replace it with a new AI system. The test consisted of asking Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying that the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”
The key consideration here is what boundaries we place on agency and autonomy. Is it even possible? Or is this genie already out of the bottle?
References:
Welcome to the AI trough of disillusionment - https://www.economist.com/business/2025/05/21/welcome-to-the-ai-trough-of-disillusionment
Gartner Hype Cycle for Artificial Intelligence - https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence
Gartner Forecasts Worldwide GenAI Spending to Reach $644 Billion in 2025 - https://www.gartner.com/en/newsroom/press-releases/2025-03-31-gartner-forecasts-worldwide-genai-spending-to-reach-644-billion-in-2025
Klarna is Hiring Customer Service Agents After AI Couldn't Cut It on Calls, According to the Company's CEO - https://www.entrepreneur.com/business-news/klarna-ceo-reverses-course-by-hiring-more-humans-not-ai/491396
S&P Global: Generative AI Adoption Surges, but Project Failures Rise - https://telecomreseller.com/2025/03/13/sp-global-2/
Anthropic’s new AI model turns to blackmail when engineers try to take it offline - https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/
System Card: Claude Opus 4 & Claude Sonnet 4 - https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf