Kilcher is one of the best YT commentators on AI out there; he actually has the chops to get down into the mathematical details, and he does deep dives into the papers on a regular basis. So when he says AGI is not coming, it's not just some Random Internet Opinion. Here's more evidence for why he's saying this now, with the GPT-5 drop:
Gemini falls for the same trick:
As noted in one comment, it's a strange paradox that these LLMs can achieve high scores on the International Math Olympiad yet are terrible at basic arithmetic. But the contradiction is not that difficult to understand... the IMO tests the kinds of mathematical processes a human can do quickly, which are relatively shallow operations. I'm not denigrating the Olympiad itself; it's impressive. It's just that the skills being tested there rest on quick recall and shallow mathematical constructs that can be memorized and mentally configured on the fly. The Olympiad has nothing whatsoever to do with the kind of work done by someone like Grigori Perelman, other than the word "Math". So the IMO is a great way for AI companies to show off flashy statistics like "XYZ model scored Silver in the IMO!", but it gives no real insight into the actual thinking abilities of these systems. Their real capabilities turn out to be extremely brittle... even very slight deviations out of distribution can reduce GPT-4, or even GPT-5, to a stuttering moron. And this is precisely the opposite of what we really want in AI systems.
An AI about as intelligent as a high-functioning mentally disabled person would be ideal for 90+% of the use cases I actually care about. I already believe in God; I don't need a digital phantasm to serve as a digital idol. Most of the AI hype is ultimately being driven by social and spiritual forces, not economic or technological ones. Someone who is mentally handicapped but knows how to make coffee, check the mail, tell an intruder to get lost or they'll call 911, and so on, is ten million times more useful than GPT-5 could ever be, for the simple reason that they truly know and understand the things they know and understand. They are robust, living people, so the simple jailbreaks that collapse an AI into blubbering gibberish won't work on them. AI is like the stormtroopers when Obi-Wan waved his hand and said, "These aren't the droids you're looking for." In most cases it is not even slightly difficult to jailbreak the AI and get it to say all kinds of insane things. Just write the prompt-equivalent of the Jedi hand wave and boom, GPT starts blubbering gibberish like a babbling brook.
GPT-5 proves that the era of data scaling is over. That doesn't mean there won't be future progress, but the illusion of "exponential progress" has been shattered. The real question is whether they will even be able to achieve linear progress...