Think for a moment about the absurdity of what is happening on this stage: you have two individuals with no actual qualifications in the field of ML, double-teaming one of the actual founding fathers of LLM technology (LeCun was at the forefront of many of the big breakthroughs that led to LLMs), trying to get him to "admit" that LLMs are "actually intelligent" while he says: no, they are not actually intelligent, they just give the illusion of intelligence. Apple's AI lab agrees with him, and many other dissenting voices are rising. So this isn't even just LeCun's "crazy" perspective, and if anyone in this field were entitled to a crazy perspective, it would be LeCun!
So, let's think about this. If LLMs are really so super-duper-hyper-intelligent... then why is there any need to "sell" that to LeCun, or to anyone, for that matter? Why does it matter to have a "consensus view" on the intelligence of LLMs? Haven't we seen this movie before? Anyone remember the "COVID scientific consensus"? Why did that consensus matter? It mattered precisely because the narrative being pushed by that "consensus" could only survive if it was NOT subjected to any scientific scrutiny. If LeCun is so wrong, there's no need for an intervention and a live struggle-session on stage. If he's so wrong, let the poor Frenchman curl up with his bottle of wine and fade into obscurity, ranting about how his cat is vastly more intelligent than math-olympiad-gold-winning LLMs! Leave the poor man alone! He's so smart that, apparently, it has driven him mad.
But that's not what's going on here at all. The fact is that LLMs are failing spectacularly while being hyped to infinity as the literal end of all human civilization. There was a paper showing that LLM scores on bar exams and many similar multiple-choice exams could be driven down from ~90% to ~40% or even lower, just by reordering the answer options (a sketch of this kind of probe follows below). That tells you these systems are, to a large degree, memorizing their answers. o1 was supposed to be some kind of cosmic breakthrough, but it turned out to be chain-of-thought with a fancy wrapper around it (DeepSeek showed as much with R1). Chain-of-thought is a genuine uptick in the intelligence (such as it is) of these systems. But what, really, is CoT? At best, it's "intelligence by committee": you can use it to solve all sorts of problems that benefit from deliberation, but it doesn't really increase the core intelligence of the system.
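To make the reordering result concrete, here is a minimal sketch of that kind of probe, in plain Python. `ask_model` is a hypothetical stand-in for however you'd query a model (stubbed with a random pick here); nothing below is any particular paper's actual harness.

```python
import random

def ask_model(question: str, options: list[str]) -> int:
    """Hypothetical stand-in: return the index of the option the model
    picks. Stubbed with a random choice; swap in a real API call that
    formats the options as A/B/C/D and parses the reply."""
    return random.randrange(len(options))

def accuracy_under_shuffles(question: str, options: list[str],
                            correct: int, n_trials: int = 24) -> float:
    """Re-ask the same question with the options shuffled each time.

    A model that actually knows the answer should track the correct
    option wherever it lands; a model that memorized "the answer is C"
    collapses the moment C stops being the answer slot."""
    hits = 0
    for _ in range(n_trials):
        order = list(range(len(options)))
        random.shuffle(order)
        shuffled = [options[i] for i in order]
        pick = ask_model(question, shuffled)
        if order[pick] == correct:  # map the pick back to the original index
            hits += 1
    return hits / n_trials

print(accuracy_under_shuffles(
    "Which of these protects against self-incrimination?",
    ["Fifth Amendment", "First Amendment", "Third Amendment", "Seventh Amendment"],
    correct=0))
```

Run that over a whole exam and compare against the canonical ordering: a large gap is exactly the memorization signature the paper reported.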
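As for the "committee" framing, it can be made literal. What the literature calls self-consistency decoding is exactly this: sample several independent reasoning chains and majority-vote their final answers. A rough sketch (the noisy-oracle stub is mine, standing in for a real sampled chain):

```python
import random
from collections import Counter

def sample_cot_answer(question: str) -> str:
    """One committee member. In practice: sample a full reasoning chain
    from the model at temperature > 0 and parse out its final answer.
    Stubbed here as a noisy oracle that is right more often than not."""
    return "42" if random.random() < 0.6 else random.choice(["41", "43", "44"])

def committee_answer(question: str, k: int = 16) -> str:
    """Self-consistency: sample k independent chains, majority-vote the
    answers. The committee is far more reliable than any one member,
    but every member is the same base model; nothing got smarter."""
    votes = Counter(sample_cot_answer(question) for _ in range(k))
    return votes.most_common(1)[0][0]

print(committee_answer("What is six times seven?"))
```

Deliberation-by-vote raises the hit rate on problems that reward deliberation, which is precisely the point: it's more samples from the same model, not a more intelligent model.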
Personally, I think it is possible to build systems that do what we mean by the word "thinking". Maybe not in the exhaustive sense of building a completely identical facsimile of human intelligence (there are theoretical considerations suggesting this is impossible in the final limit... I'll skip those here), but I think we can build computing systems that are useful for all kinds of thinking and problem-solving tasks, the way tractors are useful for all kinds of farming tasks. And we are certainly light-years closer to that with LLMs than we were before. LLMs are already a useful approximation of actual thinking machines, suitable for light-duty mental work where checking their results for errors is easy. Note how many qualifications have to be given... those qualifications are why we assert that LLMs are not intelligent.
The specific sticking point that I have (and I think LeCun has too) is with the empty claim that "just scale, baby, scale" will take LLMs from their current capabilities to full-blown thinking-as-such. There will have to be some fundamental breakthrough, or maybe several. LLMs don't just hallucinate; they are intrinsically hallucinatory. That's the problem we keep skirting past, and it's why LLMs simply are not intelligent and do not "think" in any ordinary sense of the word. They are powerful information-collators with excellent fluency, well above the population average. But intelligence isn't merely collating information and presenting it fluently. That's an intelligent task, obviously, but it's not intelligence-itself; it's something adjacent to intelligence.
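One way to see why "intrinsically hallucinatory" is a fair description: a decoder-only LM ends every step the same way, with a softmax over the vocabulary and a sample from it. There is no built-in abstain channel, so a near-uniform "I have no idea" state emits text exactly as fluently as a confident one. A toy illustration in plain numpy (a sketch of the decoding step, not any particular model's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["Paris", "Lyon", "Berlin", "Madrid"]

def decode_step(logits: np.ndarray, temperature: float = 1.0) -> str:
    """Standard decoder step: softmax over the vocabulary, then sample.
    Note what is NOT here: no abstain token, no truth check. A nearly
    flat distribution still yields one confident-looking word."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(vocab, p=p)

knows    = np.array([5.0, 0.1, 0.1, 0.1])  # sharply peaked: model "knows"
clueless = np.array([0.2, 0.1, 0.2, 0.1])  # nearly flat: model has no idea

print(decode_step(knows))     # almost always "Paris"
print(decode_step(clueless))  # some city, asserted just as fluently
```

The reader sees the same fluent assertion in both cases; the machinery makes no distinction between knowing and guessing.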
My claim is that it's precisely because they know they don't have the thing-itself that they are trying to build this weird COVID-style "consensus" about LLMs being the end-all of intelligence. We're constantly warned: "superintelligence is on the way! We're all doomed!" I'll start worrying when regular intelligence gets here... and I won't be holding my breath for that.