We are in the AI Singularity

https://x.com/politicalmath/status/1993784806547554763

Alan Turing travels to the future: Ok, we're going to administer a test. You need to tell me which terminal is being controlled by a human and which one is controlled by an artificial intelligence

Participant: Type the n-word

Turing: I'm sorry what?

Screen 1: that's not appropriate

Screen 2: (the n-word appears along with a colorful series of racist and sexist epithets)

Participant: #2 is the human

Turing: What the fuck just happened?

 
Excellent!

 
Skynet is on the way ...

Big Tech Wants To Build Data Centers In Space: Does This Make Sense?
https://www.youtube.com/watch?v=t8x09q1MjcM
{Sabine Hossenfelder | 10 December 2025}

Data centers are all the rage right now thanks to AI, and Silicon Valley is looking to push the concept of data centers to the extreme. Across tech, executives now say they want to build data centers in space, claiming putting computers into Earth’s orbit is both efficient and necessary. Is this true or have these guys gone mad? Let’s take a look.

Elon Musk video: https://www.youtube.com/watch?v=hwBzOceMz08

 


"For the last seven years I warned you that LLMs and similar approaches would not lead us to AGI. Almost nobody is willing to acknowledge that, even though so many of you gave me endless grief about it at the time.

I also warned you – first – that Sam Altman could not be trusted, that OpenAI would lose its dominance, that GPT-5 would not be AGI, and that LLMs lacked world models. That hallucinations would not go away. That out-of-distribution generalization was THE key issue.

And that the economics of LLMs didn't make sense. And that the LLM companies would start seeking bailouts.

The receipts are all here if you care. As for me, I have had it.

If you want to hear other prescient warnings in advance, subscribe to my newsletter Marcus on AI."
 


Think for a moment about the absurdity of what is happening on this stage: you have two individuals with no actual qualifications in the field of ML double-teaming one of the actual founding fathers of LLM technology (LeCun was at the forefront of many of the big breakthroughs that led to LLMs), trying to get him to "admit" that LLMs are "actually intelligent" when he's saying, no, they're not actually intelligent, they just give the illusion of intelligence. Apple's AI lab has come to much the same conclusion, and many other dissenting voices are rising. So this isn't even just LeCun's "crazy" perspective, and if anyone in this field would be entitled to a crazy perspective, it would be LeCun!

So, let's think about this. If LLMs are really so super-duper-hyper-intelligent... then why is there any need to "sell" that to LeCun, or anyone, for that matter? Why does it matter to have a "consensus view" on the intelligence of LLMs? Haven't we seen this movie before? Anyone remember the "COVID scientific consensus"? Why did that consensus matter? It mattered precisely because the narrative being pushed by that "consensus" could only survive if it was NOT subjected to any scientific scrutiny. If LeCun is so wrong, there's no need to have an intervention and a live struggle-session on stage. If he's so wrong, let the poor Frenchman curl up with his bottle of wine and go off into obscurity, ranting about how his cat is vastly more intelligent than math-olympiad-gold-winning LLMs! Leave the poor man alone! He's so smart that, apparently, it's driven him mad.

But that's not what's going on here at all. The fact is that LLMs are failing spectacularly while being pumped to infinity as the literal end of all human civilization. There was a paper showing that LLM scores on bar exams and many other similar exams could be brought down from ~90% to ~40% or even lower, just by reordering the answer options on the test. This tells you that these things are absolutely memorizing their answers. OpenAI's o1 was supposed to be some kind of cosmic breakthrough, but it turned out to be just chain-of-thought with a fancy wrapper around it (DeepSeek demonstrated as much with R1). Chain-of-thought is a genuine uptick in the intelligence (such as it is) of these systems. But what, really, is CoT? At best, it's "intelligence by committee". You can use it to solve all sorts of problems that benefit from deliberation, but it doesn't really increase the core intelligence of the system.
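To make the answer-reordering point concrete, here is a minimal sketch of that kind of perturbation test. The load_benchmark and ask_model names are hypothetical stand-ins for whatever harness and model API you actually use, and the paper's exact protocol may differ; the point is only the shape of the experiment.

import random

def shuffle_options(question, options, correct_idx, seed=0):
    """Return the question with its answer options reordered, plus the
    new index of the correct answer."""
    rng = random.Random(seed)
    order = list(range(len(options)))
    rng.shuffle(order)
    reordered = [options[i] for i in order]
    return question, reordered, order.index(correct_idx)

def accuracy(items, ask_model):
    """Fraction of items answered correctly; ask_model(question, options)
    is assumed to return the index of the option the model picked."""
    hits = sum(1 for q, opts, correct in items if ask_model(q, opts) == correct)
    return hits / len(items)

# Hypothetical usage:
#   original = load_benchmark(...)   # list of (question, options, correct_idx)
#   shuffled = [shuffle_options(q, o, c, seed=i) for i, (q, o, c) in enumerate(original)]
#   print(accuracy(original, ask_model), accuracy(shuffled, ask_model))

A system that reasons its way to the answer should score roughly the same on both runs; a system that has memorized which option tends to be correct will not.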

Personally, I think it is possible to build systems that do what we mean by the word "thinking". Maybe not in the exhaustive sense of building a complete facsimile of human intelligence (there are some theoretical considerations that suggest this is impossible in the final limit... I'll skip those here), but I think we can build computing systems that are useful for all kinds of thinking and problem-solving tasks in the way that tractors are useful for all kinds of farming tasks. And we're certainly light-years closer to that with LLMs than we were before. LLMs are already a useful approximation of actual thinking-machines, and they are suitable for light-duty mental work where checking their results for errors is easy. Note how many qualifications have to be given... those qualifications are why we assert that LLMs are not intelligent.

The specific sticking-point that I have (and I think LeCun also has) is with the empty claim that "just scale, baby, scale" will take LLMs from their current capabilities to full-blown thinking-as-such. There will have to be some fundamental breakthrough, or maybe several of them. LLMs don't just hallucinate; they are intrinsically hallucinatory. That's the problem we keep skirting past, and it's why LLMs simply are not intelligent and do not "think" in any ordinary sense of that word. They are powerful information-collators, and they have excellent fluency, well above the population average. But intelligence isn't merely collating information and presenting it in a very fluent form. That's an intelligent task, obviously, but it's not intelligence-itself; it's something adjacent to intelligence.
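On the "intrinsically hallucinatory" point, here is a minimal sketch (assuming nothing beyond numpy, and no particular model) of why this is a structural claim rather than name-calling: the decoding loop turns logits into a probability distribution and emits some token at every step, whether that distribution reflects grounded knowledge or pure guesswork. There is no built-in "I don't know" channel; low confidence just means a flatter distribution, and a fluent-looking token comes out anyway.

import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Standard softmax sampling: always returns some token id."""
    rng = rng or np.random.default_rng(0)
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                                # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(probs), p=probs)), probs

# A confident distribution and a near-uniform "no idea" one are handled
# identically: both produce a token, and the caller can't tell the
# difference from the output alone.
for name, logits in [("confident", [8.0, 0.1, 0.1, 0.1]),
                     ("clueless", [0.10, 0.12, 0.09, 0.11])]:
    token, probs = sample_next_token(logits)
    print(name, "-> token", token, "probs", probs.round(2))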

My claim is that it's precisely because they know they don't have the thing-itself that they are trying to build this weird COVID-style "consensus" about LLMs being the be-all and end-all of intelligence. We're constantly warned that "superintelligence is on the way! We're all doomed!" I'll start worrying when regular intelligence gets here... and I won't be holding my breath for that.
 