We are in the AI Singularity

 
LLMs will not replace you

So good!

Prediction: LLMs Will Only Get Dumber

I’m calling it right now, and I pledge to never edit this statement out of this blog post in the future:

Model collapse has begun, and LLMs will only get dumber from this date forward.

Hopefully this will lead to the AI hype bubble finally bursting. But what do I know? Investors continue to pour billions of dollars into AI startups, despite the reality that currently:

- No company has proven that it's saving money and succeeding by replacing workers with LLMs.
- Even if a company COULD successfully replace developers with LLMs, it would still need skilled developers to oversee and correct the outputs, which means the software developer career path isn't going away.
- LLM companies like OpenAI are not profitable.
- In 2024, AI companies nabbed 45 percent of all US venture capital tech investments, up from only nine percent in 2022.
- Time and time and time again, experiments in having LLMs write code begin to fail as soon as the code moves beyond basic examples and use cases.

In any case, LLMs are not the path to AGI.

Preach!
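
For anyone wondering what "model collapse" actually refers to: it's the degradation you get when models are trained, generation after generation, on their own synthetic output. Here's a toy Python sketch of the statistical mechanism; a Gaussian stands in for the model, and the whole thing is just my illustration, not an experiment from the blog post:

```python
import random
import statistics

# Toy "model collapse" demo: each generation, refit a Gaussian to samples
# drawn from the previous generation's Gaussian. Every finite sample
# under-represents the tails, so in log space the fitted variance is a
# random walk with downward drift and the distribution slowly degenerates.

random.seed(0)
mu, sigma = 0.0, 1.0            # generation 0: the "human-written" data
SAMPLES_PER_GEN = 20            # small samples make the drift visible quickly

for gen in range(1, 201):
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GEN)]
    mu = statistics.fmean(data)         # "retrain" on the model's own output
    sigma = statistics.stdev(data)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")
```

Run it a few times: the fitted spread trends toward zero while the mean drifts off target, which is the toy version of an LLM's output distribution narrowing as the web fills up with LLM-generated training text.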
 
DeepSeek R1 (an open-source model) is near parity with the best proprietary frontier models (OpenAI's o3, Gemini 2.5, etc.):

 
[Image: benchmark chart comparing DeepSeek R1 with proprietary frontier models]
 
The following video gives a shocking impression of the current state of the technology: swarms of killer drones can be built and sent after targets, and since the weapons make the kill decision themselves, they cannot be stopped once launched and could be used to assassinate political leaders or dissidents.

 


It's an old video but, yeah, we know from lessons in Ukraine that FPV drones are extremely deadly, even able to take out tanks. Swarms of AI-controlled, anti-personnel LAWS drones could be as dangerous as any WMD, a point that the Army itself has acknowledged here back in 2020.

 
WOW! Must-watch!



If you can believe that AI can become hyper-intelligent to the point it can predict the future and dominate all human capacity to think and act... but you can't believe in God and the Scriptures... you're not thinking straight...
 
Apple just nuked LLM-based AI from orbit...

Hacker News: The Illusion of Thinking: Understanding the Limitations of Reasoning LLMs [pdf] (cdn-apple.com)

Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.
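
To get a feel for the setup, here's a rough Python sketch of the kind of controlled-complexity harness the paper describes, using its Tower of Hanoi puzzle; `ask_model` is a hypothetical stand-in for whatever model API you'd be testing, not anything defined in the paper:

```python
# A toy harness in the spirit of the paper's puzzle experiments: generate
# Tower of Hanoi instances of growing complexity, then check whether the
# model's proposed move list actually solves them.

def is_valid_solution(n_disks: int, moves: list[tuple[int, int]]) -> bool:
    """Simulate moves on pegs 0, 1, 2; True if all disks end up on peg 2."""
    pegs = [list(range(n_disks, 0, -1)), [], []]   # peg 0 holds disks n..1
    for src, dst in moves:
        if not pegs[src]:
            return False                           # moved from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                           # larger disk onto smaller
        pegs[dst].append(pegs[src].pop())
    return len(pegs[2]) == n_disks

def accuracy_by_complexity(ask_model, max_disks: int = 12, trials: int = 10):
    """Sweep the disk count and record the solve rate at each level."""
    results = {}
    for n in range(3, max_disks + 1):              # optimal solution: 2**n - 1 moves
        solved = sum(is_valid_solution(n, ask_model(n)) for _ in range(trials))
        results[n] = solved / trials
    return results
```

The headline finding is that the solve rate doesn't slope off gently as the disk count grows; past a model-specific threshold it collapses outright, even with token budget to spare.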
 
ChatGPT was down this morning. Apparently, millions of users somehow managed to survive...
