We are in the AI Singularity

The concept of being in the AI Singularity is fascinating and thought-provoking. While we are seeing exponential advancements in AI technology—ranging from generative AI tools to autonomous systems—it’s worth noting that the "singularity" implies a point where AI surpasses human intelligence and leads to an unpredictable transformation of society.
Are we there yet? Likely not. AI today excels at specific tasks (narrow AI) but lacks the general, self-aware intelligence that characterizes humans. However, the pace at which AI is evolving suggests we might be laying the groundwork for such a paradigm shift.
A critical aspect of this discussion should be our approach to AI ethics, governance, and societal impact. How do we ensure this transformation benefits humanity as a whole? While the singularity is still a theoretical concept, our current decisions will shape how AI integrates into our future.
What do you think? Are we at the threshold, or is the true singularity still far off?
 

I think there are a few key messages I'm trying to drive home in this thread. (a) AI is useful, but almost everything you read/see/hear about AI today is hype. (b) LLMs, by themselves, will never achieve "AGI" in any meaningful sense... you cannot build what I call "Hollywood AI" (a human-brain-in-a-box) from an LLM by itself. (c) The entire concept of "recursively self-improving AI" is busted at its very foundation, and this is provable from mathematics. Thus, the entire concept of AI becoming so advanced that a "singularity" occurs is itself busted. Obviously, AI is poised to change human culture more dramatically than any invention since the printing press, at least. But a singularity in the hype sense it is not. In other words, we are in the AI Singularity right now. This is it. If it's less than what you expected (e.g. Her, Ex Machina, I, Robot, Prometheus, etc.), that's because your expectations are busted...
 
What comes after the AI Singularity

In a word: gamification.



As struthless well argues, gamification is not inherently bad. You can gamify things in a way that is incredibly useful and rewarding for players... apps like Duolingo are an example of that. I've used it, and it's amazingly effective. With daily use, you can make strong, steady progress towards learning any language you want to learn, with very high retention.

He does not address AI in this video, but it's the intersection of AI and gamification that is going to become particularly terrifying. In a way, we're already dealing with this. Facebook measures every scroll/swipe/dwell of every user on every feed. The algorithm is *constantly* measuring your personal engagement with the content that is served. The amount of information you can gain about the gears and levers inside someone's head from this data is practically unlimited.

If you doubt that, take a gander at the original 20Q game from way, way back. I think the website has been up continuously for at least 25 years, because I played the game back in college and it hasn't changed a bit. I was floored when I first played it: it was able to guess what I was thinking of, sometimes in the course of just a handful of questions, and often from the most seemingly unrelated questions. Every time you scroll, swipe or dwell, you are answering a question that the Facebook (or whoever) server has asked you by the content it just served you. If 20Q can essentially "read your mind", so can Facebook.

Again, I can't emphasize strongly enough that 20Q was already state-of-the-art 25 years ago. It's just a simple feed-forward neural net, small enough that it could be trained on standard computers of that era. Long before LLMs, these kinds of analysis engines had been running on servers for many, many years, plumbing out all kinds of information from the population that we all assume is inherently private. But with the advent of LLMs, this is no longer just a matter of guessing weird quirks that might be useful for upselling, data analytics or customer profiling. Now we're playing with fire that can peer into the mind semantically. I am talking about the fullest meaning of the term "mind-reading"... but without stage-trick mentalism. The real thing.
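The arithmetic behind 20Q's apparent mind-reading is simple information theory: each well-chosen yes/no answer can halve the remaining candidate pool, so 20 answers can distinguish roughly 2^20 (about a million) possibilities. Here is a minimal sketch of that principle (the candidate pool and question strategy are illustrative, not how 20Q actually works):

```python
import math

def questions_needed(num_candidates: int) -> int:
    """Ideal yes/no questions needed to isolate one candidate
    out of num_candidates, if each answer halves the pool."""
    return math.ceil(math.log2(num_candidates))

def guess(candidates, answer_fn):
    """Narrow a sorted candidate pool with binary yes/no questions
    ('is it before this pivot?') until one candidate remains."""
    pool = sorted(candidates)
    asked = 0
    while len(pool) > 1:
        mid = len(pool) // 2
        # answer_fn plays the human: True means "yes, it's in the first half"
        if answer_fn(pool[mid]):
            pool = pool[:mid]
        else:
            pool = pool[mid:]
        asked += 1
    return pool[0], asked

# One million candidates collapse under only 20 ideal questions:
print(questions_needed(1_000_000))  # -> 20

secret = 713_202
target, asked = guess(range(1_000_000), lambda pivot: secret < pivot)
print(target, asked)  # finds 713202 in at most 20 questions
```

Every scroll or dwell on a feed is one more "answer" in exactly this sense, except the pool being narrowed is your interests and impulses rather than a list of numbers.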

It has turned out that the AI "singularity" isn't what we were promised. All chrome, no engine. But that doesn't mean that LLMs in the hands of tyrants are not a supreme instrument of tyranny. Ultimately, this stuff is more dangerous than nukes. And LLM-engineered gamification is the next phase after the "singularity". Basically, everything is going to become gamified, from buying groceries to traveling to signing up for insurance. Any public-facing action you take (and anything you do on your phone) is going to become gamified. And it won't be humans designing these games; it will be mindless LLMs. Instead of paying real humans to dream up gamification levels, this is all just going to be fed into ChatGPT: "We want to gamify our customer-onboarding process for insurance applications. Here are the steps of the application, please map out a game-plan that will move customers from one level to the next through the application process..." This prompt could use some more fancification, but that's the gist of it; that's all you really need. Gamification is not hard, it's just a lot of details, and LLMs are brilliant at things that aren't hard but involve a lot of finicky details.
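To see how trivially mechanizable this is, here is a sketch of the kind of template that would churn out such prompts at scale. Everything here is hypothetical illustration: the function, the template wording, and the step names are made up, not any real product's pipeline:

```python
def gamification_prompt(process_name: str, steps: list[str]) -> str:
    """Build a level-design prompt of the kind described above.
    Template text and step names are illustrative only."""
    numbered = "\n".join(f"  Level {i + 1}: {s}" for i, s in enumerate(steps))
    return (
        f"We want to gamify our {process_name}. "
        f"Here are the steps of the application:\n{numbered}\n"
        "Please map out a game plan that moves customers from one "
        "level to the next, with a reward and a progress cue at each level."
    )

prompt = gamification_prompt(
    "customer-onboarding process for insurance applications",
    ["Create account", "Enter personal details",
     "Select coverage", "Sign and submit"],
)
print(prompt)
```

Feed the output of something like this into any chat model and you get a complete gamification scheme for free, no human game designer required. That is the whole point: the barrier to gamifying everything is now effectively zero.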

Now, here's the problem. I've been hounding this issue for so long because the problem that comes from AI gamification of our lives is practically inevitable if you think about it carefully. In struthless's quantum tennis-ball-or-grenade analogy, we can think of any particular game as having some probability of being a tennis ball, or grenade. Tinder sucks, but not everybody gets grenaded by Tinder. Duolingo is awesome, but I'm sure there are some people who just find it frustrating, distracting and pointless. So, it can go either way. But if we look across all games that are likely to exist (based on AI tendencies, and human interests), what is the breakdown between tennis balls, and grenades? And the answer is that they're basically all grenades.

Consider being in captivity and forced to play Duolingo for a language you do not like. Not so fun anymore, is it? Freedom, peace and prosperity are cosmically improbable; there are a million ways they can break down. In this, they are like life itself... a cosmic improbability. People make an art of stacking stones in the wild as a statement that the improbable can happen. But note what a commentary this really is on the improbability of life... that it requires all our concentration and capabilities just to balance some stones beside a stream. How much more improbable is life itself? And the same goes for freedom, peace and prosperity (social life). The grenade games are the stones already lying on the ground. The tennis-ball games are stones stacked on each other, but incomparably more improbable than that.

The coming AI gamification of everything is terrifying beyond words. If you don't see that, I challenge you to think more deeply about the matter. It is truly apocalyptic, in the biblical sense of that word. I'm not saying gamification can't be good, or can't be turned to good use. But if we're just throwing the deck of cards in the air and letting them fall where they may, the worst-possible outcome is virtually certain...

"If you want a picture of the future, imagine a boot stamping on a human face—forever." (Orwell, 1984)

 
Dolphin 3.0 8B released

Dolphin is an open-source, 100% local LLM that you can run on your own machine with full control. You do not need exotic hardware; most gaming PCs can be set up to run a local LLM with decent performance. This model is competitive with state-of-the-art LLMs like Claude and ChatGPT, although your actual performance will scale with the hardware you have. Unlike censored models such as ChatGPT, Dolphin is fully uncensored and will answer any question you ask (within its capabilities), no matter how "politically incorrect" or "dangerous" it might be.
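For anyone who wants to try it, one common route is a local runner such as Ollama or llama.cpp. A sketch of the setup, assuming a Linux/macOS box; the model tag "dolphin3" is an assumption, so check the registry for the actual published name:

```shell
# Install a local LLM runner (Ollama shown here; llama.cpp also works).
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with the model. The tag "dolphin3" is an assumption --
# verify the published name before running.
ollama run dolphin3 "Explain how you differ from a hosted model."
```

Smaller quantized builds run acceptably on an ordinary gaming GPU or even CPU-only, just slower; larger quantizations trade speed for quality.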

My belief: The right to run local AI is poised to become the 2nd-and-a-half Amendment...

From the model card:

What is Dolphin?

Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models. Designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.

Dolphin aims to be a general purpose model, similar to the models behind ChatGPT, Claude, Gemini. But these models present problems for businesses seeking to include AI in their products.

They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
They can see all your queries and they can potentially use that data in ways you wouldn't want.

Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.

Dolphin belongs to YOU, it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.

https://erichartford.com/uncensored-models
 
CLIP from SYSTEM UPDATE #397:

China's Great Leap Forward in AI: What Does it Mean? With Journalist Garrison Lovely
https://rumble.com/v6edx37-chinas-g...t-does-it-mean-with-journalist-garrison-.html
{Glenn Greenwald | 28 January 2025}


 