We are in the AI Singularity

ChatGPT outs itself:

[Nine screenshots of a ChatGPT conversation]
 
Atheists are about to get a hands-on education in the real meaning of demon-possession...

 
 
THREAD: AI-generated dead man "speaks" at his killer's sentencing

 
This might be one of the most important posts I ever write on the topic of AI. If the topic interests you, bookmark this for future reference, because it describes the problem with AI "creativity" that cannot be solved by any shortcut. Don't allow the Big Tech magicians to lie to you... it is impossible for human-created AI to create beyond the frontier of human creativity. In one sense, this assertion is true by definition... if the AI has created something, then we also created it, through the AI. But even if we grant the possibility that we could create an AI that goes on to think of something "truly original" that no human would otherwise have conceived, it remains impossible on the basis of causality alone. That is what I will explain here.

Let's imagine the human mind as a kind of software program running on a wetware computer we call "the human brain". Broadly speaking, this is how most AI hype-bros think about the human mind. And this widespread belief is a big reason that the current zeitgeist -- that silicon AI can be a serious competitor with human thinking and creativity -- has taken root.

In the image below is Bob. Bob has just seen a good-looking cat. Bob knows Alice, and he happens to know that Alice likes cats. Bob thinks about how Alice would have reacted had she been present to see this pretty cat. Because Bob recognizes that the cat is good-looking, and because he imagines that Alice would have been very interested in it had she seen it with him, he takes note of the cat's appearance and plans to talk to Alice about it later, when he next sees her:

[Diagram: Bob sees a cat and mentally simulates Alice's reaction to it]


In terms of computation, we would say that Bob has simulated Alice in his mind. Bob imagined Alice being present (she is not present, so it's purely a simulation, not an actual experiment), he imagined her reaction to seeing this particular cat, and he imagined that she would be interested in it. Using the information gleaned from this self-simulation of Alice, Bob has made two decisions: first, to carefully observe the cat's appearance and behavior so he can describe it to Alice, and second, to strike up a conversation with Alice about the cat when he next sees her.
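
To make the computational framing concrete, here is a minimal sketch in Python of what "Bob simulating Alice" amounts to. Everything here is hypothetical and invented purely for illustration; Bob's "mental model" of Alice is nothing more than a table of interest scores:

```python
# A minimal sketch of "Bob simulating Alice" in code. All names and
# interest scores are hypothetical, invented purely for illustration.

# Bob's internal model of Alice: what he *believes* she cares about.
bobs_model_of_alice = {"cats": 0.9, "jokes": 0.7, "traffic": 0.1}

def simulate_reaction(model, stimulus):
    """Predict the modeled person's interest in a stimulus (0.0 to 1.0)."""
    return model.get(stimulus, 0.0)

def plan_to_share(model, stimulus, threshold=0.5):
    """Decide whether the experience is worth remembering and retelling."""
    return simulate_reaction(model, stimulus) > threshold

if plan_to_share(bobs_model_of_alice, "cats"):
    print("Bob notes the cat's appearance and plans to tell Alice later.")
```

Notice that Bob never consults the real Alice; the decision runs entirely against his internal model of her.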

The key ingredient in Bob's decision was his ability to simulate Alice's reaction to the cat. For example, suppose instead that Bob is just a cat-fanatic and, while Alice once appreciated one of Bob's cats, she is not actually very interested in cats. Bob is just "projecting" his own interest in cats onto Alice. In this case, Bob has a low-fidelity simulation of Alice... he is supposing things about her that just aren't true. Thus, when Bob strikes up the planned conversation about this cat at some future time, Alice is likely to be secretly bored by his description, because he is obsessed with cats and she doesn't care about them, even though she's too polite to tell him so. We see, then, that fidelity of simulation is a measure of how accurately the simulator (Bob's mind, in this case) models the thing being simulated (Alice's reactions, in this case).
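
In the same toy terms as the sketch above, fidelity is just how well Bob's model agrees with the real Alice. Again, all numbers are invented for illustration:

```python
# Fidelity of simulation, in the same toy terms: how closely does Bob's
# model of Alice track her actual preferences? Numbers are invented.

bobs_model_of_alice = {"cats": 0.9, "jokes": 0.7, "traffic": 0.1}
alices_actual_interests = {"cats": 0.2, "jokes": 0.8, "traffic": 0.1}

def fidelity(model, actual):
    """1 minus the mean absolute error between predicted and actual interest."""
    errors = [abs(model[topic] - actual[topic]) for topic in actual]
    return 1.0 - sum(errors) / len(errors)

print(fidelity(bobs_model_of_alice, alices_actual_interests))  # ~0.73
# Bob predicts 0.9 interest in cats; the real figure is 0.2. His planned
# cat monologue is running on a low-fidelity simulation of Alice.
```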

Now, there are sub-fields of AI already working on the problem of social simulation, whether aggregate social simulation or agentic simulation (the above is an example of agentic simulation). The key assertion in this post is not that it is impossible for AI to simulate humans, whether in aggregate or agentically. It is possible, and we already do it all the time. That's what all these faceless ad-click spyware neural-nets running in Amazon Web Services are doing with their oceans of compute resources -- they are agentically simulating your purchasing habits.
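
For a toy picture of what "agentically simulating your purchasing habits" means computationally, consider a sketch like the following. This is illustrative only; it is not any vendor's actual system, and the catalog relationships are invented:

```python
# A toy illustration of agentic purchase simulation: score each catalog
# item by how much it overlaps with a shopper's purchase history.
# This is a sketch of the general idea, not any vendor's actual system.

past_purchases = {"cat_food", "cat_toy", "litter_box"}

# Hypothetical catalog: each item mapped to the items it is related to.
catalog = {
    "cat_tree":  {"cat_toy", "cat_food"},
    "dog_leash": {"dog_food", "dog_bed"},
    "cat_brush": {"cat_toy", "litter_box"},
}

def predicted_interest(history, related_items):
    """Fraction of an item's related items already in the purchase history."""
    return len(history & related_items) / len(related_items)

for item, related in catalog.items():
    print(item, predicted_interest(past_purchases, related))
# cat_tree 1.0, dog_leash 0.0, cat_brush 1.0 -- the "simulated you" is
# nothing more than this scoring function run over your history.
```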

Here is my key assertion: Agentic social simulation of humans by humans (a) is the key ingredient of almost everything we call creativity, (b) is strongly a function of our shared experience *being* human, and (c) cannot be faked by shortcuts without a reduction in fidelity. That is, the only way to garner knowledge of shared experience is to share experiences. Part (a) may be less obvious, but I will clarify exactly why I say it; Part (b) should be obvious, but I'll explain it anyway; and I will also explain Part (c).

Assertion Part A: Agentic social simulation of humans by humans is the key ingredient of almost everything we call creativity.

Consider again the diagram above and suppose that instead of seeing a cat, Bob has just heard a good joke. Bob knows that Alice likes a good joke, so Bob imagines her reaction to hearing this joke had she been present. Because Bob thinks this joke will make Alice laugh, he remembers it and decides to deliver it to Alice when he next sees her. Notice that, going from cat to joke, nothing has changed with respect to what Bob is really doing -- Bob is self-simulating Alice's reaction to a counter-factual scenario. He is imagining something that hasn't happened, and he is imagining (simulating) what would happen next. In fact, it is not even necessary for Bob to see a cat or hear a joke in order to do this. Bob could be daydreaming at home, happen to think of a very beautiful coat pattern on a cat that would be interesting to see, and plan to describe it to Alice. Or he could be a comedian writing jokes for an audience, imagining what would make a particular audience member laugh and writing a joke to bring about that reaction.

Here, we see that the essence of creativity lies in engendering some desired reaction -- appreciation, laughter, enjoyment, passions, etc. In particular, the artist or creator is trying to bring about some particular state of mind (or heart) in their target audience and, in order to achieve this, the creator must simulate in their mind how the audience will react to the new experience that the artist is trying to create.

A simple example of this phenomenon is the twist-ending story or movie. Imagine the first time you watched an M. Night Shyamalan movie, or some other movie with a well-crafted twist ending, and recall the feeling of surprise you experienced at the end. That feeling of surprise did not happen by accident. The writer thought deeply about how to construct expectations in your mind, and they did this through self-simulation of a typical reader or audience member. "In a ghost-mystery, it's usually the creepy old woman who is behind the disturbances so, in this case, I'm going to choose an innocent-seeming boy to be the real culprit, because the audience is expecting it to be an ugly old hag. That will increase the audience's surprise." The author will think of many ways to build up your expectations for ending X, then twist your expectations with ending Y, often revealing details planted along the way to show you what you missed. This is elaborate self-simulation, in which the creator engages in a "long-con" to dupe the reader or viewer into mis-anticipating the ending, and then hits them with the surprise.

This feature of surprisal is vastly more general than it might at first seem. The essence of creative freshness is surprise, and that applies as much in visual art, music, dance, food or any other art-form, as it does in story-telling.

Assertion Part B: Agentic social simulation of humans by humans is strongly a function of our shared experience *being* human.

Surprise is a function of experience; in particular, something can only be surprising to you in proportion to how far it departs from your daily experience. If you commute to work and the freeway is jammed, you are not surprised, because it is always jammed during rush hour. But if you commute to work and the freeway is empty, you're going to be very surprised, and you'll start checking whether it is actually the weekend, whether you forgot a holiday, or whether people have fled the city for some reason unknown to you.
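
This intuition has a standard formalization in information theory: the surprisal of an event with probability p is -log2(p), measured in bits, so rare events carry more surprise. A quick sketch with invented probabilities:

```python
# The post's notion of surprise has a standard formalization in information
# theory: surprisal(x) = -log2(p(x)), in bits. Probabilities are invented.

from math import log2

def surprisal_bits(p):
    """Shannon surprisal: the rarer the event, the larger the surprise."""
    return -log2(p)

p_jammed = 0.95  # the freeway is almost always jammed at rush hour
p_empty = 0.05   # an empty rush-hour freeway is rare

print(f"jammed: {surprisal_bits(p_jammed):.2f} bits")  # 0.07 bits: no surprise
print(f"empty:  {surprisal_bits(p_empty):.2f} bits")   # 4.32 bits: very surprising
```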

The essence of creativity is founded in our shared human experience because we use our real-life experiences as empirical guides to figure out how other people will feel in response to our creations, based on how we ourselves have felt when we encountered situations like those we are depicting. It's not a simple matter of "if I show these images, or tell this story, or play this tune, they will feel this emotion". Rather, it's a matter of the audience understanding what is being communicated and, in particular, experiencing surprise when you violate the expectations you have carefully built up using your ability to self-simulate their likely responses to the creation you are exhibiting.

The traffic jam is the expected sequence of events in your morning commute; thus, surprisal comes from the absence of a traffic jam, and this is not something that can be "trained from training-data" -- it comes from shared experience. That there are usually traffic jams on the morning commute is an empirical and particular fact of your circumstances, and the commonality of traffic jams is a particular feature of the social reality in which you exist. You cannot separate the visceral knowing of this from the hours spent sitting in the seat waiting for traffic to move. I do not mean to suggest that a rich man with a helicopter does not know about traffic jams, but his knowledge of them is still by observation, that is, by seeing them even as he flies over them. The more you divorce knowledge from lived experience, the more abstract, tasteless and bland that knowledge becomes.

A Tibetan monk can imagine what it might be like to be stuck in a traffic jam but the reality is that his description of this common, everyday experience will be flat and uninteresting to someone who has actually sat many hours in traffic jams. Having no direct experience of traffic jams, he can either copy things he has read, or he can try to concoct (probably ridiculous) ideas about events that happen during traffic jams. What he is incapable of doing (because of human limitations) is accurately simulating in his mind a real traffic jam. The only way to have a visceral feeling of what it's like to be in a traffic jam is to be stuck in one (or stuck many times in one).

Assertion Part C: Agentic social simulation of humans by humans cannot be faked by shortcuts without a reduction in fidelity.

We can ask whether it would be possible to simply build a giant database, or to develop a giant algorithm that maps out all the common tropes and scenarios, and so on. The problem with this is the old maxim, "You cannot step into the same stream twice" (Heraclitus). An 80-year-old riding their favorite ride at Disneyland is having a completely different experience from the 8-year-old who is experiencing the park for the very first time. That "first time" magic is itself part of our lived experience (development), and its effect on us cannot be separated from the trajectory of our lived timeline. The short way to say this is that our experiences are embedded in their context. The effect that an experience has on you strongly depends on the context in which it occurs. I do not mean something like experience "ought to be" embedded in context, or that it's only "real" or "meaningful" if it is embedded in context -- no, I mean that two superficially identical experiences, with different contexts, are actually different experiences.
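
If you want that point in code terms, here is a toy sketch (everything invented for illustration): the felt experience is a function of both the event and the person's history, so identical events are not identical experiences:

```python
# A toy restatement of "experiences are embedded in their context": the
# felt experience is a function of the event *and* the person's history,
# so identical events need not be identical experiences. All invented.

def experience(event, prior_rides, age):
    """Return a crude 'novelty' score for the same event in two contexts."""
    novelty = 1.0 / (1 + prior_rides)   # first times feel more novel
    return {"event": event, "novelty": novelty, "age": age}

grandma = experience("space_mountain", prior_rides=60, age=80)
child = experience("space_mountain", prior_rides=0, age=8)

print(grandma)  # novelty ~0.016: a nostalgia ride
print(child)    # novelty 1.0: "first time" magic
# Same event string, different contexts, different experiences.
```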

Since this is the case, an AI cannot "learn what it's like to ride rides at Disneyland" simply by sending a robot to ride the rides, or by reading about them. At the very least, the AI has not actually garnered the tangible factors of experience that differentiate the context of an 80-year-old grandma taking a nostalgia ride from that of an 8-year-old going on her first ride. And since every action in the real world has some real economic cost, the cost of building experiences by simply having them cannot be shortcut. The real economic cost of "living every life" is the sum total of the economic expenditure of all those people whose lives you intend to simulate -- that is, if you intend to simulate them with the kind of "god-like intelligence" the AI hype-bros keep going on about.

Of course, you can send robots to ride rides, take vacation photos, and so forth, but there is still a missing ingredient -- the robot's own existence is a simulation of something that it is not. Let's go back to Bob and Alice. Suppose Bob hears a good joke and wants to share it with a tree. He can imagine telling the joke to the tree, just as he imagined telling it to Alice, but this act of self-simulation is fruitless... Bob gains no more information from imagining what it's like to tell a joke to a tree than he would from thinking about anything else. So the thought-experimental simulation is just a waste of time. The missing ingredient is that the tree and Bob are *species distinct*, whereas Bob and Alice are *species identical*. Bob's ability to simulate Alice's reaction is based on his commonality with her, that they are both human beings -- he can predict her reaction sympathetically, in part, on the basis of *his own* lived experience as a human being. Thus, our shared humanity ultimately forms a kind of infinite-regress problem: to be human is to be human is to be human, *ad nauseam*.

Every attempt to "short-cut" the simulation of our humanity -- meaning any form of software, neural-net or other artificial simulation system -- by "training on lots of data" is doomed to crumble under the economic weight of basically having to "simulate everything" or fall into the uncanny valley. A machine smart enough to pass itself off as a human intelligence would need to have truly godlike intellect to pull this feat off. The reason is that the machine is not a human, it is a machine pretending to be a human. A human self-simulating another human is trivial, but a machine trying to simulate a human is an extremely non-trivial problem precisely because the machine is a being that is of an utterly different species. It's like a horse trying to learn to hop like a rabbit, but even worse. Even if it were possible, it would be a feat of unimaginable difficulty.

The key point that people who are not experts in AI or Computer Science should take away from this post is that the claims of the AI hype-bros are not just hype; they are delusional. We've become accustomed to the much-vaunted "exponential curve of technological progress", and the hype-bros have converted this zeitgeist into a 0-day exploit targeting your mind. Sure, exponential progress, as long as it continues, is a wild ride, and lots of surprising things can happen along the way, but impossible things remain impossible, no matter how exponential your rate of progress may be!

There is simply no shortcut to being human other than being human. Will robotics companies try to "raise robots" alongside humans in a family context to create "true understanding" of our humanity? I have no doubt they will, and probably already are. And while they might achieve better fidelity than the ridiculous chatbots that have crowned themselves globe-trotting cosmopolitans because they have read every book ever written, the artificiality of these systems guarantees that they will always sit somewhere in the uncanny valley, even if it takes a while before you run into the problem... perhaps at a maximally awkward moment. The only thing the AI hype-bros can actually do by throwing more and more billions of dollars at these systems is increase their time-to-failure (TTF) -- that is, they can increase the amount of time between one awkward 4th-wall break and the next. Everything beyond that is just hype.
 
What’s Ron Paul have to say about AI?

 
AI Darth Vader keeps saying this...
{Decoy Voice | 18 May 2025}

Fortnite added Star Wars to the game, introducing Darth Vader as a character you can recruit and actually chat with. Because James Earl Jones approved his voice for AI cloning, you can have procedurally generated AI conversations with Darth Vader in the game. But he's already making unhinged statements. Was this a mistake?

 