We are in the AI Singularity

Oh boy. Here it comes...



OK, so let me predict a few steps ahead so folks will be forewarned as to what comes next. Current-generation AI has absolutely no grounding whatsoever. Once you embody an AI in a robot, you can use embodiment as a kind of numbskull-level grounding. "Is the cup on the table?" <robot looks at the table> "Yes, the cup is on the table." The sensors become the robot's "ground truth" and, in this way, the AI itself will seem to become grounded.

In addition, embodiment will seem to give robots the ability to reason, especially physical reasoning and related forms of reasoning (e.g. Euclidean geometric reasoning). "What happens if the cup is pushed over the edge of the table?" <robot ponders briefly> "The cup will fall on the floor." YouTube AI reviewer Matthew Berman uses a basic physical-reasoning question that asks the AI to solve the following problem: "Suppose a marble is placed on a table and a cup is placed upside down over the marble. Next, the cup is lifted from the table and placed into the microwave. Where is the marble?" The success rate on this question is, as far as I have seen, 0% to date, and that includes GPT-4, Claude 3, Llama 3, you name it. SOTA AI simply cannot perform basic physical reasoning that even a child can perform. While embodiment doesn't automatically solve this problem, you basically have to solve it on the path to enabling robots to move around in physical space. Thus, this problem is going to be solved (if nothing else, by brute force in simulation), and this is going to add a second layer of "OoOoHhH AaAaHhH" to the rapidly approaching fully-embodied robotic AI humanoid (Figure 01, TeslaBot, whatever).

These are the two biggest missing components in current-generation AI: grounding and robust reasoning ability. If you thought the ChatGPT hype was bad -- the fanbois are already claiming that current-generation AI is literally the mind-of-God -- just wait till we have embodied AI robots. I cannot imagine a scenario in which this does not trigger the Apocalypse. I cannot imagine how the floodgates of idolatry will hold against such cosmic-scale pressure. We will surely have people bowing down and physically worshiping these limping refrigerators in very short order.

While I do believe that robotics and AI will be part of many real improvements in the world (e.g. assisting the disabled), it should be obvious by now that the real interest of the Clown-"elites" has nothing to do with helping anything except helping you and me into an early grave, and especially making sure that we do not produce any more spawn that would likely soil their future Poo-topia. Do not sleepwalk into what is coming. One way or another, whether sooner or later, this ride is about to get extremely rough...
 
 
If you want to get a download of the AI Agenda, watch this:



If a group of KGB, Gestapo, Stasi and CIA agents had a month-long, coke-fueled gay orgy while sharing notes on every form of tyranny and torture ever devised, implemented or conceived by their respective agencies; reading 1984 to each other with Shakespearean gusto while brainstorming every possible inescapable mechanism of tyranny they could imagine ... what they could so devise would not be a drop in the ocean compared to the toolbox of tyranny which SOTA AI has placed in the lap of the modern omnipotent State since ChatGPT launched. The idea that this technology is just going to be used to help poor people access quality medical advice is bonkers. The idea that it is somehow magically not going to be used in an attempt to implement a global 1984 tyranny is equally bonkers.

Wake up!
 


Great opening discussion.

I think the "synthesis" between the two views is found by examining the role played by randomness during training.

During training, the network's weights are updated by an algorithm called "stochastic gradient descent", or SGD. (Backpropagation, or BP, is the procedure that computes the gradients SGD consumes.) The name SGD might seem quite intimidating, but the concept is not. Basically, imagine the error-rate of the neural net as a landscape. Initially, when you begin training, you are at a very high peak, because the error-rate is very high. You want to find the lowest point in the landscape (or a point that is nearly the same altitude). How do you do that? Well, you find the direction in which the gradient descends most steeply, and you follow it. This is called gradient descent (GD), and it is the classical, deterministic version of the procedure.

But there's a problem. Because GD is deterministic, where you start determines where you finish: if you had started at a very slightly different point on the mountain peak, you could have ended up at a completely different "low point" which may actually be nowhere near the lowest altitude (lowest error-rate). This is called a "local minimum". Think of a bowl-shaped valley at the top of an old mountain: the valley is the lowest point near itself, but the whole valley is still at a high altitude. If you had just happened to go down into that valley, instead of down the outer sides of the mountain, you would have gotten stuck in a local minimum.

SGD adds stochasticity to gradient descent (in real training, the randomness comes mainly from sampling random minibatches of training data, along with random weight initialization). This acts a little bit like having a big pogo-stick which you occasionally pull out and use to take a random leap in some direction. By doing this, you greatly improve your chances of finding the global minimum (or a minimum close to it), and significantly reduce your chances of ending up trapped in a local minimum. In the case of the bowl-shaped mountain-top valley, if you got your pogo-stick out while you were partway down the descent into the valley, you could end up back on the outside slope of the mountain and escape the local minimum.
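
To make the pogo-stick concrete, here is a toy sketch in Python. This is not how real training pipelines are written, and the decaying noise here is closer to simulated annealing than to genuine minibatch noise, but the escape effect is the same; the landscape, step counts and noise level are all made up for illustration:

```python
import random

# Toy loss landscape: a tilted double-well with a shallow local minimum
# near x = -0.95 and a deeper global minimum near x = +1.05.
def loss(x):
    return x**4 - 2 * x**2 - 0.4 * x

def grad(x):
    return 4 * x**3 - 4 * x - 0.4

def descend(x, steps=4000, lr=0.01, noise=0.0, rng=None):
    rng = rng or random.Random()
    for t in range(steps):
        x -= lr * grad(x)  # plain gradient-descent step
        if noise:
            # The "pogo-stick": a random kick whose size decays over time.
            x += rng.gauss(0, noise * (1 - t / steps))
    return x

start = -1.0  # begins inside the basin of the *local* minimum
print("plain GD lands at:", round(descend(start), 3))              # ~ -0.95 (stuck)
print("noisy GD lands at:", round(descend(start, noise=0.15), 3))  # usually ~ +1.05 (escaped)
```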

OK, so far so good; this is just an algorithm like any other algorithm. However, let's examine very closely the source of randomness in this procedure. There is a concept in computational complexity theory called derandomization. The strongest form of it -- that every randomized algorithm can be made fully deterministic with essentially no slowdown (the P = BPP conjecture) -- is widely believed but not proven. For our purposes, though, a weaker and more practical fact suffices: you can replace the true random bits an algorithm consumes with the output of a cryptographic pseudorandom generator (PRG), and no efficient observer can tell the difference (assuming standard PRGs are secure), even though whoever holds the seed can reproduce -- and therefore choose -- every "random" bit in advance. What this means is that, for any random algorithm A which you give me (a "random algorithm" is any algorithm that uses randomness), I can give you back an algorithm A' that behaves, as far as you can observe, exactly like A, but whose "randomness" is under my control. In particular, there exists a version of SGD that performs just as well as SGD does while drawing its randomness from a seed in someone else's pocket.

Now, let's move to the domain of password cracking. In cracking, we would love to fool a user into believing he has used a truly random seed when, in fact, it is a pseudo-random seed which we ourselves control. In this way, we have fooled the user into choosing a password/key that he believes is random when, in fact, it is not, and we can quickly reconstruct the key without having to do a brute-force search. You can think of this kind of password-generation attack as an instance of derandomization. Password generation is a random algorithm that just consists of saying, "Give me X random bits." Derandomizing that algorithm allows me to satisfy whatever constraints you have on your password generator ("use lower-case, upper-case, numerals, special characters", etc.) in a way where the generated result looks random but actually is not.
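
Here is a minimal sketch of that attack in Python. The seed value and the password policy are made up, and real-world RNG backdoors (compromised DRBGs, poisoned entropy sources) are far subtler, but the principle is exactly this:

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%"

def generate_password(rng, length=16):
    # Satisfies a typical "lower, upper, digit, special" policy by construction.
    pools = [string.ascii_lowercase, string.ascii_uppercase, string.digits, "!@#$%"]
    chars = [rng.choice(pool) for pool in pools]
    chars += [rng.choice(ALPHABET) for _ in range(length - len(chars))]
    rng.shuffle(chars)
    return "".join(chars)

# The victim believes this stream is seeded from true entropy; in fact
# the attacker planted the seed and can replay the whole stream at will.
ATTACKER_SEED = 0xDEADBEEF
print("victim's password:", generate_password(random.Random(ATTACKER_SEED)))
print("attacker's replay:", generate_password(random.Random(ATTACKER_SEED)))  # identical
```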

Now, let's say I'm a villain with access to some kind of cosmic-scale computing resource. I want your AI training to complete in such a way that you believe you have trained it using true randomness in the SGD algorithm. However, I also want to satisfy an additional constraint: whenever the neural net (NN) hears the word "frobnicate", it goes into some kind of kill-mode. By virtue of the PRG-substitution argument above, we know that this is possible in principle and, in addition, that you will never realize it has been done, even as you watch the NN being trained, step by step. The way I will do this is by interposing my own random-number generator into your SGD algorithm so that, when your SGD algorithm asks, "Give me a random number!", I generate a number that guides your SGD algorithm down a path of my choosing. That is, when you get on your pogo-stick, the place you land is actually not random. That I can perform this subtle manipulation of your training procedure is exactly what derandomization makes possible: you give me SGD (a random algorithm), and I give you back some SGD' which seems to you indistinguishable from SGD but which, in actual fact, I control, at least to some degree.
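
Staying with the toy landscape from the SGD sketch above, here is the crudest possible version of that attack: search seed space offline for a seed whose "random" kicks steer training into the basin the attacker prefers. A real adversary with cosmic-scale compute would be far cleverer, but this shows the point of entry:

```python
import random

# Same toy landscape as before: shallow local minimum near x = -0.95,
# deeper global minimum near x = +1.05.
def grad(x):
    return 4 * x**3 - 4 * x - 0.4

def noisy_descent(x, rng, steps=4000, lr=0.01, noise=0.15):
    for t in range(steps):
        x -= lr * grad(x)
        x += rng.gauss(0, noise * (1 - t / steps))  # the "random" kick
    return x

# Offline, the attacker hunts for a seed that makes training land in the
# *bad* local minimum, then hands the victim that seed as "certified random".
# Typically succeeds within a handful of candidate seeds.
def find_malicious_seed(start=-1.0, want_near=-0.95, tol=0.1, tries=10000):
    for seed in range(tries):
        if abs(noisy_descent(start, random.Random(seed)) - want_near) < tol:
            return seed
    return None

seed = find_malicious_seed()
print("malicious seed:", seed)
# Every kick in the victim's run is a perfectly plausible Gaussian sample,
# yet the run reproducibly lands wherever the attacker chose.
print("victim's run lands at:", round(noisy_descent(-1.0, random.Random(seed)), 3))
```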

Back to the original discussion in the video, this shows that "rolling the dice" during SGD in training of these NNs is in the same category as divination, when these systems are applied to real-world action. Please understand that I'm not making a blanket claim that ML is witchcraft, or anything like that. I'm making a very narrow claim: when these systems are given agency and implement real-world actions, those actions are susceptible to influence by witchcraft because the random-number generation process itself cannot be distinguished from divination.

A simpler way to see this is to imagine a man who has a tablet that continually gives him 3 options of what he can do next. The tablet is in front of him and reads something like: "A) Go straight down the sidewalk. B) Turn right into the coffee shop. C) Cross the street at the crosswalk." As he chooses what to do, he taps the screen to update and gets the next set of options. He is only permitted to take one of the actions on the tablet, and nothing else. After a while, he gets tired of consciously choosing what to do next, so he begins rolling a d3 instead. That is divination, plain and simple. And it is what I am asserting AI-guided robotic agents in real space are doing too; they're just doing it at zillions of cycles per second.
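
For the connection to real systems: below is roughly how an LLM picks its next token, or a policy network its next action -- a weighted dice roll (softmax sampling). The options and scores here are made up for illustration; the point is that whoever controls the rng controls which die is being rolled:

```python
import math
import random

# The "tablet": three options, each scored by some model/policy.
options = ["go straight", "turn into the coffee shop", "cross the street"]
scores = [1.2, 0.7, 0.4]  # made-up policy outputs

def sample_action(rng, temperature=1.0):
    # Softmax sampling: convert scores to weights, then roll a weighted die.
    weights = [math.exp(s / temperature) for s in scores]
    r = rng.random() * sum(weights)
    for option, w in zip(options, weights):
        r -= w
        if r <= 0:
            return option
    return options[-1]

print(sample_action(random.Random()))      # a "true" dice roll
print(sample_action(random.Random(1337)))  # a roll the attacker can replay
```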

Thus, we don't have to think of robots as either "mere tools" or "self-directed agents". Rather, they are extensions of human agency which operate, insofar as they rely on apparent randomness, by divination. Final note of clarification: I'm not saying that all forms of robotics are witchcraft and sin. However, because of what I have explained here, we can confidently predict that giving robots agency at parity with humans will inevitably result in tragedy because they are completely susceptible to influence through witchcraft. And I am particularly identifying the "point of entry" to be the random-number generator.
 


@12:20, Altman signals that OpenAI's current internal model is significantly more advanced than GPT-4: "GPT-4 is the dumbest model that any of you will ever have to use again, by a lot." So far, OpenAI has consistently understated its products, so, if the trend continues, we can reasonably expect GPT-5 to be a significant advance over GPT-4. While I'm pleased that OpenAI is making progress, I parse this mainly in terms of the question of how the open-source community will catch up. Open-source LLMs are trailing OpenAI by about 6 months (that's my preferred measuring-stick; some people give OpenAI a bigger lead than I do). Technology is made of "ingredients", so to speak, so the question as I see it is: what are their ingredients?

@19:47 "I wonder... how long it will take us to figure out the new social contract..." This quote perfectly exhibits why I consider OpenAI the worst-case AI safety scenario. People keep wringing their hands, "How do we keep ChatGPT a safe AI?!?" but ChatGPT/OpenAI just is the unsafe scenario, it is the worst-case AI-safety trajectory. OpenAI is precisely what an AGI nightmare-scenario looks like in its nascent stage. Musk knew this (and has as much as said it) and I believe this is the real reason he left OpenAI. While his lawsuit has some pretty wild stuff in it, it's not all silly, there are some serious components in it.

@26:22 " ... a human right to get access to a certain amount of compute ..." Smoking-gun. Straight-up Marxism. So, there's a new social contract coming, a core part of this social contract is supposed to be that "we all" get "access" to "a certain amount of compute", that is, access to this digital-mind-of-god which OpenAI is building.

Note that this is almost precisely conformal to the ancient concept of temple sacrifice to receive the attention of an idol. "I need rain for my crops. I need the attention of the rain-god. I shall travel to the temple of the rain-god and offer this goat in sacrifice to him in the hopes that he will solve my rain problem." OpenAI is on a trajectory to that, but this will be the omni-idol, since its "intelligence" is being sold as able to "solve all other problems." Recall that Demis Hassabis explicitly stated that his goal in founding DeepMind was to "solve intelligence, then use that to solve everything else." While I don't oppose that concept as stated, the fact is that these people are spiritually shallow (carnally-minded, in Christian terminology) and they simply have not thought through the real implications of what they're trying to do. They've obviously thought through the material implications in great depth -- the effects on infrastructure, logistics, employment, production, lifestyles, and so on. But they haven't thought about the spiritual implications, on the specious theory that our material lives and spiritual lives are clinically separate. The current AGI agenda might work if life were a Virtual Reality Ikea catalog. Life is not a Virtual Reality Ikea catalog. Maybe a few nerds in Silicon Valley would even consider that Paradise. The vast majority of us do not.

@37:50 "... the balance of power in the world, it feels like it does change a lot..." -- In other words, the Marxist New Social Contract is going to be implemented by the power of the [ChatGPT] pen, which is mightier than the status quo [national governments] sword.

@43:48 "... society is far smarter than you now, society is an AGI as far as you can tell..." This is Marx's theory of the Collective, restated in technological language. The reason that you and I are disposable cogs in the machine, is that the Collective is the thing-itself. We are like gut-bacteria floating around the intestines of the Collective. The Collective uses us, but it has no need for any one of us, in fact, it is not even aware of our existence. Sooner or later, we will be expelled in death, but the Collective is eternal. Thus, every social theory which attempts to start with Man The Individual is delusional, is doomed to end in fatal contradiction. From the standpoint of the Collective, a truly capitalist, free-market society would be a tumor. And Marxists are the anti-cancer cells. This is the best metaphor that I can give to explain the mindset of the "ALL-IN" Marxist, whether the useful-idiots or the psychotic string-pullers. They are delusional beyond anything that can even be expressed in words. "... brick by brick ..." --> "All in all, you're just another brick in the wall"
 
It's all about back-door Marxism through UBI, folks... you heard it here first:



One mistake in this video: he claims that Musk's own AI is closed-source, and this is false. Anyone can download X's AI, called Grok-1, and run it on their own hardware; see here. Musk has completely proved his commitment to open source in the AI space beyond all doubt.

Also, one cringe-moment: Unironically quoting Bernie Sanders.

Otherwise, good video.
 
The news just won't give me a chance to breathe... :LMAO:



Made a meme for this...

 
Them: THEY MADE THE AI IN THE MOVIE!!!
Me: :shrugging:



I never used to pay much attention to the quiet people. I always assumed that they just liked to be left alone and to leave others alone, in return. I assumed that they were just nice, but quiet.

Now I understand that quiet people are actually psychopaths. Maybe not all of them, but certainly enough of them to create effectively infinite demand for a service whose true purpose can only be summarized as: "Please, God, never make me have to speak to another human being again. Just murder everyone and leave me here by myself to talk to machine instances pretending to be human because that is so much more tolerable for a psychopath like me."

What is wrong with this generation? None of the people I grew up with were like this. Or maybe a few of them secretly were, but at least they knew to keep their psychopathy to themselves. Now, AI-fueled misanthropy is the new Wokism. All of these omnicidal AI sycophants are crawling out of the woodwork from every corner of society, in synchrony. Where did they all spawn from, and when?! "It was public school"... no, because I've been to public school, and my friends were all educated there, and they weren't a bunch of omnicidal maniacs. "It's social media"... no, that explanation doesn't work either, because I was a fully-grown adult when the social-media boom began, and while I saw the transformative effect it was having on my generation, it never actually had a chance to "bake in" to the social fabric. So it was a hype fad, but it didn't reach the level of truly inter-generational burn-in. Clown World is what I would expect to happen if you fast-forwarded the circa-2010 trajectory about 3-5 generations. But it happened literally overnight, like a snap of the fingers. I went to bed one night in non-Clown World, and the next morning I woke up in Clown World. WTH is going on?!
 