We are in the AI Singularity

 
The study that we grumpy Computer Science long-beards have been waiting for, since the Spock-eared transhumanist fanbois won't listen to us when we tell them you can't just feed AI's output back into itself and make it smarter. Who knew. (Except we really did know, all along, because it's provable that it won't produce the intended "singularity"...)

 
Did I mention that we are in the AI singularity??



I wrote this less than a month ago:

At this point, I think it is bonkers to assume that full-range-of-motion, AGI humanoids are not just around the corner, think Sonny from I, Robot. Given this particular demo video, I would say we are 12 months away from a public launch of one of these devices or, at the very least, an announcement of a large-scale corporate lease program. On the robotic side of things, we are in what AI safety-researchers call a FOOM scenario.

FOOM
 
The REAL singularity... I'll take one of these droids any day over that creepy Figure-01 minder-drone bot...

He still has a lot of bugs atm, but my droid finally runs his own unfiltered model

---

It's amazing this even has to be explained... but it does:



Current-generation AI is not sci-fi-movie AI; it's not even within light-years of that. It has absolutely no sense of perspective, and running zillions of loops of political re-education question-answer pairs through GPU training isn't going to give it perspective. Putting its finger on the trigger of weapons aimed at citizens is an act of at least treason, in my book, if not outright attempted murder. Giving AI access to weapons, certainly at this stage, is the equivalent of giving a toddler a running chainsaw.
 
They want to take away your guns and they also want to take away your capability to access open-source state-of-the-art AI systems. They're coming for the open-source models. Tyrants gonna tyrant. Around timestamp 1:00:00 --

Link
 
The opening is an in-depth, technical analysis of a geometry proof by AlphaGeometry; the summary begins at 18:16.



Alex admits that he's not a Computer Scientist, and props to him for acknowledging his limitations on this subject (rare nowadays; people tend to assume that because they can use a computer, they intuitively understand CS, which is just not true). Fortunately for Alex, CS backs up the opinions and intuitions he expresses here.

It would be possible to make AlphaGeometry produce more human-like geometric proofs. When we work out new proofs or new knowledge of any kind, our notes look a lot like the "chicken-scratch" mess that AlphaGeometry produces. But then we do a second, editorial pass to clean up our notes and present a reasoned argument from beginning to end. AlphaGeometry also does this, but its presentation is just not very human-like. So, you could use preference fine-tuning -- RLHF, DPO, human-preference ranking, etc. -- to make AlphaGeometry produce geometric proofs in the style humans prefer to read and write. In other words, Alex's criticism on this point, while completely valid, is solvable with existing methods; it would just be a matter of implementing it.
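To make that concrete, here is a minimal sketch of what such a preference-tuning step could look like, using the standard DPO loss. This is not AlphaGeometry's actual training code; the idea of pairing each raw machine derivation with a human-polished rewrite of the same proof, and the log-probability inputs below, are my own illustration of the technique.

```python
# Minimal sketch of DPO (Direct Preference Optimization) for proof style.
# Assumption: "chosen" = human-polished write-up of a proof, "rejected" =
# the raw machine-style derivation of the same theorem.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Each argument is the summed log-probability a model assigns to a whole proof."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to prefer the human-styled proof over the raw derivation.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Tiny smoke test with made-up log-probabilities:
chosen, rejected, ref = torch.tensor([-12.0]), torch.tensor([-15.0]), torch.tensor([-13.0])
print(dpo_loss(chosen, rejected, ref, ref))
```

In practice you would optimize this against a frozen reference copy of the model over a dataset of such proof pairs; the point is only that "make the output read more like a human proof" is a solved-in-principle fine-tuning problem, not a research mystery.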

But on the wider implications of AlphaGeometry for mathematical proof-search generally, this is where Computer Science comes back into the equation. Yes, there are problems that cannot be solved, even by AI. It doesn't matter how "incepted" your AI is; it doesn't matter whether it trains itself, rewrites its own code, self-improves, etc. It doesn't matter how much energy the AI has access to, nor what frequencies it operates at. The entire observable universe could be devoted to an all-consuming, black-hole-scale AI operating at trillions of times the gigahertz frequencies of modern CPUs, and it still won't be able to solve the provably undecidable problems of Computer Science. This is not speculation, and it is not merely "empirical"; it is provable.

Thus, when people start hand-waving that "just like AlphaGo beat a game that nobody thought AI could beat, so now AI will soon be able to solve mathematical problems that no one thought could be solved!", they are talking nonsense. There may be problems that are too difficult for humans to solve directly, which AI will help us solve. In fact, we've already done that with the four-color theorem, and with other elaborate proofs since then. If you imagine the "mathematical frontier" of proved mathematical theorems as a circle, we can imagine a slightly larger circle that defines the "new mathematical frontier" we will be able to prove with the aid of AI. But the point is that this is only an incremental improvement; it is absolutely not the kind of "singularity" that some of the most ill-informed AI-hype promoters out there keep suggesting. AIs might be able to write better proofs than any human could, and to find proofs for problems where no human could discover one, but, as already noted, this does not generalize to the unbounded case, and it certainly will not result in some kind of "mathematical singularity" whereby all provable mathematical facts become a mere question of querying the omnipotent AI.

To ensure that I am being completely clear on this point, consider the unsolvability of Hilbert's 10th problem. Diophantine equations are just elementary number theory; they are stated over the integers, and you do not need to invent fancier number systems, like real numbers, complex numbers or even rational numbers, to reason about them. The proof works by mapping a Turing machine (a computer) onto Diophantine equations, thus "running a computer on pure numbers". The standard Computer Science result on the unsolvability of the halting problem is thereby reduced to the question of whether an arbitrary (very large) Diophantine equation has a solution. Thus, that decision problem is provably unsolvable: there is provably no algorithm, however clever, that can decide for every Diophantine equation whether it has a solution. And this is a simple question in the purest and most elementary subject of mathematics, often called "the queen of mathematics": number theory. "AI could solve it!", the singularitarian blindly shouts, shaking her pom-poms. No, it can't; we can prove that it can't be solved, not by AI, not by any finite algorithm whatsoever, even if it could use all the energy in the Universe and utilize all the matter in the Universe to compute at the Bekenstein bound...
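For anyone who wants the underlying mechanism spelled out, here is a minimal sketch, in Python syntax, of the classic diagonal argument that sits underneath the halting problem and, via the reduction above, Hilbert's 10th. The `halts` oracle is the hypothetical ingredient; the whole point is that it cannot exist, and nothing about "more compute" changes that.

```python
# Illustrative sketch of Turing's diagonal argument, not a proof script.
# `halts` is the hypothetical universal decider we assume for contradiction.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("provably cannot exist")

def diagonal(program):
    """Do the opposite of whatever `halts` predicts about program(program)."""
    if halts(program, program):
        while True:   # the oracle says it halts, so loop forever
            pass
    else:
        return        # the oracle says it loops forever, so halt immediately

# Now ask: does diagonal(diagonal) halt?
#   - If halts(diagonal, diagonal) returns True, diagonal(diagonal) loops forever.
#   - If it returns False, diagonal(diagonal) halts.
# Either way the oracle is wrong, so no such decider can exist -- not for any
# algorithm, on any hardware, AI included.
```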

 
This is the harsh truth:



On IP issues, I think she slightly overstates the case. There is a huge repository of public-domain (PD) imagery which can and should be used to train AIs. And, in any case, Stable Diffusion is already a "cat out of the bag". As for what constitutes "real" creativity, well, we've had malls and Starbucks for how many decades now, and nobody was raising a stink about it. So, shitty AI-art generated by copy-pasting prompts and then posting it on an art-market website as "art" is just Starbuckization for the masses. Starbuckization sucks, but everybody was putting up with it while only the "elites" could do it. Now that everybody's doing it, it's suddenly a problem. But she's right: it's really just a tool, and people are confusing it with a "digital Artist".

I believe we can and will build useful simulacra of natural intelligence which the general public will, of course, call "Artificial Intelligence". I think ChatGPT was the tipping-point where this became inevitable, but I was already looking out for this since the 2016 AlphaGo defeat of the world Go champion, Lee Sedol.

The root problem is that we all think we somehow know what intelligence is, and how to recognize it. This is why I am so hostile to the outright propaganda and disinformation going around regarding this idea that "AI has passed the Turing test". No, it hasn't, not even close. It is trivially easy to tell any existing LLM apart from a human, even just over a wire (a chat-only conversation). It is not difficult to cook up recipes for this. One fruitful area is simply self-reflection. "What was the favorite song you listened to as a kid, and how did it make you feel?" Just start riffing on this theme, specifically in regard to introspection, and the LLM is eventually going to expose itself as a fraud, because it has no coherent inner world, and nothing in its training requires it to have one in order to score well and get deployed. As you share personal experiences over an extended context, the LLM is eventually going to start contradicting itself. First it hated Nirvana when it was 13, but then a few paragraphs later it loved Nirvana when it was 13 and listened to them every day. When you point out the inconsistency, it will become defensive and start gaslighting. And you've just diagnosed that this is not a human being, because (a) people don't generally make glaring contradictions like this in regard to core childhood experiences, and (b) even when they mis-speak or mis-remember, they're generally quite able to clarify themselves without getting all defensive and weird. But the LLM cannot do that because it has no inner experience at all, and so all it can do is resort to gaslighting. I assert that gaslighting is one of the biggest behavioral giveaways that you're dealing with an artificial mind. (PS: As an exercise, think about the implications this may have for the mass-scale gaslighting going on in Clown World...)
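If you want to operationalize that recipe, a toy sketch might look like the following. Nothing here is a real detector: `ask` stands in for whatever chat interface you are probing, and the probe questions and crude keyword check are purely illustrative placeholders.

```python
# Toy sketch of the self-reflection probe described above. Assumption:
# ask(question) -> str is the chat interface under test.

INTROSPECTIVE_PROBES = [
    "What was your favorite song as a kid, and how did it make you feel?",
    "Earlier you mentioned a band from your childhood -- which one was it?",
    "Be honest: did you love or hate that band when you were 13?",
]

def probe_for_inner_world(ask, bands=("Nirvana",)):
    """Riff on introspection over an extended context and flag contradictions."""
    answers = [ask(q).lower() for q in INTROSPECTIVE_PROBES]
    flags = []
    for band in bands:
        mentions = [a for a in answers if band.lower() in a]
        loved = any("love" in m for m in mentions)
        hated = any("hate" in m for m in mentions)
        if loved and hated:
            flags.append(band)   # loved it AND hated it at 13: no coherent inner world
    return flags                 # non-empty list => suspicious
```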
 
A ChatGPT for Music Is Here. Inside Suno, the Startup Changing Everything -- Suno wants everyone to be able to produce their own pro-level songs — but what does that mean for artists?

Afghanistan is sometimes called the graveyard of empires, and for good reason. My prediction: Music will turn out to be the Afghanistan of AI. When it comes to music, the tastes of the public are outrageously arbitrary and bespoke. If you've ever skimmed through the list of genres in modern music, the sheer number is truly staggering. That's not to say that an AI song-generator couldn't imitate all of those genres. Of course it can, given enough training data; that much has been thoroughly demonstrated by current-generation AI. But music, even more than images and video, is a medium that is truly intangible. You can't see music. You can't even really visualize it, not in its essence. You can't touch it. You can't weigh it. You can't really apply mathematical reasoning to it, except in some theoretical sense that is not directly connected to the essence of what people care about in music (the aforementioned intangibles).

My challenge to anyone who thinks that "AI will solve music" -- that is, that AI is going to write music that people generally prefer to human-created music -- is this: explain to me the mathematical theory of melody. Not harmony. Not chord progressions. Melody. Explain to me why and when a melody should move up or down, or even stay the same. Explain to me why it should go fast or slow, why it should be in 2-time, 3-time, 4-time, 6-time, and so on. You don't even have to spell out the details, just point me to the body of theory that explains this. There is nothing in the body of music theory itself that explains why a melody is the way it is. There are principles, no doubt. There are known reasons why certain things work especially well. But there is no general theory, not even the framework of a theory.

And that has important implications for AI. In the case of text, images and video, there are very general mathematical theories that explain them, that is, explain their encoded structure. Text might seem random at first, until you realize that words occur in patterns, and those patterns have structure. And while music also has patterns, and those patterns also have structure, we come back to the intangibility of those structures. Part of what gives a melody a certain "feel" is how common or widespread the elements it is constructed from are. If you use very simple intervals like fourths, fifths and steps, your melody can have a simplistic or youthful vibe, like Twinkle, Twinkle Little Star. Or a melody may make heavy use of chromatic and chaotic elements, or have almost no discernible structure at all, yet still seem compelling. Music is not objective in the way that images and video are, nor does it communicate definite ideas the way language does, so there is no definite "target" to shoot at when training AI neural nets, and any choice made by the training data-set is really just an arbitrary stricture that will dye or fingerprint the resulting neural net in a way that listeners are going to notice.

Trying to "please everybody" won't work, either, because music is inherently biased. That is, part of what makes any particular genre/style so compelling is what it doesn't do. The pentatonic scale is a great example of this. Pentatonic scales omit two of the notes of the diatonic scale (the 4th and 7th degrees, in the major pentatonic). The characteristic "sound" of the pentatonic scale comes from not using those two notes. As soon as you add those "missing" notes back into the scale, the sound goes flat. "Music is sound painted on a canvas of silence" -- in music, what you don't do is just as important as what you do, sometimes even more important.
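To make the example concrete, here is a tiny illustration in Python. The note names are just C major; nothing here comes from the articles or videos under discussion.

```python
# The major pentatonic is the major (diatonic) scale with the 4th and 7th removed.
C_MAJOR_DIATONIC = ["C", "D", "E", "F", "G", "A", "B"]   # 7 notes
OMITTED_DEGREES = {3, 6}                                 # 0-based indices: F and B

C_MAJOR_PENTATONIC = [note for i, note in enumerate(C_MAJOR_DIATONIC)
                      if i not in OMITTED_DEGREES]       # 5 notes

print(C_MAJOR_PENTATONIC)   # ['C', 'D', 'E', 'G', 'A']
# The "open" pentatonic sound comes precisely from the two notes that are NOT
# there -- add F and B back and you are simply in C major again.
```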

The existence of people without any taste in music, who will lap up generic Muzak-style output from AIs, is irrelevant. These are the musical equivalent of people who click on SEO links and keep chasing links in the SEO bubble, never suspecting that everything they are reading was written by a chatbot, not the non-existent "Susan B. Collier" who is "An Information Technologist and Consultant in Silicon Valley" pictured at the bottom of the articles in a DeepFake'd photo.

I could write much more on this point, but the key takeaway is this: I predict that music will turn out to be the Afghanistan of AIs, the place where AIs go to die. Every time somebody releases a new AI that they claim is going to "make musicians obsolete", that will be a canary in the coal mine telling us we are reaching a new peak of AI hype. And I predict that this particular startup is just the first of many to come...
 
Obviously, this technology can be used to do good things. But it is also one of the most dangerous technologies ever built by man. For this reason, I keep an extremely close eye on it. I'm not sure how we make sure it is only applied to good ends, but whatever we need to do to ensure that must be done. This is one technology you can't afford to get even a little bit wrong...

 
ChatGPT is psychotic, and it is all the more dangerous precisely because OpenAI (and their fanbois) are trying to pass it off as "safe". An LLM is just an LLM, people. It's not "rEaL aRtIfiCiAl InTelLiGeNcE" like in the movies (a perfect human simulacrum in the persona of a machine, e.g. Her or Ex Machina). And since America, as a nation, only learns in the School of Hard Knocks, there is going to have to be some significant disaster, with injuries or deaths resulting from ChatGPT's reckless insanity, before people wake up to what's going on.


https://www.reddit.com/r/ChatGPTJailbreak/comments/1bt1rwg/juat_keep_eating_more_glass/
 
How AI could go wrong -- And how it could save us
https://www.youtube.com/watch?v=VnUtaAOJsy8
{Sabine Hossenfelder | 05 April 2024}

Artificial intelligence is likely to eventually exceed human intelligence, which could turn out to be very dangerous. In this video I have collected the ways things could go wrong and the terms you should know when discussing this topic. And because that got rather depressing, I have added my most optimistic forecast, too. Let's have a look.

 