We are in the AI Singularity

This is what I was getting at, it may be too late. There would have to have been extensive testing on closed systems that mimic networks.

Yep, the dangers and risks are unknown and, frankly, un-quantifiable.

That's not how I see it at all. I think they know it's dangerous, and are fine with it as long as they are in control. They've always been in control, even of their parents' nannies, so why not?

Well, yes, you're right about that. But I think we agree that they are missing that psychological fear of total existential destruction because... they never got smacked as a kid, so the word "consequences" isn't even in their vocabulary. And now the rest of us get to pay for it... great...

The normie, I would say, is too scared of it, because of sci-fi and ignorance. I can see how two pixels can be placed dimensionally, and the computer can then be asked what the next most likely point on the matrix would be, given that we're trying to create a chair here. But I don't see generalized intelligence. I only see glorified wacky mad-libs that can mimic intelligence. The less experience one has with computers, the more convincing it is.

The normies are scared in the wrong ways... scared of things they shouldn't be, and not scared enough of things they should be. Political systems amplify risk. That's actually their purpose, when you think carefully about it. A capital is a single point in space which, when seized, permits the invader to take control of a vast territory. Centralization increases risk, and the purpose of every political system except liberty is, ideally, to centralize 100% of everything at a single point.

As for what's going on inside the neural nets, I caution against seeing this stuff as mere toys. When you sit down to think hard about something, your mind goes through a sequence of logical steps. What those steps are is not as important as that there are steps, and they are sequential. This is exactly what these artificial neural nets are doing. We can debate the philosophy of whether they're "really" thinking or not, but that's like debating whether an image-generation AI is "really" drawing or not... it's just missing the point. It's producing an output that is conformal to the real thinking patterns in our brains, and it's doing this on such a scale that it's able to begin mimicking human thought through sheer enumeration of nearly all the thinking patterns we use. It's not that it's "creative"... it can't be. It's that its "thought-database" is so enormous that you will only rarely use a thinking pattern that is not already part of its database of thought-patterns. Again, you can wave that away as mere digital trickery but, at some point, what difference does it make? Especially when we start thinking about security threats that AI agents pose to human society (and there is a boundless ocean of such threats, including many existential threats).
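A toy sketch of that "thought-database" idea, reduced to its crudest possible form: a literal lookup table of patterns already seen, replaying the most frequent continuation. Real models learn continuous representations rather than lookup tables, and everything here (the corpus, the function names) is made up purely for illustration:

```python
from collections import Counter, defaultdict

# Toy "thought-database": count how often each word follows each other word,
# then answer queries by replaying the most frequent continuation seen.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # bigram counts: the whole "database"

def most_likely_next(word):
    """Return the most frequent continuation recorded for `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- seen twice, vs. once for mat/fish
```

With a database of ten words this is obviously trivial; the point in the post above is what happens when the same mechanism sits on top of nearly every thinking pattern humans have ever written down.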

Agree, but not autonomously without errors that require human intervention. It can only be a tool that facilitates the desires of the sentient being, until and unless we learn to create sentient beings. This does not look even close to that to me, though I am listening.

Sentience/consciousness is somewhat beside the point. The real issue is agency ("decisions" with real-world effects). As long as the model is not able to interactively initiate communications IRL, it's "trapped" in its sandbox. But as soon as you give it the power to autonomously compose and send an email/etc., you've just given it agency. That is the problem. And a sufficiently complex agent is an unbounded threat. We simply don't know what it might do.

These things are black-boxes that are smarter than us in many specialized domains, and becoming increasingly similar to our "general-purpose problem-solving" capabilities across all domains. Our power to think is not as special as we like to think. As someone who works in this field, I cannot caution strongly enough. The kind of thinking you do when solving a problem is algorithmic, and computers can (and do) imitate that. And when you have a black-box neural net that is doing this kind of thinking, we have no idea what exactly is going on inside there. Even if its outputs seem "crazy" to us, that's no guarantee that it isn't actually smarter than us. Most of the things a human does in a day -- other than eat, drink and use the toilet -- must surely seem insane to a dog. We do those "crazy" things precisely because we're smarter than the dog.
 


ChatGPT is a bigger deal than you think. Some have compared it to the invention of the printing press... others have compared it to the invention of the wheel. I avoid hype like the plague but, in this case, it's practically impossible to exaggerate just how big a deal this really is. My intention in this thread is not to hype you into using it; I don't care if you use it. I only interact with it to keep my finger on its pulse. Rather, people need to wake up and understand that their world has been changed overnight, and they just haven't realized it yet.

The cyborgism/transhumanism dimension also doesn't interest me. What concerns me is that a lot of our social institutions (including foundational institutions, such as romantic relationships, home life and family in general) are going to be disrupted in a way that was impossible just one year ago. Not "could be" disrupted... they will be disrupted. We said this when books began to be published, and we said it again when radio and TV were invented, and yet again when the Internet was invented... and it was true each time, but we survived. But the disruption those changes caused will look like a drop in a bucket compared to the disruption that AI is going to cause.

The cat's already out of the bag, so there's no putting it back in. But there's a difference between letting a cat out of the bag versus getting pounced on by a leopard in the dark. People need to wake up and realize there is a wild leopard loose in the house. I don't know what the ramifications of AI will be, I only know that they're going to be enormous...
 
Ready or not, here it comes...



Good video. I feel dumb for thinking of only one AI. Of course there will be many with all kinds of rules competing. Theoretically it only takes one dominant evil one, though I don't believe evil people are capable of engineering as well as normal people, all else equal (being evil is a side effect of not understanding self and the world).
 
Yep, the dangers and risks are unknown and, frankly, un-quantifiable.

I can see that now, thanks.

Well, yes, you're right about that. But I think we agree that they are missing that psychological fear of total existential destruction because... they never got smacked as a kid, so the word "consequences" isn't even in their vocabulary. And now the rest of us get to pay for it... great...

Well, we agree that they lack the incentive to change, but I don't believe hitting people solves problems. It may solve the hitter's problem, but it's just selfish to ignore all the other problems it has caused. I am still dealing with side effects of someone else being aggressive with my child, years later.

The normies are scared in the wrong ways... scared of things they shouldn't be, and not scared enough of things they should be.

I agree

Political systems amplify risk. That's actually their purpose, when you think carefully about it. A capital is a single point in space which, when seized, permits the invader to take control of a vast territory. Centralization increases risk, and the purpose of every political system except liberty is, ideally, to centralize 100% of everything at a single point.

Excellent point that I've never thought of.

The kind of thinking you do when solving a problem is algorithmic, and computers can (and do) imitate that.

Heuristic?

And when you have a black-box neural net that is doing this kind of thinking, we have no idea what exactly is going on inside there. Even if its outputs seem "crazy" to us, that's no guarantee that it isn't actually smarter than us. Most of the things a human does in a day -- other than eat, drink and use the toilet -- must surely seem insane to a dog. We do those "crazy" things precisely because we're smarter than the dog.

LOL true, true
 
Heuristic?

I hesitate to put this out there -- given how far down the rabbit-hole we already are with ChatGPT -- but here goes. Human thought is not just heuristic. Our thinking ability is a combination of "fuzzy heuristics" (metaphors, analogies, visualizations, etc.) ... sitting on top of a core algorithm that is, well, exact. Your capacity for reasoning is not fuzzy. When you understand that a chess position is mate-in-1 or mate-in-2, your understanding is not just a "fuzzy heuristic"; rather, "you know that you know that you know..." (ad nauseam) that the position is checkmate.

Of course, we mostly use our heuristic thinking. For example, searching the board for a checkmate that is there, but which you haven't seen yet, is largely heuristic. However, we also rely on exact reasoning much more frequently than we realize. Spatial reasoning is a great example of this. In football, the ball is either in-bounds or out-of-bounds (discrete logical categories), and which rules apply depends on which state the ball is in (spatially). This is not a "fuzzy heuristic" except in the extremely rare case where the ball is so close to the line that a ref or the judges have to make a call. But even then, the call that is made is defined to be the state of the ball, so we go right back to exact logic. And keep in mind that a lot of the snap decisions a football player makes while in motion are based on an internal calculus of the various penalties tied to the spatial location of the ball. Some of this is heuristic, obviously, but a lot of it is actually high-speed, exact reasoning!
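The in-bounds/out-of-bounds point can be made concrete in a few lines: continuous spatial position collapses into a discrete logical category, and rules fire on the category, not on the raw coordinates. The field dimensions below are assumed for illustration, not official:

```python
# Exact (non-fuzzy) reasoning: the ball's continuous position collapses
# into one of two discrete states, and the rules key off the state.
FIELD_LENGTH = 100.0  # hypothetical field, in yards
FIELD_WIDTH = 53.3

def ball_state(x, y):
    """Classify a position into a discrete logical category."""
    if 0.0 <= x <= FIELD_LENGTH and 0.0 <= y <= FIELD_WIDTH:
        return "in-bounds"
    return "out-of-bounds"

# There is no "82% in-bounds": even a tenth of a yard flips the category,
# and with it, which rules apply.
assert ball_state(50.0, 20.0) == "in-bounds"
assert ball_state(50.0, 53.4) == "out-of-bounds"
```

The "close call" case from the post maps onto this too: whatever the ref rules *becomes* the value of `ball_state`, and the exact logic proceeds from there.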

GOFAI will yet have its vengeance!
 
Theoretically it only takes one dominant evil one, though I don't believe evil people are capable of engineering as well as normal people, all else equal (being evil is a side effect of not understanding self and the world).

Bill Gates can't code. But he can sure as hell pay people to code. Computers. DNA.

Why do they deliberately trash the economy? Because then they have all the money. They may have less money than they could leech off of a healthy economy. But if there's anything in this world that can erode morality, it's hunger.

They can get their evil engineering done.
 
AGI is here. It turns out that it's not that complicated. General-purpose language-model + task-list, basically (and some kind of action-space). This guy does kind of like to hear himself talk, but he's at least throwing out keywords that people should be increasing their awareness of...
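That recipe (language model + task list + action space) can be sketched in a few lines. The model call is faked so the sketch runs end to end; every name here is hypothetical and does not correspond to any real agent framework's API:

```python
def call_model(prompt):
    # Placeholder: a real agent would call an LLM API here. We fake a
    # reply so the loop is runnable as-is.
    return "done: " + prompt

# The "action-space": things the agent is permitted to do in the world.
ACTIONS = {
    "search": lambda arg: f"results for {arg!r}",
    "write":  lambda arg: f"wrote {arg!r} to disk",
}

def run_agent(goal):
    tasks = [goal]  # the task-list, seeded from the goal
    log = []
    while tasks:
        task = tasks.pop(0)
        thought = call_model(f"How do I accomplish: {task}?")
        # A real agent would parse `thought` into new tasks and action
        # calls; here we just invoke one canned action per task.
        log.append(ACTIONS["search"](task))
        log.append(thought)
    return log

print(run_agent("summarize the news"))
```

The danger discussed earlier in the thread lives entirely in `ACTIONS`: the moment that table contains "send email" or anything else with real-world effects, the loop has agency.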

 
I, for one, am no longer scared of our ChatGPT Overlord...

[attached: eight screenshots of ChatGPT's attempts at writing memes]
The Left can't meme...

ChatGPT can't meme... in exactly the same way...

:confused: Hmmm.................
 
You thought psych-med fueled shootings were bad... wait 'til we get AI-fueled shootings...

@50:56
 