ClaytonB
Member
- Joined
- Oct 30, 2011
- Messages
- 10,225
This is what I was getting at, it may be too late. There would have to have been extensive testing on closed systems that mimic networks.
Yep, the dangers and risks are unknown and, frankly, un-quantifiable.
That's not how I see it at all. I think they know it's dangerous, and are fine with it as long as they are in control. They've always been in control, even of their parents' nannies, so why not?
Well, yes, you're right about that. But I think we agree that they are missing that psychological fear of total existential destruction because... they never got smacked as a kid, so the word "consequences" isn't even in their vocabulary. And now the rest of us get to pay for it... great...
Normie I would say is too scared of it, because of scifi and ignorance. I can see how two pixels can be placed dimensionally, and then the computer can be asked what the next most likely point on the matrix would be, given we're trying to create a chair here. But I don't see generalized intelligence. I only see glorified wacky mad-libs that can mimic intelligence. The less experience one has with computers, the more convincing.
The normies scared in the wrong ways... scared of things they shouldn't be, and not scared enough of things they should be. Political systems amplify risk. That's actually their purpose, when you think carefully about it. A capital is a single point in space which, when seized, permits the invader to take control of a vast territory. Centralization increases risk and the purpose of all political systems, except liberty, is to ideally centralize 100% of everything in a single point.
As for what's going on inside the neural nets, I caution against seeing this stuff as mere toys. When you sit down to think hard about something, your mind goes through a sequence of logical steps. What those steps are is not as important as that there are steps, and they are sequential. This is exactly what these artificial neural nets are doing. We can debate the philosophy of whether they're "really" thinking or not, but that's like debating whether an image-generation AI is "really" drawing or not... it's just missing the point. It's producing an output that is conformal to the real thinking patterns in our brains, and it's doing this on such a scale that it's able to begin mimicking human thought through sheer enumeration of nearly all the thinking patterns we use. It's not that it's "creative"... it can't be. It's that its "thought-database" is so enormous that you will only rarely use a thinking pattern that is not already part of its database of thought-patterns. Again, you can wave that away as mere digital trickery but, at some point, what difference does it make? Especially when we start thinking about security threats that AI agents pose to human society (and there is a boundless ocean of such threats, including many existential threats).
Agree, but not autonomously without errors that require human intervention. It can only be a tool that facilitates the desires of the sentient being, until and unless we learn to create sentient beings. This does not look even close to that to me, though I am listening.
Sentience/consciousness is somewhat besides the point. The real issue is agency ("decisions" with real-world effects.) As long as the model is not able to interactively initiate communications IRL, then it's "trapped" in its sandbox. But as soon as you give it the power to autonomously compose and send an email/etc. you've just given it agency. That is the problem. And a sufficiently complex agent is an unbounded threat. We simply don't know what it might do. These things are black-boxes that are smarter than us in many specialized domains, and becoming increasingly similar to our "general-purpose problem-solving" capabilities across all domains. Our power to think is not as special as we like to think. As someone who works in this field, I cannot caution strongly enough. The kind of thinking you do when solving a problem is algorithmic, and computers can (and do) imitate that. And when you have a black-box neural net that is doing this kind of thinking, we have no idea what exactly is going on inside there. Even if its outputs seem "crazy" to us, that's no guarantee that it isn't actually smarter than us. Most of the things a human does in a day -- other than eat, drink and use the toilet -- must surely seem insane to a dog. We do those "crazy" things precisely because we're smarter than the dog.