ClaytonB
Member · Joined Oct 30, 2011 · Messages: 10,272
Meta is more dangerous than nukes
Before jumping in to explain why Meta is more dangerous than nukes, let me lay a little groundwork.
In the 2014 film Edge of Tomorrow, starring Tom Cruise and Emily Blunt, the characters are trapped in a war against an alien race that is vastly superior to them, and they have just one weapon with which to counter it -- a kind of "time-reset" weapon. This weapon sends them -- and the aliens -- back to a point in time, like a game save-point. Each time they are about to die, they reset back to the start and "re-play" the battle. Each time, they make a change to avoid a past failure, thus inching one step closer to defeating the aliens.
If you're ever in the mood to drive yourself up the wall, try playing a cutting-edge chess engine like Stockfish from an even position (such as a position you would find in a chess book or website) and apply the Edge of Tomorrow formula. Just play until you're clearly lost, then reset back to the original position, and keep going in this way. Before long, you will get an idea of the true meaning of 3000+ Elo. It doesn't matter what you do; that damn chess engine will always have a winning reply.
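You can see why the reset loop doesn't save you against a perfect opponent with a much smaller game than chess. The sketch below uses Nim (where perfect play is known: always move so the XOR of the pile sizes, the "nim-sum", becomes zero) as a stand-in for Stockfish; the "human" resets fifty times and tries random lines, and the perfect engine wins every single run. This is purely illustrative -- the function names and the choice of Nim are mine, not anything from Stockfish.

```python
import random
from functools import reduce
from operator import xor

def engine_move(piles):
    """Perfect Nim play: move so the nim-sum (XOR of pile sizes) becomes 0."""
    s = reduce(xor, piles)
    for i, p in enumerate(piles):
        if p ^ s < p:                     # such a pile exists whenever s != 0
            new = piles[:]
            new[i] = p ^ s
            return new
    # Nim-sum already 0: the engine is lost here; take one object and stall.
    i = next(j for j, p in enumerate(piles) if p > 0)
    new = piles[:]
    new[i] -= 1
    return new

def human_move(piles, rng):
    """The 'resetting' player: tries a different random legal move each run."""
    i = rng.choice([j for j, p in enumerate(piles) if p > 0])
    new = piles[:]
    new[i] = rng.randrange(piles[i])      # leave 0..p-1 objects in that pile
    return new

def play(start, rng):
    piles, turn = start[:], "human"
    while any(piles):
        piles = human_move(piles, rng) if turn == "human" else engine_move(piles)
        turn = "engine" if turn == "human" else "human"
    # Normal play: whoever took the last object wins. `turn` now names the
    # player who did NOT just move, so the winner is the other one.
    return "engine" if turn == "human" else "human"

# Fifty "resets" from a position whose nim-sum is 0 (1 ^ 2 ^ 3 == 0), i.e.
# theoretically lost for the first mover -- no matter what line they try.
rng = random.Random(0)
results = [play([1, 2, 3], rng) for _ in range(50)]
```

However many times you reset, `results` comes back all "engine": from a theoretically lost position, varying your line only varies the route to the same defeat.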
At Tesla, the self-driving software is subjected to a training regimen that is not completely unlike this. The AI is a neural net, but the neural net needs to be trained to "do the right thing" in any given scenario. Tesla's approach is quite ingenious -- data from all Teslas can be "phoned home" whenever the human driver makes a decision that significantly disagrees with the AI. The goal is not to monitor drivers; the goal is to identify what the human saw or did that the AI may have missed. Obviously, you have to apply a strong filter to this kind of data because, in many cases, it may just be a matter of a crappy driver. But there are lots of smart ways to filter this kind of raw data and keep only the stuff that is useful for training.
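The filtering idea can be sketched in a few lines. Everything below is invented for illustration -- the field names, the per-driver quality scores, and the thresholds are assumptions, not Tesla's actual pipeline -- but it shows the shape of the filter: keep large human/AI disagreements from historically good drivers that ended safely, and discard the rest as probable driver noise.

```python
from dataclasses import dataclass

@dataclass
class Disagreement:
    driver_id: str
    magnitude: float     # how far the human's action diverged from the AI's plan
    outcome_safe: bool   # did the human's choice lead to a safe outcome?

def useful_for_training(event, driver_quality,
                        min_quality=0.8, min_magnitude=0.3):
    # Keep only large disagreements, from historically good drivers, that
    # ended safely -- the cases where the human plausibly saw something the
    # model missed. All names and thresholds here are hypothetical.
    return (driver_quality.get(event.driver_id, 0.0) >= min_quality
            and event.magnitude >= min_magnitude
            and event.outcome_safe)

quality = {"a": 0.95, "b": 0.40}    # hypothetical per-driver quality scores
events = [
    Disagreement("a", 0.7, True),   # good driver, big divergence, safe: keep
    Disagreement("b", 0.9, True),   # crappy driver: probably noise, drop
    Disagreement("a", 0.1, True),   # trivial divergence: drop
    Disagreement("a", 0.8, False),  # human's choice went badly: drop
]
kept = [e for e in events if useful_for_training(e, quality)]
```

Out of four raw events, only the first survives -- which is the point: the raw disagreement stream is mostly noise, and the value is in the thin slice that passes the filter.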
One of the primary sources of data for training the GPT-3 neural net is chat data harvested from real sources. Training the neural net on real human interactions is how the neural net is able to learn how to make human-like verbal responses in many contexts.
The HBO TV series, Westworld, centers around an amusement park created for paying human clients that is staffed primarily by AI robots fabricated to appear indistinguishable from humans except under surgery. Of course, the primary markets for these flesh-bots are sex and killing. The amusement park itself is Western-themed, hence the name.
Using "cheat" techniques, it is actually a great deal easier to mimic human-like interactions than most non-experts might think. We typically talk as though "general-purpose intelligence" is some kind of transcendent mystery which has so far evaded academics and industry, but all the ingredients are already present; it's just a matter of some Elon Musk type of character plugging all the components together in such a way as to produce a smoothly unified interface. We can't predict exactly when all the pieces will come together, but I predict that, as soon as one company does it, it will explode across the entire ecosystem as everyone realizes we were making it more complicated than it actually needs to be. This is how most technological breakthroughs play out.
In the film, The Matrix, we discover that humanity has unwittingly become trapped inside of a completely simulated version of our world, an idea that is not original to The Matrix but which was given a particularly compelling artistic treatment in that film.
So, what does all of this entertainment trivia have to do with Meta? Well, let's start by laying all the cards on the table -- the real reason that Zuckerberg, and the creepy NWO types backing him, are creating Meta is to map out the human mind. It's the same thing GPT-3 has done for written human interactions (books, email, IMs, etc.), but applied to the full spectrum of human interaction. If you wanted to build humanoid robots whose behavior was more or less indistinguishable from that of real humans, the first thing you would need is an enormous dataset consisting of countless real, in situ interactions between humans who are not thinking about the fact that they are being observed (despite knowing that they are being observed). Meta is a Virtual Reality Petri dish for the mass generation of precisely this kind of data.
To be clear, the kind of information and interactions Meta is actually interested in are real human interactions with tangible stakes: business conversations, conversations in educational contexts, public planning, and so on. The vast majority of the world will not opt in to this, but they only need to capture a tiny percentage of real interactions in order to build the dataset they need.
If you strip down what a corporation is -- what an "Establishment" is -- at its root, it is an artifice. It is a "Wizard of Oz" kind of being. It really exists, but its existence acts as a kind of embodied amplifier of the choices of the decision-makers (the board and officers). You can put a sinister spin on this, but you don't have to. Where it becomes sinister is when you pit the "rights" of an artifice against the rights of flesh-and-blood human beings. So, what corporations and other organizations that are part of the Establishment do, in their essence, is amplify the natural rights of those who are members of the Establishment, vis-a-vis all others. This is not a conspiracy-theoretic description of the state of affairs; it has always been this way, and presumably always will be.
One of the dangers being created by Meta is the possibility of unlimited multiplication of the artifice of the corporation. The rights of any organization depend in large part on the number of lawyers, agents, and other humans who can be victims of tort in a legal claim. The more I can multiply my agents through artifice, the more I can amplify my legal rights versus all other flesh-and-blood humans. It's just a numbers game -- there are simply so many more ways for me to catch you "violating" my "rights" (or those of my agents) than for you to catch me. The largely unspoken reality of legal disputes is that it's a matter of both sides stacking up their legal claims against each other and seeing who has the bigger pile. If my pile is a lot bigger than yours, your best bet is to just settle (or, if you're a would-be plaintiff, to drop your claim). So, multiplying artificial agents beyond measure is a powerful strategy for me to amplify my real legal claims and rights in the flesh-and-blood world at the expense of all others. Whoever wields such a system could command an invisible, intangible army beyond number, far greater than the human population of Earth.
The next danger being created by Meta is the power to predict human action both intensively and extensively, in both space and time (long-range predictions). If you think about advertising and marketing, the next step past ad-based surveillance is prediction. If I can not only guess what products you will like to see based on products you've looked at in the past, but also show you products that you have never thought to desire, with a high probability that you will actually purchase them, I can target advertisements even more efficiently than the current mass-surveillance-based advertising system. Thus, the name of the game in advertising and marketing is prediction. Whoever has the most comprehensive prediction machine will be able to target advertisements most effectively. And the way to build the most comprehensive prediction machine is to capture real human interactions, in situ, and model them in neural nets.
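The shift from surveillance to prediction can be made concrete with a toy sketch. In a view-history system, the candidates are limited to things you have already looked at; in a prediction system, every product is a candidate, ranked by a model's estimated purchase probability. The product names and scores below are made up, and the "model" is a stand-in lookup table rather than a trained predictor.

```python
def rank_ads(candidates, predict_purchase):
    # Rank every candidate product by predicted purchase probability --
    # including products the user has never viewed. In a real system,
    # predict_purchase would be a trained model; here it is a stub.
    return sorted(candidates, key=predict_purchase, reverse=True)

# Hypothetical model scores. Note the highest-scoring item is one the
# user has never looked at -- surveillance of past views alone would
# never have surfaced it.
scores = {"viewed_widget": 0.10, "viewed_gadget": 0.25, "never_seen_gizmo": 0.60}
ranked = rank_ads(list(scores), scores.get)
```

The point of the sketch is the candidate set, not the sort: once the ranking key is a prediction rather than a view count, the system can put in front of you things you never asked about.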
The idea that the inferences would stop at product sales is naive. Prediction is also the name of the game in war strategy and politics. In a deep sense, prediction is the very heart and essence of control. If I know what you will do before you know what you will do, then you are to me little more than a farm animal. The bull-fighter does not defeat the bull through strength, that is hopeless, rather, he defeats the bull through superior prediction, because the bull-fighter (usually) knows what the bull will do, well before the bull himself does. So, the real purpose behind Meta's capturing of real human interactions, en masse, in situ, is to reduce humanity to the state of the bull in the bull-fighting ring.
To be clear, I am not even attributing hostile motives to Zuckerberg et al. It sure looks hostile, but I don't need to speculate about that in order to make the point here. Meta is more dangerous than nukes. A talented bull-fighter is able to kill the bull at his whim because the bull is so completely predictable to him that there is no real difficulty in determining when and where to deliver the fatal blow. Meta is going to be used to create models of human interaction that will, in many ways, mimic the worst fears of people who have complained of being targeted by secret, MK-ULTRA-like programs. There is nothing speculative about this assertion; it is simply the case, and we can directly infer all of these facts from the known state of technology and the obvious value propositions that motivated Zuckerberg et al. to make this move.
Let me emphasize again: Meta is more dangerous than nukes, and nobody thinks that Zuckerberg should be given free rein with the nuclear codes. In the past, we have been able to guffaw at people who complained about being targeted by secret government programs for nefarious purposes, but this kind of reckless, systematic disbelief is no longer compatible with the state of affairs in technology. The idea of a hostile agency using technological tools to attempt a tangible takeover of entire populations and their national governments is not unthinkable. In fact, failing to think about this problem is the real kind of "failure of imagination" that we should fear.
I work in technology, I actually like technology, and I am not the type that indulges in scare-mongering over every new technological development that comes along. But the image of God in man is unique, because it is the unique interface by which the entire social fabric of humanity is held together. Once you hack that interface, individual humans become little more than meat-sacks at the mercy of unlimited computational algorithms harnessing millions or billions of watts of electrical power, versus the paltry 20 to 40 watts used by your brain when you're concentrating. It's not really a question of "intelligence" so much as it is a question of "actions per minute" (APM), combined with direct access to the human interface. Debt-collectors are annoying, but debt-collectors armed with artificial agency are effectively weaponized. In fact, every sort of market activity that has to do with repo-type legal claims can be amplified beyond measure, so that every act of jaywalking becomes a $300 fine that is never overlooked by the tireless artificial agents policing every dimension of every space, 24x7.
Try not to think about it in terms of "machines outsmarting us", because that's obviously not going to happen for a long time (in my view, it will never happen). It's not about intelligence; it's about the mutuality implied within the social space. Laws, regulations, and so on implicitly have built into them a certain presumption of imperfect enforcement. As the newly emerging class of artificial agents is multiplied beyond number and begins to comb through our lives like tirelessly buzzing insects, all the old presumptions in our laws are going to become not only obsolete, but onerous. This is a bigger problem than just tyranny: it will tear at the very fabric of the State itself, because the people who make up the State will be subjected to the very same shredding effect and, in fact, many of them have a lot more to lose than the average person. So, many of them will become the first casualties of the emerging machine-mind.

While we all get irritated at having to pay a stupid parking ticket, many of the people in our local governments are the people who have the knowledge and skills to set reasonable laws for public spaces -- the very kind of laws and regulations that are going to be shredded by this machine-mind, because of the incompatibility of machine-thinking with the reality of messy, fuzzy human interaction. In short, the ordinary human will be reduced to a player in a game where almost all the other players (artificial agents of the machine-mind) have aimbots. In a socially-distanced and socially-disintegrated post-Coronavirus world, the natural human face-to-face network is particularly weak and vulnerable and is likely to become even more so. So, the only real protection that we have against the emergence of this kind of terrifying, post-tyrannical machine-mind is disintegrating before our very eyes.
All of the discussion up to this point has assumed what we can call "AI containability" -- the assumption that Zuckerberg et al. will be able to "keep the AI on a leash". It is possible that they are not competent to do that, and that's the other risk that is not being addressed in all of this. Are we effectively handing the nuclear codes to three-year-olds? There's fear-mongering, and then there's the fact that Meta is almost certainly much more dangerous than you think...