Meta is more dangerous than nukes

ClaytonB

Before jumping in to explain why Meta is more dangerous than nukes, let me lay a little groundwork.

In the 2014 film Edge of Tomorrow, starring Tom Cruise and Emily Blunt, the characters are trapped in a war against an alien race that is vastly superior to them, and they have just one weapon with which to counter it -- some kind of "time-reset" weapon. This weapon sends them -- and the aliens -- back to a point in time, like a game save-point. Each time they are about to die, they reset back to the start and "re-play" the battle. Each time, they make a change to avoid a past failure, inching one step closer to defeating the aliens.

If you're ever in the mood to drive yourself up the wall, try playing a cutting-edge chess engine like Stockfish from an even position (such as one you would find in a chess book or website) and apply the Edge of Tomorrow formula: play until you're clearly lost, then reset back to the original position, and keep going in this way. Before long, you will get an idea of the true meaning of a 3,000+ Elo rating. It doesn't matter what you do, that damn chess engine will always have a winning reply.
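
For the curious, here's a minimal sketch of the drill, assuming you have the python-chess package installed and a Stockfish binary on your PATH. The starting FEN is just an example position -- substitute any position from a book:

```python
# The "Edge of Tomorrow" drill against Stockfish. Illegal input will raise
# an exception -- this is a sketch, not a hardened program.
import chess
import chess.engine

# an "even" book position (Italian Game here; swap in your own)
START_FEN = "r1bqkbnr/pppp1ppp/2n5/4p3/2B1P3/5N2/PPPP1PPP/RNBQK2R b KQkq - 3 3"

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    while True:                          # the outer "time-reset" loop
        board = chess.Board(START_FEN)   # reset to the save-point
        while not board.is_game_over():
            move = input(f"\n{board}\nYour move (UCI, e.g. g8f6, or 'reset'): ")
            if move == "reset":
                break                    # clearly lost; jump back to the start
            board.push_uci(move)
            if board.is_game_over():
                break
            result = engine.play(board, chess.engine.Limit(time=0.5))
            board.push(result.move)      # the engine always has a reply
```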

At Tesla, the self-driving software is subjected to a training regimen that is not completely unlike this. The AI is a neural net, but the neural net needs to be trained to "do the right thing" in any given scenario. Tesla's approach is quite ingenious -- data from any Tesla can be "phoned home" whenever the human driver makes a decision that significantly disagrees with the AI. The goal is not to monitor drivers; the goal is to identify what the human saw or did that the AI may have missed. Obviously, you have to apply a strong filter to this kind of data because, in many cases, it may just be a matter of a crappy driver. But there are lots of smart ways to filter the raw data and grab only the stuff that is useful for training.
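
Nobody outside Tesla knows their exact pipeline, but a toy version of this kind of "disagreement mining" might look like the following -- every name and threshold here is invented for illustration:

```python
# Hypothetical sketch: keep only telemetry frames where the human's control
# input diverges sharply from the model's plan, then drop frames that look
# like driver error. Frame and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Frame:
    human_steering: float   # degrees, as logged from the steering wheel
    model_steering: float   # degrees, what the AI would have commanded
    lateral_accel: float    # m/s^2, proxy for erratic driving
    near_collision: bool    # did the maneuver avoid something the model missed?

DISAGREE_DEG = 15.0   # threshold for a "significant" disagreement (assumed)
MAX_ACCEL = 4.0       # above this, assume a crappy/panicked driver (assumed)

def training_candidates(frames: list[Frame]) -> list[Frame]:
    keep = []
    for f in frames:
        disagrees = abs(f.human_steering - f.model_steering) > DISAGREE_DEG
        smooth = abs(f.lateral_accel) < MAX_ACCEL
        # keep disagreements that were smooth or demonstrably avoided trouble
        if disagrees and (smooth or f.near_collision):
            keep.append(f)
    return keep
```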

One of the primary sources of training data for the GPT-3 neural net is human-written text harvested from the web, including plenty of conversational text from forums and chat logs. Training the neural net on real human interactions is how it is able to learn to make human-like verbal responses in so many contexts.
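
If you want to see how simple the core training objective really is, here is a miniature sketch of next-token training in PyTorch. The model here is a stand-in (an embedding plus a linear layer, not a transformer), but the loss is the same one GPT-style models optimize:

```python
# Given text humans actually wrote, the net is optimized to predict each
# next token. Vocab size and dimensions are arbitrary for illustration.
import torch
import torch.nn as nn

vocab, d = 50_000, 64
model = nn.Sequential(nn.Embedding(vocab, d), nn.Linear(d, vocab))

tokens = torch.randint(0, vocab, (1, 128))       # a tokenized snippet of human text
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from token t

logits = model(inputs)                           # (1, 127, vocab)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
loss.backward()                                  # nudge weights toward human-like replies
```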

The HBO TV series, Westworld, centers around an amusement park created for paying human clients that is staffed primarily by AI robots fabricated to appear indistinguishable from humans except under surgery. Of course, the primary markets for these flesh-bots are sex and killing. The amusement park itself is Western-themed, hence the name.

Using "cheat" techniques, it is actually a great deal easier to mimic human-like interactions than most non-experts might think. We typically talk as though "general-purpose intelligence" is some kind of transcendent mystery which has so far evaded academics and industry but all the ingredients are already present, it's just a matter of some Elon Musk type of character plugging all the components together in such a way as to produce a smoothly unified interface. We can't predict exactly when all the pieces will come together, but I predict that, as soon as one company does it, it will explode across the entire ecosystem as it realized we were making it more complicated than it actually needs to be. This is how most technological breakthroughs play out.

In the film, The Matrix, we discover that humanity has unwittingly become trapped inside of a completely simulated version of our world, an idea that is not original to The Matrix but which was given a particularly compelling artistic treatment in that film.

So, what does all of this entertainment trivia have to do with Meta? Well, let's start by laying all the cards on the table -- the real reason that Zuckerberg, and the creepy NWO types backing him, are creating Meta is to map out the human mind. It's the same thing GPT-3 has done for written human interaction (books, email, IMs, etc.), but applied to the full spectrum of human interaction. If you wanted to build humanoid robots whose behavior was more or less indistinguishable from that of real humans, the first thing you would need is an enormous dataset consisting of countless real, in situ interactions between humans who are not thinking about the fact that they are being observed (despite knowing that they are being observed). Meta is a Virtual Reality Petri dish for the mass generation of precisely this kind of data.

To be clear, the kind of information and interactions that Meta is actually interested in are real human interactions with tangible stakes -- business conversations, conversations in educational contexts, public planning, and so on. The vast majority of the world will not opt in to this, but they only need to capture a tiny percentage of real interactions in order to build the dataset they need.

If you strip down what a corporation is -- what an "Establishment" is -- at its root, it is an artifice. It is a "Wizard of Oz" kind of being. It really exists, but its existence acts as a kind of embodied amplifier of the choices of the decision-makers (the board and officers). You can put a sinister spin on this, but you don't have to. Where it becomes sinister is when you pit the "rights" of an artifice against the rights of flesh-and-blood human beings. So, what corporations and other organizations that are part of the Establishment do, in their essence, is amplify the natural rights of those who are members of the Establishment vis-a-vis all others. This is not a conspiracy-theoretic description of the state of affairs; it has always been this way, and presumably always will be.

One of the dangers being created by Meta is the possibility of unlimited multiplication of the artifice of the corporation. The rights of any organization depend in large part on the number of lawyers, agents and/or other humans who can be victims of tort in a legal claim. The more I can multiply my agents through artifice, the more I can amplify my legal rights versus all other flesh-and-blood humans. It's a numbers game -- there are simply so many more ways for me to catch you "violating" my "rights" (or those of my agents) than for you to catch me. The largely unspoken reality of legal disputes is that both sides stack up their legal claims against each other and see who has the bigger pile. If my pile is a lot bigger than yours, your best bet is to settle (or, if you're an attempted plaintiff, to drop your claim). So, multiplying artificial agents beyond measure is a powerful strategy for amplifying my real legal claims and rights in the flesh-and-blood world at the expense of all others. Whoever wields such a system could command an invisible, intangible army beyond number, far greater than the human population of Earth.

The next danger being created by Meta is the power to predict human action both intensively and extensively, in both space and time (long-range predictions). If you think about advertising and marketing, the next step past ad-based surveillance is prediction. If I can not only guess which products you will like to see based on products you've looked at in the past, but also show you products you have never thought to desire -- with a high probability that you will actually purchase them -- then I can target advertisements even more efficiently than the current mass-surveillance-based advertising system. Thus, the name of the game in advertising and marketing is prediction. Whoever has the most comprehensive prediction machine will be able to target advertisements most effectively. And the way to build the most comprehensive prediction machine is to capture real human interactions, in situ, and model them in neural nets.
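
To make the point concrete, here is a toy "prediction machine" -- a first-order Markov model standing in for the neural nets Meta would actually use, trained on invented purchase histories. Even this trivial model recommends items the target never browsed:

```python
# Predict the next purchase from sequences of *other people's* behavior.
# The data and item names are invented for illustration.
from collections import Counter, defaultdict

histories = [
    ["tent", "stove", "headlamp"],
    ["tent", "sleeping_bag", "headlamp"],
    ["stove", "fuel", "headlamp"],
]

transitions: dict[str, Counter] = defaultdict(Counter)
for h in histories:
    for prev, nxt in zip(h, h[1:]):
        transitions[prev][nxt] += 1

def predict_next(last_item: str) -> str | None:
    counts = transitions.get(last_item)
    # most common follow-up purchase, even if this user never viewed it
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("tent"))  # -> 'stove' (ties broken by first-seen order)
```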

The idea that the inferences would stop at product sales is naive. Prediction is also the name of the game in war strategy and politics. In a deep sense, prediction is the very heart and essence of control. If I know what you will do before you know what you will do, then you are to me little more than a farm animal. The bull-fighter does not defeat the bull through strength -- that is hopeless -- rather, he defeats the bull through superior prediction, because the bull-fighter (usually) knows what the bull will do well before the bull himself does. So, the real purpose behind Meta's capturing of real human interactions, en masse, in situ, is to reduce humanity to the state of the bull in the bull-fighting ring.

To be clear, I am not even attributing hostile motives to Zuckerberg et al. It sure looks hostile, but I don't need to speculate about that in order to make the point here: Meta is more dangerous than nukes. A talented bull-fighter is able to kill the bull at his whim because the bull is so completely predictable to him that there is no real difficulty in determining when and where to deliver the fatal blow. Meta is going to be used to create models of human interaction that will, in many ways, mimic the worst fears of people who have complained of being targeted by secret, MK-ULTRA-like programs. There is nothing speculative about this assertion; it is simply the case, and we can directly infer all of these facts from the known state of technology and the obvious value propositions that motivated Zuckerberg et al. to make this move.

Let me emphasize again: Meta is more dangerous than nukes, and nobody thinks that Zuckerberg should be given free rein with the nuclear codes. In the past, we have been able to guffaw at people who complained about being targeted by secret government programs for nefarious purposes, but this kind of reckless, systematic disbelief is no longer compatible with the state of affairs in technology. The idea of a hostile agency using technological tools to attempt a tangible takeover of entire populations and their national governments is not unthinkable. In fact, failing to think about this problem is the real kind of "failure of imagination" that we should fear.

I work in technology, I actually like technology, and I am not the type to indulge in scare-mongering over every new technological development that comes along. But the image of God in man is unique: it is the interface by which the entire social fabric of humanity is held together. Once you hack that interface, individual humans become little more than meat-sacks at the mercy of unlimited computational algorithms harnessing millions or billions of watts of electrical power, versus the pathetic 20 to 40 watts your brain consumes when you're concentrating. It's not really a question of "intelligence" so much as a question of "actions per minute" (APM), combined with direct access to the human interface. Debt-collectors are annoying, but debt-collectors armed with artificial agency are effectively weaponized. In fact, every sort of market activity that has to do with repo-type legal claims can be amplified beyond measure, so that every act of jaywalking becomes a $300 fine that is never overlooked by the tireless artificial agents policing all dimensions of every space, 24x7.
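
A quick back-of-envelope on that power asymmetry (the figures are rough public estimates, not measurements):

```python
# Rough power ratio between a single mid-size data center and one focused brain.
brain_watts = 20            # human brain at work (commonly cited estimate)
datacenter_watts = 20e6     # a mid-size data center (assumed figure)
print(f"power ratio: {datacenter_watts / brain_watts:,.0f}x")  # 1,000,000x
```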

Try not to think about it in terms of "machines outsmarting us", because that's obviously not going to happen for a long time (in my view, it will never happen). It's not about intelligence, it's about the mutuality implied within the social space. Laws, regulations and so on implicitly have built into them a certain presumption of imperfect enforcement. As the newly emerging class of artificial agents is multiplied beyond number and begins to comb through our lives like tirelessly buzzing insects, all the old presumptions in our laws are going to become not only obsolete, but onerous.

This is a bigger problem than mere tyranny; it will tear at the very fabric of the State itself, because the people who make up the State will be subjected to the very same shredding effect and, in fact, many of them have a lot more to lose than the average person. So many of them will become the first casualties of the emerging machine-mind. While we all get irritated at having to pay a stupid parking ticket, many of the people in our local governments are the people who have the knowledge and skills to set reasonable laws for public spaces -- the very kind of laws and regulations that are going to be shredded by this machine-mind because of the incompatibility of machine-thinking with the reality of messy, fuzzy human interaction.

In short, the ordinary human will be reduced to a player in a game where almost all the other players (artificial agents of the machine-mind) have aimbots. In a socially-distanced and socially-disintegrated post-Coronavirus world, the natural human face-to-face network is particularly weak and vulnerable, and is likely to become even more so. So the only real protection we have against the emergence of this kind of terrifying, post-tyrannical machine-mind is disintegrating before our very eyes.
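
To see why the old presumption of imperfect enforcement matters, run the numbers -- these are purely illustrative assumptions:

```python
# Laws are "priced" assuming most violations go unseen. Tireless artificial
# agents push the detection rate toward 1.0 and the effective penalty explodes.
violations_per_year = 200          # minor infractions per person (assumed)
fine = 300.0                       # dollars per infraction (assumed)

for detection_rate in (0.001, 0.05, 1.0):
    expected_cost = violations_per_year * fine * detection_rate
    print(f"detection {detection_rate:>6.1%}: ${expected_cost:,.0f}/year")
# detection   0.1%: $60/year
# detection   5.0%: $3,000/year
# detection 100.0%: $60,000/year
```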

All of the discussion up to this point has assumed what we can call "AI containability" -- the assumption that Zuckerberg et al. will be able to "keep the AI on a leash". It is possible that they are not competent to do that, and that's the other risk that is not being addressed in all of this. Are we effectively handing the nuclear codes to three-year-olds? There's fear-mongering, and then there's the fact that Meta is almost certainly much more dangerous than you think...

 
Nukes and Meta are equally dangerous in my view.

Nukes will never launch themselves. Meta, once they build their Deep Learning human-behavior model, could walk right into the Pentagon (or possess someone with credentials to do so) and cause a nuclear launch. "The President is secure." Secure is relative. Those who work in the three-letter agencies and spend their time wringing their hands over the possibility of nano-bots should instead pay attention to Meta. Nanobots, if they are ever really manufactured, will be prohibitively expensive to produce, and you will need billions of them to do anything useful. Maybe you can make a handful of them in a research laboratory, but it doesn't scale commercially. Meta scales -- that's why they formed it. Watch the interview with Eliezer Yudkowsky linked above:

A brain is this enormous, complicated, poorly understood system with all sorts of laws governing it that people don’t know about, that none of us know about at the time. So the idea that this is secure—that this is a secure attack surface, that you can expose a human mind to a superintelligence and not have the superintelligence walk straight through it as a matter of what looks to us like magic, like even if it told us in advance what it was going to do we wouldn’t understand it because it takes advantage of laws we don’t know about—the idea that human minds are secure is loony.

Transcript
 
Relevant 2020 paper:

Predicting human decision making in psychological tasks with recurrent neural networks

Closely related prior work:



"A downside of Machine Learning is that Machine Learning models have trouble generalizing when data is scarce and this is particularly a problem when predicting human behavior as most existing research datasets tend to be fairly small." Meta solves that problem. The rest of the pipeline (for building the predictive neural-net models, at scale) already exists.

Somewhat related:



Human behavior is much more predictable than we like to think. Almost everyone vastly underestimates how badly they approximate randomness when asked to produce it -- try faking coin tosses or die rolls. This is one of the reasons that Meta is much more dangerous than you probably think it is. You probably significantly underestimate how predictable you (and other people) really are. It's a built-in cognitive bias. Yes, we have free will. No, that doesn't mean you're "unpredictable"; it just means that, in extremis, human behavior can never be perfectly predicted. But your "intuitive" theory of free will probably massively underestimates just how predictable people are.
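
Don't take my word for it -- this is trivially demonstrable. Shannon and Hagelbarger built "mind-reading" machines on exactly this principle at Bell Labs in the 1950s. Here's a minimal version you can play against; it learns which symbol tends to follow your last few keystrokes:

```python
# Try to be "random". Against a true RNG this converges to 50% accuracy;
# against people it reliably does better, because we avoid repeats and overcorrect.
from collections import Counter, defaultdict

ORDER = 3
history: dict[str, Counter] = defaultdict(Counter)
recent, score, total = "", 0, 0

while True:
    guess = history[recent].most_common(1)[0][0] if history[recent] else "H"
    actual = input("H or T (anything else quits): ").strip().upper()
    if actual not in ("H", "T"):
        break
    total += 1
    score += (guess == actual)
    history[recent][actual] += 1       # learn the pattern that followed
    recent = (recent + actual)[-ORDER:]
    print(f"predicted {guess}, got {actual} -- accuracy {score/total:.0%}")
```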
 
Forget mind-control -- AI is coming for your heart

The subject of the interview has the sunny optimism that you would expect from a startup that makes its money doing sentiment analysis with AI. It's his bread and butter. But I do not share his optimism; in fact, I think it is difficult to be too pessimistic about the dangers of mixing AI and human emotions.



As a human, what is your weak spot? Is it really your mind? Or is it your heart? Can a stage-magician perform a trick that will convince you that the sky is actually green? That you want to put him in your will? That 2+2 can sometimes equal 5? Of course not; the rational mind readily recognizes such simple errors of logic, and it is not liable to persuasion in clear-cut cases like these (and most of "common sense" involves these kinds of simple, clear-cut facts about the world).

But the heart is another matter altogether. In fact, most of the mistakes that we make as humans, we make from emotion. The single biggest danger of AI is not that it's going to outsmart us, but that it's going to out-cruel us. This is a massive blind-spot in all discussion surrounding AI and AI-safety nowadays. AI is more dangerous than we realize. I said "we" in order to include the people building AI and the people they're building it for. A Chinese bureaucrat who loves the CCP and its goal of implementing total, AI-based social control still has emotions. He is still liable to manipulation by the very system which he is building and which he is a part of.

The human interface is not secure. And when AI is weaponized, it's going to hit us where it really hurts.



"And just when you think it can't get worse
I'm gonna hit you where it hurts"
 
Could a revival of arcades and arcade games save us from the dangers of virtual reality?

 
April 19, 2022 Reuters -- A secretive billionaire has purchased enormous tracts of desert land and has begun construction of eerie "ghost-cities". When the project is completed, it will be used to house "social experiments" that will be able to generate immensely valuable data. By measuring human behavior and social interactions in real public and private spaces, in a manner similar to the reality TV show, Big Brother, the company hopes to attract the interest of Big Data firms that work with architects, city-planners and others who design human spaces. "This is huge. As a designer, this is the kind of data that I wish I could harvest from private firms but the legal problems are too complex to share data including people's faces and identities, with or without their consent."

OK, just kidding, that's not a real article. But I thought of this when thinking about how to explain why Meta is so serious. If you read a headline like this, your curiosity would be piqued, to say the least. "What in the world is this billionaire creep planning to do with all this empty real-estate?"

But that's exactly what Meta is -- times a million. And unlike real estate, where the only way to generate data about human behavior is to have real people inside, Meta allows the virtual world to be populated with virtual agents that human players may mistake for real. Thus, their interactions will still generate useful data about human behavior. This allows the scale at which the data is being harvested to be multiplied by a large factor -- at least 10x, maybe even as much as 100x or 1,000x in certain conditions. In addition, using techniques similar to something called "knowledge-base completion", it may be possible to scale the simulations up even further. Knowledge-base completion is a set of algorithms for inferring new facts from already-known facts. If I know that you usually break to the right when coming head-on to another person in a narrow hallway, then I can invoke that behavior probabilistically in an AI bot modeled on you, and then populate "you" into other virtual environments you have never entered and find out how your presence affects other players (thus generating "counter-factual data").

As you can see, this kind of amplification is exponential: the Meta bots become exponentially more realistic for each human-player-hour invested. In fact, we can predict that, before long, the bots will be "more human", in many ways, than real humans. "Painfully human", as the writers of HBO's adaptation of Westworld put it. I predict that it will quickly become difficult to genuinely sort real players from artificial players (NPCs).
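
Here is a toy version of that populate-the-clone idea -- one trait probability estimated from observed behavior, then reused in simulated corridors the real person never walked. Everything here is invented for illustration (and it is behavioral cloning of a single trait, far simpler than real knowledge-base completion):

```python
import random
from collections import Counter

class ClonedAgent:
    def __init__(self, observed_passes: list[str]):
        # estimate P(break right) from real observations of one person
        self.p_right = observed_passes.count("R") / len(observed_passes)

    def hallway_pass(self) -> str:
        return "R" if random.random() < self.p_right else "L"

clone = ClonedAgent(observed_passes=["R", "R", "L", "R", "R"])  # p_right = 0.8

# drop the clone into simulated corridors the real person never walked,
# and tally the "counter-factual" outcomes
outcomes = Counter(clone.hallway_pass() for _ in range(10_000))
print(outcomes)  # roughly Counter({'R': 8000, 'L': 2000})
```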

"So what? Only nerds will use this thing. I'm never logging in." I hope that, by now, we all understand just how much these technological platforms like Facebook and Twitter affect us, whether we participate in them or not. If you enter an urban area, your presence is almost certainly known to Facebook even if you don't have any Facebook app installed or even if your phone is powered off -- if your face appears in someone else's video call, it will be checked against FB's face recognition. "They don't have my photo" -- maybe, maybe not... the point is that you have no way to be certain of that.

Every part of the social fabric is entangled with every other part. Like it or not. You may be immune to any direct effect from Meta, but that doesn't mean that the human ticking time-bomb in the lane next to you is immune. If I can simulate enough "billiard balls" on the "social pool-table", a threshold is reached where hyper-determinism kicks in -- the few remaining stochastic (unpredictable) balls on the table can be "corralled" and rendered effectively null and void with respect to my social predictive models. The result is that I can create a condition of complete society-wide determinism despite the fact that a certain portion of human behavior will always be unpredictable. And this is the whole problem... I can start to put IRL individuals -- many of whom may never even log in to Meta -- into deterministic patterns of mood and mental states. These wholly-predictable patterns render these individuals as NPCs with respect to the algorithm, even though we understand that they are still real humans capable of genuine choice... they just happen to be trapped in an invisible cage and they do not realize it.

The real danger starts to kick in when you consider the fact that the Meta behavior model will quickly become vastly more experienced than any human individual -- the model will have millions of hours of experiential knowledge of human reactions in various circumstances, making it infinitely "wiser" than any human could ever be. I can then ask the model, "What conditions will create a feeling of fear/terror in John Q. Jones?" and it will spit out various conditions. I can then create a filter to select only conditions that would not be noticeable to police or other observers who might take an interest, and then I can send IRL agents to harass Mr. Jones in a way that induces feelings of terror in him, without anyone being able to observe anything being done to Mr. Jones, and without Mr. Jones having any possibility of recourse against his attackers.

But there is no reason to restrict ourselves to fear; we might want to induce feelings of self-destruction and anger, and we might want to point the targeted individual towards a specific target, even a military target. In fact, there's nothing stopping us from modeling the behavior of individuals in the military itself, including those who handle nuclear weapons (and you might be surprised just how common that actually is). And remember that the Meta model does not even need Mr. Jones to log in on their platform in order to target him -- it only needs to be able to select a reasonably accurate model of Mr. Jones, and it might be able to do this based on seemingly irrelevant metadata, such as Mr. Jones's waking hours, driving habits, and the many other bits of metadata that Facebook et al. will always be able to extract.

I will repeat it again: Meta is more dangerous than nukes, by a long shot. Nukes will never launch themselves. Meta will very soon have the power to hack the human interface in ways that we cannot even anticipate.
 
Jim Cramer Says During On-Air Apology That He Was Wrong About Meta’s Stock

https://dailycaller.com/2022/10/27/jim-cramer-apology-meta-stock/

CNBC’s Jim Cramer nearly broke down Thursday while apologizing for being wrong on Meta’s stock.

Shares of META dropped 24% Thursday after the company's third-quarter earnings led experts to anticipate a weak fourth quarter, according to CNBC. Profits for Meta, which owns Facebook, dropped 52% to $4.4 billion, according to the report.

CNBC's "Squawk on the Street" hosts were breaking down the numbers and played a clip before Cramer profusely apologized for his past predictions.

“There are a lot of things going on right now in the business and in the world and so it’s hard to have a simple, we’re going to do one thing and it’s going to solve the issues,” Zuckerberg said. “There’s a lot of competition, there are challenges especially coming from Apple and then there’s some of the longer term things that we’re taking on expenses because we believe they’re going to provide greater returns over time.”


"By expenses he means operating expenses up 19," co-host Carl Quintanilla said.


“Cash flow last year, $9.5 billion, now less than $1 billion,” Cramer began.



“Let me say this, fellas. I made a mistake here. I was wrong, I trusted this management team. That was ill advised. The hubris here is extraordinary. I apologize.”


“Okay um, what did you get wrong?” co-host David Faber said.

“I had a belief that there was a recognition that there is an amount that you can’t spend. Contrast that with Jim Farley, who took a project that was his, and closed it, because it was not ready. It was not near enough. This situation is almost a rogue situation. I thought there would be an understanding that you just can’t spend, and spend through free cash flow, that there had to be some level of discipline, and I didn’t get it but, David, what did I get wrong? I trusted them, not myself, for that I regret. I’ve been in this business for 40 years and I did a bad job. I’m not proud. I’m not proud.”


“I don’t really care about that. I fucked up,” Cramer said when told he wasn’t the only analyst to think Meta’s stock would perform well.
 
I don't agree with Musk's political ideas, but his warnings about AI should have been heeded and I hope that ChatGPT is helping people understand why this technology is, truly, an even greater existential threat than nuclear war would be...

 
So the Zuck made a move on Twitter. With predictable results.

What do you know? "Use my platform and find someone to hook up with" was a successful business model for the guy, with CIA help. "Use my platform so we can censor you" doesn't seem to be working out quite so well.

If there really is some way Meta could be more dangerous than nukes, please don't tell Fuckerburg or his Company pals. Because they clearly aren't figuring it out on their own.
 
They're like machines, they never give up even in the face of boundless absurdity.

Thankfully, the world has changed a lot since I wrote the OP (April 2022). In October of 2022, Meta stock hit a multi-year low around $90, a massive fall from highs of around $350 a year earlier. The entire concept of the "Metaverse" flopped, and that was a fundamental failure of Zuck's big bet that "VR is the future". And at the end of 2022, a new threat emerged on the scene, showing just how dangerous Meta really is: ChatGPT. Note that Llama (Facebook's own in-house large language model) is reported to outperform OpenAI's GPT-3 at roughly a third of the parameter count (65B weights for Llama versus 175B for GPT-3, the base model family behind ChatGPT). This means that the "mass social reverse-engineering" I described in my OP has, indeed, been going on behind closed doors even though, in April 2022, we had no idea of that.

Meta's near-collapse last year doesn't seem to have been just a timing problem, either. VR tech really is ready for widespread use, unlike in the 1990s. It still has some wrinkles to work out, but they really are just wrinkles. The tech is there, the channels are there, the software stack is there. But I think projects like Threads show that Meta is not, and never really was, about "social networking". Unless I miss my guess, its true agenda/purpose is far darker (than even mere censorship).

And just when you thought you could catch a break, Google is raising the dystopia-barometer another ten notches: AGI is just around the corner...
 