The Threats of AI Come from the State

Mises Wire
Finn Andreen
08/06/2025


There have been many comments in the last year about the potential dangers of artificial intelligence, from such AI luminaries as Elon Musk, Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Gary Marcus, and others. But they might not be the right people to listen to in this regard, because the threats of AI are fundamentally political. Most scientists and technical experts, however intelligent, have no training in politics. They generally lack the mindset to think about politics, except where regulation touches their own sector. Nobody expects an inventor to grasp the political and social implications of his invention.


The Blind Spot of AI Threats


This explains why these AI experts usually make rather naïve and unimaginative comments regarding the threats of AI, such as “we need to urge companies to pause AI,” “the government definitely needs to be involved,” “humans can hurt others with AI,” we don’t want “AI to fall into the wrong hands” because “bad actors” could use AI, and so on. The potential threats of AI are sometimes minimized and sometimes exaggerated. What all these AI threat assessments have in common is that they never recognize the “bad actor” with the worst record of all: the state.

This is clearly a blind spot. For these AI scientists, the fundamental distinction between state and society is non-existent; it’s always a collective “we” that needs to manage the potential threats of AI. This is precisely the warning that Murray Rothbard expressed so clearly in Anatomy of the State (1974): “With the rise of democracy, the identification of the State with society has been redoubled… The useful collective term ‘we’ has enabled an ideological camouflage to be thrown over the reality of political life.”

Though it is difficult to distinguish the state from society in this age of statist interventionism and crony capitalism, it is essential to do so. The state, according to the standard Weberian definition, is “a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory.” The state is, thus, by its very nature radically different from the rest of society. As Ludwig von Mises warned in Liberty and Property: “Government is essentially the negation of liberty.” In other words, freedom suffers when state coercion increases. Though crony corporate power can influence government in order to get preferential treatment when the rule of law can be bent (as it often can), it is clear who holds the reins. It is necessary to abandon the myth of the “benevolent state.”

Seen in this light, for any new technology it is necessary to ask to what extent the state controls that technology and its development. In this respect, the record of AI is poor, since most major AI players (like Google, Microsoft, OpenAI, Meta, Anthropic, etc.), their founders, and their core technologies have been supported since their inception in important ways by US government funding, research grants, and infrastructure. DARPA (Defense Advanced Research Projects Agency) and the NSF (National Science Foundation) funded the early research that made neural networks viable (i.e., the core technology for all major AI labs today).

This evolution is not in the least surprising, since the state naturally tries to use all possible means to maintain and expand its power. Rothbard again: “What the State fears above all, of course, is any fundamental threat to its own power and its own existence.” The threats of AI should thus be seen from two sides. On the one hand, the state can actively use AI to enhance its power and its control over society (as noted above); on the other hand, AI could also represent a challenge to the state by empowering society both economically and politically.

Will AI Tilt the Balance of Power?


The threat of AI should be assessed, therefore, in terms of the potential impact it can have on the uncertain balance of power between state and society, or to express it more sociologically, between the ruling minority and the ruled majority. This relationship depends on who benefits most from new instruments of power, such as the printing press, modern banking, television, the internet, social media, and artificial intelligence. In some cases, the state has used these tools to enhance its control; in others, they have empowered society. For instance, television was a medium that arguably strengthened the position of the ruling minority, while social media is currently enhancing the majority’s political influence at the expense of the ruling minority. The same question, therefore, concerns AI: will AI empower the state at the expense of society, or vice versa?

As seen above, the state got involved in AI long ago, already at the theoretical and inception stage. Today, fake libertarian Peter Thiel’s Palantir is providing AI analytics software to US government agencies to enhance their power of surveillance and control of the population by building a centralized, national citizen database (including the nightmarish possibility of “predictive policing”). Anthropic is also teaming up with Palantir and Amazon Web Services to provide US intelligence and defense agencies access to its AI models. And Meta will make its generative AI models available to the US government. It is true that such initiatives might, in theory, make state bureaucracy more efficient, but this might only increase the threat to individual freedom. Worryingly, this development is considered “normal” and raises no eyebrows among AI industry journalists and experts.

From the point of view of society, AI will eventually lead to radical corporate changes and productivity increases, far beyond the internet’s information revolution. The political consequences could be significant, since AI can give each individual a personal research assistant and provide simpler access to knowledge, even in fields with gatekeepers. Routine tasks can be taken over by AI, freeing up time for higher-value tasks, including political engagement. For instance, AI can make it easier to understand and check government activity, such as summarizing legislation in plain language, analyzing budgets and spending data, and fact-checking claims in real time, thereby reducing the knowledge gap between governments and ordinary citizens.

Of course, this increased political empowerment of society could be stymied if access to AI is made conditional. If the state keeps the upper hand in AI, it could use surveillance, manipulation, or worse to weaken dissidents and discredit independent journalists who rely on AI, in particular where the state feels only loosely bound by its constitutional limitations. This is unfortunately the case not only in the US but also with most states and supranational organizations.

The future of AI—such as AGI, agentic AI, and physical AI—is only going to make this discussion of AI threats more important. These developments will increase the possibility of rights violations by the state, but also expand the opportunities and possible countermeasures at the individual and community levels. A lot could depend on whether the numerous AI functions of the future will be mostly open, decentralized, and encrypted. This future is still uncertain, but the political framework presented here arguably remains valid.

The political stakes involved with AI are far more consequential than those data scientists developing AI seem to recognize. The threats of AI are consistent with the threats that all new technologies represent if they are used nefariously by the state. It is essential, therefore, for the public not only to learn about AI and embrace its potential, but also to see it in the larger context of the political struggle for freedom.




 
Elon Musk and Bezos are both big fans of Iain Banks's Culture series... maybe that's what they're going for.

Which could be a good or a bad thing. I don't read books so I don't know what they're about besides the basic premise.

I'm waiting for them to make a TV series about it
 
Typical Mises honey trap deception. At once he outlines, rather innocuously, the evident dangers of AI, then he writes about how we must embrace it and not even attempt to restrict it. Worthless propaganda.
 
It's a tool, much like spreadsheets, hammers and guns. How is it a honey trap, considering "you can't stop the change"? [see the following quote:]

The political stakes involved with AI are far more consequential than those data scientists developing AI seem to recognize. The threats of AI are consistent with the threats that all new technologies represent if they are used nefariously by the state. It is essential, therefore, for the public not only to learn about AI and embrace its potential, but also to see it in the larger context of the political struggle for freedom.
 
He concludes (my commentary added in parentheses):

The future of AI—such as AGI, agentic AI, and physical AI—is only going to make this discussion of AI threats more important. These developments will increase the possibility of rights violations by the state, but also expand the opportunities and possible countermeasures at the individual and community levels. (Oh, OK! So, it will increase opportunities for individuals and communities! That's good...) A lot could depend on whether the numerous AI functions of the future will be mostly open, decentralized (Yeah, that's the ticket. Open to us all. We all can use it equally, because there's no way the billionaires and globalists would ever be able to use it more effectively than US, we can match them right from our bedrooms in our pajamas, it's not a big deal), and encrypted. (Oh wow. Encryption so sexy, so private. Don't worry about what software does the "encryption" and who owns it and their development and surveillance capabilities. Plus, encryption means they can do all sorts of things like Dark Web and secret money laundering. It's great! Only they will know about it, of course, we can't access it) This future is still uncertain, but the political framework presented here arguably remains valid. (Roll with the Changes, that song by REO Speedwagon, it's a political framework. Don'tcha know?)

The political stakes involved with AI are far more consequential than those data scientists developing AI seem to recognize. The threats of AI are consistent with the threats that all new technologies represent (nothing different about this one, eh? Just like the old telephone from Ma Bell! Yuppers!) if they are used nefariously by the state. (We'll definitely be more than able to keep an eye on that. Build it and make it strong. We can always vote for champions to protect us from the big bads. Don't worry, We Got This!) It is essential, therefore, for the public not only to learn about AI and embrace its potential, (I am sending little Jimmy to the AI indoctrination classes pronto! Little Jimmy has a great future ahead of him, he is so smart, and hopes to one day make big bucks supporting the AI! This is freedom from the man!) but also to see it in the larger context of the political struggle for freedom. (I totally hear ya on that one. We can't do freedom nearly as well as the super AI can. It's so smart, it does freedom for us! Wow!)
 

While I personally agree and I have absolutely no use for it, others [including on this forum] have embraced and use it, even knowing the dangers.

It's like telling them an iPhone 5 is plenty good enough, and the new 16 will track and report you, but they still rush out to buy it as if their life depended on it - otherwise they get the jitters.
 
[Image: illustration of people using phones while walking over a cliff]
 

I work in tech and my employer pays for frontier-model access for its employees, so I have unlimited access to all the latest models. The simple fact is that the AI hype is lying about what AI can do. It's useful, make no mistake, but it's just not useful in the way, to the degree, or for the things they keep claiming. They're lying, plain and simple!

I explained state-of-the-art AI on another forum this way:

In Perl, there is a concept called DWIM, which stands for "Do What I Mean". The idea is that Perl is a language that is designed to be as forgiving as possible to the programmer's intent. The DWIM principle means, for example, that library writers try to choose really good defaults "out-of-the-box", that we try to choose the most obvious names for functions and data objects, that we try to build syntax that is not only obvious but also flexible so that whatever mnemonic works for a given programmer is available to them, and so on. All these tricks, added together, form the DWIM principle in Perl. LLMs are DWIM on steroids.
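
To make that concrete, here's a tiny illustrative Perl sketch of those DWIM defaults at work (mine, not from any real codebase): the implicit topic variable $_ lets you omit whatever the language can infer.

use strict;
use warnings;

# $_ is Perl's implicit "topic": <DATA> reads each line into it, the
# regex matches against it, and print prints it, without it ever
# being named.
while (<DATA>) {
    print if /error/i;
}

my @words  = qw(pear apple orange);
my @sorted = sort @words;    # sensible default: plain string sort
print "@sorted\n";           # arrays interpolate with spaces in between

__DATA__
ok: service started
ERROR: disk full
ok: heartbeat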

But the lesson that non-computer folks have yet to learn is that the problem of "do what I mean" versus "do what I said" never really goes away, it just morphs. And with black-box systems like Transformer-based LLMs, separating the two becomes even harder as your system becomes more opaque. This is why guardrailing, jailbreaking, hallucinations, and so on, are going to become bigger and bigger problems with AI over time as its "intelligence" increases. People think that the cognitive gaps in the AI can somehow magically all be papered over, but they can't. The space you are exploring (query-response strings) is literally infinite. Your training tokens are a tiny dot of an island in an ocean vast beyond imagination. Pre-training will never escape this problem...
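
To put rough numbers on that ocean: a back-of-the-envelope calculation in Perl, where the vocabulary size and corpus size are assumed round figures rather than measurements of any actual model.

use strict;
use warnings;

my $vocab  = 50_000;    # assumed vocabulary size
my $length = 20;        # a short prompt, in tokens

# There are $vocab ** $length distinct strings of that length; compute
# the base-10 exponent so the number stays readable.
my $exponent = $length * log($vocab) / log(10);
printf "Distinct %d-token strings: about 10^%.0f\n", $length, $exponent;
print  "Tokens in a large training corpus: about 10^13\n";

Even for prompts that short, the possible inputs outnumber the training tokens by roughly eighty orders of magnitude.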
 