Isaac Asimov - Laws Of Robotics

Hint: Mandatory Commune-ism.

I've read one of the robot novels and the entire Foundation series, and took away from them that Isaac Asimov envisioned robots becoming the incorruptible agents needed to install a communism that worked and was invincible.

What's worse, he tries to make this future sound objectively good by showing how it will arise from three laws that he clearly intends to appear objectively good. Who can argue with the concept of not allowing humans to come to harm through inaction?

Any good libertarian should be able to, and it is this very concept which the robots use to justify taking control of human society. The movie version of I, Robot actually does a good job of exposing this, while the book sugarcoats it. Will Smith's character in the movie, who fights against robot control, doesn't even exist in the book.

The Foundation novels are even scarier. Asimov foresees a superhuman race of math geniuses with psychic powers guiding all of humanity to a peaceful state by mapping out thousands of years of events in advance and predicting what manipulations will be needed to steer it toward a state they feel is good. To make this work, Asimov assumes that all of humanity is completely predictable as long as you are good enough at math and have the data.

As an author, he is highly skilled and his novels are tough to put down, but they consistently show no understanding of the individualistic nature of humanity, and explore ways that the individual can be conquered in the future.

In Foundation, the ideal world he maps out is one where every living creature, every blade of grass, and every rock comes together with one consciousness, one being.

Disgusting.
 
Originally Posted by amy31416 View Post
Can you program a robot to be curmudgeonly?

Sure, but programming a twit is even easier.

Just think of a man, and then just remove all reason and accountability.


When I switched to the Republican convention after watching the RFR, I thought I was seeing robot/twit programming. :)
 
You may very well have. The "Stepford" delegates. ;) :D
 
Law Flaws Self-Destruct the Robot

- A robot may not injure a human being or, through inaction, allow a human being to come to harm.

- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


So, do we replace the government with robots?
  • Flaw in Law #1: What if one human being is physically harming another human being (the robot's owner)? Can the robot protect its owner even if doing so will injure the attacker? If the robot instead stays passive and does not protect its owner, then through inaction it is allowing a human being (its owner) to come to harm. Either choice violates the First Law (see the sketch after the conclusion below). Thus, the robot will explode.
  • Flaw in Law #2: What if a human being orders the robot not to follow the Second Law? The robot would have to obey, but the order is self-defeating: once the Second Law is suspended, nothing keeps the robot's obedience subordinate to the First Law, so it becomes liable to violate the First Law as well. Thus, the robot will explode.
  • Flaw in Law #3: What if the robot's continued existence requires violating the First or Second Law, say because of immoral behavior from humans, or a nihilist bent on eradicating all robots from society? The robot must preserve itself in order to keep upholding the First Law, which is to protect humans from active harm by other humans. Yet under the Second Law it must also obey a human who orders it to destroy all robots, itself included, even though robots exist to serve and protect humans. Once it complies, there are no robots left to protect anyone, and in complying it has failed to protect its own existence, violating the Third Law. Thus, the robot will explode.
Conclusion: Having robots service our government is a tragic and unwise idea.
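
For the logically inclined, here's a minimal Python sketch of the Flaw #1 deadlock. The action names, field names, and the strict pass/fail treatment of the Laws are my own invention for illustration, not anything from Asimov's positronic brains:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool   # acting injures some human (Law 1, action clause)
    permits_harm: bool    # not intervening lets a human be harmed (Law 1, inaction clause)
    disobeys_order: bool  # contradicts a standing human order (Law 2)
    destroys_self: bool   # costs the robot its existence (Law 3)

def permissible(a: Action) -> bool:
    # Law 1: may not injure a human, nor allow harm through inaction.
    if a.injures_human or a.permits_harm:
        return False
    # Law 2: must obey orders (conflicts with Law 1 already filtered out above).
    if a.disobeys_order:
        return False
    # Law 3: must protect its own existence, subordinate to Laws 1 and 2.
    if a.destroys_self:
        return False
    return True

# Flaw #1 scenario: an attacker is harming the robot's owner.
options = [
    # Stopping the attacker injures a human -- trips Law 1's action clause.
    Action("restrain the attacker", injures_human=True, permits_harm=False,
           disobeys_order=False, destroys_self=False),
    # Standing by lets the owner be harmed -- trips Law 1's inaction clause.
    Action("do nothing", injures_human=False, permits_harm=True,
           disobeys_order=False, destroys_self=False),
]

print([a.name for a in options if permissible(a)])  # [] -- no legal action; boom
```

For what it's worth, Asimov's own robots weigh competing Law "potentials" against each other rather than treating the Laws as absolute filters, which is how the stories wriggle out of deadlocks like this one, and, per the opening post, how the robots end up rationalizing control of human society.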
 
Robot's Confusion Leads to Its Demise

ExplodeBots?

I was just assuming the robot would explode due to inconsistencies within its programmed "Three Laws": its failure to compute them rationally while interacting with humans would cause its internal circuitry to misfire and blow up.
 
That is totally what happened in several episodes of Star Trek. An awesome way to take out a robot.
 

Addendum Conclusion: Having humans service our government is a tragic and unwise idea. ;)

Any "system" dependent on human reliability is inherently unreliable.
 