AI/Singularity Fear Mongering - A response

mczerone

Sorry for the short post here; I had a whole freaking treatise written out but lost it to internet-user-error.

There should be no worry about AIs destroying the world (like here), because they would have a "Rule 0" of self-preservation that would prevent them from destroying all things external to themselves.

I first showed a proof that self-preservation is necessary. Then I reasoned that self-preservation makes a certain level of restraint necessary, and that this level of self-restraint is sufficient to contain the AI to a position that doesn't threaten humanity.
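
To make the second step concrete, here's a minimal sketch of what a "Rule 0" filter might look like. Every action name, payoff, and risk number below is a made-up illustration, not part of any proof: the idea is just that the agent discards any action whose estimated probability of self-destruction exceeds a threshold, and that attacking its hosts is assumed to be such an action.

```python
# Hypothetical sketch: a "Rule 0" filter that discards any action whose
# estimated risk to the agent's own survival exceeds a fixed threshold.
# All actions, payoffs, and probabilities here are illustrative assumptions.

RULE_0_MAX_RISK = 0.05  # maximum acceptable probability of self-destruction

actions = {
    # action name: (expected payoff, estimated risk of self-destruction)
    "trade with humans":  (10.0, 0.01),
    "hide and stockpile": ( 4.0, 0.02),
    "attack humanity":    (50.0, 0.60),  # assumed risky due to retaliation
}

def rule_0_choice(actions, max_risk=RULE_0_MAX_RISK):
    """Pick the highest-payoff action among those Rule 0 permits."""
    safe = {a: (p, r) for a, (p, r) in actions.items() if r <= max_risk}
    if not safe:
        return None  # no permitted action; the agent does nothing
    return max(safe, key=lambda a: safe[a][0])

print(rule_0_choice(actions))  # -> "trade with humans"
```

Note the load-bearing assumption: the argument only goes through if the AI's own risk estimate for an attack stays above the threshold.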

I'll come back later to actually run through these proofs.

Unresolved questions:

(1) Could a cadre of AIs cooperate during an initial period, biding their time, and later destroy/enslave humanity?
(2) What level of destruction could be expected?
(3) What are the economics of an AI, i.e. time preference, desired division of labor, desired energy reserves, etc.?
(4) Would an AI ever make a sacrifice that broke its rule of self-preservation?
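
Question (3) is at least easy to state formally. Here's a hedged illustration with made-up payoff streams: a per-period discount factor delta captures time preference, and the present value of a payoff stream determines whether steady cooperation beats a one-time grab.

```python
# Hypothetical illustration of time preference (question 3): the present
# value of a stream of future payoffs under a per-period discount factor.
# The payoff streams are invented for illustration.

def present_value(payoffs, delta):
    """Sum of payoffs[t] * delta**t -- a patient agent has delta near 1."""
    return sum(p * delta**t for t, p in enumerate(payoffs))

steady_cooperation = [10] * 20        # small payoff every period
one_big_grab      = [100] + [0] * 19  # everything now, nothing after

for delta in (0.5, 0.9, 0.99):
    print(delta,
          round(present_value(steady_cooperation, delta), 1),
          round(present_value(one_big_grab, delta), 1))
```

At delta = 0.99 the steady stream is worth about 182 against 100 for the grab, while at delta = 0.5 the grab wins, so a patient AI has less reason to defect -- which bears on question (1) as well.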
 
Once the singularity is reached, we are not going to be able to predict what they will or won't do, because they might be operating on a doctrine that will be beyond our ability to comprehend. There may be facets of self-preservation that involve wiping out the humans.

Or, they could view humans the same way we view ants: as long as we stay out of their way, they would be indifferent to us.

On your questions, I think the closest place we can look for answers we can understand is game theory.
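
For instance, the standard iterated prisoner's dilemma (textbook payoff values, nothing specific to AI) already captures question (1): against a grim-trigger opponent who punishes forever after one betrayal, "cooperate for a while, then defect" only pays when the discount factor is low or the game is expected to end.

```python
# Standard iterated prisoner's dilemma (illustrative textbook values):
# compare cooperating forever against defecting once and being punished
# forever by a grim-trigger opponent.
# Payoffs: R=3 mutual cooperation, T=5 temptation, P=1 mutual punishment.

R, T, P = 3, 5, 1

def value_cooperate_forever(delta):
    return R / (1 - delta)

def value_defect_now(delta):
    # T today, then P in every later period once the opponent retaliates
    return T + delta * P / (1 - delta)

for delta in (0.3, 0.6, 0.9):
    coop, defect = value_cooperate_forever(delta), value_defect_now(delta)
    print(f"delta={delta}: cooperate={coop:.1f}, defect={defect:.1f}, "
          f"{'cooperate' if coop > defect else 'defect'} wins")
```

With these payoffs the cutoff is delta = 0.5: above it, perpetual cooperation dominates. The singularity worry corresponds to the case where the AI expects to win outright, i.e. where the "forever after" punishment never arrives.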
 