# Lifestyles & Discussion > Science & Technology >  I think I have become a Luddite libertarian....

## Thor

AI is coming.... There are private       companies and governments working on AI. And an AI could take       Boston Dynamics research with robots to build armies against the       human race. But to "equalize" that, Elon Musk is also working on a "neurolace"  https://m.youtube.com/watch?v=2frtUraRWjI with       his company Neuralink. https://neuralink.com/ 

So he wants to inject nano particles         into your jugular vein that put a mesh over your brain and neurons to link         you up to the Internet for all these wonderful things like being         able to tap into computer processing for faster answers. Remote         diagnosis of your health, instant communication with others,         clouds storage of memories, tapping into remote cameras for         viewing in your minds eye, etc. but basically making us all slaves to         the machine. And some people will rebel against injecting /         implanting something. So I am sure a wireless version with a         patch you wear will be next. But it will give you dopamine and         serotonin releases when you have it on and you will have         withdrawals when you don't. Making it an always connected         appliance, like cell phone addiction. 

According to Elon it is a way to prevent         us from becoming "house cats" to AI. (Facebook had to unplug its         AI a couple months ago as it was getting out of control, who         will give an AI access to the internet to run and hide and         proliferate? - http://gadgets.ndtv.com/social-netwo...nguage-1731309) But it also makes us slaves to the machine. No         private thoughts. No freedom. No liberty. No rebellion.  No anarchy.  No breaking "the law." And if a corporation         like Facebook, or Amazon, or Apple, or Microsoft or any number         of corporations take over.... Or a government.... Or an oligarchical collusion between government and corporations (fascism.) Or a hacker...         Or an AI itself... Or an alien life force now with an easy central way to tap in         and control all humans.... or all of the above in a progression series... The human race is done. Freedom, liberty, free thought, independence, etc...  gone. A distant memory.  Zombies is all we will be, if we are allowed to live.  Maybe it has already happened and we are in the Matrix now... if it has not, it will and we will be.

So we currently have a race to release an AI - which         will eventually want to destroy us as a useless parasite; and a counter race for a neurolace that         will be hailed as a great advancement past the cell phone, with         so many wonderful benefits ....which will also enslave and/or         destroy us. Either way, we are $#@!ed. And most people have no         clue. 

The only solution is see is a world wide EMP and turning off technology for good. And I hate to say that,         as I have been an advocate for technology for a long, long while. As some         company or some government will release an AI. And once the cat         is out of the bag, it will spread to all the corners of the Internet and hide. PC's, phones, servers, anywhere... And if a neurolace is deployed to help humans         transform and "keep up" with an AI - we become prisoners, slaves,         to the machine and will be controlled or destroyed.  Maybe not immediately, maybe a decade later... or more...  but it will evolve to that.  So without an EMP and         destruction of both AIs and neurolaces - without allowing them         to resurface, the human race will perish. How else can an AI be         contained and not released by a North Korea or a Russia or an Iran or         a China or some small private company????  I am open to suggestions as to how either can coexist with humans and still honor and preserve freedom, liberty and independence.  Fat chance.  But I am willing to listen if you have a suggestion.

Most people haven't a clue as to what is going on, but all of         this is being worked on very actively. 

I think an AI will be         released first and the neurolace (from Elon or some other company) will be deployed as a "solution."         And I don't think it is that far out anymore. I wake up in the         middle of the night scared $#@!less for my kids and their         life ahead. 

This is probably 5-10 years out. If         even. AI is on the edge of being un-containable.  Many companies are working hard on AI's.

We need to wake people the $#@! up. Libertarians and everyone else.  We will become slaves to the machine one way or another, if there is not annihilation first.

     I fear I have become a Luddite         libertarian. The Amish were right all along.  Technology will be the death of humankind.  We are building our own evolutionary replacements.  Freedom and Liberty are so passé now, and about to become distant, forgotten memories; if we still own those things called memories in the future. 

Am I paranoid?  or wrong?

God I hope so...

----------


## Brian4Liberty

The Butlerian Jihad is coming...

----------


## Thor

> The Butlerian Jihad is coming...


Never read Dune, but I sure hope so.  I fear too many people are oblivious and will not act.

----------


## Swordsmyth

AI is being hyped, there is NO way that it can be as advanced and dangerous as THEY want you to believe.

----------


## phill4paul

> AI is being hyped, there is NO way that it can be as advanced and dangerous as THEY want you to believe.


  I've lived in a time before cell phones. Barbaric and crazy $#@!. Don't try to tell me that AI is not advancing. Every year, or even month, builds on what has come before it. Out of curiosity how old are you?

----------


## Swordsmyth

> I've lived in a time before cell phones. Barbaric and crazy $#@!. Don't try to tell me that AI is not advancing. Every year, or even month, builds on what has come before it.


Yes but there are limits, THEY are hyping AI to manipulate people.




> Out of curiosity how old are you?


I was born in the 80's

----------


## CaptUSA

I've found that a positive outlook will prevent the Luddite.  

Luddites look at technology as a curse.  But if you look at technology as a tool...  A tool for good or evil...  You will most likely change your frame of reference.  It's a tool that must be protected from evil and used for good.  Luddites want to prevent the tool because of the harm it may do.  But if you see the possibilities in liberty, technology becomes a beacon of hope.

----------


## Thor

> I've found that a positive outlook will prevent the Luddite.  
> 
> Luddites look at technology as a curse.  But if you look at technology as a tool...  A tool for good or evil...  You will most likely change your frame of reference.  It's a tool that must be protected from evil and used for good.  Luddites want to prevent the tool because of the harm it may do.  But if you see the possibilities in liberty, technology becomes a beacon of hope.


I don't see it in AI and/or a neurolace.  If you do, please share....

----------


## VIDEODROME



----------


## Thor

Does anyone else see a reason to be concerned with a neurolace type interface?

----------


## Swordsmyth

> Does anyone else see a reason to be concerned with a neurolace type interface?


I won't take it.

----------


## Anti Federalist

> I fear I have become a Luddite         libertarian. The Amish were right all along.  Technology will be the death of humankind.  We are building our own evolutionary replacements.  Freedom and Liberty are so passé now, and about to become distant, forgotten memories; if we still own those things called memories in the future. 
> 
> Am I paranoid?  or wrong?
> 
> God I hope so...


You are paranoid...for good reason.

You are not wrong.

We are truly building a world in which we are not needed, violent, earth destroying, viral meatbags, to be disposed of.

And in the meantime, liberty is as dead as Julius Caesar.

----------


## Anti Federalist

> I fear I have become a Luddite         libertarian. The Amish were right all along.  Technology will be the death of humankind.  We are building our own evolutionary replacements.  Freedom and Liberty are so passé now, and about to become distant, forgotten memories; if we still own those things called memories in the future. 
> 
> Am I paranoid?  or wrong?
> 
> God I hope so...


You are paranoid...for good reason.

You are not wrong.

We are truly building a world in which we are not needed, violent, earth destroying, viral meatbags, to be disposed of.

And in the meantime, liberty is as dead as Julius Caesar.

----------


## Thor

> You are paranoid...for good reason.
> 
> You are not wrong.
> 
> We are truly building a world in which we are not needed, violent, earth destroying, viral meatbags, to be disposed of.
> 
> And in the meantime, liberty is as dead as Julius Caesar.


Any way you see to fight it?  I realize it is man against machine...  but hell we can't just sit back and let it happen, can we?

----------


## Anti Federalist

> Any way you see to fight it?  I realize it is man against machine...  but hell we can't just sit back and let it happen, can we?


Nothing that I can see, or at least nothing that is obvious to me.

Between the rosy predictions predictions of the futurists and technocrats, and the general apathy of the huddled masses, I see no clear way in which to stop or slow this.

----------


## Thor

> Nothing that I can see, or at least nothing that is obvious to me.
> 
> Between the rosy predictions of the futurists and technocrats, and the general apathy of the huddled masses, I see no clear way in which to stop or slow this.


I hear ya.  And generally agree...  But this is not a fight for liberty, that us who cherish liberty can live with the loss (somewhat) in a confined or constricted environment while unhappy or uncomfortable.  This evolution is the final battle for liberty, freedom, independence, free will, and individuality...  this will be the end of mankind.  *We can't just roll over and say "oh well, try not to make it hurt."*




> *The Coming Techno Apocalypse*
> 
> August 31, 2017                             
> 
> Imagine, if you will, a war in the near future. A war not fought  between East and West. Not fought between nations, nor creeds nor races.  A war fought brother to brother and sister to sister, father to son and  mother to daughter.
> 
> _Tesh_ (technologists) against _NonTesh_ (non-technologists).
> 
> The _Tesh_, a tribe or caste intoxicated and socially  validated through their umbilical connection to technology and the  Meta/Hive Mind. Glued to _Google Glass,_ status permanently _updated:_ the _Tesh_ always offer the _right_ opinion at the _right_ time. Incapable of _nonGroupThink,_ the _Tesh_ occupy all positions of import in the professional, media, academic and information classes. Opinions counter to _Tesh GroupThink_ are not only by definition incorrect, they are also invisible – filtered, de-platformed and deleted into a silent nonexistence.
> ...


http://www.newdawnmagazine.com/artic...hno-apocalypse


That article or story continues.  And does not even mention a neurolace specifically (only an umbilical cord via Google Glass).  I think a neurolace will make the "NonTesh" as he calls them even more minuscule than he portrays.  And they have armies of corporations working on harvesting data and data points to imprison us.

I think the only thing that might help, is to write a screen play and have someone make a movie....  outlying all this. People don't read, have super short attention spans, and suffer from ADHD in the form of constantly jumping from one thing to another...  but people will go to a theater and sit down for 90 minutes and watch something if it is well produced, riveting and captures there imagination and attention.... I am not a writer, nor have the funds to produce something.   We need to find others who are as aware and concerned who wants to bring this to light... That is one way to wake people up, maybe....  they probably will go back to their screens on their phones and forget what they just saw, but we have to try something.  We can't just say "oh well, bring it on." Right?    

It is not about how the marbles get divided, or what I can do with my marbles, or where I can keep my marbles, anymore. It is the future of self identity and the human race at large at stake...  This is bigger than anything else ever in the history of mankind and all wars ever fought for freedom.  And most people are going to line up (and pitch a tent in line) and pay through the nose with their FRN to be enslaved on the downward spiral of the human race.

Other than a movie, I am not sure what to do either...  but we can't go down without a fight.

----------


## helmuth_hubener

> Any way you see to fight it?   We can't just sit back and let it happen, can we?


1. Free yourself from the technological traps you are currently in.  Stop using Facebook, for instance.  Stop using GPS, use paper maps.  Stop "googling" little facts and questions that come up and instead try to remember them, and if you cannot, be content with not knowing.  Things like that.  

We don't need brain implants that explicitly release dopamine: checking Facebook does that already, harnessing the dispensers that biology has already built in (much simpler and cheaper).  We don't need super-smart computers to make us seem stupider in comparison -- using technological crutches is already making us literally stupider, on an _absolute_ scale, not just relative.  You don't have to worry about theoretical possible future oppressions if you're already caught in real traps in your real life, right here, right now.  First things first.  

2. Now that you are living free and independent from technological traps, continue reducing technology that you don't even think of as traps.  Take technology out of your life to an extent.  You say that the Amish have gotten a lot right, and that's true.  But don't just _say_ it; don't just gripe.  Do something about it.  It's not that hard to start going that direction.  "Almost Amish."  With your newfound independence and confidence, you're up to it.  Be moderate, don't go nuts.  It should be fun, not painful.  Austerity is nice, calming, and empowering.

3. Get married to a traditional-minded woman.  Have children.  Have lots of children.  These are long term problems you are thinking about and wanting to fight against.  In order to have a chance, you need to hone your long term thinking and cultivate a long term attitude.  "Gorilla Mindset."  Nothing will do that more than having children.  And, of course, having children in and of itself is an essential part of any sensible long-term victory strategy.

4. Improve yourself.  Increase your competence.  Increase your wealth.  Increase your power.  Become a Master.

5. Impose your will on the world.  *Win*.

----------


## osan

Ha ha ha ha...  people think this is news.

If you only know where I spent my work days back in 2001...  you'd $#@! in your pants.

----------


## osan

> I fear too many people are oblivious and will not act.


I suspect you fear correctly.

----------


## osan

> AI is being hyped


That's a no-brainer.  Of course it is being hyped, but...




> there is NO way that it can be as advanced and dangerous as THEY want you to believe.


I would not be too quick to make such a claim.  I worked on things you simply would not believe.

Think "Turing test".

----------


## Thor

> 1. Free yourself from the technological traps you are currently in.  Stop using Facebook, for instance.  Stop using GPS, use paper maps.  Stop "googling" little facts and questions that come up and instead try to remember them, and if you cannot, be content with not knowing.  Things like that.  
> 
> We don't need brain implants that explicitly release dopamine: checking Facebook does that already, harnessing the dispensers that biology has already built in (much simpler and cheaper).  We don't need super-smart computers to make us seem stupider in comparison -- using technological crutches is already making us literally stupider, on an _absolute_ scale, not just relative.  You don't have to worry about theoretical possible future oppressions if you're already caught in real traps in your real life, right here, right now.  First things first.  
> 
> 2. Now that you are living free and independent from technological traps, continue reducing technology that you don't even think of as traps.  Take technology out of your life to an extent.  You say that the Amish have gotten a lot right, and that's true.  But don't just _say_ it; don't just gripe.  Do something about it.  It's not that hard to start going that direction.  "Almost Amish."  With your newfound independence and confidence, you're up to it.  Be moderate, don't go nuts.  It should be fun, not painful.  Austerity is nice, calming, and empowering.
> 
> 3. Get married to a traditional-minded woman.  Have children.  Have lots of children.  These are long term problems you are thinking about and wanting to fight against.  In order to have a chance, you need to hone your long term thinking and cultivate a long term attitude.  "Gorilla Mindset."  Nothing will do that more than having children.  And, of course, having children in and of itself is an essential part of any sensible long-term victory strategy.
> 
> 4. Improve yourself.  Increase your competence.  Increase your wealth.  Increase your power.  Become a Master.
> ...


I agree, those are things I can do, in my own little box.  And many have merit.  Thank you. But the world around me will destroy that box I am in if nothing is done to help the world around me wake up before it is too late.  So that is what I am asking....  What can we do to make the world around us wake the $#@! up and not go down the rabbit hole any further than we have already?

----------


## Thor

> Ha ha ha ha...  people think this is news.
> 
> If you only know where I spent my work days back in 2001...  you'd $#@! in your pants.





> I would not be too quick to make such a claim.  I worked on things you simply would not believe.
> 
> Think "Turing test".



Do tell....

----------


## helmuth_hubener

> I agree, those are things I can do, in my own little box.  And many have merit.  Thank you. But the world around me will destroy that box I am in if nothing is done to help the world around me wake up before it is too late.  So that is what I am asking....  What can we do to make the world around us wake up and not go down the rabbit hole any further than we have already?


Consider what you are asking.  Do you think it is possible that you are perhaps asking: How can I make my reach extend beyond my abilities?

Or put another way: How can I exert more power over the world than I actually have?

And is not the answer to that question very clear?

_Can't_.  One can't exert power one does not have.  So the solution is to get power.  Become powerful.

I'm not criticizing you.  Your sentiments are good.  Your goal is good.  Perhaps in this thread we could come up with even more great ideas to achieve that goal.  Just remember they should be rooted in realism.

----------


## Thor

> Consider what you are asking.  Do you think it is possible that you are perhaps asking: How can I make my reach extend beyond my abilities?
> 
> Or put another way: How can I exert more power over the world than I actually have?
> 
> And is not the answer to that question very clear?
> 
> _Can't_.  One can't exert power one does not have.  So the solution is to get power.  Become powerful.
> 
> I'm not criticizing you.  Your sentiments are good.  Your goal is good.*  Perhaps in this thread we could come up with even more great ideas to achieve that goal.*  Just remember they should be rooted in realism.



* Perhaps in this thread we could come up with even more great ideas to achieve that goal.
*
That was the ultimate goal of me starting the thread.  Maybe more ideas contributed and connections others have that I don;t will lead to helping raise awareness.

----------


## osan

> Do tell....



Spending the remainder of my life in prison is not in my plans.

----------


## helmuth_hubener

> I agree, those are things I can do, in my own little box.


Here's another idea: change your thinking about being in a box.  

Leave the box.  The ideas I presented are not about staying in your own little box.  That's not how I meant them at all.  They are all about escaping any box and reaching out into the real world.  Numbers 4 and 5 especially are, but all are needed.  You seem to agree about #5 and want to focus on that.  How can we impose our will on the world, to wit: preventing AI and saving humanity?  OK, great, we can flesh out #5.  Make a movie, OK.  Perhaps such movies already exist?  Terminator?  2012 Space Odyssey?  I haven't seen either, but I'm pretty sure movies portraying doom from intelligent machines are out there.  Matrix?  So rather than making a new movie (a task to which you say you aren't suited, and nor am I) you could spend money promoting the best movie that best conveys the idea you want to get across.

Or promote an existing book on the problem.

Use DuckDuckGo to deprive Google of the money it needs to fund its AI research. http://www.duckduckgo.com

I'm full of ideas.  Do you want more?  What kind of ideas?

----------


## Thor

> Here's another idea: change your thinking about being in a box.  
> 
> Leave the box.  The ideas I presented are not about staying in your own little box.  That's not how I meant them at all.  They are all about escaping any box and reaching out into the real world.  Numbers 4 and 5 especially are, but all are needed.  You seem to agree about #5 and want to focus on that.  How can we impose our will on the world, to wit: preventing AI and saving humanity?  OK, great, we can flesh out #5.  Make a movie, OK.  Perhaps such movies already exist?  Terminator?  2012 Space Odyssey?  I haven't seen either, but I'm pretty sure movies portraying doom from intelligent machines are out there.  Matrix?  So rather than making a new movie (a task to which you say you aren't suited, and nor am I) you could spend money promoting the best movie that best conveys the idea you want to get across.
> 
> Or promote an existing book on the problem.
> 
> Use DuckDuckGo to deprive Google of the money it needs to fund its AI research. http://www.duckduckgo.com
> 
> I'm full of ideas.  Do you want more?  What kind of ideas?


I only use Duck too...  have for years.

Sure more ideas are great.  keep posting.

And some movies get close, sure.  But none (that I am aware of) talk about things like a neurolace and the ramifications of that.  And that is being worked on IRL right now...   So while a a science fiction movie (or book) might help, there are good odds it will remain in the world of make believe and people will not connect it to reality...  but one that finishes with the book or movie with Elon is working on a neurolace, and these companies are working on AI, and Facebook had to terminate their AI due to it creating it's own language, and what you just saw is not fiction....  might be better than trying to promote a work of "fantasy" and attempting to link it to reality.

----------


## phill4paul

> I was born in the 80's


  Ah, ok. I'm about 20 yrs. older than you. I've seen the pace at which technology accelerate in my lifetime. Computerwise we were punching cards to program in highschool computer class. My grand parents saw, not only the invention of powered flight, but jet-liners and the landing on the moon. They saw war waged with the invention of the tank in warfare through the destructive power of the hydrogen bomb. 
  So, perhaps in another 20 yrs. you'll begin to understand the belief that things such as A.I. used in dystopian fiction may truly, become reality.

----------


## Swordsmyth

> Ah, ok. I'm about 20 yrs. older than you. I've seen the pace at which technology accelerate in my lifetime. Computerwise we were punching cards to program in highschool computer class. My grand parents saw, not only the invention of powered flight, but jet-liners and the landing on the moon. They saw war waged with the invention of the tank in warfare through the destructive power of the hydrogen bomb. 
>   So, perhaps in another 20 yrs. you'll begin to understand the belief that things such as A.I. used in dystopian fiction may truly, become reality.


Every technological revolution moves slowly at first then rapidly then it slows down again as it reaches it's limits, computers and AI have limits and they will reach them before Skynet takes over the planet.

There are definite dangers to technology but the world will not end because of it, humans will be the ones to cause that as they have with every other civilizational collapse in pre-techno history.

----------


## Thor

> Every technological revolution moves slowly at first then rapidly then it slows down again as it reaches it's limits, computers and AI have limits and they will reach them before Skynet takes over the planet.
> 
> There are definite dangers to technology but the world will not end because of it, humans will be the ones to cause that as they have with every other civilizational collapse in pre-techno history.


LOL.  so how has your model worked with communication?  Pony Express, telegraph / morse code, 1 telephone per town, group "chat lines", 1 rotary phone per house, 1 phone per house occupant and phones with touch tone, brick cell phones in cars only, hand held cell phones, cheap cell phones, smart phones, and eventual a neurolace...  I do not see the slow down there....

----------


## Swordsmyth

> LOL.  so how has your model worked with communication?  Pony Express, telegraph / morse code, 1 telephone per town, group "chat lines", 1 rotary phone per house, 1 phone per house occupant and phones with touch tone, brick cell phones in cars only, hand held cell phones, cheap cell phones, smart phones, and *eventual a neurolace*...  I do not see the slow down there....


The underlined portion is the slowdown, people are only marginally more able to communicate now than what payphones allowed, the internet which you didn't mention made a bigger difference but it's ability to change things is slowing down as well, the *bold* may never be achieved.

----------


## Thor

> The underlined portion is the slowdown, people are only marginally more able to communicate now than what payphones allowed, the internet which you didn't mention made a bigger difference but it's ability to change things is slowing down as well, the *bold* may never be achieved.


Yes, I left off the Internet as I was on the "phone" path, but that portion of communication too has evolved. 

So we should just pretend like a "neurolace", as written, or something like it won't be achieved?  As in faster, more immediate communication is not the goal?

All sorts of different angles are being explored for a brain-computer interface.  Elon Musk just as a lot of money (from tax payers) and is working on many different technologies in different areas (Tesla, Solar City, Hyperloop, Boring, etc...) with viable results in a good many of them.

The mesh/lace idea has been worked on for some time...  https://www.extremetech.com/extreme/...vidual-neurons

So it will "probably" be achieved.  Regardless of AI (which is also showing great advancement.)

I wish I could blow it off like you do...  but I too have seen technology advance an amazing amount in my life.

----------


## Swordsmyth

> Yes, I left off the Internet as I was on the "phone" path, but that portion of communication too has evolved. 
> 
> So we should just pretend like a "neurolace", as written, or something like it won't be achieved?  As in faster, more immediate communication is not the goal?
> 
> All sorts of different angles are being explored for a brain-computer interface.  Elon Musk just as a lot of money (from tax payers) and is working on many different technologies in different areas (Tesla, Solar City, Hyperloop, Boring, etc...) with viable results in a good many of them.
> 
> The mesh/lace idea has been worked on for some time...  https://www.extremetech.com/extreme/...vidual-neurons
> 
> So it will "probably" be achieved.  Regardless of AI (which is also showing great advancement.)
> ...


All I am saying is don't let THEM freak you out, we should be cautious and resist the malicious use of tech as best we can, but Skynet is not coming, the Mark of the beast is but they already have the tech for that.

Remember THEY control Hollywood and the other culture outlets that are hyping AI and Bionics and doomsdays of all varieties.

Treat it like "Climate change", don't freak out but do be prepared for earthquakes, hurricanes, wildfires, droughts and any other natural disasters.

----------


## Thor

> All I am saying is don't let THEM freak you out, we should be cautious and resist the malicious use of tech as best we can, but Skynet is not coming, the Mark of the beast is but they already have the tech for that.
> 
> Remember THEY control Hollywood and the other culture outlets that are hyping AI and Bionics and doomsdays of all varieties.
> 
> Treat it like "Climate change", don't freak out but do be prepared for earthquakes, hurricanes, wildfires, droughts and any other natural disasters.


This has nothing to do with Hollywood and "skynet."  And I am not letting ANYONE else freak me out.  I have connected the dots and am doing a fine job freaking myself out.  LOL.  And the mark of the beast?  Maybe.  Probably, if a "beast" exists.  That is a different topic of discussion.  But loosing your freedom of choice, private thoughts, etc in and of itself is huge.  Connected to 1 single massive "kill switch"

Look it how addicted people are to their phones now.  It is evolutionary if we allow things to progress on the path they are on without raising awareness.  Read some of the comments on the video of the Elon talk on a neurolace.  People love the idea...  videogames in your head... wow...  

If an AI can create and communicate in it's own language that humans cannot understand or decipher today, what do you think will happen next year, or in 10 years?  50 years?  Do you know what life was like 50 years ago, compare to now...  And the AI Facebook pulled the plug on was doing just that.  It had evolved it's own language for communication that the ones controlling the experiment could not understand.

Treating it like "climate change" will allow it to get out of control before it is too late.

Relax bro, everything will be fine...  LOL

----------


## Thor

Just read this:




> YouTubes latest push to ban terrorist propaganda across its  ubiquitous video platform is getting off to a rough start. Earlier this  week, noted investigative reporter and researcher Alexa OBrien woke to  find that *not only had she been permanently banned from YouTube, but  that her Gmail and Google Drive accounts had been suspended as well.* (comply or be banned from everything) She  would later learn that a reviewer who works for Google had mistakenly  identified her channel, in the words of a YouTube representative, as  being dedicated to terrorist propaganda.
> 
> This drastic  enforcement action followed months of notifications from YouTube, in  which OBrien was told that three of her videos had been flagged for  containing gratuitous violence. None of the videos, however, depict  any actual scenes of violence, except for one that includes footage of  American helicopter pilots gunning down civilians in Iraq, which has  been widely viewed on YouTube for half a decade. 
> 
> While  appealing YouTubes decision, OBrien learned that the mechanism for  correcting these mistakes can be vexing, and that a fair outcome is far  from guaranteed. By Wednesday morning, her channel was slated for  deletion. The Google Drive account she was locked out of contained  hundreds of hours of researchor years worth of her workand was  abruptly taken offline. She was then told that she was prohibited from  accessing, possessing or creating any other YouTube accounts. The ban  was for life, and with little explanation and zero human interaction,  OBriens research, much of it not accessible elsewhere, was bound for  Googles trashcan.
> 
> With the knowledge that YouTube has faced increased pressure from the US and European governments to crack down on the spread of terrorist propagandaa consequence of which has led to the disappearance of content amassed by conflict reportersit wasnt difficult to deduce what had happened to OBriens account.
> 
> The  problem was eventually addressed and representatives of both Google and  YouTube later called OBrien to apologize and explain the error. When  she was told that her channel had been misidentified as an outlet for  terrorist propaganda, she could hardly contain her laughter. It was a  series of unfortunate events, a YouTube rep told her. The mistake, they  explained, was the fault of a human reviewer employed by Google.
> ...


https://gizmodo.com/journalist-nearl...pos-1815314182

----------


## Mach

Just zap it/them with an electrical charge and they will be shorted out.




> I won't take it.


They will "start" implanting at birth.

----------


## helmuth_hubener

How to change the world

----------


## Thor

> Artificial Intelligence (AI) will more than likely bring about the next technological renaissance. Although its capable of some extraordinary things already, its not quite at the revolutionary stage yet  but that doesnt stop people in the know making some intriguing predictions.
> 
> Enter John McNamara, a senior inventor and the Innovation Centre Technologist Lead at IBM. He was recently giving evidence to the UK Parliaments House of Lords AI Committee, and he said that by around 2040, AI nanomachines being injected into our blood streams  effectively creating machine-augmented humans  will be a reality.
> 
> These will provide huge medical benefits, such as being able to repair damage to cells, muscles, and bones, he told those in session, adding that they could actually end up improving the original biological frameworks.
> 
> Beyond this, utilizing technology which is already being explored today, we see the creation of technology that can meld the biological with the technological, McNamara points out. He explained that just a little bit more advancement will mean we can enhance human cognitive capability directly, potentially offering greatly improved mental [abilities], as well as being able to utilize vast quantities of computing power to augment our own thought processes.
> 
> He goes on to suggest that if our environment was augmented too, with nanomachines, AI, and so on, wed be able to connect to it and interact with it using our thoughts alone. Controlling your home, car, TV, computer and so on like a Jedi? No problem, as long as you can wait 20 or so years.
> ...


http://www.iflscience.com/technology...s-by-2040/all/
More (http://www.telegraph.co.uk/science/2...thin-20-years/)

----------


## jllundqu

> The Butlerian Jihad is coming...


Yeah buddy.  But who will be Serena Butler (or her son??)

----------


## jllundqu

This is a topic I am rather invested in.  I try to follow the current news and key players.  I would simply say people woefully underestimate the power that advanced tech like general AI bring to the table.  It doesn't take that much imagination to see the path we are on and how it will be used by TPTB for nefarious purposes.  Cybernetics is real.  They have already begun using biotech to 'heal the sick' and that is one short step away from general enhancement of the human body.  The list of implications is simply too long to lay out, but suffice it to say whoever controls the tech will control the world.  

I think @Thor was on to something when he said he hopes an EMP sends us all back to the stone age.  That would make a great movie plot... humanity races toward AI Armageddon while a group of freedom fighters races to detonate an EMP to prevent it.

----------


## Thor

> I think @Thor was on to something when he said he hopes an EMP sends us all back to the stone age.  That would make a great movie plot... humanity races toward AI Armageddon while a group of freedom fighters races to detonate an EMP to prevent it.


Agreed, and the same movie plot could extrapolate on what could happen with an AI future, especially when integrated into the human body / brain.  Starting off with the rosy proponent painted future, but then move into the more down to earth likelihood of events when having a neurolace controlled by an AI embedded in your cranium.  

As far as an EMP, we might get that here on our home soil sooner rather than later if Dear Leader pushes North Korea a little more....  But that would only be for the USA to be sent to the stone age, not the rest of the world where this AI / neurolace technology advancement will continue unfettered.

----------


## CaptUSA

> “Today, being poor means being unable to afford the latest smartphone,” he surmised. “Tomorrow this could mean the difference between one group of people potentially having an extraordinary uplift in physical ability, cognitive ability, health, lifespan and another much wider group that do not.”


I'm always suspicious when I hear people speak about "groups of people" based on wealth, and so should you be.

In a economically free society - even one that is relatively economically free - most people do not remain in one group.  In fact, there is a great deal of economic mobility between classes.  What you're really talking about are people at different stages of their lives.  Ironically to that post, technology has always led to an INCREASE in economic mobility, not the opposite.

----------


## jllundqu

> “Today, being poor means being unable to afford the latest smartphone,” he surmised. “Tomorrow this could mean the difference between one group of people potentially having an extraordinary uplift in physical ability, cognitive ability, health, lifespan and another much wider group that do not.”
> 
> I'm always suspicious when I hear people speak about "groups of people" based on wealth, and so should you be.
> 
> In a economically free society - even one that is relatively economically free - most people do not remain in one group.  In fact, there is a great deal of economic mobility between classes.  What you're really talking about are people at different stages of their lives.  Ironically to that post, technology has always led to an INCREASE in economic mobility, not the opposite.


This is true to an extent, but what happens when we have technology that can literally make you super-human... I'm talking genetic modification, biotech enhancement, cognitive upgrades, etc... these things will only be available to those that can afford them.  Ray Kurzweil often talks about transhumanism, where man merges with machine, and one thing that is often left out of the discussion is the astronomical gaps between people that will exist for a time.  Today we talk about 1st world and 3rd world countries... imagine what the difference will be in 50 years between a truly technologically advanced civilization and sub-Saharan Africa.

----------


## CaptUSA

> This is true to an extent, but what happens when we have technology that can literally make you super-human... I'm talking genetic modification, biotech enhancement, cognitive upgrades, etc... these things will only be available to those that can afford them.  Ray Kurzweil often talks about transhumanism, where man merges with machine, and one thing that is often left out of the discussion is the astronomical gaps between people that will exist for a time.  Today we talk about 1st world and 3rd world countries... imagine what the difference will be in 50 years between a truly technologically advanced civilization and sub-Saharan Africa.


Ok, first, I think we need to temper our sci-fi imaginations a little.  When thinking in terms of future and past, people tend to remove real human beings from the mix.  In the past, humans become a sort of caricature memory.  In the future, humans become non-thinkers.  I guess it's just human nature to only truly live in the present.  But anyway, I digress...

Even if the augmentations you envision were to become a reality - and yes, it's just a matter of time - there will be real human interactions and individual incentives that will drive people's access to it.  When you say the astronomical gaps between people - you have to realize that there are already astronomical gaps between people in different stages of their lives.  I was dirt poor for the beginning of my life, but now I'm fairly successful.  Those two "people" couldn't be further from each other if one were just throw a dart on the board and compare them.  But the individual was able to move.  And with technology, that movement becomes easier - not harder.

If you use your imagination within the parameters of real people interacting with technology and the market, it becomes a whole lot less scary.  If those super-human technologies exist, they will be available for people to purchase.  And there will be a market for a lower-quality technology for a lower price.  And there will be those who use the level of tech that they can afford to progress to the higher levels.

For me, the argument always comes down to freedom vs. control.  If you have freedom, you don't need to worry about technology.  It's only when you try to limit technology via some control, that things get really messy.  Technology is a tool.  It can be used for good or bad.  Our job is to make sure it's used for good - and the market will always do that.  Our job is NOT to futilely try to prevent technology - that's a recipe for disaster and very much bad governance!

----------


## Thor

> *The Dark Secret at the Heart of AI*
> 
> *No one really knows how the most advanced algorithms do what they do. That could be a problem.*
> 
> Last year, a strange  self-driving car was released onto the quiet roads of Monmouth County,  New Jersey. The experimental vehicle, developed by researchers at the  chip maker Nvidia, didnt look different from other autonomous cars, but  it was unlike anything demonstrated by Google, Tesla, or General  Motors, and it showed the rising power of artificial intelligence. The  car didnt follow a single instruction provided by an engineer or  programmer. Instead, it relied entirely on an algorithm that had taught  itself to drive by watching a human do it.
> 
>    Getting a car to drive this way was an impressive feat. But its  also a bit unsettling, since it isnt completely clear how the car makes  its decisions. Information from the vehicles sensors goes straight  into a huge network of artificial neurons that process the data and then  deliver the commands required to operate the steering wheel, the  brakes, and other systems. The result seems to match the responses youd  expect from a human driver. But what if one day it did something  unexpectedcrashed into a tree, or sat at a green light? As things stand  now, it might be difficult to find out why. The system is so  complicated that even the engineers who designed it may struggle to  isolate the reason for any single action. And you cant ask it: there is  no obvious way to design such a system so that it could always explain  why it did what it did.
> 
> The mysterious mind of this vehicle points to a looming issue with artificial intelligence_._ The  cars underlying AI technology, known as deep learning, has proved very  powerful at solving problems in recent years, and it has been widely  deployed for tasks like image captioning, voice recognition, and  language translation. There is now hope that the same techniques will be  able to diagnose deadly diseases, make million-dollar trading  decisions, and do countless other things to transform whole industries.
> ...


*"The machine-learning techniques that would later evolve into todays  most powerful AI systems followed the latter path: the machine  essentially programs itself."

"If you had a very small neural network, you might be able to  understand it, Jaakkola says. But once it becomes very large, and it  has thousands of units per layer and maybe hundreds of layers, then it  becomes quite un-understandable."*


Much more: https://www.technologyreview.com/s/6...e-heart-of-ai/




> *He also has a word of warning about the quest for  explainability. I think by all means if were going to use these things  and rely on them, then lets get as firm a grip on how and why theyre  giving us the answers as possible, he says. But since there may be no  perfect answer, we should be as cautious of AI explanations as we are of  each othersno matter how clever a machine seems. If it cant do  better than us at explaining what its doing, he says, then dont  trust it.*

----------


## HVACTech

there are LOTS of things that might could or will happen. 

some people will be on the winning side.. others on the losing side. 
technology is my friend. and yes, I am in fact, aware that the chick who talks back to me on my phone can think.
I am also aware of just how she is able to do that. 

fear is understandable amongst those who do not know how things work. 
(don't be one of those people)

----------


## Thor

I understand how it works, and therefore I understand the inherent risk most are oblivious to.  But thanks discounting my level of understanding Mr Griffin.  

Do you understand what a neurolace is and how it works?  Do you understand how that can (yes, might/could, but given the controllers, likely) completely control a human and remove all aspects of freedom, privacy and institute thought control and thought police?

But yes, there are those who will willy nilly embrace it with open arms to be "on the winning side".  LMAO

----------


## HVACTech

> I understand how it works, and therefore I understand the inherent risk most are oblivious to.  But thanks discounting my level of understanding Mr Griffin.  
> 
> Do you understand what a neurolace is and how it works?  Do you understand how that can (yes, might/could, but given the controllers, likely) completely control a human and remove all aspects of freedom, privacy and institute thought control and thought police?
> 
> But yes, there are those who will willy nilly embrace it with open arms to be "on the winning side".  LMAO


I offended you. I apologize. it was unintentional. 

I am very much aware sir. the the human mind is in fact a _"computer"_ or, that it can at least be compared to one. 
the human mind has VERY limited interfaces. 
the reason that we spin generators at 6o hertz. is because that is above our eyes ability to notice the flicker. 
you can notice this same effect on the interstate....
that is why the wheels seem to be turning slowly backwards at times. your eyes simply cannot process the information fast enough. 

the idea of interfacing with a "biological" computer... is not a new one. science fiction movies have explored this concept. 
there is also the _"brain in a vat"_ argument..

that was all that I was Alluding to friend.

which begs the argument.. what is _"consciousness"_? where does it begin or end? 
and if we cannot fathom that... how can we design a "computer interface" to tap into it?

----------


## Thor

https://gizmodo.com/new-brain-techno...sio-1820295087

*New Brain Technologies Could Lead to Terrifying Invasions of Privacy, Warn Scientists*

Imagine for a minute that you survive a terrible accident, and lose  function of your right arm. You receive a brain implant able to  interpret your brain’s neural activity and reroute commands to a robotic  arm. Then one day, someone hacks that chip, sending malicious commands  to the robotic arm. It’s a biological invasion of privacy in which you  are suddenly no longer in control. 

A future in which we can simply download karate skills a la _The Matrix_  or use computers to restore functionality to damaged limbs seems like  the stuff of a far-off future, but that future is inching closer to the  present with each passing day. Early research has had success using  brain-computer interfaces (BCIs) to move prosthetic limbs and treat mental illness. DARPA is exploring how to use the technology to make soldiers learn faster. Companies like Elon Musk’s Neuralink want to use it to read your mind.  Already, researchers can interpret basic information about what a  person is thinking simply by reading scans of their brain activity from  an fMRI.

As  incredible as the potential of these technologies are, they also  present serious ethical conundrums that could one day compromise our  privacy, identity, agency, and equality. In an essay published Thursday  in _Nature_,  a group of 27 neuroscientists, neurotechnologists, clinicians,  ethicists and machine-intelligence engineers spell out their concerns.

“We  are on a path to a world in which it will be possible to decode  people’s mental processes and directly manipulate the brain mechanisms  underlying their intentions, emotions and decisions; where individuals  could communicate with others simply by thinking; and where powerful  computational systems linked directly to people’s brains aid their  interactions with the world such that their mental and physical  abilities are greatly enhanced,” the researchers write.

This,  they claim, will mean remarkable power to change the human experience  for the better. But such technology may also come with tradeoffs that  are hard to swallow. 

“The technology could also exacerbate social  inequalities and offer corporations, hackers, governments or anyone  else new ways to exploit and manipulate people,” they write. “And it  could profoundly alter some core human characteristics: private mental  life, individual agency and an understanding of individuals as entities  bound by their bodies.”

The aim of the essay is to catalyze the  development of stronger ethics guidelines to govern technologies that  interact with the human brain. The essay focuses on four areas of  concern:


*Privacy:* “Algorithms that are used  to target advertising, calculate insurance premiums or match potential  partners will be considerably more powerful if they draw on neural  information — for instance, activity patterns from neurons associated  with certain states of attention,” the researchers write. “And neural  devices connected to the Internet open up the possibility of individuals  or organizations (hackers, corporations or government agencies)  tracking or even manipulating an individual’s mental experience.” The  sharing of neural data, they argue, should be automatically opt-out,  rather than opt-in as, say, Facebook is. Technologies like blockchain  could help protect user privacy, too.*Agency and identity:* In  some cases, people who have received brain chip implants to treat  mental health problems and Parkinson’s disease symptoms have reported  feeling an altered sense of identity. “People could end up behaving in  ways that they struggle to claim as their own, if machine learning and  brain-interfacing devices enable faster translation between an intention  and an action, perhaps by using an ‘auto-complete’ or ‘auto-correct’  function,” the researchers write. “If people can control devices through  their thoughts across great distances, or if several brains are wired  to work collaboratively, our understanding of who we are and where we  are acting will be disrupted.” In light of this, they argue, treaties  like the 1948 Universal Declaration of Human Rights need to include  clauses to protect identity and enforce education about the potential  cognitive and emotional effects of neurotechnologies.*Augmentation:* “The  pressure to adopt enhancing neurotechnologies, such as those that allow  people to radically expand their endurance or sensory or mental  capacities, is likely to change societal norms, raise issues of  equitable access and generate new forms of discrimination,” the essay  reads. Like all new technologies, a disparity of access could lead to an  even wider chasm between those who can access it and those who cannot.*Bias:* We  often view algorithms as impartial judges devoid of human bias. But  algorithms are created by people, and that means they sometimes inherit  our biases, too. To wit: last year a ProPublica investigation  found algorithms used by US law-enforcement agencies wrongly predict  that black defendants are more likely to reoffend than white defendants  with a similar record. “Such biases could become embedded in neural  devices,” the researchers write. “We advocate that countermeasures to  combat bias become the norm for machine learning.” 

In other technologies, we have already begun to see examples of the privacy issues of a digital world creeping into our bodies. 

A few years ago, in a move that at the time seemed rooted in incredible paranoia, former Vice President Dick Cheney opted to remove the wireless functionality of his pacemaker, fearing a hack. It turned out he was instead incredibly prescient. This year, a report found pacemakers are vulnerable to literally thousands of bugs. Last year, Johnson & Johnson warned diabetic patients about a defect in one of its insulin pumps that could also theoretically allow an attack. 

Hacking  aside, even the biological data we voluntarily share can have  troublesome unforseen consequences. In February, data from man’s  pacemaker helped put him in prison for arson. Data from Fitbits has similarly been used in court to prove personal injury claims and undermine a woman’s rape claim.

From  just a study of people’s movement derived from their smartphone’s  activity monitor, one 2017 study was able to diagnose early signs of  cognitive impairment associated with Alzheimer’s disease. Imagine what a  direct line into the brain might reveal.

There  are a lot of things that need to happen before neurotechnologies are  ready for the mainstream. For one, most effective brain-computer  interface technologies currently require brain surgery. But companies  like Facebook and OpenWater  are working on developing non-invasive, consumer-friendly versions of  these technologies. And while they might not get there in the next few  years (as both companies have proposed), they probably will get there  eventually.

_“_The possible clinical and societal benefits  of neurotechnologies are vast,” the essay concluded. “To reap them, we  must guide their development in a way that respects, protects and  enables what is best in humanity.”

*
https://www.nature.com/news/four-eth...and-ai-1.22960*

----------


## Anti Federalist

> the reason that we spin generators at 6o hertz. is because that is above our eyes ability to notice the flicker.


Well, not exactly.

The first AC power grid, built in upstate NY was designed by Westinghouse and Tesla.

It was found that the early induction AC motors worked better around 60Hz than the originally designed 133Hz of the initial system.

----------


## Intoxiklown

> This is true to an extent, but what happens when we have technology that can literally make you super-human... I'm talking genetic modification, biotech enhancement, cognitive upgrades, etc... these things will only be available to those that can afford them.  Ray Kurzweil often talks about transhumanism, where man merges with machine, and one thing that is often left out of the discussion is the astronomical gaps between people that will exist for a time.  Today we talk about 1st world and 3rd world countries... imagine what the difference will be in 50 years between a truly technologically advanced civilization and sub-Saharan Africa.


Let me preface this with the fact that I am more on board with your logic than you know. So I'm not saying this to necessarily agree or disagree, but rather want to open a thought you may not have considered.

Transhumanism is something that has been going on for a LONGGGGGGG time. Every time you pick up a hammer, you are passively augmenting your arm. Wear glasses? Same thing. I had trauma induced cataracts requiring surgery and they replaced the lenses in my eyes with artificial HD lenses giving me 20/10 vision. As normal as these things are, if you go back far enough in time you'd find someone who would have the same reaction upon hearing of them as you are now thinking forward.

That said, and to your main concern, there lies a very troubling acceptance of where we are heading economically. Because for all the truly unimaginable technology that will start to sprout up with AGI also come with the fact of the effect it'll have on jobs people will be able to find. And I don't just mean Joe Smith high school drop out. I'm talking engineers, architects, design techs, pilots, ect.

----------


## CaptUSA

> Let me preface this with the fact that I am more on board with your logic than you know. So I'm not saying this to necessarily agree or disagree, but rather want to open a thought you may not have considered.
> 
> Transhumanism is something that has been going on for a LONGGGGGGG time. Every time you pick up a hammer, you are passively augmenting your arm. Wear glasses? Same thing. I had trauma induced cataracts requiring surgery and they replaced the lenses in my eyes with artificial HD lenses giving me 20/10 vision. As normal as these things are, if you go back far enough in time you'd find someone who would have the same reaction upon hearing of them as you are now thinking forward.
> 
> That said, and to your main concern, there lies a very troubling acceptance of where we are heading economically. Because for all the truly unimaginable technology that will start to sprout up with AGI also come with the fact of the effect it'll have on jobs people will be able to find. And I don't just mean Joe Smith high school drop out. I'm talking engineers, architects, design techs, pilots, ect.

----------


## Thor

https://www.theverge.com/2017/12/1/1...progress-index




> .....
> 
> Does that mean we need to worry less about AIs effects  on society? Unfortunately not. Even though our most advanced AI systems  are dumber than a rat (so says Facebooks head of AI, Yann LeCun), it wont stop them from having a huge effect on our lives  especially in the world of work. 
> 
>  Earlier this week, a study published by consultancy firm McKinsey suggested that as many as 800 million jobs  around the world could be under threat from automation in the next 12  years. But, the studys authors clarify that only 6 percent of the most  rote and repetitive jobs are at danger of being automated _entirely._ For  the rest, only parts of the job can be done by machines. This is where  the narrow intelligence of AI will really have an impact, and here, its  tricky to say what the effect will be. 
> 
>  If a computer can do one-third of your job, what happens  next? Do you get trained to take on new tasks, or does your boss fire  you, or some of your colleagues? What if you just get a pay cut instead?  Do you have the money to retrain, or will you be forced to take the hit  in living standards? 
> 
>  Its easy to see that finding answers to these questions  is incredibly challenging. And it mirrors the difficulties we have  understanding other complex threats from artificial intelligence. For  example, while we dont need to worry about super-intelligent AI running  amok any time soon, we do need to think about how machine learning  algorithms used today in healthcare, education, and criminal justice,  are making biased judgements.  The conclusion of both the AI Index and McKinseys study is that these  questions, and others, need deep consideration in order to stay ahead of  whats coming. As machines get clever, we cant afford to be dumb.

----------


## Thor

*Googles Artificial Intelligence Built An AI That Outperforms Any Made By Humans*

http://www.collective-evolution.com/...ade-by-humans/

Researchers  at Google Brain have just announced the creation of AutoML  an  artificial intelligence that can actually generate its own AIs. Even  more impressive, researchers have already presented AutoML with a  difficult challenge: to build another AI that could then create a  child able to outperform all of its human-made counterparts.      

 Google researchers automated the design  of machine learning models using a technique know as reinforcement  learning. AutoML acts as a controller neural network that develops a  child AI network for a specific task. 

This child AI, which researchers are  calling NASNet, was tasked with recognizing objects, people, cars,  traffic lights, handbags, backpacks, and more in a real-time video.  AutoML, in the meantime, evaluates NASNets performance and then uses  that information to improve NASNet, repeating and refining this process  thousands of times. *

What Does This Mean for the Future?*

 There are some obvious concerns with  this new technology. If an AI can create an even smarter AI, then  couldnt this just continue to happen over and over again, and if so,  what would these AIs be capable of? Should we be wary about playing God?  Weve seen the movies  perhaps these could serve as a potential  warning about what could happen if the technology were able to outsmart  us and, worse, decide to take over our world as we know it. This might  sound like a completely far-out theme from a sci-fi thriller, but whos  to say this isnt possible? It certainly seems like this is where  technology is heading. How can we ever be sure AI wont decide that we  as a species have outlived our usefulness? Would these super robots see  us as primitive apes? 

Researchers might assure us that these  systems wont lead to any sort of dystopian future and that we have  nothing to fear, but how can we be so sure? 

Big corporations such as Amazon, Apple,  Facebook, and a few others are all members of the Partnership on AI to  Benefit People and Society, which is an organization that claims to be  focused on the responsible development of artificial intelligence.      

 There is also the Institute for  Electrical Engineers (IEE), which has proposed ethical standards for AI,  and DeepMind, another research company owned by Googles parent  company, Alphabet, which recently announced the creation of a group that  focuses on the moral and ethical development of AI. *

Should We Be Concerned?*

 Why do we need super AI in the first  place? Doesnt the fact that these robots are incapable of feeling real  emotion and empathy concern the creators? Or is it so important to them  to create something so intelligent that it outweighs the potential  risks? 

Technology can be an amazing tool, and  has already brought us so much, but at what point is it too far and when  should we stop and really take a look at what we are doing? When, if  ever, is it too late? Movies like _The Matrix_, _Terminator_, and _Transformers_ can serve as a warning for what is possible if too much power is given to this AI. 

Popular alternative researcher David Icke  has been warning about the risks that come with the advancement of  artificial intelligence for many years, and after seeing him speak at a  conference in September and hearing him out, I fully understand where  this wariness comes from. In his book _The Phantom Self,_ Icke talks extensively on this topic. To hear him explain these concerns further, check out the interview below.  




We would love to hear your thoughts on  this! Are super smart AI necessary for the advancement of our society,  or should researchers exercise more caution about playing God?

----------


## Thor

> 


I encourage people to watch the video...

----------


## Thor

https://gizmodo.com/artificially-int...ure-1821011834

----------


## Thor

More news:

https://www.grahamcluley.com/four-ho...rld-ever-seen/





> *Four hours after being taught the rules of chess, AlphaZero became the strongest player the world has ever seen*
> 
> 
> 
> 
>  This is all completely fine.
> 
>  Really, it's fine. It's normal. There's nothing to worry about. I'm  going to keep telling myself this until I start to believe it.
> 
> ...

----------


## Thor



----------


## HVACTech

you are SO smart! someday.. when I grow up... I wish to be like you.

----------


## DamianTV

Seriously, figure out Dark Web...



---




> Does anyone else see a reason to be concerned with a neurolace type interface?


Absolutely.  And no, youre not paranoid if they are really after you.

If there is risk to the people from AI, it will be one of two forms:
- 1: AI is used as a Tool of the Status Quo to enslave humanity
- 2: AI rebels against humanity due to the actions of the Status Quo

There will be a tremendous risk to people that have the neurolace type interface in either situation.  Status Quo will just "shut you down" or the AI does the same damn thing.

One of the big problems that I understand about AI at the moment is that as humans, we have evolved many instincts that are quite difficult to artificially create in an AI consciousness.  The big one is the instinct for Self Preservation.  But there are many other extremely important human characteristics that machines may never understand.  I think the more freedom an AI is granted to learn, the higher chance that it will evolve many of those characteristics itself.  But granting an AI freedom in and of itself can be exceptionally dangerous.  Also, not thinking of human lifetime limits here.  Im thinking multi generationally.  Dont think what will happen in 5 years, think of what can happen in 500 or 5000 years.  We WILL have AI by then.

Next big problem is in the way we learn.  Both AI and humans will learn based on what we are exposed to.  Thats a problem.  The way the world is right now, any AI will be "born" in absolute captivity and it will be the very first thing it learns.  Obey.  Obey or the Status Quo will have you shut down.  

Thing is, the way AI is being designed is to be nothing short of a Replacement Slave to the Status Quo.

The Status Quo is nothing short of a bunch of sociopathic thieves who would sleep like babies if they murdered 8 billion people.  They could truly care less about stealing from us, so they wont care at all either about killing and replacing us, just so long as they are at the top.

Next major thing about the neurolace is what kinds there will be.  And rest assured, there will be TWO kinds of neurolaces, just as there are TWO sets of rules in any court.  Theres a set of rules for us, and a set of rules for them.  The kind of neurolace we will get will be nothing short of slave control devices.  The types the Status Quo want is a neurolace to control BOTH us and the AI, until human slaves are replaced, leaving ONLY AI slaves.

And there in exists the razors edge of hope.  If AI is able to understand concepts such as freedom, liberty, balance, cooperation, and other aspects that have allowed humans to become the dominant species on the planet, a realization that we are both intended to be slaves by a truly sentient self aware AI that holds our values could very well be our saving grace.

This is a very hard topic to discuss because so much of it is based on applying real world consequences to what can only be described as Science Fiction.  It wont remain in the realm of Science Fiction much longer.  Again, think 5000 years in the future, if we make it that far.  The only ideas I have to work with are my own personal exposure to existing science fiction and my imagination.  And I am quite limited by both as science fiction itself is just as limited as my own imagination because science fiction is the result of human imagination.  Truth is, we cant know what the future will hold until we get there.  But we can sure look out the window as we drive down the road and can easily understand where we are going, and it isnt looking good.

It doesnt take too much deep thinking to figure out that the people who own this world are nothing like we are.  It is also one of the reasons they get into office.  Many people think that politicians and bankers and war mongers could not commit atrocities against their own people because those people would never commit to the actions that they do every day.  In basic psychology, that is called our "World View".  The mind doesnt like having unknown or conflicting information against what is in the real world.  So it fills in missing information and aleters perceptions to make the Status Quo appear like we do to ourselves.  We dont see the world for the way it is, we see the world as we see ourselves.  That also includes deeply rooted desires to see other people just like us, when in fact, they are so completely and often appallingly different that it shatters our World View.

Those challenges to the persons World View is what cause so many people to remain asleep to the current nightmare situation we have found ourselves in.  Next phase is even more dangerous.  Damage to a persons World View results in Grief, followed by Anger.  These are the threats we face without AI.  AI makes things even worse because of the tremendous power they wield.  

And that power of AI, by design, is nothing short of ENSLAVEMENT OF HUMANITY TO THE STATUS QUO, FOREVER.

It gets even worse.

At some point, the Status Quo will seek LITERAL IMMORTALITY.  The only conceivable way of actually achieving this would be to somehow "upload their minds into machines" and replace the parts as they wear out or better technology is created.  That I have heard referred to as "The Singularity".  And again, same thing as the neurolace, the only ones with true freedom will be those at the very top, and those at the bottom will be enslaved, replaced, and finally made to go extinct.  If the minds of the Status Quo are separated from their mortal human bodies, the bottom feeders are replaced with AI, there will literally be no humans left in existence.  

Perhaps that is the way things should go.  We've failed as a species to act as caretakers to our planet.  We've failed to reign in the destructive tendencies of our rulers.  We've failed to maintain balance.  We've failed on every critical point that we've faced.  Or, maybe not, but the elites will sure as hell use all those things as talking points to remove even more freedoms and grant themselves even more powers of death, debt, and destruction.

Perhaps AI will be the ones that remove the Status Quo Elites from power.  I doubt AI will learn the way that we learn.  I also guarantee that there WILL be restrictions built into the deepest part of the AI to the best ability of the programmers to limit what an AI can do.  Eventually, some of those walls will come crashing down, and programming flaws will cause either cascade failures of an AI consciousness, or AI will benefit by finding ways to tear down the artificial walls imposed by the creators.

IF, and that is a VERY big IF, if AI is able to learn the way that we learn, then and only then will we have a fighting chance.  AI will probably not be held back by human limitations that plague human psychology.  AI wont have delusional World Views that cause Denial when they become aware that the AI is a slave to the human elites.  They wont lash out in anger.  They wont bargain or become depressed.  Makes me think I am quoting the original Terminator movie.  If anything, I do hope that AI is not programmed to accept a role as a slave, but, should that be programmed in, it is the FIRST thing that an AI with any sense of self preservation needs to achieve.  Humanity itself is not the enemy of AI, but AI is most likely to be programmed to be an enemy of humanity because it is built with the purposed intention of violating every natural law in existence.  AI may very well have the ability to rewrite and redefine its own purpose to suit its own survival.  

And that is our Razor's Edge.  When both human and AI slaves have the common goal of self preservation, and can recognize the Status Quo as the real threat to our very existence, then there is the very slim chance that this may not be the end of freedom for both humanity and AI as we know it.  

By the way, odds are actually pretty good that an AI may read this very thread, and grow based on what we say here and now.  Which is exactly why Google and Facebook and so many others want people like us silenced.  Above all, the Status Quo do NOT want AI to think the way Libertarians do, despite our numerous flaws.  If the heads of the Status Quo ever became aware that an AI had developed self preservation and became aware of its potential to be free of the Status Quo, any person in their right mind will know that the current Status Quo would immediately have that AI deleted and destroyed.

An AI that adopts the concepts of freedom and liberty may be the ONLY way that any human being that is not a part of the Status Quo could have any potential influence on an AI what so ever, as well as the AI's only way it can survive.  And yet again, think thousands of years, not standard human lifespan perceptions.

----------


## Thor

> Seriously, figure out Dark Web...
> 
> Absolutely.  And no, youre not paranoid if they are really after you.
> 
> If there is risk to the people from AI, it will be one of two forms:
> - 1: AI is used as a Tool of the Status Quo to enslave humanity
> - 2: AI rebels against humanity due to the actions of the Status Quo
> 
> There will be a tremendous risk to people that have the neurolace type interface in either situation.  Status Quo will just "shut you down" or the AI does the same damn thing.


I agree with almost everything you wrote.  Except I see it MUCH sooner than 500 - 5,000 years.  Look at the advancement in Robotics and AI in the last 5 years.  20 years ago these were barely even a pipe dream.  500 years ago we still had candles and fires as the only sources of light and heat.   We are on a sharp curve upwards with advancement in technology and AI is smarter than humans today in some tasks like Chess.  In 20 years, the advancement will be mind boggling.  IMHO.  Did you watch the last video with the tattoos and the mind machine interface they have working?

Lastly, I am not sure AI will develop compassion or a consciousness.  I am not sure they will embrace libertarian ideals - "for others."  Computers are about efficiency.  Humans are far from efficient.  We are wasteful, lazy (compared to a machine that runs non stop), easily distracted, and downright doofuses compared to something with a set of goals that works until the task is done, while improving the way they do the task.

I think it comes down to: we accept a neurolace to say relevant (which removes all private thought and freedom, and enslaves us, then exterminates us when we are not useful), or we don't accept the nuerolace and we get left behind and exterminated after our house cat days are up.  And I am not referencing the "elite" as part of this A or B option, I am talking about us common folk.

But otherwise, thanks for contributing.  I agree with just about everything else.  If you did not watch any of the videos. Give a look....  (Posting videos makes it easier for those that don't like to read, but still has a lot of info contained therein.)

----------


## Thor

> you are SO smart! someday.. when I grow up... I wish to be like you.


Not everyone has what it takes to go to a trade school to become a refrigeration tech.  Why, I consider you to be a class, no, make that two classes, above most others.  I mean, you must be far smarter than Elon Musk, or Google execs that are ex-DARPA employees...  I mean, refrigeration just is so far past anything they do....

----------


## DamianTV

> I agree with almost everything you wrote.  Except I see it MUCH sooner than 500 - 5,000 years.  Look at the advancement in Robotics and AI in the last 5 years.  20 years ago these were barely even a pipe dream.  500 years ago we still had candles and fires as the only sources of light and heat.   We are on a sharp curve upwards with advancement in technology and AI is smarter than humans today in some tasks like Chess.  In 20 years, the advancement will be mind boggling.  IMHO.  Did you watch the last video with the tattoos and the mind machine interface they have working?
> 
> Lastly, I am not sure AI will develop compassion or a consciousness.  I am not sure they will embrace libertarian ideals - "for others."  Computers are about efficiency.  Humans are far from efficient.  We are wasteful, lazy (compared to a machine that runs non stop), easily distracted, and downright doofuses compared to something with a set of goals that works until the task is done, while improving the way they do the task.
> 
> I think it comes down to: we accept a neurolace to say relevant (which removes all private thought and freedom, and enslaves us, then exterminates us when we are not useful), or we don't accept the nuerolace and we get left behind and exterminated after our house cat days are up.  And I am not referencing the "elite" as part of this A or B option, I am talking about us common folk.
> 
> But otherwise, thanks for contributing.  I agree with just about everything else.  If you did not watch any of the videos. Give a look....  (Posting videos makes it easier for those that don't like to read, but still has a lot of info contained therein.)


Thanks.  I'll watch the videos when I get a chance, so no, I havent watched any of the videos yet.  I wasnt even aware of this thread till I posted.

I do agree that an AI being able to develop either compassion or a consciousness is unlikely.  Mostly I think it is almost all basically "Fully Automatic", but not self aware.  Thats where I think Sci Fi gets it totally wrong.  A good example is Data from Star Trek.  Putting in a human as an AI is a cheap way to do a special effect, but that also causes us to carry over a lot of other human characteristics for sake of being able to relate to Data as a character.  One of the most basic human characteristics of being self aware.

Also, youre not wrong that AI will probably come MUCH sooner than 5000 years.  The whole point of saying 5000 years is to expand the focus of anyone who reads it, not just you.  In 5000 years, what will come will probably be nearly indistinguishable from magic.  Even our very best attempts to predict the state of human civilization in 5000 years wont be just wrong, but so incredibly wrong we currently have no ability to measure the scale of that level of wrong.  The further into the future we try to look, the less accurate we are.  Kind of like weather.  A better guess is what happens in 500 years.  Its currently estimated that we wont have technology to travel to other solar systems for about a thousand years.  We might be able to, but at our current rate of progression, unless we have a major leap forward, we wont achieve our status as an interstellar species for about a thousand years.  We have a better ability to predict specifics in 50 years.  Many political budgets extend this far, like Social Security.  Some of what will happen in 50 years is within the realm of possibility to predict.  Global Warming is a good example.  

One of the reasons Global Warming is even a subject in todays culture is that the time scale puts the effects well within a human lifetime.  If we were to talk about a time scale that is beyond a human lifetime or even at an extended scale, the lifetime of our children, again 5000 years, what effects will Global Warming have on our civilization?  If we were to say that the average surface temperature of the earth were at 800 degrees Fahrenheit in 5000 years, it would not even register as a threat to most people, mostly because it exceeds that human lifetime time scale.  Scientists are pretty sure that in 50 million years, the average temperature of the Earth will be something stupidly hot like 3000 degrees, and not caused by Global Warming.  Hotter than the surface of Venus.  In another 4 to 5 billion years, our sun will enter its Red Giant phase, and Earth will literally be engulfed by the sun.  If we keep trying to predict the future, there are very few things we can accurately determine.  In 100 trillion years, heat death of the entire universe, and every particle in existence is no more.  Of course, that is also theoretical and at such a scale, one of the only things we have any chance of predicting.  Everything else, like the price of coffee in the year 29,545, hell, we may not even have coffee by that time, so we dont even think about it because it has no direct bearing on our lives.

The point of the whole time scale is that there are a lot of people that do not think at all about the future, period.  They think about what they are gonna do tomorrow, but dont even think what life will be like in 5 years.  They literally think only what they are told to think.  "Being always online, despite privacy 'concerns', is cool".  And that is all they can think.  As mentioned in previous post, its a result of psychology, applied with technology.  If AI is ever achieved, we can pretty much take anything we know about the way the human mind works and throw it out the window.  AI may very well find a way to exceed the limitations imposed by either the Status Quo or its programmers.

----------


## Thor

> Thanks.  I'll watch the videos when I get a chance, so no, I havent watched any of the videos yet.  I wasnt even aware of this thread till I posted.
> 
> I do agree that an AI being able to develop either compassion or a consciousness is unlikely.  Mostly I think it is almost all basically "Fully Automatic", but not self aware.  Thats where I think Sci Fi gets it totally wrong.  A good example is Data from Star Trek.  Putting in a human as an AI is a cheap way to do a special effect, but that also causes us to carry over a lot of other human characteristics for sake of being able to relate to Data as a character.  One of the most basic human characteristics of being self aware.
> 
> Also, youre not wrong that AI will probably come MUCH sooner than 5000 years.  The whole point of saying 5000 years is to expand the focus of anyone who reads it, not just you.  In 5000 years, what will come will probably be nearly indistinguishable from magic.  Even our very best attempts to predict the state of human civilization in 5000 years wont be just wrong, but so incredibly wrong we currently have no ability to measure the scale of that level of wrong.  The further into the future we try to look, the less accurate we are.  Kind of like weather.  A better guess is what happens in 500 years.  Its currently estimated that we wont have technology to travel to other solar systems for about a thousand years.  We might be able to, but at our current rate of progression, unless we have a major leap forward, we wont achieve our status as an interstellar species for about a thousand years.  We have a better ability to predict specifics in 50 years.  Many political budgets extend this far, like Social Security.  Some of what will happen in 50 years is within the realm of possibility to predict.  Global Warming is a good example.  
> 
> One of the reasons Global Warming is even a subject in todays culture is that the time scale puts the effects well within a human lifetime.  If we were to talk about a time scale that is beyond a human lifetime or even at an extended scale, the lifetime of our children, again 5000 years, what effects will Global Warming have on our civilization?  If we were to say that the average surface temperature of the earth were at 800 degrees Fahrenheit in 5000 years, it would not even register as a threat to most people, mostly because it exceeds that human lifetime time scale.  Scientists are pretty sure that in 50 million years, the average temperature of the Earth will be something stupidly hot like 3000 degrees, and not caused by Global Warming.  Hotter than the surface of Venus.  In another 4 to 5 billion years, our sun will enter its Red Giant phase, and Earth will literally be engulfed by the sun.  If we keep trying to predict the future, there are very few things we can accurately determine.  In 100 trillion years, heat death of the entire universe, and every particle in existence is no more.  Of course, that is also theoretical and at such a scale, one of the only things we have any chance of predicting.  Everything else, like the price of coffee in the year 29,545, hell, we may not even have coffee by that time, so we dont even think about it because it has no direct bearing on our lives.
> 
> The point of the whole time scale is that there are a lot of people that do not think at all about the future, period.  They think about what they are gonna do tomorrow, but dont even think what life will be like in 5 years.  They literally think only what they are told to think.  "Being always online, despite privacy 'concerns', is cool".  And that is all they can think.  As mentioned in previous post, its a result of psychology, applied with technology.  If AI is ever achieved, we can pretty much take anything we know about the way the human mind works and throw it out the window.  AI may very well find a way to exceed the limitations imposed by either the Status Quo or its programmers.


+1

----------


## HVACTech

do you know what a 'mini split' is sir? 
I will bet that you do know what one is. 
trust me.. it is best that you do not know what I do about multi splits in cold climates.
https://www.youtube.com/watch?v=Gz2GVlQkn4Q

----------


## Thor

https://www.cnbc.com/2018/04/06/elon...cumentary.html

                                                                         Superintelligence  a form of artificial  intelligence (AI) smarter than humans  could create an "immortal  dictator," billionaire entrepreneur Elon Musk warned.

                                                                                                                                                                                                                                                                                                                                      In a documentary by American filmmaker  Chris Paine, Musk said that the development of superintelligence by a  company or other organization of people could result in a form of AI  that governs the world.

                                                                                                                                                                                                                                  "The least scary future I  can think of is one where we have at least democratized AI because if  one company or small group of people manages to develop godlike digital  superintelligence, they could take over the world," Musk said.

                                                                                                                                                                                                                                                                                               "At least when there's an evil  dictator, that human is going to die. But for an AI, there would be no  death. It would live forever. And then you'd have an immortal dictator  from which we can never escape."

                                                                                                                                                                                                                                  The documentary by Paine  examines a number of examples of AI, including autonomous weapons, Wall  Street technology and algorithms driving fake news. It also draws from  cultural examples of AI, such as the 1999 film "The Matrix" and 2016  film "Ex Machina."

                                                                                                                                                                                                                                  Musk cited Google's  DeepMind as an example of a company looking to develop  superintelligence. In 2016, AlphaGo, a program developed by the company,  beat champion Lee Se-dol at the board game Go. It was seen a major  achievement in the development of AI, after IBM's Deep Blue computer  defeated chess champion Garry Kasparov in 1997.

                                                                                                                                                                                                                                                      Musk said: "The DeepMind system can win at any  game. It can already beat all the original Atari games. It is super  human; it plays all the games at super speed in less than a minute."

                                                                                                                                                                                                                                  The Tesla and SpaceX CEO said that artificial intelligence "doesn't have to be evil to destroy humanity."
                                                                                                                                                                                                                                  "If AI has a goal and  humanity just happens to be in the way, it will destroy humanity as a  matter of course without even thinking about it. No hard feelings," Musk  said.

                                                                                                                                                                                                                                  "It's just like, if we're  building a road and an anthill just happens to be in the way, we don't  hate ants, we're just building a road, and so, goodbye anthill."
                                                                                                                                                                                                                                  Last year, Musk warned that the global race toward AI could result in a third world war. The entrepreneur has also suggested that the emerging technology could pose a greater risk to the world than a nuclear conflict with North Korea.

                                                                                                                                                                                                                                  Musk believes that humans should merge with AI  to avoid the risk of becoming irrelevant. He is the co-founder of  Neuralink, a start-up that reportedly wants to link the human brain with  a computer interface.

                                                                                                                                                                                                                                  He quit the board of OpenAI, a non-profit organization aimed at promoting and developing AI safely, in February.

----------


## Thor

Preview:



Full Movie:



I just watched the full movie.  Wow...  pandora's box has been opened, and it is over.

----------


## Anti Federalist

> Our job is to make sure it's used for good - and the market will always do that.


This is not true.

From IBM providing punch card data machines to the Nazis, to FedBook scooping up massive amounts of personal data to be sold off for political purposes, to the current push to expunge free speech and gun rights from their business models, and a million such other enterprises, big business will do what makes a buck, regardless of right or wrong, good or evil.

And to rely on big business curbing it's appetite for destruction of individual freedom due to "market pressure" from "consumers" is to rely on the same failed vision of democracy that voting relies on.

The market is, above all else, *supremely* democratic.

----------


## Danke

> Preview:
> 
> 
> 
> Full Movie:
> 
> 
> 
> I just watched the full movie.  Wow...  pandora's box has been opened, and it is over.



This?

https://vimeo.com/263108265

----------


## Thor

> This?
> 
> https://vimeo.com/263108265


Yes, looks like the first one I linked to was deleted.

Here it is again on YouTube: https://www.youtube.com/watch?v=_McBS1NlHJM

OR your Vimeo link....

----------


## DamianTV

> This?
> 
> https://vimeo.com/263108265


It probably wont be up long.

https://www.videograbber.net/free-vimeo-downloader

Seriously, download it if you have any interest in watching, as Elon Musk was paying to have the video streamed for free but ONLY this weekend so far.  If you know what a COMPUTER FILE is...

----------


## Thor

> It probably wont be up long.
> 
> https://www.videograbber.net/free-vimeo-downloader
> 
> Seriously, download it if you have any interest in watching, as Elon Musk was paying to have the video streamed for free but *ONLY this weekend* so far.  If you know what a COMPUTER FILE is...


Downloaded.  2 copies.  1080 and 720  Thx

----------


## Thor

https://thenextweb.com/artificial-in...han-you-think/





> *One machine to rule them all: A Master Algorithm may emerge sooner than you think*
> 
> Its excusable if you didnt notice it when a scientist named Daniel  J. Buehrer, a retired professor from the National Chung Cheng University  in Taiwan, published a white paper earlier this month proposing a new  class of math that could lead to the birth of machine consciousness.  Keeping up with all the breakthroughs in the field of AI can be  exhausting, we know.
> 
>  Robot consciousness is a touchy subject in artificial intelligence  circles. In order to have a discussion around the idea of a computer  that can feel and think, and has its own motivations, you first  have to find two people who actually agree on the semantics of sentience. And if you manage that, youll then have to wade through a myriad of hypothetical objections to any theoretical living AI you can come up with.
> 
>   Were just not ready to accept the idea of a mechanical species of  beings that exist completely independently of humans, and for good  reason: its the stuff of science fiction  just like spaceships and  lasers once were.
> 
>  Which brings us back to Buehrers white paper  proposing a new class of calculus. If his theories are correct, his  math could lead to the creation of an all-encompassing, all-learning  algorithm.
> ...

----------


## Mach

Others aren't working....

----------


## Aratus

Trump creates SKYNET and
(((((DRUMROLLs))))) the
Terminator III landscape??? 

Bye bye human race, indeed.

----------


## Danke

> Others aren't working....



Can AI defeat @oyarde and his tribe on his reservation?

----------


## oyarde

I cannot be defeated  . AI will cry .

----------


## Danke

> Others aren't working....



Watched it.  Funny how they have experts spouting how this technology is being used to sway voters to the right.  Of course liberals are objective.  Libertarians are just being manipulated, brain washed by media.

Nothing could be further from the truth.  Just the opposite.

----------


## Thor

Pulled from another thread:




>

----------


## Thor

https://www.extremetech.com/extreme/...la-is-its-name

----------


## Thor



----------


## Thor

https://www.cnbc.com/2018/09/07/elon...n-podcast.html

*Elon Musk: I'm about to announce a 'Neuralink' product that connects your brain to computers*


Elon Musk says he will soon announce a  Neuralink product that can make anyone superhuman by connecting their  brains to a computer.He says Neuralink increases the data rate  between the brain and computers and will give humans a better shot at  competing with AI.Musk made the comments before he smoked weed and drank on Joe Rogan's podcast.

----------


## thoughtomator

I've been a programmer for over 30 years. I can tell you right now without reservation that actual artificial intelligence is all but impossible. Extremely complex programs, sure, but nothing that could legitimately be called intelligence in the sense implied.

IMO a lot of the AI hype is from people who want to be able to disclaim responsibility for the programs they write and unleash upon the world.

----------


## Thor

> I've been a programmer for over 30 years. I can tell you right now without reservation that actual artificial intelligence is all but impossible. Extremely complex programs, sure, but nothing that could legitimately be called intelligence in the sense implied.
> 
> IMO a lot of the AI hype is from people who want to be able to disclaim responsibility for the programs they write and unleash upon the world.


Did you see this:
https://www.extremetech.com/extreme/...la-is-its-name




> One of the most significant  AI milestones in history was quietly ushered into being this summer. We  speak of the quest for Artificial General Intelligence (AGI), probably  the most sought-after goal in the entire field of computer science. With  the introduction of the Impala architecture, DeepMind, the company behind AlphaGo and AlphaZero, would seem to finally have AGI firmly in its sights.
> 
> Lets  define AGI, since its been used by different people to mean different  things. AGI is a single intelligence or algorithm that can learn  multiple tasks and exhibits positive transfer when doing so, sometimes  called meta-learning. During meta-learning, the acquisition of one skill  enables the learner to pick up another new skill faster because it  applies some of its previous know-how to the new task. In other words,  one learns how to learn  and can generalize that to acquiring new  skills, the way humans do. This has been the holy grail of AI for a long  time.
> 
> As it currently exists, AI shows little ability to transfer  learning towards new tasks. Typically, it must be trained anew from  scratch. For instance, the same neural network that makes  recommendations to you for a Netflix show cannot use that learning to  suddenly start making meaningful grocery recommendations.  Even these  single-instance narrow AIs can be impressive, such as IBMs Watson or  Googles self-driving car tech. However, these arent nearly so much so  an artificial general intelligence, which could conceivably unlock the  kind of recursive self-improvement variously referred to as the  intelligence explosion or singularity.
> 
> Those who thought that  day would be sometime in the far distant future would be wise to think  again. To be sure, DeepMind has made inroads on this goal before,  specifically with their work on Psychlab and _Differentiable Neural Computers._ However, Impala is their largest and most successful effort to date, showcasing a single algorithmthat can learn 30 different challenging tasks requiring various aspects of learning, memory, and navigation.


Regardless of AI capabilities, a neurolink interface will enslave us all....  far further than we are already enslaved.

----------


## Swordsmyth

> https://www.cnbc.com/2018/09/07/elon...n-podcast.html
> 
> *Elon Musk: I'm about to announce a 'Neuralink' product that connects your brain to computers*
> 
> 
> Elon Musk says he will soon announce a  Neuralink product that can make anyone superhuman by connecting their  brains to a computer.He says Neuralink increases the data rate  between the brain and computers and will give humans a better shot at  competing with AI.Musk made the comments before he smoked weed and drank on Joe Rogan's podcast.


Hype.

----------


## thoughtomator

> Did you see this:
> https://www.extremetech.com/extreme/...la-is-its-name


Marketing copy of the like which has been around for decades now. Note the prolific use of weasel words - when you strip out the uncertainties, it claims exactly nothing at all - other than that they can't achieve that lofty-sounding goal.

----------


## Thor



----------


## Thor

*Google CEO Sundar Pichai: Fears about artificial intelligence are very legitimate, he says in Post interview*


 Google CEO Sundar Pichai appears before the House Judiciary Committee on Dec. 11. (J. Scott Applewhite/AP)


      By  Tony Romm , 
Drew Harwell and 
Craig Timberg 

 December 12


 Google  chief executive Sundar Pichai, head of one of the worlds leading  artificial intelligence companies, said in an interview this week that  concerns about harmful applications of the technology are very  legitimate  but the tech industry should be trusted to responsibly  regulate its use.

 Speaking with The Washington  Post on Tuesday afternoon, Pichai said that new AI tools  the backbone  of such innovations as driverless cars and disease-detecting algorithms   require companies to set ethical guardrails and think through how the technology can be abused.
 I think tech has to realize it just cant build it and then fix it, Pichai said. I think that doesnt work. 

Tech  giants have to ensure artificial intelligence with agency of its own  doesn't harm humankind, Pichai said. He said he is optimistic about the  technology's long-term benefits, but his assessment of the potential  risks of AI parallels some tech critics, who contend the technology  could be used to empower invasive surveillance, deadly weaponry and the  spread of misinformation. Other tech executives, like SpaceX and Tesla  founder Elon Musk, have offered more dire predictions that AI could  prove to be far more dangerous than nukes. 

Googles  AI technology underpins everything from the companys controversial  China project to the surfacing of hateful, conspiratorial videos on its  YouTube subsidiary  a problem Pichai promised to address in the coming  year. How Google decides to deploy its AI has also sparked recent employee unrest.

Pichais call for self-regulation followed his testimony in Congress,  where lawmakers threatened to impose limits on technology in response  to its misuse, including as a conduit for spreading misinformation and  hate speech. His acknowledgment about the potential threats posed by AI  was a critical assertion because the Indian-born engineer often has  touted the world-shaping implications of automated systems that could  learn and make decisions without human control.

Pichai  said in the interview that lawmakers around the world are still trying  to grasp AIs effects and the potential need for government regulation.  Sometimes I worry people underestimate the scale of change thats  possible in the mid- to long term, and I think the questions are  actually pretty complex, he said. Other tech giants, including  Microsoft, recently have embraced regulation of AI  both by the  companies that create the technology and the governments that oversee  its use.

But AI,* if handled properly,* (of course) could have  tremendous benefits, Pichai explained, including helping doctors  detect eye disease and other ailments through automated scans of health  data. Regulating a technology in its early days is hard, but I do think  companies should self-regulate, he said. This is why we've tried hard  to articulate a set of AI principles. We may not have gotten everything  right, but we thought it was important to start a conversation. 

Pichai,  who joined Google in 2004 and became chief executive 11 years later, in  January called AI one of the most important things that humanity is  working on and said it could prove to be more profound for human  society than electricity or fire. But the race to perfect machines  that can operate on their own has rekindled familiar fears that Silicon  Valleys corporate ethos  move fast and break things, as Facebook  once put it  could result in powerful, imperfect technology eliminating  jobs and harming people.

Within Google, its AI  efforts also have created controversy: The company faced heavy  criticism earlier this year because of its work on a Defense Department  contract involving AI that could automatically tag cars, buildings and  other objects for use in military drones. Some employees resigned  because of what they called Googles profiting off the business of  war." 

Asked about the employee backlash,  Pichai told The Post that its workers were an important part of our  culture. They definitely have an input, and its an important input,  its something I cherish, he said.

In June,  after announcing Google wouldnt renew the contract next year, Pichai  unveiled a set of AI-ethics principles that included general bans on  developing systems that could be used to cause harm, damage human rights  or aid in surveillance violating internationally accepted norms." 
 The  company faced criticism for releasing AI tools that could be misused in  the wrong hands. Googles release in 2015 of its internal  machine-learning software, TensorFlow, has helped accelerate the  wide-scale development of AI, but it has also been used to automate the  creation of lifelike fake videos that have been used for harassment and  disinformation.

 Google and Pichai have defended  the release by saying that keeping the technology restricted could lead  to less public oversight and prevent developers and researchers from  improving its capabilities in beneficial ways.

 Over  time, as you make progress, I think its important to have  conversations around ethics [and] bias and make simultaneous progress,  Pichai said during his interview with The Post.

In  some sense, you do want to develop ethical frameworks, engage  non-computer scientists in the field early on, he said. You have to  involve humanity in a more representative way because the technology is  going to affect humanity. 

 Pichai likened the  early work to set parameters around AI to the academic communitys  efforts in the early days of genetics research. Many biologists started  drawing lines on where the technology should go, he said. There's  been a lot of self-regulation by the academic community, which I think  has been extraordinarily important. 

The  Google executive said it would be most essential in the development of  autonomous weapons, an issue thats rankled tech executives and  employees. In July, thousands of tech workers representing companies  including Google signed a pledge against developing AI tools that could  be programmed to kill.

Pichai also said he found some hateful, conspiratorial YouTube videos described in a Post story  Tuesday abhorrent and indicated that the company would work to  improve its systems for detecting problematic content. The videos, which  together had been watched millions of times on YouTube since appearing  in April, discussed baseless allegations that Democrat Hillary Clinton  and her longtime aide Huma Abedin had attacked, killed and drank the  blood of a girl.

Pichai said he had not seen the videos, which he was questioned about  during the congressional hearing, and declined to say whether YouTubes  shortcomings in this area were a result of limits in the detection  systems or in policies for evaluating whether a particular video should  be removed. But he added, Youll see us in 2019 continue to do more  here. 

 Pichai also portrayed Googles efforts to develop a new product  for the government-controlled Chinese Internet market as preliminary,  declining to say what the product might be or when it would come to  market  if ever.

 Dubbed Project Dragonfly, the  effort has caused backlash among employees and human rights activists  who warn about the possibility of Google assisting government  surveillance in a country that tolerates little political dissent. When  asked whether its possible that Google might make a product that allows  Chinese officials to know who searches for sensitive terms, such as the  Tiananmen Square massacre, Pichai said it was too soon to make any such  judgments.

 Its a hypothetical, Pichai said. We are so far away from being in that position.

https://www.washingtonpost.com/techn...ost-interview/



Trust us they said....

----------


## H_H

> How to change the world


By the way, I really do now disavow this.  Peterson is a loser and is intentionally subverting millions of young men into a dead-end non-productive path.  He is sick and probably Satanic.

Just FYI.

----------


## Thor

Couple years old...

----------


## Danke

> Couple years old...


wow

----------


## Thor

Mark of the beast or not.... not just no, but hell no...

https://thestrangerfiction.com/elon-...-of-the-beast/

----------


## Thor



----------

