I think I have become a Luddite libertarian....

https://www.cnbc.com/2018/09/07/elon-musk-discusses-neurolink-on-joe-rogan-podcast.html

Elon Musk: I'm about to announce a 'Neuralink' product that connects your brain to computers


  • Elon Musk says he will soon announce a Neuralink product that can make anyone superhuman by connecting their brains to a computer.
  • He says Neuralink increases the data rate between the brain and computers and will give humans a better shot at competing with AI (a rough back-of-the-envelope comparison follows below).
  • Musk made the comments before he smoked weed and drank on Joe Rogan's podcast.
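For a rough sense of what the "data rate" claim means, here is a back-of-the-envelope comparison in Python. Every number in it is an illustrative assumption (typing speed, entropy of English, implant channel count), not a Neuralink specification.

```python
# Back-of-the-envelope comparison of human text output vs. a hypothetical
# high-bandwidth implant. All numbers are illustrative assumptions.

WPM = 60                # assumed typing speed, words per minute
CHARS_PER_WORD = 5      # common convention for one "word"
BITS_PER_CHAR = 1.5     # rough entropy estimate for English text

typing_bps = WPM * CHARS_PER_WORD * BITS_PER_CHAR / 60
print(f"Typing output: ~{typing_bps:.1f} bits/second")

# Suppose an implant streamed 1,000 channels at an effective 10 bits/s
# each (again, purely hypothetical numbers).
implant_bps = 1_000 * 10
print(f"Hypothetical implant: ~{implant_bps:,} bits/second")
print(f"Ratio: ~{implant_bps / typing_bps:,.0f}x")
```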
 
I've been a programmer for over 30 years. I can tell you right now without reservation that actual artificial intelligence is all but impossible. Extremely complex programs, sure, but nothing that could legitimately be called intelligence in the sense implied.

IMO a lot of the AI hype is from people who want to be able to disclaim responsibility for the programs they write and unleash upon the world.
 

Did you see this:
https://www.extremetech.com/extreme...l-intelligence-is-here-and-impala-is-its-name

One of the most significant AI milestones in history was quietly ushered into being this summer. We speak of the quest for Artificial General Intelligence (AGI), probably the most sought-after goal in the entire field of computer science. With the introduction of the Impala architecture, DeepMind, the company behind AlphaGo and AlphaZero, would seem to finally have AGI firmly in its sights.

Let’s define AGI, since it’s been used by different people to mean different things. AGI is a single intelligence or algorithm that can learn multiple tasks and exhibits positive transfer when doing so, sometimes called meta-learning. During meta-learning, the acquisition of one skill enables the learner to pick up another new skill faster because it applies some of its previous “know-how” to the new task. In other words, one learns how to learn — and can generalize that to acquiring new skills, the way humans do. This has been the holy grail of AI for a long time.

As it currently exists, AI shows little ability to transfer learning to new tasks. Typically, it must be trained anew from scratch. For instance, the same neural network that makes recommendations to you for a Netflix show cannot use that learning to suddenly start making meaningful grocery recommendations. Even these single-instance “narrow” AIs can be impressive, such as IBM’s Watson or Google’s self-driving car tech. However, they are nowhere near an artificial general intelligence, which could conceivably unlock the kind of recursive self-improvement variously referred to as the “intelligence explosion” or “singularity.”
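To make the “positive transfer” idea in the two paragraphs above concrete, here is a toy Python sketch; the tasks, model, and numbers are all invented for illustration and have nothing to do with DeepMind's systems. A linear model is trained on one task, then a related task is learned both from scratch and starting from the first task's weights; the warm start converges in noticeably fewer steps.

```python
# Toy illustration of positive transfer: weights learned on task A make
# a related task B faster to learn than training from scratch.
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """A linear-regression task: y = X @ true_w plus a little noise."""
    X = rng.normal(size=(200, 3))
    y = X @ true_w + 0.01 * rng.normal(size=200)
    return X, y

def train(X, y, w, lr=0.1, tol=1e-3):
    """Plain gradient descent; returns final weights and steps used."""
    for step in range(1, 10_000):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
        if np.mean((X @ w - y) ** 2) < tol:
            break
    return w, step

task_a = make_task(np.array([1.0, 2.0, 3.0]))
task_b = make_task(np.array([1.1, 2.1, 2.9]))   # a related task

w_a, _ = train(*task_a, w=np.zeros(3))
_, steps_cold = train(*task_b, w=np.zeros(3))   # from scratch
_, steps_warm = train(*task_b, w=w_a)           # transfer from task A

print(f"Task B from scratch: {steps_cold} steps; warm-started: {steps_warm}")
```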

Those who thought that day would be sometime in the far distant future would be wise to think again. To be sure, DeepMind has made inroads on this goal before, specifically with their work on Psychlab and Differentiable Neural Computers. However, Impala is their largest and most successful effort to date, showcasing a single algorithm that can learn 30 different challenging tasks requiring various aspects of learning, memory, and navigation.
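For intuition about the setup the article describes, here is a stdlib-only Python toy of the decoupled actor/learner pattern IMPALA is built on: several actors generate experience on different tasks and feed one learner, which updates a single shared set of parameters. This is only a sketch of the data flow; it omits everything that makes IMPALA itself work (real environments, neural networks, and the V-trace off-policy correction).

```python
# Toy actor/learner loop: many task-specific actors, one shared learner.
# Illustrative only; not DeepMind's algorithm.
import queue
import random
import threading

trajectory_queue = queue.Queue(maxsize=100)
shared_params = {"w": 0.0}          # stand-in for network weights
lock = threading.Lock()
stop = threading.Event()

def actor(task_id):
    """Pretend to act on one task using the current shared 'policy'."""
    while not stop.is_set():
        with lock:
            w = shared_params["w"]   # sync the latest weights
        # A real actor would roll out episodes in an environment with w;
        # here we fake a noisy, task-dependent return.
        ret = random.gauss(task_id, 1.0)
        trajectory_queue.put((task_id, w, ret))

def learner(num_updates):
    """Consume trajectories from every task; update one set of weights."""
    for _ in range(num_updates):
        task_id, w_behind, ret = trajectory_queue.get()
        # IMPALA corrects for the lag between w_behind and the current
        # weights (V-trace); this toy just averages the returns.
        with lock:
            shared_params["w"] += 0.01 * (ret - shared_params["w"])
    stop.set()

# 4 toy tasks here; the article's Impala suite used 30.
for t in range(4):
    threading.Thread(target=actor, args=(t,), daemon=True).start()
learner(num_updates=1000)
print("shared parameter after training:", round(shared_params["w"], 3))
```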

Regardless of AI capabilities, a Neuralink interface will enslave us all... far further than we are already enslaved.
 
Hype.
 
Google CEO Sundar Pichai: Fears about artificial intelligence are ‘very legitimate,’ he says in Post interview


Google CEO Sundar Pichai appears before the House Judiciary Committee on Dec. 11. (J. Scott Applewhite/AP)


By Tony Romm, Drew Harwell and Craig Timberg, December 12


Google chief executive Sundar Pichai, head of one of the world’s leading artificial intelligence companies, said in an interview this week that concerns about harmful applications of the technology are “very legitimate” — but the tech industry should be trusted to responsibly regulate its use.

Speaking with The Washington Post on Tuesday afternoon, Pichai said that new AI tools — the backbone of such innovations as driverless cars and disease-detecting algorithms — require companies to set ethical guardrails and think through how the technology can be abused.

“I think tech has to realize it just can’t build it and then fix it,” Pichai said. “I think that doesn’t work.”

Tech giants have to ensure artificial intelligence with “agency of its own” doesn't harm humankind, Pichai said. He said he is optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics, who contend the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be “far more dangerous than nukes.”

Google’s AI technology underpins everything from the company’s controversial China project to the surfacing of hateful, conspiratorial videos on its YouTube subsidiary — a problem Pichai promised to address in the coming year. How Google decides to deploy its AI has also sparked recent employee unrest.

Pichai’s call for self-regulation followed his testimony in Congress, where lawmakers threatened to impose limits on technology in response to its misuse, including as a conduit for spreading misinformation and hate speech. His acknowledgment about the potential threats posed by AI was a critical assertion because the Indian-born engineer often has touted the world-shaping implications of automated systems that could learn and make decisions without human control.

Pichai said in the interview that lawmakers around the world are still trying to grasp AI’s effects and the potential need for government regulation. “Sometimes I worry people underestimate the scale of change that’s possible in the mid- to long term, and I think the questions are actually pretty complex,” he said. Other tech giants, including Microsoft, recently have embraced regulation of AI — both by the companies that create the technology and the governments that oversee its use.

But AI, if handled properly, (of course) could have “tremendous benefits,” Pichai explained, including helping doctors detect eye disease and other ailments through automated scans of health data. “Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said. “This is why we've tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

Pichai, who joined Google in 2004 and became chief executive 11 years later, in January called AI “one of the most important things that humanity is working on” and said it could prove to be “more profound” for human society than “electricity or fire.” But the race to perfect machines that can operate on their own has rekindled familiar fears that Silicon Valley’s corporate ethos — “move fast and break things,” as Facebook once put it — could result in powerful, imperfect technology eliminating jobs and harming people.

Within Google, its AI efforts also have created controversy: The company faced heavy criticism earlier this year because of its work on a Defense Department contract involving AI that could automatically tag cars, buildings and other objects for use in military drones. Some employees resigned because of what they called Google’s profiting off the “business of war."

Asked about the employee backlash, Pichai told The Post that its workers were “an important part of our culture.” “They definitely have an input, and it’s an important input, it’s something I cherish,” he said.

In June, after announcing Google wouldn’t renew the contract next year, Pichai unveiled a set of AI-ethics principles that included general bans on developing systems that could be used to cause harm, damage human rights or aid in “surveillance violating internationally accepted norms."

The company faced criticism for releasing AI tools that could be misused in the wrong hands. Google’s release in 2015 of its internal machine-learning software, TensorFlow, has helped accelerate the wide-scale development of AI, but it has also been used to automate the creation of lifelike fake videos that have been used for harassment and disinformation.
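For context on what releasing TensorFlow put in everyone's hands, here is a minimal sketch of the kind of model definition and training the library makes routine. The data and architecture are toy placeholders, and this assumes TensorFlow is installed (e.g. via pip).

```python
# Minimal TensorFlow/Keras example: define, train, and evaluate a tiny
# classifier on synthetic data. Toy placeholders throughout.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 8).astype("float32")   # fake features
y = (X.sum(axis=1) > 4).astype("float32")      # fake binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print("train accuracy:", model.evaluate(X, y, verbose=0)[1])
```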

Google and Pichai have defended the release by saying that keeping the technology restricted could lead to less public oversight and prevent developers and researchers from improving its capabilities in beneficial ways.

“Over time, as you make progress, I think it’s important to have conversations around ethics [and] bias and make simultaneous progress,” Pichai said during his interview with The Post.

“In some sense, you do want to develop ethical frameworks, engage non-computer scientists in the field early on,” he said. “You have to involve humanity in a more representative way because the technology is going to affect humanity.”

Pichai likened the early work to set parameters around AI to the academic community’s efforts in the early days of genetics research. “Many biologists started drawing lines on where the technology should go,” he said. “There's been a lot of self-regulation by the academic community, which I think has been extraordinarily important.”

The Google executive said it would be most essential in the development of autonomous weapons, an issue that’s rankled tech executives and employees. In July, thousands of tech workers representing companies including Google signed a pledge against developing AI tools that could be programmed to kill.

Pichai also said he found some hateful, conspiratorial YouTube videos described in a Post story Tuesday “abhorrent” and indicated that the company would work to improve its systems for detecting problematic content. The videos, which together had been watched millions of times on YouTube since appearing in April, discussed baseless allegations that Democrat Hillary Clinton and her longtime aide Huma Abedin had attacked, killed and drank the blood of a girl.

Pichai said he had not seen the videos, which he was questioned about during the congressional hearing, and declined to say whether YouTube’s shortcomings in this area were a result of limits in the detection systems or in policies for evaluating whether a particular video should be removed. But he added, “You’ll see us in 2019 continue to do more here.”

Pichai also portrayed Google’s efforts to develop a new product for the government-controlled Chinese Internet market as preliminary, declining to say what the product might be or when it would come to market — if ever.

Dubbed Project Dragonfly, the effort has caused backlash among employees and human rights activists who warn about the possibility of Google assisting government surveillance in a country that tolerates little political dissent. When asked whether it’s possible that Google might make a product that allows Chinese officials to know who searches for sensitive terms, such as the Tiananmen Square massacre, Pichai said it was too soon to make any such judgments.

“It’s a hypothetical,” Pichai said. “We are so far away from being in that position.”

https://www.washingtonpost.com/tech...e-are-very-legitimate-he-says-post-interview/



Trust us, they said....
 


How to change the world

By the way, I really do now disavow this. Peterson is a loser who is intentionally steering millions of young men down a dead-end, non-productive path. He is sick and probably Satanic.

Just FYI.
 


A 2024 paper and blog post unveiling Google’s healthcare AI model, Med-Gemini, mentioned “basilar ganglia” — a body part that doesn’t exist, The Verge reported.

After neurologist and researcher Bryan Moore, MD, discovered the mistake, Google fixed the blog post (though not the original paper), according to the Aug. 4 story. The company said it was a misspelling of “basal ganglia,” a part of the brain primarily responsible for motor control, but critics contend it points to the more serious issue of AI hallucination in healthcare.

“What you’re talking about is super dangerous,” Maulin Shah, MD, chief medical information officer of Renton, Wash.-based Providence, told the news outlet. “Two letters, but it’s a big deal.”
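One mechanical guardrail the incident suggests is checking generated terms against a controlled vocabulary of real anatomy, so near-misses like “basilar ganglia” get flagged before a clinician ever sees them. Below is a minimal, hypothetical Python sketch of that idea; the term list and matching rule are invented for illustration and bear no relation to Google's actual pipeline.

```python
# Hypothetical sketch: flag two-word phrases that *almost* match a known
# anatomical term, since near-misses are the hallucination risk.

KNOWN_TERMS = {
    "basal ganglia",
    "basilar artery",
    "cerebellum",
    "hippocampus",
}

def flag_unknown_terms(text: str, vocabulary: set[str]) -> list[str]:
    """Return two-word phrases that look like near-misses of known terms."""
    words = [w.strip(".,;:()") for w in text.lower().split()]
    flags = []
    for first, second in zip(words, words[1:]):
        phrase = f"{first} {second}"
        # Suspicious: the phrase ends like a known term but is not itself
        # in the vocabulary (e.g. "basilar ganglia" vs. "basal ganglia").
        if phrase not in vocabulary and any(
            second == term.split()[-1] for term in vocabulary
        ):
            flags.append(phrase)
    return flags

report = "Old infarct in the left basilar ganglia."
print(flag_unknown_terms(report, KNOWN_TERMS))   # ['basilar ganglia']
```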

 
Technology in and of itself doesn't scare or bother me. Like anything and everything else, once government gets involved, that's when it turns bad [and yes, it will be bad]. More nefarious is when things turn into Public-Private Partnerships, where government contracts are granted [at taxpayer expense] and information becomes proprietary and out of the taxpayer's reach [FOIA, for a private company???].

Like anything else government-related, the only solution is to DEFUND the programs/contracts out of existence and let the true Free Market determine winners and losers.

It [defunding out of existence] can happen, but it's not very likely, because too many corporate/government interests are at stake, and not enough people will demand it [take the TSA, for example]. In the end, it's people who determine the outcome, but most aren't aware and/or don't even care.
 