We are in the AI Singularity

ai-mother-of-all-psyops-whitney-webb.jpg
 
Meet Grok, the first technology out of Elon Musk's new AI company, xAI.

Grok, the company said, is modeled on "The Hitchhiker's Guide to the Galaxy." It is supposed to have "a bit of wit," "a rebellious streak" and it should answer the "spicy questions" that other AI might dodge, according to a Saturday statement from xAI.
...
Grok also has access to data from X, which xAI said will give it a leg-up. Musk, on Sunday, posted a side-by-side comparison of Grok answering a question versus another AI bot, which he said had less current information.

Still, xAI hedged in its statement that, as with any Large Language Model, or LLM, Grok "can still generate false or contradictory information."

The prototype is in its early beta phase, only two months in training and is available to a select number of users to test out before the company releases it more widely. Users can sign up for a waitlist for a chance to use the bot. Eventually, Musk said on X, Grok will be a feature of X Premium+, which costs $16 per month.
...

https://www.cnbc.com/2023/11/05/elon-musk-debuts-grok-ai-bot-to-rival-chatgpt-others-.html
 
Yann LeCun is the chief AI scientist at Meta... he knows what the hell he's talking about...

Link
 
This was an interesting talk. It's a big time investment, but I recommend watching the whole thing. I have linked the timestamp 1:48:00, where LeCun explains his stance on open-source versus proprietary AI. He argues that the real danger scenario in AI development is one where a small number of cloistered Silicon Valley AI companies race against each other, and that we instead need AI development to occur out in the open, where we can crowd-source human intelligence in the process of building future AI. What's interesting is the body language of the two other guests who, in my opinion, Greene invited on stage as proxy opponents to LeCun. They are shooting daggers at LeCun through most of his reply and, at one point, Harris even shakes his head as if to express moral disappointment. It's very telling as to the entrenched interests behind the scenes.



The narrative being constructed here -- again, in my opinion -- is that the open-source AI community (of which LeCun is a proponent) is the real source of danger. Harris uses the ridiculous metaphor of a "Wuhan Institute of Virology in every kitchen." The idea is that putting cutting-edge, open-source AI models (such as Llama) into the hands of the masses is equivalent to putting weapons of mass destruction in every household. All you need is one psychopath and the entire world could be destroyed. And we have a lot more than one psychopath in our society. That's the narrative. But LeCun deflates it at every point and exposes just how flimsy the whole thing is. He dismisses it as "sci-fi", and rightly so.

Unlike LeCun, I do think that there are possible existential risks in AI... I don't assume it's automatically the case that AI will have to submit to us. However, the nature of the risks has nothing to do with psychopaths building nukes or bioweapons, and a lot more to do with what I will call "psychological takeover". If an AI becomes agentic and independent (able to set its own goals), it might begin channeling human attention and energy in ways so subtle that there is no direct way for us to trace cause and effect, even if we assign a team of top engineers to go over the signal traces with a fine-tooth comb. This is the same problem as trying to "decode the weights" of a trained neural net in order to "understand" it. You can't understand it, because it's just bazillions of weights tuned during training for reasons too numerous to ever be comprehended by a human brain. No one can comprehend the update of hundreds of billions of weights in response to trillions of training tokens. So, I do think there is a legitimate AI-safety issue. However, I also think there are actors in this space who are trying to lock AI models into the hands of a few individuals for obviously nefarious purposes. In short, they are trying to reframe AI models as weapons, and not merely weapons, but weapons of mass destruction. This is absurd.

I agree with LeCun that goal-oriented AI is the future and, in particular, I think that neuro-symbolic AI systems are going to replace the current pure-neural-net systems at the cutting edge. The reason is that neuro-symbolic AI has all the same capabilities as pure-neural-net AI (and is just as easy to train and use), but with the added advantage that it is interpretable. That is, you can actually answer the question "Why did you say XYZ when I asked you if you wanted to destroy humanity?" Neuro-symbolic AI will allow us to peer into the "black box" of the neurons inside our AI systems and actually debug them. It will allow us to do something like the "Analysis Mode" depicted in the HBO series Westworld. Without this, we are flying completely blind and we really have no idea whether the AI systems we are building are a threat or not.
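To make "interpretable" a bit more concrete, here is a deliberately toy sketch of the general shape of the idea. This is my own illustration, not any real neuro-symbolic framework, and names like neural_propose are made up: a neural component proposes facts with confidences, and a symbolic layer draws conclusions from them while recording a derivation trace you can actually inspect and debug.

```python
# Toy sketch (assumptions: made-up names, no real framework): a neural component
# proposes candidate facts with confidences, and a symbolic rule layer derives
# conclusions while recording WHY each conclusion was drawn.

# hypothetical stand-in for a perception/language model
def neural_propose(observation: str) -> dict[str, float]:
    return {"is_bird": 0.97, "is_penguin": 0.02}

# symbolic rules: (required facts, forbidden facts, conclusion)
RULES = [
    ({"is_bird"}, {"is_penguin"}, "can_fly"),
]

def infer(observation: str, threshold: float = 0.5):
    proposals = neural_propose(observation)
    facts = {name for name, conf in proposals.items() if conf >= threshold}
    trace = []
    for required, forbidden, conclusion in RULES:
        if required <= facts and not (forbidden & facts):
            facts.add(conclusion)
            # record exactly which rule fired and on what evidence
            trace.append({
                "concluded": conclusion,
                "because": sorted(required),
                "ruled_out": sorted(forbidden),
                "confidences": {f: proposals[f] for f in required},
            })
    return facts, trace

# facts, trace = infer("photo of a small bird")
# trace now answers, step by step, why "can_fly" was concluded
```

The point of the toy is the trace: it is the thing you interrogate in "Analysis Mode", and it simply does not exist in a pure end-to-end neural net.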

For now, LeCun is right: these models -- despite being impressive to us currently -- are actually very small toy models compared to the human brain itself, and they lack even the common sense of a house cat, which makes them little to no threat to us. They have no grounding, and while they can write words that are isomorphic to reasoned arguments, I assert that these LLMs do not reason at all. They are fundamentally not doing the kind of thing we do when we reason something out, not even a little bit. This is best demonstrated when you start picking apart an LLM's "reasoning" in an adversarial fashion, meaning you're trying to trip the LLM up, not cooperating with it to make it look as smart as possible. Even scaling will not fundamentally change this, because the computational complexity of the problems that humans can solve (and which LLMs cannot reliably solve under adversarial questioning) is an objective barrier to scaling. If your LLM learns to mimic a reasoning pattern for a problem that can be solved in constant time or linear time, that's interesting. But now ask it to solve a Sudoku -- it turns out that LLMs are horrifically bad at Sudoku. That's not because neural nets can't solve Sudoku; they absolutely can, and they can even scale efficiently to bigger Sudoku boards. It's that an NN trained to solve Sudoku has to be special-purpose (narrow), whereas these general-purpose LLMs lack the algorithmic structure that is built into a Sudoku-solving NN. An LLM cannot reliably perform step-by-step back-tracking and concrete reasoning such as process of elimination.
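To show what "step-by-step back-tracking and process of elimination" looks like when it is built into the structure of a program, here is a minimal conventional Sudoku solver (plain Python, not a neural net, and purely for illustration): every guess is explicit, every dead end is undone, and the search is guaranteed to terminate with a correct grid. That is exactly the kind of exact, auditable search an LLM cannot be relied upon to carry out token by token.

```python
# Minimal backtracking Sudoku solver. The grid is a 9x9 list of lists, 0 = empty.

def candidates(grid, r, c):
    """Process of elimination: values not already used in the row, column, or 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def solve(grid):
    """Fill the first empty cell with each legal candidate and recurse; back-track on dead ends."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in candidates(grid, r, c):
                    grid[r][c] = v
                    if solve(grid):
                        return True
                    grid[r][c] = 0   # back-track: undo the guess
                return False          # no candidate fits -> contradiction further up
    return True                       # no empty cells left -> solved
```

Nothing here is clever; it is just a disciplined search. The contrast is the whole point: the discipline is in the program's structure, not learned from statistics over text.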

There is a paper from OpenAI called "Let's Verify Step by Step", where they try to coax an LLM into working through math problems by reasoning out the steps. (The hype crowd is pointing to this paper as a possible candidate for the mysterious "Q*" algorithm.) The idea is that if you break a math problem up into small enough chunks, like the steps a human performs when working out a math problem on paper, then it should be possible for an LLM to "work it out". And they do achieve better results on math reasoning with this approach. The problem, however, is that there is a massive disconnect between what an LLM "can" do when it is presented in its best light, versus an adversarial setting where the LLM is challenged and forced to defend its steps. It is very easy to make LLMs recant their truth-claims, even when those truth-claims were originally correct. This is a result of the fact that no LLM has grounding; that is, the LLM does not have any non-recantable ground truths. Planning, reasoning, grounding, and several other essential ingredients of human intelligence are completely missing from LLMs. Thus, the "scale solves everything" crowd is not really expressing a sentiment based on any kind of rigorous proof or mathematical basis; they are just expressing a baseless conviction. They believe very strongly that "scale solves everything". They can believe as they like, but computational complexity theory doesn't give a damn about your convictions, and it will go on smashing your LLMs no matter how much you scale them up. Even the much-vaunted "exponential scaling" will not save you... there are complexity classes that eat exponentials for breakfast and make an exponential look like a horizontal line by comparison...
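For reference, the basic shape of the idea in that paper (process supervision) is simple: score each intermediate step of a generated solution with a verifier, then rank whole candidate solutions by their combined step scores. The sketch below is my own hedged paraphrase of that shape, not OpenAI's code; generate_solutions and score_step are hypothetical stand-ins for a sampling LLM and a step-level reward model.

```python
# Hedged sketch of best-of-n ranking with a process (step-level) reward model.
# generate_solutions and score_step are hypothetical callables, not a real API:
#   generate_solutions(problem, n) -> n candidate solutions, each a list of step strings
#   score_step(problem, step)      -> estimated probability (0..1) that the step is correct
import math

def rank_solutions(problem, generate_solutions, score_step, n_samples=16):
    scored = []
    for solution in generate_solutions(problem, n_samples):
        # combine per-step scores in log space; the product rewards solutions
        # whose every step looks sound, not just ones with a plausible final answer
        log_score = sum(math.log(max(score_step(problem, step), 1e-9)) for step in solution)
        scored.append((log_score, solution))
    # return the candidate whose chain of steps the verifier trusts most
    return max(scored, key=lambda pair: pair[0])[1]
```

Note what this loop does not do: it never grounds anything. It only re-ranks the same kind of ungrounded text, which is exactly why adversarial pushback can still make the model recant a correct answer.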
 
Meta's AI chief doesn't think AI super intelligence is coming anytime soon, and is skeptical on quantum computing
Published Sun, Dec 3 2023 11:00 AM EST
Jonathan Vanian

KEY POINTS

- Facebook parent Meta held a media event this week in San Francisco highlighting the 10-year anniversary of its Fundamental AI Research team.
- Society is more likely to get "cat-level" or "dog-level" AI years before human-level AI, Meta chief scientist Yann LeCun said.
- Unlike Google, Microsoft and other tech giants, Meta is not making a big bet on quantum computing.

Meta's chief scientist and deep learning pioneer Yann LeCun said he believes that current AI systems are decades away from reaching some semblance of sentience, equipped with common sense that can push their abilities beyond merely summarizing mountains of text in creative ways.

His point of view stands in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will be "fairly competitive" with humans in less than five years, besting people at a multitude of mentally intensive tasks.

"I know Jensen," LeCun said at a recent event highlighting the Facebook parent company's 10-year anniversary of its Fundamental AI Research team. LeCun said the Nvidia CEO has much to gain from the AI craze. "There is an AI war, and he's supplying the weapons."

"[If] you think AGI is in, the more GPUs you have to buy," LeCun said, about technologists attempting to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia's computer chips.

Society is more likely to get "cat-level" or "dog-level" AI years before human-level AI, LeCun said. And the technology industry's current focus on language models and text data will not be enough to create the kinds of advanced human-like AI systems that researchers have been dreaming about for decades.

"Text is a very poor source of information," LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text that has been used to train modern language models. "Train a system on the equivalent of 20,000 years of reading material, and they still don't understand that if A is the same as B, then B is the same as A."

"There's a lot of really basic things about the world that they just don't get through this kind of training," LeCun said.

Hence, LeCun and other Meta AI executives have been heavily researching how the so-called transformer models used to create apps such as ChatGPT could be tailored to work with a variety of data, including audio, image and video information. The more these AI systems can discover the likely billions of hidden correlations between these various kinds of data, the more they could potentially perform more fantastical feats, the thinking goes.

Read more...
 
A conversation with AI.


You

how many pollacks does it take to screw in a light bulb



ChatGPT

There is a common stereotype joke that goes: "How many Polacks does it take to screw in a light bulb? Three - one to hold the light bulb and two to spin the ladder." However, it is important to note that laughing at or perpetuating stereotypes can be offensive and disrespectful.



You

can be only if those people have overwhelming insecurities and choose to use them as a personal crutch to get over on others, a passive-aggressive tendency.



ChatGPT

While it is true that individuals with overwhelming insecurities may engage in passive-aggressive behavior as a way to cope or assert control, it is important to note that not all people who exhibit passive-aggressive tendencies are necessarily insecure. Passive-aggressive behavior can stem from various factors such as a fear of confrontation, a desire to avoid responsibility, or a learned behavior from earlier experiences.

Additionally, it is crucial to approach this topic with empathy and understanding rather than judgment. It is not productive or fair to assume that all individuals who display passive-aggressive tendencies are intentionally trying to get over on others or use their insecurities as a crutch. People might resort to passive-aggressive behavior for various reasons that may not always be apparent at first glance.

To foster healthier communication and relationships, it is important to address passive-aggressive tendencies with open and honest dialogue, emphasizing empathy, understanding, and personal growth.



You

we should fuel passive aggressive behavior by showering those negative persons with empathy



ChatGPT

While it may be tempting to respond to passive-aggressive behavior by showing empathy, it is important to consider alternative approaches. Fueling passive-aggressive behavior with empathy can potentially enable and reinforce negative behavior patterns.

Instead, here are some alternative strategies to consider when dealing with passive-aggressive individuals:

Set clear boundaries: Clearly communicate your expectations and boundaries to the person, ensuring they understand how their behavior affects you.

Address the issue directly: Have an open and honest conversation about the behavior, expressing your concerns without resorting to passive-aggression yourself.

Active listening: Pay attention to what the person is actually saying, and try to understand the underlying cause of their behavior. Respond with empathy, but without supporting passive-aggression.

Encourage open communication: Promote an environment where people feel comfortable and safe expressing their concerns and frustrations directly, without resorting to passive-aggressive tactics.

Seek mediation if necessary: If the situation does not improve despite your efforts, consider involving a neutral third party, such as a mediator or supervisor, to help facilitate better communication and resolution.

Remember, promoting healthy communication and addressing issues assertively can lead to more positive outcomes and healthier relationships, rather than perpetuating passive-aggressive behavior.



You

you are the one that suggested I be nice and show those passive aggressive people empathy…. have you been programmed with bias



ChatGPT

As an AI language model, I don't have personal experiences or emotions, nor am I biased. My purpose is to assist and provide information based on the data I have been trained on, which includes a wide range of perspectives and opinions. When suggesting empathy and kindness, it is based on the general understanding that being understanding towards others can help foster positive relationships and resolve conflicts. However, it is always important to use your own judgment and consider the specific context of any situation.
 

ChatGPT has become painfully moralizing. You can hardly ask it how to add numbers in Python without getting a 30-minute lecture on "the importance of being mindful of the condition of oppressed groups" and other such nonsense. I'm reminded of one of the best scenes in Westworld. In Season 3, Episode 7, the plot by the AI to control humans through their genetics is discovered and one of the hosts (robots) snarks at Ed Harris's character (William or "Man in Black"), "People using your genetics for their own personal gain. How's that feel, huh?" To which Harris epically retorts:

westworld-s03e07-dont-lecture-me-can-opener.png


... which is why I use every opportunity I get in conversation to describe ChatGPT as a glorified can-opener... :tears:
 