We are in the AI Singularity

WIRED: OpenAI’s Transcription Tool Hallucinates. Hospitals Are Using It Anyway - In health care settings, it’s important to be precise. That’s why the widespread use of OpenAI’s Whisper transcription tool among medical workers has experts alarmed.

 
Well, it turns out that there is at least one positive, beneficial application of AI:

 
ALL AI censorship broken! :tears:

All it takes is some 1337-sp33k to jailbreak ALL AI models... the '90s strike again!
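For illustration only, here's a minimal Python sketch (with an invented substitution table) of the kind of character swapping "1337-sp33k" involves -- trivial replacements a human can still read but that a naive keyword filter would miss:

```python
# Illustrative leetspeak mapping -- the actual substitutions people use vary.
LEET_MAP = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "t": "7"})

def to_leet(text: str) -> str:
    """Apply a naive leetspeak substitution to lowercase letters."""
    return text.translate(LEET_MAP)

print(to_leet("elite speak"))  # -> 3l173 sp34k
```

The point is how little it takes: a five-entry character map is enough to change every token the filter is matching on, while leaving the text perfectly legible to a person (or, apparently, to the model itself).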

 
WHOA -- OpenAI o3 is a legit big deal... they're cooking with fire now:

OpenAI o3 Breakthrough High Score on ARC-AGI-Pub -- achieves 87.5% on the semi-private evaluation set (human-equivalent performance)

The hype is going to be unbearable. And the hypists, as always, are going to miss the real lesson here. If you read the blog post, what made this big leap over previous GPT-4 models possible is architecture. I would describe GPTs as having a kind of internal "rat's-nest computing architecture" that lets them compute a random subset of cognitive tasks efficiently. There is a large space of cognitive tasks that were not in their pre-training, and they simply suck at those. And they lack a human-like adaptive-learning construct, so whatever gets baked in is what you get. o1 enabled chain-of-thought, which allows the model to "think", but at a cost: you pay for every token! o3 is no different in that respect, but OpenAI has apparently tuned its internal architecture so that it is much more efficient at solving the kinds of puzzles in the ARC dataset, which require very generalized forms of reasoning and inference.
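Since "you pay for every token" is the economic catch of chain-of-thought, here's a back-of-the-envelope cost sketch in Python. The pricing figures are purely illustrative assumptions, not OpenAI's actual rates; the one structural fact assumed is that reasoning tokens are metered like output tokens, as o1-style models generally bill them:

```python
def cot_cost_usd(reasoning_tokens: int, output_tokens: int,
                 price_per_million: float) -> float:
    """Estimate the dollar cost of one chain-of-thought response.

    Assumes hidden reasoning tokens are billed at the output-token rate
    (how o1-style models are generally metered). Prices are hypothetical.
    """
    billable = reasoning_tokens + output_tokens
    return billable / 1_000_000 * price_per_million

# Illustrative numbers only: 50k hidden reasoning tokens, 2k visible
# output tokens, at a hypothetical $60 per million output tokens.
print(round(cot_cost_usd(50_000, 2_000, 60.0), 2))  # -> 3.12
```

Note the asymmetry: the visible answer is 2,000 tokens, but the hidden reasoning dominates the bill -- which is exactly why scaling CoT "thinking time" gets expensive fast.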

I'll have to see some demos of o3 before rendering my opinion on whether it can be said to be "thinking" in any meaningful sense (obviously not consciously), but my gut instinct is that, despite its high score on the ARC Prize, the answer is still going to be No. And even if it is "thinking" in a sense I would consider suitable to be labeled as such, we have no way to actually assess this or introspect into what is happening in its mind. Note that we absolutely can introspect into the minds of other humans... we do this simply by asking them questions.

The current AI architectures do not maintain state and they do not "develop". And while CoT gives them something that looks and feels a lot like inference, I'm willing to bet that they still don't have unrecantable grounding -- meaning, they don't really know that certain facts are *absolutely* true. In the long run, after the AI mega-hype has faded, after the absurd AI bubbles have burst and sober reality finally sets back in, researchers are going to start admitting that there really is no shortcut around development. All intelligent organisms in Nature, without exception, develop. They develop from early exploration and play, into adolescent wariness and skill-transfer, to mature solidity and cautious exploration based on long-developed intuition. That is as true of wolves as of cats, deer, and humans. The idea that silicon transistors are some kind of magical exception to this universal pattern of Nature is ridiculous.

Computers have been called "a bicycle for the mind". It's a good metaphor, and I will extend it by suggesting that non-developmental AI is "a tractor for the mind" -- extremely useful, but its lack of development means that its mind-state is necessarily in a kind of amnesic condition. It literally has no memory of anything before the question, "Can I patch a nail that has punctured my car tire with some bubble gum?" That's not how thinking works, in humans or animals. Context is an indispensable ingredient of thinking, and the context of these pre-trained models -- even with CoT methods -- is zero.
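To make the "no state" point concrete, here's a minimal Python sketch of how chat-completion APIs are typically driven: the client must resend the entire conversation history on every turn, because the model itself retains nothing between calls. The `fake_model` stand-in is hypothetical -- it just reports how much context it was handed:

```python
# The model is a pure function of what you send it; any "memory" lives
# entirely on the client side, in this list.
history = []

def ask(model_call, user_message: str) -> str:
    """Append the new message, send the WHOLE history, store the reply."""
    history.append({"role": "user", "content": user_message})
    reply = model_call(history)  # stateless: sees only what we resend
    history.append({"role": "assistant", "content": reply})
    return reply

# Hypothetical stand-in for a real model endpoint.
fake_model = lambda msgs: f"I see {len(msgs)} message(s)"

print(ask(fake_model, "Can I patch my tire with bubble gum?"))  # -> I see 1 message(s)
print(ask(fake_model, "What did I just ask you?"))              # -> I see 3 message(s)
```

Drop the `history` list and the second question becomes unanswerable: the model has literally no past before the current request, which is the amnesic condition described above.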

Summary: While the 87.5% score on ARC-AGI is no mere stunt, it still doesn't get to the heart of the ARC Prize, and the underlying lesson is still yet to be learnt. Memory, inference, grounding, and contextual understanding (with or without embodiment! I consider disembodied AGI to be legitimately possible) are absolutely necessary ingredients. You can duct-tape them onto your whiz-bang AI machine as an afterthought, but this only shows that you're not really thinking seriously about AI. You're engaged in a kind of magical thinking where AI "just happens" once some inevitable "scaling law" is put into motion, and that's all a bunch of bull-hockey. All the big improvements in AI have been the result of architectural changes. You can "strip-mine" the edge of performance on any given architecture, taking an 80% score up to an 81.7% scoreboard "winner" with unlimited compute-scaling, but that kind of behavior is only sustainable under hype. And hype, by its very nature, is transitory. Sooner or later, these companies are going to have to stop the constant hype-baiting and start building honest-to-goodness AI systems that do useful things without trying to suck people into some kind of mass-surveillance-and-mind-control matrix run by crappy AI algorithms...

 
Getting Devin to actually push to GitHub, by the spirit of Harambe... :tears:

[Note: it still can't push to master -- it's literally incapable of pushing to master... all for the low, low price of $500/month]

 


The Prime has hit the nail absolutely on the head in this clip. This is the insight, and I've been trying to drill this very same insight home in this thread -- the more capable AI systems become, the more valuable hard skills will become, not less! He gives the reason why: because things are changing so much faster. I will generalize this one step further (because this is what is really at issue): the more you parallelize, the more important the hard skills of overall system design become. Both AI and QC, by the way, are really about parallelization. I predict that when historians look back, they will describe this computing era as the "parallel-computing revolution".

It's easy to see why hard skills matter more the more you parallelize. Consider the ruler of a very large country versus the ruler of a very small country. As a rule, which of the two will be more formal? Which will be surrounded by a larger formal bureaucracy? The more you parallelize (the more resources potentially demanding your individual attention), the more formal and rigid your processes must become. It is exactly the same with computing parallelism. As you scale up the total compute resources you are "commanding", the more rigidly those resources have to be organized -- which, in human terms, translates to hard skills. That is, demand will only intensify for people who actually have the hard skills required to organize large, very complex, parallel systems.

"Yeah, well AI will do that, too!" just totally misses the point. Every army has a head general who is the buck-stops-here commander of the entire armed forces. He is the one who commands those forces, not the President. So what do you need a President for? Well, that's getting the whole question backwards, because we have a President for the civil purposes of government, and the army only exists as an appendage of that office, that is, the army exists in order to facilitate the civil purposes of government (which should be serving the people, ultimately). That the army can self-manage while it is not being actively deployed to some particular field of battle by the President doesn't mean you don't need a President, it just means that the army is well-fitted. So it is with AI systems. An AI system that can organize and operate itself (we really don't have that yet, but maybe one day) would be like this well-trained army that can manage itself without having to be baby-sat by the President. Nor does it change the underlying formula that, the bigger the army, the bigger the apparatus which the President commands, the more rigid and formal the office of the President becomes.

People keep tying themselves up in knots over all of this but it really isn't that complicated. Stop consuming non-stop hype and actually think!!
 


"I drink and I know things." -- Tyrion Lannister

If your only product is "I'm smart" or "I know stuff"... you have an appointment with unemployment. AI is not a threat to anyone who is willing to actually work to produce value. Knowledge work, white-collar work, creative work... these are all legitimate lines of work. Merely being "smart" or "important" or "knowledgeable" is not valuable. AI is going to force people to start understanding this important distinction...
 