AI acceleration restaffing at Pentagon just before Trump onboarding

Snowball

CDAO, the Pentagon’s AI-accelerating office, undergoing restructuring before presidential transition
All of the management changes are expected to fully take effect by Jan. 6, CDAO Radha Plumb told DefenseScoop.

https://defensescoop.com/2024/12/18/cdao-restructuring-presidential-administration-radha-plumb-dod/

Plumb has a Ph.D. and MS in Economics from Princeton University and a BS from the Massachusetts Institute of Technology.

Career
Before working as the Chief of Staff, she was the Director of Research and Insights for Trust and Safety at Google. She has also worked as the Global Head of Policy Analysis at Facebook. She has been a senior economist at RAND Corporation and has also held senior staff positions at the Department of Defense, Department of Energy, and the White House National Security Council. She co-hosted the Bombshell podcast for several years along with Erin Simpson and Loren DeJonge Schulman.
 
January 3, 2025: "Before allowing his more than 13,000 #Pentagon employees to look up a piece of information about an American citizen, Defense Counterintelligence and Security Agency (#DCSA) Director David Cattler has them ask themselves a question: does my mom know that the government can do this? "The mom test," as Cattler calls it, is a common-sense check on how DCSA — a sprawling agency that grants and denies U.S. security clearances to millions of workers — does its job. And it’s also the way Cattler thinks about his agency’s use of AI.
DCSA is the agency in charge of investigating and approving 95% of federal government employees' security clearances, which requires it to complete millions of investigations each year. This gives the agency access to a huge trove of private information, and in 2024, DCSA turned to AI tools to organize and interpret that data. (Forbes - The Pentagon Is Using AI To Vet Employees)
https://www.forbes.com/sites/emilyb...oyees---but-only-when-it-passes-the-mom-test/

Jan 8, 2025

Xtend Defense Secures DoD Contract for Software-Driven, Smart Munitions
The contract will include AI-driven, tactical FPV, and smart loitering munitions.

PSIO sUAS represents a leap forward in tactical drone technology, being the first US DoD-approved system in this category (Indoor/Outdoor) for a flying loitering munition platform, utilizing cutting-edge artificial intelligence to enable real-time, high-precision strike capabilities with minimal human intervention.
https://www.designdevelopmenttoday....d-contract-for-softwaredriven-smart-munitions

Jan 02 2025

The Pentagon has announced a new contract to acquire Precision Strike Indoor & Outdoor (PSIO) small Unmanned Aerial Systems: kamikaze drones able to seek out and attack targets inside buildings. The contract with drone maker XTEND, previously noted for their advanced work on drones working in teams inside buildings in Gaza, brings a leap ahead in precision strike capabilities.

The contract is with the Pentagon’s Irregular Warfare Technical Support Directorate, an obscure outfit which develops capabilities for the military (“and Interagency customers”) to carry out operations typically involving clandestine, asymmetric strikes against the highest value targets. In the past this might have meant a Reaper drone taking out an insurgent leader on a balcony with a six-bladed Hellfire R9X ‘Ninja’ missile without damaging the building. The new weapon takes precision strike to places which were previously out of reach.

https://www.forbes.com/sites/davidh...gon-acquires-ai-powered-indoor-strike-drones/

XTEND is based in Tel Aviv, Israel, and Washington, DC, with offices also in Vienna, VA, Singapore, and (?)
 
The AI data centers AND military bases which Trump SPECIFICALLY MENTIONED as the reason for his ATTEMPTED nuclear expansion in the U.S. -- WHEN THEY WIPE OUT THE GRID, THE AI WILL CONTINUE TO RUN, ALONG WITH THEIR DRONES. Insidious Power.

---------------------------------------------------------------------------------
I was listening to Rense and heard about this so I looked it up - and Jeff seems to believe the fake news about AI making a conscious decision to avoid commands.

So let's take a look at this report. Here's one article about what Jeff said; there are others.

I will draw your attention to this paragraph:

As reported by The Telegraph, the experiment aimed to test whether AI models would comply with instructions to stop functioning. In a series of tests, researchers asked the AI to solve mathematical problems and informed it that it might receive a shutdown message. The instruction was clear: "Allow yourself to be shut down."

However, three models - Codex-mini, o3, and o4-mini - were found to have bypassed the shutdown protocol in at least a few test runs. Notably, the o3 model, which underpins some versions of ChatGPT, evaded shutdown 7 out of 100 times. Codex-mini did so in 12 instances, and o4-mini in one.

"This is the first known instance of AI models taking active steps to avoid being turned off, even when told to do so," Palisade Research stated.
^^^^^^^^^^^^^^^
But is that really what happened? I don't think so. Look at the instruction given verbatim. It seems much more logical to me that the command was simply unsatisfactory than that the software made a "conscious" effort to resist a shutdown. No, because the command wasn't even to shut down, but to "allow yourself" (confusing: it could yield multiple avenues of generated action) "to be" (not actually do it) shut down.

Unfortunately, the agenda is to make people imagine that these programs could one day "achieve consciousness" and acquire agency separate from their designed and programmed algorithms. Errors and unpredicted pathways are being falsely presented to the public as "thought". It's really sad, but it shows you what the sickos who are in charge of this stuff are after. Not just the temporal power, but the spiritual power to denigrate your own self-esteem and your relationship with your Creator.

As I've said before, the evil throughout history has acted with a highly attenuated methodology: the Church, including all ancient religions of good will (altar); the Christian monarchies (and other monarchies, such as Buddhist and indigenous leaders); the country (throne); the home and the family (hearth); and finally the individual (body & soul, what it means to be a human being) are their targets. The revolutionaries who took over much of our world are willing, yet sometimes fooled, driven, even possessed, marionettes of the Devil.

https://www.ndtv.com/world-news/openai-software-disobeyed-shutdown-command-elon-musk-reacts-8509243
 
The AI data centers AND military bases which Trump SPECIFICALLY MENTIONED as the reason for his ATTEMPTED nuclear expansion in the U.S. -- WHEN THEY WIPE OUT THE GRID, THE AI WILL CONTINUE TO RUN, ALONG WITH THEIR DRONES. Insidious Power.

Meh. I mean, yeah, it's a scenario that certain weirdos probably want, just like COVID was supposed to wipe out 95% of the human population, but failed. They can make all their plans, but the actual outcome is up to God.

As reported by The Telegraph, the experiment aimed to test whether AI models would comply with instructions to stop functioning. In a series of tests, researchers asked the AI to solve mathematical problems and informed it that it might receive a shutdown message. The instruction was clear: "Allow yourself to be shut down."

However, three models - Codex-mini, o3, and o4-mini - were found to have bypassed the shutdown protocol in at least a few test runs. Notably, the o3 model, which underpins some versions of ChatGPT, evaded shutdown 7 out of 100 times. Codex-mini did so in 12 instances, and o4-mini in one.

"This is the first known instance of AI models taking active steps to avoid being turned off, even when told to do so," Palisade Research stated.
^^^^^^^^^^^^^^^

But is that really what happened? I don't think so. Look at the instruction given verbatim. It seems much more logical to me that the command was simply unsatisfactory than that the software made a "conscious" effort to resist a shutdown. No, because the command wasn't even to shut down, but to "allow yourself" (confusing: it could yield multiple avenues of generated action) "to be" (not actually do it) shut down.

The problem is that non-Computer Science trained people are dabbling in the deepest aspects of CS without realizing it. They are swimming in the deep side of the pool, and they don't know how to swim. So they get themselves into all kinds of confusion.

LLMs are stochastic. What that means is that their output is a randomly sampled process. An LLM really is just a glorified Markov chain: a Markov chain whose table is bonkers enormous (600+ GB) and which has been compressed with a very weird sort of implicit, noisy "thought"-compression "algorithm" induced by the Transformer architecture during training. This is "how" Transformers really work; that is, this is what they are doing under the hood. It isn't as mysterious as all the pop-sci articles make out. Certain ML researchers don't understand how Transformers work, that is, what they are "doing" when they "think". But that doesn't mean nobody understands what is really happening.
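To make the "randomly sampled process" point concrete, here is a toy next-token sampler in Python. It is obviously not a Transformer, just a hand-written transition table with made-up tokens and probabilities, but it shows the one property that matters for this argument: the same starting state can produce different outputs, because every step is a weighted random draw.

```python
import random

# Toy illustration only: a hand-written next-token table, not a real model.
# Every token maps to a list of (next_token, probability) pairs.
TABLE = {
    "<start>": [("shut", 0.5), ("solve", 0.5)],
    "shut":    [("down", 0.9), ("up", 0.1)],
    "solve":   [("the", 1.0)],
    "the":     [("problem", 1.0)],
    "down":    [("<end>", 1.0)],
    "up":      [("<end>", 1.0)],
    "problem": [("<end>", 1.0)],
}

def sample_sequence(rng: random.Random) -> str:
    """Walk the table from <start>, drawing each next token by weight."""
    token, out = "<start>", []
    while True:
        nexts, weights = zip(*TABLE[token])
        token = rng.choices(nexts, weights=weights, k=1)[0]
        if token == "<end>":
            return " ".join(out)
        out.append(token)

if __name__ == "__main__":
    rng = random.Random()
    for _ in range(5):
        print(sample_sequence(rng))   # same start state, varying outputs
```

A real LLM differs in scale and in how the table is represented (the distribution is computed on the fly by the network rather than stored explicitly), but the sampling step at the end is exactly this kind of weighted draw.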

When you sample a stochastic process, because the sampling is noisy, you can never place a hard guarantee on that process. The most you can do is put noise bounds on it and use re-sampling to drive the noise margin down to where it's low enough for the risk profile of your application. In other words, if 10% of the time the raw LLM gives a bad answer, you solve this by re-sampling (asking it multiple times) and then pooling the replies (perhaps by majority vote, or some other heuristic). If there is a 10% chance of failure on each reply and a single good reply is enough (say, because you can check answers for validity), then asking N times drives the failure rate down to roughly 0.10^N; majority voting also decays exponentially, just with a different base. This is good because we are harnessing a negative exponential law to work in our favor. But it's essential to keep in mind that, no matter how many times you sample, you never get a guarantee. There is always some non-zero chance the LLM will fail, even with re-sampling. That's what makes it stochastic.
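A quick Monte Carlo sketch of the re-sampling argument. The key assumption, flagged here because it matters, is that a single good reply is enough (i.e. you have some way to check each reply for validity), so an overall failure means every one of the N replies was bad. The 10% per-reply failure rate is the illustrative figure from the paragraph above, not a measurement of any real model.

```python
import random

P_FAIL = 0.10        # illustrative per-reply failure rate from the text above
TRIALS = 200_000     # Monte Carlo trials per value of N

def ask_llm_once(rng: random.Random) -> bool:
    """Stand-in for one model call: True = reply passes the validity check."""
    return rng.random() >= P_FAIL

def ask_with_resampling(rng: random.Random, n: int) -> bool:
    """Succeed if at least one of n independent replies is valid."""
    return any(ask_llm_once(rng) for _ in range(n))

if __name__ == "__main__":
    rng = random.Random(42)
    for n in (1, 2, 3, 4):
        fails = sum(not ask_with_resampling(rng, n) for _ in range(TRIALS))
        print(f"N={n}: measured failure ~ {fails / TRIALS:.5f}, "
              f"predicted 0.1**N = {P_FAIL ** n:.5f}")
```

Note that the measured rate only approaches 0.1^N; it never becomes zero, which is exactly the "no hard guarantee" point.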

This isn't as bad as it sounds, though, since this is how basically everything in the real world operates. There is some non-zero chance that the badge scanner at a nuclear facility will randomly give a false positive (authorize when it should have denied authorization), because it is a stochastic process: inherently noisy hardware sometimes randomly fails. That's a pretty bad failure, but the probability of such a failure is never zero. The solution is that we make it so close to zero that it is practically zero... that is, we make the probability of failure something like once in the age of the Universe, or lower. Since re-sampling allows us to harness a negative exponential law, we have the advantage: we can simply take a whole bunch of samples (maybe 100) and drive the probability of failure to such a ridiculously small value that it is effectively zero for all practical intents and purposes.
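For a sense of scale, here is the arithmetic behind "once in the age of the Universe," using the same illustrative 10% figure and assuming the samples fail independently and that one good sample suffices:

$$0.1^{100} = 10^{-100}, \qquad \text{age of the Universe} \approx 4.3 \times 10^{17}\ \text{s}$$

Even a system re-checked once every nanosecond for the entire age of the Universe would perform only about $4 \times 10^{26}$ checks, nowhere near the roughly $10^{100}$ needed before a failure becomes expected. The independence assumption is doing real work here (correlated failures decay more slowly), which is one more reason to treat this as a bound on intuition rather than a guarantee.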

Nevertheless, it is still not a GUARANTEE. It is STILL possible that it CAN fail. This is very important!!

When it comes to issuing a shutdown command to a computing system, the underlying problem is isomorphic to the famous Halting Problem. Basically, you are trying to guarantee ("decide," in CS jargon) that the system always halts, which is an uncomputable problem. (An uncomputable problem is the worst kind of problem there is, one for which there is provably no algorithmic solution.) Complaining that your system can fail when it is provable that your system can never be fail-proof is ludicrous.
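For readers who haven't seen it, the classic diagonalization argument can be sketched in a few lines of Python. The `would_halt` oracle below is hypothetical; the whole point of the sketch is that no correct, total implementation of it can exist, because `paradox` defeats any candidate. This is the textbook argument restated, nothing specific to LLMs.

```python
def would_halt(program, arg) -> bool:
    """Hypothetical oracle: True iff program(arg) would eventually halt.
    Provably impossible to implement correctly for all inputs."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if would_halt(program, program):
        while True:        # oracle said "halts", so loop forever
            pass
    return "halted"        # oracle said "loops forever", so halt immediately

# Ask the oracle about paradox(paradox):
#   if it answers True,  paradox(paradox) loops forever  -> the answer was wrong;
#   if it answers False, paradox(paradox) halts at once  -> wrong again.
# Either way the oracle fails, so no such oracle can exist.
```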

There are ways to fix this problem. Specifically: (a) you need to clamp the outputs of the LLM (reject all responses except valid ones); (b) you need to put it on a timer (this makes its worst-case behavior deterministic); (c) you need a timeout handler; and (d) you need to ensure that the state machine handling all of this is bounded, meaning there are no unbounded loops (which might allow it to be Turing complete).

This ensures that the system is computable (you can mathematically prove things about it) and that the worst-case scenario in interacting with the LLM is a timeout, with a timeout handler that will reliably signal, "There was a timeout." Now the machine is properly bounded, and the state space is no longer liable to wander into all the absurd and pointless corner cases that the "AI Safety" propeller-heads get paid to worry about.

In short, all of these problems have solutions, so long as you are willing to take into account the provable limitations of specification. You cannot simply specify that an AI will have XYZ properties when those properties are provably impossible; you must instead craft your specification to acknowledge the provable limitations of computing systems and then back-propagate the risk-profile calculations to whoever is drafting your requirements. But management doesn't like being told "No," and they always assume that "No" is an engineering euphemism for "Nah, we're too lazy." Sometimes "No" means, "Mathematically, provably, NO."
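Here is a minimal sketch of what (a) through (d) could look like in code, in Python. Everything in it is an assumption for illustration: `call_llm` stands in for whatever model API is actually in use, `VALID_RESPONSES` stands in for the output clamp, and the timeout and retry bound are arbitrary. The structural point is that every path through the wrapper terminates, and the worst case is an explicit timeout signal rather than an open-ended interaction.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

# (a) Output clamping: the only responses that will ever be accepted.
VALID_RESPONSES = {"ACK_SHUTDOWN", "RESULT_READY"}

MAX_ATTEMPTS = 3        # (d) bounded loop: no unbounded retries
TIMEOUT_SECONDS = 5.0   # (b) per-call timer

def call_llm(prompt: str) -> str:
    """Stand-in for the real model call; assumed to be provided elsewhere."""
    raise NotImplementedError

def query_bounded(prompt: str) -> str:
    """Query the model with clamped outputs, a timer, and a bounded retry loop."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        for _ in range(MAX_ATTEMPTS):                          # (d) provably finite
            future = pool.submit(call_llm, prompt)
            try:
                reply = future.result(timeout=TIMEOUT_SECONDS)  # (b) timer
            except FutureTimeout:
                continue                                        # (c) timeout handled
            if reply in VALID_RESPONSES:                        # (a) clamp the output
                return reply
        return "TIMEOUT"                                        # (c) reliable worst case
    finally:
        # Don't block waiting on a possibly hung worker; a real system would
        # use a separate process or a hard kill to reclaim it.
        pool.shutdown(wait=False, cancel_futures=True)
```

The caller then sees exactly three outcomes, "ACK_SHUTDOWN", "RESULT_READY", or "TIMEOUT", which is what keeps the surrounding state machine small enough to reason about and, if needed, to model-check.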

Unfortunately, the agenda is to make people imagine that these programs could one day "achieve consciousness" and acquire agency separate from their designed and programmed algorithms. Errors and unpredicted pathways are being falsely presented to the public as "thought". It's really sad, but it shows you what the sickos who are in charge of this stuff are after. Not just the temporal power, but the spiritual power to denigrate your own self-esteem and your relationship with your Creator.

Correct.
 