The Guernica of AI
A warning from a former Palantir employee in a new American crisis
by Juan Sebastián Pinto, former Palantir employee
Gaza, one of the most extensive testing grounds of AI-enabled air doctrine to date, is today’s equivalent of Guernica in 1937. Over the past year of conflict, it has become the latest proving ground for breakthrough warfare technologies deployed against a confined civilian population — and a warning of more atrocities to come. The Israel Defense Forces’ use of American bombs and AI-powered kill lists generated, supported, and hosted by American AI software companies has inflicted catastrophic civilian casualties, with estimates suggesting that up to 75% of victims are non-combatants. Lavender, an error-prone, AI-powered kill list platform used to drive many of the killings, has been strongly linked to (if not inspired by) the American big-data company Palantir. IDF intelligence officers have anonymously revealed that the system deemed 100 civilian casualties an acceptable level of collateral damage when targeting senior Hamas leaders.
Yet instead of reckoning with AI’s role in enabling humanitarian crimes, the public conversation about AI has largely revolved around sensationalized stories driven by deceptive marketing narratives and exaggerated claims. Stories that, in part, I helped shape. Stories that are now being leveraged to drive the rapid adoption of evolving AI technologies across the public and private sectors — upon an audience that still doesn’t understand the full implications of big-data technologies and their consequences.
I know because, for a year and a half after the pandemic, I worked at Palantir Technologies from their new headquarters in Denver, Colorado. There I marketed their core software offerings — Gotham, Foundry, and Apollo — while also developing written materials and diagrams regarding the AI kill chain: the semi-autonomous network of people, processes, and machines (including drones) involved in executing targets in modern warfare. These technologies, which Palantir co-developed with the Pentagon in Project Maven, sought to become the “spark that kindles the flame front of artificial intelligence across the rest of the Department,” according to U.S. Air Force Lt. Gen. Jack Shanahan.
But this was only the beginning. While helping my team explain the advantages of AI warfare to US defense agencies, I simultaneously helped sell AI technologies to Fortune 100 companies, civilian government agencies, and even foreign governments in a range of applications, from healthcare to sales.
For a time, I truly felt — as Palantir CEO Alex Karp recently put it — that the Palantir “degree” was the best degree I could get. That I would mostly be helping create efficient, needed solutions to the world’s most complicated problems. However, over the course of bringing dozens of applications to market, I came to a dark personal realization: the core idea underlying most commercial AI analytics applications today — and the philosophy underlying the kill chain framework in the military — is that through continual surveillance, data analysis, and machine learning we can achieve a simulated version of the world, one in which a nation, army, or corporation can gain competitive advantage only by knowing everything about its targets and acting on that knowledge autonomously, before its adversaries do.
Building these competing simulations, of a factory, a battleground, or a connected vehicle fleet — often called “digital twins” — is not only the business of Palantir but also of established technology players like IBM and Oracle, and of hundreds of new startups. They are all furiously staking out market share in data applications across every industry and world government, unleashing a paranoid process of comprehensive digitalization and simulation while establishing surveillance infrastructures and “moats” of proprietary knowledge, information, and control in every market and in every corner of our lives.
[...]
Palantir, Anduril, SpaceX, and OpenAI are now reportedly in talks to form a consortium to bid on defense contracts; meanwhile, Google has abandoned its pledge not to use its technology for weapons and surveillance systems. Next, Palantir’s CEO, Alex Karp — along with many other tech leaders and their political and business allies — will argue that we should become a “technological republic” and that it’s time we welcomed the intervention of Silicon Valley startups into many more of our government and public institutions. Along with many other tech companies vying for a piece of the action, they stand ready to transform many of our democratic systems, government functions, and decision-making processes with largely unproven technologies, reined in by few restrictions and controlled by the most powerful individuals in the world.
[...]
Continue to full article:
zigguratmag.substack.com