In 2003 the investor Peter Thiel and Alex Karp, a PhD in social theory, registered a company named after the seeing-stones in The Lord of the Rings—artifacts that allow one to look across distance. In Tolkien’s tale, one palantír belonged to the wizard Saruman: through the stone he spoke with the Dark Lord and gradually crossed to his side.
The name carries another symbolic layer. In Tolkien’s legendarium one stone—the Elostirion stone—did not connect its holder to the other palantíri. Its sole function was to gaze West, over the Sea, towards the elves’ lost homeland. For a company that openly proclaims the defence of Western civilisation, the reference is unlikely to be accidental.
By 2026 Palantir Technologies is the main software contractor to the US Department of Defense and the intelligence services, and one of the most debated technology firms. Karp openly states that its task is “to ensure the West’s obvious superiority” and “sometimes to kill” its opponents.
In 2025, together with his director of corporate communications, Nicholas Zamiska, he published The Technological Republic: Hard Power, Weak Faith, and the Future of the West. Its key thesis: Silicon Valley must “repay a moral debt to the state” and take part in the nation’s defence. We examine how Karp built infrastructure for modern war and what ideology he advances.
Missing the wood for the trees
The main problem Palantir addresses is structural. In America’s intelligence services a “jars of marbles” model evolved: the FBI, CIA, NSA and the police had their own databases, and sharing moved through bureaucratic requests. Each agency kept its data in a separate “vessel”—even knowing that a neighbouring agency might hold crucial information, agents could not get to it quickly.
This fragmentation cost lives. One of the best-known examples is the story of John O’Neill, the FBI’s leading counterterrorism specialist. By the mid-1990s he saw cells of international radical networks, including al-Qaeda, as the chief threat to US security. He warned that terrorists had infrastructure inside the country and pressed for closer inter-agency co-ordination.
Different fragments of information remained split across structures. The FBI logged suspicious domestic episodes—for instance, would-be terrorists’ interest in flight schools. The CIA, for its part, had data on a meeting of al-Qaeda-linked individuals in Malaysia and knew that two participants—Nawaf al-Hazmi and Khalid al-Mihdhar—had entered the United States on visas. But information-sharing between agencies was incomplete and conflict-ridden: FBI staff seconded to the CIA later claimed their attempts to pass the details to O’Neill were blocked inside the agency. Isolated facts never coalesced into a single picture.
In the summer of 2001 O’Neill left the FBI amid internal conflicts and scandals over leaks and misconduct. In August he became head of security at the World Trade Center. On September 11th 2001 O’Neill died while evacuating people from the South Tower.
Palantir built a system that unifies disparate databases into a single model of relationships. The company calls it an ontology—a structure where objects, events and people are linked by explicit relations. An address connects to an owner, a transaction to accounts, a call to subscribers and geolocation. Such a model lets analysts quickly spot patterns that once took weeks of manual work.
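Conceptually, such an ontology is a labelled graph: typed entities joined by explicit, named relations, so that a question about one object becomes a traversal of its links. The sketch below is a toy illustration of that idea in Python, not Palantir’s actual data model; every class, entity and relation name is invented for the example.

```python
from collections import defaultdict

class Ontology:
    """Toy entity-relationship store: typed nodes joined by labelled edges."""

    def __init__(self):
        self.entities = {}              # id -> entity type
        self.edges = defaultdict(list)  # id -> [(relation, other_id)]

    def add_entity(self, entity_id, entity_type):
        self.entities[entity_id] = entity_type

    def relate(self, source, relation, target):
        # Store the link in both directions so traversal works either way.
        self.edges[source].append((relation, target))
        self.edges[target].append((relation, source))

    def neighbours(self, entity_id):
        return self.edges[entity_id]

# Facts of the kind the article describes: an address links to an owner,
# a call links two subscribers, an account links to its holder.
onto = Ontology()
onto.add_entity("alice", "person")
onto.add_entity("bob", "person")
onto.add_entity("12 Oak St", "address")
onto.add_entity("acct-001", "account")
onto.relate("alice", "owns", "12 Oak St")
onto.relate("alice", "called", "bob")
onto.relate("alice", "holds", "acct-001")

# An analyst's question "what is connected to Alice?" becomes one lookup
# instead of three separate database requests.
print(onto.neighbours("alice"))
```

The point of the structure is exactly the one the article makes: facts that would sit in separate “vessels” (a phone log, a property register, a bank record) become reachable from one another through explicit links.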
In 2005 Palantir took its first institutional backing from In-Q-Tel, a venture fund set up by the CIA in 1999 to finance dual-use technologies. The fund put in about $2m and for several years remained the company’s only outside investor.
In 2011 Bloomberg reported that Palantir’s technology had become a key tool for US intelligence in the “war on terror” and was used to analyse data in counterterrorism operations.
For its first years Palantir Technologies was almost absent from public view. It seldom spoke to the press, shunned publicity and built its business largely around contracts with US government bodies.
Palantir engineers worked directly at customers’ sites—in intelligence, the military and law enforcement. In tech and defence the firm was well known, but to the wider public it remained invisible. Even in Silicon Valley many could not quite grasp what Palantir actually did: a “Google for spies”, or just a very expensive database.
Gotham, Foundry and AIP
Palantir develops three core products:
- Gotham—a platform for the military, intelligence and law enforcement, named after the perpetually unsafe city of the Batman comics. The platform pulls data from satellites, ground sensors, signals intelligence, legacy databases and battlefield channels into a single pane. It can task sensors (for example, direct a reconnaissance drone to given co-ordinates), identify targets and suggest weapons-employment options. In military parlance this is the “kill chain”.
- Foundry—the civilian version. ExxonMobil uses it to optimise extraction, Swiss Re to assess risks, and media group Ringier to manage subscribers. In Australia Foundry has been deployed at the Coles supermarket chain.
- Artificial Intelligence Platform (AIP)—an AI layer launched in 2023. AIP sits atop Gotham and Foundry and lets users converse with data in natural language. An operator asks: “What hostile forces are in this area?” The system queries connected sources, composes an answer and proposes actions.
Daniel Trusilo—formerly a US Army officer who served in Iraq, later an AI-ethics researcher at the University of St Gallen—notes a crucial feature of Palantir: the same technological base is used for dual purposes. In his words, “the same software that optimises supply chains now runs military operations.”
The ChatGPT moment
For years Palantir lost money. After listing on the New York Stock Exchange in 2020 its shares went nowhere. Analysts struggled to see how the firm could make money in the civilian sector—too niche a product.
That changed with large language models (LLMs). When ChatGPT arrived in late 2022, Palantir argued that its long bet on ontologies and a semantic data layer had suddenly become valuable.
“We were pleasantly surprised to discover how closely the world we had been building lined up with the era of large language models. It became clear: you cannot realise the potential of LLMs without such structures,” said the company’s CTO, Shyam Sankar.
In another interview he also said that “in many ways all the work on Foundry and Gotham seemed to be waiting for the arrival of large language models.”
Palantir’s logic is that LLMs on their own are unreliable: a language model needs a layer that connects its text interface to the objects, events and real processes inside an organisation. That is the role the company assigns to the ontology: a system of relations among people, transactions, devices, documents and actions.
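One generic way to implement such a layer (a common retrieval-grounding pattern, not Palantir’s actual architecture; every name and record here is invented) is to have the model translate a free-text question into a structured query, run that query against the data layer, and compose the answer only from the returned records:

```python
# Minimal sketch: grounding a free-text question in structured data.
# The "model" is stubbed out; in a real system an LLM would map the
# question to a query and phrase the final answer.

RECORDS = {
    "shipments": [
        {"id": "S-17", "status": "delayed", "region": "north"},
        {"id": "S-18", "status": "on-time", "region": "north"},
    ],
}

def translate_to_query(question: str) -> dict:
    # Stand-in for an LLM call that emits a structured query.
    if "delayed" in question:
        return {"table": "shipments", "filter": {"status": "delayed"}}
    return {"table": "shipments", "filter": {}}

def run_query(query: dict) -> list:
    rows = RECORDS[query["table"]]
    return [r for r in rows
            if all(r.get(k) == v for k, v in query["filter"].items())]

def answer(question: str) -> str:
    rows = run_query(translate_to_query(question))
    # The reply is built only from retrieved records, so the model
    # cannot assert facts the data layer does not contain.
    ids = ", ".join(r["id"] for r in rows)
    return f"{len(rows)} match(es): {ids}" if rows else "No matches."

print(answer("Which shipments are delayed?"))
```

The design choice is that the ontology, not the model, is the source of truth: the language model only translates between natural language and the structured layer.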
Palantir rewrote its roadmap, embedded LLMs into its products and launched AIP. From that moment the shares began to climb.
In 2023 PLTR rose 167%; in 2024, 340%. In the first half of 2025 Palantir became the top performer in the S&P 500 and the Nasdaq-100.
The Technological Republic
In 2025 Karp and Palantir’s communications chief, Nicholas Zamiska, published The Technological Republic: Hard Power, Weak Faith, and the Future of the West.
In spring 2026 the company posted a condensed version on X in 22 theses. The thread spread across social media and sparked debate far beyond tech: some saw an attempt to justify a tighter alliance among technology companies, the state and the military; others, a near-complete political programme of techno-nationalism.
In the preface the authors write:
“A reckoning has come for the West. The loss of ambition and interest in scientific and technological achievement, accompanied by a decline in state-led innovation in such key areas as medicine, space exploration, and military development, has produced an innovation gap.”
Silicon Valley, in their view, went the other way—to a world dominated by “online advertising, shopping, social networks and video platforms”.
From this premise the manifesto unfolds. The engineering elite of Silicon Valley “must take part in the defence of the nation and in formulating a national idea: what this country is, what we value, and what we stand for.” The age of soft power, Karp argues, is ending:
“For free and democratic societies to prevail requires something more than moral superiority. It requires hard power, and in this century hard power will be built on software.”
The atomic age of deterrence, the authors argue, is also passing. In its place comes AI-based deterrence:
“We are building software that can become a weapon of mass destruction. The potential integration of AI with armaments creates risks, especially if programs acquire self-awareness and their own intentions. But the call to stop development is wrong. Our adversaries will not waste time on theatrical debates about the merits of designing technologies that are strategically vital to military security. They will act,” write Karp and Zamiska.
The red threat
The ideology of the Technological Republic does not remain on paper. It is backed by political infrastructure whose scale became evident in 2026.
Leading the Future—a super PAC created to defend the AI industry’s interests—has amassed over $140m in donations and commitments. Among the main sponsors are OpenAI co-founder Greg Brockman, Palantir co-founder Joe Lonsdale and the venture firm Andreessen Horowitz. Palantir as a company says it made no corporate donations. OpenAI says the same. But their key figures are the fund’s largest individual donors.
In May 2026 WIRED journalist Taylor Lorenz revealed that Leading the Future’s affiliate—a nonprofit called Build American AI—funds native ads on TikTok and Instagram. Influencers are offered $5,000 per video with the message: China threatens America’s AI leadership, and this affects everyone. Sample scripts include lines such as: “I learned that China is trying to overtake the US in AI. If they succeed, my data and my children’s data could end up under China’s control.” The ads are labelled as paid partnerships, but the sponsor—Build American AI—is not named.
The campaign’s rhetoric mirrors Karp’s main theses.
“We will be the dominant player or China will be the dominant player—and the rules will depend on who wins. […] When people worry about surveillance—yes, there is a danger, but you will have far fewer rights if America is not the leader,” he said in an interview with Axios in November 2025.
In parallel, Leading the Future is campaigning against lawmakers seeking to regulate AI. The most high-profile case is an attack on New York State Assembly member Alex Bores, a co-author of the RAISE Act—among the first American AI-safety laws. According to The New York Times, the super PAC is spending millions to discredit the inconvenient politician. Bores explained it this way:
“They want to beat me up politically so badly that in the future, when AI regulation comes up, politicians run the other way. They want to make an example out of me.”
The situation around Palantir is part of a broader shift. In February 2026 OpenAI signed a contract with the Pentagon to supply language models for the military. The deal followed Anthropic—OpenAI’s chief rival—walking away from talks after refusing to lift restrictions on mass surveillance and autonomous weapons.
The Trump administration in response designated Anthropic a supply-chain risk and ordered its tools wound down within six months. OpenAI took the vacated spot.
The full text of the Pentagon agreement was not disclosed. Brad Carson, a former general counsel of the US Army, commenting on excerpts and contractual language released by OpenAI, said:
“They are trying to blind you with complex legal terms that ordinary people understand quite differently. Lawyers know what that means. And lawyers know it is no constraint at all.”
A partial truth
Alex Karp does not try to seem nice. He does not speak the language of “innovation” and “transformation”: his rhetoric revolves around global rivalry and technological dominance. He believes the West is in a race with China that will set the balance of power for generations.
In an extended essay an analyst writing as MachineSovereign describes Palantir not as the West’s saviour but as “the infrastructural layer through which the state increasingly sees, coordinates, decides, and acts.” Formal institutions keep their powers: they authorise decisions, speak in public and uphold symbolic legitimacy. But the operational layer is shifting into technical infrastructure that determines what the state is able to see, analyse and use for decision-making.
Karp’s supporters respond that the world is moving this way regardless. Spurning such systems will not stop their development—only hand the initiative to those who will build similar tools without regard for human rights, transparency and public oversight. In this logic the question is no longer whether such platforms will appear, but who will control them and in the interests of which political systems they will work.
The palantír in Tolkien is an instrument that does not lie outright but shows only part of reality. Whoever has the stronger will can impose their own picture of the world on others.
Palantir, Anduril, Mithril, Erebor, Narya—Silicon Valley long ago turned Middle-earth into a catalogue of brands for defence and technology start-ups.
Tolkien would likely have greeted this without enthusiasm. He deeply mistrusted industrialisation and the concentration of power—motifs that run through all his work. He wrote about a world in which danger lay not in the power of weapons but in a monopoly on knowledge. The palantíri were perilous not because they showed falsehood, but because they showed a selective truth: the stone’s owner decided which slice of reality the onlooker would see.
Modern data-analysis platforms are gradually changing the very mechanism of governance. Who sees threats first, who sets priorities, who wins the right to interpret reality for everyone else—such questions are moving from politicians’ offices to contractors’ server rooms. In the AI era you do not need to forbid access to information. It is enough to decide what people should see.
Text: Sasha Kosovan
