Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New This Week
Published today. The Guardian broke the story on 22 March 2026: the Financial Conduct Authority (FCA) has awarded Palantir a contract to run its AI system Foundry against the regulator’s internal intelligence data lake.
Changelog
| Date | Summary |
|---|---|
| 22 Mar 2026 | Initial publication. |
The Financial Conduct Authority has given Palantir access to some of the most sensitive financial intelligence in the British state. Case files marked “highly sensitive.” Information on problem firms. Reports from lenders about proven and suspected fraud. Consumer complaints, phone recordings, emails, social media trawls. The full operational picture of how the FCA detects financial crime.
This is a three-month trial paying more than £30,000 a week. If it succeeds, it will almost certainly lead to a full procurement. And the contractual protections, while genuine, don’t address the problem that actually matters.
What the FCA is trying to do
The case for this project is real. Financial crime data is seriously under-exploited. The FCA sits on an enormous corpus of intelligence about fraud, money laundering, and insider trading, and current analytical tools aren’t extracting the value from it. Palantir’s Foundry system is genuinely capable AI infrastructure – that’s not the question.
The FCA wants to use that data to find patterns it’s currently missing. Better detection of money laundering. Earlier identification of problem firms. That’s a legitimate ambition. Regulators using AI to stay ahead of financial criminals is not an inherently bad idea.
Only one other competitor was considered for the contract. Whether that reflects a genuine market assessment or a procurement process that was too narrow is worth scrutinising, but it’s a separate issue.
The protections are real – and they’re not enough
The FCA has put serious controls in place. Palantir is designated as a “data processor”, not a “data controller”. The FCA retains exclusive control over the encryption keys for the most sensitive files. The data is hosted solely in the UK. Palantir must destroy the data when the contract ends. There is no training on FCA data, and any intellectual property derived from the data is retained by the FCA.
These are substantive protections. They address the obvious risks: unauthorised access, data exfiltration, model training on sensitive material. A lawyer reading this contract would find a lot to like.
But there’s a concern the FCA itself has articulated, and a data destruction clause doesn’t touch it. An internal document quoted in the Guardian asks: “Once Palantir understands how we detect money-laundering threats, how do we know that they are ethically reliable enough not to share that information?”
That’s the right question. And it’s unanswerable by contract.
When Palantir’s engineers work through the FCA’s data lake with Foundry, they will learn things. Not just whether the system produces useful outputs – they’ll develop an understanding of which signals the FCA considers significant, which patterns it flags, which thresholds trigger investigation. The methodology is baked into what they’re asked to build.
At contract end, the data gets destroyed. The encryption keys stay with the FCA. The knowledge stays with the engineers who worked on the project. That’s not a flaw in the contract – it’s a limitation of what contracts can do. You can delete a file. You can’t delete what someone learned from it.
Prof Michael Levi, a money laundering expert at Cardiff University, put a specific version of this concern on the record: whether Palantir’s owners might tip off their friends about methodologies. That’s the sharper edge of a broader point. Financial crime detection methodology is itself intelligence. Once you understand how a regulator finds things, you understand how to avoid being found.
The pattern
This isn’t the first time Palantir has extended its footprint in the British state. It has more than £500m in UK public sector deals: a £330m NHS contract in 2023, a £240m Ministry of Defence contract in December 2025, police contracts. Now the FCA.
Each deal has been individually justified. NHS data analytics, defence intelligence, financial crime detection – there are real use cases at each step. But the aggregate picture is a single US company building deep operational knowledge of British state infrastructure across health, defence, and now financial regulation. That pattern is worth naming explicitly, because individual procurement decisions don’t capture it.
The sovereignty question
Palantir was co-founded by Peter Thiel, a significant Trump donor. The company’s technology has been deployed by the Israeli military and in Trump’s ICE immigration enforcement operation. MPs have previously described Palantir as “highly questionable” and “ghastly,” and raised “reports of serious allegations of complicity in human rights violations and the undermining of democratic processes.”
Palantir’s response is that its work with the NHS led to roughly 99,000 extra operations, helped UK police tackle domestic violence, and that it “takes a rigorous approach to respecting human rights.” Those aren’t nothing.
But the context matters. The UK is procuring this contract in a period of genuine uncertainty about the US-UK relationship. The current US administration has shown willingness to treat allied intelligence as leverage. A US company with close ties to that administration now has – or will shortly have – a detailed operational understanding of how British financial regulators detect money laundering and insider trading.
The FCA’s legal protections don’t change the counterparty. Palantir is a US company, subject to US jurisdiction, with ownership and board relationships that sit squarely inside the current US political orbit. Christopher Houssemayne du Boulay, a partner at Hickman & Rose, has flagged the deal as raising very significant privacy concerns. But the concern isn’t only privacy. It’s whose hands the methodology ends up in.
The question that wasn’t asked
The FCA considered using dummy or scrambled data for the trial and decided only real data would produce a worthwhile test. That’s probably true – anonymised data often destroys the signal you’re trying to find. The real data question was answered correctly.
The question that doesn’t appear to have been seriously asked is: does it have to be Palantir?
The argument for AI-assisted financial crime detection stands regardless of counterparty. There is no inherent reason this capability has to sit with a US company that has this particular ownership structure and this particular geopolitical exposure. The UK has a growing AI sector. European alternatives exist. A joint-venture structure retaining UK control of the underlying methodology is conceivable.
The FCA’s framing – we need real data for a real test – is correct. The implied next step – and therefore it must be Palantir – doesn’t follow automatically.
What better looks like
This isn’t a call to abandon the project. Under-exploiting financial crime intelligence is a real problem. AI can genuinely help.
But if this trial works and leads to full procurement, the question of counterparty doesn’t get easier – it gets more locked in. The time to ask whether a UK-owned or European alternative is viable is before the methodology transfer has happened, not after.
At minimum, the procurement process for the full contract should explicitly address the sovereign capability question. Not as a box-ticking exercise, but as a genuine constraint: can this be built without handing the detection methodology to a company with this ownership and this jurisdiction?
The FCA’s protections assume the risk is data leakage. The actual risk is knowledge transfer. Those are different problems, and only one of them is solved by a contract.
The data gets destroyed. The methodology doesn’t.