“It’s in London” isn’t a privacy strategy for AI

Using a US LLM in a London region? UK GDPR may still treat it as a restricted transfer. Learn the UK–US Data Bridge limits, CLOUD Act risk, and what to do.

Duncan Anderson
2026-02-25

A lot of UK teams are starting to ask whether they should be running AI on UK or European infrastructure rather than US-controlled systems. In this series of posts we're going to cover the practical options, from managed UK cloud to running open-weight models on dedicated compute.

But before the "how", there's a "why" worth laying out properly.

It's easy to dismiss Sovereign AI as over-caution. However, the case is more concrete than most people realise.

If you're using a US-controlled LLM on data that includes personal information, there are four specific legal dynamics that should make you at least pause to think. Not in a theoretical way, but in a "your compliance assumptions may have a gap in them" way.

A lot of teams using US-controlled LLMs carry a comforting assumption:

“The model doesn’t store our data — it just processes it (it’s stateless). So privacy risk isn’t a big deal.”

If your prompts include personal data (names, emails, chat transcripts, call notes), that assumption can be wrong.

Here are four reasons why.

1) Your “stateless” prompts probably aren’t stateless

Many providers retain inputs/outputs for a period (often for abuse monitoring, safety, or compliance), commonly around 30 days.

That matters because once data is retained, it can be preserved (e.g., via legal holds) and potentially disclosed under legal compulsion. This isn’t purely theoretical — there have already been public disputes about preservation obligations in the OpenAI / New York Times litigation.

2) US jurisdiction can reach across borders

If the LLM provider is US-owned or US-controlled, there are legal routes that may require it to produce data within its control, even if that data is stored in a UK/EU region and even if retention is “only” short-term.

3) Disclosure of personal data can become a UK GDPR problem

If your vendor can be compelled to disclose personal data it holds, that can put you (the UK business) in a difficult position under UK GDPR — especially if you haven’t set up the right transfer mechanism and safeguards for that processing.

4) The “stable transfer” story is under strain

The political and legal foundations that make UK→US data flows relatively straightforward are not guaranteed to stay stable.

Independent oversight bodies in the US have come under pressure, and the executive orders that underpin parts of the transfers story can be amended or withdrawn faster than legislation. The practical risk is not “everything breaks overnight” — it’s that what felt easy to justify yesterday becomes harder to justify tomorrow, especially for sensitive use cases.

So what should you do about it?

If you’re using a US-controlled LLM on personal (especially sensitive) data, don’t rely on “it’s stateless” or “it’s in a UK data centre” as your privacy strategy.

Start with three quick actions:

  1. Audit what you’re actually sending. Pull up your API calls or integration code. Are you sending names, emails, support tickets, chat transcripts, call summaries, health info, identifiers?
  2. Check retention properly. Read the provider’s actual policy/terms for API data retention and logging — not just a marketing page. Look for things like abuse monitoring logs, safety review, legal holds, and whether zero data retention exists (and what it takes to qualify). Unless you’ve explicitly negotiated otherwise, assume your inputs and outputs are being stored for at least some period of time.
  3. Watch the parts of the framework that can move quickly. If your risk posture depends on “UK→US is stable”, keep an eye on the underlying safeguards/oversight story — because it affects how comfortable regulators, lawyers and compliance teams can be.
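As a quick illustration of step 1, here’s a minimal sketch of a pre-send audit. The function name, patterns, and example values are all hypothetical (not from any provider’s SDK); a real audit would use a proper PII-detection tool plus human review, but even crude regexes can reveal surprises in what your integration actually sends:

```python
import re

# Rough patterns for common personal-data signals (illustrative, not
# exhaustive -- a production audit needs a dedicated PII-detection tool).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"),
}

def audit_prompt(prompt: str) -> dict[str, list[str]]:
    """Return any personal-data-like matches found in an outgoing prompt."""
    return {
        label: pattern.findall(prompt)
        for label, pattern in PII_PATTERNS.items()
        if pattern.findall(prompt)
    }

# Hypothetical example of what a support-ticket prompt might contain:
hits = audit_prompt("Customer jane.doe@example.com at SW1A 1AA called on 07700 900123")
```

Running this against a sample of your real API payloads is often the fastest way to discover that “we don’t send personal data” doesn’t survive contact with your logs.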

In the next post, we’ll cover more strategic actions (minimisation patterns, retention controls, contract routes, and “sovereign compute” options). Future posts will dive into the technological options and provide practical guidance based on our hands-on experience of doing this at Barnacle Labs. For now, you can get moving on understanding your current exposure.

If you’re still with me: the rest of this post is the reference pack behind those four points. It’s deliberately skimmable — each section maps to one of the four reasons above.

Reason 1: “Stateless” isn’t stateless (retention + legal holds)

Even when providers say they don’t train on your API data by default, many still retain inputs/outputs in logs (abuse monitoring / safety / compliance) for a period.

With normal file storage, encryption can limit the damage even if data is compelled or breached. AI inference removes that option. The model needs to see your prompt in plaintext to generate a response — which means the provider has readable access to whatever you send, and encryption can't function as a meaningful supplementary measure in your risk assessment.

Why this matters: “stateless inference” becomes “stored records” for at least some window, increasing the chance the data could be reviewed internally, breached, or compelled.

Even if the default is “30 days,” litigation can force longer preservation. OpenAI has discussed legal holds and handling around ChatGPT logs in ongoing litigation.

Reason 2: US jurisdiction can reach across borders

Some teams assume that choosing a US provider’s EU or UK region sidesteps transfer concerns. It can reduce operational exposure — but US jurisdiction can still matter.

The US CLOUD Act (2018) created routes for US authorities to compel certain US service providers to disclose data within their control, even when that data is stored outside the US. The situations in which this would happen are likely to be politically sensitive, and there’s no guarantee you would ever learn it had occurred.

Critically, a CLOUD Act demand doesn’t resolve your compliance position — it can’t serve as a valid legal transfer mechanism under UK GDPR. If your provider is compelled to disclose data to US authorities, that’s not “out of your hands”. It’s still your transfer, and you still need to have had the right safeguards in place beforehand.

Reason 3: UK GDPR cares about access (restricted transfers)

Under UK GDPR, a “restricted transfer” isn’t only “we copied a database to the US.”

It can also include sending personal data to an organisation outside the UK, or making it accessible to them — even if it’s “just processing” and even if it isn’t stored long-term.

The ICO spells this out clearly: transfers include both “sending” and “making accessible.”

So in AI terms: if you send customer data (names, emails, tickets, chat transcripts, call notes, medical details, etc.) to a US model provider for inference, you’re very likely in restricted transfer territory — meaning you need an appropriate transfer mechanism and safeguards.

The “easy button” (when it applies): the UK–US Data Bridge

The UK created an adequacy route for certain transfers to certified US organisations via The Data Protection (Adequacy) (United States of America) Regulations 2023, in force on 12 October 2023.

But it’s partial: it only covers transfers to US organisations that self-certify and opt into the UK Extension. ICO guidance is explicit about scope.

If your AI provider isn’t covered (or the specific contracting entity isn’t), you’ll likely need:

  • Appropriate safeguards (e.g., the UK’s International Data Transfer Agreement (IDTA)), plus
  • A transfer risk assessment (the ICO recently refreshed its guidance/terminology).

That isn’t unusual. It just means the compliance burden shifts onto contracts + risk assessment + controls — and you may need to negotiate/sign terms (which can surprise teams used to self-serve SaaS).

Reason 4: the “stable transfer” story is wobbly (EO 14086 + oversight)

The UK–US “Data Bridge” is the easiest basis on which to build a compliance story for why data transfer to a US organisation is acceptable. It leans heavily on US commitments about signals intelligence safeguards and redress, especially:

  • Executive Order 14086, signed by President Biden in October 2022, limits how US intelligence agencies can collect and use data on non-US persons, requires that surveillance be “proportionate” to a legitimate national security aim, and creates a redress mechanism (the Data Protection Review Court) through which EU and UK citizens can challenge alleged surveillance. Without these commitments, the EU and UK would likely not have agreed that US transfers were “adequate”, so EO 14086 is effectively the legal foundation the Data Bridge sits on.
  • DOJ rule establishing the Data Protection Review Court mechanism (28 CFR Part 201)

PCLOB (the Privacy and Civil Liberties Oversight Board) is explicitly named as having oversight roles tied to EO 14086.

This is where we should be concerned: in early 2025, reporting said the White House asked Democratic PCLOB members to resign, which would leave the board without a quorum. The Congressional Budget Justification for Fiscal Year 2026 states “At present, the Board has one member and is in sub-quorum status.”

EO 14086 is an Executive Order, which makes it structurally easier to amend or withdraw than legislation. There are three specific reasons it may be at risk under the current administration.

First, the order was explicitly designed to satisfy EU requirements — it's straightforward to frame as the US bending to European demands rather than negotiating from strength.

Second, it imposes constraints on US intelligence agencies and creates a redress mechanism for EU and UK citizens. That runs directly counter to a preference for fewer restrictions on government agencies and less bureaucratic oversight.

Third, some critics argue that the limitations on signals intelligence collection hamper national security operations. The current administration may simply prefer to give agencies broader latitude.

None of these guarantees repeal. But they make it a live risk rather than a theoretical one — and one that could materialise quickly, given how fast EO changes can move.

Weakening independent oversight can increase pressure on transfer frameworks and make transfer risk assessments harder to justify, especially for high-sensitivity data.

What could change — and what would that mean operationally?

The point isn’t that UK→US AI use becomes impossible overnight. The risk is that it becomes harder to justify and harder to operate, pushing organisations to do more work to create adequate arrangements:

  • Negotiate specific contract terms with providers
  • Document a data transfer risk assessment
  • Implement stronger controls (minimise what you send, reduce/disable retention where possible, tighten access, segment sensitive workflows)
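As a rough sketch of the “minimise what you send” control, a redaction pass can strip obvious identifiers before a prompt leaves your boundary. The helper below is hypothetical and deliberately simplistic (two regexes and placeholder tokens I’ve invented for illustration); a real deployment would use a dedicated PII-detection tool, allow-lists, and reversible tokenisation where the answer needs the identifier back:

```python
import re

# Illustrative redaction pass: swap obvious identifiers for placeholders
# before the prompt is sent to an external model provider.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"), "[PHONE]"),
]

def minimise(prompt: str) -> str:
    """Replace common personal identifiers in an outgoing prompt."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

safe = minimise("Ring jane.doe@example.com back on 07700 900123")
# safe == "Ring [EMAIL] back on [PHONE]"
```

The design point is where the redaction runs: inside your infrastructure, before the transfer happens — which is exactly what makes it useful in a transfer risk assessment.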

Disclaimer: I’m not a lawyer and this post isn’t legal advice. It’s general information to help you spot issues—get qualified advice for your specific facts.
