Regulation
What US Founders Get Wrong About the EU AI Act
Apr 22, 2026 · 8 min read
By Marcos Maceo, Founder, OpSprint
Key Takeaway
The EU AI Act reaches US companies through the same mechanism as GDPR: extraterritorial application to output "used in the Union." If your AI system's output lands in the EU, you're in scope, regardless of where you're incorporated.
The GDPR Parallel
In 2018, a lot of US founders assumed GDPR didn't apply to them because they were incorporated in Delaware, hosted on AWS us-east-1, and had never opened a European office. Most of them were wrong. GDPR applied the moment they processed personal data about EU residents — regardless of where the company was physically located.
The EU AI Act follows the same logic. Article 2 of the Act establishes that it applies to providers who "place on the market or put into service AI systems" in the EU, and to providers "established in a third country" where the output of the AI system is "used in the Union." The phrase "used in the Union" is the mechanism. It doesn't matter where your company is headquartered, where your servers sit, or where you're incorporated. It matters where the AI's output lands.
The EU designed this intentionally. Both GDPR and the AI Act reflect the EU's consistent position that organizations affecting EU residents must meet EU standards — whether or not those organizations have a physical presence in Europe. US founders who lived through the GDPR compliance wave already know how this plays out. Those who didn't are about to learn.
"I Don't Sell to the EU" — The Trap
The "I don't sell to the EU" defense fails because the Act's scope isn't defined by sales transactions — it's defined by where AI output is used or felt. Here are five scenarios where a US-only company gets pulled in scope without ever marketing to EU customers.
You sell a B2B SaaS tool to US companies that have EU offices. Your US customer deploys your AI-powered HR screening tool across their global workforce. When that tool evaluates job applicants in their Frankfurt office, the output is used in the EU. You're in scope under Article 2, even though your contract was with a US entity.
You build a recruiting tool used by an EU-based agency. You may never have spoken to an EU customer directly, but if an EU agency signs up for your platform and uses it to screen candidates, your system is being deployed in the EU market. You're a provider whose system has been put into service in the Union.
EU residents use your US-facing product from inside the EU. You built a consumer app for the US market, but nothing stops someone in Berlin from signing up. If your AI makes or substantially influences decisions about users while they're in the Union, the output is used in the Union, and the Act may apply to those interactions.
You sell to US government contractors who deploy globally. Systems used exclusively for military, defence, or national security purposes are carved out under Article 2(3), but contractors run plenty of civilian-side tooling too: HR screening, logistics, back-office automation. If your AI system flows into one of those environments and its output is used in the Union, the provider obligations run back to you.
Your API is consumed by EU developers. If you offer an AI API and EU-based developers build applications on it that are deployed in the EU, you may qualify as a provider of a high-risk AI system whose output enters the Union. The downstream use shapes your classification.
When Your AI Output 'Enters' the EU
The operative concept under Article 2 is that the AI system's output is "used in the Union" — not that the system is sold, marketed, or licensed to EU entities. This distinction matters because it catches use cases that don't look like sales at all.
Consider a US-based company that builds an AI-powered employee performance monitoring system. They sell it only to US companies. But one of their customers — a US mid-market firm — has a European subsidiary. The performance scores generated by the US tool flow into HR decisions for employees in Amsterdam and Madrid. The output is used in the Union. The US provider is in scope.
Or consider a US company that builds an AI content moderation API. They sell to US platforms only. But those platforms have EU users, and the moderation decisions affect EU residents' access to the service. The moderation output is used in the Union. The API provider may carry compliance obligations under the Act.
Read Article 2's scope provisions and the relevant question stops being "where do we sell?" and becomes "where does the output go?" Those are often different answers.
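If it helps to see that question as a data-flow problem, here is a minimal sketch in Python. Everything in it is hypothetical: the Consumer model and output_reaches_eu helper are invented for illustration, not a legal test. But they capture the exercise: follow the output hop by hop and ask whether any hop lands in the Union.

from dataclasses import dataclass, field

@dataclass
class Consumer:
    """A downstream party that receives your AI system's output (hypothetical model)."""
    name: str
    in_eu: bool                          # is the output used in the Union at this hop?
    passes_to: list["Consumer"] = field(default_factory=list)

def output_reaches_eu(consumer: Consumer) -> bool:
    """Walk the output chain; True if any hop uses the output in the EU."""
    if consumer.in_eu:
        return True
    return any(output_reaches_eu(downstream) for downstream in consumer.passes_to)

# The performance-monitoring example above: a US customer forwards
# scores into HR decisions at its Amsterdam subsidiary.
us_customer = Consumer(
    name="US mid-market firm",
    in_eu=False,
    passes_to=[Consumer(name="Amsterdam subsidiary HR", in_eu=True)],
)
print(output_reaches_eu(us_customer))  # True -> the output is used in the Union

The point isn't the code; it's that "where does the output go?" is answerable with the same rigor you'd apply to any dependency graph.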
The Authorised Representative Requirement (Article 22)
If you're a non-EU provider whose AI systems are used in the EU, Article 22 requires you to appoint an Authorised Representative established in an EU member state before you place a high-risk AI system on the EU market.
The Authorised Representative is your legal point of contact in the EU. They must be named in the technical documentation accompanying your AI system, registered with the relevant national authority, and empowered to cooperate with EU market surveillance authorities on your behalf. Think of it as the EU equivalent of a registered agent — administrative overhead, but a hard legal requirement.
Failing to appoint an Authorised Representative when one is required is itself a violation of the Act, separate from any underlying product compliance failure. It signals to regulators that you're not taking EU compliance seriously, which tends to attract enforcement attention rather than deflect it. For high-risk systems in particular, this is not an optional step.
Article 22's appointment duty is tied to high-risk systems, so providers of minimal-risk or limited-risk systems generally fall outside it. But that's only comforting if your classification is right. Use the Risk Checker to confirm your tier, then assess whether Article 22 applies to your specific situation.
5 "I'm Exempt" Claims Debunked
"My US company has no EU entity, so EU law can't reach me." This conflates jurisdictional reach with enforcement mechanism. The Act explicitly applies to third-country providers — "third country" is EU regulatory language for non-EU. Article 2(1)(c) covers providers established outside the Union whose systems' outputs are used in the Union. No EU entity is required for the obligation to exist. Enforcement is a separate question — but EU market surveillance authorities can restrict or prohibit the import and use of non-compliant AI systems, effectively blocking your access to EU markets regardless of where you're incorporated.
"We only serve US customers." As the scenarios above illustrate, "US customers" doesn't mean "no EU impact." Your US customers may have EU employees, EU subsidiaries, EU users, or EU-based clients who are affected by your AI system's output. The Act focuses on where the output is used and who it affects — not on the nationality or location of your direct customer.
"We're just an SDK or API provider — we don't deploy AI systems directly." The Act's definitions of "provider" and "deployer" are designed to catch the supply chain. Under Article 3(3), a provider is anyone who develops an AI system or has it developed and places it on the market or puts it into service. If EU developers integrate your SDK or API into their applications, and those applications constitute high-risk AI systems under Annex III, the Act may treat you as a provider in that chain. The fact that someone else wrote the final application doesn't automatically shield you from provider obligations.
"We use OpenAI's API — they're responsible for the AI compliance." This is a partial truth that leads to a dangerous conclusion. OpenAI is subject to GPAI (General-Purpose AI) obligations under Chapter V of the Act for their model. But you — as the entity that configures the system, defines the use case, sets the prompts, and deploys it to end users — are a provider under Article 3. The GPAI provider covers the model layer; you cover the system layer. These obligations don't substitute for each other. "We just call an API" is not a compliance defense for the downstream system you've built on top of it.
"Enforcement against US companies is theoretical — the EU can't actually do anything." The EU has several practical enforcement levers that don't require a US court to cooperate. First, the European AI Office and national market surveillance authorities can prohibit the import and commercial use of non-compliant AI systems within EU territory — effectively a market access ban. Second, EU-based partners, resellers, or distributors of non-compliant providers can face direct liability, creating commercial pressure that US companies feel immediately. Third, private complaints from EU residents to national data protection authorities (which share enforcement competence with AI Act authorities in some contexts) can trigger investigations. The enforcement path is longer than for domestic companies — but it's not theoretical, and the market access consequences can be severe.
A 10-Minute Self-Audit
If you're a US-based founder uncertain whether the EU AI Act applies to your operations, work through these questions before assuming you're outside scope. A short code sketch of the same checklist follows the steps below.
Step 1: Map your customer base's geographic footprint. Do any of your current customers have EU offices, EU subsidiaries, or EU-based end users? If yes, your AI system's output may be reaching EU territory even if your contracts are with US entities.
Step 2: Trace the output chain. For each AI-powered workflow in your product, ask: where does this output ultimately get used, and by whom? If the output influences a decision about an EU resident — in hiring, credit, access to services, or performance — you may be in scope.
Step 3: Classify your workflows. Use the EU AI Act Risk Checker to determine whether any of your workflows fall into Annex III's high-risk categories or Article 5's prohibited list. Start with the workflows that touch people directly — hiring, scoring, monitoring, access decisions.
Step 4: Check the Article 22 trigger. If any of your workflows are classified as high-risk and you have reason to believe they're used in the EU (directly or through your customers), you'll need to evaluate whether you require an Authorised Representative. This is a legal determination, but the first step is knowing whether your high-risk workflows reach EU territory.
Step 5: Document what you found. Regardless of the outcome, document your analysis. A written record that you investigated the Act's applicability, assessed your workflows, and reached a reasoned conclusion is meaningful context if a regulator ever asks. "We never thought about it" is a worse position than "we assessed it and concluded X for the following reasons."
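If you want to run the five steps against an actual inventory, here is a minimal sketch, assuming a hypothetical Workflow record you'd fill in yourself. The fields, names, and branches are illustrative, and the output of the Article 22 branch is a flag to take to counsel, not a legal determination.

from dataclasses import dataclass

@dataclass
class Workflow:
    """One AI-powered workflow in your product (hypothetical inventory record)."""
    name: str
    output_used_in_eu: bool   # Steps 1-2: does any output land in the Union?
    high_risk: bool           # Step 3: Annex III category (per your Risk Checker result)
    prohibited: bool = False  # Step 3: Article 5 practice?

def audit(workflows: list[Workflow]) -> None:
    """Steps 4-5: flag Article 22 triggers and note what to document."""
    for wf in workflows:
        if wf.prohibited:
            print(f"{wf.name}: STOP - possible Article 5 prohibited practice")
        elif wf.high_risk and wf.output_used_in_eu:
            print(f"{wf.name}: high-risk with EU reach -> assess Article 22 with counsel")
        elif wf.output_used_in_eu:
            print(f"{wf.name}: in scope -> confirm tier and transparency duties")
        else:
            print(f"{wf.name}: no EU reach found -> document how you verified that (Step 5)")

audit([
    Workflow("resume screening", output_used_in_eu=True, high_risk=True),
    Workflow("marketing copy drafts", output_used_in_eu=False, high_risk=False),
])

Even the last branch is doing Step 5's work: "no EU reach" is only useful if you can show how you checked.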
For the full tier framework and penalty structure, see: The EU AI Act in Plain English. For a workflow-by-workflow classification guide with Annex III references, see: Which of Your Workflows Just Became 'High-Risk'?
If you want a structured approach to completing this audit and building a compliance gap list, the AI Readiness assessment is a fast way to map your current stack, and a Sprint engagement can help you build the compliance documentation and oversight structures the Act requires.
Run the free EU AI Act Risk Checker → 10 minutes to know whether you're in scope and which tier your workflows land in.
For more on the EU AI Act, visit the OpSprint EU AI Act Hub.
Pragmatic readiness guidance — not legal advice. For specific cases, consult EU counsel.
Need help applying this in your own operation? Start with a call and we can map next steps.