Regulation
Your AI Workflows Are About to Be Regulated: The EU AI Act in Plain English
Apr 22, 2026 · 9 min read
By Marcos Maceo, Founder, OpSprint
Key Takeaway
If your AI touches EU residents, you're in scope, regardless of where you're headquartered. The August 2, 2026 deadline is just over 14 weeks away, and penalties reach €35 million or 7% of global annual turnover, whichever is higher.
What Changes on August 2, 2026
The EU AI Act (Regulation (EU) 2024/1689) is already in force — it entered into force on August 1, 2024. But "in force" and "in effect" are two different things, and this is where most founders get confused.
The Act rolls out in phases. As of February 2, 2025, the prohibitions under Article 5 went live, meaning certain AI practices became illegal across the EU with no grace period. As of August 2, 2025, obligations for general-purpose AI (GPAI) models under Chapter V took effect. The big date most businesses need to prepare for is August 2, 2026: that's when the full framework, including the high-risk AI obligations under Articles 9 through 15, becomes enforceable for most operators. (High-risk systems embedded in products already governed by EU product-safety legislation get a further year, until August 2, 2027.)
If you're deploying AI-powered workflows today, you have roughly 14 weeks to understand where you stand. Starting that review after August 2 means you're already out of compliance.
Who's Actually in Scope (Including You)
The most consequential thing about the EU AI Act isn't what it bans — it's who it covers. Article 2 of the Act establishes extraterritorial scope using the same logic as GDPR: if your AI system is used in the EU, or its output affects people in the EU, the regulation applies to you. Your company's physical location is irrelevant.
Three triggers pull a non-EU business into scope. First: your AI system is placed on the EU market — meaning it's commercially available to EU customers. Second: the output of your AI system is used in the EU, even if your company and your servers are in the US, Singapore, or Dubai. Third: you're a provider of a general-purpose AI model that EU developers integrate into their own products. If any of these apply, you're in scope.
The GDPR parallel is intentional and instructive. Many US founders learned that "we don't have EU offices" wasn't a defense when GDPR came into force. The same reasoning applies here. If a French startup uses your AI-powered hiring tool, or a German agency runs your lead-scoring workflow on their EU client data, you are subject to the Act.
Article 22 adds one more obligation for non-EU providers of high-risk AI systems placed on the EU market: before placing the system on the market, you must appoint an Authorised Representative established in the EU. This representative is your legal point of contact for EU market surveillance authorities. Think of it as the EU equivalent of having a registered agent: administrative, but failing to appoint one is itself a compliance violation.
The 4 Tiers in Plain English
Prohibited practices (Article 5). These are outright bans; no compliance path exists. They include AI systems that use subliminal manipulation to influence behavior, social scoring (by public and private actors alike), real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and emotion recognition in workplaces and schools. If you're running any of these, you need to kill them now. The deadline for this category was February 2, 2025; it has already passed.
High-risk AI (Article 6 + Annex III). These systems can continue operating but require significant compliance work — risk management, data governance, technical documentation, logging, human oversight, and accuracy standards. High-risk categories cover areas like hiring and workforce management, credit scoring, essential services access, law enforcement, and critical infrastructure. A practical SMB example: if you use an AI tool to screen CVs or rank job applicants, that workflow is high-risk under Annex III, §4.
Limited-risk AI (Article 50). These systems must meet transparency obligations: primarily, users must know they're interacting with AI. The clearest example is a customer support chatbot. You don't need to build a full risk management system, but you do need to tell users upfront that they're talking to a machine; a minimal sketch of what that disclosure can look like follows the tier list below. Failing to disclose is a violation even if the AI itself is benign.
Minimal-risk AI. The vast majority of AI tools fall here: spam filters, inventory forecasting, product recommendation engines, content spell-checkers. No tier-specific obligations apply, though the Act encourages voluntary codes of conduct (Article 95). You can keep using these without formal compliance procedures.
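To make the Article 50 duty concrete, here is a minimal sketch of a chatbot disclosure. The Act requires that users know they're talking to AI, but it does not prescribe wording or implementation; the function name and message text below are our own illustration, not anything the regulation specifies.

```python
# Minimal sketch of an Article 50-style disclosure for a support chatbot.
# The Act requires that users know they are interacting with AI; it does
# not prescribe wording. Function name and message text are hypothetical.

AI_DISCLOSURE = (
    "You're chatting with an AI assistant. "
    "Ask for a human agent at any time."
)

def start_chat_session(user_id: str) -> list[dict]:
    """Open a chat transcript with the AI disclosure as the first message."""
    return [
        {"role": "system", "text": AI_DISCLOSURE, "shown_to_user": True},
    ]

session = start_chat_session("user-123")
assert session[0]["shown_to_user"], "disclosure must be visible before any AI reply"
```

The design point: the disclosure is the first message in every session, not a line buried in your terms of service. As the next section covers, a ToS mention doesn't satisfy an affirmative disclosure duty.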
What Most Founders Get Wrong
"I just call OpenAI's API — the regulation is their problem." OpenAI, Anthropic, and Google are subject to GPAI obligations for their models. But as an operator building a system on top of those models, you are responsible for how that system is deployed. The Act distinguishes between "providers" (who build AI systems) and "deployers" (who use them in practice). If you configure the system, set the prompts, define the use case, and deploy it to end users, you are a provider under Article 3. Wrapping someone else's model doesn't change your classification.
"We don't sell to EU customers." As covered above, the threshold isn't where you sell — it's where the output is used. If a single EU resident's employment application goes through your AI hiring screen, or a single EU-based business uses your AI sales tool on their contacts list, you're in scope. You don't have to actively sell to the EU for the Act to reach you.
"We're too small to be a target." The Act contains SMB carve-outs — Article 16 and Recital 97 acknowledge that compliance burdens should be proportionate. But proportionate doesn't mean exempt. The carve-outs reduce documentation complexity, not legal exposure. Micro-enterprises are still required to meet the substantive obligations for prohibited and high-risk AI.
"A disclaimer in our terms of service is enough." Terms of service can't waive your obligations as a provider or deployer. The Act imposes affirmative obligations — you must maintain logs, conduct risk assessments, implement oversight mechanisms — that a disclaimer cannot substitute for. A disclaimer tells users what you disclaim. The Act tells you what you must actually do.
"It's just for AI companies." The Act applies to anyone who deploys an AI system — not just companies that build AI. If you use an off-the-shelf AI tool to automate a hiring decision, a credit evaluation, or a benefits determination, you are a deployer with obligations under Chapter III, Section 3. "We just use the tool" is not a defense.
A 5-Minute Self-Check
Before you can prioritize compliance work, you need to know which tier your specific workflows fall into. The risk classification isn't always obvious — the same underlying technology (say, a scoring model) can be Prohibited, High-Risk, or Minimal depending on what it's scoring and who the output affects.
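To see why context drives the tier, consider this deliberately simplified sketch. It is a coarse caricature of the Article 5 and Annex III logic, not the rules themselves, and the category strings are hypothetical; it exists only to show that the same scoring model changes tier when its target changes.

```python
# Deliberately simplified illustration: the same "scoring model" lands in
# different tiers depending on what it scores and who the output affects.
# These rules are a coarse caricature of Article 5 / Annex III, not the Act.

def classify(scoring_target: str, affects_people: bool) -> str:
    if scoring_target == "social_behaviour":
        return "PROHIBITED (Art. 5)"        # social scoring: banned outright
    if scoring_target in {"job_applicants", "creditworthiness"} and affects_people:
        return "HIGH-RISK (Annex III)"      # hiring, credit: compliance duties
    return "MINIMAL"                        # e.g. demand forecasting

print(classify("social_behaviour", True))   # PROHIBITED (Art. 5)
print(classify("job_applicants", True))     # HIGH-RISK (Annex III)
print(classify("product_demand", False))    # MINIMAL
```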
After reviewing the Act's Annex III categories and Article 5 prohibitions, we built a free classification tool so founders can run that check without needing to read 87 pages of regulation. It walks through your workflow category, output type, and who the decision affects — and returns a tier with the relevant article references.
Run it here: EU AI Act Risk Checker →
For deeper context on which specific workflow types land in the high-risk bucket, see our companion post: Which of Your Workflows Just Became 'High-Risk' Under the EU AI Act? If you're a US-based founder wondering whether this even applies to you, read: What US Founders Get Wrong About the EU AI Act.
Penalties
The enforcement framework under Article 99 is tiered by violation type, and the numbers are large enough to matter even for SMBs.
Violations of the prohibited practices under Article 5 carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. High-risk violations — failure to meet documentation, oversight, or data quality requirements — carry up to €15 million or 3% of turnover. Providing false or misleading information to authorities reaches up to €7.5 million or 1% of turnover.
For SMBs and startups, the Act provides some relief: under Article 99(6), each fine for an SME is capped at the lower of the fixed amount and the turnover percentage, rather than the higher. That doesn't mean zero exposure. At 3% of turnover, even a €2M-revenue company faces up to a €60,000 fine for a documentation failure.
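If you want to run that arithmetic on your own numbers, here is a minimal sketch of the Article 99 caps as described above. It is a simplified model of the brackets quoted in this section, not a penalty calculator; actual fines depend on the violation and the regulator.

```python
# Sketch of the Article 99 fine caps discussed above. Under Article 99(6),
# an SME's fine is capped at the lower of the fixed amount and the turnover
# percentage; for other operators the cap is the higher. Simplified model.

def fine_cap(fixed_eur: float, pct: float, turnover_eur: float, is_sme: bool) -> float:
    pct_amount = pct * turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# High-risk documentation failure (EUR 15M / 3% bracket) for a EUR 2M-revenue SME:
print(fine_cap(15_000_000, 0.03, 2_000_000, is_sme=True))       # 60000.0
# Same violation bracket at a EUR 1B-turnover enterprise:
print(fine_cap(15_000_000, 0.03, 1_000_000_000, is_sme=False))  # 30000000.0
```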
Enforcement is handled by national market surveillance authorities in each EU member state, coordinated by the European AI Office at the Commission level. Early enforcement focus will likely follow the GDPR pattern: high-profile cases first, then sector-by-sector sweeps. But private complaints from EU residents can trigger investigations without waiting for a regulator sweep.
The Next 90 Days: A Concrete Readiness Plan
With August 2 approaching, the practical question is: what do you do first? Having worked through the Act's compliance obligations across Articles 9 through 16, here's the sequence we think makes sense for most SMBs.
Weeks 1-2: Inventory. List every AI-powered tool or workflow in your stack. Include anything that automates a decision, generates output used to make a decision, or processes data about people. Don't forget tools embedded in other platforms: the AI inside your ATS, your CRM scoring model, your email personalization engine.
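One way to keep the inventory honest is to give every entry the same fields. A hypothetical starting schema (the field names are ours, not terms from the Act):

```python
# Hypothetical inventory schema for the weeks 1-2 audit. Field names are
# our suggestion, not terminology from the Act; extend for your own stack.

from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                     # e.g. "ATS CV screener"
    vendor: str                   # who supplies the model or tool
    our_role: str                 # "provider" or "deployer"
    embedded_in: str              # host platform, if any (ATS, CRM, ...)
    decision_affected: str        # what the output decides or influences
    processes_personal_data: bool
    eu_exposure: bool             # output used in the EU, or affects EU residents?

inventory = [
    AISystemRecord("ATS CV screener", "ExampleVendor", "deployer",
                   "ATS", "shortlisting job applicants", True, True),
]
```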
Weeks 3-4: Classify. For each item in your inventory, determine which tier it falls into, using the Risk Checker or by working through Annex III and Article 5 directly. Flag anything that looks like it could be prohibited or high-risk.
Week 5: Kill or pause anything prohibited. If any of your workflows fall under Article 5, shut them down before you do anything else. There is no compliance path for prohibited AI — the only answer is discontinuation.
Weeks 6-10: High-risk compliance work. For high-risk workflows, begin implementing the required measures: a risk management system (Article 9), data governance documentation (Article 10), technical documentation (Article 11), logging and audit trails (Article 12), instructions and transparency for deployers (Article 13), meaningful human oversight (Article 14), and accuracy, robustness, and cybersecurity monitoring (Article 15).
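Logging is usually the quickest of these to stand up. Here is a minimal sketch of a per-decision audit record in the spirit of Article 12; the field set is our guess at a sensible baseline, not a list mandated by the Act.

```python
# Minimal per-decision audit record in the spirit of Article 12 logging.
# Fields are a baseline we consider sensible, not mandated by the Act.

import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 human_reviewer: str | None) -> dict:
    """Build one audit entry for an AI-assisted decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs, to avoid duplicating personal data:
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": human_reviewer,  # None = flag for oversight review
    }

entry = log_decision("cv-screen-v3", {"applicant_id": 4411}, "shortlist", "m.perez")
print(json.dumps(entry, indent=2))
```

Hashing inputs keeps the trail verifiable without turning your audit log into a second copy of applicants' personal data.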
Weeks 11-12: Disclosure and documentation. Ensure Limited-risk workflows have proper user disclosure under Article 50. Finalize your technical documentation. If you're a non-EU provider, appoint your Authorised Representative under Article 22.
If you want help mapping your workflows to these obligations and building a prioritized 90-day plan, that's exactly the kind of structured analysis we do at OpSprint. We won't replace your EU counsel — but we can make sure you walk into that conversation with a clear inventory, a tier classification, and a gap list.
Run the free EU AI Act Risk Checker → — classify your workflows in 5 minutes, no account required.
For more on the EU AI Act, visit the OpSprint EU AI Act Hub.
Pragmatic readiness guidance — not legal advice. For specific cases, consult EU counsel.
Need help applying this in your own operation? Start with a call and we can map next steps.