Regulation
Which of Your Workflows Just Became 'High-Risk' Under the EU AI Act?
Apr 22, 2026 · 11 min read
By Marcos Maceo, Founder, OpSprint
Key Takeaway
High-risk classification isn't a verdict — it's a checklist. Knowing which of your workflows lands in Annex III tells you exactly what documentation, logging, and oversight you need to add before the August 2 deadline.
How the EU Defines 'High-Risk'
The EU AI Act doesn't classify AI systems by how sophisticated they are — it classifies them by what they do and who they affect. Article 6 sets the test: an AI system that falls into one of the eight areas listed in Annex III is presumed high-risk, unless the Article 6(3) derogation applies because the system does not pose a significant risk of harm to health, safety, or fundamental rights (for instance, because it only performs a narrow procedural or preparatory task).
Annex III's eight areas are: biometrics, critical infrastructure management, education and vocational training, employment and workforce management, access to essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes. Most SMBs won't touch the last three — but the employment, essential services, and education areas catch a surprising number of everyday business workflows.
One important nuance from Article 7: the European Commission can amend Annex III by delegated act — meaning the list can grow. The compliance work you do now should be built to accommodate additions, not treat the current list as fixed.
Workflow-by-Workflow Tiering
The table below maps common SMB and agency workflows to their tier under the Act. Each entry references the relevant article or Annex III section. Use this as a starting inventory — your specific implementation may shift the tier based on scope and configuration.
| Workflow | Tier | Authority |
|---|---|---|
| CV screening / candidate ranking | High-Risk | Annex III §4(a) |
| Interview video analysis (non-emotion) | High-Risk | Annex III §4(a) |
| Emotion recognition in hiring | Prohibited | Article 5(1)(f) |
| Biometric categorization by protected traits | Prohibited | Article 5(1)(g) |
| Job-ad targeting by AI | High-Risk | Annex III §4(a) |
| Performance monitoring / productivity scoring | High-Risk | Annex III §4(b) |
| Credit scoring / loan decisioning | High-Risk | Annex III §5(b) |
| Lead scoring / customer profiling | Minimal (may escalate) | No Annex III match unless the score gates essential services (§5) |
| Fraud detection | Exempt (explicitly) | Annex III §5(b) carve-out |
| Customer support chatbot | Limited | Article 50 (disclosure required) |
| Support ticket routing | Minimal | No Annex III match; no Article 50 trigger |
| Content moderation | Limited (escalates) | Article 50 where users interact; may escalate if account-gating decisions are automated |
| Spam filtering | Minimal | No Annex III match |
| Inventory forecasting | Minimal | No Annex III match |
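If you want this tiering in machine-readable form (useful once you start tracking remediation work per workflow), here's a minimal sketch in Python. The workflow names and tiers mirror the table above; the `WorkflowEntry` shape is our own convenience, not anything the Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class WorkflowEntry:
    name: str
    tier: str       # "prohibited" | "high-risk" | "limited" | "minimal"
    authority: str  # the article or Annex III reference driving the tier
    notes: str = ""

# A starting inventory, taken from the table above.
INVENTORY = [
    WorkflowEntry("cv_screening", "high-risk", "Annex III 4(a)"),
    WorkflowEntry("emotion_recognition_hiring", "prohibited", "Article 5(1)(f)"),
    WorkflowEntry("credit_scoring", "high-risk", "Annex III 5(b)"),
    WorkflowEntry("fraud_detection", "minimal", "Annex III 5(b) carve-out"),
    WorkflowEntry("support_chatbot", "limited", "Article 50"),
]

# Everything that needs Chapter III, Section 2 work before the deadline:
high_risk = [w.name for w in INVENTORY if w.tier == "high-risk"]
```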
The entries below explain the reasoning for the less obvious classifications.
CV screening and candidate ranking (Annex III §4(a)). AI systems used for the recruitment or selection of candidates, including those that analyse and filter applications or evaluate candidates, are explicitly listed in Annex III §4(a). Influence, not final authority, is the critical concept: even if a human makes the final call, an AI system that generates a ranked shortlist is shaping that decision materially. This is one of the most common high-risk workflows in SMB ops stacks today.
Interview video analysis (Annex III §4(a); emotion recognition prohibited under Article 5(1)(f)). AI analysis of recorded interviews — scoring tone, language, or behavioral signals — falls under §4(a) employment decisions. Emotion recognition specifically is prohibited under Article 5(1)(f) when used in employment contexts. If your video interview platform uses emotion analysis as a scoring input, that specific feature is prohibited regardless of whether the overall workflow is high-risk.
Job-ad targeting by AI (Annex III §4(a)). Placing targeted job advertisements is named explicitly in §4(a). If your recruiting workflow uses algorithmic targeting to distribute postings — even through a third-party platform — the deployer (you) may carry compliance obligations.
Performance monitoring and productivity scoring (Annex III §4(b)). §4(b) covers systems that monitor and evaluate performance and behaviour in work relationships, and it catches a category many ops teams don't expect: AI-powered time tracking, activity monitoring, and productivity dashboards that generate scores used in employment decisions. If the output influences performance reviews, promotions, or terminations, it's high-risk.
Credit scoring (Annex III §5(b)). Any AI system used to evaluate creditworthiness — including buy-now-pay-later eligibility, invoice factoring, and B2B credit limits — is high-risk under §5(b). Fintech and SaaS billing teams building AI-assisted credit evaluation need to plan for full high-risk obligations.
Lead scoring and customer profiling. Generic lead scoring — ranking sales prospects by likelihood to convert — doesn't automatically trigger Annex III because it doesn't determine access to essential services or make employment decisions. It typically lands as Minimal. However, if your scoring model gates access to financial services, insurance pricing, or other essential services, Annex III §5 may apply and you're in High-Risk territory. The distinction matters: know what your scoring output is used to decide.
Fraud detection (exempt). Annex III §5(b) itself carves financial fraud detection out of the creditworthiness category, making this one of the few explicit exemptions written into the Act. If you're using AI purely for fraud pattern detection — not for broader creditworthiness or access decisions — you're in Minimal territory. Don't conflate fraud detection with credit scoring; they're treated differently.
Customer support chatbot (Article 50). A chatbot that answers questions, routes tickets, or provides support doesn't trigger Annex III. But Article 50 requires disclosure: users must be told they're interacting with an AI system. The disclosure obligation applies even if the chatbot never makes a consequential decision. Failing to disclose is a violation in the Limited-risk tier.
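If you control the chat frontend, the Article 50 disclosure can be as simple as a fixed first message. A minimal sketch, assuming a generic backend where you control session setup; the wording is our own, not mandated text from the Act:

```python
DISCLOSURE = (
    "You're chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

def open_session(send_message) -> None:
    # Disclose before the first user interaction,
    # not buried in a terms-of-service link.
    send_message(DISCLOSURE)
```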
Content moderation (Article 50; escalates). AI-assisted content moderation that flags or filters content generally sits in the Limited tier, with Article 50 disclosure applying where the system interacts directly with users. It escalates toward High-Risk if the moderation system makes automated decisions that gate account access or essential service access without meaningful human review. Review the output chain before assuming you're in the safe tier.
What High-Risk Actually Requires
High-risk classification doesn't mean you have to shut the workflow down — it means you have to comply with the obligations in Chapter III, Section 2 of the Act. Here's what that looks like at a high level.
Risk management system (Article 9). You must establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. This isn't a one-time assessment — it's an ongoing process that identifies and evaluates known and foreseeable risks, and tests the system against those risks before deployment and after material changes.
Data governance (Article 10). Training, validation, and test datasets must meet quality criteria: relevance, representativeness, freedom from errors, and completeness appropriate to the intended purpose. If you're using an off-the-shelf model, you need to document what you know about the training data used.
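Parts of this are automatable. A rough sketch of a dataset screening pass, assuming a pandas DataFrame and column names of your choosing; representativeness ultimately needs domain judgment, so this only covers the mechanical checks:

```python
import pandas as pd

def dataset_report(df: pd.DataFrame, protected_cols: list[str]) -> dict:
    return {
        "rows": len(df),
        # Completeness: share of missing values per column.
        "missing_by_column": df.isna().mean().to_dict(),
        # Error screening: exact duplicate records.
        "duplicate_rows": int(df.duplicated().sum()),
        # Representativeness cue: distribution over protected attributes.
        "group_counts": {
            c: df[c].value_counts().to_dict()
            for c in protected_cols if c in df.columns
        },
    }
```

Attach the report, dated and versioned, to each training run; that gives you an evidence trail to point to when documenting data governance.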
Technical documentation (Article 11 + Annex IV). Before deployment, you must prepare technical documentation covering the system's purpose, architecture, training methodology, performance metrics, limitations, and the human oversight measures in place. Annex IV provides the required structure.
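As a working checklist, the fields the paragraph above lists can be stubbed out like this. Annex IV prescribes the full required structure, so treat this as a starting skeleton rather than the official template:

```python
# Keys mirror the items named above; fill each with real content.
TECH_DOC = {
    "intended_purpose": "",          # what the system is for, and for whom
    "architecture": "",              # components, models, data flows
    "training_methodology": "",      # data sources, preprocessing, training
    "performance_metrics": "",       # accuracy figures and how measured
    "limitations": "",               # known failure modes, out-of-scope uses
    "human_oversight_measures": "",  # who can intervene, and how
}

def incomplete_sections(doc: dict) -> list[str]:
    """Return the sections still missing before deployment."""
    return [k for k, v in doc.items() if not v]
```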
Logging and audit trails (Article 12). High-risk systems must automatically log events to the extent technically feasible, with logs retained for the period appropriate to the system's intended purpose — and at minimum, for the period required by applicable law.
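A minimal sketch of what that logging could look like, assuming an append-only JSON-lines file. The field set is our suggestion; the Act requires automatic event recording but doesn't prescribe a schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_event(path: str, workflow: str, inputs: dict,
              output: dict, model_version: str) -> None:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "model_version": model_version,
        # Hash inputs rather than storing raw personal data in the log.
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```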
Transparency to deployers (Article 13). Providers must ensure that high-risk systems come with instructions that tell deployers what the system does, what it doesn't do, what data it expects, and what human oversight is required.
Human oversight (Article 14). High-risk systems must be designed so that natural persons can effectively oversee the system — including the ability to understand the system's output, override it, and shut it down. "A human reviewed it" isn't sufficient if the human has no practical ability to challenge or override the AI's output.
Accuracy, robustness, cybersecurity (Article 15). High-risk systems must achieve appropriate levels of accuracy for their intended purpose and be resilient to errors, faults, and inconsistencies. The benchmarks depend on the use case.
For a practical introduction to these obligations and how the Act's tier system works overall, see our companion post: The EU AI Act in Plain English.
How to Downgrade Risk
High-risk classification is driven by what your AI system does with its output — not by the underlying model or technology. This means there are legitimate paths to reduce the risk tier of a workflow, if you're willing to change how it operates.
Scope the AI output. If your AI generates a ranked list of candidates that a human then uses to make a decision, that's high-risk. If instead the AI flags potential issues for human review — without ranking or scoring — the decision-influence is lower, and the Article 6(3) derogation for narrow procedural or preparatory tasks may take the workflow out of the high-risk tier. Reframing AI output as "input to human judgment" rather than "recommendation" changes the compliance footprint, provided the human review is genuine.
Remove the decision from AI scope. If a workflow uses AI to automate an employment, credit, or access decision, removing that final decision from the AI's scope and requiring a human to make the call independently can move the system out of the high-risk tier. The key word is "independently" — a human who rubber-stamps an AI recommendation doesn't count as human oversight under Article 14.
Add meaningful human review. For workflows where downgrading isn't practical, the quality of human oversight matters both for compliance and for the robustness of the system. Article 14 requires that oversight mechanisms enable overrides. Building a workflow where a human can challenge and reverse the AI's output — and where that override is logged — satisfies the spirit of the requirement and creates an audit trail.
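Here's a sketch of a logged override record, assuming review happens before the decision takes effect. The names, and the mandatory rationale field, are our own choices:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ReviewDecision:
    ai_recommendation: str  # e.g. "reject"
    human_decision: str     # e.g. "advance"
    reviewer: str
    rationale: str          # required: forces genuine engagement

def record_review(path: str, decision: ReviewDecision) -> str:
    entry = asdict(decision)
    entry["ts"] = datetime.now(timezone.utc).isoformat()
    entry["overridden"] = decision.human_decision != decision.ai_recommendation
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return decision.human_decision  # the human call is what takes effect
```

A high override rate in this log is also a useful operational signal: it means either the model or the review process needs attention.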
Limit the affected population. Some high-risk triggers depend on the system affecting "natural persons" in ways that produce legal or similarly significant effects. If you can restructure a workflow so that it operates only on aggregate or anonymized data — and no individual-level decision flows from it — you may exit the high-risk category entirely. This requires careful legal analysis, but it's a real design option.
A Simple Decision Tree
- Does the workflow involve biometric data, emotion recognition, or social scoring? If yes → check Article 5 first. If it falls under Article 5(1)(a)-(h), it's Prohibited. Stop here.
- Does the workflow make or substantially influence a decision about a person's employment, promotion, termination, or task allocation? If yes → Annex III §4. High-Risk obligations apply.
- Does the workflow determine or substantially influence access to credit, insurance, or other essential services? If yes → Annex III §5. High-Risk obligations apply (unless it's purely fraud detection, which is exempt).
- Does the workflow determine access to education, training, or evaluation of students? If yes → Annex III §3. High-Risk obligations apply.
- Does the workflow interact with users in a conversational format, or generate synthetic media? If yes → Article 50. Disclosure is required (Limited-risk).
- None of the above? The workflow is likely Minimal-risk. No specific Act obligations beyond general good practice apply.
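For teams that prefer reading logic to reading bullets, here's the same tree as a minimal Python sketch. The flag names paraphrase the questions above; answering each one honestly still means working through the cited provisions:

```python
def classify(workflow: dict) -> str:
    # Order matters: prohibitions first, then high-risk, then disclosure.
    if (workflow.get("biometric") or workflow.get("emotion_recognition")
            or workflow.get("social_scoring")):
        return "Check Article 5 first: potentially PROHIBITED"
    if workflow.get("influences_employment_decision"):
        return "HIGH-RISK (Annex III 4)"
    if (workflow.get("gates_essential_services")
            and not workflow.get("fraud_detection_only")):
        return "HIGH-RISK (Annex III 5)"
    if workflow.get("education_access_or_assessment"):
        return "HIGH-RISK (Annex III 3)"
    if workflow.get("interacts_with_users") or workflow.get("synthetic_media"):
        return "LIMITED (Article 50 disclosure)"
    return "MINIMAL"

print(classify({"influences_employment_decision": True}))
# -> HIGH-RISK (Annex III 4)
```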
This decision tree is a starting point, not a substitute for working through the Act's text. For specific workflows, use the EU AI Act Risk Checker — it applies the Annex III and Article 5 logic to your specific use case and returns the relevant article references.
If you're a US-based company wondering whether any of this applies to your workflows at all, start with: What US Founders Get Wrong About the EU AI Act.
For the full regulatory context — deadlines, penalties, and the tier overview — see: The EU AI Act in Plain English.
Once you know which of your workflows is high-risk, the next step is building the compliance infrastructure — risk management documentation, logging, and oversight mechanisms — before August 2. If you want a structured approach to that work, start with an AI Readiness assessment to map your current stack against the Act's requirements, or book a Sprint to build the compliance foundation in five focused days.
Run the free EU AI Act Risk Checker → — map your workflows to the Act's tiers in 5 minutes.
For more on the EU AI Act, visit the OpSprint EU AI Act Hub.
Pragmatic readiness guidance — not legal advice. For specific cases, consult EU counsel.
Need help applying this in your own operation? Start with a call and we can map next steps.