EU AI Act compliance checklist (2025–2027)

Europe’s AI law is no longer theoretical. Key obligations took effect on February 2, 2025, with further obligations biting on August 2, 2025 and August 2, 2026; high‑risk systems embedded in regulated products get until August 2, 2027. If you build or use AI whose outputs will be used in the EU, the clock is running.

A quick note about us: abv.dev works with teams shipping AI into regulated environments; you can connect governance workflows and evidence capture to your LLM apps. See the product overview and pricing.

Who must comply

The AI Act applies extraterritorially: providers and deployers outside the EU are in scope whenever the AI output is intended for use in the Union. The law names specific operator roles—provider, deployer, importer, distributor—and assigns duties to each. (EUR-Lex)

Two role traps to avoid:

  • You become the “provider” (and inherit provider duties) if you put your name or trademark on a high‑risk system, or if you make a substantial modification that affects compliance or changes intended purpose. This includes modifying a general‑purpose model so the system becomes high‑risk. (EUR-Lex)
  • Market surveillance authorities can re‑classify your system as high‑risk and escalate enforcement if you under‑classify it. (EUR-Lex)

Example cases

Hiring screening SaaS used by EU customers
You host an ML model that ranks candidates and your EU clients act on it. This is an Annex III employment use, so high‑risk. You, as the provider, must implement risk management, data‑governance controls, human‑oversight design, and logging; complete internal‑control conformity assessment; issue the EU Declaration of Conformity; affix CE marking; and register in the EU database before go‑live. Your public‑sector customers also face FRIA and registration obligations. (EUR-Lex)

Customer‑support chatbot on your website
If it’s not Annex III, it’s likely not high‑risk—but Article 50 still applies. Disclose that users are interacting with AI and label synthetic content your bot publishes, including machine‑readable signals for downstream detection. (EUR-Lex)
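
To make that concrete, here’s a minimal Python sketch of a response wrapper that pairs the human‑readable disclosure with a machine‑readable synthetic‑content flag. The field names are our own illustration; Article 50 sets the outcomes, not a wire format.

```python
# Illustrative only: wrap chatbot replies with an AI-interaction
# disclosure and a machine-readable "synthetic content" marker.
from datetime import datetime, timezone

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_bot_reply(reply_text: str, model_id: str) -> dict:
    """Return the reply plus human-readable and machine-readable labels."""
    return {
        "disclosure": AI_DISCLOSURE,   # shown to the user
        "content": reply_text,
        "ai_generated": True,          # machine-readable flag for downstream detection
        "generator": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

print(wrap_bot_reply("Your order shipped yesterday.", "support-bot-v3"))
```

The point is that both signals travel with the content, so downstream systems can detect it without parsing prose.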

Timeline you can plan against

  • February 2, 2025 — bans on “unacceptable‑risk” practices and AI‑literacy duties took effect. (Digital Strategy, EUR-Lex)
  • August 2, 2025 — governance rules and obligations for general‑purpose AI (GPAI) models apply; the EU AI Office is responsible for supervising GPAI. (Digital Strategy, EUR-Lex)
  • August 2, 2026 — most remaining obligations apply, including for high‑risk systems under Annex III. (Digital Strategy)
  • August 2, 2027 — extended transition for high‑risk AI embedded in regulated products (e.g., under MDR/automotive). (Digital Strategy)

The EU AI Act compliance checklist

  1. Map your portfolio and assign roles
    Create a register of each AI system, its intended purpose, and your role per system. Flag where you are the provider versus deployer. If you white‑label or materially alter a system, mark it “provider obligations.” A register sketch follows this checklist. (EUR-Lex)
  2. Classify risk (and document why)
    Identify any Annex III uses (high‑risk), such as education, employment, access to essential services, law enforcement, migration, or justice. If you claim an Annex III system is not high‑risk, note Article 6(3) carve‑outs (e.g., “narrow procedural task,” non‑material influence). Always log your rationale. (EUR-Lex, Artificial Intelligence Act EU)
  3. For high‑risk systems, implement provider requirements
    Providers must operate a quality management system and meet design‑time requirements: risk management, data governance, technical documentation, logging, instructions for use, human oversight, and accuracy/robustness/cybersecurity, then issue an EU Declaration of Conformity and affix CE marking. Articles 9–15 and 17 set these duties; Articles 47–48 cover the declaration and CE mark. (EUR-Lex)
  4. Choose the right conformity assessment path
    Most Annex III systems use the internal‑control route (self‑assessment). Certain biometric systems or products already covered by EU “New Legislative Framework” laws may require a notified body. Check Article 43 and Annexes VI–VII, then plan certification lead times. (EUR-Lex)
  5. Register high‑risk systems in the EU database
    Before placing on the market or putting into service, providers must register Annex III high‑risk systems in the EU database (Articles 49 and 71). Some registrations (e.g., law enforcement or migration) are kept in a restricted section. Public‑sector deployers must also register their use. (EUR-Lex)
  6. Stand up post‑market monitoring and incident response
    Your technical documentation and logs must support monitoring and corrective action after launch; market surveillance authorities can demand records and mandate measures. Keep log retention aligned with Article 12 and your risk profile; a retention sketch follows this checklist. (EUR-Lex)
  7. Perform Fundamental Rights Impact Assessments (FRIAs) when required
    Public bodies and private entities providing public services must perform a FRIA before deploying high‑risk systems; there are targeted category triggers (e.g., certain banking/insurance uses). The AI Office will supply a template deployers can use. Notify the market surveillance authority once completed. (EUR-Lex)
  8. Meet transparency rules for certain AI (even if not high‑risk)
    Label chatbot interactions, watermark or otherwise mark synthetic content in machine‑readable form, and disclose deepfakes; notify people when exposed to emotion‑recognition or biometric categorization systems, subject to limited exceptions. See Article 50. (EUR-Lex)
  9. Raise AI literacy across teams
    Providers and deployers must ensure a sufficient level of AI literacy for personnel who build or operate AI. Treat it like role‑based training: model limits, human‑oversight procedures, safe‑use playbooks. This duty started on February 2, 2025. (EUR-Lex)
  10. If you provide GPAI models, implement GPAI obligations
    GPAI providers must publish model information, policy documents, and risk‑management measures; “systemic‑risk” models carry extra duties (Article 55). The Commission published a voluntary GPAI Code of Practice on July 10, 2025; aligning to it is one workable path to demonstrate compliance. (Digital Strategy)
  11. Mind the penalties
    Breaching bans on prohibited practices can trigger fines up to €35,000,000 or 7% of worldwide turnover, whichever is higher; other operator failures can reach €15,000,000 or 3%; supplying misleading information can reach €7,500,000 or 1%. SMEs benefit from the lower of the thresholds. GPAI‑specific infringements can reach 3% or €15,000,000. Articles 99 and 101 name the bands; a worked example follows this checklist. (EUR-Lex, Artificial Intelligence Act EU)
  12. Track standards and guidance
    Expect harmonized standards (via CEN/CENELEC JTC 21) and Commission guidance to flesh out how to evidence compliance; watch the AI Office site for new templates and notices. (EUR-Lex)
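
For steps 1 and 2, here’s a minimal sketch (assuming Python 3.10+) of a per‑system register that captures role, Annex III category, and classification rationale in one place. The schema is our own illustration, not a format the Act prescribes.

```python
# Illustrative per-system register for role assignment (step 1) and
# risk classification with a logged rationale (step 2).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    operator_role: str               # "provider" | "deployer" | "importer" | "distributor"
    annex_iii_category: str | None   # e.g., "employment", or None
    high_risk: bool
    classification_rationale: str    # why high-risk, or why an Art. 6(3) carve-out applies
    reviewed_on: date = field(default_factory=date.today)

register = [
    AISystemRecord(
        name="candidate-ranker",
        intended_purpose="Rank job applicants for EU clients",
        operator_role="provider",
        annex_iii_category="employment",
        high_risk=True,
        classification_rationale="Annex III employment use: candidate screening.",
    ),
    AISystemRecord(
        name="support-chatbot",
        intended_purpose="Answer customer FAQs",
        operator_role="provider",
        annex_iii_category=None,
        high_risk=False,
        classification_rationale="Not an Annex III use; Article 50 transparency only.",
    ),
]

for rec in register:
    print(rec.name, rec.operator_role, "high-risk" if rec.high_risk else "limited-risk")
```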
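
For step 6, a retention check is one place where the legal floor meets your own policy. The sketch below assumes a policy that goes beyond the six‑month minimum Article 19 sets for provider‑held logs; the windows are illustrative.

```python
# Illustrative retention check: flag log records that have outlived
# their tier's retention window (policy durations are assumptions).
from datetime import datetime, timedelta, timezone

RETENTION = {
    "high_risk": timedelta(days=365),     # our assumed policy, above the 6-month floor
    "limited_risk": timedelta(days=180),
}

def eligible_for_deletion(created_at: datetime, risk_tier: str) -> bool:
    """True once a log record has outlived its tier's retention window."""
    return datetime.now(timezone.utc) - created_at > RETENTION[risk_tier]

print(eligible_for_deletion(datetime(2025, 1, 2, tzinfo=timezone.utc), "high_risk"))
```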
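
And for step 11, the ceilings are simple arithmetic: the higher of a fixed amount and a turnover percentage, or the lower of the two for SMEs. A worked sketch:

```python
# Worked example of the Article 99 fine ceilings (always confirm
# against the official text; this is arithmetic, not legal advice).
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float, sme: bool = False) -> float:
    pick = min if sme else max
    return pick(fixed_eur, turnover_eur * pct)

BANDS = {
    "prohibited_practices":   (35_000_000, 0.07),  # Art. 99(3)
    "operator_obligations":   (15_000_000, 0.03),  # Art. 99(4)
    "misleading_information": (7_500_000, 0.01),   # Art. 99(5)
}

turnover = 2_000_000_000  # €2B worldwide annual turnover
for band, (fixed, pct) in BANDS.items():
    print(f"{band}: cap = €{fine_cap(turnover, fixed, pct):,.0f}")
# prohibited_practices: cap = €140,000,000 (7% exceeds the €35M fixed amount)
```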

Common failure modes (and how to avoid them)

  • Misclassifying a high‑risk use as “low‑risk” because a human reviews outcomes “most of the time.” If the AI materially influences a decision, the carve‑outs in Article 6(3) likely don’t save you; authorities can reclassify and fine. Document how decisions are made and when outcomes can be overruled. (EUR-Lex)
  • Becoming the provider by accident. If you substantially modify a high‑risk system or rebrand it as your own, you inherit provider duties—including QMS, conformity assessment, CE marking, and registration. Plan for this in vendor contracts and your architecture. (EUR-Lex)

Where abv.dev fits

Teams use ABV to stitch observability, governance, and compliance evidence into the same pipeline: agent traces, guardrails, cost/token tracking, OpenTelemetry support, governance dashboards, and EU AI Act automation to help compile the artifacts auditors expect. That reduces the gap between “we think we’re compliant” and “we can prove it.” (ABV)

  • Practical tie‑ins to the checklist:
    • Map systems and roles: tag services and prompts by intended purpose and operator responsibilities; export a system inventory for your Annex III logbook. (ABV)
    • Evidence pack: capture prompts, model/router versions, human‑in‑the‑loop actions, and guardrail events to feed technical documentation and post‑market monitoring (sketched after this list). (ABV)
    • Transparency ops: keep a ledger of synthetic‑content disclosures and watermarking status for Article 50. (EUR-Lex)
    • Training & literacy: host SOPs and link them to runbooks inside the same dashboard so operators can show role‑based training coverage. (ABV)
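
As a sketch of the evidence‑pack idea, here’s what per‑call capture can look like over the stock OpenTelemetry Python API (requires opentelemetry-api). The attribute keys and the two stub helpers are our own illustration, not an ABV or OTel schema.

```python
# Illustrative per-call evidence capture: prompt, model version, and
# guardrail outcome recorded as OpenTelemetry span attributes.
from opentelemetry import trace

tracer = trace.get_tracer("compliance-evidence")

def call_model(prompt: str) -> str:
    return "stubbed reply"  # stand-in for your real model client

def run_guardrails(reply: str) -> bool:
    return "forbidden" not in reply  # stand-in for your real guardrail check

def answer_with_evidence(prompt: str, model_version: str) -> str:
    with tracer.start_as_current_span("llm.call") as span:
        span.set_attribute("llm.prompt", prompt)
        span.set_attribute("llm.model_version", model_version)
        reply = call_model(prompt)
        span.set_attribute("guardrail.passed", run_guardrails(reply))
        return reply

print(answer_with_evidence("Where is my order?", "support-bot-v3"))
```

Exported spans then do double duty: the same traces that power observability feed technical documentation and post‑market monitoring.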

One‑page action plan for this quarter

  1. Inventory AI uses and map operator roles; flag Annex III. (EUR-Lex)
  2. For each flagged system, choose assessment route; start technical documentation and QMS updates. (EUR-Lex)
  3. Implement Article 50 transparency in all customer‑facing AI. (EUR-Lex)
  4. Stand up AI‑literacy training and record completion. (EUR-Lex)
  5. Prepare for database registration, declaration, and CE marking where required. (EUR-Lex)

The near‑term implication is simple: if you sell, deploy, or modify AI used in the EU, you should be building evidence now rather than rewriting your stack next summer. If you want help operationalizing this checklist inside your app lifecycle, take a look at abv.dev. (ABV)

Sources and further reading

  • European Commission: AI Act overview and timeline (updated August 1, 2025). (Digital Strategy)
  • EUR‑Lex: Official Journal text of Regulation (EU) 2024/1689 (articles cited above). (EUR-Lex)
  • EU AI Office: mandate and GPAI supervision. (EUR-Lex)
  • Commission: GPAI Code of Practice announcement (July 10, 2025). (Digital Strategy)
  • AI Act Explorer for Annex III and article navigation. (Artificial Intelligence Act EU)