EU AI Act enforcement begins for high-risk systems
The EU AI Act's high-risk provisions are now in force — companies selling AI into Europe must comply or face fines of up to €15 million or 3% of global annual revenue.
What happened
The European Union's AI Act entered enforcement for 'high-risk' AI systems today, March 14, 2026. High-risk categories include AI used in hiring, credit scoring, medical devices, critical infrastructure, and biometric identification. Companies must maintain technical documentation, ensure human oversight, and register systems in the EU AI database. Fines for non-compliance can reach €15M or 3% of global annual turnover, whichever is higher.
Why it matters to you
If you're building AI that touches hiring, lending, healthcare, or identity — in Europe or for European users — you now have legal obligations around technical documentation, logging, and human override capabilities. You need to be able to show an auditor what data trained your model, what it decides, and how a human can override it. These are engineering requirements, not just legal ones.
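As a rough illustration of what "auditable" can mean in practice, here is a minimal sketch of per-decision audit logging with a human-override flag. All names here (DecisionRecord, log_decision, the field names) are hypothetical, not anything prescribed by the Act — the point is that every automated decision leaves a timestamped, queryable record.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per automated decision (hypothetical schema)."""
    model_version: str        # which model produced this decision
    input_summary: dict       # what the model saw (redact PII as needed)
    output: dict              # what the model decided
    human_override: bool = False   # was the decision overridden by a person?
    override_reason: str = ""      # free-text justification when overridden
    timestamp: str = field(default="")

def log_decision(record: DecisionRecord, sink) -> None:
    """Append one JSON line per decision to an audit sink (any file-like object)."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    sink.write(json.dumps(asdict(record)) + "\n")
```

In a real system the sink would be append-only storage with retention controls, and the schema would also capture training-data lineage; this sketch only shows the shape of the logging path.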
What to do about it
Check your product's category against the EU AI Act's Annex III (high-risk list). If you're in scope, file a ticket this sprint: 'EU AI Act compliance audit.' Start with documentation of your training data, model outputs, and any automated decision-making flows.