The EU AI Act is a regulatory framework introduced by the European Union to ensure the safe, ethical, and transparent use of artificial intelligence (AI) across its member states. It is the first law of its kind, and it aims to balance the promotion of innovation with the protection of fundamental rights and public safety.
In this blog post, we will answer frequently asked questions about the EU AI Act.
What are the key features of the EU AI Act?
We'll tackle six key features of the EU AI Act: risk classification, prohibited practices, transparency and accountability, AI governance, penalties, and support for innovation.
Risk classification
- Unacceptable risk: The Act bans AI systems that pose significant harm or violate fundamental rights, such as systems used for social scoring or for real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions).
- High risk: AI systems used in critical areas (e.g., healthcare, education, and law enforcement) are subject to strict regulation and oversight. These systems must comply with transparency, accuracy, and human oversight requirements.
- Limited risk: AI applications like chatbots must meet transparency requirements, such as informing users that they are interacting with AI.
- Minimal risk: Low-risk systems, such as spam filters, face little to no regulation under the Act.
- High-risk AI obligations: Providers (developers) and deployers (users) of high-risk AI systems must comply with strict obligations, including conducting risk assessments, maintaining technical documentation, and implementing human oversight. These systems must be regularly monitored to ensure compliance with EU standards on safety, accuracy, and accountability.
Prohibited practices
The Act outright bans certain AI applications that are deemed harmful or manipulative. This includes AI systems that exploit vulnerabilities of individuals (e.g., children) or deploy subliminal techniques to manipulate behavior.
Transparency and accountability
- Users must be informed when they are interacting with AI systems, especially in cases involving automated decision-making or deepfakes.
- Organizations using AI will need to maintain detailed logs, report incidents, and ensure their AI systems are explainable and auditable.
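What might such logging look like in practice? Below is a minimal Python sketch of a structured decision log. The `log_ai_decision` helper and its field names are hypothetical, not a schema prescribed by the Act; they simply capture the kinds of details (model version, inputs, output, and any human reviewer) that an auditor could reasonably ask for.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Hypothetical audit logger; in production this would feed a tamper-evident store
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_ai_decision(model_id: str, model_version: str, inputs: dict,
                    output: dict, human_reviewer: Optional[str] = None) -> None:
    """Record one automated decision as a structured, auditable log entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": human_reviewer,  # evidence of human oversight, if any
    }
    audit_log.info(json.dumps(record))

# Example: an illustrative loan-scoring decision reviewed by a human analyst
log_ai_decision(
    model_id="loan-scorer",  # invented names, not a real system
    model_version="1.4.2",
    inputs={"income": 52000, "region": "EU"},
    output={"decision": "approved", "score": 0.81},
    human_reviewer="analyst-7",
)
```

Emitting each entry as structured JSON rather than free text keeps the records machine-searchable, which makes incident reporting and audits far less painful.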
AI governance
A European Artificial Intelligence Board has been established to monitor the implementation of the law and provide guidance to member states.
Penalties
The Act establishes fines for non-compliance, with penalties for the most serious violations (such as deploying prohibited AI practices) reaching up to €35 million or 7% of a company's global annual turnover, whichever is higher.
Support for innovation
The EU AI Act encourages the creation of “regulatory sandboxes” where companies can test innovative AI solutions under regulatory supervision without facing immediate legal consequences.
What are the key dates for the EU AI Act?
The EU AI Act entered into force on August 1, 2024. Below is a table of key dates.
| Articles | When is it applicable after entry into force? | Date applicable |
| --- | --- | --- |
| General provisions and prohibited AI practices | Six months | February 2025 |
| Codes of practice | Nine months | May 2025 |
| General-purpose AI (GPAI) rules and governance | 12 months | August 2025 |
| Obligations for high-risk systems | 24 months | August 2026 |
| High-risk AI systems for public authorities* | 48 months | August 2028 |
| Large-scale IT systems** | 5.5 years | December 31, 2030 |
*Systems already on the market need to comply if there are significant changes in design or use by public authorities.
**These AI systems are components of large-scale IT systems in areas like freedom, security, and justice.
What are the recommended ISO and other frameworks for compliance?
- ISO/IEC 27001: Information Security Management Systems
- ISO/IEC 42001: AI Management Systems
- NIST Cybersecurity Framework (CSF)
- ENISA Guidelines
- ISO/IEC 27701: Privacy Information Management
- OECD Privacy Guidelines
- EU High-Level Expert Group (HLEG) on AI Ethics Guidelines
- UNESCO Recommendation on the Ethics of AI
- ISO/IEC 38507: Governance of AI
- AI Fairness 360 Toolkit (IBM): A framework for detecting and mitigating bias in AI models (see the example after this list)
- DARPA XAI Program and SHAP (SHapley Additive exPlanations) for interpretability and transparency (a short SHAP example also follows below)
- ISO/IEC 29147 (Vulnerability Disclosure) and ISO/IEC 30111 (Vulnerability Handling Processes)
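To make the bias-detection item above concrete, here is a minimal sketch using IBM's AI Fairness 360 toolkit to quantify bias in a set of model decisions. The toy data and the `age_group` attribute are invented for illustration; a real assessment would use production decision records and legally relevant protected attributes.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy decision records: age_group 1 = privileged, approved 1 = favorable outcome
df = pd.DataFrame({
    "age_group": [1, 1, 1, 0, 0, 0],
    "approved":  [1, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["age_group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_group": 1}],
    unprivileged_groups=[{"age_group": 0}],
)

# Ratio of favorable-outcome rates (unprivileged / privileged); 1.0 is parity
print("Disparate impact:", metric.disparate_impact())
# Difference in favorable-outcome rates; 0.0 is parity
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A disparate impact well below 1.0 (0.8 is a common informal threshold) suggests the unprivileged group receives favorable outcomes noticeably less often, which would warrant mitigation before deploying a high-risk system.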
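Similarly, for the interpretability item, here is a short SHAP sketch. It assumes a scikit-learn gradient-boosting classifier trained on a public dataset; the model and dataset are stand-ins for illustration, not a reference implementation.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a simple binary classifier on a public dataset
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to individual input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global view: which features drive the model's decisions, and in which direction
shap.summary_plot(shap_values, X.iloc[:100])
```

SHAP also supports per-prediction explanations (e.g., force plots), which map naturally to the Act's expectation that individual automated decisions be explainable to the people they affect.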
What are the implications of the EU AI Act?
The EU AI Act is expected to influence global AI regulations, particularly for companies operating in the European market.
Businesses that develop or use AI systems, especially those engaged in high-risk sectors, must adhere to stringent rules. While the Act could increase compliance costs for businesses, it will also foster trust and safety.
For consumers, the Act aims to protect users by banning harmful AI applications and ensuring transparency and accountability in AI-driven decisions.
Overall, the EU AI Act seeks to strike a balance between encouraging AI innovation and ensuring that AI technologies are safe, transparent, and aligned with fundamental rights.
How Bugcrowd can help with compliance
For the Act's risk classification requirements in particular, Bugcrowd offers AI bias assessments and AI penetration testing. These two service offerings can help organizations work toward compliance with these regulations.
The EU AI Act is the first law of its kind; it is a step toward strengthening privacy, security, and ethics in the use of AI in the European market. Specifically, it aims to create a framework for the safe and ethical development, deployment, and use of AI. Many expect it to set a global precedent for regulating AI while balancing innovation and ethical concerns.
If you’re interested in other EU-based compliance blogs, check out our DORA series and our NIS2 Directive deep dive.