The European Union’s AI Act, which entered into force on August 1, 2024, is a groundbreaking piece of legislation regulating artificial intelligence across the EU. The Act introduces a risk-based framework that categorizes AI systems into four levels: unacceptable, high, limited, and minimal risk. Its most stringent obligations fall on high-risk AI applications, such as those in healthcare, education, and critical infrastructure, and bind both their providers and deployers (JD Supra; Consilium Europa).
The AI Act will be implemented in four phases over the three years following its entry into force (a brief date-lookup sketch follows the list):
- Prohibited AI Practices (February 2, 2025): AI practices deemed to pose an unacceptable risk, such as social scoring and biometric categorization based on sensitive characteristics, will be banned.
- General-Purpose AI Models (August 2, 2025): Obligations for providers of general-purpose AI (GPAI) models will come into effect.
- General Application (August 2, 2026): Most provisions of the Act, including those related to high-risk AI systems, will apply.
- High-Risk AI System Requirements (August 2, 2027): All remaining obligations for high-risk AI systems, particularly those embedded in products covered by existing EU product-safety legislation, will apply (JD Supra; KPMG).
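To make the rollout concrete, the schedule above can be reduced to a simple date lookup. The sketch below is illustrative only: the milestone labels and the `milestones_in_effect` helper are invented for this example, while the dates come from the phased timeline above.

```python
from datetime import date

# Phase-in dates as summarized above; labels are informal, not terms from the Act.
AI_ACT_MILESTONES = [
    (date(2025, 2, 2), "prohibitions on unacceptable-risk AI practices"),
    (date(2025, 8, 2), "obligations for general-purpose AI (GPAI) model providers"),
    (date(2026, 8, 2), "general application, including most high-risk rules"),
    (date(2027, 8, 2), "remaining high-risk obligations in regulated product sectors"),
]

def milestones_in_effect(on: date) -> list[str]:
    """Return the obligations that have taken effect on or before the given date."""
    return [label for start, label in AI_ACT_MILESTONES if on >= start]

# Example: which obligations apply on January 1, 2026?
print(milestones_in_effect(date(2026, 1, 1)))
# -> the prohibitions and GPAI obligations, but not yet general application
```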
The Act emphasizes transparency, accountability, and the protection of fundamental rights. It includes significant penalties for non-compliance, with fines of up to €35 million or 7% of annual global turnover, whichever is higher. Additionally, the Act establishes governance bodies, including a European AI Office and an AI Board, to oversee enforcement, and supports innovation through regulatory sandboxes (Consilium Europa; Chatham House).
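As a quick arithmetic illustration of that penalty ceiling (not legal guidance): the cap for the most serious violations is the higher of a fixed amount and a share of turnover, so it scales with company size. The function name and the example turnover figure below are hypothetical.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Cap for the most serious violations: EUR 35 million or 7% of
    annual global turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# A firm with EUR 2 billion in turnover: 7% = EUR 140 million, above the EUR 35M floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```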
This legislation is expected to have a global impact, as it sets a high bar for AI regulation that other jurisdictions may follow. Companies worldwide will need to comply with the new rules to operate in the EU, potentially reshaping AI governance well beyond Europe (Chatham House).