The European Union’s AI Act has entered a crucial phase, with its first key provisions in force since February 2, 2025. These provisions introduce new rules aimed at ensuring that AI systems placed on the EU market are safe and transparent. Among the key elements now in effect are bans on certain AI practices deemed to pose an “unacceptable risk” and new requirements for AI literacy training.
Prohibited AI practices
One of the core aspects of the EU AI Act is the prohibition of AI practices deemed to pose an unacceptable risk to individuals’ safety or fundamental rights. The list of banned AI activities includes:
- Manipulative techniques: Systems that use harmful subliminal or deceptive methods.
- Social scoring: The use of AI to assign social scores to individuals, which could unfairly impact their lives.
- Facial recognition misuse: The use of real-time remote biometric identification (such as facial recognition) in publicly accessible spaces for law enforcement purposes, except in narrowly defined situations.
- Emotion recognition: The use of AI to infer individuals’ emotions in workplaces and educational settings, which could lead to privacy invasions.
The penalties for non-compliance with these prohibitions are severe, with fines of up to EUR 35 million or 7% of a company’s total worldwide annual turnover, whichever is higher.
The need for regulation
The rapid growth of AI technology has sparked concerns about its potential misuse, particularly in areas like privacy, security, and fairness. In recent years, several high-profile incidents have highlighted how AI can be manipulated in harmful ways, such as through biased algorithms or unauthorised surveillance. The EU AI Act is designed to address these issues, ensuring that AI systems are developed and used in ways that respect human rights and societal values.
By introducing these regulations, the EU aims to set a global standard for ethical AI practices. It seeks to balance innovation with responsibility, fostering trust in AI technologies while minimising the risks associated with their use.
Global impact of the AI Act
The EU AI Act is not just a local regulation; it will have a global impact. Companies outside the EU, including those in the US and other regions, will also need to comply with the Act if they provide or deploy AI systems that are used within the EU. This could affect companies offering AI solutions in areas like recruitment, healthcare, and law enforcement, among others.
As highlighted by experts, the Act applies to all organisations that use AI in the EU market, regardless of where they are based. For example, an American company using AI for recruitment in the EU will be subject to the new rules. This broad scope means that the EU AI Act could become a benchmark for global AI regulation, influencing how AI is governed in other countries.
A key requirement under the Act is AI literacy, ensuring that employees understand the basics of AI systems and their ethical implications. Companies are advised to begin cataloguing their AI systems and use cases, as many organisations may not yet have a clear overview of the AI technologies they are using.
Experts warn that while the first provisions of the Act are now in place, more substantial requirements will take effect over the next 18 months. Businesses are encouraged to use this time to prepare, ensuring they meet all upcoming deadlines and avoid potential penalties.