On 1 August 2024, the European Artificial Intelligence Act entered into force: the world's first comprehensive regulation of artificial intelligence.
The AI Act aims to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights. The Regulation seeks to create a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a favourable environment for innovation and investment. The AI Act introduces a forward-looking definition of AI, based on product safety, and a risk-based approach in the EU:
Minimal risk: Most AI systems, such as recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act because they pose only minimal risk to citizens' rights and safety. Companies can voluntarily adopt additional codes of conduct.
Limited risk: AI systems such as chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deepfakes, must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are used. In addition, providers must design their systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
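To illustrate the idea of machine-readable marking, the following minimal sketch attaches a provenance record to a piece of synthetic content. The field names and structure here are illustrative assumptions, not a schema mandated by the AI Act; real deployments would follow an interoperable standard such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content_bytes: bytes, generator: str) -> dict:
    """Build a simple machine-readable record marking content as AI-generated.

    NOTE: this schema is hypothetical, chosen only to show the concept of a
    machine-readable label bound to the content via a hash.
    """
    return {
        "ai_generated": True,                                    # explicit marker
        "generator": generator,                                  # producing system
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # binds label to content
        "created_utc": datetime.now(timezone.utc).isoformat(),   # creation timestamp
    }

record = make_provenance_record(b"synthetic image bytes", generator="example-model-v1")
print(json.dumps(record, indent=2))
```

Such a record could travel as embedded metadata or as a sidecar file; the hash lets a verifier check that the label actually refers to the content it accompanies.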
High risk: AI systems identified as high-risk will be required to meet strict requirements, including risk mitigation systems, high-quality datasets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Examples of high-risk AI systems include systems used for recruiting staff, for assessing whether someone is eligible for a loan, or for operating autonomous robots.
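The activity-logging obligation mentioned above can be sketched as a structured audit log of automated decisions. This is a minimal illustration using Python's standard `logging` module; the event fields and names (`system_id`, `human_overseer`, etc.) are assumptions for the example, not fields prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger: one JSON line per automated decision.
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_decision(system_id: str, input_summary: str, output: str, operator: str) -> dict:
    """Record a decision event with a timestamp and the responsible overseer."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the decision was made
        "system_id": system_id,                               # which AI system acted
        "input_summary": input_summary,                       # non-sensitive input digest
        "output": output,                                     # the decision produced
        "human_overseer": operator,                           # supports human oversight
    }
    logging.info(json.dumps(event))
    return event

event = log_decision("loan-scoring-v2", "applicant-features-digest", "declined", "analyst_17")
```

In practice such logs would be retained for the period the Regulation requires and kept tamper-evident, but the core idea is the same: every consequential output is traceable to a time, a system, and a human overseer.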
Unacceptable risk: AI systems that are considered to pose a clear threat to the fundamental rights of people will be prohibited. This includes AI systems or applications that manipulate human behaviour in order to circumvent the free will of users, such as voice-activated toys that encourage dangerous behaviour by minors, systems that enable "social scoring" by governments or companies, and certain applications of predictive policing. In addition, certain uses of biometric systems will be prohibited, such as emotion recognition systems used in the workplace, and certain systems for categorising people or for real-time remote biometric identification for law enforcement purposes in public places (with limited exceptions).
To complement this system, the AI Act also introduces rules for so-called general-purpose AI models, which are highly capable AI models designed to perform a wide variety of tasks, such as generating human-like text. General-purpose AI models are increasingly used as components of AI applications. The AI Act will ensure transparency along the value chain and address the potential systemic risks of the most capable models. Member States have until 2 August 2025 to designate national competent authorities to monitor the application of the rules on AI systems and carry out market surveillance activities. The majority of the AI Act's rules will apply from 2 August 2026. To bridge the transition period before full implementation, the Commission has developed the AI Pact. This initiative invites AI developers to voluntarily adopt key obligations under the AI Act ahead of the statutory deadlines.