Considering that 96% of professionals expect AI to have a significant impact on their current roles, it is worth examining how this new regulation will shape the governance of emerging technologies.
The regulation, the first of its kind worldwide, establishes a clear and structured framework for classifying and managing the risks associated with AI systems. Its aim is to protect citizens' fundamental rights and promote responsible innovation, marking a significant milestone in the regulation of AI in Europe.
One of the highlights is the classification of AI applications into risk categories, which allows regulation to be proportional to the level of threat each category poses. By banning unacceptable-risk applications and strictly regulating high-risk ones, the EU seeks to ensure that security and fundamental rights are not compromised.
Main Aspects of the EU Regulation on AI
1.- Risk Classification:
The regulation classifies AI applications into four risk categories:
• Unacceptable Risk: Applications that pose unacceptable risks to security, fundamental rights and the values of the EU. These applications will be prohibited.
• High Risk: Applications that may significantly affect fundamental rights or security. These applications will be subject to strict compliance obligations.
• Limited Risk: Applications subject to transparency obligations; users must be informed that they are interacting with an AI system.
• Minimal or No Risk: Applications that do not pose significant risks and are subject to little or no additional regulation.
2.- Obligations for Providers and Users:
The regulation sets out various obligations for developers, providers and users of AI systems, including conformity assessment, risk management, transparency and human oversight.
3.- Governance and Supervision:
A European Artificial Intelligence Board will be set up to coordinate implementation and ensure uniform application of the regulation across the EU. Member States will also designate national authorities responsible for monitoring compliance. These governance bodies are crucial for ensuring that the rules are applied consistently and effectively.
4.- Promotion of Innovation:
The regulation provides for the creation of regulatory sandboxes: controlled and supervised environments in which companies can develop and test AI innovations. This balance between regulation and the promotion of innovation positions the EU as a global leader in AI governance and sets a model for other regions to follow.
5.- Protection of Fundamental Rights:
The protection of citizens' fundamental rights, including privacy and non-discrimination, is strengthened through the strict regulation of certain AI applications, especially those used in sensitive contexts such as health, justice and employment.
The implementation of this regulation positions the EU as a leader in the regulation of artificial intelligence, with a focus on the protection of fundamental rights and the promotion of responsible innovation. Although the AI Regulation enters into force on 1 August 2024, its provisions will apply progressively.