This coming August 1, the AI Regulation comes into effect after its recent publication in the Official Journal of the EU
Considering that 96% of professionals expect AI to have a significant impact on their current roles, it is worth examining how this new regulation will shape the governance of emerging technologies.
The regulation, the first of its kind worldwide, establishes a clear and structured framework for classifying and managing the risks associated with AI systems. Its aim is to protect citizens’ fundamental rights and promote responsible innovation, marking a significant milestone for AI regulation in Europe.
One of its highlights is the classification of AI applications into risk categories, which allows for proportionate, targeted regulation according to the level of threat they pose. By banning unacceptable-risk applications and strictly regulating high-risk ones, the EU aims to ensure that security and fundamental rights are not compromised.
Main Aspects of the EU Regulation on AI
1.- Risk Classification:
The regulation classifies AI applications into four risk categories:
• Unacceptable Risk: Applications that pose unacceptable risks to security, fundamental rights and the values of the EU. These applications will be prohibited.
• High Risk: Applications that may significantly affect fundamental rights or security. These applications will be subject to strict compliance obligations.
• Limited Risk: Applications subject to transparency requirements, which must inform users that they are interacting with an AI system.
• Minimal or No Risk: Applications that do not pose significant risks and are subject to little or no additional regulation.
2.- Obligations for Providers and Users:
The regulation sets out various obligations for developers, providers and users of AI systems, including conformity assessment, risk management, transparency and human oversight.
3.- Governance and Supervision:
A European Artificial Intelligence Board will be set up to coordinate implementation and ensure uniform application of the regulation across the EU. Member States will also designate national authorities responsible for monitoring compliance. These governance bodies are crucial to applying the rules consistently and effectively throughout the Union.
4.- Promotion of Innovation:
The regulation provides for the creation of regulatory sandboxes, controlled and supervised testing environments in which companies can develop and test AI innovations. This balance between regulation and the promotion of innovation positions the EU as a global leader in AI governance and sets a model for other regions to follow.
5.- Protection of Fundamental Rights:
The protection of citizens’ fundamental rights, including privacy and non-discrimination, is strengthened through the strict regulation of certain AI applications, especially those used in sensitive contexts such as health, justice and employment.
The implementation of this regulation positions the EU as a leader in the regulation of artificial intelligence, with a focus on protecting fundamental rights and promoting responsible innovation. Although the AI Regulation enters into force on 1 August 2024, it will apply progressively, with the prohibitions taking effect after six months and most remaining obligations after two years.