The proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (AI) and amending certain Union legislative acts aims to implement a legal framework for trustworthy AI. The proposal is based on the EU's commitment to strive for a balanced approach to AI, preserving the EU's technological leadership and ensuring that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles. The proposal sets out policy options on how to achieve the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology.

The proposal defines four levels of risk in AI: unacceptable risk, high risk, limited risk, and minimal or no risk. It proposes a list of high-risk applications, sets clear requirements for high-risk AI systems, and defines specific obligations for their providers and users. The proposal seeks to reduce administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs), while providing AI developers, deployers, and users with clear requirements and obligations regarding specific uses of AI.

The proposed regulation aims to ensure that Europeans can trust what AI has to offer while addressing the risks created by certain AI systems. It provides for a conformity assessment before an AI system is put into service or placed on the market, as well as enforcement after such a system is placed on the market. The proposal also establishes a governance structure at European and national levels.

The proposal defines high-risk AI systems as those that pose a significant risk to health, safety, or fundamental rights. Examples include AI used in critical infrastructures (e.g., transport), in educational or vocational training (e.g., scoring of exams), as safety components of products (e.g., AI applications in robot-assisted surgery), and in employment, management of workers, and access to self-employment (e.g., CV-sorting software for recruitment procedures). Such systems must meet strict requirements and undergo a conformity assessment before being put into service or placed on the market, and they remain subject to enforcement afterwards.

According to the proposal, unacceptable risk refers to a very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights. Examples include social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes. These uses will be banned. Under the proposal, engaging in a prohibited AI practice can attract administrative fines of up to **30 million euros or 6% of the total worldwide annual turnover for the preceding financial year, whichever is higher**; most other infringements carry fines of up to 20 million euros or 4%.

According to the website of the European Commission, the proposed regulation on artificial intelligence (AI) aims to provide AI developers, deployers, and users with clear requirements and obligations regarding specific uses of AI while reducing administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs). The proposal seeks to establish a governance structure at both European and national levels. At the European level, a European Artificial Intelligence Board (EAIB), composed of representatives of the national supervisory authorities, the European Data Protection Supervisor, and the Commission, will be responsible for ensuring consistent application of the regulation across the EU and for advising the Commission on matters related to AI. At the national level, Member States will designate one or more competent authorities to supervise the application and enforcement of the regulation.

Comments:

The AI Act is proposed legislation by the European Union focusing on the regulation of artificial intelligence (AI). It is the first of its kind in the world and applies to the development, deployment, and use of AI in the EU, or when it affects people in the EU. The draft AI Act takes a risk-based approach that classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk.
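To make this tiered structure concrete, here is a purely illustrative Python sketch. The tier names follow the proposal, but the mapping of example use cases to tiers and the one-line obligation summaries are hypothetical simplifications for demonstration, not legal classifications.

```python
# Toy model of the proposal's four risk tiers and the kind of
# obligation each tier attracts. Simplified and illustrative only.
RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "conformity assessment before market placement, plus post-market enforcement",
    "limited": "transparency obligations",
    "minimal": "no additional obligations",
}

# Hypothetical examples, loosely based on those cited in the proposal.
EXAMPLE_USE_CASES = {
    "social scoring by governments": "unacceptable",
    "CV-sorting software for recruitment": "high",
    "exam scoring in education": "high",
    "chatbot that must disclose it is an AI": "limited",
    "spam filter": "minimal",
}

def obligation_for(use_case: str) -> str:
    """Return the (simplified) obligation attached to a use case's tier."""
    tier = EXAMPLE_USE_CASES[use_case]
    return RISK_TIERS[tier]
```

For instance, `obligation_for("spam filter")` yields "no additional obligations", while a prohibited practice such as social scoring maps to "banned outright".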

While setting out a robust and flexible legal framework, the proposal stresses proportionality, stating that "this proposal presents a balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market".

The development and use of AI exclusively for military purposes is left out of the scope of the EU AI Act.

The European Council has also called for a clear determination of the AI applications that should be considered high-risk.

It is worth noting that in this explanatory memorandum the word "risk" appears 457 times, 302 of them as part of the term "high-risk"!

An EU-wide AI database is mentioned. To feed this database, AI providers will be obliged to supply meaningful information about their systems and the conformity assessments carried out on those systems. Moreover, AI providers will be obliged to inform national competent authorities about serious incidents or malfunctioning that constitute a breach of fundamental rights obligations as soon as they become aware of them, as well as about any recalls or withdrawals of AI systems from the market. National competent authorities will then investigate the incidents or malfunctioning, collect all the necessary information, and regularly transmit it to the Commission with adequate metadata. The Commission will complement this information with a comprehensive analysis of the overall market for AI.