Regulation of Artificial Intelligence in Europe



The European Union’s (“EU”) pioneering regulatory approach, especially in the digital space, has often served as a role model for other jurisdictions; the General Data Protection Regulation (GDPR) is a prominent example. Similarly, the EU’s draft Regulation governing the development and use of artificial intelligence (“AI”), released on 21 April 2021, is expected to prompt regulatory changes in other jurisdictions. The 108-page proposal, titled “Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)”, is intended as a future-proof document that can be amended as AI technology advances.

The European Commission has the twin-fold objective of:

  1. Promoting the uptake of AI – Digital technology has become an indispensable part of our lives, and the growing use of AI in the digital economy raises questions about the reliability of AI applications. Therefore, to instil trust among all stakeholders, the proposed Regulation is founded on the principles of human dignity and privacy protection. In a nutshell, the regulatory framework seeks to establish an ‘ecosystem of trust.’
  2. Addressing the risks associated with using AI – The framework distinguishes four levels of AI risk: minimal, limited, high, and unacceptable. The Commission takes the view that binding requirements should apply only to high-risk AI. To keep high-risk AI systems in check, safeguards such as transparency, functionality tests, registration and certification will apply. The framework also requires the registration of high-risk AI systems to keep the monitoring process as simple as possible.

Key provisions

Definition of AI: AI has been defined as software developed with one or more specified techniques and approaches (including machine learning and deep learning) that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.
Levels of risk: The European Commission distinguishes four levels of risk.

  1. Minimal risk: Most of the AI market consists of applications that pose little or no risk; these remain largely unregulated.
  2. Limited risk: This group of AI applications is subject only to transparency obligations, so that people are aware they are interacting with an AI system.
  3. High-risk: This group includes both AI embedded in products, such as medical devices, and stand-alone AI software, such as recruitment, e-learning or credit-scoring applications.
  4. Unacceptable risk: All applications involving unacceptable risk are prohibited. These include applications that may distort human behaviour or cause physical or psychological harm to an individual, for example social scoring.
Risk Assessment: Safeguards such as transparency, functionality tests, registration, certification, monitoring, data retention and reporting obligations would apply to high-risk AI systems. The Regulation also mandates the registration of such systems to keep the monitoring process hassle-free.
Scope and Jurisdiction: The Proposed Regulation applies to –

  1. providers that offer AI in the European Economic Area (EEA), regardless of whether the provider is located in or outside the EEA;
  2. users of AI in the EEA; and
  3. providers and users of AI where the providers or users are located outside of the EEA but the AI outputs are used in the EEA.
Creation of Database: The Regulation proposes a public database in which all providers of high-risk AI systems must register before placing them on the European Economic Area market. This would make it straightforward for stakeholders to check whether a high-risk system complies with the Proposed Regulation’s requirements.
Penalties: For violations of the Proposed Regulation, supervisory authorities could impose the following fines, depending on the nature of the offence (whichever amount is higher in each case):

  1. Supplying incorrect, incomplete or misleading information to notified bodies, supervisory or other public authorities: up to 2% of annual worldwide turnover or €10 million.
  2. Non-compliant AI systems: up to 4% of annual worldwide turnover or €20 million.
  3. Violations of the prohibitions on unacceptable AI practices and of the data governance obligations: up to 6% of annual worldwide turnover or €30 million.
Board composition: The Regulation proposes to create a European Artificial Intelligence Board at the Union level, chaired by the European Commission and composed of representatives of the Member States. The Board would aid in the implementation and enforcement of the proposed Regulation and would play a vital role in the development of common AI standards.
Data Security: The Regulation strictly restricts the manipulation of data by unauthorised third parties. Since it is developers and providers who train and refine data sets, any unwanted third-party interference may have unintended effects, such as skewed results and incorrect conclusions.
Enforcement and Implementation: The proposal must be approved by both the European Parliament and the Council of the European Union before it becomes law, a phase expected to take a year or more. Once the Regulation is finalised and enters into force, a 24-month implementation period will follow, giving businesses time to put in place the hefty governance, record-keeping and registration requirements.

Important takeaways

  1. The proposal regulates not the technology itself but how AI is used or applied in specific applications. Notably, the drafters took the evolving nature of AI into account to create a future-proof regulatory framework: the Regulation adopts a broad definition of AI that can be amended over time as techniques advance.
  2. The regulation focuses primarily on higher-risk applications of AI. Only applications posing a higher risk to fundamental rights and safety are regulated, since there is no need to regulate the entire AI market.
  3. The regulation identifies specific higher-risk systems that include-
    1. Critical infrastructure, such as the supply of gas, water, etc.
    2. Educational or vocational training that may determine the course of someone’s life
    3. Safety components of products
    4. Employment, workers management and access to self-employment
    5. Essential private and public services (e.g. credit scoring)
    6. Law enforcement
    7. Migration, asylum and border control management
    8. Administration of justice and democratic processes
  4. The regulatory framework introduces new transparency obligations for specific AI systems to make people aware that they are interacting with an AI. This is in light of the increasing use of chatbots and deep fakes.
  5. The regulatory framework proposes setting up a European Artificial Intelligence Board (“EAIB”) to promote the development of common AI standards.
  6. The Regulation proposes a CE marking and process indicating that the product complies with the requirements of the relevant Union legislation regulating the product in question. The same approach has been used in product safety regulation and has proven reliable.
  7. The Regulation also imposes obligations on both providers and users of AI systems. The most crucial obligation on providers is to undertake a conformity assessment before the AI system is placed on the market and to obtain the CE mark. The user, in turn, must operate the human oversight mechanism, since only the user can ensure oversight in practice.
  8. The Regulation prohibits four practices:
    • Subliminal manipulation which may result in physical or psychological harm.
    • The use of AI to exploit the vulnerabilities of children or persons with disabilities.
    • The use of AI for general-purpose social scoring.
    • The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement, subject to narrow exceptions (searching for a victim of crime, a threat to life, serious crime).

The EU Regulation is, at its core, a policy on the fair and proper use of AI. It lays down rules and procedures governing AI systems, including:

  • Prohibition on certain practices
  • Requirements and obligations for specific high risk AI systems
  • Transparency rules when interacting with natural persons, including AI usage in ‘deep fakes’
  • Rules on market monitoring and surveillance

Although seemingly comprehensive, the rules have some exclusions. For instance, AI developed or used exclusively for military purposes is excluded from the application of the Regulation.

The Regulation may entail a trade-off between explainability and accuracy. At the same time, it will open up new paths for more ethical consideration of AI usage. The EU’s proposal pioneers rules that matter in view of AI’s global advancement. If the EU’s model of AI regulation is to become a template for proportionate, effective, and evidence-based regulation in innovative markets, a concerted and rigorous effort from institutions and market participants will be required.

