Op-Ed: “The Proposal for an AI Regulation: Preliminary Assessment” by Tiago Sérgio Cabral

1. Introduction

On 21 April, the European Commission presented its proposal for a Regulation on a European Approach for Artificial Intelligence, putting forward a single set of rules to regulate Artificial Intelligence in the European Union. This proposal was highly anticipated by legal scholars, practitioners and industry.

Indeed, if we consider the starting point for the EU’s AI-regulation aspirations to be the European Parliament’s 2017 Civil Law Rules on Robotics Resolution or the European Council Conclusions of 19 October 2017, it took (give or take) four years just to reach the starting point (there is still an entire legislative procedure to overcome before the Regulation can enter into force).

 

2. A Regulation 

We have known that the Commission’s preferred legal instrument would be a Regulation since the public consultation was made available. However, we now have confirmation. This is, in my view, the correct instrument for this legislative effort. This is a key piece of legislation for the single market, and avoiding the usual delays in transposition, fragmented application and the resulting confusion for citizens and industry should be a priority.

For a recent cautionary tale, one does not have to look further than the European Electronic Communications Code, another key piece of technology legislation that 24 Member States failed to transpose on time. And we are not even going to address how Member States sometimes find a way to engage in gold-plating, even under maximum harmonisation Directives.

The European AI industry cannot develop if it has to comply with 27 different sets of rules. The proposal works, as it should, on the basis of a uniform set of rules (it gives less margin to Member States than, for example, the GDPR).

 

3. To whom will it be applicable? 

The short answer is that basically any entity within the AI supply chain, from providers to entities deploying AI (‘users’ in the Regulation), will have to comply with some new obligations if the proposal is made into law. Importers, product manufacturers, distributors, authorised representatives and other third parties are also covered, when applicable (Articles 24-29). Furthermore, the territorial scope of application extends beyond the EU’s borders, since the Regulation applies to ‘providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union’ and to ‘providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country’, along with users located in the EU (Article 2).

 

4. Prohibited AI Practices 

Through Article 5 of the proposal, the Commission aims to establish a strong position against certain types of AI systems, which, with minor exceptions, are to be prohibited under EU law. They can be summarised as follows:

  (a) AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause harm;
  (b) AI systems exploiting vulnerabilities of a specific group, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause harm;
  (c) AI systems used for social scoring;
  (d) Use of ‘real-time’ remote biometric identification in publicly accessible places for the purposes of law enforcement.

The objective appears to be to completely forbid the AI systems referred to in (a) to (c), with no exceptions. However, the provision regarding social scoring, contained in Article 5(1)(c), suffers from excess detail. The issue lies with subpoints (i) and (ii), which, in attempting to explain what should be considered social scoring, can end up opening unnecessary discussions about matters such as what is ‘detrimental or unfavourable treatment’ or what should be considered ‘unjustified or disproportionate to their social behaviour or its gravity’. These subpoints could probably be deleted, and the result would be a better and clearer rule.

The AI systems referred to in (d) can exceptionally be used under specific conditions, such as to search for specific potential victims of a crime, to prevent imminent threats, and to find people accused of crimes punishable by a custodial sentence or a detention order for a maximum period of at least three years. A number of MEPs have already criticised a previously leaked version of this provision. The exceptions are still broad, but compared to the previous leak, the current proposal is more developed in this matter and more restrictive of this type of use, and could be said to be on the right track. Maybe this wording will even be enough to satisfy both the European Parliament and the Council of the EU.

 

5. High-risk AI

A significant majority of the rules in the Proposal are only applicable to high-risk AI (Articles 6 to 51). The objective of this solution is to avoid excessively burdening AI systems, and the related market players, that do not represent a significant danger to fundamental rights (it is important to stress that the proposal is very much focused on a specific European vision of fundamental rights as applied to AI).

The proposal contains a list of AI uses (Article 6 and Annex III), divided into eight categories, that will be considered high-risk, and which could, in the future, be expanded by the Commission using the criteria established in Article 7 of the proposal. In fact, under Article 84 of the proposal, the Commission should reassess the current list once a year.

Some interesting uses are considered high-risk by the proposal, namely:

  1. AI systems intended to be used for ‘real-time’ and ‘post’ remote biometric identification of natural persons;
  2. AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity;
  3. AI systems to be used to dispatch or establish priority in the dispatching of emergency first response services;
  4. AI systems determining access or assessing persons in the context of educational or vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions;
  5. AI systems used in recruitment as well as for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating work performance and behaviour;
  6. AI systems used to determine the creditworthiness of persons or to establish their credit scores;
  7. AI systems used in assessment of the right to public benefits and services;
  8. AI systems used by law enforcement to make individual risk assessments in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
  9. AI systems to be used by law enforcement as polygraphs and similar tools or to detect the emotional state of a natural person;
  10. AI systems used to predict crimes or events of social unrest;
  11. AI systems used by law enforcement to detect deep fakes;
  12. AI systems used by law enforcement for profiling;
  13. AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts; and
  14. AI systems covered under Article 6(1) regarding specific EU harmonisation legislation.

However, the list approach (even if expandable) favoured by the Commission may lack the necessary flexibility for this specific field. A new use may appear that is clearly dangerous from a fundamental rights perspective, or an existing use may evolve in such a manner, and, until the Commission acts, it will not be considered high-risk. Additionally, if a use of AI does not fall within the eight predetermined categories, the Commission’s hands will be tied and a legislative intervention will be needed to classify it as high-risk.

Market players may be lulled into a false sense of security and fail to internalise certain costs because the use of AI that they are developing or deploying is not part of the current list, and then suddenly be caught by surprise by an update. With this in mind, a Data Protection Impact Assessment-like set of criteria that can evolve with the technology would probably be a better solution.

Those doubts about this specific solution notwithstanding, it is likely that the list in Annex III will grow in the trilogue negotiations, as this seems to be one aspect where the European Parliament is likely to make some proposals, and it is not exactly a hard pill to swallow for the Council of the EU.

In addition, some of the current high-risk AI systems may (and arguably should) jump to the prohibited column, in particular the ones addressing ‘predictive policing’.

 

6. Obligations

As explained above, all entities in the supply chain, including providers (Article 16, along with the Title III, Chapter 2 obligations directed at them), product manufacturers (Article 24), authorised representatives (Article 25), importers (Article 26), distributors (Article 27) and user entities (Article 29), have a number of obligations under the proposal. Nonetheless, the most demanding and numerous obligations are targeted at providers.

Indeed, the number of obligations makes their full reproduction here excessive, but some of the most important are to:

  • Ensure compliance with the rules under Chapter 2, Title III, including:
    • Comply with the obligations on data governance practices and data set management, including ensuring that data sets are relevant, representative, free of errors and complete;
    • Design and develop the AI systems in a manner that complies with the rules on transparency and provision of information;
    • Design and develop AI systems with adequate record-keeping and traceability capabilities;
    • Comply with the rules on technical documentation, which the provider must write, in accordance with requirements of Annex IV;
    • Establish, implement, document and maintain a risk management system under Article 9;
    • Design and develop the system in a manner that allows for proper human oversight under Article 14;
    • Design and develop the system in a manner that complies with the rules regarding accuracy, robustness and cybersecurity under Article 15;
  • Implement a quality management system in accordance with the requirements of Article 17;
  • Guarantee that the AI system undergoes the relevant conformity assessment procedure;
  • Report to national competent authorities any serious incidents or malfunctioning constituting a breach of legal obligations intended to protect fundamental rights;
  • Take the necessary corrective actions, including, if necessary, withdrawal or recall, when the AI system is not in conformity with the legal requirements under Title III, Chapter 2;
  • Comply with the applicable registration obligations.

In comparison to the provider’s, user entities’ obligations are ‘lighter’ in nature, including: (a) monitoring high-risk AI for serious incidents or malfunctions, or signs of presenting a risk at national level; (b) storing logs, when they are under the user’s control; and (c) making sure that input data is relevant for the AI system’s purpose, when the input is under the user’s control.

However, other entities in the supply chain, including the user entities, should take careful note of Article 28 of the Proposal. According to this provision: ‘any distributor, importer, user or other third-party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: a) they place on the market or put into service a high-risk AI system under their name or trademark; b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service; and c) they make a substantial modification to the high-risk AI system’. The threshold for the application of this provision may not be very high. First, placing on the market or putting into service an AI system under the user’s name or trademark is a common market practice and may have relevant advantages.

Second, and in particular, the concept of making a substantial modification, defined in the proposal as ‘a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed’, can be tricky. Could this requirement be fulfilled through the use of certain data sets for further training, for example? We cannot forget that a number of AI systems are trained mostly through the use of data in the possession of, or generated by, the deploying entity.

Lastly, note that Article 52 establishes transparency obligations applicable to specific types of AI systems (for example, chatbots) that are relevant regardless of whether the system is classified as high-risk.

 

7. Penalties

Even if the proposal leaves some margin to Member States to establish rules regarding penalties, which must be effective, proportionate and dissuasive, a significant part of enforcement will be made uniform through mandatory fines for certain types of breaches. Therefore, non-compliance with the rules on prohibited AI systems (Article 5) and on data governance practices and data set management may result in a fine of up to 30,000,000 euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year.

Providing incorrect, incomplete or misleading information to notified bodies and national competent authorities in response to a request may result in fines of up to 10,000,000 euros or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year.

Failure to comply with any other requirement or obligation under the Regulation may result in fines of up to 20,000,000 euros or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year.
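To see how these two-part ceilings operate in practice, the following is a minimal illustrative sketch (in Python) of the fine caps for a hypothetical company, assuming the GDPR-style ‘whichever is higher’ mechanic that the proposal applies to companies; the function name, tier labels and turnover figure are purely illustrative and not taken from the proposal.

```python
# Illustrative sketch only: upper bounds of the proposal's three fine tiers.
# Assumes the GDPR-style "whichever is higher" rule for companies; the
# turnover figure below is a hypothetical example.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum fine for a company in a given tier: the fixed ceiling or
    the percentage of worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, pct * turnover_eur)

TIERS = [
    ("prohibited practices / data governance", 30_000_000, 0.06),
    ("other requirements and obligations", 20_000_000, 0.04),
    ("incorrect or misleading information", 10_000_000, 0.02),
]

turnover = 2_000_000_000  # hypothetical company: EUR 2 billion annual turnover
for label, cap, pct in TIERS:
    print(f"{label}: up to EUR {fine_ceiling(turnover, cap, pct):,.0f}")
# For this turnover the percentage dominates each fixed cap:
# 120,000,000 / 80,000,000 / 40,000,000 respectively.
```

As the example shows, for any sizeable company the turnover-based percentage, not the fixed amount, will set the effective ceiling.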

In my view, Article 71(7), which establishes that Member States can potentially waive fines for public authorities and bodies, should be deleted. Public entities carry out a significant number of high-risk AI uses, and removing the possibility of levying high-value administrative fines against them not only favours them unfairly when compared to the private sector, but also makes the Regulation less effective and provides citizens with less protection. In this matter the State should lead by example and show that it is able to develop its AI capacities while complying with the same rules it is creating for private entities. Elsewhere, I have already expressed serious doubts about the GDPR version of this provision, and I find no reason to change that view (fn 1). Of course, the fines established for EU institutions, agencies and bodies in Article 72 should also have the same value as for any other entity.

 

8. Supervisory Authority

According to the Proposal, each Member State will have to designate a national supervisory authority to which it ‘assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Artificial Intelligence Board’. The European Artificial Intelligence Board is a European Data Protection Board-like body that shall work to guarantee the coherent application of the Regulation, including by issuing soft law, and cooperate in a number of tasks under the Proposal, including the development of harmonised standards.

Under Article 59(2), ‘the national supervisory authority shall act as notifying authority and market surveillance authority unless a Member State has organisational and administrative reasons to designate more than one authority’.

The proposal does not include a one-stop-shop mechanism like the GDPR’s, which may actually be an enforcement advantage given the severe shortcomings that mechanism has been revealing. However, it also misses the opportunity to establish stronger cooperation between supervisory authorities (the European Artificial Intelligence Board appears to have weaker powers to ensure coherence when compared to its data protection counterpart) and a level of centralised enforcement, for example by allowing the Commission to take enforcement upon itself and to levy fines when an infringement of the Regulation produces relevant effects across the EU.

 

9. (Preliminary) conclusions

It is understandably difficult to draw conclusions at this stage (this piece dates from the exact day the Commission presented its final proposal). Nonetheless, considering the available information, the Proposal appears to be a good first step for the legislative procedure and touches on most of the necessary subjects. However, as explained above, there are various key aspects that it would be important to address, and a few regulatory innovations that it would be unfortunate to miss the opportunity to include.

For such a high-stakes legislative effort, trilogue negotiations are certain to bring some changes to the proposal, and they are a good opportunity to fine-tune the outstanding issues.

 

Tiago Sérgio Cabral is a lawyer working on Technology, Privacy, Data Protection, Cybersecurity and Artificial Intelligence. He is also a Researcher at the Research Centre for Justice and Governance – EU Law (University of Minho, Portugal). The author’s opinions are his own. 

 

(fn 1) Tiago Sérgio Cabral and Rui Gordete Almeida, ‘Quando a administração viola as regras de Proteção de Dados: meios de reação do particular ao abrigo do RGPD’, in Isabel Celeste Fonseca (ed.), Estudos de E.Governação, Transparência e Proteção de Dados (Coimbra: Almedina, 2021), pp. 127-146.
