Anjum Shabbir
28th April 2021
Data, Tech & IP

Insight: “Academy of European Law Conference on Artificial Intelligence and the Commission’s legislative proposal” by Anjum Shabbir

From 29 to 31 March 2021, the Academy of European Law (ERA) hosted a conference on ‘Human Rights and Artificial Intelligence Systems’. The conference was particularly well-timed: it came soon after the Council of Europe’s ad hoc AI Committee (CAHAI) published its Feasibility Study on AI regulation in December 2020, with a possible European Convention now being explored, and just before the European Commission presented its long-awaited legislative proposal for an AI Regulation under EU law (announced last week, on 21 April 2021).

The expert speakers included former members of the European Commission’s High-Level Expert Group on AI, which advised the Commission on its proposal, and they broke down the technical fundamentals of how AI works. There were also presentations by European (EU and CoE) officials, lawyers, professors and tech consultants, reflecting the breadth of views presented from law, policy, and tech angles.

The presentations on how AI systems function reinforced a point on which the experts were unanimous: AI has to be understood before it is regulated, as bad rules can lead to a worse situation than no rules. This seems logical, but AI involves many terms unfamiliar to the traditional domain of law, some mired in uncertainty and unpredictability. One example is that self-learning AI systems rely on correlations between inputs to reach outputs – and correlation has little to do with the age-old concept so cherished in law: causality. One would therefore expect the definitions section of any AI regulation to be extensive (and it can be argued that the 44 definitions in Article 3 of the Commission’s proposal are not).
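The correlation/causality distinction can be illustrated with a minimal sketch (an invented textbook-style example, not one given at the conference): two quantities driven by a common confounder correlate perfectly even though neither causes the other.

```python
# Illustrative sketch: correlation without causation.
# Both "ice cream sales" and "drownings" are driven by a hidden
# confounder (temperature); neither causes the other.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

temperature = [15, 18, 21, 24, 27, 30, 33]
ice_cream_sales = [10 * t + 5 for t in temperature]  # driven by temperature
drownings = [2 * t - 10 for t in temperature]        # also driven by temperature

# The two series are perfectly correlated, yet causally unrelated:
print(pearson(ice_cream_sales, drownings))  # → 1.0
```

A self-learning system trained on such data would happily use ice cream sales to predict drownings; a court asking "did X cause Y?" gets no answer from that statistic.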

As yet there is no legislation at the European level on AI, and the presentation on the Court of Justice’s case law confirmed the dearth of guidance from that Court. It is worth asking why: AI is already present and surely impinging on protected rights all over the EU, and it can be argued that specific AI legislation is not necessary to bring an action before the Court of Justice. A dispute could arise, for example, over the use of AI in recruitment practices, which would fall within the scope of the Equality at Work Directive (2000/78) and the Charter of Fundamental Rights.

The experts addressed the legislative gap with various discussions of the novel legal concepts that could emerge in the context of fundamental rights:

  • the principle of human autonomy;
  • the principle/right of explainability;
  • the right to be left alone;
  • the right not to be subject to solely automated decision-making (already in the GDPR);
  • a by-design approach;
  • the precautionary principle;
  • a specific facet of the right to free thought;
  • whether to confer legal personality on AI hardware/software.

There were also opposing views on whether (a) an ethics framework, governance mechanisms and certification are desirable and no forms of AI need to be banned, or (b) existing human rights law already goes beyond soft law, ethics and recommendations, and it should be possible to ban some uses of AI given that some human rights are absolute.

The European Commission’s new legislative proposal does in fact provide for some outright bans: on AI considered a clear threat to the safety, livelihoods and rights of people, and on AI that allows ‘social scoring’ by governments. It is unknown whether such bans will survive an arduous, long and heavily lobbied legislative process. The proposal also, however, leaves considerable room for several forms of AI not to be covered by the regulation, and therefore to be subject to voluntary codes of conduct and ethics rules instead.

Some experts explained why they consider a risk-based approach, which has been taken on board in the legislative proposal, preferable: AI systems evolve over time, the domain is very agile, and systems are easily modified. But an associated problem was also identified: does this categorisation not first have to be scrutinised and properly tested before deciding what is high and low risk?

Detail was also provided on how training and test sets are used; on how to avoid bias (discrimination) – highlighted as difficult because the data has to be run on an AI system to actually find the bias, which usually comes from annotations to the training data; on how people have reacted to it; and on the need for diversity of data. A further point worth bearing in mind when considering the legislative proposal is the cost of the data: for example, the cost of a medical specialist reviewing Big Data before it could be considered of sufficient quality.
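The point that bias usually lives in the annotations rather than the raw features can be sketched as follows (a hypothetical toy example; all names, thresholds and data are invented): a trivial "model" that learns hiring rates from historically biased labels simply reproduces the annotators' bias.

```python
# Hypothetical sketch: bias enters through the labels (annotations),
# not the features. Annotators historically "hired" group A candidates
# at a lower experience threshold than group B candidates.

import random

random.seed(0)

def biased_label(experience, group):
    # The discriminatory annotation rule: A needs 3 years, B needs 6.
    return 1 if experience >= (3 if group == "A" else 6) else 0

# Candidates: (years_experience, group), labelled by the biased rule.
data = [(random.randint(0, 10), random.choice("AB")) for _ in range(200)]
labeled = [(exp, grp, biased_label(exp, grp)) for exp, grp in data]

# Conventional 80/20 train/test split.
train, test = labeled[:160], labeled[160:]

def hire_rate(rows, group):
    """Per-group hiring rate as learned from the training labels."""
    outcomes = [y for _, g, y in rows if g == group]
    return sum(outcomes) / len(outcomes)

print(hire_rate(train, "A"), hire_rate(train, "B"))  # group A favoured
```

Nothing in the feature (years of experience) is discriminatory; only by running the data through the system and comparing per-group outcomes does the bias in the annotations become visible – which is exactly why detection was described as difficult.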

On liability, ideas were floated to make engineers, researchers, producers, operators and those placing AI systems on the market responsible, in the form of strict liability. Specific attention was also paid to State liability. This is highly important, especially if the new rules in the legislative proposal are adopted, as they provide for sanctions (Article 63) in the case of persistent non-compliance, including fines of between 10 and 30 million euros or up to 6% of a company’s annual turnover (exceeding the sanctions provided for under the GDPR).

There was also exploration of which actors would enforce AI regulation: a number of the expert speakers agreed that data protection authorities do not have enough resources to be the enforcer, are not aware of the broader human rights framework, or have insufficient powers. Other options suggested were equality bodies and an EU Agency. In this respect, the Commission’s legislative proposal could be seen as lacking, given that it suggests supervision be carried out by (presumably already existing) national competent market surveillance authorities, facilitated by a new European Artificial Intelligence Board, with Member States left to identify which national authority is best placed (Title VI, Chapter 2).

Drawing inspiration from the discussions at ERA’s conference, which went far beyond what has been mentioned above, EU Law Live will be publishing a number of Insights on artificial intelligence regulation in Europe. In the interim, read this Op-Ed by Tiago Sérgio Cabral on the European Commission’s legislative proposal to regulate AI.

Anjum Shabbir is an Assistant Editor at EU Law Live
