
3 Questions To Yannick Meneceur


3 Questions To… Yannick Meneceur, policy advisor on digital transformation and artificial intelligence at the Council of Europe and a magistrate seconded from the French judiciary. He is also a research fellow at the IHEJ (Institut des Hautes Études sur la Justice) and host of the podcast “Les Temps Électriques”. At the Council of Europe, Yannick is currently involved in the Ad Hoc Committee on Artificial Intelligence (CAHAI), whose task is to carry out a feasibility study on a legal framework for AI. In May 2020 he will publish a book entitled “Artificial Intelligence on Trial” (L’intelligence artificielle en procès).




1.

There is already international agreement that AI will have a profound impact on our society and on human rights. In this regard, what is the position of the Council of Europe on regulating artificial intelligence, especially in light of the work of the CAHAI? Are there any standards for what a legally binding regulatory framework for AI should look like in the near future?


The Ad Hoc Committee on Artificial Intelligence (CAHAI) of the Council of Europe received a clear mandate from the Council of Europe member states in September 2019: to study the feasibility of a legal framework on AI. At this stage, there is no question of locking the thinking of the 47 member states into a particular legal form. The result could be a convention or a framework convention, or recommendations or guidelines.


What seems certain at this stage is the need to understand the existing legal frameworks, as well as those being developed in other international organisations or within member states. CAHAI has therefore started its reflections with an in-depth mapping of existing frameworks in order to identify possible gaps.


"However, such innovation can only be sustainable if it is part of overall development goals, as defined by the United Nations, and within which the protection of fundamental rights is essential."

From this, it will then be possible to deduce appropriate and proportionate measures for a high level of protection of human rights, democracy and the rule of law, in a context where the Organisation’s objective is not at all to hinder innovation. However, such innovation can only be sustainable if it is part of overall development goals, as defined by the United Nations, and within which the protection of fundamental rights is essential.


In my forthcoming book, I also try to analyse and prioritise concrete measures that could contribute to the regulation of AI. In my opinion, instruments should be built to ensure, ex ante, a high standard of design for AI applications that produce significant risks. I am thinking of certification schemes, but also of the training of engineers and technicians. Dedicated training courses for data scientists could be set up, with a substantial digital humanities component... and why not a professional body that would provide continuous training and discipline for this new profession?


2.

Would soft law be enough in a fast-changing environment like AI, or do we need hard law to be developed? What would be the drawbacks of implementing soft law or hard law?


The debate on the benefits and the inadequacy of soft law is not unique to AI, or to digital technology in general. The trend in recent years has been to look for more flexible modes of regulation, capable of adapting to rapidly changing environments and of taking into account the constraints of different operators. The ethical initiatives of private operators who say they are concerned about the consequences of their actions, and who seem to be looking for solutions to make those actions ethical and human-centred, are to be welcomed.


"Confidence cannot be decreed. Confidence must be proven in concrete acts and could be built if one has the certainty of sanctions in the event of failure."

However, this desire alone is not enough, for a first essential reason: when a principle laid down in such non-binding frameworks is breached, there are no sanctions. Above all, these commitments are not the product of democratic debate but of a unilateral act of will. We must remain alert to possible attempts at "ethical washing" and at slowing down the construction of binding legal frameworks. Trust cannot be decreed. It must be proven in concrete acts, and it can only be built with the certainty of sanctions in the event of failure.


AI ethics, on the initiative of the private sector, is to be welcomed, as it is reassuring to see these operators concerned about the consequences of their actions. But their priority is not the protection of the general interest. The utility function of entrepreneurs is to make a profit, not to maximise social utility. These initiatives must therefore be seen as a contribution by these businesses to a regulatory approach that would ensure not only equal conditions of access to the market - and I am thinking here of small and medium-sized enterprises - but also legal certainty for everybody.


"It should also be noted that not all AI applications may require the same level of regulation. The stakes between regal sectors, such as justice, and culture are not the same."

If we take the example of liability in the event of damage, the digital industry can find itself in great difficulty. Depending on the legal system of each state, liability will be assessed according to very different criteria. Although the rules on defective products have unified the treatment of civil disputes across the 27 member states of the European Union, disparities still exist. We also have to keep in mind that across the 47 member states of the Council of Europe, the risk of diverging interpretations is even greater - not to mention the criminal risk. Various mechanisms could be devised and, here again, the Council of Europe has a contribution to make.


It should also be noted that not all AI applications may require the same level of regulation. The stakes in sovereign sectors, such as justice, are not the same as in culture. An AI that contributes to the assessment of personal injury in a court of law, as is the case in France with the DATAJUST application, and an AI that makes a musical recommendation do not create the same risks.


3.

We have seen different applications of artificial intelligence to help fight the Covid-19 pandemic. Do you feel that the potential of AI has not been sufficiently exploited?


Unfortunately, I think that AI has too often been invoked for applications that are not yet mature enough. Moreover, its uses have not necessarily been decisive.


If we try to categorise its different applications, we could distinguish three main groups: support for research and medicine, the fight against disinformation, and support for population surveillance measures.


With regard to support for research and medicine, bioinformatics did not wait for the current health crisis to invest in machine learning to study phenomena such as protein folding, which is extremely useful for understanding how a virus "works". Interpreting lung scans to diagnose the disease has also become easier and faster with AI assistance: here too, nothing new, as the potential of this technology for the task was already known. Finally, major technology companies have joined forces to exploit, with natural language processing, the considerable number of scientific articles produced during this crisis.


With regard to the fight against disinformation, AI has played a role in making it possible to automatically filter content that deliberately or inadvertently relays fanciful information. It was a human filter, however, not AI, that censored the words of Brazilian President Jair Bolsonaro on Twitter, Facebook and YouTube, where he had praised hydroxychloroquine as "effective everywhere" and called for an end to all forms of social distancing.


Taking stock of all this, one is sometimes left with an impression of solutionism.

Support for population surveillance measures has sparked more debate, particularly mobile phone applications supporting contact tracing. Fairly strong fears of generalised surveillance emerged, and "peer-to-peer" solutions - anonymous identifiers distributed among all mobile phones without centralisation - came as a response. This is the case of TraceTogether in Singapore or PEPP-PT in Europe, which offer to alert people who have been in contact with a patient. The proposal to use AI for this type of application came from the winner of the 2019 Turing Award, Yoshua Bengio, who wants to combine this mechanism with a risk assessment calculated by an AI...
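
To make the decentralised idea concrete, here is a minimal sketch of how such "peer-to-peer" exposure matching can work: each phone broadcasts short-lived random identifiers and records those it overhears, and matching against the identifiers published by diagnosed users happens entirely on the device. The class and method names below are hypothetical illustrations under these assumptions, not the actual TraceTogether or PEPP-PT protocols.

```python
import secrets

class Phone:
    """Toy model of a decentralised contact-tracing app.

    Each device broadcasts short-lived random identifiers and records
    those it hears nearby; matching happens locally on the device, so
    no central registry of who met whom is ever built.
    """

    def __init__(self):
        self.own_ids = []       # identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers overheard from nearby phones

    def broadcast(self) -> str:
        # A fresh random identifier per epoch: unlinkable to the user.
        rolling_id = secrets.token_hex(16)
        self.own_ids.append(rolling_id)
        return rolling_id

    def hear(self, rolling_id: str) -> None:
        self.heard_ids.add(rolling_id)

    def check_exposure(self, published_ids: set) -> bool:
        # Matching is done on the device against identifiers that
        # diagnosed users chose to publish; the server never learns
        # the contact graph.
        return bool(self.heard_ids & published_ids)

# Two phones meet; one user is later diagnosed.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())           # they are in Bluetooth range

published = set(alice.own_ids)        # Alice tests positive and publishes
print(bob.check_exposure(published))  # True: Bob is alerted locally
```

Bengio's proposal would layer a learned risk score on top of this kind of matching, rather than a simple yes/no alert.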


Taking stock of all this, one is sometimes left with an impression of solutionism. Statistical learning proves a poor ally when there is relatively little data, as is the case with a new disease. The fear of a new epidemic of this magnitude may, however, lead our societies to become more tolerant of mass surveillance solutions, under the pretext of prevention, and to weaken data protection standards.


More than ever, we must not get the debate wrong: we should ask ourselves whether we could not have dealt with this crisis better with more solidarity, coordination and resources between states… rather than with any particular technical solution.


