Fight against terrorism: will facial recognition become necessary?

INTERVIEW. For now, French law remains reluctant to allow the deployment of biometric identification techniques, as advocated by Valérie Pécresse [president of the Île-de-France regional council]. But for how long?

“Cheese, you are recognized!” The idea is being revived in the current context of the terrorist threat: once masks have been abandoned, facial recognition could be used at the entrance to stations, or even on the platforms of RER and metro trains, to identify wanted persons and stop them before they board. At any rate, this is Valérie Pécresse's ambition in combating the “alarming rise in insecurity” in Paris and the Paris region. “We are at very high risk of terrorism, and we have no way of using artificial intelligence to spot suspects, even though we now have cameras everywhere. Let's not wait for a tragedy before acting!” argued the president of the Île-de-France region in the columns of Le Parisien. She also proposes arming the municipal police and strengthening the powers of private security agents.

In France, “the use of ‘intelligent’ cameras is not provided for in any specific text”, the CNIL notes on its website. And since the people filmed cannot consent (or refuse) to the system, the principle of prohibition applies. Thus the video system set up at Châtelet-Les Halles station when lockdown was lifted, to count masked faces, was shut down, even though passengers' images were deleted almost instantaneously.

Nevertheless, facial recognition is already used in France, particularly in the context of border controls. “At Nice airport, for example, a person registered as wanted can, thanks to this system, be stopped and questioned. Why not do in a train station what you can already do in an airport?” Valérie Pécresse told France Info.

Could the legal framework give way under the pressure of the attacks and citizens' feeling of insecurity? Adrien Basdevant, a lawyer specializing in digital law and co-author of L'Empire des Données (Don Quichotte, 2018), answers.

Le Point: The use of so-called “video protection” cameras is tightly regulated by law. What does the law provide?

Adrien Basdevant: Video protection, unlike “video surveillance” (which is implemented in private spaces), concerns public places and can serve different purposes: protecting certain buildings, detecting traffic violations, regulating transport flows, monitoring places particularly exposed to crime, or preventing acts of terrorism.

The system is governed by Article L.251-2 of the French Code of Internal Security and the French Data Protection Act, which provide in particular that, before deploying such a system, a data protection impact assessment must be carried out, that persons likely to be filmed must be informed (by means of posters or signs), and that the retention period for images must be limited to a maximum of one month. To ensure the security of the data processed, the images may be viewed only by specifically authorized persons.

What about the use of these cameras for facial recognition purposes?

The law is clear: the use of biometric data on behalf of the State requires a decree from the Council of State, issued after the CNIL has given its opinion. Facial recognition adds an algorithmic layer to the image-recording system: a software functionality that enables the images to be analyzed automatically. Its specificity is that it processes biometric data, which can identify a person on the basis of his or her physical characteristics. Such data are considered highly sensitive and are therefore strictly controlled.

The problem with this type of system is the interconnection of files, for example, linking people's faces to criminal police files. Currently, the TAJ file, dedicated to criminal records, can be used for facial recognition, but only for deferred comparison with images obtained during an investigation, not in real time.

And yet, facial recognition seems to be taking hold in the public space through “experiments”: in February 2019, at the Nice carnival, to identify runaway minors or people banned from large gatherings, or at Lyon Saint-Exupéry airport, to reduce queues.

In fact, it all depends on the objective assigned to the facial recognition system. Is it to authenticate a person, in other words to ensure that the person who, for example, turns up at an airport ticket office is really who they claim to be? In this case, the technology aims to certify the identity of this person by comparing a template (the digital signature corresponding to the characteristics of a given person's face) with another, pre-recorded template stored on a medium. This is the case, for example, with the PARAFE automated border control system, which compares the face of the traveler entering the gate with the photo stored in the chip of his or her biometric passport.

The other facial recognition system, to which Valérie Pécresse seems to be referring, would aim to identify one person among others, for example in a crowd, which involves comparing his or her biometric data with that of other people, who must therefore also be identifiable for this purpose. No one could move about anonymously anymore...

What is the CNIL's position on these two types of facial recognition?

The CNIL has given favorable opinions on authentication systems, provided they are necessary and proportionate. The commission approved PARAFE as well as Alicem, a smartphone application that allows users to prove their identity online securely, using their phone and biometric passport.

In Nice, on the other hand, the use of facial recognition at the entrance to high schools with the sole aim of making access more fluid was refused because it was deemed inadequate and disproportionate.

How do you feel about the idea of setting up a “State-region” ethics committee aimed at finding “the right balance between the need to secure transportation networks and the preservation of freedoms”, as proposed by Valérie Pécresse?

Creating an ethics committee is a good idea, but it is not enough. Ethics raises fundamental questions about the choices a society makes; it can complement the law but cannot replace it. All the more so since the problems must be clearly stated. Today, to reassure us, we are told that with facial recognition “we are looking for behaviors, not people”. This statement reflects a misunderstanding of the realities of profiling and algorithmic processing. First, we do not need to know a person's identity in order to single them out. Second, the issue is whether we consider individuals as subjects of law, or merely as the aggregation of their data and the calculation of dangerousness scores.

Therefore, at the European level, high-risk uses of artificial intelligence, including facial recognition, will be subject to detailed impact assessments. And the texts to come will certainly limit them...

And yet, this technology is tending to become commonplace, just think of the new versions of smartphones that integrate biometrics (face, fingerprints).

We are gradually getting used to the technologies that monitor us. Yet the use of these technologies is not neutral; it is not just a matter of convenience. Let's think twice before agreeing to unlock our phones with a fingerprint or facial recognition to save “just” two seconds, rather than using a code!

This is a good illustration of the risk facing our society: being led by technological choices that are not subject to any democratic debate. Since the standard is embedded in real time in the computer code of the platforms we use daily, our behaviors are shaped by it without any prior social and political dialogue.

Public facial recognition systems are also becoming commonplace. What legitimizes this surveillance, this continuous recording of our every move, if not our societies' anxiety and the security imperative? We live in a society that does not accept uncertainty. We tell ourselves that the camera changes nothing for those who have nothing to reproach themselves for, and who already reveal a great deal about themselves on social networks. We consider that we can be both free and watched at the same time. So many justifications, at a time when we find ourselves in a triple state of emergency: health, terrorist and economic!

The abusive and liberty-threatening uses of this technology are already numerous! Added to this are the technical flaws found here and there: in London, for example, the system set up to identify wanted persons is reported to have an error rate of around 80%!

You start by monitoring transportation, then you get to schools, places of worship, festivals, stadiums, gardens, homes and workplaces. The ultimate risk is total population monitoring, in real time.

This massive deployment could thus become normalized, no longer shocking us, as illustrated by the scandal surrounding Clearview, an American company accused of having scraped more than 3 billion photographs from the Internet, mainly from social networks. Hundreds of police departments in the United States have used its system, interconnecting it with files of convicted persons. We can see how the combination of different technologies enrolls us in a surveillance infrastructure!

It all depends on the data used in connection with these systems, because applying facial recognition to the wanted persons file (which concerns only 600,000 people) is not the same thing as applying it to the criminal records file (which concerns nearly 20 million people)! Moreover, the scope of these files is potentially very broad: they include people concerned by certain administrative measures (Treasury debtors, stadium bans, etc.) as well as those sought in the context of a judicial investigation or those who constitute a threat to public order or State security.

In addition, as you point out, the risk of misuse and hacking is significant. Once a biometric database of millions of faces has been built and is ready for use, how can we prevent it from being used in other contexts or for other purposes, such as detecting emotions?

Fortunately, there is resistance: cities like San Francisco have voted to ban the use of facial recognition by the police. Civil liberties associations and researchers continue to call for a moratorium. Many are fighting against the trivialization of these technologies and their surreptitious spread into all sectors of society.

According to the CNIL, “facial recognition calls for political choices on the role of technology, its effects on the fundamental freedoms of individuals, and the place of humans in the digital age”. How, in your opinion, can the debate be reasonably posed in the current context?

The essential question to ask is that of the relevance and effectiveness of the technological tool: for which situations is facial recognition really adequate? Zero risk will never exist. Multiplying surveillance and control systems will not make risks disappear. These technologies offer the illusion of protecting us from uncertainty.

I believe that there can be no widespread deployment of facial recognition in the country without prior consultation, awareness-raising and information for users.

Nevertheless, we are in a globalized ecosystem, and strict regulation of these technologies could quickly be outpaced by systems developed in other parts of the world. Raising the debate at the national level presupposes the commitment of all actors: researchers, industrialists, politicians and citizens, in order to provoke a European reaction and, hopefully, lead to international commitments.

This article was initially published in Le Point.