Facial recognition and data protection: Will you collect happy points for good citizenship in 2025?

Jul 5, 2019 | EU

Facial recognition is yet another exciting new technology awaiting wider introduction in Europe. There are already various applications in the European Union, such as passport identification at airports, policing, and name tagging on social media platforms like Facebook. However, large-scale deployment has not yet occurred. Why is that? And could it be related to the GDPR?

Facial recognition and data protection


Let’s first explore the technology itself. Using algorithms, the characteristics of a face (in a picture, in a video, or in real life) are translated into a sort of ‘fingerprint’: a unique identifier for each individual. This fingerprint can then be matched against the ‘fingerprints’ representing the faces of other individuals. When two fingerprints are sufficiently alike, they count as a match, which should mean that the two visual sources depict the same individual. This mechanism allows systems to match the same individual across different visual types, such as video and photo resources.
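To make the ‘fingerprint’ idea concrete, here is a minimal sketch in Python. It assumes the embedding vectors have already been extracted by some face recognition model; the 128-dimensional size, the 0.6 threshold, and all names are illustrative assumptions, not details of any particular system.

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two face 'fingerprints' (embedding vectors)."""
    return float(np.linalg.norm(a - b))

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when two fingerprints are 'sufficiently alike',
    i.e. their distance falls below a tuned threshold."""
    return euclidean_distance(emb_a, emb_b) < threshold

# Illustrative usage: match a probe face against a small gallery.
rng = np.random.default_rng(42)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}
probe = gallery["bob"] + rng.normal(scale=0.02, size=128)  # noisy second capture of 'bob'

matches = [name for name, emb in gallery.items() if is_same_person(probe, emb)]
print(matches)  # expected: ['bob']
```

The threshold embodies the trade-off the article goes on to discuss: set it too loose and strangers are matched, too strict and genuine matches are missed.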

What this comes down to is that individuals are now identifiable based on their facial characteristics, and are no longer anonymous in the systems used by organisations such as Facebook. It also means that profiling a person is becoming much easier. Suppose we combine public data on the web, such as photos and videos, with closed sources, like surveillance camera footage: we can then collect plenty of detail on the whereabouts of those depicted, effectively tracking their footsteps across the digital and physical worlds.

The applications of facial recognition are widespread, stretching from identification (phones, building entry) and targeted advertising, through social media photo and video tagging, to large-scale surveillance. The privacy repercussions of most of these applications are major, as individuals can be ‘followed’ across the web and in real life using data from this technology.

Facial recognition under the GDPR

Facial recognition is based on the use of biometric personal data. Under the GDPR, biometric data is listed as a special category of personal data. This means that in order to process these types of data, an exception from article 9 is required in addition to a processing ground from article 6. Such an exception permits the processing of special categories of personal data in deviation from the general prohibition on doing so.

In practice, this means there are still several circumstances that would allow the use of facial recognition, such as explicit consent by the data subject, or necessity for reasons of substantial public interest. The former exception ground can work for individuals who consciously wish to take advantage of the technology, for instance for automatically tagging individuals in photos posted on social media. The latter exception ground might raise eyebrows and require additional legislation if it were invoked for a public surveillance system, for instance for capturing criminal suspects.

On the other hand, for many applications of facial recognition it could be hard to find an applicable exception ground. For instance, tagging pictures with the names of individuals on social networks would require explicit consent. To be clear, consent is defined in article 4 of the GDPR as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her”. The additional requirement of ‘explicitness’ raises the bar even further.

Tagging individuals on Facebook would therefore require a clear explanation of what the feature actually does, inside and possibly outside the data subject’s own network, and a clear warning that it might fail, as its reliability is limited. Today, Facebook only provides the option to opt out of the auto-tagging function, possibly a direct breach of the GDPR.

If your organisation is planning to use facial recognition technology, under most circumstances carrying out a data protection impact assessment (DPIA) will be mandatory under the GDPR: article 35(3)(b) requires an assessment in the case of large-scale processing of special categories of personal data. Such an assessment should also examine what less intrusive alternatives to facial recognition are available. Alternatives most probably will be available, and the use of facial recognition might therefore not pass the necessity and proportionality test.

Ok, but does it work?

As of today, the technology behind facial recognition still struggles with unreliability. Sure, it works when the matching group is narrow, for instance when verifying the identity of a single individual, but as the group of possible matches grows, reliability drops significantly. Whereas Facebook can use your network data to assess the probability that a recognition is accurate, a generic surveillance system matching all passers-by against a large set of photographs of convicted criminals has no such additional data.

Many facial recognition systems used in real-life settings produce considerable numbers of false positives (people who are identified incorrectly) and false negatives (actual matches that are not ‘caught’ by the algorithm). Both could be a big problem when governments try to pull off large-scale surveillance systems based on facial recognition, especially in light of increasing evidence that facial recognition is less reliable for certain skin tones.
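To see why false positives dominate at scale, consider a back-of-the-envelope sketch in Python. All numbers here are invented for illustration, not figures from any deployed system:

```python
def surveillance_alerts(passers_by: int, suspects_present: int,
                        true_positive_rate: float, false_positive_rate: float):
    """Expected correct hits vs. false alarms for one day of screening."""
    innocents = passers_by - suspects_present
    hits = suspects_present * true_positive_rate     # real matches caught
    false_alarms = innocents * false_positive_rate   # innocents flagged
    precision = hits / (hits + false_alarms)         # share of alerts that are correct
    return hits, false_alarms, precision

# 100,000 passers-by, 10 genuine suspects, a system that is "99% accurate".
hits, false_alarms, precision = surveillance_alerts(100_000, 10, 0.99, 0.01)
print(f"correct hits: {hits:.0f}, false alarms: {false_alarms:.0f}, "
      f"precision: {precision:.1%}")
# -> correct hits: 10, false alarms: 1000, precision: 1.0%
```

Even with a seemingly impressive 99% accuracy, roughly 99 out of every 100 alerts would point at an innocent passer-by, simply because innocents vastly outnumber suspects.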

The limited reliability and the problems of algorithmic bias would rule out any automated decision-making based on the technique. These scenarios therefore beg the question: what problem does facial recognition solve that couldn’t be addressed by other, less intrusive techniques? A great number of identification and tracking methods serve similar purposes: think of public transport chip cards, mobile phones, or wifi tracking, to name a few.

What does it all mean for you as the user?

Does it just mean that there is one more technique out there that invades your privacy? Or is it a technique that will make your life easier and safer? Well, while it is already virtually impossible to go through life anonymously, facial recognition might add to a sense of 24/7 surveillance. It lets people and companies learn about your private life through images and videos online, and lets governments, or even commercial organisations, capture and follow you while you simply walk down the street.

While facial recognition is only one of many methods of identifying, following, or profiling individuals, it is one that might prove hard to escape. You may ‘opt out’ of wifi tracking by turning off your phone, but you can’t ‘opt out’ of many forms of facial recognition without vowing to stay at home at all times, or physically disguising yourself when outside (there is already a market for anti-surveillance makeup that outsmarts facial recognition tools). Like many other technologies, it is not ‘evil’ in itself; rather, it can become a real threat if used with the wrong intentions.

Facial recognition is just one step in a gradual process in which meaning is automatically attached to photos and videos: rather than needing a human agent to describe what is happening, an algorithm can get the job done. Whereas currently a lot of data is collected that cannot be used in a meaningful way, this will change once techniques like facial recognition are developed further.

Naturally, there might be considerable room for error when it comes to such descriptions. While humans are not always the best judges of intentions either, the sheer scale at which computers could assess human intentions would mark a shift: instead of most collected data going unused, intentions would be described automatically at a scale no human workforce could match. What is more, there will be insufficient human capacity to verify those assumed intentions by checking the automatic descriptions against the footage.

From mass surveillance to mass enforcement

Consequently, the tendency will be to attach ‘responses’ automatically. It would work like parking or speeding tickets: cameras check the number plates, and if needed, fines are issued automatically. However, in this case the ‘responses’ would be based not on objective facts (such as speeding in your car), but on assumed intentions. What could these intentions be? Well, that an individual is happy or short-tempered, for instance. That they are interested in purchasing a certain item. Or even, that they want to commit an offence or a crime.

So to sum up this thought, facial recognition will not qualify as the final step towards a surveillance society. It is just another step in a process in which computers ‘understand’ more and more about what happens in the world: describing what human beings do, how they interact, and what their intentions are. The possibility to act upon those intentions will decrease our personal freedom even further. How so?

In the future, one might even think of putting on a happy face all day to collect ‘happy points’ under surveillance, becoming a prime example in a personal positive citizenship programme. Even if a government refrains from such methods, the private sector might not. Just one step beyond the set of review buttons at the cash desk of a department store, one could envisage cameras registering whether all employees wear the obligatory smile.

Going back to the GDPR treatment of facial recognition, it is good to see that the EU has imposed a restrictive regime on the processing of biometric data and requires additional exception grounds, such as an explicit legal basis. Having said that, focusing only on facial recognition might obscure the broader development that is going on: attaching meaning to what happens is no longer the prerogative of human beings.

The more computers can interpret what is happening on the streets, the more meaningful observations can be produced; and the more meaningful observations are made, the greater the inclination will be to attach consequences to those observations. Mass surveillance will turn into mass enforcement. A dangerous road to go down.