Facial Recognition Technology: An International Threat to Privacy?
The present article deals with a contemporary issue arising from an unprecedented leap in technology, which has enabled countries to use facial recognition as a law enforcement tool. The technology has changed the dimensions of the existing debate on the right to privacy and has created an ideological divide between groups supporting civil liberties and countries placing national security on the highest pedestal.
What is Facial Recognition Technology?
Facial Recognition Technology is a term that encompasses the computer- or AI-driven mechanisms used to identify human faces by matching them against existing databases. Facial recognition relies on a 'facial signature': a mathematical representation of a human face derived by processing its geometry. At the root of this technology is 'biometrics', the measurement of data from the features of the human body. In advanced AI-powered systems, constant surveillance employing biometrics is used to gather quantifiable data, which is then statistically analysed by algorithms. This data includes parameters such as emotion, mental state, ethnicity, skin colour, body type and age, inter alia, and is used by law enforcement agencies to identify suspects.
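The 'facial signature' matching described above can be sketched as comparing fixed-length vectors by distance. The following is a minimal illustration only, not any vendor's actual pipeline; the signature vectors, identities and match threshold are all hypothetical:

```python
import math

# Hypothetical facial "signatures": fixed-length vectors derived from facial geometry.
database = {
    "person_a": [0.12, 0.80, 0.45, 0.33],
    "person_b": [0.90, 0.10, 0.05, 0.70],
}

def euclidean(u, v):
    """Straight-line distance between two signature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(probe, db, threshold=0.3):
    """Return the closest identity if it is within the threshold, else None."""
    best_id, best_dist = None, float("inf")
    for identity, signature in db.items():
        d = euclidean(probe, signature)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

print(identify([0.11, 0.79, 0.46, 0.35], database))  # close to person_a's signature
```

The threshold choice is where false positives enter: a looser threshold matches more faces, including faces of people who are not in the database at all.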
Although primitive versions were used merely to identify suspects by comparison against existing police databases, facial recognition technology has since been developed into 'predictive policing algorithms', which use the collected biometric data to predict criminal behaviour. These leaps in technology have led privacy activists around the world to voice concern about the potential harms involved. Owing to heavy investment from big corporations, the facial recognition market is 'expected to grow to $7.7 billion in 2022 from $4 billion in 2017.'
Infringement of Right to Privacy
The right to privacy has been accepted by many countries as a fundamental right and given a place in their constitutions. In international law, however, the International Covenant on Civil and Political Rights (ICCPR) is the primary instrument securing the right to privacy. Article 17 of the ICCPR states that,
“1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation.
2. Everyone has the right to the protection of the law against such interference or attacks.”
This article prohibits interference with a person's privacy in an arbitrary or unlawful fashion, and thus implies that surveillance of a person without their consent and without a legitimate purpose violates their right to privacy. Further, Article 21 of the ICCPR provides for the right to peaceful assembly, free of restriction except where necessary in the interest of, inter alia, public order. For example, it has been reported that facial recognition technology can be trained to recognise people belonging to LGBTQ+ communities, thereby suppressing their ability to hold protests and pride parades in authoritarian regimes. NGOs and privacy activists have primarily centred their concerns about facial recognition technologies on these two rights.
The issues with facial recognition are manifold, as different versions and systems of the technology exist and it is not possible to generalise all of them with a single line of argument. The Indian Supreme Court, in the landmark Puttaswamy judgment, held that the right to privacy entails that no data of any individual may be collected without their consent, and that the exception of data collection for law enforcement purposes can be allowed only if it passes the test of proportionality. By that standard, the facial recognition technology used by the Delhi Police in December 2019 to collect facial and biometric details at a mass protest involving thousands of people would be a disproportionate and arbitrary infringement of privacy. Unfortunately, this type of facial recognition is the least of our worries.
FRTs that use algorithms to categorise biometrics are the bigger danger. Most of the organisations upholding the right to privacy have concentrated their efforts on highlighting the illegitimacy of such technology. Many researchers fear that facial recognition systems which categorise emotions and other qualities from facial features will be exploited to make pseudoscientific judgments about individuals' personalities and race, much like the 'science of physiognomy' employed by the Nazis.
The Minority Report
FRT companies such as Clearview AI and DataWorks Plus have faced criticism over systems they sold to US law enforcement agencies, because those systems were used to identify protestors. Protests and dissent are the backbone of a legitimate democracy. Yet at Black Lives Matter and pro-LGBTQ+ rallies in the US, advanced facial recognition technology was used to identify and target protestors. Hence, apart from being a massive violation of people's privacy, facial recognition also hampers the exercise of oppressed minorities' right to dissent against the administration, an essential civil right. In the existing paradigm, where draconian laws around the world have criminalised LGBTQ+ activities, any technology that can invade people's privacy so irresponsibly would be like handing a child a matchbox. Exercising mass surveillance tactics on peaceful protests is arbitrary and unlawful, and a clear violation of the ICCPR.
Secondly, experts have opined that FRTs are capable of more blatant human rights violations. Research has found that FRT systems process faces with certain ethnic and racial features more accurately than others. The National Institute of Standards and Technology, researching the impact of demographic factors such as race and ethnicity on facial recognition, found that false positive rates were higher among women and African Americans. Researchers at Georgetown University have likewise found that such technology is bound to "disproportionately affect African Americans", because the US police as an institution is racially biased and places Black people on watchlists far more often than white people. Facial recognition technology is thus a potential instrument of oppression against marginalised communities that constantly face backlash at the hands of majoritarianism. The reasons for higher delinquency among minorities lie in their poor and disparate living conditions; they require social security and progressive help from the government to achieve a level playing field. Facial recognition technology does the opposite, forcing them to live in a police state.
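The demographic disparity the NIST study measured can be made concrete. A false positive rate is computed per group as the share of different-person comparisons that the system wrongly declares a match. A minimal sketch, with entirely made-up trial data (the group labels and outcomes are hypothetical, not NIST's figures):

```python
# Hypothetical verification trials: (group, same_person, system_said_match)
trials = [
    ("group_x", False, True),   # a false positive for group_x
    ("group_x", False, False),
    ("group_x", True,  True),
    ("group_y", False, False),
    ("group_y", False, False),
    ("group_y", True,  True),
]

def false_positive_rate(trials, group):
    """FPR = wrong matches / all different-person trials, within one group."""
    negatives = [t for t in trials if t[0] == group and not t[1]]
    false_matches = [t for t in negatives if t[2]]
    return len(false_matches) / len(negatives) if negatives else 0.0

print(false_positive_rate(trials, "group_x"))  # 0.5
print(false_positive_rate(trials, "group_y"))  # 0.0
```

A gap of this kind means members of one group are far more likely to be wrongly flagged as a watchlist match than members of another, which is precisely the disparity the article describes.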
Beyond errors and false positives, the futuristic vision of facial recognition technology is even more concerning. Airports have recently begun using facial recognition at international terminals to secure air travel: faces captured at the airport are compared against databases prepared by customs agencies. However, errors in the process have reportedly led to the non-identification of legitimate travellers, causing delays. The situation can only worsen if facial recognition becomes mandatory at all airports.
Further, the databases prepared using facial recognition are being analysed by algorithms to identify criminal patterns and predict deviant behaviour. Such technology relies on huge data sets, known as big data, whose very collection experts have already described as a breach of the individual's right to privacy. The patterns identified by the algorithm are then used by law enforcement agencies to pre-emptively deter potential criminals. However, due to biases at every stage of operation, it is feared that the algorithm may end up reinforcing inaccurate and harmful stereotypes.
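The feared feedback loop, in which arrest data skewed by past policing drives patrol allocation and thereby generates yet more arrests in the same areas, can be sketched with hypothetical numbers (the areas, counts and allocation rule are assumptions for illustration only):

```python
# Sketch of the feedback loop feared in predictive policing (made-up numbers).
# Both areas are assumed to have identical true crime rates, but area_a starts
# with more recorded arrests because it was already more heavily policed.
arrests = {"area_a": 100, "area_b": 20}
ARRESTS_PER_PATROL = 1  # with equal crime, each patrol yields the same arrests

def allocate_patrols(arrests, total_patrols=100):
    """Naive 'predictive' allocation: patrols proportional to past arrests."""
    total = sum(arrests.values())
    return {area: round(total_patrols * n / total) for area, n in arrests.items()}

for step in range(3):
    patrols = allocate_patrols(arrests)
    for area, p in patrols.items():
        arrests[area] += p * ARRESTS_PER_PATROL  # more patrols, more arrests
    print(step, patrols)
# The initial skew persists: area_a keeps receiving the bulk of patrols
# even though the underlying crime rates were assumed to be equal.
```

Even with identical underlying crime, the historical policing skew is locked in by the algorithm; this is the 'reinforcing stereotypes' worry in miniature.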
Regulation of Facial Recognition Technology
The concerns regarding facial recognition technology have been taken into account by legislators around the world, and many countries have introduced regulations for it. However, these regulations have failed to address some of the gravest issues. For instance, the EU has enacted the General Data Protection Regulation, which oversees the processing of data collected through facial recognition but is completely silent on the artificial intelligence employed by the technology.
India has also introduced a Personal Data Protection Bill, which aims to secure the privacy of citizens but at the same time gives the government the authority to process the personal data of anyone by 'citing national security, public order or friendly relations with other states.' Thus, the real concerns remain largely unaddressed by any law or legislation that aspires to regulate facial recognition technology.
In the author’s opinion, this leap of technology can lead humanity down a dark path if its use continues under imprecise laws that can barely regulate the basic versions of facial recognition and do not cover the use of AI by FRTs. No institution, including the law enforcement department, is free from institutionalised prejudices. It is important that administration throughout the world operates without being affected by such biases, which can only be achieved through transparency and accountability of the officers involved. Implementing facial recognition technology would be a step backwards, as it further cloaks the biased individuals calling the shots by delegating their biased errors to a machine. As discussed above, facial recognition systems are racially biased and their abuse has already been documented.
National security can never be a defence for targeting innocent individuals and placing all power in a problematic algorithm. The envisaged Orwellian regime, pushed by the power-hungry hegemony of big tech and conservative politicians, raises obvious apprehensions: suppression of dissent and suspension of basic civil rights, including but not limited to free movement and peaceful assembly. Even if paranoia about such technology might seem exaggerated, predictable realpolitik reminds us to make a genuine assessment of the status quo.
This article has been published by Subodh Singh, who is a law student at ILS, Pune. He is keenly interested in international law and in studying the intersection of technology with law.