TIPLPR

AI AS A TOOL FOR INFORMATION CORRUPTION: A SOCIO-LEGAL PERSPECTIVE

Updated: Aug 30, 2020


Introduction

Malicious use of new technologies, and artificial intelligence (AI) in particular, can have detrimental effects on mankind. The spread of disinformation, often in the form of fake news disseminated on social media, has profound implications for public discourse, political accountability and integrity, elections and governance. This article examines the various mechanisms by which AI is used to spread disinformation, and the consequences for the socio-political forum.

Weaponizing AI To Spread Disinformation

Despite the numerous benefits that AI provides the world, it also brings with it new risks for human rights and democratic political processes across the globe. One such growing concern is the spread of disinformation. Simply put, disinformation is information that is false and deliberately created to harm a person, social group, organisation or country.

It was reported as far back as 2017 that two data scientists had taught an artificial intelligence to design its own phishing campaign. The artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans could.

Coordinated activity by fake accounts can increase the likelihood of something trending, or reduce the chance of legitimate news reaching the mainstream. This is called ‘algorithmic amplification’, where some online content becomes popular at the expense of other viewpoints. One example of news that was widely circulated before it was found to be untrue was the supposed return of swans and dolphins to the canals of Venice during the COVID-19 induced lockdown.
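The mechanics of this kind of amplification can be illustrated with a minimal sketch. The story names, account counts and the engagement-count ranking below are purely hypothetical; real platforms rank trending content with far more complex signals, but the underlying dynamic of coordinated accounts inflating one item's score is the same.

```python
from collections import Counter

def trending_scores(shares):
    """Toy trending metric: count shares per story."""
    return Counter(shares)

# Organic activity: two stories with similar genuine interest.
organic = ["real_story"] * 50 + ["false_story"] * 45

# A coordinated network of 30 fake accounts each shares the false story 5 times.
coordinated = ["false_story"] * (30 * 5)

scores = trending_scores(organic + coordinated)
top_story, _ = scores.most_common(1)[0]
# The fabricated story now outranks the genuine one purely through volume.
```

Even a small coordinated network is enough to flip the ranking, which is why engagement-count metrics are so easy to game.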

Another growing concern stems from the ‘Deepfakes’ technology, which involves images and videos that are incredibly realistic but are, in fact, computer-generated. They have become an extremely powerful tool that allows humans to manipulate or fabricate visual and audio content on the internet to make it seem very real. At a time when people struggle to accurately identify the fake from the real, circulation of distorted content can significantly assist those wishing to manipulate opinion in a particular direction.

Social media algorithms create a scenario where you see ads, news and posts that resonate with you and reinforce your personal views, a phenomenon variously called an ‘ideology bubble’, a ‘filter bubble’ or, most commonly, an ‘echo chamber’. In other words, public discourse is algorithmically channelled over time to create an echo chamber: an environment where one encounters only opinions and beliefs similar to one’s own, and does not come across alternatives in the normal course of browsing.

Echo chambers steer people towards information or evidence that reinforces their pre-existing notions, and induce distorted reasoning that keeps them away from conflicting views. The lack of proper mechanisms to allow dissenting voices to be heard can lead to social fragmentation, with societies coloured by extreme ideologies.
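The feedback loop behind a filter bubble can be sketched in a few lines. The item tags, the similarity scoring and the assumption that a user always clicks the top recommendation are all simplifications invented for illustration; real recommender systems are far more sophisticated, but the reinforcement dynamic is the same.

```python
def recommend(profile, items, k=3):
    """Rank items by overlap with the user's current interest profile."""
    return sorted(items, key=lambda it: -sum(profile.get(t, 0) for t in it["tags"]))[:k]

def consume(profile, item, weight=1.0):
    """Clicking an item reinforces its tags in the profile."""
    for t in item["tags"]:
        profile[t] = profile.get(t, 0) + weight
    return profile

items = [
    {"id": "a", "tags": ["politics_left"]},
    {"id": "b", "tags": ["politics_left"]},
    {"id": "c", "tags": ["politics_right"]},
    {"id": "d", "tags": ["sports"]},
]

profile = {"politics_left": 1.0}  # a slight initial leaning
for _ in range(5):
    feed = recommend(profile, items)
    profile = consume(profile, feed[0])  # the user clicks the top result

# After a few rounds the profile is dominated by one viewpoint,
# and opposing content never surfaces in the top recommendations.
```

A slight initial leaning compounds with every click, which is why alternative views stop appearing "in the normal course of browsing" without the user ever choosing to exclude them.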

Role Of AI In Shaping Public Discourse

Whether it’s allegations of ethnic cleansing in Myanmar, anti-Muslim violence in Sri Lanka or the online disinformation campaign against the ‘White Helmets’ to sow unrest in Syria, it is clear that social media platforms, amplified by the use of artificial intelligence, are helping spread divisive messages online at an alarming rate and potentially fuelling offline violence.

The information ecosystem also enables political manipulation and control by filtering the content available to users. For example, the Alphabet Executive Chairman admitted in 2017 that Google would de-rank content produced by Russia Today and Sputnik. While such tools can be used to filter out malicious content or fake news, they can also be used to manipulate public opinion. Free speech is inherent in most democracies. This means that while firms may not be directly regulated, political parties can still apply pressure to ensure that they filter some of the content that comes through, or manipulate information to suit their requirements.

Young people form a large demographic group of social media users. They are susceptible to falling victim to this disinformation culture, which can in turn influence their thoughts and behaviour. This is a dangerous trajectory that could ultimately skew policy discourse and distort the political climate.

Implications Of Disinformation On The Socio-Political Landscape

In this age, corrupt information proliferates online alongside traditional journalism. AI can be used as a tool to produce and spread disinformation, and to target voters in the run-up to elections.

In February 2020, a day ahead of the Legislative Assembly elections in Delhi, two videos of the BJP President Manoj Tiwari criticising the incumbent Delhi government went viral on WhatsApp. While one video had Tiwari speaking in English, the other showed him speaking in Haryanvi. The latter was a fabricated video created using deepfake technology, designed to reach the large Haryanvi-speaking migrant worker population in Delhi and dissuade them from voting for the rival party.

Needless to say, in a country like India where digital literacy is nascent, even low-tech video manipulation can lead to violence, especially given the lacunae in laws equipped to deal with such issues.

The #MacronLeaks incident, which took place just 48 hours before the second round of the French presidential election, saw the use of similar tactics. After Macron’s campaign emails were hacked, fake documents were mixed in with the genuine ones before the material was circulated.

The US presidential election in 2016 saw the behavioural targeting of voters, with that data used to suppress voter turnout. For instance, days before the election, messages circulated on social media claiming that Hillary Clinton had died. Additionally, messages targeted at Democrat voters claimed that the date of the election had changed.

AI, with its rapidly developing capabilities, can easily be misused to heighten existing tensions within society and manipulate public opinion, posing a serious threat to objective decision-making by the voting public. The effectiveness of such campaigns has been attributed in part to how well they exploit biases deep-rooted in society. Algorithms hold increasing sway, which will eventually lead to deep polarisation of ideas and beliefs in society.

Filling The Lacunae In Law

Classic tactics of disinformation are fuelling a growing disaffection with politics. There is an urgent need to find ways to enable democracy to defend itself, and to bring into the open the intentional tactics being used to undermine public discourse and democracy. The fragility of the online public sphere, and the ways in which technology is being manipulated for political purposes, need to come to light.

Governments are becoming increasingly concerned about fake news, misinformation and the way the public sphere can be manipulated. Several governments have announced enquiries, are establishing units to debunk fake news and are proposing legislation and regulation.

Democracies should not only create resilience in the digital world, but also try to expose disinformation and propaganda articles and encourage media outlets to do the same. While denouncing false articles helps re-establish the truth, it does not address the fact that a certain number of people will already have read them. France’s media blackout for the 48 hours prior to the close of polls played an important role in limiting the impact of the documents leaked from Macron’s campaign. Such measures could be adopted to control the effects of possible cyber attacks during elections. Additionally, strict steps must be taken to ensure that political parties refrain from using such electoral tactics.

In October 2019, the state of California in the US passed a bill making it illegal to circulate deepfake videos of politicians within 60 days of an election. The legislation was signed to protect voters from misinformation. Egypt and the Gambia have long had legislation aimed at combating fake news, though it has faced severe criticism from free speech advocates. The German parliament has passed a law to fine social media companies with more than two million users for failing to remove certain content (such as fake news and hate speech) within 24 hours. Italy has called for the EU to establish agencies that can fine companies for spreading false information. Many other countries are following suit.

Although the spread of disinformation has been heightened by AI, the same technology can also effectively counter such abuses. Governments need to work towards making this weapon their biggest strength in the battle against fake news.

AI And Disinformation In The Time Of COVID-19

Scholars argue that false news travels six times faster on social media than real news. In the midst of the ongoing COVID-19 pandemic, bots are twice as active as studies had initially predicted. Conspiracy theories associated with the pandemic can be many and far-fetched, since the scientific community’s knowledge of the virus remains limited. Bots are currently responsible for about 70% of COVID-19-related activity on Twitter, and 45% of the accounts relaying information about the virus are bots.
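Identifying bot accounts at this scale is itself a machine-learning problem, but the intuition can be shown with a crude heuristic. The thresholds, weights and account fields below are invented for illustration and do not reflect how Twitter or any research group actually scores accounts.

```python
def bot_likeness(account):
    """Crude heuristic score in [0, 1]; real detectors use ML over many signals."""
    score = 0.0
    if account["posts_per_day"] > 100:    # superhuman posting rate
        score += 0.4
    if account["account_age_days"] < 30:  # freshly created account
        score += 0.3
    if account["followers"] < 10:         # little organic audience
        score += 0.3
    return score

suspect = {"posts_per_day": 400, "account_age_days": 5, "followers": 2}
human = {"posts_per_day": 6, "account_age_days": 900, "followers": 250}
# suspect scores near the maximum; the human account scores zero.
```

Real bot-detection systems combine hundreds of such behavioural signals, precisely because any single threshold is easy for bot operators to evade.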

On the other hand, AI is being used by multiple corporations to check and prevent misinformation. For instance, Facebook, after encountering millions of fake posts about the virus, turned to AI to detect misinformation. AI-powered natural-language-processing (NLP) chatbots are being used to provide people with correct information.
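At its simplest, this kind of detection amounts to matching posts against claims that fact-checkers have already debunked. The sketch below is illustrative only: the claim list is hypothetical, and platforms like Facebook use large ML classifiers rather than keyword matching.

```python
# Hypothetical list of claims already debunked by fact-checkers.
DEBUNKED_CLAIMS = {"5g causes covid", "drinking bleach cures covid"}

def flag_post(text):
    """Flag a post if it repeats a claim on the debunked list."""
    lowered = text.lower()
    return any(claim in lowered for claim in DEBUNKED_CLAIMS)
```

Keyword matching breaks down as soon as a claim is paraphrased, which is why production systems rely on trained language models instead.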

Conclusion

In this rapidly shifting technological and political landscape, disinformation programs require the highest possible degree of examination and accountability. Countries need to work on bridging the gap between existing laws and newly developing technologies, while at the same time safeguarding the right to free speech. Governments and social media forums need to work together to ensure that majoritarian opinions do not push others to the backseat. Amidst today’s pervasive influence of social media, automated methods of detecting fake news deserve significant research attention. Expanding on the work AI is currently performing during the pandemic, it can be further developed into a tool to combat disinformation.

Advances in artificial intelligence will make its future influence on the global socio-political landscape greater and more dangerous. Thus, the ongoing dialogue regarding the ethics of artificial intelligence should expand to consider the socio-legal implications of these technologies and to find effective mechanisms to remedy their adverse human rights impact.

Title Image Source: IT Security Guru


The Article has been written by Tanya KY, a law student at Gujarat National Law University. She is currently exploring new avenues in order to find her niche in the field of Law, and finds herself inclined towards Intellectual Property Law. An avid reader and artist, she also enjoys debating and writing.