
AI-Powered Surveillance: Google’s shift in AI policy ignites a call to action

Background

 

In a concerning development, Google has quietly updated its AI ethics principles and lifted its long-standing ban on using Artificial Intelligence (AI) to develop weapons and surveillance systems. Google’s parent company, Alphabet, dropped the section that previously recognised such uses as ‘likely to cause harm.’ It has defended the move with the overused excuse of justifying human rights violations in the garb of “national security”: the claim that businesses and governments need to work together to realise AI’s capability to protect national security.

 

This post focuses on the legal and human rights risks of AI-powered surveillance and how it enables mass monitoring of citizens and erodes the right to privacy. Additionally, these surveillance systems perpetuate racial bias and systemic discrimination by disproportionately targeting marginalized communities.

 

AI-powered Surveillance

 

Artificial Intelligence has the potential to rapidly expand the reach of government surveillance, pushing us toward a future where virtually no aspect of our lives remains truly private. An American Civil Liberties Union report says AI-driven surveillance can “supercharge” mass monitoring, enabling cameras to detect “unusual” behavior, analyze intimate interactions, extract embarrassing footage, and even estimate a person’s age or emotional state: literally “read” people’s every action.

 

Even today, governments across the world work closely with companies to collect and store data about our personal lives without our explicit consent. The justification offered is that merely existing online implicitly forfeits privacy. This is a deeply flawed argument that shifts responsibility away from those doing the collecting. More importantly, is it even possible to exist in the modern world without an online presence? This subtle coercion should not be used as an excuse to intrude into our personal lives and exploit them. We are being constantly watched, and this is not a paranoid delusion. Google’s policy change could set a dangerous precedent and embolden governments and corporations to expand AI-driven surveillance with impunity. Cybersecurity expert Bruce Schneier describes this as “ubiquitous surveillance”: an ecosystem in which data is collected at an industrial scale and organized into vast digital vaults of personal information.

 

Governments can then use this data to manipulate public opinion, monitor dissidents, and suppress opposition, all under the guise of security. One stark example is the Cambridge Analytica scandal, in which the personal data of millions of Facebook users was harvested and used to influence political campaigns. Global AI giants and governments have already put such systems in place, and they are being used for alarming purposes. Mass surveillance can also become an instrument of systemic oppression.

 

An Office of the High Commissioner for Human Rights (OHCHR) report revealed that China’s system of oppression involves partnering with high-tech private companies to carry out wholesale tracking and monitoring of Uyghur individuals. This involves the collection of biometric data, including facial imagery and iris scans, and genomic surveillance through mandatory DNA sampling.

 

According to the UN report,[1] “Such monitoring has reportedly been driven by an ever-present network of surveillance cameras, including facial recognition capabilities; a vast network of “convenience police stations” and other checkpoints; and broad access to people’s personal communication devices and financial histories, coupled with analytical use of big data technologies.”

 

Imagine this model being adopted at a global scale. What can happen if such AI-powered surveillance becomes a default tool for governments to monitor citizens and manipulate public perception?

 

Conflict with International Human Rights Framework

 

AI-powered surveillance poses a threat to privacy, freedom of expression, and protection from discrimination, all of which are fundamental rights recognized under international law.

 

 

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.[2] 

 

Privacy is foundational to who we are as human beings. It helps us make a place in the world and defines our relationships with others. It gives us space to be ourselves, free of judgement, and allows us to think and express ourselves freely. It gives us autonomy and the freedom to live with dignity. The forms of mass surveillance used by governments today directly threaten the very core of our right to privacy, as protected by Article 12 of the Universal Declaration of Human Rights and other human rights instruments, and AI is putting this erosion on a fast track.

 

  1. Article 19 of UDHR & ICCPR

 

"Everyone shall have the right to hold opinions without interference. Everyone shall have the right to freedom of expression, which includes the freedom to seek, receive, and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing, or in print."[3] [4] [5] 

 

Mass surveillance can be used as a tool for political suppression, as seen in China’s surveillance of Uyghurs and Hong Kong protesters. It can track dissidents, journalists, and protesters, turning public spaces into zones of constant fear. Such systems can profile us for our supposed risk of committing crimes, based on factors entirely beyond our control, or use facial recognition to monitor dissent at every demonstration we attend. The mere knowledge that we are being constantly watched stifles free thought and speech; individuals become hesitant to challenge authority or express controversial views for fear of surveillance.

 

As put by Julia Angwin, an American journalist,[6] “Surveillance is just the act of watching, but what has it done to the society, right? …. What does it do when there are no pockets where you can have dissident views, or experiment with self-presentation in different ways? What does that look like? That’s really just a form of social control…a move towards conformity…. Surveillance itself is not quite an aggressive enough word to describe it.”

 

  2. Article 7 of UDHR

 

Even if we assume, against all odds, that governments deploy AI surveillance without malicious intent, the technology itself is flawed in ways that disproportionately target marginalized communities. Western hegemony and racial prejudice have seeped into the tech sphere as well. MIT and Georgetown studies have shown that AI misidentifies Black and Asian faces at higher rates, and facial recognition systems in particular have been shown to be biased against minorities. There may be two reasons for this, as highlighted in the Georgetown research.

 

Firstly, AI systems do not operate in a vacuum; they inherit the biases of the societies that create them, and thus reflect existing disparities in areas including healthcare, criminal justice, and education. For example, if Black people are more likely to be arrested in the United States due to historical racism and disparities in policing practices, this will be reflected in the training data.
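This “bias in, bias out” dynamic can be shown with a deliberately simplified sketch. The groups, counts, and naive “risk model” below are entirely hypothetical, invented for illustration: a model that learns only from historically skewed arrest records reproduces the skew in its predictions.

```python
from collections import Counter

# Hypothetical, illustrative training data: arrest records in which group "B"
# is over-policed relative to group "A", even though (by assumption) the
# underlying offence rates of the two groups are identical.
training_records = ["A"] * 100 + ["B"] * 300  # logged arrests per group

counts = Counter(training_records)
total = sum(counts.values())

# A naive "risk model" that simply learns base rates from this data
# reproduces the policing disparity as if it were a fact about behaviour.
predicted_risk = {group: counts[group] / total for group in counts}

print(predicted_risk)  # {'A': 0.25, 'B': 0.75}
```

The model has learned nothing about anyone’s conduct, only about how past enforcement was distributed; yet its output would be presented as an objective “risk score.”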

 

Secondly, training data or algorithms may be unrepresentative or incomplete. The fault here lies with the humans feeding information into machine learning systems, rather than with societal biases: the system’s handlers have neglected to give the computer a complete picture. For example, in Amazon’s Rekognition tool, the biases were likely due to a surplus of lighter-skinned photos and a shortage of darker-skinned photos, leaving the system insufficiently trained.

 

Corporations and Human Rights

 

It is a well-affirmed principle that people are entitled to the same rights online as they are offline, a position endorsed by the Human Rights Council. The state must create a conducive environment for the enjoyment of human rights. Under the United Nations Guiding Principles on Business and Human Rights, companies have a responsibility to respect all human rights, a responsibility that exists independently of a state’s ability or willingness to fulfil its own human rights obligations, and over and above compliance with national laws and regulations. As part of fulfilling this responsibility, companies need a policy commitment to respect human rights and must take ongoing, proactive and reactive steps to ensure that they do not cause or contribute to human rights abuses, a process called human rights due diligence. By lifting its ban on AI for surveillance and military applications, Google is discarding this responsibility. Instead of exercising due diligence, it has chosen to align itself with potentially unchecked state surveillance and the militarization of AI. This decision sets a dangerous precedent.

 

Conclusion


While it would certainly be unfair to completely disregard the benefits that AI surveillance could provide in crime prevention and security, the danger that such surveillance will turn into mass monitoring and weaponization, with enhanced support from Big Tech companies, is real. AI-powered surveillance carries grave ramifications for human rights. Resistance to the removal of such safeguards has been inadequate, and that is itself a cause for concern. The consequences of mass surveillance would not, of course, be felt equally across society: journalists uncovering corruption, activists, and dissidents would be the first to be targeted. But it would be naive to assume that anyone is safe from potential abuse; governments and their priorities change, and so do their targets. Google must re-examine its decision and reflect on why these safeguards were put in place to begin with. The responsibility of protecting human rights cannot be conveniently set aside in the name of security or profit.


 
 
 


ADDRESS 

Centre for Human Rights and Subaltern Studies

National Law University, Delhi 

Sector-14 Dwarka, Delhi - 110078

Please email your queries to chra@nludelhi.ac.in 

© 2023 by Collective for Human Rights Advocacy
