
Global Investigates: Nanny state: AI and CCTV

In 2018, a cybersecurity activist successfully hacked into China’s facial-recognition system, revealing a web of CCTV cameras spanning the country with the ability to recognise an individual instantly by their face. Artificial Intelligence (AI) has been incorporated into almost every aspect of modern life: from facial-recognition passwords on smartphones to adverts based on search history, AI is inescapable. But when this technology is harnessed by a government to monitor its population, it raises serious ethical questions and poses a significant risk to civil liberties.

China currently appears to be leading the way in AI-based population monitoring, though the same technology has made worrying appearances in Europe and America in recent years.

China’s current system, known as City Brain, consists of an estimated 20 million AI-enabled CCTV cameras, forming a high-tech web of surveillance that aims one day to capture every movement of China’s 1.4 billion people. By 2018, the system could distinguish facial expressions, registering whether a person’s eyes and mouth were open or closed, and, most concerningly, it could identify Uighurs by their ethnic features. With further development, it is thought AI-enabled CCTV could soon match an individual to every text they send, every internet search they make, every place they visit, every purchase they make and everybody they associate with. Data of this magnitude could allow a government to predict potential political unrest before it truly forms, posing a significant threat to human rights and political freedoms.

An early version of this system is already in place in Xinjiang, a region in north-western China and the site of prisons holding more than one million Uighur Muslims, the largest internment of a single ethnic-religious minority since the Holocaust. Once the technology has been perfected within the camps, deploying it across China would be an easy step to take.

Uighurs outside the camps are now the most heavily surveilled population on Earth. AI cameras can identify them, separately from other ethnicities within China, solely by their facial features. Monitoring has spread further still, with Uighurs being forced to install monitoring apps on their phones which use AI-powered algorithms to detect any unusual behaviour. Something as small as leaving the house by the back door rather than the front is enough to trigger a warning within the AI system.

Avoiding technology is not an option either, as having no social media activity is itself categorised as suspicious. Police reaction times to surveillance alerts are extremely rapid. In 2017, John Sudworth, a journalist for the BBC, travelled to the south-western city of Guiyang, which then had a population of 3.5 million, to see how long he could evade detection. It took only seven minutes for AI-enabled cameras to detect Sudworth and for police to have him in custody.

Surveillance of citizens is unique neither to the 21st century nor to China, though China is at the forefront of the use of AI. Mao Zedong, the founding leader of the People’s Republic, developed a network of local spies which he used to keep “sharp eyes” on the population and monitor for political dissent. China’s current president, Xi Jinping, has co-opted the term for his network of AI cameras, the eyes of the City Brain operation. The Chinese government has long used major events to discreetly introduce new surveillance measures. In the build-up to the 2008 Beijing Olympics, internet access came under stricter controls than ever before. During the COVID-19 response, the Chinese government used private companies’ databases to access users’ personal data, a measure that may remain in place far beyond the end of the pandemic.

The British human rights organisation Article 19 recently released a report on the development of AI surveillance technologies by 27 companies in China. It found that the technology is being developed with no safety precautions or ethical review, which is particularly concerning when it is built for official bodies such as the police to monitor the population. A growing number of human rights activists believe the use of AI surveillance poses a serious risk to human rights and freedom of expression, particularly political expression. The global AI industry is predicted to be worth almost $36 billion by 2023, growing at nearly 30 percent a year, and as such, calls for strict controls on the industry are becoming more urgent.

Michael Kratsios, former president Donald Trump’s Chief Technology Officer, has said that “if we want to make sure that Western values are baked into the technologies of the future, we need to make sure we’re leading in those technologies.” It cannot be said, however, that the West has remained entirely ethical in its use and development of AI. Several of China’s big AI developers are, at least in part, funded by Silicon Valley venture capital firms.

America’s police forces have also begun to expand their use of surveillance cameras. Heavily criticised in recent years for racial profiling, police have begun to use footage collected from home-security cameras, most commonly doorbell cameras. These are innocent devices when homeowners use them to watch their door for deliveries, but when the footage from these cameras is connected, it forms an extensive network that police can use to monitor an individual’s movements across entire cities. Chinese-style surveillance networks have begun to crop up across the world.

In 2014, a Chinese telecom company sold a surveillance system to the government of Ethiopia, which has since been used in a crackdown on protests. Brazil, Kenya, Ecuador and Great Britain are all known to have purchased video monitoring technology from Chinese companies.

The EU has also significantly increased its interest in AI. In 2019, German MEP Patrick Breyer became aware of a new AI technology which claimed to detect when someone is lying based on their facial expressions. The project, called iBorderCtrl, was being funded by the EU for potential use on European borders. Breyer used EU transparency law to request details from the European Commission concerning the ethics and legality of the technology, which he now refers to as “pseudo-scientific security hocus pocus.” When the request was denied, Breyer sued. The MEP claims: “The European Union is funding illegal technology that violates fundamental rights and is unethical.” The landmark case is expected to reach court in the new year.

The EU funding such projects is not a new phenomenon. Horizon 2020 was the EU’s main research-funding programme, controlling €80 billion worth of grants for scientific research between 2014 and 2020. While the fund is typically associated with medical research, over the past seven years €1.7 billion was granted to security groups researching technology for use by police forces and border control officials. iBorderCtrl received €4.5 million from the Horizon 2020 fund, spending three years developing the programme.

The EU states the use of AI is crucial to countering crime, and that its intended purpose is to enhance the region’s security and allow it to compete with the US and China. However, the pace of development raises concerns that ethical scrutiny has been sidelined in favour of building programmes as rapidly as possible. Very little of the ethical analysis of AI programmes is ever made accessible to the public, which furthers public distrust of them.

All projects funded by Horizon 2020 must be assessed by a team of independent ethicists who can either approve a project or demand further ethical research, though their influence is generally weak. Kristoffer Lidén, a researcher at the Peace Research Institute Oslo, said that the completion of an ethics review is often treated as approval of a project, even when the review expresses grave concerns. Speaking on the process, Lidén said: “[Projects] can easily be co-opted by commercial logic or by general technological optimism where people bracket ethical concerns because they see new technology as a positive development.”

In a 2015 interview, Peter Burgess, a philosopher and political scientist who worked on three Horizon 2020 security projects, expressed his concern about the impact the technology would have on the migrant crisis. Speaking to German TV channel ARD and a reporter from Der Spiegel, Burgess said “Refugees are seen as targets and goals to be registered” by AI security companies. Following the interview, Burgess was removed from his roles on Horizon 2020 ethics boards. The European Commission has denied that critics are removed from their posts, stating to the Guardian: “No request for removing ethics experts participating in the assessments/checks has been received by DG Research and Innovation”.

As the technology continues its rapid development, growing numbers of human rights groups are expressing concern that, in the wrong hands, it could completely corrode civil liberties. Groups such as Article 19 are now calling for a ban on the use of AI in surveillance systems before the technology becomes too widespread to control and ethically monitor effectively.




16/11/2021


Aislinn Wright


