Surveillance and Predictive Policing Through AI

Cities are leveraging artificial intelligence (AI) to improve safety and security for their citizens while striving to safeguard privacy and fundamental human rights.

Surveillance and predictive policing through AI is the most controversial trend in this report but one that has important implications for the future of cities and societies.

Technology is frequently used as a synonym for progress, but the ethics of its use may need to be questioned. An underlying question is what kind of society we are aiming to build. There are doubts and uncertainties about the impact of AI on communities and cities: the most fundamental concern is privacy, but AI is also frequently debated from other perspectives, such as its impact on jobs, the economy and the future of work. One therefore cannot disconnect discussions about surveillance and predictive policing from wider debates about their societal, ethical and even geopolitical dimensions.

The pace of adoption of AI for security purposes has increased in recent years. AI has recently helped create and deliver innovative police services, connect police forces to citizens, build trust, and strengthen ties with communities. There is growing use of smart solutions such as biometrics, facial recognition, smart cameras, and video surveillance systems. A recent study found that smart technologies such as AI could help cities reduce crime by 30 to 40 per cent and reduce response times for emergency services by 20 to 35 per cent.1 The same study found that cities have started to invest in real-time crime mapping, crowd management and gunshot detection. Cities are making use of facial recognition and biometrics (84 per cent), in-car and body cameras for police (55 per cent), drones and aerial surveillance (46 per cent), and crowdsourced crime reporting and emergency apps (39 per cent) to ensure public safety. However, only 8 per cent use data-driven policing.2 The AI Global Surveillance (AIGS) Index 2019 states that 56 of 176 countries use AI-enabled safe city platforms for surveillance, although with differing approaches.3 The International Data Corporation (IDC) has predicted that by 2022, 40 per cent of police agencies will use digital tools, such as live video streaming and shared workflows, to support community safety and an alternative response framework.4

Surveillance is not new, but cities are exploring the capability of predicting crime by analysing surveillance data, in order to improve security. Cities already capture images for surveillance purposes, but with AI those images can now be analysed and acted on much more quickly.5 Machine learning and big data analysis make it possible to navigate through huge amounts of data on crime and terrorism, to identify patterns, correlations and trends. When the right relationships are in place, technology is the layer that supports law enforcement agencies in doing their job better and in triggering behaviour change. The ultimate goal is to create agile security systems that can detect crime or terrorism networks and suspicious activity, and even contribute to the effectiveness of justice systems.
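
As a minimal sketch of the kind of pattern detection described above, the Python snippet below clusters historical incident coordinates into spatial "hotspots". The data, coordinates and clustering parameters are hypothetical and chosen purely for illustration; real systems combine far richer data sources and domain-specific models.

```python
# Minimal sketch: clustering historical incident locations into "hotspots".
# All data and parameters here are illustrative, not from any real deployment.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical incident coordinates (latitude, longitude) from past reports.
incidents = np.array([
    [49.282, -123.120], [49.283, -123.121], [49.281, -123.119],  # downtown cluster
    [49.263, -123.138], [49.264, -123.139],                      # second cluster
    [49.300, -123.020],                                          # isolated incident
])

# DBSCAN groups nearby incidents; eps is in degrees (roughly 200 m here) and is
# a guess that would need tuning against real data.
clusters = DBSCAN(eps=0.002, min_samples=2).fit_predict(incidents)

for label in sorted(set(clusters)):
    points = incidents[clusters == label]
    if label == -1:
        print(f"{len(points)} isolated incident(s), not part of any hotspot")
    else:
        centre = points.mean(axis=0)
        print(f"Hotspot {label}: {len(points)} incidents around {centre.round(3)}")
```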

Cities are also exploring other uses of surveillance and artificial intelligence technologies. AI is being used for urban tolling and emission zones to reduce air pollution for sustainability purposes. Another emerging area of application is the prevention of future health crises. Paris uses AI to monitor the metro system to ensure passengers are wearing face masks. The aim is not to identify and punish rule-breakers but to generate anonymous data that helps authorities to anticipate future outbreaks of infection.6

How to achieve these goals while respecting privacy and liberties remains a crucial question.

Experts say it is almost impossible to design broadly adopted ethical AI systems because of the enormous complexity of the diverse contexts they need to encompass. Any advances in AI for surveillance and predictive policing need to be accompanied by discussions about ethical and regulatory issues. Even though the value proposition of these technologies might seem attractive from a use case perspective, liberties and civil rights need to be protected by proper privacy and human rights regulations.

Although predictive policing is a controversial issue in Western countries (and some US cities have banned it), it is being deployed widely in Asia. A survey by Deloitte has shown considerable differences between regions in the acceptance and desirability of these technologies. Both surveillance and predictive policing are considered undesirable in more privacy-aware geographies such as the EU and North America, whereas Latin America and Asia have shown greater acceptance.

In summary, cities need to consider whether using technology for surveillance and policing means trading freedom for convenience.

“There is a lot of mistrust between communities and the police, and what we have seen again and again is that traditionally marginalised low-income communities are less likely to call for help. Introducing technology like gunshot detection empowers your police officers and law enforcement agencies to respond and help the community.”

Jeff Merritt, Head of IoT and Urban Transformation at The World Economic Forum

Why are AI-enabled surveillance and predictive policing relevant for cities and their citizens?

Prevention and reduction of crime, to make cities more secure: Predictive policing aims to prevent crime, and can be the most efficient and effective way of keeping communities safe and increasing their trust and confidence in police services while reducing response times.7 For example, the police in Vancouver use predictive models to identify areas where robberies are expected to occur and then post officers to deter potential thieves or other criminals.

Support for the police force and other entities beyond crime detection: Police departments in many cities are short-staffed, making it harder to ensure public safety at all times and in all places. As the size of the population increases, it will be difficult for police forces to provide effective security and monitor crowds. Law and order agencies across the world can make use of AI to ensure proper coverage of all areas with only a limited increase in the need for resources.8 9 Artificial intelligence can also help local governments identify behaviours that impact environmental sustainability or public health.

AI can improve the seamless interconnection between municipal bodies: Cities whose security systems are spread across police departments, firefighters and other agencies or security entities may benefit from AI that can detect complex patterns and connections between events in different departments, leading to improved incident response times.10

Safeguarding the lives of police and law enforcers: Technologies such as video surveillance and robotic security devices can be used for identifying and preventing potential threats. They can prevent crimes from happening and avoid putting the safety and lives of police officers at risk.11

Reduce health costs and other security expenses: Cities can reduce the health service costs of treating victims of crime. This may result in lower insurance premiums in high-risk cities, and in some cases, reduce the need for private security services.

How to ensure a successful implementation?

Balance security interests with the protection of civil liberties, including privacy and freedom: The use of AI for policing does not have unanimous support because some people see it as a threat to individual privacy and liberty. For example, in 2020 the Californian city of Santa Cruz banned the use of predictive policing tools, and New York City has mandated that police disclose their use of surveillance tools. Local authorities must therefore apply strict criteria to the use and retention of personal data and be transparent about data usage.12 Moreover, cities should consult with stakeholders and discuss the benefits and motives for deploying AI and surveillance technologies.

Experiment responsibly and regulate first: Any experimentation with surveillance and AI technologies needs to be accompanied by proper regulation to protect privacy and civil liberties. Policymakers and security forces need to introduce regulations and accountability mechanisms that create a trusted environment for experimentation with new applications. Regulation can help mitigate the risks of potential disruptions that AI innovations might cause.13

Establish institutional review boards: These should carry out reviews of any applications that imply the use of personal data. They should include experts from multiple disciplines such as ethics, privacy and technology, policymakers, civil servants, and community representatives. Ethical considerations need to address three critical areas: fairness, equity and inclusion.

Create mechanisms for algorithms to be accountable and reviewable: With the advent of explainable artificial intelligence, it is increasingly possible to deploy algorithms that can be held accountable and subjected to review. Explainable AI makes it possible to monitor issues such as bias and the fairness of algorithms. Cities should use white-box algorithms that respect the principles of transparency, interpretability, and explainability. For example, according to the US Department of Justice, black people are five times more likely to be arrested than white people; algorithms trained on such arrest data risk reproducing that racial bias, even though a key reason for using AI to detect crime is to avoid human prejudice. Bias may also be built in (albeit unconsciously) by the individuals who write the algorithms.14 15 16
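
As an illustration of the kind of check a review mechanism might run, the sketch below computes a simple disparate-impact ratio over a hypothetical set of algorithmic decisions. The groups, data and the four-fifths threshold are assumptions for the example; a real audit would involve multiple fairness metrics and qualitative review.

```python
# Minimal sketch of a fairness check an institutional review board might run.
# The data, group labels and 0.8 threshold ("four-fifths rule") are illustrative.
from collections import Counter

# Hypothetical algorithm outputs: (group, was_flagged_by_the_model)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

flagged = Counter(g for g, hit in decisions if hit)
totals = Counter(g for g, _ in decisions)
rates = {g: flagged[g] / totals[g] for g in totals}

# Disparate-impact ratio: lowest selection rate divided by highest.
ratio = min(rates.values()) / max(rates.values())
print("Selection rates:", rates)
print(f"Disparate-impact ratio: {ratio:.2f} "
      f"({'review needed' if ratio < 0.8 else 'within the illustrative threshold'})")
```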

Prioritise the use of environmental data instead of personal data: The use of personal data erodes trust and can put privacy and civil rights at risk. Whenever possible, cities should use anonymous, aggregated, non-identifiable data to obtain insights. IoT and sensors make it easier to collect and analyse environmental data to predict events; this limits the need for personal data.
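
A minimal sketch of this principle, assuming a hypothetical stream of sensor events: identifiers are discarded at ingestion, and only aggregate counts per zone and hour are retained.

```python
# Minimal sketch: aggregating sensor events by zone and hour so that only
# anonymous, non-identifiable totals are retained. All records are hypothetical.
from collections import defaultdict

# Raw events as they might arrive from IoT sensors: (zone, hour, device_id).
raw_events = [
    ("station_plaza", 22, "cam-17"), ("station_plaza", 22, "cam-17"),
    ("station_plaza", 23, "cam-09"), ("riverside_park", 22, "mic-04"),
]

# Keep only zone and hour; device-level and person-level detail is dropped at ingestion.
aggregated = defaultdict(int)
for zone, hour, _device in raw_events:
    aggregated[(zone, hour)] += 1

for (zone, hour), count in sorted(aggregated.items()):
    print(f"{zone} @ {hour:02d}:00 -> {count} detections")
```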

Promote strong collaboration and trust between law enforcement systems and citizens: Trust is a key requirement for the application of AI for security and policing. To get the most out of technology, there must be community engagement.

Accompany the digitalisation with a change in culture: To benefit fully from using AI, the police, justice systems and city governments must adapt their organisational culture to accommodate such a substantial shift.17

“Whether it is about sensors, CCTVs, digital contact tracing, it is very important for us to be sensitive to how people feel about the data collection and data use, and we must communicate and be very clear about what we are doing.”

Kok Yam Tan, Deputy Secretary of Smart Nation and Digital Government Office, Singapore

Where to see this in action?

Singapore

At the forefront of adopting technology with the aim of creating a secure environment, Singapore is already leveraging AI across police, border security and homeland security applications. It has implemented measures for using smart technologies such as sensors, data analytics and AI to make lives safer.

For example, the Singapore Civil Defence Force (SCDF) has deployed UAVs (unmanned aerial vehicles) for monitoring activities such as fire tracking, surveillance, and search and rescue missions. The Ministry of Home Affairs uses UAVs to conduct aerial surveillance for monitoring crowds during mass public events, such as on New Year’s Eve. The Immigration & Checkpoints Authority introduced iris scanning in July 2018, enhancing the pre-existing network of cameras with facial recognition capabilities.

Since December 2016, drones have helped police to catch criminals conducting illegal activities in forested areas; they have also cut building-inspection times by 80 per cent and costs by 60 per cent compared with traditional methods.18 Drones have further helped monitor major pipelines and traffic congestion at checkpoints. In 2018, police officers caught 125 illegal immigrants through a night-time drone operation.

The Singapore Police Force (SPF) intends to use wearable technology such as smart glasses with video feeds to facilitate information gathering. These glasses are expected to carry out real-time video analytics such as facial recognition.

To deal with COVID-19 related challenges, the government used AI in initiatives such as:

  • VigilantGantry, which automatically screens the temperatures of individuals with a video camera and thermal scanner. According to Singapore Government Developer Portal, “It augments existing thermal scanners to improve the rate of contactless scanning, ease bottlenecks in long queues outside buildings and reduce manpower required for temperature screening measures.”19
  • SPOTON, a mass temperature screening solution for venues with little infrastructure support. It “can screen up to ten people at once, with a temperature indicator for each face and automated alarms and email alerts when high temperatures are detected.”20
  • A robot named Spot, which patrols public parks and reminds people to observe a social distance of at least one metre.
  • A network analysis tool used by the Singapore Armed Forces to support contact tracing.21 22 23

As part of the National Artificial Intelligence Strategy, which covers five national projects (transport and logistics, smart cities and estates, healthcare, education, and safety and security), the government has a vision: “By 2030, Singapore will be a leader in developing and deploying scalable, impactful AI solutions, in key sectors of high value and relevance to our citizens and businesses.”24 Two projects for improving security with AI are:

  1. Computer vision drowning detection system (CVDDS) at 27 swimming pools,25 to be implemented in 2021
  2. Border clearance operations: Singapore will explore the use of AI to assist immigration officers in evaluating the risk profile of travellers before they arrive at checkpoints, and to tier the level of security screening accordingly. This will be done through technologies such as machine learning, computer vision, cognitive systems, and explainable AI.26

Kanagawa, Japan

In preparation for the Tokyo Olympics, the Kanagawa Prefectural Police moved to introduce AI-enabled predictive policing.

The AI system can determine whether multiple crimes were committed by the same person by comparing data relating to each crime. Using this information, it predicts the offender's likely next move.

The system relies on a 'deep learning' algorithm that continues to learn in real time as it collects more data. It has full access to police force statistics as well as other details of each crime, such as time, place, weather and geographical conditions.
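
A highly simplified sketch of the crime-linkage idea described above: score how similar two incident records are across a few shared attributes. The attributes, weights and threshold are invented for illustration and do not reflect the Kanagawa system's actual design.

```python
# Simplified illustration of crime linkage: score how alike two incident
# records are. Attributes, weights and threshold are invented, not Kanagawa's.

def linkage_score(a: dict, b: dict, weights: dict) -> float:
    """Weighted share of matching attributes between two incident records."""
    matched = sum(w for key, w in weights.items() if a.get(key) == b.get(key))
    return matched / sum(weights.values())

weights = {"modus_operandi": 0.4, "time_band": 0.2, "district": 0.2, "weather": 0.2}

incident_1 = {"modus_operandi": "window entry", "time_band": "night",
              "district": "yokohama-west", "weather": "rain"}
incident_2 = {"modus_operandi": "window entry", "time_band": "night",
              "district": "yokohama-west", "weather": "clear"}

score = linkage_score(incident_1, incident_2, weights)
print(f"Linkage score: {score:.2f}",
      "-> possible same offender" if score >= 0.7 else "-> treat as unrelated")
```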

There is also a plan to permit AI access to social media in order to identify specific areas or people who may be involved in a crime. The Kanagawa police began studying the feasibility of such a system in 2018 and carried out joint research with the private sector before putting it in place. In addition, the National Police Agency has set up a panel to advise on how the police should make use of AI.27 28 29

Rio de Janeiro, Brazil

With high murder rates and a widespread feeling of insecurity (in 2015, 81 per cent of Brazilians feared they could become a victim of homicide), the city implemented measures to predict crime more successfully and lower the crime rate.

In 2016, the Igarapé Institute (a think tank) partnered with Via Science (a data analytics firm) to develop the CrimeRadar app, a crime prediction platform that assessed the frequency of crime across locations and times in the metropolitan region.

The platform runs on smartphones and desktop browsers. The software uses advanced data analytics to monitor crime rates and potential risks across the municipality.

CrimeRadar provides a visual representation of safety levels at specific locations and times. It also makes crime data more accessible and transparent, thus improving security for citizens. The platform has helped to reduce crime in the region by 30-40 per cent.30 31 32
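
As a rough illustration of the general approach of scoring relative risk by location and time from historical incident counts, the toy sketch below uses invented data and does not reflect CrimeRadar's actual data or methodology.

```python
# Toy illustration of a location-and-time risk score built from historical
# incident counts, in the spirit of a crime-mapping platform. Data is invented.
from collections import Counter

# Historical incidents as (neighbourhood, hour-of-day) pairs.
history = [
    ("centro", 22), ("centro", 23), ("centro", 22), ("copacabana", 14),
    ("copacabana", 22), ("tijuca", 9),
]

counts = Counter(history)
peak = max(counts.values())

def relative_risk(neighbourhood: str, hour: int) -> float:
    """Incident count for this slot, scaled to 0-1 against the busiest slot."""
    return counts[(neighbourhood, hour)] / peak

for slot in [("centro", 22), ("copacabana", 14), ("tijuca", 3)]:
    print(f"{slot[0]} at {slot[1]:02d}:00 -> relative risk {relative_risk(*slot):.2f}")
```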

  1. Deloitte: Emerging tech that can make smart cities safer. (2018)
  2. ESI ThoughtLab: Smart City solutions for a riskier world. (2021)
  3. Carnegie Endowment for International Peace: The Global Expansion of AI Surveillance. (2019)
  4. IDC FutureScape: Worldwide Smart Cities and Communities 2021 Predictions
  5. MicroScope: AI: The smart side of surveillance. (2019)
  6. Security Boulevard: AI Surveillance in a post-pandemic world. (2020)
  7. United Nations University – Centre for Policy Research: AI & Global Governance: Turning the Tide on Crime with Predictive Policing. (2019)
  8. Forbes: Leveraging AI And IoT For Citizen Security In Smart Cities. (2019)
  9. Deloitte: The Age of With – The AI advantage in defense and security. (2019)
  10. Indigo Vision: Integrate Artificial Intelligence with your city surveillance to enhance incident response. (2020)
  11. Forbes: Leveraging AI And IoT For Citizen Security In Smart Cities. (2019)
  12. Ibid.
  13. Wall Street Journal: California City Bans Predictive Policing. (2020)
  14. The Black Box in Medium: Artificial Intelligence Predictive Policing: Efficient, or Unfair? (2020)
  15. The New York Times: U.N. Panel: Technology in Policing Can Reinforce Racial Bias. (2020)
  16. MIT Technology Review: Predictive policing algorithms are racist. They need to be dismantled. (2020)
  17. Forbes: Leveraging AI And IoT For Citizen Security In Smart Cities. (2019)
  18. GovTech Singapore: Drones that keep Singapore going. (2020)
  19. Singapore Government Developer Portal: VigilantGantry – Access Control with Artificial Intelligence (AI) and Video Analytics. (2021)
  20. Singapore Government Developer Portal: SPOTON – A Smart Thermal Scanner for Crowd Temperature Screening. (2021)
  21. GovTech Singapore: Big push for AI proves fruitful and useful. (2020)
  22. Reuters: Singapore to test facial recognition on lampposts, stoking privacy fears. (2018)
  23. Civil Service College Singapore: Artificial Intelligence: Impact on Public Safety and Security. (2019)
  24. Smart Nation and Digital Government Office: Smart Nation Singapore National AI Strategy. (2019)
  25. Smart Nation and Digital Government Office: Transforming Singapore. (2015)
  26. Smart Nation and Digital Government Office: Smart Nation Singapore National AI Strategy. (2019)
  27. Interesting Engineering: Japan Set to Launch AI System to Predict Crime. (2018)
  28. Engineering 360: Japan to Launch AI System to Predict Crime. (2018)
  29. Kyodo News: Kanagawa police eye AI-assisted predictive policing before Olympics. (2018)
  30. World Economic Forum: What happens when we can predict crimes before they happen? (2017)
  31. WIRED: CrimeRadar is using machine learning to predict crime in Rio. (2016)
  32. Bloomberg City Lab: Mapping 'Pre-Crime' in Rio. (2016)

You may access the links to these sources, where available, on page 148 of the Urban Future with a Purpose study.

Get in Touch

Jean Barroca

Global Public Sector Digital Modernization Leader

jbarroca@deloitte.pt

+351 210 422 532

Miguel Eiras Antunes

Global Smart City, Smart Nation & Local Government Leader

meantunes@deloitte.pt

+351 210 427 542