Posted: 13 April 2023 · 5 min read

AI Ethics in the Intelligence Lifecycle – A Perspective on Change

National security measures have historically been built on ground-breaking technologies. As Artificial Intelligence (AI) is designed, developed, and deployed to help organisations optimise their processes and resolve inefficiencies, many governments have published national innovation strategies that aim to leverage AI capabilities to enhance the security of their critical national infrastructure (CNI).

As this technology is applied across the security space, and to the intelligence lifecycle in particular, AI promises to inject greater speed and decisiveness into the security decision-making process.

However, AI introduces a plethora of new risks that, if underestimated, can have substantial consequences for national security and society. While national and international laws set out several principles that national security agencies must adhere to, there are a number of additional considerations, inherent to the technology being implemented, that ought to be taken into account when designing and deploying AI systems.

Therefore, to reap the benefits of these new trends, a re-envisioning of the risk management landscape towards an AI governance model that is able to embrace security, privacy, and ethics is required.

AI adoption for Intelligence activities

AI can be a key strategic advantage for the Intelligence Community, as it can help maximise the effectiveness and efficiency of operations across the collection and processing of large volumes of data for detection and prevention purposes. Not surprisingly, therefore, there has been an emergence of both national and industry-specific AI strategies emphasising the importance of integrating AI into everyday society, while calling for the founding values upon which Western liberal democracies are built to remain at the core of AI development, implementation, and broader outputs. Now more than ever, the development of secure, ethical, and privacy-preserving AI is at the centre of attention, with many newly established international partnerships, including the Global Partnership on Artificial Intelligence and the Atlantic Charter, advocating for a safer digital ecosystem.

The Intelligence Community is characterised by three key features: 1) it needs to be one of the largest consumers and producers of data in order to ensure the security of its territory and people; 2) it operates in a complex environment combining large numbers of people, physical assets, and technology; and 3) its budget faces public scrutiny and deliberation. As summarised by Blanchard and Taddeo, the intelligence cycle comprises the following steps:

  1. Direction: Referring to the definition of strategic priorities, scope, and approaches;
  2. Collection: Referring to the methods, sources, and requirements for collecting the data;
  3. Processing and Exploitation: Referring to the curation and organisation of the data;
  4. Analysis: Referring to the process of deriving value from the various data points collected, measured against the key strategic priorities;
  5. Dissemination: Referring to the labelling of the priority rating, based on urgency and level of threat;
  6. Feedback: Referring to the process of updating the direction, based on what has been found.
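For readers who prefer to see structure as code, the cyclical nature of the steps above, with feedback looping back into direction, can be expressed as a minimal illustrative sketch (the stage names below are simply labels for the list above, not drawn from any operational system):

```python
from enum import Enum, auto

class Stage(Enum):
    DIRECTION = auto()
    COLLECTION = auto()
    PROCESSING_AND_EXPLOITATION = auto()
    ANALYSIS = auto()
    DISSEMINATION = auto()
    FEEDBACK = auto()

# Definition order is preserved, so this list encodes the cycle's sequence.
ORDER = list(Stage)

def next_stage(stage: Stage) -> Stage:
    """Advance one step; FEEDBACK wraps back to DIRECTION, making this a cycle rather than a pipeline."""
    return ORDER[(ORDER.index(stage) + 1) % len(ORDER)]

print(next_stage(Stage.FEEDBACK))  # wraps around to Stage.DIRECTION
```

The wrap-around in `next_stage` is the point of the sketch: feedback is not a terminal step but an input to the next round of direction-setting.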

As highlighted by the Deloitte Center for Government Insights, the advent of new technologies, including unmanned aerial systems, remote sensors, and advanced reconnaissance aircraft, amongst many others, has further increased the production of data that intelligence analysts are required to process and analyse in order to extract value.

The dividend promised by AI lies in processing large volumes of data more quickly and more effectively, delivering greater speed and decisiveness in the decision-making process. The work of Babuta, Oswald and Janjeva maps out the areas where AI can aid intelligence analysis, in what is commonly known as Augmented Intelligence Analysis.

The concept of Augmented Intelligence Analysis refers to the use of AI ‘to enhance human intelligence rather than operate independently of or outright replace it’, and its key components include: cognitive automation; filtering, flagging, and triage; and behavioural analytics.

By adopting cognitive automation and filtering, flagging, and triage to support the processing and exploitation step, intelligence analysts can benefit from the identification of speech patterns, faster classification and categorisation of information, transcription of audio, including from foreign languages, and tailored summaries based on established keywords and prioritisation mechanisms. Similarly, behavioural analytics extrapolates patterns from datasets based on the nature and volume of interactions, meaning that intelligence analysts can identify trends and probable cases more quickly.
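To make the filtering, flagging, and triage idea concrete, here is a deliberately simplified sketch of keyword-weighted triage. The keywords, weights, and items are entirely hypothetical; real systems would use far richer models, but the principle — score, filter, and rank so that human analysts see the highest-priority items first — is the same:

```python
from dataclasses import dataclass

# Hypothetical priority keywords and weights an analyst team might configure.
PRIORITY_KEYWORDS = {"shipment": 2, "meeting": 1, "transfer": 3}

@dataclass
class Item:
    text: str
    score: int = 0

def triage(items):
    """Flag items matching priority keywords and rank them for human review."""
    for item in items:
        item.score = sum(
            weight
            for keyword, weight in PRIORITY_KEYWORDS.items()
            if keyword in item.text.lower()
        )
    # Unflagged items are filtered out; the rest surface highest-score first.
    return sorted(
        (i for i in items if i.score > 0),
        key=lambda i: i.score,
        reverse=True,
    )

queue = triage([
    Item("Routine weather report"),
    Item("Funds transfer before the meeting"),
    Item("Shipment arrives Tuesday"),
])
print([i.text for i in queue])
```

Note that the machine only reorders the analyst's queue; the judgement about what the flagged items mean remains with the human — which is the essence of augmentation rather than replacement.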

What can go wrong? Highlighting the ethical dimensions

When trying to articulate the ideal scenario where almost real-time decision-making can occur, one must wonder whether the existing ethical concerns surrounding traditional practices enacted by the intelligence community can be further exacerbated by the inherent risks that AI introduces.

One of the long-standing issues surrounding the intelligence community, which took centre stage after the Snowden revelations in 2013, is the risk to privacy. While there are competing interpretations of what qualifies as intrusion under the national security umbrella, the fundamental principle of data minimisation faces challenges, as ‘AI does not work well when tackling ambiguous, broad challenges particularly if there is inadequate data on which it can train and learn’ (GCHQ, 2021), meaning that more data may be required to enable the full effectiveness of the AI artefact being implemented. Cascading from this, broader questions on the proportionality of, and legal grounds for, collecting and processing data arise, raising concerns about the impact this may have on the liberties of individuals.

Furthermore, the computational power of AI enables the phenomenon of inferential data. This refers to the ability to deduce something personal, if not sensitive, about an individual, despite 1) the individual not necessarily having provided the personal data, and 2) the data not necessarily constituting personal data under applicable data protection regulations. Nonetheless, what is inferred can become a valuable data point upon which decisions affecting the individual are made. While this presents a challenge in complying with applicable data protection regulations, it also represents a risk for the analysis and dissemination steps in the intelligence cycle insofar as it may undermine the accuracy of the analysis and the prediction models being facilitated by AI.
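The mechanics of inferential data can be illustrated with a toy rule: several individually innocuous, non-personal signals jointly imply an attribute the individual never disclosed. The signals and the threshold rule below are invented for illustration only:

```python
# Each record holds only individually innocuous, non-personal data points
# (hypothetical observational signals, not real personal data).
observations = {
    "user_42": {
        "visits_pharmacy_daily": True,
        "buys_unscented_lotion": True,
        "searches_nursery_furniture": True,
    },
    "user_43": {
        "visits_pharmacy_daily": True,
        "buys_unscented_lotion": False,
        "searches_nursery_furniture": False,
    },
}

def infer_sensitive(signals: dict) -> bool:
    """Toy inference rule: enough co-occurring signals imply a sensitive
    attribute that was never provided by the individual."""
    return sum(signals.values()) >= 3

for user, signals in observations.items():
    if infer_sensitive(signals):
        print(f"{user}: sensitive attribute inferred from non-personal data")
```

The inference is probabilistic and can be wrong — which is precisely the article's point about accuracy risks in the analysis and dissemination steps: a derived data point carries less certainty than a collected one, yet may be treated with equal weight.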

Accordingly, if trust in AI is to be established by both intelligence analysts and society, data quality and accuracy become central to AI implementation. While critical security studies have long challenged some of the discourse around the securitisation of individual threats, the inherent ability of AI to introduce bias carries a far more serious connotation. Bias, meaning the unintended and potentially harmful skewing of algorithmic predictions, plays a key role in the topic of algorithmic fairness and, similarly, in adherence to the fundamental principles of Western liberal democracies. Bias can enter the AI product lifecycle in a multitude of ways: historical bias; representation bias (at the point of data collection); measurement bias (when labelling the data); aggregation bias (when combining different data points); evaluation bias (when selecting the metrics and objectives that carry the training data into the trained model); and deployment bias (when models are injected into real-world socio-technical systems that behave in unpredicted ways).
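A small synthetic example shows how representation bias at the point of collection hardens into prediction bias. Here two groups share the same true risk, but one is over-collected in the historical record; a naive model that learns per-group flag rates faithfully reproduces the collection skew (all data below is synthetic and illustrative):

```python
import random

random.seed(0)

def make_records(group: str, n: int, collected_flag_rate: float):
    """Synthetic 'historical' records. The flag rate here reflects how often
    each group was flagged in past collection, not its true risk."""
    return [
        {"group": group, "flagged": random.random() < collected_flag_rate}
        for _ in range(n)
    ]

# Both groups have the same hypothetical true base rate of 10%, but group B
# was over-collected and over-flagged historically (30% in the record).
history = make_records("A", 1000, 0.10) + make_records("B", 1000, 0.30)

def learned_rate(records, group: str) -> float:
    """A naive 'model' that simply learns each group's historical flag rate."""
    matching = [r for r in records if r["group"] == group]
    return sum(r["flagged"] for r in matching) / len(matching)

print(f"A: {learned_rate(history, 'A'):.2f}, B: {learned_rate(history, 'B'):.2f}")
```

The skew in the output exists purely because of how the data was gathered; no amount of model tuning downstream removes a distortion introduced at the collection step, which is why the article treats bias as a lifecycle problem rather than a modelling problem.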

Understanding these risks is of paramount importance for the intelligence community. As stated by former US Director of Central Intelligence Stansfield Turner, "there is one overall test of the ethics of human intelligence activities. That is whether those approving them feel they could defend their actions before the public if the actions became public". With the convergence of AI to support human intelligence activities, the principle of explainability, meaning the ability of decision-makers to explain to individuals in a meaningful way how decisions are being made, becomes a core attribute of the principle of transparency, which ultimately lies at the core of democratic accountability.

Understanding the pitfalls to unleash the full potential 

As many of the risks analysed in the context of AI and the intelligence community are novel and call for further exploration, systematising AI allows us to envision a new approach to risk management, in which established practices can evolve to address and mitigate these challenges. Advocating for an AI governance model means embracing the fact that AI does not live in isolation, and that it entangles domains of risk management that have historically been managed separately.

The experience of translating principles and rights into tangible processes and procedures for entities to adopt remains valid in the context of AI for the intelligence community, though it requires a new structure and alignment. Our perspective on this change is to adopt a three-pillar framework that is able to: 

  1. Identify the purposes and impacts of the AI system against the objectives and values that the entity stands for. This, in turn, means being able to define the data that is strictly necessary to achieve the set objectives, while gauging the (un)intended consequences that such purposes and objectives may have on the impacted individuals.
  2. Adopt a just and fair approach to the technology and the data within it. This means ensuring that security standards are met in the design, configuration, and development of the technology, while aligning with additional intersecting regulations, whether these originate from applicable data protection regulations, human rights law, just war principles, or anti-discrimination laws.
  3. Maintain human-centred values. Bring the attention back to the individuals the technology aims to serve, whether these are the intelligence analysts having to trust the machine or the society the intelligence community is striving to protect. This, in turn, means realising the values of transparency and explainability, not simply in compliance with applicable regulations, but in alignment with people's expectations.

While many assert that ethics starts where the law ends, the Intelligence Community has the opportunity to innovate its operations while proactively bringing digital ethics to the forefront of its responsible agenda. Ultimately, as change occurs rapidly and effective decision-making remains a core priority for the industry, a new perspective is required in order to seize the opportunity of building a safer, value-driven digital ecosystem.


Key Contacts

Stephen Wray

Partner

Stephen is a Partner in Deloitte’s Cyber business, leading client relationships in the Government and Public Services arena. He is a specialist in cyber research and innovation, enterprise security architecture and cyber culture trends and strategy. Stephen has over 20 years’ experience of founding and scaling cyber and technology businesses, including 6 years as Commercial Director of The Centre for Secure IT (CSIT) at Queen’s University Belfast. He is also currently a non-exec director at Catalyst and sits on the Exec board of LORCA - the London Cyber Innovation Centre. Stephen’s key skills are a depth of understanding in Enterprise Security Architecture married with an innovative approach to value creation in both new businesses and established enterprises. Stephen helps organisations create a cyber-minded culture, reimagine risk and uncover strategic opportunities to become faster, more innovative, and more resilient in the face of ever-changing threats.

Lucia Lucchini

Senior Manager

Lucia is a Senior Manager in our Cyber, Data and Digital practice within Deloitte Risk Advisory. Her experience ranges from privacy and data protection to the intersection between privacy and ethics in new technologies, specifically AI. Lucia focuses on the changing regulatory landscape surrounding new technologies, with particular attention to AI governance and policy. Lucia is also part of the Research & Innovation team, specialising in conducting cyber-related research.

Liz Huth

Consultant

Liz is a consultant in the Cyber Strategy practice, with a background in international policy. Her interests lie in the future of regulation in the emerging technology space, and she has spent time as a researcher focusing on the future of AI frameworks and regulation, ethics and smart cities, and digital authoritarianism.