Written Evidence - Premature Procurement

Published on 19 May 2020

We submitted Written Evidence to the UK Parliament Defence Committee's Inquiry on Defence industrial policy: procurement and prosperity.

Executive Summary

In this response we focus particularly on defence systems, and those in adjacent markets, that integrate increasingly capable artificial intelligence (AI), especially systems based on machine learning (ML). Many systems that the Ministry of Defence (MoD) is likely to procure over the next 5-10 years will integrate AI and ML; these systems are likely both to be strategically important and to introduce new vulnerabilities [1][2][3]. These vulnerabilities are likely to pose significant national security risks over the next few decades, both for the UK and for the UK’s allies. Such systems are the focus of much of our work [4][5][6][7], and are where we hope to add our expertise to the Committee’s Inquiry.

From our research and interactions with defence and procurement practitioners, we draw the following conclusions:

  • Militaries worldwide are beginning, and will likely continue, to procure systems that integrate increasingly capable AI and ML to deliver greater speed, capability or other purported defence advantages.
  • However, if these systems are procured and deployed ‘prematurely’ - before they are fully technologically ready, de-risked, safe and secure - they could introduce several new vulnerabilities, including safety, security and systemic risks.
  • The market for these systems is characterised by: leadership by the private sector; dominance at the infrastructure level and in R&D by a handful of multinationals; rapid progress and obsolescence cycles, meaning most systems are novel; and a development environment in which safety and security at the level needed for defence are rarely present.
  • These market characteristics, especially the novelty of the systems and private sector leadership, have contributed to a potential skills gap within the MoD, meaning the ability to assess the risks and readiness of these systems during procurement may not always be present.
  • The narrative of safe and responsible autonomous defence systems focuses on the end-user human operator. This focus on the end user is necessary but not sufficient. Risks need to be mitigated at all stages of a system’s life-cycle, especially procurement.
  • The MoD’s idiosyncratic definition of lethal autonomous weapons systems is holding the UK back from providing global leadership and is creating uncertainty over the UK’s procurement decisions.

Combined, these factors lead to risks and oversights in supply and procurement - specifically, the risk that the MoD will prematurely procure and deploy defence systems that integrate AI and ML, including in ways that affect strategic operations, thereby introducing both known and unknown vulnerabilities.

We therefore make the following recommendations to protect against premature and/or unsafe procurement and deployment of ML-based systems:

  • Improve systemic risk assessment in defence procurement.
  • Ensure clear lines of responsibility, so that senior officials can be held responsible for errors in the procurement chain and are therefore incentivised to reduce them.
  • Acknowledge potential shifts in international standards for autonomous systems, and build flexible procurement standards accordingly.
  • Update the MoD’s definition of lethal autonomous weapons - the Integrated Security, Defence and Foreign Policy Review provides an excellent opportunity to bring the UK in line with its allies.

Read full report