
Monday, July 24, 2023

Is it worth recruiting AI as a spy?

Sir Richard Moore, the head of UK intelligence agency MI6, argued in a speech this week that whatever the benefits of artificial intelligence, nothing will replace the “human factor” of espionage — the unique bond of trust between agents overseas and the officers who work with them. On the other hand, AI has huge potential to boost spies’ operations as security threats proliferate. In a world where information is ever more contested, what might it mean to have an augmented — or even artificial — intelligence system at the heart of national security?

At its core, AI does pattern recognition: it can learn and reproduce patterns at incomprehensible speed, dependent always on access to trustworthy training data. AI can help an analyst process information, flag trends and, ultimately, assist with decision making. GCHQ, which has led the UK intelligence agencies’ work in this area, has revealed that AI supports human investigators by sifting seized and intercepted imagery, messages and chains of contact to help pinpoint offenders. It could also be used to search out “hidden people and illegal services” on the dark web.
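
As an illustration of what “flagging trends” can mean in practice, here is a minimal sketch in Python of a frequency-spike detector: it compares today’s term counts against a historical baseline and surfaces anomalies for a human analyst to review. The terms, counts and threshold are all invented for illustration; GCHQ has not described its methods at this level of detail.

```python
# Minimal sketch: flag terms whose frequency has spiked against a
# historical baseline, surfacing them for a human analyst to review.
# All data and the 2x threshold are invented assumptions.
from collections import Counter

baseline = Counter({"meeting": 40, "transfer": 35, "package": 30})
today = Counter({"meeting": 42, "transfer": 36, "package": 95})

def flag_spikes(baseline, today, ratio=2.0):
    """Return (term, baseline count, today count) triples for unusual spikes."""
    flags = []
    for term, count in today.items():
        base = baseline.get(term, 1)  # avoid division by zero for new terms
        if count / base >= ratio:
            flags.append((term, base, count))
    return flags

for term, base, count in flag_spikes(baseline, today):
    print(f"review: '{term}' rose from {base} to {count} mentions")
```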

Intelligence agencies — whether their focus is on recruiting human sources or intercepting communications — exist to make sense of complexity: to interpret events from incomplete jigsaws of piecemeal data, and to provide insights that help governments plan their foreign policy and respond to crises. In a job that so often involves seeking the needle in a haystack of information, AI will inevitably have a role to play.

However, there are significant hurdles. Western intelligence agencies, such as those of the Five Eyes alliance (the US, UK, Canada, Australia and New Zealand), place great value on their reputation as trusted entities, working in a way that is legal, ethical, proportionate and responsible. The challenge will be maintaining that reputation for trust while adapting to a new digital capability.

AI is also less reliable in complex and changing scenarios. It has been hard enough to develop and deploy safely and securely in a controlled environment such as a hospital. Using it to look for something that might never have been seen before, in contested environments where adversaries (such as terrorists or hostile states) are actively seeking to obscure your view, is still a work in progress.

AI is very capable of dealing with the mean position: the “what most people would do” variety of analysis. It deals badly with outliers. But outliers are the very focus of the intelligence community, which is looking for the extremist rather than the average citizen. AI is superb at extrapolating from the known, but struggles with the unknown, or the never seen before.
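
A toy sketch of that failure mode, under invented assumptions: a classifier trained on two known behaviour patterns has no concept of “neither”, so a genuinely novel point is confidently assigned to the nearest known class.

```python
# Illustrative only: synthetic data, a deliberately simple
# nearest-centroid classifier, and a novel point unlike anything
# seen in training. The model has no "unknown" option to choose.
import numpy as np

rng = np.random.default_rng(0)
class_a = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))  # known pattern A
class_b = rng.normal(loc=[5.0, 5.0], scale=1.0, size=(100, 2))  # known pattern B
centroids = np.array([class_a.mean(axis=0), class_b.mean(axis=0)])

novel = np.array([40.0, -30.0])  # nothing like either training pattern
distances = np.linalg.norm(centroids - novel, axis=1)
label = ["A", "B"][int(np.argmin(distances))]
print(f"novel point confidently assigned to class {label}")
```

Real systems can be given rejection thresholds, but the underlying point stands: a model extrapolates from what it has already seen.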

The technology cannot yet add contextual understanding, correct erroneous input or check the veracity of its source information. A picture of a building on fire stamped with today’s date is, to an AI system, a building on fire today — not a historical artefact. Statements about climate change from sub-threads of an online debate are, to an AI system, a set of equally accurate views: it cannot sift out the wildly inaccurate, or separate the PhD thesis from the ramblings of a conspiracy theorist, unless coded to do so by a data scientist, who may in turn impose their own biases on the entire data set.
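
The “equally accurate views” problem can be made concrete with a small sketch. A naive aggregator averages every source’s claim with equal weight; a credibility-weighted version behaves differently, but the weights themselves are exactly where a data scientist’s judgment, and biases, enter. All figures here are invented.

```python
# Hypothetical claims about the same quantity from sources of very
# different quality. A naive average treats them as equally accurate.
claims = [
    ("PhD thesis",            0.20),
    ("news report",           0.18),
    ("conspiracy forum post", -1.50),
]

naive = sum(value for _, value in claims) / len(claims)
print(f"naive average: {naive:+.2f}")

# Credibility weights must be supplied by a human, and they encode
# that human's own judgments about who is trustworthy.
credibility = {"PhD thesis": 0.9, "news report": 0.6, "conspiracy forum post": 0.05}
weighted = sum(credibility[s] * v for s, v in claims) / sum(credibility.values())
print(f"credibility-weighted: {weighted:+.2f}")
```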

It is not only that AI data inputs are susceptible to human interference — its responses are also relatively easy to manipulate. There have always been those who like building tech, and those who enjoy breaking it — GCHQ’s second world war headquarters at Bletchley Park employed both codemakers and codebreakers. AI is no different. The most advanced driverless vehicles in San Francisco can be disrupted by pranksters who place a traffic cone on the bonnet, leaving the car stranded. The vehicle has no capability to deal with unforeseen events that a human can instinctively respond to.
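
The digital equivalent of the traffic cone is the adversarial example. A minimal sketch, using an invented linear classifier: nudging each input feature slightly against the model’s learned weights is enough to flip its decision, even though the change is small.

```python
# Illustrative only: an invented three-feature linear classifier.
# Nudging the input slightly against the weight vector flips the
# decision. This is the numerical cousin of the cone on the bonnet.
import numpy as np

w = np.array([0.9, -0.6, 0.4])   # assumed learned weights
b = -0.1                         # assumed bias
x = np.array([0.5, 0.1, 0.3])    # a benign input

def predict(x):
    return "positive" if w @ x + b > 0 else "negative"

print("original :", predict(x))      # positive

eps = 0.25                           # small perturbation budget
x_adv = x - eps * np.sign(w)         # push each feature against the weights
print("perturbed:", predict(x_adv))  # negative: decision flipped
```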

Within UK science and industry, researchers are investigating AI safety and security to build in robust standards for the future. These experts are working to ensure that features to counter adversarial practices are enshrined in AI design from the outset. Poisoning an AI system through corruption of the training data, whether deliberate or accidental, remains a critical risk.
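
A hedged sketch of what training-data poisoning can look like, on synthetic scikit-learn data with invented parameters: an attacker who can flip the labels of one class in the training set degrades the resulting model for everyone.

```python
# Illustrative only: synthetic data, an assumed 50% label-flip attack
# on one class, and a simple logistic regression. Real poisoning
# attacks are subtler, but the degradation pattern is the point.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"clean accuracy   : {clean.score(X_te, y_te):.2f}")

# Poison the training set: relabel half of class 0 as class 1.
rng = np.random.default_rng(0)
idx0 = np.where(y_tr == 0)[0]
flipped = rng.choice(idx0, size=len(idx0) // 2, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flipped] = 1

dirty = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print(f"poisoned accuracy: {dirty.score(X_te, y_te):.2f}")
```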

The obvious application of AI, for spies as for so many other professions, is in managing the mass of data created in daily life. Our modern society produces more data per minute than the ancient Greeks stored in their entire civilisation. We have a library without librarians. The quantity challenge is one where AI offers new opportunities.

But in a world in which adversaries are constantly pumping misinformation into the data sea, the quality challenge remains real. This is why intelligence agencies, like all those professions that deal in the pursuit of truth, still require humans in the loop.

“As AI trawls the ocean of open source . . . the unique characteristics of human agents in the right places will become still more significant,” Moore told his audience in Prague. “They are never just passive collectors of information . . . sometimes they can influence decisions inside a government or terrorist group.” For that, at least, no automated shortcut yet exists.