What’s missing from the Malicious Use of Artificial Intelligence report?

Only a fool would dare criticise the report “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” coming as it does from such an august set of bodies — to quote: “researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy.”

Cripes, that’s quite a list. But let me at least try to summarise its 100 pages of dense text.

– There’s a handy executive summary and introduction

– 38 pages cover all the things that could go wrong

– 15 pages describe ways to stop them happening

– 33 pages cover the people and materials referenced

It’s difficult to argue with any of it, on the surface at least, particularly the overall message: bad things could happen, and we should not sleepwalk into them. While this is welcome advice, one factor is conspicuous by its absence. Strangely, given that the report comes from groups for whom the scientific method should be as familiar as brushing one’s teeth in the morning, it lacks any discussion, or indeed conception, of the nature of risk.

Risk, as security and continuity professionals know, is a mathematical construct: the product of probability and impact. The report itself makes repeated use of the term ‘plausible’ to describe AI’s progress, potential targets and possible outcomes, but beyond this there is little definition.

We can all conjure disaster scenarios, but it is not until we apply our expertise and experience to assessing each risk that we can prioritise and (hopefully) mitigate those that emerge.
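By way of illustration, here is a minimal sketch (in Python) of the kind of risk scoring the report never attempts. The scenarios, probabilities and impact figures are hypothetical placeholders of my own, not numbers drawn from the report.

```python
# A minimal sketch of risk as probability x impact, using entirely
# hypothetical scenarios and numbers: none of these figures appear
# in the report itself.

scenarios = [
    # (scenario, probability of occurrence 0-1, impact 1-10)
    ("Automated spear-phishing at scale", 0.60, 6),
    ("AI-assisted malware evasion",       0.30, 7),
    ("Autonomous weapon misuse",          0.05, 10),
]

# Risk is the product of probability and impact; ranking scenarios
# by that product is what lets us prioritise mitigation effort.
ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)

for name, probability, impact in ranked:
    print(f"{name}: risk = {probability * impact:.2f}")
```

Even a crude product like this forces the prioritisation that a list of plausible scenarios, on its own, never can.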

So, without this rather important element, what can we distil from its pages? First, we can perceive the report’s underlying purpose: to bring together the dialogues of a number of disparate groups. “There remain many disagreements between the co-authors of this report,” it states, revealing the reality that it is a work in progress. To borrow an old consultancy phrase: “I’m sorry this report is so long; we didn’t have time to make it shorter.”

A second, laudable goal was to bring AI into the public discourse. In this it has succeeded, at least as measured in column inches, though in doing so it is in danger of achieving no more than adding to the well-meant, yet already heaped, pile of hype and anti-hype surrounding AI. Rodney Brooks’ essay “The Seven Deadly Sins of AI Predictions”, published in MIT Technology Review, offers a pretty good analysis of this phenomenon.

Finally, buried within its pages is an important admission on the part of “throw the doors open” organisations such as the Electronic Frontier Foundation. A priority area is stated as “Exploring Different Openness Models”: that’s right, it is not as simple as making everything open by default, particularly if bad guys and rogue governments have the same access as good, community-spirited folks like the rest of us. To wit:

“The potential misuses of AI technology surveyed in the Scenarios and Security Domains sections suggest a downside to openly sharing all new capabilities and algorithms by default: it increases the power of tools available to malicious actors.”

So, no, the report should not be thrown out wholesale: it collates some good, if incomplete, thinking. It should, however, be seen for what it is: a non-scientific work in progress, an undistilled set of perspectives from a range of academic researchers on an emerging capability. Indeed, three of the four recommendations advise more priority (and therefore, potentially, more money) to be allocated to AI-related research areas. That is a standard tactic for academics as much as for consulting firms.

Perhaps the fourth and final recommendation is the most telling: “Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.” Reports such as this will only make sense if they involve the technology organisations developing the capabilities in question, and if they reinforce the point that our policy makers need to be an order of magnitude more tech-savvy than they are today.

On the upside, there will (plausibly) be as many good things coming out of AI as bad. “These technologies have many widely beneficial applications,” states the report. So there remains cause for optimism, even as we look to gain a better handle on future reality.