It is profoundly difficult to grapple with risks whose stakes may include the global collapse of civilisation, or even the extinction of humanity. The pandemic has shattered our illusions of safety and reminded us that despite all the progress made in science and technology, we remain vulnerable to catastrophes that can overturn our entire way of life. These are live possibilities, not mere hypotheses, and our governments will have to confront them.
As Britain emerges from Covid-19, it could find itself at the forefront of the response to future disasters. The government’s recent integrated review, Britain’s G7 presidency and the Cop26 climate conference, which will be hosted in Glasgow later this year, are all occasions to address global crises. But to ensure that the UK really is prepared, we first need to identify the biggest risks we face in the coming decades.
Technological progress since the Industrial Revolution has ultimately increased the risk of the most extreme events, putting humanity’s future at stake through nuclear war or climate breakdown. One technology that may pose the greatest threat this century is artificial intelligence (AI) – not the current crop of narrowly intelligent networks, but more mature systems with a general intelligence that surpasses our own. AI pioneers from Alan Turing to Stuart Russell have argued that unless we develop the means to control such systems or to align them with our values, we will find ourselves at their mercy.
By my estimation, the chances of such risks causing an existential catastrophe in the next century are about one in six: like Russian roulette. If I’m even roughly right about the scale of these threats, then this is an unsustainable level of risk. We cannot survive many centuries without transforming our resilience.
The government’s recent integrated review highlighted the importance of these “catastrophic-impact threats”, paying attention to four of the most extreme risks: the threats from AI, global pandemics, the climate crisis and nuclear annihilation. It rightly noted the crucial role that AI systems will play in modern warfare, but was silent about the need to ensure that the AI systems we deploy are developed safely and aligned with human values. It underscored the likelihood of a successful biological attack in the coming years, but could have said more about the role science and technology can play in protecting us. And although it mentioned the threat of other countries increasing and diversifying their nuclear capabilities, the decision to expand the UK’s own nuclear arsenal is both disappointing and counterproductive.
To really transform our resilience to extreme risks, we need to go further. First, we must urgently address biosecurity. As well as the possibility of a new pandemic spilling over from animals, there is the even worse prospect of an engineered pandemic, designed by foreign states or non-state actors, with a combination of lethality, transmissibility, and vaccine resistance beyond any natural pathogen. With the rapid improvements in biotechnology, the number of parties who could create such a weapon is only growing.
To meet this risk, the UK should launch a new national centre for biosecurity, as has been recommended by the joint committee on the National Security Strategy and my own institute at Oxford University. This centre would counter the threat of biological weapons and laboratory escapes, develop effective defences against biological threats and foster talent and collaboration across the UK biosecurity community. There is a real danger that the legacy of Covid-19 will not go beyond preparing for the next naturally occurring pandemic, neglecting the possibility of a human-made pandemic that keeps experts up at night.
Second, the UK needs to transform its resilience to the full range of extreme risks we face. We don’t know what the next crisis on the scale of Covid-19 will be, so we need to be prepared for all such threats. The UK’s existing risk management system, within the Cabinet Office’s civil contingencies secretariat, is strong in many ways, but it only addresses risks that pose a clear danger in the next two years – making it impossible to adequately evaluate dangers that would take more than two years to prepare for, such as those posed by advanced AI. We also suffer from the lack of a chief risk officer, or equivalent position, who could take sole responsibility for the full range of extreme threats across government.