The End of the Simulation
Nick Bostrom, potential threats to humanity and why we don’t research those threats.
Eventually our planet will likely be destroyed. If we’re to survive as a species, we’ll have to spread to the stars at some point or make peace with the near-inevitability of our own destruction. You’d think that, because of this, humanity would be devoting significant resources to identifying and mitigating the many risks we face.
Unfortunately, our academic priorities don’t appear to be skewed towards existential risk. Nick Bostrom gave a short presentation in 2013 at TEDxOxford which briefly outlined many of the issues confronting researchers working on existential risk (1).
A little under eight minutes into the talk Bostrom refers to a chart showing existing academic priorities. Humanity devotes almost six hundred times more resources to research on snowboarding than we do to existential risk. We devote a thousand times more to the humble dung beetle. I’m not sure how many resources we devote to Angry Birds.
I’m not sure why this is the case. Perhaps it’s simple ignorance. Perhaps the thought of our impending destruction is just so confronting that we’ll do anything to avoid dealing with it. Perhaps we simply can’t handle our species’ overall lack of importance in the cosmos.
Nick Bostrom is one person who is doing something about this issue. In 2005 he founded the Future of Humanity Institute (2), a research centre which focusses on the big questions about our species’ existence and future prospects.
He argues that the most cost-effective way to reduce the existential risks facing humanity is to fund the analysis of a wide range of potential risks and mitigation strategies, conducted from a long-term perspective.
Bostrom’s view is that many of the most worrisome existential risks centre on the emergence of new technologies. He is particularly concerned about the emergence of a true artificial intelligence (AI). In 2014 Bostrom published Superintelligence: Paths, Dangers, Strategies (3).
This quite brilliant book describes the various ways superintelligence could develop, from biological methods such as selective breeding and genetic manipulation through to technological means such as whole brain emulation.
He discusses the forms in which superintelligence might arrive. For example, a ‘speed’ superintelligence could be roughly as intelligent as humans, but able to process information far faster. A ‘collective’ superintelligence might be composed of a number of smaller intellects interacting in some way. Lastly, a ‘quality’ superintelligence could carry out tasks which humans simply cannot do.
Bostrom focuses heavily on what he calls the ‘control problem’. Should humanity give birth to a superintelligence, there is no guarantee that its goals will align with ours, and it could act in ways that harm or even destroy us. Whilst every effort could be made to constrain the superintelligence, how could we possibly do so when the entity we are trying to constrain is more intelligent than we are?
Try as we might to retain control programmatically, how can we be sure that a superintelligent entity will not navigate its way around whatever constraints we put in place? It is indeed worrisome that Asimov’s three laws of robotics (4), written in 1942, are still in many ways the best constraints humanity has yet devised to control a potential artificial intelligence.
Bostrom has also found some renown as a leading proponent of the simulation argument (5). This fascinating argument contends that at least one of the following three seemingly unlikely propositions must be true:
1. The fraction of human-level civilisations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero; or
2. The fraction of posthuman civilisations that are interested in running ancestor simulations is very close to zero; or
3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
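The arithmetic behind the trilemma, sketched in Bostrom’s paper (5), helps here. Let f_p be the fraction of human-level civilisations that reach a posthuman stage, and N the average number of ancestor simulations a posthuman civilisation runs. The fraction of all observers with human-type experiences who live in simulations is then roughly:

f_sim = (f_p × N) / (f_p × N + 1)

Because a posthuman civilisation could run an astronomically large number of simulations, the product f_p × N is enormous unless f_p is very close to zero (the first proposition) or almost no posthuman civilisations bother running simulations (the second). Otherwise f_p × N dwarfs the 1 in the denominator and f_sim is driven very close to one (the third).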
Bostrom claims that if the third proposition is true then, since we would have no way of telling whether we were among the few unsimulated minds, we should conclude we are almost certainly living in a simulation ourselves. Before you throw your hands up in the air and tell Bostrom to save this kind of talk for the Wachowski brothers, just consider the idea.
Right now in our civilisation, one of the most popular forms of entertainment is so-called reality TV. It’s difficult to argue that humanity doesn’t enjoy watching the struggles and triumphs of others.
Right now in our civilisation, we run simulations all the time. They are called computer games. Some of these are very detailed, and they aim to enthral the user. There have been cases of gamers losing themselves in these games, and in extreme cases even dying as a result of playing them.
If we could actually run a ‘real world’ simulation online, do you think we would? I’d suggest that’s exactly what we would do. We love to play god. We love it so much we invented god. We love it so much we invented Second Life (6).
Superintelligence: Paths, Dangers, Strategies is quite a difficult read. I listened to the audiobook version, and I’d like to say the narrator made it easier to digest, but I suspect the narrator’s rather laboured tone may even have made it worse. Despite this I still recommend the book, and I thoroughly recommend Bostrom’s other work (7).