
Nick Bostrom

Quick facts
Nick Bostrom
(Photo: Bostrom in 2020)
Born: Niklas Boström, 10 March 1973 (age 51), Helsingborg, Sweden
Education: University of Gothenburg (BA); Stockholm University (MA); King's College London (MSc); London School of Economics (PhD)
Spouse: Susan
Awards:
  • Professorial Distinction Award from the University of Oxford
  • FP Top 100 Global Thinkers
  • Prospect's Top World Thinkers list
Era: Contemporary philosophy
Region: Western philosophy
School: Analytic philosophy
Institutions: Yale University; University of Oxford; Future of Humanity Institute
Thesis: Observational Selection Effects and Probability (2000)
Main interests: Philosophy of artificial intelligence; Bioethics
Notable ideas: Anthropic bias; Reversal test; Simulation hypothesis; Existential risk; Singleton; Ancestor simulation; Information hazard; Infinitarian paralysis; Self-indication assumption; Self-sampling assumption

Nick Bostrom (/ˈbɒstrəm/ BOST-rəm; Swedish: Niklas Boström [ˈnɪ̌kːlas ˈbûːstrœm]; born 10 March 1973) is a Swedish philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the Future of Humanity Institute at the University of Oxford, which was dissolved in 2024, and is now Principal Researcher at the Macrostrategy Research Initiative.

Bostrom is the author of Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002), Superintelligence: Paths, Dangers, Strategies (2014), and Deep Utopia: Life and Meaning in a Solved World (2024).

Bostrom believes that advances in artificial intelligence (AI) may lead to superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". He views this as a major source of opportunities and existential risks.

Early life and education

Born Niklas Boström in 1973 in Helsingborg, Sweden, he disliked school from a young age and spent his last year of high school learning from home. He was interested in a wide variety of academic areas, including anthropology, art, literature, and science.

He received a BA degree from the University of Gothenburg in 1994. He then earned an MA degree in philosophy and physics from Stockholm University and an MSc degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. He also did some turns on London's stand-up comedy circuit. In 2000, he was awarded a PhD degree in philosophy from the London School of Economics. His thesis was titled Observational Selection Effects and Probability. He held a teaching position at Yale University (2000–2002) and was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).

Research

Existential risk

Bostrom's research concerns the future of humanity and long-term outcomes. He discusses existential risk, which he defines as a risk in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". Bostrom is mostly concerned about anthropogenic risks, which arise from human activities, particularly from new technologies such as advanced artificial intelligence, molecular nanotechnology, or synthetic biology.

In 2005, Bostrom founded the Future of Humanity Institute, which, until its shutdown in 2024, researched the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.

In the 2008 essay collection Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relationship between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects and the Fermi paradox.

Vulnerable world hypothesis

In a paper called "The Vulnerable World Hypothesis", Bostrom suggests that there may be some technologies that destroy human civilization by default when discovered. Bostrom proposes a framework for classifying and dealing with these vulnerabilities. He also gives counterfactual thought experiments of how such vulnerabilities could have arisen historically, e.g. if nuclear weapons had been easier to develop, or if their detonation had ignited the atmosphere (as Robert Oppenheimer feared).

Superintelligence

In 2014, Bostrom published Superintelligence: Paths, Dangers, Strategies, which became a New York Times Best Seller. The book argues that superintelligence is possible and explores different types of superintelligences, their cognition, and the associated risks. He also presents technical and strategic considerations on how to make it safe.

Characteristics of a superintelligence

Bostrom explores multiple possible paths to superintelligence, including whole brain emulation and human intelligence enhancement, but focuses on artificial general intelligence, explaining that electronic devices have many advantages over biological brains.

Bostrom draws a distinction between final goals and instrumental goals. A final goal is what an agent tries to achieve for its own intrinsic value. Instrumental goals are just intermediary steps towards final goals. Bostrom contends that there are instrumental goals that will be shared by most sufficiently intelligent agents because they are generally useful for achieving any objective (e.g. preserving the agent's own existence or current goals, acquiring resources, improving its cognition); this is the concept of instrumental convergence. On the other hand, he writes that virtually any level of intelligence can in theory be combined with virtually any final goal (even absurd final goals, e.g. making paperclips), a concept he calls the orthogonality thesis.

He argues that an AI with the ability to improve itself might initiate an intelligence explosion, resulting (potentially rapidly) in a superintelligence. Such a superintelligence could have vastly superior capabilities, notably in strategizing, social manipulation, hacking or economic productivity. With such capabilities, a superintelligence could outwit humans and take over the world, establishing a singleton (which is "a world order in which there is at the global level a single decision-making agency") and optimizing the world according to its final goals.

Mitigating the risk

Bostrom explores several pathways to reduce the existential risk from AI. He emphasizes the importance of international collaboration, notably to reduce race-to-the-bottom and AI arms race dynamics. He suggests potential techniques to help control AI, including containment, stunting AI capabilities or knowledge, narrowing the operating context (e.g. to question-answering), or "tripwires" (diagnostic mechanisms that can lead to a shutdown). But Bostrom contends that "we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out". He thus suggests that, in order to be safe for humanity, superintelligence must be aligned with morality or human values so that it is "fundamentally on our side". Potential AI normativity frameworks include Yudkowsky's coherent extrapolated volition (human values improved via extrapolation), moral rightness (doing what is morally right), and moral permissibility (following humanity's coherent extrapolated volition except when it's morally impermissible).

Bostrom warns that an existential catastrophe can also occur from AI being misused by humans for destructive purposes, or from humans failing to take into account the potential moral status of digital minds. Despite these risks, he says that machine superintelligence seems involved at some point in "all the plausible paths to a really great future".

Digital sentience

Bostrom supports the substrate independence principle, the idea that consciousness can emerge on various types of physical substrates, not only in "carbon-based biological neural networks" like the human brain. He considers that "sentience is a matter of degree" and that digital minds can in theory be engineered to have a much higher rate and intensity of subjective experience than humans, using fewer resources. Such highly sentient machines, which he calls "super-beneficiaries", would be extremely efficient at achieving happiness. He recommends finding "paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive".

Anthropic reasoning

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that an anthropic theory is needed to deal with these. He introduces the Self-Sampling Assumption (SSA), which says to reason as if you were a random sample from the set of all observers in your reference class, and the Self-Indication Assumption (SIA), which says that, given that you exist, you should favor hypotheses on which many observers exist over hypotheses on which few do. He shows how they lead to different conclusions in a number of cases, and identifies how each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
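
How the two assumptions come apart can be seen in a minimal toy case that is standard in the anthropic-reasoning literature (an illustration, not a passage from Bostrom's book): a fair coin is tossed; heads creates one observer, tails creates two, and each observer asks what probability to assign to heads. Under SSA, an observer's existence is guaranteed either way and carries no evidence about the toss; under SIA, each hypothesis is weighted by the number of observers it contains:

\[
P_{\mathrm{SSA}}(\text{heads} \mid \text{I exist}) = \frac{1}{2},
\qquad
P_{\mathrm{SIA}}(\text{heads} \mid \text{I exist}) = \frac{\tfrac{1}{2} \cdot 1}{\tfrac{1}{2} \cdot 1 + \tfrac{1}{2} \cdot 2} = \frac{1}{3}.
\]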

In later work, he has proposed the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past. Bostrom claims events that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Simulation argument

Bostrom's simulation argument posits that at least one of the following statements is very likely to be true (a sketch of the underlying arithmetic follows the list):

  1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
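
The arithmetic behind this trilemma, roughly as given in Bostrom's 2003 paper "Are You Living in a Computer Simulation?" (a simplified sketch, not the full argument): let f_P be the fraction of human-level civilizations that reach a posthuman stage, N̄ the average number of ancestor-simulations run by such a civilization, and H̄ the average number of people who live in a civilization before it becomes posthuman. The fraction of all human-type experiences that are simulated is then

\[
f_{\mathrm{sim}} = \frac{f_P \, \bar{N} \, \bar{H}}{f_P \, \bar{N} \, \bar{H} + \bar{H}} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}.
\]

Because even one posthuman civilization interested in ancestor-simulations could make N̄ astronomically large, f_sim is driven toward one unless f_P is close to zero (proposition 1) or almost no posthuman civilizations run such simulations (proposition 2).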

Ethics of human enhancement

Bostrom is favorably disposed toward "human enhancement", or "self-improvement and human perfectibility through the ethical application of science", and is a critic of bio-conservative views.

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved with either of these organisations.

In 2005, Bostrom published the short story "The Fable of the Dragon-Tyrant" in the Journal of Medical Ethics. A shorter version was published in 2012 in Philosophy Now. The fable personifies death as a dragon that demands a tribute of thousands of people every day. The story explores how status quo bias and learned helplessness can prevent people from taking action to defeat aging even when the means to do so are at their disposal. YouTuber CGP Grey created an animated version of the story.

With philosopher Toby Ord, he proposed the reversal test in 2006. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait were altered in the opposite direction.

Bostrom's work also considers potential dysgenic effects in human populations, but he thinks genetic engineering can provide a solution, and that "In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot".

Technology strategy

Bostrom has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development: retarding the development of dangerous and harmful technologies while accelerating the development of beneficial ones, especially those that protect against the risks posed by the former.

In 2011, Bostrom founded the Oxford Martin Program on the Impacts of Future Technology.

Bostrom's theory of the unilateralist's curse (when each of many actors can undertake an action unilaterally, the action tends to be taken even if most of the actors judge it harmful, because a single overly optimistic actor suffices) has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.

Awards

Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential." Prospect Magazine listed Bostrom in their 2014 list of the World's Top Thinkers.

Public engagement

Bostrom has provided policy advice and consulted for an extensive range of governments and organizations. He gave evidence to the House of Lords Select Committee on Digital Skills. He is an advisory board member for the Machine Intelligence Research Institute and the Future of Life Institute, and an external advisor for the Cambridge Centre for the Study of Existential Risk.

In response to Bostrom's writing on artificial intelligence, Oren Etzioni wrote in a 2016 MIT Technology Review article that "predictions that superintelligence is on the foreseeable horizon are not supported by the available data." Professors Allan Dafoe and Stuart Russell wrote a response contesting both Etzioni's survey methodology and Etzioni's conclusions.

Bostrom has been called the "father" of longtermism.

Selected works

Books

  • 2002 – Anthropic Bias: Observation Selection Effects in Science and Philosophy, ISBN: 0-415-93858-9
  • 2008 – Global Catastrophic Risks, edited by Bostrom and Milan M. Ćirković, ISBN: 978-0-19-857050-9
  • 2009 – Human Enhancement, edited by Bostrom and Julian Savulescu, ISBN: 0-19-929972-2
  • 2014 – Superintelligence: Paths, Dangers, Strategies, ISBN: 978-0-19-967811-2
  • 2024 – Deep Utopia: Life and Meaning in a Solved World

Journal articles

  • Bostrom, Nick (2011). "Information Hazards: A Typology of Potential Harms from Knowledge". Review of Contemporary Philosophy 10: 44–79. ProQuest 920893069. http://www.nickbostrom.com/information-hazards.pdf.

Personal life

Bostrom met his wife Susan in 2002. As of 2015, she lived in Montreal and Bostrom in Oxford. They have one son.

See also

  • Doomsday argument
  • Dream argument
  • Effective altruism
  • Pascal's mugging