Nick Bostrom

Nick Bostrom (English: /ˈbɒstrəm/; Swedish: Niklas Boström, IPA: [buːˌstrœm]; born 10 March 1973)[1] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[2] and he is currently the founding director of the Future of Humanity Institute[3] at Oxford University.

He is the author of over 200 publications,[4] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller,[5] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[6] In 2009 and 2015 he was included in Foreign Policy's Top 100 Global Thinkers list.[7][8] Bostrom's work on superintelligence – and his concern about its existential risk to humanity over the coming century – has brought both Elon Musk and Bill Gates to similar thinking.[9][10][11]

Biography

Bostrom was born in 1973[12] in Helsingborg, Sweden.[4] At a young age he disliked school, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[13] Despite what has been called a "serious mien", he once did some turns on London's stand-up comedy circuit.[4]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg, and master's degrees in philosophy and physics, and in computational neuroscience, from Stockholm University and King's College London, respectively. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[13] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[6][14]

Philosophy

Existential risk

An important aspect of Bostrom's research concerns the future of humanity and long-term outcomes.[15][16] He introduced the concept of an existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[17] and the Fermi paradox.[18][19] In a 2013 paper in the journal Global Policy, Bostrom offers a taxonomy of existential risk and proposes a reconceptualization of sustainability in dynamic terms, as a developmental trajectory that minimizes existential risk.[20]

The philosopher Derek Parfit argued for the importance of ensuring the survival of humanity, due to the value of a potentially large number of future generations.[21] Similarly, Bostrom has said that, from a consequentialist perspective, even small reductions in the cumulative amount of existential risk that humanity will face are extremely valuable, to the point where the traditional utilitarian imperative – to maximize expected utility – can be simplified to the Maxipok principle: maximize the probability of an OK outcome, where an OK outcome is any outcome that avoids existential catastrophe.[22][23]
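Stated schematically (the notation below is an illustrative paraphrase; Bostrom gives the principle in prose), Maxipok trades the full expected-utility calculation for a simpler objective:

```latex
% A = set of available actions; OK = any outcome that avoids existential catastrophe.
a^{*} \;=\; \arg\max_{a \in A} \Pr(\mathrm{OK} \mid a)
\qquad \text{in place of} \qquad
a^{*} \;=\; \arg\max_{a \in A} \mathbb{E}\,[\,U \mid a\,]
```

The simplification is apt when the value lost in an existential catastrophe dwarfs every other difference between outcomes.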

In 2005, Bostrom founded the Future of Humanity Institute,[13] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[16]

Superintelligence

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasons that agents with "cognitive performance greatly [exceeding] that of humans in virtually all domains of interest" could promise substantial societal benefits and pose a significant artificial intelligence (AI)-related existential risk. It is therefore crucial, he says, that we approach this area with caution, and take active steps to mitigate the risks we face. In January 2015, Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Martin Rees, and Jaan Tallinn, among others, in signing the Future of Life Institute's open letter warning of the potential dangers of AI. The signatories "… believe that research on how to make AI systems robust and beneficial is both important and timely, and that specific research should be pursued today."[24][25]

Anthropic reasoning

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[26]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these cases. He introduced the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), and showed how they lead to different conclusions in a number of cases. He pointed out that each is afflicted with paradoxes or counterintuitive implications in certain thought experiments (the SSA in, for example, the Doomsday argument; the SIA in the Presumptuous Philosopher thought experiment). He suggested that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments". This could allow the reference class to be relativized (and he derived an expression for this in the "observation equation").
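The divergence between the two assumptions is easy to see in a toy version of the Presumptuous Philosopher case. The sketch below uses made-up observer counts purely for illustration; the reasoning pattern, not the numbers, is Bostrom's:

```python
# Two rival theories with equal prior probability, differing only in how
# many observers they predict. The counts are hypothetical placeholders.
prior = {"T1": 0.5, "T2": 0.5}
observers = {"T1": 1e6, "T2": 1e12}

# SIA: weight each hypothesis by the number of observers it predicts,
# because one's own existence is more probable under observer-rich theories.
weights = {t: prior[t] * observers[t] for t in prior}
total = sum(weights.values())
sia_posterior = {t: w / total for t, w in weights.items()}

# SSA: one reasons as a random sample from the observers who exist in any
# case, so merely being *some* observer licenses no update at all here.
ssa_posterior = dict(prior)

print("SIA:", sia_posterior)  # T2 gets ~0.999999 -- the "presumptuous" verdict
print("SSA:", ssa_posterior)  # still 50/50
```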

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[27] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical adjustments are made.
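The flavor of the bias can be reproduced with a minimal simulation (a toy model under assumed parameters, not the statistical machinery of the underlying paper): histories in which a catastrophe extinguished its observers leave no record-keepers, so the frequency read off surviving records understates the true rate.

```python
import random

# Toy anthropic-shadow model (illustrative assumptions): each epoch has
# probability P_CAT of a catastrophe; each catastrophe extinguishes the
# observer lineage with probability FATAL.
random.seed(0)
P_CAT, FATAL, EPOCHS, TRIALS = 0.02, 0.5, 100, 50_000

inferred_rates = []
for _ in range(TRIALS):
    recorded, survived = 0, True
    for _ in range(EPOCHS):
        if random.random() < P_CAT:
            if random.random() < FATAL:
                survived = False   # observers extinguished; no record exists
                break
            recorded += 1          # a non-fatal catastrophe leaves a trace
    if survived:                   # only surviving histories contain observers
        inferred_rates.append(recorded / EPOCHS)

print("true catastrophe rate:     ", P_CAT)
print("rate inferred by survivors:", sum(inferred_rates) / len(inferred_rates))
# The survivor-inferred rate comes out near 0.01, roughly half the true 0.02.
```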

Ethics of human enhancement

Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[28][29] as well as a critic of bio-conservative views.[30] With philosopher Toby Ord, he proposed the reversal test. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait were altered in the opposite direction.[31]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[28] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organizations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[32]

Technology strategy

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.

Bibliography

Books

  • 2002 – Anthropic Bias: Observation Selection Effects in Science and Philosophy, ISBN 0-415-93858-9
  • 2008 – Global Catastrophic Risks, edited by Bostrom and Milan M. Ćirković, ISBN 978-0-19-857050-9
  • 2009 – Human Enhancement, edited by Bostrom and Julian Savulescu, ISBN 0-19-929972-2
  • 2014 – Superintelligence: Paths, Dangers, Strategies, ISBN 978-0-19-967811-2

Journal articles (selected)

  • Bostrom, Nick (1999). "The Doomsday Argument is Alive and Kicking". Mind. 108 (431): 539-550. doi:10.1093/mind/108.431.539. JSTOR 2660095.
  • – (January 2000). "Observer-relative chances in anthropic reasoning?". Erkenntnis. 52 (1): 93-108. doi:10.1023/A:1005551304409. JSTOR 20012969.
  • – (June 2001). "The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe". Synthese. 127 (3): 359-387. doi:10.1023/A:1010350925053. JSTOR 20141195.
  • – (October 2001). "The Meta-Newcomb Problem". Analysis. 61 (4): 309-310. doi:10.1111/1467-8284.00310. JSTOR 3329010.
  • – (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology. 9 (1).
  • – (January 2002). "Self-Locating Belief in Big Worlds: Cosmology's Missing Link to Observation". Journal of Philosophy. 99 (12): 607-623. JSTOR 3655771.
  • – (April 2003). "Are You Living in a Computer Simulation?" (PDF). Philosophical Quarterly. 53 (211): 243-255. doi:10.1111/1467-9213.00309. JSTOR 3542867.
  • – (2003). "The Mysteries of Self-Locating Belief and Anthropic Reasoning" (PDF). Harvard Review of Philosophy. 11 (Spring): 59-74.
  • – (November 2003). "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Utilitas. 15 (3): 308-314. doi:10.1017/S0953820800004076.
  • – (May 2005). "The Fable of the Dragon-Tyrant". J Med Ethics. 31 (5): 273-277. doi:10.1136/jme.2004.009035. JSTOR 27719395. PMC 1734155. PMID 15863685.
  • – (June 2005). "In Defense of Posthuman Dignity". Bioethics. 19 (3): 202-214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
  • with Tegmark, Max (December 2005). "How Unlikely is a Doomsday Catastrophe?". Nature. 438 (7069): 754. doi:10.1038/438754a. PMID 16341005.
  • – (2006). "What is a singleton?". Linguistic and Philosophical Investigations. 5 (2): 48-54.
  • – (May 2006). "Quantity of Experience: Brain-Duplication and Degrees of Consciousness" (PDF). Minds and Machines. 16 (2): 185-200. doi:10.1007/s11023-006-9036-0.
  • with Ord, Toby (July 2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" (PDF). Ethics. 116 (4): 656-680. doi:10.1086/505233.
  • with Sandberg, Anders (December 2006). "Converging Cognitive Enhancements" (PDF). Annals of the New York Academy of Sciences. 1093 (1): 201-207. doi:10.1196/annals.1382.015.
  • – (July 2007). "Sleeping beauty and self-location: A hybrid model" (PDF). Synthese. 157 (1): 59-78. doi:10.1007/s11229-006-9010-7. JSTOR 27653543.
  • – (January 2008). "Drugs can be used to treat more than disease" (PDF). Nature. 451 (7178): 520. doi:10.1038/451520b.
  • – (2008). "The doomsday argument". Think. 6 (17-18): 23-28. doi:10.1017/S1477175600002943.
  • – (2008). "Where Are They? Why I hope the search for extraterrestrial life finds nothing" (PDF). Technology Review (May/June): 72-77.
  • with Sandberg, Anders (September 2009). "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" (PDF). Science and Engineering Ethics. 15 (3): 311-341. doi:10.1007/s11948-009-9142-5. PMID 19543814.
  • – (2009). "Pascal's Mugging" (PDF). Analysis. 69 (3): 443-445. doi:10.1093/analys/anp062. JSTOR 40607655.
  • with Ćirković, Milan; Sandberg, Anders (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Risk Analysis. 30 (10): 1495-1506. doi:10.1111/j.1539-6924.2010.01460.x.
  • – (2011). "Information Hazards: A Typology of Potential Harms from Knowledge" (PDF). Review of Contemporary Philosophy. 10: 44-79.
  • – (2011). "Infinite Ethics" (PDF). Analysis and Metaphysics. 10: 9-59.
  • – (May 2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents" (PDF). Minds and Machines. 22 (2): 71-84. doi:10.1007/s11023-012-9281-3.
  • with Shulman, Carl (2012). "How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects" (PDF). J. Consciousness Studies. 19 (7-8): 103-130.
  • with Armstrong, Stuart; Sandberg, Anders (November 2012). "Thinking Inside the Box: Controlling and Using an Oracle AI" (PDF). Minds and Machines. 22 (4): 299-324. doi:10.1007/s11023-012-9282-2.
  • – (February 2013). "Existential Risk Prevention as Global Priority". Global Policy. 4 (1): 15-31. doi:10.1111/1758-5899.12002.
  • with Shulman, Carl (February 2014). "Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?" (PDF). Global Policy. 5 (1): 85-92. doi:10.1111/1758-5899.12123.
  • with Muehlhauser, Luke (2014). "Why we need friendly AI" (PDF). Think. 13 (36): 41-47. doi:10.1017/S1477175613000316.

References

  1. "nickbostrom.com". Nickbostrom.com. Retrieved 16 October 2014.
  2. "Professor Nick Bostrom: People". Oxford Martin School. Retrieved 16 October 2014.
  3. "Future of Humanity Institute – University of Oxford". Fhi.ox.ac.uk. Retrieved 16 October 2014.
  4. Thornhill, John (14 July 2016). "Artificial intelligence: can we control it?". Financial Times. Retrieved 10 August 2016. (subscription required)
  5. "Best Selling Science Books". The New York Times. Retrieved 19 February 2015.
  6. "Nick Bostrom on artificial intelligence". Oxford University Press. 8 September 2014. Retrieved 4 March 2015.
  7. Frankel, Rebecca. "The FP Top 100 Global Thinkers". Foreign Policy. Retrieved 5 September 2015.
  8. "Nick Bostrom: For sounding the alarm on our future computer overlords". foreignpolicy.com. Foreign Policy magazine. Retrieved 1 January 2015.
  9. "Forbes". Forbes. Retrieved 19 February 2015.
  10. "Bill Gates Is Worried About the Rise of the Machines". The Fiscal Times. Retrieved 19 February 2015.
  11. Bratton, Benjamin H. (23 February 2015). "Outing A.I.: Beyond the Turing Test". The New York Times. Retrieved 4 March 2015.
  12. Kurzweil, Ray (2012). How to Create a Mind: The Secret of Human Thought Revealed. New York: Viking. ISBN 9781101601105.
  13. Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention". The New Yorker. Condé Nast. XC (37): 64-79. ISSN 0028-792X.
  14. "Nick Bostrom: CV" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
  15. Bostrom, Nick (March 2002). "Existential Risks". Journal of Evolution and Technology. 9.
  16. Andersen, Ross. "Omens". Aeon Media Ltd. Retrieved 5 September 2015.
  17. Tegmark, Max; Bostrom, Nick (2005). "Astrophysics: Is a doomsday catastrophe likely?" (PDF). Nature. 438 (7069): 754. doi:10.1038/438754a. PMID 16341005.
  18. Bostrom, Nick (May-June 2008). "Where Are They? Why I Hope the Search for Extraterrestrial Life Finds Nothing" (PDF). MIT Technology Review: 72-77.
  19. Overbye, Dennis (3 August 2015). "The Flip Side of Optimism About Life on Other Planets". The New York Times. Retrieved 29 October 2015.
  20. "Existential Risk Prevention as Global Priority" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
  21. Parfit, Derek (1984). Reasons and Persons. Oxford, England: Oxford University Press. pp. 453-454. ISBN 019824908X.
  22. "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Nickbostrom.com. Retrieved 16 October 2014.
  23. "Existential Risks: Analyzing Human Extinction Scenarios". Nickbostrom.com. Retrieved 16 October 2014.
  24. "The Future of Life Institute Open Letter". The Future of Life Institute. Retrieved 4 March 2015.
  25. "Scientists and investors warn on AI". The Financial Times. Retrieved 4 March 2015.
  26. Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy (PDF). New York: Routledge. pp. 44-58. ISBN 0-415-93858-9. Retrieved 22 July 2014.
  27. "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Nickbostrom.com. Retrieved 16 October 2014.
  28. Sutherland, John (9 May 2006). "The ideas interview: Nick Bostrom; John Sutherland meets a transhumanist who wrestles with the ethics of technologically enhanced human beings". The Guardian.
  29. Bostrom, Nick (2003). "Human Genetic Enhancements: A Transhumanist Perspective" (PDF). Journal of Value Inquiry. 37 (4): 493-506. doi:10.1023/B:INQU.0000019037.67783.d5.
  30. Bostrom, Nick (2005). "In Defence of Posthuman Dignity". Bioethics. 19 (3): 202-214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
  31. Bostrom, Nick; Ord, Toby (2006). "The reversal test: eliminating status quo bias in applied ethics" (PDF). Ethics. 116 (4): 656-679. doi:10.1086/505233.
  32. "The FP Top 100 Global Thinkers – 73. Nick Bostrom". Foreign Policy. December 2009.
  33. Bostrom, Nick (19 January 2010). "Are You Living in a Computer Simulation?".
  34. "nickbostrom.com". Nickbostrom.com. Retrieved 19 February 2015.