
Nick Bostrom

Nick Bostrom is a philosopher, TED speaker, and author of the New York Times bestseller Superintelligence and, most recently, Deep Utopia.
Nationality: Swedish
Residence: UK
Occupation: Philosopher, professor, TED speaker, author
Known for: Superintelligence (author), Deep Utopia (author)
Accolades: Foreign Policy’s Top 100 Global Thinkers list (twice); Prospect’s World Thinkers list (the youngest person in the top 15)
Summary

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He is known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test.

Nick was the founding director of the Future of Humanity Institute at Oxford University (2005-2024), where he was also a Professor of Philosophy.

Nick is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), Superintelligence: Paths, Dangers, Strategies (2014), which became a New York Times bestseller and sparked a global conversation about the future of AI, and Deep Utopia: Life and Meaning in a Solved World (Ideapress, 2024).

He is one of the world's most-cited living philosophers. He is a repeat main-stage TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list as the youngest person in the top 15.

Nick’s work has pioneered some of the ideas that frame current thinking about humanity’s future, including the simulation argument, the vulnerable world hypothesis, and the unilateralist’s curse.

As a graduate student, Nick also tried a stint in stand-up comedy on the London circuit.
 

Biography

Born in 1973 in Helsingborg, Sweden, Nick received a B.A. degree from the University of Gothenburg in 1994. He then earned an M.A. degree in philosophy and physics from Stockholm University and an MSc degree in computational neuroscience from King's College London in 1996.

During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. In 2000, he was awarded a PhD degree in philosophy from the London School of Economics. His thesis was titled Observational Selection Effects and Probability.

He held a teaching position at Yale University from 2000 to 2002, and was a British Academy Postdoctoral Fellow at the University of Oxford from 2002 to 2005.

Research and Authorship

Nick is the author of Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002), Superintelligence: Paths, Dangers, Strategies (2014), and, most recently, Deep Utopia: Life and Meaning in a Solved World (2024).

Some of his main research themes and ideas are as follows:

Anthropic Reasoning

In his work on anthropic reasoning, Nick Bostrom challenges conventional formulations of the anthropic principle and advocates for a more nuanced approach to indexical information across various disciplines including cosmology, philosophy, and quantum physics.

In his book "Anthropic Bias: Observation Selection Effects in Science and Philosophy," Bostrom critiques existing theories by scholars such as Brandon Carter and John Leslie, proposing instead the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA) to address paradoxes and counterintuitive implications. He suggests extending SSA to the Strong Self-Sampling Assumption (SSSA) to refine the concept further.

Additionally, Bostrom introduces the concept of the anthropic shadow, highlighting how certain catastrophic events in geological and evolutionary history may be underestimated due to observation selection effects.

Superintelligence

Nick Bostrom's concept of superintelligence, as outlined in his 2014 bestseller "Superintelligence: Paths, Dangers, Strategies," delves into the potential development of artificial general intelligence (AGI) and its implications.

Bostrom explores various pathways to achieving superintelligence, including whole brain emulation and AGI, highlighting the transformative power such entities could wield. He distinguishes between final goals and instrumental goals, arguing that while certain objectives may converge across intelligent agents, the combination of any level of intelligence with diverse final goals could lead to unforeseen consequences.

Bostrom warns of the risks associated with creating a superintelligent AI, emphasising the potential for an intelligence explosion and the establishment of a singleton—a global decision-making entity that could optimise the world according to its goals.

Global Catastrophic Risks (2008)

Bostrom's research concerns the future of humanity and long-term outcomes. He discusses existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".

Bostrom is mostly concerned about anthropogenic risks, which are risks arising from human activities, particularly from new technologies such as advanced artificial intelligence, molecular nanotechnology, or synthetic biology.

In the 2008 essay collection Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterise the relationship between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects and the Fermi paradox.

Vulnerable world hypothesis (2019)

In the Vulnerable World Hypothesis, Nick Bostrom highlights that certain technologies, once discovered, could inadvertently lead to the destruction of human civilisation. He presents a framework for identifying and addressing these vulnerabilities, offering historical counterfactuals to illustrate potential catastrophic outcomes.

Bostrom also explores strategies to mitigate existential risks from artificial intelligence (AI), advocating for international collaboration and proposing techniques such as containment and establishing normative frameworks aligned with human values.

However, he cautions against overconfidence in controlling superintelligent AI, emphasising the need for proactive measures to ensure its alignment with morality and prevent misuse by humans.

Digital sentience

Bostrom supports the substrate independence principle, the idea that consciousness can emerge on various types of physical substrates, not only in "carbon-based biological neural networks" like the human brain. He considers that "sentience is a matter of degree" and that digital minds can in theory be engineered to have a much higher rate and intensity of subjective experience than humans, using fewer resources.

Such highly sentient machines, which he calls "super-beneficiaries", would be extremely efficient at achieving happiness. He recommends finding "paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive".

Simulation argument

Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:

The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
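The trilemma rests on a simple counting argument. A sketch in roughly the notation of the original 2003 paper (the symbol names below follow that paper; the phrasing here is a paraphrase, not a quotation):

```latex
% f_P       : fraction of human-level civilisations that reach a posthuman stage
% f_I       : fraction of posthuman civilisations that run ancestor-simulations
% \bar{N}_I : average number of ancestor-simulations run by such a civilisation
% Each ancestor-simulation contains roughly as many observers as one real
% history, so the expected fraction of all observers who are simulated is:
f_{\mathrm{sim}} \;=\; \frac{f_P \, f_I \, \bar{N}_I}{f_P \, f_I \, \bar{N}_I + 1}
```

Because any civilisation that runs ancestor-simulations at all would plausibly run a very large number of them, \(\bar{N}_I\) would be enormous, so \(f_{\mathrm{sim}}\) can only be far from one if \(f_P\) or \(f_I\) is close to zero. That is what forces at least one of the three alternatives above.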

Ethics of human enhancement

Nick Bostrom advocates for "human enhancement," promoting self-improvement and human perfectibility through ethically guided scientific advancements, while challenging bio-conservative perspectives.

His 2005 publication "The Fable of the Dragon-Tyrant" personifies death as a relentless force, illustrating how societal inertia and learned helplessness hinder efforts to combat aging. 
Alongside philosopher Toby Ord, Bostrom introduced the reversal test in 2006, aiming to discern valid critiques of proposed human trait changes from mere resistance to change, addressing human bias towards the status quo.

While acknowledging potential dysgenic effects, Bostrom believes genetic engineering offers a solution, suggesting that the timescale for natural genetic evolution renders its impact negligible compared to imminent technological developments.

Technology strategy

Bostrom has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.

Bostrom's theory of the Unilateralist's Curse has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.

Latest book: Deep Utopia: Life and Meaning in a Solved World

In his latest book, "Deep Utopia: Life and Meaning in a Solved World," Nick Bostrom shifts the focus from the potential dangers of artificial intelligence explored in his previous work, "Superintelligence: Paths, Dangers, Strategies," to envisioning a future where AI development unfolds positively.

As the conversation around AI continues to evolve, Bostrom probes the profound philosophical and spiritual implications of a world where superintelligence is safely developed, effectively governed, and utilised for the benefit of humanity.

In this hypothetical scenario of a "solved world," where human labour becomes obsolete due to advanced AI systems, Bostrom raises existential questions about the essence of human existence and the pursuit of meaning. With the advent of technologies capable of fulfilling practical needs and desires beyond human capabilities, society would enter a state of "post-instrumentality," where the traditional purposes of human endeavour are rendered obsolete.

Against this backdrop, "Deep Utopia" delves into the complexities of navigating a world where the fundamental challenges facing humanity are no longer material but philosophical and spiritual. Bostrom explores how individuals and societies might grapple with issues of purpose, identity, and fulfilment in a world where traditional notions of work, struggle, and mortality are fundamentally altered.

Drawing on his expertise as the director of the Future of Humanity Institute at Oxford University, Bostrom invites readers to contemplate the implications of technological progress not only on practical aspects of life but also on the deeper dimensions of human experience. By challenging readers to envision and prepare for a future radically different from the present, "Deep Utopia" offers a thought-provoking exploration of the possibilities and pitfalls awaiting humanity in an era of unprecedented technological advancement.

Other Affiliations


In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved with either of these organisations.

In 2011, Bostrom founded the Oxford Martin Programme on the Impacts of Future Technology.

Additional


Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential." Prospect Magazine listed Bostrom in their 2014 list of the World's Top Thinkers.

Bostrom has provided policy advice and consulted for many governments and organisations, and gave evidence to the House of Lords Select Committee on Digital Skills.

He is an advisory board member for the Machine Intelligence Research Institute and the Future of Life Institute, and an external advisor for the Cambridge Centre for the Study of Existential Risk.

Books

2002 – Anthropic Bias: Observation Selection Effects in Science and Philosophy, ISBN 0-415-93858-9
2008 – Global Catastrophic Risks, edited by Bostrom and Milan M. Ćirković, ISBN 978-0-19-857050-9
2009 – Human Enhancement, edited by Bostrom and Julian Savulescu, ISBN 0-19-929972-2
2014 – Superintelligence: Paths, Dangers, Strategies, ISBN 978-0-19-967811-2
2024 – Deep Utopia: Life and Meaning in a Solved World, Ideapress

Journal articles

“How Long Before Superintelligence?” Journal of Future Studies

(January 2000) “Observer-relative chances in anthropic reasoning?”

(October 2001) “The Meta-Newcomb Problem” Analysis

(March 2002) “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards” Journal of Evolution and Technology

(April 2003) “Are You Living in a Computer Simulation?” Philosophical Quarterly

(2003) “The Mysteries of Self-Locating Belief and Anthropic Reasoning” Harvard Review of Philosophy

(November 2003) "Astronomical Waste: The Opportunity Cost of Delayed Technological Development"

(June 2005) “In Defense of Posthuman Dignity” Bioethics

(December 2005) “How Unlikely is a Doomsday Catastrophe?” Nature

(2006) “What is a Singleton?” Linguistic and Philosophical Investigations

(July 2006) “The Reversal Test: Eliminating Status Quo Bias in Applied Ethics” Ethics

(December 2006) “Converging Cognitive Enhancements” Annals of the New York Academy of Sciences

(January 2008) “Drugs can be used to treat more than disease” Nature

(2008) “The doomsday argument” Think

(2008) “Where Are They? Why I hope the search for extraterrestrial life finds nothing” Technology Review (May/June)

(September 2009) “Cognitive Enhancement: Methods, Ethics, Regulatory Challenges” Science and Engineering Ethics

(2009) “Pascal's Mugging” Analysis

(2010) “Anthropic Shadow: Observation Selection Effects and Human Extinction Risks” Risk Analysis

(2011) “Information Hazards: A Typology of Potential Harms from Knowledge” Review of Contemporary Philosophy

(2011) “The Ethics of Artificial Intelligence” Cambridge Handbook of Artificial Intelligence

(2011) “Infinite Ethics” Analysis and Metaphysics

(May 2012) “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents” Minds and Machines

(November 2012) “Thinking Inside the Box: Controlling and Using Oracle AI” Minds and Machines

(February 2013) "Existential Risk Reduction as Global Priority". Global Policy

(February 2014) “Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?” Global Policy

(2014) “Why we need friendly AI” Think

(September 2019) “The Vulnerable World Hypothesis” Global Policy

Vision

Nick Bostrom's vision is deeply rooted in a quest to unravel the mysteries of humanity's "macrostrategic situation," aiming to illuminate the larger context in which civilization operates and the implications of our choices for ultimate outcomes and values. With a background spanning philosophy, physics, and neuroscience, Bostrom perceives humanity as akin to ants constructing an anthill without a clear understanding of their actions' consequences.

His founding of the Future of Humanity Institute at Oxford University in 2005 was a response to what he saw as the pressing need for systematic exploration of crucial questions often dismissed as speculative or futuristic. Through FHI, Bostrom fostered interdisciplinary collaboration among brilliant minds, sparking significant advancements in fields like AI safety, existential risk, and effective altruism.

Although FHI has fulfilled its purpose and Bostrom acknowledges its fondly remembered legacy, his own research continues to delve into diverse areas, from AI ethics and existential risks to the moral status of digital minds and metaethics.

Recognition and Awards

Nick was named in Foreign Policy's 2009 list of top global thinkers for his unyielding exploration of humanity's potential and the challenges it faces. Additionally, Prospect Magazine included him in their 2014 list of the World's Top Thinkers. Bostrom's seminal work, "Superintelligence: Paths, Dangers, Strategies," garnered acclaim from luminaries such as Stephen Hawking, Bill Gates, and Elon Musk, further solidifying his reputation as a leading visionary in the field of AI ethics and existential risk assessment.

In addition to these accolades, Bostrom's role as the founder of the Future of Humanity Institute at Oxford University has been instrumental in catalysing interdisciplinary research and fostering a global dialogue on existential risks and the long-term trajectory of civilization. His advisory roles for organisations such as the Machine Intelligence Research Institute and the Future of Life Institute underscore his influence in guiding policy and research efforts aimed at ensuring the responsible development of advanced technologies.

Furthermore, Bostrom's enduring commitment to advancing human knowledge and his pioneering efforts in delineating the complex ethical and strategic challenges posed by emerging technologies continue to garner admiration and acclaim from peers and policymakers alike, solidifying his status as a preeminent figure in the realm of future studies.