Key sources for the "most important century" series

Here are key sources featured in each post of the "most important century" series, for readers interested in going more in depth:

Eternity in Six Hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox (Armstrong and Sandberg 2013). Discusses how a stable galaxy-wide civilization could be created with not-yet-existing, but achievable-seeming, technologies. Cited in All possible views about humanity's long-term future are wild.

Dissolving the Fermi Paradox (Sandberg, Drexler and Ord 2018). Argues that the hardest, most unlikely steps on the road to galaxy-scale expansion are likely the steps our species has already taken. Cited in All possible views about humanity's long-term future are wild.
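
A minimal sketch of the paper's core move, using made-up illustrative parameter ranges rather than the authors' actual choices: instead of multiplying point estimates of Drake-equation-style factors, sample each factor from a wide distribution and look at the whole distribution of outcomes. The mean number of civilizations can be large even while a substantial share of samples imply an empty galaxy, which dissolves the apparent paradox.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 100_000

def log_uniform(low, high, size):
    """Sample uniformly in log10-space between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

# Illustrative (made-up) ranges for Drake-equation-style factors.
R_star = log_uniform(1, 100, N_SAMPLES)     # stars formed per year
f_p    = log_uniform(0.1, 1, N_SAMPLES)     # fraction of stars with planets
n_e    = log_uniform(0.1, 10, N_SAMPLES)    # habitable planets per such star
f_l    = log_uniform(1e-30, 1, N_SAMPLES)   # probability life arises
f_i    = log_uniform(1e-3, 1, N_SAMPLES)    # probability of intelligence
f_c    = log_uniform(1e-2, 1, N_SAMPLES)    # probability of detectability
L      = log_uniform(1e2, 1e8, N_SAMPLES)   # civilization lifetime (years)

N = R_star * f_p * n_e * f_l * f_i * f_c * L  # detectable civilizations

print(f"mean N:   {N.mean():.3g}")            # can be enormous...
print(f"median N: {np.median(N):.3g}")        # ...while the median is tiny
print(f"P(N < 1): {(N < 1).mean():.1%}")      # chance we're effectively alone
```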

3 pieces on the dynamics of accelerating growth and how advanced AI could cause it, all cited in The Duplicator:

Age of Em (Hanson 2018). Attempts detailed forecasts of what the world could look like if mind uploading were possible. Cited in The Duplicator and Digital People Would Be An Even Bigger Deal.

The Singularity: A Philosophical Analysis (Chalmers 2010, Section 9), Zombies (Yudkowsky 2016), The Conscious Mind (Chalmers 1996). These sources argue that sufficiently detailed digital copies of humans would be conscious. Cited in Digital People Would Be An Even Bigger Deal.

Limits to Growth (Hanson 2009) and Galactic-Scale Energy (Murphy 2011). Analyses arguing that today's level of growth can't be sustained for more than another few thousand years. Cited in This Can't Go On.
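
A back-of-the-envelope illustration of the kind of arithmetic these analyses turn on (the specific numbers here are assumptions for illustration, not figures from either source): a few percent of annual growth compounds past any fixed physical budget within a few thousand years.

```python
import math

# Illustrative assumptions, not the sources' exact figures.
growth_rate = 0.02        # 2% annual growth in economic output
atoms_in_galaxy = 1e70    # rough order of magnitude for atoms in the Milky Way

# Years until output exceeds one "today's world economy" per atom in the galaxy.
years = math.log(atoms_in_galaxy) / math.log(1 + growth_rate)
print(f"~{years:,.0f} years")  # on the order of 8,000 years at 2% growth
```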

Superintelligence (Bostrom 2017) and Draft report on existential risk from power-seeking AI (Carlsmith 2021). Discussions of the AI alignment problem, which is a key topic of Forecasting Transformative AI, Part 1: What Kind of AI?

Report on Semi-informative Priors (Davidson 2021). Estimates the probability of transformative AI by various dates using a relatively simple mathematical framework. Cited in Forecasting Transformative AI: What's the Burden of Proof?
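
To give a feel for what such a framework can look like, here is a toy Laplace-rule-of-succession calculation, which is only a simplified stand-in for Davidson's actual model (the report generalizes this idea with "virtual successes" and other adjustments): treat each year of AI research as a trial that has so far failed to produce transformative AI, and ask how likely at least one success is over some future horizon.

```python
def prob_success_within(n_failures: int, horizon: int) -> float:
    """Toy Laplace-rule estimate: after n_failures failed trials and no
    successes, the chance the next trial succeeds is 1 / (n_failures + 2);
    compound that over the next `horizon` trials."""
    p_no_success = 1.0
    for k in range(horizon):
        p_no_success *= 1.0 - 1.0 / (n_failures + k + 2)
    return 1.0 - p_no_success

# Assumption for illustration: count each year since the 1956 Dartmouth
# workshop as one failed trial, and ask about the next 30 years.
years_so_far = 2021 - 1956
print(f"P(transformative AI within 30 years) ~ {prob_success_within(years_so_far, 30):.0%}")
```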

What should we learn from past AI forecasts? (Muehlhauser 2016). Examines past "AI hype cycles." Cited in Forecasting Transformative AI: What's the Burden of Proof?

When Will AI Exceed Human Performance? Evidence from AI Experts (Grace, Salvatier, Dafoe, Zhang, Evans 2017). Survey of 352 AI researchers on future AI capabilities. Cited in Forecasting Transformative AI: Are we "trending toward" transformative AI?

Draft report on Biological Anchors (Cotra 2020). Estimates the probability of transformative AI by various dates, by asking: "Based on the usual patterns in how much 'AI training' costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?" Cited in Forecasting Transformative AI: Are we "trending toward" transformative AI? and Forecasting transformative AI: the "biological anchors" method in a nutshell.
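
A stylized sketch of the report's cost-crossover logic, with placeholder numbers that are assumptions for illustration rather than the report's carefully argued anchors: estimate the compute a brain-scale training run would need, project how the cost of that compute falls and how much anyone is willing to spend, and find the year the two curves cross.

```python
# Placeholder assumptions, NOT the report's actual estimates.
TRAINING_FLOP_NEEDED = 1e34   # FLOP for a "brain-scale" training run
COST_PER_FLOP_2020 = 1e-17    # dollars per FLOP in 2020
COST_HALVING_YEARS = 2.5      # effective cost-halving time (hardware + algorithms)
SPEND_2020 = 1e8              # largest plausible training budget in 2020 (dollars)
SPEND_GROWTH = 1.2            # annual growth in willingness to spend

def training_cost(year: int) -> float:
    """Dollar cost of the brain-scale training run in a given year."""
    halvings = (year - 2020) / COST_HALVING_YEARS
    return TRAINING_FLOP_NEEDED * COST_PER_FLOP_2020 * 0.5 ** halvings

def max_spend(year: int) -> float:
    """Largest training budget anyone is willing to pay in a given year."""
    return SPEND_2020 * SPEND_GROWTH ** (year - 2020)

year = 2020
while training_cost(year) > max_spend(year):
    year += 1
print(f"Brain-scale training run first affordable around {year}")
```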

How Much Computational Power Does It Take to Match the Human Brain? (Carlsmith 2020). Its methodology informs the "AI model as big as a human brain" aspect of the Biological Anchors report. Cited in Forecasting transformative AI: the "biological anchors" method in a nutshell.
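
The mechanistic flavor of that estimate can be conveyed with one line of arithmetic; the inputs below are rough order-of-magnitude assumptions for illustration, not the report's settled figures: multiply synapse count by average firing rate by the compute needed to model one synaptic event.

```python
# Rough illustrative inputs (assumptions, not the report's settled figures).
synapses = 1e14          # synapses in the human brain (order of magnitude)
firing_rate_hz = 1.0     # average spikes per second reaching each synapse
flop_per_event = 10      # FLOP to model one spike-through-synapse

print(f"~{synapses * firing_rate_hz * flop_per_event:.0e} FLOP/s")  # ~1e15 FLOP/s
```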

The Future of Human Evolution (Bostrom 2004). Discusses potential bad dynamics of a "race to populate the galaxy." Cited in How to make the best of the most important century?

The Precipice (Ord 2020), Chapter 7. Discusses the "long reflection," a potential period in which people could collectively decide upon goals and hopes for the future, ideally representing the most fair available compromise between different perspectives. Cited in How to make the best of the most important century?