"Most important century" series: roadmap
This is an outline of how each piece in the "most important century" series relates to the overall argument. I think it's useful to read through this outline before reading the series itself, to get a sense of where each piece fits in.
I think we have good reason to believe that the 21st century could be the most important century ever for humanity. I think the most likely way this would happen would be via the development of advanced AI systems that lead to explosive growth and scientific advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.
A bit more specifically, I think there is a good chance that:
- During the century we're in right now, we will develop technologies that cause us to transition to a state in which humans as we know them are no longer the main force in world events. This is our last chance to shape how that transition happens.
- Whatever the main force in world events is (perhaps digital people, misaligned AI, or something else) will create highly stable civilizations that populate our entire galaxy for billions of years to come. The transition taking place this century could shape all of that.
I think it's very unclear whether this would be a good or bad thing. What matters is that it could go a lot of different ways, and we have a chance to affect that.
I believe the above possibility doesn't get enough attention, discussion, or investment, particularly from people whose goal is to make the world better. By writing about it, I'd like to either help change that, or gain more opportunities to get criticized and change my mind.
This post serves as a summary/roadmap for an 11-post series arguing these points (and the posts themselves are often effectively summaries of longer analyses by others). I will add links as I put out posts in the series.
Our wildly important era
All Possible Views About Humanity's Future Are Wild argues that two simple observations - (a) it appears likely that we will eventually be able to spread throughout the galaxy, and (b) it doesn't seem any other life form has done that yet - are sufficient to make the case that we live in an incredibly important time. I illustrate this with a timeline of the galaxy.
The Duplicator explains the basic mechanism by which "eventually" above could become "soon": the ability to "copy human minds" could lead to a productivity explosion. This is background for the next few pieces.
Digital People Would Be An Even Bigger Deal discusses how achievable-seeming technology - in particular, mind uploading - could lead to unprecedented productivity, control of the environment, and more. The result could be a stable, galaxy-wide civilization that is deeply unfamiliar from today's vantage point.
Our century's potential for acceleration
This Can't Go On looks at economic growth and scientific advancement over the course of human history. Over the last few generations, growth has been pretty steady. But zooming out to a longer time frame, it seems that growth has greatly accelerated recently; is near its historical high point; and is faster than it can be for all that much longer (there aren't enough atoms in the galaxy to sustain this rate of growth for even another 10,000 years).
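The "not enough atoms" claim can be checked with a back-of-envelope calculation. The sketch below uses assumed round figures (roughly 10^70 atoms in the galaxy, a ~$10^14 world economy, ~2% annual growth); none of these numbers come from the piece itself, and the point is only the order of magnitude: steady exponential growth implies more than a dollar of economic value per atom in the galaxy in well under 10,000 years.

```python
import math

# Assumed round figures for a back-of-envelope check (not from the source):
ATOMS_IN_GALAXY = 1e70      # rough estimate of atoms in the Milky Way
WORLD_ECONOMY_USD = 1e14    # rough gross world product, ~$100 trillion
GROWTH_RATE = 0.02          # ~2%/year, roughly the recent historical average

# Years until the economy exceeds one dollar of value per atom:
# solve WORLD_ECONOMY_USD * (1 + r)**t > ATOMS_IN_GALAXY for t.
years = math.log(ATOMS_IN_GALAXY / WORLD_ECONOMY_USD) / math.log(1 + GROWTH_RATE)
print(round(years))  # well under 10,000 years
```

Even if the assumed figures are off by several orders of magnitude in either direction, the logarithm keeps the answer in the same ballpark of a few thousand years, which is the sense in which current growth rates "can't go on."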
The times we live in are unusual and unstable. Rather than planning on more of the same, we should anticipate stagnation (growth and scientific advancement slowing down), explosion (further acceleration), or collapse.
Forecasting Transformative AI, Part 1: What Kind of AI? introduces the possibility of AI systems that automate scientific and technological advancement, which could cause explosive productivity. I argue that such systems would be "transformative" in the sense of bringing us into a new, qualitatively unfamiliar future.
Why AI Alignment Could Be Hard With Modern Deep Learning (guest post) goes into more detail on why advanced AI systems could be "misaligned," with potentially catastrophic consequences.
Forecasting transformative AI this century
Forecasting Transformative AI: What's The Burden Of Proof? argues that we shouldn't have too high a "burden of proof" on believing that transformative AI could be developed this century, partly because our century is already special in many ways that you can see without detailed analysis of AI.
Forecasting Transformative AI: Are We "Trending Toward" Transformative AI? discusses the basic structure of forecasting transformative AI, the problems with trying to forecast it based on trends in "AI impressiveness," and the state of AI researcher opinion on transformative AI timelines.
Forecasting Transformative AI: The "Biological Anchors" Method In A Nutshell summarizes the biological anchors framework for forecasting AI. This framework is the main factor in my specific forecasts.
I am forecasting more than a 10% chance transformative AI will be developed within 15 years (by 2036); a ~50% chance it will be developed within 40 years (by 2060); and a ~2/3 chance it will be developed this century (by 2100).
AI Timelines: Where The Arguments, And The "Experts," Stand briefly summarizes the state of the arguments and addresses the question, "Where does expert opinion stand on all of this?"
- The claims I'm making neither contradict a particular expert consensus, nor are supported by one (though most of the key reports I cite have had external expert review). They are, rather, claims about topics that simply have no "field" of experts devoted to studying them.
- Some people might choose to ignore any claims that aren't actively supported by a robust expert consensus; but I don't think that is what we should be doing here.
How To Make The Best Of The Most Important Century? discusses different, contrasting views of how to help the most important century go as well as possible for humanity - and lists "robustly helpful actions" that seem worth taking regardless.
Call To Vigilance is in lieu of a "call to action" for the series. Given all the uncertainty we face, I don't think people should rush to "do something" and then move on. Instead, they should take whatever robustly good actions they can today, and otherwise put themselves in a better position to take important actions when the time comes.
Some supplemental posts that elaborate on points made in the series:
- Some additional detail on what I mean by "most important century"
- A note on historical economic growth: How the "most important century" argument is affected if our picture of long-run economic history changes.
- More on "multiple world-size economies per atom": A follow-up to "This Can't Go On" for the skeptical.
- Weak point in "most important century": full automation (acknowledges that I could have done more to address the question of how complete AI automation has to be to bring about the consequences I discuss, and adds a bit more on this point)
- Weak point in "most important century": lock-in (acknowledges that I could have done more to address how AI could lead to "lock-in" of the long-run future, and adds a bit more on this point)
- "Biological anchors" is about bounding, not pinpointing, AI timelines: More on how I've used the "biological anchors" framework, aimed at skeptical readers.
I've listed some key sources for this series in one place here, for those interested in going much deeper.
First piece in series: All Possible Views About Humanity's Future Are Wild