I thought it would be good to write a couple of posts covering what I see as the weakest points in the “most important century” series, now that I’ve gotten some reactions and criticisms.
I currently think the weakest point in the series runs something like this:
- It’s true that if AI could literally automate everything needed to cause scientific and technological advancement, the consequences outlined in the series (a dramatic acceleration in scientific and technological advancement, leading to a radically unfamiliar future) would follow.
- But what if AI could only automate 99% of what’s needed for scientific and technological advancement? What if AI systems could propose experiments but not run them? What if they could propose experiments and run them, but not get regulatory clearance for them? In this case, it’s plausible that the 1% of things AIs couldn’t do quickly and automatically would “bottleneck” progress, leading to dramatically less growth.
- The series cites expert opinion on when transformative AI will be developed. Technically speaking, the situation the respondents are forecasting - “unaided machines can accomplish every task better and more cheaply than human workers” - should, taken literally, be enough for a productivity explosion. But the people surveyed might be picturing a slightly less powerful type of AI than that statement literally implies - which could mean dramatically smaller impacts. Or they could be imagining that even AIs whose intellectual capability matches humans’ might still lack the in-practice ability to do key tasks because (for example) humans don’t instinctively trust them. Either way, the survey respondents could be imagining something almost as capable - but not nearly as impactful - as the type of AI I discuss.
- Furthermore, even if AIs could do everything that humans do to automate scientific and technological advancement, their scientific and technological progress might have to wait on the results of real-world experiments, which could slow them down a lot.
In brief: a small gap in what AI can automate could lead to a lot less impact than the series implies. Automating “almost everything” could be very different from automating everything.
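The arithmetic behind this worry resembles Amdahl’s law: if even a small fraction of necessary tasks stays at human speed, the overall acceleration is capped, no matter how fast the automated portion runs. A toy calculation (my own illustration, not from the series; the function name is made up):

```python
def bottlenecked_speedup(automated_fraction, automation_speedup):
    """Overall speedup when only part of the work is accelerated.

    If `automated_fraction` of tasks runs `automation_speedup` times
    faster and the rest stays at human speed, total time per unit of
    progress is (1 - f) + f / s, so the overall speedup is its inverse.
    """
    return 1 / ((1 - automated_fraction) + automated_fraction / automation_speedup)

# Automating 99% of tasks with a 1000x speedup yields well under 100x overall:
print(round(bottlenecked_speedup(0.99, 1000), 1))  # → 91.0
print(round(bottlenecked_speedup(1.00, 1000), 1))  # → 1000.0
```

On this simple model, the jump from automating 99% of tasks to automating 100% is worth more than a tenfold difference in overall impact - which is the sense in which “almost everything” and “everything” can come far apart.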
This is important context for the attempts to forecast transformative AI: they are really forecasting something pretty extreme.
I think all of the above is about right as stated: we would indeed need extreme levels of automation to produce the consequences I envision. (There could be a few tasks that need to be done by humans, but they’d have to be quite a small and limited set in order to avoid slowing things down a lot via bottleneck.)
It’s also true that I haven’t spelled out how such extreme automation could be achieved - how each activity needed for scientific and technological advancement (including running experiments and waiting for them to finish) could be done in a quick and/or automated way, without human or other bottlenecks slowing things down much.
With that acknowledged, it’s also worth noting that the extreme levels of automation need not apply to the whole economy: extreme automation for a relatively small set of activities could be sufficient to reach the conclusions in the series.
For example, it might be sufficient for AI systems to develop increasingly efficient (a) computers; (b) solar panels (for energy); (c) mining and manufacturing robots; (d) space probes (to build more computers in space, where energy and metal are abundant). That could be sufficient (via feedback loop) for explosive growth in available energy, materials and computing power, and there are many ways that such growth could be transformative.
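To make the feedback loop concrete, here is a toy reinvestment model (my own illustration; the function name and parameter values are arbitrary assumptions, not anything from the series). Each period’s output is spent partly on building more capacity (computers, energy, robots) and partly on improving efficiency; because each gain feeds the next round, the growth rate itself keeps rising rather than staying at a fixed exponential:

```python
def simulate_feedback_loop(steps, reinvest=0.5, learn=0.05):
    """Toy model: output buys more capacity AND better efficiency.

    Because this period's output raises both factors of next period's
    output, growth compounds on itself rather than holding a fixed
    exponential rate. All parameter values are arbitrary illustrations.
    """
    capacity, efficiency = 1.0, 1.0
    outputs = []
    for _ in range(steps):
        output = capacity * efficiency
        capacity += reinvest * output                # build more hardware
        efficiency *= 1 + learn * output / capacity  # R&D gains from output
        outputs.append(output)
    return outputs

out = simulate_feedback_loop(20)
growth_rates = [b / a for a, b in zip(out, out[1:])]
# In this toy model, the per-step growth rate increases every step:
# growth that accelerates, rather than merely compounds.
```

The point of the sketch is only qualitative: when output can be reinvested into the inputs of output, growth accelerates, which is the mechanism behind the “explosive growth” claim.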
For example and in particular, it could lead to:
- Misaligned AI with access to dangerous amounts of materials and energy.
- Digital people, if AI systems also had some way of (a) “virtualizing” neuroscience (via virtual experiments or simply dramatically increasing the rate of learning from real-world experiments); or (b) otherwise having insight about how to create something we would properly regard as “digital descendants.”
I don’t think I’ve thoroughly (or, for readers with strong initial skepticism on this point, convincingly) demonstrated that advanced AI could cause explosive acceleration in scientific and technological advancement, without hitting human-dependent or other “bottlenecks.” I think I have given a good sense of the intuition for why it could, but this is certainly a topic that I haven’t poked as hard as I could; I hope and expect that someone will eventually.
I do think such poking will ultimately support the picture I’ve given in the “most important century” series. This is partly based on the reasoning above: the relatively limited scope of what would need to be fully automated in order to support my broad conclusions. It’s also partly based on a similar reasoning process to the one I’ve used in the past to guess at some key conclusions before we’d done all the homework: engaging in a lot of conversations and forming views on how informed different parties are and how much sense they’re making. But I acknowledge that this is not as satisfying or reliable as it would be if I gave a highly detailed description of what precise activities can be automated.