Replies to Ancient Tweets

Occasionally someone tweets about Cold Takes in a way that makes me want to respond. But I can't respond over Twitter because I don't tweet. So here are my thoughts on some tweets people have made. The vibe I’m going for here is kind of like if someone delivered a sick burn in person, then received a fax 6 years later saying “Fair point - but have you considered that it takes one to know one?” or something.

@MatthewJBar on an email I sent to Tyler Cowen on transformative AI (3/9/22)

In response to my email to Tyler Cowen, Matthew Barnett tweeted a number of disagreements. I think this is overall a thoughtful, intelligent response that deserves a reply, though I ultimately stand by what I said on nearly all points. I think most of the disagreements come down to confusions about what I meant in that abbreviated context, which isn't terribly surprising - I word things much more carefully when I have more space, but that was a deliberately shortened and simplified presentation.

Details:

My comment was intended to highlight that the Bio Anchors report took many approaches to modeling, rather than to claim there's no conceivable or plausible way to use its framework to reach a long-timelines conclusion; however, my wording didn't make that clear, and that's my bad. I do stand by the former message.

Barnett's critique of Bio Anchors points out that the report assumes a 2.5-year doubling time for hardware efficiency, and does not incorporate variation around this figure into its uncertainty analysis. However:

  • Barnett's critique doesn't propose an alternative trajectory of hardware progress he thinks is more likely, or spell out what that would mean for the overall forecasts, besides saying that the doubling time has been closer to 3.5 years recently.
  • The Bio Anchors report includes a conservative analysis that assumes a 3.5-year doubling time and (I think more importantly) a cap on overall hardware efficiency only 4 orders of magnitude above today's, along with a number of other assumptions that are more conservative than the main Bio Anchors report's. All of this still produces a "weighted average" best guess of a 50% probability of transformative AI by 2100, with only one of the "anchors" (the "evolution anchor," which I see as a particularly conservative soft upper bound) estimating a lower probability.
  • This highlights that it isn't enough to simply assume a slower doubling time in order to expect transformative AI development to stretch past 2100; you need to stack a lot of (IMO) overly conservative assumptions at once (see the rough sketch after this list).
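
As a rough illustration of that last point, here is a minimal sketch (my own arithmetic, not from the Bio Anchors report; the 2020 baseline year and 2100 horizon are assumptions added for concreteness) of how the doubling time interacts with a 4-orders-of-magnitude cap on hardware efficiency:

```python
import math

def hardware_gain_ooms(years, doubling_time, cap_ooms=None):
    """Orders of magnitude (OOMs) of hardware-efficiency gain after `years`,
    assuming a constant doubling time, optionally truncated at `cap_ooms`."""
    ooms = (years / doubling_time) * math.log10(2)
    return min(ooms, cap_ooms) if cap_ooms is not None else ooms

horizon = 2100 - 2020  # assumed 2020 baseline (my assumption, for illustration)

for dt in (2.5, 3.5):  # main-report vs. conservative doubling times (years)
    uncapped = hardware_gain_ooms(horizon, dt)
    capped = hardware_gain_ooms(horizon, dt, cap_ooms=4)
    years_to_cap = 4 / math.log10(2) * dt  # years until the 4-OOM cap binds
    print(f"{dt}-year doubling: ~{uncapped:.1f} OOMs uncapped; "
          f"{capped:.1f} OOMs with a 4-OOM cap "
          f"(cap reached after ~{years_to_cap:.0f} years)")
```

Under these assumptions, the cap binds after roughly 33 years for a 2.5-year doubling time and roughly 47 years for a 3.5-year one, decades before 2100 either way; that's part of why I see the cap, rather than the doubling time, as the more important conservative assumption.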

I do think that in full context, the "conservative" assumptions about compute gains are in fact too conservative. This is simply an opinion, and I hope to gain more clarity over time as more effort is put into this question, but I'll give one part of the intuition: I think that conditional on hardware efficiency improvements coming in on the low side, there will be more effort put into increasing efficiency via software and/or via hybrid approaches (e.g., specialized hardware for the specific tasks at hand; optimizing researcher-time and AI development for finding more efficient ways to use compute). So reacting to Bio Anchors by saying "I think the hardware projections are too aggressive; I'm going to tweak them and leave everything else in place" doesn't seem like the right approach.

Overall, I think there are plenty of open questions and room for debate regarding Bio Anchors, but I think a holistic assessment of the situation supports a broad, qualitative claim along the lines of "It's pretty hard to see how the most reasonable overall usage of this framework would leave us with a bottom-line median expectation of transformative AI being developed after 2100."

I was referring to https://www.metaculus.com/questions/5121/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of-stronger-operationalization/ and https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/ , which seem more germane than the link Barnett gives in the tweet above. (There are many ways transformative AI might not be reflected in economic growth figures, e.g. if economic growth figures don't include digital economies; if misaligned AI derails civilization; or if growth is deliberately held back, perhaps with AI help, in order to buy more time for improving things like AI alignment.) I also note that this question has been particularly volatile; the forecast has been below 2100 a number of times, including (barely) as I write this.

The 2138 response was for a subset of respondents; I am referring to the mainline forecast (more here).

I'm not sure how my statement is misleading, if we agree that the burden of proof isn't "giant."

I'm going to stand by my statement here - these look to be simply ceteris paribus reasons that AI development might take longer than otherwise. I'm not seeing a model or forecast integrating these with other considerations and concluding that our median expectation should be after 2100. (To be clear, I might still stand by my statement if such a model or forecast is added - my statement was meant as an abbreviated argument, and in that sort of context I think it's reasonable to say "reasonably close to nonexistent" when I mean something like "There aren't arguments of this form that have gotten a lot of attention/discussion/stress-testing and seem reasonably strong to me or, I claim, a reasonable disinterested evaluator.")

I think the confusion here is whether ems count as transformative AI.

  • In the link Matthew gives above, Robin states: "Now of course, I completely have this whole other book, Age of Em, which is about a different kind of scenario that I think doesn’t get much attention, and I think it should get more attention relative to a range of options that people talk about. Again, the AI risk scenario so overwhelmingly sucks up that small fraction of the world. So a lot of this of course depends on your base. If you’re talking about the percentage of people in the world working on these future things, it’s large of course."
  • In the context of that conversation, Robin is contrasting "AI" with "ems." But in a broader context, I think it is appropriate to think of the Age of Em scenario as a transformative AI scenario: it's one in which digital minds cause an economic growth explosion.
  • (This is why I said "of a sort" in my abbreviated summary.)

@ezraklein on Rowing, Steering, Anchoring, Equity, Mutiny (11/30/21)

Ezra Klein on Rowing, Steering, Anchoring, Equity, Mutiny:

I basically think I just agree with all of this. My post didn’t present them as mutually exclusive, just as sources of confusion (see this table where I categorize different non-exclusive combinations). “Maintenance” is a good one.

@Evolving_Moloch on Pre-agriculture gender relations seem bad (11/29/21)

William Buckner on Pre-agriculture gender relations seem bad:

I feel spiritually very on board with a comment like “You have to read, can’t just accept codes!” Ideally, I would’ve read all of the details, and I hope to come back and do this someday. Why didn’t I?

  • I think I did check out a couple, but it was really logistically difficult for a number of them, as they were sourced from expensive out-of-print books and things like that. (I had a similar issue with the data on early violence cited by Better Angels of Our Nature, and I’ve been slowly finding used copies of the books, compiling a little library, and planning to eventually dig through everything and see whether the whole picture comes crashing down. I might not finish that for a long time, but stay tuned!)
  • I ultimately decided not to explore every angle I could, which could’ve taken weeks. Instead I figured: “I’ve dug further into this than any other concise presentation I’ve seen, and certainly further than the existing highly-cited sources (e.g., Wikipedia) seem to, so why not put out what I have, and if it spreads, maybe someone else will point out what I missed?” In a sense, then, this worked OK, and I basically endorse “Dig deeper than others have” as a better rule than “Do a full minimal-trust investigation every time.”
  • Another factor was that I expect my conclusion to hold up based on a number of other subtler data points, such as the fact that Wikipedia’s “support” for good gender relations actually contained quite a bit of info that seemed to suggest bad gender relations, and the fact that the best source I’d found on hunter-gatherers overall seemed pretty down on the situation. (More at the full piece.)
  • And indeed, if the above points are the biggest corrections out there, that really doesn’t seem to change the picture much. I do not think “women lead women’s spaces” is good enough! And yeah, I wouldn’t instinctively classify the example he gives as a “leader” in the sense of having political power over the society as a whole, though I’d guess there are a lot more details that matter in that case.