
Call to Vigilance

By Holden Karnofsky

An audio version of this post is available on Apple Podcasts, Spotify, Stitcher, etc.

[Series roadmap diagram: Today’s world → Transformative AI → World of digital people, world of misaligned AI, or world run by something else → Stable, galaxy-wide civilization]

This is the final piece in the "most important century" series, which has argued that there's a high probability¹ that the coming decades will see transformative AI, leading to a world of digital people, misaligned AI, or something else entirely, and ultimately to a stable, galaxy-wide civilization.

When trying to call attention to an underrated problem, it's typical to close on a "call to action": a tangible, concrete action readers can take to help.

But this is challenging, because as I argued previously, there are a lot of open questions about what actions are helpful vs. harmful. (Although we can identify some actions that seem robustly helpful today.)

This makes for a somewhat awkward situation. When confronting the "most important century" hypothesis, my attitude doesn't match the familiar ones of "excitement and motion" or "fear and avoidance." Instead, I feel an odd mix of intensity, urgency, confusion and hesitance. I'm looking at something bigger than I ever expected to confront, feeling underqualified and ignorant about what to do next. This is a hard mood to share and spread, but I'm trying.

Situation: "This could be a billion-dollar company!"
Appropriate reaction (IMO): "Woohoo, let's GO for it!"

Situation: "This could be the most important century!"
Appropriate reaction (IMO): "... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one."

So instead of a call to action, I want to make a call to vigilance. If you're convinced by the arguments in this piece, then don't rush to "do something" and then move on. Instead, take whatever robustly good actions you can today, and otherwise put yourself in a better position to take important actions when the time comes.

This could mean:

  • Finding ways to interact more with, and learn more about, key topics/fields/industries such as AI (for obvious reasons), science and technology generally (as a lot of the "most important century" hypothesis runs through an explosion in scientific and technological advancement), and relevant areas of policy and national security.
  • Taking opportunities (when you see them) to move your career in a direction that is more likely to be relevant (some thoughts of mine on this are here; also see 80,000 Hours).
  • Connecting with other people interested in these topics (I believe this has been one of the biggest drivers of people coming to do high-impact work in the past). Currently, I think the effective altruism community is the best venue for this, and you can learn about how to connect with people via the Centre for Effective Altruism (see the "Get involved" dropdown). If new ways of connecting with people come up in the future, I will likely post them on Cold Takes.
  • And of course, taking any opportunities you see for robustly helpful actions.

Buttons you can click

Here's something you can do right now that would be genuinely helpful, though maybe not as viscerally satisfying as signing a petition or making a donation.

In my day job, I have a lot of moments where I - or someone I'm working with - is looking for a particular kind of person (perhaps to fill a job opening with a grantee, or to lend expertise on some topic, or something else). Over time, I expect there to be more and more opportunities for people with specific skills, interests, expertise, etc. to take actions that help make the best of the most important century. And I think a major challenge will simply be knowing who's out there - who's interested in this cause, and wants to help, and what skills and interests they have.

If you're a person we might wish we could find in the future, you can help now by sending in information about yourself via this simple form. I vouch that your information won't be sold or otherwise used to make money, that your communication preferences (which the form asks about in detail) will be respected, and that you'll always be able to opt out of any communications.

Sharing a headspace

In This Can't Go On, I analogized the world to people on a plane blasting down the runway, without knowing why they're moving so fast or what's coming next:

[Animated image: the view out the airplane window as the plane blasts down the runway.]

As someone sitting on this plane, I'd love to be able to tell you I've figured out exactly what's going on and what future we need to be planning for. But I haven't.

Lacking answers, I've tried to at least show you what I do see:

  • Dim outlines of the most important events in humanity's past or future.
  • A case that they're approaching us more quickly than it seems - whether or not we're ready.
  • A sense that the world and the rules we're all used to can't be relied on. That we need to lift our gaze above the daily torrent of tangible, relatable news - and try to wrap our heads around weirder, wilder matters that are more likely to be seen as the headlines about this era billions of years from now.

There's a lot I don't know. But if this is the most important century, I do feel confident that we as a civilization aren't yet up to the challenges it presents.

If that's going to change, it needs to start with more people seeing the situation for what it is, taking it seriously, taking action when they can - and when not, staying vigilant.



Footnotes

  1. "I am forecasting more than a 10% chance transformative AI will be developed within 15 years (by 2036); a ~50% chance it will be developed within 40 years (by 2060); and a ~2/3 chance it will be developed this century (by 2100)."