Career Resources on AI Strategy Research

By AI Safety Fundamentals Team (Published on November 8, 2022)

Summary and Introduction

One potential way to improve the impacts of AI is to help various actors figure out good AI strategies—that is, good high-level plans focused on AI. To support people interested in that, we compile relevant career information, advice, and opportunities on this page.

There is currently little other public writing that advises on these careers, so this advice is mostly based on one person’s interpretation of informal conversations and scattered online sources from several experts in the field. As a result, this advice is likely flawed and incomplete. (Hopefully, it is still better than the current alternative, which for many people may be even more limited.)

Here is the short version: If successful, AI strategy research could unlock ways for much talent and funding to be usefully deployed toward improving the long-term impacts of AI. This research can be done in many ways, ranging from highly empirical to highly conceptual approaches. There have been some major recent successes in the field (but not many), suggesting this work is a little tractable. To build relevant expertise, it helps to learn about many relevant fields, talk with professionals in the field, and work with relevant companies or governments. To test and demonstrate your interest and ability, a good default action is to try out relevant research and share what you come up with (even if that’s, say, a blog post summary of some existing research). For more resources on relevant advice, research questions, and opportunities, see the links in this document.

Career information - what is AI strategy research?

Zooming out, failure to anticipate future events and chart a course toward better ones may leave us in trouble—facing avoidable problems and unprepared in crises. In the context of AI, this isn’t just a theoretical concern; long-term-oriented grantmakers and policy professionals often point to a lack of strategic clarity—specifically, lack of clarity over which “intermediate goals” are (highly) valuable to pursue—as one of the few key bottlenecks (or the bottleneck) on their ability to improve the long-term impacts of AI. In other words, greater strategic clarity could unlock much talent and funding, allowing them to be effectively deployed toward helping make AI go well. Some people work on this through the closely related tasks of strategy and forecasting research. That means trying to figure out what will happen with AI and what (at a high level) companies, nonprofits, governments, or other actors should do about it.

What is AI strategy research?

See this post (Clarke, 2022) for an overview of what AI strategy research is and how it relates to other activities. In short, AI strategy researchers sit at the start of a process that begins with confusion and (ideally) ends with good decisions. To simplify, AI strategy and tactics researchers figure out high-level goals, policy development researchers translate those goals into detailed policy proposals, advocates convince decision makers to turn those proposals into actual policy, and then people implement the policy. (In practice, as mentioned in Clarke’s post, the process is rarely so straightforward, e.g., because information flows “backward” along that chain.)

How do people do AI strategy research?

Researchers bring a range of approaches, or types of research questions, to AI strategy. These vary a lot in style/“feel” and methodology, from nitty-gritty empirical work to philosophical, conceptual work. Here are some major approaches, very roughly ordered from more empirical to more conceptual [1]:

  • Monitoring: Asking, “What is currently going on (in areas relevant to AI)?” Examples of this research include analyses of what is happening in the semiconductor supply chain, China’s AI strategy, and cryptography. (Such work may help inform forecasts about which AI-related events are feasible and which are likely.)
  • Examining history: Asking, “What can we learn from historical analogues of AI (as an area of governance or as a technical research field)?” [2] Examples of this research include historical case studies of strategic general-purpose technologies, nuclear weapons governance, and early field growth.
  • Examining feasibility: Asking, “Is a certain strategic proposal feasible (technically, politically, legally, economically, etc.)?”[3] Examples of this include work on whether AI industry self-regulation could be compliant with antitrust law and work on the technical feasibility of verifying compliance with AI treaties.
  • Technical forecasting: Asking, “What will future technical advances in AI capabilities, applications, or related technologies be like?” Examples of this include work on forecasting AI timelines and on AI “takeoff speeds.”
  • Examining late-stage scenarios: Asking, “If certain scenarios come about shortly before major advances in AI, how well will things go from there, or what should relevant actors do then?” Examples include these two pieces. (Such work may help decision makers set late-stage goals or plans.)
  • Developing strategy: Asking, “What high-level plan should a certain actor (or set of actors) have?” All the other items on this list feed into answering this question, and then strategy development involves synthesizing other insights to actually answer the big-picture question of what strategies actors should have. Examples of this work include these two pieces (overlapping somewhat with the previous category).
  • Assessing risks: Asking, “How likely are various AI-related catastrophes?” Examples of this work include multiple analyses of risks from misaligned AI. (Such work may help clarify what problems should be prioritized and what is needed to solve them.)
  • Macrostrategy: Asking, “What are some important, big-picture trends and considerations we might be missing?” Examples of this probably include much of Nick Bostrom’s work, such as his work on the vulnerable world hypothesis, “Malthusian traps,” and digital minds.
  • (There are probably other approaches that we’re missing.)

Is long-term AI strategy research tractable enough?

Is figuring this stuff out just too hard?

On one hand, there have been at least several significant recent successes in AI strategy research (see footnote for examples [4]), even though not many people have been working on this: one expert estimates that “fewer than 20 people employed by all multi-person organizations combined” work full-time on big-picture AI strategy questions. This suggests AI strategy research is a little tractable, in part because much low-hanging fruit may still be unpicked. And that tractability, given the usefulness of some strategic clarity, may be enough to make this research worthwhile.

On the other hand, several successes over the past handful of years isn’t much. One expert cautions that AI strategy research “is very hard to do in a way that is likely correct, convincing to others, and thorough enough to be action-guiding,” and that people may often be better able to help improve the trajectory of AI through other sorts of work.

Career advice

What to learn about and how - Fields and coursework

Based on resources linked below and the varied academic backgrounds of AI strategy researchers, there is no “must-have” coursework experience for doing AI strategy research. Still, there are various fields it seems useful to learn about (whether through classes or other means):

  • Generalist and interdisciplinary knowledge along with specialist expertise is often valued.
    • Relevant fields include:
      • Political science, international relations, security studies, history, and policy;
      • Economics and statistics;
      • Law; and
      • Computer science (especially machine learning [5], and perhaps also: hardware engineering, distributed computing, security, and cryptography).
  • Background familiarity with AI alignment and AI governance is often useful (though currently not commonly offered in universities).

Additionally, writing skills and analytical thinking skills are often highly valued in AI strategy research[6]. Writing-heavy classes and extracurriculars seem helpful for the former; perhaps philosophy and (proof-based) math are helpful for the latter.

Discussion

Much thinking in AI strategy has not been published yet or can be hard to make sense of independently. Because of this, to get lots more context, it can be very useful to—as Muehlhauser recommends—“discuss the issues in depth with ‘veterans’ of the topic.”

Educational jobs

Some relevant professionals highlight certain jobs as especially valuable for developing AI strategy expertise.

  • Muehlhauser suggests that, in most cases, people who want to try AI strategy work should “gain experience working in relevant parts of key governments and/or a top AI lab (ideally both) so that you acquire a detailed picture of the opportunities and constraints those actors operate with.”
    • [If D.C.-based roles interest you, a companion document compiles career resources on U.S. AI policy.]
    • [If roles at AI labs interest you, a common path seems to be first building demonstrable experience through early-career work in relevant research nonprofits or in policy.]
  • Muehlhauser and Karnofsky suggest that, if you are able to, going directly to a research role with a supervisor/organization working on the type of question you are interested in can also be a promising starting point.
    • Muehlhauser: “you can try to help answer one or more narrowly-scoped questions that an AI x-risk motivated person who is closer to having those advantages has identified as especially action-informing (even if you don't have the full context as to why).”
  • Writing about empirical and conceptual longtermist research, Karnofsky recommends, as an additional option: “I think other jobs are promising as well for developing key tools, habits, and methods: [...] Jobs that heavily feature making difficult intellectual judgment calls and bets, preferably on topics that are ‘macro’ and/or as related as possible to the questions you’re interested in. There are some jobs like this in ‘buy-side’ finance (trying to predict markets) and in politics (e.g. BlueLabs).”

What degrees to get

There is no “must-have” degree for doing AI strategy research; the undergraduate and graduate degrees of researchers in the field are quite varied (though typically in the above-mentioned fields). Still, some related considerations may be helpful:

  • Just about everyone in the field (other than some interns) has at least a bachelor’s degree.
  • One might have guessed that PhDs are necessary for doing AI strategy research. They are not; PhDs are not required and do not appear to be extremely helpful for working in the field, partly because most of this research happens at organizations outside academia. Anecdotally, under half of AI strategy researchers seem to have PhDs.
    • Still, relevant employers tend to highly value research track records, and PhD programs are one way to build such a track record (so are research internships).
  • Insofar as you aim to do AI strategy research as part of, or in addition to, U.S. policy work, the standard advice for U.S. policy work and related graduate school choices applies (e.g., it probably makes sense to pursue some sort of policy-relevant graduate degree).
  • Insofar as you aim to do AI strategy research as part of, or in addition to, working at AI companies, graduate degrees tend not to matter much (though perhaps somewhat more at DeepMind).

How to test and demonstrate your fit

While some of the previous sections focused on how you could build up your AI strategy research ability, this section is focused on ways to test and demonstrate it. (In practice, these will also probably be helpful for building the ability.) The main ways people test and demonstrate their fit for AI strategy research are:

  • Internships and fellowships focused on AI strategy/governance
  • (Semi[7]-)independent research on AI strategy or other research questions
    • Doing and sharing research lets you demonstrate your interest, writing skills, analytical skills, knowledge (including interactional expertise), and ability to synthesize information.
    • Many people find supportive environments and guidance really helpful, so for most people, it’s probably better to do one of the above internships or fellowships than to spend the same amount of time doing independent work. Still, independent work can be a useful alternative, and it can be a great way to demonstrate that you would be a good fit for an internship or fellowship[8].
    • The “‘Conceptual and empirical research on core longtermist topics’ aptitudes” section of this post (Karnofsky, 2021) gives somewhat detailed advice on how you can try this out and assess how you’ve done.
    • As mentioned in the above post, this work can be done “on free time, and/or on scholarships designed for this (such as EA Long-Term Future Fund grants, Research Scholars Program [not currently taking applicants], and Open Philanthropy support for individuals working on relevant topics).”
    • Relatedly, another AI strategy researcher suggests that the easiest way to get started here is with tasks like summarizing existing work or doing a literature review; from there, look out for natural questions that arise.

How to do AI strategy research well

This doesn’t seem to have been really figured out yet, but here are relevant resources (all great reads, in the author’s opinion):

Where people get hired to do AI strategy research [10]

For context, a quick note on terminology: AI strategy opportunities are often labeled “AI governance,” including when they are not focused on governments.

AI companies: At least two major AI companies have teams of researchers explicitly focused on future-oriented AI strategy and governance:

DC think tanks:

Philanthropic foundations:

  • Open Philanthropy (specifically their worldview investigations team) has published various reports on AI technical forecasting. Their research informs their own grantmaking decisions in AI safety and governance.

Other nonprofit (including academic) research organizations:

(See this document for more information about the above organizations as well as additional relevant organizations.)

Footnotes

  1. Admittedly, the relative order of the fourth through sixth approaches is pretty contestable.

  2. It’s probably conceptually cleaner to think of historical case studies as a sub-approach to multiple (all?) other approaches in this list, but the approach seems prominent enough to emphasize.

  3. This work can draw heavily from other approaches on this list, e.g., monitoring and technical forecasting. It can look more like tactics-level research (e.g., examining multiple potential ways to implement a strategy to see if any are feasible), which could be thought of as distinct from strategy research. Still, I include it as a type of strategy research because of its importance to strategy research and because it often seems to be done by people who are otherwise AI strategy researchers.

  4. My guess is that significant post-2015 successes in AI strategy research include:

    • CSET’s work on semiconductor supply chains, which arguably established AI hardware “chokepoints” as a promising (potentially even critical) lever for mitigating AI safety and misuse risks.
    • Open Philanthropy’s research on AI timelines (summarized here), which arguably clarified that transformative AI will very plausibly come this century (which arguably means that trying to impact the trajectory of transformative AI isn’t totally crazy).
    • The development of increasingly concrete and thorough high-level governance proposals for mitigating AI risks, such as this analysis.

    • (A smaller success) Research on AI “takeoff speeds” (summarized here), which arguably made many people in the field take seriously a plausible class of scenarios they had previously been overly quick to dismiss: the possibility that major AI advances will happen gradually.

  5. It is typically helpful but not necessary to be familiar with machine learning for these roles. Tips on how to get this basic familiarity:

    • This YouTube video series by the channel 3Blue1Brown offers a nicely animated introduction to machine learning.
    • For moving from that introduction to basic familiarity with machine learning, a promising approach is to take courses in several prerequisites (introductory statistics, linear algebra, and Python programming, as well as multivariable calculus) and then an introductory course in machine learning or deep learning (not so much “AI,” since classes labeled “AI” are often relatively outdated). For a rough flavor of what such an introductory course covers, see the short sketch after this list.
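    For a purely illustrative sense of the kind of workflow an introductory machine learning course covers, here is a minimal sketch in Python of training and evaluating a simple classifier with the widely used scikit-learn library. The dataset, model, and parameter choices here are illustrative assumptions made for this page, not recommendations from any of the experts cited above.

      # Minimal supervised-learning sketch (illustrative only): train a simple
      # classifier on scikit-learn's bundled "digits" toy dataset and report accuracy.
      from sklearn.datasets import load_digits
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      # Load a small, built-in dataset of 8x8 grayscale digit images.
      X, y = load_digits(return_X_y=True)

      # Hold out a test set to estimate how well the model generalizes.
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, random_state=0
      )

      # Fit a basic logistic-regression classifier (an intro-course staple).
      model = LogisticRegression(max_iter=2000)
      model.fit(X_train, y_train)

      # Evaluate on the held-out data.
      predictions = model.predict(X_test)
      print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")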
  6. These are frequently emphasized by relevant job postings and employers, and it makes sense that these skills would be highly valued for jobs that mostly consist of doing and writing up research.

  7. By “semi-independent research,” I mean something like “doing research mostly on your own, while also emailing a few relevant researchers to get suggestions and feedback.” I speculate that people will often be receptive to giving feedback, especially if (a) you ask relatively junior researchers who are doing relevant work, (b) you make it clear how they can help you, with a specific ask (e.g., “Is there anything you’ve found useful to read on topic X?”, or “Could you give feedback on research plan Y?”, less so, “Any advice?”), and (c) you give people ways to help you quickly (e.g., “Could you share any quick thoughts on this 1-page summary?”, not so much starting off with, “Want to review my 50-page essay?”).

  8. After all, these programs have reputations for having quite competitive applications, and it can be hard for an application reviewer to be confident you’re an especially good fit if you’ve done no relevant work. (Still, anecdotally, most successful and most unsuccessful applicants don’t seem to have done relevant work independently.)

  9. While this discussion focuses on technical work in AI alignment, one of its main points arguably generalizes to AI strategy and governance research: research can be more impactful by prioritizing problems that people are unlikely to take care of when they come up in the future (e.g. because they require more advance work). In this sense, “your choices about research projects are closely related to your sociological predictions about what things will be obvious [and easy to act on] to whom when.”

  10. In particular, AI strategy research that is highly relevant to especially large-scale AI risks.
