Thesis: EA/AI safety comprises a bunch of tiny orgs that are not closely coordinated. Why is this? How does it compare to the structure of a competent large org (eg Google)?
Comparing movements (AI safety, EA) to large orgs (Google, Amazon)
- How are initiatives started?
- Google: Top-down management via OKRs. Some 20%-time projects start bottom-up.
- EA/AIS: Bottom-up doocracy. Funders have a bit of say, but mostly just approve or reject applications.
- How is strategy decided?
- Google: ?
- EA/AIS: Some people write papers or books (eg Will MacAskill, Toby Ord, Scott Alexander); others pick up those ideas and try to execute on them?
- How do individuals and projects get feedback about whether they are succeeding?
- Google: Feedback from managers, also the real world. Perf review every other year.
- EA/AIS: Occasional feedback from funders, paper submissions.
- How does funding move?
- Google: (speculating) Top-down allocation of headcount to managers?
- Some things like Waymo are spun out
- EA/AIS funding is thought to be pretty centralized around OpenPhil, though there are other ways to get money
- Some subfields are more competitive, with multiple funders
- How do members communicate to each other? How does information move?
- Google: Internal emails, chat, mailing lists. You mostly prioritize what lands in your own inbox
- EA/AIS: Blogs, emails, conferences, Constellation slack, Twitter
- How does the field communicate to the world?
- Google: Kinda badly with “comms”/PR, vs eg sama on Twitter
- But also: ship a website that everyone uses, which is a kind of comms
- EA/AIS: also badly. Random talks with media? Should we own our own stack?
- How are members recruited?
- Google: You get hired. Very clear line of in or out. Screened fairly carefully.
- EA/AIS: 80k, blogs, friendships, programs like Atlas.
- How are membership boundaries enforced?
- Google: Clear “who is hired”; also a leveling system for seniority
- EA/AIS: Very unclear who is in or out, or what people’s roles are. Same for which orgs are in EA or AI safety.
- How are members supported?
- Google: Benefits, vesting. Retention matters a lot
- EA/AIS: Parties, social connections. Retention doesn’t matter as much.
- Internal tooling & software
- Google invests a lot in this, has a whole cottage industry of internal tools
- A bit Galapagos-like, in that these tools evolved in isolation from what the wider world uses
- EA/AIS have the EA Forum and LessWrong
- Facilities
- Google has pretty amazing campuses set up for productivity
- EA/AIS have a few coworking & events spaces (Mox, Constellation, Lighthaven)
Appendix
Think about:
- vs the startup ecosystem, or the broader field of tech/AIS?
- vs within a growing startup (cf High Growth Handbook?), eg Stripe or Anthropic?
- How do theory of firm considerations change with
- Which is a better bet, “AI safety” or “Anthropic”?
- Considerations on when things should be small orgs/firms vs aligned entities
- Ronald Coase on transaction costs
- Large orgs can use “money” as an alignment mechanism
Observations:
- Case study: I asked Nick Beckstead why he is doing Secure AI Project rather than more funding work along the lines of Future Fund or OpenPhil. It turns out funders don’t have very much control over outcomes, and he saw a lot of low-hanging fruit. He conceives of SAIP’s advantage as taking full ownership of outcomes, which is rare among AIS nonprofits.
Motivation for this piece:
- Figure out what ties Manifund together
- Doing 4 different things, vs focusing on a single thing well
- Climbing the gradient towards “larger, more money” vs setting up incentives for the field
- Considerations on a Manifund incubation
- Have reasonable suggestions for AI for Epistemics retreat & fieldbuilding
- Publish this & the retro thoughts from the hackathon