
What EA can learn from startups

EA is too analytical, and not iterative enough

What is the most “EA” activity? The example that comes to mind is GiveWell doing cost-benefit analyses on which interventions produce the best return on investment, in terms of lives saved per dollar spent. When this was first conceived, it made a lot of sense: there were all kinds of charities out there trying random interventions, and what was missing was an attempt to quantify how these stacked up against one another.

But this analytical origin has permeated EA culture, such that the go-to methodology for tackling a question is “think real hard, then write ten thousand words”. It feels like most EA work happens through long, philosophical, deeply researched tracts with copious footnotes posted on the EA Forum, rather than through trying things out and seeing what happens. Many of the topics discussed have little experimental grounding, and there’s little appetite for validating them experimentally.

Contrast this to building a software product. At many different levels, you have iterative feedback loops telling you whether the thing you did was good:

  • Second by second, you have the compiler telling you if your code is correctly structured
  • Hour by hour, you have unit tests telling you if anything broke (a minimal sketch below)
  • Day by day, you have code reviews from coworkers outlining areas of improvement
  • Week by week, you have metrics indicating whether anyone is actually using your product
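
For a taste of the hour-by-hour loop, here’s a minimal unit test sketch (the `payout` function is hypothetical, not from any real codebase):

```python
# A minimal sketch of the hour-by-hour feedback loop: a unit test that
# tells you, moments after a change, whether you broke something.

def payout(bet: float, odds: float) -> float:
    """Compute winnings for a resolved bet. (Hypothetical example.)"""
    return bet * odds

def test_payout():
    # If someone later changes `payout` and gets the logic wrong,
    # this assertion fails immediately, pointing at the regression.
    assert payout(10.0, 1.5) == 15.0

test_payout()
```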

These feedback loops rapidly ground a developer’s understanding of what actually happens in the world. What are the feedback loops inside the EA ecosystem?

Another common startup adage is that “ideas are worthless, execution is everything”. But in the EA world, most of what gets evaluated are ideas. Karma, prestige, attention and funding accrue to those who speak prettily.

An analogy: there are two ways to aim a gun. You can sit there with a scope, chart of bullet velocities, wind map, and calculator, trying to estimate exactly where the bullet will land before taking the shot. Or: you can fire a tracer bullet, which leaves a visible streak along its flight path, and then see how different angles change where your shots actually land.

There are times when you want to do the first! If you’re firing a sniper rifle, and only have one chance at getting things right, then you do want to step back and run your calculations carefully; to measure twice before firing once. But it’d be a very costly mistake to use that same process to, e.g., shoot down enemy aircraft; there, you should be firing hundreds of bullets a minute, using tracer rounds to identify where your shots are actually going.

EA is too deliberate, and not impatient enough

Ben Kuhn describes the virtue of impatience:

Being impatient is the best way to get faster at things. And across a surprising number of domains, being really fast correlates strongly with being effective. There’s an obvious way in which moving faster is important: if you’re 10% more productive, you will finish your work in 10% less time, so you can do 10% more work total. But I don’t think that’s the main reason that speed is important… The faster you process information, the faster you can incorporate the result into what you do next. In other words, the main benefit of being fast is that you end up doing different things. Nelson Elhage’s point—“having faster tools changes how users use a tool”—applies across nearly every domain:
  • If you respond to your emails quickly instead of slowly, you’ll get access to more new opportunities, and end up prioritizing them over whatever you would have done instead.
  • If you make it 10x faster to test your code, you don’t just save time waiting on tests—you can start doing test-driven development, discover your mistakes earlier, and save yourself from going down bad paths.
  • If you deploy your new app now instead of next week, you’ll learn how users like the new features one week earlier, and you’ll be able to feed that knowledge back into future product decisions.

That means that moving quickly is an advantage that compounds. Being twice as fast doesn’t just double your output; it doubles the growth rate of your output. Over time, that makes an enormous difference.
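
A toy model (my sketch, not Ben’s) of why speed compounds: suppose each week of work also makes you a bit more productive, so raw speed feeds the growth rate rather than just the level. The 0.1 learning rate below is an arbitrary assumption for illustration:

```python
# Toy model of compounding speed. Each week of work feeds back into
# productivity, so a 2x-faster worker ends up far more than 2x ahead.

def output_after(weeks: int, speed: float) -> float:
    productivity = 1.0
    total = 0.0
    for _ in range(weeks):
        total += speed * productivity  # this week's output
        productivity += 0.1 * speed    # learning: doing work makes you faster
    return total

print(output_after(52, speed=1.0))  # baseline worker: ~185 units
print(output_after(52, speed=2.0))  # 2x worker: ~634 units, a 3.4x lead
```

Run it out for five years instead of one and the gap keeps widening; that’s the compounding Ben is pointing at.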

With few exceptions, I have a hard time associating “impatient” with EA. The most impatient, fastest-moving altruists I have heard of, like Tara MacAulay and Sam Bankman-Fried, took leadership roles in CEA… then left to make a bunch of money instead. I speculate that there’s an evaporative cooling effect, where capital-letter “Effective Altruism” orgs lose out on fast movers, because fast movers are allergic to staying in slow orgs.

Example: There’s a huge difference in payment speed between fundraising in Silicon Valley and receiving grants from FTX/SFF/LTFF. When Manifold raised our seed round, angel investors would usually sign after a single video call; the median time from first contact with a funder to the check hitting our bank account was about two weeks. In contrast, with charitable funding from EA orgs, I can’t recall a single application process that got back to us in under a month; wiring the funds would usually take another 2+ weeks.

  • Soma Capital
  • Leonis Capital
  • Long-Term Future Fund
  • FTX Future Fund
  • Survival and Flourishing Fund

Some of this is due to the difficulty of funding through 501(c)(3) charity systems vs for-profit investment vehicles. But as another data point: incorporating Manifold as a C-Corp was fairly streamlined, taking a couple of weeks to figure out through Stripe Atlas. There’s no Stripe Atlas for 501(c)(3) incorporation — getting an entity up and running to accept grant funding for Manifold looks like it will take 4-8 weeks.

It feels like EA prizes deliberation over moving fast and breaking things; consensus-seeking over unilateral action; asking permission over begging forgiveness; avoiding bad PR over acquiring good PR.

EA is too conditional, instead of loving universally

In general, it feels like EA asks you to speak in certain shibboleths (e.g. “be aligned”) before it is willing to direct funding and support your way. It will enthusiastically fund people who pass that threshold, and basically ignore everyone else.

One example: the Carrick Flynn campaign drew out dozens of interested students and donors, who were especially excited about the prospect of “one of us” making it to Congress. It was cool to see that many resources coalescing around a single cause; but it was also a notable contrast to the relative apathy of EA folks towards politics at large.

The contrast might be to a Catholic hospital (disclaimer: I’m Catholic); the emphasis is NOT on providing service only to other Catholics. Because what would that prove? Every ingroup is happy to help its own members. This willingness to serve outsiders was crucial in the spread of early Christianity — if a missionary goes to your village and preaches about the virtues of their faith, that’s not especially convincing. But if that same missionary comes in and spends long hours nursing you back to health that one time you caught typhoid, then when you recover you’re much more receptive to the teaching behind the actions. Universal, unconditional love is far more inspirational than watertight logical arguments that seem to further your own aims.

Just looking at the name, “Effective Altruism” has two components: being effective, and being altruistic. Conditionally providing support based on ingroup cohesion can be very effective, but not especially altruistic. Sure, it’s an altruism that cashes out to helping people in the distant future — but how is this distinguishable from any other political or religious movement which claims to be helping others?

The host of my Airbnb was an elderly Black man. He grew up poor. His mother would clean white folks' houses all day to earn money and then clean his home to save money. His parents worked so hard to survive that they had little time to raise their children. Two of his sisters became pregnant at age thirteen. When I arrived at my host's home, he gave me a white towel with lots of visible stains. But it was clean. I immediately used it to wipe my face. My Airbnb host is really into Black Power, but he never pushed his political beliefs on me. He wanted to know what it was like to be rich. I wanted to know what it was like to be poor. Were I to go to a fancy hotel, the system would make sure I never had to interact with a man like him.

When was the last time any EA group made “service to others” an explicit activity? Sure, there are a lot of service orgs out there, and it’s not in EA’s comparative advantage to be feeding the homeless in soup kitchens.

What is the marginal impact of doubling down on in-funnel EA members, vs people on the border?

EA Global asks “How EA are you” before deciding to admit you.

  • “Community organizing” or “leading an EA club” is weighted much more heavily than “can actually do stuff IRL”
  • (Disclaimer, I was rejected from EAG London and am kinda salty about it)

EA is too theoretical, instead of focused on producing legible results

My smart friend, ex-Google Brain and now VP of robotics at Halodi, asked: “For all the money and brainpower spent, what has AI Safety accomplished in the last 5 years?” And… I had a really hard time answering this question!

Jesus says in Matthew 7:15-20:

“Beware of false prophets, who come to you in sheep’s clothing, but inwardly they are ravenous wolves. You will know them by their fruits. Do men gather grapes from thornbushes or figs from thistles? Even so, every good tree bears good fruit, but a bad tree bears bad fruit. A good tree cannot bear bad fruit, nor can a bad tree bear good fruit. Every tree that does not bear good fruit is cut down and thrown into the fire. Therefore by their fruits you will know them.”

Essentially: the only way you can evaluate whether a person or organization is doing good is by the good they have produced.

When I say “legible” here, I don’t just mean “other EAs are impressed by this” — I’m actually struck by how low the bar is within EA for people to be impressed. Rather, I mean “the rest of the world robustly thinks this is cool”.

EA is inefficient (same as legible results?)

the Effective Altruism community has just blown about $14 million on the Carrick Flynn campaign, including $7-$11 million (numbers vary) from Sam Bankman-Fried, the effective altruist crypto billionaire. Carrick Flynn is a guy with impeccable effective altruist credentials and very little political experience who ran in the Democratic primary for a newly created House seat in Oregon. He lost to Andrea Salinas, a longtime local Democratic political operative in that area, albeit one who never ran for election. She, needless to say, did not spend $14 million on her campaign. $14 million would have been a lot of money to spend on Flynn’s campaign even if he had won, as it’s difficult for any Congressperson to make a difference in today’s government, nevermind an incredibly junior Congressman with no political experience. Considering that Carrick lost, it’s, well, a shitton of money. I don’t want to bring up the whole bednets thing, but $14 million buys a lot of bednets. If the EA community plans to have political influence/run candidates in the future, I think there needs to be a much better plan than what there was for Carrick, which mostly seemed to be “EA guy decides he wants to run, everyone gives him money”. After all, this is supposed to be a rational community, and that doesn’t seem like a particularly rational strategy.
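
For scale, a quick back-of-envelope; the unit costs below are my rough assumptions, in the ballpark of GiveWell-style estimates, not numbers from the quote above:

```python
# Back-of-envelope on the opportunity cost of $14M. Both unit costs are
# rough assumptions (roughly GiveWell's order of magnitude); swap in your own.

campaign_spend = 14_000_000
cost_per_bednet = 5          # ~$5 per insecticide-treated net, delivered
cost_per_life_saved = 5_000  # very rough, for the most effective charities

print(f"{campaign_spend / cost_per_bednet:,.0f} bednets")    # 2,800,000 bednets
print(f"{campaign_spend / cost_per_life_saved:,.0f} lives")  # 2,800 lives
```

Even with generous error bars, “$14 million buys a lot of bednets” checks out.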

Carrick again: it was definitely an impressive amount of money spent and ads shown.

EA overweights ops (manual work), and underweights eng (automation)

EA has too many individual contributors and not enough managers

My sense is that EA places a lot of emphasis on individual talent. Research and writing (which form the bulk of full-time EA output) are fairly solo-directed activities. EAs focus on recruiting the smartest, drawing their talent pool from elite universities.

Some of this is due to the quick growth of the EA movement; if your median member has only been around for a year, they won’t have had time to form the connections to get really good at management. But a lot of it is also in the kinds of programs that EAs support: they recruit early-career folks quite heavily, with ample support for university and even high-school students. On the flip side, few concessions are made to bringing on mid- and late-career folks, who have the organizational expertise to coordinate work across different roles.

In contrast, the work that gets done in Silicon Valley companies is highly collaborative, with room for individuals to specialize in a variety of different roles. On one eng team building a single product, you might have:

  • One people manager
  • One program manager
  • One product manager
  • 4-6 engineers (individual contributors)

EA funding is too centralized

  • Application-based
    • Pragmatically, all the money is in the hands of a few people: FTX, OpenPhil, SFF
  • Also true of EAG?
  • Means that the market is fairly inefficient
    • Contra: Lots of funding opportunities for other things. Non-EA orgs still exist
    • Goes back to “what does it mean to be EA?”
  • Related to the conditional/shibboleths point above

Seems to run counter to “too many ICs” — but this point is about cause areas, while that one is about operational efficacy.

EA is too hobbyist

EA feels like a recreational sports team or social club, instead of being a professional sports team or a cult

Would rather have 1 proto-GiveWell than 10 university groups

Is this fair?

EA is too zero-sum

Examples of zero-sum:

  • Funding
  • Job applications
  • Research (competes for attention)

Positive-sum:

  • Process improvements
  • Trade, specialization

EA is default closed, rather than default open

  • Slack vs Discord
  • Google Docs vs Notion
    • Closed internally, even!
  • Expanding:
    • EA Orgs require permission up front
      • EAG, Future Forum turn getting into a conference into a competition
      • Funding dynamic = let me check the language on your blog post
      • Very careful not to upset anyone you’re interacting with
      • Very careful not to let bad things show up on the public site
        • E.g. responding to criticism in private EA Forum messages
      • Too many NDAs
    • “If we’re going to give you money, here are the rules you have to abide by”
      • Seems kinda natural, but then it restricts the feeling of freedom, emphasizes consensus over getting things done

EA has shitty software, and spends little on making better software

Others:

  • EA focuses too much on process, not enough on results
  • EA focuses too much on up-front analysis, not enough on fast iteration
  • EA moves too slowly, doesn't have a good feedback process
  • EA has shitty software and spends very little on making better software
  • EA is all IC, no manager
  • For a movement that has “effective” in its title, it’s sure not clear what it’s effective at. Has altruistic people and smart people, but not effective people
  • EA only helps EA. Aims to do good within EA - very insular, conditional. Be more Catholic in community outreach
  • EA institutions do not hit the quality level of startups. Cf. number of people reached, revenue generated, user satisfaction. How sad would you be if <X EA Org> stopped existing?
  • EA grantmakers do not hit the quality level of VCs. Cf. speed of decision-making
  • EA is too sales-y, and not enough marketing

Stephen

  • Too trusting of other nonprofits and NGOs
    • Pandemic preparedness - NGOs failed miserably
    • Accountability
    • Make sure that it happens - do something


Breakdowns

  • Funding
    • Centralized
    • Slow
    • Zero-sum
  • Work
    • Writing-based
    • Independent
  • Community
    • Loose
    • Exclusive
    • Tight