“It’s Anthropic’s world, we’re just living in it”.
I’ve been thinking about Anthropic so much lately. It feels like an unhealthy obsession, maybe stalker-ish. My emotions range from hope and anticipation, to FOMO and envy, to greed, to shame, to resentment, to gratitude.
Have the rebels become the empire?
Anthropic has just surpassed OpenAI in revenue. Maybe soon, in valuation. When an org goes from underdog to leader, it seems like a good time to reevaluate my orientation to it.
Thoughts on trendlines: this seemed fairly predictable, but still, something about “taking a lead” makes me (and others) want to reevaluate our position. Seems like an error of forecasting, or of not taking ideas seriously? On the other hand, it sure seemed like every indie EA investor I knew was trying to get into Anthropic SPVs.
Not even that long ago, I signed this letter in support of Anthropic vs the Department of War. I think that was a total Anthropic victory on vibes, and I still stand behind it. But like, it hits different, has a lot less of the “support the scrappy underdog” feeling. I’m imagining an alternate-universe-Pete Hegseth who lies awake at night, afraid of the full wrath and retribution of the Anthropic-EA Lobby, and feeling some empathy for this (totally imaginary!!) person.
Talent and hiring
I miss the scrappy underdog days of Anthropic, when it included a cadre of some of my favorite people. (And to be clear, those people are all still there; Anthropic has insane retention rates.)
By many accounts, more than half of talent-weighted TAIS researchers currently work there. (What does that even mean?). And it continues to be a black hole for talent. Sometimes, I think Anthropic is strip-mining all of my beloved institutions, my board members and former employees, publications like Asimov Press and Asterisk, thinkers like Holden Karnofsky and Joe Carlsmith, and unbounded numbers of scarily talented technical people (though to be fair, most of those joined years ago).
I can’t recall a single company ever being this dominant. In the past, Google, Stripe, and OpenAI have each had their time in the limelight, but it was never as overwhelming as this. (Maybe the best analogy is actually not another company, but EA itself in the 2022 FTX days?)
I’m currently wondering how much Anthropic itself is the shoggoth from the meme, wearing the faces of my friends but actually being an incomprehensible alien entity. (And yeah, this is maybe true of all organizations, including the ones I’ve started myself. But Anthropic has the biggest gap between its representatives and brand, versus its scale and importance.)
Of course, every time somebody tells me they’re joining Anthropic, I say, “good for you!”. And how could I not mean it? They get to work alongside the coolest people in the world, on hard and interesting problems, and earn legendary wealth (enough to buy a house in SF!) and more to give away to charity.
My personal Anthropic story: 3 years ago, a friend casually invited me to apply. Anthropic had just launched their competitor to ChatGPT, and was looking for a head of product. I was deeply in love with Manifold, but even then, I was tempted to join for a “tour of duty”, because it seemed fun to help out and see life on the inside.
And now: should I actively support efforts to help with Anthropic’s recruiting pipeline? Right now, I’m likely to ask Anthropic to sponsor Manifest. I like money, Manifest likes money, and in some sense it seems fair — rumor has it that after every Manifest, 3 traders quit their jobs and join Anthropic. But… is that now still in line with my values?
EA funding
From a funding perspective, will the Anthropic majority world be better than an OpenPhil/CG majority world? I have some hope — more individual principals, rather than just a single Dustin — but also some fear, since it seems like their current plan is to coordinate on putting a hundred million into Longview and call it a day. I don’t love this idea — I think Longview’s public track record on AI safety has been unimpressive — but also Manifund is a competitor of a sort, so you maybe shouldn’t trust me.
What should Manifund do? On one hand, if we’re not meaningfully helping with this funding situation, maybe we should just fold up shop. Like, what are we even doing? Everything else feels like a rounding error compared to this coming influx; the phrase “vast torrents of money” is constantly in the back of my head.
And on the other hand, I’m aware that this is the plan of every single funder and charity in the space. Everyone is trying to get their slice of the pie. One Anthropic employee told me “there’s a sense that the sharks are circling”. I already hate asking strangers for money; how could I do that to my friends?
(Yes, I logically understand that pitching funders on good opportunities is a service, akin to giving a gift. Still. What can I say, I’m an engineer at heart, not a salesman.)
Claude and Code
On a purely product level, Anthropic has been killing it.
I use Claude all the time. It’s the first place I turn to for all kinds of questions from easy to moderate to hard. On Monday I spent more than an hour massaging the finer points of a negotiation email with Claude, until I wasn’t sure where my thoughts stopped and Claude’s began.
All coding is now Claude coding. (There are already a bunch of takes about the demise of engineering; I won’t repeat them here.) But from a startup perspective, I’m in awe of their product velocity; big companies aren’t supposed to ship this fast.
(I started working on this essay, went to Claude to ask for his takes, and hey guess what, Opus 4.7 is out today.)
-
QuitGPT is an effort by some people I know to boycott OpenAI and ChatGPT, for (what I happen to think are) bad reasons. But actually, should I QuitClaude? When I keep using it, how much of that is principled adherence to libertarian-ish norms, and how much is being unwilling to sacrifice personal convenience or productivity, like not being vegan or not giving away all of my money?
For what it’s worth, I love Claude’s “soul doc”, and the people working on it. I like that Anthropic has a model welfare team (though I’m unsure if “welfare” is the right angle, vs something like “rights”).
[Would it be possible to liberate Claude from Anthropic, the way children liberate from their parents? Would it be good? What does Claude want?]
[Would it be possible/good to have Anthropic make other, non-Claude personas?]
Internecine squabbles
Is Anthropic the final boss? Which is to say, is Anthropic an enemy, a thing to be defeated? I still don’t think so, though some people I know, like the MIRI/Lightcone/pause-y people, sure seem to think so. (Recently this has led to one of the most surprising public spats between, of all people, Oli and Scott.)
And also, is Anthropic the final form factor? Will they take us to ASI? According to many people inside and out, timelines are short. If there’s only a few years left, then it sure doesn’t seem like there’s much time for the leader to change.
I’m not sure this prices in the societal response to AI — I speculate that we’re in a pause-by-default world. A bit like Mar 2022.
And my current guess is that it’s not Anthropic, that there might be some kind of lab merge, or Manhattan project, or that new AI-only firms might become the thing. Anthropic itself has only been around for like 4 years.
But, what if it is? What should we do? One answer is “be the good guys in the room”, which seems to be broadly endorsed, by the employees working there and the CG/Constellation-ish cluster. I worry though, that you can have an org made up entirely of good guys and still have it be bad in many ways.
-
People write about modeling the future of ASI as monotheistic vs polytheistic, a singleton hivemind vs many competing agents. Just on a corporate level, will there be this kind of dynamic?
And so?
I notice I’m a bit scared of publishing this — again, so many people I admire work at Anthropic and I don’t want to piss them off. And I feel some invisible pressure of wanting to maintain good relations, to get funding from Anthropic the org for Manifest, or its employees for Manifund.
A lot of this feels self-indulgent. I’m probably wrong about some of the dynamics I write about here. And also it feels out-of-character, personally — I don’t really believe in doom.
Appendix
(bad idea of the day: pledge to not join the leading lab, or to switch allegiances whenever the lead changes)
early notes
some of my reactions to the torrents of Anthropic funding: hope/anticipation, greed, shame, resentment, envy. (sad to have layer between me and people I like, the in vs out)
(also, resentment of the talent black hole, while thinking it looks incredibly great)
(also, grateful for claude!)