đź”—

Thesis on OpenAI

  • The ScarJo thing:
    • Society would be better off if people didn’t maintain a monopoly on their (voices, words, creative works)
    • The facts seem consistent with OpenAI’s public statements; I’m not sure why it’s treated as a bigger deal
  • The NDA thing:
    • Making a mountain out of a molehill
  • The lots of people leaving OpenAI thing:
    • Yeah, that’s probably kinda bad
  • EA/AI Safety shouldn’t define itself by opposition to a thing, that’s pretty unhealthy
    • Also strategically unsound? Puts the momentum in the hands of OpenAI; broadly makes them seem like more of a threat
    • Positioning your movement as the enemy of a thing is fraught
      • Especially when the thing is as powerful as “progress”
  • OpenAI seems willing to update in response to what the press and individuals are saying. This seems good and laudable?
  • The thesis of “let’s just put out the models and let people work with them” seems to have been more right than wrong
    • In general incremental fast iteration is better than doing lots of planning up front. It doesn’t feel like we’re in a Yudkowskian FOOM world.
    • Also just baseline, things like ChatGPT and Copilot have just been good
  • Sam Altman is probably like, a reasonably good person
    • People reliably overestimate how much of a company’s choices are directly the result of the leader’s decisions, vs. random subordinates or communication breakdowns