Can orgs be moral patients?

Highly uncertain; this is more like a list of open questions than a defensible thesis.

  • What is an organization? Approximately, “a connected network of humans (or other moral patients)”. Some examples:
    • Corporations, nonprofits
    • Religions, movements like EA
    • 1:1 relationships, marriages
    • Countries, cities, neighborhoods
    • Online hiveminds, like “Twitter” or “Reddit”
    • Future: AI agents, networks of agents (and humans, etc)
  • Intuition pumps for why orgs might have moral patienthood:
    • I notice I get a bit sad when I see that a local store has closed down
    • If Bob dies in a trolley accident, the global utility lost is not just from Bob himself, but also from his widow Alice and from the loss of their relationship
      • One might respond “that’s just summing utility between Bob + Alice” — but I think it might be more useful to identify “Bob and Alice’s marriage” as a thing that lives and can die
    • As a society, we set up a bunch of laws to ensure that humans treat each other fairly, but we also have a bunch of laws to ensure that corporations treat each other fairly
    • Environmentalism: many people think of diverse biological ecosystems as valuable in and of themselves.
      • When I was a more naive, hardcore utilitarian, I considered this irrational (eg I was in favor of damming Hetch Hetchy); now I’m sympathetic
    • Human beings are also organizations of organs, which are organizations of cells, which are organizations of molecules
  • How do you measure the utility of an organization like a company?
    • The naive answer is “profit” — it certainly is what companies are supposed to maximize
      • But this seems to me to be about as meaningful as measuring a human’s utility by their wealth or physical size. Not wholly uncorrelated, but a very lossy metric.
    • The total utility of its constituents (employees), or shareholders?
    • Whether the company is fulfilling its founding mission?
  • How do you talk to an organization, to elicit its values?
    • You can speak with a member of the org, but are you getting the full picture vs one tiny piece?
      • The owner of a one-man coffeeshop might accurately express its values; but a random American citizen might only very lossily speak for “America”
      • Trying to get eg customer support from Google can be like communicating with Cthulhu
        • Large organizations are also fractal — eg Google/Alphabet is a collection of mini companies, each of which is broken down into many levels. America is a collection of states, which are collections of cities.
    • Trade with many orgs seems pretty well established
      • My pet theory: patienthood is not “can they reason” nor “can they suffer” but “can they trade”?
  • Can ideas be moral patients? Orgs are fundamentally fictitious. They’re ideas that live in everyone’s head, and exist by fiat.
    • Though, maybe the concept of “individual human beings” or “consciousness” is equally fictitious so…
  • Why do we care about moral patienthood in the first place? (Why does this matter?)
    • Helps us as a society coordinate on what rules to enforce
    • Gives us a metric to decide on whether we’re moving in the right direction
  • What should one do differently if this is true?

Pushback (September 23, 2024)

A friend pushes back against the idea that “orgs have moral patienthood”:

I think you just have to ask yourself "is there any situation where an action that harms all humans involved is still the best thing to do because it benefits an organization”

And I cannot think of one

Stuff that harms people short term but benefits them long term doesn't count

E.g. killing one person to protect the US government or whatever

Most of the sympathetic examples that come to mind do look like “harm people short term, benefit people long term”. E.g. a new restaurant that starts by putting its staff and management through hell, accumulating recipes and process knowledge and brand loyalty, and then ends up paying out good dining experiences and profit over its lifetime.

But I think this should count? My reasoning leans on the question of personal identity, as constructed out of numerous timeslices. If 5 people jump into a dirty river to rescue a drowning infant, we think that’s great, because the infant’s future timeslices will realize much positive utility. I claim this is because we expect a human person to produce large positive externalities throughout their lifetime — if the same infant were then confined to an experience machine for the rest of their life, society would value this trade much less.

So an immediate sacrifice for an org that yields future benefits for other moral patients is fundamentally similar to the tradeoffs we expect when deciding between humans in a moral scenario.

In a moral framework, one typically wants to disentangle 1) "value from intrinsic moral patienthood" from 2) "value from instrumental utility". When we say "all men are created equal" we mean that Elon Musk and a random US citizen hold equal weight under 1), but because of 2) society would prefer to save Elon Musk in a trolley situation.

It’s clear that orgs have immense value under 2) — think of the people who are willing to die for their country, religion, ideology. (Or, the people who dedicate the majority of their waking hours and focus towards their work in a corporation.)

Furthermore, I’m unsure if 1) exists at all, or, if it does, how to ground it or use it to guide our actions. Modern society is supposed to assign patienthood equally among living persons, but it sure feels like there’s wide variation in the patienthood assigned to a US citizen vs a foreigner, let alone more exotic cases like children or animals.

(Upon reflection, I think “moral patienthood” might be a confusing term because it implies a binary yes or no, and that all patients are equal; that makes it a less flexible and worse model of reality than moral weights.)
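
One way to make that contrast concrete, using ad-hoc notation of my own (the weights $w_i$ and per-patient utilities $u_i$ are just my labels, not anything standard): a binary notion of patienthood scores the world as

$$U = \sum_{i \in P} u_i,$$

where $P$ is the set of beings that count as moral patients, whereas moral weights score it as

$$U = \sum_{i} w_i \, u_i, \qquad w_i \ge 0,$$

and the binary view is just the special case where every $w_i$ is either 0 or 1.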

What is “moral patienthood” about? I think of the process of identifying “who is a moral patient”, and more broadly the process of “moral philosophy”, as a joint negotiation among the set of “moral agents”: people and things (like orgs or AIs) that have the ability to enact change in the world, and can also be reasoned with.

Many different moral agents hold different values, but morality lays down rules for positive-sum cooperation and trade. A difference in values cashes out as identifying different moral patients, or alternatively as assigning different moral weights to those patients.

One reason orgs should probably be considered moral patients is that self-patienthood is extremely common among moral agents. That is, every human thinks of themself as a moral patient with very large weights — Bob’s own future timeslices are the most likely to share Bob’s current value system. I expect orgs to share this intuition, and to desire their own continued existence.