Status: WIP, ~30% written
Transformative AI is coming. Why do good?
See also: The final retroactive funder is God (aka future ASI)
First: why earn money?
- Today, money is a societally agreed metric for how much value you have created; and equivalently, how much influence you should be able to expend in society
- Literal "cash in bank account" fails to capture other important stores of value, like your skills and your relationships & network. Let's call the aggregate of all these things "capital"
- Probably, capital will continue to matter post ASI
- Societal capital allocation encodes a bunch of contracts and agreements and value judgements that were endorsed by past agents. Honoring this allocation is a simple way of respecting past agents
- (why should future agents respect past agents? because future agents want to be respected by future-future agents)
- (link to other arguments on this eg intelligence curse)
By default, capital will matter more than ever after AGI (LessWrong)
- Revealed preference: it sure seems like the people who are arguing about transformative AI are also busy accumulating capital (Sam Altman is taking OAI equity, Leopold is running a hedge fund, Intelligence Curse folks are starting a startup)
"2025 cash allocation" will only be a part of how, eg, post-ASI 2035 allocates its capital/value/resources to 2025 agents. Where is the rest? "Impact", or "goodness".
- Just as "cash in bank account" undercounts the value of your career, network, etc capital on an individual level, "cash in bank account" undercounts our societal understanding of what we value
- eg Trump, when he was broke: "that beggar on the street is worth more than me" (not true!)
- What is the thing that is getting undercounted? Aka, what does "current net worth" not capture?
- Skills, network, sure
- But also: "good stuff that you've done and that 'society' now owes you for"
- If you've been out there saving lives; feeding the needy; caring for the sick
- Counterpoint: has this happened historically?
- Well, some examples of societies forming narratives around who has done good work and then rewarding them much later or posthumously
- eg Oskar Schindler → Schindler's List
- eg Petrov → Petrov Day
- eg Jesus Christ → Christianity? This seems like the big example
- or really, all the major religions
You can't really give away money; it always comes back to you.
- When money is yours, your decision to give it away means you are responsible for its downstream consequences.
- Your choice to buy a product, invest in a stock, or donate to a charity always adds to your ledger. Some of these ledgers are measured on Earth now; some are measured in impact (to be tallied in the future)
- Similarly: you can't give away your intelligence or your privileges of birth; how you choose to spend these reflects on you
Morality of agents: people, animals, AIs, organizations
See also: Can orgs be moral patients?
- Is there any difference between "moral patient" and "moral actor"?
- On one base level: "every person is equal"
- But as practiced and endorsed by society: some people (eg US president) are worth more than others (eg Secret Service).
- Why? Because the US president has more skills and is harder to replace than a Secret Service agent; good decisions from a president can influence millions of individual lives (eg: PEPFAR)
- One way is to draw a distinction between moral patients (everyone is equal on some base fundamental level) and moral actors (where you might assign more capital/decisionmaking power/status, in line with a technocratic approach)
- (MAYBE AXE)
- What are necessary properties of moral agents?
- identity
- values
- (TODO)
- Soon (if not already), AIs will be moral patients
- Already, some double-digit % of people think AIs have consciousness
- Maybe they're "wrong", but what does it mean to be "wrong"? If a significant number of humans already value consciousness (assign nontrivial moral weight to conscious beings) and think this is a thing AIs have, why should the view of a few elite philosophers override this? → see moral parliament
What is morality anyways? I think: game theory
Morality is a set of rules, norms, guidelines that help agents understand what interaction patterns lead to better joint outcomes
- I think most people agree that morality supersedes law; as in, there's some weird unspecified thing that you can do in service of a higher good, even if unlawful. See: Martin Luther King Jr. getting arrested for protesting.
- (though mostly, law supports morality: law is an encoded set of rules that we as a society agreed to use to adjudicate disputes)
- We live in an iterated society
- Morality encourages cooperation in prisoner's dilemma situations
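The iterated-game intuition can be made concrete with a toy simulation (a sketch only: the payoff matrix is the standard prisoner's dilemma values, and the two strategies are illustrative, not anything specified in this post):

```python
# Toy iterated prisoner's dilemma: one-shot, defection dominates;
# repeated, mutual cooperators accumulate the best joint outcome.

PAYOFFS = {  # (my_move, their_move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []  # each strategy sees the *other* side's history
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two cooperators earn 3/round (300 each); two defectors only 1/round (100 each).
coop = play(tit_for_tat, tit_for_tat)        # (300, 300)
defect = play(always_defect, always_defect)  # (100, 100)
```

Over 100 rounds the cooperating pair's joint total (600) far exceeds the defecting pair's (200), which is the sense in which norms that sustain cooperation "win" in an iterated society.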
Why morality might be universal
- Government, law, contracts, religion, morality: all trying to create frameworks where their participants prosper, thrive, win.
- Governments compete for citizens & companies
- Ideologies and memes compete for mindshare, eventually for nations
- One weird lens: these things are a search for universal moral truth.
- ASI will help accelerate the search for "moral truth" or "optimal contracts". Previously, this search was conducted in real time by humans; in the future, simulated beings can get us to a much deeper understanding of how different systems (eg voting, coordination, governance) behave.
- Counterpoint: what if ASI raises the complexity bar for agents, making "optimal coordination" much harder?
- Well, higher-complexity agents are just another abstraction for "faster search": a thing like a human searches faster than a thing like a bee
Moral parliaments, across our shared civilization
- The "moral parliament" idea is that you (an individual human) and your values might be best represented not as a single unified entity, but as a "parliament" of different beliefs
- Eg maybe you are a 100-member parliament with 40 members dedicated to "selfish egoism", 20 members to "perfect utilitarianism", 10 to "virtue ethics", etc etc.
- This concept rhymes a bit with "internal family systems"
- One way to describe "universal morality" is to just blow up this concept: every agent (every human, animal, company, government) sits in a giant universal parliament. Some properties of this parliament:
- Unlike a standard parliament, different agents hold different weights
- Some concept of "liquid democracy" or "value flow across nodes": much of what any one individual node (eg me, Austin) weighs is the values of other nodes (eg my wife Rachel, my friends, my communities).
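One minimal way to model such a weighted parliament with value flow (all agent names, weights, and delegation fractions below are illustrative assumptions, not a real proposal):

```python
# Sketch: a weighted parliament where each agent can route fractions
# of their weight to other agents, liquid-democracy style.

def effective_weights(base, delegations):
    """base: {agent: weight}; delegations: {agent: {target: fraction}}.
    Each agent keeps the undelegated share of their base weight and
    routes the rest to the named targets (fractions sum to <= 1)."""
    weights = {agent: 0.0 for agent in base}
    for agent, w in base.items():
        out = delegations.get(agent, {})
        weights[agent] += w * (1.0 - sum(out.values()))
        for target, frac in out.items():
            weights[target] = weights.get(target, 0.0) + w * frac
    return weights

def weighted_vote(weights, votes):
    """votes: {agent: option}; returns total weight behind each option."""
    tally = {}
    for agent, choice in votes.items():
        tally[choice] = tally.get(choice, 0.0) + weights.get(agent, 0.0)
    return tally

# Illustrative: "austin" routes 40% of his weight to "rachel".
base = {"austin": 1.0, "rachel": 1.0, "community": 3.0}
delegations = {"austin": {"rachel": 0.4}}
w = effective_weights(base, delegations)
# w["austin"] ~ 0.6, w["rachel"] ~ 1.4, w["community"] ~ 3.0
```

This captures both properties from the bullets above: agents hold unequal weights, and part of what a node "weighs" is literally routed through the values of the nodes it cares about.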
Trades through time, identity
aka: what will matter to you? what is "you"?
(link to Holden on identity)
Open questions
- Say you believe this. What should you do? What will be seen as good?
- Austin's best guess (aka current values) is around a combination of EA/AI safety, Catholic, and tech startup-y values. Also classic liberalism. Maybe a bit of guess cultureness.
- You might be suspicious of this though
- Is utopia zero-sum? Why fight over âimpact done nowâ?
- If weâre in a simulation, what does that imply?
- meta: which things that I'm saying are novel, weird, disbelievable? Want to shore up those
- cf PG on essays: insightful and … (?)
See also
- meta: who are the prophets of AI morality?
- Joe Carlsmith: Can Goodness compete?
- Dwarkesh: Give AIs a stake in the future
- Holden Karnofsky
Holden Karnofsky Ideal governance (for companies, countries and more)
- On identity:
Holden Karnofsky What counts as death?
- Scott Alexander on the parable of the talents