Rebecca

AI governance @ MATS
1883 karma · Working (0-5 years) · San Francisco, CA, USA

Participation (4)

  • Attended an EA Global conference
  • Attended an EAGx conference
  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group

Posts (1)


Comments (334)

I don’t think you’re being confrontational; I just think you’re over-complicating someone saying they support things that might bring AGI forward to 2035 instead of 2045 because otherwise it will be too late for their older relatives. And it’s not that motivating to debate things that feel like over-complications.

The reason why "everyone [he] know[s]" will be dead is because everyone will be dead, in that scenario.

We are already increasing maximum human lifespan, so I wouldn't be surprised if many people who are babies now are still alive in 100 years. And even if they aren't, there's still the element of their wellbeing while they are alive being affected by concerns about the world they will be leaving their own children to.

Prioritising young people often makes sense from an impartial welfare standpoint, because young people have more years left, so there is more welfare to be affected. With voting in particular, it’s the younger people who have to deal with the longer term consequences of any electoral outcome. You see this in climate change related critiques of the Baby Boomer generation.


See eg

“Effective altruism can be defined by four key values: …

2. Impartial altruism: all people count equally — effective altruism aims to give everyone’s interests equal weight, no matter where or when they live. When combined with prioritisation, this often results in focusing on neglected groups…”

https://80000hours.org/2020/08/misconceptions-effective-altruism/

Have you read the whole Twitter thread, including Jaime’s responses to comments? He repeatedly emphasises that it’s about his literal friends, family, and self, and about hypothetical moderate-but-difficult trade-offs with the welfare of others.

I’d agree that a lot of people who care about AI safety do so because they want to leave the world a better place for their children (which encompasses their children’s wellbeing related to being parents themselves and having to worry about their own children’s future). But there’s no trade-off between personal and impartial preferences there. That seems to me quite different from saying you’re prioritising eg your parents and grandparents getting extended lifespans over other people’s children’s wellbeing.

The discussion also isn’t about the effects of Epoch’s specific work, so I’m a bit confused by your argument relying on that.

From Jaime:

“But I want to be clear that even if you convinced me somehow that the risk that AI is ultimately bad for the world goes from 15% to 1% if we wait 100 years I would not personally take that deal. If it reduced the chances by a factor of 100 I would consider it seriously. But 100 years has a huge personal cost to me, as all else equal it would likely imply everyone I know [italics mine] being dead. To be clear I don't think this is the choice we are facing or we are likely to face.”

I don't think people are expecting you to pretend not to hold the values that you do; rather, they're disappointed that you hold those values, as welfare impartiality is a core value for a lot of EAs.

I think Sharmake might be thinking you are one of the people who left Epoch to start Mechanize? (He says "admits that the reason he is working on this" in response to the main post, which is about Mechanize.)

No one is critiquing Daniela’s personal life though; they’re critiquing something about her public life (ie her voluntary public statements to journalists) for contradicting what she’s said in her personal life. Compare this with a common reason people get cancelled, where the critique is that there’s something bad in their personal life, and people are disappointed that the personal life doesn’t reflect the public persona; in this case it’s the other way around.

Hi Mikhael, could you clarify what this means? “It is known that they said words they didn't hold while hiring people”
