Vaipan

321 karma · Joined · Working (0-5 years)

Participation
5

  • Completed the In-Depth EA Virtual Program
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Comments
92

Answer by Vaipan

We will organize a viewing of Oppenheimer in our office for our community and have a debate afterward: what would the movie have looked like if it had been about AI?

Have you ever considered interacting with policy institutes and political commissions (the EU Commission, national parliaments, etc.) to spread the word about effective allocation of resources, and similar approaches that could be adopted by some governmental departments?

The second one is more daring, but I'm curious. How much do Open Philanthropy and its council of advisors rely on and apply your advice? For example, you wrote a very interesting sequence on value maximisation, and one insight was that animal welfare was a winner in both the short and long term. But that does not translate at all into OP's current funding allocation, given the recent reductions in the animal welfare budget and the tightening of grant criteria for animal projects.

Answer by Vaipan

This is true, and in our EA group, we are establishing an outreach model to attract them. So far here's why they don't get involved:

  • The mood at EA events is very young and excited, and connecting is harder for older people since they have different interests and lifestyles (not everyone can 'optimize' every step of their life when they have kids and such). Communication norms are also different.
  • Career opportunities are much harder to find and seize for experienced people: it's not as easy to go do a three-month fellowship somewhere and leave your family behind, or change countries to find the perfect job, because there are few effective opportunities in your own country. We're improving that by working on a mapping of EA-like-minded institutions, but it's nascent work, not well supported by 80k.
  • Many feel that their experience isn't valued and appreciated by EA members; it's often a contest of who has read this and that, rather than learning from experienced members.

    So yeah, as long as EA lacks a clear strategy of cooperation with other institutions (for example, the UN is often dismissed on efficiency grounds, but no good research proves this!), and as long as behavioural norms don't change, it's going to be hard. We're trying to reach a tipping point of 25% experienced people for the mood to change, but it's hard.

You are not alone, definitely not alone.

As a community builder, I have several people telling me this on a regular basis. It's nice to be able to follow good charities on Twitter, but that does not make up for the direction of the funding, and therefore for which opportunities and projects are actually selected and funded, or for the fact that most posts on the forum are now about AI given the sharp increase in AI-interested people (who do not necessarily have a past with EA, or with altruism in the sense of giving, etc.). It does not make up for the fact that most people enter EA through 80k and get the feeling that, given the priorities, they have to get into AI to be impactful. Or for the fact that your chances of being coached by 80k are much greater if you want to work on longtermist matters.

There really is a turning point in the movement: few actors are reacting against it, there is no real counter-movement, and most people in power do not speak up against it, even though they might hold a more nuanced view on funding distribution than what is actually happening.

Maybe it will be one of those cases where the audience of a community changes completely, and it thus becomes a different organization. It makes me very sad: there is no replacement for EA. No, global aid economics is not 'GH' in EA. No, animal-rights parties cannot replace the work done by some EA orgs. It's a question for all of us: will we silently abide and passively go along with the movement, whatever it becomes, or will we just have to exit EA? The latter is already happening a lot.

Well, you said it: STEM is what makes the very big difference here. A 'leftwing' STEM student will not have the same priorities at all as a social science student, so this leftwing label is very misleading, no matter how much people like to use it here to claim that EA is leftist.

A STEM student will have much more contempt for protests, and what you conveniently omit is that STEM students generally earn much more and come from much more privileged backgrounds. It's all about resources and how they are distributed, so these students have much less need to go out into the streets. That makes it easier to look down on protests and dismiss them as just noisy and useless.

So my answer still stands and explains why EA is not protest-friendly.

Answer by Vaipan

Protests are usually carried out by those in dire need of change: minorities, poor people, people whose identity is under attack, etc. AI risks are overwhelmingly highlighted by rich white male engineers: not the people who usually have a reason to go out into the streets. And, as Geoffrey says, they often despise those who do; it's easier to mock those who struggle when you don't, and to assume they are making unnecessary noise, because you don't feel at all part of their fight.

And now EAs realize that profit is taking over from safety concerns; it took a lot of time! It was painful to read the praise of Altman right up until the board shuffle at OpenAI. People have been protesting for years because greed and the unequal distribution of money make their own lives poorer and harder; but now greed creates survival risks that extend to rich engineers as well, so they have to do something.

Of course. It is much easier for privileged individuals to relate to the suffering of minds that do not exist yet than to the very real suffering of people and animals today, which forces you to confront your emotions and uneasiness towards those who have so little when you have so much.

The divide between gender and cause area is obvious (not just from this study but also from my own EA group!). Women in general care much more about GHD and animal welfare and dislike fixing technological problems with yet another technology; they want more systemic change. It's hard to deny that some privileged men who benefit from the current status quo do not want to change existing power dynamics, and prefer to think about future beings, who do not yet have a voice, in order to feel useful.

Sadly, I have not seen any research combining gender dynamics and longtermist urgency.

I agree. We have to take into account that 80k has strongly pushed for careers in AI safety and encouraged field building specifically for AI safety, and that its job board has become increasingly dominated by AI safety offers. And that trend is not likely to reverse soon.

However, that does not keep people outside of EA from obtaining jobs in the GHD field (which is not just development economics, as someone once wrote); they are just not counted. And if the movement keeps directing opportunities and funding specifically towards AI safety, sure, we'll get fewer and fewer GHD people. So it's still impressive, given all this funding concentration, that so many EAs still consider GHD the most pressing cause area.
