I know this is a debate, but one thing I want to touch on is that animal welfare and human welfare are not necessarily in conflict. I think initiatives like preventing the rise of factory farming in the developing world could be really great for both animals and humans. Animals wouldn't have to exist in horrible conditions, and humans could (as far as I know; don't have sources with me right now) have greater food, water, and resource security, reduced ecological/climate devastation, and reduced risk of disease, to name a few things. I think it's important to think about ways in which we can jointly improve animal welfare and global health, because we all ultimately want to create a better world.
A few reasons immediately come to mind for me:
For me, the biggest crux is whether you believe animal suffering is comparable to human suffering. "Animal" is a broad category, but at least for some animals, there is every reason to think that their suffering is comparable and little reason to think it is not. The only reason I put one notch below the maximum is to signal that I am willing to concede some slight uncertainty about this, but nowhere near enough to persuade me that animal welfare/rights is not a pressing cause.
Hi Leopold,
Thank you for the thoughtful comment! I appreciate that my experience has informed your decision-making, but in the end it’s just my experience, so take it with a grain of salt. I also appreciate your caution; I would say that I’m also a pretty cautious person (especially for an EA; I personally think we sometimes need a little more of that).
I will say that big and risky projects aren't necessarily a bad thing; they're just big and risky. So if you've carefully considered the risks, acknowledged that you're committing to a big project that might not pay off, and have some contingency plans, then I think it's fine to proceed. I just think that sometimes we get caught up in the vision and end up goodharting toward bigger and more visionary projects rather than ones that are actually more effective (my failure mode in Spring 2023).
Best, Kenneth
This kind of reminds me of a psychological construct called the Militant Extremist Mindset. Roughly, the mindset is composed of three loosely related factors: proviolence, vile world, and utopianism. The idea is that elevated levels in all three factors together are most predictive of fanaticism. I think (total) utilitarianism/strong moral realism/lack of uncertainty/visions of hedonium-filled futures fall into the utopian category. I think EA is pretty pervaded by vile world thinking, including reminders about how bad the world is/could be and cynicism about human nature. Perhaps what holds most EAs back at this point is a lack of proviolence: a lack of willingness to use violent means or cause great harm to others. I think this can be roughly summed up as "not being highly callous/malevolent".
I think it’s important to reduce extremes of utopianism and vile world thinking in EA, which I feel are concerningly abundant here. Perhaps it is impossible/undesirable to completely eliminate them. But what might be most important is something that seems fairly obvious: try to screen out people who are capable of willfully causing massive harm (i.e., callous/malevolent individuals).
Based on some research I’ve done, the distribution of malevolence is highly right-skewed, so screening for malevolence would probably affect the fewest individuals while still being highly effective. It also seems that callousness and a willingness to harm others for instrumental gain are associated with abnormalities in more primal regions of the brain (like the amygdala) and are highly resistant to interventions. Therefore, changing the culture is very unlikely to robustly “align” such individuals. And intuitively, a willingness to cause harm seems to be the most crucial component, while the other factors seem more to channel malevolence toward a fanatical bent.
Sorry, I’m kind of just rambling and hoping something useful comes out of this.
TLDR: Recent graduate with a B.S. in Psychology and a certificate in Computer Science. Looking for opportunities involving (academic) research, writing, and/or administrative/ops work.
Skills & background: ~6 months doing academic research on malevolence (dark personality traits) with a grant from Polaris Ventures. Before this, I was a leader of the UW–Madison chapter of EA and President of its chapter of Effective Animal Advocacy. I also have a Substack where I write mostly about EA and philosophy. I have experience writing articles for both academic and lay audiences, writing newsletters, and coordinating events.
Here's my Substack: https://kennethdiao.substack.com/
I should also have a forum post out soon which will showcase more of my research aptitude.
Location/remote: Would prefer remote but willing to relocate. I'm currently based in the Twin Cities, MN.
Availability & type of work: Currently, I am quite available and can start immediately. I am interested primarily in paid part-time (or full-time) opportunities, though I'm also open to volunteering.
Resume/CV/LinkedIn: https://www.linkedin.com/in/kenneth-diao-292b02168/
Email/contact: kenneth.diao@outlook.com
Other notes: My principal cause areas are animal advocacy and suffering reduction, though I'm also interested in learning more about AI governance. My fuzzy vision for my ultimate role is that it involves doing writing and research which is close enough to the public and policy world to be grounded and have a concrete impact. I'm hoping the next couple of roles are able to help me test my fit and develop aptitudes/capital for reaching that eventual stage.
Questions:
Thanks everyone!
Hi Rob,
Thank you for writing this post. I am also highly disappointed that no institutional post-mortem has been conducted, so I'm glad that you're speaking out about it. Now that the verdict has been officially handed down to SBF, there is no longer any excuse for not conducting an investigation.
Maybe somehow there are good excuses (and yes, they are excuses) for why a formal investigation has not taken place. But no matter how florid or sophisticated they are, they won't change my mind that a public investigation should take place. Pretty much no matter what, the reputation of the core EA leadership is going to take a hit if no public and formal investigation is carried out, at least in my eyes.
Regarding comments about psychopathy/sociopathy: I recently did a bunch of research on malevolence, so I feel confident in speaking on the subject. The term "sociopathy" seems to be the less well-defined term, so I would somewhat advise against using it, at least until greater clarity arises. However, psychopathy is a fairly established construct in the literature with a few widely-used instruments from the academy, so if you're choosing between using psychopathy or sociopathy, I would say use psychopathy. But even psychopathy is a pretty confused term because it captures so many different characteristics (including callousness, grandiosity, impulsivity, and criminality) which don't necessarily coincide. My opinion is that the cleanest way of talking about all this is to list out more specific and well-defined traits, such as callousness.
But, and I stress this, just because he wasn't a violent criminal doesn't prove he was a good, compassionate person. Neuroscientific evidence suggests that deficiencies in empathy/caring for others have distinct origins from violent or socially unacceptable behavioral expressions. Indeed, the main distinguishing point between psychopathy and Antisocial Personality Disorder (ASPD) is that psychopathy has a component that does not theoretically relate to violent or socially unacceptable behavior (according to an authority on psychopathy). It would be most adaptive for a person to abide by the most explicit and universal social norms (e.g., don't kill people) while still doing harm in covert, neutral, or even socially desirable ways (e.g., being the CEO of a giant meat company). This is the type of malevolent person I expect SBF is, if he indeed is malevolent.
I also intend to publish a post on this topic, but I thought I'd clarify here since I saw a discussion regarding sociopathy in the comments.
Hi Brian,
I'm honored that you read my article and thought it was valuable!
For the record, I also think that it's good to know the truth. Maybe I wish it weren't necessary for us to know about these things, but I think it is, and I very much prefer knowing about something and thus being able to act in accordance with that knowledge over not knowing about it. So don't let my adverse reaction fool you; I love your work and admire you as a person.
Regarding love and hatred, the points you brought up do make me think. I try to always keep an evolutionary perspective in mind; that is, I tend to assume something is adaptive, especially if it has survived over long stretches of time. So I think that, at least in certain environments, things like the dark tetrad traits (narcissism, Machiavellianism, psychopathy, sadism) are adaptive even on a group level; maybe they reach some kind of local maximum of adaptiveness. My hope is that there is a better way to retain the adaptive behavioral manifestations of these traits while avoiding their volatile and maladaptive aspects, and my belief is that we can approach this by having more correct motivations. I really idealize the approaches of people like Gandhi and MLK, who recognized the wrongness of the status quo while also trying to create positive change with love and peace; I believe we need more of that. That being said, I take your point that darkness and hate can lead to love/reduction in hatred, and that this may always be true, especially in our non-ideal world.
I think this is an interesting dilemma, and I am sympathetic to some extent (even as an animal rights activist). At the heart of your concern are three things:
I think in this case, 2) is of lesser concern. It does seem like adults tend to give far more weight to humans than animals (a majority of a sample would save 1 human over 100 dogs), though interestingly, children seem to be much less speciesist (Wilks et al., 2020). But I think we have good reasons to give substantial moral weight to animals. Given that animals have central nervous systems and nociceptors like we do, and given that we evolved from a long lineage of animals, we should assume that we inherited our ability to suffer from our evolutionary ancestors rather than uniquely developing it ourselves. Then there's evidence such as (if I remember correctly) animals trading off material benefits for analgesics. And I believe the scientific consensus has consistently and overwhelmingly been that animals feel pain. Animals also exist in the present and the harms to them are concrete, so animal rights is not beset by some of the concerns that, say, longtermist causes are. So I think the probability that we will be wrong about animal rights is negligible.
I sympathize with the idea that being too radical risks losing support. I've definitely had that feeling myself in the past when I saw animal rights activists who preferred harder tactics, and I still have my disagreements with some of their tactics and ideas. But I've come to see the value in taking a bolder stance as well. From my experience (yes, on a college campus, but still), many people are surprisingly willing to engage with discussions about animal rights and about personally going vegan. Some are even thankful or later go on to join us in our efforts to advocate for animals. I think for many, it's a matter of educating them about factory farming, confronting them with the urgency of the problem, and giving them space to reflect on their values. And even if you don't believe in the most extreme tactics, I think it's hard to defend not advocating for animal rights at all. Just a few centuries ago, slavery was still widely accepted and practiced, and abolitionism was a minority opinion which often received derision and even threats of harm. The work of abolitionists was nevertheless instrumental in getting society to change its attitudes and its ways such that the average person today (at least in the West) would find slavery abhorrent. Indeed, people would roundly agree that slavery is wrong even if they were told to imagine that the enslaved person's welfare increased due to their slavery (based on a philosophy class I took years ago). To make progress toward the good, society needs people who will go against the current majority.
And this may lead to the final question of how we decide what is right and what is wrong. To this I have no rigorous answer. We are trapped between the Scylla of dogmatism and the Charybdis of relativism. Here I can only echo the point I made above. I agree that we must give some weight to the majority morality, and that to immediately jump ten steps ahead of where we are is impractical and perhaps dangerous. But to veer too far into ossification and blind traditionalism is perhaps equally dangerous. I believe we must continue the movement toward greater morality as best we can, because we see how atrocious the morality of the past has been, and the evidence that the morality of the present is still far from acceptable.