Bio

I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into multiple languages.

After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee or need a place to stay.


Every post, comment, or wiki edit I have authored is hereby licensed under a Creative Commons Attribution 4.0 International License.

Sequences (1)

Future Matters

Comments (1202)

Topic contributions (4123)

So recently I've found myself not finishing, or even skipping, some of the AI episodes. I see the guests, think I can guess the general perspectives they and Rob/Luisa will take on AI, and don't think it'll add too much to my knowledge of the topic. If there are particular episodes that you think this is particularly incorrect about, then please let me know!

I’d be very surprised (and very impressed) if the Carl Shulman episodes did not add much to your knowledge of the topic (relative to how much you learned from the listed episodes).

This does not seem obvious. According to an analysis from Rethink Priorities:

Thanks for drawing this study to my attention. In this context, the truth of the price, taste, and convenience hypothesis is irrelevant, though; what matters is whether consumers of animal products have an intrinsic preference that this food comes from live animals in extreme agony, which is the feature of factory farming by virtue of which we regard it as seriously morally wrong. I have partly crossed out a sentence in my previous comment to make this clear.

Citizens of the Axis powers also did not have an intrinsic preference to cause lots of human suffering.

The claim is not that the Holocaust was morally evil because German citizens supported it. The claim is that the Holocaust was morally evil, to a significant degree, because it consisted of a systematic plan to exterminate all members of an ethnic group. Whether this was intended only by the Nazi leadership or by larger sections of German society is primarily relevant for assessing their degree of moral responsibility and blameworthiness, rather than for evaluating the Holocaust itself.

At the same time, I believe striving to be impartial is good. I value welfare the same regardless of species, country, time or ethnic group.

Me too, but as I said, our intuitive appraisal of the badness of the Holocaust is clearly shaped by the commonsense moral views I described.

do you think factory-farming is less intentional and systematic than the Holocaust?

Yes, because consumers of animal products mainly demand the taste and texture associated with meat, eggs and milk; exceptions aside, people do not have an intrinsic preference that these products come from live animals (let alone sentient beings). Furthermore, even if factory farming could be described as intentional and systematic, it would not be the intentional and systematic extermination of an entire ethnic group, which seems central in the intuition shaping commonsense evaluations of the Holocaust.

I agree with the general reasoning of your comment.

However, I also think that this specific comparison is not very illuminating. You compare these two moral tragedies along the dimension of QALYs lost. But commonsense moral intuitions about the Holocaust—which shape our own intuitions, even if we reject commonsense morality—aren't solely driven by an implicit quantification of its QALY burden. The intentional, systematic, and large-scale effort to exterminate an entire ethnic group also plays a significant role in our intuitive assessment. When multiple dimensions of evaluation influence our grasp of the moral value of something, comparing something else to it along only one of these dimensions may not do much to help us internalize how good or bad it really is.

(ETA: I made a few edits to make the comment clearer.)

@RobBensinger had a useful chart depicting how EA was influenced by various communities, including the rationalist community.

I think it is undeniable that the rationality community played a significant part in the development of EA in the early days. I’m surprised to see people denying this.

What seems more debatable is whether this influence is best characterized as “rationalism influenced EA” rather than “both rationalism and EA emerged to a significant degree from an earlier and broader community of people that included a sizeable number of both proto-EAs and proto-rationalists”.

Hi Mo. I'm unsure if you've seen it, but Gwern’s article was discussed here.

Thanks for sharing this. FYI, the links to the ‘Nuclear Safety Standards’ and ‘Basel III’ case studies are not publicly accessible.

Beware safety washing:

An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.

Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”

I think if you think there's a major difference between the candidates, you might put a value on the election in the billions -- let's say $10B for the sake of calculation.

You don't need to think there's a major difference between the candidates to conclude that the election of one candidate adds billions in value. The size of the US discretionary budget over the next four years is roughly three orders of magnitude larger than your $10B figure, and a president can have an impact of the sort EAs care about in ways that go beyond influencing the budget, such as regulating AI, setting immigration policy, eroding government institutions, and waging war.
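
As a very rough check, assuming annual US discretionary spending on the order of $1.7 trillion: $1.7 trillion × 4 years ≈ $6.8 trillion, or roughly 700 times the $10B figure, i.e. close to three orders of magnitude.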

Couldn't secretive agreements be mostly circumvented simply by directly asking the person whether they signed such an agreement? If they fail to answer, the answer is very likely 'Yes', especially if one expects them to answer 'Yes' to a parallel question in scenarios where they had signed a non-secretive agreement.
