Lorenzo Buonanno🔸

Software Developer @ Giving What We Can
4794 karma · Working (0-5 years) · 20025 Legnano, Metropolitan City of Milan, Italy

Bio

Hi!

I'm currently (Aug 2023) a Software Developer at Giving What We Can, helping make giving significantly and effectively a social norm.

I'm also a forum mod, which, shamelessly stealing from Edo, "mostly means that I care about this forum and about you! So let me know if there's anything I can do to help."

Please have a very low bar for reaching out!

I won the 2022 donor lottery; I'm happy to chat about that as well.

Comments

OWID says that ~45% of the population in Uganda has access to electricity, and that it more than doubled in the past 10 years. Does this match your experience?

I used to think that the exact philosophical axiologies and the handling of corner cases were really important to guide altruistic action, but I now think that many good things are robustly good under most reasonable moral frameworks.

 

these practical and intuitive methods are ultimately grounded in Singer’s deeply counterintuitive moral premises.

I don't think this is necessarily true. Many (I would argue most) other moral premises can lead you to value preventing child deaths or stunting, limiting the suffering of animals in factory farms, or ensuring future generations live positive, meaningful lives.

@WillieG mentioned Christianity, and indeed, EA for Christians has many Christians who care deeply about helping others and come from a very different moral background. (I think they sometimes mention this parable.)

 

within the EA community, beyond working on their own projects, do people have the tendency to remind & suggest to others “what they could have done but didn’t?”

I don't have an answer to this question, but you might like these posts: Invisible impact loss (and why we can be too error-averse) and Uncertain Optimizing and Opportunity Costs.

I think people regularly do encourage themselves and others to consider opportunity costs and counterfactuals, but I don't think it's specific to the EA community.

 

The principle becomes more challenging to accept when Singer extends it to a particular edge case.

I think this is the nature of edge cases. I don't think you need to agree with Singer on edge cases to value helping others. This vaguely reminded me of this Q&A answer from Derek Parfit where he very briefly talks about borderline cases and normative truths.

 

I do think things get trickier for e.g. shrimp welfare and digital sentience, and in those cases philosophical considerations are really important. But in my opinion the majority of EA work is not particularly sensitive to one's stance on utilitarianism.

Note that the hold-out set doesn't exist yet. https://x.com/ElliotGlazer/status/1880812021966602665

What does this mean for OpenAI's 25% score on the benchmark?

Note that only some of FrontierMath's problems are actually at the research frontier, while others are relatively easier (e.g. IMO level, and DeepMind was already one point from gold on IMO-level problems) https://x.com/ElliotGlazer/status/1870235655714025817
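To make the arithmetic concrete, here is a minimal sketch with a made-up tier split (not FrontierMath's actual composition): if a quarter of the problems sit at the easier tier, solving only those already yields a 25% overall score without touching any frontier problem.

```python
# Illustrative only: hypothetical tier split, NOT FrontierMath's published one.
problems = {"easier (IMO-level)": 25, "harder": 50, "research-frontier": 25}
solved   = {"easier (IMO-level)": 25, "harder": 0,  "research-frontier": 0}

# Overall score is just solved problems over total problems,
# regardless of which tier they came from.
score = sum(solved.values()) / sum(problems.values())
print(f"overall score: {score:.0%}")  # 25%, from the easier tier alone
```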

You might also be interested in this post: Measuring Good Better, a very high-level summary of different organizations' views on measuring 'good' (apparently nobody uses DALYs!)

After reading the recent https://www.thenation.com/article/society/progressive-left-philanthropy-strategy/ and many similar articles, my understanding is that proponents of "system level" changes are sceptical of a neoliberal/market-driven approach, and want a more centrally planned economy, where opportunities and outcomes are guaranteed to be more equal, or at least everyone is guaranteed a basic amount of wealth.

 

My understanding is that they care primarily about things like increased inequality, homelessness, and unemployment in the United States, and they believe that the main causes of those issues are the greed of the top 0.01% and market regulations (or the lack thereof) which favour the richest at the expense of the poorest.

 

So I would imagine that reading things like:

AGOA gives Sub-Saharan African countries duty-free access to the American market for a range of product categories — in particular, apparel, which has historically been a key stepping stone for countries pursuing export-led manufacturing growth. [...] With Chinese labor costs increasing and general protectionist pressures growing, there may be a window of opportunity for African manufacturing industries to grow before automation in high-income countries potentially leads to significant re-shoring — provided that AGOA does not expire beforehand. Advocating for a strong AGOA renewal bill could improve the odds for an African industrial transformation.

They would expect an AGOA renewal to increase inequality and unemployment in the US by replacing American jobs with sweatshops in countries with lower minimum wages and weaker worker rights, enriching capitalists who would profit from exploiting less protected workers.

 

But this is definitely a position I struggle to understand, so it's likely that I'm misrepresenting it and would welcome other guesses/corrections.

Do they mention effective giving or collaborate with Doneer Effectief/the Tien Procent Club?

I mean, the reasoning behind this seems very close to #2, no? The target audience they're looking at is probably more interested in neartermism than in AI/longtermism, and they don't think they can get much tractability working with the current EA ecosystem?

 

I think 2 and especially 3 are very likely, but I think it's also likely that Bregman was very impressed with AIM, and possibly found it more inspiring than 80k/CEA, and/or more pragmatic, or a better fit for the kind of people he wanted to reach regardless of their views on AI.

How many of them have made that choice recently though?


A lot![1]

80k seems to mostly care about x-risk, but (perhaps surprisingly) their messaging is not just "Holy Shit, X-Risk" or "CEOs are playing Russian roulette with you and your children".

They instead also cover a lot of cause-neutral EA arguments (e.g. scope sensitivity and the importance of effectiveness)

So I don't think it's surprising that Rutger doesn't recommend them if he doesn't share (or even actively disagrees with?) those priorities, even if his current focus on persuading mid-career professionals to look into alternative proteins and tobacco prevention sounds very EA-ish in other respects.

Yeah, I agree with this, but I still think that 80k is more than useless for altruists who don't value the long-term future, or who are skeptical of 80k's approach to trying to influence it.

I'm curious whether he mentioned ProbablyGood or if he's even aware of them?

My understanding is that the SMA team knows much more about the space than I do, so I'm sure they are aware of them if I'm aware of them.

  1. ^

    I don't have an exact number, but I would conservatively guess more than 100 people and more than $100k in total donations for 2024.

I've noticed that comments with more disagree than agree votes often have more karma votes than karma


Note that the displayed number of karma votes is not accurate; I think it gives users the impression that there are more downvotes than there actually are.
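As a minimal sketch of why vote counts and karma can diverge (made-up vote weights, not the Forum's actual ones): karma is a weighted sum of votes, so a comment's vote count can easily exceed its net karma.

```python
# Hypothetical vote weights, for illustration only; real EA Forum weights
# depend on each voter's karma and whether the vote is a "strong" vote.
votes = [+2, +1, +1, -1, -2]  # five votes cast on a comment

karma = sum(votes)       # net karma: +1 (weighted sum)
vote_count = len(votes)  # displayed vote count: 5

print(f"{vote_count} votes, {karma} karma")  # more votes than karma
```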

I don't have any insider information, but my speculation would be that they just think that they counterfactually reach more people by having a very separate brand.

i.e. an SMA closely tied to the EA brand/flavor/way of communicating would counterfactually help X more people do more good than EA by itself, while SMA as a separate movement with its own ideas/style on how to do the most good would counterfactually help Y extra people, and Y > X.

I also think it's likely that SMA believes that for their target audience it would be more valuable to interact with AIM than with 80k or CEA, not necessarily for the 3 reasons you mention.
