This is a special post for quick takes by Puggy. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I think it is a cool idea for people to take a giving pledge on the same day. For example, you and your friend both decide to pledge 10% to charity on the same day. It would be even more fun if you did it with strangers. Call it “giving twins” or “giving siblings”.

Imagine that you met a couple of strangers and they pledged with you. Imagine that after pledging you all just decided to be friends, or at least a type of support group for one another. Like “Hey, you and I took the Further Pledge together on New Year’s Day last year. When I’m in your city let’s go have a pint, or maybe we can email each other about our career plans to discuss ways we can help people more!”

Which leads me to my final bit here: would anyone be interested in being my giving sibling? Haha, I am interested in taking the Further Pledge in 2022, and it would be fun to have the same ‘giving birthday’ as other people so I could befriend them, meet people in the community, and get a couple of lifelong friends.

Giving What We Can could even take this idea and run with it. They could assign you a giving sibling if you entered the sibling program, which could help increase the feeling that we are a community.

Does giving sibling mean giving something to each other?

That could be the case, but I think the emphasis is more on the idea that you have the same “birthdate” to be considered a giving sibling.

Like, on February 15 you and a friend took the Giving Pledge together, and that date became the day you became siblings. Then you celebrate that day every year or form a bond around this shared experience.

Here’s the problem:

Some charities are not just multiple times better than others; some are thousands of times better. But as far as I can tell, we haven’t got a good way of signaling to others what this means.

Think about when Ed Sheeran sells an album. It’s “certified platinum”, then “double platinum”, peaking at “certified diamond”. When people hear this, it makes them sit back and say “wow, Ed Sheeran is on a different level.”

When a football player is about to announce his college, he says “I’m going D1”. You become a “grandmaster” at chess. Ah, that restaurant must be good; it has won two Michelin stars. That economist writing about the tragedy of the commons must be great; she won a Nobel Prize.

We need nomenclature that goes beyond “high-impact” charity. “Cost-effective”, “high-impact”, and “effective” are all good descriptions, but we need a rating system or some other method of conferring high status on the best charities (possibly based on how many dollars it costs to save one life).

It’s got to be something that we can bring into the popular consciousness, and it can’t be something we just narrowly assign to all of our own EA meta charities. We need journalists popularizing the term and recognizing the 3-5 super-charities that save lives like no one’s business. We should work with marketing teams and carefully plan what the name would be. But it’s got to confer status on the charity, so that people like Jay-Z can gain more status by donating to it, just as he gains status by eating at Michelin-starred restaurants.

(excuse me if I’m not the first to outline this idea)

"But it’s got to confer status to the charity and people like Jay Z can gain more status by donating to it" - I think this brushes a good point which I'd like to see fleshed out more. On some level I'm still a bit skeptical in part because I think it's more difficult to make these kinds of designations/measurements for charities whereas things like album statuses are very objective (i.e., a specific number of purchases/downloads) and in some cases easier to measure. Additionally, for some of those cases there is a well-established and influential organization making the determination (e.g., football leagues, FIDE for chess). I definitely think something could be done for traditional charities (e.g., global health and poverty alleviation), but it would very likely be difficult for many other charities, and it still would probably not be as widely recognized as most of the things you mentioned.

Great points. Thank you for them. Perhaps we could use a DALY/QALY measure. A charity could reach the highest status if, after randomized controlled trials, it was determined that $10 donated could give one QALY to a human (I’m making up numbers). Any charity that reached this hard-to-achieve threshold would be given the super-charity designation.
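To make that threshold rule concrete, here’s a minimal sketch in Python (my own illustration; the $10-per-QALY cutoff is the made-up number above, and the example figures are not real charity data):

```python
# Minimal sketch of the "super-charity" threshold idea above.
# The $10/QALY cutoff is the made-up number from the comment; the
# example figures below are illustrative, not real charity data.

SUPER_CHARITY_THRESHOLD = 10.0  # hypothetical max dollars per QALY


def cost_per_qaly(total_cost: float, qalys_gained: float) -> float:
    """Dollars spent per quality-adjusted life year gained."""
    return total_cost / qalys_gained


def is_super_charity(total_cost: float, qalys_gained: float) -> bool:
    """True if the charity clears the hypothetical super-charity bar."""
    return cost_per_qaly(total_cost, qalys_gained) <= SUPER_CHARITY_THRESHOLD


print(is_super_charity(50_000, 6_000))  # ~$8.33 per QALY -> True
print(is_super_charity(50_000, 3_000))  # ~$16.67 per QALY -> False
```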

To make it official, imagine a committee or governing body formed between Charity Navigator and GiveWell. Five board members from each organization would come together to select the charities and announce the award once a year. The status would only be official for a certain amount of time, or it could be removed if a charity dipped below the threshold.

What do you think?

I certainly would be interested in seeing such a system put into place, and I think it would probably be beneficial; the main issue is just whether something like that is likely to happen. For example, it might be quite difficult to establish agreement between Charity Navigator and GiveWell when it comes to the benefits of certain charities. Additionally, there may be a bit of survivorship bias when it comes to organizations like FIDE that have worked, although I still think the main issues are 1) the analysis/measurement of effectiveness is difficult (requiring lots of studies vs. simply measuring album downloads/streams); and 2) the determination of effectiveness may not be widely agreed upon. That’s not to say it shouldn’t be tried, but I think those factors might limit its effectiveness relative to the examples you cite.

Forecaster’s Bias

This may already be a named bias; I haven’t really researched it. Excuse my ignorance. But perhaps there could be a new bias we could identify, called forecaster’s bias.

This bias would be the phenomenon where forecasters tend to place too much weight on the importance or effect of forecastable events versus events that are less forecastable, thereby somewhat (or entirely) neglecting improbable, less forecastable events.

Example 1: There’s a new coronavirus variant called Omicron. It has not yet spread, but it will. We can track the spread of this variant going forward. When forecasting Omicron’s effect, we have a tendency to overemphasize it because this event is forecastable.

Example 2 (another coronavirus example): Early in the coronavirus pandemic, individuals tracked the spread of the virus and the rate at which vaccines progressed. They predicted the number of deaths with a good degree of accuracy. They did not predict, however, that the political whims of the populace would lead to an anti-vax movement. The less forecastable event (anti-vax sentiment) was under-predicted.

Example 3: Fictional market researchers notice dropping energy prices. They model this phenomenon and expect it to continue for 18 months. But during those 18 months, major earthquakes destroy huge cities, raising energy prices; the researchers systematically failed to consider the prospect of major earthquakes happening.

Example 4: Energy prices are rising drastically. Researchers expect this to continue for 18 months. Suddenly, commercially viable nuclear fusion becomes available and governments spread it throughout the world. Energy prices drop to “too cheap to meter”; the researchers got this wrong because it was too hard to forecast the progress of nuclear fusion.

I don’t know if this idea is any good. Just a thought!

Have you seen Taleb’s Black Swan book? (https://en.m.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable) I personally haven’t read it, but based on the description it seems related to what you’re describing. Either way, I think it is a good point to consider.


Do you prefer libertarian policy ideas, but you aren’t too sold on the deontological or rights-based reasoning which many libertarians use to justify their policy preferences?

Perhaps this new political identifier could work for you: introducing… Consequentarian! You’re pretty much a consequentialist through and through: you value good outcomes more than liberty or rights-based claims to things. However, it just empirically turns out that all the best policy ideas, the ones which lead to the best outcomes, are libertarian. You recognize that open borders, drug legalization, limited (or no) government, very low regulation, and competitive enterprise produce more human flourishing than all the alternatives. But you don’t find strict rights arguments compelling (for example, you think that if a car is driving at you, you can jump onto your neighbor’s lawn even if doing so violates his property rights).

Pronounced: Consequen-tarian

Associated schools of thought:

  • Chicago school of economics
  • University of Arizona (Tucson) school of liberalism
  • neoclassical liberalism
  • Michael Huemer and Bryan Caplan’s anarchism

“…empirically turns out that all the best policy ideas which lead to the best outcomes are libertarian”

What makes you think this is the case? I agree with your principle that you can make a welfare-maximizing case for libertarianism, but surely a conservative or social democrat could also argue for their preferred policy from a welfare-maximizing perspective.

Calling the set of policies you happen to think are welfare-maximizing “Consequentarian” strikes me as very uncharitable to those with views different from your own.

There’s a growing literature pointing to myriad government failures, but the highlights are: government failures are in almost every scenario significantly worse than market failures, so let the market decide. And increasing liberty produces great outcomes:

  • Drug use and overdoses go down with liberal drug policy.
  • Increasing immigration increases everyone’s income.
  • Housing prices and homelessness go down when we reduce NIMBY policies and have a free market in housing.
  • The FDA and other bureaucratic agencies overspend (the Mercatus Center estimates it costs $93 million to save a life through regulation, and in the case of the FDA they actively kill 20,000+ people a year).
  • Education and healthcare costs would drop significantly if we had a free market in them (the strongest argument for why prices rise in these sectors is artificial inflation caused by government intervention).
  • Wars cost enormous sums of money and their consequences are almost always worse than non-intervention (since 9/11, 200K Iraqi civilians have died, while terrorism has increased 2,000%).
  • There’s some historical evidence that free banking systems are less prone to the disastrous effects of the business cycle.

And there’s lots more evidence.

These empirical facts are related to the idea that market-based interventions outperform government interventions because the market does not have to act through a centralized hierarchy to make decisions. It’s difficult to make centralized decisions that are attentive to the concerns at the margins of the economy.

You might be interested in the Neoliberal Project: What Neoliberals Believe 🥑

Co-director Jeremiah Johnson did an AMA here the other day.

True, yeah. I have seen the neoliberalism movement. They are more market-friendly than the median voter and motivated by consequentialist reasoning, but I think they advocate more government intervention than is required in some areas. Overall, though, that’s a great movement.

Has anyone ever thought of doing incentive-based pledges with their charitable giving?

Incentive pledge: I will live off of X amount of money, but this figure increases by Y for every $100,000 I donate or pledge to donate.

Example: I will live off $30,000 in 2020 dollars for the rest of my life and donate the rest to charity; however, this allowance increases by $1,000 for every $100,000 I donate (or pledge to donate at some future date).

Under this incentive pledge, for every $1 million in 2020 dollars that the pledger earns (and donates), $10,000 is added to their yearly allowance. If you’re feeling confident, you could cap it at a certain level; for example, the yearly allowance could max out at $100,000 or $70,000 or something like that.
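To check the arithmetic, here’s a minimal sketch of the allowance formula using the example numbers above (a $30,000 base, $1,000 per $100,000 donated, and the suggested $70,000 cap), all in 2020 dollars:

```python
def yearly_allowance(total_donated, base=30_000, bonus_per_100k=1_000, cap=70_000):
    """Allowance = base + $1,000 for every full $100,000 donated, up to the cap."""
    bonus = (total_donated // 100_000) * bonus_per_100k
    return min(base + bonus, cap)


print(yearly_allowance(100_000))     # 31000
print(yearly_allowance(1_000_000))   # 40000 (the $10,000 bump per $1 million donated)
print(yearly_allowance(10_000_000))  # 70000 (hits the cap)
```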

This is for someone who wants to essentially take the Further Pledge, but isn’t entirely comfortable confining themselves to a fixed amount to live on (adjusted for inflation) forever. Or it is for the person who could see themselves incentivized to give more if they knew their yearly allowance would rise the more they earned.

Is this much better than pledging a certain %, e.g. 50% of everything above $30,000? It seems that is incentive-based too, because earning more money means both more for charity and more for you.
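For comparison, a quick sketch of that percentage scheme (the illustrative earnings figures are my own); it shows that both the donor’s take-home and the donation rise with earnings:

```python
def percentage_pledge(earnings, floor=30_000, rate=0.5):
    """Return (kept, donated) for a pledge of `rate` on everything above `floor`."""
    donated = max(earnings - floor, 0) * rate
    return earnings - donated, donated


for earnings in (30_000, 130_000, 1_030_000):
    kept, donated = percentage_pledge(earnings)
    print(f"earn {earnings:,}: keep {kept:,.0f}, donate {donated:,.0f}")
# earn 30,000: keep 30,000, donate 0
# earn 130,000: keep 80,000, donate 50,000
# earn 1,030,000: keep 530,000, donate 500,000
```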

That could be a form of an incentive pledge.
