This is a special post for quick takes by Prabhat Soni. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Some fairly prominent people in EA have spoken with the person behind this channel about creating an introductory EA video. I'm not sure whether such a video is actually in the works, though. (I imagine that sponsoring one of these is quite expensive.)

Sorry for the lack of further detail; I don't know much else, beyond the fact that this meeting happened.

Thanks, this was helpful!

High-impact career for Danish people: Influencing what will happen with Greenland

EDIT: Comments give a good counter-argument against my views!

Climate change could get really bad. Let's imagine a world with 4 degrees of warming. This would probably mean mass migration of billions of people to Canada, Russia, Antarctica and Greenland.

Out of these, Canada and Russia will probably face fewer decisions, since they already have large populations and would likely see a smooth transition into billion-plus-person countries. Antarctica could be promising to influence, but it would be difficult for a single effective altruist, since multiple large countries lay claim to Antarctica (i.e. more competition). Greenland, however, is much more interesting.


It's kinda easy for Danes to influence Greenland

Denmark is a small-ish country with a population of ~5.7 million. There's really not much competition if one wants to enter politics (if you're a Dane, you might correct me on this). The level of competition is much lower than in conventional EA careers, since you only need to compete with people within Denmark.


There are unsolved questions wrt Greenland

  1. There's a good chance Denmark would sell Greenland, because it could get absurd amounts of money for it. Moreover, Greenland is not of much value to Denmark, since Denmark will mostly remain habitable and doesn't have a large population to resettle. Do you sell Greenland to a peaceful/neutral country? To the highest bidder? Is it okay to sell it to a historically aggressive country? Are there some countries you want to avoid selling it to because they would gain too much influence? The USA, China and Russia have all shown interest in buying Greenland.
  2. Should Denmark just keep Greenland, allow mass immigration and become the next superpower?
  3. Should Greenland remain autonomous?


Importance

  1. Greenland, with a billion-plus people living in it, could be the next superpower. Just as most emerging technologies (e.g. AI, biotechnology, nanotechnology) are developed in current superpowers like the USA and China, future technologies could be developed in Greenland.
  2. In a world of extreme climate change, it is possible that 1-2 billion people could live in Greenland. That's a lot of lives you could influence.
  3. Greenland has a strategic geographic location. If a country with bad intentions buys Greenland, that could be catastrophic for world peace.

The total drylands population is 35% of the world population (~6% from desert/semi-desert). The total number of migrants, however, is 3.5% of world population. So less than 10% of those from drylands have left. But most such migrants move because of politics, war or employment rather than climate. The number leaving because of climate is less (and possibly much less) than 5% of the drylands population.

So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants. Probably too few of these people would go to any one country, let alone Greenland, to make it into a new superpower. But let's run the numbers for Greenland anyway. Of the world's 300M migrants, Greenland currently has only ~10k. So of an extra 50M, Greenland could be expected to take ~2k, which puts me 5-6 orders of magnitude below the 1B figure.
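A minimal sketch of this Fermi estimate in Python (the variable names are mine; the inputs are just the rough figures above, not precise data):

```python
# Fermi estimate: how many of the new climate migrants end up in Greenland?
# All inputs are rough figures from the comment above.

newly_in_drylands = 1e9   # people newly finding themselves in drylands/desert
migration_rate = 0.05     # assumed upper bound on the fraction who migrate
new_migrants = newly_in_drylands * migration_rate       # 50M

world_migrants = 300e6    # current global migrant stock (approx.)
greenland_migrants = 10e3 # current migrants living in Greenland (approx.)
greenland_share = greenland_migrants / world_migrants   # ~3.3e-5

expected = new_migrants * greenland_share
print(f"{expected:,.0f} new migrants to Greenland")     # ~1,700, i.e. ~2k
```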

It does still have some military relevance, and it would be good to keep it neutral, or at least out of the hands of China/Russia.

Thanks Ryan for your comment!

It seems like we've identified a crux here: what will the total number of people living in Greenland be in 2100 / in a world with 4 degrees of warming?


I have disagreements with some of your estimates.

The total drylands population is 35% of the world population

Large populations currently reside in places like India, China and Brazil. These currently non-dryland regions could be converted to drylands in the future (and possibly desertified). Thus, the 35% figure could increase in the future.

So less than 10% of those from drylands have left.

Drylands are categorised into {desert, arid, semi-arid, dry sub-humid}. It's only when a place is in the desert category that people seriously consider moving out (for reference, all of California falls under the arid or semi-arid categories). In the future, deserts could form a larger share of drylands, and less arid regions a smaller share. So more than 10% of people from places classified as "drylands" could leave in the future.

The total number of migrants, however, is 3.5% of world population.

Yes, that is correct. But it is also a figure from 2019. A more relevant question is how many migrants there will be in 2100. I think it's quite obvious that as the Earth warms, the number of climate migrants will increase.

So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants.

I don't really agree with the 5% estimate. Specifically for desertified lands, I would guess the percentage of people migrating to be significantly higher.

Of the world's 300M migrants, Greenland currently has only ~10k.

This is a figure from 2020, and I don't think you can simply extrapolate it.


After revising my estimates to something more sensible, I'm coming up with ~50M people in Greenland. So Greenland would be far from being a superpower. I'm hesitant to share my calculations because my confidence in them is low -- I wouldn't be surprised if the actual number were up to 2 orders of magnitude smaller or greater.

A key uncertainty: does desertification of large regions imply that in-country / local migration is useless?


[Image: The world, 4 degrees warmer. A map from Parag Khanna's book Connectography.]

I'm not sure you've understood how I'm calculating my figures, so let me show how we can set a really conservative upper bound on the number of people who would move to Greenland.

Based on current numbers, 3.5% of the world population are migrants, and 6% live in deserts. So less than 3.5/9.5 ≈ 37% of desert populations have migrated. Even if half of those had migrated because of the weather, that would be less than 20% of all desert populations. Moreover, even if people migrated uniformly according to land area, only 1.4% of migrants would move to Greenland (that's the fraction of the world's land area occupied by Greenland). So an ultra-conservative upper bound on the number of people migrating to Greenland would be 1B × 0.2 × 0.014 ≈ 3M.

So my initial status-quo estimate was 1e3, and my ultra-conservative estimate was on the order of 1e6. It seems pretty likely to me that the true figure will be 1e3-1e6, whereas 5e7 is certainly not a realistic estimate.
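A minimal sketch of the upper-bound calculation in Python (again with names of my own choosing; the factors are only the rough figures above):

```python
# Ultra-conservative upper bound on climate migration to Greenland.
# Factors are the rough figures from the comment above.

migrant_frac = 0.035   # migrants as a fraction of world population
desert_frac = 0.06     # fraction of world population living in deserts

# If every migrant had come from a desert, the migrated share of the
# desert-origin population (those still there plus those who left) would be:
max_migrated = migrant_frac / (migrant_frac + desert_frac)   # ~0.37

climate_share = 0.5    # generous: assume half of those moves were climate-driven
greenland_land = 0.014 # Greenland's share of the world's land area

newly_affected = 1e9   # people newly finding themselves in desert conditions
upper_bound = newly_affected * max_migrated * climate_share * greenland_land
print(f"{upper_bound:,.0f}")   # ~2.6M, i.e. on the order of 1e6
```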

Hmm, this is interesting. I think I broadly agree with you. A key consideration is that humans have a good-ish track record of living/surviving in deserts, and I would expect this to continue.

Socrates' case against democracy

https://bigthink.com/scotty-hendricks/why-socrates-hated-democracy-and-what-we-can-do-about-it

Socrates makes the following argument:

  1. Just as we only allow skilled pilots to fly airplanes, licensed doctors to operate on patients, or trained firefighters to use fire engines, we should only allow informed voters to vote in elections.
  2. "The best argument against democracy is a five minute conversation with the average voter". Half of American adults don’t know that each state gets two senators and two thirds don’t know what the FDA does.
  3. (Whether a voter is informed can be evaluated by a short test on the basics of elections, for example.)

Pros: better quality of candidates elected; would give uninformed voters a strong incentive to learn about elections.

Cons: would be crazy unpopular; possibility of the small group of informed voters acting in self-interest -- which would worsen inequality.

(I did a shallow search and couldn't find something like this on the EA Forum or Center for Election Science.)

What's the proposed policy change? Making understanding of elections a requirement to vote?

Yep, that's what comes to my mind at least :P

Among rationalist people and altruistic people, which group is, on average, more likely to be attracted to effective altruism?

This has uses. If one type of person is significantly more likely to be attracted to EA, on average, then it makes sense to target them in outreach efforts (e.g. at university fairs).

I understand that this is a general question, and I'm only looking for a general answer :P (but specifics are welcome if you can provide them!)

I don't have or know of any data (which doesn't mean much, to be fair), but my hunch would be that rationalist people who haven't heard of EA are, on average, probably more open to EA ideas than the average altruistic person who hasn't heard of EA. While altruistic people might generally agree with the core ideas, they may be less likely to actually apply them to their actions.

It's a vague claim though, and I make these assumptions because, of the few dozen EAs I know personally, I'd very roughly assume 2/3 come across as more rationalist than altruistic (if you had to choose which of the two they are). Plus, I'd further assume that in the general population more people appear altruistic than rationalist. If rationalists are rarer in the general population, yet more common among EAs, that would seem like evidence for them being a better match, so to speak. These are all just guesses without much to back them up, however, so I too would be interested in what other people think (or know).

Hmm, interesting ideas. I have one disagreement though: my best guess is that there are more rationalist people than altruistic people.

I think around 50% of people who study a quantitative/tech subject and have a high IQ qualify as rationalist (is this an okay proxy for rationalist people?). And my definition of an altruistic person is someone who makes career decisions primarily for altruistic reasons.

Based on these definitions, I think there are more rationalist people than altruistic people. Though this might be biased, since I study at a tech college (i.e. more rationalists) and live in India (i.e. fewer altruistic people, presumably because people tend to become altruistic once their basic needs are met).

A film titled "Superintelligence" was released in November 2020. Could it raise risks?

Epistemic status: There's a good chance I'm overthinking this, and overestimating the risk.

Superintelligence [Wiki] [Movie trailer]. When you Google "Superintelligence", the top results are no longer those relating to Nick Bostrom's book but rather this movie. A summary of the movie:

When a powerful superintelligence chooses to study Carol, the most average person on Earth, the fate of the world hangs in the balance. As the AI decides whether to enslave, save or destroy humanity, it's up to Carol to prove people are worth saving.

~ HBO Max.

I haven't watched the movie; I've only watched the trailer. A possible risk this might raise:

  • The general public gets only a tangential understanding of superintelligence. In particular, people might come to associate superintelligence with evil intentions/consequences, while ignoring the possibility of good intentions (aka aligned AI) and good consequences. Based on this line of thought, people might not be welcoming of machine superintelligence.

I don't know if it will raise risks, and I haven't watched the movie (only the trailer), but I'm disappointed by it. Superintelligence is a really important concept, and they turned it into a romantic action comedy, making it seem like a not-so-serious topic. The film also didn't do well with critics -- it has an approval rating of 29% on Rotten Tomatoes.

I think there's nothing we can do about the movie at this point, though.

I've never seen anyone explain EA using the Pareto principle (80/20 rule). The cause prioritisation / effectiveness part of EA is basically the Pareto principle applied to doing good. I'd guess 25-50% of the public knows of the Pareto principle. So I think this might be a good approach. Thoughts?

That's a good point, it's not a connection I've heard people make before but it does make sense.

I'm a bit concerned that the message "you can do 80% of the good with only 20% of the donation" could be misinterpreted:

  • I associate the Pareto principle with saving time and money. EA isn't really a movement about getting people to decrease the amount of time and money they spend on charity, though; if anything, probably the opposite.
  • To put it another way, the top opportunities identified by EA still have room for more funding. So the mental motion I want to instill is not about shaving away your low-impact charitable efforts; it's about doubling down on high-impact charitable efforts that are underfunded (or discovering new high-impact charitable efforts).
  • We wouldn't want to imply that the remaining 20% of the good is somehow less valuable -- it is more costly to access, but in principle, if all of the low-hanging altruistic fruit is picked, there's no reason not to move on to the higher-hanging fruit. The message "concentrate your altruism on the 80% and don't bother with the 20%" could come across as callous. I would rather make a positive statement that you can do a lot of good surprisingly cheaply than a negative statement that you shouldn't ever do good inefficiently.

Nevertheless I think the 80/20 principle could be a good intuition pump for the idea that results are often disproportionate with effort and I appreciate your brainstorming :)

Hey, thanks for your reply. By the Pareto principle, I meant something like "80% of the good is achieved by solving 20% of the problem areas". If this is easy to misinterpret (as you did), then it might not be a great idea :P Maybe the idea of a fat-tailed distribution of impact across interventions would be a better alternative?

Maybe the idea of a fat-tailed distribution of impact across interventions would be a better alternative?

That sounds harder to misinterpret, yeah.

See here for some related material, in particular Owen Cotton-Barratt's talk Prospecting for Gold and the recent paper by Kokotajlo & Oprea.

Does a vaccine/treatment for malaria exist? If yes, why are bednets more cost-effective than providing the vaccine/treatment?

There's only one approved malaria vaccine, and it's not very good (it requires 4 shots and gives only a ~36% reduction in the number of cases).

Anti-mosquito bednets have an additional advantage over malaria vaccines in that they can prevent mosquito-borne diseases other than malaria, though I don't know how big a deal this is in practice (e.g., I don't know how often the same area will have both yellow fever and malaria).

Thanks! This was helpful!

Is it high impact to work in AI policy roles at Google, Facebook, etc? If so, why is it discussed so rarely in EA?

I see it discussed sometimes in AI safety groups.

There are, for example, safety oriented teams at both Google Research and DeepMind.

But I agree it could be discussed more.

Changing behaviour of people to make them more longtermist

Can we use standard behavioral economics techniques like loss aversion (e.g. "humanity will be lost forever"), scarcity bias, framing bias and nudging to influence people to make longtermist decisions instead of neartermist ones? Is this even ethical, given moral uncertainty?

It would be awesome if you could direct me to any existing research on this!

I think people already do some of this. I guess the rhetorical shift from x-risk reasoning ("hey, we're all gonna die!") to longtermist arguments ("imagine how wonderful the future can be after the Precipice...") is based on that.

However, I think that, besides cultural challenges, the greatest obstacle to longtermist reasoning in our societies (particularly in LMICs) is that we face an "intergenerational tragedy of the commons", aggravated by short-term bias (and hyperbolic discounting) and the representativeness heuristic (we've never observed human extinction). People don't usually think about the long-term future -- but even when they do, they don't want to trade their individual, present, certain welfare for a collective (and non-identifiable), future, uncertain welfare.

Hi Ramiro, thanks for your comment. Based on this post, we can think of two techniques to promote longtermism. The first is what I mentioned: exploiting biases to incline people towards longtermism. The second is what you [might have] mentioned: a more rationality-driven approach where people are made aware of their biases with respect to longtermism. I think your idea is better, since it is a more permanent solution (there is security against future events that may attempt to bias an individual towards neartermism), has spillover effects into other aspects of rationality, and carries lower risk with respect to moral uncertainty (correct me if I'm wrong).

I agree with the several biases/decision-making flaws you mentioned! Perhaps a sufficient level of rationality is a prerequisite for accepting longtermism. Maybe a promising EA cause area could be promoting rationality (such a cause area probably already exists, I guess).

Cause prioritisation for negative utilitarians and other downside-focused value systems: It is interesting to note that reducing extinction risk is not very high-impact in downside-focused value systems.

Some things that are extinction risks are also s-risks, or at least risk causing a lot of suffering, e.g. AI risk and large-scale conflict. See Common ground for longtermists by Tobias Baumann of the Center for Reducing Suffering.

But ya, downside-focused value systems typically accept the procreation asymmetry, so future people not existing is not bad in itself.

It is very high-impact if survival is considered indispensable for keeping control over nature and preventing negative value from coming back after extinction.

I would recommend this short essay on the topic: Human Extinction, Asymmetry, and Option Value
Abstract: "How should we evaluate events that could cause the extinction of the human species? I argue that even if we believe in a moral view according to which human extinction would be a good thing, we still have strong reason to prevent near-term human extinction."
(Just to clarify: this essay was not written by me)

Insightful thoughts!

Some good, interesting critiques of effective altruism.

Short version: read https://bostonreview.net/forum/logic-effective-altruism/peter-singer-reply-effective-altruism-responses (5-10 mins)

Longer version: start reading from https://bostonreview.net/forum/peter-singer-logic-effective-altruism (~ 1 hour)

I think these critiques are fairly comprehensive. They probably cover something like 80-90% of all possible critiques.

This is a big topic, but I think these critiques mainly fail to address the core ideas of EA (that we should seek the very best ways of helping), and instead criticise related ideas like utilitarianism or international aid. On the philosophy end of things, more here: https://forum.effectivealtruism.org/posts/hvYvH6wabAoXHJjsC/philosophical-critiques-of-effective-altruism-by-prof-jeff

Should More EAs Focus on Entrepreneurship?

My argument for this is:

1. EAs want to solve problems in areas that are neglected/unpopular.

=> 2. Less jobs, etc in those fields and lot of competition for jobs among existing EA orgs (e.g. GPI, FHI, OpenPhil, Deepmind, OpenAI, MIRI, 80K). I'm not sure, but I think there's an unnecessarily high amount of competition at the moment -- i.e. rejecting sufficiently qualified candidates.

=> 3. It is immensely beneficial to create new EA orgs that can absorb people.


Other questions:

  • Should we instead make existing orgs larger? Does the quality of orgs go down when you create a lot of orgs?
  • What about an oligopoly over the market when there are very few orgs? (E.g., if GPI starts consistently messing up for whatever reason, that would be very bad for EA, since they are one of the very few orgs doing global priorities research.)