
Training for Good recently announced Red Team Challenge, a programme that brings small teams together to "red team" important ideas within effective altruism. The programme provides training in red-teaming best practices and then forms small teams of 2-4 people to critique a particular claim and publish the results.

We are looking for the best ideas to "red team" and will pay $100 for our top answer and $50 each for our second and third picks.

Constraints

  • It should be possible for relatively inexperienced researchers to reach a tangible result within a total time frame of ~50-60 hours of research or less, including a write-up (divided among 2-4 team members)
  • The red teaming question needs to be:
    • Precisely defined with a clear goal and scope
    • One sentence long
      • Feel free to add a short explanation of up to 100 words if the question itself is not fully self-explanatory or if you want to give some additional context.
    • Related to effective altruism in some way

How to participate

  • Leave your answer as a comment on this post or send it to us via the Forum messaging system.
  • If you have questions or want to clarify something, please ask in a comment on this post.
  • We don't want other users discussing other people's answers, so we will moderate away those comments. You may, however, upvote or downvote comments as per normal Forum usage.
  • We will end the competition on March 28, 2022.

How we decide who wins

  • We will pick the winner of the $100 prize based solely on our own opinion of which answer is most useful for our intended goals. The same goes for the second and third prizes.
  • We might find that none of the answers are what we wanted (likely because we under-specified what we want). In that case, we will offer only $10 to the 1st, 2nd, and 3rd best. My fragile guess is that there is a 30% chance of this happening.
  • We will DM the winners when the competition closes. We might also announce publicly on this post who won, but we'll check in with you first.

Examples

  • “Make the best case why this recommendation of charity X should not convince a potential donor to donate”
  • “Scrutinize this career profile on X. Why might it turn out to be misleading / counterproductive / unhelpful for a young aspiring EA?”
  • “Why might one not believe in the arguments for:
    • EA university groups promoting effective giving?
    • hits-based giving being the most impactful approach to philanthropy at the current margin?
    • insects being considered moral patients?”

Comments

Red team: This is not the most important century.

Context: https://www.cold-takes.com/most-important-century/

Strongly seconded

Every project idea from the Future Fund is probably a useful target to red-team, as it might affect how a lot of money will be spent.

https://ftxfuturefund.org/projects/

They also explicitly ask for criticism in their last project suggestion:

Critiquing our approach

Research That Can Help Us Improve

We’d love to fund research that changes our worldview—for example, by highlighting a billion-dollar cause area we are missing—or significantly narrows down our range of uncertainty. We’d also be excited to fund research that tries to identify mistakes in our reasoning or approach, or in the reasoning or approach of effective altruism or longtermism more generally.

I'll just add all of mine in one comment, since I'm assuming you won't base your decision on the number of upvotes. Most of these are about movement-building, since that's probably what I spend most of my time thinking about.

  • The EA community should not be trying to grow.
    • This research would advocate for something like stronger gatekeeping/less outreach.
  • Giving pledges (GWWC, OFTW) are no longer relevant for engaged EAs 
    • Perhaps linked to ETG becoming less of a priority
  • Despite many community builders pushing for them, fellowships are not the best way to get new people involved in EA
  • AGI will never be developed, so we don't need to worry about AI safety
    • There are many possible arguments for this. Here is one.

Seconding the first one.

  • Why might one not believe in the arguments for: epistemics training of any kind makes people better / clearer / more rigorous thinkers
  • Why might one not believe in the arguments for: personal assistants make people meaningfully more productive
  • Has focus on AGI in EA contributed more to capabilities than to safety?
  • Look for times when beneficiaries of charity would rather get less money / value in the way of goods in exchange for: more fairness, justice, durable over nondurable goods, etc.
  • Red-team: how much is impact / success etc predictable from eliteness of school attended
  • Red-team the claim: PR should be a major concern for EA orgs and writers (i.e. how likely are backlash and other negative consequences, and how bad would they be?)
  • Red-team: camps/outreach to young people are worth the risks of harm to those young people

Red team: Why might one not believe in the arguments for wild animals having net negative welfare?

It's great that this question is examined or red teamed!

For onlookers:

  • It’s unclear whether Wild Animal Welfare advocates in EA believe wild animals have very net negative lives, at least in the broad sense that one might get from Brian Tomasik’s writings.
  • It’s unnecessary to have a strong belief in net negative welfare for this cause area to rank highly in EA. For example, if a small subset of animals have terrible lives, and we could alleviate this suffering cost effectively, it seems worthy of effort.

BTW, I don’t work in wild animal welfare. I’m just some random dude or something.

Red team: What non-arbitrary views in population axiology avoid the “Very Repugnant Conclusion” (VRC)?

Context/Explanation:

According to Budolfson & Spears (2018), “the VRC cannot be avoided by any leading welfarist axiology despite prior consensus in the literature to the contrary”, and “[the extended] VRC cannot be avoided by any other welfarist axiology in the literature.”

Yet surely we need not limit our views to the ones that were included in their analysis.

Bonus points if the team scrutinizes some assumptions that are commonly taken as unquestioned starting points, such as additive aggregationism, impersonal compensation, or independent positive value.

(By “non-arbitrary”, I mean that the views would be attractive for other reasons besides avoiding the VRC.)

I think there are plenty of views which avoid the original VRC, basically any that avoids the original Repugnant Conclusion, including average utilitarianism, maximin, rank-discounted utilitarianism, person-affecting views, etc. For the extended VRC, I would recommend the contractualist Scanlon’s “Greater Burden Principle” or the deontological animal rights theorist Regan’s “harm principle”, both of which (on my understanding) hold that a greater individual burden or harm to one should be prioritized over any number of lesser burdens or harms to others, all else equal. I would also point to principles of “limited aggregation”, which allow some aggregation when comparing burdens or harms of sufficiently similar severity or “relevance”. These are different from lexical views, maximin, etc., in that they aim to minimize the largest loss in welfare, not necessarily to improve the welfare of the worst-off individual or experience, or to ensure everyone’s welfare is above some lexical threshold.

Cool idea! :) In case you haven't seen it, Linch listed a few ideas in a shortform a couple months ago (Note: Linch is my manager).

Red team: Is existential security likely, assuming that we avoid existential catastrophe for a century or two?

Some reasons that I have to doubt that existential security is the default outcome we should expect:

  • Even superintelligent aligned AI might be flawed and fail catastrophically eventually
  • Vulnerable world hypothesis
  • Society is fairly unstable
  • Unregulated expansion throughout the galaxy may reduce extinction risk but may increase s-risks, and may not be desirable

Red team: Argue that moral circle expansion is or is not an effective lever for improving the long-term future. Subpoint: challenge the claim that ending factory farming effectively promotes moral circle expansion to wild animals or digital sentient beings.

Related:

Red team: Scrutinize this career profile on medical careers. Why might it turn out to be misleading / counterproductive / unhelpful for a young aspiring EA?

Red-team - "Are longtermism and virtue ethics actually compatible?"

A convincing red-team wouldn't need a complex philosophical analysis, but rather a summary of divergences between the two theories and an exploration of five or six 'case studies' where consequentialist-type behaviour and thinking is clearly 'unvirtuous'.


Explanation - Given just how large and valuable the long-term future could be, it seems plausible that longtermists should depart from standard heuristics around virtue. For instance, a longtermist working in biosecurity who cares for a sick relative might have good consequentialist reasons to abandon their caring obligations if a sufficiently promising position came up at an influential overseas lobbying group. I don't think EAs have really accepted that there is a tension here; doing so seems important if we are to have open, honest conversations about what EA is, and what it should be.

I would be interested in this one. 

To provide a relevant anecdote to the Benjamin Todd thread (n = 1, of course): I had known about EA for years, and agreed with the ideas behind it. But the thing that got me to actually take concrete action was that I joined a group that, among other things, asked its members to do a good deed each day. Once I got into the habit of doing good deeds (and, even more importantly, actively looking for opportunities to do good deeds), however small or low-impact, I began thinking about EA more, and finally committed to try giving 10% for a year, then signing the pledge.

Without pursuing classical virtue, I would be unlikely to be involved in EA now. My agreement with EA philosophically remained constant, but my willingness to act on a moral impulse was what changed. I built the habit of going from "Someone should do something" to "I should do something" with small things like stopping to help a stranger with a heavy box, and that transferred to larger things like donating thousands of dollars to charity.

Thus, I am interested in the intersection of EA and virtue and how they can work together. EA requires two things: philosophical agreement and commitment to action. In my case, virtue helped bridge the gap between the first and the second.

Red-team - "Examine the magnitude of impact that can be made from contributing to EA-relevant Wikipedia pages. Might it be a good idea for many more EA members to be making edits on Wikipedia?" 

Rough answer - "I suspect it's the case that only a small proportion of total EA members make edits to the EAF wiki. Perhaps the intersection of EA members and people who edit Wikipedia is about the same size, but my intuition is that this group is even smaller than the EAF wiki editors. Given that Wikipedia seems to receive a decent amount of Internet traffic (I assume this probably also applies to EA-relevant pages on Wikipedia), it is very likely the case that contributing to Wikipedia is a neglected activity within EA; should an effort be made by EA members to contribute more to Wikipedia, the EA community might grow or might become more epistemically accessible, which seem like good outcomes."

I helped start WikiProject Effective Altruism a few months ago, but I think that the items on our WikiProject's to-do list are not as valuable as, say, organizing a local EA group at a top university, or writing a useful post on the EA Forum. One tricky thing about Wikipedia is that you have to be objective, so while someone might read an article on effective altruism and be like "wow, this is a cool idea", you can't engage them further. I also think that the articles are already pretty decent.

Red team: Many EA causes seem to assume that current society will continue on its present trajectory for a long time, despite the fact that it depends on a massive amount of non-renewable fossil fuels and is thus unsustainable. It is unlikely that alternative sources of energy, like solar, wind or nuclear, can replace fossil fuels at the speed and scale required, since they themselves depend on a limited supply of metals and rely on fossil fuels for their transportation (trucks) and manufacturing (extraction of minerals).

Given that, shouldn't EA causes be reprioritized based on the assumption that the current energy-intensive industrial society is a temporary thing?

A few more elements of context:

  • Oil production is expected to decline in the short to medium term, between 2025 and 2040, even with high discoveries. Trends that prevented such a decline in the past (better technology, more investment) appear to have reached their limits.
  • 90% of products are made of or transported by oil. Food production is also very dependent on fossil fuels. Most post-WWII recessions have been preceded by high oil prices, including the 2008 crisis, so a prolonged energy descent could mean the end of growth, leading to disruptions of political and economic systems.
  • This would make some causes with energy-intensive solutions less promising, since they would be unattractive in an energy-constrained world. I have in mind most longtermist causes, asteroid prevention, or even the development of cultured meat. Depending on the timing (so this is more uncertain), the development of artificial intelligence might also be impaired for the same reasons.

You can find the detailed reasoning and sources in a draft that we are currently writing for the EA Forum: https://docs.google.com/document/d/1Mte_x4hsW5XiccCkHkW-iAUvNM_qwenKMCQazXxNJrc/edit# (and we are looking for reviewers, if someone is interested!)

Reposting my post: “At what price estimate do you think Elsevier can be acquired?

Could acquiring Elsevier and reforming it to be less rent-seeking be feasible?”

Why might one believe that MacAskill and Ord's idea of The Long Reflection is actually a bad idea, or impossible, or that it should be dropped from longtermist discourse for some other reason?

Robin Hanson's argument here: https://www.overcomingbias.com/2021/10/long-reflection-is-crazy-bad-idea.html

Red team: is it actually rational to have imprecise credences in the possible long-run/indirect effects of our actions, rather than precise ones?

Why: my understanding from Greaves (2016) and Mogensen (2020) is that this has been necessary to argue for the cluelessness worry.

This came up here. This paper was mentioned.

Imo, there are more important things than ensuring you can't be Dutch booked, like having justified beliefs and avoiding fanaticism. Also, Dutch books are hard to guarantee against with unbounded preferences, anyway.

Red team: Certain types of prosaic AI alignment (e.g., arguably InstructGPT) promote the illusion of safety without genuinely reducing existential risk from AI, or are capabilities research disguised as safety research. (A claim that I've heard from EleutherAI, rather indelicately phrased, and would like to see investigated)

  1. Red team against hedonism and objectively cardinally measurable hedonistic welfare in particular.
  2. Red team against objectively cardinally measurable welfare generally.
  3. Red team against expected value maximization and the importance of formal guarantees against Dutch books/money pumps, perhaps relative to other considerations and alternatives.

Red team: There are no easy ways that [EA org*]'s strategy could be better optimised towards achieving that organisation's stated goals and/or the broader goal of doing the most good.

Honestly, not the easiest question but practically quite useful if anyone listens.

* E.g. CEA, OpenPhil, GiveWell, FTX Future Fund, Charity Entrepreneurship, HLI, etc.

Make the best case against: "Some non-trivial fraction of highly talented EAs should be part- or full-time community builders." The argument in favor would point to the multiplier effect. Assume you could attract the equivalent of one person as good as yourself to EA within one year of full-time community building. If this person is young and we assume the length of a career to be 40 years, then you have just invested 1 year and gotten 40 years in return. By the most naive / straightforward estimate, then, a chance of about 1/40 of you attracting one you-equivalent would be break-even. Arguably that's too optimistic and the true break-even point is somewhat higher than 1/40; maybe 1/10. But that seems prima facie very doable in a full-time year. Hence, a non-trivial fraction of highly talented EAs should do community building.
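
A minimal sketch of that break-even arithmetic (purely illustrative; the one-year cost, 40-year career length, and 1/10 adjusted figure are taken from the comment above, and the variable names are just placeholders):

```python
# Illustrative sketch of the break-even arithmetic in the comment above.
# Assumed inputs (from the comment): one full-time year of community building
# as the cost, and a recruited "you-equivalent" contributing a ~40-year career.

years_invested = 1    # one year of full-time community building
career_years = 40     # assumed remaining career length of the recruit

# Naive break-even: the recruitment probability at which the expected
# career-years gained equal the year spent.
naive_break_even = years_invested / career_years
print(f"Naive break-even probability: {naive_break_even:.3f}")  # 0.025, i.e. 1/40

# The comment treats the naive figure as too optimistic and suggests ~1/10
# as a more realistic break-even point.
adjusted_break_even = 1 / 10
expected_gain = adjusted_break_even * career_years
print(f"Expected career-years gained per year invested at p = 1/10: {expected_gain:.1f}")
```

On these (contestable) assumptions, any recruitment probability above the break-even point makes a full-time year of community building look worthwhile; that is the multiplier-effect reasoning the red team would be asked to attack.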

(I have a few arguments against the above reasoning in mind, but I believe listing them here would be against the spirit of this question. I still would be genuinely interested to see this question be red-teamed.)

Red Team: Toby Ord states that safeguarding humanity will require "...new institutions, filled with people of keen intellect and sound judgement, endowed with a substantial budget and real influence over policy". Argue that this is not the case.

Quote is from The Precipice, page 196.

When discussing the future of humanity, Toby Ord states: "I find it useful to consider our predicament from humanity's point of view: casting humanity as a coherent agent, and considering the strategic choices it would make were it sufficiently rational and wise." Why might this approach be more misleading than useful?

Quote is from The Precipice, page 188.

Red team: reducing existential risks is still the best use of our resources even if you hold population ethical views other than total utilitarianism. (More precisely: … defend it for either (i) non-identitarian person-affecting views, or (ii) ‘average utilitarianism with a minimum population constraint’.)
