This is a special post for quick takes by Amber Dawn. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Application forms for EA jobs often give an estimate for how long you should expect it to take; often these estimates are *wildly* too low ime. (And others I know have said this too). This is bad because it makes the estimates unhelpful for planning, and because it probably makes people feel bad about themselves, or worry that they're unusually slow, when they take longer than the estimate. 

Imo, if something involves any sort of writing from scratch, you should expect applicants to take at least an hour, and possibly more. (For context, I've seen application forms which say 'this application should take 10 minutes' and more commonly ones estimating 20 minutes or 30 minutes).

It doesn’t take long to type 300 words if you already know what you’re going to say and don’t particularly care about polish (I wrote this post in less than an hour probably). But job application questions, even ‘basic’ ones like ‘why do you want this job?’ and ‘why would you be a good fit?’, take more time. You may feel intuitively that you’d be a good fit for the job, but take a while to articulate why. You have to think about how your skills might help with the job, perhaps cross-referencing with the job description. And you have to express everything in appropriately-formal and clear language.

Job applications are also very high-stakes, and many people find them difficult or ‘ugh-y’, which means applicants are likely to take longer to do them than they “should”, due to being stuck or procrastinating. 

Maybe hirers give these time estimates because they don’t want applicants to spend too long on the first-stage form (for most of them, it won’t pay off, after all!). This respect for people’s time is laudable. But if someone really wants the job, they *will* feel motivated to put effort into the application form.

There’s a kind of coordination problem here too. Let's imagine there's an application for a job that I really want, and on the form it says 'this application should take you approximately 30 minutes'. If I knew that all the other applicants were going to set a timer for half an hour, write what came to mind, then send off the form without polishing it too much, I also might do that. But as far as I know, they are spending hours polishing their answers. I don’t want to incorrectly seem worse than other candidates and lose out on the job just because I took the time estimates more literally than other people!

‘Aren’t you just unusually slow and neurotic?’
-No; I’d guess that I write faster than average, and I’m really not perfectionist about job applications.

Suggestion: if you’re hiring, include a link at the end of the application form where people can anonymously report how long it actually took them.

As a former applicant for many EA org roles, I strongly agree! I recall spending 2-8 times longer on some initial applications than the job ads estimated.

As someone who just helped drive a hiring process for Giving What We Can (for a Research Communicator role) I feel a bit daft having experienced it on the other side, but not having learned from it. I/we did not do a good enough job here. We had a few initial questions that we estimated would take ~20-60 minutes, and in retrospect I now imagine many candidates would have spent much longer than this (I know I would have). 

Over the coming month or so I'm hoping to draft a post with reflections on what we learned from this, and how we would do better next time (inspired by Aaron Gertler's 2020 post on hiring a copyeditor for CEA). I'll be sure to include this comment and its suggestion (having a link at the end of the application form where people can report how long it actually took to fill the form in) in that post. 

Might there be a way to time submissions? I know some tests I have taken for prospective employers are timed. This means candidates only get, say, 1 hour both to see the questions asked and to answer them. This might also reduce bias in recruitment, as someone with a full-time job and caretaker responsibilities might not have the luxury of spending 6x the time on an application, while someone in a more privileged position could spend even longer than that.

In the hiring round I mentioned, we did time submissions for the work tests, and at least my impression is that the way we did so worked out fairly well. Having a timed component for the initial application is also possible, but might require more of an 'honour code' system, as setting up a process that allows for verification of the time spent is a pretty big investment for the first stage of an application.

Yes, there are ways to time submissions, and (from my perspective) they aren't particularly difficult to find or to use. I suspect that any organization not using them isn't held back by an inability to find a timing tool; more likely it has chosen not to devote the resources to improving this process, or hasn't thought of it, or hasn't bothered with it.

A second thought I had is that timed responses might also be beneficial for the hiring organization, for two reasons. First, at work, you do not have 4 hours to polish an email to a potential donor. You have 10 minutes, because you have a mountain of other important things to do. As such, a strictly timed assessment is likely to give a more realistic view of expected performance on the job. Secondly, timed responses also make for a more apples-to-apples comparison, where you are more likely to select the best candidates instead of the candidates with the most time and/or the largest network of educated family and friends willing to help polish responses.

I'm looking forward to reading a post with reflections on lessons learned. :)

We had a few initial questions that we estimated would take ~20-60 minutes, and in retrospect I now imagine many candidates would have spent much longer than this (I know I would have).

Michael, I'm wondering if more transparency would have helped here? As a simplistic example, there is a big difference between these two questions:

Tell us about a time when you took initiative in a work context.

and

Tell us about a time when you took initiative in a work context. We are specifically looking for candidates that have done this in relation to people management, can describe the process and the results/impact, and can demonstrate taking initiative by doing something fairly innovative.

I'm not sure I follow what you mean by transparency in this context. Do you mean being more transparent about what exactly we were looking for? In our case we asked for <100 words on "Why are you interested in this role?" and "Briefly, what is your experience with effective giving and/or effective altruism?" and we were just interested in seeing if applicants' interest/experience aligned with the skills, traits and experience we listed in the job descriptions.

I mean transparency in the sense of how the answers are assessed/evaluated. This basically gives candidates a little bit more guidance and structure.

An analogy that I like to use is rather silly, but it works: I might ask a candidate to describe to me how physically fit he is, and he tells me about how much weight he can lift and how fast he can run. But it turns out that I’m actually interested in flexibility and endurance rather than power and speed, and I’ll reject this candidate since he didn’t demonstrate flexibility or endurance. So it is true that he described his physical fitness and that I’m assessing based on his physical fitness, but it’s also true that the information offered and what I wanted to assess were very different.

I don't have any particularly strong views, and would be interested in what others think.

Broadly, I feel like I agree that more specificity/transparency is helpful, though I'm not convinced that it isn't also worth asking an open-ended question like "Why are you interested in the role?" at some stage in the application. Not sure I can explain/defend my intuitions here much right now, but I would like to think more on it when I get around to writing some reflections on the Research Communicator hiring process.

I just want to say that I love seeing this kind of thing on the EA Forum, and it is so different from most other parts of the internet: I have a proposal or a suggestion, and it doesn't quite mesh with what you think/feel. Neither of us have great justifications or clear data, and rather than ad hominems or posturing or some type of 'battle,' there is simply a bit of exchange and some reflection.

I really like that your response was reflective/pensive, rather than aggressive or defensive. Thanks for being one of the people that makes the internet ever-so-slightly better than it otherwise would be. ☺

Two (barely) related thoughts that I’ve wanted to bring up. Sorry if it’s super off topic.

The Rethink Priorities application for a role I applied to two years ago told applicants it was a timed application and not to take over two hours. However, there was no actual verification of this; it was simply a Google form. In the first round I “cheated” and took about 4 hours. I made it to the second round. I felt really guilty about this, so I made sure not to go over on the second round. I didn’t finish all the questions and did not get to the next round. I was left with the unsavory feeling that they were incentivizing dishonest behavior, and it could have easily been solved by doing something similar to tech companies, where a timer starts when you open the task. I haven’t applied for other stuff since, so maybe they fixed this.

Charity Entrepreneurship made a post a couple of months back extending their deadline for the incubator because they thought it was worth it to get good candidates. I decided to apply and made it a few rounds in. I would say I spent about 10 hours doing the tasks. I might be misremembering, but at the time of the extension I’m pretty sure they already had 2,000-4,000 applicants. Considering the time it took me, assuming other applicants were similar, and given the number of applicants they already had, I’m not sure extending the deadline was actually positive EV.

Neither of these things is really that big of a deal, but I thought I’d share.

Hi Charlie,

Peter Wildeford from Rethink Priorities here. I think about this sort of thing a lot. I'm disappointed in your cheating but appreciate your honesty and feedback.

We've considered using a time verification system many times and even tried it once. But it was a pretty stressful experience for applicants, since the timer then required the entire task to be done in one sitting. The system we used also introduced some logistical difficulty on our end.

We'd like to try to make things as easy for our applicants as possible since it's already such a stressful experience. At the same time, we don't want to incentivize cheating or make people feel like they have to cheat to stay ahead. It's a difficult trade-off. But so far I think it's been working -- we've been hiring a lot of honest and high integrity people that I trust greatly and don't feel like I need a timer to micromanage them.

More recently, we've been experimenting with more explicit honor code statements. We've also done more to pre-test all our work tests to ensure the time limits are reasonable and practical. We'll continue to think and experiment around this and I'm very open to feedback from you or others about how to do this better.

Hi Peter, thanks for the response - I am/was disappointed in myself also.

I assumed RP had thought about this, and I hear what you are saying about the trade-off. I don't have kids or anything like that, and I can't really relate to struggling to sit down for a few hours straight, but I totally believe this is an issue for some applicants and I respect that.

What I am more familiar with is doing school during COVID. My experience left me with a strong impression that even relatively high-integrity people will cheat in this version of the prisoner's dilemma. Moreover, it will cause them tons of stress and guilt, but they are way less likely to bring it up than someone who has issues with taking the test in one sitting, because no one wants to out themselves as a cheater, or even as having thought about cheating.

I will say that in school there is something additionally frustrating, or tantalizing, about seeing your math tests, which usually have a 60% average, come back in the 90%s, and having that confirmation that everyone in your class is cheating. But given that the people applying are thoughtful and smart, they would probably assign this a high probability anyway.

If I had to bet, I would guess a decent chunk (>20%) of the current employees at RP who took similar tests did go over the time limits, but of course this is pure speculation on my part. I just do think a significant portion of people (10-50%) will cheat in this situation, and given a random split between the cheaters and non-cheaters, the people who cheat are going to have better essays and you are more likely to select them.
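A toy simulation of that selection effect (all the numbers here are my own made-up assumptions, not anything from RP):

```python
import random

# Toy model: essay quality is underlying skill plus a boost for going over
# the time limit; the top scorers advance. All parameters are guesses.
random.seed(0)

N_APPLICANTS = 200
N_ADVANCE = 20
CHEAT_RATE = 0.3        # assumed fraction who quietly go over the limit
OVERTIME_BONUS = 0.5    # assumed score boost from the extra hours

applicants = []
for _ in range(N_APPLICANTS):
    cheated = random.random() < CHEAT_RATE
    skill = random.gauss(0, 1)                      # underlying ability
    score = skill + (OVERTIME_BONUS if cheated else 0.0)
    applicants.append((score, cheated))

# Advance the highest-scoring applicants and see how many of them cheated
advancing = sorted(applicants, reverse=True)[:N_ADVANCE]
share_cheaters = sum(cheated for _, cheated in advancing) / N_ADVANCE

print(f"Cheaters in applicant pool: {CHEAT_RATE:.0%}")
print(f"Cheaters among those advancing: {share_cheaters:.0%}")
```

Under these made-up numbers, the share of over-time applicants among those advancing comes out noticeably higher than their share of the overall pool, which is the worry described above.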

(to be clear I'm not saying that even if the above is true that you should definitely time the tests, I could still understand it not being worth it)

I'd be very interested in information about the second claim: that the incubator round already had 2k applicants and thus the time from later applicants was a waste.

Did you end up accepting late applicants? Did they replace earlier applicants who would otherwise have been accepted, or increase the total class size? Do you have a guess for the effects of the new participants?

Or more generally: how do you think about the time unaccepted applicants spend on applications?

My guess is that evaluating applications is expensive so you wouldn't invite more if it didn't lead to a much higher quality class, but I'm curious for specifics. CE has mentioned before that the gap between top and median participant is huge, which I imagine plays into the math.

I think you might have replied on the wrong subthread, but a few things:

This is the post I was referring to. At the time of the extension, they claim they had ~3k applicants. They also imply that they had way fewer (in quantity or quality) applicants for the fish welfare and tobacco taxation projects, but I'm not sure exactly how to interpret their claim.
 

Did you end up accepting late applicants? Did they replace earlier applicants who would otherwise have been accepted, or increase the total class size? Do you have a guess for the effects of the new participants?

Using some pretty crude math and assuming both applicant pools are the same, each additional applicant has a ~0.7% chance of being one of the 20 best applicants (I think they take 10 or 20), so it takes roughly 150 extra applicants to get one replaced. If they had to internalize the costs to the candidates, and let's be conservative and say 20 bucks a candidate, then that would be about 3k per extra candidate replaced.

And this doesn't include the fact that the returns consistently diminish. They also have to spend more time reviewing candidates, and even if a candidate is actually better, this doesn't guarantee they will correctly pick them. You can probably add another couple of thousand for these considerations, so maybe we go with ~5k?

Then you get into issues of fit vs quality: grabbing better-quality candidates might help CE's counterfactual value but doesn't help the EA movement much, since you're pulling from the same talent pool. And lastly, it's sort of unfair to the people who applied on time, but that's hard to quantify.

And I think 20 bucks per candidate is really, really conservative. I value my time closer to $50 an hour than $2, and I'd bet most people applying would probably say something above $15.

So my very general and crude estimate is that they are implicitly saying they value replacing a candidate at 2k-100k, most likely somewhere between 5-50k. I wonder what they would have said, at the time they extended the deadline, if we had asked how much they would pay to have one candidate replaced.
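A minimal sketch of the arithmetic above (the applicant count, number of places, and per-applicant cost are all guesses, as stated):

```python
# Back-of-the-envelope version of the numbers above; all inputs are guesses.
n_applicants = 3000        # applicants at the time of the extension (per the post)
n_places = 20              # they take roughly 10-20; use 20
cost_per_applicant = 20    # $ value of each applicant's time (very conservative)

p_top = n_places / n_applicants          # ~0.7% chance an extra applicant is top-20
applicants_per_replacement = 1 / p_top   # ~150 extra applicants per candidate replaced
cost_per_replacement = applicants_per_replacement * cost_per_applicant  # ~$3,000

print(f"P(extra applicant makes top {n_places}): {p_top:.2%}")
print(f"Extra applicants needed per replacement: {applicants_per_replacement:.0f}")
print(f"Applicant-time cost per replacement: ${cost_per_replacement:,.0f}")
```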

If anyone thinks I missed super obvious considerations or made a mistake, lmk.

That post opens with

If we don't find more potential founders we may not be able to launch charities in Tobacco Taxation and Fish Welfare

This is apparently a pattern

In recent years we have had more charity ideas than we have been able to find founders for.

Seems pretty plausible they value a marginal new charity at $100k, or even $1m, given the amount of staff time and seed funding that go into each participant. 

I also suspect they're more limited by applicant quality than number of spaces.

That post further says

it is true that we get a lot of applicants (~3 thousand). But, and it’s a big but, ~80% of the applications are speculative, from people outside the EA community and don’t even really understand what we do. Of the 300 relevant candidates we receive, maybe 20 or so will make it onto the program.

If you assume that the late applicants recruited by posting on EAF are in the "relevant" pool, those aren't terrible odds.[1] And they provide feedback even to first round applicants, which is a real service to applicants and cost to CE.

I don't know if they're doing the ideal thing here, but they are doing way better than I imagined from your comment. 

 

  1. ^

    I don't love treating relevant and "within EA" as synonyms, but my guess is that the real point is "don't even really understand what we do", and EA is a shorthand for the group that does.

I don't know if they're doing the ideal thing here, but they are doing way better than I imagined from your comment. 

Yep, after walking through it in my head plus re-reading the post, it doesn't seem egregious to me.

Hi Elizabeth,

I represent Rethink Priorities but the incubator Charlie is referencing was/is run by Charity Entrepreneurship, which is a different and fully separate org. So you would have to ask them.

If there are any of your questions you'd want me to answer with reference to Rethink Priorities, let me know!

Oops, should have read more carefully, sorry about that.

As I have spent more time interacting with job application processes,[1] I lean more and more toward the opinion that broad/vague questions (such as ‘why are you interested in this job?’ and ‘why would you be a good fit?’) shouldn't be used. I'll ramble a bit about reasons, but I think the TLDR would be "poor applicant experience, and not very predictive of job performance."

On the organizational side, my observation is that there often aren't clear criteria for assessing/evaluating these questions[2], which means that the unofficial criterion often ends up being "do I like this answer." I'd prefer something ever-so-slightly more rigorous, such as: reject unless both A) there aren't grammar/spelling mistakes, and B) the answer demonstrates that this person has at least a basic understanding of what our organization does.[3]

On the applicant side, there is a lot of uncertainty regarding what a good answer looks like, which makes the application feel like a very arbitrary guess-what-the-right-answer-is game. We might label it as low procedural justice. For a question such as "How did you hear about ORGANIZATION, and what makes you interested in working here?", an honest answer will probably be penalized, and thus I suspect that most applicants who care about getting the job will spend a good deal of effort on impression management, shying away from saying/describing how the appeal is a combination of prestige, good salary, company culture, the professional network, and the feeling of making a positive impact.

These broad/vague questions are probably useful for eliminating particularly bad-fit applications.[4] But I do not have confidence in the ability of these questions to do any more than eliminate the bottom ~15% of applications.

  1. ^

    Both from the company side of filtering/selecting applications, and from the applicant side of submitting applications.

  2. ^

    But I have seen a minority of organizations that actually use a rubric and have clear and job-relevant criteria. Good for you guys!

  3. ^

    While also informing the applicant up front that "we don't expect you to write an essay about how you've had a lifelong desire to work in an entry-level research position. We are just looking to make sure you have at least a surface-level understanding of our industry and our mission. We'd like for you to demonstrate that you have some knowledge or experience related to our field/industry."

  4. ^

    "Bad fit" is a pretty fuzzy concept, but I'm thinking roughly about people who give answers that don't demonstrate a modicum of knowledge or experience in the relevant field. If I am applying to Open Philanthropy, these would probably be answers such as "Overall I want to give pursue goodness for people, present and future," or "I can succeed in this role because of my experience as a JOB_TITLE. My organization and attention to detail enabled me to exceed expectations in that role." If I am the hiring manager, I want to see that the applicant has read the job description and is able to demonstrate some familiarity with the area of work.

I found this reflection interesting and in general really like hearing your thoughts on hiring, Joseph :)

Aww, thanks. That makes me smile and tear up a bit.

Thanks for saying this. This totally rhymes with my experience. I assume that if an application says it will take 15 minutes, I will probably need to spend at least an hour on it (assuming I actually care about getting the job).

Let’s research some impactful interventions! Would you come to an intervention evaluation 101 learning-together event in London?

I want to run an event where we get together and do some quick-and-dirty intervention evaluation research, to learn more about how it works. I know nothing about this so we’ll be learning together!

Where: (central?) London
When: a mutually-agreed weekend day
What: I’ll come up with a structure loosely based on (some stages of?) the AIM/Charity Entrepreneurship research process. We’ll research and compare different interventions addressing the same broad problem or cause area. For example, we might start by quickly ranking or whittling down a long list in the morning, and then do some deeper dives in the afternoon.  We’ll alternate between doing independent research and discussing that research in pairs or small groups.

If you're interested in coming, please DM me: I’ll use a WhatsApp chat to coordinate. No need to firmly commit at this stage!

Why?
I hope to:

-Better understand how charity evaluators and incubators such as GiveWell and AIM form their recommendations, so I feel more empowered to engage with their research and can identify personal cruxes

-Learn how to assess interventions in areas that I think are promising, but that haven’t been discussed or researched extensively by EAs

-Just learn more about the world?

The event could also be useful for people who want to test their fit for EA research careers, though that’s not my own motivation.

What cause area?
We’d vote on a cause area beforehand. My vision here is something like ‘an area in global health and development that seems very important, but that hasn’t been discussed, or has been discussed relatively minimally, by EAs’.

What next?
If this goes well, we could hold these events regularly and/or collaborate and co-work on more substantive research projects.

Is there a point to this? 
AIM’s process takes ~1300 hours and is undertaken by skilled professional researchers; obviously we’re not going to produce recommendations of anywhere near similar quality. My motivation is to become personally better-informed and better-engaged with the nitty gritty of EA/being impactful in the world, rather than to reinvent the GiveWell wheel. 
That said, we’re stronger together: if 50 people worked on assessing a cause area together, they’d only have to spend 26 hours each to collectively equal the AIM process. 26 hours isn’t trivial (and nor is 50 people), but it’s not crazy implausible either. If collectives of EAs are putting AIM-level amounts of hours into intervention evaluation in their spare time, seems like a win?

You don't have to be an asshole just because you value honesty 

In Kirsten's recent EA Lifestyles advice column (NB, paywalled), an anonymous EA woman reported being bothered by men in the community whose "radical honesty" leads them to make inappropriate or hurtful comments:
 


For example: radical honesty/saying true things (great sometimes, not fun when men decide to be super honest about their sexual attraction or the exact amount they’re willing to account for women’s comfort until the costs just “aren’t justified.” This kind of openness is usually pointless: I can’t act on it, I didn’t want to know, and now I’m feeling hurt/wary).

 

 

An implication is that these guys may have viewed the discomfort of their women interlocutors as a (maybe regrettable) cost of upholding the important value of honesty. I've encountered similar attitudes elsewhere in EA - ie, people being kinda disagreeable/assholeish/mean under the excuse of 'just being honest'.

I want to say: I don't think a high commitment to honesty inevitably entails being disagreeable, acting unempathetically, or ruffling feathers. Why? Because I don't think it's dishonest not to say everything that springs to mind. If that were the case, I'd be continually narrating my internal monologue to my loved ones, and it would be very annoying for them, I'd imagine.

If you're attracted to someone, and they ask "are you attracted to me?", and you say "no" - ok, that's dishonest. I don't think anyone should blame people for honestly answering a direct question. But if you're chatting with someone and you think "hmm, I'm really into them", and then you say that - I don't think honesty compels that choice, any more than it compels you to say "hold up, I just was thinking about whether I'd have soup or a burger for dinner".

I don't know much about the Radical Honesty movement, but from this article, it seems like they really prize just blurting out whatever you think. I do understand the urge to do this: I really value self-expression. For example, I'd struggle to be in a situation where I felt like I couldn't express my thoughts online and had to self-censor a lot. But I want to make the case that self-expression (how much of what comes to mind you can express vs are required to suppress) and honesty are somewhat orthogonal, and being maximally honest (ie, avoiding saying false things) doesn't require being maximally self-expressive.


 

I struggle to reconcile the implied takeaways from two discourse crises on the EA Forum.

When I read SBF stuff, I get the sense that we want to increase integrity-maxing unstrategic practices.

When I read about male misbehavior, it is suggested that we want to decrease adjacency to the radical honesty cluster of practices. 

I think it might feel obvious enough to me which takeaway should apply to which cases, but I still fear the overall message may be confused and I don't know if expectations are being set appropriately in a way that lots of people can be expected to converge on. 

Hmm, that's interesting. I guess I had seen both of those discourses as having similar messages - something like 'it doesn't matter how "effective" you are, common sense virtue is important!' or 'we are doing a bad job at protecting our community from bad actors in it, we should do better at this'. (Obv SBF's main bad impact wasn't on EA community members, but one of the early red flags was that a bunch of people left Alameda because he was bad to work with. And his actions and gendered harassment/abuse both harm the community through harming its reputation). 

I do think it's reasonable to worry that these things trade off, fwiw. I'm just not convinced that they do in this domain - like, integrity-maxxing certainly involves honesty, but I don't see why it involves the sort of radical-honesty, "blurting out" thing described in the post. 

 

It seems that you (correct me if I'm wrong), along with many who agree with you, are looking to further encourage a norm within this domain, on the basis of at least one example that challenged it (i.e. the one from the blog post).

This might benefit some individuals by reducing their emotional distress. But strengthening such a norm that already seems strong/largely uncontroversial/to a large extent popular in the context of this community, especially one within this domain, makes me concerned in several ways:

  • Norms like these that target expression considered offensive seem to often evolve into/come in the form of restrictions that require enforcement. In these cases, enforcement often results in:
    • "Assholes"/"bad people" (and who may much later even be labeled "criminals" through sufficient gradual changes) endure excessive punishments, replacing what could have been more proportionate responses. Being outside of people's moral circles/making it low status to defend them makes it all too easy.
    • Well-meaning people get physically or materially (hence also emotionally) punished for honest mistakes. This may happen often - as it's easy for humans to cause accidental emotional harm.
    • Enforcement can indeed be more directed, but this is not something we can easily control. Even if it is controlled locally, it can go out of control elsewhere.
  • Individuals who are sociopolitically savvy and manipulative may exploit their environment's aversion to relatively minor issues to their advantage. This allows them to appear virtuous without making substantial contributions or sacrifices.
    • At best, this is inefficient. At worst, to say the least - it's dangerous.
  • Restrictions in one domain often find their way into another. Particularly, it's not challenging to impose restrictions that are in line with illegitimate authority as well as power gained through intimidation.
    • This can lead people to comfortably dismiss individuals who raise valid but uncomfortable concerns, by labeling them as mere "assholes". To risk a controversial, but probably illuminating example, people often unfairly dismiss Ayaan Hirsi Ali as an "Islamophobe".
    • This burdens the rest of society with those other restrictions and their consequences. Those other restrictions can range from being a mere annoyance to being very bad.

I'd be less worried (and possibly find it good) if such a norm was strengthened in a context where it isn't strong, which gives us more indication that the changes are net positive. However, it's evident that a large number of individuals here already endorse some version of this norm, and it is quite influential. Enthusiasm could easily become excessive. I sincerely doubt most people intend to bring about draconian restrictions/punishments (on this or something else), but those consequences can gradually appear despite that.

(So my aim was less to propose a norm, more to challenge an implicit preconception I've heard of (elsewhere in EA too!) - that a person who highly values honesty will, necessarily, end up hurting others' feelings. I don't really agree with "proposing norms" as an activity - I'm just reacting a certain way to certain people, and they can react to my reaction by changing their behaviour, or not doing that.)

You seem to be worried that advocating for a norm that's already strong tends to lead to unfair punishments for transgressors. I don't really think there's a basis for this. Are there many instances in EA where you think people have been punished excessively and disproportionately for minor transgressions? Is this a pattern? Fwiw I don't want to "punish" people who are radically honest in hurtful ways - I just want them to understand that they can be honest and also kind/empathetic.

In general, I think that the way norms stay strong is by people advocating for them, even if people already mostly agree. It teaches newcomers the norm and reminds older community members. It can be worth stating the obvious. But my original point doesn't seem to be that obvious, given that the original letter-writer was having problems with people "breaking" this supposed "norm".

 

You seem to have written against proposing norms in the past. So apologies for my mistake and I'm glad that's not your intention. 

To be clear, I think we should be free to write as we wish. Regardless, it still seems to me that voicing support for an already quite popular position on restricting expression comes with the risk of strengthening associated norms and bringing about the multiple downsides I mentioned.

Among the downsides, yes, the worry that strengthening strong norms dealing with 'offensive' expression can lead to unfair punishments. This is not a baseless fear. There are historical examples of norms on restricting expression leading to unfair punishments; strong religious and political norms have allowed religious inquisitors and political regimes to suppress dissenting voices.

I don't think EA is near the worst forms of it. In my previous comment, I was only pointing to a worrying trend towards that direction. We may (hopefully) never arrive at the destination. But along the way, there are more mild excesses. There have been a few instances where, I believe, the prevailing culture has resulted in disproportionate punishment either directly from the community or indirectly from external entities whose actions were, in part, enabled by the community's behavior. I probably won't discuss this too publicly but if necessary we can continue elsewhere.

I see how both are related to honestly saying things unprompted.

One difference is whether the honesty is necessary for someone to make an important decision.

If we want to increase our transparency as a community and reduce the risk of bad actors gaining undue influence, someone needs to say "I know no one asked, but I had a concerning experience with this person." And then some people will hopefully say, "Thanks, I was going to make a deal with this person or rely on them for something, and now I won't."

But if someone just came up to me and said "I like how your body looks" or something, I would probably say, "I wasn't planning on making any decisions relating to you and my body, and I continue to not plan on doing that. Why are you telling me? Who is this supposed to benefit?"

Reminds me of this scene from Glass Onion: https://twitter.com/KnivesOut/status/1611769636973854723?s=20

"It's a dangerous thing to mistake speaking without thought for speaking the truth."

I also doubt that the men in question actually speak honestly and with the same immediacy. The choice to say this and not something else is motivated by things other than honesty.

I have a framing that I often adopt that may be even more simple for people to use. I value honesty, but I also value not making other people uncomfortable. If I were fully honest I would very often tell women I meet "I find you attractive," but I don't because that would make them uncomfortable. For me, honesty should be bounded by consideration for others. So from my perspective what these people are doing (maximizing honesty) is very similar to naïve utilitarianism.

Another framing is asking myself if what I want to say is true, is kind, is necessary, is helpful.

Some stuff that frustrates me about the ‘dating within EA’ conversation

This post is related to ‘Consider not sleeping around within the community’, to the smattering of (thankfully heavily downvoted) posts unironically saying there should be less polyamory in EA, and to various conversations I’ve had about this, both in public and private. It’s less polished and more of an emotional snapshot. I feel extremely triggered/activated by this and I’m a bit wary that I’m earning myself a reputation as the unhinged-rants-about-polyamory woman, or that I’m sort of arguing against something that isn’t substantively “there”. But I also think that emotions are information, and since these conversations are nominally about “making EA good/safe for women”, my perspective as a woman matters.

The meta-level

-We are all talking past each other. Some people are talking about power dynamic relationships. Some are talking about conflicts of interest. Some are talking about polyamory. Some are talking about casual sex or dating within EA. I've even seen one comment saying 'no-one should date anyone within EA'.  I'm likely part of the problem here, but yeah, this is aggravating. 

-I’m generally very wary of somewhat-vague admonishments addressed to a large group, with the assumption that the people who “need to hear” the admonishment will accurately self-select in and those to whom it doesn’t apply will accurately realise that and ignore it. Like, consider a feminist inveighing vaguely about how men are “trash” and/or need to “do better”. I’m pretty against this kind of rhetoric (unless it comes with a *hyper-specific* call to action or diagnosis of the bad behavior), because I think that this will cause anxiety for conscientious, neurotic, feminist men who wouldn’t hurt a fly (and sometimes queer women and NBs, if it’s relating to attraction to women), whereas abusive and/or misogynistic men are just not going to care.

Similarly, I do not think men will correctly self-assess as socially clumsy or as having lots of power. Owen Cotton-Barratt’s statement is instructive here: he completely failed to see his own power. (Also, incidentally, if I understand right he was monogamously partnered and wasn't deliberately trying to hit on the women he made uncomfortable, so a 'don't sleep around in the community' norm wouldn't have helped, here). I think the advice 'avoid hitting on people if you're socially clumsy', if taken seriously, would lead to lots of kind, reasonable men neurotically never hitting on anybody - even in appropriate social contexts when those advances would be welcome - whereas boundary-pushers and predators won’t care.

This sort of thing is especially dangerous in an EA context, since EAs take moral injunctions very literally and very seriously. I think this is why I feel defensive about this.

-These conversations are supposed to be about “making EA better/safer for women” whereas (a) it’s not clear that most of the posts are even by women (some are anon, and lots of the comments are from men) and (b) as a woman who dates people in the community, this just feels deeply counter-productive and Not Helpful. It’s possible that there are norms that are good for women overall but not me specifically, but I think this is far from established and I’m still not crazy about being collateral damage.

The object level

-I do think that if I had taken some people’s views about dating within the community seriously, I wouldn’t have the relationships I do. I want to defend the attitudes and behaviours that led to me and my partners forming positive relationships with each other.


-I think this kind of critique implies a view of the world I disagree with. 
(i) it implies that a large part of the problems in EA comes from social clumsiness, or maybe social clumsiness + power. I’m just more cynical about this: while I don’t want to minimize the harm done by 'off' comments and awkward advances, I’m more concerned about stuff like rape, assault, or ongoing abuse (in workplaces, homes or relationships). And there have been plenty of allegations of those things!

I don’t subscribe to an overly black-and-white view of people where they are either bad villains or good well-meaning citizens, but I don’t think that you end up raping or abusing people through 'social clumsiness'.

(ii) it implies that power is inevitable and relationships are not. Like, one way to prevent the unsavory interaction of power + relationships is to dissuade relationships. Another is to try to distribute power more equitably and give more power to people who are newer to the community, lower within organisational hierarchies, and who are structurally disadvantaged by things like gender. Similarly, in situations where a relationship conflicts with a professional role, I'd strongly want to prioritize preserving the relationship over preserving the role, just because for most people romantic relationships are very important and meaningful, whereas work relationships are instrumental.

I also think this kind of attitude takes responsibility and agency away from men? It assumes that drama and offence is just a necessary consequence of sexual interaction, rather than *something that can be mitigated* when people develop a feminist consciousness (and other progressive consciousnesses like anti-racism) and work on their empathy and social and emotional skills. The view ‘to solve gender problems we need to stop/limit sex’ seems both very pessimistic and kind of sexist against men. Rather than telling men not to date or even not to have casual sex, I’d rather tell men (and other genders! Other genders aren’t exempt from this!) to try to build the maturity to handle these encounters well, while empowering women so that they feel they can push back directly against minorly-inappropriate behaviour, and be supported in the case of suffering majorly inappropriate or harmful behaviour. 

 

This whole controversy reminds me a bit of the different approaches that governing bodies have taken to minimizing the risks that people run if they use financial instruments that they don’t fully understand.

The US government, maybe in particular the SEC, has historically taken the approach of banning poor people from using them, under the assumption that rich people must’ve gotten rich somehow, so they probably know how to use any and all financial instruments. Also, they can afford to lose more money.

Some crypto sites instead use quizzes that you can try as often as you want but where you have to get all answers right before you can use a product. People could just answer at random, but I suppose the effect is that most people actually read the questions and memorize the answers. Some of these quizzes fall short in quality though.

I’ve heard that you can now become an accredited investor by passing similar tests rather than just by being rich. That seems amazing to me!

I don’t know if this will really work in practice, but perhaps it’s worth a try? Some EAs who are experts in this develop informative, high-quality quizzes. To be accepted into conferences and such, you have to pass the latest version of the quiz. (Unchanged questions can be saved and prefilled the next time.)

That’ll probably not filter any strategic predators, but the people who were just always too busy with their jobs to read up on feminism will learn the basics and adjust their behavior. 

Plus, there is the hypervigilant group who read about these controversies and then feel so insecure about their social skills that they hardly dare to meet new people anymore or try anything new at all. If they ace the quiz, it can give them some confidence that the conference is not going to be a minefield of unknown unknown rules that they might trip over at any moment and traumatize others and end their own careers.

What do you think? Could that work?

I strongly agree with you: that kind of discourse takes responsibility away from the people who do the actual harm; and it seems to me like the suggested norms would do more harm than good.

Still, it seems that the community and/or leadership have a responsibility to take some collective actions to ensure the safety of women in EA spaces, given that the problem seems widespread. Do you agree? If yes, do you have any suggestions?

Weakly against asking people to explain downvotes/disagree-votes (even politely)

Quite often it'll be clear that a post/comment is being downvoted/disagree-voted, and someone - either the OP or just a reader who likes the comment/post - writes that they're surprised at the disagreement/downvoting, and they'd be interested to know why people are disagreeing or why they don't like it. 

Most of the time these requests are very polite and non-demanding, but I'm still (weakly) against them, because I think they contribute to an expectation that if you downvote/disagree-vote, you have to be willing and able to 'defend' your choice to do that. But this is a very high bar - if I were forced to verbally defend all of my voting choices, and in language that meets Forum norms, no less (not "I think the post is dumb, what do you want from me"), I would almost never vote. If people wanted to explain why they disagreed or disliked the post, they probably would have already commented!

It's also asymmetric - I've never seen someone say "I don't understand why this is getting upvoted". So asking people to explain downvotes/disagree-votes might lead to a dynamic where there's a mild disincentive to downvote/disagree and no comparable disincentive to upvote/agree, which means that controversial posts would appear to have artificially more upvotes/agree-votes than they 'deserve'.

 

What if users could anonymously select tags explaining why they upvoted or downvoted? This is similar to how apps like Uber and Airbnb let you provide more detailed feedback like "Ride was smooth" and "Ride was on time" when you rate drivers/hosts.

I get what you mean, but I'm in favor of norms where, when a proposal or take isn't quite right, somehow or another an alternative option gets dropped there proactively. I do worry EAs have the habit of pooh-poohing things without actually contributing to improvement.

If people framed their requests more in terms of "can someone offer a better proposal in this thread please? Something they think will get upvotes by the people downvoting?", how would you feel about that?

I upvoted & disagreevoted on this, and I'm going to try to defend the 'call for explanation' a bit - especially for disagreevotes, I generally agree that downvote signals are more clear.

I think sometimes it's not really clear why a post is being disagree-voted, especially if it makes a number of different points. Does a disagree-vote represent a vote against the points on balance, or the major point, or a particular crux of the issue? My posts/comments tend to be a bit on the long side and I've run into this (so it may be self-inflicted). I think the call for explanation can be seen more as a call for replies focusing on a specific section and continuing from there.

For the asymmetry, I grant that's probably true. On the other hand, it's probably valuable for people to find cruxes of disagreement in good faith, possibly more so than just shows of agreement - though I do think there should be more comments saying "I agree and thought paragraph 2 was an excellent statement of position X/changed my mind".

I think fundamentally there's a collective action problem here where, no, we probably don't want every vote to be accompanied by a comment and explanation of that vote. But as everyone can free-ride on others to provide that explanation, we may often find ourselves in a position where there's a topic with a lot of active disagreement, and a user posts a position but gets disagree-voted with no explanation of that vote at all. This might be particularly disheartening for the poster (especially if they are posting against conventional EA wisdom) if they've approached the topic in good faith, put a lot of time and effort into writing their comment, and provided a lot of supporting evidence for it.

Tl;dr: Overall this is probably a co-ordination problem - I think we could get value by shifting the norms more towards exploring disagreement on Forum posts/comments.

I don't have a problem with people asking nicely; since it creates no obligation to explain I don't think it creates a disincentive to downvoting.

Fund me to research interesting questions?
 

Here’s a list of questions/topics I’d be interested to research. If you’re also interested in one of these questions and would like to fund me to research it, get in touch: you can email at ambace@gmail.com, DM me on the Forum, or book a chat. It’s a bit of a long shot, but you don’t get what you don’t ask for XD

I’m also keen to hear about relevant work that already exists. I haven’t done much work yet on any of these questions, so it’s possible there’s already a lot of research on them that I’m not aware of.

1. Why do people treat each other badly?

The world is pretty bad. Some of that badness isn’t caused by humans, for example disease and natural death. But lots of the badness comes from humans doing bad things to each other: abuse, war, and failing to save metaphorical drowning children.

Why is this?

Scott Alexander fans might say ‘Moloch’. Therapy people might say ‘trauma’. Evopsych people might say ‘it maximises fitness’. Other people, I’m sure, would say other things. Who is (most) right?


2. Antidote to the curse of knowledge

The ‘curse of knowledge’ is the phenomenon whereby, if you know about something, it’s really hard to explain it to others, because it’s hard for you to imagine not knowing what you know, and therefore what needs explanation. This is one of the fundamental difficulties of teaching and explanatory writing.

Relatedly, I think there is also a ‘curse of competence’: if you’re teaching or explaining a topic, chances are you have some natural aptitude for that topic or skill. If you’re a maths teacher, chances are you’re naturally good at math, and learnt it quite easily when you were young. This makes it harder for you to empathise with people who really struggle.

I think it would be cool to do some research into ‘systematic ways to bypass the curse of knowledge’. This could either be a technique that explain-ers and teachers could use themselves, or a technique for a teacher and student, or explainer and explain-ee, to use collaboratively. Such a technique might involve asking certain questions, developing a typology of ‘reasons people don’t understand a thing’, coming up with intuitive ways of ‘breaking things down’, etc.

(I expect there is some useful research and thought on this out there, so it might just be a question of collating/distilling it)

3. Could we make a society where everyone loved their work?

It seems like an awful lot of people don’t like their jobs, the thing they have to do for approximately 40 hours a week. This seems bad.

Charles Fourier was an early socialist/anarchist thinker. He had this (bonkers? genius?) idea that one could set up a happy society by forming people into units such that for every job that needed to be done, there would be enough people in the unit who were innately passionate about doing that job.
His idea was that you could drive production purely by exploiting people’s passions, so you wouldn’t need to force anyone to work with external incentives.

This seems… great if you could make it work?

I envisage that for this project, I’d start by reading Fourier’s writings and trying to extract the non-bonkers elements, and then move on to studying more prosaic ways that people have tried to improve working conditions, such as labour unions, workers cooperatives, even career coaching.

4. Surveys to work out global priorities


I’ve posted about this before. If we want to do the most good, it seems important to get a granular sense of what the majority of people in the world actually want and value the most. If the population of the world could vote on what I should do with my donations or career, would they want me to work on global health, or longtermist causes, or something else entirely?  

5. Getting ‘open borders’ into the Overton window and/or research into advocacy to decrease anti-immigrant sentiment

6. Ideas that changed people’s lives: substack/blog series

I want to interview people about specific ideas that changed their life, then write posts based on that.

What sort of ideas?
e.g.
-theories or facts about how the world works (e.g. historical, scientific, economic, personal?)
-relationship skills (e.g. non-violent communication, authentic relating, ??)
-therapeutic techniques (e.g. IFS, CBT, ACT, loving-kindness meditation, ??)
-political ideas (e.g. critical race theory, labour theory of value, classical liberalism, ??)
-philosophies (e.g. Stoicism, utilitarianism, Quakerism, ??)
-practical ideas (e.g. productivity or planning systems, skills, ??)

What do you mean ‘changed your life’?

-made you decide to do a certain sort of work, or advocacy
-changed your day-to-day habits
-improved your wellbeing or mental health
-improved your relationships

I’ve started this one already as a spare-time project, but if someone funded it, I could afford to spend more time on it.

My hope for this project is it will both spread lots of good ideas, and also help me understand ‘how people and ideas interact’, which might in turn help me understand how one could best spread good/helpful ideas, if one wanted to do that.


7. Anarchism: ???
More of a broad topic than a question. I’m drawn to anarchism but have a bunch of questions about it. There is loads of writing on anarchism, so this might be less of a research project, more of a distillation project; for example, producing an ‘Anarchism 101 for Dummies’ explainer, or coming up with and framing anarchism-inspired ideas that could, with skilled advocacy, spread and catch on (for obvious reasons, I'm thinking less of political advocacy and change, more of cultural change or movements).


How much funding do you want?
Say up to £25,000, but (much!) less is also fine? I’m open to lots of possibilities. You could fund me to work on one of these questions full-time, or part-time, for a few months or a year. Or you could say ‘I’ll pay you to do 3 hours of research on Question 2 and see how you get on’. Or you could do something in the middle. Happy to talk specifics in DMs.

Will you research [other thing] instead? 
It very much depends on what the other thing is and how well it fits my skills and interests, but feel free to ask.

What are your qualifications to do this?
I did most of a PhD in Classics. I ended up leaving the PhD before finishing*, but for many years I enjoyed the course and produced research regularly, and I got some good feedback on my research.

I think I’m good at the type of research that involves staring at difficult questions, sitting with confusion, working with concepts, understanding and synthesizing complex texts, and thinking by writing. Happy to send you my CV and/or talk in more detail about my credentials.

*this was because of a combination of poor mental health and becoming disenchanted with my topic and academia in general

What’s the impact?
I haven’t done a detailed impact analysis for any of these questions, but my intuition is that they are all difficult to solve/make progress on, but potentially highly impactful if you do make progress. The impact case for me working on these questions is not, imo, that they are likely to have more impact than malaria prevention/AI safety/other central EA areas, but that they might be the highest-impact thing for me personally to be doing.


Questions? Feel free to DM, email (ambace@gmail.com) or book a call. While I am shamelessly plugging myself, I am also doing writing and editing stuff: more details here

 

Why doesn't EA focus on equity, human rights, and opposing discrimination (as cause areas)?

KJonEA asks:

'How focused do you think EA is on topics of race and gender equity/justice, human rights, and anti-discrimination? What do you think are factors that shape the community's focus?'

In response, I ended up writing a lot of words, so I thought it was worth editing them a bit and putting them in a shortform. I've also added some 'counterpoints' that weren't in the original comment. 

To lay my cards on the table: I'm a social progressive and leftist, and I think it would be cool if more EAs thought about equity, justice, human rights and discrimination - as cause areas to work in, rather than just within the EA community. (I'll call this cluster just 'equity' going forward). I also think it would be cool if left/progressive organisations had a more EA mindset sometimes. At the same time, as I hope my answers below show, I do think there are some good reasons that EAs don't prioritize equity, as well as some bad reasons. 

So, why don't EAs prioritize gender and racial equity as cause areas?

1. Other groups are already doing good work on equity (i.e. equity is less neglected)

The social justice/progressive movement has got feminism and anti-racism pretty well covered. On the other hand, the central EA causes - global health, AI safety, existential risk, animal welfare - are comparatively neglected by other groups. So it kinda makes sense for EAs to say 'we'll let these other movements keep doing their good work on these issues, and we'll focus on these other issues that not many people care about'.

Counter-point: are other groups using the most (cost-)effective methods to achieve their goals? EAs should, of course, be epistemically modest; but it seems that (e.g.) someone who was steeped in both EA and feminism might have some great suggestions for how to improve gender equality and women's experiences effectively.

2. Equity work isn't cost-effective

EAs care a lot about cost-effectiveness, i.e. how much demonstrable good impact you can get for your money. Maybe lots of equity/diversity work, though important, isn't cost-effective: i.e. it's expensive, and the benefit is uncertain.

Counter-point: maybe EAs should try to work out how one might do equity work cost-effectively. ('Social justice' as such is seen as a western/rich world thing, but the countries where EA organisations often work also have equity problems, presumably).

3. Equity isn't an EA-shaped problem

EAs focus on what I think of as 'technical' solutions to social problems - i.e., 'fixes' that can be unilaterally performed by powerful actors such as nonprofits or governments or even singular wealthy individuals. I see many equity issues as cultural problems - that is, a nonprofit can't just swoop in and offer a wonk-ish fix; whole swathes of people have to be convinced to see the world differently and change their behaviour. Obviously, governments and NGOs do work on equity issues, but a big part of the "solution" to (e.g.) sexism is just "people, especially guys, learn basic feminist principles and stop being sexist to women". This is really important to work on, but it's not the style of solution that EAs tend to be drawn to.

4. EA is STEM-biased and equity is Humanities-biased 

Related: historically many EAs have been from STEM or analytic philosophy academic backgrounds (citation needed: is this in the survey?) These EAs are naturally more drawn to science-and-maths-y problems and solutions, like 'how to cure diseases and distribute those cures' or 'how to align artificial intelligence with human values', since these are the types of problems they've learnt how to solve. More 'humanities'-ish problems and solutions - like 'what are the interpersonal dynamics that make people's lives better or worse and how can we change culture in the positive direction?' - are out of the modal EA's comfort zone. 

5. EAs are privileged and underrate the need for equity

There are lots of people with social privilege in EA: it's majority white, straight, male, middle-class, etc. (Though getting more diverse on gender and race all the time, I think, and already more diverse on sexuality and stuff like mental health and neurodiversity than many social groups, I'd guess). You might predict that socially-privileged people would care less about equity issues than people who are directly impacted by those issues (though obviously this is not inevitable; many privileged people do care about equity issues that don't affect them).

6. EA is apolitical and equity is left-coded

Perhaps relatedly, EA is 'officially' apolitical, and equity/discrimination issues are more associated with left-wing or liberal politics. In fact, most EAs are liberal or politically left, but a decent number are centrist, apolitical, libertarian or conservative. These EAs might not be interested in equity/discrimination issues; they might think they're overrated, or they might dislike standard progressive approaches to equity (i.e. they might be "anti-woke"). This political minority might be vocal or central enough to sway the consensus.

'EAs are privileged and underrate the need for equity'

How do you reconcile this hypothesis with the huge importance EAs assign, relative to almost everyone else, to causes that typically affect even less privileged beings than the victims of injustice and inequity that social justice and progressive folk normally focus on (i.e. oppressed people in rich countries, especially the United States)? I'm thinking of "the bottom billion" people globally, nonhuman animals in factory farms, nonhuman animals in the wild (including invertebrates), digital minds (who may experience astronomical amounts of suffering), and future people (who may never exist). EAs may still exhibit major moral blindspots and failings, but if we do much better than most people (including most of our critics) in the most extreme cases, it is hard to see why we would be overlooking (as opposed to consciously deprioritizing) the most mundane cases.

I'm not sure it's right to call EA apolitical. If we define politics as being about who should have power in society and how that power should be used, EA is most definitely political. It may not be party political, or coded along traditional left-right lines, but it's clearly political.

On number 1, my understanding is that upstream disciplines (e.g., medicine, public health) created most of the highly effective interventions that EAs deployed, implemented, and scaled (e.g., bednets, vaccinations). EA brought in the resources and execution ability to implement stuff that already existed in the upstream disciplines but wasn't being implemented well due to a lack of will, or a lack of emphasis on EA principles like cost-effectiveness and impartiality. So the question I'd have for the person who was steeped in both EA and feminism is whether there is an intervention already present in women's studies, sociology, or another field that scores well on cost-effectiveness and might be something EA could implement.

I'm skeptical that EA would have been better at inventing highly cost-effective public health strategies than physicians, public-health experts, etc. with much greater subject-matter expertise. Likewise, I'd be skeptical of EA's ability to invent a highly cost-effective equity strategy that mainstream subject-matter experts haven't already come up with. 

On number 3, I think it's not only that potential solutions for equity in developing countries aren't the kind of "solution that EAs tend to be drawn to." There's also a mismatch between EA's available resources and the resources needed for most potential solutions. (As far as equity in developed countries goes, your first and second points are rather strong.) One could describe EA's practical resources available for implementation - with some imprecision - as significant financial firepower and a few thousand really smart people, predominantly from the US and Western Europe. 

But it's doubtful that smart Westerners are the resource that high-impact equity work in developing countries really needs. As you implied, the skill set of those particular smart Westerners may not be the ideal skill set for equity work either. In contrast, malaria biology is the same everywhere, so the dropoff in effectiveness for Westerners working on malaria in cultures other than their own is manageable. 

I think this dovetails with your number 4: I would suggest that 'humanities'-ish work is significantly more difficult to do effectively in a culture that is very different from one's own. But I would characterize both numbers 3 and 4 somewhat more in terms of equity often being a less good fit for the resources EA has available to it (although I think lower comfort/interest is also a factor).

Since the season of Draft Amnesty is upon us, a bit of mild self-promotion: you can hire me to help you turn your unwritten thoughts and messy drafts into posts. 

For example:

- if some sections of your post are in your head but not yet on the page, I can help you draft them
- if you feel self-conscious about your draft, I can quickly review it and fix or flag the biggest issues
- if you feel ugh-y or uncertain about posting or finishing your post, or if you have anxieties about posting more generally, I can talk you through that
- I agree with the Forum team that it can be very valuable to share even unpolished and unfinished drafts, but if you'd like to publish a more polished post, I can help with editing and structuring

I have collaborated on a bunch of Forum posts.

Happy drafting! 
