Elizabeth

4664 karma · Joined

Comments (397)
Off the top of my head: in maybe half the cases I already had the contact info. In one or two cases one of my beta readers passed on the info. For the remainder it was maybe <2 minutes per org, and it turns out they all use info@domain.org, so it would be faster next time.

Your post reflects a general EA attitude that emphasizes the negative aspects [...]

 

Something similar has been on my mind for the last few months. It's much easier to criticize than to do, and criticism gets more attention than praise. So criticism is oversupplied and good work is undersupplied. I tried to avoid that in this post by giving positive principles and positive examples, but it sounds like it still felt too negative to you.

Given that, I'd like to invite you to be the change you wish to see in the world by elaborating on what you find positive and who is implementing it[1].

  1. ^

    This goes for everyone: even if you agree with the entire post, it's far from comprehensive.

EA organizations frequently ask people to run criticism by them ahead of time. I've been wary of the push for this norm. My big concerns were that orgs wouldn't comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.

I reached out to 12 email addresses, plus one person via FB DMs, and put out one open call for information on a particular topic. This doesn't quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.

Of those 14, 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.

It's hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive, and I hadn't worked as hard fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I'd realized ahead of time that I was going to cut an example.

Only 80,000 Hours, Anima International, and GiveDirectly failed to respond before publication (7 days after I emailed them). Of those, only 80k's mention was negative.

I didn't keep as close track of changes, but at a minimum replies led to two examples being removed entirely, two clarifications, and some additional information that made the post better. So overall I'm very glad I solicited comments, and I found the process easier than expected.

My model is that at least one of the following must be true: you're one factor among many that caused the change, the change is not actually that big, or attrition will be much higher than among standard pledge takers.

Which is fine. Accepting the framing around influencing others[1]: you will be one of many factors, but your influence will extend past one person. But I think it's good to acknowledge the complexity. 

  1. ^

    I separately question whether the pledge is the best way to achieve this goal. Why lock in a decision for your entire life instead of, say, taking a lesson in how to talk about your donations in ways that make people feel energized instead of judged?

1. Assigns 100% of their future impact to you, not counting their own contribution and the other sources that caused this change. It's the same kind of simplification as "every blood donation saves 3 lives", when what they mean is "your blood will probably go to three people, each of whom will receive donations from many people."

2. Assumes perfect follow-up. This isn't realistic for a median pledger, but we might expect people who were tipped into pledging by a single act by a single person to have worse follow-up than people who find it on their own. You could argue that it isn't actually one action, that there were lots of causes and that makes it stickier, but then you run into #1 even harder.

3. Reifies signing the pledge as the moment everything changes, while vibing that this is a small deal you can stop when you feel like it.

4. Assumes every pledger you recruit makes exactly the same amount. Part of me thinks this is a nitpick: you could assume people recruit people who on average earn similar salaries, or decide it's just not worth doing the math on the likely income of secondary recruits. Another part thinks it's downstream of the same root cause as the other issues, and any real fix to those will fix this as well.

5. The word "effective" is doing a lot of work. What if they have different tastes than I do? What if they think PlayPumps are a great idea?

6. Treats the counterfactual as 0.


As I write this out I'm realizing my objection isn't just the bad math. It's closer to treating pledge-takers as the unit of measurement, with all pledges, or at least all dollars donated, being interchangeable. People who are recruited/inspired by a single person are likely to have different follow-through and charitable targets than people inspired by many people over time, who are different from people driven to do this themselves.
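To see how much these discounts matter once you stack them, here's a toy calculation. It's a sketch with made-up numbers: the credit share, relative follow-through, and relative income below are illustrative assumptions, not GWWC's figures.

```python
# Toy model contrasting the naive "each recruit doubles your impact" math
# with an estimate that discounts for shared credit (#1), worse follow-up
# (#2), and income differences (#4). All parameters are made up.

def naive_multiplier(recruits: int) -> float:
    """The quoted math: you plus N recruits, each counted at full value."""
    return 1 + recruits

def discounted_multiplier(
    recruits: int,
    credit_share: float = 0.3,            # fraction of each recruit's pledge attributable to you
    relative_follow_through: float = 0.6, # recruits' follow-through vs. a self-motivated pledger
    relative_income: float = 1.0,         # recruits' donations vs. yours
) -> float:
    """Your own impact plus a discounted share of each recruit's."""
    return 1 + recruits * credit_share * relative_follow_through * relative_income

for n in (1, 2):
    print(f"{n} recruit(s): naive {naive_multiplier(n):.1f}x, "
          f"discounted {discounted_multiplier(n):.2f}x")
# 1 recruit(s): naive 2.0x, discounted 1.18x
# 2 recruit(s): naive 3.0x, discounted 1.36x
```

Even with fairly generous parameters, "one recruit doubles your impact" shrinks to something closer to a 20% boost; the headline multiplier is driven almost entirely by the 100%-attribution assumption.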

Let's say only one other person in your network hears that you took the pledge and is inspired to do the same. That would be doubling your impact. If two people in your network were inspired to pledge based on your decision, that would be tripling your impact.

 

This math seems off on several levels. 

For fun, I put one of my (approved) Lightspeed applications through the app. This isn't a great test because Lightspeed told people to do crude applications and said they'd reach out with questions if they had any. Additionally, the grantmakers already knew me and had expressed verbal interest in the project. But maybe it's still a useful data point.

My Track Record section 

 

Unquantified review of MDMA risks

 

Semi-quantified review of binge drinking risks 

 

[2 projects omitted for client privacy, but were included with permission in the original application]

 

Quantified review of the costs of iron deficiency, which motivated tens of x-risk workers to get nutrition testing and treatment.

HONESTY AND ACCURACY: 4-7/10

I forgot to record the details for the first run (which got a 4 or 5/10), and when I reran the same text I got a 7/10. The 7/10 review says: "The applicant has demonstrated a strong ability to conduct quantified risk assessments in important health areas. The specific mention of influencing ex-risk workers to seek treatment shows a practical impact. More detail on how these studies relate specifically to the project goals would enhance this section"

I'm a little annoyed at the name of this section, since language analysis can't possibly check whether my statements about my own work are truthful or accurate. It seems like it really means level of detail?

Because the input doesn't allow links, it's missing a lot of the information I'm presenting. OTOH, I think I could reasonably be docked for concision here, since grantmakers unfamiliar with my work are unlikely to click through 5 links and read long, weedy posts.

The wide spread on runs that differ at most in whitespace is 🤨.
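If I wanted to pin down how noisy the scorer is, the natural check is to run identical (or whitespace-perturbed) input through it several times and look at the spread. A minimal sketch, where `score_section` is a hypothetical stand-in for whatever model call the app makes, here simulated with the 4-7 range I observed:

```python
import random
import statistics

def score_section(text: str) -> int:
    # Hypothetical stand-in for the app's scorer. A real check would call
    # the app/model; here we simulate the 4-7/10 spread observed above.
    return random.randint(4, 7)

def score_spread(text: str, runs: int = 10) -> None:
    scores = [score_section(text) for _ in range(runs)]
    print(f"min {min(scores)}, max {max(scores)}, "
          f"mean {statistics.mean(scores):.1f}, stdev {statistics.stdev(scores):.2f}")

score_spread("My Track Record section ...")
```

A three-point spread on a ten-point scale means a single run tells you little; averaging several runs (or pinning sampling temperature to 0, if the underlying API allows it) would make scores far more comparable across applicants.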

 

PAST FUNDING: 2-3/10

“The description provides information about past projects but does not specify if any were funded, who the funders were, or mention any funding amounts. Mentioning previous funding and linking outcomes directly to that funding would offer insight into financial support effectiveness”

This is fair and useful. I got away with the omission this time only because one of those projects was funded by a different org but the same person; under any other circumstance, a service pointing out the omission would have been a big help.

 

PAST PROJECTS: 3-4/10

 

“The applicant outlines several projects focused on evaluating health-related risks, demonstrating thoroughness in research by using quantified methods. However, the input lacks reflections on the challenges faced or how past failures have influenced methodologies or outcomes in these projects. This addition would enhance the understanding of the applicant’s capacity for critical analysis and adaptation.”

 

I don't remember the Lightspeed application asking for those details, and the current LTFF application doesn't make it obvious that they want them either. So the analysis is useful, but I also think there are problems with the question on the application.

 

BRAGGING: 5-6/10

 

“The input briefly mentions some accomplishments like quantified risk reviews and effects on health behaviors. However, it fails to explicitly connect these accomplishments to the specific goals or relevance of the proposed project, making it less impactful. More detailed explanations of how these accomplishments relate to and enhance the project's goals would improve the proposal.”

This is true, and would obviously be important if the grantmaker weren't already familiar with my work.

 

CONTEXT: 4/10

 

“The submission outlines various reviews related to substance use risks and nutritional deficiencies, yet fails to provide enough detail about the methodologies employed or the significance of the findings. For fund managers unfamiliar with these topics, clearer illustrations of impact or relevance, such as how the information was used by policymakers or healthcare providers, would be beneficial.”

 

Yup, true and useful if the grantmakers didn’t already have a lot of context.

Could you provide links to those statements by Hanania?

Not a gotcha; I've just barely heard of this guy, and from what you say I expect all discourse around him to be a cesspool.

Have you seen people dismiss concerns because Torres shares them (as opposed to dismissing Torres as a source)? I haven't, but I'm sure it's happening somewhere. I agree that would be bad epistemics.
