atucker

CS PhD Student @ Cornell
43 karma

Bio

How others can help me

Academic collaborators.

How I can help others

I'm happy to talk about AI safety and AI governance, and I can answer questions about group organizing, though my experience there is from a while ago.

Comments (9)

1.1) There's some weak wisdom of nature prior that blasting one of your neurotransmitter pathways for a short period is unlikely to be helpful.

I think that the wisdom of nature prior says we shouldn't expect blasting a neurotransmitter pathway to be evolutionarily adaptive on average. But if we know why a trait wouldn't have been adaptive, then the prior seems not to apply to interventions on that trait. This prior would argue against claims like "X increases human capital", but not against claims like "X increases altruism", since there's a clear mechanism whereby being much more altruistic than normal is bad for inclusive genetic fitness.

1.2) I get more sceptical as the number of (fairly independent) 'upsides' of a proposed intervention increases. The OP notes psychedelics could help with anxiety and depression and OCD and addiction and PTSD, which looks remarkably wide-ranging and gives suspicion of a 'cure looking for a disease'.

I would worry about this more if the OP were referring to a specific intervention rather than a class of interventions. I think the concern about something looking good from both long-term and short-term perspectives is reasonable, though there is a proposed mechanism (healing emotional blocks) that is related to both.

1.4) Thus my impression is that although I wouldn't be shocked if psychedelics are somewhat beneficial, I'd expect them to regress at least as far down as the efficacies observed in existing psychopharmacology, probably worse, and plausibly to zero.

Normal drug discovery seems to be based on coming up with hypotheses and then testing many chemicals to find statistically significant effects. In contrast, these trials are investigating chemicals that people are already taking for their effects. Running many trials and then continuing only the investigations that find significance is a good way to generate false positives, but that doesn't seem to be what is happening here, and I would be surprised to find zero effect (as opposed to shorter-lived or different effects) if it were investigated more thoroughly.
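To make that multiple-comparisons worry concrete, here is a minimal simulation sketch (the trial count and significance threshold are illustrative assumptions, not figures from the discussion): even a drug with zero true effect will look "significant" in some trials if enough independent trials are run and only the significant ones are followed up.

```python
import random

random.seed(0)
N_TRIALS = 100  # hypothetical number of independent trials (illustrative)
ALPHA = 0.05    # conventional significance threshold

# Under the null hypothesis (no true effect), each trial's p-value is
# uniform on [0, 1], so each trial has an ALPHA chance of looking
# "significant" by luck alone.
false_positives = sum(1 for _ in range(N_TRIALS) if random.random() < ALPHA)

print(f"{false_positives} of {N_TRIALS} null trials looked significant")
# Expect roughly 5 spurious positives; continuing only those investigations
# would manufacture apparent effects where none exist.
```

The point of the sketch is just that "run many, keep the significant ones" generates positives for free, whereas following up on chemicals people already take for their effects starts from a prior observation rather than a significance filter.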

2) On the 'longtermism' side of the argument, I agree it would be good - and good enough to be an important 'cause' - if there were ways of further enhancing human capital.
...
My impression is most of the story for 'how do some people perform so well?' will be a mix of traits/'unmodifiable' factors (e.g. intelligence, personality dispositions, propitious upbringing); very boring advice (e.g. 'Sleep enough', 'exercise regularly'); and happenstance/good fortune. I'd guess there will be some residual variance left on the table after these have taken the lion's share, and these scraps would be important to take. Yet I suspect a lot of this will be pretty idiographic/reducible to boring advice.

I also think that improving human capital is important, and am not convinced that this is a clear and unambiguous winner for that goal. I'm curious about what evidence would make you more optimistic about the possibility of large improvements to human capital.

I suspect that a crux of the issue about the relative importance of growth vs. epistemic virtue is whether you expect most of the value of the EA community to come from the novel insights and research it produces, or from moving money to interventions that are already known to work.

In the early days of EA I think that GiveWell's quality was a major factor in getting people to donate, but the EA movement is large enough now that growth isn't necessarily tied to rigor -- the largest charities (like the Salvation Army or the YMCA) don't seem to be particularly epistemically rigorous. I'm not sure how closely the marginal EA is checking claims, and I think that EA is now mainstream enough that many people don't experience strong social pressure to justify their involvement.

But if we already know each other and trust each other's intentions then it's different. Most of us have already done extremely costly activities without clear gain as altruists.

That signals altruism, not effectiveness. My main concern is that the EA movement will not be able to maintain the epistemic standards necessary to discover and execute on abnormally effective ways of doing good, not primarily that people won't donate at all. In this light, concerns about core metrics of the EA movement are very relevant. I think the main risk is compromising standards to grow faster rather than people turning out to have been "evil" all along, and I think that growth at the expense of rigor is mostly bad.

Being at all intellectually dishonest is much worse for an intellectual movement's prospects than it is for normal groups.

instead of assuming that it's actually true to a significant degree

The OP cites particular cases where she thinks this accusation is true -- I'm not worried that this is likely to happen in the future, I'm worried that it is already happening.

Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

I agree, but I think that more likely ways of dealing with these issues involve sending credible signals of actually addressing them, rather than just saying that they should be solvable.

I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming this as their plan isn't actually strong evidence that they're going to follow through, and that the strategy encourages you to be slightly more evil than you have to be.

I've heard other people argue that that strategy literally doesn't work, making a claim roughly along the lines of "if you achieved power by maximizing influence in the conventional way, you wind up in an institutional context which makes pivoting to do good difficult". I'm not sure how broadly this applies, but it seems to me to be worth considering. For instance, if you become a congressperson by playing normal party politics, it seems to be genuinely difficult to implement reform and policy that is far outside of the political Overton window.

I think that people shouldn't donate 10% or more of their income if they think that doing so interferes with the best way for them to do good, but I don't think that the current pledge or FAQ supports breaking the pledge for that reason.

Coming to the conclusion that donating >=10% of one's income is not the best way to do good does not seem like a normal interpretation of "serious unforeseen circumstances".

A version of the pledge that I would be more interested in would be one that's largely the same, but has a clause to the effect that I can stop donating if I stop thinking that it's the best way to do good, and have engaged with people in good faith in coming to that decision.

Something that surprised me from the Superforecasting book is that just having a registry helps, even when those predictions aren't part of a prediction market.

Maybe a prediction market is overkill right now? I think that registering predictions could be valuable even without the critical mass necessary for a market to have much liquidity. The main advantage of prediction markets seems to be incentivizing people to participate and do well, but if we're just trying to track predictions that EAs are already making, then a registry might be enough.

Also, one of FLI's cofounders (Anthony Aguirre) started a prediction registry, Metaculus: http://www.metaculus.com/ (see also http://futureoflife.org/2016/01/24/predicting-the-future-of-life/).

I really liked Larks' comment, but I'd like to add that this also incentivizes research teams to operate in secret. Many AI projects (and some biotech projects) are currently privately funded rather than government funded, and so they could profit by not publicizing their efforts.

My other point was that EA isn't new, but that we don't recognize earlier attempts because they weren't using evidence in a way that we would recognize.

I also think that x-risk was basically not something that many people would worry about until after WWII. Prior to WWII there was not much talk of global warming, and AI, genetic engineering, and nuclear war weren't really on the table yet.

I agree with your points about there being disagreement about EA, but I don't think that they fully explain why people didn't come up with it earlier.

I think that there are two things going on here. One is that the idea of thinking critically about how to improve other people's lives, without much consideration of who they are or where they live, and then acting on the results of that thinking isn't actually new. The other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations that implement those interventions and can accountably deliver a large amount of intervention per marginal dollar) is novel, but mostly because the cultural background that makes it seem possible at all is new.

To the first point, I'll just list Ethical Culture, the Methodists, John Stuart Mill's involvement with the East India Company, communists, Jesuits, and maybe some empires. I could go into more detail, but doing so would require more research than I want to do tonight.

To the second point, I don't think that anything resembling modern academic social science existed until relatively recently (around the 1890s?), and so prior to that there was nothing resembling peer-reviewed academic evidence about the efficacy of an intervention.

Allowing time for those methods to develop (and for two world wars to interrupt them), we find that "evidence" was not actually developed until fairly recently. Prior to that, people had reasons for thinking that their ideas were likely to work (and maybe even be the most effective plans), but those reasons would not constitute well-supported evidence in the sense used by the current EA community.

Also, the internet makes it much easier for people with relatively rare opinions to find each other, and enables far more transparency than was possible before.