
Noah Birnbaum

Sophomore @ University of Chicago
104 karma · Pursuing an undergraduate degree

Bio


I am a sophomore at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship.

I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu. 

How others can help me

If anyone has opportunities to do effective research in philosophy (or to apply philosophy to real life or related fields), or any entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!

How I can help others

I can help with philosophy stuff (maybe?) and with organizing school clubs (maybe?).

Comments (16)

Good point. Will change this when it’s not midnight. Thanks! 

Thanks for the nice comment. Yeah, I think this was more of "laying out the option space."

All very interesting points! 

Enjoyed this article a lot, and I think the framing of the "root problem objection" is an underrated one!  

Thanks for the response. 

The part I'm still stuck on is that this last point about the implicit tradeoff in one's offset seems crucial. The degree of offsetting is based entirely on that tradeoff (maybe with some risk aversion under different moral theories), but if you put that much into offsetting, then it seems like you have a major moral or epistemic disagreement with those who are donating in the first place. If that's the case, something has to give: either they shouldn't offset nearly this much, or they shouldn't donate to AMF at all.

While I'm here, I also wanted to thank you for writing this post. Super interesting, thoughtful, and I've shared it with a bunch of people already!

Perhaps I’m misunderstanding something, so please correct me if I’m wrong: 

If one accepts all these assumptions, why would the best course of action be to offset AMF donations rather than to avoid donating to AMF in the first place? 

If insecticide-treated nets (ITNs) cause vastly more harm to mosquitoes than they help humans, wouldn't this imply that AMF is not just a weak investment but actually a net-negative intervention? It seems like these numbers, taken seriously, suggest AMF should be deprioritized rather than merely balanced with shrimp welfare donations.

I assume this is mostly about hedging against uncertainty across different moral theories, but choosing the offset over counterfactually giving more to AMF implies an exchange rate between the two causes that you're comfortable with, and at that exchange rate it seems like you should never make the initial donation.
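To make that explicit, here's a toy sketch (my own framing and symbols, nothing from the post): let $b$ be the moral benefit per dollar to AMF, $h$ the moral harm per dollar (to mosquitoes/shrimp), and $g$ the good per dollar at the offset charity, all in the donor's own units. Donating $d$ to AMF plus $x$ to the offset yields $d(b-h) + xg$, while giving the whole budget to the offset yields $(d+x)g$, so the AMF-plus-offset plan wins only if

$$d(b-h) + xg > (d+x)g \iff b - h > g.$$

But a donor who offsets heavily is implicitly treating $h$ as comparable to $b$, which makes $b - h < g$ plausible, and in that case the pure-offset allocation dominates and the initial AMF donation should never be made.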

I'm confused about what sort of epistemic/moral uncertainty theory someone would need to hold to offset the way you propose. To be honest, I've already confused myself with this comment, but I hope it's helpful(?)

Appreciate the response! This is very helpful so thanks. 

Very interesting points. Here are a few other things to think about:
1. I think there are very few people whose primary motivation is helping others, so we shouldn't empirically expect them to account for most of the good being done; they represent a very small portion of the population. This is especially true if you think (as I do) that the vast majority of people who do good are 1) (consciously or unconsciously) signaling for social status or 2) not doing good very effectively (those who do are a much smaller subgroup, because doing non-effective good is easy). It would be very surprising to me, though, if those who try to do good effectively weren't doing much better, as individuals, on average, than those who aren't (though feel free to throw some stats at me that would change my mind!).

2. I'm very skeptical that "the defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are." Could you say more about why you think this?

3. I'm skeptical that 1) searching for equanimity is truly the best thing and 2) we have good, tractable methods of achieving it. Perhaps people would be better off being more Buddhist on the margin, but, to me, it seems like (thoughtfully!) pursuing the heavy positive tail-end results while being really careful and thoughtful about the negatives leads to a much better-off society.

Let me know what you think! 

Yep, I think this is true. The point is that, given that AI stays aligned (which is stated there), the best thing for a country to do would be to accelerate capabilities. You're right, however, that it's not an argument against AI being an existential threat (I'll make a note to make this clearer); it's more a point for acceleration.

He did a whole interview on this that can be found here: 

How do you generally respond to evolutionary debunking arguments and the epistemological problem for moral realism (how we come to know the moral facts), especially considering that, unlike in mathematics, there are no empirical feedback loops to work off of (i.e., you can't go out and check whether the facts fit the external world)? It seems to me that we wouldn't trust our mathematical intuitions if 1) we didn't have those empirical feedback loops or 2) the world sometimes told us that math didn't work.
