Aaron Bergman

2455 karma · Working (0-5 years) · Washington, DC, USA
aaronbergman.neocities.org/

Bio


I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.

Blog: aaronbergman.net

How others can help me

  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule-of-thumb standard: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments


I’ll just highlight that it seems particularly crucial whether to view such NDAs as covenants or as contracts that are not intrinsically immoral to break.

It’s not obvious to me that it should be the former, especially when the NDA comes with what is basically a monetary incentive for not breaking it.

Here is a kinda naive LLM prompt you may wish to use for inspiration and iterate on:

“List positions of power in the world with the highest ratio of power : difficulty to obtain. Focus only on positions that are basically obtainable by normal arbitrary US citizens and are not illegal or generally considered immoral

I’m interested in positions of unusually high leverage over national or international systems”
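
If you want to iterate on the prompt programmatically, here’s a minimal sketch assuming the OpenAI Python SDK (v1.x); the model name is a placeholder, and any provider/model you prefer would work just as well:

```python
# A hedged sketch, assuming the OpenAI Python SDK (v1.x) and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "List positions of power in the world with the highest ratio of "
    "power : difficulty to obtain. Focus only on positions that are "
    "basically obtainable by normal arbitrary US citizens and are not "
    "illegal or generally considered immoral.\n\n"
    "I'm interested in positions of unusually high leverage over "
    "national or international systems."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whatever model you use
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)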
 

It’s a matter of personal taste, but for me the high (if implicit) standards, not only in reasoning quality but also, as you say, in formality (and I’d add comprehensiveness/covering all your bases), are a much bigger disincentive to posting than a dry/serious tone (which maybe I just don’t mind much).

I’m not even sure this is bad; possibly lower standards would be worse all things considered. But still, it’s a major disincentive to publishing.

@MHR🔸, @Laura Duffy, @AbsurdlyMax, and I have been raising money for the EA Animal Welfare Fund on Twitter and Bluesky, and today is the last day to donate!

If we raise $3k more today, I will transform my room into an EA paradise complete with OWID charts across the walls, a literal bednet, a shrine, and more (and of course I'll post all of this online)! Consider donating if and only if you wouldn't use the money for a better purpose!

 

See some more fun discussion and such by following the replies and quote-tweets here.

I was hoping he’d say so himself, but @MathiasKB (https://forum.effectivealtruism.org/users/mathiaskb) is our lead!

But I think you’re basically spot-on; we’re like a dozen people in a Slack, all with relatively low capacity for various reasons, trying to bootstrap a legit organization.

The “bootstrap” analogy is apt here because we are basically trying to hire the leadership/managerial and operational capacity that is generally required to do things like “run a hiring round,” if that makes any sense.

So yeah, the idea is volunteers run a hiring round, and my sense is that some of the haziness of the picture comes from the fact that what thing(s) we’ll be hiring for depends largely on how much money we’ll be able to raise, which is what we’re trying to suss out right now.

All this is complicated by the fact that everyone involved has their own takes, and as a sort of proto-organization we lack the decision-making and communications procedures and infrastructure that allow the likes of OpenPhil/Apple/the Supreme Court to act as coherent, unified agents. For instance, I personally think we should strongly prioritize hiring a full-time lead, but I think others disagree, and I don’t want to claim to speak for SFF!

And thanks for surfacing a sort of hazy set of considerations that I suspect others were also wondering about, if implicitly!

To expand a bit on the funding point (and speaking for myself only):

I’d consider the $15k-$100k range to be what makes sense as a preliminary funding round, taking into account the high opportunity cost of EA animal welfare funding dollars. That is to say, I think SFF could in fact usefully absorb much more than that, but the merits and cost-effectiveness of the project will be a lot clearer after this first $100k is spent; it is in large part paying for value of information.

Again speaking for myself only, my inside view is that $100k is too low an upper bound for preliminary funding; maybe I’d double it.

Speaking for myself (not other coauthors), I agree that $15k is low and would describe it as the minimum plausible amount to hire for the roles described (in part because of the willingness of at least one prospective researcher to work quite cheaply compared to what I perceive as standard among EA orgs, even in animal welfare).

IIRC I threw out the $100k figure as a reasonable amount we could ~promise to deploy usefully in the short term. It was a very hasty BOTEC-type take by me: something like $30k for the roles described plus $70k for a full-time project lead (spelled out below).
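
Made explicit, that BOTEC is just the following (both figures are the rough, unvetted guesses above, not considered estimates):

```python
# The hasty BOTEC from the comment above; both figures are rough guesses.
roles_cost = 30_000   # roles described in the post
lead_cost = 70_000    # full-time project lead
preliminary_total = roles_cost + lead_cost
print(f"Preliminary round: ${preliminary_total:,}")  # -> Preliminary round: $100,000
```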

~All of the EV from the donation election probably comes from nudging OpenPhil toward the realization that they're pretty dramatically out of line with "consensus EA" in continuing to give most marginal dollars to global health. If this was explicitly thought through, brilliant.

[Image: table categorizing Open Philanthropy grants]

(See this comment for sourcing and context on the table, which was my attempt to categorize all OP grants not too long ago)
