Interested in societal/civilizational resilience & improving individual/collective decision-making through non-behavioral means (e.g. through biological self-improvement)
founder & president of High Impact Medicine Berlin and EA Students Berlin
board team member of High Impact Medicine Germany
studying at Charité Berlin with current focus on neurotechnology and psychiatry (modular medical degree)
I mean, low-cost is always relative; it definitely won't be as cheap as, say, some Eastern European cities or Bangkok. But relative to most Western capitals, the overall cost of living in Berlin is quite low, especially if you rarely eat out. Rent is the only thing that has become pretty high in recent years, but if you're lucky or can use some connections, you might still find something reasonably priced. (I know several people who pay around 400-500€ a month for a room in a shared flat.)
I like this a lot. The world is an absurd place, and consciously realizing this once in a while can be very soothing, freeing, and strangely motivating!
I've found Kurt Vonnegut's books, especially Breakfast of Champions and Cat's Cradle, supremely effective at reminding me of the glorious absurdity of civilization and the human experience, and I try to re-read them semi-regularly for this reason. A big recommendation for anyone who wants a taste of realizing absurdity as described in this post but doesn't find it natural or easy to viscerally see the world that way.
I strongly agree with this post.
Thinking consequentially, in terms of expected value and utility functions, tends to make you focus on the first-order consequences of your actions and creates a blind spot for things that are fuzzy and not easily quantifiable, e.g. having loyal friends or being considered a trustworthy person.
I think that, especially in the realm of human relationships, the value of virtues such as trust, honesty, loyalty, and honor is tremendous, even if these virtues often imply actions whose first-order consequences have 'negative expected value' (e.g. helping a friend clean the kitchen when you could be working on AI alignment).
This is why I try to embrace deontological frameworks and heuristics in day-to-day life and in things like social relationships, friendships, and co-living: even if the upside is hard to quantify, I am convinced that the value of the higher-order consequences far outweighs the 'first-order inconvenience/downside'.
"A very small fraction of MDs are admitted to joint MD-PhDs. [...] in many other degrees a similar fraction of students would be publishing papers with supervisors. And the PhD that a medic does will not necessarily be as relevant as those of a computer scientist."
It being a small fraction doesn't make it less viable as an EA approach to studying medicine; every EA approach to university will involve some tight admission rate. It might not be relevant for AI safety, but it will be very relevant for, e.g., neartermist EAs or EAs who don't rank AI risk as highly and want to focus on biorisk.
"I believe you're overthinking it. From a zoomed out view, medical classes are approximately useless, and this talk of a specialised class becoming useful by being 'embedded in a translational framework' is basically waffle."
We do not have 'medical classes'. We have classes on the systems of the body: foundational classes (biochemistry, molecular biology, physics, physiology) and classes that incorporate practical information, such as pharmacology, which is where you would argue they're approximately useless. I disagree that they are useless: they teach you on a daily basis how the fancy science translates to practice, a skill I will continue to argue is highly important (and at the core of any problem-solving inside and beyond academia), and a skill that a pure fundamental-science degree is 'approximately useless' for.
funding
Fair points. As I said, I reserve my judgement here for now.
Just donated and shared with my social circles! Thank you so much for doing this! Feedback: I personally have full trust in the integrity and competence of the fund to distribute money to where it's most counterfactually needed, but I think some people will be put off by the FAQ not naming any exemplary charities. Right now it just restates the point that the fund will distribute to effective charities. Show, don't tell: what are some of the charities most likely to receive funding? I think this would considerably increase the appeal when EAs share the fundraiser with their social circles.