Philip Gubbins - I got into EA at Vanderbilt University through some close friends and did some contract work for a few EA orgs. Now I'm working at AE Studio, a bootstrapped for-profit (the CEO is an EA) that is trying to solve alignment and differentially develop technology and neurotechnology (and I want to support and learn as much as I can!).
Liked the post - it has likely shifted me further toward less diversification and hedging of my altruistic bets.
Regarding the title: upon first reading it I did a double take, thinking this post might be about diversity in EA! I could see the current title being a bit more ambiguous than something like "against philanthropic diversification". Though this is probably just my personal context and might be silly (I default to reading 'diversity' as social rather than financial).
Here's the most Dr. Seuss-like version, which we are putting together for publication: https://www.thecashthatyoustash.com/
Hi! AE Studio is working on some (Dr. Seuss-inspired) children's books to share EA and EA-adjacent ideas - here's one we've been working on (happy to receive feedback too):
https://www.yourlivingandgiving.com/
We are currently taking steps to publish our first book like this, one on personal finance, and we have written the script for another inspired by Peter Singer's shallow pond thought experiment.
I have recently been learning how to publish (through IngramSpark) and may be able to help a little bit with that sort of stuff too if you'd like! Would generally be excited to connect!
I had never considered the first point regarding a local maximum - an interesting thing to explore, though I'm unsure that we're capable of consistently getting more than local maxima, except perhaps in a more ideal world. (And yeah, getting a dog seems to be one of the best and easiest one-time actions someone can take for their happiness: https://jamesclear.com/how-to-automate-a-habit - the author surveyed his own audience, this tidbit came up, and it matches my intuition.)
This also matches my impression of dog-free (or pet-free?) as a movement overall - I recall a friend discussing it with me as a potential ongoing moral catastrophe that people in the future would be horrified by, which I agree with (particularly with the pug example, as you said - I can imagine this being extrapolated to all dogs to some extent). But I currently feel much more horrified by other problems with far greater scale than by this specific cause area. It feels like a step for later moral progress, somewhat along the lines of the discounting argument: "people are starving now, why pursue better lives for animals before them?" (I don't really subscribe to this argument.)
I think the idea of dogs replacing children is really interesting and I will definitely think about that a bit more in the future!
Thanks for sharing.
Recently I was looking around at EA organizations and thought it might be useful to have a visualization of this database compiled by Michel Justen. The visualization was put together quickly as part of a hackathon at AE Studio, with the help of Jean Mayer, a dev there.
https://ae.studio/ea/organizations
This is pretty rudimentary and feedback is more than welcome, especially regarding how I might best compile some of the data below to include in a future version as an actual post.
Also, I think it could certainly look better - both by spending more time on making the visualization look nice and by better truncating the cause-area labels for orgs with a lot of cause areas.
This could provide a cool visualization of comparative effort within our community by cause area (via bubble size), and help people understand a bit more about EA organizations and what it means to be 'EA'.
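To make the bubble idea concrete, here is a minimal sketch in Python (pandas + matplotlib). The file name, the `cause_area` column, and the semicolon delimiter are all hypothetical stand-ins - the actual database schema differs:

```python
# Minimal sketch: one bubble per cause area, sized by how many orgs list it.
# Assumes a hypothetical CSV export with a "cause_area" column where multiple
# areas are separated by semicolons (the real schema may differ).
import pandas as pd
import matplotlib.pyplot as plt

orgs = pd.read_csv("ea_organizations.csv")  # hypothetical export of the database

# Split multi-area entries, flatten, and count orgs per cause area.
counts = (
    orgs["cause_area"]
    .str.split(";")
    .explode()
    .str.strip()
    .value_counts()
)

# Lay the bubbles out along a line; bubble area scales with org count.
fig, ax = plt.subplots(figsize=(10, 4))
xs = list(range(len(counts)))
ax.scatter(xs, [0] * len(counts), s=counts.values * 200, alpha=0.5)
for x, (cause, n) in zip(xs, counts.items()):
    ax.annotate(f"{cause}\n({n})", (x, 0), ha="center", va="center", fontsize=8)
ax.set_axis_off()
plt.show()
```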
What would it look like for an organization or company to become more recognized as an 'EA' org/company? What might be good ways to become more integrated with the community (only if it is a genuinely good fit, that is, with high fidelity), and what does it mean to be more 'EA' in this sense?
I recognize that there is a lot of uncertainty/fuzziness in trying to definitively identify entities as 'EA'. It is hard for me to even know whom to ask this question, so this comment is one of a few leads I have started.
I am generally curious about the organizational/leadership structure of "EA" as a movement. I am hesitant to name or describe the company, as that feels like advertising (even though I do not actually represent the company), but here are some details without context:
Cross-commenting from LessWrong for future reference:
I had an opportunity to ask an individual from one of the mentioned labs about plans to use external evaluators and they said something along the lines of:
“External evaluators are very slow - we are just far better at eliciting capabilities from our models.”
Earlier they had said something to much the same effect when I asked whether they'd been surprised by anything people had used deployed LLMs for 'in the wild' so far. Essentially: no, not really - maybe even a bit underwhelmed.