I currently work on Fatebook (a tool to allow you to create low-friction personal forecasts) as part of Sage. I'm always looking for feedback on the site and I love chatting to users - if you book a user call I will donate £10 to a charity of your choosing.
I'm based in London and interested in AI safety, forecasting, community building, rationality, mental health, games, running and probably other stuff.
Before I was a programmer I was a professional poker player, where I picked up the habit of calculating the EV of almost every action in my life, and subsequently discovered EA.
If you want to learn more about me, you can check out my website here: https://jonnyspicer.com
If you're interested in chatting then I'm always open to meeting new people! https://calendly.com/jonnyspicer/ea-1-2-1
I'd be interested to see you weigh the pros and cons of making it easier to contribute - you don't explicitly say it in the post, but you imply that this would be a good thing by default. The forum is the way it is for a reason, and there are mechanisms put in place both by the forum team and by the community in order to try to keep the quality of the discussion high.
For example, I would argue that having a high bar for posting isn't a bad thing, and that the sliding-scale karma system that helps regulate it is, by extension, valuable. If writing a full post of sufficient quality is too time-consuming, there is the quick takes section.
The Alignment Forum has a significantly higher barrier to entry than this one does, but I think that is fairly universally regarded as an important factor in facilitating a certain kind of discussion. I can see a lot of value in the EA forum trying to maintain its current norms so that it retains the potential for productive discussion between people who are sufficiently well-researched. I think meaningfully lowering the bar for participation would mean the forum losing some of its ability to generate anything especially novel or useful to the community, and I think the quote you included:
> For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet.
somewhat points to that too. I think there should be other forums where people less familiar with EA can participate in discussions, and whether or not those currently exist is itself an interesting question.
Having said all that, I do wonder if that leaves the current forum community particularly vulnerable to groupthink. I'm not really sure what the solution to that is though.
My biggest takeaway from EA so far has been that the difference in expected moral value between the consensus choice and its alternative(s) can be vastly larger than I had previously thought.
I used to think that "common sense" would get me far when it came to moral choices. I even thought that the difference in expected moral value between the "common sense" choice and any alternatives was negligible, so much so that I made a deliberate decision not to invest time into thinking about my own values or ethics.
EA radically changed my opinion, and now I hold the view that the consensus view is frequently wrong, even when the stakes are high, and that it is possible to make dramatically better moral decisions by approaching them with rationality and a better-informed ethical framework.
Sometimes I come across people who are familiar with EA ideas but don't particularly engage with them or the community. I often feel surprised, and I think the above is a big part of why. Perhaps more emphasis could be placed on this expected moral value gap in EA outreach?
Thanks for the feedback - it has indeed been a long time since I did high school statistics!
I specified that the numbers I gave were "approximations to prove my point" because I know that I do not have a rigorous statistical model in my head, and I didn't want to pretend that was the case. Given this is a non-technical, shortform post, I thought it was clear what I meant - apologies if that wasn't so.
Thanks for the suggestion! I have actually spent quite a lot of time thinking about this - I had my 80k call last April and this was their advice. I've hesitated to do this for a number of reasons:
There are probably good rebuttals to at least some of these points, and I think that is adding to my confusion. My intuition is to keep doing what I'm currently doing, rather than go try and learn ML, but maybe my intuition here is bad.
Edit: writing this comment made me realise that I ought to write a proper doc with the pros/cons of learning ML and get feedback on it if necessary. Thanks for helping pull this useful thought out of my brain :)
I suffer strongly from the following, and I suspect many EAs do too (all numbers are approximations to illustrate my point):
I'm still figuring out what to do about this. When you're highly uncertain it's obviously fine to hedge against being wrong, but again, given the numbers it's hard to justify hedging all the way down to inaction.
I am trying to learn more about AI safety, but I'm not spending very much time on it currently. I'm trying to talk to others about it, but I'm not evangelising it, nor necessarily speaking with a great sense of urgency. At the moment, it's low down my de facto priority list, even though I think there's a significant chance it changes everything I know and care about. Is part of this a lack of visceral connection to the risks and rewards? What can I do to bring my actions in line with my values?
The CEO has been inconsistent over time regarding his position on releasing LLMs
I find this to be a pretty poor criticism, and its inclusion makes me less inclined to accept the other criticisms in this piece at face value.
Updating your beliefs and changing your mind in light of new evidence is undoubtedly a good thing. To say that doing so leaves you with concerns about Connor's "trustworthiness and character" seems not only unfair, but also creates a disincentive for people to publicly update their views on key issues, for fear of this kind of criticism.
Tanaka Toshiko and Ogawa Tadayoshi, both hibakusha (survivors of the Hiroshima and Nagasaki bombings), spoke at the closing ceremony for EAG London this year, although I couldn't find any explicit link between them and Nihon Hidankyo.