Does anyone have thoughts on whether it’s still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
1. I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot; this is something I would never do for an in-person conference.
2. I can't find the schedule anywhere. It is probably only accessible once you are on Swapcard, which makes it difficult to decide ahead of time whether attending is worth it, especially if I can only attend a small portion of the conference.
EAGxVirtual is cheap to attend. I don't really see much downside to only attending one day. And you can still make connections and meet people after the conference is over.
I'd be curious to know the marginal cost of an additional attendee - I'd put it somewhere between 5 and 30 USD, assuming they attend all sessions.
Assuming you update your availability on Swapcard, and that you would get value out of attending a conference, I suspect attending is positive EV.
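As a rough, purely illustrative sanity check (the numbers here are my assumptions, not figures from this thread): if an extra one-day attendee costs the organisers about 30 USD at the margin, then even a 10% chance of making one connection or picking up one idea worth 500 USD of counterfactual value gives an expected benefit of 50 USD, comfortably above the marginal cost before counting anything learned from the sessions themselves.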
I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent podcast with Zvi Mowshowitz on 80K, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from.
I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this.
This seems like it could be a place where there is a motivation gap: people who don't work on animal welfare have little incentive to convince me that the things I work on are not that useful.
Yudkowsky's views are discussed here:
https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
https://www.lesswrong.com/posts/KFbGbTEtHiJnXw5sk/i-really-don-t-understand-eliezer-yudkowsky-s-position-on
This was very helpful, thank you!
Perhaps the large uncertainty involved also makes it less likely that people will argue against it publicly. I imagine many people believe, with very low confidence, that some interventions for non-human animals might not be the most cost-effective, but stay relatively quiet because of that uncertainty.
Simple Forecasting Metrics?
I've been thinking about the simplicity of explaining certain forecasting concepts versus the complexity of others. Take calibration, for instance: it's simple to explain. If someone says something is 80% likely, it should happen about 80% of the time. But other metrics, like the Brier score, are harder to convey: What exactly does it measure? How well does it reflect a forecaster's accuracy? How do you interpret it? All of this requires a lot of explanation for anyone not interested in the science of forecasting.
What if we had an easily interpretable metric that could tell you, at a glance, whether a forecaster is accurate? A metric so simple that it could fit within a tweet or catch the attention of someone skimming a report—someone who might be interested in platforms like Metaculus. Imagine if we could say, "When Metaculus predicts something with 80% certainty, it happens between X and Y% of the time," or "On average, Metaculus forecasts are off by X%". This kind of clarity could make comparing forecasting sources and platforms far easier.
I'm curious whether anyone has explored creating such a concise metric—one that simplifies these ideas for newcomers while still being informative. It could be a valuable way to persuade others to trust and use forecasting platforms or prediction markets as reliable sources. I'm interested in hearing any thoughts or seeing any work that has been done in this direction.
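For anyone who wants to see the mechanics behind the two concepts above, here is a minimal sketch in Python; the forecast data and the 10%-wide probability buckets are made up purely for illustration and are not Metaculus data.

```python
# Minimal illustration (made-up data): calibration vs. the Brier score
# for a set of resolved binary forecasts.

forecasts = [  # (predicted probability, outcome: 1 = happened, 0 = didn't)
    (0.9, 1), (0.8, 1), (0.8, 0), (0.7, 1), (0.6, 0),
    (0.4, 0), (0.3, 0), (0.2, 1), (0.2, 0), (0.1, 0),
]

# Brier score: mean squared difference between forecast and outcome.
# 0 is perfect; always answering 50% scores 0.25.
brier = sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: within each stated-probability bucket, how often did the event happen?
buckets = {}
for p, o in forecasts:
    buckets.setdefault(round(p, 1), []).append(o)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    freq = sum(outcomes) / len(outcomes)
    print(f"Predicted {prob:.0%}: happened {freq:.0%} of the time (n={len(outcomes)})")
```

The calibration lines are the part that is easy to say in one sentence ("when we said 80%, it happened X% of the time"), whereas the Brier score folds calibration and resolution into a single number that takes more effort to interpret.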
FYI, the Metaculus track record—the “Community Prediction calibration” part, specifically—already lets us do this. When Metaculus predicts something with 80% certainty, for example, it happens around 82% of the time (see the calibration graph on the track record page).
Thank you for the response! I should have been a bit clearer: the track record page is what inspired me to write this, but I still need 3-5 sentences to explain to a policymaker what they are looking at when shown this kind of calibration graph. I am looking for something even shorter than that.
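One candidate for an even shorter summary, offered here as an editorial sketch rather than something proposed in this exchange, is to collapse the calibration graph into a single number: the average gap between stated probability and observed frequency, weighted by how many resolved forecasts fall in each bucket. A minimal sketch with made-up data:

```python
# Minimal sketch: one headline number summarising a calibration graph.
# `resolved` holds made-up (predicted probability, outcome) pairs.

resolved = [(0.9, 1), (0.8, 1), (0.8, 0), (0.6, 1), (0.6, 0),
            (0.4, 0), (0.2, 0), (0.2, 0), (0.1, 0), (0.1, 1)]

buckets = {}
for p, o in resolved:
    buckets.setdefault(round(p, 1), []).append(o)

# Weighted average of |observed frequency - stated probability| across buckets.
avg_gap = sum(
    (len(outcomes) / len(resolved)) * abs(sum(outcomes) / len(outcomes) - prob)
    for prob, outcomes in buckets.items()
)
print(f"On average, these forecasts are off by {avg_gap * 100:.0f} percentage points.")
```

That produces exactly the kind of one-liner asked for above ("on average, the forecasts are off by X percentage points"), at the cost of hiding sample sizes and how the errors are spread across probability levels.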
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. The last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think that event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.
However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs. Recalling how expensive the previous Rotterdam edition appeared, I'm concerned that this may deter potential newcomers, especially given the feedback I've heard about its perceived extravagance. I think we all understand why these events are worth our charitable euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency and effectiveness.
While the funding landscape may have changed (and this problem may have solved itself through that), I think it remains crucial to consider the aesthetics of events like these where the goal is in part to welcome new members into our community.
Thanks for sharing your thoughts!
It's a pity you don't feel comfortable inviting people to the conference - that's the last thing we want to hear!
So far our visual style for EAGxUtrecht hasn't been austere[1] so we'll think more about this. Normally, to avoid looking too fancy, I ask myself: would this be something the NHS would spend money on?
But I'm not sure how to balance the appearance of prudence with making things look attractive. Things that make me lean towards making things look attractive include:
This essay on the value of aesthetics to movements
This SSC review, specifically the third reason Pease mentions for the Fabians' success
The early success of SMA and their choice to spend a lot on marketing and design
Things I've heard from friends who could really help EA, saying things like, "ugh, all this EA stuff looks the same/like it was made by a bunch of guys"
For what it's worth, the total budget this year is about half of what was spent in 2022, and we have the capacity for almost the same number of attendees (700 instead of 750).
In case it's useful, here are some links that show the benefits of EAGx events. I admit they don't provide a slam-dunk case for cost-effectiveness, but they might be useful when talking to people about why we organise them:
Open Philanthropy’s 2020 survey of people involved in longtermist priority work (a significant fraction of work in the EA community) found that about half of the impact that CEA had on respondents was via EAG and EAGx conferences.
Anecdotally, we regularly encounter community members who cite EAGx events as playing a key part in their EA journey. You can read some examples from CEA’s analysis.
Thanks again for sharing your thoughts! I hope your pseudonymous account is helping you use the forum, although I definitely don't think you need to worry about looking dumb :)
[1] We're going for pink and fun instead. We're only going to spend a few hundred euros on graphic design.
Hi James, I feel quite guilty for prompting you to write such a long, detailed, and persuasive response! Striving to find a balance between prudence and appeal seems to be the ideal goal. Using the NHS's spending habits as a heuristic to avoid extravagance seems smart (although I would not say that this should apply to other events!). Most importantly, I am relieved to learn that this year's budget per person will likely be significantly lower.
I totally agree that these events are invaluable. EAGs and EAGxs have been crucial in expanding my network and enhancing my impact and agency. However, as mentioned, I am concerned about perceptions. Having heard this I feel reassured, and I will see who I can invite! Thank you!
That's nice to read! But please don't feel guilty; I found it a very useful prompt to write up my thoughts on the matter.
In the past few weeks, I spoke with several people interested in EA and wondered: What do others recommend in this situation in terms of media to consume first (books, blog posts, podcasts)?
Isn't it time we had a comprehensive guide on which introductory EA books or media to recommend to different people, backed by data?
Such a resource could consider factors like background, interests, and learning preferences, ensuring the most impactful material is suggested for each individual. Wouldn’t this tailored approach make promoting EA among friends and acquaintances more effective and engaging?
Meta's recent announcements had me thinking more about "open source" AI systems, and I am wondering whether it would be worthwhile to reframe open-source models and start referring to them as "models with publicly available model weights" or "free-weight models".
This is not just more accurate, but also a better political frame for those (like me) who think that publicly releasing model weights is probably not going to lead to safer AI development.
We can also talk about irreversible proliferation.
Are you here to win or to win the race?
I've been reflecting on the various perspectives within AI governance discussions, particularly among those concerned about AI safety.
One noticeable dividing line separates two groups. The first is concerned primarily about the risks posed by advanced AI systems themselves. This group advocates for regulating AI as it exists today and increasing oversight of AI labs. Their reasoning is that slowing down AI development would provide more time to address technical challenges and allow society to adapt to AI's future capabilities. They are generally cautiously optimistic about international cooperation. I think FLI falls into this camp.
On the other hand, there is a group increasingly focused not only on developing safe AI but also on winning the race, often against China. This group believes that the US currently has an advantage and that maintaining this lead will provide more time to ensure AI safety. They likely think the US embodies better values compared to China, or at least prefer US leadership over Chinese leadership. Many EA organizations, possibly including OP, IAPS, and those collaborating with the US government, may belong to this group.
I've found myself increasingly wary of the second group: I tend to discount their views, trust them less, and question the wisdom of cooperating with them. My concern is that their focus on winning the AI race might overshadow the broader goal of ensuring AI safety. I am not really sure what to do about this, but I wanted to share my concern, and I hope to think more about what can be done to prevent a rift from emerging, especially since I expect the policy stakes to grow in the coming years.