Some random appreciations (because someone nudged me to give positive feedback that I would have otherwise not shared with anyone):
I think the Online team at CEA has done an outstanding job improving the forum. Every month or so, they surprise me with more features I didn't know I needed, which reflects the hard work they put in behind the scenes to create as much value for the EA community as possible. I've also been able to observe firsthand how they gather user feedback and prioritize features, and as a software engineer, I'm extremely impressed!
Even though the Community Health team at CEA has made many errors in the past, I think we often miss both how difficult their work is and how every decision they make is riddled with crucial tradeoffs, many of which are invisible to others[1]. Because every incident they handle correctly (or even prevent) is one incident we probably don't hear about, I want to take a moment to highlight that in every interaction I've had with them, they've acted with incredible professionalism, demonstrated deep concern for everyone involved, and acted very competently.
Having been in similar positions many times as a community builder (both inside and outside EA), I know just how difficult the job is, and how what others see as clear failures are often just the result of their lacking information that can't be shared publicly without harming others.
SB 1047 is a critical piece of legislation for AI safety, but there haven't been great ways of getting up to speed, especially since the bill has been amended several times. Since the bill is now finalized, better resources exist to catch up. Here are a few:
If you are working in AI safety or AI policy, I think understanding this bill is pretty important. Hopefully this helps.
Quick poll [✅ / ❌]: Do you feel like you don't have a good grasp of Shapley values, despite wanting to?
(Context for after voting: I'm trying to figure out whether more explainers of this would be helpful. I still feel confused about some of its implications, despite having spent significant time trying to understand it.)
I have a post that takes readers through a basic example of how to calculate Shapley values.
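For readers who want to see the computation concretely, here's a minimal sketch (not taken from the linked post; the two-player game and its payoffs are made up for illustration). It computes exact Shapley values by averaging each player's marginal contribution over every order in which players could join the coalition:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all join orders (permutations)."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p in this join order
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(perms) for p in totals}

# Hypothetical game: A alone earns 100, B alone earns 0,
# together they earn 200 (B doubles A's output).
def v(coalition):
    if coalition == frozenset({"A", "B"}):
        return 200
    if coalition == frozenset({"A"}):
        return 100
    return 0

print(shapley_values(["A", "B"], v))  # {'A': 150.0, 'B': 50.0}
```

Note that B gets credit (50) even though B earns nothing alone, because B's marginal contribution when joining after A is 100. This is the kind of counterintuitive credit assignment that explainers usually focus on.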
I read your post while I was writing up the wiki article on Shapley values and thought it was really useful. Thanks for making that post!
You might want to use viewpoints.xyz to run a poll here.