Do people enjoy using Slack? I hate it and think it has bad ergonomics. I'm in about 10 workspaces, and logging into each of them is horrible. There is no voice chat. I don't get notifications (and I dread the thought of configuring them correctly; I just assume that if someone really wanted to reach me immediately, they would find a way). I'm pretty sure it would be hard to build a tool better than Slack (one could build a much better tool for a narrower use case, but covering all of Slack's features would be hard), but let's assume I could. Would it be worth it? Do you all find Slack awful as well, or is it only me?
I realized that the concept of utility as a uniform, singular value is pretty off-putting to me. I consider myself someone who is inherently aesthetic and needs to place myself in a broader context of society, style, and so on. I require a wide range of experiences; in some sense, I need more than just happiness to reach a state of fulfillment. I need the aesthetic experience of beauty, the experience of calmness, the anxiety of looking for answers, the joy of building and designing.
The richness of everyday experience might be reducible to two dimensions, positive and negative feelings, but that really doesn't capture what a fulfilling human life is.
All the EA-committed dollars in the world are a tiny drop in the ocean of the world's problems and it takes really incredible talent to leverage those dollars in a way that would be more effective than adding to them.
This seems false to me. I agree that earning to give should be highly rewarded and so on, but I don't think that, for example, launching an effective giving organization requires an incredible amount of talent. There have been many launched recently, either by CE or local groups (I was part of the team that launched one in Denmark). Recently, EAIF said that they are not funding-constrained, and there are a lot of projects being funded on Manifund. It looks more like funders are looking for new projects to fund. So either most of the funders are wrong in their assessment and should just grant to existing opportunities, or there is still room for new projects.
If anything, my experience was that the bar for direct work is way lower than I expected, and part of the reason I thought otherwise was comments like this.
Re 2: I agree that this is a lot of work, but it's little compared to how much money goes into grants. Some of the predictions are also quite straightforward to resolve.
Well, glad to hear that they are using it.
I believe that an alternative could be funding a general direction, e.g., funding everything in AIS, but I don't think these approaches are mutually exclusive.
Meta: I'm requesting feedback and gauging interest. I'm not a grantmaker.
You can use prediction markets to improve grantmaking. The assumption is that having accurate predictions about project outcomes benefits the grantmaking process.
Here’s how I imagine the protocol could work:
Examples of grant proposals and predictions (taken from here):
A prediction market is created based on these proposed outcomes, conditional on the project receiving funding. Some of the potential grant money is staked to make people trade.
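Here is a minimal sketch of what such a conditional market could look like, in Python. The mechanism (a logarithmic market scoring rule seeded with the staked grant money) and all of the names are my own assumptions for illustration, not part of the proposal above; the essential piece is the conditional resolution, where everyone is refunded if the grant is never made.

```python
import math
from dataclasses import dataclass, field

@dataclass
class ConditionalGrantMarket:
    """Binary market on "will this outcome be achieved?", conditional on
    the project being funded. Uses a logarithmic market scoring rule
    (LMSR) as the automated market maker; the liquidity parameter is
    funded by the staked portion of the grant money."""
    question: str
    liquidity: float                      # LMSR b-parameter
    q_yes: float = 0.0                    # net YES shares sold
    q_no: float = 0.0                     # net NO shares sold
    trades: list = field(default_factory=list)

    def _cost(self, q_yes: float, q_no: float) -> float:
        b = self.liquidity
        return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

    def price_yes(self) -> float:
        """Implied probability of the outcome, given funding."""
        b = self.liquidity
        e_yes, e_no = math.exp(self.q_yes / b), math.exp(self.q_no / b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, trader: str, shares: float) -> float:
        """Buy YES shares; returns the cost charged. (Only YES trading
        is shown, to keep the sketch short.)"""
        cost = self._cost(self.q_yes + shares, self.q_no) \
             - self._cost(self.q_yes, self.q_no)
        self.q_yes += shares
        self.trades.append((trader, "YES", shares, cost))
        return cost

    def resolve(self, funded: bool, outcome_achieved: bool = False) -> dict:
        """If the grant is not made, the market is void and everyone gets
        their money back; otherwise each winning share pays out 1."""
        payouts: dict = {}
        for trader, side, shares, cost in self.trades:
            if not funded:
                payouts[trader] = payouts.get(trader, 0.0) + cost   # refund
            elif (side == "YES") == outcome_achieved:
                payouts[trader] = payouts.get(trader, 0.0) + shares
        return payouts

# Example: one of the proposed outcomes as a market question.
market = ConditionalGrantMarket(
    question="Conditional on funding, will the org publish 3 papers in 12 months?",
    liquidity=100.0,
)
market.buy_yes("alice", 20)
print(f"P(outcome | funded) ~ {market.price_yes():.2f}")
```

The grantmaker would read `price_yes()` as the crowd's conditional estimate; refunding void markets is what makes trading on unfunded proposals costless, which is one standard way to structure conditional markets.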
The obvious criticisms are:
Thanks! I saw that post. It's an excellent approach. I'm planning to do something similar but less time-consuming and more limited in scope. The range of theories of change pursued in AIS is narrow and can be broken down into:
Evals: these can be measured by the quality and number of evals and their relevance to x-risks. It seems pretty straightforward to tell a bad eval org from a good one: engagement with major labs, a large number of evals, and a clear relation to existential risk.
Field-building: having a lot of participants who go on to do awesome things after the project.
Research: I argue that the number of citations is also a good proxy for the impact of a paper. It's definitely easy to measure and reflects how much engagement a paper received; in the absence of any work to bring a paper to the attention of key decision-makers, impact is closely tied to that engagement.
Governance: I'm not sure how to think about this one.
Take this with a grain of salt. A rough sketch of how these proxies could be scored follows below.
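Everything in this sketch is my own illustration, not an established methodology: the `OrgProxies` fields, the weights, and the log scaling are arbitrary choices, and the citation lookup assumes the public Semantic Scholar Graph API (verify the endpoint and field names against the current docs before relying on it).

```python
import math
from dataclasses import dataclass

import requests  # pip install requests

S2_PAPER_API = "https://api.semanticscholar.org/graph/v1/paper/"

def citation_count(paper_id: str) -> int:
    """Fetch a paper's citation count from Semantic Scholar.
    `paper_id` can be an external id such as 'arXiv:1706.03762'."""
    resp = requests.get(S2_PAPER_API + paper_id,
                        params={"fields": "citationCount"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("citationCount") or 0

@dataclass
class OrgProxies:
    """Hand-collected inputs for one organization (all illustrative)."""
    num_evals: int = 0
    engages_major_labs: bool = False
    xrisk_relevant: bool = False
    followup_participants: int = 0        # field-building proxy
    paper_ids: tuple = ()                 # research proxy

def proxy_score(org: OrgProxies, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three theory-of-change proxies into one rough number.
    log1p dampens outliers so one highly cited paper doesn't dominate."""
    evals = org.num_evals \
        * (2 if org.engages_major_labs else 1) \
        * (2 if org.xrisk_relevant else 1)
    research = sum(citation_count(p) for p in org.paper_ids)
    w_e, w_f, w_r = weights
    return (w_e * math.log1p(evals)
            + w_f * math.log1p(org.followup_participants)
            + w_r * math.log1p(research))
```

The contested part is obviously the weights; in practice it may be better to report the three proxies separately than to collapse them into a single score.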
EDIT: Also, I think that engaging the broader ML community with AI safety is extremely valuable, and citations tell us whether an organization is good at that. Another thing worth reviewing is organizational transparency: how organizations estimate their own impact and so on. This space is really unexplored, which seems crazy to me. The amount of money that goes into AI safety is gigantic, and it would be worth exploring what happens with it.
I'm a huge fan of self-hosting, and even better, of writing simple, ugly apps. In my dream world, every org would have a resident IT person who would just code an app with exactly the features they need.