
Brad West🔸

Founder & CEO @ Profit for Good Initiative
2045 karma · Joined · Roselle, IL, USA · Profit4good.org/

Bio


Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.


Posts
21


Comments
318

Because we face substantial uncertainty around the eventual moral value of AIs, any small reduction in p(doom) or catastrophic outcomes—including S-risks—carries enormous expected utility. Even if delaying AI costs us a few extra years before reaping its benefits (whether enjoyed by humans, other organic species, or digital minds), that near-term loss pales in comparison to the potentially astronomical impact of preventing (or mitigating) disastrous futures or enabling far higher-value ones.

 From a purely utilitarian viewpoint, the harm of a short delay is utterly dominated by the scale of possible misalignment risks and missed opportunities for ensuring the best long-term trajectory—whether for humans, other organic species, or digital minds. Consequently, it’s prudent to err on the side of delay if doing so meaningfully improves our chance of securing a safe and maximally valuable future. This would be true regardless of the substrate of consciousness.
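A minimal toy sketch of that expected-value comparison, in Python. All of the numbers below are made-up assumptions for illustration only, not figures from the argument above:

```python
# Toy expected-value comparison; every number here is an arbitrary assumption
# chosen only to illustrate the asymmetry described above.

V = 1e15                 # hypothetical value of a good long-term future (arbitrary units)
delta_p = 0.001          # hypothetical reduction in probability of a catastrophic outcome
delay_years = 5          # hypothetical delay before reaping AI's benefits
annual_delay_cost = 1e9  # hypothetical value forgone per year of delay (same units)

expected_gain = delta_p * V                      # value gained by reducing p(doom)
expected_loss = delay_years * annual_delay_cost  # value lost to the delay itself

print(f"gain from risk reduction: {expected_gain:.1e}")  # 1.0e+12
print(f"cost of short delay:      {expected_loss:.1e}")  # 5.0e+09
```

Under these assumed numbers the gain from even a small risk reduction swamps the cost of the delay by roughly three orders of magnitude; the qualitative point survives wide variation in the inputs.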

I would like to see a strong argument for the risk of "replaceability" as a significant factor in potentially curtailing someone's counterfactual impact in what might otherwise be a high-impact job. The central idea is that the "second choice" applicant, the one behind the person who was chosen, might have done just as well, or nearly as well, as the "first choice" applicant, making the counterfactual impact of the first small. I would want an analysis of the cascading impact argument: that you "free up" the second choice applicant to do other impactful work, who then "frees up" someone else, etc., and that this stream of "freed up" value mostly addresses the "replaceability" concern.

Yeah, I would think that we would want ASI-entities to (a) have positively valenced experience as well as the goal of advancing their positively valenced experience (and minimizing their own negatively valenced experience) and/or (b) have the goal of advancing the positively valenced experiences of other beings and minimizing their negatively valenced experiences.

A lot of the discussion I hear around the importance of "getting alignment right" pertains to lock-in effects regarding suboptimal futures.

Given the probable irreversibility of the fate accompanying ASI and the potential magnitude of good and bad consequences across space and time, trying to maximize the chances of positive outcomes seems simply prudent. Some of the "messaging" of AI safety may seem a bit human-centered, perhaps because that framing is more accessible to more people. But most who have seriously considered a post-ASI world have considered the possibility of digital minds both as moral patients (capable of valenced experience) and as stewards of value and disvalue in the universe.

Really glad to see the success of the Compassion Calculator and hope for its continued success in bringing more omnivores into the fight against factory farming!

The preference for humans remaining alive/in control isn't necessarily speciesist, because it's the qualities of having valuable conscious experience, and of being concerned with promoting valuable conscious experience and avoiding disvaluable conscious experience, that might make one prefer this outcome.

We do not know whether ASI would have these qualities or preferences, but if we could know that it did, you would have a much stronger case for your argument. 

I would write about how there's a collective action problem with reading EA Forum posts. People want to read interesting, informative, and impactful posts, and karma is a signifier of this. So people often will not read posts, especially on topics they are not familiar with, unless the post has already reached some karma threshold. Given how time-limited front-page visibility is without karma accumulation, and how unlikely relatively low-karma posts are to be read once off the front page, good posts can be ignored entirely. On the other hand, some early traction can result in merely OK posts getting very high karma because a higher volume of people have been motivated to check them out.


I think this could be partially addressed by having volunteers, or even paying people, to commit to read posts within a certain time frame and upvote (or not, or downvote) if appropriate. It might be a better use of funds than myriad cosmetic changes. 

Below is a post I wrote that I think was good (or at least worthy of discussion) but that was probably ignored because people wanted to free-ride on others' early evaluation. It discusses how jobs in which the performance metrics actually used are orthogonal to many of the ways good can be done may be opportunities for significant impact.


https://forum.effectivealtruism.org/posts/78pevHteaRxekaRGk/orthogonal-impact-finding-high-leverage-good-in-unlikely

Another set of actors with an incentive here is the survey respondents themselves, who have reason to report a higher counterfactual value for first versus second choices. Saying otherwise could work against their goal of attracting more of the EA talent pool to their positions. Framing their staff as irreplaceable also tends to lend prestige to their organizations and staff.

With limited applicants, especially in very specialized areas, I think there is definitely a case for a large gap in value between the first- and second-choice applicant. But I suspect that this set of survey respondents would be biased toward overestimating the counterfactual impact.

I don't know to what extent Moskowitz could have influenced Zuckerberg, but I am somewhat intrigued by the potential power of negative emotion that you bring up.

Ironically, one of the emotions that reflection on effective altruism has brought me is rather intense anger. The vast majority of people in developed countries have the ability to use their resources to save lives, significantly mitigate the mass torture of animals, or otherwise make the world a much better place with the power they have. Yet, even when confronted squarely with this opportunity, most do not do it. 

I think about other mass injustices and the movements that have sought to address them, and I think we remember that there was a place for righteous fury in, for instance, women's suffrage or the civil rights movement. Yet the attitude among EAs is often conciliatory, milquetoast, professorial... almost embarrassed to hold beliefs from which the judgment of most humans is only a close corollary away.

I realize that in one-on-one interactions, a condemnatory approach is unlikely to gain us allies. But I wonder if a powerful engine for fighting global poverty, animal torture, and threats to the continued existence of conscious life might be the activation of the emotion that such matters merit.

I don't know to what extent this can be addressed by the EA Forum team at all, but I have been pretty disappointed by the lack of new, interesting ideas about how to better the world. There does not seem to be much incentive to share such ideas on the Forum, because most people will only look at articles on subjects they are already familiar with, or on meta-level conversations about community norms and expectations in the EA world. I find myself pretty frequently logging in to the EA Forum hoping to find new, interesting ideas for changing the world, but just finding a bunch of banal or navel-gazing content. I think EA, and as a result the world, would benefit from this being a more vibrant, open-minded, and creative space, but I'm not sure what would help us move in that direction.

Yes, Thisj Jacobs mentioned this below, but thanks for bringing it to my attention.
