Brad West

Founder & CEO @ Profit for Good Initiative
1831 karma · Roselle, IL, USA · Profit4good.org/

Bio


Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.


Comments (285)

Thanks for thinking of us, @david_reinstein.

Right now, we're focused on gathering information about Profit for Good businesses. Down the line, we’re definitely interested in compiling a guide of individuals or businesses that might offer favorable terms to Profit for Good enterprises, especially those benefiting effective charities. However, at the moment, we don’t have the capacity to work on compiling and developing this list.

Yes, both talks are on the same concept of Profit for Good.

I don't think either makes direct reference to the Profit for Good Initiative.

The issue with support roles is that it's often difficult to assess when someone in that position truly makes a counterfactual difference. These roles can be essential but not always obviously irreplaceable. In contrast, it's much easier to argue that without the initiator or visionary, the program might never have succeeded in the first place (or at least might have been delayed significantly). Similarly, funders who provide critical resources—especially when alternative funding isn't available—may also be in a position where their absence would mean failure.

This perspective challenges a more egalitarian view of credit distribution. It suggests that while support roles are crucial, it's often the key figures—initiators, visionaries, and funders—who are more irreplaceable, and thus more deserving of disproportionate recognition. This may be controversial, but it reflects the reality that some contributions, particularly at the outset, might make all the difference in whether a project can succeed at all.

I think I considered it prior to the enumerated portion, where I'd said:

"it would be valuable to see an analysis—perhaps there’s something like this on 80,000 Hours—of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact."

I agree that the "high autonomy and lack of ability to oversee or otherwise measure achievement of objectives" would be a reason that having EAs in the role might be better. The scope of jobs in this category is not clear.

There may have been an overcorrection, and I still think ETG is a good default option: the scarcity of "EA jobs" and the frequent posts lamenting the difficulty of getting hired at EA orgs as an EA suggest that there is no shortage of EAs looking to fill roles for which close alignment is critical. This is especially true in the animal welfare EA space, where everyone wants to be doing direct work and there is little funding to enable excellent work. There may be more of an "aligned talent constraint" problem in AI Safety.

I didn't neglect it - I specifically raised the question of under what conditions having EAs, rather than non-EAs, occupy roles within orgs adds substantial value. You assume that having EAs in (all?) roles is critical to having a "focused" org. I think this assumption warrants scrutiny: there may be many roles in orgs for which "identifying as an EA" is not important, and using it as a requirement could mean neglecting a valuable talent pool.

Additionally, a much wider pool of people who don't identify as EA could align with the specific mission of an org.

One question I often grapple with is the true benefit of having EAs fill certain roles, particularly compared to non-EAs. It would be valuable to see an analysis—perhaps there’s something like this on 80,000 Hours—of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact. If an EA doesn’t outperform the counterfactual non-EA hire, their impact is neutralized. This is why I believe that earning to give should be a strong default for many EAs. If they choose a different path, they should consider whether:

  1. They are providing specialized and scarce labor in a high-impact area where their contribution is genuinely advancing the field. This seems more applicable in specialized research than in general management or operations.
  2. They are exceptionally competent, yet the market might not compensate them adequately, thus allowing highly effective organizations to benefit from their undercompensated talent.

I tend to agree more with you on the "doer" aspect—EAs who independently seek out opportunities to improve the world and act on these insights often have a significant impact.

I appreciate the depth and seriousness with which suffering-focused ethics addresses the profound impact of extreme negative experiences. I’m sympathetic to the idea that such suffering often carries more moral weight than extreme positive experiences. For example, being tortured is not merely "worse" than having a pleasurable experience, but it is disproportionately more severe. The extreme nature of certain sufferings makes it challenging, if not impossible, to identify positive experiences that one would reasonably trade off to endure them.

However, I maintain a classical utilitarian framework, which, while recognizing the disproportionate severity of certain forms of suffering, also acknowledges the significant value of positive experiences. The example involving a toothache and heaven illustrates why positive experiences cannot be dismissed. Ending a state of eternal bliss (or preventing it from ever occurring) simply to avoid a trivial negative experience like a toothache is both absurd and morally troubling. It suggests a kind of ethical myopia that undervalues the richness and depth of joy, love, and fulfillment that life can offer.

Imagine individuals behind a veil of ignorance, choosing between two potential lives: one filled with immense joy but punctuated by occasional bad days, versus a life that is consistently mediocre, without significant pain but also devoid of substantial positive experiences. It seems intuitive that most would choose the former. The prospect of immense joy outweighs the temporary pain that accompanies it, suggesting that the value of positive experiences should not be discounted but rather carefully weighed alongside the potential for suffering.

The sensible approach, in my view, is not to eliminate or devalue the significance of joy and positive experiences, but to acknowledge the depth and intensity of potential suffering. By doing so, we can ensure that our ethical frameworks remain balanced, appropriately weighting the full spectrum of the experiences of conscious beings without overcorrecting in a way that leads to counterintuitive and undesirable outcomes.

In summary, while suffering-focused ethics rightly highlights the importance of alleviating extreme suffering, we must also recognize and value the profound positive experiences that give life its richness and meaning. Both extremes of the human condition (and those of other conscious beings)—intense suffering and intense joy—deserve our moral attention and appropriate weighting in our ethical considerations.

I think Peter Singer's book, The Life You Can Save, addresses this question more fully. But I would say that the obligation of people in wealthy countries is to make life choices, including the sharing of their own wealth, in a way that shows some degree of consideration for their ability to help others so efficiently.

Failing to make some significant effort to help, perhaps to the degree of the 10% pledge, would fall short of that obligation (though I would think that even more than that would in many situations be morally required). I do not know exactly where I would draw the line, but some degree of consideration similar to that of the 10% pledge would be a minimum.

I definitely think that the very demanding requirement you stated above would make more sense than none whatsoever, under which one implicitly values others at less than a thousandth of how one values oneself.

My intuition doesn't really change significantly if you change the obligation from a financial one to the amount of labor that would correspond to the financial one.

If I recall correctly, the value of a statistical life used by government agencies is about $10 million per life, which is calculated from how much people implicitly value their own lives through the choices they make: incurring costs to avoid risk, or accepting risk to themselves in exchange for benefits.

If we round up the cost to save a life in the developing world to $10k, people in the developed world could save 1,000 lives for the cost at which they value their own lives.
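Spelling out the arithmetic behind that ratio (a rough sketch using the round figures above, not a precise estimate):

```python
# Implied trade-off between self-valuation and lives saved, using the figures above.
VSL = 10_000_000       # value of a statistical life (USD), roughly $10M per US agencies
COST_TO_SAVE = 10_000  # cost to save a life in the developing world, rounded up (USD)

# How many lives could be saved for the amount one implicitly values one's own life?
lives_per_own_life = VSL // COST_TO_SAVE
print(lives_per_own_life)  # 1000
```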

I simply think that acting in a way that you value another person 1,000 times less than you do yourself is immoral. This is why I do think that incorporating the value of other conscious beings to some degree is morally required.

Yeah, I think that in this narrow hypothetical, both choosing not to act to save the kid and acting to kill the kid violate the kid's rights just as much: each privileges your financial interests over his life.

And regarding your point about conscience: you're appealing to our moral intuitions, whose validity we can question, particularly with thought experiments like these.

I suppose I would agree that acting as a moral person requires a significant consideration of other conscious beings with regard to our choices. And I think the vast majority of people fail to take adequate consideration thereof. I suppose that's how I consider my own "conscience": am I making choices with sufficient regard for the interests of other beings across space and time? I think attempting to act accordingly is part of my "inner goodness".
