
RedTeam

Risk, Assurance & Continuous Improvement specialist
39 karma · Joined · Working (6-15 years) · United Kingdom

Bio

How I can help others

Reach out to me if you have questions about:
- effective internal controls / running organisations in an effective way
- how continuous improvement can make organisations more effective 
- working in the UK Charity sector
- transitioning from the private sector to the charity sector
- risk management / internal audit / organisational assurance
- business continuity and organisational resilience
 

Comments (7)

Thank you for this post, Vasco. 

I'm really glad to see a post challenging what seems to be the status quo opinion that saving lives in the cheapest countries is the best / 'most effective' thing to do. (Perhaps there are more posts challenging this - I'm still new to the forum and haven't gone through the archives - but from what I've seen more generally, the save-lives-most-cheaply view seems to be fairly ubiquitous, and I worry it's an overly simplistic way to look at things.)

I tend to agree that focusing on cost to save a life is not necessarily the best proxy for effectiveness, and that WELLBY-type metrics seem a very sensible thing to take into account.

I also think indirect effects are under-considered in common discourse. If you save a life in a rich country, does that sometimes have the potential to do more good overall, because the person might, post-intervention, be in a better position to help others through donations, volunteering or high tax contributions? And if we only address what's espoused to be "the most cost-effective" - only the malaria net charities etc. - and ignore the more expensive issues, we could make the problems we neglect exponentially worse and do more harm than good. It's not necessarily the case that the more expensive issues just stay constant if we don't fund interventions for them. The world is far more complex than that. Many problems have a tipping point beyond which things get much worse, and if we just say 'no funding for causes not on the effective charities lists' there's a strategic cost/implication to that. For example, child exploitation by criminal gangs - not something you find on the cause prioritisation lists, but if nothing is done to address issues like that, organised crime thrives without challenge and it becomes a much, much more complex, expensive problem to fix. (There are probably better examples, but I did find this https://www.bbc.co.uk/news/uk-68615776 from the UK news today really interesting, and it seems like an example of a problem that is getting worse and, without attention, will effectively become an epidemic. Yet the general EA wisdom would say funding charities that try to help children escape exploitation by criminal gangs is not cost-effective, so don't put it on your list - don't we need a more nuanced view?)

Cause prioritisation is really important, and considering effectiveness and cost-effectiveness is really important, but it feels like the model needs to evolve into something a bit less simplistic than how many lives you can save with £X. To be clear, I do genuinely think charities like Against Malaria Foundation and GiveDirectly are fantastic and should receive a high level of funding - but not to the exclusion of everything else. If we fund ONLY those causes described by the EA community as cost-effective, there are huge repercussions. We need to think about addressing the root causes of issues, including those that are complex and may be expensive to solve, not just individual lives saved per £100k (or, if we do go for lives/£, then at least factor in WELLBYs as suggested, and make some attempt to consider the knock-on impacts of funding or not funding, including the difference between low funding for a cause vs zero funding for a cause).

I'd love to see more posts like this one that ask bold questions challenging existing assumptions - strong upvote!

Answer by RedTeam

What proportion of people working in population ethics have experienced some kind of prolonged, significant suffering themselves, e.g. destitution or third-degree burns on a large proportion of the body? What proportion have spoken to and listened to the views of people who have experienced extreme suffering in order to try and mitigate their own experiential gap? How does this impact their conclusions?

I am concerned that the views of people who have experienced significant suffering are very under-represented, and that this results in a bias in many areas of society, including population ethics.

On asymmetry - and indeed most of the points I'm trying to make - Magnus Vinding gives better explanations than I could. On asymmetry specifically I'd recommend: https://centerforreducingsuffering.org/research/suffering-and-happiness-morally-symmetric-or-orthogonal/ 
and, on whether positive goods can outweigh suffering: https://centerforreducingsuffering.org/research/on-purported-positive-goods-outweighing-suffering/ 
To get a better understanding of these points, I highly recommend his book 'Suffering-Focused Ethics' - it is the most compelling thing I've read on these topics. 

I think - I'm probably about 90% sure rather than 100% - that I agree happiness is preferable to non-existence. However, I don't think there's an urgency or moral imperative to act to create happiness over neutral states in the same way that there is an urgency and moral imperative to reduce suffering. That is, I think it's much more important to spend the world's resources reducing suffering (taking people from a position of suffering to a neutral position of needs met / not in suffering) than to spend resources boosting people from a neutral, needs-met state (which needn't be non-existence) to a heightened 'happiness' state. 
My view is both that the value difference between neutral and suffering is much larger than the value difference between neutral and happiness, AND that there is a moral imperative to reduce suffering where there isn't necessarily a moral imperative to increase happiness. 

To give an example: if presented with the option to either give someone a paracetamol for a mild headache or give someone a piece of cake they would enjoy (but do not need - they are not in famine/hunger), I would always choose the painkiller. And - perhaps I'm wrong - I think this would be quite a common preference in the general population. I think most people, on a case-by-case basis, would make statements that indicate they do believe we should prioritise suffering. Yet when we talk in aggregate, suffering-prioritisation seems to be less prevalent. It reminds me of some of the examples in the Frames and Reality chapter of Thinking, Fast and Slow about how people will respond to essentially the same scenario differently depending on its framing. 

With apologies for getting a bit dark: I think people in general (with the possible exclusion of sociopaths etc.) would agree they would refuse an ice cream, or the joy of being on a rollercoaster, if the cost of it was that someone would be tortured or raped. My point is that I can't think of any amount of positive experience/happiness for which I would be willing to say: yes, this extra happiness for me balances out someone else being raped. So there are at least some examples of suffering that I just don't think can be offset by any amount of happiness, and therefore my viewpoint definitely includes asymmetry between happiness and suffering. Morally, I just don't think I can accept a view that says some amount of happiness can offset someone else's rape or torture. 

And I am concerned that the views of people who have experienced significant suffering are very under-represented, and that we don't think about their viewpoints because it's easier not to and they often don't have a platform. What proportion of people working in population ethics have experienced destitution or been a severe burns victim? What proportion have spoken to and listened to the views of people who have experienced extreme suffering in order to try and mitigate their own experiential gap? How does this impact their conclusions?

Thank you for this post - I definitely have some similar and related concerns about experience levels in senior roles in EA organisations. And I likewise don't know how worried to be about it - in particular, I'm unclear whether what I've observed is representative or just a set of exceptional cases. 

What I have observed comes not from a structured analysis, but from a random selection of people and organisations I've looked up in the course of exploring job opportunities at various EA organisations. So, in line with the note above about being unsure whether it's representative, it's very unclear to me how much weight to assign to these observations.  

Concerns: narrowness of skillsets/lack of diversity of experience*; limited experience of 'what good looks like'; compounding effect of this on junior staff joining now.
*By 'experience' I do not strictly mean number of years working; though there may often be a correlation, I have met colleagues who have been in the workforce 40 years and have relatively limited experience/skillset and colleagues in the workforce for 5 years who have picked up a huge amount of experience, knowledge and skills. 

Observations I've been surprised by:
- Hiring managers with reasonably large teams who have only been in the workforce for a couple of years themselves and / or have no experience in the operational field they're line managing (e.g. in charge of all grant processes, but never previously worked in a grant-giving organisation). 
- Entire organisations populated solely with staff fresh out of university. I appreciate these are often startups, and that you can be exceptionally bright and have a lot to contribute at 22 or straight after a PhD, but you also don't know what you don't know at that very early career stage. 
- EA orgs with significant budgets where some or all of the senior leadership team have been in the workforce for three years or less, and / or have only ever worked in start-ups or organisations with a very small number of employees.

Unfortunately, it is not limited to potentially unfounded concerns about the inexperience of individuals. As I've been reading job descriptions, application forms etc., specific content has also made me uneasy / made me feel there is a lack of experience in some of these organisations. This has included, but is not limited to: poor or entirely absent consideration of disabilities and the Equality Act in job application processes; statements of 'we are going to implement xxxx' that seem wildly unrealistic/naive; job descriptions that demonstrate a lack of knowledge of how an operational area works; and funders describing practices, or desired future practices, that are intentionally and actively avoided by established grant funding organisations because of the amount of bias they would introduce into grant-awarding decisions. 

Why am I concerned? 
In the case of the content that surprised me, I'm concerned both because of the risks associated with the specific issues and because they may be indicators that the organisations as a whole have poor controls and low risk maturity - and it's very difficult for organisations to be effective and efficient when that is the case. 

Also, I've been in the workforce for nearly 15 years, and when I think about my own experience at different organisations, it strikes me that:
- I've learned an awful lot over that time and I see a huge difference in my professional competence and decision making now versus early in my career. 
- There has been a big difference in the quality of processes, learning and experience at different organisations. Being honest, I learned more in larger organisations; working with colleagues who had 20+ years' experience and had been in charge of large, complex projects or big teams, or who had simply been around long enough to see issues that only happen infrequently, has been very valuable. Seeing how things work in lots of different organisations and sectors has taught me a lot. 
- I worry about a lack of diversity amongst employees in any organisation. For all organisations, decision-making is likely to be worse if there's a lack of diversity of opinions and experience, and in particular, orgs where everyone did the same degree can have big blind spots, narrow expertise, or entrenched groupthink issues. For EA organisations specifically, in the same way that I think career politicians are problematic, I worry that people who have only worked in EA orgs may lack understanding of the experience of the vast majority of the population who don't work in the EA sector, and this could be problematic for all sorts of reasons, particularly when trying to engage the wider public. 
- Many of these EA orgs are in growth mode. I've seen how difficult rapid growth can be for organisations. Periods of rapid growth are often high risk not just because of the higher stakes of strategic decision-making in these periods, but because, on the operational side, a 5-person organisation is very different from a 20-person organisation, which is very different again from a 100-person or 1,000-person organisation. You need completely different controls, systems, processes and skillsets as you grow, and it can be very challenging to make this transition at the same time as (1) trying to deliver the growth/ambitions and (2) probably carrying quite a lot of vacancies, since hiring into roles will always lag behind identification of the need for additional staff. Trying to do this with a team that also hasn't seen what an organisation of the size they are aiming for looks like will mean even greater challenges. 

Importantly - I suspect there are advantages too:
- I can see real advantages, particularly culturally, to having a workforce where all or most staff are early career or otherwise have not experienced work outside of EA orgs. For example, lots of long-established or large organisations really struggle to shift culture and get away from misogyny or to get buy-in on prioritising mental health. Younger staff members - as a generalisation - tend to have more progressive views. And experience in the private sector can cause even progressive individuals to pick up bad habits/take on negative elements of culture.
- Though I've been alarmed by some elements of the application processes, other unorthodox elements have been positive. It's refreshing to see very honest descriptions/information, and prompts that try to minimise the time required from candidates in the early application stages. 
- Perhaps linked to culture being set by more progressive leaders, many EA orgs I've looked into have a focus on time and budget for learning/wellbeing that is in line with best practice and well ahead of many organisations in other sectors. E.g. I've commonly seen a commitment to a £5k personal development spend from EA orgs; in many sectors, even in large organisations, this type of commitment is still rare, and it's much more common to see lip-service statements like 'learning and development is a priority' without budgetary backing. 

N.B. In the spirit of Draft Amnesty Week, this response is largely off the top of my head, so it is rough around the edges and could probably include better examples/explanations/expansions if I spent a long time thinking about and redrafting it. However, I took the approach that rough and shared is better than unfinished and unsaid. 

Thank you for clarifying, Vasco - and for the welcome. I think it's important to distinguish between active, reasoned preferences and instinctive responses. There are lots of things that humans and other animals do instinctively that they might also choose not to do if given an informed choice. A trivial example: I scratch bug bites instinctively, including sometimes in my sleep, even though my preference is not to scratch them. There are lots of other examples in the world, from criminals who look directly at CCTV cameras in response to certain sounds, to turtles that head towards man-made lights instead of the ocean - and I'm sure there are many better examples than the ones I'm thinking of off the top of my head. But in short, I am very reluctant to draw inferences about preferences from instinctive behaviour. I don't think the two are always linked. I'm also not sure - if we could theoretically communicate such a question to them - what proportion of non-human animals are capable of the level of thinking needed to consider whether they would want to continue living if given the option. 

I agree with you that it is unclear whether the total sum of experiences on Earth is positive or negative; but I also don't necessarily believe there is an equivalence, or that positive experiences can be netted off against negative experiences, so I'm not convinced that considering all beings' experiences as a 'total' is the moral thing to do. If we do try to total them all together to get some kind of net positive or negative, how do you balance them - how much happiness is someone's torture worth, or netted off against, in this scenario? It feels very dangerous to me to try to infer some sort of equivalency. I personally feel that only the individuals affected by the suffering can say under what circumstances they feel the suffering is worth it - particularly as different people can respond to and interpret the same stimuli differently. 
Like you, I am certainly not inclined to start killing people off against their will (and 'against their will' is a qualifier which adds completely different dimensions to the scenario; killing individuals is also extremely different to a hypothetical button painlessly ending all life - if you end all life, there is no one left to mourn, to be upset, or to feel pain or indeed injustice about individuals no longer being alive, which obviously isn't the case if you are talking about solitary deaths). If we are in favour of euthanasia and assisted suicide, though, that suggests an acceptance that death is preferable to at least some types or levels of suffering. To go back to the original post, what I was defending is the need for more active discussion of the implications of accepting that concept.

I do fear that because many humans find it uncomfortable to talk about death, and because we may personally prefer to be alive, it can be uncomfortable to think about and acknowledge the volume of suffering that exists. It's a reasonably frequent lament in the EA world that not enough people care about the suffering of non-human animals, and there is criticism of people who are viewed as effectively ignoring the plight of animals in the food industry because they'd rather not know or think about it. I worry, though, that many in EA do the same thing with this kind of question. I think we write off the hypothetical 'kill all painlessly' button too easily, because there's an instinctive desire to live, and those of us who are happy living would rather not think about how many beings might prefer nothingness to living if given a choice. I'm not saying I definitely would push such a button, but I am saying that I think a lot of people who say they definitely wouldn't are answering instinctively rather than because they've given adequate consideration to the scenario. Is it really so black and white that we definitely shouldn't press that hypothetical button - and if it is, what are the implications of that? That we value positive experiences more than we disvalue suffering? That we think some level of happiness can justify or balance out extreme suffering? What's the tipping point - if every being on Earth was being endlessly tortured, should we push the button? What about if every being on Earth bar one? What if it's 50/50? 
I will readily admit I do not have a philosophy PhD, I have lots of further reading to do in this space, and I am not ready to say definitively what my view is on the hypothetical button one way or the other. But I do personally view death or non-existence as a neutral state, I do view suffering as a negative to be avoided, and I do think there's an asymmetry between suffering and happiness/positive wellbeing. With that in mind, I really don't think there is any level of human satisfaction about which I would be comfortable saying 'this volume of human joy/positive wellbeing justifies the continuation of one human being subject to extreme torture'. If that's the case, can I really say it's wrong to press the hypothetical 'painless end for all' button in a world where we know there are beings experiencing extreme suffering? 

What is your basis for the statement that "most beings would rather continue to live instead of being painlessly killed"? This seems to me to be a huge assumption. Vinding and many others who write from a suffering-focused ethics perspective highlight that non-human animals in the wild experience a large amount of suffering, and there's even greater consensus that non-human animals bred for food experience a large amount of suffering; is there research suggesting that the majority of beings would actively choose to continue to live over a painless death if they had an informed choice, or is this an assumption? Even just considering humans, we have millions of people in extreme poverty, and an unknown number of humans suffering daily physical and/or sexual abuse. Too often there's both a significant underestimation of the number of beings experiencing extreme suffering and a cursory disregard for their lived experience, with statements like 'oh well, if it was that bad they'd kill themselves' - which completely ignores that a large proportion of humans follow religions in which they believe they will go to hell for eternity (or similar) if they die by suicide. I would counter your selfishness statement with: 'If we accept the theory that ceasing to live is a painless nothingness, and we say there is a button to kill all life painlessly, is it not selfish for those who want to continue to live to not push the button, causing the continuation of extreme suffering for other beings?'
Oisín Considine's point may well be uncomfortable for many to think about and therefore unpopular, but I think it's a sound question/point to make, and one with potentially very significant implications when it comes to s-risks. If death (or non-existence) is neutral while suffering is negative, then that might imply, for example, that we should dedicate more resources to preventing extreme suffering scenarios than to preventing extinction scenarios. 

Answer by RedTeam

(Somewhat adjacent to the questions posed by Brad West and Max Görlitz) Does the EA community spend enough time and resources on outreach and public engagement activity? 
I often wonder whether, despite being fictional, Chidi Anagonye has done more for effective altruism than all EA orgs combined, given there is so much focus on an academic exchange of ideas and it's not clear to me how wide-reaching a lot of the existing Effective Altruism (with a capital E and A) organisations truly are; the number of pledgers on Giving What We Can is quite low, for example, and awareness of many of these organisations is low outside of the EA community itself. Has there been an evaluation - via theory of change or impact assessment work - of how much prioritisation should be given to public engagement, and to what extent this aligns with the reality of how much activity is currently undertaken in this space?