MDR

Mathilde da Rui

18 karma · Joined

Comments (7)

Thanks for this, OP! 

I hadn't seen the latest updates to the site since ~launch and have sent my own feedback.

Fellow Forum users: consider taking a few minutes to look at the site and share your sense of the UX, or whatever other feedback feels useful.
You, yes, you could help aisafety.com visitors have a smoother, more impactful experience, adding real value to the work the team has already put in, at pretty minimal cost to you :)

Thank you for this post. I think I broadly agree. I feel:

a) some resistance around your framing and valuation of investment in community building in some of its forms. I appreciate your arguments and don't really dispute them; it's more a question of how I would weigh them against other factors.

Namely, one lesson I take from the FTX affair and other issues that have come up in our movement in recent months and years is that a more open, inclusive, dynamic community, one that 'recruits' from more diverse socio-economic and cultural backgrounds, could give our movement more resilience. That holds even if new entrants largely stay on the periphery of, say, key decision-making while they build a clear model of how EA looks at the world, or get a better sense of what intervening in their fields of interest might look like if they join as students or early-career professionals rather than high-ranking experts. The resilience would come from widening the aggregate range of what we expect to be normal: ways of securing capital, degrees of power concentration, levels of risk-taking we deem acceptable, and so on. (Note that I'm not advocating against centralised functioning for certain aspects of the cooperative work we set out to do.)

If anything, the fact that 'core EA' seems neither to have seen the FTX debacle coming nor to be too troubled by a select few deciding how money gets allocated, despite the stakes and the uncertainty and volatility of everything we're tinkering with, suggests to me that we should invest in what I'll call higher perspectival pluralism, for want of a better descriptor. (I'm writing on a very noisy broken-down train and my attention is pretty badly challenged by that, in case any of this doesn't read too smoothly.) I realise this must seem abhorrent to some readers. I'm not saying expertise should not play a major role in decision-making in EA, which means a lot of decision-making shouldn't be democratic in the sense where everyone's opinion, however well or poorly informed, counts equally. But I think it's worth considering that a lot of the steering of the movement exceeds the bounds of technical decisions best made by people who really know their field. Norms around behaviour and risk-taking are examples of what I mean by 'steering of the movement'.


b) curiosity about how you construe the increased 'moderation' you find yourself drawn to. It would be interesting to flesh out what this entails. My sense is that this 'moderation' is essentially a response to the 'naïve optimising' discussed in some previous comments. It's a sound response, in my view, and it bears breaking down into something with higher descriptive signal.

The way I would propose to conceive of this 'moderation' goes something like this: excessively optimising for an outcome undermines the resilience of the wider system (whichever system we may be looking at in a given instance) by removing the 'inefficiencies' (e.g. redundancies) that provide buffers against shocks and distortions, including unknown unknowns. Decreased resilience is bad. It's incompatible with sustained robustness: it displaces pressure, often in poorly monitored ways that end up blowing up in our faces unexpectedly. Or worse, in the faces of other stakeholders who had been minding their own business all along.
To be clear, my understanding is that you are mostly considering embracing more moderate views about EA's potential to achieve a flawless record, to be a desirable social bubble, and so on. I think my take applies to this too (and again, I agree with your general sentiment and am simply trying to clarify what moderation might mean in this context).

In other words, welcoming some degree of inefficiency could actually be a good way to translate the epistemic humility most of us strive for (and the moderation you speak of) into the design and implementation of our initiatives. I'd like to see any approaches to what this could look like from a design/operational research perspective, if anyone has come up with even tentative models for this. 

So my sense is that we should:
- be wary of endeavours like FTX that are willing to compromise important aspects of how we're trying to impact the world for the sake of maximal effectiveness, and
- encourage people to build sustainably robust impact across the board, rather than achieving Everything Right Away With Maximal ROI but with serious tunnel vision. In other words, under conditions of uncertainty, valuing effectiveness means sacrificing some efficiency (as a side note, I suppose this is one of the underlying assumptions of point a) above). Obviously this wouldn't hold if we were omniscient, but I don't think any of us is arguing we are, especially post-FTX. So until then, I think we should value slack more highly across the board.
 

I see a lot of value in these kinds of events, especially given the limited cost and effort involved versus the decent-to-really-high EV I would see in this, and feel very supportive of Linda's what, why and how.

Thank you so much for posting this! I've just submitted an application for a civil service position (before reading your post, alas), and while I know these are all very competitive roles, as someone who didn't study at a super top university for various reasons, I loved the blind recruitment process and the guidance available online. Fair, clear and reasonable. Your thoughts are really valuable too, and align perfectly with what I took away from the online guidance.

Hi Lewis, I'd love to know whether there are particular aspects of research on existing, emerging or desirable strategies for shifting agricultural subsidies away from animal (especially factory) farming and towards plant-based agriculture that you would like to see more work on. For context: I'm beginning to design an MSc dissertation (in animal welfare science, ethics and law, UK-based) and think this would be a good area to generate a bit more work in, but the field is obviously moving fast, so I'd love your thoughts on any especially worthwhile approaches.

On another note, do you think social media advocacy for animals is at risk of reaching a plateau (through banalisation, fatigue or desensitisation) or even declining anytime soon, or do you see it as something worth continuing to invest in fairly aggressively, as many organisations seem to be doing?

Thanks for your good work!