I sometimes worry that a focus on effectiveness creates perverse incentives in strategic settings, ultimately making us less effective. Here are a few observations illustrating this concern.
Effectiveness-focused advocacy creates perverse incentives for adversaries
When we conduct cage-free campaigns, the target companies frequently ask us why they are being targeted instead of some other company. While trying to answer that, one immediately realises the following tension. If we answer "because targeting you is the most effective thing we can do", we incentivise them not to budge, because they will then know that willingness to compromise invites more aggression.
When dealing with an effectiveness-focused movement, our adversaries are further incentivised to prevent concrete results. While other movements have to be destroyed through pressure, an effectiveness-focused movement will easily go away if you just convince it that it can be more effective elsewhere.
For that reason, in our campaign target selection, we mainly focus on the number of animals affected by the brand and the gap between perceived quality and reality, as these criteria do not create perverse incentives.
Sometimes you have to fight back even when it’s not the most effective thing to do
I think this is also one of the reasons behind people's disassociation from EA. People are too quick to abandon EA because EA teaches them to do so.
If an adversary wants to weaken EA, they can damage the brand to make life difficult for movement members. Against EA, they have a significant advantage: EAs want to do the most good, an extremely high bar. As soon as associating with EA no longer aligns with doing the most good, members rationally drop the association. The bully will face minimal resistance, since at the slightest aggression members will simply move on to somewhere they can be more effective.
Religious rules survive because they are stubborn
Many religious rules are stubborn, black and white, and context-independent. Pork and alcohol are haram in Islam, and that's the end of the story. It doesn't matter if drinking alcohol would help you assimilate into the dominant culture and gain influence, or if social convenience suggests compromise. This inflexibility allows religious rules to survive even when practised by minorities facing significant pressure. Flexible rules tend to dissolve when they confront strong opposition.
We might need more commitment devices
It's possible to dismiss these concerns as naive consequentialism and argue that true consequentialism requires commitment devices for such cases. But do we have enough commitment devices in our community?
For me, the biggest reason our cage-free work in Turkey survived was that starting the work itself functioned as a commitment device. I feared that stopping midway would embolden adversaries against future advocates and signal to companies that welfare campaigns were merely temporary fads. I was afraid of making animals worse off by quitting, and this helped me persevere during difficult periods when I considered that "maybe this isn't the most effective thing I can do right now."
I don't think this is an easy problem to solve. Flexibility is deeply woven into EA through cause-neutrality, scout mindset, pragmatism and small identities. Other movements preserve their obstinacy through a combination of dogmas, soldier mindset and refusal to consider effectiveness. It's a challenge to find ways to preserve EA's strategic edge while making it more stubborn.
100%. It's handing control of your actions to anyone who can play your language game.
I think, for the reason you describe, it is most effective to commit to your campaigns. You do the critical thinking first, but eventually you have to actually execute the plan and give it a chance to have realized impact. The fair-weather people who want every object-level action they take to be defensible as the most effective possible thing (imo the dominant EA take atm) are the ones who are wrong about effectiveness: they can't execute a real-world, multi-step plan for impact.
Constantly switching paths and always taking the next step that looks most effective, including to your critics and enemies, is a way to maximize option value, not a way to accomplish any multi-step real world plan.
This is a kind of post I like. Politely and concisely questioning an EA norm that has real-world consequences, without trying to answer all the questions. I'm interested to see if there will be further discussions of this in the comments (for now, I won't risk a position on this, I find myself modestly agreeing but don't have much to add).
I appreciate your exploration of the strategic complexity inherent in prioritizing effectiveness. A crucial aspect involves recognizing that impact often occurs in significant "chunks." Identifying key thresholds and accurately assessing their likelihood of being pivotal is essential for effective resource allocation. For instance, in farmed animal advocacy, securing cage-free commitments from major corporations can lead to disproportionate industry-wide improvements, making precise strategic targeting crucial. In these contexts, there might appear to be little impact until the critical moment. However, openly communicating these threshold calculations might inadvertently strengthen adversaries' resistance. Drawing from game theory's "madman" approach, an actor sometimes gains strategic advantage if adversaries believe it may irrationally commit excessive resources or accept high risks to achieve its goals, thus deterring aggressive opposition.
On a related semantic note, describing strategic resilience or integrating adversarial responses as "less effective" could oversimplify this nuanced issue. I would think when people say “effective” that they are talking about what best achieves one’s goals, and integrating adversarial responses would help in doing so.
Interesting ideas!
A hypothesis I found relevant to this phenomenon, similar to yours:
The problem "maximize impact per resources spent" is not well-defined a priori.
For instance, it depends on the time frame and scale: there could be very cost-effective smallish interventions that can't scale much, versus very large-scale interventions that require massive coordination, investment, "stubbornness", etc.
[Of course, you should try to see if such things actually exist in the real world; FWIW, I suspect they do]
It also depends on the entity you consider: is it you as an individual? The small group of people who are willing to listen and do a project with you? The whole EA community? Humanity?
You might be able to build a coherent system that takes into account these various levels though.
Another remark, that has more to do with execution than general principles, which you also touch upon: sharing all the information you have is not always a good idea. Unfortunately, the possible fixes (restricting information access to trusted people/groups) seem to go against the [EA/rationalist/...] culture of truth-seeking, open communication, etc.
This fantastic post by @Holly Elmore ⏸️ 🔸 "Scouts need soldiers for their work to be worth anything" carries a similar sentiment from a bit of a different angle.
I think there can also be a bit of a prisoner's dilemma dynamic at times here, where individually defecting away from stubbornness, or away from EA, can seem to be the best thing for the individual and even perhaps for a short-term tangible outcome, but may actually be worse for the cause we fight for or the EA movement in general over the longer term.
I think anyone who's been involved in advocacy, organising, and activism knows that sometimes you need to be stubborn for the sake of leverage, movement longevity, and morale, even when it can be a bit anti-truth-seeking at times. I've done it a number of times.
Also, in the GWWC pledge we have a fantastic commitment device, which is obviously for a specific use case, but we could learn from it for other cases.
That presumably depends on whether "targeting you is the most effective thing we can do" translates into "because you're most vulnerable to enforcement action", "because you're a major supplier of this company that's listening very carefully to your arguments", "because you claim to be market-leading in ethics", or even just "because you're the current market leader". Under those framings, it still absolutely makes sense for companies to consider compromising.
I agree with the broader argument, though: if you resolve to never bother with small entities, or with entities that tell you to get lost, that will deter even the more receptive ears from listening to you.
Nice post! Your reasoning about perverse incentives and bullying/fighting back reminds me of commitment races (LessWrong post).