
[to spread karma more accurately, I recommend sorting the answers by latest reply]

I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but please also use this thread for posts you don't plan to write for draft amnesty. The last time this question was posted, it got some great responses. 

This post is a companion to What posts would you like someone to write?.

If you have a couple of ideas, consider whether putting them in separate answers, or numbering them, would make receiving feedback easier. 

It would be great to see:

  • 1-2 sentence descriptions of ideas as well as further-along ideas. You could even attach a Google Doc with comment access if you're looking for feedback. 
  • Commenters signalling with Reactions and upvotes the content they'd like to see written. 
  • Commenters responding with helpful resources. 
  • Commenters proposing Dialogues with authors who suggest similar ideas, or with whom they have an interesting disagreement (Draft Amnesty Week might be a great time for scrappy/unedited dialogues). 

Draft Amnesty Week

If you are getting encouragement for one of your ideas, Draft Amnesty Week (March 11-17) might be a great time to post it. Posts that are tagged "Draft Amnesty Week" don't have to be fully thought through, or even fully drafted. Bullet points and missing sections are allowed so that you can have a lower bar for posting. 

Answers

Why I care so much about Diversity, Equity and Inclusion in EA.

A more personal account of why I care about DEI. I know at least a couple of other EAs feel the same way, so it might be worth having our perspectives out there. It would be much less about the “business case” for DEI and more about how I feel uncomfortable with the current state of DEI in EA, and how I feel I “have to” make the case for DEI in order to be a part of the movement.

FYI I just published a draft of this post. Thanks to everyone who encouraged me to write it by voting on this answer.

I’m a normie, and I think I can be an EA too

I often feel that many EAs are a bit more extreme than me. In fact, I feel pretty normal compared to the vibe I get from e.g. reading the EA Forum. Perhaps just stating all the ways in which I am normal could help other people who feel like they do not belong because they are too normal, helping them feel like EA is a place for them too.


 

Post about how EA should be marketed more as the natural step forward from our beliefs regarding equality. Everyone's pains, dreams, and simple joys matter. This is true regardless of where or when you live, or even what species you are.

I think talking about cause neutrality, scout mindset, and the long term future is less of a natural introduction point. The idea of equality resonates pretty well with people. The implications of this might be a bit difficult to unpack as well (basically subjective experiences having the same terminal value wherever they transpire), but the basic notion comes from something pretty intuitive and uncontroversial.

I really like this and think there's some gold to be mined here. Why don't you write it ;)

Brad West
Thanks Nick. I'll try to find some time to write my thoughts on the matter. I've just been really behind on what I'm trying to do with my nonprofit. It really sucks having to spend most of your time at a full-time job that is of very marginal direct value and trying to conjure the energy you have left after that to do meaningful and impactful work. EDIT: to be clear, the full-time job is NOT the nonprofit that I run
Toby Tremlett🔹
I'll come back to this because I think there might be something by Richard Chappell on it (it sort of sounds like the idea that impartiality gets you most of the way to EA). Another thought is that Draft Amnesty next week might be a good time to spend 30 mins bullet-pointing or dictating your thoughts and then posting them with minimal or no edits. I fully understand that even that can be hard to do after a day's work though, so no pressure. 
Brad West
Yes, impartiality is the core idea here (which might be more accessible as "equality"), and a lot of the other EA core ideas proceed from this with some modest assumptions. I'm just talking about using language that is more intuitive to connect with people more broadly. I think EA often wants to set itself apart from the rest of the world and emphasizes technical language and such. But a lot of the basic goals and ideas underlying them are pretty accessible and, I think, popular. I probably could fit in writing a rough version of this for Draft Amnesty Week.

Demographics: “EA employees” vs the EA community

A quick post comparing the makeup of the largest EA orgs, in terms of race and gender, to published results for the overall movement. It might indicate that the talent we seek, and are likely to get, is more diverse than the broader movement's makeup suggests. This might imply taking their views into account to a larger degree (e.g. first listening to their asks and then implementing them).

(I hope it is ok that I make an answer for each post I am thinking about writing so I know which posts, if any, people are interested in seeing)

If anyone else wants to write this I would love for you to do that. I have some rough initial ideas if you want to DM me. If you do, I would love to know when it's published. I guess in general anyone can take any of these ideas and run with it, but there is perhaps some unspoken agreement that the poster of an idea for a blog post should at least be informed that someone intends to write "their idea". Therefore I wanted to make it clear that I am super excited for, and would encourage, someone else to write this up, as chances are I will never get around to doing so.

Another post which I might work on for draft amnesty is a response to Julia Wise's "You can have more than one goal, and that's fine" and Susan Wolf's "Moral Saints". 
I'm less sure about finishing this one because it outlines a problem more than it offers a solution. Specifically, the problem is that:

  • Moral reasoning (especially consequentialist reasoning) can't limit itself. Even if you have to take breaks from doing good in order to do more good, you're taking that break because it helps you do more good. 
  • Consequentialist reasoning doesn't permit valuing (other) ends for themselves (in a way that you wouldn't impartially maximise). 
  • But (and this is Wolf's point and to a lesser extent Julia's), a good life contains values which are non-moral. 

The essay is mostly me struggling with trying to ground the kind of advice which Julia gives, i.e. to limit your pursuit of impartial good to one section of your life. 

I feel like there are a lot of articles about "value pluralism" and such, another one being Tyler Alterman's Effective Altruism in the Garden of Ends. This position appears to be more popular and in vogue than the more traditional utilitarian view that an agent's own subjective experiences should not be valued more highly, terminally, than those of other moral patients.

I would like to see an article (and maybe would write it someday) arguing that we should treat any naive favoritism for our own well-being as agents as instrumental to maximizing well-being, rather than positing multiple terminal values. 

Toby Tremlett🔹
Interesting! One of the most salient aspects of Wolf's article, to me, was her argument that a consequentialist who instrumentally valued other things (like family, personal well-being etc) as a means to their ultimate goal would: a) not really value those things (because she ~argues that valuing is valuing non-instrumentally) b) be open to (or compelled to) take opportunities to lessen their instrumental values if that would lead to better ends. For example, a consequentialist who could go on a special meditation retreat which would make them care less about their own wellbeing (or their family or etc...) should take that option, and would take it if their only non-instrumental value was the impartial good. 
Toby Tremlett🔹
Drafted! (directly because of upvotes + draft amnesty's lower standards. Would never have finished it otherwise)

Things we often tell people considering pivoting into biosecurity work

This will probably be a collaborative, giving-advice-to-newcomers-style post.

Preliminary outline:

  1. Intro
    1. Write something if you repeat it a lot
    2. Biosecurity vs. scope-sensitive ‘EA’ biosecurity
  2. There are good opportunities for learning, getting started, and testing your fit.
    1. Compared to 5 years ago, we have solid options for getting up to speed
    2. (Meta) reading lists, newsletters, courses, research programs
  3. EA-specific mentoring and orgs are sparse
    1. Not many orgs, not a ton of growth
      1. There is a shortage of organizations that can absorb talent
        1. Need for more orgs in the future
        2. Good to have more founder type people 
    2. A lot of high-level strategic work (roadmapping, blueprinting) to figure out priorities and gaps in existing academic/industry research
    3. Mentoring is really bottlenecked
  4. Free yourself from the ‘EA’-label / A lot of important work happens outside of EA
    1. But don’t despair! There are many places to do important work!
    2. Biosec interventions have a pretty solid causal chain
    3. And many promising scope-sensitive biosec interventions (pathogen-agnostic etc.) have lots of overlap with public health or One Health focused biosec.
    4. A lot of important stuff happens outside of EA, esp. when it comes to actually implementing stuff
      1. Examples
    5. Very valuable to get additional marginal scope-sensitive thinking in traditional settings
    6. Become an expert in something “boring”
      1. Link ASB thing and say it also applies to technical stuff, not only policy work
  5. Yes, you can most likely contribute [so be agentic]
    1. Biosec is highly interdisciplinary
    2. Points about taking initiative and being agentic.
      1. Kind of a call to action toward the end of the post
    3. Plenty of open questions for a number of backgrounds
      1. You def don't need a biomedical background (that seems to be a common misconception)
      2. Link to Will’s post about biosecurity needs engineers 
      3. Examples
  6. (Conclusion)
    1. Reach out, etc. Maybe just a closing sentence

Good point on mentoring. Would love if you write this to also give tips about mentoring (or whether one can progress without mentors). 

I have a few ideas mulling in my head, and I've yet to decide if they would be useful. I'm unsure about posting them, as I'm not sure how popular the 'listicle' format would be compared to my normally very long and detailed posts/comments. These are:

Title: Top 5 Lessons Working in Frontline AI Governance
Summary: Things I've picked up regarding AI, risk, harms etc working in-industry in an AI governance role. Might end up being 10 things. Or 3. Depending how it goes.

Title: Top 5 Tips for Early Careers AI Governance Researchers
Summary: Similar to the above, things I wish I had known when I was an ECR.

Title: Why non-AGI/ASI systems pose the greatest longterm risks
Summary: Using a comparison to other technologies, some of my ideas as to why more 'normal' AI systems will always carry more risk and capacity for harm, and why EA's focus on 'super-AI' is potentially missing the wood for the trees.

Title: 5 Examples of Low-Hanging Policy Fruit to Reduce AI Risk
Summary: Again, a similar listicle-style of good research areas or angles that would be impactful 'wins' compared to the resources invested.

I think listicles would be a great style of post for Draft Amnesty (if you're interested). I'd be interested to see any of the listicles, and your 4th idea would be great to see as a more fleshed-out argument (though it is another one which could be a quickly stitched-together take to post on Draft Amnesty, possibly with a request for feedback in the comments before you do a full draft). 

Cost-Effectiveness Estimations in Animal Advocacy are Very Contingent on Future Expectations

Although most EAs try to make cost-effectiveness estimations based on "number of animals impacted" or "amount of suffering spared", the overwhelming majority of the expected value of animal advocacy efforts depends on their long-term effect. 

I will try to show how these estimations have a lot of variance based on different assumptions - which explains the multiple viewpoints in animal advocacy.

I will also try to provide some pros and cons for each approach and try to reveal their assumptions that support their cost-effectiveness claims for the future. 

Preliminary outline (draft):

Intro
 a. Cost-effectiveness estimates between different interventions differ by a lot based on different assumptions. Some interventions can be claimed to be 100,000,000x more cost-effective than others (I don't agree with Brian Tomasik here). So making the right choices matters a lot.
 b. They heavily depend on expectations about the future. (For example: will there be a vegan awakening, or a wave of (moral) animal welfare reforms, or technological progress that provides price-, taste-, and convenience-competitive PBMs? etc.)
 c. Cost-effectiveness estimates in animal advocacy are different from those in global health and development.
 d. Cost-effectiveness claims of different interventions in animal advocacy are typically in conflict with the cost-effectiveness claims of other interventions, since they depend on conflicting assumptions - which partly explains the infighting and debates within the movement. And since these cost-effectiveness estimates involve expectations about the future, these debates are hard to resolve.

Then I will try to describe some of the assumptions behind the cost-effectiveness claims of different interventions and provide some pros and cons that support or refute these assumptions. 

1. Radical change
    1.1. Radical moral change by "the commons": (Examples: Mass media and education campaigns - New Roots Institute, Netflix documentaries, mass veg*n leafleting campaigns, best-seller books, Ted Talks, Veganuary...)
    1.2. Radical moral change by "the elites": (Examples: Community building in leading universities, Animal Law programs in law schools, "academic" publications, lobby groups...)
    1.3. Radical change via technological progress (Good Food Institute, New Harvest, Material Innovation Initiative, Impossible, Beyond..., considerations related to the rise of AI)
    1.4. Radical change due to environmental necessity

2. Reforms
    2.1. Moral reforms (Chicken welfare campaigns, The Humane League - Open Wing Alliance, Mercy for Animals, L214, OBA, Essere Animali, Animal Equality, Sinergia Animal, Kafessiz Türkiye...)
    2.2. Efficiency reforms (Fish Welfare Initiative, Shrimp Welfare Project, Future For Fish)
    2.3. Technological reforms (Innovation Animal Ag)
    2.4. Reforms as a path to radical change? (Or radical change efforts as a way to cash in reforms)

3. Should we expect radical change or reforms to make significant progress in a single country or region at first and have a "spreading effect" afterwards?
    3.1. Are small yet socially favorable countries really important if they will become the first examples of animal liberation? (Switzerland - Sentience für Tiere, Germany - Albert Schweitzer Stiftung, Singapore, Israel - GFI)
    3.2. Are certain regions really important if they will become the first examples of animal liberation that will move other countries towards their vision? (EU policy - Compassion in World Farming, as well as THL and MFA in the US)
    3.3. If these are not going to happen, then should we simply look at where most animals live and where organisations can run more cheaply than in developed countries? (Developing countries: Sinergia Animal, Kafessiz Türkiye, Fish Welfare Initiative...)

4. Wild animal welfare
    4.1. Moral change (Animal Ethics)
    4.2. "Management reforms" (Wild Animal Initiative welfare science research)

5. Contingency due to individual advocates and advocacy groups

6. Diversified portfolios or worldview diversification as sub-optimal and unrealistic solutions --> the need for concentration and making some bets in favor of some viewpoints
 

I have quite a few ideas for "heretical" articles going against the grain on EA/rationalism/AI-risk. Here are a few you can look forward to, although I'll be working slowly. 

A list of unlikely ways that EA could destroy humanity

How motivation gaps explain the disparity in critique quality between pro-anti x-risk people

Why I don't like single-number P(doom) estimates

The flaws with AI risk expert surveys

Why Drexler-style nanotech will probably never beat biological nanomachines in our lifetimes (even with AI)

Why the singularity will probably be disappointing

There are very few ways to reliably destroy humanity

Why AGI "boxing" is probably possible and useful

How to make AI "warning shots" more likely

Why I don't like the "Many worlds" interpretation of quantum mechanics

Why "the sequences" are overrated

Why Science beats Rationalism

I always look forward to titotal posts, so I'm very happy to see that there's a healthy pipeline!

Small tongue-in-cheek word of warning: viciously attacking The Sequences can be a big sign of being a Rationalist, in the same way that constantly claiming "oh I'm just EA-adjacent" is a surefire sign that someone is an EA :P

Why I hate Singer's drowning child parable. 

 

(comment version here)

A post on how EA research differs from academic research, why people who like one distrust the other, and how in the long term academic research may be more impactful.

As always, my Forum-posting 'reach' exceeds my time-available 'grasp', so here are some general ideas I have floating around in various states of scribbles, notes, google doc drafts etc, but please don't view them as in any way finalised or a promise to write-up fully:

- AI Risk from a Moderate's Perspective: Over the last year my AI risk vibe[1] has gone down, probably to below that of many other EAs who work in this area. However, I'm also more concerned about it than many other people (especially people who think most of EA is good but AI risk is bonkers). I think my intuitions and beliefs make sense, but I'd like to write them down fully, answer potential criticisms, and identify cruxes at some point.

- Who holds EA's Mandate of Heaven: Trying to look at the post-FTX landscape of EA, especially amongst the leadership, through a 'Mandate of Heaven' lens. Essentially, various parts of EA leadership have lost the 'right to be deferred to',[2] but while some of this previous leadership/community emphasis has taken a step back, nothing has stepped in to fill the legitimacy vacuum. This post would look at potential candidates, and whether the movement needs something like this at all.

- A Pluralist Vision for 'Third Wave' EA: Ben's post has been in my mind for a long time. I don't at all claim to have the full answer to this, but I think some form of pluralism that counteracts latent totalism in EA may be a good thing. I think I'd personally tie this to proposals for EA democratisation, but I don't want to make that a load-bearing part of the piece.

- An Ideological Genealogy of e/acc: I've watched the rise of e/acc with a mixture of bewilderment, amusement, and alarm over the last year-and-a-half. It seems like a new ideology for a new age, but as Keynes said "Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back." I have some academic scribblers in mind, so it would be interesting to see if anything coherent comes out of it.

- EA EDA, Full 2023 Edition: Thanks to cribbing the work of other Forum users, I have metadata for (almost) every EA Forum post and comment, along with tag data, that was published in 2023. I've mostly got it cleaned up, but need to structure it into a readable product that tells us something interesting about the state of EA in 2023, rather than just chucking lots of graphs at the reader.

- Kicking the Tires on 'Status': The LessWrong community and broader rationalist diaspora use the term 'status' a lot to explain the world (this activity is low/high status, this person is doing this activity to gain high status etc.), and yet I've almost never seen anyone define what this actually means, or compare it to alternative explanations. I think one of the primary LW posts grounds it in a book about improv theatre? So I might do a deep dive on it, taking an eliminativist/deflationary stance on status and proposing a more idea-focused paradigm for understanding social behaviour.[3]

Finally, updates to the Criticism of EA Criticism sequence will continue intermittently so long as bad criticisms continue or until my will finally breaks.

  1. ^

    I don't like the term p(doom)

  2. ^

    If such a right ever existed in the first place, maybe the right to be trusted/benefit-of-the-doubt would be more accurate

  3. ^

    That is, instead of people acting in such-and-such a way so as to regulate/improve their social status, they act the way they do because they believe it is the right thing to do, based on their empirical and moral beliefs. Since 'status' isn't really needed to explain the latter imho, we can essentially discard it

Thanks for this; I had a couple of things listed out. This looks like a nice way to prioritise them. I will list some ideas here.

What might be the deontological constraints violated by animal product consumption, and how serious they are. There is a meme among animal advocates that all animal product consumption is murder. For that reason, it's morally forbidden to ask people to reduce their animal product consumption, since it's akin to asking them to reduce the murders they commit. 

There are also some instances of doing harm where asking for reduction is totally permissible according to common-sense morality, such as asking people to reduce their carbon emissions or their consumption of products made by slaves. I want to look more into whether the murder comparison is really apt.

Constance Li
I’d be interested in reading an analysis about the murder comparison. There seem to be some elements of direct vs. indirect harm, partial vs. full responsibility for death, and different societal values. I’m not sure if I can truly parse it out.
emre kaplan🔸
Thank you for all your feedback Constance!

Thoughts on how to resolve the tension between being maximally honest and running a coalition that unites stakeholders with different positions on the topic. Being maximally honest requires saying the things you believe as much as possible. Running a coalition requires acting within the common ground, and not illegitimately seizing the platform to promote your specific position on the matter.

Assume that an agent A is doing something morally wrong, e.g. fighting in a violent unjust war. You don't have the power to stop the war altogether, but you can get the relevant state to sign an agreement against chemical weapons and at least prevent the most horrific forms of killing. What could be the deontological restrictions on negotiating with wrongdoers? My preliminary conclusion: it's good to negotiate for outcomes that are ex-ante Pareto superior even if they don't cease the constraint violations.

Ian Turner
Seems like the Geneva Convention falls into this category?
emre kaplan🔸
Yeah, I think so.

Short thoughts on whether spreading the concept of veganism could be in tension with the principle "love the sinner, hate the sin". Veganism might implicitly create the category of non-vegans, and might make the activists see the world in terms of "sinners vs. innocents".

Empirical research I would like to see in animal advocacy. Primarily because 1. I'd like more robust estimates of impact in animal advocacy, and 2. most research in animal advocacy is understandably driven by donor needs. I would like to list some questions that would be more decision-relevant for animal advocacy organisations (when to campaign, how to campaign, which targets to select, etc.).

Thoughts on the idea that "working for institutional change is far more effective than working for individual change in animal advocacy". How strong is the evidence behind that statement, and in which contexts might it be wrong?

Constance Li
This seems like a doozy of a question to try to answer. Have you done your market research? Make sure no one else has written something about this in a similar way to what you intend first.

EA is not special, nor should we make it

I recently explained EA to a close friend. His response was "but does not everybody do that already?" I think he is not alone in thinking that, generally, EA is common sense. And if that is true, I'd argue that we should make EA as mainstream as possible.


 

This could be interesting as a counterpoint to (for example) this essay.

I'm considering writing about my personal journey to working on wild animal welfare, which was unusually pinbally: loving animals --> learning survival skills and slaughtering a bunch of poultry --> interested in things like rewilding --> working to end factory farming --> working on wild animal welfare at Wild Animal Initiative.

People often find this story interesting when I tell it, and it might help engage or persuade some people (e.g. by demonstrating that I've seriously considered other philosophies toward nature).

But my big hangup is I don't really know who the audience for this piece would be, or what exactly I want to achieve with it. That could have a big effect on which arguments I make, what kind of language I use, and how much detail I go into. (Having an altruistic theory of change is also essential to feeling okay with spending this much time gazing into my navel.)

I'd welcome any thoughts on whether/how to proceed!

Have you seen this post from Catherine Low? It's a great example of telling this type of story in a way that Forum readers really appreciated. Maybe a way to make your story more helpful is to highlight lessons you have learned + why you changed your mind at each stage. Seeing more examples of people taking their career seriously, and reassessing deeply held values, is always useful. 

A vision for wild animal welfare - lab-grown meat, population control via contraceptives - what successful wild animal welfare interventions could look like, hundreds of years from now

I think more speculative fiction about wild animal welfare would be great! Thank you!

 

Here's a related thought, but ignore it if it deters you from writing something soon:

When I talk to people who are skeptical of or opposed to wild animal welfare work (context: I work at Wild Animal Initiative), they're more likely to cite practical concerns about interventions (e.g., "reducing predator populations will cause harmful trophic cascades") than they are to cite purely ethical disagreements (e.g., "we should never violate autonomy, even to improve welfare... (read more)

Against deference in EA, and problems with interpreting consensus in fields where deference is common

I'm thinking of ~finishing a draft for draft amnesty that I was writing a while ago about the future of nature. From speaking with conservationists, I got the impression that many were focused on the past (restoring the past, obligations to the long history of evolution, etc.). Perhaps because of the urgency of their task, I didn't see as much focus on the future. 

The post I am writing goes through a few vignettes of possible futures for the wild, including:

  • Suffering abolition. Gene drives to remove sources of suffering, ecosystem design etc...
  • Biodiversity maximalism: there are many ways we could increase biodiversity if we wanted to, beyond levels present today or in the past. When people say they care about biodiversity, they don't seem to care about this. This section might be a reductio of those arguments. 
  • Extending political categories (self determination, citizenship) to wild animals (as in Zoopolis)
  • Covering our tracks: this would be a description of my understanding of the conservationist's focus. 

Why evidence-based giving should be a mass movement (and decouple from EA somewhat)

I've been thinking very similarly for a while. Would love to read it.

Tips for effective policy work

(Sorry I don't know how to do formatting very well, so I can't make one of those great big titles others are using here):

-Appendices to: Some Observations on Alcoholism:

Appendix posts are posts I write on my blog sometimes, like these:

https://www.thinkingmuchbetter.com/tags/appendices/

which essentially respond to things I now disagree with in the original post, or expand on ideas I didn't get to cover very thoroughly, or just add on relevant ideas that I feel don't deserve their own separate article. This one would be to my recentish article on my struggles with alcoholism:

https://www.thinkingmuchbetter.com/main/alcoholism/

It hasn't been all that much time, but lots has happened since then, and since it is one of the most salient things in my life right now (something I think and talk about a huge amount due to my treatment), it is hard not to have takes I want to write out about it. I don't have much yet, but it is most likely to be the piece I post, if anything, for draft amnesty week. I would especially appreciate commentary from people with experience in recovery, and especially with better knowledge than me about addiction to: marijuana, kratom, ketamine, and stimulants. I have drafts for the first two already, but they are pretty spare, and I don't have anything for the other two yet.

-Response to "Welfare and Felt-Duration"

I seriously doubt I'll have anything ready for this by draft amnesty week (maaaaybe a rough outline if I can post that), but it could be one of the most useful things for me to get feedback on, as it is what I'm planning to write for my thesis (not with that title, though if I adapt and shorten it into a blog post after writing it, it might have a title like that in the way this earlier post does):

https://www.thinkingmuchbetter.com/main/meat-veggies-response/

Essentially, it's on the topic of the issues subjective exp... (read more)

-The Case for Pluralist Evaluation

This is another one I started and never finished. I actually specifically started it as an intended draft amnesty entrant last year, but I think it is in even rougher shape, and I also haven't looked at it in a long time. Basically this was inspired by the controversy a little while ago over ACE evaluating their movement grants on criteria other than impact on animal welfare. I don't defend this specific case but rather make a general argument against this type of argument. Basically the idea is that most EA donors (especi... (read more)

-Existentialist Currents in Pawn Hearts

Unlike the others here, I probably won't post this one, either for draft amnesty, or on the forum, at all, as it isn't sufficiently relevant (though I did make a related post on the forum which uh, remains my lowest karma post):

https://forum.effectivealtruism.org/posts/fvqRCuLm4GkdwDgFd/art-recommendation-childlike-faith-in-childhood-s-end

But it's a post I am strongly thinking of putting on my own blog. Like my most recent blog post:

https://www.thinkingmuchbetter.com/main/fun-home/

This is one that I would be adapting ... (read more)

-Against National Special Obligation

I started a draft on this one a while ago, but haven't looked at it again for a while, and probably won't post it. The idea is pretty simple and I think relatively uncontroversial amongst EAs: we do not have special obligations to help people in the same country as us. This is true not just in general, but especially in political contexts. I see the contrary opinion voiced by even quite decent people, but I think it is an extremely awful position when you investigate it in a more thorough and on-the-ground way rather than noticing where it matches common sense.

About how to run an EA organization with the intent of it being a part-time investment of time: 1-2 days per month, with a potential 4-12 day stretch once a year. I had this idea when I talked to Ulrik Horne about how he could slowly start up a biosecurity/bio shelter project.

I think it's a good idea for an organization that can pay for 1-4 full-timers and then flex once a month to work on projects together. This is what organizations like the Army Reserve do to build a capability/capacity that is needed, but not imminently.

Very personal and unconventional research advice that no one told me, and that I would have found helpful, in my first 2 years of academic research; and what I would change about this advice after taking a break and then starting a PhD.

This seems interesting and helpful!

Why in the policy world, given the current size of the movement, EAs should narrowly focus on foreign policy and science policy 

Urgency in global health - In defence of short-term, band-aid fixes

Why facial recognition tech is an underrated problem 

My PhD was in this area so I'd be super interested in hearing more about your thoughts on this. Looking forward to seeing this post if you decide on it :)

How to best engage with criticisms of EA, from the perspective of helping the community achieve the goal of doing the most good

A post calling for more exploratory altruism that focuses on discovery costs associated with different potential interventions and the plausible ranges of impact of the associated intervention.

A public list that identified different unexplored, or underexplored, interventions could be really helpful.

I actually thought about this after listening to Spencer Greenberg's podcast - his observation that we shouldn't think about personal interventions, like whether to try a new drug or adopt a habit, in terms of naive expected value, but rather in terms of variance in effect. Even if a drug's average effect on someone is negative, if some people get a large benefit from it, it is worth testing to see if you are someone who benefits from it. If it really helps you, you can exploit it indefinitely, and if it hurts you, you can just stop and limit the bad effect.

Likewise, a lot of really good interventions may have low "naive EV"; that is to say, if you were to irrevocably commit to funding them, they would be a poor choice. But the better question is: is this an intervention that could plausibly be high-EV and have high room for exploitation? What are the costs associated with such discovery? With such an intervention, you could pay the discovery costs and exploit it if it turns out to be high-EV, and cut losses if it does not. It is worth considering that many promising interventions might look like bad bets at the outset, but still be worth their discovery costs given the ability to capitalize on lottery winners.
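To make that "pay discovery costs, then exploit or cut losses" logic concrete, here is a minimal sketch in Python. Every number and name in it is a hypothetical illustration, not a real estimate of any intervention:

    # Hypothetical comparison of committing blindly vs. paying to learn first.

    def naive_ev(p_high: float, ev_high: float, ev_low: float) -> float:
        """Expected value if we irrevocably commit to funding the intervention."""
        return p_high * ev_high + (1 - p_high) * ev_low

    def ev_with_discovery(p_high: float, ev_high: float, discovery_cost: float) -> float:
        """Expected value if we first pay to learn which case we're in, then
        exploit only if the intervention turns out high-EV (else cut losses at 0)."""
        return p_high * ev_high - discovery_cost

    # Hypothetical intervention: 10% chance it's worth 100 units, 90% chance
    # it's worth -20 units, and testing it costs 3 units.
    print(naive_ev(0.10, 100, -20))         # -8.0: a bad bet if funded blindly
    print(ev_with_discovery(0.10, 100, 3))  #  7.0: worth paying the discovery cost

Under these made-up inputs the intervention is a "lottery winner" worth scouting even though its naive EV is negative, which is exactly the asymmetry described above.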

I understand that there are many organizations like CEARCH and Rethink Priorities (and many others) that are involved in cause prioritization research. But I think if one of them, or another org, were to compile a list that focused on search costs and plausible ranges of impact, making it publicly available and easy for the public to contribute thoughts and information, this could be a very useful tool for spotting promising funding/research/experimentation opportunities.

Summaries of papers on the nature of consciousness (focusing on artificial consciousness in particular).

Comparing agnostic metagenomic sequencing and massively multiplexed bait capture sequencing for clinical disease surveillance and early-warning systems

This will likely be part of my lit review of my master's thesis and should also make an interesting blog post.

I am unsure whether to call it massively parallelised or massively multiplexed bait capture sequencing when you use on the order of 10^5-10^6 probes at the same time.

What is agnostic metagenomic sequencing? https://doi.org/10.1128/cmr.00119-22

What is massively multiplexed bait capture sequencing? https://doi.org/10.1128/jcm.00612-23 

What are the pros and cons of the different approaches? 

  • cost?
  • amount of labor needed?
  • how quickly do you get results?
  • ...

Examples of pros of bait capture sequencing

  • “results in two to three orders of magnitude enhancement in sensitivity at a reduced cost” (Kapoor et al. 2023)
  • “mitigates Health Insurance Portability and Accountability Act (HIPAA) concerns because host sequence data are not collected” (Kapoor et al. 2023)

How are these methods different from similar, commercially available products?

(e.g., https://www.twistbioscience.com/products/ngs/fixed-panels/comprehensive-viral-research-panel)

I have been considering writing a somewhat technical post arguing that “large Transformer models are shortcut finders” is a more clarifying abstraction for these sorts of artifacts than considering them to be simulators, shoggoths, utility maximizers, etc. Empirical research on the challenges of out-of-distribution generalization, path dependency in training, lottery tickets/winning subnetworks, training set memorization, and other areas appear to lend credence to this as a more reasonable abstraction.
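As a toy illustration of what "shortcut finder" means here (entirely my own sketch with hypothetical synthetic data, not anything from the planned post): train a model on data where a spurious feature happens to predict labels perfectly, and it will lean on that shortcut and fail once the correlation breaks out of distribution.

    # Toy shortcut-learning demo; all data is synthetic and hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    y = rng.integers(0, 2, n)

    # "Real" feature: agrees with the label only 90% of the time (noisy signal).
    real = np.where(rng.random(n) < 0.9, y, 1 - y)
    # "Shortcut" feature: agrees with the label 100% of the time in training.
    X_train = np.column_stack([real, y]).astype(float)
    model = LogisticRegression().fit(X_train, y)

    # At test time the shortcut is decorrelated from the label (distribution shift).
    y_test = rng.integers(0, 2, n)
    real_test = np.where(rng.random(n) < 0.9, y_test, 1 - y_test)
    X_test = np.column_stack([real_test, rng.integers(0, 2, n)]).astype(float)

    print("train accuracy:", model.score(X_train, y))     # ~1.0: the shortcut works
    print("test accuracy:", model.score(X_test, y_test))  # near chance: shortcut broke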

Beyond allowing for a more accurate conception of their function, I believe seeing Transformer models through this lens naturally leads to another conclusion: that the existential risk posed by AI in the near-term, at least as presented within EA and adjacent communities, is likely overblown.

Reconciling evidence-based development, foreign aid and international health with decolonisation / local knowledge

The most good you can do is a Schwartz Set.

Basically I think the idea is that, because of inevitable uncertainty, there will be multiple activities/options/donations that may all be considered “the best”, or at least among which it is not possible to draw a comparison.

I think this is true even if all moral outcomes are comparable; and of course, if they are not, then it follows that all activities are probably not comparable either.
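To illustrate (with entirely hypothetical options and a hypothetical "strictly better than" relation), here is a minimal Python sketch of a Schwartz set containing several mutually incomparable "bests":

    from itertools import product

    options = ["A", "B", "C", "D"]
    # beats[x] = the options x is strictly better than (possibly incomplete).
    beats = {"A": {"C"}, "B": {"C"}, "C": set(), "D": set()}

    # Transitive closure: reach[x] = everything x (indirectly) dominates.
    reach = {x: set(beats[x]) for x in options}
    changed = True
    while changed:
        changed = False
        for x, y in product(options, repeat=2):
            if y in reach[x] and not reach[y] <= reach[x]:
                reach[x] |= reach[y]
                changed = True

    # x is in the Schwartz set iff no y dominates x without x dominating y back.
    schwartz = [x for x in options
                if not any(x in reach[y] and y not in reach[x] for y in options)]
    print(schwartz)  # ['A', 'B', 'D'] - multiple incomparable 'bests'

Here only C is dominated, so three incomparable options remain "the best" at once, which is the uncertainty point above.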

AI already kills thousands of children and we are unable to stop it

Talking about lead poisoning etc. from e-waste, and how that already kills people and leads to poor health outcomes. Increasingly, e-waste will come from datacentres for AI. Who today can stop these deaths? I think one can argue that nobody can. 

I'd be interested in reading a post on the scale of lead poisoning coming from e-waste. Why do you think it is that we can do nothing about it? 
I'm more sceptical that this is particularly relevant to AI, rather than electronics in general, but perhaps you could make that case too. Otherwise I'd be concerned that the argument might be beside the point, a less extreme case of something like "AI is already harming our future, just look at how many flights people take to AI conventions". Which, to clarify, wouldn't necessarily be wrong, but likely puts the focus in the wrong place / obscures the real trade-offs involved in any human activity.

Ulrik Horn
Hi Toby! I have a super lay perspective on this, so if anyone would like to collaborate on a post I would love that. Or for someone to just take the idea and run with it.

On not being able to do anything: I am imagining myself in various super powerful positions and thinking about whether I then see e-waste stopping being an issue. The main reasons I think they won't do anything:

  • Sam Altman - Can't do it because "AI has promised him glory and riches" - he basically seems interested in power/impact/money/fame
  • CEO of Microsoft - Like Sam Altman, but with extra pressure from shareholders to create returns. Also, handling e-waste from Microsoft is not being demanded by the public
  • Board of OpenAI - Seems like they do not have much control, and if they do, they probably worry more about larger numbers of deaths from other causes
  • Governments that ship e-waste to poor countries - Not a top-5 issue for voters, and it would cost money to properly handle e-waste
  • President of a poor country receiving e-waste - Would miss out on revenue + probably some degree of connection between governments and local businesses profiting from importing e-waste
  • CEO of the most powerful NGO leading grassroots activism against e-waste - Not sure they can build enough momentum, there are so many other issues
  • The above are only lay guesses, maybe I am missing something

Basically, with current deaths due to e-waste, I think we are seeing the beginning of how AI will simply push out humans by taking their resources. In this case, the resource they are grabbing from humans is a clean environment. I am imagining the scale of e-waste in a world where even 40% of current labor is replaced by AI + robots. I think this would be possible to estimate initially in terms of tons, and we have some idea of the number of deaths due to current volumes of e-waste, so we could scale those deaths up linearly for a first approximation. AI does not need to be very agentic to cause issues. It only needs to be something the economic s
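For what it's worth, the linear first approximation mentioned above could be sketched like this; every input below is a hypothetical placeholder, not a real figure:

    # Linear first-approximation sketch; all numbers are made-up placeholders.
    current_ewaste_tons = 60e6      # hypothetical: annual e-waste today
    current_deaths = 10_000         # hypothetical: deaths attributed to it
    deaths_per_ton = current_deaths / current_ewaste_tons

    # Hypothetical scenario: AI datacentre buildout multiplies e-waste volumes.
    projected_tons = current_ewaste_tons * 2.5
    projected_deaths = deaths_per_ton * projected_tons
    print(round(projected_deaths))  # 25000 under these assumed inputs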

I think this would be an interesting post to read. I'm often surprised that existing AI disasters with considerable death/harm counts aren't covered in more detail in people's concerns. There have been a few times where AI systems have, by acting in a manner that was not anticipated by their creators, killed or otherwise injured very large numbers of people, exceeding any air disaster or similar we've ever seen. So the pollution aspect is quite interesting to me.

Posts about any of the knock-on or tertiary effects of AI would be interesting.

Just fund community health workers - the case for why EA underestimates the cost-effectiveness

The case for supporting Gulf-style immigration policy in the West

Why impartiality in consequentialist moral systems does not contradict virtue ethics or deontology, and in fact logically follows from both the virtue of selflessness and the principle of treating people equally

A post explaining what I take to be the best reply to Thorstad's skeptical paper On the Singularity Hypothesis.

Comments

I study US private foundations, and I've observed two things: EA has made virtually no progress in influencing grantmaking, while trust-based philanthropy (TBP) has had massive adoption. I think many believe that EA and TBP are in conflict with one another, but I don't think that is necessarily the case. I am thinking about writing a post/research paper that makes the case that these two movements are compatible with one another.

I work in fundraising but don't have any experience with it outside EA; I'd be really interested in reading this piece. 

Your thesis also happens to parallel one of the few conversations I've had about TBP: a non-EA friend was talking about what she didn't like about EA; she espoused TBP instead; I asked her a bunch of questions and was generally confused because what she described sounded very similar to how lots of EA funding works.

This is a cool idea!

FYI, if you're excited about one of these ideas but struggling to actually get it drafted and posted, I can help with that. I wrote more here:

https://forum.effectivealtruism.org/posts/4towuFeBfbGn8hJGs/amber-dawn-s-shortform?commentId=C6Z7u57FHh6nYqo4N
