Hello, I'm Devin; I blog here along with Nicholas Kross. I recently got my bioethics MA and am now looking into getting a philosophy PhD.
Topic from last round:
Okay, so, this is kind of a catch-all. Of the possible post ideas I commented with last year, I never posted or wrote “Against National Special Obligation”, “The Case for Pluralist Evaluation”, or “Existentialist Currents in Pawn Hearts”. So, this is just the comment for “one of those”.
Mid-Realist Ethics:
I occasionally bring up my meta-ethical views in blog posts, but I keep saying I’ll write a more dedicated post on the topic and never really do. A high-level summary goes something like this: “ethics” as I mean it has a ton of features that “real” stuff has, but it lacks the crucial bit, which is actually being a real thing. The ways around this tend to fall into one of two major traps: either they commit the view to a specific, unlikely empirical prediction, or they label a specific procedure “ethics” in a way that has no satisfying difference from just stating your normative ethics view. On top of that, a couple of thought experiments leave me unpersuaded that I’m really interested in a realist view anyway. I discuss these things a bit in this comment thread.
I would also plan to talk about the role different kinds of intuitions play in both my ethical reasoning and my “unethical” reasoning, something I keep mentioning but not developing in blog posts, especially these two. I don’t really have anything written for this, so I might just collect snippets from these sources and supplement them with bullet-point-style additions if I go with this idea.
Observations on Alcoholism Appendix G:
This would be another addition to my Sequence on Alcoholism – I’ve been thinking in particular of writing a post listing coping strategies and things to visualize to help with sobriety. I mention several in earlier appendices in the sequence – things like leaning into your laziness or naming and yelling at your addiction – but I don’t have a neat collection of advice like this, and that seems like one of the more useful things I could put together on this subject.
Cosmological Fine-Tuning Considered:
The title’s kind of self-explanatory – over time I’ve noticed the cosmological fine-tuning argument become something like the most favored argument for the existence of God, and learning more about it has made me consider it more formidable than I used to think.
I’m ultimately not convinced, but I do consider it an update, and it makes for a good excuse to talk more about my views on things like anthropic arguments, outcome pumps, the metaphysics of multiverses, and interesting philosophical considerations more specific to this debate – I might particularly interact with statements by Philip Goff on this subject.
Unfortunately, if this sounds like a handful, it is, and I got bogged down early in writing it, during the anthropics section. This might be a good time to get more feedback from people with more metaphysics/epistemology under their belts than I have, and maybe to finally get a solid idea of the difference between the self-indication and self-sampling anthropic assumptions (SIA and SSA) and which anthropic arguments rely on each. I don’t have much of this to post, so I might do this as an outline, as the small portion I do have, or as some combination.
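As a placeholder for my own reference, here is the standard toy case as I currently understand it, sketched with a made-up setup rather than anything from the actual fine-tuning debate: a fair coin creates one observer on heads and two on tails, and the two assumptions disagree about what your own existence tells you about the coin.

```python
# Minimal sketch (my own toy setup): a fair coin creates one observer if heads,
# two observers if tails.
prior = {"heads": 0.5, "tails": 0.5}
observers = {"heads": 1, "tails": 2}

# Self-Sampling Assumption (SSA): reason as a random observer *within* whichever
# world is actual; merely existing doesn't favor either world.
ssa = dict(prior)  # P(heads | I exist) stays 1/2

# Self-Indication Assumption (SIA): weight each world by how many observers it
# contains, then renormalize; existence favors observer-rich worlds.
weights = {w: prior[w] * observers[w] for w in prior}
total = sum(weights.values())
sia = {w: weights[w] / total for w in weights}  # P(heads | I exist) drops to 1/3

print("SSA:", ssa)  # {'heads': 0.5, 'tails': 0.5}
print("SIA:", sia)  # {'heads': 0.333..., 'tails': 0.666...}
```

If that is roughly the distinction, the part I would still want help with is figuring out which of these moves the fine-tuning and multiverse arguments are actually making.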
Moral Problems for Environmental Restoration:
A post idea I’ve been playing with recently is converting part of my practicum write-up into a blog post about the ethics of environmental restoration projects. My practicum was with the “Billion Oyster Project”, which seeks to use oyster repopulation for geoengineering/ecosystem restoration. I spent a big chunk of my write-up worrying about the environmental ethics of this, and I’ve been thinking that worrying could be turned into a decent blog post.
I’ll discuss welfare biology briefly, but much of the post will survey non-consequentialist possibilities, like “does non-aggression animal ethics bar us from restoration?”, “if we care about the ecosystem as a moral patient, what does it take for restoration to be creating a new patient versus aiding an existing one?”, or “does creating a new ecosystem burden us with new special obligations, and are they obligations we can actually fulfill?”
I already have a substantial amount written for this, just because I have that section of my practicum write-up already, but it’s currently a bit rough, and I might modify it to be more general than just the Billion Oyster case, or even expand it toward discussing the ethics of terraforming other planets. This is the one I am most leaning towards posting, just because it’s the one I’m most likely to have a substantial amount of writing done for on time.
I have finally gotten around to reading the paper, and it looks like I was wrong about almost every cited example of public opinion. On euthanasia and non-human/human tradeoffs, bioethicists seem to have views similar to the public’s, and on organ donor compensation the general public seems to be considerably more aligned with the EA consensus than bioethicists are. The public view on IVF wasn't discussed, and I would guess I am right about this one (though, considering the other results, not confidently). The only example I gave that seems more or less right is treatment of minors without parental approval. This paper updates me away from my previous views and towards "the general public is closer to EAs than bioethicists are on most of these issues," with the caveat that bioethicists mostly seem either similar to the general public or to the left of them. I still agree with aspects of my broad points here, but my update is substantial enough, and my examples egregious enough, that I am unendorsing this comment.
Thank you for this point. I tend to agree that, at the very least, people should be more surprised if they think a position is obviously correct but also think a sizable portion of the people studying it for a living disagree. I haven't gotten around to reading the paper doing concrete comparisons with the general public, but I also stand by my older claim that how different these views are from those of the general public is exaggerated. I see no one in the comments, for instance, pointing out areas where they think bioethicists differ from the general public in a direction EAs tend to agree with more; I would guess from these results that bioethicists are unusually in favor of trading off human against non-human welfare, treating children without parental approval, and assisted euthanasia. Some of the cited areas where people dislike where bioethicists lean also seem like areas where they are just closer to the general public than we are: I think if you ask an average person on the street about the permissibility of paying people for their organs, or about IVF embryo selection, they will also lean substantially more bioconservative than EAs.
Pertinent to this post idea, and something I'm stuck on:
What follows from conditionalizing the various big anthropic arguments on one another? Like, assuming you think the basic logic behind the simulation hypothesis, grabby aliens, Boltzmann brains, and many worlds all works, how do these interact with one another? Does one of them “win”? Do some of them hold conditional on one another but fail conditional on others? Do ones more compatible with one another have some probabilistic dominance (like, this is true if we start by assuming it, but it also might be true if these others are true)? Essentially, I think this confusion is pertinent enough to my opinions on these styles of arguments in general that I’m satisfied just writing about the confusion itself for my post idea, but I feel unprepared to actually do the difficult, dirty work of pulling expected conclusions about the world from this consideration, and I would love it if someone much cleverer than me tried to actually take the challenge on.
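One minimal way I could at least frame the bookkeeping – with entirely invented numbers, just to illustrate what “holds conditional on one but fails conditional on another” would even mean – is to put a joint prior over two of these hypotheses, say simulation (S) and Boltzmann-brain dominance (B), and check whether conditioning on one raises or lowers the other:

```python
# Toy framing only: a hypothetical joint prior over two hypotheses,
# "we're simulated" (S) and "Boltzmann brains dominate observers" (B).
# The numbers are invented for illustration, not estimates of anything.
joint = {
    (True, True): 0.02,
    (True, False): 0.28,
    (False, True): 0.08,
    (False, False): 0.62,
}

def marginal(index, value):
    """P(hypothesis at `index` == value), summing out the other hypothesis."""
    return sum(p for outcome, p in joint.items() if outcome[index] == value)

def conditional(target_index, target_value, given_index, given_value):
    """P(target | given), read off the joint table."""
    num = sum(p for o, p in joint.items()
              if o[target_index] == target_value and o[given_index] == given_value)
    return num / marginal(given_index, given_value)

print("P(S)     =", round(marginal(0, True), 3))              # 0.3
print("P(S | B) =", round(conditional(0, True, 1, True), 3))  # 0.2 -- S does worse given B here
print("P(B)     =", round(marginal(1, True), 3))              # 0.1
print("P(B | S) =", round(conditional(1, True, 0, True), 3))  # 0.067
```

Of course, the hard part is exactly what this toy table assumes away: actually arguing for the joint probabilities, especially when each hypothesis changes who “observers like us” even are. That is the part I would love for someone else to take on.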