
FlorianH

138 karma

Bio

PhD in Economics (focus on applied economics and climate & resource economics in particular) & MSc in Environmental Engineering & Science. Key interests: Interface of Economics, Moral Philosophy, Policy. Public finance, incl. optimal redistribution & tax competition. Evolution. Consciousness. AI/ML/Optimization. Debunking bad Statistics & Theories. Earn my living in energy economics & finance and by writing simulation models.

Posts
2


Comments
40

You were of course right; I've now fixed the A, B & C rounds to make them consistent. Thanks!

This post calls out a lack of diversity in EA. Rather than being attributable to EA doing something wrong, I find these patterns mainly underline a basic fact about what type of people EA tends to attract. So I don't find the post's very general criticism of EA and its structure fair.

I detect in the article an implicit, underlying view of the EA story, something like:

                'Person becoming EA -> World giving that person EA privileges'

But imho, this turns the real story completely upside down; I mostly see it as:

                'Privileged person ->becoming EA -> trying to put their resources/privileges to good use, e.g. to help the most underprivileged in the world',

where 'privileged' refers to the often somewhat geeky, intellectual-ish, well-off person we find particularly attracted to EA.

In light of this story, the fact that white dudes are over-represented in EA organizations relative to the global population would be difficult to avoid in today's world, a bit like it would be difficult to avoid a concentration of high-testosterone males in a soccer league.

Of course, this does not deny that many biases exist throughout the selection process for higher ranks within EA, and these may be a real problem. Call them out specifically, and we have a starting point to work from. In EA, too, people tend to abuse power, and this is not easy to prevent. Again, any insight into how, specifically, to improve on this is welcome. Finally, that skin color is associated with privilege worldwide may be a huge issue in itself, but I would not hold it against 'EA' specifically. Certainly, EAs should also be interested in this topic if they find cost-effective measures to address it (although, to some degree, these potential measures face tough competition, simply because there is so much poverty and inequality in the world, which absorbs a good part of EA's focus for not only bad reasons).

Examples of what I mean (I add the emphasis):

        However, the more I learn about the people of EA, the more I worry EA is another exclusive, powerful, elite community, which has somehow neglected diversity. The face of EA appears from the outside to be a collection of privileged, highly educated, primarily young, white men.

Let's talk once you have useful info on whether they focus on the wrong things, rather than on whether they have the wrong skin colors. In my model, and in my observations, there is simply a bias in who feels attracted to EA, and as much as anyone here would love the average human to care about EA, that is sadly not the case (although in my experience, it is more generally the slightly geeky, young, logical, possibly well-off persons who like and join EA, and who can and want to devote resources to it, rather than simply the "white men" you mention).

        The EA organizations now manage billions of dollars, but the decisions, as far as I can tell, are made by only a handful of people. Money is power, and although the decisions might be carefully considered to doing the most good, it is acutely unfair this kind of power is held by an elite few. How can it be better distributed? What if every person in low-income countries were cash-transferred one years’ wage?

The link between the last bold part and the preceding bold parts surprises me. I see two possible readings:

a. 'The rich few elite EAs get the money, but instead we should take that money to support the poorest?' That would have to be answered by: this handful works with many, many EAs and other careful employees to try to figure out which causes to prioritize, based on decent cost-benefit analysis, and they don't use this money for themselves (and indeed, at times, cash transfers to the poorest do show up among promising candidates for funding, but these still compete with other ways to try to help the poorest beings, or those most at risk in the future).

b. 'Give all of the poorest some money, so that some of them could become part of the "handful of people" with the power (to decide on the EA budget allocation)'. I don't know. That seems a somewhat distorted view of the most pressing reason for alleviating the most severe poverty in the world.

While it might be easy to envy some famous persons in our domain, no one has decided 'oh, to whom could we give the big privilege of running the EA show'; instead there is a process, however imperfect, that tries to select some of the people who seem most effective, also for the higher-ranking EA positions. And as many of the skills useful for these positions correlate with privileged education, I would not necessarily want to force more randomization or anything of the sort - other than through compelling, specific ways to avoid biases.

I have experience with that: eating meat at home but, rather strictly, not at restaurants, for exactly the reasons you mention: it tends to be almost impossible to find a restaurant that seems to serve animals that were not crazily mistreated.

Doing that as vegan-in-restaurants (instead of vegetarian-in-restaurants) is significantly more difficult, but from my experience, one can totally get used to trying to remain veg* outside but non-veg* at home, where one can choose food with some expectation of net-positive animal lives.

A few particular related experiences:

  1. Even people who knew me rather well would intuitively not understand the principle at all. At times I kind of felt bad buying meat when they were around, as I knew they thought I was vegan and would be confused, even though I had told them time and again that I simply avoid conventional meat, in restaurants and/or at their place, etc.
  2. I'm always astonished at how many people who supposedly care about animals do it the other way round: in restaurants they eat meat, but not at home. Weird, given it's so obvious that restaurants serve the worst stuff (and they're not the kind of perfect EA for whom a dollar saved would go towards the most effective causes, which could naturally complicate the choice).
  3. Restaurants indeed, behaviorally, do not care about animal welfare at all. For a food animal welfare compensation project, we tried to get a bunch of restaurants to accept that we source higher-welfare meat for them, without them having to pay anything for it. In almost all places it was not possible at all: (i) even just the slightest potential extra logistical step and/or (ii) a potential reputational fear of anything about their usual sourcing being leaked to the unaware public seemed to make them reluctant to participate.

(That said, I don't want to praise my habits; I hope I find the courage to become more vegan again sometime, as everything else feels like inflicting unacceptable suffering and/or wasting a lot of money on expensive food, and I'm not sure my 'maybe it helps my health' justifies it; there must be better ways. All my sympathy if someone calls health a bad excuse for non-veganism, but I definitely maintain that, health questions aside, once one gets used to avoiding meat and/or animal products, it only becomes easier over time, in terms of logistics and of getting to know tasty alternatives, whether only outside the home or also at home.)

Surprised. Maybe it's worth giving it another try and looking longer for good imitations, given today's wealth of really good ones (besides, admittedly, a ton of bad ones, assuming you really need them to imitate the original that closely): I've had friends taste veg* burgers and chicken nuggets, and they were rather surprised when I told them afterwards that these had not been meat. I once had to double-check with the counter at the restaurant, as I could not believe that what I had on my plate was really not chicken. Maybe that speaks against my fine taste and that of some others, but I really find it rather easy to find truly great textures too, if one really cares.

Then again, I personally don't experience any "uncanny valley" in that domain; make it feel a bit more or less fake and it doesn't really matter much to me, so maybe you really do experience that very differently.

*I don't know/remember whether vegan or vegetarian.

Interesting. Curious: if such hair is a serious bottleneck/costly, do some hairdressers collect cut hair by default and sell/donate it for such use?

I tried to account for the difficulty of pinning down all relevant effects in our CBA by adding the somewhat intangible feeling that the gun might backfire (standing in for your point that there may be more general/typical but harder-to-quantify benefits of not censoring, etc.). Sorry if that was not clear.

More importantly:

I think your last paragraph gets to the essence: you're afraid the cost-benefit analysis is done naively, potentially ignoring the good reasons why we most often may not want to try to prevent the advancement of science/tech.

This does not, however, imply that for pausing we'd require Pause Benefit >> Pause Cost. Instead, it simply means you're wary that certain estimates of E[Pause Benefit] (or of E[Pause Cost]) may be biased in a particular direction, so that you don't trust conclusions based on them. Of course, if we expect a particular bias in our benefit or cost estimate, we cannot just use the wrong estimates.

When I advocate being even-handed, I refer to a cost-benefit comparison that is non-naive. That is, if we have priors that there may exist positive effects we've just not yet managed to pin down well or to quantify, we have (i) used reasonable placeholders for these, avoiding bias as well as we can, and (ii) duly widened our uncertainty intervals. That is why, in the end, we can remain even-handed, i.e. pause roughly iff E[Pause Benefit] > E[Pause Cost]. Or, if you like, iff E[Pause Benefit*] > E[Pause Cost*], with * = accounting, with all due care, for the fact that you'd usually not want to stop your professor, or usually not want to stop tech advancements, because of yadayada..
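
To illustrate with purely made-up numbers (just a sketch of the even-handed rule, not anyone's actual estimates): suppose the quantified pause benefits come to 10 units and the quantified pause costs to 8, and suppose our priors about hard-to-pin-down effects lead us to add a placeholder of 2 units on the benefit side and 3 units on the cost side (the 'don't casually stop tech' worry), with correspondingly widened uncertainty intervals. Then:

                E[Pause Benefit*] = 10 + 2 = 12
                E[Pause Cost*]    =  8 + 3 = 11

and one would pause, even though 12 is nowhere near '>>' 11; the non-naiveté lives in the starred estimates, not in an extra threshold.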

I have some sympathy with 'a simple utilitarian CBA doesn't suffice' in general, but I do not end at your conclusion; your intuition pump also doesn't lead me there.

It doesn't seem to require any staunch utilitarianism to decide, if a quick look at the gun design suggests it has a 51% chance to shoot you in your own face and only a 49% chance to shoot the tiger you want to hunt as you otherwise starve to death*, to drop the project of its development. Or to halt until a more detailed examination might allow you to update to a more precise understanding.

You mention that with AI we have 'abstract arguments', to which my gun's simple failure probability may not do full justice. But I think not much changes even if your skepticism about the gun were as abstract or intangible as 'err, somehow it just doesn't seem quite right, I cannot even quite pin down why, but overall the design doesn't earn my trust; maybe it explodes in my hand, it burns me, its smoke might make me fall ill, whatever, I just don't trust it; I really don't know, but HAVING TAKEN INTO ACCOUNT ALL EVIDENCE AND LIFE EXPERIENCE, incl. the smartest EA and LW posts and all, I guess 51% I get the harm, and only 49% the equivalent benefit, one way or another' - as long as that is still truly the best estimate you can make at the moment.

The (potential) fact that we have more typically found new technologies to advance us does very little to change that conclusion, though of course, in a complicated case such as AI, this observation itself may have informed some of our cost-benefit reflections.

 

*Yes, you guessed correctly: I'd better implicitly assume something like a 50% chance of survival without catching the tiger and 100% with it (and that you only care about your own survival) to really arrive at the intended 'slightly negative in the cost-benefit comparison'; so take the thought experiment as an unnecessarily complicated, quick-and-dirty one, but I think it still makes the simple point.
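
Spelled out under those assumptions (plus the further assumption, left implicit above, that the 51% face-shot is fatal), the arithmetic is simply:

                P(survive | use the gun)  = 0.49 × 1.0 + 0.51 × 0 = 0.49
                P(survive | drop the gun) = 0.50

so developing the gun comes out slightly negative (0.49 < 0.50), which is all the thought experiment needs.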

There are two factors mixed up here: @kyle_fish writes about an (objective) amount of animal welfare. The concept @Jeff Kaufman refers to instead includes the weight we humans put on those animals' welfare. For a meaningful conversation about the topic, we should not mix these two up.*

Let's briefly assume a parallel world with humans2: just like us, but they simply never cared about animals at all (weight = 0). Concluding "we thus have no welfare problem" is indeed the logical conclusion for humans2, but it would not suffice to inform a genetically mutated human2x who happened to have developed care about animal welfare - or who simply happened to be curious about the absolute amount of welfare in his universe.

In the same vein: there is no strict need to account for typical humans' care when analyzing whether "Net global welfare may be negative" (the title!). On the contrary, doing so would introduce an unnecessary bias that just comes on top of the analysis's necessarily huge uncertainty (which the author does not fail to emphasize, although, as others comment, it could deserve even stronger emphasis).
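
To make the distinction explicit (my own notation, not either author's): with u_i the welfare of being i and w_i the weight we humans put on that being's welfare,

                Objective welfare:  W   = Σ_i u_i
                Weighted welfare:   W_w = Σ_i w_i · u_i

For humans2, w_i = 0 for all animals, so W_w ignores animals entirely while W is unchanged; the post's title is a claim about W, not W_w.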

One of my favorite passages is your remark on AI in some ways being rather white-boxy, while humans are rather black-boxy and difficult to align. There is some often-ignored truth in that (even if, in the end, what really matters, arguably, is that we're so familiar with human behavior that, overall, the black-boxiness of our inner workings may matter less).

Enjoyed the post, thanks! But it starts with an invalid deduction:

        Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.

(I added the emphasis)

Instead, it seems more reasonable to advocate for such action exactly if, in expectation, the benefits seem to [even just barely] outweigh the costs. Of course, we have to take into account all types of costs, as you advocate in your post. Maybe that even includes some unknown unknowns in terms of risks from an imposed pause. Still, in the end, we should be even-handed. That we don't impose pauses on most technologies is surely not a strong reason to the contrary: we might (i) fail, for bad reasons, to impose pauses in other cases too, or, maybe more clearly, (ii) simply not see many other technologies with such a large potential downside that a pause becomes a major need - after all, that is why we have started this debate about this new technology, AI, in particular.

This is just a point about the rigor of the motivation you provide for the work; changing that beginning of your article would IMHO avoid an unnecessarily tendentious passage.
