Jason

16254 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts (2)


Comments (1884)

Topic contributions (2)

While 40 is the US norm for "standard" full-time, it can be 35, 37.5, or some other number elsewhere (and those elsewheres often have more paid time off than the US does). So I wouldn't give any moral or ethical significance to that number specifically.

You probably should add AMF as an option. It doesn't seem to be on the GWWC list, but IIRC it is tax deductible in significantly more places than any other common EA charity. That would allow people from countries with few tax-advantaged options to participate without giving up their tax benefits to do so.

I'll take an intermediate position: most readers will at least unconsciously infer an uncertainty range when presented with only a point estimate. If my mechanic tells me their best estimate for fixing my car is $1000 without saying more, I should understand that $1200 is a reasonable possibility, but I would legitimately be upset if presented with a $2000 bill even if $1000 were provably the mean, median, mode, and likely outcome.
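To make the mechanic example concrete, here is a toy illustration (my own, with hypothetical numbers): a repair-cost distribution in which $1000 is provably the mean, median, and mode, yet a $2000 bill still occurs one time in eight.

```python
# Hypothetical repair-cost outcomes (illustrative only, not real data):
# $1000 is the mean, median, and mode, but $2000 happens 1 time in 8.
from statistics import mean, median, mode

costs = [400, 700, 900, 1000, 1000, 1000, 1000, 2000]
print(mean(costs), median(costs), mode(costs))  # all three equal 1000
print(costs.count(2000) / len(costs))           # 0.125
```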

Here, I think the reader is on notice that estimating the cost to save a life is likely to involve some imprecision; moreover, the figure is presented as an estimate, linked to a more detailed explanation, and rounded off.

There would be cases in which more should be said about the uncertainty range, for instance if it ranged from $500 to $50K! In that kind of scenario, you would need to say more to clue the reader in to the degree of imprecision.

Makes sense -- one use case for me is that, in allocating my own (much more limited!) funds, I'd be more inclined to defer to community judgment resting on certain grounds than on others.

E.g., if perspective X already gets a lot of weight from major funders, or if I think I'm in a fairly good position to weigh X relative to others, then I'd probably defer less. On the other hand, there are some potential cruxes on which various factors point toward more deference.

The specific statement I was reacting to was that people might vote based on their views about what happens after a singularity. For various reasons, I would not be inclined to defer to GH/animal welfare funding splits that were premised on that kind of reasoning. (Not that the reasoning is somehow invalid; it's just not the kind of data that would materially update how I donate.)

I wonder if it would be worthwhile to include a yes/no/undefined set of buttons that people could use to share whether they are basing their decision primarily on second-order considerations. Conditional on a significant fraction of people doing so, we might learn something interesting from the vote split in each category. That wouldn't provide the richness of data that a custom narrative yields, but a fixed-response question is easier to analyze statistically, and more people may answer a three-second question than provide a narrative.
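If such buttons existed, here is a minimal sketch (my own illustration, with made-up numbers and a hypothetical two-option ballot) of the kind of analysis a fixed-response question enables: a contingency-table test of whether the vote split differs between the "yes" and "no" groups.

```python
# Hypothetical vote counts (illustrative only), cross-tabulated by answer
# to a "did second-order considerations drive your vote?" button.
from scipy.stats import chi2_contingency

observed = [[30, 70],   # "yes" respondents: votes for option A vs. option B
            [55, 45]]   # "no" respondents

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # a small p suggests the splits differ
```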

Some of the combinatoric effect here is picking up on the number of projects, I think? If you have three projects in three separate orgs, your vote to fund one conveys which you rank first but not the rank order between the other two. If you have ten orgs with one project each, there are 45 pairwise comparisons (10 choose 2), and a first-place donation vote addresses only nine of them.
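To make the counting concrete, here is a minimal sketch (my own illustration) of how many pairwise comparisons exist among n options and how many a single first-choice vote resolves:

```python
# For n options, count all unordered pairs and the pairs resolved by
# one first-choice vote (the winner vs. each of the others).
from math import comb

def comparisons(n_options: int) -> tuple[int, int]:
    total = comb(n_options, 2)    # n choose 2 pairwise comparisons
    resolved = n_options - 1      # first choice beats each other option
    return total, resolved

print(comparisons(10))  # (45, 9): ten orgs with one project each
print(comparisons(3))   # (3, 2): three projects in three orgs
```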

More generally: it may be worthwhile to distinguish more sharply between donation/vote as an information mechanism and as an influence mechanism. It's plausible to me that other features of the ecosystem could significantly impair both the potential informational and influential power of "votes" before we even get to the issue you describe here.

Donation/vote as a donor influence mechanism has some significant limitations in an ecosystem where the bulk of the funding comes from a few megadonors. To the extent that smaller donors think their donations funge with those of the megadonors, and that megadonors are more capable of adjusting to enact their preferred global funding allocations, the small donors may not believe that their votes have any meaningful influence on overall funding allocations. To the extent that smaller donors believe that, I expect the belief would have a significant effect on small-donor willingness to invest in casting informed "votes." So it may seriously affect the epistemic/informational value of the votes too.

You could get some of the informational effect by merely asking donors to identify the specific program they'd like to donate to, in a non-binding fashion. Of course, the advisory nature of the project-specific vote would likely make donors less willing to spend time on casting informed votes.

[Seeking clarification and offering feedback, rather than making assertions about a subject fairly far outside of my knowledge base]

Thanks for sharing this interesting piece; I can tell a lot of careful thought went into it!

A reader (like myself) who doesn't really follow this literature is likely to wonder why Vitamin D is an important focus, given that the body produces the vitamin through sunlight exposure. So, from a rhetorical standpoint, the piece might benefit from a brief discussion of why sunlight isn't as good a mitigating technique as the reader may have assumed.

Pham et al. (2022) state that "[t]he sunlight exposure method could be insufficient for an ASRS in which UV is reduced, and dangerous in an ASRS in which UV is increased due to ozone layer destruction." 

  • A reduction seems plausible, but given baseline levels, production would seemingly have to fall fairly sharply in most places to pose a serious risk to many people (who could compensate with longer exposure times and less clothing).
    • Of course, in colder climates further from the equator, this would be less viable! That hints at a possible way to stratify your conclusions: Vitamin D in food may deserve significant weight for those who live far from the equator, especially in colder months, but not for people near it.
  • As for increased-UV scenarios, it might not even be possible to reduce many people's UV exposure below the level that generates adequate Vitamin D; those people will get their Vitamin D needs met. Moreover, unless the dangerousness scales more quickly than the Vitamin D creation rate, limiting outdoor time roughly in inverse proportion to the increase in UV intensity would seem to mostly work (a toy sketch of the linear case follows this list). I have no idea how to model either of these, though.
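To pin down what "limiting outdoor time roughly in inverse proportion" means in the simplest case, here is a toy linear sketch (my own illustration, not a real model and not from Pham et al.): it assumes both Vitamin D synthesis and UV harm scale with dose = intensity × time, which is exactly the assumption that fails if dangerousness scales faster.

```python
# Toy linear model (hypothetical): both Vitamin D synthesis and UV harm
# are assumed proportional to dose = intensity * time.
def minutes_for_same_dose(baseline_minutes: float, uv_multiplier: float) -> float:
    """Time needed to match the baseline UV dose when intensity changes."""
    return baseline_minutes / uv_multiplier

# If UV intensity doubles, 30 baseline minutes of synthesis take 15, and
# to first order the harmful dose over those 15 minutes is unchanged.
print(minutes_for_same_dose(30, 2.0))  # 15.0
```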

So even after a skim of Pham et al., I don't feel like I understand whether Vitamin D deficiency is likely to be a significant and widespread problem after an ASRS or is a more theoretical (or at least speculative) risk.

A less important but still meaningful win in the US and other places with existing iodization might be extending it to salt in processed food.

Plus some downstream consequences of the above, like the social and political instability that seems likely to accompany massive job loss. In past economic transformations, we've been able to find new jobs for most workers, but that seems less likely here. People who feel they have lost their work and the associated status/dignity/pride (and that it isn't coming back) could be fairly dangerous voters, and they might even be in the majority. I also have concerns about the fair distribution of gains from AI, about a few private companies potentially cornering the market on one of the world's most critical resources (intelligence), and so on. I could see things going well for developing countries, or poorly, in part depending on the choices we make now.

My own take is that civil, economic, and political society largely has to have its act together to address these sorts of challenges before AI gets more disruptive. The disruptions will probably be too broad in scope and too rapid for a catch-up approach to end well -- potentially even well before AGI exists. I see very little evidence that we are moving in an appropriate direction.

I am inclined to see a moderate degree of EA distancing as more a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and more politically influential) national constituencies than AI x-risk can readily achieve. One could imagine "too much" real or perceived EA influence being counterproductive insofar as it could give other motivations for pausing or slowing down AI the odor of astroturf.

I say all that as someone who thinks there are compelling reasons, entirely independent of AI safety, to pause, or at least slow down, on AI.
