
Sanjay

I don't think the main reason for bringing the ISS down in a controlled way is the risk that it might hit someone on earth, or "the PR disaster" of us "irrationally worrying more about the ISS hitting our home than we are getting in their car the next day".

Space debris is a potentially material issue.

  • There are around 23,000 objects larger than 10 cm (4 inches) and about 100 million pieces of debris larger than 1 mm (0.04 inches). Tiny pieces of junk might not seem like a big issue, but that debris is moving at 15,000 mph (24,140 kph), 10 times faster than a bullet. (Source: PBS)
  • This matters because debris threatens satellites. Satellites are critical to GPS systems and international communication networks. They are used for things like helping you get a delivery, helping the emergency services get to their destination, or military operations. 
  • Any one bit of space debris probably isn't a big deal if you ignore knock-on effects. However a phenomenon called Kessler Syndrome could make things much worse. This arises when space debris collides with satellites, creating more space debris, causing a vicious circle.

The geopolitics of space debris gets complicated.

  • The more space debris there is, the more legitimate it is to have weapons on a satellite (to keep your satellite safe from debris). 
  • However such weapons could be dual-purpose, since attacking an enemy's satellite could be of great tactical value in a conflict scenario.

I haven't done a cost-effectiveness analysis to justify whether $1bn is a good use of that money, but I think it's more valuable than this article seems to suggest.

A donor-pays philanthropy-advice-first model solves several of these problems.

  • If your model focuses primarily on providing advice to donors, your scope is "anything which is relevant to donating", which is broad enough that you're bound to have lots of high-impact research to do, which helps with constraint 1.
  • Strategising and prioritisation are much easier when you're knee-deep in supporting donors with their donations -- this highlights the pain points in making good giving decisions, which helps with constraint 2.
  • If donors perceive that the research is worth funding, and have potentially had input into the ideation of the research project, they are likely to be willing to fund it, which helps with constraint 6.

This explains why SoGive adopted this model.

Hi Ozzie, I typically find the quality of your contributions to the EA Forum to be excellent. Relative to my high expectations, I was disappointed by this comment.

> Would such a game "positively influence the long-term trajectory of civilization," as described by the Long-Term Future Fund? For context, Rob Miles's videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.

It sounds like you're arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?

This struck me as strawmanning.

  • The original post asked whether the game would positively influence the long-term trajectory of civilisation. It didn't spell it out, but presumably we want that to be a material positive influence, not a trivial rounding error -- i.e. we care about how much positive influence.
  • The extent of that positive influence is lowered when we already have existing clear and popular explanations. Hence I do believe the existence of the videos is relevant context.
  • Your interpretation "It sounds like you're arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?" is a much stronger and more attackable claim than my read of the original.

> It seems insane to even compare, but was this expenditure of $100,000 really justified when these funds could have been used to save 20–30 children's lives or provide cataract surgery to around 4000 people?

These are totally different modes of impact. I assume you could make this argument for any speculative work.

I'm more sympathetic to this, but I still didn't find your comment helpful. Maybe others read the original post differently than I did, but I read the OP as simply expressing the concept "funds have an opportunity cost" (arguably in unnecessarily hyperbolic terms). This meant that your comment wasn't a helpful update for me.

On the other hand, I appreciated this comment, which I thought to be valuable:

> I also like grant evaluation, but I would flag that it's expensive, and often, funders don't seem very interested in spending much money on it.

> Donors contribute to these funds expecting rigorous analysis comparable to GiveWell's standards, even for more speculative areas that rely on hypotheticals, hoping their money is not wasted, so they entrust that responsibility to EA fund managers, whom they assume make better and more informed decisions with their contributions.

I think it's important that the author had this expectation. Many people initially got excited about EA because of the careful, thoughtful analysis of GiveWell. Those who are not deep in the community might reasonably see the branding "EA Funds" and have exactly the expectations set out in this quote.

I'm working from brief conversations with the relevant experts, rather than having conducted in-depth research on this topic. My understanding is:

  • the food security angle is most useful for a country which imports a significant amount of its food; where this is true, the whole argument is premised on the idea that domestic food producers will be preserved and strengthened, so it doesn't naturally invite opposition. 
  • the economy / job creation angle is again couched in terms of "increasing the size of the pie" -- i.e. adding more jobs to the domestic economy and not taking away from the existing work. Again, this doesn't seem to naturally invite opposition from incumbent food producers.

I guess in either case it's possible for the food/agriculture lobby to nonetheless recognise that alt proteins could be a threat to them and object. I don't know how common it is for this to actually happen.

When advocating that governments invest more in alt proteins, the following angles are typically used:

  • climate/environmental
  • bioeconomy (i.e. if you invest in this, it will create more jobs in your country)
  • food security

I understand the latter two are generally popular with right-wing governments; either of these two positions can be advanced without referencing climate at all (which may be preferable in some cases, for the reasons Ben outlines).

I can confirm that there exists at least one NGO which has this type of risk on its radar. I don't want to say too much until we have gone through the appropriate processes for publishing our notes from speaking with them. 

If any donors want to know more, feel free to reach out directly and I can tell you more.

An application I was expecting you to mention was longer-term forecasts. E.g. if there were a market about something in 2050, the incentives for forecasters are perhaps less good, because the time until resolution is so long. But a "chained" forecast could capture something like "what will next year's forecast say?" -- where next year's forecast is about the following year's forecast, and so on until you hit 2050, when the final market resolves to the ground truth.

This assumes that forecasters are less effective when it comes to markets which don't resolve for a long time.
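To make the chaining idea concrete, here's a minimal sketch of how such a chain could resolve. The function name and the idea of representing each year's market as a single price are my own illustrative choices, not any real forecasting platform's API:

```python
def resolve_chain(prices: list[float], ground_truth: float) -> list[float]:
    """Resolve a chain of yearly markets.

    prices[i] is the price market i+1 was trading at when market i resolved
    (each market asks "what will next year's market say?"). Market i pays out
    that observed price; the final market in the chain pays out the ground
    truth once it is known. Forecasters therefore only ever need to predict
    one year ahead, even though the chain targets a far-future question.
    """
    n = len(prices)
    payouts = []
    for i in range(n):
        if i == n - 1:
            payouts.append(ground_truth)   # last market: resolves to reality
        else:
            payouts.append(prices[i + 1])  # earlier markets: resolve to the
                                           # next market's observed price
    return payouts

# Three-market chain ending in an event that occurred (ground truth = 1.0):
# the first two markets pay out the next market's price; the last pays 1.0.
print(resolve_chain([0.6, 0.55, 0.5], 1.0))
```

If each year's traders price their market at their expectation of the next market's price, backward induction suggests every market in the chain should track the expectation of the 2050 ground truth, while keeping each individual market's time-to-resolution short.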

In 2020, we at SoGive were excited about funding nuclear work for similar reasons. We thought that the departure of the MacArthur foundation might have destructive effects which could potentially be countered with an injection of fresh philanthropy.

We spoke to several relevant experts. Several of these were with (unsurprisingly) philanthropically funded organisations tackling the risks of nuclear weapons. Also unsurprisingly, they tended to agree that donors could have a great opportunity to do good by stepping in to fill gaps left by MacArthur. 

There was a minority view that this was not as good an idea as it seemed. This counterargument was that MacArthur had left for (arguably) good reasons: namely, that they had not seen strong enough impact for the money invested, and continuing would have meant throwing good money after bad. I understood these comments to be the perspectives of commentators external to MacArthur (i.e. I don't think anyone was saying that MacArthur themselves believed this, and we didn't try to work out whether MacArthur themselves believed this).

Under this line of thinking, some "creative destruction" might be a positive. On the one hand, we risk losing some valuable institutional momentum, and perhaps some talented people. On the other hand, it allows for fresh ideas and approaches. 

Thanks Larks, I definitely agree with your characterisation of Kevin Esvelt as the bio guy. An error crept into our notes but is now corrected. 
