Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3323 karma · Joined · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (335)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

 Hello!

I'm glad you found my comment useful!  I'm sorry if it came across as scolding; I interpreted Tristan's original post to be aimed at advising giant mega-donors like Open Philanthropy, more so than individual donors.  In my book, anybody donating to effective global health charities is doing a very admirable thing -- especially in these dark days when the US government seems to be trying to dismantle much of its foreign aid infrastructure.

As for my own two cents on how to navigate this situation (especially now that artificial intelligence feels much more real and pressing to me than it did a few years ago), here are a bunch of scattered thoughts (FYI these bullets have kind of a vibe of "sorry, I didn't have enough time to write you a short letter, so I wrote you a long one"):

  • My scold-y comment on Tristan's post might suggest a pretty sharp dichotomy, where your choice is to either donate to proven global health interventions, or else to fully convert to longtermism and donate everything to some weird AI safety org doing hard-to-evaluate-from-the-outside technical work.
  • That's a frustrating choice for a lot of reasons -- it implies totally pivoting your giving to a new field, where it might no longer feel like you have a special advantage in picking the best opportunities within the space. It also means going all-in on a very specific and uncertain theory of impact (cue the whole neartermist-vs-longtermist debate about the importance of RCTs, feedback loops, and tangible impact, versus ideas like "moral uncertainty" that might push in the other direction).
    • You could try to split your giving 50/50, which seems a little better (in a kind of hedging-your-bets way), but still pretty frustrating for various reasons...
    • I might rather seek to construct a kind of "spectrum" of giving opportunities, where GiveWell-style global health interventions and longtermist AI existential-risk mitigation define the two ends of the spectrum. This might be a dumb idea -- what kinds of things could possibly be in the middle of such a bizarre spectrum? And even if we did find some things to put in the middle, what are the chances that any of them would pass muster as a highly-effective, EA-style opportunity? But I think possibly there could actually be some worthwhile ideas here. I will come back to this thought in a moment.
  • Meanwhile, I agree with Tristan's comment here that it seems like eventually money will probably cease to be useful -- maybe we go extinct, maybe we build some kind of coherent-extrapolated-volition utopia, maybe some other similarly-weird scenario happens.
    • (In a big-picture philosophical sense, this seems true even without AGI? Since humanity would likely eventually get around to building a utopia and/or going extinct via other means. But AGI means that the transition might happen within our own lifetimes.)

 

However, unless we very soon get a nightmare-scenario "fast takeoff" where AI recursively self-improves and seizes control of the future over the course of hours-to-weeks, it seems like there will probably be a transition period, where approximately human-level AI is rapidly transforming the economy and society, but where ordinary people like us can still substantially influence the future.  There are a couple ways we could hope to influence the long-term future:

  • We could simply try to avoid going extinct at the hands of misaligned ASI (most technical AI safety work is focused on this)
    • If you are a MIRI-style doomer who believes that there is a 99%+ chance that AI development leads to egregious misalignment and therefore human extinction, then indeed it kinda seems like your charitable options are "donate to technical alignment research", "donate to attempts to implement a global moratorium on AI development", and "accept death and donate to near-term global welfare charities (which now look pretty good, since the purported benefits of longtermism are an illusion if there is effectively a 100% chance that civilization ends in just a few years/decades)".  But if you are more optimistic than MIRI, then IMO there are some other promising cause areas that open up...
  • There are other AI catastrophic risks aside from misalignment -- gradual disempowerment is a good example, as are various categories of "misuse" (including things like "countries get into a nuclear war as they fight over who gets to deploy ASI")
    • Interventions focused on minimizing the risk of these kinds of catastrophes will look different -- finding ways to ease international tensions and cooperate around AI to avoid war?  Advocating for Georgism and UBI and designing new democratic mechanisms to avoid gradual disempowerment?  Some of these things might also have tangible present-day benefits even aside from AI (like reducing the risks of ordinary wars, or reducing inequality, or making democracy work better), which might help them exist midway on the spectrum I mentioned earlier, from tangible GiveWell-style interventions to speculative and hard-to-evaluate direct AI safety work.
  • Even among scenarios that don't involve catastrophes or human extinction, I feel like there is a HUGE variance between the best possible worlds and the median outcome.  So there is still tons of value in pushing for a marginally better future -- CalebMaresca's answer mentions the idea that it's not clear whether animals would be invited along for the ride in any future utopia.  This indeed seems like an important thing to fight for.  I think there are lots of things like this -- there are just so many different possible futures.
    • (For example, if we get aligned ASI, this doesn't answer the question of whether ordinary people will have any kind of say in crafting the future direction of civilization; maybe people like Sam Altman would ideally like to have all the power for themselves, benevolently orchestrating a nice transhumanist future wherein ordinary people get to enjoy plenty of technological advancements, but have no real influence over the direction of which kind of utopia we create.  This seems worse to me than having a wider process of debate & deliberation about what kind of far future we want.)
    • CalebMaresca's answer seems to imply that we should be saving all our money now, to spend during a post-AGI era that they assume will look kind of neo-feudal.  This strikes me as unwise, since a neo-feudal AGI semi-utopia is a pretty specific and maybe not especially likely vision of the future!  Per Tristan's comment that money will eventually cease to be useful, it seems like it probably makes the most sense to deploy cash earlier, when the future is still very malleable:
      • In the post-ASI far future, we might be dead and/or money might no longer have much meaning and/or the future might already be effectively locked in / out of our control.
      • In the AGI transition period, the future will still be very malleable, we will probably have more money than we do now (although so will everyone else), and it'll be clearer what the most important / neglected / tractable things are to focus on.  The downside is that by this point, everyone else will have realized that AGI is a big deal, lots of crazy stuff will be happening, and it might be harder to have an impact because things are less neglected.
      • Today, lots of AI-related stuff is neglected, but it's also harder to tell what's important / tractable.  (A toy version of this timing tradeoff is sketched below.)
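
To make that timing tradeoff concrete, here is a deliberately crude sketch. Every number in it is an assumption I invented for illustration (the probabilities that money still matters, the neglectedness multipliers, the "clarity" multipliers) -- the point is just the shape of the argument, not the values:

```python
# Toy model of when a donated dollar does the most good, across three periods.
# All parameters are invented for illustration; the structure of the tradeoff
# is the point: neglectedness falls over time, clarity about what to fund
# rises, and eventually money may stop mattering at all.

periods = {
    #                  (P(money still useful), neglectedness, clarity)
    "today":           (0.95, 1.0, 0.3),
    "AGI transition":  (0.70, 0.4, 0.8),
    "post-ASI future": (0.10, 0.1, 1.0),
}

for name, (p_useful, neglectedness, clarity) in periods.items():
    # Crude multiplicative impact score per dollar deployed in this period.
    impact = p_useful * neglectedness * clarity
    print(f"{name:>16}: {impact:.3f}")
```

With these made-up numbers, "today" and "the transition" come out roughly comparable and the post-ASI period comes out badly -- which is just the qualitative argument above, restated as arithmetic.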

 

For a couple of examples of interventions that could exist midway along a spectrum from GiveWell-style interventions to AI safety research, which are also focused on influencing the transitional period of AGI, consider Dario Amodei's vision of what an aspirational AGI transition period might look like, and what it would take to bring it about:

  • Dario talks about how AI-enhanced biological research could lead to amazing medical breakthroughs.  To allow this to happen more quickly, it might make sense to lobby to reform the FDA or the clinical trial system.  It also seems like a good idea to lobby for the most impactful breakthroughs to be quickly rolled out, even to people in poor countries who might not be able to afford them on their own.  Getting AI-driven medical advances to more people, more quickly would of course benefit the people for whom the treatments arrive just in time.  But it might also have important path-dependent effects on the long-run future, by setting precedents, influencing culture, and so on.
  • In the section on "neuroscience and mind", Dario talks about the potential for an "AI coach who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective".  Maybe there is some way to support / accelerate the development of such tools?
    • Dario is thinking of psychology and mental health here. (Imagine a kind of supercharged, AI-powered version of Happier-Lives-Institute-style wellbeing interventions like StrongMinds?)  But there could be similarly wide potential for disseminating AI technology for promoting economic growth in the third world (even today's LLMs can probably offer useful medical advice, engineering skills, entrepreneurial business tips, agricultural productivity best practices, etc).
    • Maybe there's no angle for philanthropy in promoting the adoption of "AI coach" tools, since people are properly incentivized to use such tools and the market will presumably race to provide them (just as charitable initiatives like One Laptop Per Child ended up much less impactful than ordinary capitalism manufacturing bajillions of incredibly cheap smartphones).  But who knows; maybe there's a clever angle somewhere.
  • He mentions a similar idea that "AI finance ministers and central bankers" could offer good economic advice, helping entire countries develop more quickly.  It's not exactly clear to me why he expects nations to listen to AI finance ministers more than ordinary finance ministers.  (Maybe the AIs will be more credibly neutral, or eventually have a better track record of success?)  But the general theme of trying to find ways to improve policy and thereby boost economic growth in LMICs (as described by OpenPhil here) is obviously an important goal for both the tangible benefits, and potentially for its path-dependent effects on the long-run future.  So, trying to find some way of making poor countries more open to taking pro-growth economic advice, or encouraging governments to adopt efficiency-boosting AI tools, or convincing them to be more willing to roll out new AI advancements, all seem like they could be promising directions.
  • Finally he talks about the importance of maintaining some form of egalitarian / democratic control over humanity's future, and the idea of potentially figuring out ways to improve democracy and make it work better than it does today.  I mentioned these things earlier; both seem like promising cause areas.

"However, the likely mass extinction of K-strategists and the concomitant increase in r-selection might last for millions of years."

I like learning about ecology and evolution, so personally I enjoy these kinds of thought experiments.  But in the real world, isn't it pretty unlikely that natural ecosystems will just keep humming along for another million years?  I would guess that within just the next few hundred years, human civilization will have grown in power to the point where it can do what it likes with natural ecosystems:
 

  • perhaps we bulldoze the earth's surface in order to cover it with solar panels, fusion power plants, and computronium?
  • perhaps we rip apart the entire earth for raw material to be used for the construction of a Dyson swarm?
  • more prosaically, maybe human civilization doesn't expand to the stars, but still expands enough (and in a chaotic, unsustainable way) such that most natural habitats are destroyed
  • perhaps there will have been a nuclear war (or some other similarly devastating event, like the creation of mirror life that devastates the biosphere)
  • perhaps we create unaligned superintelligent AI which turns the universe into paperclips
  • perhaps humanity grows in power but also becomes more responsible and sustainable, and we reverse global warming using abundant clean energy powering technologies like carbon air capture, assorted geoengineering techniques, etc
  • perhaps humanity attains a semi-utopian civilization, and we decide to extensively intervene in the natural world for the benefit of nonhuman animals
  • etc

Some of those scenarios might be dismissable as the kind of "silly sci-fi speculation" mentioned by the longtermist-style meme below.  But others seem pretty mundane, indeed "to be expected" even by the most conservative visions of the future.  To me, the million-year impact of things like climate change only seems relevant in scenarios where human civilization collapses pretty soon, but in a way that leaves Earth's biosphere largely intact (maybe if humans all died to a pandemic?).
 

IMO, one helpful side effect (albeit certainly not a main consideration) of making this work public is that it seems very useful to have at least one worst-case biorisk that can be publicly discussed in a reasonable amount of detail.  Previously, the whole field / cause area of biosecurity could feel cloaked in secrecy, backed up only by experts with arcane biological knowledge.  This situation, although unfortunate, is probably justified by the nature of the risks!  But still, it makes it hard for anyone on the outside to tell how serious the risks are, or understand the problems in detail, or feel sufficiently motivated about the urgency of creating solutions.

By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are actually even more worried about, say, infohazardous-bioengineering-technique-#5, than they are about mirror life.  Just being able to use mirror life as an example seems like it's much healthier than having zero concrete examples and everything shrouded in secrecy.

Some of the cross-cutting things I am thinking about:

  • scientific norms about whether to fund / publish risky research
  • attempts to coordinate (on a national or international level) moratoriums against certain kinds of research
  • the desirability of things like metagenomic sequencing, DNA synthesis screening for harmful sequences, etc
  • research into broad-spectrum countermeasures like UVC light, super-PPE, pipelines for very quick vaccine development, etc
  • just emphasising the basic overall point that global catastrophic biorisk seems quite real and we should take it very seriously
  • and probably lots of other stuff!

So, I think it might be a kind of epistemic boon for all of biosecurity to have this public example, which will help clarify debates / advocacy / etc about the need for various proposed policies or investments.

Thinking about my point #3 some more (how do you launch a satellite after a nuclear war?): I realized that if you put me in charge of a plan to DIY this (instead of lobbying the US military to do it for me, which would be my first choice), and if SpaceX also wasn't answering my calls to see if I could buy any surplus Starlinks...

You could do worse than partnering with Rocket Lab, a satellite and rocket company based in New Zealand, and developing the emergency satellite on their "Photon" platform (the design has flown before, is small enough to still be kinda cheap, and is big enough to generate much more power than a cubesat). Then Rocket Lab could launch their Electron rocket from New Zealand in the event of a nuclear war, and (in a real crisis like that) the whole company would help make sure the mission happened. The idea of partnering with someone, rather than just buying a satellite, is key IMO, because then it's mostly THEIR end-of-the-world plan, and in a crisis it would benefit from their expertise / workforce.

I'd try to talk to the CEO and get him on board. This seems like the kind of flashy, Elon-esque, altruistic-in-a-sexy-way mission that could help make Rocket Lab seem "cool" and recruit eager mission-driven employees. (Rocket Lab's CEO currently has ambitions to do some similarly flashy missions, like sending their own probe to Venus.)

But this would definitely be more like a $30M project, than a $300K project.

Kind of a funny selection effect going on here here where if you pick sufficiently promising / legible / successful orgs (like Against Malaria Foundation), isn't that just funging against OpenPhil funding?  This leads me to want to upweight new and not-yet-proven orgs (like the several new AIM-incubated charities), plus things like PauseAI and Wild Animal Initiative that OpenPhil feels they can't fund for political reasons.  (Same argument would apply for invertebrate welfare, but I personally don't really believe in invertebrate welfare.  Sorry!)

I'm also somewhat saddened by the inevitable popularity-contest nature of the vote; I feel like people are picking orgs they've heard of and picking orgs that match their personal cause-prioritization "team" (global health vs x-risk vs animals).  I like the idea that EA should be experimental and exploratory, so (although I am a longtermist myself), I tried to further upweight some really interesting new cause areas that I just learned about while reading these various posts:
- Accion Transformadora's crime-reduction stuff seems like a promising new space to explore for potential effective interventions in medium-income countries.
- One Acre Fund is potentially neat, I'm into the idea of economic-growth-boosting interventions and this might be a good one.
- It's neat that Observatorio de Riesgos Catastroficos is doing a bunch of cool x-risk-related projects throughout Latin America; their nuclear-winter-resilience-planning stuff in Argentina and Brazil seems like a particularly well-placed bit of local lobbying/activism.

But alas, there can only be three top-three winners, so I ultimately spent my top votes on Team Popular Longtermist Stuff (Nucleic Acid Observatory, PauseAI, MATS) in the hopes that one of them, probably PauseAI, would become a winner.

(longtermist stuff)
1. Nucleic Acid Observatory
2. Observatorio de Riesgos Catastroficos
3. PauseAI
4. MATS

(interesting stuff in more niche cause areas, which i sadly doubt can actually win)
5. Accion Transformadora
6. One Acre Fund
7. Unjournal

(if longtermism loses across the board, I prefer wild animal welfare to invertebrate welfare)
8. Wild Animal Initiative
9. Faunalytics

I don't know anything about One Acre Fund in particular, but it seems plausible to me that a well-run intervention of this sort could potentially beat cash transfers (just as many Givewell-recommended charities do).

  • Increasing African agricultural productivity has been a big cause area for groups like the Bill & Melinda Gates Foundation for a long time.  Hannah Ritchie, of Our World in Data, explains here why this cause seems so important -- it just seems kinda mathematically inevitable that if labor productivity doesn't improve, these regions will be trapped in poverty forever.  (But improving productivity seems really easy -- just use fertilizer, use better crop varieties, use better farming methods, etc.)  So this seems potentially similar to cash transfers, insofar as if we did cash transfers instead, we'd hope to see people spending a lot of the money on better agricultural inputs!
  • Notably, people who are into habitat / biodiversity preservation and fighting climate change really like the positive environmental externalities of improving agricultural productivity.  (The more productive the world's farmland gets, the less pressure there is to chop into jungle and farm more land.)  So if you are really into the environment, maybe those positive eco externalities make a focused intervention like this much more appealing than cash transfers, which are more about the benefits to the direct recipients and local economy.
  • One could look at this as a kind of less-libertarian, more top-down alternative to cash transfers, which makes it look bad.  (Basically -- give people the cash, and wouldn't they end up making these agricultural improvements themselves eventually?  Wouldn't cash outperform, since central planning underperforms?)  But you could also look at it as a very pro-libertarian, economic-growth-oriented intervention designed to provide public goods and create stronger markets, which makes it look good.  (Hence all the emphasis on educating farmers to store crops and sell when prices are high, or preemptively transporting agricultural inputs to local villages where they can then be sold.  Through this lens I feel like "they're solving coordination problems and providing important information to farmers.  Of course a sufficiently well-run version of this charity has the potential to outperform cash!")  This is basically me rephrasing your second bullet point.
  • Just a feeling, but I think your first bullet point (loans are more efficient because the money is paid back) wouldn't obviously make this more efficient than cash transfers?  (Maybe you are alluding to this with your use of "you believe".)  Yes, making loans is "cheaper than it first seems" because the money is paid back.  But giving cash transfers is also "better than it first seems" because the money (basically stimulus) has a multiplier effect as it percolates throughout the local economy.  Whether it's better for people to buy farming tools with cash they've been loaned (and then you get the money back and make more loans to more people who want to buy tools), versus cash they've been given (and then the cash percolates around the local economy and again other people make purchases), seems like a complicated macroeconomics question that might vary based on the local unemployment & inflation rate or etc.  It's not clear to me that one strategy is obviously better.  (A toy version of this comparison is sketched below.)
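
To illustrate why the loans-vs-cash question is genuinely ambiguous, here is a toy version of the comparison. Every parameter (the repayment rate, the number of lending cycles, the local spending share) is invented for illustration -- this is not an estimate of One Acre Fund or any real cash-transfer program:

```python
# Toy comparison: revolving loan fund vs. one-time cash transfer.

budget = 100.0

# Loan fund: capital is re-lent each cycle, shrinking by the default rate.
repayment_rate = 0.9    # fraction of each loan repaid (assumption)
cycles = 5              # lending cycles before the program winds down (assumption)
capital = budget
total_lent = 0.0
for _ in range(cycles):
    total_lent += capital        # the whole remaining fund is lent out each cycle
    capital *= repayment_rate    # defaults erode the fund between cycles

# Cash transfer: one-time grant, but spending recirculates in the local economy.
local_spend_share = 0.6          # share of each dollar re-spent locally (assumption)
total_activity = budget / (1 - local_spend_share)   # geometric-series multiplier

print(f"total lent over {cycles} cycles: ${total_lent:.0f}")
print(f"total local activity from cash: ${total_activity:.0f}")
```

Note that the two outputs aren't even denominated in the same units of "good done" -- dollars lent buy tools whose value depends on farm returns, while dollars of recirculated spending depend on how much local activity translates into welfare -- which is exactly why it's hard to declare one mechanism the winner.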

But these are all just thoughts, of course -- I too would be curious if One Acre Fund has some real data they can share.

Hi!  Jackson Wagner here, former aerospace engineer -- I worked as a systems engineer at Xona Space Systems (which is trying to develop next-gen GPS technology, and has recently gotten involved in a military program to create a kind of backup for GPS).  I am also a big fan of the ALLFED concept.

Here are some thoughts on the emergency satellite concept mentioned -- basically, I think this is a bad idea!  I am sorry that this is a harsh and overly-negative rant that just harps on one small detail of the post; I think the other ideas you mention are pretty good!

1. No way will you be able to build and launch a satellite for $300K??  Sure, if you are SpaceX, with all the world's most brilliant engineers, and you can amortize your satellite design costs over tens of thousands of identical Starlink copies, then maybe you can eventually get marginal satellite construction cost down to around $300K.  But for the rest of us mere mortals, designing and building individual satellites, $300K is around the price of building and launching a tiny cubesat (like the pair I helped build at my earlier job at a tiny Virginia company called SpaceQuest).

2. I'm pretty skeptical that a tiny cubesat would be able to talk directly to cellphones?  I thought direct-to-cell satellites were especially huge due to the need for large antennas.  Although I guess Lynk Global's satellites don't seem so big, and probably you can save on power when you're just transmitting the same data to everybody instead of trying to send and receive individual messages.  Still, I feel very skeptical that a minimum-viable cubesat will have enough power to do much of use.  (Many cubesats can barely fit enough batteries to stay charged through eclipse!)
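
To put rough numbers on the power problem, here is a back-of-envelope orbit-and-energy budget. The panel and bus-load wattages are my own guesses for a small 3U-class cubesat, not figures from any particular mission:

```python
import math

# Back-of-envelope LEO energy budget for a small cubesat (illustrative only).
MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # Earth radius, m
altitude = 550e3         # typical LEO altitude, m

a = R_EARTH + altitude
period_min = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60   # ~95 minutes

eclipse_fraction = 0.35  # rough fraction of the orbit spent in Earth's shadow
sunlit_min = period_min * (1 - eclipse_fraction)

panel_power_w = 8.0      # body-mounted panels on a 3U cubesat (assumption)
bus_load_w = 4.0         # avionics, attitude control, idle radio (assumption)

# Energy collected while sunlit must cover the whole orbit's baseline load
# before a single watt goes to a power-hungry broadcast payload.
energy_in_wh = panel_power_w * sunlit_min / 60
energy_out_wh = bus_load_w * period_min / 60

print(f"orbit period: {period_min:.0f} min ({period_min - sunlit_min:.0f} min in eclipse)")
print(f"energy in per orbit:  {energy_in_wh:.2f} Wh")
print(f"baseline load:        {energy_out_wh:.2f} Wh")
print(f"margin for payload:   {energy_in_wh - energy_out_wh:.2f} Wh")
```

A couple of watt-hours of margin per orbit isn't much to spend on a radio trying to close a link with handheld phones on the ground, which is part of why direct-to-cell satellites tend to be big.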

3. How are you going to launch and operate this satellite amid a global crisis??  Consider that even today's normal cubesat projects, happening in a totally benign geopolitical / economic environment, have something like a 1/3 rate of instant, "dead on arrival" mission failure (ie the ground crew is never able to contact the cubesat after deployment).  In the aftermath of nuclear war or other worldwide societal collapse, you are going to have infinitely more challenges than the typical university cubesat team.  Many ground stations will be offline because they're located in countries that have collapsed into anarchy, etc!  Who will be launching rockets, aside from perhaps the remnants of the world's militaries?  Your satellite's intended orbit will be much more radioactive, so failure rates of components will be much higher!  Basically, space is hard and your satellite is not going to work.  At the very least, you'd want to make three satellites -- one to launch and test, another to keep in underground storage for a real disaster (maybe buy some rocket, like a Rocket Lab Electron, to go with it!), and probably a spare.  (See the quick redundancy math below.)
(If the disaster is local rather than global, then you'd have an easier time launching from eg the USA to help address a famine in Africa.  But in this scenario you don't need a satellite as badly -- militaries can airdrop leaflets, neighboring regions can set up radio stations, we can ship food aid around on boats, etc.)
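
The redundancy arithmetic behind "build at least three" is simple. The peacetime dead-on-arrival rate below is the roughly 1-in-3 figure mentioned above; the wartime rate is a pure guess, chosen to be worse:

```python
# P(at least one satellite works) for n independent copies.
for label, p_fail in [("peacetime", 1 / 3), ("wartime (guess)", 0.5)]:
    for n in (1, 2, 3):
        p_any_works = 1 - p_fail ** n
        print(f"{label}, {n} sat(s): {p_any_works:.0%} chance at least one works")
```

Even at the optimistic peacetime rate, a single copy fails a third of the time; three copies get you to roughly 96%.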

4. Are you going to get special permission from all the world's governments and cell-network providers, so that you can just broadcast texts to anyone on earth at any time?  Getting FCC licensed for the right frequencies, making partnerships with all the cell-tower providers (or doing whatever else is necessary so that phones are pre-configured to be able to receive your signal), etc, seems like a big ask!

5. Superpower militaries are already pretty invested in maintaining some level of communications capability through even a worst-case nuclear war.  (Eg, the existing GPS satellites are some of the most radiation-hardened satellites ever, in part because they were designed in the 1980s to remain operational through a nuclear war.  Modern precision ASAT weapons could take out GPS pretty easily -- but hence the linked Space Force proposal for backup "resilient GPS" systems.  I know less about military comms systems, but I imagine the situation is similar.)  Admittedly, most of these communications systems aren't aimed at broadcasting information to a broad public.  But still, I expect there would be some important communications capability left even during/after an almost inconceivably devastating war, and I would bet that crucial information could be disseminated surprisingly well to places like major cities.

6. Basically instead of building satellites yourselves, somebody should just double-check with DARPA (or Space Force or whoever) that we are already planning on keeping a rocket's worth of Starlink satellites in reserve in a bunker somewhere.  This will have the benefit of already being an important global system (many starlink terminals all around the world), reliable engineering, etc.

Okay, hopefully the above was helpful rather than just seeming mean!  If you are interested in learning more about satellites (or correcting me if it turns out I'm totally wrong about the feasibility of direct-to-cellphone from a cubesat, or etc), feel free to message me and we could set up a call!  In particular I've spent some time thinking about what a collapse of just the GPS system would look like (eg if China or Russia did a first-strike against western global positioning satellites as part of some larger war), which might be interesting for you guys to consider.  (Losing GPS would not be totally devastating to the world by itself -- at most it would be an economic disruption on the scale of covid-19.  But the problem is that if you lose GPS, you are probably also in the middle of a world war, or maybe an unprecedented worst-case solar storm, so you are also about to lose a lot of other important stuff all at once!)

Concluding by repeating that this was a hastily-typed-out, kinda knee-jerk response to a single part of the post, which doesn't impugn the other stuff you talk about! 

Personally, of the other things you mentioned, I'd be most excited about both of the "#1" items you list -- continuing research on alternative foods themselves, and lobbying naturally well-placed-to-survive-disaster governments to make better plans for resiliency.  Then #4 and #5 seem a little bit like "do a bunch of resiliency-planning research ourselves", which initially struck me as less good than "lobbying governments to do resiliency planning" (since I figure governments will take their own plans more seriously).  But of course it would also be great to be able to hand those governments detailed, thoughtful information to start from and use as a template, so that makes #4 and #5 look good again to me.  Finally, I would be really hyped to see some kind of small-scale trials of ideas like seaweed farming, papermill-to-sugar-mill conversions, etc.

 
