I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
I feel like the counterpoint here is that R&D is incredibly hard. In regular development, you have established methods for how to do things, established benchmarks for whether things are going well, and a long period of testing to discover errors, flaws, and mistakes through trial and error.
In R&D, you're trying to do things that nobody has ever done before, and simultaneously establish the methods, benchmarks, and error checks for that new work, which carries a ton of potential pitfalls. And because nobody has ever done it before, the AI is always inherently operating out of its training distribution to a much greater degree than in regular work.
I did read your scenario. I'm guessing you didn't read my articles? I'm closely tracking the use of AI in materials science, and the technical barriers to things like nanotechnology.
"AI" is not a magic word that makes technical advancements appear out of nowhere. There are fundamental physical limits to what you can realistically model with finite computer resources, and the technical hurdles to drexlerian nanotech are absurd in their difficulty. To make experimental advances in something like nanotech, you need extensive experimentation. The AI does not have nanotech to build those labs, and it takes more than a year for humans to build it.
I usually try to avoid the word "impossible" when talking about speculative scenarios... but with a one-year time limit, the scenario you have written is impossible.
I work in computational materials science and have spent a lot of time digging into Drexlerian nanotech. The idea that Drexler-style nanomachines can be invented in 2026 is straight-up absurd. Progress towards nanomachines has stalled out for decades. This is not a "20 years from now" type of project: absent transformative AI speedups, the tech could be a century away, or even outright impossible. And the effect of AI on materials science is far from transformative at present; that is not going to change in one year.
You are not doing your cause a service by proposing scenarios that are essentially impossible.
I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already.
To back this up: I mostly peruse non-rationalist, left-leaning communities, and this is a concern in almost every one of them. There is a huge amount of distrust of AI companies on the left.
Even AI-skeptical people are concerned about this: AI that is not "transformative" can still concentrate power. Most lefties think that AI art is shit, but they are still worried that it will cost people jobs. This is not a contradiction, because taking jobs does not require the AI to be better than you, just cheaper. And if AI does massively improve, that is going to make them more likely to oppose it, not less.
The Gini coefficient "is more sensitive to changes around the middle of the distribution than to the top and the bottom". When you are talking about the top billionaires, as Ozzie is, it's not the correct metric to use:
In absolute terms, the income share of the top 1% in the US has been steadily rising since the 1980s (although this is not true for countries like Japan or Sweden).
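To illustrate what that sensitivity claim means in practice, here is a toy sketch (synthetic lognormal incomes and made-up transfer sizes, purely for illustration): move the same amount of money across the same income gap, once in the dense middle of the distribution and once near the top, and see how much the Gini responds to each.

```python
import numpy as np

rng = np.random.default_rng(0)

def gini(x):
    """Gini coefficient of a 1-D array of incomes (standard rank formula)."""
    x = np.sort(x)
    n = len(x)
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * np.sum(x)) - (n + 1) / n

def regressive_transfer(x, donor_income, amount, gap):
    """Take `amount` from the person nearest `donor_income` and give it to
    the person nearest `donor_income + gap`."""
    x = x.copy()
    x[np.argmin(np.abs(x - donor_income))] -= amount
    x[np.argmin(np.abs(x - (donor_income + gap)))] += amount
    return x

# Synthetic, heavy-tailed "incomes" (not real data).
incomes = rng.lognormal(mean=np.log(50_000), sigma=0.8, size=100_000)

base = gini(incomes)
mid = gini(regressive_transfer(incomes, 50_000, 5_000, 20_000))    # middle of the distribution
top = gini(regressive_transfer(incomes, 300_000, 5_000, 20_000))   # around the top ~1%

print(f"middle transfer changes Gini by {mid - base:+.2e}")
print(f"top transfer changes Gini by    {top - base:+.2e}")
```

The same dollar move shifts the Gini far more in the middle, simply because many more people sit between the donor and the recipient there. So if the question is specifically about billionaires pulling away from everyone else, top income (or wealth) shares are the more informative statistic.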
I'm not sure the "passive" finding should be that reassuring.
I'm imagining someone googling "ethical career" two years from now, finding 80k, noticing that almost every recent article, podcast, and promoted job is based around AI, and concluding that EA is just an AI thing now. If AI-based careers aren't a fit for them (whether by interest or by skillset), they'll just move on to somewhere else. Maybe they would have been a really good fit for an animal advocacy org, but if their first impression doesn't tell them that animal advocacy is still a large part of EA, they aren't going to know.
It could also be bad even for AI safety: there are plenty of people here who were initially skeptical of AI x-risk, but joined the movement because they liked the malaria nets stuff. Then, over time and exposure, they decided that the AI risk arguments made more sense than they initially thought, and started switching over. In the hypothetical future 80k, where malaria nets are de-emphasised, that person may bounce off the movement instantly.
Remember that this is graphing the length of task that the AI can do with an over-50% success rate. The length of task that an AI can do reliably is much shorter than what is shown here (see Figure 4 in the paper): for an 80% success rate, it's 30 seconds to a minute.
Being able to do a month's worth of work at a 50% success rate would be very useful and productivity-boosting, of course, but would it really be close to recursive self-improvement? I don't think so. I feel that some parts of complex projects will need reliable code, and that will always be a bottleneck.
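To put a toy number on why reliability matters (the subtask counts and the independence assumption here are mine, not anything from the METR paper): if a project only succeeds when every one of its subtasks succeeds, per-task success rates compound brutally.

```python
# Toy model: chance a project succeeds if it needs k subtasks to ALL succeed,
# assuming (unrealistically) independent subtasks with a fixed success rate.
for per_task in (0.5, 0.8, 0.99):
    for k in (5, 20, 100):
        print(f"per-task success {per_task:.2f}, {k:>3} subtasks "
              f"-> project succeeds with p = {per_task ** k:.2e}")
```

At 50% per task, twenty chained subtasks succeed about one time in a million; even 99% per task only gets you to roughly 37% over a hundred subtasks. That's the sense in which I think reliability, not raw capability at the 50% threshold, is the bottleneck for anything like unsupervised recursive self-improvement.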
Welcome to the forum. You are not missing anything: in fact, you have hit upon some of the most important and controversial questions about the EA movement, and there is wide disagreement on many of them, both within EA and among EA's various critics. I can try to give both internal and external sources raising or rebutting similar questions.
With regard to the issue of unintended consequences from global aid, and the global vs local question: this was raised by Leif Wenar in a hostile critique of EA here. You can read some responses and rebuttals to that piece here and here.
With regard to the merits of longtermism, this will be a theme of the debate week this coming week, so you should be able to get a feel for the debate within EA there. Plenty of EAs are not longtermist for exactly the reasons you described. Longtermism is the focus of a lot of external critique of EA as well, with some seeing it as a dangerous ideology, although that author has themselves been exposed for dishonest behaviour.
AI safety is a highly speculative subject, and there are a wide variety of views on how powerful AI can be, how soon "AGI" could arrive, how dangerous it is likely to be, and what the best strategy is for dealing with it. To get a feel for the viewpoints, you could try searching for "P(doom)", a rough estimate of the probability of AI-caused destruction. I might as well plug my own argument for why I don't think it's that likely. For external critics, Pivot to AI is a newsletter that compiles articles with the perspective that AI is overhyped and that AI safety isn't real.
The case for "earning to give" is given in detail here. The argument you raise of working for unethical companies is one of the most common objections to the practice, particularly in the wake of the SBF scandal, however in general EA discourages ETG with jobs that are directly harmful.
The method in the case of quantum physics was to meet extraordinary claims with extraordinary evidence. Einstein did not resist the findings of quantum mechanics, only their interpretation, holding out hope that he could make a hidden-variable theory work. Quantum mechanics became accepted because its proponents were able to back up the theory with experimental data that could be explained in no other way.
Like a good scientist, I'm willing to follow logic and evidence to their conclusions. But when I actually look at the "logic" being used to justify doomerist conclusions, it always seems incredibly weak (and I have looked, extensively). I think people are rejecting your arguments not because you are a rogue outsider, but because they don't think your arguments are very good.