I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge quantity of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to “compress” the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute intensive process, but thanks to U3’s growing control over AI data centers, U3 manipulates billions of dollars of compute.
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Computational atomic physicist here: you are vastly, vastly underestimating the difficulty of molecular simulations. Keep in mind that exactly solving the electronic structure of a couple dozen atoms would take longer than the lifetime of the universe to complete. We have approximations that can get us in the right ballpark of the answer in reasonable time, but never to exact answers. See here for a more in-depth discussion, and a rough illustration of the scaling below.
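To put an illustrative number on the "lifetime of the universe" claim, here's a back-of-the-envelope calculation in Python. The system size and basis are hypothetical values I've picked for illustration, not from any specific calculation; the point is that the dimension of the exact (full configuration interaction) wavefunction grows combinatorially and outruns any conceivable compute budget almost immediately:

```python
import math

# Hypothetical system, chosen purely for illustration: ~100 electrons
# (a couple dozen light atoms) in a modest basis of 200 spatial orbitals.
n_orbitals = 200
n_alpha = n_beta = 50  # 100 electrons, split evenly between spins

# Dimension of the full-CI space: ways to place each spin's electrons.
dim = math.comb(n_orbitals, n_alpha) * math.comb(n_orbitals, n_beta)
print(f"FCI determinants: ~10^{math.log10(dim):.0f}")

# Generous budget: 10^18 ops/s for the age of the universe (~4e17 s).
ops = 1e18 * 4e17
print(f"Operations available: ~10^{math.log10(ops):.0f}")
# The basis alone is dozens of orders of magnitude beyond the budget,
# before we even try to diagonalize a matrix of that dimension.
```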
Our community has been discussing and attempting machine learning applications since the '90s, and only one has seen a breakthrough into actual practical use: machine-learned force potentials. These are trained on data from other simulations, so they're inherently limited to the accuracy of the underlying simulation method. They allow you to do some physics simulations over a longer timescale, and by longer I mean a few nanoseconds, on perfect systems. There are some other promising ML avenues, but none of them seem likely to yield miracles. Computational simulations are an aid to experiment, not a replacement.
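For readers unfamiliar with the technique, a machine-learned force potential is essentially regression on reference simulation data: you fit a model to energies computed by an expensive method, then differentiate the fit to get forces. Here's a deliberately minimal toy sketch; a Lennard-Jones dimer stands in for the real quantum reference data, and real packages use far more sophisticated descriptors and architectures. Note where the accuracy ceiling comes from: the model only ever sees the reference method's output.

```python
import torch

# Stand-in "reference data": a Lennard-Jones dimer playing the role of
# expensive quantum-mechanical calculations. 200 (distance, energy) pairs.
r = torch.linspace(0.9, 2.5, 200).unsqueeze(1)
energy_ref = 4.0 * (r**-12 - r**-6)

# Small neural network mapping geometry (here, one distance) to energy.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fit the net to the reference energies: it can only ever reproduce
# (at best) the accuracy of the method that generated them.
for step in range(5000):
    opt.zero_grad()
    loss = torch.mean((model(r) - energy_ref) ** 2)
    loss.backward()
    opt.step()

# Forces come from the gradient of the learned surface, F = -dE/dr,
# evaluated by autograd rather than by any new physics.
r_test = torch.tensor([[1.2]], requires_grad=True)
force = -torch.autograd.grad(model(r_test).sum(), r_test)[0]
print(f"Predicted force at r=1.2: {force.item():.3f}")
```

The payoff is speed: evaluating the fitted model is orders of magnitude cheaper than the reference calculation, which is what buys you those extra nanoseconds of simulation time.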
I get that this is meant to be some magic super-AI, but I don't actually see that changing much. There are cold, hard mathematical boundaries here, and the AI can't spend its entire computational budget chasing moderate improvements in physics simulation.
We already have a gravity-powered method of electricity generation. It's called "hydropower".
I suggest you spend way less time complaining about forms of energy that provably generate excess electricity, and more time explaining why you expect your device to actually work. The electrical energy you output has to come from somewhere. Where?
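To spell out why hydropower passes that test, the bookkeeping is one line of arithmetic (the numbers below are illustrative, not from any particular plant):

```python
# Hydropower's energy source is explicit: water of density rho falling
# at volumetric rate Q through a head h releases gravitational potential
# energy at a rate P = eta * rho * g * Q * h.
rho = 1000.0  # density of water, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
Q = 100.0     # flow rate, m^3/s
h = 20.0      # head (height the water falls), m
eta = 0.9     # turbine/generator efficiency, a typical order of magnitude

P = eta * rho * g * Q * h
print(f"Electrical output: {P / 1e6:.1f} MW")  # ~17.7 MW

# The sun paid for this by evaporating the water and lifting it uphill.
# A closed-loop "gravity device" has no such external input, so
# conservation of energy leaves it nothing to output.
```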
To be clear, I think your project is 100% doomed to fail, I'm just trying to be nice here.
With the "killing humans through machines" option, a superintelligent AI would probably be smart enough to kill us all without taking the time to build a robot army, which would definitely raise my suspicions! Maybe it would hack nuclear weapons and blow us all up, invent and release an airborne super-toxin, or make a self-replicating nanobot - wouldn't see it coming, over as soon as we realised it wasn't aligned.
Drexlerian-style nanotech is not a threat for the foreseeable future. It is not on the horizon in any meaningful sense, and may in fact be impossible. Intelligence, even superintelligence, is not magic, and cannot just reinvent a better design than DNA from scratch, with no testing or development. If Drexlerian nanotech becomes a threat, it will be very obvious.
Also, "hacking nuclear weapons"? Do you understand the actual procedure involved in firing a nuclear weapon?
I think a lot of the critiques are pretty accurate. It seems pretty clear to me that the AI safety movement has achieved the exact opposite of its goals: sparking an AI arms race that the West isn't even that far ahead in, with the leading AI companies run by less-than-reliable characters. A lot of this was helped along by poor decisions and incompetence from AI safety leaders, such as the terribly executed attempt to oust Altman.
I also agree that the plans of the Yudkowskian-style doomers are laughably unlikely to come about anytime soon. However, I don't agree that slowing down AI progress has no merit: if AI is genuinely dangerous, there are likely to be a litany of warning signs that do damage but don't wipe out humanity. With slower development, there is more time to respond appropriately, fix mistakes in AI control approaches, and so on, so we can gradually learn to adapt to the effects of the technology.
To be clear, I agree with you that cutting PEPFAR is an atrocity, and that saving lives is good even if it doesn't result in structural changes to society.
However, I think the arguments in this essay come close to strawmanning the "root causes" position, and that might result in genuinely good objections being dismissed. You should absolutely sometimes address root causes!
For example, imagine a cholera outbreak caused by a contaminated well. In order to help, Person A might say, "I'm going to hire new doctors and order new supplies to help the cholera victims." Person B then says, "That isn't addressing the root cause of the problem; we should instead use that money to find and replace the contaminated well."
Person B could easily have a point here: if they succeed, they end the cholera outbreak entirely, whereas Person A would have to keep pumping money in indefinitely, which would probably cost far more over time.
When people talk about "structural change", they are implicitly making this sort of argument: the non-structural people will have to keep pouring money into the problem, whereas structural reform could end or severely curtail the problem on a much more permanent basis, so the latter is a better use of our time and resources than the former.
Often this argument is wrong, or deployed in bad faith: often there is no clear path to structural reform, and its effectiveness may be overstated. But sometimes it is correct, and structural reform really is the right solution; the abolition of slavery, for example. I don't want to throw the baby out with the bathwater here.
Summary bot already exists, and it looks like it can be summoned with a simple tag? I'm not sure what more you need here.
One of the alleged Zizian murderers has released a statement from prison, and it's a direct plea for Eliezer Yudkowsky specifically to become a vegan.
This case is getting a lot of press attention and will likely spawn further coverage in the form of true-crime content and the like. The likely effect will be to cement Rationalism in the public imagination as a group of crazy people (regardless of whether the group in general opposes extremism), and groups and individuals connected to Rationalism, including EA, will be reputationally damaged by association.