I think this post makes some excellent points; for brevity, I'm just going to mention my (potential) disagreements.
The punchline is, of course, that the world goes on turning! Tony Hoare, an early pioneer of formal methods, wrote in 1995: “It has turned out that the world just does not suffer significantly from the kind of problem that our research was originally intended to solve.”
I suspect (although it's hard to check) that this is because most software is mostly used by people who aren't trying to break it. For a lot of software, weird edge cases usually don't have especially bad consequences.
Throw ML into the mix and any divergence between your intended (possibly implicit) utility function and the one you actually programmed can be ruthlessly exploited. In other words: in the past, the world just didn't suffer significantly from the kind of problem that formal methods were intended to solve. But that was when human beings were responsible for the logic.
A particularly colorful example of this is the story about the (RL?) agent that was being trained to land an aircraft on a carrier in simulation. Due to an overflow error, it learned a strategy of crashing into the deck at very high velocity. Most human programmers would not think to try that; it may be that formal methods were mostly unnecessary in the past but become critical for guaranteeing the safety of autonomous systems.
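To make the failure mode concrete, here's a toy sketch of the overflow mechanism. The numbers and reward function are entirely hypothetical (this is not the actual simulator): the point is just that a penalty accumulated in a fixed-width signed integer can wrap around, turning the worst possible behavior into the highest-scoring one.

```python
def wrap_int32(x: int) -> int:
    """Emulate two's-complement wraparound of a 32-bit signed integer."""
    return (x + 2**31) % 2**32 - 2**31

def landing_reward(impact_force: int) -> int:
    """Intended: reward = -force^2, i.e. harder impacts are always worse."""
    penalty = wrap_int32(impact_force * impact_force)  # overflows for large forces
    return -penalty

print(landing_reward(1_000))    # -1000000: gentle landing, mild penalty
print(landing_reward(50_000))   # +1794967296: the penalty overflows and
                                # a violent crash becomes the best score
```

No human pilot (or programmer) would probe that corner of the state space, but an optimizer searching for high reward finds it readily.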
Your point about not knowing what the specification for safe AI looks like is still well taken.
Formally verifying an entire project end to end is often not the ask.
Leslie Lamport has suggested that most of the value of formal methods lies in getting programmers to actually write down specifications. I believe something similar could hold here:
An anecdote from industry: Amazon actually budgets engineer hours to write TLA+, which is not used to formally verify implementations of algorithms, but does let programmers model-check their specifications, e.g. prove the absence of deadlocks. Speculatively, we might imagine improved formal methods tools and techniques that allow AI researchers to check that their specifications are sound, even while end-to-end verification remains intractable.
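For a sense of what that buys you, here is a minimal sketch, in Python rather than TLA+, of the kind of check TLC (the TLA+ model checker) automates: exhaustively enumerate every reachable state of a small, hypothetical two-process locking spec and flag any non-final state with no enabled transition, i.e. a deadlock.

```python
# Hypothetical spec: two processes acquire two locks in opposite order,
# the classic recipe for deadlock. We enumerate all reachable states and
# report any non-final state where nothing can move.
ORDER = {0: ("a", "b"), 1: ("b", "a")}  # the reversal is the bug

# A state is (pc0, pc1, holder_a, holder_b): pc counts a process's
# progress (0/1 locks held, 2 = holds both, 3 = done); holder_* is the
# id of the process holding that lock, or None if it is free.
INIT = (0, 0, None, None)

def enabled_moves(state):
    pc = state[:2]
    holders = {"a": state[2], "b": state[3]}
    for p in (0, 1):
        if pc[p] < 2:                      # still acquiring locks
            lock = ORDER[p][pc[p]]
            if holders[lock] is None:      # lock is free -> can acquire
                yield (p, "acquire", lock)
        elif pc[p] == 2:                   # holds both -> can finish
            yield (p, "release", None)

def apply_move(state, move):
    pc0, pc1, ha, hb = state
    p, kind, lock = move
    if kind == "acquire":
        if lock == "a":
            ha = p
        else:
            hb = p
    else:                                  # release both locks at once
        ha = None if ha == p else ha
        hb = None if hb == p else hb
    return (pc0 + (p == 0), pc1 + (p == 1), ha, hb)

def find_deadlocks():
    """Exhaustive search of the reachable state space."""
    seen, stack, deadlocks = set(), [INIT], []
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        moves = list(enabled_moves(state))
        if not moves and state[:2] != (3, 3):  # stuck before completion
            deadlocks.append(state)
        stack.extend(apply_move(state, m) for m in moves)
    return deadlocks

print(find_deadlocks())  # [(1, 1, 0, 1)]: each process holds one lock
                         # and waits forever for the other
```

The implementation code never has to be verified for the check to be useful: the deadlock is a property of the specification itself, which is exactly the level at which Amazon's TLA+ use operates.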
Overall, I'm still mildly pessimistic about formal verification being a high-impact career, largely for the reasons that you describe.
I don't remember where I initially read about it, but I found a survey paper that mentions it (and several other similar anecdotes), along with some citations:
https://arxiv.org/abs/1803.03453v1