A wayward math-muddler for bizarre designs, artificial intelligence options, and spotting trends no one wanted; articles on Medium as Anthony Repetto
Thank you for diving into the details! And, to be clear, I am not taking issue with any of Gibbard's proof itself - if you found an error in his arguments, that's your own victory, please claim it! Instead, what I point to is Gibbard's method of DATA-COLLECTION.
Gibbard pre-supposes that the ONLY data to be collected from voters is a SINGULAR election's List of Preferences. And, I agree with Gibbard in his conclusion, regarding such a data-set: "IF you ONLY collect a single election's ranked preferences, then YES, there is no way to avoid strategic voting, unless you have only one or two candidates."
However, that Data-Set Gibbard chose is NOT the only option. In a Bank, they detect Fraudulent Transactions by placing each customer's 'lifetime profile' into a Cluster (cluster analysis). When that customer's behavior jumps OUTSIDE of their cluster, you raise a red flag of fraud. This is empirically capable of detecting what is mathematically equivalent to 'strategic voting'.
So, IF each voter's 'lifetime profile' were fed into a Variational Auto-Encoder, to be placed within some Latent Space, within a Cluster of similarly-minded folks, THEN we can see whether they are being strategic in any particular election: if their list of preferences jumps outside of their cluster, they are lying about their preferences. Ignore those votes, and the ballot is safely protected from manipulation.
Do you see how this does not depend upon Gibbard being right or wrong in his proof? And do you see that I do NOT disagree with his conclusion that "strategy-proof voting with more than two candidates is not possible IF you ONLY collect a SINGLE preference-list as your one-time ballot"?
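A minimal sketch of that pipeline, assuming made-up data, with PCA and k-means standing in for the trained VAE encoder and the clustering step (every size, name, and the flagging threshold here is illustrative, not a real system):

```python
import numpy as np
from sklearn.decomposition import PCA   # stand-in for a trained VAE encoder
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical data: each row is one voter's 'lifetime profile' --
# their preference-lists from many past elections, flattened into one vector.
n_voters, n_features = 1000, 40
profiles = rng.normal(size=(n_voters, n_features))

# 1. Embed every profile in a low-dimensional latent space.
#    (The VAE encoder would go here; PCA just keeps the sketch runnable.)
encoder = PCA(n_components=5).fit(profiles)
latent = encoder.transform(profiles)

# 2. Group voters into clusters of similarly-minded folks.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit(latent)

# 3. When a new election arrives, fold each voter's submitted ranking into
#    their profile, embed it the same way, and measure how far it lands
#    from that voter's own cluster centroid.
updated_profiles = profiles + rng.normal(scale=0.3, size=profiles.shape)  # fake update
new_latent = encoder.transform(updated_profiles)
own_centroid = clusters.cluster_centers_[clusters.labels_]
distance = np.linalg.norm(new_latent - own_centroid, axis=1)

# 4. Ballots that jump far outside the voter's historical cluster get flagged
#    as possible strategic votes (the 95th-percentile cutoff is arbitrary).
threshold = np.percentile(distance, 95)
flagged = distance > threshold
print(f"{flagged.sum()} of {n_voters} ballots flagged for review")
```

The same centroid-distance trick is the bank-fraud analogy: the customer's history defines their cluster, and behavior that leaps outside that cluster raises the red flag.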
"From this informal perspective, clarity and conciseness matters far more than empirical robustness."
Then you are admitting my critique: "Your community uses excuses to allow themselves a claim of epistemic superiority, when they are actually using a technique which is inadequate and erroneous." Yup. Thanks for showing me and the public your community's justification for using wrong techniques while claiming you're right. Screenshot done!
Oh, you entirely missed my purpose: I was sharing this with your community, as a courtesy. I publish on different newsletters online, and I wrote for that audience ABOUT your community. And the fact that you're not interested in learning about Dirichlet, when it's industry-standard (demonstrating its superiority empirically, not with anecdotes you find palatable), speaks for itself. So, no, I don't plan to present myself in a way you approve of, as a pre-requisite to you noticing that Bayes is out-dated by 260 years of improvements. Dirichlet, logically, would NOT have been published and adopted in 1973 and since, if it were in fact inferior to Bayes.
You evidence the same spurious assumptions and lack of attention to core facts - Dirichlet is an improvement, obviously, by coming along later and being adopted generally. I also addressed the key information which Dirichlet provides, which Bayes' Theorem is incapable of generating: a Likelihood Distribution across possible Populations, and the resultant Confidence Interval, as well as weighting your estimate to Minimize the Cost of being Wrong. That is all key, valuable information which Bayes' Theorem will not give you on its own. When Scott Alexander claims "Bayes' Theorem; all else is commentary" he leaves out critical, incomparable improvements in our understanding.
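To make that concrete, here is a minimal sketch with made-up poll counts: a Dirichlet posterior over the population's true preference-shares gives you the full distribution across possible populations, an interval estimate, and a loss-sensitive point estimate, none of which a bare application of Bayes' Theorem to a single hypothesis hands you. The prior, the counts, and the loss functions are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical poll: how many respondents preferred each of three options.
counts = np.array([48, 37, 15])

# Dirichlet posterior over the true population shares,
# starting from a flat Dirichlet(1, 1, 1) prior.
alpha = counts + 1
posterior = stats.dirichlet(alpha)

# A full likelihood distribution across possible populations,
# not a single point estimate:
samples = posterior.rvs(size=100_000, random_state=rng)

# Posterior mean and a 95% interval for Option A's true share.
mean_a = alpha[0] / alpha.sum()
lo, hi = np.percentile(samples[:, 0], [2.5, 97.5])
print(f"Option A: mean {mean_a:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")

# Weighting the estimate to minimize the cost of being wrong:
# under squared-error loss the best report is the posterior mean;
# under absolute-error loss it is the posterior median.
print(f"median (minimizes absolute-error cost): {np.median(samples[:, 0]):.3f}")
```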
Alistair, I regret to inform you that after four years of Leverage's Anti-Avoidance Training, the cancer has spread: the EA Community at large is now repeatedly aghast that outsiders are noticing their subtle rug-sweeping of sexual harassment and dismissal of outside critique. In barely a decade, the self-described rats have swum 'round a stinking sh!p. I'm still amazed that, for the last year, as I kept bringing forth concerns and issues, the EA members each insisted 'no problems here, no, never, we're always so perfect....' Yep. It shows.
This aged well... and it reads like what ChatGPT would blurt, if you asked it to "sound like a convincingly respectful and calm cult with no real output." Your 'Anti-Avoidance,' in particular, is deliciously Orwellian. "You're just avoiding the truth, you're just confused..."
I was advocating algal and fish farming, including bubbling air into the water and sopping up the fish poop with crabs and bivalves - back in 2003. Spent a few years trying to tell any marine biologist I could. Fish farming took off, years later, and recently they realized you should bubble air and catch the poop! I consider that a greater real-world accomplishment than your 'training 60+ people on anti-avoidance of our pseudo-research.' Could you be more specific about Connection Theory, and the experimental design of the research you conducted and pre-registered, to determine that it was correct? I'm sure you'd have to get into some causality-weeds, so those experimental designs are going to be top-notch, right? Or is it just Geoff writing with the rigor of Freud on a Slack he deleted?
Third Generation Bay Area, here - and, if you aren't going to college at Berkeley or swirling in the small cliques of SF among 800,000 people living there, yeah, not a lot of polycules. I remember when Occupy oozed its way through here, leaving a residue of 'say-anything polyamorists' who were excited to share their 'pick-up artist' techniques when only other men were present. "Gurus abuse naïve hopefuls for sex" has been a recurring theme of the Bay, every few decades, but the locals don't buy it.
I expect that, once AGI exists, and flops, the spending upon AGI researchers will taste sour. The robots with explosives, and the surveillance cameras across all of China, really were the bigger threats than AGI X-risk; you'll only admit it once AGI fails to outperform narrow superintelligences. The larger and more multi-modal our networks become, the more consistently they suffer from "modal collapse": the 'world-model' of the network becomes so strongly self-reinforcing that ALL gradients from the loss-function end up solidifying the pre-existing world-model. Literally, AIs are already becoming smart enough to rationalize everything; they suffer from confirmation bias just like us. And that problem was already really bad by the time they trained GPT-4 - go check their leaked training regimen: they had to start over from scratch repeatedly, because the brain found excuses for everything and performance tanked without any hope of recovery. Your AGI will have to be re-run through training 10,000 times, before one of the brains isn't sure-it's-always-right-about-its-superstitions. Narrow makes more money, and responds better, faster, cheaper in war - there won't be any Nash Equilibrium which includes "make AGI", so the X-Risk is actually ZERO.
Pre-ChatGPT, I wrote the details on LessWrong: https://www.lesswrong.com/posts/Yk3NQpKNHrLieRc3h/agi-soon-but-narrow-works-better