
DPiepgrass

1014 karma

Bio

I'm a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I've had a chronic lack of interest in making money; instead I've developed an unhealthy interest in foundational software that free markets don't build because its effects would consist almost entirely of positive externalities.

I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).

Comments (151)

Also: critical feedback can be good. Even if painful, it can help a person grow. But downvotes communicate nothing to a commenter except "f**k you". So what are they good for? Text-based communication is already quite hard enough without them, and since this is a public forum I can't even tell if it's a fellow EA/rat who is voting. Maybe it's just some guy from SneerClub―but my amygdala cannot make such assumptions. Maybe there's a trick to emotional regulation, but I've never seen EA/rats work that one out, so I think the forum software shouldn't help people push other people's buttons.

I haven't seen such a resource. It would be nice.

My pet criticism of EA (forums) is that EAs seem a bit unkind, and that LWers seem a bit more unkind and often not very rationalist. I think I'm one of the most hardcore EA/rationalists you'll ever meet, but I often feel unwelcome when I dare to speak.

Like: 

  • I see somebody has a comment with -69 karma. An obvious outsider asking a question with some unfair assumptions about EA. Yes, it was brash and rude, but no one but me actually answered him.
  • I write an article (that is not critical of any EA ideas) and, after many revisions, ask for feedback. The first two people who come along downvote it, without giving any feedback. If you downvote an article with 104 points and leave, it means you dislike it or disagree. If you downvote an article with 4 points and leave, it means you dislike it, you want the algorithm to hide it from others, you want the author to feel bad, and you don't want them to know why. If you are not aware that it makes people feel bad, you're demonstrating my point.
  • I always say what I think is true and I always try to say it reasonably. But if it's critical of something, I often get a downvote instead of a disagree (often without comment).
  • I describe a pet idea that I've been working on for several years on LW (I built multiple web sites for it with hundreds of pages, published NuGet packages, the works). I think it works toward solving an important problem, but when I share it on LW the only people who comment say they don't like it, and sound dismissive. To their credit, they do try to explain to me why they don't like it, but they also downvote me, so I become far too distraught to try to figure out what they were trying to communicate.
  • I write a critical comment (hypocrisy on my part? Maybe, but it was in response to a critical article that simply assumes the worst interpretation of what a certain community leader said, and then spends many pages discussing the implications of the obvious truth of that assumption.) This one is weird: I get voted down to -12 with no replies, then after a few hours it's up to 16 or so. I understand this one―it was part of a battle between two factions of EA―but man, that whole drama was scary. I guess that's just reflective of Bay Area or American culture, but it's scary! I don't want scary!

Look, I know I'm too thin-skinned. I was once unable to work for an entire day due to a single downvote (I asked my boss to take it from my vacation days). But wouldn't you expect an altruist to be sensitive? So, I would like us to work on being nicer, or something. Now if you'll excuse me... I don't know how I'll get back into a working mood so I can get Friday's work done by Monday.

Okay, not a friendly audience after all! You guys can't say why you dislike it?

Story of my life... silent haters everywhere.

Sometimes I wonder, if Facebook groups had downvotes, would it be as bad, or worse? I mean, can EAs and rationalists muster half as much kindness as normal people for saying the kinds of things their ingroup normally says? It's not like I came in here insisting alignment is easy actually.

I only mentioned human consciousness to help describe an analogy; hope it wasn't taken to say something about machine consciousness.

I haven't read Superintelligence but I expect it contains the standard stuff―outer and inner alignment, instrumental convergence etc. For the sake of easy reading, I lean into instrumental convergence without naming it, and leave the alignment problem implicit as a problem of machines that are "too much" like humans, because

  • I think AGI builders have enough common sense not to build paperclip maximizers
  • Misaligned AGIs―that seem superficially humanlike but end up acting drastically pathological when scaled to ASI―are harder to describe, so instead I describe (by analogy) something similar: humans outside the usual distribution. I argue that psychopathy is an absence of empathy, so when AGIs surpass human ability it's way too easy to build a machine like that. (Indeed, I could've said, even normal humans can easily turn off their empathy with monstrous results―see: Nazis, Mao's CCP.)

I don't incorporate Yudkowsky's ideas because I found the List of Lethalities to be annoyingly incomplete and unconvincing, and I'm not aware of anything better (clear and complete) that he's written. Let me know if you can point me to anything.

My feature request for EA Forum is the same as my feature request for every site: you should be able to search within a user (i.e. a user's page should have a search box). This is easy to do technically; you just have to add the author's name as one of the words in the search index.

(Preferably do it in such a way that a normal post cannot do the same, e.g. you might put "foo authored this post" in the index as @author:foo, but if a normal post contains the text "@author:foo" then perhaps the index only ends up with @author (or author) and foo, while the full string is not in the index (or, if it is in the index, can only be found by searching with quotes a la Google: "@author:foo").)
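To illustrate the idea, here's a minimal sketch in Python (hypothetical names throughout; the forum's real search index surely works differently): index each post's words plus a synthetic @author:<name> token, and break up the literal string "@author:foo" if it appears inside a post body so it can't masquerade as real authorship.

```python
# Sketch only: a toy inverted index with a synthetic authorship token.
import re

def tokenize(text: str) -> list[str]:
    # Neutralize a literal "@author:foo" in post bodies by splitting it into
    # plain words ("author", "foo"), so only real authorship gets the token.
    text = re.sub(r"@author:(\w+)", r"author \1", text)
    return re.findall(r"\w+", text.lower())

def index_post(index: dict[str, set[int]], post_id: int, author: str, body: str) -> None:
    # Index the body's words plus the synthetic @author:<name> token.
    terms = tokenize(body) + [f"@author:{author.lower()}"]
    for term in terms:
        index.setdefault(term, set()).add(post_id)

def search(index: dict[str, set[int]], query: str, author: str | None = None) -> set[int]:
    # Intersect the posting lists for each query word, optionally restricted
    # to one author via the synthetic token.
    results = [index.get(t, set()) for t in tokenize(query)]
    if author:
        results.append(index.get(f"@author:{author.lower()}", set()))
    return set.intersection(*results) if results else set()

# Usage:
idx: dict[str, set[int]] = {}
index_post(idx, 1, "DPiepgrass", "A post about search features. @author:someoneelse")
index_post(idx, 2, "SomeoneElse", "Another post about search.")
print(search(idx, "search", author="DPiepgrass"))  # {1}
```

The point is just that per-author search falls out of the existing keyword-search machinery once authorship is indexed as an ordinary (but un-spoofable) term.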

I didn't see a message about kneecaps, or those other things you mentioned. Could you clarify? However, given Torres' history of wanton dishonesty ― I mean, prior to reading this article I had already seen Torres lying about EA ― and their history of posting under multiple accounts to the same platform (including sock puppets), if I see an account harassing Torres like that, I would (1) report the offensive remark and (2) wonder if Torres themself controls that account.

Sorry if I sounded redundant. I'd always thought of "evaporative cooling of group beliefs" like "we start with a group with similar values/goals/beliefs; the least extreme members gradually get disengaged and leave; which cascades into a more extreme average that leads to others leaving"―very analogous to evaporation. I might've misunderstood, but SBF seemed to break the analogy by consistently being the most extreme, and actively and personally pushing others away (if, at times, accidentally). Edit: So... arguably one can still apply the evaporative cooling concept to FTX, but I don't see it as an explanation of SBF himself.

What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?

You know what, I was reading Zvi's musings on Going Infinite...

Q: But it’s still illegal to mislead a bank about the purpose of a bank account.

Michael Lewis: But nobody would have cared about it.

He seems to not understand that this does not make it not a federal crime? That ‘we probably would not have otherwise gotten caught on this one’ is not a valid answer?

Similarly, Lewis clearly thinks ‘the money was still there and eventually people got paid back’ should be some sort of defense for fraud. It isn’t, and it shouldn’t be.

...

Nor was Sam a liar, in Lewis’s eyes. Michael Lewis continued to claim, on the Judging Sam podcast, that he could trust Sam completely. That Sam would never lie to him. True, Lewis said, Sam would not volunteer information and he would use exact words. But Sam’s exact words to Lewis, unlike the words he saw Sam constantly spewing to everyone else, could be trusted.

It’s so weird. How can the same person write a book, and yet not have read it?

And it occurred to me that all SBF had to do was find a few people who thought like Michael Lewis, and people like that don't seem rare. I mean, don't like 30% of Americans think that the election was stolen from Trump, or that the cases against Trump are a witch hunt, because Trump says so and my friends all agree he's a good guy (and they seek out pep talks to support such thoughts)? Generally the EA community isn't tricked this easily, but SBF was smarter than Trump and he only needed to find a handful of people willing to look the other way while trusting in his Brilliance and Goodness. And since he was smart (and overconfident) and did want to do good things, he needed no grand scheme to deceive people about that. He just needed people like Lewis who lacked a gag reflex at all the bad things he was doing.

Before FTX I would've simply assumed other EAs had a "moral gag reflex" already. Afterward, I think we need more preaching about that (and more "punchy" ways to hammer home the importance of things like virtues, rules, reputation and conscientiousness, even or especially in utilitarianism/consequentialism). Such preaching might not have affected SBF himself (since he cut so many corners in his thinking and listening), but someone in his orbit might have needed to hear it.

this almost confirms for me that FTX belongs on the list of ways EA and rationalist organizations can basically go insane in harmful ways,

I was confused by this until I read more carefully. This link's hypothesis is about people just trying to fit in―but SBF seemed not to try to fit in to his peer group! He engaged in a series of reckless and fraudulent behaviors that none of his peers seemed to want. From Going Infinite:

He had not been able to let Modelbot rip the way he’d liked—because just about every other human being inside Alameda Research was doing whatever they could to stop him. “It was entirely within the realm of possibility that we could lose all our money in an hour,” said one. One hundred seventy million dollars that might otherwise go to effective altruism could simply go poof. [...]

Tara argued heatedly with Sam until he caved and agreed to what she thought was a reasonable compromise: he could turn on Modelbot so long as he and at least one other person were present to watch it, but should turn it off if it started losing money. “I said, ‘Okay, I’m going home to go to sleep,’ and as soon as I left, Sam turned it on and fell asleep,” recalled Tara. From that moment the entire management team gave up on ever trusting Sam.

Example from Matt Levine:

There is an anecdote (which has been reported before) from the early days of Alameda Research, the crypto trading firm that Bankman-Fried started before his crypto exchange FTX, the firm whose trades with FTX customer money ultimately brought down the whole thing. At some point Alameda lost track of $4 million of investor money, and the rest of the management team was like “huh we should tell our investors that we lost their money,” and Bankman-Fried was like “nah it’s fine, we’ll probably find it again, let’s just tell them it’s still here.” The rest of the management team was horrified and quit in a huff, loudly telling the investors that Bankman-Fried was dishonest and reckless.

It sounds like SBF drove away everyone who couldn't stand his methods until only people who tolerated him were left. That's a pretty different way of making an organization go insane.

It seems like this shouldn't be an EA failure mode when the EA community is working well. Word should have gotten around about SBF's shadiness and recklessness, leading to some kind of investigation before FTX reached the point of collapse. The first person I heard making the case against SBF post-collapse was an EA (Rob Wiblin?), but we were way too slow. Of course it has been pointed out that many people who worked with / invested in FTX were fooled as well, so what I wonder about is: why weren't there any EA whistleblowers on the inside? Edit: was it that only four people plus SBF knew about FTX's worst behaviors, and the chance of any given person whistle-blowing in a situation like that is under 25%ish? But certainly more people than that knew he was shady. Edit 2: I just saw important details on who knew what. P.S. I will never get used to the EA/Rat tendency to downvote earnest comments, without leaving comments of their own...
