
I was under the impression that the original intent with Hanania at Manifest 2023 was a similar sort of diplomatic-relations thing: he was going to debate Destiny, but that debate was cancelled because of political pressure.

The language shown in this tweet says:

If the Grantee becomes a Withdrawn Limited Partner, then unless, within 60 days following its applicable Withdrawal Event, the Grantee ... duly executes and delivers to the Partnership a general release of claims against the Partnership and the other Partners with regard to all matters relating to the Partnership up to and including the time of such Withdrawal Event, such Grantee's Units shall be cancelled and reduced to zero ...

It's a trick!

Departing OpenAI employees are then offered a general release which meets the requirements of this section and also contains additional terms. What a departing OpenAI employee needs to do is to have their own lawyer draft a general release which meets the requirements set forth, then execute and deliver it themselves. Signing the separation agreement is a mistake, and rejecting the separation agreement without providing your own general release is also a mistake.

I could be misunderstanding this; I'm not a lawyer, just a person reading carefully. And there's a lot more agreement text that I don't have screenshots of. Still, I think the practical upshot is that departing OpenAI employees may be being tricked, and this particular trick seems defeatable to me. Anyone leaving OpenAI really needs a good lawyer.

According to Kelsey's article, OpenAI employees are coerced into signing lifelong nondisparagement agreements, which also forbid discussion of the nondisparagement agreements themselves, under threat of losing all of their equity.

This is intensely contrary to the public interest, and possibly illegal. Enormous kudos for bringing it to light.

In a legal dispute initiated by an OpenAI employee, the most important thing would probably be what representations were previously made about the equity. That's hard for me to evaluate, but if the units were presented as compensation and the nondisparagement condition wasn't disclosed, then rescinding those benefits could be a breach of contract. However, I'm not sure whether this would apply if cancellation was merely threatened but never actually carried out.

CA GOV § 12964.5 and 372 NLRB No. 58 also offer some angles by which former OpenAI employees might fight this in court.

CA GOV § 12964.5 talks specifically about disclosure of "conduct that you have reason to believe is unlawful." Generically criticizing OpenAI as pursuing unsafe research would not qualify unless (the speaker believes) it rises to the level of criminal endangerment, or similar. Copyright issues would *probably* qualify. Workplace harassment would definitely qualify.

(No OpenAI employees have alleged any of these things publicly, to my knowledge.)

372 NLRB No. 58 nominally invalidates separation agreements that contain nondisparagement clauses, and that restrict discussion of the terms of the separation agreement itself. However, it's specifically focused on the effect on collective bargaining rights under the National Labor Relations Act, which could make it inapplicable.

Interesting. I think I can tell an intuitive story for why this would be the case, but I'm unsure whether that intuitive story would predict all the details of which models recognize and prefer which other models.

As an intuition pump, consider asking an LLM a subjective multiple-choice question, then taking that answer and asking a second LLM to evaluate it. The evaluation task implicitly asks the evaluator to answer the same question itself, then cross-check the results. If the two LLMs are instances of the same model, their answers will be more strongly correlated than if they're different models; so an evaluator is more likely to mark an answer correct when it came from the same model. This would also happen if you substituted two humans, or two sittings of the same human, in place of the LLMs.
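Here's a toy simulation of that intuition pump. The "models" here are just per-question answer distributions, and every number is made up; the point is only that sampling the answerer and the evaluator from the same distribution produces higher agreement than sampling them from two different ones.

```python
import random

random.seed(0)

def make_model(n_questions, n_choices, bias):
    """A toy 'model': a per-question distribution over answer choices.
    Higher bias means a more opinionated (more deterministic) model."""
    prefs = []
    for _ in range(n_questions):
        favorite = random.randrange(n_choices)
        weights = [1.0] * n_choices
        weights[favorite] += bias
        prefs.append(weights)
    return prefs

def answer(model, q):
    """Sample an answer to question q from the model's distribution."""
    weights = model[q]
    return random.choices(range(len(weights)), weights=weights)[0]

def approval_rate(answerer, evaluator, n_questions, trials=20_000):
    """The evaluator 'marks correct' iff its own sampled answer matches."""
    hits = 0
    for _ in range(trials):
        q = random.randrange(n_questions)
        hits += answer(answerer, q) == answer(evaluator, q)
    return hits / trials

N_Q, N_C = 50, 4
model_a = make_model(N_Q, N_C, bias=5.0)
model_b = make_model(N_Q, N_C, bias=5.0)

print("same model:     ", approval_rate(model_a, model_a, N_Q))  # ~0.48
print("different model:", approval_rate(model_a, model_b, N_Q))  # ~0.25
```

The gap persists for any opinionated-but-stochastic answer distributions; whether this toy picture predicts the detailed pattern of which real models prefer which is exactly what I'm unsure about.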

[From the LW version of this post]

Me:

This comment appears transparently intended to increase the costs associated with having written this post, and to be a continuation of the same strategy of attempting to suppress true information.

Martín Soto:

This post literally strongly misrepresents my position in three important ways¹. And these points were purposefully made central in my answers to the author, who kindly asked for my clarifications but then didn't include them in her summary and interpretation. This can be checked by contrasting her summary of my position with the actual text linked to, in which I clarified how my position wasn't the simplistic one here presented.

Are you telling me I shouldn't flag that my position has been importantly misrepresented? On LessWrong? And furthermore on a post that will be seen by way more people than my original text?

¹ I mean the three latter in my above comment, since the first (the hyperbolic presentation) is worrisome but not central.

Me:

You say that the quoted bits are misrepresentations, but I checked your writing and they seem like accurate summaries. You should flag that your position has been misrepresented iff that is true. But you haven't been misrepresented, and I don't think that you think you've been misrepresented.

I think you are muddying the waters on purpose, and making spurious demands on Elizabeth's time, because you think clarity about what's going on will make people more likely to eat meat. I believe this because you've written things like:

One thing that might be happening here, is that we're speaking at different simulacra levels

Source comment. I'm not sure how familiar you are with local usage of the simulacrum levels phrase/framework, but in my understanding of the term, all but one of the simulacrum levels are flavors of lying. You go on to say:

Now, I understand the benefits of adopting the general adoption of the policy "state transparently the true facts you know, and that other people seem not to know". Unfortunately, my impression is this community is not yet in a position in which implementing this policy will be viable or generally beneficial for many topics.

The front-page moderation guidelines on LessWrong say "aim to explain, not persuade". This is already the norm. The norms of LessWrong can be debated, but not in a subthread on someone else's post on a different topic.

Martín Soto:

Yes, your quotes show that I believe (and have stated explicitly) that publishing posts like this one is net-negative. That was the topic of our whole conversation. That doesn't imply that I'm commenting to increase the costs of these publications. I tried to convince Elizabeth that this was net-negative, and she completely ignored those qualms, and that's epistemically respectable. I am commenting mainly to avoid my name from being associated with some positions that I literally do not hold.

I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries. If you don't provide object-level reasons why the things I pointed out in my above comment are wrong, then I can do nothing with this information. (To be clear, I do think the screenshots are fairly central parts of my clarifications, but her summaries misrepresent and directly contradict other parts of them which I had also presented as central and important.)

I do observe that providing these arguments is a time cost for you, or fixing the misrepresentations is a time cost for Elizabeth, etc. So the argument "you are just increasing the costs" will always be available for you to make. And to that the only thing I can say is... I'm not trying to get the post taken down, I'm not talking about any other parts of the post, just the ones that summarize my position.

Me:

I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries.

I'm looking at those quote-response pairs, and just not seeing the mismatch you claim there to be. Consider this one:

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.

Of course, my position is not as hyperbolic as this.

This only asserts that there's a mismatch; it provides no actual evidence of one. Next up:

his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread

In my original answers I address why this is not the case (private communication serves this purpose more naturally).

Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there would have been no public discussion of them (i.e., public discussion would be suppressed). I myself wouldn't know about the results. The probability of a larger follow-up study would be greatly reduced. I personally would have less information about how widespread problems are.

(There are other subthreads on the LW version; I quoted this one because I was a participant, and I do not believe the other subthreads substantially change the interpretation.)


So, Nonlinear-affiliated people are here in the comments disagreeing, promising proof that important claims in the post are false. I fully expect that Nonlinear's response, and much of the discussion, will be predictably shoved down the throat of my attention, so I'm not too worried about missing the rebuttals, if rebuttals are in fact coming.

But there's a hard-won lesson I've learned by digging into conflicts like this one, which I want to highlight, which I think makes this post valuable even if some of the stories turn out to be importantly false:

If a story is false, the fact that the story was told, and who told it, is valuable information. Sometimes it's significantly more valuable than if the story was true. You can't untangle a web of lies by trying to prevent anyone from saying things that have falsehoods embedded in them. You can untangle a web of lies by promoting a norm of maximizing the available information, including indirect information like who said what.

Think of the game Werewolf as an analogy. Some moves are Villager strategies, and some moves are Werewolf strategies, in the sense that if you notice someone using a strategy, you should make a Bayesian update in the direction of thinking that person is a Villager or a Werewolf, respectively.
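A minimal numeric version of that update, with invented probabilities:

```python
# Bayes update on observing a Werewolf-flavored strategy.
# All probabilities here are invented for illustration.
p_wolf = 0.25                      # prior: 1 in 4 players is a Werewolf
p_move_given_wolf = 0.60           # Werewolves often make this move
p_move_given_villager = 0.20       # Villagers rarely do

p_move = p_move_given_wolf * p_wolf + p_move_given_villager * (1 - p_wolf)
posterior = p_move_given_wolf * p_wolf / p_move

print(f"P(Werewolf | move observed) = {posterior:.2f}")  # 0.50, up from 0.25
```

Telling a story that turns out false is one such move, and "who told it" is exactly the evidence the update runs on.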

The existence of these meta-analyses is much less convincing than you think. First, because a study of the effect of sodium reduction on blood pressure and a study of the effect of antihypertensive medications don't combine into a valid estimate of the effect of sodium reduction on a mostly-normotensive population.

But second, because the meta-analyses are themselves mixed. A 2016 meta-meta-analysis of supposedly systematic meta-analyses of sodium reduction found 5 in favor, 3 against, and 6 inconclusive, and found evidence of biased selective citation.
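To make the first point concrete, here's a toy simulation; every number and the dose-response mechanism are invented, and it only illustrates why an effect chained through blood pressure need not transfer to a mostly-normotensive population.

```python
import random

random.seed(1)

def mean_benefit(hypertensive_frac, n=100_000):
    """Toy model, all numbers invented: sodium reduction lowers systolic BP
    by ~3 mmHg, but lowering BP only helps for the portion above 140 mmHg."""
    total = 0.0
    for _ in range(n):
        if random.random() < hypertensive_frac:
            bp = random.gauss(155, 10)   # hypertensive
        else:
            bp = random.gauss(120, 10)   # normotensive
        drop = 3.0
        total += max(0.0, min(drop, bp - 140.0))  # mmHg of harmful BP removed
    return total / n

print("trial-like population (80% hypertensive):", round(mean_benefit(0.8), 2))
print("general population (20% hypertensive):   ", round(mean_benefit(0.2), 2))
```

Under these assumptions the per-person benefit estimated in the first population overstates the second by several-fold, before counting any harms at all.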

I strongly disagree with the claim that sodium reduction does more good than harm; I think interventions to reduce sodium intake directly harm the people affected. This is true everywhere, but especially true in poorer countries with hot climates, where sodium-reduction programs have the greatest potential for harm.

(This is directly contrary to the position of the scientific establishment. I am well aware of this.)

The problem is that sodium is a necessary nutrient, but required intake varies significantly between people and with temperature, because sweat carries away roughly 1 g of salt per liter. That's why people have a dedicated taste receptor for it, and why they sometimes crave it and at other times find it aversive.

If you sweat a lot and don't consume salt, you will become lethargic; if you drink something with salt in it, you'll immediately bounce back. If you're a manual laborer, and someone sneakily removes some salt from your diet, you'll either compensate by getting more salt elsewhere, or your productive capacity will drop.
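Some back-of-envelope arithmetic on that (the sweat volumes are rough guesses; the 1 g/L figure is from above):

```python
# Rough daily salt budget (all figures approximate).
salt_per_liter_sweat = 1.0   # g of salt lost per liter of sweat

scenarios = {
    "desk job, temperate climate": 0.5,  # liters of sweat per day (guess)
    "manual labor, hot climate":   8.0,  # liters of sweat per day (guess)
}
guideline = 5.0  # g/day, a typical public-health salt target

for label, liters in scenarios.items():
    loss = liters * salt_per_liter_sweat
    print(f"{label}: ~{loss:.1f} g/day lost to sweat "
          f"(guideline allows {guideline} g/day total)")
```

For the laborer, sweat losses alone can exceed the entire guideline intake, which is exactly the regime where reduction programs bite hardest.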

If you look at the published studies on sodium through this lens, you will find that they are universally shoddy. Most are observational but measure sodium intake via urine, causing them to be confounded by exercise. Of those that have interventions, basically all of them start by removing people's ability to self-regulate. I don't think I've seen any that check for negative effects not related to hypertension, but I know the negative effects are there because I can remove the salt from my own diet and experience them.
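Here is a sketch of that exercise confound, with all parameters invented: everyone in the simulation has identical salt intake, but heavier exercisers excrete less sodium in urine and are healthier for reasons unrelated to salt, so measured "sodium intake" ends up correlated with bad outcomes anyway.

```python
import random

random.seed(2)

outcomes = {"low urinary sodium": [], "high urinary sodium": []}

for _ in range(50_000):
    exercise = random.random()            # hours/day, uniform 0..1
    intake = 4.0                          # g/day of salt: IDENTICAL for everyone
    urinary = intake - 2.0 * exercise     # sweat diverts salt away from urine
    # Health depends only on exercise here, not on salt at all.
    bad = random.random() < 0.30 - 0.20 * exercise
    bucket = "high urinary sodium" if urinary > 3.0 else "low urinary sodium"
    outcomes[bucket].append(bad)

for bucket, results in outcomes.items():
    print(f"{bucket}: {sum(results) / len(results):.1%} bad outcomes")
```

The "high urinary sodium" group comes out roughly ten percentage points worse despite constant intake; an observational study keyed on urinary sodium would read that as harm from salt.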

Props for investigating and doing quantitative analysis. If you do proceed from this intermediate report to a deep-dive report or an intervention project, I hope you'll consider the negatives that the academic research thus far has swept under the rug. I think a properly-conducted RCT, one that reduced sodium intake in a vulnerable population and then accurately reported the harms experienced, could have a significant positive impact.

People hate being taxed for doing things they like

It's much worse than that; in hotter climates, salt isn't a luxury, it's basic sustenance. Gandhi wasn't being figurative when he said "Next to air and water, salt is perhaps the greatest necessity of life."

My understanding is that they strongly prefer you do it between 5 and 7 pm in your local timezone, so that responding officers nominally working a 9-5 schedule can collect overtime payments.
