This is a special post for quick takes by Ben_West🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Animal Justice Appreciation Note

Animal Justice et al. v. A.G. of Ontario (2024) was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it.

Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!

Marcus Daniell appreciation note

@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!

GraceAdams🔸
I was lucky enough to see Marcus play this year at the Australian Open, and have pledged alongside him! Marcus is so hardworking, both in tennis and in his work at High Impact Athletes! Go Marcus!!!
NickLaing
New Zealand let's go!

First in-ovo sexing in the US

Egg Innovations announced that they are "on track to adopt the technology in early 2025." Approximately 300 million male chicks are ground up alive in the US each year (since only female chicks are valuable), and in-ovo sexing would prevent this.

The United Egg Producers (UEP) originally promised to eliminate male chick culling by 2020; needless to say, they didn't keep that commitment. But better late than never!

Congrats to everyone working on this, including @Robert - Innovate Animal Ag, who founded an organization devoted to pushing this technology.[1]

  1. ^

    Egg Innovations says they can't disclose details about who they are working with for NDA reasons; if anyone has more information about who deserves credit for this, please comment!

Julia_Wise🔸
For others who were curious about what time difference this makes: it looks like sex identification is possible at 9 days after the egg is laid, vs. 21 days for the egg to hatch (plus an additional ~2 days between fertilization and the laying of the egg). Chicken embryonic development is really fast, with some stages measured in hours rather than days.
[anonymous]
I asked Google when chicken embryos start to feel pain and this was the first result (i.e. I didn't look hard and I didn't anchor on a figure):
Gina_Stuessy
How many chicks per year will Egg Innovations' change save? (The announcement link is blocked for me.)
[anonymous]
This interview with the CEO suggests that Egg Innovations is just in the laying (not broiler) business and that each hen produces ~400 eggs over her lifetime. So this will save ~750,000 chicks a year?
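One way to reconstruct that arithmetic (purely back-of-the-envelope; the ~300 million eggs/year figure below is an assumption chosen to make the numbers cohere, not something stated in the interview): in a steady-state flock, each new hen hatched implies roughly one culled male sibling, so

$$\frac{300{,}000{,}000 \text{ eggs/year}}{400 \text{ eggs per hen's lifetime}} = 750{,}000 \text{ new hens/year} \approx 750{,}000 \text{ male chicks spared/year}$$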
Ben_West🔸
I don't think they say, unfortunately.
Nathan Young
Wow this is wonderful news.

Sam Bankman-Fried's trial is scheduled to start October 3, 2023, and Michael Lewis's book about FTX comes out the same day. My hope and expectation is that neither will be focused on EA,[1] but several people have recently asked me whether they should prepare anything, so I wanted to quickly record my thoughts.

The Forum feels like it's in a better place to me than when FTX declared bankruptcy: the moderation team at the time was Lizka, Lorenzo, and myself, but it is now six people, and they've put in a number of processes to make it easier to deal with a sudden growth in the number of heated discussions. We have also made a number of design changes, notably to the community section.

CEA has also improved our communications and legal processes so we can be more responsive to news, if we need to (though some of the constraints mentioned here are still applicable).

Nonetheless, I think there’s a decent chance that viewing the Forum, Twitter, or news media could become stressful for some people, and you may want to preemptively create a plan for engaging with that in a healthy way. 

  1. ^

    This market is thinly traded but is currently predicting that Le

... (read more)

My hope and expectation is that neither will be focused on EA

I'd be surprised [p<0.1] if EA were not a significant focus of the Michael Lewis book – but I agree that it's unlikely to be the major topic. Many leaders at FTX and Alameda Research are closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a storytelling element. There are Manifold prediction markets on whether the book will mention 80,000 Hours (74%), Open Philanthropy (74%), and GiveWell (80%), but these markets aren't traded much and are not very informative.[1]

This video, titled The Fake Genius: A $30 BILLION Fraud (2.8 million views, posted 3 weeks ago), might give a glimpse of how EA could be handled. The video touches on EA but isn't centred on it. It discusses the role EAs played in motivating SBF to pursue earning to give, and in starting Alameda Research and FTX. It also points out that, after the fallout at Alameda Research, 'higher-ups' at CEA were warned about SBF but supposedly ignored the warnings. Overall, the video is mainly interested in the mechanisms of how the suspected ... (read more)

Yeah, unfortunately I suspect that "he claimed to be an altruist doing good! As part of this weird framework/community!" is going to be a substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than "he was doing criminal things in crypto" (which I suspect is just not that interesting on its own at this point, even at such a large scale).

The Panorama episode briefly mentioned EA. Peter Singer spoke for a couple of minutes, and EA was mainly viewed as a charity that would be missing out on money. There seemed to be a lot more interest in the internal discussions within FTX, the crypto drama, the politicians, celebrities, etc.

Maybe Panorama is an outlier, but potentially EA is just not that interesting to most people, or seems too complicated to explain if you only have an hour.

Yeah, I was interviewed for a podcast by a Canadian station on this topic (because a Canadian hedge fund was very involved). IIRC they had 6 episodes but dropped the EA angle because it was too complex.

Sean_o_h
Good to know, thank you.

Agree with this, and also with the point below that the EA angle is kind of too complicated to be super compelling for a broad audience. I thought this New Yorker piece's discussion (which involved EA a decent amount, in a way I thought was quite fair -- https://www.newyorker.com/magazine/2023/10/02/inside-sam-bankman-frieds-family-bubble) might give a sense of magnitude (though the NYer audience is going to be more interested in these sorts of nuances than most).

The other factors, I think, are: (1) to what extent there are vivid new tidbits or revelations in Lewis's book that relate to EA, and (2) the drama around Caroline Ellison and other witnesses at trial and the extent to which that is connected to EA. My guess is that the drama around the cooperating witnesses will seem very interesting on a human level, though I don't necessarily think that will point towards the effective altruism community specifically.

quinn
Michael Lewis wouldn't do it as a gotcha/sneer, but this is a reason I'll be upset if Adam McKay ends up with the movie. 

Update: the court ruled SBF can't make reference to his philanthropy

Ben_West🔸
Yeah, "touches on EA but isn't centred on it" is my modal prediction for how major stories will go. I expect that more minor stories (e.g. the daily "here's what happened on day n of the trial" story) will usually not mention EA. But obviously it's hard to predict these things with much confidence.

The Forum feels like it's in a better place to me than when FTX declared bankruptcy: the moderation team at the time was Lizka, Lorenzo, and myself, but it is now six people, and they've put in a number of processes to make it easier to deal with a sudden growth in the number of heated discussions. We have also made a number of design changes, notably to the community section.

This is a huge relief to hear. I noticed some big positive differences, but I couldn't tell where from. Thank you.

If I understand this correctly, maybe not in the trial itself:

Accordingly, the defendant is precluded from referring to any alleged prior good acts by the defendant, including any charity or philanthropy, as indicative of his character or his guilt or innocence.

I guess technically the prosecution could still bring it up.

Ben_West🔸
I hadn't realized that; thanks for sharing.
quinn
(I forgot to tell JP and Lizka at EAG in NY a few weeks ago, but now's as good a time as any): can my profile karma total be two numbers, one for community and one for other stuff? I don't want a reader to think my actual work is valuable to people in proportion to my EA Forum karma; as far as I can tell, 3-5x as much of my karma is community-sourced as comes from my object-level posts. People should look at my profile as "this guy procrastinates through PVP on social media like everyone else, he should work harder on things that matter".
Ben_West🔸
Yeah, I kind of agree that we should do something here; maybe the two-dimensional thing you mentioned, or maybe community karma should count less/not at all. Could you add a comment here?
Tristan Williams
I could see a number of potentially good solutions here, but I think "not at all" is possibly not the greatest idea. Creating a separate community karma could lead to a sort of system of social clout that may not be desirable, but I also think having no way to signal who has and has not been a major contributor on community topics in the past would be more of a failure mode, because I often use karma to get a deeper sense of how to think about the claims in a given comment/post.
quinn
There would be some UX ways to make community clout feel lower status than the other clout. I agree with you that having community clout means more investment / should be preferred over a new account, which for all you know is a drive-by dunk/sneer after wandering in from Twitter. I'll cc this to my feature request in the proper thread.
Michelle_Hutchinson
Thank you for the prompt.

Thoughts on the OpenAI Board Decisions

A couple of months ago I remarked that Sam Bankman-Fried's trial was scheduled to start in October and that people should prepare for EA to be in the headlines. It turned out that his trial did not actually generate much press for EA, but a month later EA is again making news as a result of recent OpenAI board decisions.

A couple quick points:

  1. It is often the case that people's behavior is much more reasonable than what is presented in the media. It is also sometimes the case that the reality is even stupider than what is presented. We currently don't know what actually happened, and should hold multiple hypotheses simultaneously.[1]
  2. It's very hard to predict the outcome of media stories. Here are a few takes I've heard; we should consider that any of these could become the dominant narrative.
    1. Vinod Khosla (The Information): “OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence”
    2. John Thornhill (Financial Times): One entrepreneur who is close to OpenAI says the board was “incredibly principled and brave” to confront Altman, even if it
... (read more)

I've commented before that FTX's collapse had little effect on the average person's perception of EA.

Just for the record, I think the evidence you cited there was shoddy, and I think we are seeing continued references to FTX in basically all coverage of the OpenAI situation, showing that it did clearly have a lasting effect on the perception of EA. 

Reputation is lazily evaluated. Yes, if you ask a random person on the street what they think of you, they won't know; but when your decisions start influencing them, they will start getting informed, and we are seeing really very clear evidence that when people start getting informed, FTX is heavily influencing their opinion.

Ben_West🔸
Thanks! Could you share said evidence? The data sources I cited certainly have limitations; having access to more surveys etc. would be valuable.

The Wikipedia page on effective altruism mentions Bankman-Fried 11 times, and after/during the OpenAI story it was edited to include a lot of criticism, ~half of which was written after FTX (e.g. it quotes this tweet: https://twitter.com/sama/status/1593046526284410880).

It's the first place I would go if I wanted an independent take on "what's effective altruism?", and I expect many others to do the same.

There are a lot of recent edits on that article by a single editor, apparently a former NY Times reporter (the edit log is public). From the edit summaries, those edits look rather unfriendly, and the article as a whole feels negatively slanted to me. So I'm not sure how much weight I'd give that article specifically.

Sure, here are the top hits for "Effective Altruism OpenAI" (I did no cherry-picking; this was the first search term I came up with, and I am just going top to bottom). Each one mentions FTX in a way that pretty clearly matters for the overall article:

... (read more)
Ben_West🔸
Ah yeah, sorry: the claim of the post you criticized was not that FTX isn't mentioned in the press, but rather that those mentions don't seem to have actually impacted sentiment very much. I thought when you said "FTX is heavily influencing their opinion" you were referring to changes in sentiment, but possibly I misunderstood you – if you just mean "journalists mention it a lot" then I agree.
Habryka
You are also welcome to check Twitter mentions or do other analysis of people talking publicly about EA. I don't think this is a "journalist only" thing. I will take bets you will see a similar pattern.

I actually did that earlier, then realized I should clarify what you were trying to claim. I will copy the results in below, but even though they support the view that FTX was not a huge deal, I want to disclaim that this methodology doesn't seem like it actually gets at the important thing.

But anyway, my original comment text:

As a convenience sample, I searched Twitter for "effective altruism". The first reference to FTX doesn't come until tweet 36, which is a link to this. Honestly, it seems mostly like a standard anti-utilitarianism complaint; it feels like FTX isn't actually the crux.

In contrast, I see 3 e/acc-type criticisms before that, two "I like EA but this AI stuff is too weird" things (including one retweeted by Yann LeCun??), two "EA is tech-bro/not diverse" complaints, and one thing about Wytham Abbey.

And this (survey discussed/criticized here):

Habryka
I just tried to reproduce the Twitter datapoint, sorting by most recent (embedded tweet not shown). Most tweets are negative, mostly referring to the OpenAI thing. Among the top 10 I see three references to FTX. This continues to be quite remarkable, especially given that it's been more than a year and these tweets are quite short. I don't know what search you did to find a different pattern; maybe it was just random chance that I got many more than you did.
Ben_West🔸
I used the default sort ("Top"). (No opinion on which is more useful; I don't use Twitter much.)
Habryka
"Top" was mostly showing me tweets from people I follow, so my sense is it was filtered in a personalized way. I am not fully sure how it works, but it didn't seem like the right type of filter.
Ben_West🔸
Yeah, makes sense. Although I just tried the "Latest" sort and went through the top 40 tweets without seeing a reference to FTX/SBF. My guess is that this filter just (unsurprisingly) shows you whatever random thing people are talking about on Twitter at the moment, and it seems like the random EA-related thing of today is this, which doesn't mention FTX. Probably you need some longitudinal data for this to be useful.
Nathan Young
I would guess too that these two events have made it much easier to reference EA in passing; e.g., I think this article wouldn't have been written 18 months ago: https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362. So I think there is a real jump in notoriety once the journalistic class knows who you are. And they now know who we are. "EA, the social movement involved in the FTX and OpenAI crises" is not a good epithet.
trevor1
Upvoted; I'm grateful for the sober analysis. I think this is an oversimplification, though. This effect is largely caused by competing messages: the modern internet optimizes information for memetic fitness, e.g. by maximizing emotional intensity or persuasive effect, and people have so much routine exposure to stuff that leads their minds around in various directions that they get wary (or see having strong reactions to anything at all as immature, since a large portion of outcries on the internet are disproportionately from teenagers). This is the main reason why people take things with a grain of salt.

However, Overton windows can still undergo big and lasting shifts (this process could also be engineered deliberately long before generative AI emerged, e.g. via clown attacks which exploit social-status instincts to consistently hijack any person's impressions of any targeted concept). The 80,000 Hours podcast with Cass Sunstein covered how Overton windows are dominated by vague impressions of what ideas are acceptable or unacceptable to talk about (note: this podcast was from 2019). This dynamic could plausibly strangle EA's access to fresh talent, and AI safety's access to mission-critical policy influence, for several years (which would be far too long).

On the flip side, johnswentworth actually had a pretty good take on this: that the human brain is instinctively predisposed to over-focus on the risk of its in-group becoming unpopular among everyone else:
Ben_West🔸
Thanks for the helpful comment – I had not seen John's dialogue, and I think he is making a valid point. Fair point that the lack of impact might be due not to attention span but instead to things like competing messages. In case you missed it: Angelina Li compiled some growth metrics about EA here; they seem to indicate that FTX's collapse did not "strangle" EA (though it probably wasn't good).
Ben_West🔸
(Moderator comment)

Possible Vote Brigading

We have received an influx of people creating accounts to cast votes and comments over the past week, and we are aware that people who feel strongly about human biodiversity sometimes vote brigade on sites where the topic is being discussed. Please be aware that voting and discussion about some topics may not be representative of the normal EA Forum user base.

Huh, seems like you should just revert those votes, or turn off voting for new accounts. Seems better than just having people be confused about vote totals.

And maybe add a visible "new account" flag -- I understand not wanting to cut off existing users creating throwaways, but some people are using screenshots of forum comments as evidence of what EAs in general think.

Larks
Arguably also beneficial if you thought that we should typically make an extra effort to be tolerant of "obvious" questions from new users.
Ben_West🔸
Thanks! Yeah, this is something we've considered, usually in the context of trying to make the Forum more welcoming to newcomers, but this is another reason to prioritize that feature.
Peter Wildeford
I agree.
Ben_West🔸
Yeah, I think we should probably go through and remove people who are obviously brigading (e.g. tons of votes in one hour and no other activity), but I'm hesitant to do too much more retroactively. I think it's possible that next time we have a discussion with a passionate audience outside of EA we should restrict signups more, but that obviously has costs.
Habryka
When you purge user accounts, you automatically revoke their votes. I wouldn't be very hesitant to do that.
Ben_West🔸
How do you differentiate someone who is sincerely engaging and happens to have just created an account from someone who just wants their viewpoint to seem more popular and isn't interested in truth-seeking? Or are you saying we should just purge accounts that are clearly in the latter category, and accept that there will be some which are actually in the latter category but which we can't distinguish from the former?
Habryka
I think being like "sorry, we've reverted votes from recently signed-up accounts because we can't distinguish them" seems fine. Also, in my experience, abusive voting patterns are usually very obvious: people show up and only vote on one specific comment or post, or on the content of one specific user, or vote so fast that it seems impossible for them to have read the content they are voting on.
Bob Jacobs 🔸
How about: getting a lot of downvotes from new accounts doesn't decrease your voting power and doesn't mean your comments won't show up on the frontpage? Half a dozen of my latest comments have responded to HBDers. Since they get a notification, it doesn't surprise me that those comments get immediate downvotes, which hides them from the frontpage and subsequently means that they can easily decrease my voting power on this forum (it went from 5 karma for a strong upvote to now 4 karma for a strong upvote). Giving brigaders the power to hide things from the frontpage and decide which people have more voting power on this forum seems undesirable.

Note: I went through Bob's comments and think it likely they were brigaded to some extent. I didn't think they were in general excellent, but they certainly were not negative-karma comments. I strong-upvoted the ones that were below zero, which was about three or four.

I think it is valid to use the strong upvote as a means of countering brigades, at least where a moderator has confirmed there is reason to believe brigading is active on a topic. My position is limited to comments below zero, because the harmful effects of brigades suppressing good-faith comments from visibility and affirmatively penalizing good-faith users are particularly acute. Although there are mod-level solutions, Ben's comments suggest they may have some downsides and require time, so I feel a community corrective that doesn't require moderators to pull away from more important tasks has value.

I also think it is important for me to be transparent about what I did and accept the community's judgment. If the community feels that is an improper reason to strong upvote, I will revert my votes.

Edit: is to are

Peter Wildeford
I agree.
Larks
Could you set a minimum karma threshold (or account age or something) for your votes to count? I would expect even a low threshold like 10 would solve much of the problem.

Yeah, interesting. I think we have a lot of lurkers who never get any karma, and I don't want to entirely exclude them, but maybe some combo like "10 karma or your account has to be at least one week old" would be good.
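A minimal sketch of what that combined rule could look like, purely illustrative (the function name, field names, and thresholds are hypothetical, not the Forum's actual code):

```python
from datetime import datetime, timedelta

MIN_KARMA = 10                        # hypothetical karma threshold
MIN_ACCOUNT_AGE = timedelta(weeks=1)  # hypothetical account-age threshold

def votes_count(karma: int, account_created: datetime, now: datetime) -> bool:
    """Return True if this account's votes should count toward public totals."""
    return karma >= MIN_KARMA or (now - account_created) >= MIN_ACCOUNT_AGE
```

Under a rule like this, a zero-karma lurker's votes would start counting after a week, while established users would be unaffected.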

Peter Wildeford
Yeah I think that would be a really smart way to implement it.
pseudonym
Do the moderators think the effect of vote brigading reflects support from people who are pro-HBD or anti-HBD?
Ben_West🔸
(Moderator comment)

The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly:

Where he crossed the line was his decision to dox people who worked at Leverage or affiliated organizations by researching the people who worked there and posting their names to the EA forum

The user in question said this information came from searching LinkedIn for people who had listed themselves as having worked at Leverage and related organizations. 

This is not "doxing" and it’s unclear to us why Kerry would use this term: for example, there was no attempt to connect anonymous and real names, which seems to be a key part of the definition of “doxing”. In any case, we do not consider this to be a violation of our norms.

At one point Forum moderators got a report that some of the information about these people was inaccurate. We tried to get in touch with the then-anonymous user, and when we were unable to, we redacted the names from the comment. Later, the user noticed the change and replaced the names. One of CEA’s staff asked the user to encode the names to allow those people mor... (read more)

Cathleen
How I wish the EA Forum had responded

I've found that communicating feedback/corrections often works best when I write something that approximates what I would've wished the other person had originally written. Because of the need to sync more explicitly on a number of background facts and assumptions (and due to not having time for edits/revisions), my draft is longer than I think a moderator's comment would need to be, were the moderation team roughly on the same page about the situation. While I am the Cathleen being referenced, I have had minimal contact with Leverage 2.0 and the EA Forum moderation team, so I expect this draft to be imperfect in various ways, while still pointing at useful and important parts of reality.

Here I've made an attempt to rewrite what I wish Ben West had posted in response to Kerry's tweet thread:

The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly: We care a lot about ensuring that the EA Forum is a welcoming place where people are free to discuss important issues related to world improvement. While disagreement and criticism are an important part of that, we want to be careful not to allow abuse to take place on our platform, and so we take such reports seriously. After reviewing the situation, we have compiled the following response (our full review is still in process, but we wanted to share what we have so far while the issue is live):

While Leverage was not a topic that we had flagged as "sensitive" back in Sept 2019, when the then-anonymous user originally made his post, the subsequent discussion around the individuals and organizations who were part of the Leverage/Paradigm ecosystem prior to its dissolution in June 2019 has led it to be classified as a sensitive topic, to which we apply more scrutiny and are more diligent about enforcing our norms. In reviewing

To share a brief thought: the above comment gives me bad juju, because it puts a contested perspective into a forceful and authoritative voice, while being long enough that one might implicitly forget that this is a hypothetical authority talking.[1] So it doesn't feel to me like a friendly conversational technique. I would have preferred it to be in the first person.

  1. ^

García Márquez has a similar but longer thing going on in "The Handsomest Drowned Man in the World", where everything after "if that magnificent man had lived in the village" is a hypothetical.

Ben_West🔸
(fwiw I didn't mind the format and felt like this was Cathleen engaging in good faith.)
LarissaHeskethRowe
I would have so much respect for CEA if they had responded like this. 
Kerry_Vaughan

Startups aren't good for learning

I fairly frequently have conversations with people who are excited about starting their own project and, within a few minutes, convince them that they would learn less starting a project than they would working for someone else. I think this is basically the only opinion I have where I can regularly convince EAs to change their minds in a few minutes of discussion, and, since there is now renewed interest in starting EA projects, it seems worth trying to write it down.

It's generally accepted that optimal learning environments have a few properties:

  • You are doing something that is just slightly too hard for you.
    • In startups, you do whatever needs to get done. This will often be things that are way too easy (answering a huge number of support requests) or way too hard (pitching a large company CEO on your product when you've never even sold bubblegum before).
    • Established companies, by contrast, put substantial effort into slotting people into roles that are approximately at their skill level (though you still usually need to put in proactive effort to learn things at an established company). 
  • Repeatedly practicing a skill in "chunks"
    • Similar to the last poin
... (read more)
Clifford
I think I agree with this. Two things that might make starting a startup a better learning opportunity than your alternative, in spite of it being a worse learning environment:

  1. You are undervalued by the job market (so you can get more opportunities to do cool things by starting your own thing).
  2. You work harder in your startup because you care about it more (so you get more productive hours of learning).
Dave Cortright 🔸
It depends on what you want to learn. At a startup, people will often get a lot more breadth of scope than they would in an established company. Yes, you might not have in-house mentors or seasoned pros to learn from, but these days motivated people can fill in the holes outside the org.
Yonatan Cale
It depends what you want to learn, as you said.

  • Founding a startup is a great way to learn how to found a startup.
  • Working as a backend engineer in some company is a great way to learn how to be a backend engineer in some company.

(I don't see why to break it up more than that.)

Plant-based burgers now taste better than beef

The food sector has witnessed a surge in the production of plant-based meat alternatives that aim to mimic various attributes of traditional animal products; however, overall sensory appreciation remains low. This study employed open-ended questions, preference ranking, and an identification question to analyze sensory drivers and barriers to liking four burger patties, i.e., two plant-based (one referred to as pea protein burger and one referred to as animal-like protein burger), one hybrid meat-mushroom (75% meat and 25% mushrooms), and one 100% beef burger. Untrained participants (n=175) were randomly assigned to blind or informed conditions in a between-subject study. The main objective was to evaluate the impact of providing information about the animal/plant-based protein source/type, and to obtain product descriptors and liking/disliking levels from consumers. Results from the ranking tests for blind and informed treatments showed that the animal-like protein [Impossible] was the most preferred product, followed by the 100% beef burger. Moreover, in the blind condition, there was no significant difference in preferences between t

... (read more)

Interesting! Some thoughts:

  1. I wonder if the preparation was "fair", and I'd like to see replications with different beef burgers. Maybe they picked a bad beef burger?
  2. Who were the participants? E.g. students at a university, and so more liberal-leaning and already accepting of plant-based substitutes?
  3. Could participants reliably distinguish the beef burger and the animal-like plant-based burger in the blind condition?

(I couldn't get access to the paper.)

This Twitter thread points out that the beef burger was less heavily salted.

Linch
Thanks for the comment and the follow-up comments by you and Michael, Ben.

First, it's really cool that Impossible was preferred to beef burgers in a blind test, even if the test is not completely fair! Impossible has been around for a while, and obviously they would've been pretty excited to do a blind taste test earlier if they thought they could win, which is evidence that the product has improved somewhat over the years.

I want to quickly add an interesting tidbit I learned from food science practitioners[1] a while back: blind taste tests are not necessarily representative of "real" consumer food preferences. By that, I mean I think most laymen who think about blind taste tests believe that there's a Platonic taste attribute that's captured well by blind taste tests (or captured except for some variance). So if Alice prefers A to B in a blind taste test, this means that Alice in some sense should like A more than B. And if she buys (at the same price) B instead of A at the supermarket, that means either she was tricked by good marketing, or she has idiosyncratic non-taste preferences that make her prefer B to A (e.g. positive associations with eating B with family or something).

I think this is false. Blind taste tests are just pretty artificial, and they do not necessarily reflect real-world conditions where people eat food. This difference is large enough to sometimes systematically bias results (hence the worry about differentially salted Impossible burgers and beef burgers). People who regularly design taste tests usually know that there are easy ways they can manipulate taste tests so people will prefer more X in a taste test, in ways that do not reflect more people wanting to buy more X in the real world. For example, I believe adding sugar regularly makes products more "tasty" in the sense of being more highly rated in a taste test. However, it is not in fact the case that adding high amounts of sugar automatically makes a product more commonly

New Netflix show ~doubles search traffic for plant-based eating


h/t Lewis Bollard.

Reversing startup advice
In the spirit of reversing advice you read, some places where I would give the opposite advice of this thread:

Be less ambitious
I don't have a huge sample size here, but the founders I've spoken to since the "EA has a lot of money so you should be ambitious" era started often seem to be ambitious in unhelpful ways. Specifically: I think they often interpret this advice to mean something like "think about how you could hire as many people as possible", and then they waste a bunch of resources on some grandiose vision without having validated that a small-scale version of their solution works.

Founders who instead work by themselves or with one or two people to try to really deeply understand some need and make a product that solves that need seem way more successful to me.[1]

Think about failure
The "infinite beta" mentality seems quite important for founders to have. "I have a hypothesis, I will test it, if that fails I will pivot in this way" seems like a good frame, and I think it's endorsed by standard start up advice (e.g. lean startup).

  1. ^

    Of course, it's perfectly coherent to be ambitious about finding a really good value proposition. It's just that I worry t

... (read more)
Ben_West🔸
Two days after posting, SBF, whom the thread lists as the prototypical example of someone who would never make a plan B, seems to have executed quite the plan B.

Longform's missing mood

If your content is viewed by 100,000 people, making it more concise by one second saves an aggregate of one day across your audience. Respecting your audience means working hard to make your content shorter.
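A rough check of the arithmetic in that claim:

$$100{,}000 \text{ views} \times 1 \text{ s} = 100{,}000 \text{ s} \approx \frac{100{,}000}{86{,}400} \text{ days} \approx 1.2 \text{ days}$$

so "an aggregate of one day" is about right.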

When the 80k podcast describes itself as "unusually in depth," I feel like there's a missing mood: maybe there's no way to communicate the ideas more concisely, but this is something we should be sad about, not a point of pride.[1]


  1. I'm unfairly picking on 80k; I'm not aware of any long-form content which has this mood that I claim is missing ↩︎

Charles He
This is a thoughtful post and a really good sentiment IMO! As you touched on, I'm not sure 80k is a good negative example; to me it seems like a positive example of how to handle this. In addition to a tight intro, 80k has a great highlights section that, to me, looks like someone smart tried to solve this exact problem, balancing many considerations. This highlights section has good takeaways and is well organized with headers. I guess this is useful for the 90% of people who only browse the content for 1 minute.
80000_Hours
We also offer audio versions of those highlights for all episodes on the '80k After Hours' feed: https://80000hours.org/after-hours-podcast/
Ben_West🔸
Thanks for the pushback! I agree that 80k cares more about the use of their listeners' time than most podcasters, although this is a low bar. 80k is operating under a lot of constraints, and I'm honestly not sure if they are actually doing anything incorrectly here. Notably, the fancy people they get on the podcast probably aren't willing to devote many hours to rephrasing things in the most concise way possible, which really constrains their options. I do still feel like there is a missing mood, though.
Yarrow B.
To me, economy of words is what’s important, rather than overall length. Long can be wonderful, as long as the writer uses all those words well. Short can be wonderful, if the writer uses enough words to convey their complete thoughts.
Ben_West🔸
(Moderator comment)

Closing comments on posts
If you are the author of a post tagged "personal blog" (which notably includes all new Bostrom-related posts) and you would like to prevent new comments on your post, please email forum@centerforeffectivealtruism.org and we can disable them.

We know that some posters find the prospect of dealing with commenters so aversive that they choose not to post at all; this seems worse to us than posting with comments turned off.

Democratizing risk post update
Earlier this week, a post was published criticizing "Democratizing Risk". The post was deleted by its (anonymous) author; the Forum moderation team did not ask them to delete it, nor are we aware of their reasons for doing so.
We are investigating some likely Forum policy violations, however, and will clarify the situation as soon as possible.

Lizka
See the updates here. 

EA Three Comma Club

I'm interested in EA organizations that can plausibly be said to have improved the lives of over a billion individuals. Ones I'm currently aware of:

  1. Shrimp Welfare Project – they provide this Guesstimate, which has a mean estimate of 1.2B shrimps per year affected by welfare improvements that they have pushed
  2. Aquatic Life Institute – they provide this spreadsheet, though I agree with Bella that it's not clear where some of the numbers are coming from.

Are there any others?

Habryka
This is a nitpick, but somehow someone "being an individual" reads to me as implying a level of consciousness that seems a stretch for shrimps. But IDK, seems like a non-crazy choice under some worldviews.
Ben_West🔸
That's fair. I personally like that this forces people to come to terms with the fact that interventions targeted at small animals are way more scalable than those targeted at larger ones. People might decide on some moral weights which cancel out the scale of small animal work, but that's a nontrivial philosophical assumption, and I like prompting people to think about whether it's actually reasonable.  
Habryka
I think "animals that have more neurons or are more complex are morally more important" is not a "nontrivial philosophical assumption".  It indeed strikes me as a quite trivial philosophical assumption the denial of which would I think seem absurd to almost anyone considering it. Maybe one can argue the effect is offset by the sheer number, but I think you will find almost no one on the planet who would argue that these things do not matter. 
Ben_West🔸
On the contrary, approximately everyone denies this! Approximately ~0% of Americans think that humans with more neurons than other humans have more moral value, for example.[1]

  1. ^

    Citation needed, but I would be pretty surprised if this were false. Would love to hear contrary evidence though!
Habryka
Come on, you know you are using a hilariously unrepresentative datapoint here. Within humans, variance in neuron count explains only a small fraction of the variance in experience, and we also have strong societal norms that push people's maps towards pretending differences like this don't matter.
Ben_West🔸
Unrepresentative of what? At least in my university ethics courses, we spent way more time arguing about the rights of anencephalic children or human fetuses than about insects. (And I would guess that neuron count explains a large fraction of the variance in experience between adult and fetal humans, for example.) In any case: I think most people's moral intuitions are terrible, and you shouldn't learn a ton from the fact that people disagree with you. But as a purely descriptive matter, there are plenty of people who disagree with you – so much so that reading their arguments is a standard part of bioethics 101 in the US.
Habryka
It's unrepresentative of the degree to which people believe that correlates like neuron count, brain size, and behavioral complexity are an indicator of moral relevance across species (which is the question at hand here).
calebp
If the funders get a nontrivial portion of the impact for early-stage projects, then I think the AWF (including its donors) is very plausible.
Ben_West🔸
Yeah, I am not sure how to treat meta. In addition to funders, Charity Entrepreneurship probably gets substantial credit for SWP, etc.

@lukeprog's investigation into cryonics and molecular nanotechnology seems like it may have relevant lessons for the nascent attempts to build a mass movement around AI safety:

First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work. These advocates successfully garnered substantial media attention, and this seems to have irritated the most relevant established scientific communities (cryobiology and chemistry, respectively), both because many

... (read more)
Ben_West🔸
(Moderator comment)

We are banning stone and their alternate account for one month for messaging users and accusing others of being sockpuppets, even after the moderation team asked them to stop. If you believe that someone has violated Forum norms, such as by creating sockpuppet accounts, please contact the moderators.

Working for/with people who are good at those skills seems like a pretty good bet to me.

E.g. "knowing how to attract people to work with you" – if person A has a manager who was really good at attracting people to work with them, and their manager is interested in mentoring, and person B is just learning how to attract people to work with them from scratch at their own startup, I would give very good odds that person A will learn faster.

Charles He
Can you give some advice on attracting good people to work with you, or do you have any writeups you like?

An EA Limerick

(Lacey told me this was not good enough to actually submit to the writing contest, so publishing it as a short form.)

An AI sat boxed in a room 

Said Eliezer: "This surely spells doom! 

With self-improvement recursive, 

And methods subversive 

It will just simply go 'foom'."

E Vasquez
Nice!

I have recently been wondering what my expected earnings would be if I started another company. I looked back at the old 80K blog post arguing that there is some skill component to entrepreneurship, and noticed that, while serial entrepreneurs do have a significantly higher chance of a successful exit on their second venture, they raise their first rounds at substantially lower valuations (Table 4 here).

It feels so obvious to me that someone who's started a successful company in the past will be more likely to start one in the future, and I continue to b... (read more)

Lorenzo Buonanno🔸
Wild guesses, as someone who knows very little about this: I wonder if it's because people have sublinear returns on wealth, so their second company would be more mission-driven and less optimized for making money. Also, there might be some selection bias in who needs to raise money vs. being self-funded. But if I had to bet, I would say that it's mostly noise, and there's not enough data to have a strong prior.

Person-affecting longtermism

This post points out that brain preservation (cryonics) is potentially quite cheap on a $/QALY basis because people who are reanimated will potentially live for a very long time with very high quality of life.

It seems reasonable to assume that reanimated people would funge against future persons, so I'm not sure this is persuasive for those who don't adopt person-affecting views; but for those who do, it's plausibly very cost-effective.

This is interesting because I don't hear much about person-affecting longtermist causes.

Yeah, definitely. I don't want to claim that learning is impossible at a startup – clearly it's possible – just that, all else equal, learning usually happens faster at existing companies.

Thanks! I'm not sure I fully understand your comment – are you implying that the skills you mention are easier to learn in a startup?

Unsurprisingly, I disagree with that view :)
