huw

1188 karma · Working (0-5 years) · Sydney NSW, Australia
huw.cool

Bio

I live for a high disagree-to-upvote ratio

Comments (169)

For those who have been following this: is he serious, or is this just lip service, with him blocking it because he was lobbied by people in the tech industry?

Answer by huw

During my last burnout, I realised that trying to push my working hours to breaking point was making me substantially less happy (because of the constant pressure of the question 'could I be working right now?') and substantially less productive (because without time to breathe, I was too focused on the wrong tasks). Cutting myself some slack has genuinely improved the volume (quantity × quality) of the value I produce. You don't need to accept a cop-out answer like 'just be happy'; I believe the conventional wisdom on knowledge and creative work is culturally over-moralised around 'hard work' (particularly working hours, in an American context) and isn't actually optimal for productivity. It just takes some time to shake out of your system.

I can really recommend Four Thousand Weeks by Oliver Burkeman (or his Waking Up course), or Rework by Jason Fried & DHH for more pointers in this direction that have helped me 😌

I was wondering about the general conservative value around environmental conservation. I've noticed that some conservatives really seem to value nature itself (often, but not always, from a religious perspective; there's a wide range here), which I would have presumed could translate into a view of protecting animals as part of nature (rather than for the instrumental value of protecting against climate change, which is more popular on the left). Why did this value not make it? Is it just that U.S. conservatives need to protect and promote the agricultural industry, and directly opposing it won't fly?

I really do wonder to what extent the non-profit and then capped-profit structures were genuine, or just ruses intended to attract top talent that were always meant to be discarded. The more we learn about Sam, the more confusing it is that he would ever accept a structure that he couldn’t become fabulously wealthy from.

Just to check—is the 230mg target additive to sodium, or substitutive? I can imagine the interventions would look different if we merely had to fortify food vs start replacing sodium.

In general, I’m hugely in favour of EA considering this (and similar interventions like mandating/favouring sugar substitution). Health issues that face rich countries today are likely already facing poor countries in large quantities and will only get relatively worse as we solve other problems.

It seems like some of the biggest proponents of SB 1047 are Hollywood actors and writers (e.g. Mark Ruffalo); you might remember them from last year's strike.

I think that the AI Safety movement has a big opportunity to partner with organised labour the way the animal welfare side of EA partnered with vegans. These are massive organisations with a lot of weight and mainstream power if we can find ways to work with them; it’s a big shortcut to building serious groundswell rather than going it alone.

See also Yanni’s work with voice actors in Australia—more of this!

Just to narrow in on a single point: I have found the 'EA fundamentally depends on uncomfortable conversations' point to be a bit lacking in nuance in the past. It seems we could be more productive by delineating which kinds of discomfort we want to defend. For example, most people here don't want to have uncomfortable conversations about age-of-consent laws (thankfully), but do want to have them about factory farming.

When I think about the founding myths of EA, most of them seem to revolve around the discomfort of applying utilitarianism in practice, or around how far we should expand our moral circles. I think EA would have broadly survived intact even if other kinds of discomfort had been lightly moderated (it may even have expanded).

I'm not keen to take a stance on whether this post should or shouldn't be allowed on the forum, but I am curious to hear if and where you would draw this line :)

I downvoted this post, so I want to explain why. I don't think it adds much to the forum, or to EA more generally. You have mostly just found a strawperson to beat up on, and I don't think many of your rebuttals are high quality, nor do they engage with her in good faith (to use a rat term I loathe, you are in 'soldier mindset').

I can't really see a benefit to doing so; demarcating our 'opponents' only serves to cut us off from them, and to make us 'intellectually incurious' about why they might feel that way or how we might change their minds. This does, over time, make things harder for us: funders start turning their noses up at EAs, policymakers don't want to listen, and influential people in industry can write us off as unserious.

There are numerous other potential versions of this post. It could have been a thought-provoking critique of Peter Singer for engaging in debate theatre. It could have tried to steelperson her arguments. It could even have tried to trace the intellectual lineage of those arguments, to understand why she has ended up with this particular inconsistent set of them! All of those would have been useful for understanding why people hate us, and how we can make them hate us less. I am not a fan of this trend of cheerleading against our haters, and I worry about the consequences of the broader environment it has fostered and continues to foster :(

I guess one thing worth noting here is that they raised from a16z, whose leaders are notoriously critical of AI safety. Not sure how they square that circle, but I doubt it involves their investors having changed their perspectives on that issue.
