Nathan Young

Project manager/Director @ Frostwork (web app agency)
16733 karma · Working (6-15 years) · London, UK
nathanpmyoung.com

Bio

Builds web apps (eg viewpoints.xyz) and makes forecasts. Currently I have spare capacity. 

How others can help me

Talking to those in forecasting to improve my forecasting question generation tool.

Writing forecasting questions on EA topics.

Meeting EAs I become lifelong friends with.

How I can help others

Connecting them to other EAs.

Writing forecasting questions on Metaculus.

Talking to them about forecasting.

Sequences
1

Moving In Step With One Another

Comments
2451

Topic contributions
20

Interesting take. I don't like it. 

Perhaps because I like saying overrated/underrated.

But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "forecasting is probably rated 4/10 by the population at large and should be rated 6/10".

Over/underrated requires about 3 mental queries: "Is it better or worse than the population at large thinks?" "Is it better or worse than my ingroup thinks?" "Am I gonna have to be clear about what I mean?"

Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?" "Is 5 fair?" "What axis am I rating on?" "Popularity?" "If I score it a 4 will people think I'm crazy?"...

Like in some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.
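To put a rough number on why % beats "more likely/less likely", here's a toy sketch with made-up forecasts, assuming Brier scoring (squared error, lower is better) purely for illustration:

```python
# Toy illustration, numbers made up. Brier score = (forecast - outcome)^2,
# lower is better. A % forecast gets credit for *how* confident it was;
# a bare "more likely" collapses 51% and 90% into the same claim.

def brier(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

outcome = 1  # the event happened

print(brier(0.90, outcome))  # 0.01   confident and right: big reward
print(brier(0.70, outcome))  # 0.09   mildly confident: smaller reward
print(brier(0.51, outcome))  # 0.2401 barely "more likely": little reward
```

The same logic is why I reach for over/underrated: it's the cheap ordinal version of a forecast, not a refusal to give one.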


Also would you prefer people used over/underrated less or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than some epistemic thing. And if that's the case, I think we should have a different discussion.

I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try and broaden my answer from over/underrated - just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing, often used because giving exact sizes in status is hard. Perhaps we should have numbers and words for it, but we don't.

I think given a big enough GPU, yes, it seems plausible to me. Our minds are memory stores that perform calculations. What is a GPU missing?

I think bacteria are unlikely to be conscious due to a lack of processing power. 

Training probably takes 3 years to cycle up and maybe 3 years to happen. When did we decide to start training people in AI Safety, versus when were there enough trained people?

Seems plausible to me that the AI welfare discussion will happen before we're ready for it.

Would it be wrong to dissect the child?


Here is a different thought experiment. Say that I was told that to find the cure for a disease that would kill thousands of robot children, I had to dissect either the supposedly non-sentient robot or a different, definitely sentient robot. Which way do my intuitions point here?

1x is an arbitrary multiplier too.

I would want to put the number at the 50th percentile belief on the forum.
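A minimal sketch of what I mean, with a made-up poll of forum members' multipliers - the 50th percentile belief is just the median:

```python
# Hypothetical multipliers from a forum poll (all numbers made up).
from statistics import median

forum_multipliers = [0.1, 0.5, 1.0, 1.0, 2.0, 5.0, 30.0]

# The 50th percentile belief: half the forum sits above it, half below.
print(median(forum_multipliers))  # 1.0
```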

What are your thoughts on this:

  • Humans are conscious in whatever way that word is normally used
  • Our brains are made of matter
  • I think it's likely we'll be able to use matter to make other conscious minds
  • These minds may be able to replicate far faster than our own
  • A huge amount of future consciousness may be non-human
  • The wellbeing of huge chunks of future consciousness is worthy of our concern

It seems really valuable to have experts at the time the discussion happens. 

If you agree, then it seems worth training people now for when we discuss it.

Worldview diversity isn't a coherent concept and mainly exists to manage internal OpenPhil conflict.
