rileyharris

PhD student in Philosophy @ Oxford
615 karma · Joined · www.millionyearview.com

Sequences: 1 — Book Summary: The Precipice

Comments: 68
I'm not exactly sure what this job actually is based on the forum post. Based on a link in the form, the duties might be:

  • Oversee operational aspects of the FutureTech research project.
  • Manage project timelines, resources, and deliverables.
  • Help researchers to facilitate their work and overcome logistical challenges.
  • Coordinate with team members and external stakeholders.
  • Contribute to events, research, grant writing, project planning, budgeting, and other administrative tasks.

I only skimmed it but this looks like a great article, thanks for sharing!

I hadn't, that's an interesting idea, thanks!

Hello! To clarify #1, I would say:

It could be the case that future AI systems are conscious by default, and that it is difficult to build them without them being conscious.

Let me try to spell out my intuition here:

  1. If many organisms have property X, and property X is rare amongst non-organisms, then property X is likely evolutionarily advantageous.

  2. Consciousness meets this condition, so it is likely evolutionarily advantageous.

  3. The advantage that consciousness gives us is most likely something to do with our ability to reason, adapt behaviour, control our attention, compare options, and so on. In other words, it's a "mental advantage" (as opposed to e.g. a physical or metabolic advantage).

  4. We will put a lot of money into building AI that can reason, problem-solve, adapt behaviour appropriately, control attention, compare options, and so on. Given that many organisms employ consciousness to efficiently achieve these tasks, there is a non-trivial chance that AI will too.

To be clear, I don't know that I would say "it's more likely than not that AI will be conscious by default".

I think AI welfare should be an EA priority, and I'm working on it myself. This post is a good illustration of what that means, and 5% seems reasonable to me. I also appreciate this post, as it captures many of my core motivations. I recently spent several months thinking hard about the most effective philosophy PhD project I could work on, and concluded that it was to work on AI consciousness.

I feel like this post is missing discussion of two reasons to build conscious AI:

1. It may be extremely costly or difficult to avoid (this may not be a good reason, but it is plausibly why we would do it).
2. Digital minds could have morally valuable conscious experiences, and if there are very many of them, this could be extremely good (at least on some, admittedly controversial, ethical theories).

I am a wizard. I have magically transported you back to June 15th 2024. You will have all your progress so far. The essays are due in one month.
