
ben.smith

860 karma · Downtown, Eugene, OR, USA

Comments (111)

You can just widen the variance of your prior until it is appropriately imprecise, so that the variance of your prior reflects the amount of uncertainty you have.

For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 °C of global warming.

We might have no idea whether 0.1 °C of warming increases p(doom) by 0.1% or 0.01%, but be confident it isn't 10% or more.

You could model the distribution of your uncertainty with, say, a beta distribution with b=100.

You might wonder, why b=100 and not b=200, or 101? It's an arbitrary choice, right?

To which I have two responses:

  1. You can go one level up and model the beta parameter with a distribution over all reasonable choices, say, a uniform distribution between 10 and 1000 (a quick numerical sketch of this follows below).
  2. While it is arbitrary, I claim that refusing to estimate expected effects because we can't make a fully non-arbitrary choice is itself an arbitrary choice. This is because we are acting in a dynamic world where every second, opportunities can be lost, and no action is still an action: the action of forgoing the counterfactual option. So by avoiding assigning any outcome value, and acting accordingly, you have implicitly, and arbitrarily, assigned an outcome value of 0. When there's some morally relevant outcome we can only model with somewhat arbitrary statistical priors, doing so nevertheless seems less arbitrary than just assigning an outcome value of 0.
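
To make the first response concrete, here is a minimal numerical sketch (mine, purely illustrative) of going one level up: draw b from the uniform(10, 1000) hyperprior, then draw the implied increase in p(doom) from a beta distribution. The choice of a=1 and the variable name delta_p_doom are assumptions for the example, not claims about the right prior.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hyperprior over the "arbitrary" beta parameter: any b between 10 and 1000.
b = rng.uniform(10, 1000, size=n)

# For each draw of b, sample the increase in p(doom) per extra 0.1 °C of warming.
# a = 1 is an illustrative assumption.
delta_p_doom = rng.beta(1.0, b)

print(f"mean increase in p(doom): {delta_p_doom.mean():.4%}")
print(f"95% interval: {np.quantile(delta_p_doom, [0.025, 0.975])}")
print(f"share of draws with an increase of 10% or more: {(delta_p_doom >= 0.10).mean():.4%}")
```

The point is just that the summary statistics you act on come out of the whole hierarchy, so no single arbitrary choice of b is doing the work.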

This leaves me deeply confused, because I would have thought a single (if complicated) probability function is better than a set of functions: a set of functions doesn't (by default) include a weighting over its members.

It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, based on your own priors.

If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could go for EV based on that distribution, or you could make other choices that are more risk averse. But whatever you do, you're back to using a single probability function. I think that's probably what you should do. But that sounds to me indistinguishable from the naive response.
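
As a minimal sketch of what I mean (the two candidate beta distributions and the 0.7/0.3 weights below are made-up placeholders, not anyone's actual credences), combining the set with plausibility weights just gives you a mixture distribution you can compute with:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# A hypothetical "set of probability functions" over the same outcome.
candidates = [stats.beta(1, 100), stats.beta(2, 50)]

# Your intuitive plausibility weights for each member of the set.
weights = np.array([0.7, 0.3])

# The expected value of the mixture is the weighted sum of the members' expected values.
ev = sum(w * d.mean() for w, d in zip(weights, candidates))
print(f"expected value under the mixture: {ev:.4f}")

# Sampling from the mixture lets you apply risk-averse criteria too,
# e.g. an upper quantile rather than the mean.
idx = rng.choice(len(candidates), size=n, p=weights)
draws = np.stack([d.rvs(size=n, random_state=rng) for d in candidates])
samples = draws[idx, np.arange(n)]
print(f"95th percentile under the mixture: {np.quantile(samples, 0.95):.4f}")
```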

The idea of a "precise probability function" is in general flawed. The whole point of a probability function is that you don't have precision. A probability function over a real event is (in my view) just a mathematical formulation modeling my own subjective uncertainty. There is no precision to it. That's the Bayesian perspective on probability, which seems like the right interpretation of probability in this context.

As Yann LeCun recently said, “If you do research and don't publish, it's not science.”

With all due respect to Yann LeCun, in my view he is as wrong here as he is dismissive about the risks from AGI.

Publishing is not an intrinsic and definitional part of science. Peer-reviewed publishing definitely isn't--it has only been the default for somewhere between several decades and half a century. It may not be the default in another half century.

If Trump still thinks AI is "maybe the most dangerous thing" I would be wary of giving up on chances to leverage his support on AI safety.

In 2022, individual EAs stood for elected positions within each major party. I understand there are Horizon fellows with both Democrat and Republican affiliations.

If EAs can engage with both parties in those ways, and given that the presumptive Republican nominee may be sympathetic, I wouldn't give up on Republican support for AI safety yet.

Ha, I see. Your advice might be right, but I don't think "consciousness is quantum". I wonder if you could say what you mean by that?

Of course I've heard that before. In the past, when I have heard people say it, it has been from advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves indeterminacy of decision-making. Some objections I have to this view:

  1. Most importantly, as I pointed out here: consciousness is roughly orthogonal to intelligence. So your view shouldn't give you reassurance about AGI. We could have a formal definition of intelligence, and causal instantiations of it, without any qualia or what-it's-like-to-be subjective consciousness existing in the system. There is also conscious experience with minimal intelligence, like experiences of raw pleasure, pain, or observing the blueness of the sky. As I explain in the linked post, consciousness is also orthogonal to agency or goal-directed behavior.
  2. There's a great deal of research about consciousness. I described one account in my post, and Nick Humphrey does go out on a limb more than most researchers do. But my sense is that most neuroscientists of consciousness endorse some account roughly equivalent to Nick's. While probably some (not all or even a majority) would concede that the hard problem remains, based on what we do know about the structure of the physical substrates underlying consciousness, it's hard to imagine what role "quantum" effects would play.
  3. It fails to add any sense of meaningful free will, because a brain that makes decisions based on random quantum fluctuations doesn't in any meaningful way have more agency than a brain that makes decisions based on pre-determined physical causal chains. While a [hypothetical] quantum-based brain does avoid being pre-determined by physical causal chains, now it is just pre-determined by random quantum fluctuations.
  4. Lastly I have to confess a bit of prejudice against this view. In the past it has been proposed so naively that it seems like people are just mashing together two phenomena that no one fully understands, and proposing they're related because ???? But the only thing they have in common, as far as I know, is that we don't understand them. That's not much of a reason to believe in a hypothesis that links them.
  5. Assuming your view was correct, if someone built a quantum computer, would you then be more worried about AGI? That doesn't seem so far off.

Elliot has a phenomenally magnetic personality and is consistently positive and uplifting. He's generally a great person to be around. His emotional stamina gives him the ability to uplift the people around him and I think he is a big asset to this community.

TLDR: I'm looking for researcher roles in AI Alignment, ideally translating technical findings into actionable policy research


Skills & background: I have been a local EA community builder since 2019. I have a PhD in social psychology and wrote my dissertation on social/motivational neuroscience. I also have a BS in computer science and spent two years in industry as a data scientist building predictive models. I'm an experienced data scientist, social scientist, and human behavioral scientist.

Location/remote: Currently located on the West Coast of the USA. Willing to relocate to the Bay Area for sufficiently high remuneration, or to Southern California or Seattle for just about any suitable role. Would relocate to just about anywhere, including the US East Coast, Australasia, the UK, or China, for a highly impactful role.

Availability & type of work: I finish teaching at the University of Oregon around April, and if I haven't found something by then, will be available again in June. I'm looking for full-time work from then, or part-time work in impactful roles for an immediate start.

Resume/CV/LinkedIn: Brief resume, Full academic CV, LinkedIn

Email/contact: benjsmith@gmail.com

Other notes: I don't have a strong preference for cause areas and would be highly attracted to roles reducing AI existential risk, improving animal welfare or global health, or improving our understanding of the long-term future. I suspect my comparative advantage is in research roles (broadly defined) and in data science work; technical summaries for AI governance or evals work might be a comparative advantage.

But I would guess that pleasure and unpleasantness isn't always because of the conscious sensations, but these can have the same unconscious perceptions as a common cause.

This sounds right. My claim is that there are all sorts of unconscious perceptions and valenced processing going on in the brain, but all of that is only experienced consciously once there's a certain kind of recurrent cortical processing of the signal, which can loosely be described as "sensation". I mean that very loosely; it can even include memories of physical events or semantic thought (which you might understand as a sort of recall of auditory processing). Without that recurrent cortical processing modeling the reward and learning process, probably none of that midbrain dopaminergic activity gets consciously perceived. Perhaps it does, indirectly, when the dopaminergic activity (or lack thereof) influences the sorts of sensations you have.

But I'm getting really speculative here. I'm an empiricist and my main contention is that there's a live issue with unknowns and researchers should figure out what sort of empirical tests might resolve some of these questions, and then collect data to test all this out.

 

I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination)....Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.

 

I think most of those things actually can be reduced to sensations; moods can't be, but then, are moods consciously experienced, or do they only predispose us to interpret conscious experiences more positively or negatively?

(Edit: another set of sensations you might overlook when you think about conscious experience of mood are your bodily sensations: heart rate, skin conductivity, etc.)

But this also seems like the thing that's more morally important to look into directly. Maybe frogs' vision is blindsight, their touch and hearing are unconscious, etc., so they aren't motivated to engage in sensory play, but they might still benefit from conscious unpleasantness and aversion for more sophisticated strategies to avoid them. And they might still benefit from conscious pleasure for more sophisticated strategies to pursue pleasure.

They "might" do, sure, but what's your expectation they in fact will experience conscious pleasantness devoid of sensations? High enough to not write it off entirely, to make it worthwhile to experiment on, and to be cautious about how we treat those organisms in the meantime--sure. I think we can agree on that. 

But perhaps we've reached a sort of crux here: is it possible, or probable, that organisms could experience conscious pleasure or pain without conscious sensation? It seems like a worthwhile question. After reading Humphrey I feel like it's certainly possible, but I'd give it maybe around 0.35 probability. As I said in OP, I would value more research in this area to try to give us more certainty. 

If your probability that conscious pleasure and pain can exist without conscious sensation is, say, over 0.8 or so, I'd be curious about what leads you to believe that with confidence.

To give a concrete example, my infant daughter can spend hours bashing her toy keyboard with 5 keys. It makes a sound every time. She knows she isn't getting any food, sleep, or any other primary reinforcer for doing this. But she gets the sensations of seeing the keys light up and a cheerful voice sounding from the keyboard's speaker each time she hits it. I suppose the primary reinforcer just is the cheery voice and the keys lighting up (she seems to be drawn to light--light bulbs, screens, etc.).

During this activity, she's playing, but also learning about cause and effect--about the reliability of the keys reacting to her touch, about what kind of touch causes the reaction, and how she can fine-tune and hone her touch to get the desired effect. I think we can agree that many of these things are transferable skills that will help her in all sorts of things in life over the next few years and beyond?

I'm sort of conflating two things that Humphrey describes separately: sensory play, and sensation seeking. In this example it's hard to separate the two. But Humphrey ties them both to consciousness, and perhaps there's still something we can learn from about an activity that combines the two together.

In this case, the benefits of play are clear, and I guess the further premise is that consciousness adds additional motivation for sensory play because, e.g., it makes things like seeing lights and hearing cheery voices much more vivid and hence reinforcing, and allows the incorporation of those things with other systems that enable action planning about how to get the reinforcers again, which makes play more useful.

I agree this argument is pretty weak, because we can all agree that even the most basic lifeforms can do things like approach or avoid light. Humphrey's argument is something like the particular neurophysiology that generates consciousness also provides the motivation and ability for play. I think I have said about as much as I can to repeat the argument and you'd have to go directly to Humphrey's own writing for a better understanding of it!
