PabloAMC 🔸

Quantum algorithm scientist @ Xanadu.ai
1020 karma · Joined · Working (6-15 years) · Madrid, España

Bio

Participation: 5

Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)

Comments: 107

I tend to dislike treating all AI policy as equal: the kind of AI policy that matters for AI safety is unlikely to represent a significant burden when developing frontier models. Thus, reducing red tape on AI more broadly might actually be pretty positive.

Actually, something I am confused about is whether the figure for AI academics is per person-year, as it is for the technical researchers in the various fields.

Hi there! Some minor feedback for the webpage: instead of starting with the causes, I’d argue you should start with the value proposition: “your euro goes further”, or something along those lines. You may want to check ayudaefectiva.org for an example. Congratulations on the new org!

Thanks, Chris, that's very much true. I've clarified I meant donations.

I already give everything, except what's required for the bare living necessities, away.

While admirable, consider whether this is healthy or sustainable. I think donating less is okay; that’s why Giving What We Can suggests 10% as a calibrated point. You can of course donate more, but I would recommend against the situation your comment implies.

FWIW, I believe not every problem has to center on “cool” cause areas, and in this case I’d argue that neither animal welfare nor AI Safety should be significantly affected.

I divide my donation strategy into two components:

  1. The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which also allows for tax deductions. For the time being, they mostly support global health and poverty causes, which is boringly awesome.

  2. Then I make one-off donations to specific opportunities as they appear. Those include, for example, a donation to Global Catastrophic Risks to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteering for the FLI existential AI risk community, and my donation to this donation election, to make donations within the EA community more democratic :)

For this donation election I have voted for Rethink Priorities, the EA Long-Term Future Fund, and ALLFED. ALLFED's work seems pretty necessary and they are often overlooked, so I am happy to support them. The other two had relatively convincing posts arguing for what they could do with additional funding. In particular, I am inclined to believe Rethink Priorities' work benefits the EA community quite widely, so I am happy to support them and would love for them to keep carrying out the annual survey.

I think the title is a bit unfortunate, at the very least. I am also skeptical of the article's thesis that population growth is itself the problem.

You understood me correctly. To be specific, I was considering the third case, in which the agent has uncertainty about its preferred state of the world. It may thus refrain from taking irreversible actions that have a small upside in one scenario (protonium water) but a large negative value in the other (deuterium water), due to e.g. decreasing returns, or because it thinks there is a chance to get more information on what the objectives are supposed to mean.
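To illustrate the decision-theoretic point, here is a toy sketch; the payoffs, credences, and the Python snippet below are purely hypothetical numbers of my own, not part of the original discussion:

```python
# Toy expected-value comparison under goal uncertainty (made-up payoffs).
# The agent is unsure which interpretation of its goal is correct and weighs
# an irreversible action against waiting to learn more.

p_protonium = 0.5   # credence that the intended goal is "protonium water"
p_deuterium = 0.5   # credence that the intended goal is "deuterium water"

# Converting everything now is irreversible: a small upside if the guess is
# right, a large loss if it is wrong.
ev_convert_now = p_protonium * 1.0 + p_deuterium * (-10.0)

# Waiting costs a little value but lets the agent learn the true goal and
# then act correctly.
ev_wait_and_learn = -0.1 + 1.0

print(ev_convert_now)     # -4.5
print(ev_wait_and_learn)  # 0.9 -> waiting dominates under this uncertainty
```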

I understand your point that this distinction may look arbitrary, but goals are not necessarily defined at the physical level; rather, they are defined over abstractions. For example, is a human with a high level of dopamine happier? What exactly is a human? Can a larger human brain be happier? My belief is that since these objectives are built over (possibly changing) abstractions, it is unclear whether a single agent can ever iron out its goal. In fact, if “what the representation of the goal was meant to mean” refers to what some human wanted to represent, you’ll probably never have a clear-cut, unchanging goal.

That said, I believe an important problem in this case is how to train an agent that can distinguish between the goal and its representation, and that seeks to optimise the former. I find it a bit confusing when I think about it.

Separately and independently, I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles and will be oriented around a particular goal (at least behaviorally, cf. efficiency—though I would also guess that the mental architecture ultimately ends up cleanly-factored (albeit not in a way that creates a single point of failure, goalwise)).

I’d be curious to understand why you believe this happens. Humans (the only general intelligences we have so far) seem to preserve some uncertainty over goal distributions, so it is unclear to me that generality will necessarily clarify goals.

To be a bit more concrete: I find it plausible that the AGI will encounter multiple possible fine-grained (concrete) goals that map onto the same high-level representation of its goal, whatever that may be. Then you have to refine what the goal representation was meant to mean. After all, a representation of the goal is not necessarily the goal itself. I believe this is what humans face, and why human goals are often a bit of a mess.
