Jonas Hallgren

399 karma · Joined Uppsala, Sweden

Bio


Curious explorer of interesting ideas. 

I try to write as if I were having a conversation with you in person. 

I like Meditation, AI Safety, Collective Intelligence, Nature, and Civilization VI. 

I would like to claim that my current safety beliefs are a mix of Paul Christiano's, Andrew Critch's, and Def/Acc's.

Currently the CSO of a startup working to bring safe collective systems of AIs into the real world. (Collective Intelligence Safety or Applied Cooperative AI, whatever you want to call it.)

I also think that Yuval Noah Harari has some of the best takes on the internet.

Comments
54

Topic contributions
3

Sorry for not noticing the comment earlier! 

Here's the Claude distillation of my reasoning on why to use it:

Reclaim is useful because it lets you assign different priorities to tasks and meetings, automatically scheduling recurring meetings to fit your existing commitments while protecting time for important activities. 

For example, you can set exercising three times per week as a priority 3 task, which will override priority 2 meetings, ensuring those exercise timeblocks can't be scheduled over. It also automatically books recurring meetings, such as with team members or mentors/mentees, so they fit into your existing schedule.

This significantly reduces the time and effort spent on scheduling: you can add new commitments without overlapping more important tasks. The main advantage is being able to set different priorities for different tasks, which leaves almost no overhead for planning weekly and monthly calls.
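To make the priority rule concrete, here's a toy sketch in Python. This is not Reclaim's actual API; the names and the scheduling logic are hypothetical, just illustrating the idea that an equal-or-higher-priority block protects its slot:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    priority: int  # higher number wins (hypothetical convention)

def try_schedule(calendar: dict[str, Block], slot: str, new: Block) -> bool:
    """Place `new` in `slot` unless an equal-or-higher-priority block already holds it."""
    existing = calendar.get(slot)
    if existing is not None and existing.priority >= new.priority:
        return False  # protected time: the new event can't displace it
    calendar[slot] = new
    return True

calendar: dict[str, Block] = {}
try_schedule(calendar, "Mon 07:00", Block("Exercise", priority=3))
ok = try_schedule(calendar, "Mon 07:00", Block("Team sync", priority=2))
print(ok, calendar["Mon 07:00"].name)  # False Exercise — the workout slot survives
```

In the real product the scheduler also hunts for an alternative free slot for the lower-priority meeting; the sketch only shows the pre-emption rule.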

Thanks Jacques! I was looking for an upgrade to some of my LLM tools, including some IDEs, so I'll check that out.

The only tip I've got is using reclaim.ai instead of Calendly for automatic meeting scheduling; it slaps.

Thanks! That post addresses what I was pointing at a lot better than I did in mine.

I can see from your response that I didn't get my point across as well as I wanted to, but I appreciate the answer nonetheless!

It was more a question of what leads to the better long-term consequences rather than combining them.

It seems plausible that animals have moral patienthood, so the scale of the problem is larger for animals, while the cause also has higher tractability. At the same time, you have cascading effects of economic development into better decision-making. As a longtermist, this makes me very uncertain about where to focus resources. I will therefore put myself centrally to signal my high uncertainty.

I think that still makes sense under my model of a younger and less tractable field? 

Experience comes partly from the field being viable for a longer period of time since there can be a lot more people who have worked in that area in the past. 

Couldn't the lack of well-described steps and concrete near-term goals be described as a lack of easy tractability?

I'm not saying that the proposals in longtermism aren't worse today, but rather that it will probably look different in 10 years? A question that pops up for me is how good the proposals and applications were at the beginning of animal welfare as a field. I'm sure it was worse in terms of the legibility of the people involved and the clarity of the plans. (If anyone has any light to shed on this, that would be great!)

Maybe there's some sort of effect where the more money and talent a field gets the better the applications get. To get there you first have to have people spend on more exploratory causes though? I feel like there should be anecdata from grantmakers on this.

I enjoyed the post and I thought the platform for collective action looked quite cool.

I also want to mention that I think tractability is just generally a really hard thing for longtermism. It's also a newer field, so in expectation I think you should just believe that the projects will look worse than in animal welfare. I don't think there's any need for psychoanalysis of the people in the space, even though it has its fair share of wackos.

Great point! I wasn't thinking of the specific 5% claim when considering the scale, but rather whether more effort should be spent in general.

My brain basically pulled a motte-and-bailey on me emotionally when it comes to this question, so I appreciate you pointing that out!

It also seems like you're mostly critiquing the tractability of the claim and not the underlying scale or neglectedness?

It kind of gives me some GPR vibes as to why it's useful to do right now, and depending on initial results, either fewer or more resources should be spent?

Super exciting! 

I just wanted to share a random perspective here: Would it be useful to model sentience alongside consciousness itself? 

If you read Daniel Dennett's book Kinds of Minds or take some of the Integrated Information Theory stuff seriously, you arrive at a view of consciousness as a field. This view is similar to Philip Goff's or to more Eastern traditions such as Buddhism.

Also, even in theories like Global Workspace Theory, the amount of localised information at a point in time matters alongside the type of information processing that you have. 

I'm not a consciousness researcher or anything, but I thought it would be interesting to share. I wish I had better links to research here and there, but if you look at Dennett, Philip Goff, IIT or Eastern views of consciousness, you will surely find some interesting stuff.

Wild animal welfare and longtermist animal welfare versus farmed animal welfare? 

There's this idea of truth as an asymmetric weapon; I guess my point isn't necessarily that the approach vector will be something like:

Expert discussion -> Policy change

but rather something like:

Expert discussion -> Public opinion change -> Policy change

You could say something about memetics and that it is the most understandable memes that get passed down rather than the truth, which is, to some extent, fair. I guess I'm a believer that the world can be updated based on expert opinion. 

For example, I've noticed a trend in the AI Safety debate: the quality seems to get better and more nuanced over time (at least, IMO). I'm not sure what this entails for the general public's understanding of the topic, but it feels like it affects the policymakers.
