harfe

Comments
One outstanding question is at what point AI capabilities are too close to loss of control. We propose to delegate this question to the AI Safety Institutes set up in the U.K., U.S., China, and other countries.

I consider it clickbait to write "There Is a Solution" but then say that these AI safety institutes will figure out the crucial details of the solution at some point in the future.

edit: the issue raised in this comment has been fixed

In my opinion, you have not really argued why it is neglected. As a starting point, they seem to spend roughly $35 million per year: https://projects.propublica.org/nonprofits/organizations/824506840. $14 million of that goes to salaries, so I would be surprised if new features are strongly bottlenecked by money or talent.

I am just guessing on these issues, but my suspicion for why some features, such as "live location sharing" and "display past encrypted messages from a group chat you were not a member of but have only just joined", are not (yet) implemented is that they do not fit well into Signal's approach to security/privacy.

Could you provide examples of political discussions on the EA Forum that appear to have negatively impacted the forum’s environment or impaired its ability to achieve its objectives?

As far as I remember, political discussions on the EA Forum have been quite civilized. But I think this is because of the Forum's policies and culture. If political discussions were much more frequent, the culture and discussion style could deteriorate; for example, they might attract EA-adjacent people or even outsiders who want to fight their political battles on the EA Forum. Maybe that could be solved by hiring additional moderators, though.

Also, politics can absorb a lot of attention that would be better spent elsewhere. For example, this post about Trump generated 60 comments, and I am not sure that was worth it.

Could you expand a bit on what this would look like? How are they being "shunted", and what kind of roles count as low-level roles? (E.g., your claim could be that the average male EA CS student is much less likely than a female EA CS student to hear "You should switch from AI safety to community-building".)

That is, many (most?) people need a break-in point to move from something like "basically convinced that EA is good, interested in the ideas and consuming content, maybe donating 10%" to anything more ambitious.

I am under the impression that EAGx can be such a break-in point, and has lower admission standards than EAG. In particular, there is EAGxVirtual (Applications are open!).

Has the rejected person you are thinking of applied to any EAGx conference?

I think sounding like a salesman is ok here.

Maybe something like "Which orgs would benefit from my unpaid labor?" or "Offer: I can work on your project for free for a few months (tech/operations/management)".

Great to see that you are seriously thinking about promoting earning to give (ETG)!

If I had refreshed the frontpage and seen your post on ETG, I would not have posted my comment. I was just a bit surprised to see the "obvious" strategy of "let's promote ETG" not explicitly mentioned.

A strategy for scaling effective giving that is not mentioned here is earning to give.

Encouraging and helping people who already buy into the idea of donating effectively to earn more could generate a lot of money and value. I think this strategy should be considered alongside encouraging high earners to donate effectively (I am not making a claim here about which is better).

A concrete step could be to talk to people from 80k about advertising earning to give again.

I hope we will endorse this, should it come to pass

Endorse what, exactly? It is unclear to me what you mean there.
