
Filip Sondej

198 karma

Comments (33)

My main crux regarding the inter-civ selection effect is how fast space colonization will get. For example, if it's possible to produce small black holes, you can use them for incredibly efficient propulsion, and then even just slightly grabby civs still spread at approximately the speed of light, roughly the same speed as extremely grabby civs. Maybe it's also possible with fusion propulsion, but I'm not sure; you'd need to ask astro nerds.

values aligned with a (potentially discoverable?) moral truth will be more competitive than those that are the most grabbing-prone

I guess the main hope is not that morality gives you a competitive edge (that's unlikely), but rather that enough agents stumble on it anyway through philosophical reflection, e.g. by realizing that open/empty individualism is true.

agents with values that might a priori seem less grabbing-prone could still prioritize colonizing space, as a first step, to not fall behind in the race against other agents (aliens or other agents within their civilization), and actually optimize for their values later, such that there is little selection effect

Yeah, I definitely expect that.

Expose "voting tribes" in the comments.

We could run an algorithm similar to X's Community Notes. Then, on contentious topics, we could easily see the main axis of disagreement. We could also have a comment sorting option that upranks comments upvoted by people from both sides of the disagreement.

See this thread for discussion, and the corresponding post for a description of the algorithm.
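Roughly the kind of thing I have in mind (a toy sketch, not the actual Community Notes or Forum code; all names and numbers here are made up):

```python
# Fit each vote as: vote ≈ mu + user_bias[u] + comment_bias[c] + user_factor[u] * comment_factor[c]
# user_factor then captures the main axis of disagreement, and comment_bias acts as a
# "common ground" score: comments upvoted by both sides keep a high comment_bias even
# after the polarization term has explained away the tribal votes.
import numpy as np

def fit_vote_model(votes, n_iters=2000, lr=0.02, reg=0.1, seed=0):
    """votes: dict mapping (user_index, comment_index) -> +1 (upvote) or -1 (downvote)."""
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _ in votes)
    n_comments = 1 + max(c for _, c in votes)
    mu = 0.0
    user_bias = np.zeros(n_users)
    comment_bias = np.zeros(n_comments)
    user_factor = rng.normal(0, 0.1, n_users)        # user's position on the main axis
    comment_factor = rng.normal(0, 0.1, n_comments)  # how polarizing the comment is

    # plain SGD with L2 regularization on the biases and factors
    for _ in range(n_iters):
        for (u, c), v in votes.items():
            pred = mu + user_bias[u] + comment_bias[c] + user_factor[u] * comment_factor[c]
            err = v - pred
            mu += lr * err
            user_bias[u] += lr * (err - reg * user_bias[u])
            comment_bias[c] += lr * (err - reg * comment_bias[c])
            uf, cf = user_factor[u], comment_factor[c]
            user_factor[u] += lr * (err * cf - reg * uf)
            comment_factor[c] += lr * (err * uf - reg * cf)

    return mu, user_bias, comment_bias, user_factor, comment_factor
```

Sorting by `comment_bias` would surface comments upvoted by people on both sides, and plotting `user_factor` would expose the "voting tribes".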

I would not say my original version of Mutual Matching is in every sense more general.

Ok, I think you're right, it's not strictly more general.

QF mainly deserves a name of information eliciting ... as ... asymptotically all money has to come from the central funder if there are many 'small' donors

Agreed!

ability to set (or know) her monotonously increasing contribution directly as a function of the leverage, which I think is really is a core criterion for an effective 'leverage increasing'

Yeah, that is a crucial component, but I think we need not only that but also some natural way in which the donation saturates, because when you keep funding some project, at some point the marginal utility of further funding decreases. (I'm not sure how the original Mutual Matching deals with that.) I think Andrew Critch's S-process handles this very elegantly, and it would be nice to take inspiration from it. (In my method here, the individual donations saturate toward some maximal personal limit, which I think is nice, but it's not quite the same as the full project pot saturating.)
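To make "saturates" concrete, here's a toy curve of the kind I mean (a made-up formula, not the exact one from my post): the pledged contribution grows monotonically with leverage but flattens out at a personal limit.

```python
# Toy illustration: a contribution that increases monotonically with leverage
# but saturates at a personal limit, so the marginal pledge shrinks as leverage grows.
import math

def contribution(leverage, personal_limit=100.0, steepness=0.5):
    """Amount pledged at a given leverage; approaches personal_limit as leverage grows."""
    if leverage <= 1.0:
        return 0.0  # no matching benefit yet, pledge nothing
    return personal_limit * (1.0 - math.exp(-steepness * (leverage - 1.0)))

for lev in [1, 2, 5, 10, 20]:
    print(lev, round(contribution(lev), 1))
```

Note this only saturates the individual pledge; saturating the whole project pot (as the S-process does) would need an extra mechanism.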

Oh yeah! They are identical in spirit, but a bit different in implementation. So they will have different results sometimes.

It would be nice to test both of them in some real-world setting.

Edit: it seems that your method is more general? Maybe you could set the curves so that the matching works the same way as described here. So the method in this post could perhaps be seen as a special case of Mutual Matching, aiming for some particular nice properties.

I tend to think that qualia are bound together when they causally act as one. So if the left and right hemispheres are highly integrated (act as one), they aren't separate experiences. So here I agree with IIT.

Ah, this story is great. In general, Egan's stuff is awesome. If I remember correctly, the story was more about personhood (the memories, dispositions, etc.) than about separate experiences (which would require some processes to run separately, in parallel). I think it's an important distinction to make, as experience is fundamentally real, while personhood (or "continuous self") is more of a thing we assign to systems - a useful fiction (just like money or democracy is a useful fiction).

There is also the feeling of being yourself, but that's a different thing from pure experience, and different from assigned personhood. For example, there are cases (in meditation and on psychedelic trips) where the feeling of being someone disappears but experience remains.

I like that soul swapping argument :D

In the case of a split brain, both of the experiences would feel like being me.

Ah, no, I just read the report of results on Wikipedia (that's how they worded it). Hm, it's strange if that's not in the paper.

I don't expect it to be that bad. More like some noise added to each post's score, and some posts not getting enough attention because of it.

In the Reddit experiment, single upvotes caused posts to have a 25% higher mean score later (the effect was present in all parts of the distribution).

But the effect size was very dependent on the topic, so I'm curious how that would turn out on the EA Forum.

Yeah, good point. It may be mostly redundant.

Oh great! I didn't know about some of them.

Still, the main thing I had in mind was to embed some custom interactive stuff.

Implementing it as iframe support would be the most general option, and it would solve all the possible "embed X" suggestions at once. So it seems to be the most efficient approach.
