I have been interested in effective altruism since the early 2010s. I am also interested in transhumanism, rationality, and artificial intelligence.
I totally agree with Dr. Miller. When we talk about AI risks, it's important to strike a balance between staying rational and acknowledging our emotions. Feeling down or hopeless can make us passive, while anger or moral outrage can push us to face challenges together. The trick is to channel these emotions productively while still sticking to our values and rational thinking.
Thank you for your perspective. I see where you're coming from, but I disagree. I think crypto-based prediction markets have the potential to revolutionize how we gather and process information, and that this could have a hugely positive impact on the world. I agree that some regulation is necessary, but I believe crypto can provide a way to bypass unnecessary restrictions and let us tap into this powerful tool; I don't consider most prediction market regulations to be morally justified. Of course, by circumventing regulations I don't mean breaking the law. I'm thinking more along the lines of setting up a market in a friendly jurisdiction.
I am not invested in crypto in any way, and I also believe that most of crypto is a scam, yet there are some diamonds in the rough. For instance, the fact that this technology enables prediction markets with actual money (regardless of how regulators feel about it) seems very valuable to me. I would recommend caution with this technology and industry, not outright rejection.
The link you just posted seems to be broken. Hilary Greaves' "Discounting for public policy: A survey" is available in full at the following URL.
I'm interested in the scientific arguments because, as far as I know, we don't have a good model of consciousness, and many models involve higher-level structures that we don't see in animals with very small brains. I know that some models of consciousness, such as those based on "integrated information", seem to imply that many small animals (or even LLMs!) are conscious, but the evidence is unclear enough that we shouldn't pretend there's a consensus on whether a hummingbird with 100 million neurons is able to instantiate subjective experience! I agree that when in doubt we should take steps to minimise any risk of causing suffering, but this shouldn't lead us to adopt an epistemologically questionable position. So maybe I'm wrong, and since I'm not a specialist in consciousness, I'd be interested to hear why experts endorse such a statement.