James-Hartree-Law

Major in History of Philosophy, Science and Math @ St. John's College, Santa Fe.

Comments (3)

I would describe myself as a conservative EA. 

I think I ended up on the conservative side of the political compass because of cause prioritisation. I felt that liberal political groups were focused on personal identity considerations at the expense of more important goals. An example would be prioritising diversity hiring over talent hiring (which might be an overblown, optical concern that doesn't really exist as much as I think it does).

My feeling is that dividing people into identity groups based on what makes them similar (shared gender identity or sexual orientation) is the wrong way of understanding community. Rather, community is predicated on the need for diversity (builders, bakers, and candlestick makers make a community, not straight men). I am an EA because of my similarities with other EAs, but I am a member of my local community first and foremost. That is the community I need in order to exist; local community is more important than identity-based communities like EA. [I really welcome red-teaming on this!]

I think EAs have the mental strength to handle diverse political views well. What makes it difficult for conservatives and liberals to have conversations is an unwillingness to view others complexly and to assume good intent. We are EAs because we care about doing good well. We already assume a certain amount of good intention in our interactions with other EAs. We could signal this to conservative folks. Liberals do not have a monopoly on moral feelings.

That said, a real conservative/liberal divide is the size of one's circle of moral concern. EAs have very broad circles; conservatives have very local ones. But this also seems like a general problem with EA: it can think too large, too broad, too un-local. I'm pretty unsure about that.

Protesting its slow death to the bitter end, Bing launched its AI-assisted search engine in 2023, hoping to carve out a use case against Google. In 2024, Google hit back, integrating Gemini into its search function. Arguably, Gemini is now the front page of the internet. Much of the time now, when I shoot out a Google query, Gemini’s answer pops up at the top; in fact, if I want to find an answer written by a human, I have to scroll down, because Gemini’s answer occupies my entire screen. I have an inkling about what is motivating this choice architecture: for now there is no ad placement, but surely soon there will be. For now, attention is being directed away from websites that host their own ads and towards Google’s own Gemini box.

This is a little concerning. Nora Lindemann (https://tinyurl.com/chatbotsearch) writes on chatbots as search engines, introducing the term "sealed knowledges": she is getting at how a question can have a plurality of meaningful answers, something a chatbot doesn't convey when it gives a short, structured answer written in a hyper-plausible tone. There are questions with simple answers, and there are those that warrant struggle and rumination. Well-packaged chatbot answers make me less likely to accidentally learn things as I try to answer my non-linear question. I wonder: will websites lose revenue? Do chatbot search engines help or hinder learning? Mediated by chatbots, will we relate to information more objectively?

I don't understand the thought connecting the death of a chicken and the possible death of a baby (if it is not a fetus). The premise of your account, I thought, is that a fetus is possibly a human life. If it is a human life, then a genocide is happening every year. And if it is true that a fetus is a human life, then why is it a relevant comparison that drastically more broiler chickens get killed yearly? On what basis can a comparison of the importance of those lives be made? As an aside, I was very interested to learn that a "broiler" is a chicken raised specifically for meat. Broil: "to cook (meat or fish) by exposure to direct, intense radiant heat."