
Lixiang


Answer by Lixiang

I don't know international issues or bio well, but fwiw I would resist the urge to spend a ton of time planning out various possible career paths in the distant future as a young undergrad. The future is hard to predict. I'd focus on getting the skills that will give you options.

Respectfully, you may need to just "put your head down" for a couple of years, focus on studying really hard, and not think too much about the distant future. I'd start by focusing your studies on foundational subjects like math and then gradually shift your coursework/time toward applied subjects (e.g. statistics -> bio) as you go through undergrad and beyond. Take hard STEM classes (within your level) and try to learn as much math as possible. Don't worry about getting straight A's; B's in hard classes are better than A's in easy classes. If you can understand, say, scientific computing, multivariable calculus, linear algebra, real analysis + functional analysis (optional), probability, Bayesian statistics, and machine learning/deep learning, and you also have a background in bio (especially with some research experience), you will likely be accepted to good PhD programs (let alone master's programs) in the U.S. and elsewhere. Private-sector jobs or research assistantships will likely also be available.

Every once in a while, come up for air and reorient your direction. But I think it's the long, slow slog (through the textbooks, psets, coding assignments) that will bring you success, rather than getting the perfect career plan from day one.

Answer by Lixiang

You may be interested in this 2021 WSJ article: "A Technology Race to Stop the Mass Killing of Baby Chicks: An estimated six billion newly hatched male chicks are killed world-wide each year. New technologies are being developed to stop that."[1]

I know George Church is a big name in bio/genetics who seems to have interests in transhumanism. 

Another WSJ article from just weeks ago: "Scientists at DeepMind and Meta Press Fusion of AI, Biology".

I would think bio is a great thing to go into! However, I'd guess your point (4) has significant truth to it. If you do bio, I'd make sure you still learn your math! Math is the language of science. I think there are a lot of people who major in bio and get a kind of "soft," pre-med-style bio education that is mostly memorizing stuff. I would see if you can do something like a computational bio major, or double major with math, applied math, stats, or CS. I'd try to take several courses in probability/statistics/machine learning.

  1. ^

    If that article is paywalled, try this: https://drive.google.com/file/d/1shcLzlS8qf7ODbqMqiHtwRzFminRpiC3/view

Interesting, well maybe I'm off base then.

I'm definitely not knowledgeable about AI, but my two cents is that there is a thing called the frame problem that makes AGI very hard to attain, or even to think about. I won't even try to exposit what it is, and that article is a bit dated, but I'd guess the problem still remains beyond anyone's comprehension.

Tangential

I think the whole issue of "one person's modus ponens is another person's modus tollens" is not very well understood by most people, including most philosophers and myself. In fact, I don't think anyone knows quite how to think about these things. I guess it gets into Quinean holism and the intractability problems that accompany it.

But, presumably, it has something to do with Bayesian networks of beliefs and regularization in machine learning (~valuing simplicity) as well as Bayesian philosophy of science more generally. [Part IV of Itzhak Gilboa's decision theory book gets into some of this stuff, which seemed pretty interesting.]
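To make the Bayesian angle a bit more concrete, here is a toy sketch (my own illustration, not anything from Gilboa's book, and the numbers and model are made up): put prior credences on the premise A and on the conditional "if A then B," then condition on finding out that B is false, and see which belief takes the hit. Whether the surprise reads as modus tollens (give up A) or as a reason to reject the conditional depends entirely on the relative priors.

```python
# Toy illustration (my own, purely hypothetical) of "one person's modus ponens
# is another person's modus tollens" in Bayesian terms.
# Two binary propositions:
#   A : the premise
#   C : the conditional "if A then B"
# Crude modelling assumption: B holds exactly when A and C both hold.
# Observing not-B then forces revision of A or C, and which one gets revised
# depends on the relative prior credences.

from itertools import product

def posterior_given_not_B(p_A: float, p_C: float) -> tuple[float, float]:
    """Return P(A | not-B) and P(C | not-B), assuming A and C are independent a priori."""
    joint = {}
    for a, c in product([True, False], repeat=2):
        prior = (p_A if a else 1 - p_A) * (p_C if c else 1 - p_C)
        b = a and c                      # B holds only if both A and C hold
        if not b:                        # keep only the worlds consistent with observing not-B
            joint[(a, c)] = prior
    z = sum(joint.values())
    p_A_post = sum(p for (a, _), p in joint.items() if a) / z
    p_C_post = sum(p for (_, c), p in joint.items() if c) / z
    return p_A_post, p_C_post

# Someone confident in the premise mostly abandons the conditional, and vice versa:
print(posterior_given_not_B(p_A=0.95, p_C=0.60))  # A stays ~0.88, C drops to ~0.07
print(posterior_given_not_B(p_A=0.60, p_C=0.95))  # A drops to ~0.07, C stays ~0.88
```

Of course, real belief networks are much larger than two nodes, which is where the holism and intractability worries come in.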

I don't understand why much more attention is not paid to these things in philosophy, where formal epistemology still seems to be considered a pretty niche field.

I hope people think more about these issues.  

OP here. After spending some more time with ChatGPT, I admit my appreciation for this field (AI Alignment) has increased a bit. 

Edit (2 years later). I now think AI alignment is very important, though I'm not sure whether I have much to contribute to it personally.

I have the opposite intuition, actually - I'd guess that people closer to animals have more empathy for their suffering.

I also have that intuition.

I'd guess that does still hold after adjusting, but I did take it out.

One thing is that I'd guess working-class, rural people are more likely to work in some area at least adjacent to the meat/fish/food industry, so the vegetarian movement would go against their livelihood, which might make them more likely to oppose it. To be clear, I'm not blaming those people. I think the city-dwelling meat eater who deliberately shields themselves from the unpleasant sight of the process that makes their food is much more troublesome.

Also, working class areas just don't have vegan food available as much. 

I'm sure many farmers do care about their animals. 
