Welcome to the fifth open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.
I've been thinking of doing a 'Live Below the Line' challenge to raise money for MIRI/CFAR, and asking someone at MIRI/CFAR to do the same for CEA in return. The motivation is mostly to have a bit of fun. Does anyone think this is a good or bad idea?
I made a map with the opinions of many Effective Altruists and how they changed over the years.
My sample was biased toward people I live with and read. I tried to account for many different starting points, and of course I got many people's opinions wrong, since I was just estimating them.
Nevertheless, there seems to be a bottleneck at accepting Bostrom's Existential Risk as The Most Important Task for Humanity. If the trend is correct, and if it continues, it would generate many interesting predictions about where new EAs will come from.
Here, have a look...
I just read Katja's post on vegetarianism (recommended). I have also been convinced by arguments (from Beckstead and others) that resources can probably be better spent to influence the long-term future. Have you seen any convincing arguments that vegetarianism or veganism are competitively cost-effective ways of doing good?
I'm thinking of giving "Giving games" for Christmas this year.
Family and friends get an envelope with two cards: a nice Christmas card saying they now have x NOK to give to a charity of their choosing. It then presents some interesting recommendations and encourages them to look further into those charities if they want to. Once they have decided, they write their choice on an accompanying blank (but postage-paid) card addressed to me, and when I receive the card after Christmas I will donate the money.
Has anybody else thought of something similar? Do you have any ideas that could make it more interesting or better in any way?
As a follow-up to this comment: I gave my 10-minute talk on effective altruism at Scribd. The talk went better than I expected: several of my coworkers told me afterwards that it was really good. So I thought I would summarize the contents of the talk so it can be used as a data point for presenting on effective altruism.
You can see the slides for my talk in Keynote, PPTX, and HTML formats. Here are some notes on the slides:
The thought experiment on the second slide was Peter Singer's drowning child thought experiment. After giving everyone a few seconds to
Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.
I found 80,000 Hours in early/mid-2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point in time, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do ...
Should we try to make a mark on the Vlogbrothers' "Project 4 Awesome"? It could expose effective altruism to a wide and, on average, young audience.
I would love to help in any way possible, but video editing is not my thing...
People often criticise GWWC for bad reasons. In particular, people harshly criticise it for not being perfect, despite not doing anything much of value themselves. Perhaps we should somewhat discount such armchair reasoning.
However, if we do so, we should pay extra attention when people who have donated hundreds of millions of dollars, a majority of their net worth, and far more than most of us will, have harsh criticism of giving pledges.
Animal Charity Evaluators has found that leafleting is a highly effective form of anti-speciesist activism. I want to use it more generally for effective altruism too. Several times a year I'm at conventions with lots of people who are receptive to the ideas behind EA, and I would like to put some well-designed flyers into their hands.
That's the problem, though: "well-designed." My skills kind of end at "tidy," and I haven't been able to find anything of the sort online. So it would be great if a gifted EA designer could create some freely licensed flyers as SVG ...
[Your recent EA activities]
Tell us about these, as in Kaj's thread last month. I would love to hear about them - I find it very inspirational to hear what people are doing to make the world a better place!
(Giving this thread another go after it didn't get any responses last month.)
I'm planning on starting an EA group at the University of Utah once I get back in January, and I need a good first meeting idea that will have broad appeal.
I was thinking that I could get someone who's known outside of EA to do a short presentation/question and answer session on Skype. Peter Singer is the obvious choice, but I doubt he'd have time (let me know if you think otherwise). Can anyone suggest another EA who might have name recognition among college students who haven't otherwise heard of EA?
Is there an audio recording of Holden's "Altruistic Career Choice Conference call"? If so, can someone point me in the right direction? I'm aware of the transcript:
http://files.givewell.org/files/calls/Altruistic%20career%20choice%20conference%20call.pdf
Thanks!
I posted this late before, and was told to post it in a newer Open Thread, so here goes:
Is voting valuable?
There are four costs associated with voting:
1) The time you spend deciding whom to vote for.
2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).
3) The attention you pay to politics and associated decision cost.
4) The sensation that you made a difference (a cost conditional on your vote not actually making a difference).
What are the benefits associated with voting?
1) If an election is decid...
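The costs above can be weighed against the standard expected-value argument for voting. Below is a minimal back-of-the-envelope sketch; every number in it (the probability of casting the decisive vote, the value gap between candidates, the dollar costs) is an illustrative assumption of mine, not a figure from this thread.

```python
# Back-of-the-envelope expected value of one vote.
# All numbers are illustrative assumptions, not estimates from the post.

p_decisive = 1e-7           # chance your vote flips the election (varies hugely by race)
value_gap = 1e9             # assumed difference in social value between outcomes, in $
expected_benefit = p_decisive * value_gap

time_cost = 2 * 25          # two hours of deciding and voting, valued at $25/hour
travel_risk_cost = 1e-6 * 5e6   # tiny death risk in traffic times a $5M value of life
attention_cost = 10         # rough cost of attention paid to politics

total_cost = time_cost + travel_risk_cost + attention_cost
print(f"expected benefit: ${expected_benefit:.2f}, total cost: ${total_cost:.2f}")
# → expected benefit: $100.00, total cost: $65.00
```

Under these made-up numbers voting comes out positive, but the answer flips as soon as `p_decisive` drops by an order of magnitude, which is exactly why the decisiveness question in point 1) matters.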
I've been growing skeptical that we will make it through AI, due to
1) civilizational competence (that is, incompetence), and
2) the fact that all human cognition apparently rests on largely subjective metaphors over radial categories, with arbitrary internal asymmetries that we have no chance of teaching a coded AI in time.
This is on top of all the other seeming impossibilities (solving morality, consciousness, the grounding problem, or at least their substitute: value loading).
So it seems more and more to me that we have to go with the forms of AI that have some small chance of converging naturally toward human-like cognition, like neuromorphic AI or WBE. Since those are already low-probability paths (see e.g. Superintelligence) to begin with, my growing impression is that we are very, very likely doomed.
So far the arena of people doing control-problem-related work has been dominated by pessimists (say, people who think we have less than an 8% chance of making it through). Over time, as more people join, it is likely that more optimists will join too. How will that affect our outcomes? Are optimists more likely to underestimate important strategic considerations?
A separate question would be: what does EA look like in a doomed world? Suppose we knew for certain that AGI would destroy life on Earth; what are the most altruistic actions we can take between now and then? Is postponing the end by a few days more valuable than donating to effective sub-Saharan charities?
These thoughts are not fully formed, but I wanted people to give their own opinions on these issues.
It makes sense that the earliest adopters of the idea of existential risk are more pessimistic and risk-aware than average. It's good to attract optimists because it's good to attract anyone and also because optimistic rhetoric might help to drive political change.
I think it would be pretty hard to know with probability >0.999 that the world was doomed, so I'm not that interested in thinking about it.