Welcome to the fifth open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.


What do people think should be the downvoting culture and criteria around here? I've seen some pretty harsh downvoting, and am concerned that this might make the forum seem less welcoming to new people.

5
RyanCarey
I'll just note that we're pointing people somewhat in the right direction by labelling the up- and down-vote buttons "I found this useful" and "I didn't find this useful", in order to encourage people to appraise whether the posts are valuable and evidence-based, rather than whether they're to the reader's personal taste. I think the big picture here is not whether one agrees with individual upvotes or downvotes but how the system is working overall. Largely, I think it's identifying real differences in post quality, and the fact that about 95% of votes appear to be upvotes means that the system will encourage people to post more. So I'm pretty encouraged by the way things are going so far. Maybe we can tilt people to consider slightly more comments 'useful', though.
4
Owen Cotton-Barratt
I have mixed feelings on this. From the point of view of highlighting the best comments to allow a good reading order, I think there may not be enough downvoting. Having more downvoting, as well perhaps as more upvoting, would give a richer distinction and help the best stuff to rise to the top quickly, even if it's new content on an old thread. On the other hand, from the point of view of the feedback people experience, downvoting might be a turn-off, and more of it might reduce people's inclination to post. But this effect might be reduced if downvoting were more normal. Overall I guess I'd weakly prefer more upvoting and more downvoting -- including downvoting things that you don't disagree with but would have been happy to skip reading.
-1
Tom_Ash
That's a good point, though being extra sparing in your upvoting would achieve a decent fraction of the same benefits. On the other hand, that would mean that fewer people got the warm fuzzies of upvotes, but also that fewer people would get demoralising downvotes.
4
RyanCarey
Being sparing in your upvoting? That seems to be the worst of both worlds!
2
Tom_Ash
I'm imagining 2 scenarios:
1) People have a very low threshold for upvoting, so upvote most comments. They only downvote in extreme circumstances.
2) People have a high threshold for upvoting, so only upvote comments they think particularly helpful. They only downvote in extreme circumstances.
My thought is that more information about comment quality is conveyed in the second.
2
Owen Cotton-Barratt
I guess that people currently upvote in the vicinity of 20% of comments they read (this is a guess, but based on how many more upvotes the top articles/comments get than the median), and downvote somewhat under 1%. Optimal for information in theory might be a 1/3 1/3 1/3 split between upvoting, downvoting and not voting. But I think higher thresholds for downvoting than that probably make sense. I guess I might like to see upvoting at about 30% and downvoting at about 3%?
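Owen's "optimal for information" point is essentially an entropy argument: an even three-way split between upvoting, downvoting, and not voting carries the most information per reader. A minimal Python sketch of that calculation, using the guessed percentages from the comment above (illustrative numbers, not measurements):

```python
import math

def vote_entropy(p_up, p_down):
    """Shannon entropy (in bits) of the three-way split between
    upvoting, downvoting, and not voting on a given comment."""
    p_none = 1.0 - p_up - p_down
    return -sum(p * math.log2(p) for p in (p_up, p_down, p_none) if p > 0)

print(vote_entropy(0.20, 0.01))   # guessed current behaviour: ~0.80 bits per reader
print(vote_entropy(0.30, 0.03))   # suggested target: ~1.06 bits per reader
print(vote_entropy(1/3, 1/3))     # even three-way split: ~1.58 bits, the theoretical maximum
```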
1
Evan_Gaensbauer
The second scenario isn't how I started upvoting, but what I'm leaning towards now, on this forum.
1
Owen Cotton-Barratt
Downvoted to follow my own suggestion -- I'm afraid I found this confusing/confused, as I think just being more sparing with upvoting gets you no benefits at all, and you didn't explain how it was meant to work. Upvoted your original comment though. :)
0
Tom_Ash
Ha, fair enough! I tried to explain it in my reply to Ryan: http://effective-altruism.com/ea/b2/open_thread_5/1h2
3
Evan_Gaensbauer
There are different cultures for upvoting and downvoting on different websites:

* As a passive user of Reddit, I'm aware of the voting culture there. Depending on the subreddit, it might be as bad as any other forum on the Internet, which is a wilderness of impoliteness and inconsideration. However, you might end up with one that's better. Obviously, this varies widely depending on the subreddit(s) one is using.
* As an active user of Less Wrong, I tend not to downvote too much. Voting there is on the basis of whether something adds to the level of discourse, in terms of moving it in a direction of greater or lesser quality.
* I try to base how I vote on this forum on the sentences that go along with the votes. For example, I upvote a comment on this site if I actively find it useful, i.e., it provides a new framing or new information which clarifies or enriches my understanding of an essay's subject matter. There are lots of comments that I don't find 'useful', per se, in the sense that I don't learn anything new from them. However, I don't (want to) downvote those comments, because I don't want to imply anything is wrong with them when I don't really believe that. Such comments are just-so to me. I believe I would only downvote an essay, article, or comment on this forum if I believed it was actively harmful to people's understanding, because it would decrease clarity or the level of discourse. I would like to think I would do this regardless of whether it was from a position I agreed with or not.

Generally, I tend to be liberal with upvotes and conservative with downvotes. However, this is a personal preference based on my perception that online communities with voting systems tend to be less friendly than I would like them to be, so I try correcting for this in the opposite direction in what small way I can as a user.
3
Robert_Wiblin
I find receiving downvotes pretty demoralising, in particular when they are given for disagreeing with the conclusion, rather than thinking something is poorly reasoned.
1
Larks
"This person disagrees with me" and "the person thinks my reasoning is bad" are closely related - if your reasoning was good, they'd agree with you. And even when they differ, the original author is hardly an unbiased judge.
2
Robert_Wiblin
I don't think of it that way, because usually there are multiple important considerations on both sides of a disagreement. If someone raised a legitimate reason for their point of view, but I disagreed with their conclusion all things considered, I would not down-vote unless I thought the reason they were focussed on in their comment didn't make sense. That's rarely the case here; disagreements are most often about different weight given to different considerations.
0
Peter Wildeford
Something can be well-reasoned but still be disagreeable if it ignores an important consideration.
2
AlasdairGives
The difficulty (that no one seems to have figured out how to solve) is designing a system that effectively hides low-quality posts without becoming more of an echo chamber over time. While a community is small it is not too much of a problem, because even mildly downvoted posts get good attention - but as it grows, highly upvoted posts that reflect existing tastes or confirm existing biases increasingly dominate.
1
Michelle_Hutchinson
I don't usually use forums, so I don't know what the norm is. But I have found it somewhat demoralising so far when I've taken time to respond carefully and in detail to questions, and then been downvoted with no explanation as to why people didn't find the comment useful. (I'm very willing to believe this is just because I'm not used to forums though - I'm only used to Facebook, where you can only upvote, hence all negative feedback has to be spelled out.) Thanks for bringing this up - seems like a useful discussion to have!
0
Austen_Forrester
Michelle, I looked through all your posts and they're all really good, not even controversial, so I wouldn't assume that they were downvoted or had low points for a legit reason. If someone had a legit criticism of something you said, he should write what it is. That's the whole point of the forum: to exchange ideas. I don't find the points system affects that in a positive way. I think without points people would have to write what their criticism of a post is and defend it. Button clicking seems more like an act of emotion to me.
0
Peter Wildeford
I personally think that if someone downvotes something, especially a post where it costs -10 on karma, then they owe that person a brief explanation for the downvote.
0
pappubahry
I haven't seen a downvote here that I've agreed with, and for the moment I'd prefer an only-upvote system. I don't know where I'd draw the line on where downvoting is acceptable to me (or what guidelines I'd use); I just know I haven't drawn that line yet.
2
jayd
Having some downvoting is good, and part of the raison d'etre of this forum as opposed to the Facebook group. I agree that people downvote slightly too often, but that's a matter of changing the norms.
0
RyanCarey
This is because we want to encourage people to contribute, right? One approach is to be the norm you want to promote. If you want to encourage people to post, then upvote more posts. If you're concerned that material is getting downvoted when it is not spam, then give it an upvote and a substantial reply. :)
0
Tom_Ash
I wasn't personally saying that was a good idea, just that I thought there should be (somewhat) fewer downvotes. Of note, I'm not thinking about myself getting downvotes but occasions where it happened to other people!
-1
Austen_Forrester
I loathe the voting system. Actually, I have never clicked the up- or down-vote button once, and I never will, because it's juvenile to turn commenting on something as important as how to improve the world into a popularity contest. We are adults and should be treated like adults. It's not even useful, anyhow – I've found no correlation between the quality of a comment and its points. The highest-rated comments are usually questions, or short comments like “thanks for this”.

Does anyone else see the contradiction in a subculture that purports to be about rationality bringing social approval bias into the mix? I value judging people's views solely on their merits; I don't want my judgement to be skewed by the judgement of “the group”, and likewise, I only want people to judge my views by their merits, not by how popular they are.

Besides skewing the logical reasoning of visitors to the forum, the voting system also promotes conservatism – people will naturally be too scared to write something original for fear of it having low points. I think that someone cannot think too broadly about how to help the world – crazy ideas should be welcomed! Perhaps most of them will be duds, but there only needs to be one that turns out to be a winner! Even without the voting system, posters have to deal with the judgement of other posters, but at least written comments can provide helpful feedback, whereas simply having low votes will make the poster self-conscious and shy about writing something against the grain.
2
yboris
I thought that the voting system is beneficial primarily because it allows others to "upvote" something as important. When I glance at comments, I am unlikely to read dozens of comments (limited time), but the upvotes are a simple way for me to tell which comments are more likely to provide something of value. Upvotes are not a true demonstration of value, but they help. Consider if a comment gets 100 upvotes - that suggests there is something there that others like and I would do well to at least glance at it. The points you raise are worth considering, though I think the benefits outweigh the concerns you have. Do you think otherwise?
1
Austen_Forrester
If someone thinks that the better comments have higher votes, then certainly for him the points system would be helpful, especially for long threads. I don't find that's usually the case, which is one reason why I'm not fond of it. I find that people “like” (whether that means clicking a button on your computer, or agreeing with someone in person) things that validate their pre-existing feelings, rather than open them up to new ideas they hadn't considered before (most respond with fear to the latter). I heard on the radio a few months ago that studies show that problem-solving meetings are more productive when the people there have opposing perspectives, come from different fields, etc. IOW, the perspective you don't want to hear is probably the one you need to.

Having said that, even if the points system doesn't correlate with the most helpful comments, it could still be net positive for other reasons: encouraging more participation than it discourages, providing support/validation for those interested in EA, being normal (since most sites have voting now, people might think it was weird if CEA didn't).

Another thing that just occurred to me yesterday is that the posts on the forum seem mostly geared to people who are already involved in EA, when it could be more productive to write posts that are geared to new people learning about EA (both in terms of content and writing style). TLYCS/GWWC blogs are more like that, although they are only for poverty.
0
RyanCarey
Yeah, I agree that we've talked about effective altruism using the assumption that people already know roughly what that is and why we would care about it. It's a good idea to post more material that is of interest to a wider audience. Having started off with stuff that affirms the purpose of the forum and our shared identity is not a bad thing; it's just that it'd be good to balance it out now with some material that a wider range of people can enjoy.

I've been thinking of doing a 'live below the line' to raise money for MIRI/CFAR, and asking someone at MIRI/CFAR to do the same for CEA in return. The motivation is mostly to have a bit of fun. Does anyone think this is a good or bad idea?

2
Niel_Bowerman
Pbhyq lbh ng yrnfg pnyy vg fbzrguvat bgure guna "yvir orybj gur yvar". V jbeel gung fbzr crbcyr zvtug svaq vg bssrafvir, nf yvir orybj gur yvar vf gurzrq fb urnivyl nebhaq cbiregl. V qba'g frr jung rngvat gur fnzr purnc sbbq sbe n jrrx unf gb qb jvgu ZVEV/PSNE/PRN.
2
RyanCarey
I've hidden my thoughts in rot-13 to avoid biasing others: Vg pbhyq or pbafgehrq nf gevivnyvfvat cbiregl
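For readers unfamiliar with rot13: it is a simple letter-substitution cipher, and applying it a second time recovers the original text. Both hidden comments above can be decoded with one call in Python (the plaintext is deliberately not reproduced here):

```python
import codecs

hidden = "Vg pbhyq or pbafgehrq nf gevivnyvfvat cbiregl"  # the rot13 text from the comment above
print(codecs.decode(hidden, "rot_13"))  # rot13 is its own inverse, so this prints the plaintext
```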
1
Peter Wildeford
Would you be raising money mainly from an EA audience? The idea of "living below the line" seems to have no connection at all to MIRI / CFAR, so it kind of feels like a non sequitur to non-EAs. Maybe more thematic would be living without computers (or without rationality!), but that seems not worthwhile.
1
Robert_Wiblin
The fact that it's a bit of a non-sequitur is why I find it a fun idea. It sounds like most people see it as outright weird, rather than quirky in an amusing way as I intended, so I won't do it.
1
Larks
I guess if we get a Hansonian future the line would be very low. It's unlikely pre-upload Rob could actually survive at such a level though, so probably not the best idea.
1
arrowind
It'd be an interesting experiment to see how much this raised.

I made a map with the opinions of many Effective Altruists and how they changed over the years.

My sample was biased by people I live with and read. I tried to account for many different starting points, and of course, I got many people's opinions wrong, since I was just estimating them.

Nevertheless, there seems to be a bottleneck at accepting Bostrom's Existential Risk as The Most Important Task for Humanity. If the trend is correct, and if it continues, it would generate many interesting predictions about where new EAs will come from.

Here, have a look... (read more)

2
Joey 🔸
I suspect that one could make a chart to show a bottleneck in a lot of different places. From my understanding, GW does not seem to think what the yEd chart would imply. "I reject the idea that placing high value on the far future – no matter how high the value – makes it clear that one should focus on reducing the risks of catastrophes" http://blog.givewell.org/2014/07/03/the-moral-value-of-the-far-future/
0
Diego_Caleiro
The yEd chart shows GiveWell being of the opinion that poverty alleviation is desirable and quite likely the best allocation of resources in 2013. This does not seem to be a controversial claim. There are no other claims about GiveWell's opinion in any other year. Notice also that the arrows in that chart mean only that empirically it has been observed that individuals espousing one yellow opinion frequently change their opinion to one below it. The reverse can also happen, though it is less frequent, and frequently people spend years, if not decades, within a particular opinion. Can you give an example of a chart where a bottleneck would occur in a node that is not either the X-risk node, or the transition to the far future node? I would be interested in seeing patterns that escaped my perception, and it is really easy to change the yEd graph if you download it.
1
pappubahry
The bottom part of your diagram has lots of boxes in it. Further up, "poverty alleviation is most important" is one box. If there was as much detail in the latter as there is in the former, you could draw an arrow from "poverty alleviation" to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn't exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way in amongst them, and "poverty alleviation is most important" would be a bottleneck. Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?). I agree that there's some real sense in which existential risk or far future concerns is more of a bottleneck than human poverty alleviation or animal welfare -- there's a bigger "cause-distance" between colonising Mars and working on AI than the "cause-distance" between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and "insight" overstates the difference.
1
Peter Wildeford
Is it possible to get a picture of the graph, or does that not make sense?
5
RyanCarey
Here you go: image
1
Diego_Caleiro
Thank you Ryan, I tried doing this but failed to be tech savvy enough.
2
RyanCarey
No problem. There's an Export button in yEd's file menu. Then you have the image file, which you can upload to Imgur.
0
Peter Wildeford
Thanks!
0
Giles
Wow, this is amazing! It brings to mind the idea of a "what kind of altruist are you?" quiz, with the answer providing a link to the most relevant essay or two which might change your mind about something...

I just read Katja's post on vegetarianism (recommended). I have also been convinced by arguments (from Beckstead and others) that resources can probably be better spent to influence the long-term future. Have you seen any convincing arguments that vegetarianism or veganism are competitively cost-effective ways of doing good?

2
Daniel_Dewey
Related thought 3: Katja's points about trading inconveniences and displeasures are interesting. Is it good to have a norm that all goods and "currencies" that take part in one's altruism budget and spending must be tradeable with one another? Is this psychologically realistic? One reason for thinking that goods in the altruism budget should be tradeable is that in some sense my Altruism Budget is what I call the part of my life where I take the demandingness of ethics seriously. Is this how anyone else thinks about it?
1
Tom_Ash
Yes, I think about it in the same way, and think that demanding or difficult non-monetary decisions like vegetarianism should fall into your altruism budget, where you should consider the trade-off between them and, say, donating money.
1
RyanCarey
The altruism budget idea is plausible. It works well when you're literally talking about money. For example, it's really psychologically difficult to face the decision of whether to redirect your funds to charity every time you buy a dinner or go to a movie. It's much better to take out a fixed fraction of your budget each month and give it away. Then, you can make non-altruistic decisions with your 'you' money without feeling selfish. Then, if you want to change the fraction of your budget that you give away, you make that decision at the end of the month or year. It seems reasonable that something like that should happen with time i.e. that effective altruists should retain a concept of "leisure"!
2
RyanCarey
But maybe it works poorly when things aren't obviously commodities. Like, I think there's a place for virtue ethics - just being the kind of person you would want to see in the world. And I think lots of people who take a virtue-based approach could reasonably object that always thinking of good in terms of money could be self-defeating. Also, some psychological studies apparently show that thinking about money decreases your generosity.
2
Daniel_Dewey
Related thought 2: as someone who's already vegetarian, I think it would be more costly in terms of effort, bad feels, etc. to switch back than to stay veggie or slowly drift back over time.
1
RyanCarey
Yes, I agree with this. It seems like it's easier to stay vegetarian. It's cheap, it feels good. It's probably not very disadvantageous to health. Long live the status quo - for diet ethics, at least.
2
Daniel_Dewey
Related thought 1: I think some tension can be defused here by avoiding the framing "should EAs be vegetarian?", since answering "no" makes it feel like "EAs should not be vegetarian", when really it seems to me that it just implies that I can't put any costs incurred by vegetarianism in my Altruism Budget, the same as costs I incur by doing other mundane good things.
1
RyanCarey
Yes, 'are altruistic people obligated to become vegetarian?' might be better
1
Vincent_deB
Yes, that was a good argument that EAs aren't obligated to be vegetarian, even if reasonable people can disagree about the numbers.
0
Peter Wildeford
I think people have a tendency, though, to think that vegetarianism is more costly than it actually is. So I'm skeptical unless a person has actually tried to give up meat and faced some sort of problem. For example, I'm not vegan because of social pressure, but I am vegetarian. At heart, even if you eat meat, there's no reason I can fathom why you can't simply try to eat less of it...
4
Daniel_Dewey
You may be right that people overestimate the cost. I'm not sure how to gather data about this. Re: your second point ("there's no reason I can fathom..."), how about this lens: view meat as a luxury purchase, like travel, movies, video games, music, etc. Instead of spending on these, you could donate this money, and I can imagine making a similar argument: "there's no reason I can fathom why you can't simply try to do less of that...", but clearly we see foregoing luxuries as a cost of some kind, and don't think that it's reasonable to ask EAs to give up all their luxuries. When one does give up luxuries for altruistic reasons, I think it's fine to try to give up the ones that are subjectively least costly to give up, and that will have the biggest impact. Other costs: changing your possibly years-long menu for lunch and dinner; feeling hungry for a while if you don't get it figured out quickly; having red meat cravings (much stronger for some people than others, e.g. not bad for me, but bad for Killian). I don't think what I've said is a case against vegetarianism; just trying to convey how I think of the costs. ETA: there are other benefits (and other costs), this is just my subjective slice. An expert review, on which individuals can base their subjective cost breakdowns, would probably be helpful.

I'm thinking of giving "Giving games" for Christmas this year.

Family and friends get an envelope with two cards. A nice Christmas card says they now have x NOK to give to a charity of their choosing; it then presents some interesting recommendations and encourages them to look into them more if they want to. When they have decided, they have to write it down on an accompanying empty (but pre-stamped) card addressed to me, and when I get the card after Christmas I will donate the money.

Has somebody else thought of something similar? Do you have any ideas that could make it more interesting or better in any way?

4
Joey 🔸
I would also recommend running a Christmas Fundraiser (Basically asking for donations instead of gifts during Christmas). http://christmas.causevox.com/ I will post a longer description + guide on how to set this up on a main thread early December.
3
RyanCarey
That could be interesting. You could count the numbers with tracking URLs. You could even get a group of effective altruists to run a similar giving game using the same tracking URLs so that everyone can (anonymously) see how many people have voted for the same or different charity from you. This could be a pretty cool project I think.
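The thread doesn't specify how the tracking URLs would be built; one common approach (an assumption here, not something stated above) is to tag each card's donation link with UTM-style query parameters, so the destination page's analytics can count clicks per charity and per campaign. A rough sketch with hypothetical values:

```python
from urllib.parse import urlencode

def tracking_url(base_url, campaign, charity):
    """Build a link whose clicks can be counted separately for each
    giving-game card; parameter names follow the common UTM convention."""
    params = {
        "utm_source": "giving-game",
        "utm_campaign": campaign,
        "utm_content": charity,
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical placeholder URL and charity label, one card per recommended charity.
print(tracking_url("https://example.org/donate", "christmas-2014", "charity-a"))
```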

As a follow-up to this comment: I gave my 10-minute talk on effective altruism at Scribd. The talk went better than I expected: several of my coworkers told me afterwards that it was really good. So I thought I would summarize the contents of the talk so it can be used as a data point for presenting on effective altruism.

You can see the slides for my talk in keynote, pptx, and html. Here are some notes on the slides:

  • The thought experiment on the second slide was Peter Singer's drowning child thought experiment. After giving everyone a few seconds to

... (read more)

Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.

I found 80,000 Hours in early/mid-2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point in time, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do ... (read more)

2
Benjamin_Todd
Hi Fluttershy, Really appreciate hearing your feedback. We've written about how to choose what subject to study a bunch of times, but I agree it's hard to find, and it's not a major focus of what we do. Unfortunately we have very limited research capacity and have decided to focus on choosing jobs rather than subjects, because we think we'll be able to have more impact that way. In the future I'd love to have more content on subject choice though. I also realise our careers list comes across badly. I'm really keen to expand the range of careers that we consider - we're trying to hire someone to do more career profiles but haven't found anyone suitable yet. Being an actuary and engineering are both pretty high on the list. I also know that a lot of people around 80,000 Hours think most people should do earning to give. That's not something I agree with. Earning to give is just one of a range of strategies. Ben
2
John_Maxwell
Seems like 80K could probably stand to link to more of Cognito Mentoring's old stuff in general. No reason to duplicate effort.
0
Benjamin_Todd
Yeah I'll add a link to Cognito on the best resources page next time I update it.
1
AGB 🔸
"Actually, I can't find any discussion of choosing a college major on the 80,000 Hours site, though there are a couple of threads on this topic posted to LessWrong." Not a tremendous excuse, but it wouldn't surprise me if this is basically because 80k is UK-based, where there is no strong analogue to 'choosing a major' as practised by US undergraduates; by the time someone is an undergraduate in the UK (actually, probably many months before that, given application deadlines), they've already chosen their subject and have no further choices to make on that front except comparatively minor specialisation choices.
0
RyanCarey
Not to take away from the substance of your post, but when you note that impact is power-law distributed, doing important scientific research sounds [much](https://80000hours.org/2012/08/should-you-go-into-research-part-1/) [more skill-dependent](https://80000hours.org/2013/01/should-you-go-into-research-part-2/) than quantitative finance.

Should we try to make a mark on the Vlogbrothers' "Project 4 Awesome"? It can expose effective altruism to a wide and, on average, young audience.

I would love to help in any way possible, but video editing is not my thing...

https://www.youtube.com/watch?v=kD8l3aI0Srk

2
ricoh_aficio
Hi UriKatz, there's a group of us trying to do just that, and we'd love to have your help. Join the EA Nerdfighters Facebook group and I'll brief you on what we've been up to. :) https://www.facebook.com/groups/254657514743021/

People often criticise GWWC for bad reasons. In particular, people harshly criticise it for not being perfect, despite not doing anything much of value themselves. Perhaps we should somewhat discount such armchair reasoning.

However, if we do so, we should pay extra attention when people who have donated hundreds of millions of dollars, a majority of their net worth, and far more than most of us will, have harsh criticism of giving pledges.

0
Nicholas_Bregan
From his email: "When I talk to young people who seem destined for great success, I tell them to forget about charities and giving. Concentrate on your family and getting rich—which I found very hard work. I personally and the world at large are very glad you were more interested in computer software than the underprivileged when you were young. And don’t forget that those who don’t make money never become philanthropists." There is certainly truth in this. But not all of Wilson's giving was in areas suitable for effective altruism. In particular, donating to the Catholic Church arguably causes active harm. Preserving monuments and wildlife reserves is at least a good distance away from optimal. I think the strongest objection to his objection is that becoming rich doesn't make the world a better place in itself. Even if you make other people richer in the process, it's not a clear-cut world improvement. Especially if you consider replaceability effects and negative externalities from certain forms of business, making rich people more altruistic, and more effectively altruistic, could be more important than making more rich people.
0
Larks
Arguably donating to global health causes active harm (via the effect on fertility). Arguably veganism causes active harm (via wild animal suffering). Arguably donating to Xrisk causes active harm (ok, not so clear on the mechanism here, but I'm sure people have argued it). Yet these last three causes are EA causes. So merely 'arguably' causing active harm cannot be enough. What matters is how much actual good it does. And I think it is very plausible that the Catholic Church actually does a lot of good. Yes, perhaps donating to the Church is less effective than donating to SCI. On the other hand, it could be significantly less effective and he would still have done more good with his donations than most EAs. Giving a lot more money slightly less efficiently does more good to others than giving a small amount of money very efficiently. More importantly, this doesn't really affect the argument. In general we should pay more attention to criticism when the critic is overcoming social desirability bias. And in this case, even if you disagree with his donation choices, he clearly scores very highly on altruism, which makes his criticism of our attempts to spread it all the more potent. Given
1
Nicholas_Bregan
Actually, my point was that donating to the Catholic Church does more harm than good, not just that it causes harm. Perhaps you should look up how little it spends on things like poverty relief, and how much money it absorbs from presenting itself as an official institution of morality while spreading supernatural superstition and promoting socially harmful policies. I would probably pay money to make the Catholic Church poorer, though certainly not at a 1:1 exchange rate. I think the other EA causes you mention, while mixed blessings, have a much better profile. I do agree with Wilson's core argument, but would still point out that his money didn't come out of thin air, and neither would the money of other rich people. A lot of that is competing for profit margins; that is, a successful hedge fund manager replaces other hedge fund managers. It can therefore be more effective to try to make rich people more altruistic rather than to make more people rich.

Animal Charity Evaluators has/have found that leafleting is a highly effective form of antispeciesist activism. I want to use it generally for effective altruism too. Several times a year I’m at conventions with lots of people who are receptive to the ideas behind EA, and I would like to put some well-designed flyers into their hands.

That’s the problem—the “well-designed” part. My skills kind of end at “tidy,” and I haven’t been able to find anything of the sort online. So it would be great if a gifted EA designer could create some freely licensed flyers as SVG ... (read more)

2
Tom_Ash
Relevant to this, there's an (inactive) .impact project to make EA infographics, with some discussion of them. There's also an idea to create an EA design collective, consulting on design for EA orgs and projects. It's not quite the same thing, but you might be interested in this infographic about deworming that I made for Charity Science.
1
Dawn Drescher
Thanks, that’s a great infographic! I’d need something more generic and intervention-agnostic, though, because we’ll be fund-raising for LLINs. Maybe something will come of that design collective.
1
yboris
There is a brochure created by Fox Moldrich that I edited (he shared the Adobe Illustrator file with me). Here is a PDF of it: Trifold Please contact me for the *.AI file and/or directly contact Fox.

[Your recent EA activities]

Tell us about these, as in Kaj's thread last month. I would love to hear about them - I find it very inspirational to hear what people are doing to make the world a better place!

(Giving this thread another go after it didn't get any responses last month.)

2
RyanCarey
I've volunteered for CSER. Also, I've done most of Andrew Ng's Coursera course on Machine Learning. It seems like a valuable skill to acquire, so I think that belongs on the list.

I'm planning on starting an EA group at the University of Utah once I get back in January, and I need a good first meeting idea that will have broad appeal.

I was thinking that I could get someone who's known outside of EA to do a short presentation/question and answer session on Skype. Peter Singer is the obvious choice, but I doubt he'd have time (let me know if you think otherwise). Can anyone suggest another EA who might have name recognition among college students who haven't otherwise heard of EA?

Is there an audio recording of Holden's "Altruistic Career Choice Conference call"? If so, can someone point me in the right direction? I'm aware of the transcript:

http://files.givewell.org/files/calls/Altruistic%20career%20choice%20conference%20call.pdf

Thanks!

I've been growing skeptical that we will make it through AI, due to

1) civilizational competence (that is, incompetence), and

2) the fact that apparently all human cognition is based on largely subjective metaphors of radial categories, which have arbitrary internal asymmetries that we have no chance of teaching a coded AI in time.

This is on top of all the other impossibilities (solving morality, consciousness, the grounding problem, or at least their substitute: value loading).

So it is seeming more and more to me that we have to go with the forms of AI's that have some smal... (read more)

1
RyanCarey
It makes sense that the earliest adopters of the idea of existential risk are more pessimistic and risk-aware than average. It's good to attract optimists because it's good to attract anyone and also because optimistic rhetoric might help to drive political change. I think it would be pretty hard to know with probability >0.999 that the world was doomed, so I'm not that interested in thinking about it.
0
Diego_Caleiro
The underlying assumption is that for many people, working on probability shifts that are between 0 and 1 percent is not desirable. They would be willing to work for the same shift if it was between, say, 20 and 21, but not if it is too low. This is an empirical fact about people; I'm not claiming that it is a relevant moral fact.
0
RyanCarey
Yeah, so if it started to look like the world was doomed, then fewer people would work on x-risk, true.

I posted this late before, and was told to post in a newer Open Thread so here it goes:

Is voting valuable?

There are four costs associated with voting:

1) The time you spend deciding whom to vote for.

2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).

3) The attention you pay to politics and associated decision cost.

4) The sensation that you made a difference (this cost is conditional on voting not making a difference).

What are the benefits associated with voting:

1) If an election is decid... (read more)
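The benefits list is cut off above, but the first item presumably opens the standard "decisive vote" argument; a toy expected-value sketch of that argument, with entirely made-up placeholder numbers, would look like this:

```python
def expected_value_of_voting(p_decisive, outcome_difference, costs):
    """Toy model: the chance your vote decides the election, times how much
    the better outcome is worth, minus the costs listed above.
    All inputs are placeholder assumptions, not estimates from the thread."""
    return p_decisive * outcome_difference - costs

# Hypothetical numbers: a 1-in-10-million chance of being decisive, an outcome
# difference valued at $1bn of social good, and $50 worth of time/risk/attention costs.
print(expected_value_of_voting(1e-7, 1e9, 50))  # 100.0 - 50 = 50.0: positive on these numbers
```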

0
ruthie
There are more things to add to the benefits list:
* When I talk to friends about how to vote, I get to exhibit some of the ways I think about policy, which may influence their thinking in the future.
* Becoming educated about local political issues helps you look educated and gain respect among other local people.
* Learning about public policy might be enjoyable.

Overall, though, none of this seems to justify either not voting if you want to vote, or voting if you don't want to vote.
0
pappubahry
Previous thread