Welcome to the third open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts. Thanks to an upgrade by Trike Apps, each time you visit the open thread, new comments will now be highlighted!
Cross-posting from Less Wrong.
General question: What's the best way to get (U.S.) legal advice on a weird, novel issue (one that would require research and cleverness to address well)? Paid or unpaid, in person or remotely.
Specific request (if you're interested in helping personally, please let me know at histocrat at gmail dot com!): I'm helping to set up an organization to divert money away from major-party U.S. campaign funds and to efficient charities. The idea is that if I donate $100 to the Democratic Party, and you donate $200 to the Republican Party (or to their nominees for President, say), the net marginal effect on the election is very similar to if you'd donated $100 and I'd donated nothing; $100 from each of us is being canceled out. So we're going to make a site where people can donate to either of two opposing causes, we'll hold the money in escrow for a little while, and then at a preset time the money that would be canceling out goes to a GiveWell charity instead. So if we get $5000 in donations for the Democrats and $2000 for the Republicans, the Democrats get $3000 and the neutral charity gets $4000. From an individual donor's point of view, each dollar you donate will either become a dollar for your side, or take away a dollar from the opposing side.
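For concreteness, the netting rule described above can be sketched in a few lines of code (this is purely illustrative; the function name and two-sided structure are my own assumptions, not a real implementation):

```python
def split_donations(side_a_total, side_b_total):
    """Net out opposing donations held in escrow.

    The overlapping amount from each side cancels out and goes to a
    neutral charity; only the surplus reaches the larger side.
    """
    matched = min(side_a_total, side_b_total)
    to_charity = 2 * matched  # $matched from EACH side is redirected
    surplus = abs(side_a_total - side_b_total)
    if side_a_total > side_b_total:
        winner = "A"
    elif side_b_total > side_a_total:
        winner = "B"
    else:
        winner = None  # perfectly matched: everything goes to charity
    return to_charity, surplus, winner

# The example from the post: $5000 for side A, $2000 for side B
to_charity, surplus, winner = split_donations(5000, 2000)
# to_charity == 4000, surplus == 3000, winner == "A"
```

This matches the worked example in the post: $2000 from each side cancels ($4000 to charity), and the Democrats' $3000 surplus goes through as a normal donation.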
This obviously steps into a lot of election law, so that's probably the expertise I'll be looking for. We also need to figure out what type of organization(s) we need to be: it seems ideal to incorporate as a 501(c)(3), just so that people can make tax-deductible donations to us (whether donations made through us that end up going to charity can be tax-deductible is another issue). I think the spirit of the regulations should permit that, but I am not a lawyer and I've heard conflicting opinions on whether the letter of the law does.
And those issues aside, I feel like there could be more legal gotchas that I'm not anticipating, to do with Handling Other People's Money.
If you go to http://saos.fec.gov/saos/searchao? and search for "repledge" you will find the legal opinion the FEC gave to the people behind Repledge. It was evenly split 3-3 over whether it would count as a conduit or intermediary for campaign donations (which seems to not be allowed). This seems to be what made the Repledge people decide to stop what seemed to be a very successful launch (look for them on Youtube for example). Looking at this opinion could be useful if trying to do something like this. If you are serious about it, you may want to contact the person behind Repledge (Eric Zolt) for more details.
You may also want to read my paper: http://www.amirrorclear.net/academic/papers/moral-trade.pdf
This is what repledge.com wanted to do, but didn't end up launching due to legal issues. Toby Ord talks about it at around 41:00 of this talk.
Oh, wow. Thank you so much. I'd never been able to find any evidence of other people talking about this idea.
This is just a general suggestion, and I'm not joking. It may not be the best possible way, but my local heuristic for getting legal information on a weird, novel issue is to get Carl Shulman to think about it. How he researches, I don't know, but he always seems to generate surprising answers.
Great idea! I know some couples who mutually disarm on election day, knowing they'd vote for opposite candidates.
I wonder if one side effect of this would be to reduce people's irrational love of democracy. Did Nader Trading have the same effect?
There was a thread on deworming on the Facebook group recently which people might be interested in reading or commenting on (here or there).
Matt Sharp had been reading through GiveWell's material on this and was struck by the fact that the evidence for it seemed weak and limited, quoting this passage:
It's definitely worth being aware that the evidence of effectiveness for deworming is less robust than that for bednets (which is part of why I favoured AMF - RFMF aside - in my post in the 'Where I'm Giving And Why' series). Joey Savoie pointed to this GiveWell blog post in support of the claim that it's relatively well-evidenced though.
It's also worth flagging that a new study of deworming was released this year. This was only the second RCT of deworming with long-term followup, and it had positive findings - so it substantially boosted the robustness of the evidence behind it.
An update on what happened here: Matt posted the question to GiveWell, and they responded: http://blog.givewell.org/2014/10/03/a-promising-study-on-the-long-term-effects-of-deworming/comment-page-1/#comment-913365
Request: Quotes and/or brief statements that convey EA ideas in an interesting and accurate manner. Looking for the sort of thing I can mix into an image meme and use to start conversations on Facebook/Reddit.
Note that Mason has done this VERY well before. http://prettyrational.com/
Relevant: the Facebook group thread with image memes.
Does anyone have any experience with fundraising? When I search Google, I just find sites about buying their stuff and reselling it, which doesn't seem like it would be very useful for just one person.
One thing I've considered is simply begging for donations: getting permission to stand outside a store and pass out flyers on the weekend. If printing one flyer costs $0.50, it wouldn't be that hard to make back my costs. I'm told I look a lot younger than I am, so there's a chance I could get mistaken for a high schooler and get the 'oh, that's adorable' effect going for me. Also, by going to a different store in a different area each time, I could repeat this for quite a while without anyone seeing me often enough that I become old and familiar.
Another thing I've considered is doing a thing for money, but I'm not really sure how this would work. Let's say, to be specific, that I want to run 100 miles in one month to raise money for SCI. Why would my running 100 miles make anyone more likely to donate, and how do I get them to do so? I've considered asking a gym to let me put up a poster or something. Would it be better to ask other people to run too, or to just say what I'm doing and put a link to the donation page? Would asking the gym itself to sponsor me work? I'm not sure how it would benefit them at all, though.
Or, since somehow people can raise money by playing video games together, maybe somehow monetizing getting a group together to rigorously work through, say, The Art of Computer Programming, with full solutions and explanations for all problems. That seems unrealistic, for reasons I'm not sure of; maybe the 'too good to be true' heuristic. But I don't really have enough experience with this sort of thing to trust my intuition in any way.
There are likely good resources online about how to do sponsored runs. These can raise a lot.
Thinking about ways to improve cause prioritization research at low cost, the idea of using giving games came up. It would basically be crowdsourcing the kind of research GiveWell does. Many (potentially) effective causes already have strong supporters who would be able to provide information and research for free, and there might be an effective online model for capitalizing on such supporters (and the general EA public).

An online causes contest is one idea. There should be rules to ensure the discussion is evidence-based and high quality. The rules can be played with, there could be teams, and the way of engaging the public opens up many opportunities. This could be used to raise awareness about effective altruism, but more importantly, the online system should be designed to advance our knowledge about the value of causes.

The big dream is to build a community-curated "open evidence" rank* of causes. A simple way of starting is just maintaining an online list of potentially high-impact causes with associated expected values, then allowing people to submit evidence for or against each cause. There could be a point system in which evidence influences a cause's value (with a panel of judges or a voting system for each new argument or piece of evidence).

Are there similar initiatives in the community? Have you discussed this before? What is the current state of giving games?
* Having only a one-dimensional quantitative rank might not be the best option.

(This occurred to me while reading this post: http://blog.givewell.org/2014/10/16/expert-philanthropy-vs-broad-philanthropy/#comments)
"The Openness-Equality Trade-Off in Global Redistribution"
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2509305
A very interesting (draft of a) paper that discusses trade-offs between immigrants' civil/political rights and the number of immigrants allowed. Is it better to decrease inequality within a rich country by treating immigrants well, or is it better to let in more immigrants with fewer rights?
Interesting draft, thanks for the link.
My instinct, of course, is to let in more immigrants with fewer rights (though we might worry about whether that would be a politically sustainable equilibrium). One possible solution to the dilemma would be to ensure national statistics report immigrants and citizens separately. At the moment, some people are worried about 'rising inequality' or 'median income stagnation' in the US. But this is in large part due to a rise in immigration by very poor people, which increases naively measured inequality and drags down average income. If people were in the habit of breaking down these statistics, and showing 'median household income for citizens, by race', much of the effect would disappear.
A similar example is how the worrying decline of the labor market participation rate looks quite different when you break it down by age and sex - though still concerning!
The Global Priorities Project, a joint project of the Centre for Effective Altruism and the Future of Humanity Institute, seems like it does, or will do, research analogous to that of the Open Philanthropy Project, a joint project of GiveWell and Good Ventures. At the 2014 Effective Altruism Summit, GiveWell co-founder Holden Karnofsky mentioned that for its first year or two, the Open Philanthropy Project will be trying to figure out its recommendations between cause areas such as improving scientific research, policy interventions, and reducing global catastrophic risks. That sounds a lot like what the Global Priorities Project is, and will be, doing, especially since they'll likely be focusing on many of the same cause areas.
Additionally, both projects started from within organizations that were alternatives to each other in the field of charity evaluation ('GiveWell classic' and Giving What We Can). It would be interesting to discover whether the legacy of the original differences in their approaches carries forward into how each project makes its assessments.
I'm in contact with researchers working on both the Global Priorities Project and the Open Philanthropy Project. I could interview them, and if they're not interested, I could still profile both organizations and compare/contrast them. Would this be a post anyone else is interested in seeing?
Is voting valuable?
There are four costs associated with voting:
1) The time you spend deciding whom to vote for.
2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).
3) The attention you pay to politics and associated decision cost.
4) The sensation that you made a difference (this cost is conditional on your vote not making a difference).
And the benefits associated with voting:
1) If an election is decided by one vote, and you voted for the winning candidate, your vote decides who is elected and is causally responsible for the counterfactual difference between candidates.
2) Depending on your inclinations about how decision theory and anomalous causality actually work in humans, you may think your vote is numerically more valuable because it changes/indicates/represents/maps what your reference class will vote. As if you were automatically famous and influential.
Now I ask you to consider whether benefit (1) would in fact hold for important elections (say, elections where the winner will govern over 10,000,000 people). If 100 worlds had an election decided by one vote, what percentage of those would be surreptitiously biased by someone who could tamper with the voting? How many would request a recount? How many would ask their citizens to vote again? How many would deem the election illegitimate? Etc.
Maybe some of these worlds would indeed accept the original count, or do a fair recount that reached the exact same number, but I find it unlikely this would be more than 80 of the 100 worlds, and I would not be surprised if it were 30 or fewer.
We don't know how likely this is to happen: of more than 16,000 elections in the US, only one was decided by a single vote, and it was not for an executive office in a highly populated area.
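The argument above can be put as a back-of-the-envelope expected-value calculation. All numbers below are illustrative placeholders of my own, not estimates from the post; the point is only that the conclusion is sensitive to the probability that a one-vote margin survives scrutiny:

```python
# Illustrative sketch of the decisiveness argument. Every input is an
# assumed placeholder, not an empirical estimate.
p_decisive = 1 / 10_000_000       # chance your vote is the deciding one
p_margin_survives = 0.5           # chance a one-vote margin survives recounts,
                                  # tampering, revotes, and legitimacy challenges
value_of_outcome = 1_000_000_000  # counterfactual value gap between candidates ($)
cost_of_voting = 50               # time, travel risk, and attention costs ($)

expected_benefit = p_decisive * p_margin_survives * value_of_outcome
# With these inputs, expected_benefit == 50.0, exactly comparable to the
# assumed cost; halving p_margin_survives flips the sign of the decision.
```

Under these made-up inputs, voting is a coin flip in expectation, which is why the "100 worlds" question about recounts and tampering does real work in the argument.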
This has been somewhat discussed in the rationalist community before, with different people reaching different conclusions.
Here are some suggestions for EAs that are consistent with the view that voting is, ceteris paribus, not valuable:
EAs who are not famous and influential should consider never making political choices.
EAs who are uncertain, or who live in countries where voting is compulsory, may want to consider saving time by copying the voting decisions of someone they trust, to avoid time and attention loss.
Suggestions for those who think voting is valuable:
EAs should consider the marginal cost of copying the voting policy of a non-EA friend or influence they trust highly, and weigh it against the time, attention, and decision costs of deciding themselves.
EAs should consider using safe vehicles (all the time and) during elections.
EAs who think voting is valuable because it represents what all agents in their reference class would do in that situation should consider other situations in which to apply such a decision procedure. There may be a lot at stake in many decisions where this kind of indication and anomalous causation applies, even in domains where it is not the sole ground of justification.
Hey Diego, Ryan mentioned that he was planning to start a new open thread around Monday (on a roughly fortnightly schedule), so you may get a better response from posting this in that :)
How frequently are open threads made? How frequently seems optimal to people?
I think that once every couple of weeks is best, for instance, we can start another one on Monday.
Sounds sensible. You could experiment to find what frequency maximises the number of comments across threads.
A meta comment on open threads. I am new to the forum and trying to figure it out. Open threads have some substantial advantages: they provide a way for newcomers with no karma to post, they stimulate lighter conversations, and they promote engagement.

But they are basically a pile of text that is not easily searchable; one would have to spend considerable time in them just to find out what is being discussed. This makes it hard to tell whether something has already been discussed, like this very issue I am raising now, which lowers the probability of a newcomer posting (should I really post this?).

Is there a way to make information in open threads more easily accessible? Tagging, and grouping open threads by topic (using the title), are two off-the-cuff ideas.
"Manufacturing Growth and the Lives of Bangladeshi Women"
An interesting paper on the impact of the textile industry on women's education. Here's the abstract:
Any thoughts on the EA value of improving writing skills? I often find EA writing verbose and hard to keep my attention on. Obviously there is a trade-off between rigor and readability, but I think EAs in general (especially myself) could do a better job of favoring readability. This would also be helpful for newcomers to EA.
Possible solutions:
Personal role models of effective writing:
Thoughts on this?
Among effective altruists who believe the cause most worthy of concern is existential risk reduction, ensuring A.I. technology doesn't one day destroy humanity is a priority. However, the oft-cited second-greatest existential risk is a (genetically engineered) global pandemic.
An argument in favor of focusing on developing safer A.I. technology, in particular from the Machine Intelligence Research Institute, is that a fully general A.I. which shared human values, and safeguarded us, would have the intelligence to reduce existential risk better than the whole of humanity could anyway. Humanity would be building its own savior from itself, especially if a 'Friendly' A.I. could be built within the next century, when other existential risks might come to a head. For example, the threat of other potentially dangerous technologies would be neutralized once controlled by an A.G.I. (Artificial General Intelligence), and the coordination problem of mitigating climate change damage would be solved by the A.G.I. peacefully. The A.G.I. could also predict unforeseen events threatening humanity better than we could ourselves, and mitigate the threat. For example, a rogue solar storm that human scientists would be unable to detect, given their current understanding and state of technology, might be predicted by an A.G.I., which would recommend to humanity how to minimize the loss.
However, the earliest predictions for when an A.G.I. could be completed are around 2045. Given the state of biotechnology, and the rate of progress within the field, it seems plausible that a (genetically engineered) pathogen could go globally pandemic before humanity has an A.G.I. to act as the world's greatest epidemiology computer. Considering that existential risk reduction is such a vocal cause area (currently) within effective altruism, I'm wondering why neither that community nor the rest of us is paying more attention to the risk of a global pandemic.
I mean, obviously the reason the existential risk reduction community is so concerned about A.G.I. is the work of the Machine Intelligence Research Institute, Eliezer Yudkowsky, Nick Bostrom, and the Future of Humanity Institute. That's all fine, and my friends and I can now cite predictions and arguments, and explain with decent examples what this risk is all about.
However, I don't know nearly as much about the risk of a global pandemic, or genetically engineered pathogens. I don't know why we don't have this information, or where to get it, or even how much we should raise awareness of this issue, because I don't have that information either. If there is at least one more existential risk worth having a cursory knowledge of, this seems to be the one.
I'm thinking about contacting Seth Baum of the Global Catastrophic Risk Institute about this on behalf of effective altruism, to ask his opinion on this issue. Hopefully he or his colleagues can help me find more information, give me an assessment of how experts rate this existential risk compared to others, and point me to which organizations are doing research and/or raising awareness about it. Maybe GCRI will have a document we can share here, or they'd be willing to present one to effective altruism. If not, I'll write something on it for this forum myself. If anyone has feedback, comments, or an interest in getting involved in this investigation, reply publicly here or in a private message.
There's quite a bit of interest in pandemics at FHI. Most of the pandemic scenarios look like they would be 'merely' global catastrophes rather than existential catastrophes, but I don't think we can rule the latter out entirely. The policy proposal I wrote up here was aimed primarily at reducing pandemic risk.
There's more attention from governments already on questions of how synthetic biology should be regulated. It's unclear what that means for the relative value of pursuing the question further, though.
We certainly talk about this a lot at FHI and do a fair amount of research and policy work on it. CSER is also interested in synthetic biology risk. I agree that it is talked about a lot less in wider EA circles though.