Crossposted from my personal blog, Sunyshore.
Content warning: In this post, I describe several historical and ongoing surveillance programs, including the FBI’s surveillance of Martin Luther King, Jr., and China’s surveillance of Uyghurs and other ethnic minorities, in addition to recent terrorism incidents. These topics may be disturbing to some readers.
Recently, I read “The Dangers of Surveillance,” a Harvard Law Review article by Neil M. Richards, to better understand the relationship between privacy, civil liberties, and liberal democracy. Richards argues that intellectual privacy enables freedom of thought, which is the bedrock of liberal democracy. Surveillance of intellectual activities, such as reading and web browsing, thus threatens the heart of our civil liberties. He also argues that surveillance gives the watcher power over the watched, enabling abuses of power such as blackmail, persuasion, and discrimination. Finally, Richards proposes four normative principles that should guide the reform of American surveillance law.
The surveillance efforts of governments and private actors today are intertwined. Government mass surveillance programs, such as the NSA’s PRISM program, rely heavily on data collected by private Internet companies. Data can also flow from the government to private companies: for instance, U.S. Customs and Border Protection has shared data from its license plate readers with the National Insurance Crime Bureau, an association of insurance companies, to help track down stolen vehicles.
To be fair, surveillance isn’t all bad. Governments use it to protect citizens from many kinds of threats, including other threats to our privacy and civil liberties, such as identity theft. Advertisers use it to target us with products we value, while subsidizing the free services we use. But surveillance poses real risks to important civil liberties, and our laws and customs should take these risks into account.
The first three sections of this blog post summarize Richards’s argument in “The Dangers of Surveillance.” Surveillance policy has changed since the article was first published in 2013, so I’ve extended the argument with relevant examples from the years since. In the last two sections, I review and critique the argument.
Intellectual privacy
Richards argues that one of the most important forms of privacy is intellectual privacy, the right to form ideas free from interference, because it enables us to exercise our civil liberties. Oliver Wendell Holmes and Louis Brandeis, two prominent U.S. Supreme Court justices, believed that freedom of thought underpins the civil liberties protected by the First Amendment, including freedom of speech, freedom of the press, and freedom of assembly and association. Because the free exchange of ideas enables citizens to become informed on public policy issues, it must have special protection under the Constitution in order for liberal democracy to thrive. And in order to exercise these civil liberties properly, citizens need the ability to carry out intellectual activities—including reading, thinking, and having private conversations about ideas—without interference from others, including private actors and the government.
Surveillance disrupts intellectual activities by making people reluctant to develop and express their thoughts authentically. The Enlightenment thinker Jeremy Bentham designed a prison structure called the panopticon (from the Greek word for “all-seeing”). At the center of the panopticon was a watchtower, from which a jailer could observe the prisoners in any part of the prison at any time. Since the prisoners could not know whether they were being watched at any given moment, they would assume they were being watched all the time and avoid doing anything that might get them punished if caught.
Likewise, surveillance of intellectual activities nudges us towards intellectual conformity. Several studies have shown that when people are aware that they are under surveillance, they restrain their own speech and Internet browsing behavior. For example, a 2016 study examined the effect that awareness of surveillance had on Facebook users’ willingness to interact with a fake post about U.S. airstrikes against ISIS. It found that awareness of government surveillance alone did not deter people from sharing opinions that they perceived as different from the majority’s. However, when subjects were aware of government surveillance and believed that it was justified, they were significantly less likely to share dissenting opinions (p < 0.01). Another study found that monthly traffic to terrorism-related Wikipedia articles dropped 30% after the Snowden leaks in June 2013.
Other dangers of surveillance
As illustrated in the previous section, the normalizing power of surveillance depends on the subjects knowing they could be watched. But using surveillance to control people can be harmful, even when the subjects of surveillance don’t know they are being watched. Richards identifies some other ways in which secret surveillance can cause harm: blackmail, persuasion, and discrimination.
Blackmail. Information collected through surveillance can be used to blackmail the people being watched, or even those connected to them. During the 1950s and ‘60s, the FBI illegally spied on the Black civil rights movement, the Communist Party USA, and other left-leaning groups as part of COINTELPRO. In 1964, the FBI infamously tried to silence Martin Luther King, Jr., by sending a tape recording and an anonymous letter to his home, threatening to expose his alleged sexual improprieties. Even today, post-communist governments such as those of Russia and Ukraine regularly use kompromat (short for “compromising material”) to silence their critics.
Sorting, discrimination, and persecution. Data gives entities such as companies and governments the power to sort individuals into categories and treat them differently based on salient characteristics, and nowadays, they have lots of data at their disposal. For example, Upstart helps banks decide whether to issue personal loans using more than 1,600 data points per consumer. These decision-making processes are often opaque to the data subjects, enabling unfair discrimination based on sensitive attributes like race and gender to go undetected.
In extreme cases, governments have used mass surveillance to sort and persecute populations. During the Holocaust, Nazi Germany used census data to identify Jewish individuals and move them into ghettos and eventually concentration camps. Similarly, during World War II, the United States used census data to identify Japanese Americans and put them in internment camps. More recently, the Chinese government has been collecting and using various forms of data, including biometrics such as DNA and iris scans, to surveil and control Uyghurs and other Muslim ethnic minorities in Xinjiang. China’s persecution of Muslims has crossed the line into genocide, including forced sterilizations.
Risks of persuasive technologies. Richards also floats the more speculative possibility that governments can use intimate information about individuals to “nudge” them toward certain behaviors without coercing them, akin to how consumer-facing companies like Target can figure out that an individual is pregnant and “target” them (pun intended) with coupons for pregnancy-related products.
Principles for surveillance reform
We know surveillance can endanger civil liberties, undermine democracy, and facilitate discrimination and persecution. But surveillance can also be useful for a broad range of purposes, including the prevention of hate crimes and terrorist attacks that also threaten liberal values. How, then, can we capture the benefits without giving up valuable forms of privacy and civil liberties? Richards offers four guiding principles for policy reform, particularly in the context of U.S. law:
Surveillance transcends the public–private divide. Policymakers should recognize that both public and private actors engage in surveillance, and that these forms of surveillance are intertwined. They should respond accordingly by doing more to prevent governments from circumventing privacy safeguards using privately collected data. For example, the third-party doctrine allows police to circumvent Fourth Amendment protections by obtaining private information from a third party without a warrant. However, the Supreme Court ruling in Carpenter v. United States (2018) curtailed the third-party loophole by requiring police to get a warrant before using location data from cell phone networks to track individuals.
Secret surveillance is illegitimate. Richards insists that governments disclose the existence and capabilities of domestic surveillance programs so that they can be held accountable to the rule of law. A core value of republicanism is that the people are sovereign, not the state, and allowing the state to use surveillance without oversight undermines popular sovereignty. This doesn’t mean that individual subjects of surveillance have to be notified when they are surveilled, but it does mean that the public should know about surveillance programs.
Total surveillance is illegitimate. Allowing governments to record everything, in the hope of discovering something useful, would give them immense power to abuse the data they possess. As Richards writes, “a world of total surveillance would be one in which the power dangers of surveillance are even more menacing. In such a world, watchers would have increased power to blackmail, selectively prosecute, coerce, persuade, and sort individuals.” Therefore, surveillance should be authorized only in discrete, temporary instances, and only when the need for it is validated by an independent judiciary.
Surveillance is harmful, and surveillance of intellectual activities is especially so. Richards argues that intellectual surveillance should be subjected to a higher legal standard than the Fourth Amendment requirement of probable cause. For example, Title I of the U.S. Electronic Communications Privacy Act (ECPA) requires police officers seeking to intercept wire, oral, or electronic communications in transit to show that:
- the interception will be for a limited time;
- the police have exhausted all other possible ways of obtaining the information; and
- the police officer doing the surveillance will take measures to avoid intercepting extraneous information.
The ECPA embodies the idea that surveillance of private communication should be a last resort and be done in a way that avoids unnecessarily treading on sensitive communications. In information retrieval terms, it requires police officers to focus on minimizing false positives, or intercepted communications that are irrelevant to their investigations. These constraints on police surveillance are in the spirit of what Lee Bollinger has called First Amendment law’s “extraordinary protection against censorship”—the idea “that rules that might deter potentially valuable expression should be treated with a high level of suspicion by courts.” According to this principle, the law should err on the side of allowing more speech, even if some of that speech causes harm.
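To make the information-retrieval analogy concrete, here is a toy Python sketch of the false-positive framing, applied to a hypothetical batch of intercepted communications. The data and labels are invented for illustration; nothing here is prescribed by the ECPA.

```python
# Toy illustration of the precision / false-positive framing (hypothetical data).
# "relevant" marks communications actually pertinent to the investigation.
intercepted = [
    {"id": 1, "relevant": True},
    {"id": 2, "relevant": False},  # extraneous interception
    {"id": 3, "relevant": False},  # extraneous interception
    {"id": 4, "relevant": True},
]

true_positives = sum(msg["relevant"] for msg in intercepted)
false_positives = sum(not msg["relevant"] for msg in intercepted)

# Precision: the fraction of intercepted communications that were actually relevant.
precision = true_positives / len(intercepted)
print(f"intercepted={len(intercepted)} extraneous={false_positives} precision={precision:.2f}")
```

In this framing, the ECPA’s requirements push investigators toward high precision: intercept narrowly, and stop collecting what turns out to be extraneous.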
Discussion
I enjoyed reading this article, and it gave me a better understanding of how privacy relates to other liberal values. I agree with the main claims that intellectual privacy is a cornerstone of intellectual freedom and that surveillance can thus threaten a free society. However, I think this article omits certain considerations, and I’m not fully persuaded by some of its minor points.
It’s more complicated than just government surveillance
First, the article is very focused on government surveillance, and although it discusses surveillance by private actors as well, it doesn’t say much about why private-sector surveillance is harmful except through its links to government surveillance. On that framing, the worst thing an Internet company could do to consumers’ intellectual privacy is share their browsing history with law enforcement, which is still a form of participation in government surveillance. But workplace surveillance can also threaten freedom of speech and assembly: for example, Amazon regularly uses surveillance to thwart union organizing.
Second, other activities besides surveillance and censorship can threaten intellectual freedom. In recent decades, Europe has endured several terrorist attacks in which individuals or groups were targeted for their speech acts. For instance, in 2015, two gunmen shot and killed 12 people in the headquarters of the French satirical newspaper Charlie Hebdo; the newspaper had published cartoons of the prophet Muhammad, a practice that many Muslims consider blasphemous. Regardless of whether a given speech act is right or wrong, violence against people for their speech threatens intellectual freedom. Surveillance can sometimes promote intellectual freedom by deterring such violence.
Third, even though I agree that intellectual freedom is necessary for liberal democracy, I doubt that unfettered intellectual freedom would create the best outcomes. While Holmes and Brandeis asserted that a free marketplace of ideas would converge on the truth, society’s recent experiences with social media suggest that this is not always true: one study found that falsehoods spread faster and more broadly than truth on Twitter. Ideal liberal democracies converge on the best outcomes for their citizens, but only when citizens agree on basic facts.
Despite my objections, I still think that government surveillance poses a threat to civil liberties—even in the United States, whose legal system puts an unusually high value on freedom of expression and assembly. For example, U.S. federal agencies have recently deployed aerial surveillance systems, including Predator drones normally used for border security, to monitor Black Lives Matter protests.
Discrimination and other social dilemmas
Richards also raises the concern that surveillance can lead to unfair discrimination, as well as to other harms when combined with persuasive technologies. Both concerns have received lots of attention recently: the tech and public policy communities have become much more aware of algorithmic bias, and the recent documentary The Social Dilemma discusses social media’s impact on mental health and civic discourse (although it is often hyperbolic and inaccurate).
The point about surveillance and discrimination is very important. But setting aside technologies for which the main issues are about civil liberties, such as police facial recognition, I think privacy is the wrong framing for most cases of algorithmic bias involving “big data.” If an algorithm is treating people unfairly (with respect to the fairness criteria that we care about), limiting data collection would not solve the problem. Instead, the algorithm must be corrected so that it meets the relevant fairness criteria.
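To give a purely illustrative sense of what “meeting a fairness criterion” can mean, here is a minimal Python sketch that checks one common criterion, demographic parity, on made-up loan decisions. The groups and outcomes are hypothetical.

```python
# Minimal sketch: checking demographic parity on hypothetical loan decisions.
# Each record is (group, approved); the groups and outcomes are invented.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group: str) -> float:
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity asks whether approval rates are similar across groups.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Approval-rate gap between groups: {gap:.2f}")

# A large gap points to fixing the decision rule itself (e.g., retraining or
# adjusting thresholds), not merely collecting less data about applicants.
```

The point of the sketch is simply that the unfairness lives in the decision rule: collecting less data about applicants would not, by itself, close the gap.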
Advocates of humane technology design, like Tristan Harris, have made a host of claims about the psychological, social, and political impacts of persuasive technologies, such as the AI systems that use vast amounts of personal data to recommend content on social media platforms. However, many of these claims are not fully supported by scientific evidence. Personally, I would love to learn more about the claims and the evidence for and against them. I recommend the following sources to get started:
- “Is The YouTube Algorithm Radicalizing You? It’s Complicated,” by Jordan Harrod, 2020.
- Rob Wiblin’s interview with Tristan Harris on The 80,000 Hours Podcast, 2020. This page links to a ton of sources on these issues, including many scientific papers.
Surveillance and existential risk
Finally, I am sympathetic to one security-driven argument for surveillance: existential risk. According to Nick Bostrom’s “The Vulnerable World Hypothesis,” society might one day develop technologies that could be used to kill large numbers of people with relatively few resources. We can protect humanity from some of these risks through targeted interventions, such as requiring background checks for people working in certain biosafety labs to discourage the creation of bioweapons. However, Bostrom argues that reliably guarding against such technological risks in general would require unprecedented forms of total surveillance.
As a total utilitarian and longtermist, I believe that preventing existential catastrophes—those that would permanently doom humanity to a bad future—is extremely important. But I also believe that putting humanity on a path to a good future is important. Thus, I am skeptical of total surveillance for the same reasons Richards is: I worry that by undermining civil liberties, total surveillance might destroy the means by which liberal societies make progress. Encouraging the development of surveillance technologies may also facilitate the spread of digital authoritarianism. I hope that by using other strategies, we can reduce existential risk to a level so low that we don’t need total surveillance.
Conclusion
All things considered, I’m not convinced we need less surveillance per se, but we do need stronger accountability mechanisms, such as judicial oversight, to ensure that surveillance is used responsibly. Surveillance systems are dual-use technologies: they can be used for beneficial purposes, like preventing violent crimes, and for harmful purposes, like oppressing ethnic minorities. Surveillance can prevent a range of human rights violations, and it can also perpetuate them. Which of these it does depends on whether it is used appropriately and with public oversight.
Thanks to Cullen O’Keefe, Keller Scholl, and Méabh Murphy.