This is the first of a two-part series on the theory of information privacy. In the first post, I review the theory to date, and outline what I regard as a huge and crucial gap. In the second post, I try to fill that chasm.
Discussion of information privacy has exploded, spurred by increasing awareness of data’s collection and use. Confusion reigns, however, for reasons such as:
- Data is often collected behind a veil of secrecy. That’s top-of-mind these days, in light of the Snowden/Greenwald revelations.
- Nobody understands all of the various technologies involved. Telecom experts don’t know much about data management and analysis, and vice versa, while political reporters often understand little about the technology at all. Numerous reporting errors have resulted, I think.
- There’s no successful theory explaining when privacy should and shouldn’t be preserved. To put it quite colloquially:
- Big Brother is watching you …
- … and he’s scary.
- Privacy theory focuses on the “watching” part …
- … but the “scary” part is what really needs to be addressed.
Let’s address the last point.
Privacy theory before computers
Modern privacy theory is usually dated to an 1890 article by Samuel Warren and Louis Brandeis, which is said to have been a reaction to issues raised by new technology, specifically cameras. The tort-law tradition that article launched came to recognize four kinds of privacy violation (a formulation usually credited to William Prosser’s 1960 article “Privacy”), which may be described as:
- Identity theft.
- Creating public false impressions about the victim.
- Publicly disclosing true, but properly private, facts about the victim.
- Unreasonably intruding upon the seclusion or solitude of the victim.
But the “right to privacy” was soon widened. In 1928, Brandeis — by then on the Supreme Court, dissenting in Olmstead v. United States — famously summarized privacy as “the right to be let alone”, a right so expansive that it later served as part of the basis for the Roe v. Wade decision on abortion rights.
I actually agree with a Brandeis-style right to privacy or liberty. I just don’t think it helps much when we’re discussing tough IT-related tradeoffs.
Privacy theory in the computer age
Privacy theory as applied to computers and databases was perhaps first organized in the 1960s, most famously by Alan Westin. In his 1967 book Privacy and Freedom, Westin defined privacy quite narrowly, one of his formulations being:
the claim of an individual to determine what information about himself or herself should be known to others.
A history of social and political views about privacy published by Westin in 2003 gives more insight into how this concept evolved. As for his historical views themselves, those may perhaps be summarized as:
- People grew more concerned about privacy in line with the increase in technology’s power to intrude on it …
- … until 9/11/2001, when surveillance suddenly began to appear more appealing.
Recent privacy theory
The second most famous book in privacy theory is probably Helen Nissenbaum’s 2009 Privacy in Context. Nissenbaum — in my opinion correctly — observed that:
- The issue isn’t exactly or just privacy, at least not in a narrow Westin-style definition. Rather, it is all of:
- Information gathering and monitoring.
- Information analysis and use.
- Information publication and dissemination.
- Societal, political and individual views on these matters vary, as they should, according to the purpose and “context” of the information’s gathering, use, or dissemination.
Unfortunately, Nissenbaum’s focus was descriptive rather than prescriptive. Even so, her work was the basis for, among other things, the Obama Administration’s Consumer Privacy Bill of Rights, which didn’t work out very well.
What’s wrong with privacy theory to date
Discussions of IT privacy and related issues seem stuck, and I have an idea why. Many laws and regulations are designed to avert measurable harms — death, injury, financial loss, etc. There are complications, of course, which start with:
- Usually what’s averted are risks or probabilities of loss, rather than certainties.
- The measures to avert these dangers carry costs, e.g. in money or in time and inconvenience.
Even so, the rules are rooted in some kind of measurable effect, and at least in principle they can be evaluated on a cost/benefit basis. Other laws focus on benefits — for example, they fund education; but again, in principle a cost/benefit analysis can be done.
When it comes to privacy and information flow, however, the cost/benefit analysis is distressingly one-sided. Reasons for government to impinge on privacy start with anti-terrorism and other law enforcement. Reasons for corporations to impinge on privacy start with profits and customer service. But reasons to preserve privacy — well, those are discussed in terms of “creepiness” and other synonyms for “vague emotional discomfort”. And what’s more important — vague emotional discomfort, or not being blown up by evil Muslim terrorists? When that’s the trade-off, the terrorists win.