This is the second post in a two-part series on the theory of information privacy. In the first post, I reviewed the theory to date and outlined what I regard as a huge and crucial gap. In this post, I try to fill that gap.
The first post in this two-part series:
- Reviewed the privacy theory of the past 123 years.
- Declared it inadequate to address today’s surveillance and information privacy issues.
- Suggested a reason for its failure — the harms of privacy violation are too rarely spelled out in concrete terms, making it impractical to do even implicit cost-benefit analyses.
Actually, it’s easy to name specific harms from privacy loss. A list might start:
- Being investigated (rightly or wrongly) for a crime, with all the hassle and legal risk that ensues.
- Being discriminated against for employment, credit, or insurance.
- Being embarrassed publicly, or discriminated against socially.
- Being bullied or stalked by deplorable private-citizen acquaintances.
- Being put on the no-fly list.
I expect that few people in, say, the United States will suffer the more severe of these harms in the near future. However, the story gets worse, because we don’t know which disclosures will have which adverse effects. For example:
- Algorithms for identifying potential terrorists are secret and ever-changing …
- … and the same goes for algorithms designed to identify potential civilian criminals, fraudsters, mortgage deadbeats or lazy employees.
- Simple-minded discrimination is often illegal … but the more subtle kinds are hard to identify or prove.
- Analytic software and computing power improve over time. We don’t know what kinds of analysis will become possible in the future …
- … nor who will want to carry those analyses out.
Why is this uncertainty bad? Well, prudence might suggest:
- Not posting anything to social media that might be interpreted as fitting the profile of a terrorist sympathizer …
- … and not saying anything of that kind in email either …
- … and not surfing to websites that might be of interest to terrorists.
- (And that all applies to any kind of real or alleged terrorism, be it Islamic, rightwing/militia, or Occupy Wall Street.)
- Not doing anything online that might be positively correlated with tax evasion …
- … or pedophilia …
- … or slacking off at work.
And that’s hardly all. Car license plates are now heavily photographed, so you might not want to drive to a druggie part of town, or otherwise deviate too far from a boring routine. You might not want to buy anything that speaks to a risk-taking nature in the years before you apply for a mortgage. Indeed, almost anything you do in your life could, if observed, harm you sometime in the future. And by the way – almost everything you do is, one way or another, electronically observed.
In law, a “chilling effect” arises when you don’t exercise a freedom (e.g. free speech) out of fear of (usually legal) consequences (e.g. a libel suit that, irrespective of its merits, would be expensive to defend). But with the new data collection and analytic technologies, pretty much ANY action could have legal or financial consequences. And so, unless something is done, “big data” privacy-invading technologies can have a chilling effect on almost anything you want to do in life.
This problem will not be averted solely through controls on data collection, retention, or analysis. My reasons for that opinion boil down to:
- Anti-terrorism efforts aren’t going to stop.
- Neither will the business initiatives that depend on recording and analyzing detailed consumer behavior.
- Free speech isn’t free unless you can express yourself publicly, for example in social media.
- The activities in the previous three points generate more than enough data and analysis to fuel the chilling effects.
But what else is there? Well, the full chain is collection + retention -> analysis -> use + consequences. So for information privacy theory to be useful, it must address the use and consequences of surveillance’s fruits.
Until something better comes along, I propose a principle like:
The societal benefits of using citizens’ private information should exceed the societal cost of the chilling effects such use could produce.
I hope to suggest more detail in future posts.
- The discussion above of privacy-related harms is based in part on a 2011 post.
- This post may help explain what I wrote three Independence Day weekends ago about the essential questions of fair data use.
- One of the best precedents I’ve found for the connection between chilling effects and privacy is the 2004 TAPAC report, sponsored by — believe it or not — the US Department of Defense.