New legal limits on surveillance in the US
The United States has new legal limits on electronic surveillance, both in one specific way and — more important — in prevailing judicial theory. This falls far short of the protections we ultimately need, but it’s a welcome development even so.
The recent Supreme Court case Carpenter v. United States is a big deal. Let me start by saying:
- Most fundamentally, the Carpenter decision was based on and implicitly reaffirms the Katz test.* This is good.
- The Carpenter decision undermines the third-party doctrine.** This is great. Strict adherence to the third-party doctrine would eventually have given the government unlimited rights of Orwellian surveillance.
- The Carpenter decision suggests the Court has adopted an equilibrium-adjustment approach to Fourth Amendment jurisprudence.
- The “equilibrium” being maintained here is the balance between governmental rights to intrude on privacy and citizens’ rights not to be intruded on.
- I.e., equilibrium-adjustment is a commitment to maintaining approximately the same level of liberty (with respect to surveillance) we’ve had all along.
- I got the equilibrium-adjustment point from Eugene Volokh’s excellent overview of the Carpenter decision.
*The Katz test basically says that an individual’s right to privacy is whatever society regards as a reasonable expectation of privacy at that time.
**The third-party doctrine basically says that any information of yours given voluntarily to a third party isn’t private. This includes transactional information such as purchases or telephone call detail records (CDRs).
Brittleness, Murphy’s Law, and single-impetus failures
In my initial post on brittleness I suggested that a typical process is:
- Build something brittle.
- Strengthen it over time.
In many engineering scenarios, a fuller description could be:
- Design something that works in the base cases.
- Anticipate edge cases and sources of error, and design for them too.
- Implement the design.
- Discover which edge cases and error sources you failed to consider.
- Improve your product to handle them too.
- Repeat as needed.
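The cycle above can be sketched in code. This is a toy illustration (the function names and the CSV-summing task are hypothetical, not from the original posts): version 1 handles the base case, then a strengthened version handles edge cases discovered only after release.

```python
def parse_total_v1(line):
    """Version 1: works for the base case, a comma-separated list of integers.
    Brittle: it crashes on an empty string or a blank field ("1,,3")."""
    return sum(int(field) for field in line.split(","))

def parse_total_v2(line):
    """Strengthened version: handles the edge cases version 1 missed --
    empty input, blank fields, and stray whitespace around numbers."""
    total = 0
    for field in line.split(","):
        field = field.strip()
        if not field:  # blank field or empty line: skip rather than crash
            continue
        total += int(field)
    return total
```

In practice the discovery step is driven by bug reports: each crash reveals an edge case the design step failed to anticipate, and the fix becomes part of the next, slightly less brittle version.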
So it’s necessary to understand what is or isn’t likely to go wrong. Unfortunately, that need isn’t always met.
Brittleness and incremental improvement
Every system — computer or otherwise — needs to deal with possibilities of damage or error. If it does this well, it may be regarded as “robust”, “mature(d)”, “strengthened”, or simply “improved”.* Otherwise, it can reasonably be called “brittle”.
*It’s also common to use the word “harden(ed)”. But I think that’s a poor choice, as brittle things are often also hard.
0. As a general rule in IT:
- New technologies and products are brittle.
- They are strengthened incrementally over time.
There are many categories of IT strengthening. Two of the broadest are:
- Bug-fixing.
- Bottleneck Whack-A-Mole.
1. One of my more popular posts stated:
Developing a good DBMS requires 5-7 years and tens of millions of dollars.
The reasons I gave all spoke to brittleness/strengthening, most obviously in:
Those minor edge cases in which your Version 1 product works poorly aren’t minor after all.
Similar things are true for other kinds of “platform software” or distributed systems.
2. The UI brittleness/improvement story starts similarly: