For that reason, I believe it's worth exploring interesting edge cases when they come up, because it's thinking about the hard stuff at the fringes that will help us get the mainstream decisions right.
There's a company in San Francisco called SceneTap. The headline on its website says: "A new look into nightlife ... through the eyes of technology." Intriguing, you say, but not entirely clear. Maybe a recent SF Weekly headline describing SceneTap will help explain it better: "San Francisco bars to install creepy face-detection cameras inside venues." (The word creepy there is important. We'll come back to it.)
What SceneTap does is fascinating. The company installs a sensor at the entrance to venues along with a face-detecting camera. It doesn't take pictures or videos of anyone, but it does note - using the camera and clever software - how many people have gone into the venue, what gender they are and roughly how old they are. This information is then aggregated and presented on apps to anyone who's interested in what's going on in a range of bars across a range of cities. You can find out the average age of the punters in a venue, the male/female ratio and how full somewhere is. If you wanted to find the closest bar that's 80 per cent full, with more women than men and a crowd mostly a bit younger than you, a few clicks will get you there.
I suspect your reaction to this would be rather like mine. You can see why people might use it but, at the same time, it seems a bit, well, creepy. And after seeing a tweet a while ago quoting Ryan Calo of Stanford Law School's Centre for Internet and Society saying "Creepiness is a stand-in for concerns that we can't yet articulate. An early warning signal", I've become increasingly convinced that creepiness is worth thinking about. There's something wrong here, but it's too new for us to know exactly what it is. SceneTap can point out that all the data it collects is anonymous, that it can't be tied to any individual, that most bars already have some form of surveillance going on, that it already has tons of users - but it still doesn't seem quite right.
And this is remarkably similar to the technologies we all get told about every day - technologies for more and better targeting, for more accurate research and for more focused interactions. We know why we're using them and we're convinced they will make for better services but, to regular people, they feel a bit creepy.
We should pay attention to that feeling - it's trying to tell us something.