Data Privacy Day 2016 is almost upon us (Thursday 28th Jan), and I’ll be hosting a panel on ethical data-handling at CPDP2016 to mark the occasion. But more about that later.
Meanwhile, over on the Internet Policy mailing list, a discussion is raising some very interesting topics whose relevance will only grow in the coming months. It started with Bill Drake posting a link to “Internet Fragmentation: An Overview”, a paper he co-authored with Vint Cerf and Wolfgang Kleinwächter. The paper was launched recently at the World Economic Forum, and it’s well worth reading.
That, in turn, prompted Richard Hill to raise the question of “openness”: what is an “open” Internet, and what does it imply for service providers and users? As Richard noted, Bill’s paper included the following observation:
“An Internet in which any endpoint could not address [and exchange data packets with] any other willing endpoint … would be a rather fragmented Internet.”
Richard goes on to propose that the endpoints in question must be “willing”, and he gives a couple of examples. If an endpoint accepts traffic that has been processed by a firewall, that may introduce fragmentation of a kind, in that not all the packets sent to that endpoint will complete the journey. But that’s a good kind of fragmentation, Richard argues, because it happens with some degree of knowledge and consent on the part of the recipient, and it provides them with the benefit of blocking malware and attempted intrusions. Similarly, if a user activates some form of ad-blocking, their endpoint receives only partial traffic: again, arguably a form of fragmentation, but a beneficial one from the user’s perspective, and usually an explicit choice on their part.
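To make the firewall example concrete, here is a deliberately simplified sketch of the kind of filtering decision involved. Everything in it (the blocked ports, the addresses, the function names) is hypothetical and for illustration only; real firewalls operate at the network layer, not in application code like this.

```python
# Toy illustration of "good" fragmentation: a filter through which only
# some of the packets sent to an endpoint complete the journey.

BLOCKED_PORTS = {23, 445}          # e.g. telnet and SMB, common attack vectors
BLOCKED_SOURCES = {"203.0.113.7"}  # documentation-range address standing in for a known-bad host

def admit(packet: dict) -> bool:
    """Decide whether a packet reaches the endpoint behind the filter."""
    if packet["src"] in BLOCKED_SOURCES:
        return False
    if packet["dst_port"] in BLOCKED_PORTS:
        return False
    return True

traffic = [
    {"src": "198.51.100.2", "dst_port": 443},  # ordinary HTTPS: admitted
    {"src": "203.0.113.7", "dst_port": 443},   # blocked source: dropped
    {"src": "198.51.100.2", "dst_port": 23},   # blocked port: dropped
]
delivered = [p for p in traffic if admit(p)]
print(len(delivered))  # 1: the endpoint sees only part of the traffic sent to it
```

The point of the sketch is simply that the endpoint ends up receiving a subset of what was sent — fragmentation, in Bill’s terms — and that whether this is good or bad depends entirely on whose policy the rules embody and whether the recipient consented to it.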
These examples illustrate that users may exercise very different levels of consent and control in different circumstances. For instance, users behind an enterprise firewall may have no option but to accept whatever firewall policies are put in place on their behalf. Similarly, many users rely (whether they know it or not) on third-party fragmentation of traffic in the form of prevention of DDoS attacks, and the automatic filtering-out of large volumes of spam mail.
And this is where things start to get interesting.
The examples I’ve given seem intuitively clear-cut, because a number of usually-implicit assumptions are at work. For example, we assume that users would willingly choose the options they in fact get, because the outcomes are probably better than the alternatives. We assume that the third parties are acting genuinely in the interests of the user, and that they aren’t also filtering things the user would want to receive. We assume that ad-blockers are doing their job as advertised, and that they aren’t simultaneously being paid to let certain ads through regardless.
At the other end of the spectrum from these examples of “good” fragmentation, there are of course plenty of examples of “bad” fragmentation: censorship, malicious tampering with the routing or contents of traffic, interference with endpoints, and so on.
And as the usually-implicit assumptions suggest, there are many ways in which the question can be a lot less clear-cut. In fact, between the “good” and “bad” ends of the spectrum lies a whole continuum of cases where it’s harder to tell whether what is done on the user’s behalf is in their best interests, and where users themselves might not even be certain what they would choose, or how to express their preference.
Crucially, there are many cases (especially to do with advertising and the collection of personal data) where it is almost impossible to associate a user’s choice with a particular outcome, one way or another.
These “middle cases” are extremely common. They have to do with things like the default “Do Not Track” or cookie settings in your browser; whether apps ask for location data from your mobile device; whether you are expected to register your real name when signing up with a web-site.
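The “Do Not Track” setting is a good example of how thin the mechanism behind these middle cases can be: the user’s preference travels as a single HTTP request header (“DNT: 1”), and honouring it is entirely voluntary on the receiving side. A minimal sketch, assuming a web framework that hands you request headers as a mapping (the function name here is hypothetical):

```python
# Sketch: reading a "Do Not Track" preference from HTTP request headers.
# Per the (now-retired) W3C Tracking Preference Expression draft, "DNT: 1"
# means the user opts out of tracking; absence of the header means the
# user has expressed no preference either way.

def wants_tracking_opt_out(headers: dict) -> bool:
    """Return True if the client has sent DNT: 1; hypothetical helper name."""
    normalised = {k.lower(): v for k, v in headers.items()}
    return normalised.get("dnt") == "1"

print(wants_tracking_opt_out({"DNT": "1"}))  # True: user opted out
print(wants_tracking_opt_out({}))            # False: no preference expressed
```

Nothing in the protocol compels a site to act on the answer, which is precisely why the default value of such settings, and whether anyone respects them, becomes an ethical question rather than a purely technical one.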
In fact, although we might notice them less than we would notice censorship or massive volumes of unfiltered spam, these more subtle factors not only shape the Internet as we see it, they also shape the way the Internet sees us. They raise questions of access to information, of self-determination and of personal identity that should concern us all.
I’ll be exploring many of those questions over the coming months, as part of ISOC’s programme of work on ethical data-handling. We’ll be trying to produce clear problem statements and, more important, practical guidance about why ethical data-handling is relevant and compelling… and how to do it.
As well as this week’s CPDP panel, I’ll soon be setting up a round-table workshop, setting out our ideas at several conferences/events, and posting updates here and on Twitter (@futureidentity). I look forward to hearing your thoughts.
To join the Internet Policy email list, please log into the ISOC Member Portal – https://portal.isoc.org/ – and then choose Interests & Subscriptions from the My Account menu.
Image credit: Masakazu Matsumoto on Flickr.