I’m writing from Addis Ababa, where the African Union’s Specialist Technical Committee on ICT is holding its biannual conference. I won’t report on the conference itself, as it’s still under way, but I can report that some of the hallway conversations have been both interesting and reassuring.
The topic of privacy came up over coffee, of course – and I was glad to hear that it is not only seen as a key issue for technology and governance, but it’s also seen as being closely interconnected with issues of cybersecurity. As readers of the Internet Society’s blogs will know, we think so too. You can’t have good privacy if you don’t have good security tools, and you can’t have good security in the absence of privacy.
As you would expect in a continent with all of Africa’s rich diversity, cultural and social approaches to privacy vary widely here, yet people face exactly the same challenge as elsewhere: how to translate those approaches into workable technical and governance solutions. Today I will have a few minutes to set out some thoughts on that, in one of the afternoon sessions. I plan to suggest that we keep asking the “why?” question. Why are we trying to codify data protection? Why are data subjects’ rights important and worth fighting for?
I think if we ask “why?”, we eventually get past the “because the law says so” layer, and down to the layers that tell us why privacy is important to the individual. How do individuals and societies benefit, if they respect the so-called “private sphere” of life… and what do they forfeit if they allow it to be eroded?
These are big questions. They touch on rights, and self-determination, and autonomy – and, if you think about it, those are all areas in which the machines are chipping away at us humans. As a pedestrian, what are your rights if a self-driving car decides that you are the path of least damage in a collision? How much self-determination do you really have, in a world of behavioural advertising and curated content? How much autonomy do you have, in a world where algorithms make decisions that can affect your health and livelihood? And what, if anything, can we humans do about it?
Let me make two assertions:
First: technology’s current direction of travel is one that erodes our direct control over events around us.
Second: we are increasingly affected, not by our own actions, but by data that comes from the actions of others. That, after all, is what targeted advertising is: it’s the impact, on you, of things other people did, said, and bought.
If those two assertions are true, then the key to my privacy has to lie in my ability to influence the behaviour of those who hold data that can affect me. Up to now, regulation has tended to focus on the rights I have concerning data that is about me – rather than data that can affect me. The two are not the same, though, and the shift from “personally identifiable data” to “privacy-impacting data” has profound implications. If you think about it, most privacy legislation up to now has been framed as “data protection” law. Doesn’t that tell us that the focus isn’t quite right? Rather than legislating to protect the data, is it possible to legislate to protect the individual against the harmful effects of data use, regardless of who that data is “about”?
I’ll freely admit, I don’t have answers, let alone simple or easy ones. But it’s clear to me that our current thinking about privacy has shortcomings, and if those shortcomings put our privacy further at risk, I want to address them. I hope I will get some initial reactions on Wednesday from what may well be a somewhat bemused audience – but I also hope I will get some reactions from you, here. Please let me know what you think. Is privacy, as we know it, as dead as various white male businessmen have told us, or can humans use technology to help privacy evolve?