I’m grateful to Christian de Larrinaga, from the Internet Society’s UK Chapter, for pointing me to a recent publication by the World Bank: “Principles on Identification for Sustainable Development: Toward the Digital Age”.
The premise of the report is this: full participation in today’s societies and achievement of one’s desired potential are increasingly likely to depend on the ability to identify oneself; however, some 1.5 billion people are reckoned to lack “legal identification”, and action should be taken to remedy this.
The report acknowledges that private companies and other non-governmental organisations are stakeholders in such an identity infrastructure, and further notes that identity, in the desired sense, does not necessarily imply nationality or citizenship. Off-hand, two examples where “legal identification” systems already conform to this model are:
- Estonia, where the government issues an “e-Residency” credential which, although legally recognised, does not confer citizenship and is independent of nationality;
- The Scandinavian BankID systems, in which credentials issued by banks (and therefore independent of nationality or citizenship) are legally recognised by public sector bodies.
So, as a principle, this is clearly viable based on existing practice. However, in two areas the report seems to me to miss opportunities that are relevant to the modern concept of digital identity.
First, the report is written entirely from the perspective of the “historical”, credential-based model of identity, in which you go through a trusted enrolment process in order to be issued with a trustworthy credential (for example, the passport issuing process). Understandably, that’s how nation-level eID systems are designed, because that’s the model of identity that state actors are familiar with.
But that’s not how identity works in the data-driven Internet context. I have never been through any kind of trusted enrolment with Google, they have never issued me with a trustworthy credential, and yet they could paint a unique and extremely intimate portrait of my identity. When, in Principle 2, the report talks about reducing information asymmetries, I fear that it is not talking about the real, relevant information asymmetries between individuals and those entities that can intimately identify (and track, and profile, and monetise) them without any need for the kind of “historical” identity described above.
The mindset behind the “historical” model of identity is one in which identity is something conferred upon the individual by an authoritative body (generally the state). That mindset is understandable, from the state actor’s perspective, but it tends to understate or even omit the notion of user control. For instance, you cannot typically choose or limit the information disclosed about you via your passport or driving licence.
Under Principle 8, the report may come close to addressing this concern, in that it talks about the requirement for attribute-level disclosures. That is, if the criterion for access to a particular service is that I must be over 18, then the appropriate design is that I should be able to present a trustworthy assertion that I am over 18, without having to provide further identifying information.
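To make the over-18 example concrete, here is a minimal sketch of how attribute-level disclosure can work. This is not the report’s design: the salted-hash approach is loosely inspired by selective-disclosure schemes such as SD-JWT, and every name, key, and attribute below is an illustrative assumption (a real system would use public-key signatures rather than the shared HMAC key used here for brevity).

```python
# Illustrative sketch of attribute-level ("selective") disclosure.
# All names and the HMAC-based "signature" are assumptions for the example;
# real schemes use proper digital signatures.
import hashlib
import hmac
import json
import os

def digest(salt: bytes, name: str, value) -> str:
    """Commitment to a single (name, value) attribute pair."""
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()

# Issuer: commits to each attribute separately, then signs only the digests.
ISSUER_KEY = os.urandom(32)  # stand-in for a real signing key
attributes = {"name": "Alice", "date_of_birth": "1990-01-01", "over_18": True}
salts = {k: os.urandom(16) for k in attributes}
digests = sorted(digest(salts[k], k, v) for k, v in attributes.items())
signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), "sha256").hexdigest()

# Holder: presents the signed digests, but reveals only the one attribute
# the service actually requires, together with its salt.
presentation = {
    "digests": digests,
    "signature": signature,
    "disclosed": {"over_18": (salts["over_18"].hex(), True)},
}

# Verifier: checks the issuer's signature over the digests, then checks that
# the disclosed attribute hashes to one of them. Name and date of birth
# never travel.
assert hmac.compare_digest(
    presentation["signature"],
    hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(), "sha256").hexdigest(),
)
salt_hex, value = presentation["disclosed"]["over_18"]
assert digest(bytes.fromhex(salt_hex), "over_18", value) in presentation["digests"]
```

The point of the salts is that the undisclosed digests reveal nothing about the attributes they commit to, so the verifier learns only the single fact it asked for.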
But, and this is my second concern, the ability of the user to disclose attributes selectively, and in a trustworthy way, is not an end in itself. It is an illustration of two principles:
- the principle of user control,
- the principle of anonymous or pseudonymous access to services.
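One concrete mechanism behind the second principle is the pairwise pseudonym: the same person presents a different, stable identifier to each service, so that no two services can join their records by identifier alone. The sketch below is a hypothetical illustration, not a description of any deployed eID system; the key derivation and service names are assumptions.

```python
# Illustrative sketch: pairwise pseudonyms derived from a user-held secret.
# Each service sees a stable identifier, but identifiers for different
# services cannot be linked to one another.
import hashlib
import hmac
import os

USER_SECRET = os.urandom(32)  # held by the user or their identity provider

def pseudonym(user_secret: bytes, service: str) -> str:
    """Derive a service-specific identifier; stable per service, unlinkable across services."""
    return hmac.new(user_secret, service.encode(), hashlib.sha256).hexdigest()

a = pseudonym(USER_SECRET, "health-portal.example")
b = pseudonym(USER_SECRET, "tax-office.example")
assert a != b                                              # unlinkable across services
assert a == pseudonym(USER_SECRET, "health-portal.example")  # stable per service
```

A design like this gives the user a persistent relationship with each service without creating a single global identifier that can be correlated, tracked, or monetised across contexts.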
Again, the report goes some way towards addressing these requirements, but stops short of doing so fully. Where it describes user control over data disclosure, it does so in terms of constraints:
- “Authentication protocols should only disclose the minimal data necessary to ensure appropriate levels of assurance” (Principle 6);
- Users should have “the ability to selectively disclose only those attributes required for a given transaction” (Principle 8).
Under the circumstances, I find it strange that the report doesn’t describe functional requirements for anonymity or pseudonymity. In fact, it doesn’t mention anonymity or pseudonymity at all. In my view, an eID system that doesn’t explicitly address these in its statement of requirements is incomplete.
In the context of this report, which is aimed at supporting the UN’s sustainable development goals (SDGs) by defining electronic ID for the next 1.5 billion people, that incompleteness gives rise to a serious risk. Those of us who are not among the 1.5 billion people lacking legal digital identification have already experienced ways in which our digital identities are dysfunctional: over-collection of personal data, lack of transparency in its use and sharing, data breaches that expose millions of records of personal information, and a data monetisation economy that reduces individuals’ ability to control the use of data about them. All the risks identified above have emerged under regulatory regimes that contain exactly the kind of recommendations made in this report, but which, like this report, fall short of mandating anonymous and pseudonymous means of access where these are appropriate.
We also see a trend, in various countries, towards insisting that there should be no online access without authentication – in other words, that all online access should be identifiable – with corresponding risks to freedom of expression and freedom of access to information. While these factors affect all of us, they are likely to have a far greater proportional effect on the next 1.5 billion people to go online, given the economic and political context in which they do so.
I would like to see the UN SDGs supported by statements of requirements that do include anonymity and pseudonymity, so that we do not simply pass on, to the next 1.5 billion people to come online, an approach to digital identity that replicates flaws with which we are already dealing. There is an opportunity, here, for our approach to identity not simply to clone itself, but to evolve.