The Internet Society and Chatham House will be hosting a roundtable of experts to deconstruct the debate on encryption and law enforcement access this week. I am not under any illusion that we will walk away with the solution. This is a complex problem: one that many have tried to solve, often with limited success. However, I am optimistic that the people in the room have the potential to look beyond their own positions, to consider the impact of decisions they may make concerning encryption, and to work together to unite two important societal objectives: the security of infrastructure, devices, data and communications; and the needs of law enforcement.
Perhaps the biggest dilemma facing both law enforcement and companies that provide digital services is – how much encryption is “enough” and who gets to decide?
There is an “encryption dichotomy” in the market: some services are more “law enforcement access friendly” than others. This dichotomy is not new. But, in the last four years, a number of leading tech companies with substantial customer bases have added more encryption and removed their ability to decrypt their customers’ content, to increase the privacy and security of their services. A side-effect of these decisions has been a visible change in the landscape of law enforcement access, especially in terms of messaging app and smartphone content:
- Many more devices are encrypted by default.
- Millions of users regularly send messages using end-to-end encryption.
At the same time, other companies have chosen to retain the ability to read and use their customers’ content, or perhaps they decided there is not a sufficient business case to add end-to-end encryption or user-controlled encryption. Their users’ encrypted content is more readily available to law enforcement because they hold the decryption keys.
These differing approaches by companies, in the context of the debate on law enforcement access, raise some awkward questions: If there are reputable companies that offer their services in a way that encryption does not preclude their ability to hand over content to law enforcement in response to a warrant, should other companies do the same? Are those services as secure? Is an end-to-end encryption strategy a security “nice to have” or something we should strive to implement?
Before we delve further into these complicated questions, it is useful to understand how we reached this apparent impasse.
The evolving debate about encryption and law enforcement access
The current version of the debate on encryption is less than five years old. But in that time, there have been subtle, yet significant shifts in the focus of the discussions.
It is hard to say for certain, but perhaps the spark that first ignited this debate was Edward Snowden’s leak of classified information from the U.S. National Security Agency (NSA). In response to revelations of mass online surveillance, there was a push by major US-based Internet companies and privacy-aware users to embrace technical measures to protect the confidentiality of communications and data. Some notable early examples include:
- Google switching to HTTPS between Gmail users and their servers;
- Google, Yahoo, Microsoft and others fully encrypting data flowing between their data centres;
- Google searches using HTTPS only;
- Facebook defaulting to HTTPS;
- Apple turning on encryption by default in iOS 8;
- Twitter adding Forward Secrecy; and
- the rise of end-to-end encrypted messaging apps (e.g. Signal, Threema, Telegram, Wire).
In hindsight – and knowing what we now know about cyber threats – it seems ludicrous that data-in-transit was routinely sent unencrypted, and data-at-rest was stored in the clear. That was just four years ago.
The Internet standards community also came out with a strong clear message that encryption should be the norm for Internet traffic. For example, the Internet Architecture Board (IAB) said in November 2014:
“The IAB now believes it is important for protocol designers, developers, and operators to make encryption the norm for Internet traffic. … Newly designed protocols should prefer encryption to cleartext operation. … We recommend that encryption be deployed throughout the protocol stack … The IAB urges protocol designers to design for confidential operation by default. We strongly encourage developers to include encryption in their implementations, and to make them encrypted by default. We similarly encourage network and service operators to deploy encryption where it is not yet deployed, and we urge firewall policy administrators to permit encrypted traffic.”
The use of encryption was already on the rise, especially among enterprises, as part of their data security strategy, and on the Web. In 2010, Google launched encrypted searches, though encryption was not the default. Similarly, in 2011, Facebook began offering users the option of connecting to its server via HTTPS. Nevertheless, the steps taken by major tech companies following Edward Snowden’s disclosures represented a major “step-up” in the trend of increasing encryption.
Concerns about government surveillance were not the only factor driving the widespread use of encryption from 2013. There was also growing awareness of the magnitude of monetization of users’ data by intermediaries, as well as the very real risks of data breaches and cybercrime.
With the upsurge in the use of encryption, some law enforcement agencies entered the debate, pointing out the challenges that encryption poses to investigations and enforcement. In particular, the concern was that while law enforcement has the legal authority to access electronic information, it lacked the technical ability to do so because the information was protected by encryption. In practical terms, this could mean law enforcement being unable to: identify and locate suspects; unearth criminal activity on the dark web; intervene while crimes are being planned; and bring criminals to justice. This became known as the “going dark” effect.
It was a difficult case to make in an environment where several national security agencies had been shown to have immense surveillance capabilities actively deployed on a mass scale, especially in those countries where the functions of law enforcement and national security overlapped. In addition, others argued that it was actually getting a lot “brighter” for law enforcement. They now have unprecedented access to information through open-source intelligence, collection of metadata, sophisticated traffic analysis tools and data analysis algorithms.
Once law enforcement entered, the debate shifted towards whether so-called “backdoors” could be provided to give law enforcement exceptional access. Divergent views exist as to whether this would be a sensible path forward. A watershed moment came in 2015 with the report Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications, in which eminent computer scientists and security experts concluded:
“… This report’s analysis of law enforcement demands for exceptional access to private communications and data shows that such access will open doors through which criminals and malicious nation-states can attack the very individuals law enforcement seeks to defend. The costs would be substantial, the damage to innovation severe, and the consequences to economic growth difficult to predict. The costs to developed countries’ soft power and to our moral authority would also be considerable. …”
This was followed up by various initiatives designed to preserve the integrity of encryption. These included the Secure the Internet principles, signed by organisations, companies and individuals from all over the world. The petition calls on governments not to ban, limit access to, or weaken encryption.
While some proponents for exceptional law enforcement access have continued to advocate for a built-in “door” for law enforcement, others have backed away from this approach in light of the security concerns, favouring an approach that permits lawful hacking (otherwise known as encryption circumvention). We see this, for example, in the joint statement of Europol and ENISA in 2016, where they say:
“Solutions that intentionally weaken technical protection mechanisms to support law enforcement will intrinsically weaken the protection against criminals as well, which makes an easy solution impossible. … For the investigation and disruption of crimes, it is important to use all possible and lawfully permitted means to get access to any relevant information, even if the suspect encrypted it. To achieve this, it would be worthwhile to collect and share best practices to circumvent encryption already in use in some jurisdictions. …”
In this regard, the European Commission announced on 18 October 2017, as part of its anti-terrorism package, that it would support Europol in further developing its decryption capability.
A lawful hacking approach starts to look like a viable option as it turns out that it is not so easy to create encrypted protocols, platforms or services without any weaknesses. (See, for example, the recently unveiled weakness in the Wi-Fi encryption protocol WPA2, known as Key Reinstallation Attack (KRACK), and the Infineon crypto chip key generation bug). The presence of weaknesses means there may be a way in for law enforcement without the need for the decryption keys. However, often, exploiting security weaknesses requires a more targeted approach, as well as more sophisticated technical resources, which smaller law enforcement agencies may not have.
Also, any security weakness that law enforcement could use, if discovered, could be potentially exploited by cyber criminals or other state actors. So, introducing new weaknesses is a terrible idea and any known vulnerabilities should be fixed without delay.
Yet, if a weakness exists that could enable law enforcement access to prevent or prosecute a crime without harming the safety and security of other Internet users, the question arises – why should criminals who ignore the law be the only ones able to use it?
Moves in the direction of lawful hacking, combined with some high-profile cases of hacked caches of hacking tools (e.g. Hacking Team) and hijacked exploits (e.g. the NSA’s EternalBlue used in the Petya/NotPetya ransomware), have opened up a discussion in this debate about the repercussions of governments hoarding security vulnerabilities, the dangers of nation states developing and holding hacking tools, and the importance of responsible disclosure. Certainly, the recent series of severe government data breaches (e.g. OPM, SEC, NSA) does not lend confidence to an approach that involves creating and stockpiling hacking tools for law enforcement purposes.
In parallel, there was the infamous Apple vs. FBI case, in which the FBI sought an order requiring Apple to bypass or disable the auto-erase function on a seized iPhone so that the FBI could more effectively mount a brute-force attack and access the content on the encrypted device. The FBI was effectively asking Apple to weaken the security of the device’s authentication software, not the underlying encryption. While the order was being contested, the FBI announced it had unlocked the iPhone and withdrew its request, so we will never know what the court’s ruling would have been. It is, however, a practical and public example of lawful hacking.
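The dispute turned on the auto-erase and rate-limiting protections because, without them, a short numeric passcode falls quickly to exhaustive search. A toy sketch illustrates the scale of the problem (hypothetical 4-digit PIN and plain SHA-256 for illustration only; real devices entangle the passcode with a hardware key and enforce escalating delays):

```python
import hashlib

def hash_pin(pin: str) -> str:
    # Toy illustration only: real devices use hardware-backed
    # key derivation, not a bare hash of the passcode.
    return hashlib.sha256(pin.encode()).hexdigest()

stored = hash_pin("7291")  # stands in for the (unknown) user passcode

# Exhaustive search over all 10,000 four-digit PINs.
for attempt in range(10000):
    candidate = f"{attempt:04d}"
    if hash_pin(candidate) == stored:
        print(f"recovered PIN {candidate} after {attempt + 1} tries")
        break
```

A four-digit space holds only 10,000 candidates, which is why the protections around the passcode – not the encryption itself – were the obstacle the FBI asked Apple to remove.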
While much of the focus has been on service providers and their products, attention has also turned to whether users have a right not to grant law enforcement access to the content of their encrypted devices. There have also been various court cases in the US concerning whether the Fifth Amendment (the right to remain silent) protects individuals from being legally compelled to disclose their passwords or passcodes, and from being required to unlock a device with biometrics (e.g. through Apple Touch ID).
We also see in the UK the emerging idea that the use of encryption should be an aggravating factor in sentencing for terrorist offences. While this idea is now focused on terrorism, it might be later applied to other criminal offences. This is alarming.
Encryption is not a weapon: it is a security tool.
Adding it as an aggravating factor could paint encryption as something only criminals use. Innocent users might also be discouraged from using encryption if they fear that their use could be construed as a deceitful act. Further, it is only a small step away from the idea that encryption could be evidence of criminal activity or terrorism.
Four years on, where are we now?
Encryption by default and end-to-end encryption is on the rise across a host of widely used platforms. Here is a sample:
- Google followed Apple, and turned on encryption by default in Android 6.0 Marshmallow;
- In 2016, Let’s Encrypt, a free, automated and open certificate authority was launched to help websites offer their content over HTTPS, a secure encryption Web protocol;
- WhatsApp also turned on end-to-end encryption by default in April 2016. At the time, it had more than 1 billion monthly users;
- Apple iOS 11 allows a user to manually disable Touch ID by pressing the power button 5 times in succession;
- As of October 2017, Google is starting to warn Chrome browser users about unencrypted websites by displaying a “not secure” message;
- More than 60% of web pages loaded by the Firefox browser use HTTPS.
Lawful hacking is still very much on the table, but two other aspects have become crucial in the debate – the widespread use of end-to-end encrypted messaging apps and encryption turned on as the default. End-to-end encryption takes the service provider out of the equation.
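“Taking the service provider out of the equation” can be sketched in a few lines: the two endpoints agree on a shared key, and the server only ever relays ciphertext. The following is a deliberately minimal, insecure illustration using a toy Diffie-Hellman exchange and an XOR keystream; real messaging apps use authenticated elliptic-curve exchanges and ratcheting protocols such as Signal’s:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (illustration only; not a safe group).
P = 2**127 - 1
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 2
    pub = pow(G, priv, P)
    return priv, pub

def shared_key(priv, other_pub):
    secret = pow(other_pub, priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Keystream expanded from the shared key (toy cipher, not secure).
    stream = hashlib.sha256(key + b"stream").digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Each endpoint keeps its private key; only public values cross the wire.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)
assert k_alice == k_bob  # both ends derive the same key independently

ciphertext = xor_cipher(k_alice, b"meet at noon")

# The provider's server relays the ciphertext but holds no key material,
# so it has nothing to hand over in response to a warrant for plaintext.
plaintext = xor_cipher(k_bob, ciphertext)
```

The server in this model is a pure relay: even under legal compulsion, it can produce only ciphertext and metadata, never the message content.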
Now, for law enforcement to access the encrypted content, it has two basic choices: compel a user to decrypt the content or assist in decrypting it (e.g. by disclosing the password), or find a way to circumvent the encryption by attacking the crypto or the authentication mechanisms (lawful hacking).
Neither of these is easy – it is generally much simpler to require a service provider to decrypt the content specified in a warrant, and that can be done without tipping off the user. Also, studies show users rarely change default settings, which means defaults have a substantial effect on the number of people who do – and do not – use encryption.
The debate is not quietening down. If anything, it is getting louder. But, it does appear to be homing in on end-to-end encrypted messaging apps and user-controlled encryption (i.e. where the user, not the service provider, has the ability to decrypt the content).
For example, in the UK, where the Chatham House roundtable will take place, UK Home Secretary Rudd has focused attention on WhatsApp, identifying the end-to-end encryption in messaging apps as a problem for UK law enforcement. WhatsApp has reportedly declined a request from the UK government to offer a means for law enforcement to access encrypted messages.
We see a similar focus on end-to-end encrypted messaging apps in Australia and elsewhere.
New vocabulary has also been introduced to the debate, by US Deputy Attorney General Rod Rosenstein in a recent speech to the United States Naval Academy:
“Responsible encryption is achievable. Responsible encryption can involve effective, secure encryption that allows access only with judicial authorization. Such encryption already exists. Examples include the central management of security keys and operating system updates; the scanning of content, like your e-mails, for advertising purposes; the simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop.”
This goes straight to the heart of the dilemma. If some companies choose not to offer end-to-end encryption or user-controlled encryption – which has the unintended benefit of facilitating law enforcement access to content – what does this mean for companies that offer those encryption features? Rosenstein’s approach would seem to be calling for those companies to wind back or remove those features. Is this desirable?
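The “key recovery” model Rosenstein describes can be sketched simply: the provider wraps each user’s content key under a centrally held master key, so it can decrypt in response to judicial authorization – but anyone who compromises that single master key gains the same power over every user. This is a toy XOR-based sketch with hypothetical names; real escrow proposals use proper key-wrapping algorithms, and the single-point-of-failure objection is the one raised in Keys Under Doormats:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Expand a key into n bytes of keystream (toy construction, not secure).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

provider_master_key = secrets.token_bytes(32)   # held centrally by the provider

# Each user's content is encrypted under a per-user key...
user_key = secrets.token_bytes(32)
ciphertext = xor(user_key, b"private message")

# ...and the per-user key is escrowed: wrapped under the master key.
escrowed_key = xor(provider_master_key, user_key)

# With judicial authorization, the provider unwraps the key and decrypts.
recovered_key = xor(provider_master_key, escrowed_key)
assert xor(recovered_key, ciphertext) == b"private message"

# The flip side: whoever steals the one master key can run the exact same
# two lines above against every user's escrowed key.
```

The sketch makes the trade-off concrete: the same mechanism that answers a warrant is a single, high-value target for criminals and hostile states.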
If we want a test case, we need only to look to China and its most popular messaging app, WeChat. WeChat, with more than 900 million active monthly users, only uses transport encryption – reportedly to comply with Chinese law. This opens up users’ data to server-side censorship and the risk of surveillance. Also, in August this year, Apple removed VPN apps from its store to comply with China’s rules.
Law enforcement and encryption working together
Encryption is not law enforcement’s enemy. It is its ally. No one would dispute the value of encryption as a security tool, especially in light of the numerous major data breaches that have occurred. But, we need to get past the notion that encryption is only needed for “sensitive” transactions such as banking, health and government services. We also need to figure out how law enforcement can operate effectively in the presence of both commercially encrypted apps, services and devices, and “homemade” encryption.
The starting point is that encryption should be the norm for Internet traffic and data.
Where possible, end-to-end encryption solutions should be made available. The Internet Society believes that legal and technical attempts to limit the use of encryption, well-intentioned or not, will negatively impact the security of law-abiding citizens.
Our goal at the Chatham House roundtable on Thursday is to explore some of the core issues surrounding law enforcement access and the use of end-to-end encryption; user-controlled encryption; and encryption on by default – and to explore the range of perspectives on the very difficult question of how much encryption is “enough” and who gets to decide? We will also delve more deeply into the practice of lawful hacking, hoping to come away with some clear parameters and principles.