Categories: Building Trust, Encryption

What Is a Man in the Middle (MITM) Attack?

Simply put, MITM is an attack in which a third party gains access to the communications between two other parties, without either of those parties realising it. The third party might read the contents of the communication, or in some cases also manipulate it. So, for example, if Gerald sends Leila a message, intending it to be private, and Max intercepts the message, reads it, and passes it on to Leila, that would be a MITM attack. If Gerald wants to transfer £100 to Leila’s bank account, and Max intercepts the transaction and replaces Leila’s account number with his own, that would also be a MITM attack (in this case, Max is putting himself ‘in the middle’ between Gerald and his bank).

Why should I care?

Partly because MITM attacks can undermine so much of our modern way of life. In a connected life, we depend on the reliability and security of every connection. It’s not just about your conversations, messages and emails, either. If you can’t trust the connections you make to websites and online services, you may be vulnerable to fraud or impersonation, and if your connected devices and objects can’t communicate securely and reliably, they may put you and your household at risk.

The examples in this blog post are based on encrypted traffic between humans, but MITM attacks can affect any communication exchange, including device-to-device communication and connected objects (IoT). MITM attacks undermine the confidentiality and integrity of communications, and in doing so, they may open data, devices, and objects to malicious exploitation.

Imagine the danger if a hacker were able to trigger the airbags in a connected car, or remotely unlock an electronic door lock. The fact that connected objects can now affect the physical world introduces new factors to the risk assessment, especially in cases where physical infrastructure (transport, energy, industry) is automated or remotely controlled. A MITM attack on the control protocols for these systems, where an attacker interposes themselves between the controller and the device, could have devastating effects.

Is this something new?

Not in principle: MITM attacks have existed for as long as we have had to rely on others to convey our messages for us. When people used to seal their letters shut with wax and a personal seal, that was to protect against MITM attacks. The sealing wax didn’t make it impossible for a third party to break the wax and open the letter: it was intended to make it easy to tell if they had done so, because they would have difficulty replacing the wax and forging the impression left by the sender’s personal seal. This kind of protection is referred to as “tamper evidence”, and we see it in consumer products, too, such as the foil seal under the cap of a bottle of pills, or the cellophane wrapping around a packet of cigarettes.

If someone not only wanted to know if their letter had been tampered with, but also wanted to keep the contents confidential, they generally had to write the letter in a code that only the recipient would be able to decipher.

In the digital context, we can see equivalents for all these cases. For instance, if you send unencrypted email, the contents are visible to every intermediary and network node through which the traffic passes. Unencrypted email is like sending a postcard: the postman, anyone at the sorting office, and anyone with access to the recipient’s doormat can, if they choose, read the contents. If you want only the recipient to be able to read the contents of an email, you have to encrypt the email in such a way that only they can decrypt it, and if you want to ensure that no-one can change the contents without the recipient knowing, you have to apply an integrity check, such as a digital signature, to the message.
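
To make that last point concrete, here is a minimal sketch in Python, using the third-party cryptography package (the names, keys, and message are purely illustrative and not drawn from any particular mail system). Gerald signs the message with his private key; if Max alters the message in transit, verification on Leila’s side fails.

    # Illustrative only: a digital signature as an integrity check.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Gerald generates a key pair and shares only the public half with Leila.
    gerald_private = Ed25519PrivateKey.generate()
    gerald_public = gerald_private.public_key()

    message = b"Please transfer 100 pounds to Leila."
    signature = gerald_private.sign(message)  # sent alongside the message

    # Max changes the message in transit...
    tampered = b"Please transfer 100 pounds to Max."

    # ...and Leila's verification immediately detects it.
    try:
        gerald_public.verify(signature, tampered)
    except InvalidSignature:
        print("Integrity check failed: the message was altered in transit.")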

So, for unencrypted traffic, a MITM “attack” consists of making sure you have access to the message flow between Gerald and Leila.

For encrypted traffic, that isn’t enough; you’ll probably be able to see that Gerald is writing to Leila, because that information needs to be sent in the clear in order for the message to be routed correctly. But you won’t be able to see the contents: for that, you’d need access to the key used to encrypt the message. In the kind of encryption normally used to secure messages, the message is encrypted and decrypted using two copies of the same key, just like sending someone a message in a locked cash box. For that to work, obviously, Gerald and Leila somehow have to exchange a copy of the key. So, in this case, a MITM attack would start by intercepting that key exchange, which would then give an attacker (Max) the means to unlock the message after Gerald sends it, read it, re-encrypt it, and send it on to Leila, who will be none the wiser.
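
As a rough sketch of that shared-key model, the following Python fragment uses the cryptography package’s Fernet construction (the key and message are illustrative). Whoever holds a copy of shared_key can both read and rewrite the traffic, which is exactly what Max is after.

    # Illustrative only: symmetric encryption with a single shared key.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()   # Gerald and Leila must both hold this
    ciphertext = Fernet(shared_key).encrypt(b"Meet me at noon.")  # what travels

    # Anyone holding the key can decrypt; anyone without it sees only ciphertext.
    print(Fernet(shared_key).decrypt(ciphertext))

    # If Max captured shared_key while it was being exchanged, he could decrypt,
    # read, re-encrypt, and forward the message without Gerald or Leila noticing.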

Here, we have two “flavors” of MITM attack. The first is to intercept the message contents themselves; the second is to intercept the key used to protect the traffic. Of these, message interception might simply be a case of sitting between the two communicating parties and reading the traffic; intercepting the key is likely to require active impersonation of the communicating parties. This is why successful MITM attacks put you at risk of being deceived: to work, they have to fool you into thinking you are talking to your intended partner when you are not.
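
Key agreement schemes vary, but an unauthenticated Diffie-Hellman-style exchange, sketched below in Python with the cryptography package purely as an illustration, shows why the second flavor requires active impersonation: if the exchange isn’t authenticated, Max can hand each party his own public key and end up sharing a secret with both of them.

    # Illustrative only: a man-in-the-middle on an unauthenticated key exchange.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    gerald = X25519PrivateKey.generate()
    leila = X25519PrivateKey.generate()
    max_key = X25519PrivateKey.generate()   # Max's own key pair

    # Max intercepts the exchange and substitutes his public key in each direction,
    # so each victim unknowingly agrees a secret with Max rather than each other.
    gerald_side = gerald.exchange(max_key.public_key())   # Gerald thinks this is Leila
    leila_side = leila.exchange(max_key.public_key())     # Leila thinks this is Gerald

    # Max can reproduce both secrets, so he can decrypt and re-encrypt all traffic.
    assert gerald_side == max_key.exchange(gerald.public_key())
    assert leila_side == max_key.exchange(leila.public_key())

This is the gap that certificates and digital signatures are intended to close: by binding a public key to a verified identity, they make Max’s substitution detectable.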

What can I do about it?

A successful MITM attack will give the users at each end no clue that it is happening – especially if it has been designed into the infrastructure itself. The security of this kind of system, as a whole, depends on the security of a great many elements, all of which have to function properly. Some of those elements are in the user’s hands, but others are owned and operated by third parties (such as browser manufacturers and certificate authorities).

As a user, you need to understand the available signs that tell you whether the system is working as intended:

  • Distinguishing a secure browser session from an insecure one
  • Recognizing a valid digital signature
  • Knowing how to react appropriately to a certificate warning (a minimal version of this check is sketched after this list)
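
For readers who want to see what lies behind the padlock icon, here is a minimal sketch using only the Python standard library (the hostname is just a placeholder). Opening a TLS connection with a default context performs the same certificate and hostname checks a browser does; if they fail, the library raises an error instead of handing you an insecure session.

    # Illustrative only: what "checking the certificate" means programmatically.
    import socket
    import ssl

    hostname = "example.org"                  # placeholder hostname
    context = ssl.create_default_context()    # verifies against trusted CAs

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Protocol:", tls.version())
            cert = tls.getpeercert()
            print("Certificate subject:", dict(item[0] for item in cert["subject"]))

    # An invalid or mismatched certificate raises ssl.SSLCertVerificationError,
    # the programmatic counterpart of the warning your browser shows you.

Browsers perform these checks automatically; a certificate warning means they have failed, and clicking through it defeats the protection.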

It’s also important to practice good security hygiene with your passwords and keys. Of course, well-designed systems will make this easy for you, but regrettably, not all systems are designed with that goal in mind.

Want to learn more?

In 2015 a group of experts in cryptography, IT security, computer science, security engineering, and public policy produced a paper in which they set out the implications of requiring third-party access to encrypted communications. This is relevant because when governments request or require access to encrypted communications, they are, in most instances, essentially asking for a man-in-the-middle option to be built into the products, services, and/or infrastructure on which their citizens rely.

Read “Keys Under Doormats” – Technical Report, MIT Computer Science and Artificial Intelligence Laboratory, 2015

Categories: Building Trust, Internet of Things (IoT), Privacy

IoT Privacy for Policymakers: Solutions Need Informed Discussion

The consumer Internet of Things market is growing exponentially – one prediction suggests that people will be using 25 billion connected devices by 2021. These new products promise innovation and convenience, but they can also erode privacy boundaries and expose consumers to risk without their knowledge or consent. Is that a good bargain?

The policy brief “IoT Privacy for Policymakers” explores this question and more.

Do consumers have enough information and choice to make meaningful decisions? Do vendors and service providers have the opportunity and incentive to bring privacy-enhancing innovations to the market? Can the downsides of IoT be mitigated through policy actions – and if so, how?

“IoT Privacy for Policymakers” explains the scope and nature of IoT privacy and the issues it raises. As ever, those issues are multi-party. They cross the boundaries of jurisdictions and sectoral regulations. There are no single-stakeholder solutions, so a multistakeholder approach is needed. Solutions need informed discussion that includes consumer rights, economic incentives, technical options, and regulatory measures. This paper is a positive step in that direction.

The policy brief also includes a “how to” on implementing Privacy by Design and four Guiding Principles and Recommendations:

  • Enhance User Control
  • Improve Transparency and Notification
  • Keep Pace with Technology
  • Strengthen the Multi-stakeholder Approach to IoT Privacy

Read “IoT Privacy for Policymakers” and find out how you can take steps to help safeguard privacy and trust in IoT.

Categories: Building Trust, Privacy, Reports

Transparency, Fairness, and Respect: The Policy Brief on Responsible Data Handling

It’s been a little over a year since the European Union’s General Data Protection Regulation (GDPR) was implemented, and people noticed its impact almost immediately. First, there was the flurry of emails seeking users’ consent to the collection and use of their data. Since then, there has also been an increase in the number of sites that invite the user to consent to tracking by clicking “Yes to everything,” or to reject it by going through a laborious process of clicking “No” for each individual category. (Some non-EU sites simply announce that, if they think you are visiting from the EU, they cannot let you access their content.) There was also the headline-grabbing €50 million fine imposed on Google by the French supervisory authority.

In its summary of the year, the European Data Protection Board (EDPB) reported an increase in the number of complaints received under GDPR compared to the previous year, and a “perceived rise in awareness about data protection rights among individuals.” Users are more informed and want more control over the collection and use of their personal data.

They’re probably irritated by the current crop of consent panels, and either ignore, bypass, or click through them as fast as possible – undermining the concept of informed, freely-given consent. They’re limited in the signals they can send about consent, and what they do signal may be meaningless. And if their access is blocked because of the geographic location of their IP address, they aren’t sending any consent signals at all. Whatever the motivation for this kind of blocking, it leads to a “fragmentation” of the Internet, in which information is freely available to some people, but inaccessible to others just because of where they seem to be located.

Nevertheless, because people are better-informed than they were, they are more motivated to complain.

If individuals’ complaints prove justified, organizations that collect personal data face the prospect of much bigger financial penalties than before for data protection offenses. The risk of penalties is independent of geographical location. It applies across national and jurisdictional boundaries, and therefore in contexts where the idea of “personal data” could have widely different cultural interpretations. When data controllers are faced with risks arising from laws outside their own jurisdiction, they are likely to need ways of setting themselves a high benchmark that reduces their exposure to compliance-related and reputational risk.

In short, everyone ends up relying on better behavior by data controllers.

If “improving behavior” involves setting a higher bar than legal compliance, it takes us into the realm of ethics, which can be a daunting prospect for the average business. So we wanted to develop something more approachable and practical: the Policy Brief on Responsible Data Handling. The policy brief looks at the issue from the data controller’s perspective, and identifies three principles to help them decide how to collect and process personal data in a responsible way: Transparency, Fairness, and Respect.

We developed each of these principles into specific guidelines. For example:

  • If what you are doing with personal data comes as a surprise to the individual, you probably shouldn’t be doing it. If you can’t, or don’t, explain the uses you make of personal data, you’re probably failing on transparency.
  • If what you do with personal data means you get the benefit, but the risk is offloaded onto the individual, your product or service probably hasn’t been designed with fairness as a key objective. Similarly, if you lock users into your platform by making it impossible for them to retrieve their data and move it elsewhere, you’re failing on fairness.
  • If you share personal data with third parties but don’t check that they treat it properly, you may be failing to respect the individual and their rights and interests.

The Policy Brief on Responsible Data Handling includes more examples for each principle and a short list of recommendations for policymakers and data controllers, whether private or public sector. Thanks to GDPR, we know that people want more control over their data. The policy brief is a step towards protecting privacy and building trust in the Internet itself. If you have comments or suggestions about how to continue that process, please let us know.

Categories: Building Trust, Identity, Privacy, Tutorials

Data Privacy Day: Understanding Your Digital Footprints

It may have been a quiet week in Lake Wobegon, but elsewhere things have been decidedly lively.

On Jan 17th, President Obama made his statement in response to his Advisory Board’s review of NSA surveillance practices, and Internet Society (having already commented on the review) followed up with its observations on the President’s statement.

Meanwhile, in Northern France, the International Cybersecurity Forum (FIC2014) got under way, with some 2,500 attendees gathering in Lille to hear, among others, the French Minister of the Interior outline his policies for countering the cyber threat while safeguarding citizens’ basic freedoms.

And before FIC2014 had even finished, the 2014 conference on Computers, Privacy and Data Protection (CPDP) had already started in Brussels.

All these events raised issues which directly concern us – digital citizens – and the digital footprints we create as we go about our daily business.

Crucially, we need to look at whether it is possible to control (or at least manage… or even see…) the trail of personal information we leave on the Internet.

Consider the following:

  • Obama proposes new governance measures for the collection of US citizens’ telephone metadata, but skirts the question of privacy as a universal right, and says nothing about the economic damage done to companies or to trust in Internet technology. By and large, nothing the President said suggested any great change with regard to the average citizen’s data: mass interception and pervasive monitoring will continue, as will the long-term storage of vast amounts of tracking data. If there is to be substantive change, all the indications are that it will have to come from citizens themselves.
  • At FIC2014, the debate on the question of whether online anonymity is possible shows increasing maturity and sophistication. The key point is made that achieving ‘anonymity’ today does not mean what it meant 10 years ago, nor what it meant 1000 years ago. What implications does that have for 10 years hence? That’s an important question, because the data we classify as ‘anonymous’ today will still be around in 10 years’ time: will we still think they are anonymous, and will we wish, in 2024, that we had thought more carefully in 2014?
  • And at CPDP, a troubling theme is the suggestion – by some stakeholders – that we should stop worrying about controlling the collection of personal data, and instead focus our efforts on achieving better control over its use. I couldn’t agree less. Imagine how we’d feel if the nuclear industry adopted the same philosophy. For all that personal data is an increasingly vital economic asset, its retention also represents a growing liability – and by far the best way to manage that liability is not to collect the data in the first place. The principle of data minimisation, as an important element of privacy by design, is not a new one, but our interpretation of it needs to keep pace with innovation.

Despite the imbalance in the power relationship between us and service providers, data minimisation is not just something we should insist they do on our behalf; the privacy outcomes are something for which we must take more responsibility ourselves.

The implications for individual consumers and citizens are clear. We all need to be doing more to understand our digital footprints, to understand the asymmetric power relationship they represent, and to take responsibility to the extent that we can. To that end, and to coincide with Data Privacy Day 2014, Internet Society is launching a set of materials to help us all understand our digital footprints:

What they are, and what we can do to manage them.

Here’s what you will find in the package:

We will follow this up with a short animated video in a few weeks. You can use that as a “nudge point”, to see if you have started thinking differently about your online privacy and your digital footprints. I hope you will.