
Fixing Heartbleed – It's The Culture, Not Just The Technology

In the aftermath of the discovery of the Heartbleed bug, it is useful to look at the bigger picture: the security building blocks that the Internet – and all of us – rely on.

A Bit About Heartbleed

Heartbleed allows someone making a connection using TLS to read up to 64KB of server memory, which may contain important bits – private keys, fragments of cached files, etc. Send another malformed heartbeat request, get another chunk of up to 64KB. Do this enough times and a bad actor may collect enough useful information to do some damage – for instance, the private key the server uses to authenticate itself and encrypt communication.
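To make the mechanics concrete, here is a deliberately simplified toy model of the flaw in C. The structure and names are illustrative, not the actual OpenSSL code; the point is simply how trusting a sender-supplied length turns an echo into a memory leak:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy model of the flaw; structure and field names are illustrative,
     * not OpenSSL's. A heartbeat carries a sender-chosen length field plus
     * the payload itself; the vulnerable code trusted that length field. */
    struct heartbeat {
        unsigned short claimed_len;  /* length the sender *claims* (up to 65535) */
        size_t         actual_len;   /* payload bytes actually received           */
        unsigned char  payload[8];
    };

    static unsigned char *echo_heartbeat(const struct heartbeat *hb)
    {
        unsigned char *response = malloc(hb->claimed_len);
        if (response == NULL)
            return NULL;
        /* BUG: copies claimed_len bytes no matter how few were received,
         * so whatever happens to sit in memory past the payload is echoed
         * back to the sender. */
        memcpy(response, hb->payload, hb->claimed_len);
        return response;
    }

    int main(void)
    {
        /* The sender claims 64 bytes but actually supplies only 4; in the
         * real attack the claim can be up to ~64KB per heartbeat. */
        struct heartbeat hb = { .claimed_len = 64, .actual_len = 4,
                                .payload = "ping" };
        unsigned char *leaked = echo_heartbeat(&hb);
        printf("echoed %u bytes for a %zu-byte payload\n",
               (unsigned)hb.claimed_len, hb.actual_len);
        free(leaked);
        return 0;
    }

The response is built from however many bytes the sender claimed, not from how many were actually received – and that is the entire bug.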

With the private key, an adversary can impersonate the server. If they can intercept traffic, they can also decrypt it. If the server is using cipher suites that do not support perfect forward secrecy (PFS), the adversary can also decrypt traffic captured in the past. And because exploitation leaves no trace, we cannot be sure whether the vulnerability has been exploited, or to what extent.

This is pretty bad stuff.

Most prominent sites and services have been fixed (see Mashable’s summary of affected sites). Things are getting back to normal for most web services, although we will probably hear more about the impact of this bug as time goes on.

Building Blocks of Internet Security

So how resilient is the Internet to such vulnerabilities? Let’s begin with the fact that in real life there is no absolute security and no flawless software. What differentiates a resilient system from a fragile one is not necessarily the absence of flaws; it is the absence of a single (or double, or triple, depending on the requirements) point of failure. In information security jargon this is called “defense in depth” – securing one layer alone (say, an application) is not enough.

So what are other security technologies that can provide additional layers of protection?

DNSSEC. It is much more difficult to impersonate a server if its name is protected by DNSSEC and the client can validate it. I’ve heard people question the value of DNSSEC if a site already uses an X.509 certificate issued by a respected CA. Well, apart from other flaws of the web PKI system, Heartbleed is a prominent example of why we need more DNSSEC deployment. Even if someone gets hold of the private key (essentially stealing the ability to use the X.509 certificate) and sets up a fake site, DNSSEC – if used by the name owner and validated by the client – will reveal the deception.
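Client-side validation does not have to be hard. As a minimal sketch (assuming libunbound is installed, and using an illustrative domain name and trust-anchor path), an application can check whether an answer is DNSSEC-validated like this:

    #include <stdio.h>
    #include <unbound.h>

    int main(void)
    {
        struct ub_ctx *ctx = ub_ctx_create();
        struct ub_result *result;

        /* Root trust anchor; the path varies by system. */
        ub_ctx_add_ta_file(ctx, "/etc/unbound/root.key");

        if (ub_resolve(ctx, "www.example.com", 1 /* A */, 1 /* IN */, &result) == 0) {
            if (result->secure)
                printf("answer is DNSSEC-validated\n");
            else if (result->bogus)
                printf("validation FAILED: %s\n", result->why_bogus);
            else
                printf("answer is insecure (zone not signed)\n");
            ub_resolve_free(result);
        }
        ub_ctx_delete(ctx);
        return 0;
    }

If the name owner signs their zone, a spoofed answer pointing clients at a fake site shows up as bogus rather than being silently accepted.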

DANE. DANE provides an easier way to replace compromised keys without relying on CRLs and OCSP (Online Certificate Status Protocol), which are not always enabled and working. The site owner creates a new certificate and publishes it in the DNS, protected by DNSSEC. Getting the same thing done the traditional way, through the issuing CA, actually takes longer.
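Continuing the libunbound sketch above and reusing its validating ctx (names still illustrative), a DANE-aware client would fetch the TLSA record and then compare its certificate association data against the certificate presented in the TLS handshake, per RFC 6698:

    struct ub_result *tlsa;

    /* TLSA record for port 443/TCP on www.example.com; rrtype 52 is TLSA. */
    if (ub_resolve(ctx, "_443._tcp.www.example.com", 52, 1, &tlsa) == 0 &&
        tlsa->havedata && tlsa->secure) {
        /* tlsa->data[0] holds usage, selector and matching type, followed
         * by the data to match against the server's certificate or key. */
        printf("got a DNSSEC-validated TLSA record, %d bytes\n", tlsa->len[0]);
        ub_resolve_free(tlsa);
    }

When a site owner rolls to a new certificate after an incident like Heartbleed, updating this record (and waiting out its TTL) is what lets clients pick up the change without a round trip through the CA.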

PFS. PFS comes with some cost to server operators, but it pays off in situations like Heartbleed. With cipher suites that support PFS, past communications cannot be decrypted even if the private key is later compromised.
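For a server built directly on OpenSSL, a minimal sketch of preferring forward-secret suites looks like this (1.0.x-era API; the cipher string is an assumption to be tuned for your clients):

    #include <openssl/ssl.h>

    /* Assumes SSL_library_init() has already been called. */
    SSL_CTX *make_pfs_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
        if (ctx == NULL)
            return NULL;

        /* Offer only ECDHE/DHE suites, so every session uses ephemeral keys
         * and a later key compromise cannot decrypt recorded traffic. */
        if (SSL_CTX_set_cipher_list(ctx,
                "ECDHE+AESGCM:ECDHE+AES:DHE+AESGCM:DHE+AES:!aNULL:!eNULL") != 1) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        return ctx;
    }

Web servers such as nginx and Apache expose the same choice through their cipher-suite configuration directives.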

Diversity. Flawless software is rare, and the more complex the code, the greater the probability of bugs creeping in. Bugs hurt, but they are devastating when the same bug is pervasive – which happens when there is not enough diversity in how a protocol or technology is implemented and deployed. Open standards that document the design of protocols and technologies enable such diversity, but taking advantage of it also requires a conscious choice when these components are deployed.

Users. Internet users contribute to the overall resilience of the system, too, by keeping software up to date and by using strong passwords that vary across sites. Changing passwords periodically has always been good practice, but in response to Heartbleed it is now vital (check that the site is no longer vulnerable before doing so, though, at https://www.ssllabs.com/ssltest/index.html).

Collective Responsibility

Looking at this specific, albeit unique, case we can see multiple paths toward improvement. Everyone has a role to play and an opportunity to contribute positively to the overall security and resilience of the Internet.

Security and resilience are ultimately not a technology. They are a culture.


What Can App Developers Learn From Heartbleed?

What lessons can application developers take from the Heartbleed bug?  How can we use this experience to make the Internet more secure?  Unless you have been offline in a cave for the past few days, odds are that you’ve seen the many stories about the Heartbleed bug (and many more stories today) and, hopefully, have taken some action to update any sites you have that use the OpenSSL library.  If you haven’t, then stop reading this post and go update your systems! (You can test whether your sites are vulnerable using one of the Heartbleed test tools.)

While you are at it, it would be a good time to change your passwords at the services that were affected (after they have fixed their servers). There is an enormous list out there covering the Alexa top 10,000 sites, but sites like Mashable have summarized the major sites affected.  (And periodically changing your passwords is just a general “best practice”, so even if a site was not affected, why not spend a few minutes to make the changes?)

Client Applications Need Updating, Too

For application developers, though, it is also important to update any client applications you may have that use the OpenSSL libraries to implement TLS/SSL connections.  While most of the attention has been focused on how attackers can gain access to information stored on servers, it is also true that a malicious site could harvest random blocks of memory from clients visiting that site.  There is even demonstration code that lets you test this with your clients.  Now, granted, for this attack to work an attacker would need to set up a malicious site and get you to visit the site through, for instance, a phishing email or link shared through social media.  The attacker could then send malformed heartbeat messages to your vulnerable client in an attempt to read random blocks of memory… which then may or may not have any useful information in them.

Again, the path for an attacker to actually exploit this would be a bit complex, but you definitely should test any client applications you have that rely on any OpenSSL libraries.
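One quick, low-effort check – sketched here with the 1.0.x-era API – is to print both the OpenSSL version your client was built against and the one it actually links at run time; anything in the 1.0.1 through 1.0.1f range without a backported fix needs attention:

    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/opensslv.h>

    int main(void)
    {
        printf("built against: %s\n", OPENSSL_VERSION_TEXT);           /* compile-time */
        printf("running with : %s\n", SSLeay_version(SSLEAY_VERSION)); /* run-time     */
        return 0;
    }

The two can differ when an application bundles or statically links its own copy of the library – which is exactly the case that a silent system update will miss.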

With all that said, since we have started this “TLS For Applications” topic here on Deploy360, what are some of the important lessons we can take away from this experience?  Here are a few I see coming out of this – I’d love to hear the lessons you take from all of this in the comments.

Security Testing Is Critical

It turns out that this was an incredibly trivial coding error. As Sean Cassidy points out in his excellent Diagnosis of the OpenSSL Heartbleed Bug, the issue boils down to this:

What if the requester didn’t actually supply payload bytes, like she said she did? What if pl really is only one byte? Then the read from memcpy is going to read whatever memory was near the SSLv3 record and within the same process.

There was no check on the input, and this allowed reading from other parts of the process’s memory.  As Cassidy later writes about the fix:

This does two things: the first check stops zero-length heartbeats. The second check checks to make sure that the actual record length is sufficiently long. That’s it.
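Mapped onto the illustrative heartbeat structure sketched earlier in the first post (so again a sketch, not the actual OpenSSL patch, which operates on its internal record structures), those two checks look roughly like this:

    #include <stdlib.h>
    #include <string.h>

    /* Same illustrative structure as the earlier toy sketch. */
    struct heartbeat {
        unsigned short claimed_len;  /* length field taken from the wire */
        size_t         actual_len;   /* payload bytes actually received  */
        unsigned char  payload[8];
    };

    static unsigned char *echo_heartbeat_fixed(const struct heartbeat *hb)
    {
        /* Check 1: discard empty (zero-length) heartbeats outright. */
        if (hb->actual_len == 0)
            return NULL;

        /* Check 2: discard heartbeats whose claimed payload length exceeds
         * what the record actually carries, so the copy stays in bounds. */
        if (hb->claimed_len > hb->actual_len)
            return NULL;

        unsigned char *response = malloc(hb->claimed_len);
        if (response != NULL)
            memcpy(response, hb->payload, hb->claimed_len);
        return response;
    }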

Today’s XKCD comic explains it all even more simply.

This is the kind of trivial mistake that probably every developer has made at some point in his or her career.  I am sure that if I were to go back through many lines of code in my past I’d find cases where I didn’t do the appropriate input or boundary checking.  It highlights the importance of doing security testing – and of setting up security unit tests that run as part of the ongoing testing of the application. It also highlights the need for ongoing security audits, and for reviewers of code submissions to test for security weaknesses as well.  But again, this is a common type of error, and you need testing to catch things like this.
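As one concrete (and hypothetical) illustration, a boundary test for the fixed handler sketched above could be as small as this, run as part of the normal test suite:

    #include <assert.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Zero-length heartbeats must be rejected. */
        struct heartbeat empty = { .claimed_len = 0, .actual_len = 0 };
        assert(echo_heartbeat_fixed(&empty) == NULL);

        /* A claimed length larger than what was received must be rejected. */
        struct heartbeat lying = { .claimed_len = 65535, .actual_len = 4,
                                   .payload = "ping" };
        assert(echo_heartbeat_fixed(&lying) == NULL);

        /* An honest heartbeat is echoed back. */
        struct heartbeat honest = { .claimed_len = 4, .actual_len = 4,
                                    .payload = "ping" };
        unsigned char *resp = echo_heartbeat_fixed(&honest);
        assert(resp != NULL);
        free(resp);
        return 0;
    }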

In this instance it just happens that the mistake was in a piece of software that has now become critical for much of the Internet!  Which leads to a second lesson…

Having A Rapid Upgrade Path/Plan Is Important

Since people learned about this bug earlier this week, there has been a massive push to upgrade software all across the Internet. Which raises the question: how easy is it for your users to upgrade their software in a high-priority situation such as this?

In many cases, it may be quite easy for users to install an update either from some kind of updated package or a download from an online application store.  In other cases, it may be extremely difficult to get updates out there.  In the midst of all this I read somewhere that many “home routers” may be vulnerable to this bug.  Given that these are often something people buy at their local electronics store, plug in, and pretty much forget… the odds of them getting updated any time soon are pretty slim.

Do you have a mechanism whereby people can rapidly deploy critical security fixes?

UPDATE: A ZDNet post notes that both Cisco and Juniper have issued update statements for some of their networking products. I expect other major vendors to follow soon.

Marketing Is Important To Getting Fixes Deployed

Finally, Patrick McKenzie had a great post out titled “What Heartbleed Can Teach The OSS Community About Marketing” that nicely hits on key elements of why we’re seeing so much attention to this – and why we are seeing fixes deployed.  He mentions the value of:

  1. Using a memorable name (“Heartbleed” vs “CVE-2014-0160”)
  2. Clear writing
  3. A dedicated web presence with an easy URL to share
  4. A visual identity that can be widely re-used

His article is well worth reading for more details.  His conclusion includes this paragraph that hit home for me (my emphasis added):

Given the importance of this, we owe the world as responsible professionals to not just produce the engineering artifacts which will correct the problem, but to advocate for their immediate adoption successfully.  If we get an A for Good Effort but do not actually achieve adoption because we stick to our usual “Put up an obtuse notice on a server in the middle of nowhere” game plan, the adversaries win.  The engineering reality of their compromises cannot be thwarted by effort or the feeling of self-righteousness we get by not getting our hands dirty with marketing, it can only be thwarted by successfully patched systems.

Exactly!

We need to make it easy for people to deploy our technologies – and our updates to those technologies.  (Sound like a familiar theme?)

What other lessons have you taken from this Heartbleed bug?  What else should application developers be thinking about to make TLS/SSL usage more secure?

Please do leave a comment here or on social media sites where this article is posted.  (And if you’re interested in helping us get more documentation out to help app developers with TLS/SSL, how about checking out our content roadmap for the TLS area?  What other items should we include?  Do you know of existing documents we should consider pointing to?  Interested in writing some documents?  Please do let us know.)

P.S. There’s now a post out about the process the Codenomicon team went through in disclosing the bug that is worth reading.