There are several commonly used mechanisms for supporting secure and private communication, transaction protection, and identity assertion and management. These include the so-called Internet PKI, commonly used for secure web browsing but applicable to other applications as well; PKI for e-mail; RPKI, used by Regional Internet Registries to assert the holders of IP resources; and DNSSEC, which can be used to validate DNS queries. DANE is a newer protocol that uses DNSSEC to allow domain owners to assert their own digital certificates, potentially incorporating the functionality of the Internet PKI into the global DNS.
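To make the DANE idea concrete: a TLSA record in the DNS carries (among other fields) a digest of a certificate, which a client can compare against the certificate the server actually presents. Below is a minimal, illustrative sketch of computing the SHA-256 digest (TLSA "matching type 1") for a DER-encoded certificate; the `der_bytes` input is a placeholder, not a real certificate.

```python
import hashlib

def tlsa_digest(der_bytes: bytes) -> str:
    """SHA-256 digest (TLSA matching type 1) of DER-encoded certificate
    data, as the hex string that would be published in a TLSA record."""
    return hashlib.sha256(der_bytes).hexdigest()

# With certificate usage 3 (DANE-EE) and selector 0 (full certificate),
# the record data would take the form: "3 0 1 <digest>"
record = f"3 0 1 {tlsa_digest(b'...DER certificate bytes...')}"
```

A validating client would fetch this record over DNSSEC-signed DNS and check the presented certificate against the digest, removing the need to rely solely on a third-party CA.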
The Google Cloud Platform (GCP) is now able to support IPv6 clients using HTTP(S), SSL proxy and TCP proxy load balancing. The load balancer will accept IPv6 connections from users, and proxy those over IPv4 to virtual machines (i.e. instances). This allows instances to appear as IPv6 services to IPv6 clients.
At the moment, this functionality is an alpha release and is not recommended for production use, but it demonstrates a commitment to supporting IPv6 services. GCP allocates a /64 address range for forwarding purposes.
Google Cloud Platform is a cloud computing service offering website and application hosting, data storage and compute facilities on Google’s infrastructure.
More information on how to set up IPv6 support is available on the GCP website.
(Other remote connection options can be found at the bottom of the agenda page.)
Note – this workshop is not about DNSSEC, which is a method to protect the integrity of DNS (to ensure DNS info is not modified in transit), but rather new work being done within the IETF to improve the confidentiality of DNS.
The sessions include:
How DNS Works in Tor & Its Anonymity Implications
DNS Privacy through Mixnets and Micropayments
Towards Secure Name Resolution on the Internet – GNS
Changing DNS Usage Profiles for Increased Privacy Protection
DNS-DNS: DNS-based De-NAT Scheme
Can NSEC5 be practical for DNSSEC deployments?
Privacy analysis of the DNS-based protocol for obtaining inclusion proof
Panel Discussion: The Tension between DNS Privacy and DNS Service Management
The Usability Challenge for DNS Privacy and End Users
An Empirical Comparison of DNS Padding Schemes
DNS Service Discovery Privacy
Trustworthy DNS Privacy Services
EIL: Dealing with the Privacy Problem of ECS
Panel Discussion: DNS-over-TLS Service Provision Challenges: Testing, Verification, internet.nl
If you are not there in person (as I will not be), you can also follow along on the #NDSS17 hashtag on Twitter. There will also be tweets coming out of:
Stéphane Bortzmeyer will also be attending (and speaking at) the workshop – and he is usually a prolific tweeter at @bortzmeyer.
The sessions will also be recorded for later viewing. I’m looking forward to seeing the activity coming out of this event spur further activity on making DNS even more secure and private.
Please do follow along remotely – and please do share this information with other people you think might be interested. Thank you!
Image from Unsplash – I thought about showing the wide beaches, but the reality is that the conference participants won’t really get a chance to visit them. I thought “Lifeguard” was appropriate, though, because lifeguards are all about protecting people and keeping things safe.
The best known usage of TLS is in secure web browsing (using HTTPS) which can be visually checked using the padlock icon that appears in browsers when a secure session is established. Unfortunately, mobile apps are often less transparent about the security of their connections when they connect to a server, and it can be much harder to tell whether an app is using TLS.
Apple therefore introduced App Transport Security (ATS) in iOS 9, which forces apps to connect over a secure connection. Until now, it has been possible for apps to disable this so they can use non-TLS enabled services, but from some point in 2017 this will no longer be possible.
Apps were already supposed to have migrated to using ATS by 1 January this year, but with only 3% of the 200 most popular apps (including Facebook, LinkedIn and Skype) found to be fully compliant, Apple has announced an extension to this deadline. Nevertheless, if you’re an iOS app developer or operating services accessed by iOS apps, you need to ensure that you can support the ATS requirements over the coming months.
We’re currently making an effort to build out our TLS for Applications resources, and we’re pleased to announce some updates.
First up, we’ve added a TLS Basics page which answers some of the questions about what TLS is, how it works, and why you should deploy it. If you need more detail, it’s worth checking out the SSL/TLS: What’s Under the Hood? paper from the SANS Institute.
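For a hands-on feel of the basics, here is a small sketch using Python's standard-library ssl module to open a verified TLS connection and report what was negotiated. The host name is just a placeholder; the exact protocol and cipher returned will depend on the server.

```python
import socket
import ssl

def tls_info(host: str, port: int = 443):
    """Connect with certificate verification and hostname checking
    enabled, and return the negotiated protocol version and cipher.
    Raises ssl.SSLError if the handshake or verification fails."""
    context = ssl.create_default_context()  # secure defaults: verify certs, check hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

# e.g. tls_info("example.com") might return something like
# ('TLSv1.3', ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256))
```

The key point for deployers is that `ssl.create_default_context()` enables certificate verification by default; disabling it (as some apps and scripts unfortunately do) silently defeats TLS's protection against man-in-the-middle attacks.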
We’ve also added links to a couple of other research papers from UC Berkeley on some TLS related issues, which can be found on our new TLS Reports page.
There are several tools available for checking a server’s TLS protocol support, and those that we’re aware of can be found on our TLS Tools page. We’re always happy to add other useful tools if you let us know about them.
And in case we haven’t mentioned it before ;-), Let’s Encrypt is offering free digital certificates with a view to increasing overall deployment of TLS. So there are now even fewer excuses for not making the Internet more secure!
Finally, we’re also interested to hear from anyone who has implemented TLS and can share their experience as a case study, or who’d be willing to share other configuration and/or deployment information that others might find useful. If so, don’t hesitate to contact us.
The Deploy360 programme aims to encourage deployment of key Internet technologies by providing useful resources that can help IT professionals achieve this. We can’t do this alone though, and one of our goals for 2016 is to increase the number of contributions from the Internet community by helping you share your experiences.
We’re always interested in case studies, useful tools, new standards and statistics relating to our main topics of IPv6, DNSSEC, TLS and Securing BGP, so if you have anything you’re willing to contribute or even just point us to, then please get in touch.
However, we’re particularly interested in expanding our TLS and Securing BGP content. We’d really appreciate any of the following:
Tutorials, guides or HOWTOs explaining how to add TLS support to the likes of web servers, e-mail systems and other applications.
Reports or white papers that explain why using TLS is important.
Tools and/or developer libraries that allow TLS support to be added to applications.
Case studies of security issues related to BGP.
Tutorials, guides or HOWTOs explaining how to improve the security of the routing system, including the usage of RPKI and BGPSEC.
Reports or white papers that explain why securing BGP is important.
Tools that improve and support the security of the routing system.
More detailed information about the type of content we’re interested in can be found on our Roadmap for the Deploy360 Programme. Note that whilst we’d obviously be greatly appreciative if contributors do it for love, we’re willing to discuss honorariums with anyone interested in producing original and useful content as outlined in the roadmap.
So if you’d like to help make a contribution towards improving the security and resilience of the Internet, share your deployment experiences, or volunteer your services to Deploy360 then please don’t hesitate to get in touch.
Let’s Encrypt enters public beta today, which means anyone can now sign-up for free digital certificates. Let’s Encrypt is a new trusted Certificate Authority (CA) that aims to bring down the costs of configuring secure servers in order to increase overall deployment of TLS.
This initiative offers more than just free certificates though, as it also supports automation to make the business of obtaining and managing certificates significantly less complex, whilst encouraging more frequent (90 day) renewal to limit damage from key compromise and mis-issuance. This is achieved through the Automated Certificate Management Environment (ACME) which offers a standards-based REST API allowing the agent software to authenticate that a server controls a domain, request a certificate, and then install it on a server without human intervention.
Over 26,000 certificates were issued during the closed beta trial, so the system has already been extensively tested in the wild. The CA is supported by fourteen sponsoring organisations with an interest in promoting encryption as the norm.
For more information on TLS, please do look at our Start Here page to understand how you can get started transitioning your networks, devices and applications!
Since the announcement of the Let’s Encrypt project, we’ve been enthusiastic about the idea of having an open and transparent Certification Authority (CA) that could show other CAs that it’s possible to run a certificate business and keep it free for everyone!
So, after the Closed Beta programme was announced, we managed to get involved and secure our web servers with their certificates. First, we had a look at their ACME client, which is needed if you want one of their certificates, as the only way to request a domain certificate is through interaction with a back-end API.
Firstly, you need to know that a separate certificate is required for every domain and subdomain you want to secure. For example, you need separate certificates for go6.si, www.go6.si, mail.go6.si, etc…
When you install the ACME tool on your system, you have a few options. One is Apache mode, which configures your Apache web server automatically: it talks to the Let’s Encrypt system to obtain an “auth token”, publishes it on your website, and then tells the remote system to check that the file is there with the right content. If that check succeeds, the system transfers the signed certificates and installs them on your system. If you are not running Apache (and for now Apache mode only works on Debian systems), you have other options such as Standalone mode, which does a similar thing. However, since it sets up its own web server on port 80 in order to authenticate, you need to shut down your primary web server for a period of time, which is not entirely ideal.
We chose Webroot mode, which does the same authentication as the other methods, except that you need to tell the ACME client where your webroot directory is. Then when you request a certificate for a domain, it obtains the authentication file from the remote system and puts it in your web server root directory so the letsencrypt.org system can access it and verify that you really control the server for that domain.
Why is --debug there? Because their client no longer (quite) supports Python 2.6. This is the default on CentOS 6.x systems, and upgrading it creates a high probability that you’ll break many things on your server. So if you end up having to deal with Python 2.6, the letsencrypt client will do nothing other than try to update the dependencies on your system and exit with a message that Python 2.6 is not supported. If you really want to proceed anyway, you should add the --debug switch to the script. Well okay, so be it, debug it is 😉
After obtaining the initial certificate, I had to fix the content on the server https://go6.si/ as some pictures were referenced with http:// at the beginning of the URL, and the verification system began complaining that some content was not secured. Well, now it is, and the lock to the left of the URL glows green.
So, the first step went quite smoothly and was not that hard at all, but with the understanding that the certificate was only valid for 90 days, we wanted to make the renewal automatic. However, how can we make it automatic if the client requires dialogs for entering email during the initial procedure?
Well, we tried to run it again with the same parameters, but it started asking questions again. After some research we found that adding --renew to the script automagically skips all the questions, making it perfect for automation. So now we have our cron system ready to renew the certificate on the first day of every month at 00:55, fingers crossed 😉
During the process we encountered some small issues and questions that we’re still awaiting an answer to. One of the issues was that you can’t currently obtain a certificate for a server on an IPv6-only network as Let’s Encrypt seems to have some issues with the data centers where they have their system installed, although they say they’re working to rectify this.
Let’s Encrypt uses DNSSEC validating resolvers (unbound), which ensures quite a high level of protection during the authentication process for DNSSEC-signed zones. For non-signed zones, they query geographically dispersed DNS resolvers and compare the results – if the results are not consistent, an attack is assumed and validation fails. They also plan to rotate the resolving servers, so that nobody could mount a broad enough attack to poison all of their DNS servers.
What we’d also hope is that Let’s Encrypt will put a big sign on every corner of their webpage saying “Please sign your domain with DNSSEC and make the verification for certificate delivery more resilient to attacks, and by the way, your domain and services would be much more secure this way…” in order to encourage people to deploy DNSSEC. This is in the interests of everyone, and we already showed how deploying DNSSEC, signing domains and doing a KSK rollover is not difficult.
Our Conclusion: Let’s Encrypt is a great idea for running a CA that issues free certificates, and their certificates just work. The tools and client do need a bit of polishing and to be made a bit more automatic, so that every sysadmin will be able to install them with apt-get/yum, run them, and simply state which domain they wish to secure.
Our next test will be to obtain a Let’s Encrypt certificate for our mail server sitting on mx.go6lab.si. Theoretically it should work, but let’s see next week when that subdomain will be whitelisted on the Let’s Encrypt system.
Important notice: The Let’s Encrypt Open Beta program starts up on 3 December 2015 and then more domains will be whitelisted in a much shorter time. Don’t be afraid to try it out!
A paper recently published at the 22nd ACM Conference on Computer and Communications Security in Denver, USA raises concerns about how Diffie-Hellman key exchange is implemented in many protocols including HTTPS, SSH, IPsec, SMTPS and other protocols relying on TLS. Diffie-Hellman is an asymmetric cryptographic algorithm that is commonly used to exchange session keys when establishing a secure Internet connection, but the research discovered that many server implementations are either using obsolete 512-bit so-called ‘export grade’ cryptography or are utilising a fixed or limited range of prime numbers that effectively allows 768-bit and potentially 1024-bit grade encryption to be routinely cracked using pre-computation techniques.
Tests revealed that up to 15% of servers could potentially be affected by the Logjam attack technique, which forces export-grade parameters (a historical legacy) for Diffie-Hellman. If 1024-bit grade encryption were broken, this could potentially compromise up to 25% of HTTPS and SSH servers and 66% of IPsec VPN connections.
The authors point out that the cracking of 1024-bit grade encryption still requires substantive amounts of computing resources that are likely only available at a nation state level, but that moving to stronger key exchange methods should be a priority for the Internet community. They make the following three recommendations:
Turn off legacy export cipher suites which in any case are no longer supported by most modern browsers;
Deploy Elliptic-Curve Diffie-Hellman (ECDHE) which avoids all known feasible cryptanalytic attacks;
Generate 2048-bit or stronger Diffie-Hellman groups with “safe” primes.
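For readers less familiar with the algorithm, the core of Diffie-Hellman is that both parties derive the same shared secret from public values. The toy sketch below uses deliberately tiny parameters to show the arithmetic; real deployments need 2048-bit (or larger) groups with safe primes, or ECDHE, precisely because small or widely shared groups enable the pre-computation attacks described above.

```python
import secrets

# Toy Diffie-Hellman exchange. These parameters are illustrative only;
# a 5-bit prime is breakable instantly, which is the whole point of
# the paper's recommendation to use 2048-bit "safe" primes or ECDHE.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)  # Alice sends A to Bob
B = pow(g, b, p)  # Bob sends B to Alice

shared_alice = pow(B, a, p)  # = g^(a*b) mod p
shared_bob = pow(A, b, p)    # = g^(a*b) mod p
assert shared_alice == shared_bob
```

An eavesdropper sees only p, g, A and B; recovering the shared secret requires solving the discrete logarithm problem, which is only hard when the group is large and not pre-computed.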
A couple of weeks ago, those of us interested in Internet security formally received a new tool in our toolbox to improve the overall security of the TLS infrastructure that we use for pretty much all secure communication across the Internet. RFC 7469, “Public Key Pinning Extension for HTTP”, was published and is available at:
I say “formally” because in practice what is more commonly known as “HPKP” or “PKP” has been implemented for some time now by Google Chrome and Mozilla Firefox (see ticket) as the Internet-Draft worked its way through the WEBSEC Working Group within the IETF on its way to becoming a formal RFC.
At a basic level, “certificate pinning” is a fairly simple concept: once you connect to a site and receive its TLS certificate (i.e. you switch to using HTTPS and have the “lock” icon in your browser), “pin” that certificate inside your application for a specified period of time and ONLY accept connections that use that exact TLS certificate. This removes the possibility of an attacker succeeding with a Man-In-The-Middle (MITM) attack where the attacker can fool your browser into thinking you are connecting to the correct secure site. (If you want a deeper dive, OWASP has a long description of certificate pinning.)
Certificate pinning is a concept that has been around for a while but this new HPKP header makes it easier for sites to implement.
RFC 7469’s abstract doesn’t put it quite so simply as I did above, but you get the idea:
This document defines a new HTTP header that allows web host operators to instruct user agents to remember (“pin”) the hosts’ cryptographic identities over a period of time. During that time, user agents (UAs) will require that the host presents a certificate chain including at least one Subject Public Key Info structure whose fingerprint matches one of the pinned fingerprints for that host. By effectively reducing the number of trusted authorities who can authenticate the domain during the lifetime of the pin, pinning may reduce the incidence of man-in-the-middle attacks due to compromised Certification Authorities.
Now in practice it turns out that pinning the exact TLS certificate can cause operational problems in some website configurations and so the specification allows for the pinning of the key of the entity that issues the TLS certificates for your site, such as a certificate authority (CA). This allows you, for instance, to specify that a browser should only accept TLS certificates from a specific CA. This reduces the risk of a MITM attack where an attacker uses a TLS cert from a different CA who they were able to get to issue the bogus cert.
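To give a feel for what a browser actually receives, here is a sketch of parsing a Public-Key-Pins header value of the form defined in RFC 7469. The pin values below are illustrative placeholders, not real SPKI hashes, and a real user agent does considerably more (backup-pin checks, noting the pins against the validated chain, and so on).

```python
def parse_hpkp(header: str):
    """Split a Public-Key-Pins header value into its base64 SPKI pins
    and its other directives (max-age, includeSubDomains, report-uri)."""
    pins, directives = [], {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        if part.lower().startswith("pin-sha256="):
            pins.append(part.split("=", 1)[1].strip('"'))
        elif "=" in part:
            name, value = part.split("=", 1)
            directives[name.strip().lower()] = value.strip().strip('"')
        else:
            # valueless directive such as includeSubDomains
            directives[part.lower()] = True
    return pins, directives

header = 'pin-sha256="AAAA="; pin-sha256="BBBB="; max-age=5184000; includeSubDomains'
pins, directives = parse_hpkp(header)
```

Note that RFC 7469 requires at least two pins (one of which must be a backup not in the current chain), which is exactly the operational safety net for the certificate-change problem discussed below.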
This new RFC 7469 dives into all the details of HPKP, but if you want a higher level view, Joseph Bonneau gave a talk in March 2015 at IETF 92 in Dallas about HPKP and its companion, HTTP Strict Transport Security (HSTS – RFC 6797). The slides are available:
Certificate pinning is a great tool that we have now, although it does have a few challenges to be aware of:
Trust-On-First-Use (TOFU) – Certificate pinning relies on you connecting to the correct server on the first connection in order to get the TLS cert that you are now going to pin in the browser. As noted in the RFC 7469 Introduction, the issue is that if you were to connect to an attacker’s site first, you could in fact wind up pinning the false certificate and thereby be blocked from connecting to the correct site until the pin expires (what the spec calls “max-age”).
Blocking a site due to certificate changes – If you need to change a TLS cert, or if a TLS cert should be compromised or a private key is lost, you could potentially wind up in a situation where people using browsers that perform cert pinning would not be able to get to your site. This could happen if you pinned to an exact cert and had to change the cert, or if you pinned to a CA and then switched to a new CA, and in either case were unable to provide enough notice to manage the migration. The Security Considerations section of RFC 7469 discusses this issue.
On the TOFU issue, one way to deal with having to trust the site on first use is to “pre-load” the certificate to be pinned into the web browser or other application. This is being done by both Chrome and Firefox (see Mozilla’s list). The only concern here is that if you need to change the certificate in the pre-loaded list, you need to wait for an update to the browser to be available (and for users to install that update).
In various conversations I’ve suggested DNSSEC could help here because if the local system performed DNSSEC validation on a signed domain, the browser would then have a high level of assurance that it was connecting to the correct IP address where it could then receive a HPKP header with the correct TLS cert to be pinned. So DNSSEC could help bootstrap the pinning process and get around the TOFU concern.
Whenever I’ve raised this point, the response has typically been that if you have DNSSEC validation available you could simply use DANE to ensure you are using the correct TLS certificate. This is true, and I’d like to hope we’ll someday get there, but: 1) DANE requires the creation and usage of TLSA records, which is one more step people have to take; and 2) web browsers don’t have full support of DANE yet (although plugins are available). In the meantime, I still see DNSSEC as a powerful way to help with ensuring cert pinning works correctly.
Regardless, the key point is that RFC 7469 is now out there and certificate pinning via the HPKP header is now possible. It’s another tool we have and one that anyone interested in TLS security should definitely understand.
I’d love to hear your comments on this – please do feel free to leave them here. Tell me why cert pinning is great… tell me why it’s not. Tell me I’m wrong (or right!). Please do note that we moderate comments because of spam, but we approve basically every comment that isn’t abusive or spam.
All you do is click “Test my internet connection” to find out if your current connection supports IPv6 and DNSSEC. Enter any website address to test whether that site supports IPv6, DNSSEC and TLS. And enter any email address to find out if it supports IPv6, DNSSEC and DKIM/SPF/DMARC.
Here was the response I received for one of my email accounts:
You then have a link you can follow to get more details.
While there are obviously more detailed tests that can be performed, this site does a nice job giving a high level view of whether your connections are protected. I also like the fact that it uses “regular” language to explain why someone should care about these tests, rather than using the technical acronyms.
The site is great to have out there and we’ll be adding it to our list of DNSSEC tools and other places within Deploy360.
Congratulations to the various organizations behind Internet.nl on the launch! May this new site help many more people learn what they need to do to bring their Internet connections and sites up-to-date!
P.S. Please also read Olaf Kolkman’s post providing another perspective on the launch. And yes, both the Internet Society and our Internet Society Netherlands Chapter were involved with the launch. If you would like to get started with IPv6, DNSSEC or TLS, please visit our Start Here page to begin!
On this first day of IETF 92 in Dallas, our attention as the Deploy360 team is on securing the Internet’s routing infrastructure, improving the IPv6 protocol and securing the privacy and confidentiality of DNS queries.
At the same time over in the International Room, the 6MAN working group has a long agenda relating to various points discovered during the ongoing deployment of IPv6. Given that we keep seeing solid growth each month in IPv6 deployment measurements, it’s not surprising that we’d see documents brought forward identifying ways in which the IPv6 protocol needs to evolve. This is great to see and will only help the ongoing deployment.
Moving on to the 1300-1500 CDT session block, there are two working groups that are not ones we primarily follow, but are still related to the overall themes here on the site:
the TRANS working group is looking to standardize “Certificate Transparency” (CT), a mechanism to add a layer of checking to TLS certificates;
the DNSSD working group continues its work to standardize DNS-based service discovery beyond a simple single network. Our interest here is really that this kind of service discovery does need to be secured in some manner.
In the 1520-1650 CDT session block, a big focus for us will be the newer DPRIVE working group that is looking into mechanisms to make DNS queries more secure and confidential. As I wrote in my Rough Guide post, a concern is to make it harder for pervasive monitoring to occur and be able to track what a user is doing through DNS queries. DPRIVE has a full agenda, and knowing some of the personalities I expect the debate to be passionate.
Simultaneously, over in the Parisian Room, the Using TLS In Applications (UTA) working group will continue its work to make it easier for developers to add TLS to applications. The UTA agenda at IETF 92 shows a focus on one mechanism for email privacy.
After all of this, we’ll be heading to the Technical Plenary from 1710-1910 CDT where the technical topic is on “Smart Object Architecture” which sounds interesting. You can watch a live video stream of the Technical Plenary at http://www.ietf.org/live/
For some more background, please read these Rough Guide posts from Andrei, Phil, Karen and myself: