
Join the DNS Security team at the IETF 96 Hackathon this weekend…

IETF 96 Hackathon

If you will be in Berlin, Germany, this weekend and are interested in putting your coding or documentation skills to good use in helping make DNS more secure, please plan to join a group of about 20 of us at the IETF 96 Hackathon who will be working on DNS-related projects. The Hackathon is at the InterContinental Hotel from 9:00am – 9:00pm on Saturday, July 16, and from 9:00am – 6:00pm on Sunday, July 17. (You don’t have to be there the whole time – some people come and go.)

NOTE: you do NOT have to be attending IETF 96 to participate in the Hackathon. It is separate – and free – but you do need to register to attend. We welcome other developers in the Berlin area who want to join us during the weekend.

Details can be found on the IETF 96 Hackathon wiki page.

We have a group of 20+ people who will be working on a variety of DNS, DNSSEC, DPRIVE and DANE projects. There are some projects that could use some additional help (including non-coding help such as documentation and user testing). You are also welcome to bring other projects to the Hackathon.

You can see the list of projects and ideas on the IETF wiki hackathon page – although you need to scroll down to find the DNS section.

The GetDNS crew has a number of projects underway, including TLS interfaces, a Universal Acceptance review and RFC5011 testing. Rick Lamb plans to make BIND work with smartcards without patches. I plan to work on the code behind the weekly DNSSEC deployment maps. I’m sure others will bring some projects, too, by the time it begins.

A good group of “DNS people” has now done this for the past several IETF meetings. It’s been a great experience and moved a number of DNS-related projects forward. We would definitely welcome anyone else who wants to join us, even if just for part of the time. Bring your coding and documentation skills and help make DNS better!

P.S. And of course you can also join in with the many other excellent projects happening at the Hackathon, too, including some great work on TLS implementations.  We here at Deploy360 just happen to be focused on DNS…


RIPE 72 – Highlights from Day 4 & 5

The RIPE 72 meeting took place last week in Copenhagen, Denmark, and we’ve been highlighting the presentations and activities related to the Deploy360 technologies. In this post we’re summarising the last couple of days of activity before the meeting drew to a close.

Thursday was also mostly devoted to Working Groups, including the second part of the IPv6 Working Group. We have highlighted this before, but it’s worth mentioning again the report on IPv6 deployment in Latin America and the Caribbean from LACNIC. This comprehensive survey covers the current state of IPv4 exhaustion and case studies of IPv6 deployments, but perhaps more interestingly, the problems, challenges and regulatory barriers that were experienced. The report also provides insight into the rationale of those ISPs not currently considering the deployment of IPv6.

This was followed by a useful presentation on Going v6-only at home by Luuk Hendriks (University of Twente), which aimed to get an IPv6-only wireless LAN up and running in a home environment. The rationale was to find a solution that is generally deployable – one requiring no special hardware or hacks – and to discover which elements fall short. It transpired that problems were mostly related to applications insufficiently supporting IPv6, but the primary issue proved to be a lack of AAAA records, which effectively left many sites unreachable over IPv6. Whilst there are workarounds such as NAT64 and DNS64, there remains a question mark over how these can support DNSSEC, as we highlighted last week.
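
For readers unfamiliar with the workaround, DNS64 fabricates an AAAA record by embedding the IPv4 address into an IPv6 prefix (RFC 6052 defines the well-known prefix 64:ff9b::/96), and NAT64 then translates the traffic. A minimal sketch of the synthesis step, using only Python’s standard library:

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix; real deployments may instead use a
# network-specific prefix learned from the operator.
WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal: str) -> str:
    """Embed an IPv4 address in the /96 prefix, the way a DNS64
    resolver synthesizes an AAAA record when a name has none."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    v6 = ipaddress.IPv6Address(int(WELL_KNOWN_PREFIX.network_address) | int(v4))
    return str(v6)

print(synthesize_aaaa("192.0.2.1"))  # -> 64:ff9b::c000:201
```

This also shows why DNSSEC is awkward here: the synthesized address appears in no signed zone, so a validating client has no signature covering the answer it receives.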

On a similar theme was the presentation on How to Make Trouble for Yourself – You Build an IPv6-Only Network in 2016 from Roger Jørgensen (nLogic). This was about the challenges of building a next generation network serving the Norwegian county of Troms, which is situated in the Arctic and requires the deployment of IPv6 to ensure it can support future requirements for the next 25-30 years. Given the remoteness of the region, it’s essential that Customer Premises Equipment (CPE) is reconfigurable and manageable over IPv6. The first challenge, though, was a lack of DHCPv6 support, which meant static IPv6 addresses had to be configured via an uploadable config file; the CPEs can then be registered with the Junos Space management platform, which does fully support IPv6.

The IPv6 Working Group session was rounded off with the results of a survey of which open source code repositories were available via IPv6. As of the end of 2015 there were 25,522 ports located on 5,344 hosts in the FreeBSD ports collection. Of these, 3,925 hosts had only an IPv4 address whilst not one host had only an IPv6 address, leaving 1,419 hosts with both an IPv4 and an IPv6 address, although 359 of these did not have a resolvable DNS record. Just 10,308 ports could therefore be downloaded over IPv6; once dependencies are taken into account, 17,715 ports (69.4% of the total) cannot be built by an IPv6-only user.
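
The figures above are internally consistent, as a quick bit of arithmetic shows:

```python
total_ports = 25_522   # FreeBSD ports at the end of 2015
total_hosts = 5_344    # hosts serving those ports
v4_only     = 3_925    # hosts with only an IPv4 address
v6_only     = 0        # hosts with only an IPv6 address

dual_stack = total_hosts - v4_only - v6_only
print(dual_stack)  # -> 1419, the dual-stack host count reported

unbuildable = 17_715   # ports an IPv6-only user cannot build (with deps)
print(round(100 * unbuildable / total_ports, 1))  # -> 69.4 (% of all ports)
```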

It’s worth balancing this, though, with the presentation from Jen Linkova (Google) during the final plenary session about Google’s public DNS64 server, which is currently in beta. More information is available at https://devsite.googleplex.com/speed/public-dns/docs/dns64

Over in the DNS Working Group there were several presentations related to DNSSEC. First up was Duane Wessels (Verisign), who summarised the planned changes to the Zone Signing Key (ZSK) for the root zone, which will increase to 2,048 bits, to be followed by a rollover of the Key Signing Key (KSK). This was followed by a presentation on ZSK control from Paul Ebersman (Comcast), and the session was rounded off with a panel discussion on the issues to address in the DNS. The DNS Manifesto summarises much of this, including the issues related to DNSSEC and the lack of data encryption.

Last but not least, we need to highlight a couple of appeals made during the Routing Working Group related to Deploy360 technologies. The first was on RPKI validation from Alex Band (RIPE NCC), which summarised RPKI uptake in the RIPE region, as well as providing a review of existing RPKI validation software, including the RIPE NCC implementation. The RIPE NCC validator had been built somewhat as a proof of concept, but suffered from high CPU and memory usage as the data set grew, and did not leverage data from the Internet Routing Registry. They therefore wish to redesign it, and are asking for community feedback on the feature set.

The second appeal was made by Job Snijders (NTT) who asked network operators not to accept route announcements from external neighbours containing ‘bogon’ ASNs. NTT will adopt the policy of not accepting ASNs 23456, 64496-131071 and 4200000000-4294967295 from July 2016 as these numbers are variously reserved for private use, documentation/code examples or other reasons and should not be routed. They are normally seen due to misconfigurations or bugs, but can also be used maliciously and should be eliminated from the global routing table.
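
The filtering policy itself is vendor-specific router configuration, but the underlying check is simple. Here is a sketch of it in Python (the ranges are exactly those listed above; the function name is ours):

```python
# Bogon ASN ranges from the policy described above: AS_TRANS (23456),
# the reserved/documentation/private-use 16-bit space, and the
# private-use 32-bit space.
BOGON_ASN_RANGES = [
    (23_456, 23_456),
    (64_496, 131_071),
    (4_200_000_000, 4_294_967_295),
]

def is_bogon_asn(asn: int) -> bool:
    """Return True if this ASN should never appear in the global table."""
    return any(lo <= asn <= hi for lo, hi in BOGON_ASN_RANGES)

print(is_bogon_asn(64512))  # -> True: private-use 16-bit ASN
print(is_bogon_asn(15169))  # -> False: an ordinary allocated ASN
```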

So that’s it from us for RIPE 72 which proved to be a week that broke all attendance records. All the presentations from the meeting can be found on the RIPE 72 website.

The next RIPE meeting will be held on 24-28 October 2016 in Madrid, Spain. We look forward to seeing you then!

 


Apple Will Require IPv6 Support For All iOS 9 Apps

“Because IPv6 support is so critical to ensuring your applications work across the world for every customer, we are making it an AppStore submission requirement, starting with iOS 9.” With those words, Sebastien Marineau, Apple’s VP of Core OS, gave a huge boost to IPv6 developer support in Apple’s WWDC Platform State Of The Union (SOTU) address yesterday.

You can watch the Platform SOTU presentation yourself (although you may need the Safari browser to do so). The IPv6 segment begins at 34:16 and Marineau’s statement about the AppStore requirement can be heard at 37:16.

UPDATE: The video of the longer WWDC session about IPv6 is available – and we’ve also captured some of the most important screenshots.

Here, though, is the quick summary.

Why IPv6?

Sebastien Marineau began by talking about IPv6 and why it is important.

In particular he noted that carriers in several regions of the world are now deploying IPv6-only networks and emphasized the importance of making your application work well for everyone, everywhere.  He reinforced how critical it is to support IPv6:

“If your application doesn’t work properly with IPv6, it will simply not function on those networks, those carriers and for those customers.”

He also explained that Apple has supported IPv6 for over ten years now, since early versions of Mac OS X, and from iOS 4 onward.

3 Steps For Developers

He went on to explain three steps all developers can take to make sure their applications work over IPv6 networks.

Those steps are:

  • Use the networking frameworks (for example, “NSURLSession”)
  • Avoid use of IPv4-specific APIs
  • Avoid hard-coded IP addresses

Essentially, if app developers are using the higher level APIs and frameworks and aren’t hacking around at the IP layer, their apps should probably “just work” on top of either IPv4 or IPv6.
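
To make the third point concrete, here’s the failure mode in miniature – shown in Python for brevity rather than the Swift or Objective-C an iOS app would use, and with a purely illustrative hostname. A hard-coded IP literal bypasses DNS entirely, so it can never be translated by the DNS64/NAT64 machinery of an IPv6-only carrier network, while a hostname resolved at connect time can:

```python
import ipaddress

def is_hardcoded_ip(endpoint: str) -> bool:
    """Return True if 'endpoint' is an IP literal rather than a hostname.

    Literals bypass DNS, so an IPv4 literal can never benefit from the
    DNS64/NAT64 translation used on IPv6-only carrier networks."""
    try:
        ipaddress.ip_address(endpoint)
        return True
    except ValueError:
        return False

print(is_hardcoded_ip("203.0.113.10"))     # -> True: will break IPv6-only
print(is_hardcoded_ip("api.example.com"))  # -> False: resolver may return AAAA
```

A lint pass like this over an app’s configured endpoints is a cheap way to catch hard-coded addresses before testing on an IPv6-only network does.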

This is an important point – most iOS developers probably do not need to do anything on the development side. Assuming they have followed best practices in coding and are using the iOS networking frameworks, they should be all set.  Some developers, though, may be using lower level APIs that may involve direct usage of IPv4 addresses. Some developers may also be using the user’s IPv4 address as an identifier or for logging or configuration purposes.

But again, most iOS developers probably don’t need to change their code to support IPv6.

Testing Your App Over IPv6

However, Marineau addressed the question of “how do you test your app over IPv6?“, particularly when many app developers may not have access to a native IPv6 Internet connection. He indicated that an upcoming release of Mac OS X will include a new feature to help with this.

What I understood Marineau to say was that you will be able to set up a “personal WiFi hotspot” on your Mac and check an “IPv6-only” box.  Your iPhone/iPad with your app could then connect to that specific WiFi network to work in an IPv6-only mode.  The Mac would then provide the gateway to the legacy IPv4 Internet so that the app on the IPv6-only network could connect out to services on IPv4 servers.

THIS IS HUGE! One of the struggles many application developers have had is to easily create an “IPv6-only” network in which to test systems.  Even those of us who are IPv6 advocates/enthusiasts have struggled with making this work well.  It typically involves bringing up a second access point (which you are effectively doing with this new configuration) and then turning off all IPv4 services on that access point, which some access points make difficult to do.

Whenever this feature rolls out in Mac OS X, it will greatly help all of us who are working on apps and systems and want to test them in an IPv6-only environment.

An Important Step

Now, to be clear, most iOS app developers probably won’t have to do all that much to support IPv6.  If they are already using the higher level APIs and networking frameworks they should be all set.  The exact mechanisms of IP address handling are not a concern of theirs.  However, some app developers will have to make some changes, particularly if they are directly using IPv4 addresses as any kind of identifier or in logging.

More importantly, the requirement for AppStore submission will require app developers to test their applications with IPv6 networks, and that alone will suddenly cause the millions of iOS app developers out there to have to learn at least something about IPv6 (if nothing else, the fact that it exists).

Most significantly, though, this step by Apple means that all the iOS apps that run on iOS 9 will work well over the IPv6-only networks that are starting to be deployed.  Even in dual-stack (IPv6/IPv4) networks, this should mean that iOS 9 apps will work better in those environments when, for instance, IPv6 may be faster. (More needs to be understood here about the specifics of the IPv6 support.)

And… this will also take away the argument of some network operators who are still not moving ahead with IPv6: “why should we deploy IPv6 when apps don’t support it?”

Apple’s answer is that, as of iOS 9, all iPhone/iPad apps will support IPv6!

Kudos to Apple for taking these steps, creating this new AppStore submission requirement, and also providing what sounds like a new and easy way to create IPv6-only networks!

We’re looking forward to iOS users being able to use ALL their favorite applications on an IPv6-only network!


UPDATE #2: By way of a tweet I have learned that there is a session at WWDC on Friday, June 12, 2015, about “Your App and Next Generation Networks” that will apparently have more info about IPv6 support.

UPDATE #3 – 19 June 2015: The video of this “Your App and Next Generation Networks” session at WWDC is now available – in the post I link to, I include a number of screenshots about the session.


P.S. If you want to get started with IPv6, please visit our Start Here page to find resources tailored for your role or type of organization.  The time to make the move to IPv6 is TODAY!

Also, hat tip to Adam Iredale on Twitter, who first brought this new requirement to my attention, and to Borja Reinares who provided some more information.


TLS-O-MATIC Now Available To Test TLS In Applications

Do you want to test how well your application supports TLS over HTTP?  If so, you can now head over to:

http://www.tls-o-matic.com/

and run your application through a whole series of self-tests.

As he explains in a blog post announcing TLS-O-MATIC, Olle Johansson launched the site as a public service with 15 tests for the HTTPS protocol.

I’ve known Olle for many years through his work seeking to add security to various voice-over-IP (VoIP) services and protocols – and in recent years he has been focused on getting more encryption deployed.  He typically uses the “#MoreCrypto” hashtag on Twitter and other services – and we wrote about the #MoreCrypto 2.0 slide deck he released back in December 2014.

Some of the tests included in TLS-O-MATIC are:

  • Bad hostname
  • One cert, multiple names
  • Wildcard certificate
  • Not yet valid certificate
  • Expired certificate
  • Unknown CA
  • Client certificate
  • Weak certificate
  • Intermediate certificate
  • Chain of trust
  • A huge certificate
  • A strong key
  • Wrong usage bits
  • Server Name Indication
  • International DNS

We applaud Olle for making this test site available and hope it will help application developers test whether their applications fully support TLS.  If you are an application developer, please do visit the site and give Olle any feedback you may have as you use it.

Sites like this can help make encryption available everywhere and bring about a stronger and more secure Internet!


Heartbleed, LibreSSL, and the Importance of Implementation Diversity

Unless you’ve been living in a cave for the past three months, you have heard of the Heartbleed bug in OpenSSL. Since the disclosure of Heartbleed there have been many positive actions taken by the security community. This post highlights those actions and how they will lead to a more secure Internet overall.

One of the most publicized actions was the forking of OpenSSL by the OpenBSD community. According to OpenBSD’s Bob Beck, the real impetus for the fork was the discovery of OpenSSL’s proprietary memory management functions – a system which the OpenBSD team euphemistically labelled “exploit mitigation technique countermeasures.” However, to observers it’s hard to imagine that Heartbleed wasn’t an important impetus. Regardless of the reason, the OpenBSD team has been hard at work on its own fork of OpenSSL for approximately six weeks now. It’s called LibreSSL, and its aim is to maintain backward compatibility with OpenSSL’s API for POSIX-compliant operating systems. Most of the changes so far have removed support for older platforms and made the code more accessible. Right now the team is focusing on OpenBSD compatibility, but the long-term goal is to make LibreSSL run on all *NIXes. For a great explanation of LibreSSL, and where it’s headed, check out Bob Beck’s recent presentation at BSDCan.

OpenSSL also received an injection of support from the Linux Foundation. In a wonderful effort to ensure security in OpenSSL, OpenSSH and NTP, the Linux Foundation has agreed to fund a security audit and two full-time developers for OpenSSL. This is great news for all three projects, but especially for OpenSSL. This news came right before OpenSSL issued a security advisory for seven vulnerabilities on June 5th.

In addition to OpenSSL and LibreSSL, we shouldn’t forget GnuTLS, the SSL/TLS library licensed under the LGPLv2. GnuTLS has also seen a recent flurry of activity, with three security vulnerabilities found in 2014.

With all the activity in this space, including all the found vulnerabilities, should we be worried? With three competing libraries for SSL/TLS, does that mean we’re less secure? The answer to both is a resounding ‘No’! When vulnerabilities are found, it means people are looking at the code. Remember Linus’s Law: “Given enough eyeballs, all bugs are shallow.” If we aren’t finding vulnerabilities, it probably just means no one is looking at the code.

Also, diversity is good for security. In their 2011 study, “OS Diversity for Intrusion Tolerance: Myth or Reality?”, researchers Miguel Garcia, Alysson Bessani, Ilir Gashi, Nuno Neves and Rafael Obelheiro examined how diversity affected intrusion tolerance in networked operating systems. Using vulnerability data from the NIST National Vulnerability Database from 1994 to 2010, they looked at how many exploits affected more than one operating system across 11 operating systems. Their results should be obvious to security professionals, but bear repeating for their empirical backing: diversity of code base decreases the probability of common vulnerabilities. In other words, having distinct code bases makes us more secure, in much the same way having distinct DNA increases a species’ disease resistance.

In their own words, “We analyzed more than 15 years of vulnerability reports from the NIST Vulnerability Database totaling 2120 vulnerabilities of eleven operating system distributions. The results suggests substantial security gains by using diverse operating systems for intrusion tolerance.” They also discuss how their conclusions regarding operating systems can be generalized to other interchangeable software components, such as SSL/TLS libraries.

To sum up, disclosing vulnerabilities and implementation diversity are good for security, and good for the Internet. To find out more about TLS for applications, visit our resource page on TLS for Apps. Also check out the IETF’s Using TLS in Applications (UTA) working group to find out how you can get involved in this important conversation.


DNSSEC-name-and-shame Spotlights Top Web Sites Without DNSSEC

Which of the top Alexa-ranked sites support DNSSEC? How can you quickly find out if a web site supports DNSSEC?  Last week we learned of a fun new site, which came out of a recent hackathon at TheNextWeb 2014 conference in Amsterdam, that aims to answer these questions.  Called “DNSSEC name and shame!”, the site can be found at the simple URL of:

http://dnssec-name-and-shame.com/

At the top you can just enter any domain name and the site will check whether that domain is signed with DNSSEC.  But what is perhaps more interesting is to go a bit further down the page and look at the list of the Alexa Top 25 sites and the list of the event sponsors and “known good” examples.  You can click on any link and it will tell you the result.

I won’t spoil the surprise of what you’ll find when you click those links… but suffice it to say that many of the sites need to read our information for content providers / website owners about how to sign their domains with DNSSEC!  🙂

This DNSSEC-name-and-shame site is a cool example of the type of site / service that can be easily created using some of the new APIs available for DNS and DNSSEC.  Several of the other hackathon projects were definitely cool and we’ll be spotlighting some of them in the weeks ahead.

Congrats to the developers of the site, Joel Purra and Tom Cuddy, too, for winning PayPal’s TNW Hack Battle prize.  Great to see PayPal recognizing this work… and of course paypal.com has been signed with DNSSEC for quite some time now.

Do check the site out… test out domains that you work with… and if they are not signed, why not start today on getting them signed and making the Internet more secure?

P.S. We also enjoyed that Anne-Marie Eklund Löwinder of .SE lent her shaking-fist image to the site.  She’s one of the early pioneers in the world of DNSSEC and it’s fun to see her here!


Wired: It's Time To Encrypt The Entire Internet

Is it time to “dump the plain text Internet” and encrypt everything everywhere? That’s the main thrust of an article by Klint Finley in Wired last week: “It’s Time to Encrypt the Entire Internet“. As he writes:

The Heartbleed bug crushed our faith in the secure web, but a world without the encryption software that Heartbleed exploited would be even worse. In fact, it’s time for the web to take a good hard look at a new idea: encryption everywhere.

Most major websites use either the SSL or TLS protocol to protect your password or credit card information as it travels between your browser and their servers. Whenever you see that a site is using HTTPS, as opposed to HTTP, you know that SSL/TLS is being used. But only a few sites — like Facebook and Gmail — actually use HTTPS to protect all of their traffic as opposed to just passwords and payment details.

He goes on to discuss viewpoints from Google’s Matt Cutts and a number of other security professionals. As he notes at the end, there are costs, both in terms of the financial cost of TLS/SSL certificates and in terms of performance, but the greater security benefits are ones that we all need.

We definitely agree with the need to encrypt connections across the Internet. That’s why we’ve opened up the “TLS For Applications” area here on Deploy360 and why we are seeking to find or write a number of documents to help developers more quickly integrate TLS into their apps.
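
For developers ready to act on this, the encouraging news is that modern TLS libraries make the safe defaults easy to reach. As one illustration (not a prescription), Python’s standard ssl module gives a client certificate verification and hostname checking out of the box:

```python
import ssl

# create_default_context() enables certificate verification and hostname
# checking against the system trust store: the right defaults for clients.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # -> True
print(ctx.check_hostname)                    # -> True

# A client would then wrap its TCP socket before speaking HTTP:
#   with socket.create_connection((host, 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname=host) as tls:
#           ...
```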

What do you think? Should connections across the Internet be encrypted?


What Can App Developers Learn From Heartbleed?

What lessons can application developers take from the Heartbleed bug?  How can we use this experience to make the Internet more secure?  Unless you have been offline in a cave for the past few days, odds are that you’ve seen the many stories about the Heartbleed bug (and many more stories today) and, hopefully, have taken some action to update any sites you have that use the OpenSSL library.  If you haven’t, then stop reading this post and go update your systems! (You can test whether your sites are vulnerable using one of the Heartbleed test tools.) While you are at it, it would be a good time to change your passwords at affected services (after they have fixed their servers). There is an enormous list of the Alexa top 10000 out there, but sites like Mashable have summarized the major sites affected.  (And periodically changing your password is just a general “best practice”, so even if a site was not affected, why not spend a few minutes to make the change?)

Client Applications Need Updating, Too

For application developers, though, it is also important to update any client applications you may have that use the OpenSSL libraries to implement TLS/SSL connections.  While most of the attention has been focused on how attackers can gain access to information stored on servers, it is also true that a malicious site could harvest random blocks of memory from clients visiting that site.  There is even demonstration code that lets you test this with your clients.  Now, granted, for this attack to work an attacker would need to set up a malicious site and get you to visit the site through, for instance, a phishing email or link shared through social media.  The attacker could then send malformed heartbeat messages to your vulnerable client in an attempt to read random blocks of memory… which then may or may not have any useful information in them.

Again, the path for an attacker to actually exploit this would be a bit complex, but you definitely should test any client applications you have that rely on any OpenSSL libraries.

With all that said, since we have started this “TLS For Applications” topic here on Deploy360, what are some of the important lessons we can take away from this experience?  Here are a few I see coming out of this – I’d love to hear the lessons you take from all of this in the comments.

Security Testing Is Critical

It turns out that this was an incredibly trivial coding error. As Sean Cassidy points out in his excellent Diagnosis of the OpenSSL Heartbleed Bug, the issue boils down to this:

What if the requester didn’t actually supply payload bytes, like she said she did? What if pl really is only one byte? Then the read from memcpy is going to read whatever memory was near the SSLv3 record and within the same process.

There was no checking on the input and this allowed reading from other parts of the computer’s memory.  As Cassidy later writes about the fix:

This does two things: the first check stops zero-length heartbeats. The second check checks to make sure that the actual record length is sufficiently long. That’s it.

Today’s XKCD comic explains all of this even more simply.

This is the kind of trivial mistake that probably every developer has made at some point of his or her life.  I am sure that if I were to go back through many lines of code in my past I’d find cases where I didn’t do the appropriate input or boundary testing.  It highlights the importance of doing security testing – and of setting up security unit tests that are just done as part of the ongoing testing of the application. It also highlights the need for ongoing security audits, and for reviewers of code submissions to also be testing for security weaknesses.  But again, this is a common type of error that probably every developer has made.  You need testing to catch things like this.
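
The pattern Cassidy describes is easy to restate in any language. Here is a toy sketch (not OpenSSL’s actual code, which is C) showing the two checks from the fix:

```python
def read_heartbeat(message: bytes, claimed_len: int) -> bytes:
    """Echo back 'claimed_len' bytes of payload, as a heartbeat
    response does, but validate the claimed length first."""
    if claimed_len == 0:
        raise ValueError("zero-length heartbeat")          # first check
    if claimed_len > len(message):
        raise ValueError("claimed length exceeds record")  # second check
    return message[:claimed_len]

print(read_heartbeat(b"bird", 4))  # -> b'bird'
# Without the second check, a request claiming 64KB of payload would be
# answered with whatever happened to sit in adjacent memory.
```

A unit test asserting that both malformed cases raise an error is exactly the kind of security test that would have caught the original bug.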

In this instance it just happens that the mistake was in a piece of software that has now become critical for much of the Internet!  Which leads to a second lesson…

Having A Rapid Upgrade Path/Plan Is Important

Since people learned about this bug earlier this week, there has been a massive push to upgrade software all across the Internet. Which raises the question: how easy is it for your users to upgrade their software in a high-priority situation such as this?

In many cases, it may be quite easy for users to install an update either from some kind of updated package or a download from an online application store.  In other cases, it may be extremely difficult to get updates out there.  In the midst of all this I read somewhere that many “home routers” may be vulnerable to this bug.  Given that these are often something people buy at their local electronics store, plug in, and pretty much forget… the odds of them getting updated any time soon are pretty slim.

Do you have a mechanism whereby people can rapidly deploy critical security fixes?

UPDATE: A ZDNet post notes that both Cisco and Juniper have issued update statements for some of their networking products. I expect other major vendors to follow soon.

Marketing Is Important To Getting Fixes Deployed

Finally, Patrick McKenzie had a great post out titled “What Heartbleed Can Teach The OSS Community About Marketing” that nicely hits on key elements of why we’re seeing so much attention to this – and why we are seeing fixes deployed.  He mentions the value of:

  1. Using a memorable name (“Heartbleed” vs “CVE-2014-0160”)
  2. Clear writing
  3. A dedicated web presence with an easy URL to share
  4. A visual identity that can be widely re-used

His article is well worth reading for more details.  His conclusion includes this paragraph that hit home for me (my emphasis added):

Given the importance of this, we owe the world as responsible professionals to not just produce the engineering artifacts which will correct the problem, but to advocate for their immediate adoption successfully.  If we get an A for Good Effort but do not actually achieve adoption because we stick to our usual “Put up an obtuse notice on a server in the middle of nowhere” game plan, the adversaries win.  The engineering reality of their compromises cannot be thwarted by effort or the feeling of self-righteousness we get by not getting our hands dirty with marketing, it can only be thwarted by successfully patched systems.

Exactly!

We need to make it easy for people to deploy our technologies – and our updates to those technologies.  (Sound like a familiar theme?)

What other lessons have you taken from this Heartbleed bug?  What else should application developers be thinking about to make TLS/SSL usage more secure?

Please do leave a comment here or on social media sites where this article is posted.  (And if you’re interested in helping us get more documentation out to help app developers with TLS/SSL, how about checking out our content roadmap for the TLS area?  What other items should we include?  Do you know of existing documents we should consider pointing to?  Interested in writing some documents?  Please do let us know.)

P.S. There’s now a post out about the process the Codenomicon team went through in disclosing the bug that is worth reading.


Introducing A New Deploy360 Topic: TLS for Applications

How can we help make it easier for developers to learn how to add TLS (SSL) support to their applications?   If you’ve been following our work here at Deploy360 for a while, you know that part of our attention is focused on accelerating the deployment of DNSSEC and of technologies that help secure BGP and Internet routing.

With DNSSEC, a great deal of our focus has been on the enormous potential of the DANE protocol to make Internet connections using Transport Layer Security (TLS) more secure.  You probably already use TLS every day with your web browser… although you may know it better by its older name of “Secure Sockets Layer (SSL)”.  Any time you go to a website with “https” at the beginning of the URL, or see a “lock” icon in many browsers, you are using TLS.   Any app developer using TLS is a great candidate to be using DANE.

But how do we get more developers using TLS to encrypt their connections and secure the data sent over those connections?

Around the time we were thinking about this, a new working group called “Using TLS in Applications (UTA)” was launched within the IETF.  This working group is chartered to create a set of “best practices” guides that help application developers implement TLS in the best way possible and defend against attacks such as those outlined in draft-sheffer-uta-tls-attacks.  You can find out more about the UTA Working Group, including how to join the public mailing list, via the IETF datatracker.
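As a small illustration of the kind of guidance these best-practice documents cover, here is a hedged sketch using today’s Python standard library `ssl` module (generic illustration only, not code from any UTA document) of hardening a client-side TLS context:

```python
import ssl

# Sketch of common TLS client hardening in an application.
# Generic Python 'ssl' usage for illustration -- not taken from
# any UTA working group document.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS versions

# The defaults already enable the two checks applications most often
# get wrong: certificate validation and hostname verification.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

A context configured this way can then be passed to `ssl.SSLContext.wrap_socket()` or to an HTTPS client when making connections.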

It seemed to us that these documents being created within the UTA group were ones that should be shared widely.  I put some ideas forward on the UTA mailing list and received positive responses – and so we’re launching this new section of Deploy360 to help get that information out.  As the UTA working group publishes documents we’ll try to do what we can to help more developers and network operators learn about those documents.

To that end, I’ll also note that the UTA working group will be meeting this coming Friday, March 7, from 0900-1130 UTC at the IETF 89 meeting in London.  I wrote about this in my article yesterday about the DNS-related activities happening at IETF 89.  You can join the session remotely to listen in, so if this is of interest to you please do join.

Now, our “TLS for Applications” section here on Deploy360 will not be ONLY about the documents coming out of the UTA working group. We’ll also be curating the best TLS-related documents and tutorials we can find out on the Internet.  We’ve put up a content roadmap identifying the types of documents we intend to add to the site.

We’d love to hear your feedback about this new section of Deploy360. Do you see this as something that will be helpful to you?

How You Can Help

We need your help!  To provide the best possible resources to help application developers expand their use of TLS, we need to hear from you.  A few specific requests:

1. Read through our pages and content roadmap – Please take a look at our “TLS for Applications” page to understand what we are trying to do, and also please take a look at our content roadmap for TLS.  Are the current resources listed helpful?  Is the way we have structured the information helpful?  Will the resources we list on our roadmap help you make your applications more secure?

2. Send us suggestions – If you know of a tutorial, video, case study, site or other resource we should consider adding to the site, please let us know. We have a list of many resources that we are considering, but we are always looking for more.

3. Volunteer – If you are very interested in this topic and would like to actively help us on an ongoing basis, please fill out our volunteer form and we’ll get you connected to what we are doing.

4. Help us spread the word – As we publish resources and blog posts relating to adding TLS to applications, please help us spread those links through social networks so that more people can learn about the topic.

Thank you!  Working together we can make the Internet more secure!

Categories
Deploy360 IETF IPv6

New IETF “openv6” Mailing List For IPv6 Application Developers

Do we need an “open interface and a programmable platform to support various IPv6 applications”? That is the question posed by a new “openv6” IETF discussion mailing list announced yesterday. The openv6 list, which is open to anyone to subscribe to, has this description:

This list is to discuss a open interface and a programmable platform to support various IPv6 applications, which may include IPv6 transition technologies, SAVI (Source Address Validation and Traceback), security, data center and etc. This discussion will focus on the problem space, use case and possible protocol extensions. The following questions are listed to be solved via this discussion:

(1) What are the problems and use cases existing in various IPv6 applications, e.g., multiple IPv6 transition technologies co-exist?

(2) How to enable the applications to program the equipment to tunnel IPv6 traffic across an IPv4 data plane?

(3) How this work can be done through a general interface, e.g., to incorporate the transition policies, simplifying the different stages through the transition and guaranteeing that current decisions do not imply a complicated legacy in the future?

(4) How to make the end-to-end configuration of devices: concentrator/CGN, CPE and the provisioning system?

(5) How to extend the existing IETF protocols, e.g., netconf, to support this open interface?
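To make question (2) above concrete, one common way to carry IPv6 across an IPv4 data plane is 6in4 encapsulation (RFC 4213), where an IPv6 packet becomes the payload of an IPv4 packet with protocol number 41. The sketch below builds such an outer header for illustration; the addresses are documentation prefixes, and the IPv4 checksum is left at zero rather than computed:

```python
import socket
import struct

# Minimal sketch of 6in4 encapsulation (RFC 4213): an IPv6 packet is
# carried as the payload of an IPv4 packet whose protocol field is 41.
# Addresses below are documentation prefixes, chosen for illustration.
def encapsulate_6in4(ipv6_packet: bytes, src_v4: str, dst_v4: str) -> bytes:
    ver_ihl = (4 << 4) | 5                 # IPv4, 20-byte header
    total_len = 20 + len(ipv6_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        ver_ihl, 0, total_len,
        0, 0,                              # identification, flags/fragment offset
        64, 41, 0,                         # TTL, protocol 41 = IPv6, checksum left 0 in this sketch
        socket.inet_aton(src_v4),
        socket.inet_aton(dst_v4),
    )
    return header + ipv6_packet

# A 40-byte dummy IPv6 header (version nibble 6) as the inner packet.
outer = encapsulate_6in4(b"\x60" + b"\x00" * 39, "192.0.2.1", "198.51.100.1")
print(outer[9])  # 41 -> inner payload is IPv6
```

In practice the encapsulation and checksumming are done by the kernel or router, not by applications; the openv6 discussion is about how applications might program such equipment through an open interface.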

The list is not for forming a new IETF working group (WG). It is at this point purely for discussing this topic. The mailing list archive seems to be empty at the moment (or the link is not correct), but given that the list was just announced yesterday the list owners may be waiting for people to join the list before kicking off discussion. In searching IETF archives I found this recent draft from October 2013, “Problem Statement for Openv6 Scheme,” that may be part of the discussion.  I expect we should see more information soon as the discussion begins.

Anyway, if you are an application developer looking at how to make your applications work over IPv6, this may be an interesting mailing list to join, if for no other reason than to monitor what work is happening.

I’m looking forward to seeing the discussion begin!

Categories
Deploy360 IPv6

Geoff Huston Unravels An IPv6 Bug Involving Apple Mail And Microsoft Exchange

Geoff Huston at APNIC Labs published today a fascinating and very well-documented exploration of why he was having occasional, seemingly random problems sending email from his Apple Mail program via APNIC’s Microsoft Exchange server.

It’s such a good read that I’ll not spoil the story, other than to say it is a good example of the kinds of things application developers need to be thinking about with regard to how they work with IPv6 addresses!
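One classic pitfall in this area (illustrative only, and not necessarily the specific bug in Geoff’s post) is comparing IPv6 addresses as text: the same address can be written many ways, so string comparison silently fails. Python’s standard `ipaddress` module normalizes addresses before comparing:

```python
import ipaddress

# The same IPv6 address written two different but equivalent ways.
a = ipaddress.ip_address("2001:DB8:0:0:0:0:0:1")
b = ipaddress.ip_address("2001:db8::1")

# Comparing the textual forms fails; comparing parsed addresses works.
print("2001:DB8:0:0:0:0:0:1" == "2001:db8::1")  # False: the strings differ
print(a == b)                                    # True: same address
```

Parsing to a canonical form before any comparison or lookup is the safe habit for applications handling IPv6 addresses.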

Thanks to Geoff and his colleagues for publishing such a thorough write-up from which we all can learn.
