Categories
Building Trust Internet of Things (IoT) Security

Internet of Things Devices as a DDoS Vector

As adoption of Internet of Things devices increases, so does the number of insecure IoT devices on the network. These devices represent an ever-increasing pool of computing and communications capacity open to misuse. They can be hijacked to spread malware, recruited to form botnets to attack other Internet users, and even used to attack critical national infrastructure, or the structural functions of the Internet itself (we give several examples from recent headlines in the Reference Section, below).

The problem this poses is what to do about IoT as a source of risk. This blog post includes reflections on events that came to light in recent weeks, sets out some thoughts about technical mitigations, and sketches out the boundaries of what we think can be done technically. Beyond those boundaries lie the realms of policy measures, which – while relevant to the big picture – are not the topic of this post.

Why are we exploring this issue now? Partly because of our current campaign to improve trust in consumer IoT devices.

And partly, also, because of recent reports that, as a step towards mitigating this risk, connected devices will be subjected to active probing, to detect whether or not they can still be accessed using default user IDs and passwords. Here is one such report, from the IEEE.

We believe active probing raises practical, privacy, and security risks that ought either to rule it out as an approach, or at least ensure that other, less risky options are always considered first.

Remote devices: control, ownership, and responsibility

Much of the power of a distributed denial-of-service (DDoS) attack comes from the ability to recruit devices all over the planet, regardless of the physical location of the attacker or, indeed, the target. One countermeasure is to make it harder for a malicious actor to gain remote control of an IoT device.

Gaining control of a device involves (or should involve) authenticating to it as an authorized user. IoT devices that either have no access control, or have access control based on a default password, have little or no protection against such a take-over. It is therefore often suggested that an early step towards securing connected devices is to ensure that users replace the default password with one that is hard to guess.

This step, though, is not without obstacles. Users are notoriously bad at choosing and changing passwords, frequently choosing trivial ones if they bother to set passwords at all, and of course, sometimes not even realizing that they should set a password in the first place.

Consumers’ behavior might also be based on an assumption that their devices are safe. They might assume, by default, that their Internet service provider (ISP) or connected-home solution provider is not supplying a device that puts them at risk through poor security – just as they might expect the device not to catch fire in normal use.

Multiple stakeholders, expectations and requirements

As you can see, we already have a problem whose solution may require action by more than one stakeholder:

  • Device manufacturers need to design their products to require some form of access control, and to prompt the user to enable it
  • Users need the awareness and discipline to use the access control mechanism and to remember and replace passwords when needed
  • Users may, under some circumstances, assume that “someone else” is taking care of keeping their devices safe and secure

And all this has to be done in ways that reconcile the triangle of requirements, from which, traditionally, you can “pick any two.” The resulting control must be:

  • Secure (otherwise it has missed the point)
  • Usable (if it is too hard to understand or inconvenient, users will ignore it)
  • Manageable (it must be possible to repair, replace or update the control without compromising the usability of the device, or the security and privacy of the user)

In the IoT context, two further issues must be addressed.

First, whatever the solution, it must be affordable. Otherwise, “secure but expensive” products will tend to lose market share in favor of “insecure but cheap” competitors, and the risk represented by insecure IoT devices will continue to grow.

Second, the process as laid out above has a flaw, namely, “that’s not where we’re starting from.” Connected devices with poor security are already widely available and deployed in vast numbers. In those cases, it’s too late for manufacturers to design security into the product, so we need to look for alternative means to mitigate the risk of IoT devices as a threat vector.

Choosing the appropriate intervention

If the device has simply been designed without appropriate security mechanisms, and without the means to add them once deployed, and if it presents a significant risk to people’s security or well-being, there’s little to be done other than try to withdraw it from the market. (For instance, in 2017 German authorities issued a ban against a connected doll, on grounds that it was a de facto surveillance device, and could also put children at risk.)

If already-deployed devices can be secured by user action, the question becomes one of deciding how this can best be achieved. We think there will be a range of options, some more appropriate to different kinds of connected device than others.

General public-awareness campaigns, aimed at informing consumers about the importance of good password practice, may be ineffective or too poorly targeted to be relevant; but how do we increase the accuracy of such messages without intruding on users’ privacy?

Is it acceptable to target the buyers of specific kinds of device, or specific brands? Should ISPs have the means (or a duty) to scan their networks for those devices and alert their subscribers to the potential risks? Should they even test devices on their networks to see if the default password has been changed? As a last resort, and given the potential threat IoT presents to critical national infrastructure, do even governments have a responsibility in such cases, and is it desirable for them to intervene, either directly or through the ISP?

As the IEEE article notes, in comments from the Information Technology Center of the University of Tokyo, a large-scale initiative like this increases the number of stakeholders who must play a role. It will probably involve the government, an approved technical institute, and ISPs. It may mean governments have to reconcile conflicts between the actions they wish to take, and laws relating to personal privacy, consent, or unauthorized computer access. Those decisions are, as we noted, beyond the scope of this post, except to note that they increase the difficulty of ensuring that the “active probe” approach is manageable, legal and safe.

Conclusions and Recommendations

We recognize that circumstances will vary, and different situations may call for different approaches. Here is an indication of the range of interventions we think can apply. This is not an exhaustive list, but it serves to show that many options are available, and several may be needed.

  • Security by design. If all IoT devices were well designed in the first place, their risk would be greatly reduced.
  • Secure lifecycle management. Good design includes the ability to manage deployed devices over their whole lifecycle, including secure updates to firmware/software, and secure decommissioning. (This could imply that some processes and protocols need to include a “consent” step; a sketch of one element, update verification, follows this list.)
  • Lab testing of devices. Assess new devices against quality criteria for security and lifecycle management, and provide feedback to manufacturers. This could extend to include certification and trust-marks.
  • General awareness-raising campaigns (e.g., encouraging users to change default passwords).
  • Targeted awareness raising/call to action (this might be based on the results of lab testing, in the form of a manufacturer’s “recall” notice for unsafe products).
  • “Passive” device targeting (e.g., an ISP detects traffic that indicates an unsafe device and sends an out-of-band alert to the user suggesting remedial action).
  • “Active” device targeting (e.g., an entity scans for device types known to have a security flaw, and notifies the user with suggested actions).
  • “Active probe” (e.g., an entity probes devices remotely to identify those that still have default passwords).
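To make the “secure lifecycle management” option above more concrete, here is a minimal sketch of how a device might verify a signed firmware image before installing it. This is an illustration only, assuming the Python “cryptography” package and a vendor-provisioned Ed25519 public key; the file names and provisioning model are hypothetical, and a real update system would add rollback protection, staged rollout, and recovery paths.

```python
# Minimal sketch: refuse to install firmware unless it verifies against
# a vendor-supplied Ed25519 public key. Assumes the "cryptography"
# package; file names and key provisioning are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_firmware(image_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the image matches its detached signature."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

# A device's updater would gate the flash step on this check:
# if not verify_firmware("fw.bin", "fw.sig", VENDOR_PUBKEY):
#     abort_update()
```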

As this rough list suggests, many alternatives can be considered before embarking on something as potentially contentious as an active probe – and of the options listed, active probing would require the most effort in terms of governance, management, privacy/ethical impact assessment, and safety measures. Here are just some of our concerns with the “active probe” approach:

  • Doing this (or even attempting to) without the knowledge and express permission of the device owner, irrespective of the motivation, is a technical attack on that device.
  • The device owner has no way to distinguish a malicious attack from an “authorized,” legitimate one, and might therefore react inappropriately to a legitimate probe, or fail to react appropriately to a malicious one. This may give rise to unintended and undesirable outcomes. For instance, if users are warned via a general announcement that “legitimate probes will be conducted overnight on Thursday of next week”, hackers might interpret that as an opportunity to launch their own attacks, in the knowledge that householders are less likely to react.
  • It could result in the creation of a large database of vulnerable devices, which would be both a target and an asset for potential attackers. Creation of such an asset should not be done without caution and forethought.
  • It is even possible that an active probe could infringe the sovereignty of another nation: for instance, is it acceptable for a country to probe the connected devices of foreign embassies on its soil, as part of an initiative such as this?

Overall, our view is that the active probe approach carries the highest risk of undermining users’ trust in the Internet, particularly by breaching the normal expectations of the device owners and users, concerning privacy, ownership and control. We conclude that actively testing device security by attempting to log in using well-known default passwords should be a last resort, in light of a specific, identified threat, and used only when other alternatives are not available or practical.

In deciding which of the interventions is appropriate (and successful intervention may need a combination of measures), we recommend applying established principles from other, related disciplines of IT governance:

  • Necessity: is there a less risky, less intrusive way to achieve the same ends?
  • Proportionality: is the desired outcome sufficient to justify the potential safety and privacy impact of the intervention?
  • Consent: has the individual’s informed consent been sought and knowingly, freely given?
  • Transparency: is it clear to all stakeholders what is being done and why?
  • Accountability: are the outcomes measurable? Is accountability for the outcomes clear – including negative outcomes if something goes wrong?

We recognize that insecure connected devices represent a substantial and growing threat, and one that needs an effective response. However, we also believe that response can and should be graduated, based on evaluation of a full range of options and application of established principles of good governance.

Recent examples of IoT as an attack vector

Other resources

Categories
IETF Improving Technical Security Open Internet Standards Technology

ISOC Rough Guide to IETF 99: Internet Infrastructure Resilience

IETF 99 is next week in Prague, and I’d like to take a moment to discuss some of the interesting things happening there related to Internet infrastructure resilience in this installment of the Rough Guide to IETF 99.

Simple solutions sometimes have a huge impact. Like a simple requirement that “routes are neither imported nor exported unless specifically enabled by configuration”, as specified in the Internet draft “Default EBGP Route Propagation Behavior Without Policies”. The draft has been submitted to the IESG and is expected to be published as a Standards Track RFC soon.

This specification aims to limit the impact of misbehaving networks by requiring the explicit configuration of both BGP Import and Export Policies for any External BGP (EBGP) session – such as with customers, peers, or at confederation boundaries – for all enabled address families. When widely deployed, this measure should reduce the occurrence of route leaks and some other routing misconfigurations.
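The required behaviour is easy to state in code. Here is a hedged, purely illustrative sketch of the decision logic (not router configuration, and not any real BGP implementation’s API): a route crossing an EBGP boundary is dropped unless a policy has been explicitly configured for that direction.

```python
# Illustrative sketch of the draft's default-deny rule for EBGP sessions:
# with no explicit policy configured, nothing is imported or exported.
# The Session type and callable policies are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Session:
    type: str                                      # "ebgp" or "ibgp"
    policies: dict = field(default_factory=dict)   # "import"/"export" -> predicate

def should_propagate(route: str, session: Session, direction: str) -> bool:
    if session.type != "ebgp":
        return True                    # the rule applies only at EBGP boundaries
    policy = session.policies.get(direction)
    if policy is None:
        return False                   # default-deny: no policy, no propagation
    return policy(route)

# An EBGP session with no configured policies propagates nothing:
bare = Session(type="ebgp")
assert should_propagate("203.0.113.0/24", bare, "export") is False
```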

Speaking of route leaks, there are still two proposals addressing the route leak problem, and both are now IDR WG documents: “Methods for Detection and Mitigation of BGP Route Leaks” (http://datatracker.ietf.org/doc/draft-ietf-idr-route-leak-detection-mitigation), and “Route Leak Prevention using Roles in Update and Open messages” (https://datatracker.ietf.org/doc/draft-ietf-idr-bgp-open-policy/). The first approach uses a so-called Route Leak Prevention (RLP) field to inform upstream networks and lateral peers of a “leaked” route. The second leverages the BGP Open message to establish an agreement about the (customer, provider, complex) relationship between two neighboring BGP speakers, in order to enforce appropriate configuration on both sides. Propagated routes are then marked with a flag according to the agreed relationship, allowing detection and mitigation of route leaks.
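The intuition behind the role-based proposal can be reduced to the usual “valley-free” rule of thumb (my simplification, not the draft’s exact mechanism): a route learned from a provider or a lateral peer should only ever be propagated “downhill” to customers.

```python
# Simplified valley-free leak check underlying role-based proposals.
# Roles name the *neighbor's* relationship to us; purely illustrative.
LEAKY = {
    ("provider", "provider"),   # provider route re-advertised to a provider
    ("provider", "peer"),       # provider route re-advertised to a peer
    ("peer", "provider"),
    ("peer", "peer"),
}

def is_route_leak(learned_from: str, advertised_to: str) -> bool:
    return (learned_from, advertised_to) in LEAKY

assert is_route_leak("provider", "peer")           # a leak
assert not is_route_leak("customer", "provider")   # normal transit behaviour
```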

In the area of RPKI and BGPSEC, the recently chartered SIDR Operations Working Group (SIDROPS) has taken over the technology developed in the SIDR WG and is focused on developing guidelines for the operation of SIDR-aware networks, and on providing operational guidance on how to deploy and operate SIDR technologies in existing and new networks. The first such guideline was just published and will probably be discussed during the WG meeting: “Requirements for Resource Public Key Infrastructure (RPKI) Relying Parties” (https://datatracker.ietf.org/doc/draft-madi-sidrops-rp). Being a relying party is not an easy job – one has to comply with dozens of RFCs, from protocol specifications to best practices – and this document attempts to outline a set of baseline requirements imposed on RPs, providing a single reference point for RP software requirements in the RPKI, segmented by orthogonal functionality (a small origin-validation sketch follows the list):

  • Fetching and Caching RPKI Repository Objects
  • Processing Certificates and CRLs
  • Processing RPKI Repository Signed Objects
  • Delivering Validated Cache Data to BGP Speakers
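To give a feel for the last item, here is a hedged sketch of route origin validation against a set of validated ROA payloads (VRPs), following the valid/invalid/not-found outcomes of RFC 6811 in spirit. The data structures are invented for illustration; real relying-party software derives its VRPs from a cryptographically validated RPKI cache.

```python
# Origin validation sketch: classify a (prefix, origin AS) pair against
# validated ROA payloads (VRPs). Data structures are illustrative only.
from ipaddress import ip_network

# Each VRP: (prefix, max_length, authorized origin ASN)
VRPS = [(ip_network("192.0.2.0/24"), 24, 64496)]

def validate(prefix_str: str, origin_asn: int) -> str:
    prefix = ip_network(prefix_str)
    covered = False
    for vrp_prefix, max_len, vrp_asn in VRPS:
        if prefix.version == vrp_prefix.version and prefix.subnet_of(vrp_prefix):
            covered = True                    # some ROA covers this prefix
            if origin_asn == vrp_asn and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

assert validate("192.0.2.0/24", 64496) == "valid"
assert validate("192.0.2.0/24", 64500) == "invalid"      # wrong origin AS
assert validate("198.51.100.0/24", 64496) == "not-found"
```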

The IDR WG continues working on the proposal “Making Route Servers Aware of Data Link Failures at IXPs” (https://datatracker.ietf.org/doc/draft-ietf-idr-rs-bfd/). When route servers are used, the data plane is not congruent with the control plane. Therefore, the peers on the Internet exchange can lose data connectivity without the control plane being aware of it, and packets are dropped on the floor. This document proposes a means for the peers to verify connectivity amongst themselves, and a means of communicating the knowledge of the failure back to the route server. There was quite some discussion on the mailing list about whether communication of failures back to the RS is necessary. I imagine this discussion will continue during the WG session.

It seems the OPSEC WG will discuss another attempt at addressing the source IP spoofing problem. The draft “Enhanced Feasible-Path Unicast Reverse Path Filtering Anti-spoofing” (https://tools.ietf.org/html/draft-sriram-opsec-urpf-improvements) proposes a method that avoids the drawbacks of the existing modes of Unicast Reverse Path Filtering (uRPF): strict, feasible, and loose. Apart from implementation issues and a potential performance hit, uRPF carries the risk that an ISP implementing it drops legitimate traffic; these have been the major obstacles to its deployment as protection against IP-spoofed traffic.
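For readers unfamiliar with the existing uRPF variants the draft improves on, here is a much-simplified sketch of the three modes. Real implementations consult the router’s FIB; the dictionaries below merely fake that state for illustration.

```python
# Much-simplified sketch of the three existing uRPF modes. A real router
# consults its FIB; here routing state is faked with dictionaries.
from ipaddress import ip_address, ip_network

# best_route: prefix -> interface chosen by best-path selection
# feasible:   prefix -> set of interfaces offering *any* feasible path
best_route = {ip_network("203.0.113.0/24"): "eth0"}
feasible   = {ip_network("203.0.113.0/24"): {"eth0", "eth1"}}

def urpf_accept(src_ip: str, iface: str, mode: str) -> bool:
    src = ip_address(src_ip)
    matches = [p for p in feasible if src in p]
    if mode == "loose":      # any route back to the source suffices
        return bool(matches)
    if mode == "strict":     # best path must point out the arrival interface
        return any(best_route.get(p) == iface for p in matches)
    if mode == "feasible":   # any feasible path via the arrival interface
        return any(iface in feasible[p] for p in matches)
    raise ValueError(f"unknown uRPF mode: {mode}")

assert urpf_accept("203.0.113.7", "eth1", "strict") is False   # dropped
assert urpf_accept("203.0.113.7", "eth1", "feasible") is True  # accepted
```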

DDoS attacks are a persistent and growing threat on the Internet. As DDoS attacks grow rapidly in volume and sophistication, more efficient cooperation between victims and the parties that can help mitigate such attacks is required. The ability to respond quickly and precisely to an attack as it begins, communicating exact information to the mitigation service providers, is crucial.

Addressing this challenge is what keeps the DDoS Open Threat Signaling (DOTS, http://datatracker.ietf.org/wg/dots/) WG busy. The goal of the group is to develop a communications protocol intended to facilitate the programmatic, coordinated mitigation of such attacks via a standards-based mechanism. This protocol should support requests for DDoS mitigation services and status updates across inter-organizational administrative boundaries. Specifications outlining the requirements, architecture and use cases for DOTS are maturing and will be discussed at the meeting.
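To make the idea concrete, here is a purely illustrative sketch of the kind of information a mitigation request conveys: what is under attack, what to protect, and for how long. This is emphatically not the DOTS wire format – the working group documents define that – just an indication of what crosses the administrative boundary.

```python
# Illustrative only: the kind of information a DOTS-style mitigation
# request carries. The real protocol's encoding and fields are defined
# by the working group documents, not by this sketch.
import json
import time

mitigation_request = {
    "mitigation-id": 12832,                     # client-chosen identifier
    "target-prefix": ["198.51.100.0/24"],       # resources under attack
    "target-port-range": [{"lower-port": 443}],
    "lifetime": 3600,                           # seconds of requested mitigation
    "requested-at": int(time.time()),
}

# A DOTS client would send this to its mitigation provider over a
# pre-established, mutually authenticated signaling channel.
print(json.dumps(mitigation_request, indent=2))
```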

To summarize – there is important work underway at the IETF that will hopefully lead to a more resilient and secure Internet infrastructure.

Related Working Groups at IETF 99

SIDROPS (SIDR Operations) WG
Monday, 17 July, 15:50-17:20, Congress Hall III
Agenda: https://datatracker.ietf.org/meeting/99/agenda/sidrops/
Charter: https://datatracker.ietf.org/wg/sidrops/charter/

GROW (Global Routing Operations) WG
Monday, 17 July, 17:40-18:40, Congress Hall III
Agenda: https://datatracker.ietf.org/meeting/99/agenda/grow/
Charter: https://datatracker.ietf.org/wg/grow/charter/

IDR (Inter-Domain Routing Working Group) WG
Thursday, 20 July, 09:30-12:00, Congress Hall III
Agenda: https://datatracker.ietf.org/meeting/99/agenda/idr/
Charter: https://datatracker.ietf.org/wg/idr/charter/

DOTS (DDoS Open Threat Signaling) WG
Thursday, 20 July, 15:50-17:50, Berlin/Brussels
Agenda: https://datatracker.ietf.org/meeting/99/agenda/dots/
Charter: https://datatracker.ietf.org/wg/dots/charter/

OPSEC (Operational Security) WG
Wednesday, 19 July, 13:30-15:00, Berlin/Brussels
Agenda: https://datatracker.ietf.org/meeting/99/agenda/opsec/
Charter: https://datatracker.ietf.org/wg/opsec/charter/

Follow Us

There’s a lot going on in Prague, and whether you plan to be there or join remotely, there’s much to monitor. To follow along as we dole out this series of Rough Guide to IETF blog posts, follow us on the Internet Technology Matters blog, Twitter, Facebook, Google+, via RSS, or see http://dev.internetsociety.org/rough-guide-ietf99.

Categories
IETF Improving Technical Security Open Internet Standards Technology

Rough Guide to IETF 98: Internet Infrastructure Resilience

Let’s look at what’s happening in the area of Internet infrastructure resilience in the IETF and at the upcoming IETF 98 meeting. My focus here is primarily on the routing and forwarding planes and specifically routing security and unwanted traffic of Distributed Denial of Service Attacks (DDoS) attacks. There is interesting and important work underway at the IETF that can help address problems in both areas.

DDoS attacks are a persistent and growing threat on the Internet. As DDoS attacks grow rapidly in volume and sophistication, more efficient cooperation between victims and the parties that can help mitigate such attacks is required. The ability to respond quickly and precisely to an attack as it begins, communicating exact information to the mitigation service providers, is crucial.

Addressing this challenge is what keeps the DDoS Open Threat Signaling (DOTS, http://datatracker.ietf.org/wg/dots/) WG busy. The goal of the group is to develop a communications protocol intended to facilitate the programmatic, coordinated mitigation of such attacks via a standards-based mechanism. This protocol should support requests for DDoS mitigation services and status updates across inter-organizational administrative boundaries. Specifications outlining the requirements, architecture and the use cases for DOTS are maturing and will be discussed at the meeting.

The draft “Inter-organization cooperative DDoS protection mechanism” (https://datatracker.ietf.org/doc/draft-nishizuka-dots-inter-domain-mechanism) goes further than communication between a victim and a mitigation service provider. It attempts to describe possible mechanisms that implement cooperative inter-organization DDoS protection using the DOTS protocol, increasing protection capacity by sharing resources among several organizations.

A recently chartered SIDR Operations Working Group (SIDROPS) has taken over the technology developed in the SIDR WG and is focused on developing guidelines for the operation of SIDR-aware networks, and providing operational guidance on how to deploy and operate SIDR technologies in existing and new networks. The working group meets for the first time and will, among other things, discuss mitigation mechanisms for route leaks.

There are still two proposals addressing the route leak problem. One is an IDR WG document, “Methods for Detection and Mitigation of BGP Route Leaks” (http://datatracker.ietf.org/doc/draft-ietf-idr-route-leak-detection-mitigation), in which the authors suggest an enhancement to BGP that would extend the route-leak detection and mitigation capability of BGPSEC. The other is an independent submission, “Route Leak Detection and Filtering using Roles in Update and Open messages” (https://tools.ietf.org/html/draft-ymbk-idr-bgp-open-policy). This proposal enhances the BGP Open message to establish an agreement about the (peer, customer, provider, internal) relationship between two neighboring BGP speakers, in order to enforce appropriate configuration on both sides. Propagated routes are then marked with a flag according to the agreed relationship, allowing detection and mitigation of route leaks. An updated version of the specification allows signaling a potential leak more than one hop away.

Both proposals will be discussed at the SIDROPS as well as at the IDR WG sessions.

Another item that can certainly contribute to better resilience of an IXP infrastructure and is on the agenda of the IDR WG session is a proposal, “Making Route Servers Aware of Data Link Failures at IXPs” (https://datatracker.ietf.org/doc/draft-ietf-idr-rs-bfd/). When route servers are used, the data plane is not congruent with the control plane. Therefore, the peers on the Internet exchange can lose data connectivity without the control plane being aware of it, and packets are dropped on the floor. This document proposes a means for the peers to verify connectivity amongst themselves, and a means of communicating the knowledge of the failure back to the route server.

To summarize – there is important work underway at the IETF that will hopefully lead to a more resilient and secure Internet infrastructure.

Related Working Groups at IETF 98

SIDROPS (SIDR Operations) WG
Tuesday, 28 March, 14:50-16:20, Zurich C
Agenda: https://datatracker.ietf.org/meeting/98/agenda/sidrops/
Charter: https://datatracker.ietf.org/wg/sidrops/charter/

GROW (Global Routing Operations) WG
Monday, 27 March, 17:10-18:10, Zurich G
Agenda: https://datatracker.ietf.org/meeting/98/agenda/grow/
Charter: https://datatracker.ietf.org/wg/grow/charter/

IDR (Inter-Domain Routing Working Group) WG
Friday, 31 March, 09:00-11:30, Zurich G
Agenda: https://datatracker.ietf.org/meeting/98/agenda/idr/
Charter: https://datatracker.ietf.org/wg/idr/charter/

DOTS (DDoS Open Threat Signaling) WG
Tuesday, 28 March, 16:40-18:40, Zurich G
Agenda: https://datatracker.ietf.org/meeting/98/agenda/dots/
Charter: https://datatracker.ietf.org/wg/dots/charter/

Follow Us

There’s a lot going on in Chicago, and whether you plan to be there or join remotely, there’s much to monitor. To follow along as we dole out this series of Rough Guide to IETF blog posts, follow us on the Internet Technology Matters blog, Twitter, Facebook, Google+, via RSS, or see http://dev.internetsociety.org/rough-guide-ietf98.

Categories
Improving Technical Security

Holiday DDoS Attacks: Targeting Gamers (Plus Five Things You Can Do)

Over the past few years, a new tradition has emerged, the Holiday DDoS Attack.

While distributed denial of service (DDoS) attacks happen throughout the year, some of the highest profile attacks occur during the holidays, when the most users will be impacted. Attackers may target online shopping sites to disrupt pre-holiday gift buying. Or they may attack voice over IP services, like Skype, which are used to talk to family members over the holidays. But gaming networks are most often targeted by DDoS attacks, as the end of year holidays usually bring many users online who are eager to try out their new games and systems. In December 2014 and 2015, both Sony’s PlayStation Network and Microsoft’s Xbox Live gaming networks experienced outages as a result of DDoS attacks, leaving users unable to access or play their games online.

On 23 December 2016, Steam, a digital distribution platform and multiplayer network for PC gaming, went offline for several hours. A group of hackers took credit for the outage, claiming they downed the service through a DDoS attack. Valve Corporation, the developer of Steam, did not publicly identify the cause of the outage. When the outage occurred, Steam was in the first day of its annual Winter Sale, which could have produced a surge of legitimate traffic large enough to overload its systems, but a DDoS attack is far more likely.

In each of these cases, thousands of average Internet users inadvertently contributed to these DDoS attacks through the participation of their unsecured and infected devices.

While DDoS attacks are annoying for the users impacted, they are incredibly expensive for the companies attacked. According to a study by Incapsula, a web security company, DDoS attacks cost companies an average of $40,000 an hour. For the Steam attack, the cost was likely much higher. The Winter Sale produces some of their largest revenues of the year. The attack’s timing just days before Christmas may have caused Valve Corporation to lose customers, who may have opted to buy their gifts from other companies when they could not access the Steam website. Some users may have lost some confidence in Steam, worrying that the attackers may have also stolen private customer data such as their billing information, and moved to a different service.

DDoS attacks work by flooding systems with seemingly legitimate traffic. The systems are overloaded, leaving legitimate users unable to access them. Since differentiating between illegitimate and legitimate traffic is difficult, DDoS attacks are hard to defend against. Defenders can attempt to block spoofed traffic, provision more bandwidth to counteract the increased traffic, or use other mitigation techniques.[1] However, if the DDoS attack is large enough, and especially if it is made up of unspoofed traffic from many sources, it can be difficult to mitigate. For this reason, DDoS attacks have become the weapon of choice for attackers looking to gain notoriety during the holiday season.

While it can be hard to mitigate a large DDoS attack, everyone can take actions to prevent them. DDoS attacks rely on networks (botnets) of infected devices (bots) to create the massive amounts of traffic necessary to overload systems. Without large numbers of bots, it is much harder for attackers to create large amounts of traffic, making attacks easier to mitigate. We can all take small actions to ensure that our devices do not double as bots. DDoS attacks can only truly be stopped if everyone does their part and protects their own devices. Until that happens, the holiday DDoS attack will remain a threat for years to come.

Five actions to protect your devices from becoming bots:

  1. Create and use strong passwords for all your devices. Do not use the default. This is especially important for smart devices, routers, and other devices with which you may not interact directly.
  2. Update your devices! Software is often patched to remove known vulnerabilities, greatly strengthening your defenses.
  3. Monitor your devices. If a device is acting strangely, investigate it. One example is bounced email messages. If email messages are not reaching their destination, your device could be infected and sending spam as a part of a botnet.[2]
  4. Run anti-virus scans and use other security tools to find and remove malicious software.
  5. Be careful to avoid infecting your devices. Avoid opening suspicious emails, attachments, or risky websites. Some anti-malware services include website security checks.

Notes

[1] Spoofed traffic is Internet traffic that is forged to look like it is from another source.
[2] For more specific tips for fighting spam, see our Anti-Spam Toolkit users page.

Categories
Improving Technical Security Technology

The DDoS Attack Against Liberia – we must take collective action for the future of the Open Internet

If it was not clear yet: the Internet Society condemns those that perform large-scale distributed denial-of-service (DDoS) attacks on Internet infrastructure and services.

These attacks are a threat to all the opportunities that the Internet brings.

Bumping a small nation off the Internet map worries everybody, including those that are most friendly to the open nature of the Internet. These sorts of actions will cause reactionary measures that lead to fragmentation, decrease the ability for permissionless innovation, and give rise to calls for measures that prevent any anonymous or privacy-protecting behaviour on the Internet. If bumping a nation off the Internet doesn’t worry enough Internet-friendly people in positions of power, then another DDoS attack with societal impact will.

I am going to be hopelessly naïve and call upon those that are involved with these botnets: stop spoiling your own nest.

On a less emotional note: “The Internet only just works” was the title of a 2006 paper by Mark Handley. His main argument was that the Internet collectively addresses issues only when they become urgent. In the past two years, the dynamics of DDoS attacks seem to have changed in scale and magnitude. Individuals, organisations, companies, and even countries are impacted. That should make it clear that there is urgency in addressing the root causes of this problem. I laid out the outline of an agenda for that in my previous blog post:

  • Producers follow, and share, good design practices;
  • For every product sold there is a way that security researchers can responsibly disclose vulnerabilities found;
  • Producers can fix, or patch, these vulnerabilities during the lifetime of the device (Field Upgradability);
  • We clearly understand what happens if the product, or the supporting producers, reach end-of-life (Device Obsolescence);
  • Consumers can make informed choices based on these properties (Cost vs. Security trade-offs);
  • Data that IoT devices collect are protected and dealt with in privacy-honoring ways (Data Confidentiality and Access Control); and
  • Those who go about device security in an irresponsible way get penalised.

The global divide in levels of interconnection, human capacity, and experience is sadly exposed. I know that the Internet technical community is doing everything in its power to limit the effects of the DDoS attacks, but unfortunately a western company like Dyn is in a much better position to cope with the effects of a DDoS than the Liberians. That suggests an agenda of capacity building. Capacity building in technical operations and security management around the world has always been one of our priorities. We have been convening efforts to mobilise the community, such as MANRS, anti-spoofing work, and DNS security. We, the Internet Society, together with numerous partners, will continue developing and strengthening capacities for coping with these sorts of problems, with a special focus on developing countries.

The apparent and sudden rise in the scale and frequency of DDoS attacks makes this a very urgent problem – one of those that, as Handley told us, will get fixed. But to get there, collective action is needed: across the industry, in the public sector, by law enforcement, and by consumers – by all stakeholders.

All to protect the Open Internet, an Internet of opportunities.


Image credit: Google Maps

Categories
Domain Name System (DNS) Improving Technical Security Technology

How To Survive A DNS DDoS Attack – Consider using multiple DNS providers

How can your company continue to make its website and Internet services available during a massive distributed denial-of-service (DDoS) attack against a DNS hosting provider? In light of last Friday’s attack on Dyn’s DNS infrastructure, many people are asking this question.

One potential solution is to look at using multiple DNS providers for hosting your DNS records. The challenge with Friday’s attack was that so many of the affected companies – Twitter, Github, Spotify, Etsy, SoundCloud and many more – were using ONLY one provider for DNS services. When that DNS provider, Dyn, then came under attack, people couldn’t get to the servers running those services. It was a single point of failure.

You can see this yourself right now. If you go to a command line on a Mac or Linux system and type “dig ns twitter.com,”[1] the answer you will see is something like:

twitter.com.	10345  IN  NS   ns4.p34.dynect.net.
twitter.com.	10345  IN  NS   ns3.p34.dynect.net.
twitter.com.	10345  IN  NS   ns1.p34.dynect.net.
twitter.com.	10345  IN  NS   ns2.p34.dynect.net.

What this says is that Twitter is using only Dyn. (“dynect.net” is the domain name of Dyn’s “DynECT” managed DNS service.)

Companies using Dyn who also used another DNS provider, though, had less of an issue. Users may have experienced delays in initially connecting to the services, but they were still able to eventually connect. Here is what Etsy’s DNS looks like after Friday (via “dig ns etsy.com”):

etsy.com.	9371  IN  NS   ns1.p28.dynect.net.
etsy.com.	9371  IN  NS   ns-870.awsdns-44.net.
etsy.com.	9371  IN  NS   ns-1709.awsdns-21.co.uk.
etsy.com.	9371  IN  NS   ns3.p28.dynect.net.
etsy.com.	9371  IN  NS   ns-1264.awsdns-30.org.
etsy.com.	9371  IN  NS   ns-162.awsdns-20.com.
etsy.com.	9371  IN  NS   ns4.p28.dynect.net.
etsy.com.	9371  IN  NS   ns2.p28.dynect.net.

Etsy is now using a combination of Dyn’s DynECT DNS services and Amazon’s Route 53 DNS services.
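If you want to check a domain’s provider diversity yourself, a few lines of Python do roughly what the dig commands above do. This sketch assumes the dnspython package (pip install dnspython) and uses a crude “last two labels” heuristic to group name servers by provider, which misfires on suffixes like co.uk – good enough for a quick look, not for production.

```python
# Rough equivalent of "dig ns <domain>": list the NS records and count
# distinct provider domains. Assumes dnspython; the "last two labels"
# provider heuristic is crude (it misgroups suffixes like co.uk).
import dns.resolver

def ns_providers(domain: str) -> set:
    answers = dns.resolver.resolve(domain, "NS")
    return {".".join(str(r.target).rstrip(".").split(".")[-2:]) for r in answers}

providers = ns_providers("etsy.com")
print(providers)
if len(providers) < 2:
    print("Warning: a single DNS provider is a single point of failure.")
```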

But wait, you say… shouldn’t this be “DNS 101”?

Aren’t you always supposed to have DNS servers spread out across the world?
Why don’t they have “secondary DNS servers”?
Isn’t that a common best practice?

Well, all of these companies did have secondary servers, and their DNS servers were spread out all around the world. This is why users in Asia, for instance, were able to get to Twitter and other sites while users in the USA and Europe were not able to do so.

So what happened?

It gets a bit complicated.

20 Years Ago…

Jumping back, say, 20 years or so, it was common for everyone to operate their own “authoritative servers” in DNS that would serve out their DNS records. A huge strength of DNS is that it is “distributed and de-centralized”: anyone registering a domain name is able to operate their own “authoritative servers” and publish all of their own DNS records.

To make this work, you publish “name server” (“NS”) records for each of your domain names that list which DNS servers are “authoritative” for your domain. These are the servers that can answer back with the DNS records that people need to reach your servers and services.

You need at least one authoritative server to give out your DNS records. Of course, in those early days, if there was a problem with that server and it went offline, people would not be able to get the DNS records that would get them to your other computers and services. Similarly, you could have a problem with your connection to the Internet, and people could not reach your authoritative server.

For that reason the best practice emerged of having a “secondary” authoritative DNS server that contained a copy of all of the DNS records for your domain. The idea was to have this in a different geographic location and on a different network.

On the user end, we use what is called a “recursive DNS resolver” to send out DNS queries and get back the IP addresses that our computers need to connect. Our DNS resolvers will get the list of name servers (“NS records”) and choose one to connect to. If an answer doesn’t come back after some short period of time, the resolver will try the next NS record, and the next… until it runs out of NS records to try.
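That failover behaviour is easy to sketch (again assuming dnspython). A real recursive resolver adds caching, retries, and RTT-based server selection, but the core loop looks like this:

```python
# Sketch of resolver failover: try each authoritative server address in
# turn until one answers. Assumes dnspython; real resolvers add caching,
# retries, and latency-based server selection.
import dns.exception
import dns.message
import dns.query

def query_with_failover(qname: str, ns_addresses: list, timeout: float = 2.0):
    query = dns.message.make_query(qname, "A")
    for addr in ns_addresses:
        try:
            return dns.query.udp(query, addr, timeout=timeout)
        except dns.exception.Timeout:
            continue   # this server didn't answer in time; try the next
    raise RuntimeError("all authoritative servers timed out")
```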

Back in July 1997, the IETF published RFC 2182 dedicated to this topic: Selection and Operation of Secondary DNS Servers. It’s fun to go back and read through that document almost 20 years later, as a great deal has changed. But back in the day, this was a common practice:

The best approach is usually to find an organisation of similar size, and agree to swap secondary zones – each organization agrees to provide a server to act as a secondary server for the other organisation’s zones.

As noted in RFC 2182, it was common for people to have 2, 3, 4 or even more authoritative servers. One would be the “primary” or master server where changes were made – the others would all be “secondary” servers grabbing copies of the DNS records from the primary server.

Over the years, companies and organizations would spend a great amount of time, energy and money building out their own DNS server infrastructure. Having this kind of geographic and network resilience was critical to ensure that users and customers could get the DNS records that would get them to the organization’s servers and services.

The Emergence of DNS Hosting Providers

But most people really didn’t want to run their own global infrastructure of DNS servers. They didn’t want to deal with all the headaches of establishing secondary DNS servers and all of that. It was costly and complicated – and just more than most companies wanted to deal with.

Over time companies emerged that were called “DNS hosting providers” or “DNS providers” who would take care of all of that for you. You simply signed up and delegated operation of your domain name to them – and they did everything else.

The advantages were – and are today – enormous. Instead of only a couple of secondary DNS servers, you could have tens or even hundreds. Technologies such as anycast made this possible. The DNS hosting provider would take care of all the data center operation, the geographic diversity, the network diversity… everything. And they provided you with all this capability on a global and network scale that very few companies could provide all by themselves.

The DNS hosting providers gave you everything in the RFC 2182 best practices – and so much more!

And so over the past 10 years most companies and people moved to using DNS hosting providers of some form. Often individuals simply use the DNS hosting provided by whatever domain name registrar they use to register their domain name. Companies have outsourced their DNS hosting to companies such as Dyn, Amazon’s Route 53, CloudFlare, Google’s Cloud DNS, UltraDNS, Verisign and so many more.

It’s simple and easy … and probably 99.99% of the time it has “just worked”.

And you only needed one DNS provider because they were giving you all the necessary secondary DNS services and diversity protection.

Friday’s Attack

Until Friday. When for some parts of the Internet the DNS hosting services of Dyn didn’t work.

It’s important to note that Dyn’s overall DNS network still worked. They never lost all their data centers to the attack. People in some parts of the world, such as Asia, continued to be able to get DNS records and connect to all the affected services without any issues.

But on Friday, all the many companies and services that were using Dyn as their only DNS provider suddenly found that a substantial part of the Internet’s user community couldn’t get to their sites. They found that they were sharing the same fate as their DNS provider in a way that would not have been true before the large degree of centralization with DNS hosting providers.

Some companies, like Twitter, stayed with Dyn through the entire process and weathered the storm. Others, like Github, chose to migrate their DNS hosting to another provider. Still others chose to start using multiple DNS providers.

Why Doesn’t Everyone Just Use Multiple DNS Providers?

This would seem the logical question. But think about that for a second – each of these major DNS providers already has a global, distributed DNS architecture that goes far beyond what companies could provide in the past.

Now we want to ask companies to use multiple of these large-scale DNS providers?

I put this question out in a number of social networks and a friend of mine whose company was affected nailed the issue with this comment:

Because one DNS provider, with over a dozen points-of-presence (POPs) all over the world and anycast, had been sufficient, up until this unprecedented DDoS. We had eight years of 100% availability from Dyn until Friday. Dealing with multiple vendors (and paying for it) didn’t have very good ROI (and I’m still not sure it does, but we’ll do it anyway).

Others chimed in and I can summarize the answers as:

  • CDNs and GLBs – Most websites no longer sit on a single web server publishing a simple set of HTML files. They are large complex beasts pulling in data from many different servers and sites. And they very often sit behind content delivery networks (CDNs) that cache website content and make it available through “local” servers or global load balancers (GLBs) that redirect visitors to different servers. Most of these CDNs and GLBs work by using DNS to redirect people to the “closest” server (chosen by some algorithm). When using a CDN or GLB, you typically wind up having to use only that service for your DNS hosting. I’ve found myself in this situation with a few of my own sites where I use a CDN.
  • Features – Many companies use more sophisticated features of DNS hosting providers such as geographic redirection or other mechanisms to manage traffic. Getting multiple providers to modify DNS responses in exactly the same way can be difficult or impossible.
  • Complexity – Beyond CDNs and features, multiple DNS providers simply adds complexity into IT infrastructure. You need to ensure both providers are publishing the same information, and getting that information out to providers can be tricky in some complex networks.
  • Cost – The convenience of using a DNS hosting provider comes at a substantial financial cost. For the scale needed by major Internet services, the DNS providers aren’t cheap.

For all of these reasons and more, it’s not an easy decision for many sites to move to using multiple DNS providers.

It’s complicated.

And yet…

And yet the type of massive DDoS attack we saw on Friday may require companies and organizations to rethink their “DNS strategy”. With the continued deployment of the Internet of Insecure Things, in particular, these types of DDoS attacks may become worse before the situation can improve. (Please read Olaf Kolkman’s post for ideas about how we move forward.) There will be more of these attacks.

As my friend wrote in further discussion:

These days you outsource DNS to a company that provides way more diversity than anyone could in the days before anycast, but the capacity of botnets is still greater than one of the biggest providers, and probably bigger than the top several providers combined.

And even more to the point:

The advantage of multiple providers on Friday wasn’t network diversity, it was target diversity.

The attackers targeted Dyn this time, so companies who use DNS services from Amazon, Google, Verisign or others were okay. Next time the target might be one of the others. Or perhaps attackers may target several.

The longer-term solutions, as Olaf writes about, involve better securing all the devices connected to the Internet to reduce the potential of IoT botnets. They involve continued collaborative work to reduce the effects of malware and bad routing info (e.g., MANRS). They involve continued and improved communication and coordination between network operators and so many others.

But in the meantime, I suspect many companies and organizations will be considering whether it makes sense to engage with multiple DNS providers. For many, they may be able to do so. Others may need the specialized capabilities of specific providers and find themselves unable to use multiple providers. Some may not find the return on investment warrants it. While others may accept that they must do this to ensure that their services are always available.

Sadly, taking DNS resilience to an even higher level may be what is required for today.

What do you think? Do you use multiple DNS providers? If so, what worked for you? If not, why not? I would be curious to hear from readers, either as comments here or out on social networks.


[1] Windows users do not have the ‘dig’ command by default. Instead you can type “nslookup -type=NS <domainname>”. The results may look different from what is shown here, but will contain similar information.

NOTE: I want to thank the people who replied to threads on this topic on Hacker News, in the /r/DNS subreddit and on social media. The comments definitely helped in expanding my own understanding of the complexities of the way DNS providers operate today.

Image credit: a photo I took of a friend’s T-shirt at a conference.

Categories
Improving Technical Security Internet of Things (IoT) Technology

Trust isn’t easy: Drawing an agenda from Friday’s DDoS Attack and the Internet of Things

Last week, millions of infected devices directed Internet traffic to DNS service provider Dyn, resulting in a Distributed Denial of Service (DDoS) attack that took down major websites including Twitter, Amazon, Netflix, and more. In a recent blog post, security expert Bruce Schneier argued that “someone has been probing the defences of the companies that run critical pieces of the Internet”.  This attack seems to be part of that trend.

This disruption raises the question: Can we trust the Internet?

The answer to that question is not yes, or no, or even “it depends.”

First, it is important to realise that there is no security czar on the Internet; there is nobody who can force the global Internet and its users to solve any of these cyber issues. Various actors on the Internet must take responsibility, often in collaboration with others, taking into account the fundamental values and properties that underpin the Open Internet. We call this the collaborative security approach. For now, it is sufficient to realise that the security of the Internet depends on many actors taking responsibility. In this post, I look at this attack through the lens of the Internet “as a system”: I identify one success, share one observation, discuss one failure, and outline an agenda that we must adopt.

The success lies in the collaborative nature of how Dyn worked with others to mitigate the attack.

As mentioned in their statement, Dyn had to work with the technical community to mitigate the attack. My speculations will not be far off if I say that this must have involved work with network operators, computer security specialists, law enforcement, computer security incident response teams, DNS providers, and their customers. Given the size and scale of the attack, I see their reactive work as a testament to the effectiveness of the coordination. So, kudos to Dyn for thwarting the attack even though, metaphorically, this is the success of a fire truck arriving on time and limiting damage and not a success of preventing the fire in the first place.

We should not take the sort of collaboration that happened here for granted. These sorts of attacks can only be stopped when network operators collaborate to address issues that are not exclusively impacting their own networks (the firemen from other areas coming to aid). At the Internet Society, our Routing Manifesto initiative, MANRS, speaks to just that: we are growing the community that commits to taking measures against certain types of attacks and takes action that allows for effective collaboration. MANRS acts as a signal to customers that they are dealing with an entity that understands its responsibility. I’ll get back to signalling below.

The observation.

One of the benefits of having a site’s DNS service managed by one or a few consolidated companies is that specialist expertise can be outsourced, and these few organisations can deal with problems quickly and efficiently. However, it also means that chokepoints are created, and those few managed DNS service providers are becoming very big targets. The trouble is that the target seems to have become too big, and many major companies and websites now share their fate with these consolidated DNS providers. Given that one of the services often offered by DNS service providers is load balancing, untangling these hefty integrations may be a bit tricky. But since some companies and websites took a real hit last week, I think there may be some market-driven evolution in this space.

Now for the failure: Why is it that we are shipping an Internet of Things (IoT) that is so insecure?

These types of attacks depend on malicious software (usually referred to as a “bot,” from robot) being installed on various devices that connect to the Internet. The installation can happen because users (accidentally) open links that download the software, or because devices are open to attack from the Internet. There are several actors involved here. Any device – a computer, a phone, or an IoT thing – is made of a large number of software components. When bugs are discovered in the software, the fixes need to make their way into the software and then onto the devices. There is a lot of collaborative effort in identifying the problems, and in creating and distributing the fixes. It involves processes like responsible disclosure of bugs, software patch policies and procedures, and device end-of-life policies. It also, somewhat unfortunately, involves the actions of end users, since they need to remember to change the default password on the camera, printer, or car they just bought.

So from this follows an agenda. Inspired by the IoT Security Questions from our Internet of Things Overview, we need to get to a point where:

  • Producers follow, and share, good design practices;
  • For every product sold there is a way that security researchers can responsibly disclose vulnerabilities found;
  • Producers can fix, or patch, these vulnerabilities during the lifetime of the device (Field Upgradability);
  • We clearly understand what happens if the product, or the supporting producers, reach end-of-life (Device Obsolescence);
  • Consumers can make informed choices based on these properties (Cost vs. Security trade-offs);
  • Data that IoT devices collect are protected and dealt with in privacy-honoring ways (Data Confidentiality and Access Control); and
  • Those who go about device security in an irresponsible way get penalised.

This is not a trivial agenda.

Take, for instance, consumers making informed choices. While consumers may care about their devices being hacked and used against them, they usually do not know that their camera may be used to bring down the Internet, so the latter isn’t part of their purchasing decision and is hence an afterthought for the producers. These types of issues can be resolved through signalling mechanisms that indicate devices have at least minimal security. Getting to these signalling mechanisms could be done by concerted industry action, but may also involve regulation.

The fact that Internet of Things security is riddled with cases where manufacturers do not incur costs for any lack of security, and the fact that the global industry ships devices without having good answers to questions like responsible disclosure of bugs, software patch policies and procedures, and device end-of-life policies, makes for a rather toxic mix.

We are shipping a lot of Things, so these issues need to be taken head-on with urgency. However, not through a central authority, but by consumers, producers, researchers and regulators coming up with mechanisms that allow the internet to remain open. There are multiple examples of communities taking responsibility and trying to move the needle. Let me name a few that I encountered in the past weeks:

The fact that many organisations are looking at several pieces of the agenda is reassuring; that means that good solutions will surface. Solutions that are relevant in the context in which they will need to be applied. The call to action is to get involved. To take your piece of the agenda and address that piece that you, as a consumer, as a producer, as an insurer, as a stock broker, or as a regulator can address. Together in collaboration, bring your expertise.

In the Dyn blog post that reports on the DDoS attack, Kyle York says: “It is said that eternal vigilance is the price of liberty.”

I believe that quote is central to the collaborative security approach. It implies that we collectively need to work to keep the Internet open, that sometimes we will feel the pain of openness — for this attack will probably not be the last one — and that most importantly the open Internet brings liberty.


Image credit: Downdetector.com

Categories
Deploy360 Domain Name System Security Extensions (DNSSEC) Encryption IETF Improving Technical Security

Deploy360@IETF95, Day 5: DNS Ops, DDoS & Crypto

Buenos Aires view

It’s just a short final day at IETF 95, but there are still several sessions of interest for Deploy360 before we say Adios to Buenos Aires. These include the Domain Name System Operations and DDoS Open Threat Signaling Working Groups, which are unfortunately scheduled at the same time.

DNSOP will meet for its second session of the week with some highly contentious issues, specifically around the discussion of “special use names” related to RFC 6761.  There will also be discussions around whether to drop usage of certain aspects of DNS.

That leaves the recently created DOTS Working Group which aims to develop a standard communications protocol intended to facilitate coordinated mitigation of denial-of-service attacks. The drafts cover use cases, requirements and architecture, with another three draft protocol proposals being discussed.

Finally, whilst not directly related to Deploy360 technologies, it may be worth checking out the Crypto Forum Research Group, as cryptography ultimately has implications for technologies such as DNSSEC, RPKI and TLS. This session includes a discussion of Quantum Resistant Cryptography, which aims to devise algorithms with greater security against cracking by quantum computers.

That’s then it for this IETF, and many thanks for reading along this week. Please do read our other IETF 95-related posts … and we’ll see you at IETF 96 in Berlin this coming July!


NOTE: If you are unable to attend IETF 95 in person, there are multiple ways to participate remotely.


Relevant Working Groups:

Categories
Building Trust Improving Technical Security Open Internet Standards Technology

New Whitepaper Explores Ways to Make IP Spoofing a Problem of the Past

In March 2013, Spamhaus was hit by a significant DDoS attack that made its services unavailable. The attack traffic reportedly peaked at 300Gbps, with hundreds of millions of packets hitting network equipment on their way. In Q1 2015, Arbor Networks reported a 334Gbps attack targeting a network operator in Asia. In the same quarter they also saw 25 attacks larger than 100Gbps globally.

What is really frightening is that such attacks were relatively easy to mount. Two things made these attacks possible: bots with the ability to spoof the source IP address (setting it to the IP address of a victim) and “reflectors” – usually open DNS resolvers. A well-selected DNS query can offer 100-times amplification, meaning that one need only generate queries totaling 3Gbps to create a merged flow of 300Gbps. A relatively small set of clients can accomplish this.
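The arithmetic is worth spelling out, because it shows how little capacity the attacker needs. The bot count below is my own illustrative assumption, not a figure from any reported attack:

```python
# Back-of-the-envelope amplification arithmetic from the paragraph above.
amplification = 100          # response bytes per query byte (well-chosen query)
attack_gbps = 300            # observed flood at the victim
query_gbps = attack_gbps / amplification
print(query_gbps)            # 3.0 Gbps of spoofed queries suffices

bots = 1000                  # hypothetical botnet size, for illustration
per_bot_mbps = query_gbps * 1000 / bots
print(per_bot_mbps)          # 3.0 Mbps of upstream per bot - trivially available
```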

Of course there are DDoS attacks that do not use these two components; they hit the victim directly from many globally distributed points. But they are traceable and two orders of magnitude more difficult and expensive to accomplish.

Mitigating the reflection component of the attack is one way of addressing the problem. As reported by the Open Resolver Project, in the last two years the number of open DNS resolvers has dropped by almost half – from 29M to 15M. However, there are other types of amplifying reflectors – NTP and SSDP among them – and even TCP-based servers (like web servers or ftp servers) can reflect and amplify traffic.

And reflectors are just the accomplices. The root cause of the reflection attacks lies in the ability to falsify, or spoof, the source IP address of outgoing packets. As Paul Vixie put it, “Nowhere in the basic architecture of the Internet is there a more hideous flaw than in the lack of enforcement of simple SAV (source-address validation) by most gateways.”
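The check itself is conceptually trivial, which makes the deployment gap all the more striking. Here is a minimal sketch of SAV at a customer-facing edge; the port names and prefix assignments are hypothetical, and real SAV happens in the forwarding plane, not in Python:

```python
# Minimal source-address validation (SAV) sketch at a customer edge:
# forward a packet only if its source address falls within the prefixes
# assigned to the ingress port. Assignments here are hypothetical.
from ipaddress import ip_address, ip_network

ASSIGNED = {
    "customer-a-port": [ip_network("192.0.2.0/24")],
    "customer-b-port": [ip_network("198.51.100.0/25")],
}

def sav_permit(ingress_port: str, src_ip: str) -> bool:
    src = ip_address(src_ip)
    return any(src in prefix for prefix in ASSIGNED.get(ingress_port, []))

assert sav_permit("customer-a-port", "192.0.2.10")        # legitimate source
assert not sav_permit("customer-a-port", "203.0.113.5")   # spoofed: drop it
```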

Tackling this problem is hard. Deployment of anti-spoofing measures is hampered by the fact that their incentives are misaligned: by implementing the practice, a network helps other networks rather than directly helping itself. There are also real costs and risks in implementing anti-spoofing measures.

In February 2015, a group of network operators, security experts, researchers and vendors met at a roundtable meeting organized by the Internet Society with a goal to identify various factors that aggravate or help solve the problem, and to identify paths to improve the situation going forward.

The main conclusion is that there is no silver bullet for this problem: if we want to make substantive progress, it has to be addressed from many angles. BCP38, which is sometimes promoted as *the solution* to the problem, is just one tool, and one that is not effective in some cases. Measurements, traceability, deployment scenarios and guidance, as well as possible incentives, communication and awareness, are among the areas identified by the group where positive impact can be made.

For example, measurements and statistically representative data are very important: if we want to make progress, we need to be able to measure it – an ability that is currently missing to a great extent.

Another recommendation that came out of this meeting was the possibility of anti-spoofing by default. The only place you can really do anti-spoofing is the edge (or as close as possible). Addressing the challenge at the edge seems only possible with automation and if anti-spoofing measures are switched on by default.

Read more about this in a whitepaper that contains the main takeaways from that discussion and articulates possible elements of a comprehensive strategy for addressing the source IP address spoofing challenge.

I ask you to read the whitepaper, and ultimately deploy these anti-spoofing technologies in your own network. Can you do your part to prevent DDoS attacks? And if you are willing to do your part, how about signing on to the Mutually Agreed Norms for Routing Security (MANRS) and joining with other members of the industry to make a more secure Internet?

Categories
Deploy360 Events Improving Technical Security

SINOG 1.6 workshop on DDoS and AntiSpam for Network Operators

SINOG_1.5 workshop

The SINOG (Slovenian Network Operators Group) organized an interim meeting on the topic of DDoS mitigation techniques and AntiSpam tools for operators. This meeting turned into a workshop, with some good presentations from experienced operators about how they do this in practice, and from some vendors showing what’s available on the market for this purpose. While the initial estimate was 30–40 attendees, this topic seems to be very popular: we are now expecting more than 100 people to show up and listen to this four-hour event with its packed agenda of good talks. The workshop starts today (1 April) at 16:00 CET (and that’s not a joke 🙂 ), and below you can also find a video stream link.

The workshop will be opened by Matjaž Straus Istenič (SINOG chairman) and then further chaired by Urban Kunc (SINOG co-chair) and myself.

The previous workshop’s (SINOG_1.5) theme was WiFi for operators, and the attendance turnout was brilliant, but we never expected that SINOG_1.6 would bring in even more people.

On 9/10 June, the Go6 Institute is organizing the 10th Slovenian IPv6 Summit (first day) and the SINOG 2 meeting (second day), and preparations for the “big” event are already underway. If you have any interest in the operator community in this region and would like to propose a talk, please send an email to the organizers <zavod@go6.si>.

Today’s workshop will be video recorded and streamed live. If you speak Slovenian, or are just curious about what’s going on there, you can join the live stream.

Categories
Building Trust Improving Technical Security Technology

Coordinating Attack Response at Internet Scale

How do we help coordinate responses to attacks against Internet infrastructure and users? Internet technology has to scale or it won’t survive for long as the network of networks grows ever larger. But it’s not just the technology: the people, processes and organisations involved in developing, operating and evolving the Internet also need ways to scale up to the challenges that a growing global network can create.

One such challenge is the unwanted traffic, ranging from spam and other forms of messaging-related abuse to multi-gigabit distributed denial of service attacks, that Internet users and service providers of all kinds are subject to. Numerous incident response efforts exist to mitigate the effects of these attacks. Some are focused on specific attack types, while others are closed analysis and sharing groups spanning many attack types.

In an effort to bring together operators, researchers, CSIRT team members, service providers, vendors, and members of information sharing and analysis centres to discuss approaches to coordinating attack response at Internet scale, the Internet Society is working with the Internet Architecture Board to develop and host a one-day “Coordinating Attack Response at Internet Scale (CARIS) Workshop”, intended to help bridge the many communities working on attack response on the Internet and to foster dialogue about how we can collaborate.

In particular we will be helping to build up a shared directory of incident response organisations and other efforts to help improve information sharing with and coordination between these diverse groups. We believe that having a clearer picture of the range of efforts around the globe and the most helpful ways to share incident-related information will improve the scalability of the overall attack response effort.

The workshop will take place on June 19, 2015 at the Intercontinental Hotel in Berlin, and is hosted by the 27th annual FIRST Conference.

Workshop information, including the call for papers, can be found here: https://www.iab.org/activities/workshops/caris/. The deadline for submitting papers is April 3, 2015.