My answer to that question was “an unequivocal yes!” Below is the longer answer. The key lies in accountability.
Old wine in new bottles
No matter how you think about the Internet of Things, it is clear that it captures a vision of mind-boggling opportunities. Suddenly everything around us is being connected. Security cameras, thermostats, fridges, and cars turn into connected computers. The speed at which the Internet of Things is changing our lives is unprecedented. Speed and impact amplify the challenges that we face on the Internet today: challenges with security and privacy. Companies that used to build appliances or toys are now suddenly, and often without realizing it, IT companies. They often make the same mistakes that earlier generations made. The Mirai botnet – which exploits network-attached cameras and DVRs by logging in with factory-default passwords – relies on the same class of weakness that the Morris worm exploited back in 1988. Suddenly we are faced with questions that we didn’t have before: Am I OK with my television listening to my conversations and sending them to the cloud? Or, is my daughter’s toy doll a spying device?
What happens if our conversation data is uploaded to the cloud and then is stolen during a data breach? Or, will that data then be used to influence our behavior in ways that we may, as a society, not find acceptable?
Collective Responsibility and Accountability
Let’s focus primarily on the security questions.
To face the security challenges, we need an approach that takes into account the nature of the Internet. The Internet has no central control. Internet security is distributed and is enforced at the edges – in your home or in your company. The Internet is not built from one gigantic blueprint. Rather, it developed organically out of interoperable and interconnected building blocks.
In an environment where everything is interconnected, the approach that works is Collaborative Security. Different players collectively assume responsibility for those aspects of the Internet which they can influence. They take into account whether their action or inaction poses a threat to the Internet as a whole. It’s like living in a giant apartment building: we expect everybody to lock the front door to the building. If you forget, your neighbor may be in trouble. And the best way to get solutions is bottom-up. To continue the analogy, the tenants of the apartment building are probably more effective in developing building access policies than the state legislature. Sometimes they address the problem by hiring a doorman; sometimes a social agreement is sufficient. They understand their local environment. But their decisions about security have broad implications for the whole neighborhood and, eventually, the entire village or city.
One aspect of collaborative security we haven’t talked about often is accountability. Collective responsibility works better if the participants are in some way accountable for action or lack of action. Let’s take the example of devices that are shipped with simple-to-guess default passwords, something that has been frowned upon since the early 90s. But now we have new players in the IT marketplace who have to learn these lessons anew. Several factors contribute. First, there is a lack of experience: if you are in the business of making dolls, your expertise is in children’s toys, not network security. And even if you are IT savvy, the reality is that security-by-design is difficult, costly, and time-consuming. In a highly competitive marketplace, like that for consumer goods, manufacturers feel intense pressure to rush products to market at minimum cost. The incentives are misaligned, as Schneier has often argued. How do we realistically hold doll manufacturers accountable for bringing “smart” dolls to market with insufficient security?
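To make the default-password problem concrete: the security-by-design alternative that is often recommended is to provision each device with its own unique, random credential at manufacturing time, rather than shipping every unit with the same hard-coded password. A minimal sketch in Python (the function name and password length here are illustrative assumptions, not a standard):

```python
import secrets
import string

def provision_device_password(length: int = 16) -> str:
    """Generate a unique, random per-device password at manufacturing time,
    instead of shipping every unit with the same hard-coded default.

    Uses the cryptographically secure `secrets` module rather than `random`.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each device gets its own credential, e.g. printed on its label,
# so a single leaked password no longer compromises the whole fleet.
password = provision_device_password()
```

A Mirai-style scanner that tries a short list of shared factory defaults gains nothing against devices provisioned this way, which is exactly the class of weakness the botnet exploited.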
Market and Legal
There are roughly two factors that can reinforce accountability: market and legal mechanisms.
One possibility is to create sufficient consumer demand for security and privacy. If we successfully get consumers focused on security (and privacy), this might be enough to create a market where secure products have a competitive advantage. Perhaps consumer organizations (like Consumer Reports in the United States) could assess not just the physical safety of products but also their cybersecurity, to help consumers make effective choices. Consumer campaigns could raise awareness and help consumers see the value of security. Governments may also shape the market for security by procuring products that implement the best current security practices. One could imagine a scenario whereby security incidents and data breaches that affect a large customer base have a serious impact on a company’s stock value.
The accountability mechanism at play here is a company’s bottom line. Organic food was not really a thing a few decades ago; now consumers are willing to pay more for products that carry that label. What can we learn from the dynamics at play there?
Legal mechanisms may also force compliance or assign liability. (I am keeping criminal law out of this equation for now.) We are familiar with these mechanisms: without meeting minimum safety standards, you may not put a vehicle on the road. And when you don’t comply, you can be held liable for the accidents you cause.
IoT and the policy toolbox
Legal and regulatory action has its place in securing the Internet of Things. But we should not overestimate its effect or underestimate its complexity. There will be jurisdictional complexity, enforcement challenges, and unintended consequences.
The complexity of the IoT ecosystem comes from the diversity of its societal applications, technical domains, and policy domains. There are many ways to approach this. First, take a stab at the requirements and applications in the various sectors: look at industrial automation, the power system, healthcare, automotive, and home automation. Second, consider the different contexts in which IoT vulnerabilities pose a threat: IoT as a botnet, IoT as a privacy intruder, or IoT as a physical security threat. Third, cut across the components that make up an IoT environment: look at devices, cloud infrastructure, data brokers, and app developers.
One way to deal with the policy complexity of IoT is to think about generic requirements, irrespective of the sector. For instance, think of IoT as a potential privacy threat and set the general boundary conditions based on well-understood public interest requirements. Define the rights of data subjects and liabilities of data controllers across a broad set of sectors. This approach may be better than defining those rights and liabilities for specific sectors because it means that one does not have to go back to the drawing board every time new issues arise. For example, in the US, there are privacy regulations that apply to the rentals of video tapes. Those do not apply to online services like Netflix. It seems logical that the expectations of movie buffs about their privacy are the same regardless of the delivery mechanism.
That all said, sometimes it makes more sense to set specific requirements. The privacy and safety concerns of medical devices are different from consumer toys.
Another axis of complexity comes from the speed of change in IoT technology. Care needs to be taken that rules and regulations do not address the problems of today at the expense of the innovation of tomorrow. They need to be future-proof. For example, much of the focus right now seems to be on regulating devices. But we have passed the point where the Internet of Things is just devices connected to the Internet. It forms a complex interconnected system that includes components such as middleware, application clouds, external apps, etc. Thus, an approach that only focuses on devices will miss the broader security threat to the network at large.
Additionally, one has to take into account the unintended consequences of policy and regulatory measures.
The Internet is a complex, dynamic, global environment where measures may have unintended side effects. The spill-over effects of some measures may not be well understood or may be hard to predict.
For example, software often comes with a waiver of liability. Introducing liability may have a negative effect on open-source software development. Not the result you want, given that open-source software is a major driver of security innovation.
The problems that we are facing are urgent. The security problems with consumer IoT devices are rapidly becoming human safety issues.
So when I am asked: Internet of Insecurity: Can Industry Solve It or Is Regulation Required?
Then my ‘unequivocal yes’ answer translates to: both are needed. But my answer comes with a warning that regulatory hammers such as “banning insecure devices” are not going to be very successful. Regulatory tools are likely to be more effective in creating the right environment – an environment in which solutions can develop. In other words, regulations themselves are not likely to be the solution. Understanding possible side effects before introducing rules and regulations is as important as monitoring their effects and side effects after they have been introduced.
Responsibility is not in the hands of any single institution.
Industry must take a leadership role in making security a business differentiator. It must create best practices. And it must be held accountable by consumers, shareholders, and, as a last resort, governments.
Government should translate societal expectations into boundary conditions that must be met by consumers and industry alike. Furthermore, policies that assign liability will be a factor in creating accountability, as will imposing serious fines when the boundary conditions are not met.
This is what we mean by collaborative security – no one actor holds the keys to “a solution” to the security challenge, and actions taken will have reactions across the ecosystem. So technologists, civil society, and policymakers must come together to understand the issues that face us and address them head on.