Improving Technical Security

Security Go: Young People Paving the Way to Better Online Security

At the Internet Society’s Student Pizza Night during IETF 96, I asked several students from Europe, Asia, and North America how they would respond to the same question:

How do we make young Internet users more secure?

Every student I asked said that security for young Internet users is a problem. The majority thought that both poor security practices and a general ignorance of the risks contribute to the problem. Pokémon Go was used as an example: millions of younger users may be playing games without considering their security and privacy implications. In this specific case, the developer, Niantic, gathers data from its users and may give it to third parties. Although the requirement has since been removed in a patch, the Pokémon Go app also demanded full access to some users’ Google accounts.

While a few students supported better education about security for young users, most argued for a technical solution. They suggested that strong end-to-end encryption should be automatic on apps and services. By taking away the effort or knowledge needed to use security tools, most students thought that a greater number of young people would be better protected. This is particularly important for the youngest of users who may not yet know how to put good security practices in place.

Some students felt that improved education would help young people learn security and privacy skills. Many others disagreed. They said that few young people will apply difficult-to-use security tools, regardless of education. Indifference was also cited as a problem. This reminded me of a friend who said, only half-jokingly, “In exchange for Pokémon? Niantic can read all my emails and my Google searches if they want.”

While I agree that technical solutions can help reduce the problem, I do not believe they are the only solution. It is important that we give people better tools to protect themselves and that they are automatically or easily implemented. There are some attacks, like phishing or social engineering, that may be difficult to address with technology. Education and awareness campaigns, particularly those developed with input from young people, are important for equipping youth with the knowledge and skills necessary to defend against these attacks.

Although the students I spoke with focused on either technical or educational solutions, there are many more ways to help young users be more secure online. It is important that youth think about their security online, talk about it with their friends, and are actively involved in creating new security solutions. [1]

In Pokémon Go, the closer you are to a Pokémon, the more likely you are to find and catch it. In life, the closer you are to a problem, the more likely you are to solve it.

Five Ways Young People Can Boost Their Online Security

1. Talk to your friends and family. How do they stay safe online?

2. Become active. Join the Youth Observatory – a group of young people around the world who are changing how we make Internet policy!

3. Check out your social media privacy settings and app permissions. Read US-CERT’s tips for Staying Safe on Social Networking Sites and Tripwire’s article on flashlight app permissions.

4. Learn about your online life! There are some great tutorials that can help like the Internet Society’s Digital Footprint.

5. Tell decision makers your voice counts when it comes to discussions around the future of the Net. Use the hashtag #dreamInternet and let people know your solutions for helping young people be more secure online.


[1] The Internet Society Fellowship to the Internet Engineering Task Force enables technology professionals, advanced IT students, and other qualified professionals from emerging and developing economies to attend IETF meetings.

Image credit: University Life 30 CC BY 2.0


Four Basic Steps to Protecting Your Digital Privacy in 2015

(Photo: Don’t Spy On Us – Light Brigading / CC BY-SA)

Every January 28th for close to a decade, 27 European countries have recognized Data Protection Day, while the U.S. and Canada have recognized Data Privacy Day, to raise awareness about the importance of protecting personal information online. And while the issue has recently become more heated—prompting U.S. President Obama’s call for a Consumer Privacy Bill of Rights—a new global study shows that most Internet users “still don’t feel they are completely aware of the information that’s being collected about them.”

The results of the study, conducted by Microsoft through more than 12,000 online interviews across 12 countries, are shocking given that it’s been almost 10 years since the first Data Privacy Day. Then again, different kinds of personal data are collected in different ways across different websites, applications and devices, making it extremely difficult for consumers to understand what data is collected, where, and by whom. Did you know, for instance, that Facebook, Twitter and Google+ track your visits to any website with a ‘Like’, ‘Tweet’ or ‘+1’ button whether or not you click that button? Further complicating things: how this data is used is just as varied, and every day businesses are finding new ways to use it.

We thought it would be useful to mark Data Privacy Day 2015 by reviewing some basic facts about where and how data is collected, what data is collected, how it is being used today, and what you can do to protect yourself.

Basic Fact #1: Data is collected about you every time you visit a website, shop online, engage in social sharing, enable location services and send digital messages and email.

Basic Fact #2: The information collected on you can be broken up into two categories: 1) data you provide by consenting when you register with a website; and 2) data that is taken without your explicit knowledge or consent from your computer and browsing history. The former can include name, address, email, phone number, and more. If you’re on Facebook, for instance, you may have registered marital and employment status (and even the name of your employer). The latter can include anything from your IP address and general geography to insights into your age, gender, income, hobbies, health status and financial situation, by way of your browsing and purchase history.

Basic Fact #3: Websites that collect information about you need some way to tell if you’re the same person visiting multiple times. To link the information you leave on each successive visit, the website might set a ‘cookie’—a small text file stored in your browser. A cookie is a kind of memo the website writes to itself, which it can retrieve and read when you visit again. Cookies are also created by other websites running ads, widgets and other features on a page, which means that visiting one website can result in cookies being set by companies you weren’t even aware you were ‘visiting.’
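To see what one of these ‘memos’ actually looks like, here is a minimal sketch using Python’s standard `http.cookies` module to parse a cookie header the way a browser stores it. The values (`session_id`, `ads.example.com`) are made up for illustration, but the structure—an identifier plus a domain and lifetime—is exactly what a tracking cookie carries:

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header as a browser would before storing it.
cookie = SimpleCookie()
cookie.load('session_id=abc123; Domain=ads.example.com; Path=/; Max-Age=31536000')

morsel = cookie['session_id']
print(morsel.value)        # abc123 — the identifier linking your visits together
print(morsel['domain'])    # ads.example.com — a third party, not the site you typed in
print(morsel['max-age'])   # 31536000 seconds — this tracker persists for a full year
```

Note the domain: it belongs to an ad network, not the page you visited, which is precisely how one visit results in cookies from companies you never knowingly ‘visited.’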

Basic Fact #4: Cookies are good and bad. Because they ‘remember’ you, they can customize a site to your information and preferences, helping you navigate it more quickly, easily and safely. They are the reason a website can recall your user name and password, and save shopping cart items even if you’re not logged in. They also make it possible for you to shop securely online by authenticating your identity throughout the purchase process. But because they can also track your online movements, and the information you input into online forms, they can link information about you without your awareness or consent.

Basic Fact #5: The information being taken from you may not seem like much, but companies are increasingly stringing these seemingly disparate pieces of data together to get a bigger, more complete picture of you, and using it to make inferences about your behavior, including your habits, preferences, values, aspirations and intentions. What’s more, while we may believe this data doesn’t personally identify us, research from as far back as 2008 shows that supposedly anonymous data isn’t necessarily hard to re-identify.

Basic Fact #6: Inferences about you based on your personal data can have significant implications that you may have never considered. While these can vary, for average consumers the threat of financial implications is particularly high. According to one study, you may pay more than others when shopping online based on your web browsing history or the kind of smartphone you own. Some consumers have seen their credit limits reduced by their credit card companies because they shopped at stores frequented by cardholders who don’t have good repayment histories. From there, it’s not inconceivable to think insurance companies might eventually string your data together to determine if you’re insurable (and what kind of premium you should be charged based on your perceived risk) and credit card companies could use it to determine your creditworthiness (and charge you higher interest).

So, if you’re starting to feel alarmed, what can you do to keep too much of your data from falling into the wrong hands? Again, the basics:

➢ ‘Fracture’ your digital identity. Strategically use different email addresses, browsers, credit cards, and maybe even devices, for different web activities (like personal, work and online shopping) to make it more difficult to collect one cohesive data set about you.

➢ Proactively check privacy settings. Browsers, devices and apps are often set to share your personal data out of the box. Find and review default settings and see if you’re comfortable with them. A quick search for “default settings” and a specific type of browser or device will yield information about that system’s settings and how to find and change them.

➢ Regularly and actively review your browser’s cookies. You may be shocked by how many cookies have been set on your browser by sites you weren’t aware of visiting. See if your browser lets you block third-party cookies; if not, there are plug-ins you can use. There are sites that can tell you what browser you’re using, whether cookies are enabled, and whether your ‘Do Not Track’ capability is on. If you can’t find where your browser stores its cookies, do an online search for that browser and “how to find cookies.” Then you can opt for browser privacy settings such as ‘Do Not Track’ and ‘Private’ browsing to protect against default cookies.

➢ Read the fine print. Know the privacy policies of the devices, websites, social sharing services and applications you use. Find out what permissions apply to the content you upload and how it can be used.

Unfortunately, there’s no one-click answer to controlling your personal data. It requires persistent education, consistent engagement and ongoing management. But there are many online privacy tools that can help make it easier, and allow you to keep track of the information you’re sharing as you surf. Plug-ins like HTTPS Everywhere encrypt your communications with many major websites, making your browsing more secure, while Ghostery blocks tracking software.

The key thing is, regardless of what tools you use, be willing to adjust your online habits. Sure, it’s a little inconvenient having to lock your house or car every time you leave it, but it’s better than being robbed. Your personal data is valuable: it’s worth giving up a little convenience to protect it.

Identity Privacy

Ten Tips To Manage Your Digital Footprint

For a lot of us it’s shopping season. And for those of us who can’t be bothered with crowded malls or queues at the register – it’s online shopping season.

But before you spend time loading up your online shopping cart, take a few minutes to learn a little about things like your digital footprint.  When it comes to your online privacy and identity – it’s the gift that keeps on giving!

Here are 10 tips that can help!

1. Get a better understanding of the issues.

There’s a lot of information about privacy to take in. Think about the implications of what you’re sharing when you sign up for new services, or install new apps.

2. Develop your ‘basic hygiene’ habits

Privacy is about context. If you use one email address for home and another for work, or one credit card for online shopping and another for everything else – it will help keep different parts of your digital footprint separate.

Be mindful about what you share via social sites and elsewhere, because every selfie, retweet, or like is probably more public and more persistent than you think.

3. Become a sophisticated user of your online tools and services

Browsers, devices and apps are often set to share your personal data out of the box. Take a look at the privacy settings and see if you’re comfortable with what the default settings are.

When an application asks for “permission to send you push notifications and use your location data”, think about whether that’s really what you want. Your camera and smartphone usually record the time and location in each photo you take, and when you share those photos, you could be sharing that data.

4. Find and use specific online privacy tools

There are many helpful online privacy tools. Use them to protect your online privacy, and to keep track of what information you’re sharing as you surf.

5. Manage cookies

Check what settings your browser(s) have for cookies; find your browser’s “cookie store” and spend some time looking through it. Notice how many of the cookies in there have been set by sites you weren’t even aware of visiting… and then see whether your browser allows you to block third-party cookies. Some browsers offer this as an easy option, but there are also a lot of plug-ins you can use to help control tracking cookies.

6. Check your privacy settings

Erasing cookies only goes so far. You should also know your rights when it comes to information that you share on websites, especially open services such as social networks, blogs, and photo sharing sites. It’s a lot easier to prevent your data from being shared than to try to remove it from an advertiser’s database later. Check what permissions apply to content you upload.

7. Understand the realities of sharing your stuff

When you post something on the Internet, it’s out there forever. Deleting online content often only removes it from public view; it can be stored in archives and databases indefinitely. Even deleting your account isn’t a guarantee that your content will be deleted. It may still be accessible through other means.

8. Think about the trade-off between convenience and privacy.

OK, one is instant gratification and the other is a long-term intangible… but the choice is still up to you. Maybe a little inconvenience is worth it, to regain some control over your digital footprint.

9.  Understand the “bargain” you make with online service providers.

“Free” doesn’t mean “free”: it usually means you pay through the monetization of data about you. “Freemium” doesn’t mean your data isn’t monetized: it usually means you don’t see advertisements in that service, app or game.

10. “There is no app for this”.

That’s the bottom line. We can inform you and suggest some privacy tools, but the reality is that there’s no one-click answer: in the long term, the best way to improve your privacy is to change your online habits. We’re here to help, but you hold the key.

Want to know more? Watch our tutorials on managing your digital footprint.

Building Trust Identity Privacy Tutorials

Data Privacy Day: Understanding Your Digital Footprints

It may have been a quiet week in Lake Wobegon, but elsewhere things have been decidedly lively.

On Jan 17th, President Obama made his statement in response to his Advisory Board’s review of NSA surveillance practices, and Internet Society (having already commented on the review) followed up with its observations on the President’s statement.

Meanwhile, in Northern France, the International Cybersecurity Forum (FIC2014) got under way, with some 2,500 attendees gathering in Lille to hear, among others, the French Minister of the Interior outline his policies for countering the cyber threat while safeguarding citizens’ basic freedoms.

And before FIC2014 had even finished, the 2014 conference on Computers, Privacy and Data Protection (CPDP) had already started in Brussels.

All these events raised issues which directly concern us – digital citizens – and the digital footprints we create as we go about our daily business.

Crucially, we need to look at whether it is possible to control (or at least manage… or even see…) the trail of personal information we leave on the Internet.

Consider the following:

  • Obama proposes new governance measures for the collection of US citizens’ telephone metadata, but skirts the question of privacy as a universal right, and says nothing about the economic damage done to companies by the loss of trust in Internet technology. By and large, nothing the President said suggested any great change with regard to the average citizen’s data: mass interception and pervasive monitoring will continue, as will the long-term storage of vast amounts of tracking data. If there is to be substantive change, all the indications are that it will have to come from citizens themselves.
  • At FIC2014, the debate on the question of whether online anonymity is possible shows increasing maturity and sophistication. The key point is made that achieving ‘anonymity’ today does not mean what it meant 10 years ago, nor what it meant 1000 years ago. What implications does that have for 10 years hence? That’s an important question, because the data we classify as ‘anonymous’ today will still be around in 10 years’ time: will we still think they are anonymous, and will we wish, in 2024, that we had thought more carefully in 2014?
  • And at CPDP, a troubling theme is the suggestion – by some stakeholders – that we should stop worrying about controlling the collection of personal data, and instead focus our efforts on achieving better control over its use. I couldn’t agree less. Imagine how we’d feel if the nuclear industry adopted the same philosophy. For all that personal data is an increasingly vital economic asset, its retention also represents a growing liability – and by far the best way to manage that liability is not to collect the data in the first place. The principle of data minimisation, as an important element of privacy by design, is not a new one, but our interpretation of it needs to keep pace with innovation.

Despite the imbalance in the power relationship between us and service providers, data minimisation is not just something we should insist providers do on our behalf; the privacy outcomes are something for which we must take more responsibility ourselves.

The implications for individual consumers and citizens are clear. We all need to be doing more to understand our digital footprints, to understand the asymmetric power relationship they represent, and to take responsibility to the extent that we can. To that end, and to coincide with Data Privacy Day 2014, Internet Society is launching a set of materials to help us all understand our digital footprints:

What they are, and what we can do to manage them.


We will follow this up with a short animated video in a few weeks. You can use that as a “nudge point”, to see if you have started thinking differently about your online privacy and your digital footprints. I hope you will.

Identity Privacy

The Language of Privacy

Is it possible to come up with a single definition for privacy? Moreover, can we craft a definition that is based on durable principles, and remains robust despite developments in technology and online services? When it comes to online privacy, the Internet Society has been using the same definition for a while, and I still think it is valid. I went back to it recently, to check that I was quoting it correctly, and the more I looked at it, the more I thought it would be worth breaking it down into its elements and validating each one. Here’s the definition:

“Privacy is about retaining the ability to disclose data consensually, and with expectations regarding the context and scope of sharing.”

Everything is in there for a purpose.

  • “retaining the ability to disclose”: privacy depends on user control; if you don’t have meaningful control over whether or not you disclose something, you don’t control your own privacy.
  • “disclose data”: privacy is about disclosure, not about keeping all your data to yourself (that is secretiveness… and taken to extremes, it’s a symptom of what we might even call paranoia). Privacy is a social construct, or set of conventions, relating to the disclosure of data, not to secrecy.
  • “consensually”: orthogonal to the idea of control; not only is it important that I consent to reveal things about myself, it’s also important that I respect others’ wishes about disclosure of data relating to them.
    • This is a key principle in two respects: first, if Alice tells Bob something, her privacy depends on Bob not telling Carl. This is a problem I describe as “privacy beyond first disclosure”. It is difficult to solve by technical means, because once Bob knows the thing Alice tells him, it is hard to prevent him, technically, from further disclosure.
    • Second, if the only way Alice can tell Carl something is via Bob, Bob’s role and obligations are a factor. This is especially true in today’s online world, where all interactions are mediated via third parties.
  • “expectations”: note that, for global applicability, we don’t use the word “rights”… but bear in mind that in some cultures/jurisdictions, our expectations about privacy may well stem from the fact that privacy is regarded as a fundamental right (not necessarily an unqualified one, but fundamental nonetheless). Expectations may be well- or ill-founded; and they are often subject to being disappointed.
  • “context of sharing”: if I tell my doctor personal details in order to be treated, I don’t expect those details to crop up on a “Hilariously Embarrassing Ailment of the Month” website.
    • The healthcare context may appear to be clear-cut, but it really is not. For example: justification for disclosure of healthcare data is often claimed on the grounds that it helps medical research… but the medical research benefits are speculative, while the financial benefit to research companies is not; genetic data about one person is intimately revealing about their parents, siblings and (potential) descendants; supposedly “anonymised” clinical trials data may prove to be easily re-identified… This is not to argue one way or the other – just to illustrate that healthcare privacy is not a simple context.
  • “scope of sharing”: as Danah Boyd puts it, “publishing, to a wider audience, something that was intended for a narrow one may be just as much a violation of privacy as publishing something that was not meant to be published in the first place”. Scope is important, and intimately related, of course, to the first point under “consensually”, above, and to the matter of “expectations” and “control”.

For all its clarity, though, this definition raises as many questions as it answers. None of those questions is simple – and the answers are many, and nuanced. Currently, the technical means we have to express our privacy preferences and wishes are still pretty crude, and mostly binary. The problem, at its heart, is the clumsy language they offer us, with which to express subtle inter-personal concepts like trust and discretion.

If we start thinking of privacy as a ‘language’, we might start asking what users would like to say in that language, when they engage with apps and services. Currently, users are lucky if they can say “yes” or “no”. We need a much richer vocabulary, with verbs and tenses, (especially the conditional tense!). We need a language that can express wishes, and we need translation, so that we can make ourselves understood across different cultures of privacy. Then we’ll need to teach users how to become fluent in the language of privacy.


Privacy Resolutions for 2013 – New Digital Identity Tutorials