Categories: Building Trust, Encryption, Privacy, Security, Strengthening the Internet

This Data Privacy Day It’s the Little Things That Count

Today we’re celebrating Data Privacy Day, which is all about empowering people and organizations to respect privacy, safeguard data, and enable trust.

Let’s face it: protecting your privacy can feel overwhelming. We seem to conduct our entire lives online, and it’s hard to miss headlines about our privacy being undermined, such as law enforcement trying to gain access to encrypted data. But whether you know it or not, you make choices every day about what you share and how you share it. These seemingly small actions can have a big impact.

You might already be doing some of these, but here are six actions you can take to protect your privacy:

  • Use end-to-end encrypted messaging apps. Switch to messaging apps that offer end-to-end encryption, such as WhatsApp, Signal, Threema, and Telegram. Some are better than others (Telegram, for example, only applies end-to-end encryption to its “secret chats”), so make sure to read the reviews.
  • Turn on encryption on your devices or services. Some devices and services offer encryption but don’t enable it by default. Check your settings and make sure encryption is turned on.
  • Use strong passwords. Don’t use a default password, a simple guessable password, or a password based on personal information, such as your pet’s name (the short sketch after this list shows one way to generate a strong one). No matter how strongly your device or application is encrypted, if someone can figure out your password, they can access your data.
  • Keep up with updates. No system is perfectly secure. Security vulnerabilities are always being discovered and fixed with updates. That’s why it is so important to keep up with updates to your applications, devices and services. The update could be fixing a vulnerability and making you safer!
  • Turn on two-factor authentication (2FA). 2FA adds another factor (like a code from an authenticator app or a bank security fob) to your usual log-in process (e.g. a username and password). Adding another factor makes it much harder for criminals to access your data.
  • Turn on erase-data options. Some smartphones and services have an option to erase your data after a set number of failed log-in attempts (often ten). Turn this on to protect your data if your phone is lost or stolen.
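To make the advice on strong passwords concrete, here is a minimal, illustrative Python sketch (not part of the original tips) that generates a random password or passphrase using only the standard library’s secrets module. The short word list is just a placeholder; in practice, a password manager will generate and remember strong passwords for you.

```python
# Illustrative sketch: generating strong, random credentials with Python's
# standard library. The word list below is a placeholder; diceware-style
# passphrases should really draw from a list of several thousand words.
import secrets
import string


def random_password(length: int = 16) -> str:
    """Return a random password made of letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def random_passphrase(wordlist, words: int = 5) -> str:
    """Return a passphrase of randomly chosen words, joined by hyphens."""
    return "-".join(secrets.choice(wordlist) for _ in range(words))


if __name__ == "__main__":
    print(random_password())      # e.g. 'f$2Lq;x8T!vWm3#Z' (different every run)
    print(random_passphrase(["river", "copper", "lunar",
                             "basket", "violin", "meadow"]))
```

The point is that randomness, not cleverness, is what makes a password hard to guess.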

This Data Privacy Day, join the global community of people who are taking steps to secure our data. Your small actions can make a big difference!


Image by Vlad Tchompalov via Unsplash

Categories: Human Rights, Identity, Privacy, Public Policy

Shield & Sword – 10 tips to protect yourself online

Last weekend, millions joined the Women’s March in the US and across the world. They stood up for their rights, and the rights of everyone. We need to do the same on the Internet.

The Internet gives everyone a voice, but we need people to protect those voices.

Online harassment and cyberbullying are real. And some groups are targeted more than others.

Last year, the Guardian exposed the stark reality in the field of journalism. An analysis of written comments posted in response to articles on the Guardian website revealed that of the ten journalists who received the most abusive comments, eight were women, and the two men were black.

They concluded this “provides the first quantitative evidence for what female journalists have long suspected: that articles written by women attract more abuse and dismissive trolling than those written by men”.

Sadly, who you are affects how you are treated by others online, as well as offline.

However, a powerful way to counter online abuse, threats and violence is to share our knowledge with each other so that we can become stronger champions of privacy and security, and to stand up for others when they need it.

So, to mark this year’s International Data Privacy Day, the Internet Society would like to share with you 10 tips to protect yourself online.

1. Know the terrain.

The Internet is a powerful tool for communication. Learn how to use the Internet, keep your eyes open for good and bad actors, and make the most of what the Internet offers.

2. Keep your private life private.

Keep your personal information separate from your professional role. Use different personas for different roles.

3. Protect communications.

Use end-to-end encryption and two-factor authentication for confidential communications.
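For the curious, the sketch below (not from the original post) illustrates how the one-time codes used by most two-factor authenticator apps are derived, following the time-based one-time password (TOTP) algorithm described in RFC 6238. The base32 secret is a made-up example of the kind a service shows you when you enrol a device; in practice you would simply use an authenticator app rather than code like this.

```python
# Illustrative sketch of how authenticator apps derive 2FA codes
# (time-based one-time passwords, RFC 6238, HMAC-SHA1 variant).
# The secret below is a made-up example; real secrets come from the
# service you enrol with and must be kept private.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)  # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


print(totp("JBSWY3DPEHPK3PXP"))  # a 6-digit code that changes every 30 seconds
```

Because each code depends on both a shared secret and the current time, a stolen password alone is not enough to log in.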

4. Obscure your location.

Remove location data from images and videos before posting. Turn off application access to location. Don’t disclose your location in public posts.
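As one concrete way to remove location data, the minimal Python sketch below (an illustration, not part of the original tips) re-saves a photo’s pixels without any metadata. It assumes the third-party Pillow imaging library is installed (pip install Pillow), and the file names are placeholders; many phones and photo tools offer an equivalent built-in option.

```python
# Minimal sketch: re-save only an image's pixel data, dropping EXIF metadata
# (including any embedded GPS coordinates) in the process.
# Requires the third-party Pillow library: pip install Pillow
# The file names are placeholders.
from PIL import Image


def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy the image's pixels into a fresh image that carries no metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)


strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```

Check the saved copy before posting; stripping metadata from the file doesn’t stop you from revealing your location in the caption or the image itself.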

5. Guard your devices.

They’re more precious than any jewels. Protect them from both physical and digital tampering. Use encryption and strong access credentials.

6. Prepare for an attack.

Find allies and prepare a plan for dealing with online harassment, doxing and other forms of abuse. Don’t feed the trolls! They don’t deserve your attention.

7. Stand firm.

Don’t let cyber bullies undermine what you are doing. Show them you are not afraid. Others will stand with you. Be willing to ask for help.

8. Beware of Trojan horses.

Look out for spear-phishers. Check before connecting with someone new. If something seems too good to be true, don’t trust it!

9. Lead.

Share your experience with others. Let people know that you are there to help.

10. Protect others.

If you host user-generated content, prevent users from posting derogatory or other abusive messages. Help remove personal information that has been exposed to hurt someone. Report offenders.

Please share your tips! This year, don’t sit by when you see abuse on social media. Offer a helping hand.

Categories: Privacy

The Internet Society believes privacy is key for a trusted Internet

Let’s use Data Privacy Day (28 January 2016) to advocate for respect of Internet users’ privacy across the world!

User privacy faces more challenges today than ever before: mass online surveillance, commercial profiling and tracking, an Internet of sensors, and even threats to confidential communications from governmental attempts to limit the use of encryption.

That is why it is so important that we, as Internet users, assert our rights and expectations, demand effective protection, and challenge practices that undermine our privacy.

It’s a question of ensuring an ethical approach to data collection and handling that provides legitimacy, transparency, accountability, proportionality and fairness, as well as empowering users so that they can exercise effective choice and control over their personal data.

The starting point should be “do no harm”.

The privacy risks that Internet users face are real and wide-ranging, extending beyond revealing something that was meant to be private or only shared for a specific purpose, to discrimination and other forms of harm.

It seemed like barely a day passed in 2015 without news of yet another major data breach affecting thousands or millions of Internet users. But, what about the silent privacy violations that occur every day that no one hears about? For example, organisations that misuse personal data entrusted to their care, industries that profit from your personal data without your knowledge or consent, and the covert pervasive surveillance that goes on in the background.

Also, the full extent of harm that today’s privacy breaches may cause in the future is still largely unknown. While it may be hard to prepare for the unknown, action can be taken now to mitigate privacy risks for Internet users through privacy-in-design, privacy-in-practice, strong internationally compatible privacy norms, effective enforcement, and user empowerment.

In this context, it is important to appreciate that there is an inherent power imbalance between data controllers and data subjects. Most often, the person who handles the data (the data controller) makes the privacy risk management decisions, but the person whose privacy is at risk is the data subject. One way to address this imbalance is to encourage data controllers to apply an ethical approach to data handling, in which they give due consideration to the interests of the data subject.

Also, data that is seemingly anonymous (e.g. sensor data) can often be easily linked to an individual. Further, the use of such data can have a privacy impact even if the identity of the individual (e.g. Amy, Maria or John) is unknown. It is therefore also useful to carefully and collectively consider how to mitigate the present and future privacy risks associated with such data: data that may not always fall within the legal definition of “personal data”, the gatekeeper criterion for privacy laws.

The Internet Society believes that privacy is key for reinforcing user trust in the Internet.

We are actively involved in championing and shaping privacy policy (e.g. in the OECD, APEC, IGF), Internet technology (e.g. IETF, W3C) and practices around the world so that we can all experience better privacy on the Internet.

Also, this week, my colleague, Robin Wilton, will be moderating a panel at the 2016 Computers, Privacy & Data Protection Conference in Brussels exploring ethical data handling and privacy risk. We invite you to join us in person if you are attending CPDP2016, or online via Twitter (#CPDP2016), and watch out for a blog post shortly afterwards on our Tech Matters blog.

If you are new to privacy concerns, try our Digital Footprint tutorials or our policy brief on privacy.

Join us in celebrating Data Privacy Day by sharing your ideas for better privacy on the Internet. Leave your thoughts here as comments, share them on social media, or write your own posts and articles.

Help us bring about a more trusted Internet!


Image credit: Stay Safe Online DPD banners

Categories: Building Trust, Privacy

Hey! Someone fragmented my Internet, and didn't even tell me.

Data Privacy Day 2016 is almost upon us (Thursday 28th Jan), and I’ll be hosting a panel on ethical data-handling at CPDP2016 to mark the occasion. But more about that later.

Meanwhile, over on the Internet Policy mailing list[1] a discussion is raising some very interesting topics whose relevance will continue to grow in the coming months. The discussion started with Bill Drake posting a link to a paper he co-authored with Vint Cerf and Wolfgang Kleinwachter, on “Internet Fragmentation: an Overview”; the paper was launched recently at the World Economic Forum, and it’s well worth reading:

http://www.weforum.org/reports/internet-fragmentation-an-overview

That, in turn, prompted Richard Hill to raise the question of “openness”: what is an “open” Internet, and what does it imply for service providers and users? As Richard noted, Bill’s paper included the following observation:

An Internet in which any endpoint could not address [and exchange data packets with] any other willing endpoint … would be a rather fragmented Internet.

Richard goes on to propose that the endpoints in question must be “willing”, and he gives a couple of examples. If an endpoint accepts traffic that has been processed by a firewall, that may introduce fragmentation of a kind, in that not all the packets sent to that endpoint might complete the journey. But that’s a good kind of fragmentation, Richard argues, because it happens with some degree of knowledge and consent on the part of the recipient, and it provides them with the benefit of blocking malware and attempted intrusions. Similarly, if a user activates some form of ad-blocking, then one could say that their endpoint is receiving only partial traffic. Again, this is arguably a form of fragmentation, but a beneficial one from the user’s perspective, and usually an explicit choice on their part.

These examples illustrate that users may exercise very different levels of consent and control in different circumstances. For instance, users behind an enterprise firewall may have no option but to accept whatever firewall policies are put in place on their behalf. Similarly, many users rely (whether they know it or not) on third-party fragmentation of traffic in the form of prevention of DDoS attacks, and the automatic filtering-out of large volumes of spam mail.

And this is where things start to get interesting.

The examples I’ve given seem intuitively clear cut, because a number of usually implicit assumptions are at work. For example, we assume that users would willingly choose the options they in fact get, because the outcomes are probably better than the alternatives. We assume that the third parties are acting genuinely in the interests of the user, and that they aren’t also filtering things which the user would want to receive. We assume that ad-blockers are doing their job as advertised, and that they aren’t simultaneously receiving payments to let certain ads through regardless.

At the other end of the spectrum from these examples of “good” fragmentation, there are of course plenty of examples of “bad” fragmentation: censorship, malicious tampering with the routing or contents of traffic, interference with endpoints, and so on.

And as those usually implicit assumptions suggest, there are many ways in which the question can be a lot less clear cut. In fact, between the “good” and “bad” ends of the spectrum, there is a whole continuum of cases where it’s harder to tell whether what is done on the user’s behalf is in their best interests; where users themselves might not even be certain what they would choose, or how to express their preference.

Crucially, there are many cases (especially to do with advertising and the collection of personal data) where it is almost impossible to associate a user’s choice with a particular outcome, one way or another.

These “middle cases” are extremely common. They have to do with things like the default “Do Not Track” or cookie settings in your browser; whether apps ask for location data from your mobile device; and whether you are expected to register your real name when signing up with a website.

In fact, although we might notice them less than we would notice censorship or massive volumes of unfiltered spam, these more subtle factors not only shape the Internet as we see it, they also shape the way the Internet sees us. They raise questions of access to information, of self-determination and of personal identity that should concern us all.

I’ll be exploring many of those questions over the coming months, as part of ISOC’s programme of work on ethical data-handling. We’ll be trying to produce clear problem statements and, more important, practical guidance about why ethical data-handling is relevant and compelling… and how to do it.

As well as this week’s CPDP panel, I’ll soon be setting up a round-table workshop, setting out our ideas at several conferences/events, and posting updates here and on Twitter (@futureidentity). I look forward to hearing your thoughts.

—-

[1] To join the Internet Policy email list, please log into the ISOC Member Portal – https://portal.isoc.org/ – and then choose Interests & Subscriptions from the My Account menu.

Image credit: Masakazu Matsumoto on Flickr.