
GLIF 2018 Held at the Home of Hamlet

The 18th Annual Global LambdaGrid Workshop (GLIF 2018) was held on 18-21 September 2018 at the Kulturværftet in Helsingør (Elsinore), Denmark. Kronborg Castle, located next to the venue, was immortalised as Elsinore in William Shakespeare's Hamlet, but there proved to be nothing rotten in the state of high-bandwidth networking as 50 participants from 19 countries came to hear how these networks are facilitating exascale computing in support of biological, medical, physics, energy production and environmental research, and to discuss the latest infrastructure developments.

This event was organised by myself with support from NORDUnet, who hosted it in conjunction with the 30th NORDUnet Conference (NDN18), where I also took the opportunity to raise awareness of the MANRS initiative.

The keynote was provided by Steven Newhouse (EBI), who presented the ELIXIR Compute Platform being used for analysing life science data. In common with high-energy physics, genomics research produces a lot of data, but this data is more complex and variable, requires sequencing and imaging on shorter timescales, and of course raises privacy issues. The European Molecular Biology Laboratory is based across six countries and employs over 1,600 people, but also collaborates with thousands of other scientists and requires access to existing national repositories as well. High-bandwidth networks are therefore necessary to interconnect their on-site compute and storage clusters, but will increasingly be necessary to facilitate connectivity with other research and commercial cloud resources such as HelixNebula.

David Martin (Argonne National Labs) continued this theme, by presenting on the US Department of Energy’s Exascale Computing Initiative. This aims to develop and operate the next generation of supercomputers at the Argonne, Lawrence Livermore, Los Alamos and Oak Ridge National Labs by 2021, along with a software stack that will present a common computing platform for supporting advanced research applications and neural networks. The Argonne Labs Computing Facility will be based around an Intel Aurora supercomputer with over 1,000 petaflops of processing, 8 PB of memory, and 10 TB/s of input/output capability that will require future network connections in the petabit-per-second range.

Joe Mambretti (Northwestern University) then discussed the Open Science Data Cloud (OSDC), which is an open-source cloud-based infrastructure that allows scientists to manage, share and analyse large datasets. The aim is to have 1-2 PB of storage at each participating campus, interconnected with 100 Gb/s+ links, but presented and managed as a common namespace with uniform interfaces and policies.

The rest of the day was devoted to how network automation can integrate compute and storage facilities, particularly across multiple domains. Migiel de Vos (SURFnet) presented the work being undertaken for SURFnet 7, and explained the distinction between automation and orchestration whereby the former is considered task and domain specific, whilst the latter is developing intelligent processes that consist of multiple automated tasks across multiple domains. This required the development of new information models, standardised interfaces, automated administration, and then predetermined service delivery agreements.

Gerben van Malenstein (SURFnet) then discussed LHCONE Point-to-Point Service that allowed Layer 2 circuits to be dynamically established between Data Transfer Nodes for exchanging data from the Large Hadron Collider. This was built on the AutoGOLE work which was now enabled on 21 open exchange points. Nevertheless, whilst AutoGOLE was a functional and proven multi-domain system, there was still limited uptake by network services and end-users, which was necessary to completely remove human configuration of network equipment and create a truly global research platform.

Most of the following day was devoted to technical discussions chaired by Lars Fischer (NORDUnet) and Eric Boyd (University of Michigan). These focused around some practical examples of network automation being used at the University of Michigan, a passive network measurement system with programmable querying at 100 Gb/s line rates that was being developed by the IRNC AMIS Project, as well as discussions on how to automate the generation of network topology maps.

Topology maps are useful for showing users how they can reach counterparts in other parts of the world, and where particular services are available. They are also useful as a marketing tool to show investors and stakeholders how they contribute towards creating a truly global infrastructure, and to demonstrate how the NREN model is accepted around the world. The GLIF map, for example, has become a somewhat iconic piece of artwork.

Other developments included the establishment of a new exchange point called South Atlantic Crossroads (SAX) in Fortaleza, Brazil, which was expected to interconnect with new cable systems to Angola (SACS) and Portugal (EllaLink), as well as to AMPATH and SouthernLight over the existing MONET connection. There were also plans to procure a new 100 Gb/s connection from Europe to the Asia-Pacific, running from Geneva to Singapore via the Indian Ocean to supplement the existing link from Amsterdam to Tokyo via Russia.

There were further updates on the new KREOnet network which supported 100 Gb/s links between five major Korean cities and Chicago (StarLight) via KRLight, as well as multiple 10 Gb/s links to 11 other Korean cities, Hong Kong and Seattle. The KREOnet-S infrastructure further offered SDN capabilities permitting dynamic and on-demand virtual network slicing, whilst a Science DMZ provided high-performance computing facilities for KISTI’s new 25.5 petaflop supercomputer.

SURFnet is transitioning its network to SURFnet 8 and will be upgrading its core network and international links, whilst StarLight is developing a Trans-Pacific SDN testbed, as well as an SDX for the GENI initiative.

The closing plenary session focused on how high-bandwidth research connections and exchange points can be better planned and coordinated, and whether a new entity should be created to support this. The GLIF Co-Chairs Jim Ghadbane (CANARIE) and David Wilde (AARNet) outlined some ideas around this, and then hosted a discussion on how things should be progressed.



How to Reform Basic Education for a Digital Future: Views from a Multistakeholder Group

In June 2018, in the city of Panamá, a parallel session was organized by the Internet Society during the international meeting of ICANN 62. This session aimed to promote a key discussion about our digital future: the impacts of technology and the Internet on future jobs.

This article is an outcome of the discussion carried out by a particularly diverse table of young people* from different stakeholder groups that chose the subject of “the future of education” as its central debate point.

The question that drove the debate was: what should basic education look like in the future? This inquiry originates from the fact that the mainstream method presently deployed across the world assumes memorization of information as the most substantial part of the learning experience.

Even schools that attempt diverging methodologies still need to invest in that route to some degree, as the selection processes of most universities and many job opportunities rely on some form of standardized testing.

A glaring problem with this approach, though, is that memorization is something machines are incredibly good at, while most humans can only reliably hold on to a limited amount of information.

So, why are we so focused on teaching the young how to excel at tasks that will inevitably end up being outsourced to machines in one way or another? This system almost works towards reinforcing the fear we have of being replaced by machines, rather than alleviating it.

With that in mind, one potential path to take is placing more emphasis on a curriculum that teaches and contributes towards the development of what could broadly be defined as Philosophy. Within this far-reaching subject matter lie concepts such as analysis, ethics, law, logic, politics, and other building blocks that assemble the skill we find to be the most necessary for the future of basic education: critical thinking.

Critical thinking is a toolset that enables the understanding of emergent technologies such as Artificial Intelligence as merely tools to achieve our development goals, seeing as the questions that machines set out to answer are invariably made by humans.

Even in a scenario in which a machine generates its own questions, those will still be based on the perception of the humans who code it and on the content of the organic datasets it learns its key concepts and language from.

Several other issues can also be helped along by incentivizing critical thinking. For example, the Internet is becoming so ubiquitous that the dichotomy between online and offline shows clear signs of deterioration. The digital is ceasing to be a layer on top of so-called real life, and is becoming as much a part of it as anything else.

The children born in the realm of the digital do not seem fully aware that online and offline life are one and the same, and that the consequences of what they do virtually are likely to reverberate in the flesh.

How to deliver all of this knowledge, though? The developing world still struggles with the challenge of achieving decent levels of literacy, but it seems more and more like there will be no time to catch up; the world keeps moving ahead at an accelerated pace.

One of the only logical strategies for developing countries is to prioritize digital literacy alongside what we normally understand as literacy. This means, of course, that there needs to be reasonable access to the Internet available, and this task is a collective undertaking that all stakeholders need to participate in to some degree.

In the midst of these competing processes of globalization and digitalization, it is important to remain attentive to the development of better, more efficient, and forward-thinking policies.

As stakeholders of varied specialties, those who are currently involved in global processes such as that of Internet Governance occupy a unique position that enables them to be in contact with diverse points of view and life experiences, and need to carry out more discussions such as these to enable actors to generate informed change within their own spheres of influence.

*The attendees were Salvador Camacho (Intellectual Property), Jennifer Chung (Domain Name industry), Mark Datysgeld (Commercial sector), Jelena Ozegovic (Country Code operator), Israel Rosas (Government) and Martin Valent (Non-commercial sector).


For an Internet to exist for the good of all people, it must be shaped by each one of us. Learn about Internet Governance and why every voice matters.


IoT Security is the Heart of the Matter

The Internet Society is raising awareness around the issues and challenges with Internet of Things (IoT) devices, and the OTA IoT Trust Framework is promoting best practices in protection of user security and privacy. The importance of this was brought home with the keynote talk at the recent TNC18 Conference, which was given by Marie Moe (SINTEF) who related her experiences with her network-connected heart pacemaker.

Marie is a security researcher (who also formerly worked for NorCERT, the Norwegian National Cybersecurity Centre) who has an implanted pacemaker to monitor and control her heart, and has used the opportunity to investigate the firmware and security issues that have had detrimental and potentially fatal consequences. Quite aside from uncovering misconfigurations that required tweaking (e.g. the maximum heartbeat setting turned out to be set too low for a younger person), and an adverse event that required a firmware upgrade, she was even more concerned to discover that little consideration had gone into the authentication and access aspects that might allow an attacker to take control of the device.

These devices allow their recipients to lead normal lives, and of course being network-connectable has many practical advantages in terms of monitoring and non-intrusive configuration and firmware updates. However, the medical companies who develop them do not necessarily consider the security implications of this type of very personal critical infrastructure, which is why initiatives such as the OTA IoT Trust Framework are important for raising awareness of the need for good security practices, whilst encouraging vendors to take user security seriously and put it at the forefront of their development processes.

This interesting and inspiring talk can be found online, and we thank Marie for giving us permission to amplify the issues raised in her talk.



Some Fake News Fighters Embrace AI, Others Seek the Human Touch

Fake news doesn’t seem to be going away anytime soon, and some entrepreneurs are targeting false news reports with new services designed to alert readers.

Some countries have pushed for new laws to criminalize the creation of fake news – raising questions about government censorship – but these new fake news fighters take a different approach, some using Artificial Intelligence, some using human power, and some using a combination of AI and humans.

Several high-profile fake news fighting services have launched in recent years, some of them driven by the amount of fake news generated during the 2016 U.S. election. These services generally focus on web content appearing to be legitimate news, as an alternative to traditional fact-checking services like Snopes – which takes a broad look at Web-based news and rumors – or PolitiFact – which addresses claims made by politicians and political groups.

The amount of fake news generated during the election campaign was the main reason FightHoax founder Valentinos Tzekas began working on his service two years ago. At the time, Tzekas was a first-year applied informatics student at a Greek university, but he is planning to leave school to work full time on FightHoax.

The 2016 U.S. elections “took the world by storm,” said Tzekas, named to the Internet Society’s 25 under 25 list in 2017. “All of a sudden, rumors and fake news started coming out of nowhere.”

Tzekas saw news reports about a student in Macedonia making thousands of dollars each month by writing false news stories. “The worst thing is that people believe anything they read on the Internet,” he said.

FightHoax takes a tech-centric approach to identifying fake news by using Artificial Intelligence, including IBM’s Watson, to rate articles on seven criteria. The algorithms test for the quality and level of the writing, whether the article includes polarized language, and whether the headline is clickbait. The service also checks the political leaning of the publication, among other things.
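FightHoax's actual models and criteria are proprietary, so purely as an illustration of the kind of surface signals described above (clickbait phrasing in headlines, polarized language in the body), a naive rule-based scorer might look like the following sketch. Every word list, weight, and threshold here is an invented assumption, not part of the real service:

```python
# Toy illustration of surface-level "suspect news" signals: clickbait
# phrasing in the headline and polarized wording in the body text.
# Word lists and weights are invented purely for this sketch.

CLICKBAIT_MARKERS = ["you won't believe", "shocking", "this one trick", "what happened next"]
POLARIZED_WORDS = {"disaster", "traitor", "corrupt", "evil", "destroy"}

def suspicion_score(headline: str, body: str) -> float:
    """Return a score in [0, 1]; higher means more suspect (toy heuristic only)."""
    head = headline.lower()
    words = body.lower().split()

    # Count clickbait phrases appearing in the headline.
    clickbait_hits = sum(marker in head for marker in CLICKBAIT_MARKERS)

    # Fraction of body words that are emotionally polarized.
    polarized_hits = sum(word.strip(".,!?") in POLARIZED_WORDS for word in words)
    polarized_rate = polarized_hits / max(len(words), 1)

    # Combine the two signals; the weights are arbitrary for illustration.
    return min(1.0, 0.4 * clickbait_hits + 10.0 * polarized_rate)
```

A real system would of course combine many more features (source reputation, writing quality, political leaning of the outlet) with trained models rather than hand-set weights, which is exactly why services like FightHoax lean on platforms such as IBM's Watson.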

While thinking about fake news, “one day I thought to myself: ‘Can I make something to analyze news articles and warn readers about anything suspicious, such as the use of propaganda rhetoric?’” Tzekas said. “‘Can I make something that will, in a few minutes, do the work of human fact checkers and get a result as to whether a given story is true or a hoax?’”

FightHoax, which claimed an 89 percent accuracy rate in early tests, is planning for an enterprise dashboard version release within 18 months, Tzekas said. He plans to work first with newsrooms and academics, with the enterprise dashboard allowing the service’s API to connect with advertising-serving companies, news distributors, and social networks.

The advantage of an AI-powered approach is a service that can “analyze harmful news content that exists on the Internet, at a scale,” Tzekas said. While a tech-focused approach doesn’t work to analyze all factors involved in fake news, it’s a good fit to identify markers like clickbait headlines, he added.

“Machines cannot fully understand the messy-polymorphic human written language so sometimes you need to think really simple,” he said. “Our number one mission here at FightHoax is to make people think. [We want to] solve disinformation at a scale, with technology that works on a human scale.”

At the opposite end of the technology spectrum from FightHoax is NewsGuard, announced in March. The service, cofounded by journalism veterans Steven Brill and Gordon Crovitz, will take a human approach to rooting out fake news, with plans to hire dozens of journalists to review and rate 7,500 news and information websites most accessed and shared in the United States.

The trained journalists will write “nutrition-label” style reviews of the sites and include green, yellow, or red labels. The founders plan to license the service to social media platforms and online search companies, as well as to interested consumers.

In April, NewsGuard launched a fake news hotline for the public to report suspected sites.

If FightHoax embraces technology to fight fake news, and NewsGuard embraces a human approach, U.K.-based Factmata splits the difference. The service, with several high-profile investors, uses a combination of AI tools and human intervention to target hate speech, propaganda, and spoof news sites.

The combination of AI and human insight promises to be the best way to identify fake news, said Dhruv Ghulati, Factmata’s founder and CEO. “Pure human is too slow and unscalable for the pace of today’s news cycle, and pure AI just will get a lot of things wrong if not carefully supervised and trained,” he said.
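Factmata's internal pipeline is not public; as a generic sketch of the hybrid pattern Ghulati describes (machines triage at scale, humans adjudicate the uncertain residue), one common structure is a banded triage queue. All class names and thresholds below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    url: str
    machine_score: float  # 0..1 from an upstream classifier (assumed to exist)

@dataclass
class TriageQueue:
    """Route articles: auto-clear low scores, auto-flag high scores,
    and send the uncertain middle band to human reviewers."""
    clear_below: float = 0.2
    flag_above: float = 0.8
    cleared: list = field(default_factory=list)
    flagged: list = field(default_factory=list)
    human_review: list = field(default_factory=list)

    def route(self, article: Article) -> str:
        if article.machine_score < self.clear_below:
            self.cleared.append(article)
            return "cleared"
        if article.machine_score > self.flag_above:
            self.flagged.append(article)
            return "flagged"
        # Uncertain middle band: too risky to automate either way.
        self.human_review.append(article)
        return "human_review"
```

The design point is the one Ghulati makes: automation handles the confident extremes at scale, while scarce human attention is spent only where the model is unsure.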

Factmata has, in fact, reached out to NewsGuard about a potential partnership, he added. “Their community of journalists we feel can be greatly augmented by our pipelines for assessing content and judging news for its credibility, and our AI to help automatically flag certain types of content,” Ghulati said.

The target audience for Factmata’s news platform is journalists and other experts, but the company is also planning a B2B product that provides news quality scores for customers including advertising networks and agencies.

Factmata plans to soft launch its Web-based Briefr news sharing service within weeks, with about 350 journalists and other experts participating, Ghulati said. A full launch is scheduled for June.

Other fake news fighting efforts include Full Fact, a U.K. fact-checking service, and the Fake News Challenge, which ran a fake news-fighting competition in 2017.

Read our interview with Wired Editor in Chief Nicholas Thompson on the changing role of media, then explore the 2017 Internet Society Global Internet Report: Paths to Our Digital Future and read the recommendations to ensure that humanity remains at the core of tomorrow’s Internet.


Future Thinking: Getachew Engida on Digital Divides

Last year, the Internet Society unveiled the 2017 Global Internet Report: Paths to Our Digital Future. The interactive report identifies the drivers affecting tomorrow’s Internet and their impact on Media & Society, Digital Divides, and Personal Rights & Freedoms. In April 2018, we interviewed two stakeholders – Getachew Engida, Deputy Director-General of the United Nations Educational, Scientific and Cultural Organization (UNESCO), and Augusto Mathurin, who created Virtuágora, an open source digital participation platform – to hear their different perspectives on the forces shaping the Internet.

Getachew Engida is the Deputy Director-General of UNESCO. He has spent the past twenty years leading and managing international organizations and advancing the cause of poverty eradication, peace-building, and sustainable development. He has worked extensively on rural and agricultural development, water and climate challenges, education, science, technology and innovation, intercultural dialogue and cultural diversity, and communication and information with emphasis on freedom of expression and the free flow of information on and offline. (You can read Augusto Mathurin’s interview here).

The Internet Society: You have, in the past, stressed the role that education has played in your own life and can play in others’ lives. Do you see technology helping to promote literacy and education in all regions in the future?

Getachew Engida: Education unleashes new opportunities and must be available to all. If it were not for educational opportunities, I certainly would not be where I am today. Though coming from a humble and poor family, I was given the opportunity to go to public primary and secondary schools that also had feeding programs thanks to UN agencies. I benefitted from scholarships to undertake higher education that made a huge difference to my career progression.

Technology, indeed, is a great enabler and allows us to reach the marginalized and those left behind from quality education. But while connectivity is increasing at a rapid pace, educational material lags behind, particularly in mother tongues. Appropriate and relevant, quality education, combined with technology, will be a potent weapon to drastically improve access to education and eliminate illiteracy around the world.

How can we ensure that future generations are taught the right skills to flourish in future workplaces, which will demand a thorough command of digital skills?

No doubt, inclusive knowledge societies and the UN’s 2030 Agenda for Sustainable Development cannot be achieved without an informed population and an information-literate youth. Digital skills constitute a crucial part of quality education and lifelong learning.

UNESCO believes in empowering  women and men, but particularly youth, by focusing specifically on what we call “Media and Information Literacy” (MIL). This includes human rights literacy, digital security skills, and cross-cultural competencies. These skills enable people to critically interpret their complex digital information environments and to constructively access and contribute information about matters like democracy, health, environment, education, and work.

As the media and communications landscape is complex and rapidly changing, we need to constantly update the substance of media and information literacy education to keep pace with technological development. The youth need, for example, to grapple with the attention economy, personal data and privacy, and how these and other developments impact them through algorithms and Artificial Intelligence (AI). Facing increasing concerns about the misuse of information and disinformation (‘fake news’), propaganda, hate speech, and violent extremism, we see an urgent need for a concerted effort from all stakeholders to empower societies with stronger media and information literacy competencies. In this way, the targets of malicious online endeavours will be able to detect, decipher and discredit attempts to manipulate their feelings, networks, and personal identities.

What role does the UN in general and UNESCO more specifically have to play in promoting and protecting human rights online? How does UNESCO navigate tensions between different interpretations of human rights online – e.g., first amendment fundamentalism in the US versus more balanced approaches in Europe?

One of the great achievements of the United Nations is the creation of a comprehensive body of human rights law—a universal and internationally protected code to which all nations can subscribe and all people can aspire. In the digital age, the UN General Assembly and Human Rights Council have constantly updated this human rights mandate by issuing a number of resolutions to promote human rights equally online and offline.

UNESCO, in turn, is the UN agency with a mandate to defend freedom of expression, instructed by its constitution to promote “the free flow of ideas by word and image.” UNESCO also recognizes the right to privacy underpins other rights and freedoms, including freedom of expression, association, and belief. We work worldwide to promote freedom of expression and privacy both online and offline.

UNESCO has taken a lead in flagging Internet freedom issues at a number of key conferences and events such as the upcoming RightsCon gatherings, the annual WSIS Forum, and the Internet Governance Forum. We also do the same at UNESCO World Press Freedom Day celebrations each year on May 3, and at meetings to mark the International Day for Universal Access to Information on 28 September every year. To provide member states and stakeholders with cutting-edge knowledge and policy advice, UNESCO has also commissioned a number of pioneering policy studies, the Internet Freedom series. They shed light on issues such as protecting journalism sources in the digital age, principles for governing the Internet, and the evolution of multistakeholder participation in Internet governance.

How do you see emerging technologies, such as IoT or AI, impacting sustainable development and the future of our world? As we promote connectivity, do we risk cultural and linguistic diversity?

AI could profoundly shape humanity’s access to information and knowledge, which will make it easier to produce, distribute, find and assess. This could allow humanity to concentrate on creative development rather than more mundane tasks. The implications for open educational resources, cultural diversity, and scientific progress could also be significant. In addition, AI could also provide new opportunities to understand the drivers of intercultural tension and other forms of conflict, providing the capacities to collect, analyze, and interpret vast quantities of data to better understand, and perhaps predict, how and when misunderstandings and conflict may arise. In turn, these can all contribute to democracy, peace and achieving the SDGs.

However, AI and automated processes, which are particularly powerful when fuelled by big data, also raise concerns for human rights, especially where freedom of expression and the right to privacy are concerned. Internet companies have begun to use AI in content moderation and in ranking orders for personalized search results and social media newsfeeds. Without human values and ethics being instilled from the start during the design stage, and without relevant human oversight, judgement, and due process, such practices can have a negative impact on human rights.

AI is already beginning to shape news production and dissemination, and is shifting the practice and value of journalists and journalism in the digital age. Internet and news media companies, especially where they intersect, need to consciously reflect on the ambiguities of data mining and targeting, as well as Big Data business models for advertising in the attention economy.

There is therefore a crucial need to explore these issues in depth and to reflect on ways to harness Big Data and AI technologies in order to mitigate disadvantages and advance human rights and democracy, build inclusive knowledge societies, and achieve the 2030 Sustainable Development Agenda. Current societal mechanisms including moral and legal frameworks are not geared to effectively deal with such rapid developments.

What are your hopes for the future of the Internet? What are your fears?

I hope to see a free and open Internet which is accessed and governed by all, leaving no one behind and making the world a better place for future generations. To do this we have to continuously counter emerging divides, such as in computer speech recognition, which is making great strides in English but leaves many other languages on the periphery. We need a proportionate response to the problems on the Internet which does not damage “the good” in countering “the bad.” We should expand and maintain connectivity as the default setting in the digital age, and do everything possible to avoid the increasing tendency towards complete Internet shutdowns in certain regions or places. We need better respect for personal data and privacy from both corporate and state actors who track our online data. We need strong journalism online to counter disinformation, and we need heightened media and information literacy for everybody.

My fear is that the Internet is a double-edged sword: if not properly harnessed, it might end up being used to regress, rather than advance, those classic values we cherish such as a private life, transparency, and public-interest journalism. Without dialogue amongst all stakeholders, we could see the Internet and related technologies being exploited to pose severe challenges to peace, security, and human rights. Such fears need to be offset by maintaining a sense of proportion whereby the good of the Internet significantly dwarfs the bad, and where we can increasingly utilise existing and emerging digital technologies to achieve the planet’s agreed development goals by 2030.

What do you think the future of the Internet looks like? Explore the 2017 Global Internet Report: Paths to Our Digital Future to see how the Internet might transform our lives across the globe, then choose a path to help shape tomorrow.


Information Gatekeeping: Not a Laughing Matter

There’s a joke that goes something like this: How do you make a little money in the online news business?

The punchline: Start with a huge pile of money, and work your way down from there.

It seems the same joke would work for the online comedy business, judging by the layoff news coming out of Funny or Die in January. An interesting Q&A with comedy veteran Matt Klinman was published recently, in which he talked about the woes of online comedy outlets.

Klinman focused his ire on Facebook and its role as an information gatekeeper, in which the site determines what comedy clips to show each of its users. But much of his criticism could have just as easily been targeted at a handful of other online gatekeepers that point Internet users to a huge percentage of the original content that’s out there.

As Klinman says about Facebook, these services have created their own “centrally designed Internet” in which they serve as “our editor and our boss. They hide behind algorithms that they change constantly.”

As a thrice-laid-off online journalist, I can sympathize. I’m pretty sure I can’t blame any of the current gatekeepers for my 2002 layoff or a near miss in 2000, when I left a job just weeks before a round of forced departures. Heck, some of them weren’t even around back then.

Still, a contributing factor in my 2015 and 2017 layoffs appeared to be the company’s inability to compete in an Internet landscape dominated by a few gatekeepers. I’m exhibit No. 238,092 showing the online news business model isn’t working these days.

When Klinman says content algorithms often favor clickbait and “things that appeal to the lowest common denominator,” those words resonate with me, even though it may sound like small gatekeeper Funny or Die complaining about a larger gatekeeper.

With perhaps a bit of hyperbole – mixed with a larger truth – Klinman added: “I would gladly give money to anyone who could tell me of any digital publisher that is doing well other than Facebook.”

But Facebook is certainly not the only Internet giant trying to hang onto your eyeballs as long as it can. Google has become the Internet’s default search and recommendation engine, and Amazon has become the place where you can buy just about anything.

Twitter, Reddit, and LinkedIn have also turned into gatekeepers for certain information.

At the heart of these criticisms is the rise of this small group of gatekeepers controlling access to information. While Internet users are still free to seek information from other sources, a small group of companies increasingly serves as the Internet’s entry point.

The existence of information gatekeepers isn’t new, of course. Before the mass adoption of the Internet, back in the dark ages of my early journalism career, the gatekeepers were the owners of newspapers, TV stations, and book publishing houses.

The Internet promised to change that arrangement by giving everyone the tools to publish. There were early attempts to create new gatekeepers – America Online and Yahoo’s Web Guide are two examples – but those efforts ultimately failed.

Instead, a Wild West of information sharing flourished online for several years. Many executives in the traditional news industry failed to embrace the Internet early, putting them at a huge disadvantage to upstarts like Google, Craigslist, and Facebook.

But the promise of an Internet with no central gatekeepers – or perhaps, instead, dozens of competing gatekeepers – didn’t last. Today, we have news organizations – and apparently comedy sites – embracing the dark art of gaming gatekeeper algorithms in an attempt to get in front of the largest possible audience.

But guessing an ever-changing algorithm’s preferences is neither a sustainable business model nor a sustainable strategy for creating high-quality online content.

Klinman suggests a couple of ways to fix this problem. He recommends web users adopt a new attitude modeled on the eat-local and shop-local movements: support your favorite comedy or news site directly.

And if large gatekeepers are making money by delivering content created by someone else, perhaps they should pay for it, like cable TV providers are required to pay for programming.

A conversation about ways to change this dynamic has been happening in some circles for a while now, but it’s past time for a wider debate about the future of Internet content creation.

Read the 2017 Global Internet Report: Paths to Our Digital Future, which explores the changing media landscape and how Artificial Intelligence might impact the future of the Internet.


How Governments Can Be Smart about Artificial Intelligence

French MP and Fields Medal winner Cédric Villani formally heard testimony last Monday from Constance Bommelaer de Leusse, the Internet Society’s Senior Director, Global Internet Policy, on national strategies for the future of artificial intelligence (AI). In addition, the Internet Society was asked to send written comments, which are reprinted here.

“Practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful […] Once in use, successful AI systems were simply considered valuable automatic helpers.”

Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence

AI is not new, nor is it magic. It’s about algorithms.

“Intelligent” technology is already everywhere – such as spam filters or systems used by banks to monitor unusual activity and detect fraud – and it has been for some time. What is new and creating a lot of interest from governments stems from recent successes in a subfield of AI known as “machine learning,” which has spurred the rapid deployment of AI into new fields and applications. It is the result of a potent mix of data availability, increased computer power and algorithmic innovation that, if well harnessed, could double economic growth rates by 2035.
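The bank-fraud monitoring mentioned above can be illustrated with a deliberately simple sketch: flag a transaction when it deviates far from an account’s historical pattern. This is a hypothetical toy (the function name, data, and threshold are invented for illustration); real systems learn far richer models from far more data.

```python
# Toy sketch of anomaly-based fraud flagging (hypothetical data and
# threshold; not any bank's actual method).
from statistics import mean, stdev

def flag_unusual(history, amount, threshold=3.0):
    """Return True if `amount` lies more than `threshold` standard
    deviations from the account's historical mean spend."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past = [42.0, 55.0, 38.0, 61.0, 47.0]
print(flag_unusual(past, 50.0))   # a typical purchase -> False
print(flag_unusual(past, 900.0))  # far outside normal range -> True
```

The “learning” here is trivial (a mean and a standard deviation), but it captures the idea in the paragraph: the potent mix of data availability and algorithmic innovation lets real systems do this at vastly greater scale and subtlety.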

So, governments’ reflection on what good policies should look like in this field is both relevant and timely. It’s also healthy for policymakers to organise a multistakeholder dialogue and empower their citizens to think critically about the future of AI and its impact on their professional and personal lives. In this regard, we welcome the French consultation.

Our recommendations

I had a chance to explain the principles the Internet Society believes should be at the heart of AI norms, whether driven by industry or governments:

  • Ethical considerations in deployment and design: AI system designers and builders need to apply a user-centric approach to the technology. They need to consider their collective responsibility in building AI systems that will not pose security risks to the Internet and its users.
  • Ensure interpretability of AI systems: Decisions made by an AI agent should be understandable, especially if they have implications for public safety or result in discriminatory practices.
  • Empower users: The public’s ability to understand AI-enabled services, and how they work, is key to ensuring trust in the technology.
  • Responsible deployment: The capacity of an AI agent to act autonomously, and to adapt its behaviour over time without human direction, calls for significant safety checks before deployment and ongoing monitoring.
  • Ensure accountability: Legal certainty and accountability have to be ensured when human agency is replaced by the decisions of AI agents.
  • Consider social and economic impacts: Stakeholders should shape an environment where AI provides socioeconomic opportunities for all.
  • Open Governance: The ability of various stakeholders, whether in civil society, government, private sector, academia or the technical community to inform and participate in the governance of AI is crucial for its safe deployment.

You can read more about how these principles translate into tangible recommendations here.

The hearing organised by the French government also showed that the debate around AI is currently too narrow. So, we’d like to propose a few additional lenses to frame the debate about the future of AI in a helpful way.

Think holistically, because AI is everywhere

Current dialogues around AI usually focus on applications and services that are visible and interacting with our physical world, such as robots, self-driving cars and voice assistants. However, as our work on the Future of the Internet describes, the algorithms that structure our online experience are everywhere. The future of AI is not just about robots, but also about the algorithms that provide guidance to arrange the overwhelming amount of information from the digital world – algorithms that are intrinsic to the services we use in our everyday lives and a critical driver for the benefits that the Internet can offer.

The same algorithms are also part of systems that collect and structure information that impact how we perceive reality and make decisions in a much subtler and surprising way. They influence what we consume, what we read, our privacy, and how we behave or even vote. In effect, they place AI everywhere.

Look at AI through the Internet access lens

Another flaw in today’s AI conversation is that much of it focuses solely on security implications and how they could affect users’ trust in the Internet. As shown in our report on the future of the Internet, AI will also influence how you access the Internet in the very near future.

The growing size and importance of “AI-based” services, such as voice-controlled smart assistants for your home, means they are likely to become a main entry point to many of our online experiences. This could impact or exacerbate current challenges we see – including on mobile platforms – in terms of local content and access to platform-specific ecosystems for new applications and services.

Furthermore, major platforms are rapidly organising, leveraging AI through IoT to penetrate traditional industries. There isn’t a single aspect of our lives that will not be embedded in these platforms, from home automation and car infotainment to health care and heavy industries.

In the future, these AI platforms may become monopolistic walled gardens if we don’t think today about conditions to maintain competition and reasonable access to data.

Create an open and smart AI environment

To be successful and human-centric, AI also needs to be inclusive. This means creating inclusive ecosystems, leveraging interdependencies between universities that can fuel business with innovation, and enabling governments to give access to high-quality, non-sensitive public data. Germany sets a good example: Its well-established multistakeholder AI ecosystem includes the German Research Center for Artificial Intelligence (DFKI), a multistakeholder partnership that is considered a blueprint for top-level research. Industry and civil society sit on the board of the DFKI to ensure research is application and business oriented.

Inclusiveness also means access to funding. There are many ways for governments to be useful, such as funding areas of research that are important to long term innovation.

Finally, creating a smart AI environment is about good, open and inclusive governance. Governments need to provide a regulatory framework that safeguards responsible AI, while supporting the capabilities of AI-based innovation. The benefits of AI will be highly dependent on the public’s trust in the new technology, and governments have an important role in working with all stakeholders to empower users and promote its safe deployment.

Learn more about Artificial Intelligence and explore the interactive 2017 Global Internet Report: Paths to Our Digital Future.

Take action! Send your comments on AI to Mission Villani and help shape the future.


The Future Internet I Want for Me, Myself and AI

Artificial Intelligence has the potential to bring immense opportunities, but it also poses challenges.

Artificial intelligence (AI) is dominating the R&D agenda of the leading Internet companies. Silicon Valley and other startup hubs are buzzing about AI, and the issue has risen to the top of policymakers’ agendas, including at the G20, the ITU, and the OECD, where leaders gathered this week in Paris.

AI isn’t new, but its recent acceleration can be explained by its convergence with big data and the IoT, and the endless applications and services it enables. In the market, this translates into investments across all industries as stakeholders try to understand the potential of AI for their own businesses. For instance, at the beginning of the year, Ford Motor Company announced a plan to invest $1 billion over the next five years in Argo AI, an artificial intelligence startup focused on developing autonomous vehicle technology. It’s an indication that AI is a hot topic beyond the traditional ICT sector.

How our community feels about AI

There is a growing expectation on the part of many stakeholders that AI and machine learning will fundamentally reshape the future of the Internet and society around it.

This is one of the trends we’ve observed in our own project about the Internet’s future, where AI, together with five other areas, has been identified as a key “Driver” of change in the coming 5 to 10 years. There is a sense that “we may be experiencing a new [technology] Renaissance.” Indeed, in 10 years’ time AI technologies may dominate all aspects of our day-to-day lives, from driving to banking or even working.

Yet, the uncertainties raised by our community about this technology in the context of the Internet are extensive. These include the potential loss of human agency and decision-making, lack of transparency in how algorithms make decisions, discrimination, the pace of technological change outstripping governance and policy, and ethical considerations.

A number of participants raised concerns related to the impact on industry and employment – and therefore society – noting the consequences of automation-led change across industries and business practices, and the possible increase in inequalities and societal disruption.

Will AI replace human labour?

The discussions at the OECD this week revolved around a specific issue: Will AI replace human labour?

What do humans do at work? They perceive their environment, learn, use language to communicate, plan and navigate tasks – all of them abilities that can be imitated to varying degrees by machines.

Looking back at the history of AI, the concept was born when a group of visionary researchers, including Marvin Minsky and John McCarthy, gathered in the summer of 1956 at Dartmouth College to kick off the project to create computers programmed to act as humans. The risk – or opportunity – was embedded, although perhaps not consciously, in the group’s objectives: replicating human intelligence.

So, is it realistic that we could all be replaced by robots and algorithms?

It depends who you ask and how you analyse the challenge. So far, estimates of the impact on job displacement span a broad range, from 9% to 47%; the measuring techniques used by the OECD and the University of Oxford are quite different. The numbers are alarming and should be taken seriously, but they do not tell the whole story.

Shaping a future we can look forward to

Fears are natural, but should be put into perspective. Let’s think about how AI could improve human performance and lives.

Deep learning has made tremendous progress in reasoning, to the benefit of humans. Take the example of Go grandmaster Lee Sedol, who was defeated by AlphaGo. He explained that beyond the personal disappointment, he also experienced a positive feedback loop: he learned from AlphaGo’s patterns and techniques and raised his own level of play. AI performing at, or above, human level is not necessarily a threat – it can augment intelligence and support our own development.

AI can also have a positive effect on humanity, notably by drawing inferences from enormous sets of data. For example, in the pharmaceutical field, the combination of AI and big data expands the industry’s ability to solve problems at new scales, which in turn accelerates research and can bring major breakthroughs in drug discovery and disease diagnosis.

Energy-efficient homes, personal assistants that make our lives easier, and more – there are many other reasons and fields where hope, and even excitement, is possible.

But what we do know is that Artificial Intelligence is already a topic that has triggered hopes and concerns. Going forward it is important that we broaden and demystify the debate in order to balance the headlines with insights and facts. To this end, ISOC recently published a Policy Paper on Artificial Intelligence and Machine Learning, introducing the fundamentals of the technology at hand and some of the key challenges it presents.

As one of our guiding principles from this paper clearly states: “The public’s ability to understand AI-enabled services, and how they work, is key to ensuring trust in the technology.”


Will Artificial Intelligence Change The World For the Better? Or Worse? Read our new policy paper

Artificial Intelligence (AI) is a concept with a long-standing tradition in the realm of science fiction, popularized by Hollywood movies and iconic writers such as Isaac Asimov. However, AI has also received increased attention in recent years following news of progress in the field and the prospect of new, tangible innovations such as self-driving cars. The Internet has played an important role in these developments, particularly as the platform for AI-enabled services – some with significant implications for the continued development of a trusted Internet.

The Internet Society is pleased to release a policy paper on Artificial Intelligence and Machine Learning to help navigate some of the opportunities and challenges the technology presents, and to support an informed debate by de-mystifying some of its fundamental concepts. A key aspect is understanding machine learning, a specific AI technique that has been driving the development of new algorithms to substitute or support human decision-making – some of which are already deployed online. Smart assistants, such as “Siri” or “Alexa”, use machine learning to interpret voice commands, email servers use the technique to better filter out junk mail, and some e-commerce websites use it to personalize the web experience of their users.
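The junk-mail filtering mentioned above is a classic machine-learning application, and its core idea fits in a few lines: learn word frequencies from labelled examples, then score new messages. The sketch below is a toy naive Bayes classifier with invented training data (the function names and examples are illustrative, not any mail server’s actual implementation).

```python
# Toy naive Bayes sketch of the machine-learning idea behind junk-mail
# filtering (hypothetical training data; real filters use far richer
# features and models).
from collections import Counter
import math

def train(labelled):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in labelled:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(counts, totals, text):
    """Naive Bayes with add-one smoothing; returns the likelier label."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior + sum of log word likelihoods
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

data = [("win a free prize now", "spam"),
        ("free money click now", "spam"),
        ("meeting agenda for monday", "ham"),
        ("lunch on monday?", "ham")]
counts, totals = train(data)
print(classify(counts, totals, "claim your free prize"))   # -> spam
print(classify(counts, totals, "agenda for the meeting"))  # -> ham
```

The same substitute-or-support pattern underlies the paper’s other examples: a model trained on past behaviour makes, or informs, a decision a human would otherwise make.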

AI is taking on an increasingly important role in international discussions on the Internet. Recently in Düsseldorf, as part of the German G20 presidency, ministers responsible for their countries’ digitalization agendas met with other stakeholders to discuss policies for the digital future. The impact of AI-driven applications, alongside strategies for how to capitalize on the Internet’s vast opportunities for productivity and economic growth, took centre stage.

The ability of machines to exhibit advanced cognitive skills to process natural language, to learn, to plan or to perceive, makes it possible for new tasks to be performed by intelligent systems, sometimes with more success than humans. By using AI-driven automation in existing industries, alongside using AI technologies in new emerging areas, artificial intelligence could vastly boost productivity and economic growth.

AI is a technology that could change the world for the better. It can make medical procedures safer, increase productivity and boost the economy, or be used in applications to improve the quality of life for the disabled. But, AI is also a technology that comes with challenges, such as accountability, security, technological mistrust, and the displacement of human workers.

The private sector has acknowledged these opportunities, and investments in AI have grown over the past several years. Major corporations have invested in developing AI technologies. Forrester predicts that investments in AI are set to grow by 300% in 2017 alone. At the same time, workers fear that their livelihood could be replaced by machines. There are serious questions as to who will benefit and who may lose.

However, beyond the economic impact that AI may have, AI will also affect how people perceive and use the Internet. It has the potential to intensify users’ concerns surrounding the Internet, such as questions of accountability, openness, safety, security, and its socio-economic impacts.

With the potential to dramatically impact the economy and society in the near future, AI has moved to the forefront of many policy debates around the world. These debates range from the governance of AI, such as ensuring accountability of algorithmic decisions, to mitigating the impact of AI on employment. There are clear challenges for AI that must be addressed now to support the technology’s positive future.

It is important to note that the anticipated impact of AI is largely based on predictions and estimates. But regardless of the level of impact, AI will affect the world’s economies, citizens, and the Internet.

It is up to all stakeholders today, be they policymakers, businesses, technical, or civil society, to ensure that AI’s impact is a positive one by proactively tackling the challenges, while ensuring the opportunities remain available.

Please read and share our new policy paper: Artificial Intelligence and Machine Learning.