Artificial Intelligence Human Rights Internet Governance

Some Fake News Fighters Embrace AI, Others Seek the Human Touch

Fake news doesn’t seem to be going away anytime soon, and some entrepreneurs are targeting false news reports with new services designed to alert readers.

Some countries have pushed for new laws to criminalize the creation of fake news – raising questions about government censorship – but these new fake news fighters take a different approach, some using Artificial Intelligence, some using human power, and some using a combination of AI and humans.

Several high-profile fake news fighting services have launched in recent years, some of them driven by the amount of fake news generated during the 2016 U.S. election. These services generally focus on web content appearing to be legitimate news, as an alternative to traditional fact-checking services like Snopes – which takes a broad look at Web-based news and rumors – or PolitiFact – which addresses claims made by politicians and political groups.

The amount of fake news generated during the election campaign was the main reason FightHoax founder Valentinos Tzekas began working on his service two years ago. At the time, Tzekas was a first-year applied informatics student at a Greek university, but he is now planning to leave school to work full time on FightHoax.

The 2016 U.S. elections “took the world by storm,” said Tzekas, named to the Internet Society’s 25 under 25 list in 2017. “All of a sudden, rumors and fake news started coming out of nowhere.”

Tzekas saw news reports about a student in Macedonia making thousands of dollars each month by writing false news stories. “The worst thing is that people believe anything they read on the Internet,” he said.

FightHoax takes a tech-centric approach to identifying fake news by using Artificial Intelligence, including IBM’s Watson, to rate articles on seven criteria. The algorithms test for the quality and level of the writing, whether the article includes polarized language, and whether the headline is clickbait. The service also checks the political leaning of the publication, among other things.
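FightHoax’s actual models aren’t public, but one of the markers it tests for, clickbait headlines, can be illustrated with a toy heuristic score. The cue phrases, weights, and threshold below are illustrative assumptions, not FightHoax’s method; a production system would learn such signals from labeled data:

```python
import re

# Illustrative clickbait cue phrases -- a real system would learn these from data.
CLICKBAIT_PHRASES = [
    "you won't believe", "what happened next", "this one trick",
    "will blow your mind", "shocking",
]

def clickbait_score(headline: str) -> float:
    """Return a rough 0.0-1.0 clickbait suspicion score for a headline."""
    text = headline.lower()
    score = 0.0
    # Cue phrases are the strongest signal in this toy model.
    score += 0.4 * sum(phrase in text for phrase in CLICKBAIT_PHRASES)
    # Listicle-style leading numbers ("7 Reasons ...") are a common marker.
    if re.match(r"^\d+\s", text):
        score += 0.3
    # Exclamation/question marks and ALL-CAPS words also raise suspicion.
    if "!" in headline or "?" in headline:
        score += 0.2
    if any(w.isupper() and len(w) > 3 for w in headline.split()):
        score += 0.2
    return min(score, 1.0)
```

The scoring idea, accumulating weighted markers into a single suspicion value, mirrors the kind of signal an AI pipeline like the one Tzekas describes would combine with many other features.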

While thinking about fake news, “one day I thought to myself: ‘Can I make something to analyze news articles and warn readers about anything suspicious, such as the use of propaganda rhetoric?’” Tzekas said. “‘Can I make something that will, in a few minutes, do the work of human fact checkers and get a result as to whether a given story is true or a hoax?’”

FightHoax, which claimed an 89 percent accuracy rate in early tests, plans to release an enterprise dashboard version within 18 months, Tzekas said. He plans to work first with newsrooms and academics, with the enterprise dashboard allowing the service’s API to connect with advertising-serving companies, news distributors, and social networks.

The advantage of an AI-powered approach is a service that can “analyze harmful news content that exists on the Internet, at a scale,” Tzekas said. While a tech-focused approach doesn’t work to analyze all factors involved in fake news, it’s a good fit to identify markers like clickbait headlines, he added.

“Machines cannot fully understand the messy-polymorphic human written language so sometimes you need to think really simple,” he said. “Our number one mission here at FightHoax is to make people think. [We want to] solve disinformation at a scale, with technology that works on a human scale.”

At the opposite end of the technology spectrum from FightHoax is NewsGuard, announced in March. The service, cofounded by journalism veterans Steven Brill and Gordon Crovitz, will take a human approach to rooting out fake news, with plans to hire dozens of journalists to review and rate 7,500 news and information websites most accessed and shared in the United States.

The trained journalists will write “nutrition-label” style reviews of the sites and include green, yellow, or red labels. The founders plan to license the service to social media platforms and online search companies, as well as to interested consumers.

In April, NewsGuard launched a fake news hotline for the public to report suspected sites.

If FightHoax embraces technology to fight fake news, and NewsGuard embraces a human approach, U.K.-based Factmata splits the difference. The service, with several high-profile investors, uses a combination of AI tools and human intervention to target hate speech, propaganda, and spoof news sites.

The combination of AI and human insight promises to be the best way to identify fake news, said Dhruv Ghulati, Factmata’s founder and CEO. “Pure human is too slow and unscalable for the pace of today’s news cycle, and pure AI just will get a lot of things wrong if not carefully supervised and trained,” he said.

Factmata has, in fact, reached out to NewsGuard about a potential partnership, he added. “Their community of journalists, we feel, can be greatly augmented by our pipelines for assessing content and judging news for its credibility, and our AI to help automatically flag certain types of content,” Ghulati said.

The target audience for Factmata’s news platform is journalists and other experts, but the company is also planning a B2B product that provides news quality scores for customers including advertising networks and agencies.

Factmata plans to soft launch its Web-based Briefr news sharing service within weeks, with about 350 journalists and other experts participating, Ghulati said. A full launch is scheduled for June.

Other fake news fighting efforts include Full Fact, a U.K. fact-checking service, and the Fake News Challenge, which ran a fake news-fighting competition in 2017.

Read our interview with Wired Editor in Chief Nicholas Thompson on the changing role of media, then explore the 2017 Internet Society Global Internet Report: Paths to Our Digital Future and read the recommendations to ensure that humanity remains at the core of tomorrow’s Internet.

Internet Governance Shaping the Internet's Future

Future Thinking: Augusto Mathurin on Digital Divides

Last year, the Internet Society unveiled the 2017 Global Internet Report: Paths to Our Digital Future. The interactive report identifies the drivers affecting tomorrow’s Internet and their impact on Media & Society, Digital Divides, and Personal Rights & Freedoms. In April 2018, we interviewed two stakeholders – Getachew Engida, Deputy Director-General of the United Nations Educational, Scientific and Cultural Organization (UNESCO), and Augusto Mathurin, who created Virtuágora, an open source digital participation platform – to hear their different perspectives on the forces shaping the Internet.

Augusto Mathurin is a 25-year-old Argentinian who strongly believes in the need to enable all people to participate in decision-making which can impact them and their communities. With this in mind, Augusto developed an open source digital participation platform as part of a university project. The main goal of this platform, Virtuágora, was to create a common space in which citizens’ opinions and their representatives’ proposals could converge. The concept was derived from the Greek agora – the central square of ancient Grecian cities where citizens met to discuss their society. In 2017, Augusto was awarded the Internet Society’s 25 under 25 award for making an impact in his community and beyond.  (You can read Getachew Engida’s interview here).

The Internet Society: Your reason for creating the multistakeholder platform Virtuágora in your home town shares similarities with the notion of multistakeholder participation in Internet governance. What challenges do you think are plaguing participation in these types of crowdsourced platforms and Internet governance today?

Augusto Mathurin: It’s still really difficult to involve government in these kinds of collaborative mechanisms. While the challenges we have to tackle – like digital divides – need the input of multiple stakeholders, governments are sometimes fearful of collaborating with other stakeholders because they think it means they’ll lose their sovereignty. This challenge is compounded by the fact that governments tend to operate in election cycles, making it difficult to get sustained commitments to collaborative decision-making models.

On the other hand, participation is to some extent becoming easier with emerging technologies. Video calls, for instance, have made it possible to have real conversations around the world, which is a real improvement. There is still a lot of work to do to improve remote participation, however.

What role do you think other technology has to play in sustainable development in developing regions (like Latin America)?

Open source technology, in particular, offers a great opportunity to achieve sustainable development in developing regions. When someone develops any kind of technology and publishes it as open source, it doesn’t matter where it was done because from that moment it starts to belong to everyone. Remote and neglected communities can take these open technologies and replicate them to create solutions for their problems.

Community networks are a great example of this, but there are a lot of initiatives in other areas like green energy, mobility, health, education, etc.

What trends do you think will impact information societies in three to five years? What role will or do algorithms play?

Many organizations are spending a lot of money on virtual reality and augmented reality. I think all the trending technologies in the future will continue to merge the virtual world and the real one.

While algorithms play an increasingly important role in, for example, the media, it is important to always place people first. When we talk about the media we are talking about trust, and we cannot trust algorithms when we are not even sure if they are running without being manipulated or corrupted. On the other hand, when a person is communicating something, she would put her reputation at risk if she said something fake. I’m sure that AI can help offer quick and efficient organization and filtering of data, but I think we will need good journalists we can trust at least for some more years to come.

What do you think about increasing calls in various countries to regulate online platforms like Facebook more strictly? Do you agree or disagree?

Platforms extract data from us and from our devices compulsively, and that’s clearly wrong. But regulation is not necessarily the solution. It’s problematic that we are using systems in the cloud that are like black boxes and there’s no way to audit them in an efficient way. We can promulgate laws and impose penalties, but these companies can just pretend they are complying with the regulation and at the same time could still be collecting our data without consent.

We had the Volkswagen emissions scandal as proof of how hard it is to ensure systems comply with regulation. In 2015, it was discovered that Volkswagen had intentionally programmed one of its diesel engines to activate its emissions controls only during laboratory emissions testing to meet U.S. standards, while emitting up to 40 times the permitted levels in real-world driving.

When we start talking about platforms in the cloud, it’s even harder to do regulatory compliance audits, and so stricter regulatory measures are useless. The solution is not in regulation, but it rather lies in embracing open technology.

What are your hopes for the future of the Internet? What are your fears?

My biggest fear is that virtual spaces are becoming more and more centralized. I fear that our online rights might be at stake because we are delegating so much control and power to just a handful of companies.

Luckily the spirit of the Internet is open. My hope is that more people will develop even more open technologies, in a collaborative manner and with the possibility of replicating it in other environments. At the same time, I hope that more people will become more aware of the challenges facing the future of the Internet.

What do you think the future of the Internet looks like? Explore the 2017 Global Internet Report: Paths to Our Digital Future to see how the Internet might transform our lives across the globe, then choose a path to help shape tomorrow.

Artificial Intelligence Human Rights Internet Governance Shaping the Internet's Future

Future Thinking: Getachew Engida on Digital Divides

Last year, the Internet Society unveiled the 2017 Global Internet Report: Paths to Our Digital Future. The interactive report identifies the drivers affecting tomorrow’s Internet and their impact on Media & Society, Digital Divides, and Personal Rights & Freedoms. In April 2018, we interviewed two stakeholders – Getachew Engida, Deputy Director-General of the United Nations Educational, Scientific and Cultural Organization (UNESCO), and Augusto Mathurin, who created Virtuágora, an open source digital participation platform – to hear their different perspectives on the forces shaping the Internet.

Getachew Engida is the Deputy Director-General of UNESCO. He has spent the past twenty years leading and managing international organizations and advancing the cause of poverty eradication, peace-building, and sustainable development. He has worked extensively on rural and agricultural development, water and climate challenges, education, science, technology and innovation, intercultural dialogue and cultural diversity, and communication and information with emphasis on freedom of expression and the free flow of information on and offline. (You can read Augusto Mathurin’s interview here).

The Internet Society: You have, in the past, stressed the role that education has played in your own life and can play in others’ lives. Do you see technology helping to promote literacy and education in all regions in the future?

Getachew Engida: Education unleashes new opportunities and must be available to all. If it were not for educational opportunities, I certainly would not have been where I am today. Though coming from a humble and poor family, I was given the opportunity to go to public primary and secondary schools that also had feeding programs thanks to UN agencies. I benefitted from scholarships to undertake higher education that made a huge difference to my career progression.

Technology, indeed, is a great enabler and allows us to reach the marginalized and those left behind from quality education. But while connectivity is increasing at a rapid pace, educational material lags behind, particularly in mother tongues. Appropriate and relevant, quality education, combined with technology, will be a potent weapon to drastically improve access to education and eliminate illiteracy around the world.

How can we ensure that future generations are taught the right skills to flourish in future workplaces, which will demand a thorough command of digital skills?

No doubt, inclusive knowledge societies and the UN’s 2030 Agenda for Sustainable Development cannot be achieved without an informed population and an information-literate youth. Digital skills constitute a crucial part of quality education and lifelong learning.

UNESCO believes in empowering women and men, but particularly youth, by focusing specifically on what we call “Media and Information Literacy” (MIL). This includes human rights literacy, digital security skills, and cross-cultural competencies. These skills enable people to critically interpret their complex digital information environments and to constructively access and contribute information about matters like democracy, health, environment, education, and work.

As the media and communications landscape is complex and rapidly changing, we need to constantly update the substance of media and information literacy education to keep pace with technological development. The youth need, for example, to grapple with the attention economy, personal data and privacy, and how these and other developments impact them through algorithms and Artificial Intelligence (AI). Facing increasing concerns about the misuse of information and disinformation (‘fake news’), propaganda, hate speech, and violent extremism, we see an urgent need for a concerted effort from all stakeholders to empower societies with stronger media and information literacy competencies. In this way, the targets of malicious online endeavours will be able to detect, decipher and discredit attempts to manipulate their feelings, networks, and personal identities.

What role does the UN in general and UNESCO more specifically have to play in promoting and protecting human rights online? How does UNESCO navigate tensions between different interpretations of human rights online – e.g., first amendment fundamentalism in the US versus more balanced approaches in Europe?

One of the great achievements of the United Nations is the creation of a comprehensive body of human rights law—a universal and internationally protected code to which all nations can subscribe and all people can aspire. In the digital age, the UN General Assembly and Human Rights Council have constantly updated this human rights mandate by issuing a number of resolutions to promote human rights equally online and offline.

UNESCO, in turn, is the UN agency with a mandate to defend freedom of expression, instructed by its constitution to promote “the free flow of ideas by word and image.” UNESCO also recognizes that the right to privacy underpins other rights and freedoms, including freedom of expression, association, and belief. We work worldwide to promote freedom of expression and privacy both online and offline.

UNESCO has taken a lead in flagging Internet freedom issues at a number of key conferences and events such as the upcoming RightsCon gatherings, the annual WSIS Forum, and the Internet Governance Forum. We also do the same at UNESCO World Press Freedom Day celebrations each year on May 3, and at meetings to mark the International Day for Universal Access to Information on 28 September every year. To provide member states and stakeholders with cutting-edge knowledge and policy advice, UNESCO has also commissioned a number of pioneering policy studies in its Internet Freedom series. They shed light on issues such as protecting journalism sources in the digital age, principles for governing the Internet, and the evolution of multistakeholder participation in Internet governance.

How do you see emerging technologies, such as IoT or AI, impacting sustainable development and the future of our world? As we promote connectivity, do we risk cultural and linguistic diversity?

AI could profoundly shape humanity’s access to information and knowledge, which will make it easier to produce, distribute, find and assess. This could allow humanity to concentrate on creative development rather than more mundane tasks. The implications for open educational resources, cultural diversity, and scientific progress could also be significant. In addition, AI could also provide new opportunities to understand the drivers of intercultural tension and other forms of conflict, providing the capacities to collect, analyze, and interpret vast quantities of data to better understand, and perhaps predict, how and when misunderstandings and conflict may arise. In turn, these can all contribute to democracy, peace and achieving the SDGs.

However, AI and automated processes, which are particularly powerful when fuelled by big data, also raise concerns for human rights, especially where freedom of expression and the right to privacy are concerned. Internet companies have begun to use AI in content moderation and in ranking orders for personalized search results and social media newsfeeds. Without human values and ethics being instilled from the start during the design stage, and without relevant human oversight, judgement, and due process, such practices can have a negative impact on human rights.

AI is already beginning to shape news production and dissemination and is shifting the practice and value of journalists and journalism in the digital age. Internet and news media companies, especially where they intersect, need to consciously reflect on the ambiguities of data mining and targeting, as well as Big Data business models for advertising in the attention economy.

There is therefore a crucial need to explore these issues in depth and to reflect on ways to harness Big Data and AI technologies in order to mitigate disadvantages and advance human rights and democracy, build inclusive knowledge societies, and achieve the 2030 Sustainable Development Agenda. Current societal mechanisms including moral and legal frameworks are not geared to effectively deal with such rapid developments.

What are your hopes for the future of the Internet? What are your fears?

I hope to see a free and open Internet which is accessed and governed by all, leaving no one behind and making the world a better place for future generations. To do this we have to continuously counter emerging divides, such as in computer speech recognition, which is making great strides in English, for example, but which leaves many other languages on the periphery. We need a proportionate response to the problems on the Internet which does not damage “the good” in countering “the bad.” We should expand and maintain connectivity as the default setting in the digital age, and do everything possible to avoid the increasing tendencies of complete Internet shutdowns in certain regions or places. We need better respect for personal data and privacy from both corporate and state actors who track our online data. We need strong journalism online to counter disinformation, and we need heightened media and information literacies for everybody.

My fear is that the Internet is a double-edged sword: if not properly harnessed, it might end up being used to regress, rather than to advance, those classic values we cherish such as a private life, transparency, and public-interest journalism. Without dialogue amongst all stakeholders, we could see the Internet and related technologies being exploited to pose severe challenges to peace, security, and human rights. Such fears need to be offset by maintaining a sense of proportion whereby the good of the Internet significantly dwarfs the bad, and where we can increasingly utilise existing and emerging digital technologies to achieve the planet’s agreed development goals by 2030.

What do you think the future of the Internet looks like? Explore the 2017 Global Internet Report: Paths to Our Digital Future to see how the Internet might transform our lives across the globe, then choose a path to help shape tomorrow.


The Week in Internet News: AI Goes to the Dogs

Do you trust this documentary? Do You Trust This Computer? is a new documentary from filmmaker Chris Paine that’s dedicated to the dangers of artificial intelligence. Elon Musk, who’s been vocal about the potential downsides of the technology, appears in the film and has promoted it. But The Verge finds the film a bit overly dramatic, saying it “feels more like a trailer for a bad sci-fi movie than a documentary on AI.”

Or you could just get a dog: Speaking of AI, researchers at the University of Washington in Seattle are using canine behavior to train an AI system to make dog-like decisions, reports MIT Technology Review. The researchers are using dog behavior as a way to help AI better learn how to plan, with hopes of helping AI better understand visual intelligence, among other things.

News apps meet the Great Firewall: The Chinese government has temporarily blocked four news apps from being downloaded from Android app stores, ZDNet reports. The apps, with a combined user base of more than 400 million, have been suspended for up to three weeks in an apparent government media crackdown. Meanwhile, Chinese regulators have permanently banned a joke app for supposed vulgar content.

Mr. Zuckerberg goes to Washington: Facebook founder Mark Zuckerberg testified before Congress last week after recent reports of data analytics firm Cambridge Analytica taking personal data from the social media giant to profile potential voters for Donald Trump’s presidential campaign. Zuckerberg apologized and promised to abide by strict European data standards worldwide, while some lawmakers called for new U.S. privacy regulations. Here’s a New York Times roundup of his appearance before Congress.

EU presses websites about fake news: One of the big criticisms of Facebook among lawmakers was the way it assisted the spread of so-called fake news during the 2016 presidential election. Meanwhile, the European Union is looking at ways to force tech giants to do more to stop the spread of fake news, according to Reuters. The EU plans to release a “Code of Practice” by July that would require online platforms and advertisers to take a number of steps to prevent fake news.

Encryption backdoor only for feds? U.S. Senator Dianne Feinstein, a California Democrat, has proposed a government-only encryption backdoor, reports The Register. As in the past, many encryption experts have questioned whether the U.S. government could keep that backdoor to itself.

Want to learn more about AI? Read the Internet Society’s Artificial Intelligence and Machine Learning policy paper and explore how it might impact the Internet’s future.

Human Rights Technology

Countries Consider Penalties for Spreading ‘Fake News’

A handful of countries have recently considered passing new laws or regulations to combat so-called fake news, with Malaysia adding penalties of up to six years in jail for distributors.

Malaysia’s controversial Anti-Fake News 2018 bill, which passed this week, also includes a fine of US$123,000. An earlier draft of the legislation included jail time of up to 10 years. Under the new law, fake news is “news, information, data and reports which is or are wholly or partly false,” as determined by Malaysian courts.

The new Malaysian law covers digital news outlets, including video and audio, and social media, and it applies to anyone who maliciously spreads fake news inside and outside the country, including foreigners, as long as Malaysia or its citizens are affected.

Eric Paulsen, cofounder and executive director of Malaysian civil rights group Lawyers for Liberty, called the new law “shocking.” “Freedom of speech, info & press will be as good as dead in Malaysia,” he tweeted in late March.

The law will create a chilling effect on free speech, Malaysian lawyer Syahredzan Johan wrote. “While we may hope that the implementation of the bill will be transparent and fair, the wide definition given to ‘fake news’ and the imprecise nature of some of the provisions may lead to selective and arbitrary implementation and abuse,” he added.

Government officials have defended the law. Social media outlets are unable to monitor fake news, Azalina Othman, minister in charge of law, told the Washington Post. “No one is above the law. We are all accountable for our actions,” she said.

Meanwhile, India had proposed new rules that would allow the government to pull the official accreditation from journalists found to have written or broadcast “fake news.” But the Indian government quickly withdrew the proposal this week after strong opposition from journalists.

The proposal came just days after an Indian website editor was arrested for an apparently false report saying that Muslims had attacked a monk from the Jain faith.

The European Union is also looking for ways to combat fake news, primarily by cracking down on social media companies. The EU wants a “clear game plan” that sets the rules on how social media outlets can operate during sensitive election periods, said Julian King, the European commissioner for security.

King wants more transparency for the internal algorithms used by websites to promote stories, new limits on the harvesting of personal information for political purposes, and disclosure about the funding for sponsored content on websites, according to CNBC.

The EU proposals came partially in response to news reports saying Facebook indirectly shared millions of user profiles with Cambridge Analytica, a voter data vendor used by U.S. President Donald Trump’s campaign.

Read our interview with Wired Editor in Chief Nicholas Thompson on the changing role of media, then explore the 2017 Internet Society Global Internet Report: Paths to Our Digital Future and read the recommendations to ensure that humanity remains at the core of tomorrow’s Internet.

Artificial Intelligence Technology

Information Gatekeeping: Not a Laughing Matter

There’s a joke that goes something like this: How do you make a little money in the online news business?

The punchline: Start with a huge pile of money, and work your way down from there.

It seems the same joke would work for the online comedy business, judging by the layoff news coming out of Funny or Die in January. An interesting Q&A with comedy veteran Matt Klinman was published recently, in which he talked about the woes of online comedy outlets.

Klinman focused his ire on Facebook and its role as an information gatekeeper, in which the site determines what comedy clips to show each of its users. But much of his criticism could have just as easily been targeted at a handful of other online gatekeepers that point Internet users to a huge percentage of the original content that’s out there.

As Klinman says about Facebook, these services have created their own “centrally designed Internet” in which they serve as “our editor and our boss. They hide behind algorithms that they change constantly.”

As a thrice-laid-off online journalist, I can sympathize. I’m pretty sure I can’t blame any of the current gatekeepers for my 2002 layoff or a near miss in 2000, when I left a job just weeks before a round of forced departures. Heck, some of them weren’t even around back then.

Still, a contributing factor in my 2015 and 2017 layoffs appeared to be the company’s inability to compete in an Internet landscape dominated by a few gatekeepers. I’m exhibit No. 238,092 showing the online news business model isn’t working these days.

When Klinman says content algorithms often favor clickbait and “things that appeal to the lowest common denominator,” those words resonate with me, even though it may sound like small gatekeeper Funny or Die complaining about a larger gatekeeper.

With perhaps a bit of hyperbole – mixed with a larger truth – Klinman added: “I would gladly give money to anyone who could tell me of any digital publisher that is doing well other than Facebook.”

But Facebook is certainly not the only Internet giant trying to hang onto your eyeballs as long as it can. Google has become the Internet’s default search and recommendation engine, and Amazon has become the place where you can buy just about anything.

Twitter, Reddit, and LinkedIn have also turned into gatekeepers for certain information.

At the heart of these criticisms is the rise of this small group of gatekeepers controlling access to information. While Internet users are still free to seek information from other sources, a small group of companies increasingly serves as the Internet’s entry point.

The existence of information gatekeepers isn’t new, of course. Before the mass adoption of the Internet, back in the dark ages of my early journalism career, the gatekeepers were the owners of newspapers, TV stations, and book publishing houses.

The Internet promised to change that arrangement by giving everyone the tools to publish. There were early attempts to create new gatekeepers – America Online and Yahoo’s Web Guide are two examples – but those efforts ultimately failed.

Instead, a Wild West of information sharing flourished online for several years. Many executives in the traditional news industry failed to embrace the Internet early, putting them at a huge disadvantage to upstarts like Google, Craigslist, and Facebook.

But the promise of an Internet with no central gatekeepers – or perhaps, instead, dozens of competing gatekeepers – didn’t last. Today, we have news organizations – and apparently comedy sites – embracing the dark art of gaming gatekeeper algorithms in an attempt to get in front of the largest possible audience.

But guessing an ever-changing algorithm’s preferences is not a sustainable business model, nor is it a sustainable strategy for creating high-quality online content.

Klinman suggests a couple of ways to fix this problem. He recommends web users adopt a new attitude based on the eating local and shopping local movements. Support your favorite comedy or news site.

And if large gatekeepers are making money by delivering content created by someone else, perhaps they should pay for it, like cable TV providers are required to pay for programming.

A conversation about ways to change this dynamic has been happening in some circles for a while now, but it’s past time for a wider debate about the future of Internet content creation.

Read the 2017 Global Internet Report: Paths to Our Digital Future, which explores the changing media landscape and how Artificial Intelligence might impact the future of the Internet.


Webinar: "IPv6 For Broadcasters" on Wednesday, July 11

Why should radio and television broadcasters care about IPv6? What potential impact will IPv6 have on broadcasting? How can broadcasters get started learning more about IPv6?

We were very pleased to see that the Society of Broadcast Engineers is offering a live webinar on “IPv6 For Broadcasters” on:

Wednesday, July 11, 2012, from 2:00 – 3:30 US Eastern time

We couldn’t agree more with this part of the session description:

As a broadcaster, if you are providing content to the Internet, IPv6 migration should be considered to enable providing the best Quality of Experience (QoE) to a growing IPv6 content consumer audience without the use of translation schemes. Carriers and Internet service providers utilize translation devices to provide mixed IPv4 and IPv6 interoperability. The various translation schemes are suitable for TCP based applications such as email and web surfing, but can be detrimental to UDP based real-time media used by the broadcaster. In order to provide the best QoE, broadcasters should strive to provide their media content in a native format to IPv6 only users without the need for translation in addition to providing content to the legacy IPv4 users.

Any number of panelists at recent IPv6-related events have discussed the fact that IPv4-to-IPv6 translation services – as well as techniques like carrier-grade NAT (CGN) to prolong IPv4 usage – introduce latency into the network connection and can degrade the user experience for real-time communications, including streaming media. Making your media available over IPv6 will ensure viewers can see it in the best possible fashion.
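A quick first step for a broadcaster is simply checking whether its content hostname already publishes IPv6 (AAAA) records. Here is a minimal sketch using Python’s standard library; the hostname in the usage comment is a placeholder, not a real streaming endpoint:

```python
import socket

def ipv6_addresses(addrinfo):
    """Extract unique IPv6 addresses from socket.getaddrinfo()-style results."""
    return sorted({entry[4][0] for entry in addrinfo
                   if entry[0] == socket.AF_INET6})

def has_native_ipv6(host, port=443):
    """True if the host resolves to at least one AAAA record."""
    try:
        results = socket.getaddrinfo(host, port, socket.AF_INET6)
    except socket.gaierror:
        # No AAAA records (or no IPv6 DNS resolution available).
        return False
    return len(ipv6_addresses(results)) > 0

# Usage (placeholder hostname -- substitute your own content endpoint):
#   has_native_ipv6("media.example.com")
```

If this check fails for your own content hostname, viewers on IPv6-only networks are reaching you through translation devices, with the latency penalties the panelists describe.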

Google’s already leading the way with YouTube, and Netflix is now offering streaming over IPv6. Both ensure their content is available to users regardless of whether they are on IPv6 or IPv4.

So with that, it’s rather important that other broadcasters understand how they, too, can make their content accessible over IPv6.

This webinar sounds like a great start and we look forward to seeing more broadcasters offering their content over IPv6.

P.S. If you want more info about how to get started with IPv6, take a look at some of the IPv6 resources we’ve included here at our site.