
Measure Your Bufferbloat! New Browser-based Tool from DSLReports

All things come to those who wait, and bufferbloat measurement tools are no exception. When we hosted a workshop on reducing Internet latency way back in 2013, one of the identified outcomes was the need for better tools to help users understand when they had a bufferbloat problem, and now we have just such a tool from the awesome folks over at DSLReports.

Before going any further I should probably clarify what we mean by bufferbloat. Rather than going into the details of bufferbloat, what it is and what causes it, it may be simpler to think about the observable result of bufferbloat: increased latency under load. This is a measure of the additional time it takes to send data over the Internet when your link to the Internet is loaded with traffic. Today we have technology that can help to reduce that additional latency to zero, or very close to that, but it is not widely deployed. To help stimulate deployment, end-users and network engineers alike have needed a tool that quickly and simply illustrates the existence of a bufferbloat problem on any given link. Enter DSLReports’ new speedtest.
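To get a feel for the scale of the problem, here is a rough back-of-the-envelope sketch. The buffer size and link rate below are hypothetical, chosen only for illustration: once a buffer fills, every packet behind it waits for the buffer to drain at the link rate.

```python
# Back-of-the-envelope illustration (hypothetical numbers): the extra
# latency a full buffer adds is simply its size divided by the link's
# drain rate.

def queueing_delay_ms(buffer_bytes: float, link_bits_per_sec: float) -> float:
    """Worst-case extra delay once the buffer is full."""
    return buffer_bytes * 8 / link_bits_per_sec * 1000

# A 256 KB buffer in front of a 1 Mbps uplink adds roughly two seconds
# of delay to every packet queued behind a bulk upload.
delay = queueing_delay_ms(256 * 1024, 1_000_000)
print(f"{delay:.0f} ms")  # ≈ 2097 ms
```

This is why a seemingly modest buffer can produce the multi-second latency spikes the speedtest makes visible.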

Go to: http://www.dslreports.com/speedtest and click the button for your Internet connection type, e.g. “DSL” for ADSL lines. While the speedtest is running, you’ll see a gauge on the left-hand side of the display illustrating ‘Buffer Bloat’ – you want that to stay green for most or all of the download and upload tests.

After you run the test, click the green “Results + Share” button to see more detailed information. For the moment, you need to be logged in to see the more detailed latency results. There’s a “register” link on each page.

The first time I ran the new tool, it showed me that the new router I had recently installed on my connection, which I knew included the latest technology to minimise latency under load, wasn’t configured correctly.

Pushing 4 seconds of additional latency under load while uploading indicates a problem.

Armed with that knowledge I tweaked the router configuration (by shaping upstream and downstream bandwidth to be just below the connection bandwidth, thereby enabling the queue management technology in the router to have an effect) and now have a much better looking set of test results.
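The tuning rule I applied can be sketched as follows. The 5% margin and the example line rates are my illustrative assumptions, not values from any particular router firmware; the idea is simply to shape slightly below the link’s real capacity so queuing happens in the router’s smart queue rather than in the modem’s dumb buffer.

```python
# Hypothetical sketch of the shaping rule described above: keep the
# shaper rate just below the measured link rate so the router's queue
# management (not the modem's oversized buffer) becomes the bottleneck.
# The 5% margin is a common rule of thumb, not a specification.

def shaped_rate_kbps(measured_link_kbps: float, margin: float = 0.05) -> int:
    """Return a shaper rate just below the link's real capacity."""
    return int(measured_link_kbps * (1 - margin))

# e.g. an ADSL line measured at 8000 kbps down / 1000 kbps up
down = shaped_rate_kbps(8000)   # 7600 kbps
up = shaped_rate_kbps(1000)     # 950 kbps
print(down, up)
```

Trading a few percent of headline bandwidth for near-zero latency under load is almost always a good deal for interactive traffic.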

Almost no variation in latency between unloaded and loaded during both download and upload tests – this is what we want.

Minimising latency under load means that I should be better able to simultaneously use my connection for downloading large files and conducting real-time interactive communications via voice and video. Why not test your link today and share your results with colleagues and friends to raise awareness of bufferbloat as an issue and to help improve our collective Internet performance experience?

For more information about bufferbloat, how to test for it and what you can do about it, see the great resources over at bufferbloat.net.


UPDATE: In response to a few questions people have raised:

  • All connection types are now enabled with bloat testing except 3G and GPRS.
  • If you don’t like the results you’re getting and want to know what to do about it, see Quick_Test_for_Bufferbloat and links from there.
  • If you have hardware supported by the OpenWrt project then you can install OpenWrt and follow the configuration instructions here: What_to_do_about_Bufferbloat
  • If you don’t have OpenWrt-supported hardware it’s going to be more complicated and you may be out of luck until you have different hardware.

Reducing Internet Latency: The Long-term Challenge of Making the Internet Faster

Editor’s Note: This is a guest blog post by Andreas Petlund from Simula Research and the RITE Project. You can read more about the Internet Society’s work related to Internet Latency at https://dev.internetsociety.org/tags/latency.

The world is waking up to the need for consistent low latency on the Internet. Some people, like Stuart Cheshire of Apple, have tried for decades to make the technical world think about latency when they design systems and standards. Recently, efforts like the Bufferbloat and RITE (reducing Internet transport latency) projects have been working on some of the problems that increase Internet delays. We’ve also seen great initiatives like the Internet Society Workshop on Reducing Internet Latency. Now papers are being written that urge the network community to increase its efforts to realise a near-lightspeed Internet in order to release the potential that such stable low-latency communication will give the apps of the future <http://conferences.sigcomm.org/hotnets/2014/papers/hotnets-XIII-final111.pdf>.

I recently got a question from a journalist when interviewed about our new video explaining sources of Internet latency: when can we have near-zero delay on the Internet? I could tell from his tone that he was hoping for something like “next year”. I could hear his disappointment over the phone when I answered that we can make good progress within the next few years, but that the big changes that would give us the Internet of our dreams could take decades.

So why is the progress towards that goal so slow?

The root lies in the distributed structure of the Internet. If we could deploy a “new” Internet tomorrow, with every component under the same all-powerful control, we would have our low-latency net immediately. The technology is there and has been for a long time.

If we discount this clean-slate utopia, however, the road to low-latency happiness has obstacles in the political, economic and technical domains.

ISPs have businesses to run and customers to keep. They’re very nervous about making changes that may scare their customers away. Since providing network services is a low-margin business, almost _any_ proposed change will meet strong resistance. Such fear helps maintain a status quo that keeps all the actors at the same level. If we can educate and encourage decision makers within such organisations, the chances of deployment will increase.

For the technical challenges, any solution that will drastically improve the situation has to be widely embraced by the community in order to succeed. Not only that, but it has to support incremental deployment. Some legacy equipment will lurk in the shadows waiting to break your beautifully designed algorithm for low latency communication. So to increase the chance of success, solutions should be standardised and designed with implicit incentives for people to adopt them, even though there may be a phase with sub-maximal benefits due to lack of widespread deployment. In the meantime, there are many ways to make smaller changes that have less impact, but that can still cut some milliseconds from Internet response times.

An important element for increasing the chance of success for Internet latency reduction is to raise public awareness. When the benefits are known, there should be growing pressure on the influential players to help with the low-latency efforts. My feeling is that we’re about to reach a point where Internet latency is no longer a topic only for small groups of people with a special interest. We’re witnessing a growing interest, largely due to the Bufferbloat project’s work. In RITE, we have just released an informational video aimed at raising public awareness of the topic. We’ve included educational material so that it can easily be used in IT 101 courses, allowing a new generation of technicians to be conscious of the latency aspect.

I’m an optimist about this. My hope is that the raised awareness will motivate a collective effort so that we’ll reach agreement on changes that will transform the Internet – without having to wait until 2040.


Why ‘Megafast’ Internet Often Isn’t (Video)

What’s the most important thing determining your satisfaction with your Internet connection? I’m sure a lot of people would say speed, and that’s not surprising as headline bandwidth figures have been the way many commercial Internet service providers have chosen to compete in the marketplace for subscribers for many years. 50Mbps must be better than 10Mbps, right? Superfast sounds really, erm, fast, right?

Well, in many cases, increased bandwidth won’t result in significant improvements to user experience and one of the reasons for that is related to latency, or delay. Measurements from Google show that upgrading your connection from 1Mbps to 2Mbps halves web page load times, but quickly thereafter we are into diminishing returns: upgrading from 5Mbps to 10Mbps results in a mere 5% improvement in page load times.
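A toy model makes the diminishing returns easy to see. It does not reproduce Google’s exact figures (real page loads involve parallel connections and many other factors), and the constants below are my assumptions, chosen only to show the shape of the curve: a bigger pipe shrinks the transfer term but leaves the round-trip term untouched.

```python
# Toy page-load model (illustrative assumptions, not Google's
# methodology): a page load is roughly some number of RTT-bound round
# trips plus the time to transfer the page's bytes. Only the transfer
# term shrinks as bandwidth grows.

def page_load_s(bandwidth_mbps: float, rtt_ms: float = 100,
                round_trips: int = 20, page_mb: float = 2.0) -> float:
    transfer = page_mb * 8 / bandwidth_mbps   # seconds moving bytes
    waiting = round_trips * rtt_ms / 1000     # seconds waiting on RTTs
    return transfer + waiting

for mbps in (1, 2, 5, 10):
    print(f"{mbps:>2} Mbps: {page_load_s(mbps):.1f} s")
```

Past a few Mbps the fixed round-trip term dominates, so cutting latency, not adding bandwidth, is what speeds the page up.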

To help more Internet users understand this seeming conundrum, I’ve been working with the lovely folks over at the RITE project to develop a new video, released today, that seeks to explain the difference between bandwidth and delay, and the different ways that latency affects Internet performance.

Why not take a look at the video and then see how you do in the quiz? There are even more resources for teaching and other activities available at the RITE project website.

It’s great to see these educational efforts being launched, as they start to address some of the important actions identified during the Reducing Internet Latency workshop we held last year. There’s still lots more to be done, though, and “getting smarter queue management techniques more widely deployed” remains a priority.

So check out the video today and share it with your contacts to help us all get educated about this challenge. You can also keep watching our ITM blog for more posts about tackling the scourge of network latency!

Enjoy the video!