
GLIF 2018 Held at the Home of Hamlet

The 18th Annual Global LambdaGrid Workshop (GLIF 2018) was held on 18-21 September 2018 at the Kulturværftet in Helsingør (Elsinore), Denmark. Kronborg Castle, located next to the venue, was immortalised as Elsinore in William Shakespeare's Hamlet, but there proved to be nothing rotten in the state of high-bandwidth networking as 50 participants from 19 countries came to hear how these networks are facilitating exascale computing in support of biological, medical, physics, energy production and environmental research, and to discuss the latest infrastructure developments.

I organised this event with support from NORDUnet, who hosted it in conjunction with the 30th NORDUnet Conference (NDN18), where I also took the opportunity to raise awareness of the MANRS initiative.

The keynote was delivered by Steven Newhouse (EBI), who presented the ELIXIR Compute Platform being used for analysing life science data. In common with high-energy physics, genomics research produces a lot of data, but this is more complex and variable, requires sequencing and imaging on shorter timescales, and of course raises privacy issues. The European Molecular Biology Laboratory is based across six countries and employs over 1,600 people, but it also collaborates with thousands of other scientists and requires access to existing national repositories as well. High-bandwidth networks are therefore necessary to interconnect its on-site compute and storage clusters, and will increasingly be needed to facilitate connectivity with other research and commercial cloud resources such as EGI.eu and HelixNebula.

David Martin (Argonne National Laboratory) continued this theme by presenting the US Department of Energy's Exascale Computing Initiative. This aims to develop and operate the next generation of supercomputers at the Argonne, Lawrence Livermore, Los Alamos and Oak Ridge National Laboratories by 2021, along with a software stack that will present a common computing platform for supporting advanced research applications and neural networks. The Argonne Leadership Computing Facility will be based around an Intel Aurora supercomputer with over 1,000 petaflops of processing power, 8 PB of memory, and 10 TB/s of input/output capability, which will require future network connections in the petabit-per-second range.

Joe Mambretti (Northwestern University) then discussed the Open Science Data Cloud (OSDC), an open-source, cloud-based infrastructure that allows scientists to manage, share and analyse large datasets. The aim is to have 1-2 PB of storage at each participating campus, interconnected with 100 Gb/s+ links, but presented and managed as a common namespace with uniform interfaces and policies.

The rest of the day was devoted to how network automation can integrate compute and storage facilities, particularly across multiple domains. Migiel de Vos (SURFnet) presented the work being undertaken for SURFnet 7, and explained the distinction between automation and orchestration: the former is task- and domain-specific, whilst the latter builds intelligent processes consisting of multiple automated tasks across multiple domains. This requires the development of new information models, standardised interfaces, automated administration, and predetermined service delivery agreements.

Gerben van Malenstein (SURFnet) then discussed the LHCONE Point-to-Point Service, which allows Layer 2 circuits to be dynamically established between Data Transfer Nodes for exchanging data from the Large Hadron Collider. This builds on the AutoGOLE work, now enabled at 21 open exchange points. Nevertheless, whilst AutoGOLE is a functional and proven multi-domain system, uptake by network services and end-users remains limited, and broader adoption will be necessary to completely remove human configuration of network equipment and create a truly global research platform.

Most of the following day was devoted to technical discussions chaired by Lars Fischer (NORDUnet) and Eric Boyd (University of Michigan). These focused on practical examples of network automation in use at the University of Michigan, a passive network measurement system with programmable querying at 100 Gb/s line rates being developed by the IRNC AMIS project, and discussions on how to automate the generation of network topology maps.

Topology maps show users how they can reach counterparts in other parts of the world and where particular services are available. They are also useful as a marketing tool, showing investors and stakeholders how they contribute towards a truly global infrastructure and demonstrating how the NREN model has been accepted around the world; the GLIF map, for example, has become a somewhat iconic piece of artwork.

Other developments included the establishment of a new exchange point called South Atlantic Crossroads (SAX) in Fortaleza, Brazil, which is expected to interconnect with new cable systems to Angola (SACS) and Portugal (EllaLink), as well as with AMPATH and SouthernLight over the existing MONET connection. There were also plans to procure a new 100 Gb/s connection from Europe to the Asia-Pacific, running from Geneva to Singapore via the Indian Ocean, to supplement the existing link from Amsterdam to Tokyo via Russia.

There were further updates on the new KREOnet network, which supports 100 Gb/s links between five major Korean cities and Chicago (StarLight) via KRLight, as well as multiple 10 Gb/s links to 11 other Korean cities, Hong Kong and Seattle. The KREOnet-S infrastructure additionally offers SDN capabilities permitting dynamic, on-demand virtual network slicing, whilst a Science DMZ provides high-performance computing facilities for KISTI's new 25.5 petaflop supercomputer.

SURFnet is transitioning to SURFnet 8 and will be upgrading its core network and international links, whilst StarLight is developing a Trans-Pacific SDN testbed, as well as an SDX for the GENI initiative.

The closing plenary session focused on how high-bandwidth research connections and exchange points can be better planned and coordinated, and whether a new entity should be created to support this. The GLIF Co-Chairs Jim Ghadbane (CANARIE) and David Wilde (AARNet) outlined some ideas, and then hosted a discussion on how to take things forward.


RIPE NCC Hackathon Version 6

The RIPE NCC will be holding its sixth Hackathon on 4-5 November 2017 in Copenhagen, Denmark, and by no coincidence at all, will be focusing on IPv6. This will be part of Danish IPv6 Week that’s being hosted by DKNOG and sponsored by Comcast Cable, and which will also have Deploy360 involvement in the shape of our colleague Jan Žorž.

Hackathons are opportunities for network operators, coders and hackers to get together to develop new tools, as well as to exchange knowledge and experience. Some possible projects for this hackathon include improving IPv6 measurements such as IPv6 RIPEness, improving the IXP Country Jedi tool that compares traceroutes between IPv4 and IPv6 (see the sketch below), and developing tools to advance IPv6 deployment.
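To give a flavour of the kind of comparison the IXP Country Jedi performs, here is a minimal Python sketch of my own (not the actual tool, which is far more sophisticated and uses RIPE Atlas probes) that runs the system traceroute over IPv4 and IPv6 to the same host and compares path lengths. It assumes a Linux traceroute that accepts the -4 and -6 flags, and the target hostname is purely illustrative:

```python
import subprocess

# Rough sketch only, not the IXP Country Jedi code: run the system
# traceroute over each address family and compare the path lengths.
# Assumes a Linux traceroute supporting the -4 and -6 flags.
def trace(family_flag, target):
    result = subprocess.run(
        ["traceroute", family_flag, target],
        capture_output=True, text=True, timeout=120,
    )
    # The first line is the "traceroute to ..." header; the rest are hops.
    return result.stdout.splitlines()[1:]

target = "www.ripe.net"  # example target, chosen arbitrarily
v4_hops = trace("-4", target)
v6_hops = trace("-6", target)
print(f"{target}: {len(v4_hops)} hops over IPv4, {len(v6_hops)} hops over IPv6")
```

A real measurement would of course compare the actual hops and round-trip times rather than just path lengths, and run from many vantage points.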

The RIPE NCC are specifically looking for UX and UI experts including graphic designers, developers familiar with Python, Node.js, Perl and Go, Internet measurement researchers, and network and hosting operators with experience of deploying IPv6.

If you’re interested in participating, then you need to apply before 10 October 2017.

Travel funding of EUR 500 per person is also available to six participants, with preference given to applicants from “least developed countries”, those working for not-for-profit organisations, and those with previous contributions to free and open-source software and projects. Please note, though, that the deadline for applicants who require funding is 9 September 2017.


Talking NAT64Check at DKNOG in Copenhagen

Tomorrow (16 March) from 13:45 – 14:30 CET (UTC+1), at the Danish Network Operators’ Group (DKNOG) in Copenhagen, I’ll talk about our experiments on NAT64 and DNS64 in the Go6lab and also about NAT64Check. Watch live via DKNOG’s live stream page at https://dknog7.dknog.dk/main-room-webstream/.

As many mobile operators are moving to IPv6-only, which is incompatible with IPv4 on the wire, it’s necessary to employ transition mechanisms such as 464XLAT or NAT64. The Go6lab NAT64/DNS64 test bed was established so that operators, service providers, and hardware and software vendors can see how their solutions work in these environments. This has already generated significant interest; instructions on how to participate are available on the Go6lab website.
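As a quick illustration of how a client can tell whether it is sitting behind NAT64/DNS64, here is a minimal Python sketch (my own example, not part of the Go6lab test bed) using the RFC 7050 technique of resolving ipv4only.arpa, a name that only has A records:

```python
import socket

# Minimal sketch of the RFC 7050 detection method: ipv4only.arpa has
# only A records (192.0.0.170/171), so any AAAA answer for it must
# have been synthesised by a DNS64 resolver, which also reveals the
# NAT64 translation prefix in use on this network.
def detect_dns64():
    try:
        answers = socket.getaddrinfo("ipv4only.arpa", 80, socket.AF_INET6)
    except socket.gaierror:
        return None  # no synthesised AAAA answer: no DNS64 in the path
    # For the common /96 prefix, the final 32 bits of the synthesised
    # address embed the original IPv4 address.
    return answers[0][4][0]

addr = detect_dns64()
print(f"DNS64 detected, synthesised address: {addr}" if addr else "No DNS64/NAT64 detected")
```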

NAT64Check allows websites to be checked for consistency over IPv4, IPv6-only, and NAT64, and compares their responsiveness over the different protocols. This allows network and system administrators to easily identify whether anything is ‘broken’ and to pinpoint where the problems are occurring, so that any non-IPv6-compatible elements can be fixed. For example, even if a web server is not running IPv6 (why not?), it can still be reached through NAT64, but hard-coded IPv4 addresses in its pages will cause failures, because DNS64 only rewrites DNS answers and an address literal never triggers a DNS lookup.
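As a rough illustration of that failure mode, the following sketch (a hypothetical example of mine, not how NAT64Check itself is implemented) fetches a page and flags any embedded IPv4 literals, each of which would be unreachable for an IPv6-only client behind NAT64:

```python
import re
import urllib.request

# Hypothetical illustration, not the NAT64Check implementation: scan a
# fetched page for hard-coded IPv4 literals. The pattern is crude and
# may also match things like version numbers, but it shows the idea.
IPV4_LITERAL = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_ipv4_literals(url):
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    return sorted(set(IPV4_LITERAL.findall(html)))

# Any addresses printed here would be broken for NAT64/IPv6-only users.
print(find_ipv4_literals("https://example.com/"))
```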

During the talk I’ll share some insights and discuss issues I found while testing NAT64/DNS64 technology in real-life scenarios and use cases.

If you are at DKNOG, I’m more than happy to chat and discuss all this new technology that makes the Internet such a great place!