The year 2022 has come to an end, and it was a very interesting one with lots of changes behind the scenes. That’s why we want to present the most relevant and interesting updates.
After the crisis is before the crisis
In summer 2022, just when we thought that the Covid crisis was over, the next one appeared on the horizon: electricity shortage scenarios. A secure, stable power supply has become such a matter of course in Switzerland over the past 100 years that hardly any organisation had concerned itself with shutdowns or impending blackouts. Until then, we too had confined ourselves to a redundant design of the power supply, with unspecified protection by a UPS. Then the federal government announced procedures with scheduled power interruptions. Immediately, discussions began internally about how we would deal with such situations. At the same time, more and more customer enquiries were received about this. The topic of power shortages accordingly occupied us in the second half of 2022. Two results have emerged so far:
We now have detailed information about the power supply at our backbone locations.
We are looking at a promising battery solution to improve autonomy at key backbone sites.
In addition to all the technical changes, there were also personnel changes in the network team last year. In anticipation of upcoming retirements, we have strengthened our team with Simone Glinz and Daniel Sutter. Simone started as the successor of Ulrich Schmid, who will retire in April 2023. She is therefore responsible for many systems in the background (monitoring, statistics, data engineering, system administration, etc.). Daniel supports us as project manager for our fibre optic infrastructure. Andy Hebeisen has left our team. He is now taking on overall responsibility as vocational trainer for the apprentices at SWITCH.
Business as usual
As mentioned at the beginning, we have made many changes in the background this year. In addition to various upgrades of peerings and upstreams, this also includes the addition of new locations and customers. The most significant milestone, however, is the successful completion of our first backbone links with a bandwidth of 400 Gbps. In December 2022, we were able to put the links on the triangle between ETHZ in Zurich, CSCS in Lugano and CERN in Geneva into operation. We were thus able to successfully demonstrate that our optical transport platform is still suitable for bandwidths beyond 100 Gbps.
In addition to old and new crises and our business-as-usual, we were also busy with various projects in the past year. In the case of SCION, for example, we were able to achieve two important milestones in addition to its introduction as an official SWITCHlan service. On the one hand, the Secure Swiss Finance Network (SSFN), created in collaboration with the Swiss National Bank and the SIX Group, went live in mid-year. On the other hand, we were able to convince GÉANT to set up SCION in their backbone and to establish a global isolation domain (ISD) for research networks.
We were able to achieve another highlight in the transmission of high-precision time signals that can be verified against UTC(CH). In collaboration with METAS and Armasuisse S+T, we launched a pilot project to explore the technical feasibility and market opportunities. A first technical result was recorded on the route between METAS in Wabern/Bern and Armasuisse S+T in Thun: the measured round-trip deviation of the time signal was only 4 ps (4 × 10⁻¹² s).
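Conceptually, such a two-way comparison resembles an NTP-style exchange: both sides timestamp transmission and reception, and the symmetric part of the path delay cancels out. The following Python sketch illustrates only the arithmetic, with made-up timestamps; it is not the actual METAS measurement method or data.

```python
def two_way_offset_and_delay(t1, t2, t3, t4):
    """Classic two-way time transfer arithmetic (NTP-style).

    t1: A sends (A's clock)     t2: B receives (B's clock)
    t3: B sends (B's clock)     t4: A receives (A's clock)
    Assumes the outbound and return path delays are equal.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2   # B's clock minus A's clock
    delay = (t4 - t1) - (t3 - t2)          # round-trip path delay
    return offset, delay

# Made-up example: 1 ms one-way delay, B's clock 5 microseconds ahead of A's.
offset, delay = two_way_offset_and_delay(0.0, 0.001005, 0.001105, 0.0021)
```

Any asymmetry between the two directions of the path goes directly into the offset error, which is why measuring the deviation of the signal out and back is so important.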
A smaller, but still very useful project was the installation of a new speed test server. The new server has a direct backbone connection with 100 Gbps. It can be selected on the Ookla website as “SWITCH – Zurich”: see https://www.speedtest.net/.
Besides providing excellent network connectivity services, we obviously have some interesting goals on our roadmap for 2023. For example, CSCS will become our first customer with a 400 Gbps SWITCHlan IP Access service. And in a couple of months, we’ll be able to provide this service redundantly. To achieve this, we are going to deploy a second set of 400G routers at CSCS, ETH Zurich and CERN in summer. Further, we will improve our resiliency in terms of upstreams and peerings. With GTT, we’ll add another upstream provider. On the peering side, we will most notably add a second private network interconnect to Microsoft and augment its bandwidth to 100 Gbps. More upgrades to 100 Gbps will follow for other big peers as well. And that’s only a small part of our roadmap.
As the national research and education network (NREN) of Switzerland, we operate a nationwide fiber backbone where our customers can exchange traffic with each other via the most direct path, without congestion. But how are we connected to the rest of the Internet?
There are two different kinds of external traffic: Research and Education (R&E) and commercial.
Reaching universities or research institutions in other countries happens via a redundant 200G connection to GÉANT, the organisation that interconnects all NRENs in Europe.
Our connection to GÉANT is at our CERN PoP, where we have a 2×100G connection to their router in Geneva as well as a backup 2×100G connection to Paris.
As for all our connectivity, our philosophy is to always overprovision and make sure that there are no bottlenecks, so the capacity of these links is always upgraded before it becomes necessary.
This is also true for our commercial connectivity to the Internet.
There are two different kinds of commercial connectivity, peering and transit.
Peering means connectivity to other networks where each side provides just the prefixes of its own network and its customers. Usually this benefits both parties equally and is thus settlement-free.
It’s always best to connect directly to another network if possible, as you then have control over the connection and can make sure that there is no congestion. Going through third parties makes this less transparent.
Transit basically means the carrier accepts traffic for all destinations on the Internet and will thus “transit” packets towards other networks that are not directly connected.
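The difference can be sketched as a route export policy: on a peering session we announce only our own and our customers’ prefixes, while a transit customer receives everything we know. A minimal Python model (the prefixes are illustrative documentation ranges, not our actual routing policy):

```python
# Illustrative prefixes only; not SWITCH's actual routing table.
OWN_AND_CUSTOMER = {"192.0.2.0/24", "198.51.100.0/24"}
LEARNED_FROM_ELSEWHERE = {"203.0.113.0/24"}
FULL_TABLE = OWN_AND_CUSTOMER | LEARNED_FROM_ELSEWHERE

def exported_routes(session_type):
    """What gets announced on a BGP session, depending on its type."""
    if session_type == "peer":
        # Settlement-free peering: only our own and customer routes.
        return FULL_TABLE & OWN_AND_CUSTOMER
    if session_type == "transit-customer":
        # A transit customer gets reachability to the whole Internet.
        return FULL_TABLE
    raise ValueError(f"unknown session type: {session_type}")
```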
Peering traffic is usually exchanged between networks at so-called Internet Exchanges (IX). These are basically distributed switch fabrics that can span multiple buildings, locations or even cities and countries. Connecting to such an Internet Exchange switch means a single connection to the IX fabric is enough to communicate with all other networks present there.
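The economics are easy to see: interconnecting n networks pairwise needs on the order of n² links, while an exchange fabric needs only one port per network. A quick back-of-the-envelope calculation:

```python
def full_mesh_links(n):
    """Bilateral links needed to connect n networks pairwise."""
    return n * (n - 1) // 2

# 200 networks would need 19,900 bilateral links,
# versus just 200 ports on a shared IX fabric.
```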
In Switzerland, the most well known Internet Exchanges are SwissIX, Equinix Zurich IX and CERN Internet Exchange (CIXP), where we each have a presence.
To SwissIX we are redundantly connected with 100 Gbit/s to their switch at the Interxion data center in Glattbrugg, as well as to the one at the Equinix data center in Zurich, also with 100 Gbit/s.
At the Equinix Zurich location we are also connected to the Equinix Zurich IX switch with 10G.
For geographical redundancy, there is also the CERN Internet Exchange in Geneva, where we have a 10G connection to the switch fabric, although not that many peers are present there.
Since Switzerland is relatively small, and only a very limited set of international networks have a presence on Swiss Internet exchanges, we also have connections to the two largest Internet exchange points in Europe, 30Gbit/s to Amsterdam (AMS-IX) and 100Gbit/s to Frankfurt (DE-CIX).
For peers with whom we exchange larger volumes of traffic (usually above 1 Gbit/s), notably large content and cloud providers, we have separate direct physical fiber links between our routers and theirs, not going over a central switch fabric. This saves capacity on the shared link to the switch fabric.
Swisscom, Liberty Global (Sunrise/UPC), Microsoft, Google, Amazon and Facebook are among the larger peers we have direct connections to, so-called private network interconnects (PNIs).
Of all our commercial Internet traffic volume, on average about 90% is handled directly over peerings as described above. For the rest, we rely on multiple Transit providers for redundancy.
Currently we have a 10G transit connection to Arelion (formerly Telia) in Zurich, 10G connections to Lumen in Geneva and Basel, and a 10G connection to Cogent in Basel; GTT is planned with 100G in Geneva and 10G in Zurich. Apart from the obvious redundancy, having multiple transit providers gives us the possibility to shift routing to different paths in order to work around problems with certain prefixes.
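Shifting a troublesome prefix from one transit provider to another is typically done with BGP attributes such as local preference. A toy model of the idea (provider names match the text, but the prefix and preference values are invented):

```python
# Toy BGP-style route table; prefix and preference values are invented.
routes = [
    {"prefix": "203.0.113.0/24", "via": "Arelion", "local_pref": 100},
    {"prefix": "203.0.113.0/24", "via": "Lumen",   "local_pref": 100},
]

def best_route(prefix, table):
    """Pick the route with the highest local preference for a prefix."""
    candidates = [r for r in table if r["prefix"] == prefix]
    return max(candidates, key=lambda r: r["local_pref"])

# If the path via Arelion develops problems for this prefix, raising
# the preference of the alternative shifts traffic onto Lumen.
routes[1]["local_pref"] = 200
```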
In case you have any questions about our external connectivity, leave a comment and I’m happy to answer them.
When the community met in Schaffhausen on January 21-22, 2020 for the Network and Security WG, I don’t think any of us thought it would be the last such event for a long time.
Impact of COVID-19
The first measures were taken at SWITCH in early March. Among other things, some employees were sent to work from home as a “field force”, to be deployed in urgent cases should the disease spread at the SWITCH head office.
Then, on March 13, the Federal Council announced measures to contain the coronavirus for the whole of Switzerland.
At that point it became clear that the network team—like most other teams at SWITCH—would no longer be meeting in the Head Office until further notice. We had no idea that this would remain the case until now, ten months later, with two exceptions.
Since we already had a long-term remote worker in the team, and some of us had been sent home as part of the aforementioned “field force”, we were not entirely unprepared. Also, the fact that most of us had been working together for a few years (or decades) helped with the transition to ~100% remote work. In addition to email, we use internal text chat for exchanges.
Our colleagues in the MAPS and Collaboration teams had already set up a Jitsi infrastructure in no time at all at the beginning of March in order to be able to offer the SWITCH community an uncomplicated solution for videoconferencing (in the meantime, commercial services such as Zoom, Teams, WebEx or Meet mostly cover this need). In our team, we have been using this “meet.switch.ch” intensively since the first lockdown, for traditional work meetings but also for informal exchanges in the form of “coffee breaks”—twice a day, because networkers are known to consume a lot of coffee.
Impact on network traffic
From one day to the next, SWITCHlan’s usage patterns changed drastically. During normal (research and teaching) working hours, we usually transfer a lot of traffic from the general Internet to the campus networks of the universities. This traffic suddenly disappeared as virtually no one was on campus. In its place, demand for lecture streaming and other collaboration and video solutions skyrocketed. The users were (and are) mainly connected to the broadband networks of operators such as Swisscom, UPC, etc., with whom we maintain peerings (direct interconnections). Since many users apparently use the VPNs of their respective institutions, most of this traffic traverses the campus networks again, only to be sent on from there via SWITCHlan towards the aforementioned broadband providers.
As a result, traffic on our peering links with Swiss broadband providers rose sharply. We immediately set about upgrading the most important of these, and thanks to energetic assistance from our partner providers and the operators of the exchange points, we were able to put some of these upgrades into operation in the first few days of the lockdown. Among other things, SWITCHlan’s connection to SwissIX is now running redundantly and at 100 Gb/s.
In addition to the Swiss access providers, there were also other peers with a sharp increase in traffic, especially those hosting the large video services (Zoom, etc.). Here we had to do some “traffic engineering” in some cases. A lucky coincidence was that after long months (or was it years?) of planning, we were finally able to put a 100Gb/s link to DE-CIX in Frankfurt into operation in mid-March.
Some SWITCHlan connections also needed emergency upgrades; so on March 16—the day the Federal Council declared an extraordinary situation—we were able to help the Hôpitaux Universitaires de Genève (which includes the National Reference Center for Emerging Viral Infections) prepare for the onslaught by upgrading to redundant 10Gb/s.
Even though the COVID-19 pandemic has shaken up a lot, we have been able to work on some interesting projects. Here are some highlights.
Time and frequency distribution
Highly accurate time and frequency sources, such as those maintained by national metrological institutes (in Switzerland: METAS), can be verified or made even more accurate by synchronizing them with each other. This is possible via high-precision methods over optical fibers. These processes are quite different from those we use to transmit data. But with a few tricks, it’s possible to use the same fibers for both.
SWITCH was actively involved in two projects on this topic in 2020: Together with METAS, we set up a triangle Bern (METAS in Wabern) – University of Basel – ETH Zurich. The first METAS publications with measurements on this system should be published soon. A special feature of the system is that we use bidirectional links here (outbound and inbound signals on the same fiber), which requires the use of special components. Thus, some bidirectional optical amplifiers have been in use since this year, and we are now working out their teething troubles with the manufacturer.
At the European level, GÉANT and some national metrological institutes are also working on time/frequency distribution. The discussion about suitable approaches for the coexistence between frequency transmission and “normal” data transmission is much more complex. We have been able to contribute valuable experience from building the Swiss triangle.
In the SCION project for secure interdomain routing, we were also involved in two projects: One is SCI-ED (SCION for the ETH Domain), in which all institutions of the ETH Domain get SCION connections. The other is the SSFN (Secure Swiss Finance Network), where the SCION concept is being tested under the aegis of the Swiss National Bank for its suitability as a future infrastructure for Swiss interbank clearing. Other partners include SIX and two commercial Swiss ISPs, as well as the ETH spinoff Anapaya.
In the EU project GÉANT (officially GN4-3), the SWITCH network team is involved in the area of time/frequency (see above), but also in activities around novel “programmable dataplanes”. P4 is the state of the art here and is considered an improved successor to OpenFlow. In the GN4-3 subproject RARE (Router for Academia, Research and Education) an already existing open source software platform (FreeRouter) was equipped with a backend for P4 compatible devices.
SWITCH was able to contribute important experience here, as we are already using P4-based platforms productively for a special application: as part of our Netflow/IPFIX traffic data collection, we programmed a “packet broker” that aggregates the data from several router ports to be measured onto a few fast server ports. (These servers run a Netflow/IPFIX exporter, also developed in-house.) Alex Gall presented this system at a workshop on Telemetry and Big Data of the GÉANT project in November.
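The core idea of such a packet broker can be sketched in a few lines: hash each packet’s flow key onto one of a small number of output ports, so that every flow consistently lands on the same exporter. The real system is written in P4; this Python model only illustrates the hashing logic, and the port names are invented.

```python
import zlib

SERVER_PORTS = ["exporter0", "exporter1", "exporter2"]  # invented names

def output_port(src_ip, dst_ip, protocol):
    """Map a flow key onto one of the fast server ports.

    A stable hash keeps all packets of a flow on the same exporter,
    so per-flow Netflow/IPFIX accounting stays consistent.
    """
    key = f"{src_ip}|{dst_ip}|{protocol}".encode()
    return SERVER_PORTS[zlib.crc32(key) % len(SERVER_PORTS)]
```

In the real dataplane, the same hash-and-forward decision happens at line rate per packet, which is exactly the kind of fixed, simple logic P4 hardware is good at.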
Data Center Interconnect (DCI)
We built a DCI (Data Center Interconnect) solution for a major data center relocation project in southern Switzerland. For this we introduced a new optical platform, which seems to be suitable for further applications in the backbone in the future. The data center migration itself started towards the end of the year. The challenges involved are considerable and will continue to keep us busy in 2021.
“Business as usual”
Besides all these more or less extraordinary activities, the “normal” business had to go on. The aforementioned meeting of the Network/Security WG in Schaffhausen in January was a nice (but, as mentioned, unfortunately also the last for a while) opportunity to exchange experiences, opinions and ideas within the SWITCH network community.
In addition to the new 100 Gb/s link to DE-CIX in Frankfurt (see also under COVID-19), there were, as usual, various upgrades. For example, in November we upgraded the link to GÉANT from 100 Gb/s to 2×100 Gb/s. In this context, the existing Cisco ASR9000 was also replaced by a newer NCS 55A1 router. This comparatively very compact and energy-saving platform is proving to be the new standard for sites that need (many) 100 Gb/s links.
In our optical platform (ECI), we are also benefiting from technical innovations that allow us to build new N*100Gb/s links relatively inexpensively on the main routes where the need exists.
Long-time NeWo employees Ulrich Schmid and Markus Wittmer have moved to the newly created DTec (Development & Technology) team. We will continue to benefit from their expertise.
After an excursion of several years into the field of cloud infrastructure, Simon Leinen returned to the network team.
Kurt Baumann was heavily involved in SWITCH’s newly launched work on research data management in 2020 and will join the Business Development team in 2021.
Finally, we had one retirement to celebrate, that of Ernst Heiri, who left after 25 years of service to SWITCH, including 17 years with the network team.
Outlook for 2021
In the year that has just started, we are looking for a new member to join our team. This could be someone fresh out of university who wants to become an accomplished networking practitioner in a stimulating but relatively stable environment—job ad here; please forward to any interested parties!
For March 11, we are planning a meeting of the Network WG, which will take place virtually for the first time.
On the technical side, we will build on what we have achieved and continue to work on the projects mentioned above. Also, in 2021 we want to evaluate candidates for a new backbone router platform that also supports 400Gb/s connections—as cost effectively as possible and hopefully with (at least) the current level of stability.
For 2021, we wish the entire SWITCH community every success and look forward to continued good cooperation. If a little normality returns, so much the better. Even so, we certainly won’t be bored!