As the national research and education network (NREN) of Switzerland, we operate a nationwide fiber backbone where our customers can exchange traffic with each other over the most direct path and without congestion.
But how are we connected to the rest of the Internet?
There are two different kinds of external traffic: Research and Education (R&E) and commercial.
Reaching other universities or research institutions in other countries happens via a redundant 200G connection to GEANT, the organisation that interconnects all NRENs in Europe.
Our connection to GEANT is at our CERN PoP, where we have a 2x100G connection to their router in Geneva as well as a backup 2x100G connection to Paris.
As with all our connectivity, our philosophy is to overprovision and make sure there are no bottlenecks, so the capacity of these links is always upgraded before it becomes necessary.
This is also true for our commercial connectivity to the Internet.
There are two different kinds of commercial connectivity, peering and transit.
Peering means connectivity to other networks where each side provides only the prefixes of its own network and its customers.
Usually this is equally beneficial to both parties and thus settlement-free.
It’s always best to connect directly to another network if possible, as you then have control over the connection and can make sure there is no congestion.
Going over third parties makes this less transparent.
Transit means the carrier accepts traffic for all destinations on the Internet and will thus “transit” packets towards other networks that are not directly connected.
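The difference between the two can be boiled down to what each session announces. The following is an illustrative sketch, not our actual routing policy; the prefixes are the RFC 5737 documentation ranges and the function names are made up for this example:

```python
# Illustrative sketch of peering vs. transit as BGP export policy.
# Prefixes are RFC 5737 documentation ranges, not real announcements.

OWN_AND_CUSTOMER = {"192.0.2.0/24", "198.51.100.0/24"}  # our network + customers
FULL_TABLE = OWN_AND_CUSTOMER | {"203.0.113.0/24"}      # plus routes learned from others

def exported_prefixes(session_type, table):
    """Return the prefixes announced on a BGP session of the given type."""
    if session_type == "peering":
        # To a peer: announce only our own and our customers' prefixes.
        return table & OWN_AND_CUSTOMER
    if session_type == "transit-customer":
        # As a transit provider: announce the full table, so the
        # customer can reach every destination on the Internet.
        return set(table)
    raise ValueError(f"unknown session type: {session_type}")

print(sorted(exported_prefixes("peering", FULL_TABLE)))
# → ['192.0.2.0/24', '198.51.100.0/24']
```

The asymmetry is the whole point: a peer only carries traffic that starts or ends in its own network, while a transit provider carries traffic to anywhere, which is why transit costs money and peering is usually settlement-free.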
Peering traffic is usually exchanged at so-called Internet Exchanges (IX). These are essentially distributed switch fabrics that can span multiple buildings and locations, or even cities or countries.
With a single connection to such an Internet Exchange switch, a network can communicate with all other networks present on the fabric.
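This is easy to quantify: interconnecting N networks bilaterally needs a link for every pair, while an IX fabric needs only one port per network. A quick back-of-the-envelope calculation (the peer count of 200 is purely illustrative):

```python
# Bilateral links needed for a full mesh of N networks vs. one IX port each.

def full_mesh_links(n):
    # Every unordered pair of networks needs its own link.
    return n * (n - 1) // 2

n = 200  # illustrative peer count at a larger IX
print(full_mesh_links(n))  # → 19900 bilateral links, vs. just 200 IX ports
```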
In Switzerland, the most well known Internet Exchanges are SwissIX, Equinix Zurich IX and CERN Internet Exchange (CIXP), where we each have a presence.
We are redundantly connected to SwissIX with 100Gbit/s to their switch at the Interxion Glattbrugg data center as well as to the one at the Equinix Zurich data center, also with 100Gbit/s.
At the Equinix Zurich location we are also connected to the Equinix Zurich IX switch with 10G.
For geographical redundancy, there is also the CERN Internet Exchange in Geneva, where we have a 10G connection to the switch fabric, although not many peers are present there.
Since Switzerland is relatively small, and only a very limited set of international networks have a presence on Swiss Internet exchanges, we also have connections to the two largest Internet exchange points in Europe, 30Gbit/s to Amsterdam (AMS-IX) and 100Gbit/s to Frankfurt (DE-CIX).
For peers with whom we exchange larger volumes of traffic (usually above 1Gbit/s), notably large content and cloud providers, we have separate direct physical fiber links between our routers and theirs, bypassing the central switch fabric. This saves capacity on the shared link to the switch fabric.
Swisscom, Liberty Global (sunrise/UPC), Microsoft, Google, Amazon and Facebook are among the larger peers we have direct connections to, so-called private network interconnects (PNI).
Of all our commercial Internet traffic volume, on average about 90% is handled directly over peerings as described above. For the rest, we rely on multiple Transit providers for redundancy.
Currently we have a 10G transit connection to Arelion (formerly Telia) in Zurich, 10G connections to Lumen in Geneva and Basel, and a 10G connection to Cogent in Basel; connections to GTT are planned, with 100G in Geneva and 10G in Zurich.
Apart from the obvious redundancy, having multiple transit providers gives us the possibility to steer traffic over different paths and work around problems affecting certain prefixes.
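In BGP terms, such steering is typically done by adjusting the local preference on routes learned from one provider, since the best-path selection prefers the highest local preference first. A minimal sketch of that mechanism (provider names match the ones above, but the prefix and values are made up for illustration):

```python
# Sketch of steering traffic between transit providers via BGP local preference.
# The prefix (RFC 5737 range) and local-pref values are illustrative only.

routes = [
    {"prefix": "203.0.113.0/24", "via": "Arelion", "local_pref": 100},
    {"prefix": "203.0.113.0/24", "via": "Cogent",  "local_pref": 100},
]

def best_path(routes, prefix):
    """Pick the best route for a prefix; BGP prefers the highest local-pref first."""
    candidates = [r for r in routes if r["prefix"] == prefix]
    return max(candidates, key=lambda r: r["local_pref"])

# Suppose the path via Arelion has a problem for this prefix: raising the
# local preference on the Cogent-learned route shifts outbound traffic there.
routes[1]["local_pref"] = 200
print(best_path(routes, "203.0.113.0/24")["via"])  # → Cogent
```

Real routers apply several tie-breakers after local preference (AS-path length, MED, and so on), but for deliberately overriding the default path, local preference is the usual knob.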
In case you have any questions about our external connectivity, leave a comment and I’m happy to answer them.
2 thoughts on “External Internet Connectivity of SWITCHlan”
Thank you very much for this exciting contribution. I have a question: How are the connections from SWITCH to AMS-IX and DE-CIX realized?
Excellent question. AMS-IX and DE-CIX connections are both realized as simple remote IX setups, with a leased L2 link from a router in our backbone in Switzerland directly to the remote switch fabric, without any equipment abroad.
AMS-IX is connected to our router swiCE3 via a so-called GEANTplus link, which is basically an EoMPLS tunnel from the GEANT Geneva router to the GEANT Amsterdam router. On one end it terminates as a subinterface on our primary 200G access interface to GEANT at CERN in Geneva; on the other end, the GEANT Amsterdam router has a bundle of three 10G links to the AMS-IX switch fabric. BGP configurations for peers at AMS-IX are then done on our swiCE3 router.
For DE-CIX, we have leased a 100G L2 link from DFN running from IWB Basel to the DE-CIX switch in Frankfurt over their DWDM transport system. On our side it terminates at IWB Basel on our router swiBA3, where the BGP sessions towards the peers are also configured.