Earlier this month, I spent several days locked away in a hi-tech lab facility in a secluded German village with some of Europe's leading optical engineers. The focus of this gathering was the development of terabit networks. Mention the topic to some within the networking industry and you will be greeted with raised eyebrows and furrowed brows. This is especially true of IT teams who are continually trying to meet growing bandwidth demands with shrinking budgets.
However, there are some in the industry who are already looking beyond 100G, even beyond 400G, to consider the possibility of actual terabit networks and the impact this technology could have. Take a look at any of the large-scale Research and Education (R&E) projects currently in development and it becomes readily apparent where this technology is needed. In fact, some within the R&E sector would contend that current IT infrastructure is one of the critical factors limiting faster progress.
To understand the kind of figures we're talking about here, let's take a look at the recently announced DOME project, which is seeking to build a radio telescope capable of looking back 13 billion years to the creation of the universe. A joint effort between IBM and ASTRON, the Netherlands Institute for Radio Astronomy, the project is expected to generate up to 1,500 petabytes of data per year. To put this into perspective, CERN currently produces around 15 petabytes per year.
Imagine the networking power required to transport such massive amounts of data. In a single day, the DOME project's telescope is expected to collect over one exabyte of raw data, more than the entire internet carries in a day. Take a look at the infographic below for further figures on the magnitude of what's being undertaken. The numbers are simply astronomical.
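To put a rough number on that networking power, here is a quick back-of-envelope sketch of the sustained line rate those volumes imply, assuming decimal units, a perfectly even flow and zero protocol overhead (purely illustrative, not a network design):

```python
# Back-of-envelope only: sustained line rate implied by the volumes quoted above.
# Assumes decimal units (1PB = 1e15 bytes, 1EB = 1e18 bytes), an even flow
# around the clock and no protocol overhead.

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

def sustained_gbps(total_bytes: float, seconds: float) -> float:
    """Average rate in Gbit/s needed to move total_bytes within seconds."""
    return total_bytes * 8 / seconds / 1e9

print(f"DOME raw data, 1EB per day    : {sustained_gbps(1e18, SECONDS_PER_DAY):>9,.1f} Gbit/s")
print(f"DOME output, 1,500PB per year : {sustained_gbps(1.5e18, SECONDS_PER_YEAR):>9,.1f} Gbit/s")
print(f"CERN today, 15PB per year     : {sustained_gbps(15e15, SECONDS_PER_YEAR):>9,.1f} Gbit/s")
```

Even spread perfectly across the day, the exabyte-a-day figure alone works out at over 90Tbit/s of sustained throughput, more than ninety of the terabit channels this post is about, while CERN's current annual output averages a comparatively modest few gigabits per second.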
Even if we look at projects that are actively producing data now, we can see the need for bigger networks. As I mentioned earlier, CERN currently produces about 15 petabytes annually and is in the process of building its own cloud infrastructure to help compute and understand this data. The Helix Nebula Science Cloud is a Europe-only infrastructure that aims to give CERN more computing independence. At the moment, CERN relies on 150 publicly owned data centers to process the 6GB of data it produces every second.
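Another rough illustration, taking that 6GB-per-second figure at face value and again ignoring protocol overhead, is how long a single day of that output would take to move over one link at different line rates:

```python
# Rough sketch: time to move one day of output at a steady 6GB/s (about 518TB)
# over a single link at different line rates. Decimal units, no overhead.

BYTES_PER_DAY = 6e9 * 86_400

for label, bits_per_second in [("10G", 10e9), ("100G", 100e9), ("1T", 1e12)]:
    hours = BYTES_PER_DAY * 8 / bits_per_second / 3600
    print(f"{label:>4} link: {hours:6.1f} hours")
```

Over a single 10G link, one day's output would take the best part of five days to shift; even at 100G it is roughly half a day, while a terabit pipe brings it down to about an hour.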
These are only two examples of many R&E projects that have significant data transport needs. We could easily explore other enterprise sectors and find similar demands, especially as the drive to access and analyse big data continues to gather momentum.
The key question is: where do we go from here? Yes, 100G is maturing and will become the common commercial denominator in the optical networking industry. After that, the next logical step is to achieve higher speeds through the creation of super-channels, i.e. the bonding of multiple 100G wavelengths. For true 400G line systems we've seen that, at least in theory, we can leverage existing 100G technology.
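A quick sketch shows what that bonding means in spectral terms, assuming for illustration that each 100G carrier occupies a conventional 50GHz grid slot (roughly 2bit/s/Hz); flexible-grid systems can pack the carriers more tightly than this:

```python
# Sketch: spectrum occupied by a super-channel built from bonded 100G carriers,
# assuming each carrier sits in a conventional 50GHz grid slot.

import math

CARRIER_RATE_GBPS = 100
SLOT_WIDTH_GHZ = 50

def superchannel(total_gbps: int) -> tuple[int, int]:
    """Return (number of 100G carriers, total spectrum in GHz)."""
    carriers = math.ceil(total_gbps / CARRIER_RATE_GBPS)
    return carriers, carriers * SLOT_WIDTH_GHZ

for rate in (400, 1000):
    carriers, spectrum = superchannel(rate)
    print(f"{rate}G super-channel: {carriers} x 100G carriers, ~{spectrum} GHz of spectrum")
```

With the C-band offering only a few terahertz of usable spectrum in total, burning roughly half a terahertz per terabit super-channel is workable but hardly elegant, which is where spectral efficiency comes in.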
However, the leap to a native 1Tbit/s line speed is of a different magnitude altogether. Looking at the tools we have today, it's clear that further advances in photonic and electronic integration are necessary, especially with regard to spectral efficiency. What's needed is a more disruptive solution. Without it, terabit networking will remain a hi-tech lab experiment.
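Some illustrative arithmetic makes the point. The symbol rates below are for a single carrier at a 1Tbit/s net rate, before any FEC overhead, with bits per symbol assuming dual-polarization versions of each modulation format; the figures are a sketch, not a system proposal:

```python
# Illustrative only: symbol rate a single carrier needs for a native 1Tbit/s
# net line rate, before FEC overhead. Bits per symbol assume dual polarization.

FORMATS = {"PM-QPSK": 4, "PM-16QAM": 8, "PM-64QAM": 12}
TARGET_GBPS = 1000  # 1Tbit/s net payload

for name, bits_per_symbol in FORMATS.items():
    gbaud = TARGET_GBPS / bits_per_symbol
    print(f"{name:>9}: {gbaud:6.1f} Gbaud per carrier")
```

Even the densest of those formats calls for symbol rates far beyond what today's converters and modulators comfortably deliver on a single carrier, and pushing to denser constellations costs reach, which is exactly why incremental tuning alone won't get us there.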
Are you seeing demand for 1Tbit/s networks? Is the supporting technology developing at the speed we need? Do you believe the era of terabit networking is approaching? Let me know what you think.