There are many forms of AI on the market, ranging from AI embedded in smartphones to purpose-built AI in the cloud. While all of them affect network traffic, AI’s integration into enterprise workloads will have the most significant impact on network design and demand going forward.
AI and machine learning still comprise just a small portion of the cloud data center’s compute infrastructure today, but they drive a fundamentally different type of application traffic and require different network architectures. Search and social media were the first to adopt AI at scale, but the bigger change will come when enterprise workloads, especially those running at IaaS providers, incorporate AI into their applications. That shift will change traffic patterns and drive another transformation of the network. It’s not yet clear exactly what that transformation will look like, but the network will play a critical role in the infusion of AI into the cloud.
AI will play a role in all successful companies in the future. Its influence will range from the early stages of product development and design to most day-to-day business activities. Engineering activities like CAD design or pharmaceutical design will be forever changed, with the equivalent of hundreds of years of product development potentially occurring overnight.
AI will impact almost all verticals as well. In the data center, servers will evolve to incorporate AI. Today, most AI is still built with basic components and isolated from the rest of the network. However, optimized, purpose-built AI platforms will become more common every year; one only has to look at the annual OCP announcements to see how rapidly AI platforms are evolving. Instead of GPUs slotted into PCIe slots, we’ll get purpose-built, optimized AI server designs. Google’s public demonstrations give a glimpse of what AI compute can look like, and it does not resemble standard racks of servers.
Turning to the network itself, AI exposes it to a very different type of workload, much as server virtualization did about a decade ago. AI traffic will be far less predictable and will come with significantly higher bandwidth requirements. How AI nodes connect to the network and utilize their ports differs markedly from today’s workloads. It’s not yet clear whether AI will require a separate network, as HPC does, or simply larger pipes to each node.
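As a rough illustration of why the bandwidth profile is so different, the sketch below estimates the gradient traffic a single node generates in data-parallel training with a ring all-reduce. The model size, precision, node count, and step time are hypothetical assumptions chosen only to show the order of magnitude, not measurements of any particular platform.

```python
# Back-of-envelope estimate of per-node network bandwidth for data-parallel
# training with ring all-reduce. All figures are illustrative assumptions.

def allreduce_bytes_per_step(param_count, bytes_per_param, num_nodes):
    """Bytes each node sends per training step in a ring all-reduce.

    A ring all-reduce moves roughly 2 * (N - 1) / N times the gradient
    size per node, which approaches 2x the gradient size for large N.
    """
    gradient_bytes = param_count * bytes_per_param
    return 2 * (num_nodes - 1) / num_nodes * gradient_bytes

# Hypothetical workload: 1 billion parameters, FP16 gradients,
# 32 nodes, one training step every 300 ms.
params = 1_000_000_000
bytes_per_param = 2          # FP16
nodes = 32
step_time_s = 0.3

bytes_per_step = allreduce_bytes_per_step(params, bytes_per_param, nodes)
gbit_per_s = bytes_per_step * 8 / step_time_s / 1e9

print(f"~{gbit_per_s:.0f} Gbit/s of sustained gradient traffic per node")
# ~103 Gbit/s -- already saturating a 100 Gbit/s access port, before any
# input data, checkpointing, or storage traffic is counted.
```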
Shipments of 100 Gbit/s server access ports and 400 Gbit/s aggregation/core ports will benefit from the increased use of AI in the cloud. DCI will also see significant growth as data sets from consumers, IoT, and enterprises move to the cloud. It’s also likely that machines themselves will help design and operate networks, as their complexity becomes too much for humans to manage. (Google has already hinted at this.)
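To see how those access and aggregation speeds relate, here is a simple, hypothetical leaf/spine sizing calculation. The per-rack server count and the 1:1 oversubscription target are assumptions for illustration, not a recommendation for any specific fabric.

```python
# Hypothetical leaf/spine sizing: how many 400 Gbit/s uplinks a leaf switch
# needs to carry 100 Gbit/s AI server ports without oversubscription.
# All port counts and the 1:1 target below are illustrative assumptions.

import math

servers_per_rack = 32          # assumed AI nodes per rack
access_speed_gbps = 100        # server-facing port speed
uplink_speed_gbps = 400        # leaf-to-spine port speed
oversubscription = 1.0         # 1:1 -- AI/HPC-style fabrics tend to avoid oversubscription

downlink_capacity = servers_per_rack * access_speed_gbps
uplinks_needed = math.ceil(downlink_capacity / (uplink_speed_gbps * oversubscription))

print(f"Downlink capacity per leaf: {downlink_capacity} Gbit/s")
print(f"400G uplinks needed per leaf at 1:1: {uplinks_needed}")
# 32 x 100G = 3.2 Tbit/s of downlink, i.e. 8 x 400G uplinks per leaf --
# every additional AI rack pulls through a matching block of 400G
# aggregation ports, which is why access and core speeds scale together.
```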
With AI sprinkled across the cloud, edge devices, and the network itself, entirely new applications and use cases are likely to emerge. Overall this is a good thing for data center networking, as it adds a new growth driver to demand.