Cisco is tackling the growing infrastructure bottleneck in AI with its new 8223 routing system, designed to connect distributed AI workloads across multiple data centers. Powering the system is the Silicon One P200 chip, which processes 51.2 terabits per second. The launch positions Cisco alongside Broadcom and Nvidia in the race to supply large-scale AI infrastructure.
Modern AI applications, which often rely on thousands of processors, generate considerable heat and consume vast amounts of power. This strains existing data center resources, pushing the limits of space, power, and cooling. Expanding across multiple data centers becomes a necessity, but the connections between those sites can quickly become a major performance bottleneck.
Traditional routers struggle to handle the bursty and unpredictable traffic patterns typical of AI workloads. Cisco’s 8223 addresses this with a compact design housing 64 ports of 800-gigabit connectivity, capable of processing over 20 billion packets per second. A key feature is its deep buffering capacity, which absorbs traffic surges and prevents network congestion. The system prioritizes power efficiency and supports 800G coherent optics for connections extending up to 1,000 kilometers.
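The value of deep buffering can be illustrated with a toy queue model (this is an illustrative sketch, not Cisco code; the capacities, arrival pattern, and drain rate are invented for the example). AI training traffic tends to arrive in large synchronized bursts, such as when thousands of GPUs exchange gradients at once. A shallow buffer overflows and drops packets during each burst, while a deeper buffer absorbs the surge and drains it between bursts:

```python
# Toy model of buffering under bursty traffic (illustrative only).
# A queue drains at a fixed line rate each tick; arrivals come in bursts.

def simulate(buffer_capacity, arrivals, drain_rate):
    """Return the number of packets dropped when arrivals[t] packets
    hit a queue of the given capacity, drained at drain_rate per tick."""
    queued = 0
    dropped = 0
    for burst in arrivals:
        queued += burst
        if queued > buffer_capacity:
            dropped += queued - buffer_capacity  # overflow is lost
            queued = buffer_capacity
        queued = max(0, queued - drain_rate)     # line-rate drain
    return dropped

# Idle ticks punctuated by large synchronized bursts.
arrivals = [0, 0, 100, 0, 0, 100, 0, 0, 100, 0]

shallow_drops = simulate(buffer_capacity=40, arrivals=arrivals, drain_rate=30)
deep_drops = simulate(buffer_capacity=200, arrivals=arrivals, drain_rate=30)
print(shallow_drops, deep_drops)  # → 180 0
```

With the same offered load and drain rate, the shallow queue loses packets on every burst while the deep queue loses none; the trade-off, not modeled here, is the added queuing latency a deep buffer can introduce.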
Microsoft and Alibaba Cloud are among the first to adopt this new technology. The P200 chip is fully programmable, allowing organizations to adapt the silicon to support emerging protocols and evolving AI demands. It also includes line-rate encryption and integrates with Cisco’s observability platforms for comprehensive network monitoring.
The 8223 initially ships with support for the open-source SONiC network operating system, with IOS XR support planned for future releases. This flexibility in deployment options may prove a crucial differentiator in a competitive market.
