The silicon-rich AI race: how Cisco’s G300 puts networking at the center of compute

By Cygnus | 11 Feb 2026

Cisco's Silicon One G300 aims to reduce AI data center bottlenecks by shifting focus from GPUs to high-performance networking infrastructure. (AI Generated)

Summary

As the AI infrastructure race accelerates, attention is shifting beyond GPUs to the networks that connect them. At Cisco Live EMEA this week, Cisco Systems unveiled the Silicon One G300, a 102.4 Tbps switch ASIC designed to reduce bottlenecks in large-scale AI clusters. With hyperscalers projected to spend hundreds of billions of dollars on AI infrastructure this decade, Cisco is positioning networking silicon—not just compute—as the next competitive frontier.

Amsterdam, Feb 11 — In the 2026 AI buildout cycle, performance is no longer defined solely by the number of GPUs inside a cluster. Increasingly, it is defined by how efficiently those GPUs communicate.

At Cisco Live EMEA, Cisco Systems introduced the Silicon One G300, a 102.4 terabit-per-second switch ASIC aimed squarely at AI training environments. The company argues that as models grow larger and more distributed, network congestion—not compute—has become one of the primary constraints on performance.

The message is clear: in modern AI systems, the network has become part of the compute fabric itself.

Solving the “straggler” problem

In distributed AI training, workloads are synchronized across thousands of GPUs. When a single packet is delayed or dropped, the entire job can slow—a phenomenon commonly referred to as the “straggler problem.”
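The effect is easy to see in a toy model: in synchronous training, every step waits for the slowest worker, so one delayed node gates the entire cluster. The sketch below uses made-up timings purely for illustration.

```python
import random

# Toy illustration of the "straggler" effect in synchronous distributed
# training. Each step's duration is the maximum across all workers, so a
# single delayed worker slows every other GPU in the job.
random.seed(0)

def step_time(num_workers, straggler_delay_ms=0.0):
    """Per-step time is the max of all workers' compute + network times."""
    times = [10.0 + random.uniform(0.0, 0.5) for _ in range(num_workers)]
    if straggler_delay_ms:
        times[0] += straggler_delay_ms  # one worker hit by a late/dropped packet
    return max(times)

baseline = step_time(1024)
with_straggler = step_time(1024, straggler_delay_ms=50.0)
print(f"baseline step:  {baseline:.1f} ms")
print(f"with straggler: {with_straggler:.1f} ms")
```

A 50 ms hiccup on one worker out of 1,024 roughly quintuples the step time for all of them, which is why networking vendors treat tail latency, not average latency, as the metric that matters.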

Cisco says the G300 addresses this through what it calls Intelligent Collective Networking, combining hardware-level buffering and congestion-aware traffic management.

According to Cisco:

  • Fully shared 252 MB packet buffer: The chip is designed to absorb large bursts of east-west traffic common in AI workloads. Cisco says this allows it to handle data surges more effectively than traditional fixed-buffer architectures.
  • Path-based load balancing: The system monitors link utilization and dynamically reroutes traffic in microseconds. Cisco says this improves link utilization and reduces congestion compared with software-based tuning methods.
  • Topology efficiency: In Cisco’s reference architecture for very large AI clusters, the company estimates that required switch counts could be significantly reduced compared with prior-generation designs, potentially lowering cabling complexity and latency.

Independent benchmarking data was not disclosed at launch, and performance figures cited by Cisco are based on internal testing and modeling.
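Cisco has not published how its load balancing works internally, but the general idea of congestion-aware path selection can be sketched simply: instead of hashing flows blindly onto equal-cost paths, steer each new flow toward the least-utilized link. The names and values below are hypothetical.

```python
# A toy sketch (not Cisco's implementation) of congestion-aware,
# path-based load balancing: among equal-cost paths, place each new
# flow on the least-utilized link rather than hashing it blindly.

def pick_path(link_utilization: dict) -> str:
    """Return the candidate path with the lowest current utilization."""
    return min(link_utilization, key=link_utilization.get)

# Hypothetical utilization readings for three spine links.
paths = {"spine-1": 0.82, "spine-2": 0.35, "spine-3": 0.61}
chosen = pick_path(paths)
print(chosen)  # the least-loaded spine
```

Hardware implementations do this per-packet or per-flowlet in microseconds, with telemetry feeding utilization back continuously; the point of the sketch is only the selection rule.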

A new front in the hardware war

The AI infrastructure market is increasingly defined by three networking strategies:

| Feature | Cisco (Silicon One G300) | Broadcom (Tomahawk 6) | Nvidia (Spectrum-X) |
| --- | --- | --- | --- |
| Strategy | Hybrid: chips plus integrated systems (Nexus platforms) | Merchant silicon model supplying multiple OEMs | Vertically integrated GPU + networking stack |
| Throughput | 102.4 Tbps (max theoretical) | 102.4 Tbps (max theoretical) | 102.4 Tbps class |
| Differentiation | P4 programmability; post-deployment flexibility | Scale and ecosystem reach | Deep CUDA and GPU integration |

Cisco’s approach combines custom silicon with full-stack systems integration, while Broadcom remains a leading supplier in the merchant switch silicon market. Nvidia, meanwhile, extends its vertical integration strategy from GPUs into networking via Spectrum and InfiniBand technologies.

Cisco is betting that large AI operators want flexibility and Ethernet-based interoperability rather than proprietary fabrics.

Energy and density: the next constraint

As AI clusters scale toward power footprints measured in the hundreds of megawatts, power density and cooling are emerging as limiting factors.

The G300 will power new Cisco Nexus 9000 and 8000 platforms, which the company says are designed for liquid-cooled AI data centers.

Cisco claims:

  • Up to 70% improvement in energy efficiency per bit compared with certain prior-generation systems.
  • Reduced power draw through 800G Linear Pluggable Optics (LPO) and 1.6T OSFP modules.
  • Higher bandwidth density, enabling fewer systems to deliver equivalent aggregate throughput.

As with performance claims, these efficiency figures are based on Cisco’s internal comparisons.
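"Energy per bit" itself is straightforward arithmetic: system power divided by bits moved per second. The values below are illustrative assumptions, not Cisco figures, but they show the units involved.

```python
# Back-of-envelope arithmetic (illustrative values, NOT Cisco figures):
# energy per bit = switch power draw / sustained throughput.

power_watts = 2_000                  # assumed system power draw
throughput_tbps = 102.4              # G300-class throughput
bits_per_second = throughput_tbps * 1e12

joules_per_bit = power_watts / bits_per_second
picojoules_per_bit = joules_per_bit * 1e12
print(f"{picojoules_per_bit:.2f} pJ/bit")
```

At these assumed numbers the result is about 19.5 pJ/bit; a claimed 70% efficiency-per-bit improvement would mean moving the same traffic at roughly a third of the energy, which compounds quickly across hundreds of switches.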

The rise of “AgenticOps”

Managing AI clusters with tens of thousands of nodes increasingly requires automation.

Cisco introduced updates to its AgenticOps platform, which uses telemetry data from the switching fabric to create a digital model of the network. The company says the system can:

  • Detect early signs of optical module degradation.
  • Automatically reroute traffic before packet loss impacts training jobs.
  • Reduce mean time to repair (MTTR) through automated diagnostics.

While automation in networking is not new, Cisco is positioning AI-driven operations as essential infrastructure for hyperscale AI deployments.

Ethernet vs. InfiniBand: the standards debate

A key industry divide remains the networking fabric itself.

Nvidia continues to push InfiniBand in high-performance AI training environments, while Cisco and Broadcom advocate for Ethernet-based architectures, including standards backed by the Ultra Ethernet Consortium (UEC).

Cisco argues Ethernet offers greater openness, ecosystem interoperability, and long-term cost efficiency, particularly as AI networking scales beyond single-vendor deployments.

Availability

Cisco said the Silicon One G300, along with G300-powered Nexus systems and associated optics, is scheduled to ship later in 2026.

Why this matters

For the past three years, the AI arms race has centered on GPUs. But as model sizes expand and clusters scale into tens of thousands of accelerators, networking is emerging as a critical performance and cost lever.

In large AI deployments, even small inefficiencies in data movement can leave expensive GPUs underutilized. Improving network throughput, reducing congestion, and increasing energy efficiency can directly impact job completion times, infrastructure costs, and overall return on investment.

Cisco’s G300 launch underscores a broader industry shift: competitive advantage in AI infrastructure is no longer defined solely by compute power, but by how seamlessly compute, networking, and operations software function as an integrated system.

With hyperscalers and sovereign AI projects investing heavily in next-generation data centers, the battle for AI leadership is expanding beyond chipmakers into the networking layer that connects them.

FAQs

Q1: Why does faster job completion time matter?

In AI training, job completion time directly impacts infrastructure utilization. Even modest percentage improvements can translate into lower energy consumption and improved GPU efficiency.
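The scale effect is worth making concrete. The figures below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical back-of-envelope (numbers are illustrative, not from
# Cisco): GPU-hours freed by a modest job-completion-time improvement.

gpus = 32_000            # accelerators in the cluster
job_hours = 720          # a 30-day training run
improvement = 0.05       # 5% faster job completion

gpu_hours_saved = gpus * job_hours * improvement
print(f"GPU-hours freed per run: {gpu_hours_saved:,.0f}")
```

Under these assumptions, a 5% improvement frees over a million GPU-hours per run, capacity that can be reinvested in more experiments or lower energy bills.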

Q2: Is Cisco moving away from Ethernet for AI?

No. Cisco continues to invest in Ethernet-based AI fabrics, positioning them as scalable and standards-based alternatives to proprietary interconnects.

Q3: Who is the primary competitor?

Broadcom in merchant switching silicon and Nvidia in vertically integrated AI networking stacks.