Near misses, at best, in data center interconnection

Since their inception, data centers have been hobbled by using optical interconnections designed and optimized for telecom applications. Yet, given their rapidly increasing slice of the optics pie, data center equipment vendors are wising up and demanding a solution that more closely fits their needs.

As a quick recap of the first part of this problem: the cheapest way of interconnecting servers via switches today is copper Gigabit Ethernet. The cheapest higher-speed connection is arguably an aggregation of 10-gigabit lanes delivered over optics (QSFP). And there is a wide cost chasm between the two that the industry is desperately trying to bridge.
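
To make the size of that chasm concrete, here is a back-of-the-envelope sketch in Python. The prices are purely illustrative placeholders, not vendor quotes; the point is simply that the metric that matters to a data center operator is cost per gigabit, not cost per port.

    # Illustrative cost-per-gigabit comparison (all prices are hypothetical placeholders).
    def cost_per_gbps(port_cost_usd, cable_cost_usd, speed_gbps):
        """Total link cost (two ports plus one cable) divided by link speed."""
        return (2 * port_cost_usd + cable_cost_usd) / speed_gbps

    # Copper Gigabit Ethernet: cheap ports, cheap cable, but only 1 Gbps.
    copper_1g = cost_per_gbps(port_cost_usd=10, cable_cost_usd=2, speed_gbps=1)

    # 10 GbE over optics (e.g., one QSFP lane): far more capacity, far pricier parts.
    optical_10g = cost_per_gbps(port_cost_usd=300, cable_cost_usd=50, speed_gbps=10)

    print(f"Copper 1 GbE:   ${copper_1g:.2f} per Gbps")
    print(f"Optical 10 GbE: ${optical_10g:.2f} per Gbps")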

All hail silicon photonics

Enter the silicon-photonics hype. Critics are quick to point out that the products from current silicon-photonics startups do not actually generate photons from their silicon, but those critics are missing the point. Silicon photonics is not hot because of vanilla-sky dreams of building optical circuits side by side with electronic circuits on 300mm silicon wafers. The reasoning is much more mundane.

Silicon photonics allows a single laser to be shared across a parallel interconnect. Since the laser diode still accounts for a large portion of optical transceiver cost, the fewer lasers required, the lower the cost.
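
A rough cost model shows why. Assume, purely for illustration, a per-lane bill of materials in which the laser diode dominates; sharing one laser across four parallel lanes, as silicon-photonic modulators allow, shrinks the bill accordingly. All numbers below are invented for the sketch.

    # Hypothetical transceiver bill-of-materials sketch: the only difference between
    # the two designs is how many lasers feed the four parallel 10G lanes.
    LASER_COST = 40      # assumed cost of one laser diode (illustrative)
    OTHER_PER_LANE = 15  # assumed modulator/driver/receiver cost per lane (illustrative)
    LANES = 4            # four 10G lanes in a 40G parallel interconnect

    conventional = LANES * (LASER_COST + OTHER_PER_LANE)    # one laser per lane
    silicon_photonic = LASER_COST + LANES * OTHER_PER_LANE  # one laser shared by all lanes

    print(f"One laser per lane: ${conventional}")
    print(f"One shared laser:   ${silicon_photonic}")
    print(f"Savings per module: ${conventional - silicon_photonic}")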

Cisco’s acquisition of Lightwire was supposed to usher in a new age of low-cost silicon-photonic optical interconnects. At last Cisco had made a shrewd move, analysts opined, as Lightwire had dirt-cheap optical transceivers running 100 GbE. So imagine the puzzled reaction when, earlier this year, Cisco instead launched a proprietary module that is neither small, low power, nor cheap.

Another ray of hope shone brightly at Open Compute, where silicon photonics was cited as the technology behind low-cost 100 GbE links carrying traffic between the open-sourced elements. Supposedly this effort leveraged technologies developed for LightPeak before it morphed into the copper Thunderbolt. Yet, digging deeper, the optical specification released at the same time offered little to no technical detail, and rumors quickly spread that the technology was not yet ready for prime time. The jury is still out on this optical link, and I await more than circumstantial evidence to substantiate the claims.

Taking a step back to an earlier era

An entirely different approach was recently announced by Arista Networks. Instead of silicon photonics, Arista is using old-school VCSEL parallel optics similar to those used in supercomputer clusters. This is surprising, given Arista founder Andy Bechtolsheim has not been shy about extolling the virtues of silicon photonics in data centers.

Arista boasts a linecard that can be reconfigured to support 144 10 GbE ports, 36 40 GbE ports, or 12 100 GbE ports, which is definitely a step in the right direction. Yet a closer look yields a few caveats. The links use 24-fiber ribbon cable, which is difficult to route through structured cabling. The link is proprietary, so it is really only useful between Arista products. And, finally, the optics are not pluggable but are what is known as on-board optics.
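
For context, the three configurations are not quite equivalent in capacity. A quick tally of aggregate bandwidth, using only the port counts Arista quotes, shows the 100 GbE mode gives up some throughput:

    # Aggregate bandwidth of the reconfigurable linecard in each of its three modes,
    # computed from the published port counts.
    modes = {
        "144 x 10 GbE": 144 * 10,
        "36 x 40 GbE":  36 * 40,
        "12 x 100 GbE": 12 * 100,
    }
    for mode, gbps in modes.items():
        print(f"{mode}: {gbps} Gbps aggregate")
    # 144 x 10 GbE: 1440 Gbps, 36 x 40 GbE: 1440 Gbps, 12 x 100 GbE: 1200 Gbps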

In the latest episode of “What’s Old Is New Again,” there has been a resurgence of interest in on-board optics (OBO). After two decades of pluggable-optics development, the supposed answer to all interconnect woes is now to permanently fix the optoelectronics to the host board and run fibers directly to the front faceplate.

Those not suffering from technology amnesia might remember there were reasons pluggable optics were invented in the first place. Optics have a higher failure rate than electronics, and with OBO a single laser failure means the entire board must be replaced. The optics also tend to be the most expensive part of a linecard, and OBO forces a customer to pay for every connection up front, rather than buying optical modules as needed.
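
A toy model of that purchasing argument, with deliberately made-up prices: on-board optics front-load the spend across every port on day one, while pluggables let a customer light ports, and pay for them, as they are needed.

    # Hypothetical comparison of up-front spend; all figures are illustrative only.
    PORTS_ON_BOARD = 36     # ports permanently populated on an OBO linecard
    MODULE_COST = 250       # assumed cost of one optical module or engine
    PORTS_LIT_DAY_ONE = 12  # ports the customer actually needs at installation

    obo_day_one = PORTS_ON_BOARD * MODULE_COST           # pay for every port up front
    pluggable_day_one = PORTS_LIT_DAY_ONE * MODULE_COST  # pay only for ports in use

    print(f"OBO spend at install:       ${obo_day_one}")
    print(f"Pluggable spend at install: ${pluggable_day_one}")
    print(f"Capital deferred:           ${obo_day_one - pluggable_day_one}")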

Pluggable optics also allow for different cable lengths to be installed, so the link from top of rack to server can be a 20-meter variant, while the link to the end of row can be a 100-meter optic.

While it is heartening to see industry attention finally shifting to data center interconnection needs, the early attempts are near misses, at best. In the meantime, data centers keep installing more and more 1 GigE copper links. I can’t really blame them, given the economic realities.

For now, I continue to wait for the first vendor to get it right. Given the leaps in technology that have occurred in the 14 years since Gigabit Ethernet arrived, surely someone is bound to find the right formula.

Jim Theodoras is senior director of technical marketing at ADVA Optical Networking, working on Optical+Ethernet transport products.
