Data centers need better interconnections - TvTechnology

Given the progress in technology that has occurred in the 14 years since GigE, the search is on to find the right formula.

Cloud computing is used (and hyped) by virtually everyone — from large broadcasters to small and not-so-small content providers. The data centers that run these increasingly popular, remotely located signal processing operations (“clouds”) have heretofore been limited by the use of optical connections designed and optimized for legacy telecom applications.

Now, as cloud computing matures, data centers are looking for a better kind of interconnect. With today’s technology, the cheapest way of interconnecting servers via switches is copper cable-based Gigabit Ethernet. The lowest-cost higher-speed option is an aggregation of 10Gb/s links delivered via optics (QSFP). In between lies a wide gap that those who operate and run data centers are desperately trying to bridge.

There must be a better way, according to an essay written by Jim Theodoras, senior director of technical marketing at ADVA Optical Networking, a company that works on optical and Ethernet transport products.

There have been many proposed solutions, Theodoras noted. One is silicon photonics, a technology that allows a single laser to drive a parallel interconnect. Since the laser diode is still a large portion of an optical transceiver’s cost, the fewer lasers, the lower the cost.

That was the promise of Cisco’s acquisition of Lightwire, which was supposed to usher in a new age of low-cost silicon-photonic optical interconnects. Lightwire was reckoned to have very low-cost optical transceivers running 100GbE. Instead, “Cisco launched a proprietary module that is neither small, low power nor cheap,” Theodoras said.

Another possibility was Open Compute, an effort that leveraged technologies developed for LightPeak before it morphed into copper-based Thunderbolt. Yet, digging deeper, the optical specification released alongside it had little to no technical detail, and rumors quickly spread that the technology was not yet ready for prime time. Theodoras said the jury is still out on Thunderbolt.

Another is the resurgence of interest in on-board optics (OBO). After two decades of pluggable-optics development, the supposed answer to all interconnect woes is now to permanently fix the optoelectronics on the host board and run fibers directly to the front faceplate, Theodoras said. “Optics have a higher failure rate than electronics, and with OBO a single laser failure means the entire board must get replaced. The optics also tend to be the most expensive part of a line card, and OBO forces a customer to pay for all the connections up front, rather than buying optical modules as needed,” he said.

Pluggable optics also allow different cable lengths to be installed, so the link from top of rack to server can be a 66ft (20m) variant, while the link to the end of row can be a 330ft (100m) optic, Theodoras said.
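The economic tradeoff Theodoras describes — paying for every OBO port’s optics up front versus buying pluggable modules only for the links actually in service — can be sketched with some rough arithmetic. The port counts and prices below are purely illustrative assumptions, not figures from the article:

```python
# Illustrative sketch of the OBO vs. pluggable-optics cost tradeoff.
# All prices and port counts here are hypothetical assumptions.

OBO_COST_PER_PORT = 400       # assumed cost of each soldered-down optical engine
PLUGGABLE_MODULE_COST = 500   # assumed cost per pluggable module (higher per port)
PORTS_PER_CARD = 32           # assumed line-card port count

def obo_cost(ports_used):
    # OBO: the optics for every port are fixed to the board,
    # so the customer pays for all of them regardless of use.
    return OBO_COST_PER_PORT * PORTS_PER_CARD

def pluggable_cost(ports_used):
    # Pluggables: modules are purchased only as links are lit.
    return PLUGGABLE_MODULE_COST * ports_used

for used in (8, 16, 32):
    print(f"{used} ports lit: OBO ${obo_cost(used)}, "
          f"pluggable ${pluggable_cost(used)}")
```

Under these assumed numbers, a partially populated card strongly favors pluggables, and OBO only wins once nearly every port is in use — which is the up-front-payment objection quoted above.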

All the early attempts are near misses, at best, Theodoras wrote. “In the meantime, data centers keep installing more and more 1GigE copper links, waiting on the first vendor to get it right,” he said. “Given the leaps in technology that have occurred in the 14 years since Gigabit Ethernet, the search is on to find the right formula.”