FEATURE

Web-scale trend dominates OFC discussions

Given that the term web-scale was coined barely two years ago, it is surprising how quickly it has come to dominate the conversation in the optical networking industry. The requirements of web-scale operators like Facebook, Google and Microsoft are increasingly front and centre at events like OFC 2015, which was held in Los Angeles, California, in March.

Jeff Cox, senior director of network architecture for Microsoft Azure Global Network Services, kicked off the OSA Executive Forum (a one-day seminar co-located with OFC) with a presentation on ‘Optical under the cloud’, which perhaps would be more accurately called ‘What Microsoft wants’.

Cox said he chose the title of his presentation carefully. The optical industry isn’t under a cloud; quite the opposite. His reference is to web-scale operators building mega data centres that require incredibly high-volume optical capabilities. Optics is crucial to this business, but the industry isn’t providing everything that Microsoft desires – at least not yet.

Cloud requirements are very different from traditional telecom networks, as he explained. Optical hardware must be high-speed and purchased in huge volume, but is subject to shorter lifecycles as new technology generations are regularly required. The average lifecycle of data centre equipment – servers, switches and the optics used to connect them – is currently just two to three years.

Delegates enjoying the OFC exhibition

With such massive volumes and short product cycles, every penny counts. Microsoft needs to optimise the cost of optical hardware by pushing for increasing levels of integration and stripping out unnecessary features to limit complexity. Features that are commonplace in the telecom world, such as backwards compatibility, simply aren’t required in mega data centres, where an upgrade often means wholesale replacement.

Microsoft chose the event to launch the Consortium for On Board Optics (COBO), a group that will collaborate on finding practical ways to move the optics from the faceplate onto the board. The partners, which also include Cisco, Dell, Intel and Broadcom, say this development will allow them to build denser, more power-efficient networking equipment.

Although on-board optics such as the 300-pin module have been around for years, it is generally accepted that pluggable modules are superior in the data centre environment. However, as faceplate density increases and space is increasingly at a premium, that looks set to change. ‘You can get a lot of optimisation if you move the optics onto the board,’ Cox noted.

Putting modules closer to the electronic circuits will reduce the power required to send signals between the two across high-speed electrical connections. There’s more space on the board than on the faceplate, which will allow vendors to increase the bandwidth density of their equipment while also making it easier to keep everything cool.

‘The industry has not standardised this,’ Cox noted. ‘There are different ways of putting optics on the board. Microsoft chose COBO. The intention is to create a standard that everyone can use and create an ecosystem in much the same way as we have around pluggable optics.’

Also on Microsoft’s wish list is the Open Line System – a high-capacity optical transport platform tailored to its specific need for data centre interconnection. Such a system would eliminate the unwanted features commonly found in metro and long-haul optical transport systems, and in doing so would reduce space, power consumption requirements and, of course, cost.

Optical features for the chop include bandwidth on demand and electrical OTU switching. ‘Once we turn on a link, we don’t tend to move it around,’ Cox noted. Also, the optical hardware doesn’t need to duplicate features already being provided at the packet layer, including protection, restoration, and packet aggregation. In fact, demarcation between the optical and packet layers isn’t necessary, since Microsoft views the network as a single infrastructure.

So what does such a platform need? The system itself would include the basic network elements required to create a link between data centres, such as amplifiers, gain equalisers and possibly reconfigurable optical add-drop multiplexers (ROADMs), all optimised for coherent transmission. It would also include open application programming interfaces (APIs) such as RESTCONF to expose control information to management systems.
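As a rough illustration only, a minimal sketch of what a RESTCONF-style (RFC 8040) request from a management system to such a line system might look like is shown below. The host name and YANG module path are hypothetical placeholders, not details taken from Microsoft’s proposal.

```python
# Sketch: how a management system might address a resource on an open
# line system via RESTCONF (RFC 8040). The base URL and YANG path
# ("example-amplifiers:...") are invented placeholders for illustration.

BASE = "https://ols.example.net/restconf"

def data_url(path: str) -> str:
    """Build a RESTCONF /data URL for a YANG resource path."""
    return f"{BASE}/data/{path}"

# RESTCONF payloads use YANG-encoded JSON media types.
HEADERS = {"Accept": "application/yang-data+json"}

# e.g. read the gain setting of the first amplifier on a link
url = data_url("example-amplifiers:amplifiers/amplifier=1/gain")
print(url)
```

A GET on such a URL (with the headers above) would return the amplifier’s gain as YANG-modelled JSON, which is what lets any vendor’s controller talk to any vendor’s hardware.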

Another requirement is interoperability. Microsoft wants to use coherent transceivers from any vendor on the line (transmission) side to cover distances up to thousands of kilometres. With an eye towards interoperability between systems themselves, the software giant would also like standardised modelling of network elements and transceivers.

‘We have not formed a consortium around this. Maybe we will,’ Cox teased.

Elsewhere at the show, other announcements tapped directly into the web-scale trend. Calient Technologies, for example, announced a new architecture that would allow data centre operators to virtualise their storage capacity by connecting servers and switches via low-latency optical circuit switches. The new element is the LightConnect Fabric Manager software, which manages that connectivity at the optical layer.

However, suppliers were urged not to be blinkered by the requirements of the web-scale providers. ‘There are still plenty of traditional data centres out there,’ noted Mitch Fields, product strategy and architecture at Avago Technologies. ‘Maybe 20 per cent are mega data centre operators, which means that 80 per cent are still traditional operators. We need to address the optics requirements of both trends.’
