Fine-tuning network design saves wavelength resources
Optimising the design of networks that use distance-adaptive transmission can save 50 per cent of the wavelength resources required by fixed-rate transmission, according to research to be presented at the upcoming Optical Fiber Communication Conference and Exhibition (OFC) in Los Angeles on 19–23 March.
A researcher from Nokia Bell Labs in Murray Hill has developed a mathematical model that could improve the flow of internet traffic generated by cloud computing by optimising the placement of data centres in conjunction with adopting distance-adaptive transmission technology.
In coherent optical systems, capacity and reach are antagonistic – increase one and you decrease the other. High-capacity, shorter-reach channels also occupy more spectrum. The latest transmission systems offer distance-adaptive transmission – also referred to as flex-coherent or flex-grid transmission – which optimises total system capacity by adjusting the spacing between channels to avoid leaving spectrum unused.
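The trade-off can be made concrete with a small sketch. The modulation formats, rates, and reach figures below are illustrative assumptions chosen for round numbers, not values from Guan's model or any specific product:

```python
# Illustrative capacity-reach table: higher-rate formats cover shorter
# distances. All figures are hypothetical, for illustration only.
REACH_TABLE = [
    # (modulation, Gbit/s per wavelength, max reach in km)
    ("16QAM", 200, 800),
    ("8QAM", 150, 1800),
    ("QPSK", 100, 3500),
]

def rate_for_distance(distance_km):
    """Pick the highest-rate format whose reach covers the path."""
    for modulation, rate, reach in REACH_TABLE:
        if distance_km <= reach:
            return modulation, rate
    raise ValueError("distance exceeds maximum reach; regeneration needed")

def wavelengths_needed(demand_gbps, distance_km, fixed_rate=None):
    """Wavelengths required to carry a demand over a given distance.

    With fixed_rate set, every path uses that rate (legacy behaviour);
    otherwise the rate adapts to the distance.
    """
    if fixed_rate is not None:
        rate = fixed_rate
    else:
        _, rate = rate_for_distance(distance_km)
    return -(-demand_gbps // rate)  # ceiling division

# A 600 km path carrying 400 Gbit/s: the adaptive system selects 16QAM
# and uses 2 wavelengths, while a fixed 100 Gbit/s system needs 4.
```

Under these toy numbers, shorter paths free up exactly the kind of headroom Guan describes: the same demand over the same fibre occupies half as many wavelengths when the rate is allowed to adapt.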
However, network design does not typically account for the distance that data must travel, even though shorter distances can support higher rates. As traffic grows in volume, researchers have become increasingly aware of the limitations of fixed-rate transmission. Experts estimate that the amount of data stored ‘in the cloud’, in remote data centres around the world, will quintuple in the next five years.
“The challenge for legacy systems that rely on fixed-rate transmission is that they lack flexibility,” said Dr. Kyle Guan, a research scientist at Nokia Bell Labs. “At shorter distances, it is possible to transmit data at much higher rates, but fixed-rate systems lack the capability to take advantage of that opportunity.”
Using the capabilities of modern distance-adaptive transmission systems, Guan set about building a mathematical model to determine the optimal layout of network infrastructure for data transfer between cloud data centres, and between data centres and end users.
“The question that I wanted to answer was how to design a network that would allow for the most efficient flow of data traffic,” said Guan. “Specifically, in a continent-wide system, what would be the most effective [set of] locations for data centres and how should bandwidth be apportioned? It quickly became apparent that my model would have to reflect not just the flow of traffic between data centres and end users, but also the flow of traffic between data centres.”
External industry research suggests that this second type of traffic, between the data centres, represents about one-third of total cloud traffic. It includes activities such as data backup and load balancing, whereby tasks are distributed across multiple servers to maximise application performance.
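A toy version of the kind of joint optimisation Guan describes can be sketched as a brute-force search over data-centre placements, costing each user-to-centre path by the wavelengths a distance-adaptive system would need. The topology, demands, and reach/rate table are invented for illustration, and inter-data-centre traffic is omitted for brevity:

```python
from itertools import combinations

# Candidate node positions (x, y) in km on a stylised continental map,
# and the traffic each node's users generate. All values are invented.
NODES = {"A": (0, 0), "B": (1000, 0), "C": (2000, 500), "D": (3000, 0)}
DEMANDS = {"A": 400, "B": 300, "C": 500, "D": 200}  # Gbit/s per node

RATE_BY_REACH = [(200, 800), (150, 1800), (100, 3500)]  # (Gbit/s, max km)

def distance(u, v):
    (x1, y1), (x2, y2) = NODES[u], NODES[v]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def wavelengths(demand, dist):
    """Wavelengths needed at the best rate the distance allows."""
    for rate, reach in RATE_BY_REACH:
        if dist <= reach:
            return -(-demand // rate)   # ceiling division
    return float("inf")                 # would need regeneration

def cost(centres):
    """Total wavelengths if each node is served by its nearest centre."""
    total = 0
    for node, demand in DEMANDS.items():
        d = min(distance(node, c) for c in centres)
        total += wavelengths(demand, d)
    return total

def best_placement(k):
    """Exhaustively search all k-subsets of nodes for the cheapest siting."""
    return min(combinations(NODES, k), key=cost)
```

A real continental-scale model would replace the exhaustive search with an optimisation formulation and add inter-centre flows, but the structure is the same: placement and bandwidth allocation are evaluated together, because moving a centre changes every path length and hence every achievable rate.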
“My preliminary results showed that in a continental-scale network with optimised data centre placement and bandwidth allocation, distance-adaptive transmission can use 50 per cent fewer wavelength resources, or light transmission and reception equipment, compared to fixed-rate transmission,” said Guan. “On a functional level, this could allow cloud service providers to significantly increase the volume of traffic supported on the existing fibre-optic network with the same wavelength resources.”
Guan recognises other important issues related to data centre placement. “Other important factors that have to be considered include the proximity of data centres to renewable sources of energy that can power them, and latency – the interval of time that passes from when an end user or data centre initiates an action to when it receives a response,” he said.
Future research will involve integrating these types of factors into his model so that he can run simulations that even more closely mirror the complexity of real-world conditions.