Over the past several years, cable operators have seen their networks transform into the premier platform for transmitting data services to both residential and business customers. To accommodate the growth in services and transmission speeds, the networks have been divided into ever-smaller clusters of customers, forming independent service groups.
As network speeds have climbed through 100G and 200G to beyond 400G to cope with increased bandwidth requirements, network operators are increasingly focusing on automation – typically via software-defined networking (SDN) or network function virtualisation (NFV) – to increase service velocity and remove cost and errors from their networks. However, one layer – the physical layer – has stubbornly resisted the move to software definition and automation.
In the wake of Google Access CEO Craig Barratt’s ‘Goodbye Access’ post on the Google Fiber blog, there are pundits left, right and centre predicting the end of Google Fiber. Barratt’s post tries to sound upbeat, but in essence he’s announcing that Google Fiber won’t be expanding further (pending a strategic re-evaluation), that people will be made redundant, and that he’s leaving. I don’t know Craig and can’t really comment on his tenure as Access CEO, but that doesn’t exactly sound like good news.
Attention is often focused on the penetration of fibre closer to the subscriber; however, it is important not to lose sight of how global demand for data-rich applications also affects the long-haul section of the network, further upstream. Links between cities must support the ever-increasing volume of data traffic to ensure that transmission bottlenecks do not occur.
Increases in per-channel data rates up to 400G are grabbing headlines, and fibre makers must continue to innovate so that such speeds can be deployed efficiently.
Metro, regional, long-haul, metro-access, metro-aggregation, metro-core, ultra-long-haul, data centre interconnect… whatever these terms mean to you, I can almost guarantee that we would disagree somewhere in our views of exactly what these terms mean and where specifically these products are used in optical networks. Our expectations of exactly what distances these systems would cover and the functionality that each should have would probably also vary considerably.
With the recent growth in smartphone and tablet users, alongside the development of hundreds of thousands of applications, consumers around the globe are using – and expecting access to – ever more mobile data. According to a 2013 Cisco report, by the end of 2014 the number of mobile-connected devices will exceed the number of people on Earth, and by 2018 there will be nearly 1.4 mobile devices per capita. Global mobile data volumes have nearly doubled every year, showing that now is the time for 4G infrastructure to be put in place.
Cost and compatibility can make a compelling case for pushing 100Gb/s bandwidth over a single optical channel, both as individual links and supporting 400Gb/s Ethernet, finds Andy Extance
Robin Mersh takes a look at how the industry is creating next-generation optical access fit for 5G
Technological advances to meet the increasing demand for bandwidth, on the path towards the terabit network, should lead to optical signals that are flexible and adaptive, like water, argue Dr Maxim Kuschnerov and Dr Yin Wang