
Is Carrier Ethernet Enough When Flying Through Clouds?

15 Apr 2014 / Abel Tong

We know that Carrier Ethernet delivers both high-performance connectivity across the WAN and scalable, standardized, highly reliable services with QoS and service management. As a result, Carrier Ethernet is the service of choice for many applications including enterprise business connectivity, content distribution, video, wireless backhaul, financial services, and more. But, as the network transforms to meet the world of mega data centers and cloud services, a question comes to mind. Is Carrier Ethernet enough?

During last week’s MEF quarterly in Budapest, Hungary, I hosted a round-table on “Cloud Services and Carrier Ethernet.” Participants included service providers Colt, PCCW Global, Orange, Oteglobe, Tata, and Verizon; members of the Cloud Ethernet Forum (CEF); as well as members of MEF’s Cloud Ethernet Focus Group and SDN Focus Group. This was a very interesting and open discussion. We focused on an enterprise-to-data-center, infrastructure-as-a-service (IaaS) application with Carrier Ethernet connectivity and bandwidth on-demand across the WAN. Here are a few highlights:

  • Latency. Applications often span disparate physical data centers. In order for a given application to run as if in a single logical data center, the network needs to deliver guaranteed performance. Everyone knows about bandwidth. However, latency is just as important. A service needs to include specifications for both bandwidth and latency. Consider that latency is determined by the speed of light over a physical path plus the delay through each device along the path. An IP routed network cannot deliver deterministic latency because the data path changes dynamically and device-processing time is variable. Carrier Ethernet does not suffer this limitation.
  • Startup. When starting up IaaS, applications typically move large amounts of data into the new infrastructure. The move requires maximum bandwidth over a finite period. As the application transitions to steady state, bandwidth needs dial down; typically, the peak requirement is lower and actual utilization becomes bursty. Obviously, Ethernet with quality of service (QoS) and statistical multiplexing is well suited to serving both startup and steady state. But having headroom to handle startup and other dynamic bandwidth demands is also necessary. Again, check the box for Carrier Ethernet, with a focus on 100G Ethernet (vs. 10GbE) long-term.
  • Geo-location. Some applications deal with personal or sensitive data. Several countries have regulations mandating that such data stay within geographic boundaries, and violations may be subject to criminal penalties. Think about an MPLS network in Canada that experiences a failure. A fast re-route might switch traffic down any of a number of routes traversing into the United States. While connectivity might be maintained, any healthcare application would now be in violation of data privacy laws. Thus, a network needs to be connection-oriented and allow for the engineering of working and protect paths to guarantee that data stays within specified geographic boundaries. Advantage - Carrier Ethernet.
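To make the latency point above concrete, here is a back-of-the-envelope budget: propagation delay (light in fiber covers roughly 200 km per millisecond) plus a fixed per-device delay along an engineered path, versus a longer, more variable routed path. All figures below are hypothetical, chosen only to illustrate why a deterministic path matters.

```python
# Back-of-the-envelope one-way latency budget (illustrative figures only).
# Light in fiber travels at roughly 2/3 of c, i.e. about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0  # common rule of thumb

def one_way_latency_ms(path_km, per_hop_delay_us, hops):
    """Propagation delay plus per-device delay along a path."""
    propagation_ms = path_km / FIBER_KM_PER_MS
    device_ms = hops * per_hop_delay_us / 1000.0
    return propagation_ms + device_ms

# Hypothetical 1,000 km engineered Carrier Ethernet path, 8 switches at ~20 us each:
engineered = one_way_latency_ms(1000, 20, 8)    # 5.0 + 0.16 = 5.16 ms

# Same endpoints after a dynamic reroute: a 1,500 km detour through 12 routers
# with ~200 us of variable queuing/processing each:
rerouted = one_way_latency_ms(1500, 200, 12)    # 7.5 + 2.4 = 9.9 ms

print(f"engineered path: {engineered:.2f} ms, rerouted path: {rerouted:.2f} ms")
```

The bandwidth is identical in both cases; only the path and per-device behavior changed, yet the latency nearly doubled. That is why a service specification needs both numbers.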

Numerous carriers have Carrier Ethernet networks and deliver Carrier Ethernet services. But what about CE services that allow both bandwidth and latency specifications, with large on-demand bandwidth changes over 100G Ethernet, and controls over data localization and path computation? The world of data center and cloud services is changing. CE is evolving with it. Is your network ready?
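One way to picture the data-localization control mentioned above is path computation that simply refuses to consider nodes outside the permitted geography, for both the working and the protect path. The sketch below is a plain Dijkstra shortest-path search restricted to an allowed node set; the topology, node names, and costs are all invented for illustration.

```python
import heapq

def constrained_shortest_path(graph, src, dst, allowed):
    """Dijkstra restricted to nodes in `allowed`, so the computed path
    never crosses out of the permitted geographic boundary."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            # Reconstruct the path by walking predecessors back to src.
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, []):
            if nbr not in allowed:  # skip any node outside the boundary
                continue
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Hypothetical topology: Canadian nodes plus a cheaper detour through the US.
graph = {
    "YVR": [("YYC", 10), ("SEA", 3)],
    "YYC": [("YYZ", 25)],
    "SEA": [("ORD", 20)],
    "ORD": [("YYZ", 5)],
    "YYZ": [],
}
canada_only = {"YVR", "YYC", "YYZ"}
cost, path = constrained_shortest_path(graph, "YVR", "YYZ", canada_only)
print(cost, path)
```

Without the constraint, the cheapest route (cost 28) dips through SEA and ORD in the US; with it, the computation settles on the in-country path YVR-YYC-YYZ at cost 35. A connection-oriented network can pin both working and protect paths to such a constrained result, which is exactly what a fast re-route over an unconstrained routed network cannot guarantee.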


