Science, politics and the optics of broadband

It was a long time ago that I started my love affair with optical networks and broadband. It began with covering long-forgotten companies such as UUNet Technologies and PSINet, and names such as Northern Telecom, long before it became Nortel. So perhaps you can’t fault me for feeling excited about the news that Alcatel-Lucent Bell Labs has identified an optical networking breakthrough that would theoretically allow sending data at 400 gigabits per second over a distance of 7,900 miles.

While the news of 400G itself didn’t excite me all that much, the distance over which they could send the data did, because it has implications that go beyond the oohs and aahs of raw speed. What’s more, if this breakthrough can be commercialized, say in five to 10 years (considering companies like JDS Uniphase would have to make components), it could not only give our long-haul networks more oomph and efficiency but also have a profound impact on our cloud future.

We certainly have come a long way

Back in 1995, I remember the excitement around Ciena’s DWDM system. That same year, Nortel, the Canadian giant that at the time ruled the optical landscape, came out with a 10 Gbps offering. And then the bubble burst. Things slowed a little, and we kept waiting for the long-promised 40 Gbps technologies. In 2006, 40 Gbps speeds came to market, and by 2008 the technology was getting traction. In 2009, 100 Gbps gear came to market from Ciena.

To recap, here is the timeline of progress in optical networking technologies:

  • 1990: The first commercial 2.5G optical system was deployed.
  • 1995: The first 10G optical system was introduced.
  • 2006: The first 40G solution was introduced to the market.
  • 2008: The first coherent 40G solution was introduced, offering plug-and-play deployment and a four-fold increase in traffic capacity.
  • 2009: The first operational 100G solution was introduced.

400G and beyond

Since then, attention has focused on 400G systems. A lot of that has to do with marketing, but as my friend Andrew Schmitt, research analyst at Infonetics, points out: anytime we get a 4x improvement in networking gear, things get interesting.

So it is no surprise that this is where all the attention is focused. In 2012 we saw a new software-programmable approach to networking that upped speeds to 400 Gbps. British Telecom and Ciena recently showed off an experimental 800 Gbps network, and Alcatel-Lucent and Telefonica have been working on their own high-capacity network experiments as well.

The latest development at Alcatel-Lucent Bell Labs is yet another example of continuous progress in optical fiber technology, especially over long distances. That progress is one of the reasons we have seen an explosion in bandwidth on long-haul networks: according to Telegeography, a market research firm, the world added 54 Tbps of capacity last year alone.

The irony of optics

But after my initial excitement had worn off, I was left asking myself: how much should we care about these breakthroughs? It is not as if we are bandwidth-limited on these backhaul networks. We are doing really well in terms of transmission rates and have steadily boosted our ability to send signals over long distances. Sure, most optical networks in operation run at either 10 Gbps or 40 Gbps, but we are only a couple of years from getting 100 Gbps everywhere. What is most amazing is that this 40x improvement in optical networks has resulted in sharp declines in bandwidth prices on the networks that connect data centers, office buildings, cities and countries.

A good proxy for the long-haul and metro networking business is Cogent Communications, which operates one of the biggest networks on the Internet. The company claims that about 18 percent of Internet traffic runs over Cogent’s pipes, and that it holds 28 percent of the market by bits and 12 percent in terms of dollars.

At a Merrill Lynch conference, Cogent Communications CEO David Schaeffer pointed out that the “average price per megabit in the market has fallen at a rate of about 40% per year for a dozen years” even as the number of players has gone from 200 to 12. The price on Cogent’s network during that period has “fallen from $10 a megabit to the most recent quarter at $3.05 a megabit.”
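To make those numbers concrete, here is a minimal back-of-the-envelope sketch in Python (my arithmetic, not Cogent’s). The 40% figure is the market-wide annual decline Schaeffer cites, while the $10-to-$3.05 drop is Cogent’s own pricing; I am assuming the same dozen-year horizon for both, per his quote.

```python
# Compound price decline, sketched from the Schaeffer quotes above.
# Assumption (mine): a constant annual rate over 12 years.

def price_after(start_price: float, annual_decline: float, years: int) -> float:
    """Price after compounding a fixed annual percentage decline."""
    return start_price * (1 - annual_decline) ** years

def implied_annual_decline(start_price: float, end_price: float, years: int) -> float:
    """Constant annual decline implied by a start and an end price."""
    return 1 - (end_price / start_price) ** (1 / years)

# Market-wide: 40% a year compounds brutally over a dozen years.
print(f"${price_after(10.0, 0.40, 12):.2f} per megabit")          # ~$0.02

# Cogent's own pricing implies a gentler, single-digit annual decline.
print(f"{implied_annual_decline(10.0, 3.05, 12):.1%} per year")   # ~9.4%
```

The gap between the two results is a reminder that a market-wide average and a single carrier’s list price can fall at very different rates.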

He is not alone. Here is a little chart from 2012 about IP transit prices that tells the story of falling prices.

Schaeffer estimates that the core internet transit business amounts to about 400 petabytes a day, and that the bandwidth “purchased at the core of the internet out of the 650 data centers is a $1.5 billion addressable market and it’s been flat at that level for a dozen years.”
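For scale, a quick sketch (again my arithmetic, assuming decimal petabytes and traffic averaged evenly over the day) puts that daily volume in the same units as the 54 Tbps figure above:

```python
# Convert 400 petabytes/day into a sustained transmission rate.
# Assumptions (mine): decimal petabytes (10^15 bytes), flat 24-hour average.

PETABYTE_BYTES = 10 ** 15
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_day = 400 * PETABYTE_BYTES
bits_per_second = bytes_per_day * 8 / SECONDS_PER_DAY

print(f"{bits_per_second / 1e12:.1f} Tbps sustained")  # ~37.0 Tbps
```

In other words, the entire core transit business averages out to roughly 37 Tbps, less than the 54 Tbps of new capacity Telegeography says came online last year alone.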

I am guessing David won’t be struggling for business in the coming years. The growing state of connectedness is going to create much stronger demand for a much beefier cloud-based infrastructure. And as everything in the world is digitized through sensors and embedded compute devices, we are going to see an explosion in ambient data traveling between machines. Those machines, sitting in data centers, will need big fat pipes.

We are in the early innings here, but the idea of distributed processing and storage over these big fat pipes should be an exciting prospect for companies with big fiber networks. Infonetics’ Schmitt argued that as optical bandwidth at the non-consumer level becomes even more plentiful and prices tumble, we can learn to waste it. Instead of networks that use routers to shuttle data, we could start to build point-to-point connections, which are far more useful for large-scale distributed computing.

The financial industry has already shown us the way: many hedge funds use special low-latency networks to process data and stay competitive. I wouldn’t be surprised if that becomes standard practice among major businesses.

Last mile conundrum

Maybe I am being childlike in my thinking, but when I look at the long-haul networks, I see technology and the free market figuring things out and, in the end, bringing bandwidth online at an unprecedented scale. And sure, we are all benefiting from the dumb, crooked and complete craziness of the boom that led to the overbuilding of fiber networks, but it has been more than a decade, and that dark fiber is being put to good use. (Just ask Google!)

Compared with the long-haul and intra-city networks, the world of last-mile connections has moved at a glacial pace. Here in the U.S., while we have seen rapid improvements in speeds (from a 1 Mbps connection at the turn of the century to an average of about 25 Mbps from cable and phone companies), they are not as astounding. A lot of that is due to the lack of competition in our access networks, which are controlled primarily by oligopolies such as AT&T, Verizon and Comcast.

Across the world, things are different. The Chinese are starting afresh. The Japanese and the Koreans went for fiber early, and Europeans have the advantage of short copper loops, which lets them milk the copper and make 100 Mbps connections a reality. The European competitive landscape is such that fiber to the home is becoming less of a rarity.

But let’s face it: when it comes to the U.S., the last mile is less about technology and more about politics and a lack of competition. Wherever there is competition, as in Chattanooga (TN), Kansas City (KS), Austin (TX) and Vermont, things start to change. Speeds go up, service improves and incumbents hustle for business. Unfortunately, those pockets of competition are far too rare.

In the U.S., we have never had real competition. The 1996 Telecommunications Act was a mirage, faulty from the outset; it never gave anyone a real chance to compete and disrupt. Thankfully, it is a distant memory, and a reminder of how Washington really works (by not working). And that is our broadband future: held hostage by a political and regulatory system that is in bed with those it is supposed to regulate.

That sense of disillusionment, however, isn’t going to stop me from getting excited about new optical breakthroughs. Who knows…!

P.S.: My colleague Stacey Higginbotham wrote about the need for different thinking from ISPs. Hope you get a chance to read it.
