Two key infrastructure considerations for the internet of things from SXSW

When it comes to building out the broadband infrastructure, the data networks and the processing for the internet of things, we’re going to have to make some changes. That’s the message I got from conversations with a variety of people and from panels at South by Southwest in Austin this weekend.

It’s the network, stupid

When considering washing machines that tweet, inventory-tracking sensors that send a few pieces of data, or home health monitoring systems that track someone’s heartbeat, most people assume the payloads are so small that the network can easily handle them. But today’s networks are designed around very different traffic patterns.

Joe Weinman, the SVP of cloud services and strategy at Telx, noted that the broadcast model employed by cellular networks (and, to an extent, wireline networks) focuses on sending one piece of content down to many users, or in some cases one piece of content down to one person. With the internet of things, the flow reverses: devices at the edge send many different chunks of data up to the core.

That could require new network designs that emphasize uploads. Another element, one that wasn’t discussed much at the panel, is quality of service and latency. In the heart monitoring example, data that feeds real-time monitoring should take priority over other network traffic. If the same device is merely gathering information for later diagnostics, though, that traffic can take a back seat to other bits.
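To make the prioritization idea concrete, here’s a minimal sketch in Python of how a home health device might mark its two kinds of traffic differently. Everything here is an assumption for illustration: the collector address, the payload format, and the helper names are invented, and DSCP markings only help on networks whose routers actually honor them.

```python
import json
import socket
import time

# DSCP "Expedited Forwarding" (46), shifted into the IP TOS byte.
# Routers that honor DSCP will queue this traffic with priority.
DSCP_EF = 46 << 2           # 0xB8: low-latency, real-time traffic
DSCP_DEFAULT = 0            # best-effort for bulk diagnostics

# Hypothetical collection endpoint for this example.
MONITOR_HOST = ("monitor.example.net", 9999)

def make_socket(tos: int) -> socket.socket:
    """UDP socket with the given IP TOS/DSCP marking (Linux)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock

realtime = make_socket(DSCP_EF)       # heartbeat samples: send now
bulk = make_socket(DSCP_DEFAULT)      # diagnostics: whenever there's room

def send_heartbeat(bpm: int) -> None:
    """Real-time sample, marked so the network can prioritize it."""
    payload = json.dumps({"t": time.time(), "bpm": bpm}).encode()
    realtime.sendto(payload, MONITOR_HOST)

def send_diagnostics(log: dict) -> None:
    """Batch diagnostic data, happy to yield to other traffic."""
    bulk.sendto(json.dumps(log).encode(), MONITOR_HOST)
```

The point of the split is that the priority decision is made per traffic class at the device, not per packet in the core, which is the kind of upload-side design choice the panel was gesturing at.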

Processing may find a new home

The network is probably the most important (and is definitely the most expensive) element of the internet of things infrastructure, but another ongoing debate is about where the information collected by the thousands (millions?) of sensors we’ll connect will be turned into action or aggregated to form meaningful insight. Namely, will the processing happen in the cloud, or will it happen locally?

Wael Diab, senior technical director at Broadcom’s infrastructure and networking group, noted that the pendulum has swung back and forth between centralized and distributed processing since the mainframe era. What’s notable about the internet of things is that it will need both, with the location of processing shifting dynamically depending on several factors.

For example, if a truly universal internet of things ever materializes (as opposed to siloed pockets of connectivity in the medical space, the home, the car, etc.), devices might send certain types of data to a local hub in a medical or automotive setting because doing so is more secure or cheaper, while taking advantage of the cloud and wireline broadband at home or at work.
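As a toy illustration of that dynamic placement decision, here’s a short Python sketch. The policy, the cost figures, and every name in it are hypothetical; a real system would weigh far more factors (regulation, latency, battery, link availability), but the shape of the decision is the same.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    kind: str         # e.g. "heart_rate", "thermostat", "engine_rpm"
    sensitive: bool   # privacy or regulatory constraint
    size_bytes: int   # payload size

# Invented per-megabyte uplink costs by setting, for illustration only.
UPLINK_COST = {"home": 0.0, "medical": 0.10, "automotive": 0.25}

def place_processing(reading: Reading, setting: str) -> str:
    """Decide where a reading gets processed: 'local_hub' or 'cloud'.

    Toy policy: sensitive data stays on a local hub, expensive uplinks
    keep bulky data local, and everything else goes to the cloud where
    cheap wireline broadband is available.
    """
    if reading.sensitive:
        return "local_hub"                          # security wins
    cost = UPLINK_COST.get(setting, 0.25) * reading.size_bytes / 1e6
    if cost > 0.01:                                 # arbitrary threshold
        return "local_hub"                          # too costly to ship
    return "cloud"

print(place_processing(Reading("heart_rate", True, 200), "medical"))  # local_hub
print(place_processing(Reading("thermostat", False, 64), "home"))     # cloud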

What’s almost universal among the people I’ve discussed this with is the view that the technology needed to make the internet of things possible has been around for a while. The big change is that people can now paper over the lack of standards and interoperability in the underlying technologies by using the web, typically via a smartphone.

But that won’t be enough to truly create a connected world with services that span different devices: the promised internet of things.
