What comes after Siri? A web that talks back

Siri may be the hottest personal assistant since I Dream of Jeannie, but Apple’s artificial intelligence is only the tip of the iceberg as we combine ubiquitous connectivity, sensor networks, big data and new methods of AI and programming into a truly connected network. Instead of connecting people to people as Facebook or even some of the cooler services like Turntable.fm do, the web of the future will connect machines to machines and connect those machines back to people.

What Om calls the “alive web” is still people talking to other people, but the next generation of the web is far more interesting: it’s when machines start talking to each other, and then to people. The emergence of the Internet of things is well documented, but to get there we’ll need several advances in technology, from low-power, cheap sensors to better ways of programming computers so they can make sense of data from several million endpoints.

The Internet of things is here

For example, Ford is not just connecting its entertainment systems to the web; it’s also connecting its engines. Another roadside example is the myriad traffic-mapping services that gather data from cars on the road to deliver a real-time look at traffic. Such examples could end up profoundly changing the way we get around, whether through self-driving cars, more efficient ride sharing or better public transportation.

Future generations of Fords could talk to the cloud.
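
To make the traffic example concrete, here is a minimal sketch in Python of how such a service might work, assuming cars send anonymized (road segment, speed) reports; the segment IDs and report format are invented for illustration, not any carmaker’s or mapping service’s actual API.

    from collections import defaultdict

    class TrafficMap:
        """Toy aggregator: cars report (segment_id, speed_mph) and the
        service keeps a running average speed for each road segment."""

        def __init__(self):
            self._totals = defaultdict(float)  # sum of reported speeds
            self._counts = defaultdict(int)    # number of reports

        def report(self, segment_id, speed_mph):
            # Each connected car (or phone) periodically sends its current
            # road segment and speed; no identity is needed for an average.
            self._totals[segment_id] += speed_mph
            self._counts[segment_id] += 1

        def average_speed(self, segment_id):
            # Returns None until at least one car has reported on the segment.
            n = self._counts[segment_id]
            return self._totals[segment_id] / n if n else None

    # Three cars crawling near one exit, one car moving freely elsewhere.
    traffic = TrafficMap()
    for speed in (12, 9, 15):
        traffic.report("I-280N:exit-25", speed)
    traffic.report("CA-92W:mile-10", 61)
    print(traffic.average_speed("I-280N:exit-25"))  # 12.0 mph, i.e. congestion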

Want to get away from the automotive examples? Consider what the alignment of big data and connected sensors could mean for healthcare. The simplest example might be epidemiological mapping of diseases, but more advanced capabilities might involve insulin pumps that monitor blood sugar and react instantly, or that prompt you to order certain foods once you enter a store or restaurant. Think this is crazy? There are already 23,000 mobile health apps available for iOS and Android devices, according to Happtique, an app store, and the Consumer Electronics Association is creating a medical and healthcare summit designed to hash out the issues surrounding gadget-delivered healthcare.
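
As a rough sketch of the closed loop that sentence imagines (with invented thresholds and stand-in pump and phone interfaces, nothing like a real medical device design), the shape of the logic is simple:

    class StubPump:
        def suspend_delivery(self):   print("pump: delivery suspended")
        def deliver_correction(self): print("pump: small correction dose")

    class StubPhone:
        def alert(self, msg):         print("phone alert:", msg)

    LOW_MG_DL, HIGH_MG_DL = 70, 180  # illustrative thresholds, not clinical guidance

    def react_to_reading(glucose_mg_dl, pump, phone):
        """What a connected monitor/pump pair might do with one reading.
        The pump and phone objects are hypothetical stand-ins for device APIs."""
        if glucose_mg_dl < LOW_MG_DL:
            pump.suspend_delivery()
            phone.alert("low glucose: %d mg/dL" % glucose_mg_dl)
        elif glucose_mg_dl > HIGH_MG_DL:
            pump.deliver_correction()
            phone.alert("high glucose: %d mg/dL" % glucose_mg_dl)
        # In-range readings are logged quietly; nobody gets paged.

    react_to_reading(62, StubPump(), StubPhone())  # triggers the low-glucose branch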

So what else do we need to make the web talk back?

These are emerging examples of what machines will be able to tell us once they are wired into the web, but to really understand what they’re saying, we need to address some holes in the system. First, there’s connectivity, which we’re rapidly making ubiquitous. From smart-grid components connected to a cellular network to the emergence of white-spaces broadband that will allow for connectivity on lower-value networks, the connectivity element is in place, although costs can still be prohibitive.

Another area where we’re making rapid progress is sensors that can track everything from inventory to environmental factors. Earlier this week, I reported on a company trying to make cheap, thin-film sensors that might be attached to objects for a few pennies. Already, RFID and GPS tags track high-value items such as fleet trucks and beloved pets; adding more items just requires cheaper sensors.

But now it gets hard. Ericsson estimates that 50 billion devices will be connected to the web by 2020, which seems reasonable. But if we want those devices to give us information, that’s a lot of data to sift through and correlate. And if we want those machines to react without human intervention, we’re looking at an entirely new style of programming, as well as better AI.

Smarter machines require better AI and different programming

Piecing together the puzzle of better AI.

On the AI side, Siri is huge. Voice recognition has come a long way, but what’s essential here is the level of understanding Siri offers once it has recognized your words. Siri has a sense of what words mean, and it represents the real promise of the semantic web delivered in something useful rather than gimmicky and complicated (sorry, Twine).
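
To see the gap between recognizing words and understanding them, here is a toy Python sketch: the interesting work is turning an already-transcribed sentence into a structured intent a machine can act on. The patterns below are purely illustrative and bear no relation to how Siri actually does it.

    import re

    def parse_intent(transcript):
        """Toy semantic parser: map a transcribed sentence to a structured intent."""
        m = re.match(r"remind me to (?P<task>.+?) at (?P<time>[\w: ]+)$", transcript, re.I)
        if m:
            return {"intent": "create_reminder", "task": m.group("task"), "time": m.group("time")}
        m = re.match(r"what'?s the weather (in|for) (?P<place>.+)$", transcript, re.I)
        if m:
            return {"intent": "weather_lookup", "place": m.group("place")}
        return {"intent": "unknown", "text": transcript}

    print(parse_intent("Remind me to call mom at 5 pm"))
    # {'intent': 'create_reminder', 'task': 'call mom', 'time': '5 pm'}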

Improvements in AI will allow machines to parse the data from billions of sensors and notify people only when action is needed. At a simpler level, it’s akin to getting a warning from your heart-rate monitor only when your heart skips a beat, rather than hearing every blip. In a decade, it may mean weather data feeding a computer that ships the right inventory to stores in time for a freak cold spell.
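
Here is a minimal sketch of that heart-rate example in Python, assuming the monitor produces a stream of inter-beat intervals in milliseconds; the tolerance is an invented illustration, not a clinical rule.

    def skipped_beats(intervals_ms, tolerance=0.35):
        """Yield only the anomalous intervals from a stream of readings.
        An interval far longer than the running average suggests a skipped
        beat; everything else is suppressed so the wearer never sees it."""
        avg = None
        for ms in intervals_ms:
            if avg is not None and ms > avg * (1 + tolerance):
                yield ms  # surface only the blip worth a warning
            # update a simple moving average of the "normal" rhythm
            avg = ms if avg is None else 0.9 * avg + 0.1 * ms

    # Steady 800 ms beats with one long gap: only the 1450 ms interval is reported.
    print(list(skipped_beats([810, 795, 805, 1450, 800, 790])))  # [1450]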

In addition to AI, the Internet of things will need new ways to process and store information. It will have to spread its intelligence across multiple endpoints, like a mesh, rather than centralize it in one big brain. The smarter our computers get, the more challenging they are to program in a way that lets them efficiently use all of their compute resources. This is why projects such as IBM’s neurosynaptic chips, or HP’s efforts to create a new style of chip for processing big data, are so important.
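
One way to picture that mesh-style split is a hedged Python sketch in which each endpoint keeps its own rolling view of “normal” and sends the central service only periodic summaries and immediate exceptions, rather than streaming every raw reading to one big brain. The class and message formats here are assumptions for illustration.

    class EdgeNode:
        """Toy endpoint that does its own thinking: it batches raw readings
        locally and forwards only a summary, plus any outliers, upstream."""

        def __init__(self, node_id, send_upstream, batch_size=100, limit=3.0):
            self.node_id = node_id
            self.send_upstream = send_upstream  # callback to the central service
            self.batch_size = batch_size
            self.limit = limit                  # multiples of the mean that count as an outlier
            self.buffer = []

        def ingest(self, reading):
            self.buffer.append(reading)
            mean = sum(self.buffer) / len(self.buffer)
            if len(self.buffer) > 10 and reading > self.limit * mean:
                # Exceptions go upstream immediately; raw data stays at the edge.
                self.send_upstream({"node": self.node_id, "type": "alert", "value": reading})
            if len(self.buffer) >= self.batch_size:
                self.send_upstream({"node": self.node_id, "type": "summary",
                                    "mean": mean, "count": len(self.buffer)})
                self.buffer.clear()

    # The central service sees two messages here instead of twenty readings.
    node = EdgeNode("soil-sensor-42", send_upstream=print, batch_size=20)
    for value in [5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 5, 40, 5, 6, 5, 4, 6, 5, 5, 6]:
        node.ingest(value)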

Pulling it all together

We’re getting to a point where connectivity allows us to interact with each other in real time, but the real boon of a connected web is not just connecting people; it’s adding to our capabilities with machines that can track states, detect anomalies and then advise humans on how to react. It’s not exactly the Singularity, but it is a necessary evolution if we’re going to take the terabytes of data we’re generating and separate the monumental from the mundane.
