Tim Tuttle, a serial entrepreneur who co-founded web acceleration technology company Bang Networks and video search engine Truveo (acquired by AOL), has returned with his third startup, Expect Labs, which he co-founded with Moninder Jheeta (who built the infrastructure for Truveo). The company today announced its first product, MindMeld, an iPad app for simplified group conferencing built on Expect's core technology concept: anticipatory computing. Even as a demo, it is an impressive piece of technology that shows where the future of computing is headed.
Group conferencing like none other
MindMeld is an iPad app (for now) that uses Facebook’s open graph and identity to help create quick audio or video conferences. Add a few people and start talking. But here is where things get interesting: As you speak (or other participants speak), the app listens and starts surfacing information pertaining to what you are talking about.
For instance, if you are talking about an upcoming meeting with, say, someone like me, then in near real time it would show you my Wikipedia page, surface my recent blog posts, show GigaOM's location on a map, and pull up other such information. And as fast as the topic shifts, the system brings up relevant information for the new topic. Sometime in the future, the company will be able to access data from your Dropbox or Google Docs account, and when it does, Cisco's WebEx division should reach for a proverbial bottle of migraine medicine.
I got a demo of the application at a fairly noisy restaurant in my neighborhood, and even there it kept offering suggestions and information pretty quickly. If there was a lag, it was due to AT&T's LTE network, which isn't as robust as advertised, especially in San Francisco. Put this app on WiFi, which we did, and everything from picture quality to voice latency to the information being pushed to the screen was pretty much flawless. Sure, it was a demo on a system used by no more than a dozen people, including all eight of the company's employees, but I have seen many demos in my time. Someday Siri will work as flawlessly as this app and will get an A-plus from me.
Rise of anticipatory computing
The MindMeld app has me convinced about the capabilities of Tuttle and his crew. Yes, if you looked at the company just through the lens of this app, its ambition might seem limited. But Tuttle says what matters to him is that the platform (which he expects to unveil next year) gets used by other apps, which can integrate it via an API. In order to showcase his company's grander ambition, though, he needed an app, and hence MindMeld.
Tuttle points out that unlike other semantic efforts, which analyze usage history, his company's approach is to look at the past 10 minutes of a conversation and then anticipate what users might need in the next 10 seconds. "We have a predictive model that changes second to second and surfaces relevant information without searching," says Tuttle. He calls it "anticipatory" computing, and as far as I am concerned, "predictive" is the future direction of computing.
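To make that idea concrete, here is a minimal sketch in Python of what a rolling "last 10 minutes" model could look like. The class names, the crude keyword-based topic extraction and the prefetching stub are my own assumptions for illustration, not Expect's actual code.

```python
import time
from collections import Counter, deque

# Illustrative sketch of "look back 10 minutes, anticipate the next 10 seconds".
# Everything here is an assumption for demonstration, not Expect Labs' implementation.

WINDOW_SECONDS = 10 * 60   # only recent conversation informs the prediction
STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "in", "about", "with"}


class ConversationContext:
    """Keeps a rolling window of transcribed utterances."""

    def __init__(self):
        self._utterances = deque()          # (timestamp, text) pairs

    def add_utterance(self, text, now=None):
        now = time.time() if now is None else now
        self._utterances.append((now, text))
        # Evict anything older than the window so predictions track topic shifts.
        while self._utterances and now - self._utterances[0][0] > WINDOW_SECONDS:
            self._utterances.popleft()

    def top_topics(self, k=3):
        """Crude topic extraction: most frequent non-stopword terms."""
        words = Counter()
        for _, text in self._utterances:
            for word in text.lower().split():
                if word not in STOPWORDS:
                    words[word] += 1
        return [word for word, _ in words.most_common(k)]


def anticipate(context):
    """Proactively fetch results for the current topics, so relevant
    information is on screen before anyone types a query."""
    return {topic: f"prefetched results for '{topic}'" for topic in context.top_topics()}


if __name__ == "__main__":
    ctx = ConversationContext()
    ctx.add_utterance("Let's plan the meeting with Om at the GigaOM office")
    ctx.add_utterance("We should review his recent blog posts first")
    print(anticipate(ctx))
```

The point of the sketch is the eviction step: because old utterances fall out of the window continuously, the predictions shift as fast as the conversation does, which matches the behavior I saw in the demo.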
Tuttle started the company two years ago to develop a platform that would "continuously pay attention to what happens in your life and pick up ambient information and then start to surface relevant information." Why? Because he believed that our computing habits were going from being desktop-bound to completely mobile, and that shift would essentially mean a different usage behavior.
Sensors, data and mobile = complexity
With more devices and more sensors coming into our lives, the amount of data being generated will reach a point where the machines need to start anticipating our needs. Search as a way to access information doesn't and won't work, mostly because search can only respond to questions we ask. And if most of our computing is shifting to devices that are always with us, the very idea of how we compute has to change.
Tuttle, who did his Ph.D. at the Artificial Intelligence Lab at the Massachusetts Institute of Technology (MIT), says the idea for the company came when he was fine-tuning his last company, Truveo. That company, which used speech-to-text technology to help search video streams, became one of the top video search engines. Fast forward to today: the emergence of faster networks, cheaper (and reliable) cloud computing platforms and newer technologies has made it possible for Tuttle and his co-founder Jheeta to develop Expect's platform.
They don't go into great detail about their infrastructure, but they say that at any given time during a call on MindMeld, multiple processing threads are running, pushing and pulling data over the network at a pretty rapid clip. They have based their system on Amazon's EC2 and have built both the voice and data communication layers, in addition to using speech-to-text technology from a partner. Translation: they are using Nuance's technology. For data, they tap sources such as YouTube, Yelp and Google.
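For the curious, here is a rough Python sketch of that kind of fan-out, assuming a simple thread pool per call. The source functions are stand-ins of my own; a real system would call the YouTube, Yelp and Google APIs (and a transcription service such as Nuance's) rather than these stubs.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Illustrative sketch of the fan-out pattern described above: for each topic
# detected in the call, query several content sources in parallel and push
# whatever comes back to the participants. Not Expect Labs' actual pipeline.

def search_youtube(topic):
    return {"source": "youtube", "topic": topic, "hits": []}

def search_yelp(topic):
    return {"source": "yelp", "topic": topic, "hits": []}

def search_google(topic):
    return {"source": "google", "topic": topic, "hits": []}

SOURCES = (search_youtube, search_yelp, search_google)


def surface_information(topics, push_to_clients):
    """Fan out one request per (source, topic) pair and push each result
    to the call participants as soon as it completes."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(source, topic)
                   for topic in topics
                   for source in SOURCES]
        for future in as_completed(futures):
            push_to_clients(future.result())


if __name__ == "__main__":
    surface_information(["gigaom", "san francisco"], push_to_clients=print)
```

Pushing each result as it completes, rather than waiting for the slowest source, is what keeps the on-screen suggestions feeling instantaneous even when one of the backends is slow.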
Tuttle isn't the only one thinking about anticipatory computing. Google recently launched Google Now, and it wouldn't surprise me to see something similar to Expect's approach show up when Google Glass becomes mainstream. Other startups, too, are working on making anticipatory computing a reality and coming up with new techniques to simplify everyday computing tasks.