If you want to build the next Siri, ai-one wants to help

Artificial intelligence has to be a siren’s song for application developers who want to strike it rich. It’s very cool in terms of possibilities and, as Apple’s Siri application has proven, the public will eat it up. The problem, of course, is that AI is hard as hell: techniques such as machine learning and natural-language processing are becoming more common, but the expertise to use them is still scarce. A San Diego-based startup called ai-one wants to change that with a software development kit (SDK) that aims to put artificial intelligence in the hands of developers everywhere.

Actually, ai-one calls its technology “biologically inspired intelligence” because it learns like the human brain does, detecting patterns and learning new data sources automatically without requiring developers to feed it new analytic models. Better yet, according to company literature (it has a SlideShare page very much worth checking out for more details), it takes less than a day to train developers on ai-one and less than a day to build an application. The company’s goal is to “embed intelligent computing capability in every device,” ai-one President Tom Marsh told me during a recent call.

What is a holosemantic data space!?!

At the highest level, ai-one’s technology, which powers its suite of products, works by creating a virtual brain (ai-one calls it a holosemantic data space) that processes files as they stream into the system. Marsh said the brain finds associations “between every byte pattern and every other byte pattern … in the entire corpus,” which allows it to work across very different domains. Ai-one calls the smallest unit of information in any given field the “data quant,” and that’s what it analyzes: for text files, the quants are words; for genomes, they’re DNA base pairs.
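Ai-one hasn’t published how the holosemantic data space works internally, so the following is only a rough mental model, not the company’s algorithm or API. The Python sketch below treats words as the data quants, counts how strongly each quant associates with every other quant appearing near it, and ranks quants by total association strength; every name in it is illustrative.

```python
# Toy illustration only: not ai-one's algorithm or API. Words stand in for
# "data quants"; associations are co-occurrence counts within a window.
from collections import defaultdict

def build_associations(tokens, window=5):
    """Count co-occurrences between each token and its near neighbors."""
    assoc = defaultdict(lambda: defaultdict(int))
    for i, tok in enumerate(tokens):
        for other in tokens[i + 1 : i + 1 + window]:
            if other != tok:
                assoc[tok][other] += 1
                assoc[other][tok] += 1
    return assoc

def rank_by_importance(assoc):
    """Score each token by the total strength of its associations."""
    scores = {tok: sum(nbrs.values()) for tok, nbrs in assoc.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    text = ("spain beat netherlands in the world cup final "
            "the world cup final was played in johannesburg")
    for word, score in rank_by_importance(build_associations(text.split()))[:5]:
        print(word, score)
```

A real pattern learner would work on raw byte patterns rather than whitespace-split words, which is exactly what produces the shortcomings described below.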

Ai-one’s first product, called Topic Mapper, is a set of libraries and an API for text analysis (there’s also a genomics-focused version). Text analysis is the foundation for everything from LinkedIn’s People You May Know feature to Siri, which converts speech to text before analyzing it and providing an answer.

Marsh showed me a demo application that shows what Topic Mapper can do. In just seconds, the app ingested 49 data files from the World Cup soccer website, processed them and created an XML file ranking every word by how important the system judged it to be. It also created a graphical representation in which related words stem from highly relevant words and form clusters. “Your brain does this really brilliantly,” Marsh said, but it can be difficult to write algorithms that do the same.

Interestingly, that application didn’t use natural-language processing, which made the results of the demonstration all the more impressive but also highlighted the approach’s shortcomings. Marsh also showed me the results of an attempt to analyze tweets from last year’s SEMTECH conference; the app ranked URLs and date stamps fairly highly even though they’re not words. Because it was analyzing only byte patterns and knew nothing about language, the app couldn’t tell the difference between a time such as 12:21 and a term such as “schema.org.”

Using a simple browser-based tool called BrainBrowser that ai-one created, Marsh showed what happens when Topic Mapper meets natural-language processing. He was able to process the text of a website while filtering out specific parts of speech, resulting in a browsable list of relevant keywords. Developers could use this ability to easily create apps for finding related people, places and things with a high degree of accuracy. And what I saw was just a demo exposing the guts of BrainBrowser: tied to industry-specific ontologies and/or dressed up with a polished user interface, it could become either a very specialized or a broadly accessible application.
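BrainBrowser’s implementation isn’t public either, but the general move Marsh demonstrated, filtering tokens by part of speech so that only content words survive, can be sketched with the open-source NLTK library. This is a hedged stand-in for the idea, not ai-one’s stack or API.

```python
# Illustrative only: one common way to filter text by part of speech,
# using NLTK rather than anything from ai-one.
# Setup: pip install nltk; the download calls fetch the tokenizer/tagger models.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

KEEP_TAGS = {"NN", "NNS", "NNP", "NNPS"}  # noun and proper-noun tags

def keywords(text):
    """Tokenize, tag parts of speech, and keep only noun-like tokens."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    return [word for word, tag in tagged if tag in KEEP_TAGS]

if __name__ == "__main__":
    sample = "Spain beat the Netherlands in Johannesburg to win the World Cup."
    print(keywords(sample))
```

Swapping the tag set changes what survives; keeping verbs instead of nouns, say, would surface actions rather than people, places and things.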

Sensor networks or image recognition could be the killer app

Despite what its technology can do, though, Marsh said ai-one is still waiting for someone to develop the killer app that propels it into the big time. “Our challenges are more commercial at this point than they are technical,” he said.

Maybe the products on its roadmap will help. For starters, there’s the updated version of Topic Mapper slated for release in June, which will move to 64-bit, multithreaded processing and raise the addressable memory per instance from 4 gigabytes to 18 exabytes (the 64-bit ceiling), making it far more powerful and able to handle far larger datasets. There’s also Graphalizer, expected in 2013, which tunes the ai-one technology to detect patterns in streaming data such as the output of sensor networks and financial trading systems.

But the real game-changer might be UltraMatch, a version of the ai-one technology targeting image recognition that’s on the calendar for a July release. Aside from some projects going on inside Google (e.g., Google Glasses), we’ve yet to really tap the potential of real-time image recognition and matching, but the possibilities are both invigorating and scary. Marsh said UltraMatch uses pixels as the data quant, and the technology has already been proven in CSI labs that use a shoeprint-matching engine ai-one built several years ago.
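UltraMatch’s algorithm isn’t public, but to make pixels-as-data-quant concrete, here’s a minimal sketch of one textbook matching technique, average-hash fingerprinting, using the Pillow imaging library. The file names are hypothetical, and the method is only representative of the problem space, not ai-one’s approach.

```python
# Illustrative only, not UltraMatch. An "average hash" boils an image down
# to a 64-bit fingerprint of its pixels; two images are likely matches when
# their fingerprints differ in only a few bits. Requires: pip install Pillow.
from PIL import Image

def average_hash(path, size=8):
    """Shrink to size x size grayscale; set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (px > mean)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # Hypothetical file names, for illustration.
    h1 = average_hash("shoeprint_scene.png")
    h2 = average_hash("shoeprint_reference.png")
    print("likely match" if hamming(h1, h2) <= 10 else "no match")
```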

In fact, Google Glasses and Google Goggles are prime examples of just the type of big-company innovation ai-one wants to bring to the masses. Marsh is impressed with what large software vendors are doing in their research labs, but he also sees that as a recipe for walled gardens of innovation. If we don’t democratize access to AI techniques, he said, we’re essentially “handing the keys over to IBM and Google and letting everything run on their computers and asking, ‘Pretty please, can we have some smart computing?’”

Feature image courtesy of Shutterstock user Sashkin.
