Editor's note: This is the first of a two-part post. Part 1 outlines core devices and technologies; Part 2 will look at networks and systems.
I had the privilege of keynoting the inaugural IEEE Technology Time Machine Symposium last week in Hong Kong, where I listened to the world's leading academics, engineers, executives, and government officials project what the world will look like in 2020. Their predictions were based on revolutionary technologies for processing, sensors, and displays becoming integrated into global systems that can do everything from enhancing the human experience to improving environmental sustainability.
Predicting the future is a challenge, since its course depends on rapidly changing technologies integrated into large-scale systems whose acceptance will depend on human behavior, global demographics, and macroeconomic and political dynamics. Nevertheless, the IEEE Technology Time Machine Symposium helped provide a glimpse into the possible. As Rico Malvar, chief scientist of Microsoft Research, pointed out, today's innovative products, such as Microsoft's Kinect, require interdisciplinary collaboration. In the case of the Kinect, that collaboration spanned computer vision, machine learning, human-computer interaction, speech recognition, and more.
The components
I began my keynote by reviewing a number of disruptive technologies that are surprisingly far along. These include Intel's "Ivy Bridge" Tri-Gate 3-D transistors, which are built vertically like a skyscraper instead of horizontally like a mall and are being readied for production in 2012; quantum computers, which are no longer just a theoretical concept but are being shipped commercially; and the long-theorized fourth circuit element, the memristor, now prototyped by HP, which may find use in replicating the function of the human brain (sub. req'd). Chips aren't just for processing or memory, either: Wouter Leibbrandt of the Advanced Systems Lab at NXP Semiconductors stated that NXP's new sensor chip has the power of the original Pentium but fits on the head of a pin, beginning to make "smart dust" sensors a reality. All of this technology means processing power will be faster, smaller, and cheaper.
The devices
Displays are getting thinner, lighter, higher-resolution, and more power-efficient, using approaches such as OLED and e-Ink. Experts such as Prof. Hoi-Sing Kwok of the Hong Kong University of Science and Technology (HKUST) were confident that transparent, flexible, color touchscreen displays are, well, on a roll: existing prototypes keep improving, and commercial versions are just around the bend.
If you like HD, just wait. While today's 1080p displays have a resolution of 2 megapixels (1K x 2K), 35-megapixel displays have already been fabricated, 100-megapixel tiled displays are commercially available, and 287-megapixel tiled video walls have been constructed. How much is enough? Kwok has calculated that a medium-sized room fully enabled with video walls at the resolution of the human eye would need 3 gigapixels, 1,500 times today's HD. Such a room might be useful for viewing HKUST's record-breaking photograph, which is over 150 gigapixels.
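If you want to check that math, here's a quick, purely illustrative back-of-the-envelope sketch (in Python; the pixel counts are the figures quoted above, and the comparison is mine):

```python
# Back-of-the-envelope check of the resolution figures quoted above.
# Assumes 1080p means 1920 x 1080 pixels; the 3-gigapixel room figure is Kwok's.

hd_pixels = 1920 * 1080          # ~2.07 megapixels for a 1080p display
room_pixels = 3_000_000_000      # Kwok's estimate for a wall-covered room

print(f"1080p: {hd_pixels / 1e6:.2f} megapixels")
print(f"Room vs. HD: {room_pixels / hd_pixels:.0f}x")  # ~1,450x, i.e. roughly 1,500x
```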
One surprising challenge in building large displays is distribution: shipping TVs at an economically attractive scale means using today's transportation infrastructure, which limits the glass to no wider than a car lane and short enough to fit under an overpass. However, wall-sized flexible displays could be rolled up, shipped, and carried through the front door.
While today's 3-D approaches have an uncertain future, Kwok believes the most promising 3-D display technology is electro-holographic (picture Princess Leia's "Help me, Obi-Wan"). A challenge for large, high-resolution displays and electro-holographic displays is not just the display itself but the processing power required to drive it. Moore's Law and the technologies I reviewed above should help. Large images may not require large devices, either: Kwok expects every cell phone to incorporate a pico-projector (a laser projector that can throw an image onto a surface larger than the device itself), the same way that every cell phone now has a camera.
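To get a feel for why the driving electronics are the hard part, consider a rough, purely illustrative calculation (the color depth and refresh rate are my own assumptions, not figures from the symposium):

```python
# Illustrative estimate of the raw, uncompressed data rate needed to drive
# the 3-gigapixel "video wall room" described above.
# Color depth and refresh rate are assumed values, not from the talk.

pixels = 3_000_000_000        # 3 gigapixels (Kwok's room-sized figure)
bits_per_pixel = 24           # assumed color depth
frames_per_second = 60        # assumed refresh rate

uncompressed_bps = pixels * bits_per_pixel * frames_per_second
print(f"Uncompressed: {uncompressed_bps / 1e12:.1f} terabits per second")  # ~4.3 Tbps
```

Even with aggressive compression, throughput on that order suggests why the processing and interconnect behind a display can matter as much as the panel itself.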
It's not news that touchscreens are becoming popular, but the next enhancement will be "hover" touchscreens, in which each pixel is also a sensor, enabling gestural interfaces without touch. Such technology was demonstrated last year and would require adoption by device makers as well as developers.
At the other end of the spectrum are very small displays. The next generation of mobile devices may not be handheld but may instead perch on your nose or float on your retina. Masahiro Fujita, president of Sony Systems Technologies Laboratories, outlined a concept for eyeglasses with transparent lenses that double as augmented-reality displays, wirelessly linked to your social network and real-time data sources, feeding you live information as you visually scan your surroundings. They could offer details such as, "That's the restaurant where Bobby had that great salad, and it's got a table free in 10 minutes!" or, as Jian Ma, chief scientist of the Wuxi Sensing Institute, wryly observed, could alert a traveler that "your luggage is no longer with you."
The next step is the wireless contact lens display, which is already under development. Ultimately, though, devices won't be something we wear but something we implant. Brain-computer interfaces that let us control devices with our minds (PDF), or that directly stimulate the cortex to produce artificial vision, have already been built.
Sound is also important. Fujita of Sony demonstrated a 7.1-channel sound system with "high" front speakers and a "high" mix, enabling sound sources to traverse not only left to right but also top to bottom. If that's not enough, NHK has been experimenting with 22.2-channel sound, which uses 24 speakers to deliver an even more enveloping surround field. Next-generation gaming and entertainment will leverage all of these approaches: Fujita played a cinema-quality video of racing cars and challenged the audience to determine which components were real and which were computer-generated (answer: everything was CGI), pointing out that the vehicle dynamics (bouncing, traction) could be generated interactively in real time.
So what does all this mean for the networks and the backend systems? Please read Part 2 on Sunday for the details.
Joe Weinman leads Communications, Media, and Entertainment Industry Solutions for Hewlett-Packard. The views expressed herein are his own.