In computer programming, should more cores equal less accuracy?

A panel on the future of computing and programming at Structure 2011.

We are moving from the information age to the insight age, where it’s not just data that matters, but finding ways to use it. These are uncharted waters, and the needs of the computing systems that will discover these insights are remarkably different from the computing systems we use today.

It is not a surprise that the computer industry is building chips with more cores to keep up with the influx of data and the need to process it faster. Adding cores increases the number of processors on a chip and improves its performance without trying to raise the clock speed. But with more cores, we need to think differently about programming, and in that lies a big challenge and a big opportunity.

The crux of the issue is how to program massively multicore chips so that performance scales along with the number of cores. I’ve covered MIT’s efforts on this, as well as how IBM is taking the programming and putting it on a chip modeled after the human brain. And IBM researcher David Ungar apparently thinks that making computers less accurate is one answer.

More cores means less accuracy?

Ungar, a researcher at IBM, is speaking at the SPLASH conference in Portland, Ore., in October. In a tantalizing summary of his talk, he explains his version of the manycore problem and hints at a solution, called the Renaissance project, that IBM is working on with Portland State University and Vrije Universiteit Brussel. The summary says:

If we cannot skirt Amdahl’s Law, the last 900 cores will do us no good whatsoever. What does this mean? We cannot afford even tiny amounts of serialization. Locks?! Even lock-free algorithms will not be parallel enough. They rely on instructions that require communication and synchronization between cores’ caches. Just as we learned to embrace languages without static type checking, and with the ability to shoot ourselves in the foot, we will need to embrace a style of programming without any synchronization whatsoever.
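
To put a number on why “tiny amounts of serialization” matter, here is a back-of-the-envelope Amdahl’s Law calculation in C (my own illustration, not part of the talk summary; the function name is made up). With a serial fraction s and N cores, the best possible speedup is 1 / (s + (1 - s) / N):

    /* Back-of-the-envelope Amdahl's Law arithmetic (illustrative only). */
    #include <stdio.h>

    /* Best-case speedup on n cores when a fraction s of the work is serial. */
    static double amdahl_speedup(double s, double n)
    {
        return 1.0 / (s + (1.0 - s) / n);
    }

    int main(void)
    {
        /* Even a 1% serial fraction caps a 1,000-core machine below 100x. */
        printf("s = 0.01,  N = 1000: %.1fx\n", amdahl_speedup(0.01, 1000.0));
        /* A 0.1% serial fraction still throws away roughly half the cores. */
        printf("s = 0.001, N = 1000: %.1fx\n", amdahl_speedup(0.001, 1000.0));
        return 0;
    }

That is the arithmetic behind “the last 900 cores will do us no good”: with just a 1 percent serial fraction, a 1,000-core machine performs like roughly 91 cores.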

Ungar’s solution is to accept that giving up synchronization also means the computer will give back less exact results. “The obstacle we shall have to overcome, if we are to successfully program manycore systems, is our cherished assumption that we write programs that always get the exactly right answers,” the summary says. This runs counter to the emphasis on exactness that prevails in scientific and parallel computing, but it is in line with several other predictions about the future of computing. For example, Rice University is looking at probabilistic computing, which sacrifices accuracy for more energy-efficient computers, and the startup Lyric Semiconductor is weighing similar compromises.
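
To make the trade-off concrete, here is a toy sketch of synchronization-free counting in C (my own illustration, not code from the Renaissance project). Several threads bump a shared counter with a plain load-add-store instead of locks or atomic increments, so colliding updates are silently dropped and the final tally is only approximately right, but no thread ever waits on another:

    /* Toy sketch: a synchronization-free shared counter. No locks, no atomic
     * read-modify-write; colliding updates are simply lost, so the final
     * count is approximate. Build with the compiler's -pthread flag. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define THREADS 8
    #define INCREMENTS_PER_THREAD 1000000L

    /* Relaxed atomic accesses are used only so the race is well defined in C;
     * they impose no ordering and no waiting between cores. */
    static atomic_long counter;

    static void *bump(void *arg)
    {
        (void)arg;
        for (long i = 0; i < INCREMENTS_PER_THREAD; i++) {
            /* Plain load-add-store instead of atomic_fetch_add: concurrent
             * updates can overwrite each other, which is the exactness we
             * agree to give up. */
            long v = atomic_load_explicit(&counter, memory_order_relaxed);
            atomic_store_explicit(&counter, v + 1, memory_order_relaxed);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[THREADS];
        for (int i = 0; i < THREADS; i++)
            pthread_create(&tids[i], NULL, bump, NULL);
        for (int i = 0; i < THREADS; i++)
            pthread_join(tids[i], NULL);

        printf("expected %ld, got %ld\n",
               (long)THREADS * INCREMENTS_PER_THREAD,
               atomic_load_explicit(&counter, memory_order_relaxed));
        return 0;
    }

The bargain is that nothing ever blocks, so the work keeps scaling as cores are added, while the answer drifts away from exact. Whether that drift is tolerable depends on the application: a trend estimate or a recommendation score can live with it; a bank balance cannot.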

Why are today’s chips and programming models hitting a wall?

The computing model is changing thanks to highly distributed nodes and single-focused applications such as Facebook or Google’s search engine, and mirrored in that change is a silicon-level shift toward massively multicore computers. The idea is that adding more cores boosts a chip’s performance, with the caveat that the operating system has to understand how to use the hundreds or even thousands of cores at its disposal. So far, getting an OS that can direct that many cores is a challenge. There are also physical challenges associated with accessing memory and keeping the communication between cores on a chip from clogging.
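
One concrete form of that clogging is contention over the cache lines cores use to share data, often called false sharing. The C sketch below (my own illustration, which assumes a 64-byte cache line) gives each thread its own counter; because the counters sit next to each other in memory, the cores end up shuttling the same cache line back and forth unless each counter is padded out to its own line:

    /* Sketch of cache-line contention ("false sharing"): each thread owns its
     * own counter, but without padding the counters share a cache line and
     * the cores spend their time passing that line around.
     * Build with the compiler's -pthread flag. */
    #include <pthread.h>
    #include <stdio.h>

    #define THREADS 4
    #define ITERATIONS 50000000L
    #define CACHE_LINE 64   /* assumed cache-line size in bytes */

    struct padded_counter {
        volatile long value;                  /* volatile keeps the loop from being optimized away */
        char pad[CACHE_LINE - sizeof(long)];  /* remove this pad to see the slowdown */
    };

    static struct padded_counter counters[THREADS];

    static void *work(void *arg)
    {
        int id = *(int *)arg;
        for (long i = 0; i < ITERATIONS; i++)
            counters[id].value++;   /* each thread touches only its own slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[THREADS];
        int ids[THREADS];
        for (int i = 0; i < THREADS; i++) {
            ids[i] = i;
            pthread_create(&tids[i], NULL, work, &ids[i]);
        }
        for (int i = 0; i < THREADS; i++)
            pthread_join(tids[i], NULL);
        printf("each counter reached %ld\n", counters[0].value);
        return 0;
    }

On most multicore machines, removing the pad member makes the program noticeably slower even though the threads never touch each other’s data, a small taste of why scaling to hundreds of cores is as much a memory and communication problem as a compute problem.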

Tilera is one company building massively multicore chips, and Intel, Nvidia and AMD are also making plays in this area. So far, Nvidia and AMD, which are focused on graphics processors, have built out tools to help program many-core GPUs. Adapteva is a startup making many-core chips for cell phones and tablets, and I’m sure there are plenty of other efforts out there.

But for every many-core architecture out there, a programming model must be found to exploit it. And finding one that works across multiple chips will serve the industry better, given that esoteric OSes won’t get the development love of the masses and thus are less likely to win over converts. Nvidia is a perfect example of this. Until it created CUDA, a toolkit that helps scientists write C-level programs for the GPU, using its GPUs for anything other than games was a niche pursuit. But after CUDA, more and more scientists picked it up, and now it’s even being deployed in supercomputers and specialty servers.

So the challenge for those, like Ungar, who are rethinking the way computers are programmed is to find a way to do it without forcing programmers to throw out their old applications and rewrite them from scratch. As big data applications become more prevalent, the debate over the best hardware and the best software will get louder, and perhaps some winners will emerge.
