MIT Builds an OS to Give Multicore Chips a Heartbeat

Human beings are complicated organisms that have evolved entire systems of feedback and governance to keep our minds and bodies performing well. When we overheat, we sweat; when we need food, we get hungry and eat. As our computers become more complicated through the addition of multiple cores, MIT scientists are working on an operating system that creates a similar system of feedback and governance to ensure the machine performs well. They are part of a group of 19 researchers, known as Project Angstrom, developing hardware and software for multicore chips.

The OS, dubbed FOS (factored operating system), is designed to help programmers building applications for these chips use the full power of multiple cores without spending too long optimizing their code. As chipmakers gave up on the gigahertz race and started adding cores to boost performance, they ran into a scaling problem: doubling the number of cores didn't double performance, because applications couldn't take full advantage of the additional cores. I've written about this multicore programming issue in previous posts.

Parallel programming and tool sets for optimizing the hundreds of cores in a graphics processor have worked for the high-performance computing sector and for some enterprise applications, where it was worth the effort to rewrite or optimize code. But as the number of cores rises, programming them and moving information around on the chip become more difficult. When I've asked folks at ARM or Nvidia about this issue, they tell me the operating system will handle the problem, and that's exactly what the Project Angstrom researchers at MIT's Computer Science and Artificial Intelligence Lab are trying to do.

Anant Agarwal, the director of Project Angstrom and the CTO of Tilera, a chip company offering a 64-core chip, says that in future systems (and these systems extend all the way up to multicore servers) each core will have a thermometer and mechanisms to offer feedback on how hard it's working, in the form of what the researchers call heartbeats. From the MIT news release:

But crucial to the Angstrom operating system — dubbed FOS, for factored operating system — is a software-based performance measure, which Agarwal calls “heartbeats.” Programmers writing applications to run on FOS will have the option of setting performance goals: A video player, for instance, may specify that the playback rate needs to be an industry standard 30 frames per second. Software will automatically interpret that requirement and emit a simple signal — a heartbeat — each time a frame displays. If the heartbeat fell below 30, FOS could allocate more cores to the video player.
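
To make the heartbeat idea concrete, here is a minimal Python sketch of how an application-side heartbeat and an OS-side core allocator might interact. The names (HeartbeatMonitor, emit_heartbeat, rebalance) and the one-core-at-a-time policy are my own illustration, not the actual FOS interface.

    import time
    from collections import deque

    class HeartbeatMonitor:
        """Tracks how often an application emits heartbeats.
        Illustrative sketch only, not the real FOS API."""

        def __init__(self, target_rate_hz):
            self.target_rate_hz = target_rate_hz           # e.g. 30 frames per second
            self.timestamps = deque(maxlen=target_rate_hz)

        def emit_heartbeat(self):
            # The application calls this once per unit of work, e.g. after each frame.
            self.timestamps.append(time.monotonic())

        def measured_rate(self):
            # Heartbeats per second over the most recent window of timestamps.
            if len(self.timestamps) < 2:
                return 0.0
            span = self.timestamps[-1] - self.timestamps[0]
            return (len(self.timestamps) - 1) / span if span > 0 else float("inf")

    def rebalance(monitor, allocated_cores, max_cores):
        # OS-side policy sketch: hand the app another core when it misses its goal.
        if monitor.measured_rate() < monitor.target_rate_hz and allocated_cores < max_cores:
            return allocated_cores + 1
        return allocated_cores

In this sketch the video player would call emit_heartbeat() after each displayed frame, and the scheduler would call rebalance() periodically to decide whether the player needs another core.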

One way the heartbeat could be brought back up to par is by electing to skip certain steps of a computation where doing so won't change the end result. This technique is called "loop perforation," and an example given by MIT is skipping certain pixels in a frame of video. If the OS can figure out which steps to skip, much as the human eye takes shortcuts when scanning a room, it can save on processing power and bring performance back up. Other options include giving the developer a range of algorithms to choose from for solving a problem and letting the OS decide which one to use based on the chip's overall load, much like I might choose whether to whisper or shout based on the noise level in a room and how sore my throat feels.
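
As a rough illustration of loop perforation, here is a hypothetical pixel-averaging routine in Python. The function name and the stride parameter are made up for this example; the point is simply that skipping a fraction of the loop's iterations trades a little accuracy for a proportional drop in work.

    def average_brightness(pixels, perforation_stride=1):
        # With perforation_stride=1 every pixel is visited; with a larger stride
        # the loop is "perforated" and only every Nth pixel is sampled.
        sampled = pixels[::perforation_stride]
        return sum(sampled) / len(sampled)

    frame = list(range(256)) * 100  # stand-in for one frame's pixel values

    full = average_brightness(frame)                           # light load: exact result
    approx = average_brightness(frame, perforation_stride=4)   # heavy load: ~1/4 the work

The OS, rather than the programmer, would decide when the approximate version is acceptable, just as it would pick among alternative algorithms based on the chip's overall load.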

However, allocating resources among cores requires the cores to communicate, which will demand faster ways to exchange information on the chip and better ways to access data stored in on-chip memory. The Angstrom team has a few options in mind for solving these problems, from optical interconnects to new ways of placing and accessing memory caches closer to the cores themselves. This is something Agarwal was adamant about when he spoke at our Structure 2010 conference on new chip architectures for the cloud.

Essentially, Project Angstrom is trying to solve at the chip level the problems that Google and Yahoo are solving at the data center level as they build out thousands of redundant, connected servers. The equivalents of their MapReduce and Hadoop frameworks must be built at the silicon level as we deploy hundreds of cores and expect a chip to process data coherently across them while taking advantage of the performance each core offers. It's pretty awesome when you consider this taking place inside our devices.

Related content from GigaOM Pro (subscription req’d):

  • Supercomputers and the Search for the Exascale Grail
  • Pushing Processors Past Moore’s Law
  • For Phones, the Future Is Multiple Cores

