Oxford researchers modify Nissan Leaf for cheaper autonomous car

Is the future of the self-driving car one of full autonomy, or, as car manufacturers such as Ford have suggested, one of part-time autonomy? In the near-term, the latter option seems far saner, and it’s the approach that underpins new research being shown off by academics at the University of Oxford.

The RobotCar UK project uses a modified Nissan Leaf, an all-electric vehicle, fitted with around £5,000 ($7,750) worth of prototype navigation equipment. That system includes a controller PC in the trunk that can control every function of the car, as well as cameras at the front, lasers discreetly tucked under the front and rear bumpers, and an iPad up front for the user interface.

In time, the researchers hope to develop an autonomous navigation system that costs just £100.

“We are working on a low-cost ‘auto drive’ navigation system, that doesn’t depend on GPS, done with discreet sensors that are getting cheaper all the time. It’s easy to imagine that this kind of technology could be in a car you could buy,” Professor Paul Newman, the project’s co-leader, said in a statement.

Mapping and learning

The system doesn’t use GPS because the satellite-based system is not accurate enough for the researchers’ needs. Instead, twin cameras watch the road ahead for pedestrians and other hazards, while the lasers build a three-dimensional map of the world around the car. This is similar to the approach Google takes in its autonomous vehicle research, except far cheaper (Google’s LIDAR unit alone costs $70,000) and less conspicuous.
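
To give a flavor of how laser mapping works, here is a rough Python sketch — our illustration, not anything from the Oxford codebase. Each laser scan is a set of 3D points measured relative to the car, and a known vehicle pose transforms them into a shared world frame, so the map accumulates as the car drives. All names and data here are made up for the example.

import numpy as np

def pose_matrix(x, y, heading):
    """4x4 rigid transform for a vehicle pose on a flat plane."""
    c, s = np.cos(heading), np.sin(heading)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, 0.0]
    return T

def scan_to_world(scan_xyz, vehicle_pose):
    """Transform an (N, 3) laser scan from the car's frame to the world frame."""
    T = pose_matrix(*vehicle_pose)
    homogeneous = np.hstack([scan_xyz, np.ones((len(scan_xyz), 1))])
    return (homogeneous @ T.T)[:, :3]

# Accumulate successive scans into one growing point-cloud map
# (toy random points stand in for real laser returns).
world_map = []
for scan, pose in [(np.random.rand(100, 3), (0.0, 0.0, 0.0)),
                   (np.random.rand(100, 3), (1.5, 0.2, 0.05))]:
    world_map.append(scan_to_world(scan, pose))
world_map = np.vstack(world_map)  # the car's 3D picture of its surroundings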

This is where the car’s part-time autonomy comes in — at least in city environments. As Newman put it:

“Our approach is made possible because of advances in 3D laser mapping that enable an affordable car-based robotic system to rapidly build up a detailed picture of its surroundings. Because our cities don’t change very quickly, robotic vehicles will know and look out for familiar structures as they pass by so that they can ask a human driver, ‘I know this route, do you want me to drive?’, and the driver can choose to let the technology take over.”

It’s really a matter of machine learning, the science of probability and good guesswork. The data the researchers are using comes from the cameras and lasers, but also from road plans, aerial photographs and internet queries. The car needs to learn its environment before it can, metaphorically speaking, take the wheel. (The driver can always take back control by tapping the brakes.)
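
Newman’s “I know this route, do you want me to drive?” handover boils down to a simple decision rule. The sketch below is our own hedged reading of it, with an assumed confidence threshold rather than any value published by the project: the car offers autonomy only when it recognizes its surroundings, and a brake tap always returns control to the human.

AUTONOMY_CONFIDENCE_THRESHOLD = 0.95  # assumed; how sure the matcher must be

def update_control(match_confidence, driver_accepted, brake_tapped, autonomous):
    """Decide who drives for the next control cycle.

    match_confidence -- probability the current scene matches the learned map
    driver_accepted  -- driver said yes to "I know this route, drive?"
    brake_tapped     -- driver touched the brake pedal
    autonomous       -- whether the car is currently driving itself
    """
    if brake_tapped:
        return False  # the driver can always take back control
    if autonomous and match_confidence < AUTONOMY_CONFIDENCE_THRESHOLD:
        return False  # unfamiliar scenery: hand back to the human
    if not autonomous and match_confidence >= AUTONOMY_CONFIDENCE_THRESHOLD:
        return driver_accepted  # offer autonomy; the driver must opt in
    return autonomous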

Check out this video of the car driving through a gradually updating “semantic prior map,” a map of all the fixed stuff such as road markings and curb locations, with dynamic objects being mapped along the way:
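
Conceptually, separating the fixed map from the moving world can be as simple as checking each laser point against the prior. The toy example below is an assumption on our part, not the project’s implementation, and a production system would use a k-d tree or voxel grid rather than this brute-force nearest-neighbor check.

import numpy as np

def split_static_dynamic(frame_xyz, prior_map_xyz, tolerance=0.3):
    """Return (static_points, dynamic_points) for one laser frame.

    frame_xyz     -- (N, 3) points from the current scan, in the world frame
    prior_map_xyz -- (M, 3) points of the fixed prior map
    tolerance     -- metres; how close a point must be to count as "known"
    """
    # Distance from every frame point to its nearest prior-map point.
    diffs = frame_xyz[:, None, :] - prior_map_xyz[None, :, :]
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    static_mask = nearest <= tolerance
    return frame_xyz[static_mask], frame_xyz[~static_mask]

prior = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # toy fixed map
frame = np.array([[0.1, 0.0, 0.0], [5.0, 5.0, 0.0]])   # one incoming scan
fixed, moving = split_static_dynamic(frame, prior)
print(len(fixed), "static points,", len(moving), "dynamic points")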

As for next steps, the team will try to get the system to understand traffic flows and learn how to evaluate the best routes.
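
The researchers haven’t said how route evaluation will work, but one plausible shape, assumed here purely for illustration, is the road network as a weighted graph with edge costs set by observed traffic, searched with Dijkstra’s algorithm:

import heapq

def fastest_route(graph, start, goal):
    """graph: {node: [(neighbour, travel_time_seconds), ...]}"""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (time + cost, neighbour, path + [neighbour]))
    return float("inf"), []

# Toy road network: edge weights are travel times in seconds.
roads = {"A": [("B", 60), ("C", 30)], "B": [("D", 30)],
         "C": [("D", 90)], "D": []}
print(fastest_route(roads, "A", "D"))  # -> (90.0, ['A', 'B', 'D'])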

“Whilst our technology won’t be in a car showroom near you any time soon, and there’s lots more work to do, it shows the potential for this kind of affordable robotic system that could make our car journeys safer, more efficient, and more pleasant for drivers,” Newman said.
