As we reported yesterday, Google's self-driving cars, of which there are usually a dozen on the roads of California and Nevada at any given time, have now logged 700,000 miles of accident-free autonomous driving. To celebrate, Google has released a new video that demonstrates some impressive software improvements made over the last two years: most notably, its self-driving cars can now track hundreds of objects simultaneously, including pedestrians, cyclists signaling turns, stop signs held by crossing guards, and traffic cones. You really should watch the video above – it's one of the coolest bits of tech that I've seen in a long time.
While Google's driverless car makes it look easy, there is a huge amount of work going on behind the scenes. Not only is there around $150,000 (£90,000) of equipment in each car performing real-time LIDAR and 360-degree computer vision (a complex and compute-intensive task), but the software itself is the result of years of development. Basically, every single driving situation that can possibly occur must be painstakingly programmed into the software. It isn't as if Google has built an artificial intelligence that can learn how to drive a car from first principles – if Google doesn't tell the car what to do, it doesn't do anything.
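To make the rule-based point concrete, here's a toy sketch in Python. The situation names and responses are invented for illustration – this is in no way Google's actual software – but it captures the key limitation: a hand-programmed system only handles situations its engineers anticipated.

```python
# Toy illustration of rule-based (non-learning) driving logic: every
# situation the car can handle must be written in by hand. All names
# here are hypothetical, not taken from any real autonomous-driving code.

HANDLED_SITUATIONS = {
    "stop_sign": "brake_to_stop",
    "railroad_crossing": "slow_and_check",
    "cyclist_signaling_left": "yield_and_hold_back",
    "construction_cones": "shift_within_lane",
}

def respond(situation: str) -> str:
    # If the engineers never anticipated this situation, the car has no
    # rule for it -- it cannot improvise a response from first principles.
    return HANDLED_SITUATIONS.get(situation, "no_rule_fall_back_to_safe_stop")

print(respond("stop_sign"))         # brake_to_stop
print(respond("kangaroo_on_road"))  # no_rule_fall_back_to_safe_stop
```

The fallback line is the crux: anything outside the programmed table can only trigger a generic safe behaviour, which is why Google has to keep adding "thousands of situations" by hand.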
As you can imagine, there are quite literally thousands of situations that can occur while driving around a town. As always, Google's blog post is short on exact numbers, but it does say that "thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously" – and, back in 2012, the software had already successfully piloted the cars for 300,000 accident-free miles. In the video, you can see that Google's cars can now react to railroad crossings, large stationary objects, roadwork signs and cones, and cyclists. The cyclist detection is particularly impressive – not only can the software see when a cyclist is signaling a left or right turn, but it even watches out for cyclists coming up from behind when the car is making a right turn.
While a lot has been said about the expensive LIDAR hardware used by Google's driverless cars, most of the innovations here are likely based on computer vision. While LIDAR gives you a very good idea of the lay of the land and the position of large objects like parked cars, it doesn't help with spotting speed limits or "construction ahead" signs.
LIDAR might tell you that an object is passing your blind spot or that there's an obstruction in front of you, but computer vision tells you that it's a cyclist or a railroad crossing barrier. As we have seen previously with its Street View and Google Glass efforts, computer vision is one of the company's strong suits – and it's now bringing this expertise to bear in its driverless cars.
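The division of labour described above – LIDAR supplies geometry (where something is and how big it is), computer vision supplies a label (what it is) – can be sketched as a simple fusion step. The dataclasses and the stand-in classifier below are invented for illustration; they are assumptions, not Google's pipeline.

```python
# Hypothetical sketch of LIDAR + vision fusion: LIDAR gives position and
# extent, vision classifies. All types and the classifier are illustrative.

from dataclasses import dataclass

@dataclass
class LidarDetection:
    distance_m: float  # range to the object, from the laser return
    width_m: float     # rough physical extent of the point cluster

@dataclass
class FusedObject:
    label: str
    distance_m: float

def classify_pixels(image_patch: str) -> str:
    # Stand-in for a real vision classifier, keyed on a fake patch tag.
    return {"two_wheels": "cyclist",
            "striped_arm": "crossing_barrier"}.get(image_patch, "unknown")

def fuse(lidar: LidarDetection, image_patch: str) -> FusedObject:
    # LIDAR alone says "obstruction at 12 m"; vision upgrades that to
    # "cyclist at 12 m", which changes how the car should behave.
    return FusedObject(classify_pixels(image_patch), lidar.distance_m)

obj = fuse(LidarDetection(distance_m=12.0, width_m=0.6), "two_wheels")
print(obj.label, obj.distance_m)  # cyclist 12.0
```

The point of the sketch is that neither sensor is sufficient alone: the geometry without the label can't distinguish a cyclist from a mailbox, and the label without the geometry can't tell the planner how urgently to react.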
Moving forward, Google says it still has lots of problems to overcome before its cars are ready to move on from their home town of Mountain View, California. This strongly suggests that Google's self-driving car software is very specifically tuned to a certain road layout, and that it's nowhere near ready for everyday use. It also suggests that, in the future, Google's business model might be producing and selling the maps and specialised software for each area that a self-driving car might visit – you might get California for free, but if you want to take your self-driving car to Las Vegas, you'll need to buy the Nevada map pack.
In the meantime, the adoption of technologies like adaptive cruise control (ACC) and lane keep assist (LKA) will bring lots of almost-self-driving cars to the road over the next few years. These technologies have the added benefit of not requiring a costly LIDAR system (the Velodyne LIDAR used by Google's self-driving cars costs upwards of $70,000, or £40,000).