At WebSummit 2017 there was a recurring theme — self-driving cars. Almost every talk I went to — whether at Centre Stage, SaaS Monster or AutoTech — mentioned self-driving cars in one way or another.
The benefits of self-driving cars are enormous:
- Increased safety: there will be fewer vehicle collisions caused by human error
- Time saved: People can spend the time they would have spent driving doing something more productive
- Reduced congestion: linked self-driving cars will be able to make better use of available roads, and improve fuel efficiency
- Insurance cost reduction: without human error, the risk of pay-out is lower
Self-driving cars have become more prominent in recent years for two reasons:
1) The Need: towns and cities are reaching transport breaking point, with governments unwilling to spend more on infrastructure.
2) The Possibility: ubiquitous computing power, and availability of the cloud makes training AI more widely available than before, sparking innovation and competition.
So let’s cut through the WebSummit hype — where are self-driving cars actually at, and who is going to win the self-driving car race?
I want to talk about four very different companies that are associated with self-driving cars. I saw three of those companies present their latest tech at WebSummit. The last one was not there, but I did have a recent personal experience with them.
Comma.ai is run by George Hotz (the guy who famously carrier-unlocked the iPhone for the first time in 2007). In his presentation at WebSummit, Hotz proclaimed that he doesn’t understand car companies today, and that he wants to be the one to “solve” self-driving cars.
His company has built a neat set of tools that build on each other to help solve the problem:
- Chffr — a free dash-cam app for iOS and Android. Data recorded by the app is uploaded to Comma.ai and used for AI training
- Panda — a piece of hardware, a universal interface for cars that on one side plugs into the car’s OBD-II diagnostic port, and on the other presents USB & Wi-Fi. It talks to Chffr to enrich the data that is uploaded to Comma.ai
- OpenPilot — the crown jewel: open-source software that can drive a car
- EON — another piece of hardware that mounts a device (running OpenPilot) in the correct position, and provides adequate power & cooling
The theory is great, but the numbers are not. Chffr has clocked up 4.3 million km of data, and OpenPilot has around 150 active users — numbers that are completely dwarfed by the competition.
Nonetheless, Hotz confidently takes a thinly veiled shot at Waymo (Google’s self-driving spin-off company, which I’ll talk about next), accusing them of just sitting around talking about self-driving cars, and releasing videos of their engineers behind the wheel of their self-driving prototypes, rather than real-world users.
He also predicts that, one day, car manufacturers will be coming to him wanting to put OpenPilot in their cars, in much the same way that many mobile phone hardware manufacturers choose Android today.
On the striking Centre Stage at WebSummit, Waymo put out an impressive update on their efforts for driving autonomy.
John Krafcik (Waymo CEO) announced that Waymo are shooting for full level 5 autonomy, where a steering wheel is optional — they aren’t just pursuing the idea of building up to level 5 like much of the competition. Currently, OpenPilot & Tesla’s AutoPilot are level 2 systems (described as a “hands off” system, where the driver must be prepared to intervene immediately at any time).
Audi’s 2019 version of the A8 will be available with the world’s first level 3 system (described as “eyes-off” — the driver can fully engage in other tasks, and the car should be able to respond to situations that require an immediate response). Audi’s system is only for traffic jams while travelling less than 37mph, however, and it is currently traversing the government approval minefield.
Waymo’s strategy of aiming for full level 5 seems ambitious, but then they chose this moment to announce that they will be starting a completely driverless point-to-point taxi service in Phoenix, Arizona. Very impressive.
They also demonstrated a well-thought-through car-to-passenger interface that distils the world around the car into an easy-to-understand map, so that a passenger can always (at a glance) answer the question “why is the car doing what it’s doing right now?”
This all sounds great, so where is the problem? It’s the hardware.
Firstly, Waymo are using state-of-the-art LIDAR to 3D-map the world around the car in real time. A LIDAR system is expensive — although Waymo have reportedly reduced the cost significantly.
Secondly, the hardware in use is not very well integrated into the car. Sensors jut out in funny places, making the car look ugly.
I’m sure these problems will be overcome — after all, this company is backed by Google, who have enormous resources available to them, and a track record of innovation — but that won’t come quickly.
Intel took the same impressive Centre Stage at this year’s WebSummit shortly after Waymo, on the same day.
Unfortunately, what they had to show was not so impressive.
The Intel CEO, Brian Krzanich, announced that self-driving cars are a few years away. Ooops. Did Krzanich not watch the Waymo presentation an hour or so beforehand?
One of Intel’s self-driving cars appeared on stage. Krzanich spent too long telling us about all the “amazing sensors” it has on board. But alas, Intel’s self-driving car was being driven by a human. (Krzanich said “as the car drives off now” as the on-board human driver clearly took control of the vehicle and manoeuvred it cautiously off stage.)
They really didn’t look like they knew what was going on, and were totally outclassed by Waymo.
To make things ever-so-slightly worse, they then demonstrated “AI on a USB stick”. I felt it insulted the mostly techie audience by insinuating that it’s possible to make a drone “intelligent” by just plugging the stick in.
And then ensued possibly the worst demo of the WebSummit conference.
A poor Intel employee held in their hands a non-flying drone (with the magic AI USB stick plugged into it) over a TV screen (laid on a table) playing a video of a shark swimming in the sea. They then claimed that the drone had “spotted” the shark, and brought another (flying, this time) drone onto the stage and attempted to drop a bean bag on the screen with the shark…
The idea was to simulate a drone that could be used by lifeguards or search & rescue to monitor near-shore waters, and drop life-saving buoyancy aids to stranded individuals.
The problem was, they missed the screen (with shark on) entirely.
Maybe they would’ve been better off using the drone to drop their car in the sea.
Tesla were not present at WebSummit. Why? I’ll give you my opinion later.
This is the company that I have had a personal experience with — I test drove a Tesla Model S in the Spring of 2016.
It was like I was, briefly, living in the future.
The car sped up, slowed down, and steered around corners, all of its own accord. It even parked itself at a service station. OK, I needed to be on hand to supervise and be available to provide “immediate intervention” — but this was over a year ago.
Since then, Tesla has released updates & improvements (roughly) every month. This is technology that consumers can buy today — this isn’t a promised technology, or only available in open-source warranty-free format. It isn’t only available on new 2019 model cars (like Audi) — it’s been around for long enough that there is an active second-hand market for Model S with this technology.
And the story for the future is good too. Around a year ago Tesla said they had 1.3 billion miles of data, presumably available to train their self-driving systems. Going by Comma.ai’s stats from last week, Comma.ai have less than 0.25 per cent of that available to them.
But it’s not just the quantity — it’s the quality of that data too.
This isn’t just 1.3 billion miles of video of someone driving; it’s accompanied by data on human corrections, i.e. good feedback on the quality of the decisions that the current software is making. From what I can tell, no one else has the quantity and quality of training data that Tesla has.
Lastly, there is the integration — Tesla develop all the hardware and software in-house, so it works together impeccably well. Who were the last to nail hardware & software integration? Apple. And how have they done?
When it comes down to it, Machine Learning (or AI) is a pretty simple process.
When creating (or “training”) an AI system, you need two inputs: data, and expected behaviour. The process of training takes these two inputs, and “learns” to trigger the expected behaviour when the correct data patterns emerge — the output is a model.
The model is then used to make decisions. The quality and quantity of the inputs to the training are paramount to creating a good working model.
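The two-input training process above can be sketched in a few lines of plain Python. This is a deliberately toy illustration, not how any real self-driving system works: the “data” is the observed gap to the car ahead, the “expected behaviour” is whether a human braked, and the “model” that training produces is a single learned threshold. All the numbers and function names are invented for illustration.

```python
def train(gaps, braked_flags):
    """Training takes two inputs — data and expected behaviour —
    and outputs a model (here, just a braking threshold in metres:
    the midpoint between the largest gap at which the human braked
    and the smallest gap at which they did not)."""
    braked = [g for g, acted in zip(gaps, braked_flags) if acted]
    cruised = [g for g, acted in zip(gaps, braked_flags) if not acted]
    return (max(braked) + min(cruised)) / 2

def decide(model, gap):
    """The model is then used to make decisions on new observations."""
    return gap < model  # True = brake

# Data (gaps observed, metres) and expected behaviour (True = human braked).
gaps = [3, 5, 40, 80]
actions = [True, True, False, False]

threshold = train(gaps, actions)  # (5 + 40) / 2 = 22.5
print(decide(threshold, 10))      # True  -> brake
print(decide(threshold, 60))      # False -> keep cruising
```

The point of the sketch is the shape of the process, not the maths: better data and better recorded behaviour (the two inputs) directly produce a better model — which is exactly why the quantity and quality of training data matters so much in the comparison that follows.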
So, who will win self-driving cars? It must be Tesla.
Why? Because, put simply, they have the quality (real immediate feedback from real people — actual corrective behaviour from real humans when their system is active) and quantity (from the hundreds of thousands of cars they have currently on the road) of data to make them unstoppable.
Comma.ai have the quality, but only a very limited quantity. Waymo have the quality — their quantity is better than Comma.ai, although limited to their own staff — but they are playing a risky long game, where only level 5 automation will do. And Intel… well yes, or maybe just no.
And why weren’t Tesla at WebSummit this year? My opinion — they didn’t need to be — they have nothing to prove.
Chris Priest, Senior Technical Consultant, Amido
Image Credit: PHOTOCREO Michal Bednarek / Shutterstock