Driverless Car 101: How do AVs Work? (Part 2 of 3)

Ok, in Part 1 we talked about the spinny thing on top of many autonomous vehicles (we can call it LIDAR now, right?), but we didn’t get around to covering the other major sensor capabilities found on most autonomous vehicles. We didn’t even answer the question: Why don’t Teslas have a spinn…I mean LIDAR? Well, short answer: it’s because, although most AVs also use the sensors we are about to go through below, their main sensor is LIDAR – and Tesla doesn’t like this. Breaking from that trend, Tesla nixed LIDAR (because it’s expensive and spinny things often break) and chose to lean more heavily on the other sensors instead. Oh, and also a very smart artificial intelligence, but more details on that in Part 3! For now, let’s put back up our image of where all the sensors are on our car so we can see where things are:

and away we go!

Sensor #2: Camera

Cameras are a quick topic because we’ve all played with one (most of us are probably carrying one around right now!). Camera systems produce live video, which a computer then analyzes to detect obstacles and pull out roadway information, such as where the vehicle sits relative to the lane striping and what the traffic signals are showing. Currently, most automakers appear to rely on cameras for obstacle detection, lane departure warnings, and signal reading, like this:

But here’s where cameras start to get interesting: I know it’s not traditionally how we think of data, but visual data is a TON of data. If you think about it, we humans drive relying almost entirely on what we see (visual data). So the question quickly arises – why can’t a computer? Well, short answer: right now, computers are not smart enough! Long answer: video images are very good in two dimensions, but because they come on a flat medium, figuring out depth is tricky. We humans are very good at seeing depth thanks to millions of years of evolution (and having two eyes helps a lot!), but computers are not as well equipped. Even in the picture above you can see the outlines around the car are not perfect. That’s why you combine cameras with other sensors, like the ones we discuss below, to help overcome some of these depth issues.
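Incidentally, one classic workaround for the depth problem is to do exactly what our two eyes do: mount two cameras a known distance apart and measure how far an object shifts between the two images (the “disparity”). Just as a flavor, here’s the textbook relationship in a few lines of Python – the focal length and camera spacing are made-up numbers for illustration, not from any real vehicle:

```python
# Textbook stereo-vision relationship: depth = focal_length * baseline / disparity.
# Two cameras mounted a known distance apart (the "baseline") see the same object shifted
# sideways by some number of pixels (the "disparity"); the smaller the shift, the farther away it is.
def depth_from_disparity(disparity_px, focal_length_px=700.0, baseline_m=0.3):
    if disparity_px <= 0:
        return float("inf")  # no measurable shift means the object is effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers only: a 10-pixel shift with these assumed camera parameters
# puts the object roughly 21 meters ahead.
print(depth_from_disparity(10))  # 21.0
```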

Before we leave cameras, here are a few key points:

  • A camera’s ability to look far down the road provides ample visual information. If fully processed, this information could negate the need for precompiled high-accuracy maps.
  • Cameras have issues at night and in limited visibility, but this can be partly overcome with high-sensitivity cameras.
  • Because they are heavily software driven, once an advanced artificial intelligence (AI) system is developed, a camera-based system should be a very cost-effective sensing method (see the little sketch after this list for a taste of that software).
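To give a taste of what “heavily software driven” means in practice, here’s a very rough sketch of camera-based lane-line detection using the open-source OpenCV library. The thresholds and the “only look at the bottom half of the image” shortcut are illustrative choices of mine, not anything an automaker actually ships:

```python
import cv2
import numpy as np

# A minimal lane-line sketch: find strong edges in one video frame, then fit straight
# line segments to them. Real automotive pipelines are far more sophisticated than this.
def find_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # sharp brightness changes ~ painted lane edges

    # Only consider the lower half of the image, where the roadway usually is.
    mask = np.zeros_like(edges)
    height = edges.shape[0]
    mask[height // 2:, :] = 255
    road_edges = cv2.bitwise_and(edges, mask)

    # Fit straight line segments to the surviving edge pixels.
    lines = cv2.HoughLinesP(road_edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    return lines  # each entry is (x1, y1, x2, y2) in pixel coordinates, or None if nothing is found
```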

Sensor #3: Radar

What’s the best way to solve depth problems? Use a tried-and-true military technology designed for sensing how far away things are: Radar!

Radar has been in military use since before World War II. It emits radio waves, which bounce off of things in the world and come back to the sensor. With a quick calculation based on the travel time and the speed of the waves, the radar system can tell how far away something is (sounds a lot like LIDAR, doesn’t it? Radar came first!). Since it’s been around so long, it’s a known commodity, not to mention cheap and reliable. By adding radar to a vehicle, you gain a great short- and long-range sensor that can tell you how far away things are from the vehicle (and perhaps cover some of the shortcomings of the camera).
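That “quick calculation” really is quick – at its heart it’s just distance = (speed of the wave × round-trip time) / 2. A toy version, with a made-up echo time for illustration:

```python
SPEED_OF_LIGHT_M_S = 299_792_458  # radio waves travel at the speed of light

def radar_range_m(round_trip_time_s):
    # The pulse travels out to the target and back, so the one-way distance is half the trip.
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# An echo that comes back 400 nanoseconds after the pulse went out
# means the target is about 60 meters away.
print(round(radar_range_m(400e-9), 1))  # 60.0
```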

Here are a few other things about Radar:

  • Radar wavelengths can penetrate dust and other visual obscurants, allowing the car to “see” in poor visibility.
  • Radar performance can degrade in heavy snow and rain.
  • Radar works very well along a two-dimensional plane, so you would need to stack a few on top of each other to get a 3D reading of your surroundings.

Ok! We are moving faster now – since these sensors are better known, let’s go to the next one:

Sensor #4: GPS

The Global Positioning System, or GPS as we call it, relies on signals from space. Specifically, the US Department of Defense operates thirty-one satellites that broadcast microwave signals to Earth containing each satellite’s coordinates, heading, velocity, and a timestamp of the signal. The orbits of these satellites are coordinated so that, at any given time, at least four are visible in the sky from any point on Earth. With the collective data from these microwave signals, a GPS unit can work out (the fancy word is trilaterate) its position on Earth. See, like this handy graphic from GPS.gov shows:
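If you want a feel for the math, here’s a toy, flat-earth version of what a receiver does: given the known positions of a few transmitters and the measured distance to each one, it solves for where it must be. Real GPS works in three dimensions with at least four satellites, and also has to solve for the receiver’s clock error – the positions and distances below are invented purely for illustration:

```python
import numpy as np

# Toy 2D "trilateration": known transmitter positions + measured distances -> our position.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known transmitter positions
true_position = np.array([3.0, 4.0])                         # what we pretend not to know
ranges = np.linalg.norm(anchors - true_position, axis=1)     # the measured distances

# Subtracting the first distance equation from the others turns the circle equations
# into a small linear system A @ x = b, which NumPy can solve in one call.
A = 2 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
print(np.linalg.solve(A, b))  # -> [3. 4.], the receiver's position
```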

The Problem?  GPS is not very accurate! Expensive systems can get you accuracy under 3 ft (a meter), but even then, if your car is only about 6 ft wide (2m) and there is another car just a few feet away from you in the next lane, this error is unacceptable.  Google once did a very cool presentation at SxSW in Austin, and showed the image off to the side there.  Obviously GPS is not good enough! This is why LIDAR is used to position cars based on 3D Maps, but more on that in Part 3.

As is tradition, a few keys about GPS:

  • GPS is accurate to roughly eleven feet, and is even less accurate when moving (closer to fifteen feet).
  • GPS units are very cost effective ($100–$2000).
  • Accuracy is heavily dependent on the ability to see sky. Operation in tunnels and dense urban corridors with blocking structures is suspect.
  • GPS works well at night, in rain, and in snow, as these conditions have no effect on the microwave signals used by the system.

Sensors #5+: Odometry and Ultrasonic Sensors

I’m lumping these together because, let’s face it, they aren’t as cool as the other sensors, but both odometry and ultrasonic sensors are great sources of information. Odometry sensors (they count the rotations of the car’s wheels), along with speed sensors and other sensors in the car, can give a good idea of the car’s position once calibrated to a known point (i.e., if I know that I start at a certain point and my wheel spins 10 times, then I am a certain distance away from that point). Ultrasonic sensors rely on high-pitched sounds (like bats!) to achieve a radar-like effect. They are very cheap and effective, but only at close distances. When your car beeps as you back up, that’s likely an ultrasonic sensor doing its work.
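To make the wheel-counting idea (and the backup beeper) concrete, here’s a back-of-the-envelope sketch – the wheel size, tick count, and echo time are made-up example numbers, not specs from any real car:

```python
import math

# Wheel odometry: if we know where we started, how big the wheel is, and how many times
# it has turned, we know roughly how far we have travelled from that known point.
WHEEL_DIAMETER_M = 0.65        # an assumed, roughly passenger-car-sized wheel
TICKS_PER_REVOLUTION = 100     # assumed pulses the wheel sensor emits per full turn

def distance_travelled_m(tick_count):
    revolutions = tick_count / TICKS_PER_REVOLUTION
    return revolutions * math.pi * WHEEL_DIAMETER_M  # one circumference per revolution

# Ultrasonic ranging: the same round-trip idea as radar, just with sound instead of radio waves.
SPEED_OF_SOUND_M_S = 343.0     # in air at room temperature

def ultrasonic_range_m(echo_time_s):
    return SPEED_OF_SOUND_M_S * echo_time_s / 2

print(round(distance_travelled_m(1000), 1))  # 1,000 ticks = 10 revolutions ≈ 20.4 m
print(round(ultrasonic_range_m(0.005), 2))   # a 5 ms echo ≈ 0.86 m behind the bumper
```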

Ok – that’s it! That’s the majority of the sensors used for self-driving behavior! In Part 3, we will put all this together and go over the two schools of thought on how to build an autonomous vehicle. Finally, we will learn why Tesla does not use LIDAR, and we will even learn a bit about Artificial Intelligence’s role in all this. So on to Part 3! (And if you missed Part 1, it’s here.)