Driverless Car 101: How do AVs Work? (Part 1 of 3)

How do driverless cars work? What's that spinny thing on top of the cars? Why doesn't Tesla have a spinny thing?

All great questions – and we hope to provide answers to all of them (and more!) in this 3-part post on How Autonomous Vehicles (AVs) Work.

Before we jump in, I should say that I am going to be reproducing a lot of stuff out of this report: Travel Modeling in an Era of Connected and Automated Transportation Systems: An Investigation in the Dallas-Fort Worth Area, a report I was the lead author on for the North Central Texas Council of Governments as a member of the D-STOP team – so if you want a more in-depth version of what I am about to say, check that out.

Ok, here we go!

First, it's important to realize that automated vehicles do not work like the human brain. Humans draw in information through their five senses and perceive one three-dimensional world in which they move, generally trying to avoid obstacles. In contrast, AVs take a more piecemeal approach: each sensor is assigned an individual task, such as detecting the color of a traffic light or determining the probable path of a surrounding object. Like this:


These sensors relay their individual findings to a central processing unit (CPU), which then does a lot of powerful computation to tell the car, essentially, just four basic commands: turn left, turn right, slow down, and/or speed up. Currently there is no ultimate, holy-grail formula for which sensors are needed for a "close enough to" or "better than" approximation of human driving (the standard against which autonomous vehicles will be judged), but there are some common combinations (two, really) that we can look at. So let's go through the most famous sensor of all before hitting the rest in Part 2:
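Before we get to the sensors themselves, that "four basic commands" idea is simple enough to sketch in code. Everything below is illustrative – the function names, inputs, and thresholds are made up for this post, not taken from any real AV stack:

```python
# Hypothetical sketch of a CPU reducing fused sensor findings to the
# four basic commands. All names and thresholds here are invented.

def decide(light_is_red: bool, obstacle_ahead_m: float, lane_offset_m: float) -> list[str]:
    """Turn sensor findings into some subset of the four commands."""
    commands = []
    # Steering: correct drift from the lane center
    # (positive offset = drifted right, so steer back left)
    if lane_offset_m > 0.2:
        commands.append("turn left")
    elif lane_offset_m < -0.2:
        commands.append("turn right")
    # Speed: stop for red lights or close obstacles, else keep moving
    if light_is_red or obstacle_ahead_m < 10.0:
        commands.append("slow down")
    else:
        commands.append("speed up")
    return commands

print(decide(light_is_red=False, obstacle_ahead_m=50.0, lane_offset_m=0.3))
# ['turn left', 'speed up']
```

A real system fuses far more inputs and outputs continuous steering and throttle values rather than discrete commands, but the shape – many sensor readings in, a few driving actions out – is the same.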

Sensor #1: LIDAR

Lidar stands for…nothing! It's actually the word "Light" and the word "Radar" smushed together.  This technology works by shooting a laser beam out at the world around it (don't worry, it's not on a wavelength that will hurt your eyes or fry your brain or anything).  As the laser beam comes in contact with stuff, it does what light does and reflects, coming back to the sensor itself, which, believe it or not, is timing how long it took for that laser beam to come back! Pretty impressive considering that these things move at the speed of light.  By taking that round-trip time, multiplying it by the speed of light, and halving it (the light went out and back), the LIDAR unit can compute a distance, down to a centimeter level of accuracy, to whatever the laser bounced off of.  Do this all around the unit and you can build a 3D point cloud (a 3D map) of the world around you.  And – to get a 360-degree view of the world, you either need a lot of laser beams or…. you need to rotate the beam – ta-da! You have the spinny thing on top of the car. Look, here is a visualization, shamelessly stolen from Mike1024 and Wikipedia.
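The time-of-flight math above is simple enough to show directly. A minimal sketch (the function name and the example timing are mine, not from any sensor's spec sheet):

```python
# Time-of-flight distance: the sensor times the laser's round trip,
# so the one-way distance is half of (speed of light x elapsed time).

C = 299_792_458  # speed of light in a vacuum, meters per second

def lidar_distance_m(round_trip_time_s: float) -> float:
    return C * round_trip_time_s / 2

# A return after ~667 nanoseconds means the target is just under 100 m away
print(lidar_distance_m(667e-9))
```

Notice how tiny those times are – returns come back in nanoseconds, which is why the electronics doing the timing are the impressive part.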





Now, think about if you were to shoot 8 laser beams at the mirror rotating above, or 16 or 32 or 64 – suddenly you'd have an array of lasers that could give you a 3D view of the world around you, like this:
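In code, turning one of those spinning beams' returns into 3D points looks roughly like this. This is a sketch with made-up beam angles, not a real sensor's geometry:

```python
import math

# Illustrative sketch: converting one LIDAR return (a measured range plus
# the beam's horizontal and vertical angles) into an (x, y, z) point.

def to_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Spherical-to-Cartesian conversion for a single laser return."""
    az = math.radians(azimuth_deg)   # horizontal angle (where the mirror points)
    el = math.radians(elevation_deg) # vertical angle (which beam fired)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# One full spin: 16 beams at fixed vertical angles, sampled at every
# degree of rotation, gives 16 * 360 = 5,760 points per revolution.
cloud = [to_point(20.0, az, el)
         for az in range(360)
         for el in range(-15, 16, 2)]  # 16 hypothetical beam angles
print(len(cloud))  # 5760
```

Real units fire far more often than once per degree, which is how a 64-beam sensor racks up millions of points per second.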

Looks pretty cool – and probably super helpful for not running into stuff – but really LIDAR's main purpose is to locate a vehicle on a precompiled 3D map, which we will discuss in Part 3. But even if it's not its main job, LIDAR does help with static (still) and dynamic (moving) object identification (again, really by using the precompiled 3D maps) and helps pick out other things on the road.  Also, what's really nice about LIDAR is that it's a depth-perception type of technology, which has some good advantages over something like a camera, which works in 2 dimensions (all to be discussed in Part 3).

So that's a pretty quick look at LIDAR tech. What else do you need to know?  How about this:

  • High-precision pre-mapping is required to geolocate the vehicle and delineate dynamic and static objects in the environment.
  • LIDAR is currently effective to a range of 200 meters.
  • Because each laser remains in one plane at a fixed angle in relation to the ground, vertical accuracy is heavily dependent on the number of lasers emitted from the sensor—common LIDAR is available from four lasers to sixty-four lasers.
  • Costs for LIDAR range from $8,000 for a four-laser unit to $75,000 for a 64-laser unit, but Ford, Google and others are working on bringing this price down to the hundreds-of-dollars range.
  • Because the system produces its own light, LIDAR works well at night.
  • Reflection from falling snow and rain can distort the LIDAR point clouds and have a negative effect on the sensor’s capabilities.

Well, we got to most of the questions that kicked off this post, but not the last part about Tesla. If you've survived this far, onward to Part 2, where we talk about all the other sensors (the non-spinny ones), or skip to Part 3, where I go over putting them all together to create one autonomous vehicle (and explain why Tesla doesn't use LIDAR).