Driverless Cars 101: Putting it all together (Part 3 of 3)

Ok, so in Part 1 we learned about LIDAR, and in Part 2 we learned about all the other usual sensors that play a role in making a car self-driving.  In this Part 3, we're going to talk about putting it all together: the kind of computing power needed, and finally answer the question: why doesn't Tesla use LIDAR?

Computers

In the early models of autonomous vehicles, the computers were… well, big:

because a self-driving vehicle takes a lot of computation power!  Fortunately, chip makers such as NVIDIA and Mobileye have been working diligently to make these systems smaller, and they've succeeded – here is what it takes to handle the computation in a self-driving car today:

Now, if you look at those stats, that's still a very impressive system – for comparison, the Drive PX 2 (yours for only $10k!) is about as powerful as 150 MacBook Pros.

Here are some other key points about computers in self-driving cars:

  • The technology needed to handle the necessary number of calculations is already relatively available.
  • NVIDIA’s current premier system offers 2.3 teraflops (roughly 20% more than a PlayStation 4) and can handle 12 sensor feeds. NVIDIA recently began showcasing its Drive PX 2, with 8 teraflops – equivalent to roughly 150 MacBook Pros (see the quick back-of-the-envelope math after this list).
  • Mobileye, in conjunction with STMicroelectronics, anticipates making a 12-teraflop platform available by 2020 that will be able to handle 20 sensor feeds.
  • These more advanced processors will need water cooling.
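
For fun, here’s the quick back-of-the-envelope math behind those comparisons. The PlayStation 4 and MacBook Pro numbers below are my own ballpark assumptions, not official spec-sheet figures:

    # Back-of-the-envelope check on the comparisons above. The PS4 and
    # MacBook Pro figures are my ballpark assumptions, not official specs.
    PS4_TFLOPS = 1.84            # commonly cited figure for the PS4's GPU
    DRIVE_CURRENT_TFLOPS = 2.3   # NVIDIA's current premier system
    DRIVE_PX2_TFLOPS = 8.0       # the Drive PX 2

    # ~125% of a PS4, i.e. roughly 20-25% more
    print(f"vs PS4: {DRIVE_CURRENT_TFLOPS / PS4_TFLOPS:.0%}")

    # if a PX 2 really equals 150 MacBook Pros, each one works out to ~53 gigaflops
    print(f"one 'MacBook Pro' ~= {DRIVE_PX2_TFLOPS / 150 * 1000:.0f} gigaflops")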

Software

So, using inputs from all the sensors we’ve already gone through, the very smart computers process all the data and determine whether the car should go left, right, forward, or backward (and whether to speed up or slow down).  This process looks something like this:

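To give a flavor of it in code, here’s a toy sense → decide → act loop. Everything in it – the function names, the two-rule “brain” – is made up for illustration; a real AV software stack is vastly more complex:

    # A toy sense -> decide -> act loop. The function names and the
    # two-rule "brain" are made up for illustration; a real AV software
    # stack is vastly more complex.
    import random

    def read_sensors():
        # Pretend fused sensor output: distance to the nearest obstacle, in meters
        return {"obstacle_distance_m": random.uniform(0.0, 100.0)}

    def decide(sensors):
        # A comically simple planner: brake hard when something is close,
        # otherwise cruise along
        if sensors["obstacle_distance_m"] < 20.0:
            return {"throttle": 0.0, "brake": 0.8, "steer": 0.0}
        return {"throttle": 0.3, "brake": 0.0, "steer": 0.0}

    def act(command):
        # In a real car this would drive the actuators; here we just print
        print("throttle={throttle} brake={brake} steer={steer}".format(**command))

    for _ in range(5):  # real systems repeat this loop many times per second
        act(decide(read_sensors()))
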
I will admit, the real processing is pretty complex stuff! So, let’s save the details for another day and move on to…

LIDAR vs. Camera/Radar (or Why Tesla Doesn’t Use LIDAR)

Currently, there are two major schools of thought on how to put together self-driving cars. The first school (let’s call it the “Google School”) believes in combining strong sensors with not-so-strong artificial intelligence (A.I.), while the second school (let’s call it the “Tesla School”) believes in combining not-so-strong sensors with very strong A.I. (and that’s actually what it’s called: “Strong A.I.”).  Here’s how each works:

Google School

The Google School relies on high-accuracy maps and LIDAR readings. By matching the LIDAR data its AVs gather as they drive against pre-loaded high-accuracy 3D maps, the LIDAR-dependent group can precisely locate each vehicle and pick exact (down-to-the-centimeter) paths for it to follow.
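
Here’s a toy version of that matching idea in code – a one-dimensional, brute-force sketch of my own, not Google’s actual algorithm (real systems match in 3D, at centimeter accuracy, with far cleverer math):

    # Toy "locate yourself by matching LIDAR against a pre-built map":
    # slide the live scan along the map and keep the best-fitting offset.
    # Purely illustrative - real systems work in 3D with clever algorithms.
    PRE_BUILT_MAP = [9, 9, 3, 3, 9, 5, 5, 9, 9, 2]  # "height profile" along a road

    def localize(scan, world_map):
        best_offset, best_error = None, float("inf")
        for offset in range(len(world_map) - len(scan) + 1):
            error = sum((scan[i] - world_map[offset + i]) ** 2 for i in range(len(scan)))
            if error < best_error:
                best_offset, best_error = offset, error
        return best_offset

    live_scan = [3, 9, 5]  # what the car's LIDAR sees right now
    print("car is at map position:", localize(live_scan, PRE_BUILT_MAP))  # -> 3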

Additionally, LIDAR-equipped AVs can identify dynamic objects around the vehicle by simply subtracting everything that has already been predetermined to be static on the map, like so:

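In code, the subtraction idea looks something like this toy sketch (the points and the tolerance are made up for illustration):

    # Toy "subtract the static map to find moving objects": any LIDAR point
    # that isn't close to a known static point must be something new -
    # a pedestrian, a cyclist, another car. Illustrative only.
    STATIC_MAP = {(0, 0), (0, 1), (5, 5), (9, 2)}  # pre-mapped buildings, poles...

    def find_dynamic_points(lidar_points, static_map, tolerance=0.5):
        # Keep only points that don't match anything in the static map
        def is_static(p):
            return any(abs(p[0] - s[0]) <= tolerance and abs(p[1] - s[1]) <= tolerance
                       for s in static_map)
        return [p for p in lidar_points if not is_static(p)]

    scan = [(0, 0.2), (5, 5), (3, 3)]             # current LIDAR reading
    print(find_dynamic_points(scan, STATIC_MAP))  # -> [(3, 3)]: probably moving!
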
However, the problem with this approach is that the LIDAR-dependent group is limited by the availability of the constantly updated high-resolution maps its system needs to work. In other words, if a LIDAR-based car hasn’t been preceded by a pilot car (driven by a human) that mapped a roadway, that car can’t operate there!  (Kills some of the magic, doesn’t it?)  To meet this mapping need, companies such as Google send out pilot cars with very high-resolution LIDAR mapping systems to pre-map the areas in which their AVs will be operating. In the future, this mapping issue may be solved by crowdsourcing data collected from AVs.

Tesla School

The second group, championed by Tesla, relies heavily on sensing through video cameras and radar – but it tries to mimic the human brain, and it uses advanced artificial intelligence (known as “Strong A.I.”) to do so. Ultimately, the goal of this camera-dependent group is to create a system that is comparable in capability to the human brain at processing visual cues and responding as a human driver would.
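
To make that concrete, here’s a toy “camera in, steering out” network – the camera school’s core idea boiled way down. To be clear, this is my own illustrative sketch (loosely in the spirit of published end-to-end driving demos), not Tesla’s actual system, and it assumes the PyTorch library is installed:

    # A toy "camera in, steering out" network - illustrative only,
    # not Tesla's actual system. Assumes PyTorch is installed.
    import torch
    import torch.nn as nn

    class TinyDrivingNet(nn.Module):
        """Maps a front-camera image straight to a steering command."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(  # shrink the image, learn visual cues
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            )
            self.head = nn.Sequential(      # boil the cues down to one number
                nn.Flatten(),
                nn.LazyLinear(64), nn.ReLU(),
                nn.Linear(64, 1), nn.Tanh(),  # steering in [-1, 1]: left..right
            )

        def forward(self, image):
            return self.head(self.features(image))

    net = TinyDrivingNet()
    frame = torch.rand(1, 3, 120, 160)  # one fake 160x120 RGB camera frame
    print(net(frame))                   # e.g. tensor([[0.03]]) -> steer slightly right

The hard part, of course, isn’t writing a network like this – it’s training one that drives as reliably as a human in every situation, which is exactly the part that doesn’t exist yet.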

This solution is great because, let’s be honest, LIDAR is expensive and prone to breaking.  However, there is one big problem: Strong A.I. hasn’t been developed yet! Tesla and their team are working on it as fast as they can, but right now they are getting a lot of criticism from proponents of LIDAR-based systems, who believe that until this A.I. is developed, Tesla cannot support safe autonomous functions.

The Tesla school does come with an interesting corollary: we know that Strong A.I. will be developed… eventually.  And the sensors and computer hardware needed to run this Strong A.I. already exist.  So, Tesla has outfitted its more recent cars with all the sensors and computer hardware needed to support Strong A.I. – meaning that at some point in the future, Tesla can send out a software update, load all of its cars with Strong A.I., and suddenly all of those Teslas will be capable of self-driving!

That’s pretty cool, I think!

So that’s it! That’s why Tesla doesn’t use LIDAR, relying instead on Strong A.I. with sensor feeds from radar and cameras. If you completely understood that sentence – congratulations, you have a pretty good basic understanding of what goes into self-driving cars.  If you missed any of the other parts of this 3-part series, Part 1 is here and Part 2 is here.

Thanks for reading – until next time,

James