How Autonomous Cars Map The Environment

Written by: Tony Kerr 
 

 
Last time in our series on autonomous cars, we talked about RADAR: how it detects objects by bouncing radio waves off them, and how, by timing the echoes and scanning the environment with directional antennas, it can tell where those objects are. LIDAR (Light Detection and Ranging) works on the same principle but uses light instead of radio waves. One advantage LIDAR has over RADAR is that it detects non-metallic objects: anything that reflects light can be seen by LIDAR. The 3D rangefinder on the Google car is a modern form of LIDAR.
 
One of the problems RADAR has to overcome is that the wavelength of the radio waves is not much smaller than the antenna itself. The mathematics is a bit involved, but the upshot is that the antenna produces unwanted smaller beams to either side of the main one. These smaller beams are known as sidelobes, and they can lead to bearing errors and false detections if they are not accounted for. The main beam also gets wider as the antenna gets smaller relative to the wavelength. Generally speaking, larger antennas produce smaller sidelobes and narrower beams, but of course we want smaller antennas, not larger ones!
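To put rough numbers on this, a common rule of thumb for a circular aperture is that the beamwidth is about 70λ/D degrees, where λ is the wavelength and D is the antenna (or mirror) diameter. The sketch below uses illustrative values I've assumed, not figures from any particular sensor: a 77 GHz automotive radar with a 10 cm antenna, and a 1 µm LIDAR with a 1 cm mirror.

```python
def beamwidth_deg(wavelength_m, aperture_m):
    """Approximate beamwidth of a circular aperture, in degrees.

    Uses the common rule of thumb theta ~= 70 * lambda / D; the exact
    constant depends on the antenna design, so treat this as an estimate.
    """
    return 70.0 * wavelength_m / aperture_m

# A 77 GHz automotive radar (wavelength ~3.9 mm) with a 10 cm antenna:
radar_bw = beamwidth_deg(3.9e-3, 0.10)   # ~2.7 degrees

# A LIDAR at ~1 micron wavelength with a 1 cm mirror aperture:
lidar_bw = beamwidth_deg(1.0e-6, 0.01)   # ~0.007 degrees

print(f"radar ~{radar_bw:.1f} deg, lidar ~{lidar_bw:.4f} deg")
```

Even with a much smaller aperture, the LIDAR beam comes out hundreds of times narrower, which is exactly the point developed below.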
 
 
Light Rather Than Radio

 

Instead of radio waves, LIDAR of course uses light, typically with a wavelength of around 1 µm. This means that the “antenna” (the mirrors and such that steer the beam) is on the order of 10,000 times larger than the wavelength of the light. The sidelobes are therefore so small and so closely spaced that they blend into the main beam. It also means that the laser beam can be very narrow: around 30 centimeters wide at a kilometer away. This is good and bad. It’s good because you can use it to pinpoint the distance to a specific object (which is how it is used on the battlefield), but it’s bad because it can only look at a very small part of the environment at a time. If the beam is pointing straight ahead, for example, it won’t pick up an object that is only a few inches to the side 100 meters away.
 
So, how do we get around this? Well, the laser beam doesn’t HAVE to be as narrow as possible. Using lenses, the beam can be made as wide as we like, within reason. The LIDAR used in Google’s prototypes has a beamwidth of 0.4 degrees, about 20 times wider than a typical infrared laser beam.
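The width of the beam at a given range follows from simple trigonometry: width ≈ 2R·tan(θ/2). A quick sketch (the 0.017-degree figure for a tight infrared beam is an assumed illustrative value, chosen to be consistent with the ~30 cm-at-one-kilometer figure above):

```python
import math

def footprint_m(range_m, beamwidth_deg):
    """Width of the beam spot at a given range, for a full-angle beamwidth."""
    return 2.0 * range_m * math.tan(math.radians(beamwidth_deg) / 2.0)

# A tight laser beam (~0.017 degrees, i.e. ~0.3 milliradians):
print(footprint_m(1000.0, 0.017))  # ~0.3 m at one kilometer

# The 0.4-degree beam used in Google's prototypes:
print(footprint_m(100.0, 0.4))     # ~0.7 m at 100 meters
```

So widening the beam to 0.4 degrees gives a spot roughly 70 cm across at 100 meters, wide enough that nearby objects don't slip between the gaps as the beam scans.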
 
A beam that is 0.4 degrees wide is still only going to see a narrow slice of the world when it is scanned, so the beam has to be scanned vertically as well as horizontally in order to get a more complete map of the environment.
 
There are two options here:
 
1. The first is to scan in a spiral pattern: start at the upper bound of the scan, for example, then lower the beam one step each time the scanner head completes a horizontal rotation.
 
2. The alternative is to scan vertically, sweeping a large number of vertical stripes as the scanner head rotates horizontally.
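The first option can be sketched as a generator that sweeps a full rotation at each elevation, then drops the beam one step. The step sizes and field-of-view bounds here are illustrative values I've assumed (a ~27-degree vertical field at 0.4-degree steps), not the actual scanner's parameters:

```python
def spiral_scan(az_step_deg=1.0, el_step_deg=0.4, el_top=13.0, el_span_deg=26.9):
    """Yield (azimuth, elevation) pointing angles for a spiral scan.

    The beam sweeps a full 360-degree rotation at each elevation,
    then drops one elevation step, until the field of view is covered.
    """
    n_el = int(el_span_deg / el_step_deg) + 1  # number of elevation rows
    n_az = int(360.0 / az_step_deg)            # points per rotation
    for row in range(n_el):
        elevation = el_top - row * el_step_deg
        for col in range(n_az):
            yield (col * az_step_deg, elevation)

points = list(spiral_scan())
rotations = len(points) // 360  # 68 full rotations for one complete 3D scan
```

The key thing to notice is the rotation count: covering the vertical field one step per rotation means dozens of rotations per complete scan, which is where the timing problem below comes from.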
 
Both of these methods suffer from the same problem: a complete 3D scan takes time. If the scanner head rotates 10 times a second, say, the first method will take several seconds to finish a full 3D scan. Because the sensor can only take measurements so quickly, the second approach ends up taking about the same amount of time, with the beam sweeping rapidly in the vertical direction while taking several seconds to rotate horizontally. This kind of scanning arrangement becomes a trade-off between resolution and scanning speed, with high-resolution scans taking several seconds to complete.
 
A 3D rangefinder that takes, say, 5 seconds to complete a scan is not going to be very useful on a car: by the time the rangefinder has spotted an object, the car could already have run into it. So there is a trick to get around this problem. The rangefinder has not one but 64 laser/sensor pairs producing beams stacked one on top of the other (they are actually staggered slightly in the horizontal direction to make the scanning head smaller). Having all 64 sensors scanning at the same time provides the best of both worlds: good vertical and horizontal resolution, along with the ability to perform several complete scans every second.
 
The Google car prototypes use a Velodyne HDL-64E LIDAR, which has a vertical coverage of 26.9 degrees at a resolution of 0.4 degrees. In the horizontal plane, the resolution is as fine as 0.08 degrees, which means that each beam records roughly 4,000 range points per revolution. The resolution/scanning trade-off is still there, however: the highest resolution is only achievable at the lowest scanning rate (i.e., 0.08 degrees at 5 revolutions/second). At higher scanning rates, the horizontal resolution drops (0.35 degrees at 20 revolutions/second, about 1,000 range points per revolution per beam) because the beam is effectively smeared out in the horizontal plane.
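This trade-off falls out of a simple model: each beam fires at a fixed rate, so spinning faster spreads the same pulses over 360 degrees. The firing rate below is an assumed value, back-calculated to reproduce the 0.08-degree figure at 5 rev/s; the real sensor's firing and recharge timing is more complicated, which is why this idealized model gives slightly different numbers (4,500 points per revolution, 0.32 degrees at 20 rev/s) than the quoted specs.

```python
FIRE_RATE_HZ = 22_500  # assumed pulses per beam per second, chosen so
                       # that 5 rev/s reproduces the 0.08-degree figure

def horizontal_resolution_deg(rev_per_s, fire_rate_hz=FIRE_RATE_HZ):
    """Degrees of rotation between successive pulses of one beam."""
    return 360.0 * rev_per_s / fire_rate_hz

def points_per_rev(rev_per_s, fire_rate_hz=FIRE_RATE_HZ):
    """Range points one beam records in a single revolution."""
    return fire_rate_hz / rev_per_s

print(horizontal_resolution_deg(5))   # 0.08 degrees at 5 rev/s
print(points_per_rev(5))              # 4500 points per beam per revolution
print(horizontal_resolution_deg(20))  # 0.32 degrees at 20 rev/s
```

Spinning four times faster costs four times the horizontal resolution: the pulse budget per revolution is fixed, only the rotation rate changes.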
 
 
LIDAR Is Effective But Pricey
 
 
To say that modern LIDARs are “just like RADAR but using light,” then, is a little dismissive of what LIDAR can do. LIDAR gives the control system in the Google car the ability to quickly build a detailed map of its environment, even to the degree that it can tell which way a person is facing (and therefore which way they are likely to move), without resorting to the complex and computationally intensive technique of processing a live video stream.
 
All of this comes at a cost, however. The Velodyne HDL-64E LIDAR sells for around $75,000. With this single component costing more than the rest of the car, processing and control system included, a much cheaper solution is needed to bring autonomous vehicles to the mass market.
 
Velodyne and other companies are responding to the call. Velodyne has announced a lower-spec version of the HDL-64E called the “Puck,” with 16 beams instead of 64, at the much more moderate price of around $8,000. And as with all technology, increasing demand will lead to mass production and economies of scale that should see the price of LIDAR units drop even further.
 
In the next post in this series, we’ll learn more about autonomous vehicles and GPS.
 
Connect with us to learn more about how autonomous vehicles will shift the role of manufacturers, dealers, and the auto industry. Leave a comment below or reach out at info@smallworldsocial.com.

 
