Light Detection and Ranging (LIDAR) is a surveying method that measures distance to a target by illuminating it with pulsed laser light and timing the reflected pulses with a sensor. It is a key technology for assuring road safety in autonomous travel.
Successful perception algorithms also tend to be probabilistic. For example, the evidence grid framework accumulates diffuse evidence from individual, uncertain sensor readings into increasingly confident and detailed maps of a robot's surroundings. This approach yields a probability that an object is present, but never complete certainty. Furthermore, these algorithms rest on prior models of sensor physics (e.g., multipath returns) and noise (e.g., Gaussian noise on LIDAR-reported ranges), which are themselves probabilistic and sensitive to small changes in environmental conditions.

LIDAR is a crucial enabling technology for self-driving cars. Its sensors provide a three-dimensional point cloud of a car's surroundings, and the technology helped teams win the DARPA Urban Challenge back in 2007. LIDAR systems have been standard on self-driving cars ever since.
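The evidence-grid idea above can be sketched with the standard log-odds update for a single occupancy cell. The sensor-model probabilities here are illustrative assumptions, not calibrated values from any real system:

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def to_prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Illustrative inverse sensor model: a LIDAR return raises our belief
# that the hit cell is occupied to 0.7; a reading suggesting the beam
# passed through drops it to 0.4. These numbers are assumptions.
P_HIT, P_MISS, P_PRIOR = 0.7, 0.4, 0.5

cell = logodds(P_PRIOR)  # start undecided (log-odds 0)
for reading in [P_HIT, P_HIT, P_MISS, P_HIT]:
    # Bayesian update: add the new evidence, subtract the prior
    cell += logodds(reading) - logodds(P_PRIOR)

print(to_prob(cell))  # confidence grows with agreeing readings, but never reaches 1.0
```

Note that the cell's probability asymptotically approaches, but never reaches, 1.0 — exactly the "never complete confidence" property described above.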
If the safety requirement is to “detect all pedestrians within 10 meters”, it is not completely specifiable, because it is unclear what the complete set of necessary and sufficient conditions for identifying a pedestrian is. On the other hand, the safety requirement “detect obstacles within 10 meters” could be precisely specified and implemented in software, because obstacle detection can be performed with an appropriate combination of sensors (e.g., LIDAR, RADAR) and signal processing. Today, most self-driving cars rely on a trio of sensor types: cameras, radar, and LIDAR. Each has its own strengths and weaknesses. Cameras capture high-resolution color images, but they can't measure distances with any precision, and they're even worse at estimating the velocity of distant objects.
Radar can measure both distance and velocity, and automotive radars have become much more affordable in recent years. Radar works well at close range. However, because radar uses radio waves, it is not good at mapping fine details at long distances.
LIDAR offers the best of both worlds.
Like radar, LIDAR scanners can measure distances with high accuracy.
Some LIDAR sensors can even measure velocity.
LIDAR also offers higher resolution than radar.
That makes LIDAR better at detecting smaller objects and at figuring out whether an object on the side of the road is a pedestrian, a motorcycle, or a stray pile of garbage.
And unlike cameras, LIDAR works about as well in any lighting condition.
For a car to drive autonomously, it must first sense its external surroundings, then process the data, and finally act by making meaningful decisions. In this sense-process-act chain, sensing of the external environment is handled by sensors such as cameras, radar, and LIDAR, referred to as surround sensors. Apart from surround sensors, vehicle odometry sensors and actuators are also important for feeding information to the decision-making block. For example, the steering wheel angle and wheel speed are important inputs for the car to make the right decision, along with surrounding information. Broadly, we can divide sensors into the following three categories:
Surround sensors: These are mounted on the external/internal surfaces of the car and provide information about the surroundings. Example: camera, radar, LIDAR, ultrasonic, infrared camera, IMU, GPS, digital map, etc.
Vehicle odometry sensors: These capture information about the vehicle's own motion. Example: wheel speed, acceleration, yaw rate, steering wheel angle, etc.
Actuators: These translate the system's decisions into physical action on the vehicle. Example: brake torque, engine torque, restraint actuators, wheel spring, etc.
Car makers have been using different sensors, mainly LIDAR, radar, camera, and ultrasonic, for safety features like ACC (Adaptive Cruise Control), LKA (Lane Keep Assist), blind spot detection, forward collision warning, and very recently for active safety features like AEB (Auto-Emergency Braking) as well. In the recent past, the industry has seen the use of additional sensors and information, such as satellite information, vehicle-to-vehicle and vehicle-to-infrastructure communication (V2V and V2X), and LIDAR, to improve the robustness of these safety features. There is significant overlap in the information provided by these sensors.
At the same time, their degree of reliability varies. For example, radar and camera can both identify the distance of an object, but the reliability of distance information from a radar sensor is higher than from a camera. Autonomous driving systems need to provide the highest degree of reliability and require a good overlap of information from different sensors to make confident decisions.

The basic idea of LIDAR is simple: a sensor sends out laser beams in various directions and waits for them to bounce back. Because light travels at a known speed, the round-trip time gives a precise estimate of the distance.
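The round-trip arithmetic behind this basic idea fits in a few lines. A minimal sketch, where the round-trip time is a made-up value chosen for illustration:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds):
    """Distance to the target: the light covers the path twice, so halve it."""
    return C * t_seconds / 2.0

# A made-up round-trip time of 667 nanoseconds corresponds to a target
# roughly 100 meters away.
print(range_from_round_trip(667e-9))
```

At these scales the timing electronics must resolve nanoseconds: each nanosecond of round-trip time corresponds to about 15 cm of range.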
While the basic idea is simple, the details get complicated fast. Every LIDAR maker has to make three basic decisions:
how to point the laser in different directions,
how to measure the round-trip time, and
what frequency of light to use.
We'll look at each of these in turn.
Beam-steering technology: Most leading LIDAR sensors use one of four methods to direct laser beams in different directions:
Spinning LIDAR rotates the laser and detector assembly to sweep the scene. This approach has the advantage of 360-degree coverage, but critics question whether spinning LIDAR can be made cheap and reliable enough for mass-market use.
Mechanical scanning LIDAR uses a mirror to redirect a single laser in different directions. Some LIDAR companies in this category use a technology called a micro-electro-mechanical system (MEMS) to drive the mirror.
Optical phased array LIDAR uses a row of emitters that can change the direction of a laser beam by adjusting the relative phase of the signal from one transmitter to the next.
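The phase-to-direction relationship for a uniform emitter array follows the standard phased-array formula, sin(θ) = Δφ·λ / (2π·d). A sketch with illustrative values (the emitter pitch and wavelength below are assumptions, not the specs of any real device):

```python
import math

# Illustrative values, not from any real sensor:
wavelength = 905e-9  # 905 nm laser
pitch = 2e-6         # 2 micron spacing between adjacent emitters

def steering_angle(delta_phase_rad):
    """Beam direction (degrees) for a given phase step between
    neighboring emitters: sin(theta) = dphi * lambda / (2 * pi * d)."""
    s = delta_phase_rad * wavelength / (2.0 * math.pi * pitch)
    return math.degrees(math.asin(s))

# Sweeping the phase step electronically sweeps the beam -- no moving parts.
for dphi in [0.0, math.pi / 4, math.pi / 2]:
    print(round(steering_angle(dphi), 2))
```

This is why optical phased arrays are attractive: changing an electrical phase profile redirects the beam with no mechanical motion at all.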
Flash LIDAR illuminates the entire field with a single flash. Current flash LIDAR technologies use a single wide-angle laser. This can make it difficult to reach long ranges, since any given point gets only a small fraction of the source laser's light. Multi-laser flash systems would have an array of thousands or millions of lasers, each pointed in a different direction.
LIDAR measures how long light takes to travel to an object and bounce back. There are three basic ways to do this:
Time-of-flight LIDAR sends out a short pulse and measures how long it takes to detect the return flash.
Frequency-modulated continuous-wave (FMCW) LIDAR sends out a continuous beam whose frequency changes steadily over time. The beam is split in two; one half is sent out into the world and then reunited with the other half after it bounces back. Because the source beam has a steadily changing frequency, the difference in travel distance between the two halves translates to slightly different beam frequencies. This produces an interference pattern with a beat frequency that is a function of the round-trip time (and therefore of the round-trip distance). This might seem like a needlessly complicated way to measure how far a laser beam travels, but it has a couple of big advantages. FMCW LIDAR is resistant to interference from other LIDAR units or from the Sun. FMCW LIDAR can also use Doppler shifts to measure the velocity of objects as well as their distance.
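For a linear chirp, the beat frequency is the chirp slope times the round-trip delay, which gives the range directly; a Doppler shift gives radial velocity. A minimal sketch, assuming illustrative chirp parameters (the bandwidth and sweep time below are made-up values, not a real sensor spec):

```python
C = 299_792_458.0  # speed of light in m/s

# Illustrative chirp parameters (assumptions):
BANDWIDTH = 1e9      # 1 GHz frequency sweep...
SWEEP_TIME = 10e-6   # ...completed over 10 microseconds

def range_from_beat(f_beat_hz):
    """The beat frequency equals the chirp slope (B / T) times the
    round-trip delay (2R / c), so R = c * f_beat * T / (2 * B)."""
    return C * f_beat_hz * SWEEP_TIME / (2.0 * BANDWIDTH)

def velocity_from_doppler(f_doppler_hz, wavelength_m=1550e-9):
    """Radial velocity from the Doppler shift: f_d = 2 * v / lambda."""
    return f_doppler_hz * wavelength_m / 2.0

# A 33.3 MHz beat corresponds to a target roughly 50 meters away.
print(range_from_beat(33.3e6))
# A 12.9 MHz Doppler shift at 1550 nm corresponds to roughly 10 m/s.
print(velocity_from_doppler(1.29e7))
```

The narrowband beat signal is also why FMCW resists interference: light that was not generated by the sensor's own chirp does not produce a coherent beat with the reference beam.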
Amplitude Modulated Continuous Wave (AMCW) LIDAR can be seen as a compromise between the other two options. Like a basic time-of-flight system, AMCW LIDARs send out a signal and then measure how long it takes for that signal to bounce back. But whereas time-of-flight systems send out a single pulse, AMCW systems send out a more complex pattern (a pseudo-random stream of digitally encoded ones and zeros, for example). Supporters say this makes AMCW LIDAR more resistant to interference than simple time-of-flight systems.
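The pseudo-random pattern is recovered by correlation: slide the transmitted code across the received signal and find the shift with the strongest match. A toy sketch (the code length and delay are made up; real systems use carefully designed sequences and must cope with noise):

```python
import random

random.seed(0)
# An illustrative pseudo-random +/-1 code; real systems use
# carefully chosen sequences, not raw random bits.
code = [random.choice([-1, 1]) for _ in range(256)]

TRUE_DELAY = 37  # samples; made up for this sketch
received = [0] * TRUE_DELAY + code  # echo arrives TRUE_DELAY samples later

def best_delay(tx, rx, max_delay=100):
    """Slide the transmitted code across the return and pick the shift
    with the highest correlation -- that shift is the round-trip delay."""
    def corr(shift):
        return sum(t * rx[shift + i] for i, t in enumerate(tx)
                   if shift + i < len(rx))
    return max(range(max_delay), key=corr)

print(best_delay(code, received))  # recovers 37
```

The correlation peak is sharp only for the sensor's own code, which is the intuition behind the interference-resistance claim: another LIDAR's (different) code correlates weakly at every shift.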
The LIDARs featured in this article use one of three wavelengths: 850 nanometers, 905 nanometers, or 1550 nanometers.
This choice matters for two main reasons. One is eye safety. The fluid in the human eye is transparent to light at 850 and 905nm, allowing the light to reach the retina at the back of the eye. If the laser is too powerful, it can cause permanent eye damage.
On the other hand, the eye is opaque to 1550nm light, allowing 1550nm LIDAR to operate at much higher power levels without causing retina damage. Higher power levels can translate to a longer range.
So why doesn't everyone use 1550nm lasers for LIDAR? Detectors for 850 and 905nm light can be built using cheap, ubiquitous silicon technologies. Building a LIDAR based on 1550nm lasers, in contrast, requires the use of exotic, expensive materials like indium gallium arsenide.
And while 1550nm lasers can operate at higher power levels without risk to human eyes, those higher power levels can still cause other problems. And, of course, higher-power lasers consume more energy, reducing a vehicle's range and energy efficiency.
In summary, LIDAR is a detection system that works on the principle of radar but uses laser light instead, and it is essential for assuring road safety in autonomous travel.