Introduction
Autonomous vehicles require a number of sensors with varying parameters, ranges, and operating conditions. Cameras and other vision-based sensors provide the data needed to identify objects on the road, but they are sensitive to weather conditions. Radar sensors perform well in almost all types of weather but cannot provide an accurate 3D map of the surroundings. LiDAR sensors map the vehicle’s surroundings with a high level of accuracy but are expensive.
The Need for Integrating Multiple Sensors
Thus, every sensor has a different role to play, but none of them can be used on its own in an autonomous vehicle. If an autonomous vehicle has to make decisions similar to the human brain (or in some cases, even better than the human brain), then it needs data from multiple sources to improve accuracy and build a better understanding of the vehicle’s overall surroundings.
This is why sensor fusion becomes an essential component.
Source: https://www.robsonforensic.com/articles/autonomous-vehicles-sensors-expert/
Sensor Fusion
Sensor fusion essentially means taking all the data from the sensors set up around the vehicle’s body and using it to make decisions. This mainly helps in reducing the uncertainty that would remain if each sensor were used on its own.
Fusion thus compensates for the drawbacks of each sensor and builds a robust sensing system. In normal driving scenarios, sensor fusion brings a lot of redundancy to the system: multiple sensors detect the same objects.
Source: https://semiengineering.com/sensor-fusion-challenges-in-cars/
However, when one or multiple sensors fail to perform accurately, fusion helps in ensuring that there are no undetected objects. For example, a camera can capture the visuals around a vehicle in ideal weather conditions. But during dense fog or heavy rainfall, the camera won’t provide sufficient data to the system. This is where radar, and to some extent, LiDAR sensors help. Furthermore, a radar sensor may accurately show that there is a truck in the intersection where the car is waiting at a red light.
But it may not be able to describe that truck in three dimensions. This is where LiDAR is needed. Thus, having multiple sensors detect the same object may seem unnecessary in ideal scenarios, but in edge cases such as poor weather, sensor fusion is essential.
Levels of Sensor Fusion
1. Low-Level Fusion (Early Fusion):
In this kind of fusion, all the data coming from all sensors is fused in a single computing unit before we begin processing it. For example, pixels from cameras and point clouds from LiDAR sensors are fused to understand the size and shape of a detected object. Because all the raw data reaches the computing unit, different algorithms can work on different aspects of it, which opens up many future applications. The drawback is the computational complexity of transferring and handling such huge amounts of data: powerful processing units are required, which drives up the price of the hardware setup.
2. Mid-Level Fusion:
In mid-level fusion, objects are first detected by each individual sensor, and the fusion algorithm then works on those detections. Generally, a Kalman filter is used to fuse this data (which will be explained later on in this course). The idea is to have, let’s say, a camera and a LiDAR sensor detect an obstacle individually, and then fuse the results from both to get the best estimates of the position, class, and velocity of the detected object (a minimal sketch of this idea appears after this list). This is an easier process to implement, but there is a chance of the fusion process failing in case of a sensor failure.
3. High-Level Fusion (Late Fusion):
This is similar to the mid-level method, except that we implement detection as well as tracking algorithms for each individual sensor and then fuse the resulting tracks. The problem, however, is that if the tracking for one sensor has errors, the entire fused result may be affected.
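To make the idea of fusing per-sensor results (levels 2 and 3 above) more concrete, here is a minimal Python sketch, not a production implementation: two hypothetical pipelines, say a camera and a LiDAR, each report a 2D position estimate of the same obstacle along with its uncertainty, and the fused estimate weights each one by the inverse of its covariance. All of the numbers are made up for illustration.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Fuse two independent position estimates by inverse-covariance weighting."""
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P_fused = np.linalg.inv(P1_inv + P2_inv)          # fused uncertainty is smaller than either input
    x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)   # pulled toward the more certain sensor
    return x_fused, P_fused

# Hypothetical detections of the same obstacle (x, y in metres):
camera_pos, camera_cov = np.array([10.2, 3.1]), np.diag([1.0, 1.0])   # less certain
lidar_pos, lidar_cov = np.array([10.0, 3.0]), np.diag([0.1, 0.1])     # more certain
pos, cov = fuse_estimates(camera_pos, camera_cov, lidar_pos, lidar_cov)
print(pos)  # lies much closer to the LiDAR estimate
```

The same weighting idea is what a Kalman filter applies over time, as described in the radar-LiDAR section below.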
Source: https://www.abaltatech.com/blog/autonomous-data-problem
Sensor fusion can also be of different types. Competitive sensor fusion has multiple types of sensors generate data about the same object to ensure consistency. Complementary sensor fusion uses two sensors to paint an extended picture, something that neither sensor could manage individually. Coordinated fusion improves the quality of the data, for example by taking two different perspectives of a 2D object to generate a 3D view of the same object.
Variations in Sensor Fusion Approaches
Radar-LiDAR Fusion
Since there are a number of sensors that work in different ways, there is no single solution to sensor fusion. If a LiDAR and radar sensor have to be fused, then the mid-level sensor-fusion approach can be used.
This consists of fusing the detected objects and then making decisions.
Source: https://intellias.com/the-way-of-data-how-sensor-fusion-and-data-compression-empower-autonomous-driving/
In this approach, a Kalman filter can be used. It follows a “predict-and-update” cycle: based on the last prediction and the current measurement, the model’s estimate is updated to provide a better result in the next iteration.
An easier understanding of this is shown in the following image.
Source: https://thinkautonomous.medium.com/sensor-fusion-90135614fde6
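As a rough illustration of this predict-and-update cycle, the following Python sketch implements a textbook linear Kalman filter for a simple one-dimensional position-velocity state. The motion model, noise values, and measurement stream are all assumptions made for the example, not parameters from a real vehicle.

```python
import numpy as np

dt = 0.1                                  # time step in seconds (assumed)
F = np.array([[1, dt], [0, 1]])           # state transition: constant-velocity model
H = np.array([[1, 0]])                    # we only measure position
Q = np.diag([0.01, 0.01])                 # process noise (assumed)
R = np.array([[0.5]])                     # measurement noise (assumed)

x = np.array([[0.0], [0.0]])              # initial state: position, velocity
P = np.eye(2) * 100.0                     # initial uncertainty is large

def predict(x, P):
    """Project the state and its uncertainty forward one time step."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the prediction with a new measurement z."""
    y = z - H @ x                          # innovation: measurement minus prediction
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain: how much to trust the measurement
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Fabricated position measurements of an object moving at roughly 1 m/s
for z in [0.11, 0.18, 0.33, 0.39, 0.52]:
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))
print(x.ravel())    # estimated position and velocity after five steps
```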
The issue with radar-LiDAR fusion is that the radar sensor provides non-linear data (its measurements are in polar form: range, bearing, and radial velocity), while LiDAR data is linear in nature (Cartesian positions). Hence, the non-linear radar measurement model has to be linearized before the radar data can be fused with the LiDAR data, and the model is then updated accordingly.
To handle this non-linearity, an extended Kalman filter (which linearizes the measurement model around the current estimate) or an unscented Kalman filter (which passes a set of sigma points through the non-linear model instead of linearizing it) can be used.
Source: https://thinkautonomous.medium.com/sensor-fusion-90135614fde6
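For illustration only, the sketch below shows the piece that makes the radar update non-linear: a measurement function that maps a Cartesian state to range, bearing, and range rate, together with its Jacobian, which an extended Kalman filter evaluates at the current estimate in place of the fixed matrix H used in the linear case. The state layout (px, py, vx, vy) and the numbers are assumptions for this example.

```python
import numpy as np

def radar_measurement(state):
    """Non-linear radar model h(x): Cartesian state -> (range, bearing, range rate)."""
    px, py, vx, vy = state
    rho = np.hypot(px, py)                     # range
    phi = np.arctan2(py, px)                   # bearing
    rho_dot = (px * vx + py * vy) / rho        # radial velocity
    return np.array([rho, phi, rho_dot])

def radar_jacobian(state):
    """Jacobian of h(x), evaluated at the current estimate for the EKF update."""
    px, py, vx, vy = state
    rho2 = px**2 + py**2
    rho = np.sqrt(rho2)
    rho3 = rho2 * rho
    return np.array([
        [px / rho,                        py / rho,                        0.0,      0.0],
        [-py / rho2,                      px / rho2,                       0.0,      0.0],
        [py * (vx * py - vy * px) / rho3, px * (vy * px - vx * py) / rho3, px / rho, py / rho],
    ])

state = np.array([4.0, 3.0, 1.0, -0.5])        # hypothetical (px, py, vx, vy)
print(radar_measurement(state))
print(radar_jacobian(state))
```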
Camera-LiDAR Fusion
Now, if a system needs to fuse a camera and a LiDAR sensor, then low-level fusion (fusing raw data) as well as high-level fusion (fusing objects and their positions) can be used, and the results in the two cases vary slightly. Low-level fusion consists of overlapping the data from both sensors and using a region-of-interest (ROI) matching approach. The 3D point cloud of objects from the LiDAR sensor is projected onto a 2D plane, while the images captured by the camera are used to detect the objects in front of the vehicle.
These two maps are then superimposed to check for common regions. These common regions signify the same object detected by two different sensors. For high-level fusion, the data is first processed per sensor: 2D detections from the camera are converted into 3D object detections, which are then compared with the 3D object detections from the LiDAR sensor, and the intersecting regions of the two give the output (IoU matching).
Source: https://www.thinkautonomous.ai/blog/?p=lidar-and-camera-sensor-fusion-in-self-driving-cars
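A rough sketch of the two matching steps described above, under assumed values: LiDAR points are projected into the image plane with a pinhole camera model, a bounding box is drawn around the projected points, and the overlap with a camera bounding box is scored with intersection over union (IoU). The intrinsics, boxes, and threshold are placeholders, not calibrated values.

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy); a real system would use calibrated values.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_points(points_3d):
    """Project LiDAR points (N, 3), given in the camera frame, onto the 2D image plane."""
    pts = points_3d[points_3d[:, 2] > 0]        # keep points in front of the camera
    uvw = (K @ pts.T).T
    return uvw[:, :2] / uvw[:, 2:3]             # perspective divide -> pixel coordinates

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

camera_box = (700, 360, 760, 420)               # hypothetical camera detection (pixels)
lidar_pts = np.array([[1.0, 0.2, 8.0], [1.2, 0.4, 8.1], [0.9, 0.5, 7.9]])
uv = project_points(lidar_pts)
lidar_box = (uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max())
print(iou(camera_box, lidar_box) > 0.1)         # True: treated as the same object
```

If the IoU exceeds the chosen threshold, the two detections are treated as the same object and their attributes can be merged.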
Thus, the combination of sensors determines which fusion approach needs to be used. Sensors play a massive role in providing the computer with enough data to make the right decisions. Furthermore, sensor fusion also allows the computer to “have a second look” at the data, filtering out noise while improving accuracy.
Autonomous Vehicle Feature Development at Dorle Controls
At Dorle Controls, developing sensor and actuator drivers is one of the many things we do. Be it bespoke feature development or full-stack software development, we can assist you.
For more info on our capabilities and how we can assist you with your software control needs, please write to info@dorleco.com.