
The Power of Sensor Fusion

Introduction

Autonomous vehicles rely on multiple sensors with different specifications, operating conditions, and ranges. Cameras and other vision-based sensors provide the data needed to recognize specific objects on the road, but they are susceptible to weather variations. Radar sensors work well in practically all weather conditions, yet they cannot produce a precise three-dimensional representation of the environment. LiDAR sensors are costly, but they provide a highly accurate map of the vehicle’s surroundings.

The Need for Integrating Multiple Sensors

Although each sensor has a distinct function, an autonomous car cannot rely on any one of them alone. If it is expected to make decisions comparable to, or in some situations even superior to, those made by a human brain, an autonomous car needs information from several sources to increase accuracy and gain a better understanding of its surroundings.

Sensor fusion, therefore, becomes a crucial element.

Sensor Fusion

In essence, sensor fusion refers to using all of the information gathered from the sensors positioned around the body of the car to inform driving decisions. This mainly helps lower the level of uncertainty that would otherwise exist when relying on individual sensors.

This combination therefore helps address each sensor’s shortcomings and creates a robust sensing system. In typical driving situations, sensor fusion significantly increases system redundancy, meaning that several sensors are picking up the same objects.


However, fusion also helps guarantee that no objects are missed when one or more sensors are inaccurate. For example, in clear weather a camera can record the surroundings of a moving car, but it will not give the system enough information in heavy fog or rain. This is where LiDAR and radar sensors come in handy. Likewise, a radar sensor might precisely detect the presence of a truck at the intersection where the vehicle is stopped at a red light.

It might not be able to produce data from a three-dimensional perspective, though; that is where LiDAR is needed. Therefore, while having many sensors detect the same object can appear redundant in ideal circumstances, sensor fusion is essential in edge cases such as poor weather.

Levels of Sensor Fusion

1. Low-Level Fusion, or Initial Fusion:

In this fusion method, the raw data from all of the sensors is combined in one computing unit before it is processed. For instance, pixels from cameras and point clouds from LiDAR sensors are fused to determine the size and shape of a detected object. Since this method transmits all of the data to the processing unit, it has a wide range of potential uses, and different algorithms can make use of different elements of the data. The drawback is the complexity of transporting and processing such massive volumes of data, and the cost of the hardware configuration increases because high-quality processing units are necessary.
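As an illustration of what combining raw data can look like in practice, the sketch below projects a LiDAR point cloud into a camera image so that pixels and 3D points can be associated. It is a minimal example, assuming a calibrated setup with a known camera intrinsic matrix K and a known LiDAR-to-camera transform T_cam_lidar; the function name and inputs are illustrative, not a specific product API.

import numpy as np

def project_lidar_to_image(points_xyz, K, T_cam_lidar):
    """Project 3D LiDAR points onto the 2D camera image plane.

    points_xyz  : (N, 3) LiDAR points in the LiDAR frame
    K           : (3, 3) camera intrinsic matrix (assumed known from calibration)
    T_cam_lidar : (4, 4) homogeneous LiDAR-to-camera transform (assumed known)
    """
    # Homogeneous coordinates, then transform the points into the camera frame
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera (positive depth)
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]

    # Perspective projection: divide by depth, then apply the intrinsics
    uv_h = (K @ (pts_cam / pts_cam[:, 2:3]).T).T
    return uv_h[:, :2], pts_cam[:, 2]  # pixel coordinates and their depths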

2. Fusion at the Mid-Level:

In mid-level fusion, the algorithm works with the data after the individual sensors have first detected the objects. These detections are typically fused using a Kalman filter (described later in this article). The idea is to have, say, a camera and a LiDAR sensor detect an obstacle individually and then fuse the results from both to get the best estimates of the obstacle’s position, class, and velocity. Although this is a simpler procedure to implement, there is a possibility that the fusion process will not succeed if a sensor fails.
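As a rough illustration of fusing two independent detections of the same object (a simplified stand-in for the Kalman-filter-based fusion described later), the sketch below combines a camera estimate and a LiDAR estimate by weighting each with the inverse of its covariance. The function and its inputs are assumptions made for illustration.

import numpy as np

def fuse_estimates(mean_cam, cov_cam, mean_lidar, cov_lidar):
    """Fuse two independent Gaussian estimates of the same object's state.

    Each sensor pipeline supplies a mean (e.g. a position/velocity vector) and
    a covariance describing its uncertainty; the fused estimate weights each
    sensor by the inverse of its covariance, so the more certain sensor
    contributes more.
    """
    inv_cam = np.linalg.inv(cov_cam)
    inv_lidar = np.linalg.inv(cov_lidar)
    fused_cov = np.linalg.inv(inv_cam + inv_lidar)
    fused_mean = fused_cov @ (inv_cam @ mean_cam + inv_lidar @ mean_lidar)
    return fused_mean, fused_cov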


3. High-Level Fusion (Late Fusion):

This is comparable to the mid-level approach, except that detection and tracking algorithms are run for every single sensor and the outcomes are fused afterward. The issue is that the fusion as a whole could be impacted if one sensor has tracking problems.

Additionally, there are various kinds of sensor fusion. Competitive sensor fusion has different types of sensors generate data on the same object, which maintains consistency through redundancy. Complementary sensor fusion uses two sensors to create a larger picture than either sensor could produce on its own. Coordinated fusion raises the quality of the data; for instance, two distinct viewpoints of a 2D object can be combined to create a 3D representation of it.

Variations in the Approach to Sensor Fusion

1. Radar-LiDAR Fusion

Sensor fusion cannot be solved by a single method, since different sensors serve different functions. If a LiDAR and a radar sensor need to be fused, the mid-level sensor fusion approach can be applied.

This entails combining the objects detected by each sensor and drawing conclusions afterward.

This method can make use of a Kalman filter, which follows a “predict-and-update” technique: based on the previous prediction and the current measurement, the model is updated so that it produces a better estimate in the following iteration.
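A minimal sketch of that predict-and-update loop is shown below, assuming a linear motion model with a state made up of 2D position and velocity. The matrices F, Q, H, and R are placeholders whose values depend on the actual sensors and time step, and the class itself is illustrative rather than production code.

import numpy as np

class KalmanFilter:
    """Minimal linear Kalman filter showing the predict-and-update loop."""

    def __init__(self, x0, P0, F, Q, H, R):
        self.x = x0  # state estimate, e.g. [px, py, vx, vy]
        self.P = P0  # state covariance (uncertainty of the estimate)
        self.F = F   # state-transition model
        self.Q = Q   # process noise covariance
        self.H = H   # measurement model
        self.R = R   # measurement noise covariance

    def predict(self):
        # Project the previous estimate forward one time step
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        # Correct the prediction with the current measurement z
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P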



The difficulty with radar-LiDAR fusion is that the LiDAR measurement model is linear, whereas the radar measurement model is non-linear: radar reports range, bearing, and range rate in polar coordinates rather than Cartesian positions. Therefore, before being fused with the LiDAR data, the non-linear radar measurement must be linearized (or otherwise approximated), and the model is then updated appropriately.

An extended Kalman filter, which linearizes the radar measurement model with a Jacobian, or an unscented Kalman filter, which approximates the non-linear transformation with sigma points instead of explicit linearization, can be used for this.
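To make the linearization concrete, the sketch below shows a commonly used radar measurement model (range, bearing, and range rate for a 2D position-velocity state) together with its Jacobian, which an extended Kalman filter evaluates at the current state estimate. The state layout [px, py, vx, vy] is an assumption made for illustration.

import numpy as np

def radar_measurement(x):
    """Non-linear measurement model h(x) for a state x = [px, py, vx, vy]:
    maps Cartesian position/velocity to radar range, bearing and range rate."""
    px, py, vx, vy = x
    rho = np.hypot(px, py)
    phi = np.arctan2(py, px)
    rho_dot = (px * vx + py * vy) / max(rho, 1e-6)  # guard against rho == 0
    return np.array([rho, phi, rho_dot])

def radar_jacobian(x):
    """Jacobian of h(x); an extended Kalman filter evaluates this at the
    current state estimate to linearize the radar update."""
    px, py, vx, vy = x
    c1 = px * px + py * py
    if c1 < 1e-6:
        raise ValueError("Jacobian is undefined at the origin")
    c2 = np.sqrt(c1)
    c3 = c1 * c2
    return np.array([
        [px / c2,                       py / c2,                       0.0,     0.0],
        [-py / c1,                      px / c1,                       0.0,     0.0],
        [py * (vx * py - vy * px) / c3, px * (vy * px - vx * py) / c3, px / c2, py / c2],
    ])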

2. Fusion of Camera and LiDAR

Systems that need to fuse a camera and a LiDAR sensor can apply either low-level fusion, which combines the raw data, or high-level fusion, which combines detected objects and their positions. The outcomes differ somewhat in the two cases. Low-level fusion consists of overlapping the data from both sensors and applying a region-of-interest (ROI) matching technique: objects in front of the vehicle are detected in the camera’s images, while the 3D point cloud of those objects obtained from the LiDAR sensor is projected onto a 2D plane.


Next, the two maps are superimposed to check for common regions; these shared regions indicate the same object identified by two separate sensors. For high-level fusion, the data is processed first and the 2D camera detections are transformed into 3D object detections. These are then compared with the LiDAR sensor’s 3D object detections, and the output is determined by where the two sensors’ detections intersect (IoU matching).
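The sketch below illustrates the matching step itself: computing the intersection over union (IoU) of two axis-aligned 2D boxes and pairing camera detections with projected LiDAR detections whose overlap exceeds a threshold. The box format and the 0.5 threshold are assumptions chosen for illustration.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(camera_boxes, lidar_boxes, threshold=0.5):
    """Pair camera detections with projected LiDAR detections whose IoU
    exceeds the threshold; each pair is treated as the same physical object."""
    return [(i, j)
            for i, cam in enumerate(camera_boxes)
            for j, lid in enumerate(lidar_boxes)
            if iou(cam, lid) >= threshold]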

Development of Autonomous Vehicle Features at Dorleco

Developing sensor and actuator drivers is among the many things we do at Dorleco. We can help you with full-stack software development or custom feature development.

Please send an email to info@dorleco.com for more information about our services and how we can help you with your software control needs. You can also connect with us for all VCU-related products.
