Sensors in Autonomous Vehicles
The entire discussion around autonomous vehicles hinges on one major question – will the vehicle’s brain (i.e., the computer) be able to make decisions the way a human brain does? Whether a computer can make such decisions is a topic of its own, but it is just as important for an automotive company working on self-driving technology to provide the computer with the necessary and sufficient data to make those decisions. This is where sensors, and in particular their integration with the computing system, come into the picture.
Types of sensors:
There are three main types of sensors used to map the environment around an autonomous vehicle – vision-based (cameras), radio-based (radar) and light/laser-based (LiDAR) – along with ultrasonic sensors for short-range, low-speed tasks. Each is explained below in brief:
Camera sensors
High-resolution video cameras can be installed in multiple locations around the vehicle’s body to gain a 360° view of the vehicle’s surroundings. They capture images and provide the data required to identify objects such as traffic lights, pedestrians and other cars on the road.
The biggest advantage of high-resolution camera data is that objects can be accurately identified, and this data can be used to build a detailed map of the vehicle’s surroundings. However, cameras do not perform as accurately in poor visibility, such as at night or in heavy rain or fog.
Types of vision-based sensors used:
- Monocular vision sensor
A monocular vision sensor makes use of a single camera to help detect pedestrians and vehicles on the road. This system relies heavily on object classification, meaning it can only detect objects it has been trained to classify. These systems can be trained to detect and classify objects through millions of miles of simulated driving. When the system classifies an object, it compares the object’s apparent size with the sizes it has stored in memory. For example, suppose the system has classified a certain object as a truck. If it knows how large a truck appears in an image at a specific distance, it can compare the apparent size of the new truck and calculate its distance accordingly.
But if the system encounters an object which it is unable to classify, that object will go undetected. This is a major concern for autonomous system developers.
- Stereo vision sensor
A stereo vision system consists of a dual-camera setup, which helps in accurately measuring the distance to a detected object even if the system doesn’t recognize what the object is. Because it has two lenses separated by a known baseline, it works much like a pair of human eyes and can perceive depth. Since the two lenses capture slightly different images, the system can calculate the distance between the object and the camera by triangulation.
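The two distance-estimation approaches above can be sketched with a simple pinhole-camera model. The focal length, baseline and object sizes below are illustrative values, not parameters of any particular sensor:

```python
# Sketch of monocular (known-size) and stereo (disparity) distance
# estimation under a pinhole-camera model. All numbers are illustrative.

def monocular_distance(real_height_m, pixel_height, focal_length_px):
    """Distance from apparent size: the system compares the object's
    height in the image with the known height of its class."""
    return real_height_m * focal_length_px / pixel_height

def stereo_depth(disparity_px, baseline_m, focal_length_px):
    """Depth by triangulation: the same point appears shifted by
    `disparity_px` pixels between the left and right images."""
    return focal_length_px * baseline_m / disparity_px

# A truck ~3.5 m tall appearing 100 px high with a 1000 px focal length:
print(monocular_distance(3.5, 100, 1000))   # 35.0 (metres)

# A stereo rig with a 0.3 m baseline observing a 6 px disparity:
print(stereo_depth(6, 0.3, 1000))           # 50.0 (metres)
```

Note how the stereo formula needs no knowledge of what the object is – only the disparity between the two images – which is exactly why stereo systems can range unclassified objects.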
Radar sensors
Radar sensors make use of radio waves to read the environment and get an accurate reading of an object’s size, angle and velocity. A transmitter inside the sensor emits radio waves; the time these waves take to be reflected back to the sensor gives the object’s distance from the host vehicle, while the Doppler shift of the reflected waves gives its relative velocity. Combining returns across the sensor’s field of view also yields the object’s angle and approximate size.
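These two basic radar measurements can be sketched as follows; the carrier frequency and timings are illustrative, not taken from a real datasheet:

```python
# Sketch of the two basic radar measurements: range from round-trip
# time of flight, and relative velocity from the Doppler shift.

C = 3.0e8  # speed of light, m/s

def radar_range(round_trip_time_s):
    """The wave travels to the target and back, so divide by two."""
    return C * round_trip_time_s / 2

def doppler_velocity(doppler_shift_hz, carrier_freq_hz):
    """Relative (radial) velocity from the Doppler shift of the echo."""
    return doppler_shift_hz * C / (2 * carrier_freq_hz)

# An echo received 1 microsecond after transmission:
print(radar_range(1e-6))                        # 150.0 (metres)

# A 77 GHz automotive radar seeing a ~5.13 kHz Doppler shift:
print(round(doppler_velocity(5130, 77e9), 2))   # ~10 m/s closing speed
```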
Radar has long been used in weather forecasting as well as ocean navigation because it performs consistently across a wide range of weather conditions, making it more robust than vision-based sensors in this respect. However, radar is not always accurate when it comes to identifying and classifying a specific object (an important step in decision-making for a self-driving car).
Radar sensors can be classified based on their operating distance ranges:
- Short-Range Radar (SRR): (0.2 to 30 m) – The major advantage of short-range radar is its high resolution. This is of utmost importance, as a pedestrian standing in front of a larger object may not be properly detected at low resolution.
- Medium-Range Radar (MRR): (30 to 80 m)
- Long-Range Radar (LRR): (80 m to more than 200 m) – These sensors are most useful for Adaptive Cruise Control (ACC) and highway Automatic Emergency Braking (AEB) systems.
LiDAR sensors
LiDAR (Light Detection and Ranging) offers some advantages over vision-based and radar sensors. It transmits thousands of laser pulses per second, which, when reflected back, give a much more accurate sense of a vehicle’s size, its distance from the host vehicle and other features. The many reflected pulses form a point cloud (a set of points in 3D space), so a LiDAR sensor also provides a three-dimensional view of an object.
LiDAR sensors can detect small objects with high precision, improving the accuracy of object identification. Moreover, LiDAR sensors can be configured to give a 360° view of the objects around the vehicle, reducing the need for multiple sensors of the same type. The drawback, however, is that LiDAR sensors have a complex design and architecture, which means that integrating one into a vehicle can multiply manufacturing costs. They also need high computing power, which makes them difficult to integrate into a compact design.
Most LiDAR sensors use a 905 nm wavelength, which can provide accurate data up to 200 m in a restricted field of view. Some companies are also working on 1550 nm LiDAR sensors, which will have even better accuracy over a longer range.
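The way a single LiDAR return becomes one point in the point cloud can be sketched as below: the pulse’s time of flight gives the range, and the known firing direction (azimuth and elevation) places the point in 3D space. The timing and angles are illustrative:

```python
# Sketch of how a LiDAR return becomes a 3D point: each pulse yields a
# range (from time of flight) at a known azimuth/elevation angle, and
# many such points together form the point cloud.

import math

C = 3.0e8  # speed of light, m/s

def lidar_range(round_trip_time_s):
    """Range from round-trip time of flight (out and back)."""
    return C * round_trip_time_s / 2

def to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Spherical (range, azimuth, elevation) -> (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A pulse returning after ~0.33 microseconds from straight ahead:
r = lidar_range(0.33e-6)           # ~49.5 m
print(to_cartesian(r, 0.0, 0.0))   # a point on the forward axis
```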
Ultrasonic sensors
Ultrasonic sensors are mostly used in low-speed applications in automobiles. Most parking-assist systems use ultrasonic sensors, as they provide an accurate reading of the distance between an obstacle and the car, irrespective of the size and shape of the obstacle.
The ultrasonic sensor consists of a transmitter-receiver setup. The transmitter sends ultrasonic sound waves and based on the time period between transmission of the wave and its reception, the distance to the obstacle is calculated.
Ultrasonic sensors typically detect objects from a few centimetres out to about 5 m, with precise distance measurement. They can also detect objects very close to the vehicle, which is extremely helpful while parking.
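The transmit-and-echo calculation described above can be sketched as follows; the speed of sound assumes air at roughly 20 °C, and the 5 m cut-off reflects the typical detection range mentioned above:

```python
# Sketch of the ultrasonic transmit/echo distance calculation.
# Speed of sound assumes ~20 degrees C air; real sensors often
# compensate for temperature.

SPEED_OF_SOUND = 343.0  # m/s

def ultrasonic_distance(echo_time_s, max_range_m=5.0):
    """Distance from round-trip echo time; returns None beyond the
    sensor's typical ~5 m detection range."""
    distance = SPEED_OF_SOUND * echo_time_s / 2
    return distance if distance <= max_range_m else None

# Echo received 10 ms after transmission while reversing toward a wall:
print(ultrasonic_distance(0.010))   # 1.715 (metres)
print(ultrasonic_distance(0.050))   # None (8.575 m, out of range)
```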
Data from ultrasonic (and other) sensors can also be used to detect conditions around the vehicle and shared over V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) connections. Sensor data from thousands of such connected vehicles can help in building algorithms for autonomous vehicles and provides reference data for many scenarios, conditions and locations.
Challenges in handling sensors
The main challenge in handling sensors is to get an accurate reading from the sensor while filtering out the noise. Noise refers to unwanted random fluctuations or abnormalities superimposed on a signal, which reduce the signal’s accuracy and precision. It is important to tell the system which part of the signal is meaningful and which needs to be ignored. Noise filtering is the set of processes performed to remove the noise contained within the data.
The main cause of uncertainty in individual sensor readings is unwanted noise or interference from the environment. Any data picked up by a sensor consists of a signal component (which we need) and a noise component (which we want to ignore). The uncertainty lies in not knowing the degree of noise present in the data.
Normally, high-frequency noise can cause a lot of distortions in the measurements of the sensors. Since we want the signal to be as precise as possible, it is important to remove such high-frequency noise.
Noise filters are divided into linear (e.g., simple moving average) and non-linear (e.g., median) filters. The most commonly used noise filters are:
- Low-pass filter – It passes signals with a frequency lower than a certain cut-off frequency and attenuates signals with frequencies higher than the cut-off frequency.
- High-pass filter – It passes signals with a frequency higher than a certain cut-off frequency and attenuates signals with frequencies lower than the cut-off frequency.
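A minimal sketch of a discrete low-pass filter is the exponential moving average, in which a smoothing factor plays the role of the cut-off frequency. The alpha value and readings below are illustrative:

```python
# A first-order discrete low-pass filter (exponential moving average):
# it lets the slowly varying signal through and attenuates
# high-frequency noise. The smoothing factor alpha is illustrative.

def low_pass(samples, alpha=0.2):
    """Smaller alpha -> stronger smoothing (lower cut-off)."""
    filtered = []
    y = samples[0]  # initialise with the first reading
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        filtered.append(y)
    return filtered

# Noisy distance readings fluctuating around a true value of 10:
noisy = [10.0, 12.0, 8.0, 11.0, 9.0, 10.5]
print([round(v, 2) for v in low_pass(noisy)])
```

Note how each filtered value stays much closer to 10 than the raw readings do – the high-frequency jitter is attenuated while the underlying level passes through.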
Other common filters include the Kalman filter, Recursive Least Squares (RLS) and Least Mean Squares (LMS) adaptive filters.
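As an illustration of the Kalman filter mentioned above, here is a minimal one-dimensional sketch fusing noisy range measurements of a stationary obstacle. The noise variances are illustrative, not tuned for any real sensor:

```python
# Minimal sketch of a one-dimensional Kalman filter smoothing noisy
# range measurements of a stationary obstacle. Variances are
# illustrative.

def kalman_1d(measurements, process_var=1e-4, meas_var=0.5):
    x, p = measurements[0], 1.0   # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += process_var           # predict: uncertainty grows
        k = p / (p + meas_var)     # Kalman gain: trust in measurement
        x += k * (z - x)           # update estimate with measurement
        p *= (1 - k)               # uncertainty shrinks after update
        estimates.append(x)
    return estimates

# Noisy distance readings around a true 20 m obstacle:
readings = [20.4, 19.7, 20.1, 19.9, 20.3, 20.0]
print([round(v, 2) for v in kalman_1d(readings)])
```

Unlike the fixed-gain low-pass filter, the Kalman gain adapts each step based on the relative uncertainty of the prediction and the measurement, which is why it is so widely used for sensor fusion.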
Autonomous Vehicle Feature Development at Dorle Controls
At Dorle Controls, we strive to provide bespoke software development and integration solutions for autonomous vehicles as well. This includes individual need-based application software development as well as developing the entire software stack for an autonomous vehicle. Write to firstname.lastname@example.org to know more about our capabilities in this domain.