
Anatomy of Autonomous Vehicles

How an autonomous vehicle senses its environment, makes decisions and executes driving commands can be implemented through multiple approaches. The level of autonomy is determined by how many of these tasks the vehicle performs on its own. On a system level, however, each of these implementations exhibits similar fundamental behaviour.

The level of vehicle autonomy can be a confusing subject because of the varying approaches OEMs take to make their cars drive themselves. The Society of Automotive Engineers (SAE) has defined six levels of driving automation (Level 0 through Level 5), which are now considered the industry standard. The image below summarizes these levels:

 

Source: https://www.sae.org/blog/sae-j3016-update

 

Generally, Levels 1 and 2 assist the driver with active safety features that can prove crucial in extreme scenarios. The human is still the primary driver, and these features enhance the driving experience in terms of safety and comfort.

From Level 3 onwards, the driving itself is automated, albeit in limited driving conditions. These conditions are defined as the Operational Design Domain (ODD). As the machine learning algorithms inside the vehicle learn to handle edge cases (harsh weather, haphazard traffic, etc.), the ODD of the vehicle expands until the vehicle reaches Level 5.

Anatomy of a Self-Driving Car

 

Sensing system 

Sensors are the first major system in autonomous vehicle architecture. They are responsible for perceiving the surroundings and providing the data for localizing the vehicle on the map. Any autonomous vehicle requires multiple sensors: cameras (for detecting and classifying objects with computer vision), LiDARs (for creating 3D point clouds of the surroundings to accurately identify objects), radars (to judge the velocity and heading of other vehicles), inertial measurement units (to help determine the velocity and heading of the vehicle itself), GNSS-RTK systems (e.g., GPS, for localizing the vehicle), ultrasonic sensors (for short-range distance measurement) and many more. An important design consideration is the placement of each sensor around the vehicle body to ensure 360° coverage, which helps detect objects even in the human driver's blind spots. The image below shows how multiple sensors are placed to ensure minimal blind spots.

 

Source: https://www.edge-ai-vision.com/2019/01/multi-sensor-fusion-for-robust-device-autonomy/
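To make the idea of 360° coverage concrete, here is a minimal Python sketch that checks whether a hypothetical sensor suite leaves any blind-spot azimuths around the car. All sensor names, mounting angles and fields of view below are illustrative assumptions, not values from any real vehicle.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    mount_azimuth_deg: float   # direction the sensor faces, 0 = straight ahead
    fov_deg: float             # horizontal field of view

# Hypothetical suite: values are illustrative only
suite = [
    Sensor("front_camera",   0, 120),
    Sensor("rear_camera",  180, 120),
    Sensor("left_radar",    90, 150),
    Sensor("right_radar",  -90, 150),
    Sensor("roof_lidar",     0, 360),
]

def covered_angles(sensors, step_deg=1):
    """Return the set of azimuths (in degrees) seen by at least one sensor."""
    covered = set()
    for s in sensors:
        half = s.fov_deg / 2
        for a in range(0, 360, step_deg):
            # smallest angular difference between azimuth a and the sensor axis
            diff = (a - s.mount_azimuth_deg + 180) % 360 - 180
            if abs(diff) <= half:
                covered.add(a)
    return covered

gaps = set(range(0, 360)) - covered_angles(suite)
print("blind-spot azimuths:", sorted(gaps) or "none")
```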

 

Perception

Owing to the large number of sensors in an autonomous vehicle, it is necessary to merge the data from multiple sensors (sensor fusion) to understand and perceive the surroundings. This includes understanding where the road is (semantic segmentation), how the objects can be classified (supervised object detection), and what the state of every object (vehicle, pedestrian, etc.) is in terms of position, velocity and direction of motion (tracking). All of this comes under perception.

 

Object Classification | Source: https://medium.com/@albertlai631/how-do-self-driving-cars-see-13054aee2503
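As a simple illustration of sensor fusion at the object level, the sketch below matches camera detections (which carry the object class) with radar detections (which carry velocity) using nearest-neighbour association. The detections and the 2 m association gate are made-up values for illustration only.

```python
import math

# Hypothetical detections from two sensors in the same vehicle frame (metres)
camera_dets = [{"id": "c0", "x": 12.1, "y": -0.4, "cls": "car"},
               {"id": "c1", "x": 30.5, "y":  3.2, "cls": "pedestrian"}]
radar_dets  = [{"id": "r0", "x": 12.4, "y": -0.5, "vx": -2.1, "vy": 0.0},
               {"id": "r1", "x": 55.0, "y":  1.0, "vx": -8.0, "vy": 0.1}]

def fuse(camera_dets, radar_dets, gate_m=2.0):
    """Nearest-neighbour association: attach radar velocity to camera objects
    whose positions agree to within gate_m metres."""
    fused = []
    for c in camera_dets:
        best, best_d = None, gate_m
        for r in radar_dets:
            d = math.hypot(c["x"] - r["x"], c["y"] - r["y"])
            if d < best_d:
                best, best_d = r, d
        obj = {"cls": c["cls"], "x": c["x"], "y": c["y"]}
        if best is not None:
            obj["vx"], obj["vy"] = best["vx"], best["vy"]
        fused.append(obj)
    return fused

print(fuse(camera_dets, radar_dets))
```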

 

Localization and Mapping

Mapping is where the vehicle uses sensor data to create highly accurate 3-dimensional maps of the surroundings; localization determines where the vehicle is situated on this map in real time. Each sensor provides unique insight into the surroundings, which helps the system build the map, and the newly acquired sensor data is then compared against these maps to localize the host vehicle. There can be numerous localization methods depending on which sensor is used: LiDAR-based localization compares point clouds with the existing 3D maps, while vision-based localization uses camera images. Some localization algorithms also make use of a GNSS (Global Navigation Satellite System) with Real-Time Kinematic positioning (RTK) and combine these estimates with the measurements of an inertial measurement unit (IMU) to get high-fidelity results.
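The GNSS-RTK/IMU combination mentioned above is typically implemented with some form of Kalman filtering. The following is a deliberately simplified, one-dimensional sketch showing how an IMU-propagated estimate can be corrected by a more precise RTK fix; the positions and variances are made-up numbers for illustration.

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman measurement update:
    x_pred/p_pred = predicted state and variance (e.g. from IMU dead-reckoning),
    z/r           = GNSS measurement and its variance."""
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # corrected estimate
    p = (1 - k) * p_pred               # reduced uncertainty
    return x, p

# Illustrative numbers only: IMU dead-reckoning says we are 105.2 m along the road
# with 4 m^2 variance; an RTK-corrected GNSS fix says 104.6 m with 0.01 m^2 variance.
x, p = kalman_update(105.2, 4.0, 104.6, 0.01)
print(f"fused position: {x:.2f} m, variance: {p:.4f} m^2")
```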

Prediction and Planning

After localizing the vehicle, the system needs to predict the behaviour of other dynamic objects before it can plan a trajectory for the car to follow. Regression algorithms such as Bayesian regression, neural network regression and decision forest regression, among others, can tackle this problem.
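As a toy example of regression-based prediction, the sketch below fits a plain linear regression to a made-up position history of another vehicle and extrapolates it one second ahead. A production system would use far richer models (such as the Bayesian and decision-forest variants mentioned above) and account for interactions between road users.

```python
import numpy as np

# Hypothetical track of another vehicle: timestamps (s) and longitudinal positions (m)
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
x = np.array([20.0, 21.1, 22.1, 23.2, 24.1])

# Fit a simple linear regression x(t) = a*t + b to the recent history ...
a, b = np.polyfit(t, x, deg=1)

# ... and extrapolate 1 second ahead to anticipate where the vehicle will be.
t_future = t[-1] + 1.0
x_future = a * t_future + b
print(f"predicted position at t={t_future:.1f}s: {x_future:.1f} m "
      f"(estimated speed {a:.1f} m/s)")
```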

The planning block then uses this understanding of the vehicle's own state in the environment to plan manoeuvres that take it to its destination. This includes local planning (whether or not to stop at an intersection, or planning an overtaking manoeuvre) as well as global planning (which route to take from point A to point B). Another step is path planning, which decides the exact trajectory of the vehicle for a particular manoeuvre, keeping in mind aspects such as vehicle dynamics, passenger comfort, road texture and traffic conditions. Motion planning consists of the lateral and longitudinal vehicle motion plan. A simple cost-based example of local planning is sketched after the figure below.

 


Short term planning | Source: https://medium.com/udacity/self-driving-path-planning-brought-to-you-by-udacity-students-13c07bcd4f32
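One common way to realize local planning is to score a small set of candidate manoeuvres with a cost function and pick the cheapest. The sketch below is a minimal, hypothetical example; the candidate offsets, obstacle position and cost weights are invented purely for illustration.

```python
# Hypothetical candidate manoeuvres for a simple local planner: stay in lane,
# shift left, or shift right, each defined by a lateral offset in metres.
candidates = {"keep_lane": 0.0, "shift_left": -3.5, "shift_right": 3.5}
obstacle_lateral = 0.0   # an obstacle sits in the current lane

def cost(offset, obstacle_lateral, w_comfort=1.0, w_safety=10.0):
    comfort = abs(offset)                      # penalise large lateral moves
    clearance = abs(offset - obstacle_lateral)
    safety = 1.0 / (clearance + 0.1)           # penalise passing close to the obstacle
    return w_comfort * comfort + w_safety * safety

best = min(candidates, key=lambda m: cost(candidates[m], obstacle_lateral))
print("chosen manoeuvre:", best)
```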

 

Controls

Finally, the control block is where driver commands are sent to the actuators to execute the plan decided in the previous block. Normally, the controller is accompanied by a feedback loop that constantly checks how accurately the vehicle is following the plan. Depending on the level of automation and the type of manoeuvre to be executed, the control mechanism can also differ: common choices include the Linear Quadratic Regulator (LQR), Model Predictive Control (MPC) and the Proportional-Integral-Derivative (PID) controller, among others.
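To illustrate the simplest of these, here is a minimal PID controller sketch tracking a target speed. The gains, time step and the one-line "vehicle model" are purely illustrative assumptions, not a real longitudinal dynamics model.

```python
class PID:
    """Minimal PID controller sketch: gains and timing are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Track a 20 m/s target speed; the "plant" is a crude first-order toy response.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
speed = 0.0
for _ in range(50):
    throttle = pid.step(setpoint=20.0, measurement=speed)
    speed += 0.1 * throttle          # toy vehicle response, not real dynamics
print(f"speed after 5 s: {speed:.1f} m/s")
```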

Autonomous Systems Development and Testing at Dorle Controls

At Dorle Controls, we focus on application software development and testing, feature integration, simulation, functional safety and cybersecurity for autonomous systems. This includes features like AEB, ACC, LKA, LCA, FCW, park assist, road texture classification, driver monitoring, and facial landmark tracking, with applications of data annotation services, neural networks, image processing, sensor fusion, and computer vision. For more information, write to info@dorleco.com.

