
Anatomy of Autonomous Vehicles

Overview of Autonomous Vehicles

There are various ways to implement how an autonomous vehicle perceives its surroundings, makes decisions, and executes those decisions. The degree of autonomy is judged by how many driving tasks the vehicle handles on its own. All of these implementations, however, share the same basic behavior at the system level.

Because OEMs use different methods to enable Autonomous Vehicle technology, the extent of vehicle autonomy can be a confusing topic. The industry currently regards the levels of driving automation established by the Society of Automotive Engineers (SAE), Levels 0 through 5, as the standard. The figure below sums up these tiers:

In general, Levels 1 and 2 assist the driver with certain active safety features, which can be valuable in critical situations. These features only improve the safety and comfort of driving; the human remains the principal operator.

At Level 3, driving becomes automated, but only within a restricted set of driving conditions known as the operational design domain (ODD). The vehicle's ODD expands toward Level 5 as the machine learning algorithms inside learn to handle edge cases (bad weather, unpredictable traffic, etc.).

Anatomy of an Autonomous Vehicle

1. Sensing system

In the architecture of autonomous vehicles, sensors are the first significant system. They observe the environment and supply the information needed to locate the car on a map. Any autonomous vehicle needs multiple sensors, including:

- Cameras for computer vision-based object detection and classification
- LiDARs for building 3D point clouds of the environment to identify objects precisely
- Radars for determining the direction and speed of other vehicles
- Inertial measurement units (IMUs) for assisting in determining the direction and speed of the vehicle itself
- GNSS-RTK systems (such as GPS) for localizing the vehicle
- Ultrasonic sensors for short-range distance measurement

A crucial design factor is positioning each sensor around the vehicle body to provide 360° sensor coverage, which aids in detecting objects even in the blind spots.

[Figure: placement of sensors around the vehicle for 360° coverage]

The graphic depicts the placement of several sensors to guarantee there are few blind spots.
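
As a rough illustration of this design exercise, a sensor suite can be described in software as a configuration listing each sensor's type, mounting direction, and field of view, which makes coverage gaps easy to check. The Python sketch below is only illustrative; the sensor names, ranges, and fields of view are assumptions, not a real vehicle's specification.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str          # e.g. "front_camera"
    kind: str          # "camera", "lidar", "radar", "ultrasonic", ...
    yaw_deg: float     # mounting direction relative to vehicle heading
    fov_deg: float     # horizontal field of view
    range_m: float     # usable detection range

# Illustrative sensor suite (values are assumptions, not a real spec)
suite = [
    Sensor("front_camera", "camera", yaw_deg=0, fov_deg=60, range_m=150),
    Sensor("roof_lidar", "lidar", yaw_deg=0, fov_deg=360, range_m=100),
    Sensor("front_radar", "radar", yaw_deg=0, fov_deg=20, range_m=250),
    Sensor("rear_radar", "radar", yaw_deg=180, fov_deg=20, range_m=150),
    Sensor("left_ultrasonic", "ultrasonic", yaw_deg=90, fov_deg=70, range_m=5),
    Sensor("right_ultrasonic", "ultrasonic", yaw_deg=-90, fov_deg=70, range_m=5),
]

def covered(bearing_deg: float) -> bool:
    """Check whether any sensor's horizontal FOV covers a given bearing."""
    for s in suite:
        diff = (bearing_deg - s.yaw_deg + 180) % 360 - 180  # wrap to [-180, 180)
        if abs(diff) <= s.fov_deg / 2:
            return True
    return False

# Scan all bearings in 5-degree steps and report any blind spots
blind = [b for b in range(0, 360, 5) if not covered(b)]
print("Blind bearings (deg):", blind)
```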

2. Perception

An Autonomous Vehicle carries many sensors; to comprehend and perceive its environment, data from several of them must be combined (a process known as sensor fusion). Perception answers the key questions: where the drivable road is (semantic segmentation), what each detected object is (supervised object detection and classification), and what the position, velocity, and direction of motion of each object (car, pedestrian, etc.) are over time (tracking).
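
As a toy illustration of the sensor fusion idea, two noisy range estimates of the same object, say one from radar and one from LiDAR, can be combined by inverse-variance weighting, giving a fused estimate that is more certain than either input. This is only a minimal sketch with made-up numbers; production perception stacks fuse full detections and tracks, not single scalars.

```python
def fuse(measurement_a, var_a, measurement_b, var_b):
    """Inverse-variance weighted fusion of two noisy scalar measurements."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * measurement_a + w_b * measurement_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Radar says the lead car is 42.0 m away (variance 4.0 m^2);
# LiDAR says 40.5 m (variance 0.25 m^2). Values are illustrative.
distance, variance = fuse(42.0, 4.0, 40.5, 0.25)
print(f"Fused distance: {distance:.2f} m, variance: {variance:.3f} m^2")
```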


3. Map-making and Localization

Localization uses sensor data to build precise 3D maps and track the vehicle's position in real time. Every sensor offers a different perspective on the environment, which helps the system map its surroundings. The host vehicle is then located within these maps by comparing them with the most recently acquired sensor data. Depending on the sensor used, a variety of localization techniques can be applied.

 


Whereas vision-based localization makes use of images, LiDAR-based localization compares its point clouds with the available 3D maps. For high-fidelity results, many localization algorithms combine RTK-GNSS data with IMU sensor readings.
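
The GNSS/IMU combination mentioned above is commonly implemented with a Kalman filter. The one-dimensional sketch below predicts the vehicle's position from an IMU-derived velocity and corrects it with GNSS fixes; real localization pipelines estimate a full 3D state, so treat this purely as an illustration of the predict/correct cycle, with all noise values assumed.

```python
# Minimal 1-D Kalman filter: predict with IMU-derived velocity, correct with GNSS.
def kalman_step(x, p, velocity, dt, gnss_pos, q=0.1, r=1.0):
    """One predict/correct cycle.
    x, p      : current position estimate and its variance
    velocity  : speed along the track from the IMU (assumed known here)
    gnss_pos  : GNSS position fix
    q, r      : process and measurement noise variances (illustrative values)
    """
    # Predict: dead-reckon forward using the IMU velocity
    x_pred = x + velocity * dt
    p_pred = p + q

    # Correct: blend in the GNSS measurement according to the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (gnss_pos - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Illustrative run: vehicle moving at ~10 m/s, GNSS fix every 0.1 s
x, p = 0.0, 1.0
for step in range(1, 6):
    gnss = 10.0 * 0.1 * step + 0.3     # slightly offset GNSS reading (made up)
    x, p = kalman_step(x, p, velocity=10.0, dt=0.1, gnss_pos=gnss)
print(f"Estimated position: {x:.2f} m (variance {p:.3f})")
```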

4. Prediction and planning

After localizing the vehicle, the system must anticipate the actions of other dynamic objects before it can plan a route for the car to follow. This prediction problem can be tackled with regression techniques such as decision trees, neural networks, and Bayesian regression, among others.
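
One of the simplest prediction baselines is a constant-velocity model: each tracked object's current velocity is assumed to persist over a short horizon. The sketch below shows that baseline with made-up positions and speeds; the learned regression methods named above would replace this simple model in practice.

```python
import numpy as np

def predict_constant_velocity(position, velocity, horizon_s, dt=0.1):
    """Predict an object's future 2-D positions assuming constant velocity."""
    steps = int(horizon_s / dt)
    times = np.arange(1, steps + 1) * dt
    # Each row is the predicted (x, y) at one future timestamp
    return position + np.outer(times, velocity)

# Illustrative pedestrian at (5 m, -2 m) walking at (0, 1.2) m/s
future = predict_constant_velocity(np.array([5.0, -2.0]),
                                   np.array([0.0, 1.2]),
                                   horizon_s=2.0)
print(future[-1])   # predicted position 2 seconds ahead -> [5.  0.4]
```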

In the next stage, the planning block analyzes the surroundings to plan routes and actions such as stopping or overtaking.
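
A heavily reduced behavior planner can be expressed as a handful of rules mapping the perceived situation to a maneuver. The function below is only a sketch with hypothetical thresholds; real planners search over candidate trajectories and cost functions rather than fixed if/else rules.

```python
def choose_maneuver(gap_to_lead_m, lead_speed_mps, ego_speed_mps,
                    adjacent_lane_clear):
    """Pick a high-level maneuver from a simplified world description.
    Thresholds are illustrative, not tuned values."""
    if gap_to_lead_m < 10.0:
        return "emergency_brake"
    if lead_speed_mps < ego_speed_mps - 3.0:
        # Lead vehicle is noticeably slower than us
        return "overtake" if adjacent_lane_clear else "follow_and_slow_down"
    return "keep_lane"

print(choose_maneuver(35.0, 18.0, 25.0, adjacent_lane_clear=True))   # overtake
print(choose_maneuver(8.0, 0.0, 12.0, adjacent_lane_clear=False))    # emergency_brake
```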


Finally, the control block sends commands to the actuators and uses feedback to keep the vehicle on course.

The control mechanism can also vary based on the type of movement to be performed and the required level of automation. Common examples include proportional-integral-derivative (PID) control, model predictive control (MPC), and linear quadratic regulators (LQR), among other techniques.
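
Of the techniques listed above, PID control is the simplest to show in a few lines. The sketch below holds a target speed by computing a throttle command from the speed error; the gains and the crude vehicle model are illustrative assumptions, not a tuned controller.

```python
class PID:
    """Textbook PID controller (gains here are illustrative, not tuned)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy longitudinal control: hold 20 m/s with a crude first-order vehicle model
pid = PID(kp=0.5, ki=0.05, kd=0.1)
speed, dt = 0.0, 0.1
for _ in range(200):
    throttle = pid.update(20.0 - speed, dt)
    throttle = max(0.0, min(1.0, throttle))        # clamp actuator command
    speed += (4.0 * throttle - 0.05 * speed) * dt  # made-up acceleration model
print(f"Speed after 20 s: {speed:.1f} m/s")
```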

Dorle Controls: Development and Testing of Autonomous Vehicle Systems

Dorle Controls specializes in autonomous system software, including functional safety, simulation, integration, testing, and AI-driven perception. We support features like AEB, ACC, LKA, LCA, FCW, park assist, road texture classification, driver monitoring, and facial landmark tracking. Write to info@dorleco.com to learn more.
