
DENSE: 24/7 Sensing System

Published: 22/03/2020

Partner(s): Mercedes-Benz, Renault, Ibeo, Innoluce, Modulight, Oplatek, Vaisala, Veoneer, Xenics, Cerema, Technical University Tampere, Ulm University and VTT

Year(s): 2016-2020

Advanced prototype technologies from Hitachi’s European R&D Centre are at the heart of DENSE, an EU project to create a 24/7 all-weather sensor suite.

Real-time perception for autonomous driving designed to work in all weather conditions

Successful autonomous driving needs extended perception based on a smart, reliable and cost-efficient sensing system.

DENSE is tackling one of the most pressing problems of autonomous mobility: enabling vehicles to sense their surroundings in all weather conditions. Severe weather such as snow, heavy rain or fog has long been seen as one of the last technical challenges preventing self-driving cars from being brought to market. We address this issue with “smart fusion” functionality that uses artificial neural networks to fuse all sensor information at pixel level and to identify the drivable area in real time.

Hitachi designed and developed three elements of the DENSE project:

  1. High-level functional architecture for sensor data fusion.
  2. Robust and efficient perception system based on artificial intelligence (AI) to detect the drivable area in real time.
  3. New generation of electronic control unit (ECU) designed specifically for AI.

Environmental awareness is the key

Detecting the drivable area (also known as free space) around a vehicle reliably and in real time is one of the prerequisites for autonomous driving. This field has seen rapid progress in recent years, thanks to advances in deep learning and the availability of publicly available annotated datasets. The latest approaches use convolutional neural networks (CNNs) similar to those used for semantic segmentation (the process of classifying each pixel in an image) to infer the free space ahead from camera images and/or 3D LiDAR data.
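
To make the per-pixel idea concrete, the following minimal PyTorch sketch shows how a segmentation network's output can be reduced to a free-space mask. It is purely illustrative, not the project's code: the class layout and the drivable-class index are assumptions.

```python
# Minimal sketch (illustrative, not the project's code) of how per-pixel
# CNN output becomes a free-space mask. The class layout is an assumption.
import torch

NUM_CLASSES = 3        # assumed classes: 0 = background, 1 = drivable, 2 = obstacle
DRIVABLE_CLASS = 1     # hypothetical index of the "drivable" class

# Stand-in for a segmentation CNN's output: per-pixel class scores (logits)
# of shape (batch, classes, height, width).
logits = torch.randn(1, NUM_CLASSES, 256, 512)

# Per-pixel classification: pick the highest-scoring class at every pixel,
labels = logits.argmax(dim=1)           # shape (batch, height, width)

# then reduce the label map to a binary drivable-area (free space) mask.
free_space = labels == DRIVABLE_CLASS   # True where the pixel is drivable

print(f"drivable pixels: {free_space.sum().item()} / {free_space.numel()}")
```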

As well as obstacle avoidance, free space estimation can also be used for road segmentation to facilitate route planning and decision-making. Given its importance, the problem has been studied for some time, and many approaches have been put forward. Most of these have used standard cameras and/or LiDAR as input sensors. More recently, deep neural networks trained to estimate free space from tailored datasets have shown promise. However, this approach typically relies on complex models whose high accuracy comes at a computational cost unsuitable for on-vehicle embedded devices. And despite promising results, it is not yet clear how to efficiently exploit multi-modal input data from different sensors.

Fusing sensor information

The data delivered by the different sensors is processed in separate encoders to extract representative features for each data type. These features are data arrays that contain rich contextual information: in contrast to a pixel, which contains only RGB information, a feature also includes information extracted from neighbouring pixels. The features are then combined (concatenated), and the decoder transforms them into the CNN’s final output: a class label per pixel. This data-fusion strategy makes it possible to combine multimodal data effectively, yielding greater robustness and accuracy for the same computational burden.

Figure: Schematic representation of the encoder-decoder CNNs: (top) ENet; (bottom) the compact double-pipeline cENet2 fusing two sensors.
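
As an illustration of this double-pipeline design, the PyTorch sketch below fuses a camera branch and a LiDAR branch by concatenating their encoder features before a shared decoder. The layer sizes, channel counts and class count are illustrative assumptions, not the actual ENet/cENet2 architecture.

```python
# Illustrative sketch of a double-pipeline fusion network; the encoders and
# decoder are toy stand-ins, not the actual cENet2 layers.
import torch
import torch.nn as nn

def encoder(in_channels: int) -> nn.Sequential:
    """Small convolutional encoder: downsamples and extracts features."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

class FusionNet(nn.Module):
    """Two sensor-specific encoders whose features are concatenated and
    decoded into a per-pixel class label map."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.rgb_encoder = encoder(in_channels=3)    # camera: RGB image
        self.lidar_encoder = encoder(in_channels=1)  # LiDAR: projected depth map
        # Decoder upsamples the fused features back to input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        # Fuse at feature level: concatenate along the channel dimension.
        fused = torch.cat([self.rgb_encoder(rgb), self.lidar_encoder(lidar)], dim=1)
        return self.decoder(fused)  # per-pixel class logits

net = FusionNet()
rgb = torch.randn(1, 3, 256, 512)    # camera frame
lidar = torch.randn(1, 1, 256, 512)  # LiDAR projected to the image plane
print(net(rgb, lidar).shape)         # torch.Size([1, 3, 256, 512])
```

Because fusion happens on feature maps rather than raw pixels, each branch can first encode sensor-specific context before the shared decoder combines the two modalities.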

An efficient, robust multi-sensor perception system

At the final DENSE event at Cerema in Clermont-Ferrand, France, we demonstrated an approach for real-time drivable-road estimation that combines a standard RGB camera and 3D LiDAR. The system shows good performance in urban and suburban environments despite a small amount of training data (fewer than 300 images).
Hitachi’s solution shows state-of-the-art performance and high robustness while maintaining real-time capability, running at over 30 frames per second on a conventional consumer GPU and even on a low-power FPGA. Its flexibility means this multi-sensor solution can be transferred to other vehicle types, including buses, trucks and trams.
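
For readers who want to sanity-check such a real-time budget on their own hardware, a simple throughput measurement might look like the sketch below. The tiny network is a placeholder standing in for the actual perception model.

```python
# Illustrative throughput check for the "over 30 frames per second" target;
# the tiny network below is a placeholder, not Hitachi's perception model.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(                      # placeholder stand-in network
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
).to(device).eval()

frame = torch.randn(1, 3, 256, 512, device=device)  # one camera-sized input

with torch.no_grad():
    for _ in range(10):                     # warm-up before timing
        model(frame)
    if device == "cuda":
        torch.cuda.synchronize()            # wait for queued GPU work
    start = time.perf_counter()
    n = 100
    for _ in range(n):
        model(frame)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{n / elapsed:.1f} frames per second (real-time target: > 30)")
```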

By improving hardware and system efficiency and reducing computational effort, the system contributes towards reliable, cost- and resource-efficient, safe autonomous mobility. And in the context of vehicle electrification, improved efficiency can extend range and reduce battery costs.