Reaching the next level of automated driving.
To drive safely, an automated vehicle needs to know exactly where it is on the road. Is it in the middle lane or the right lane? How far away is the curb? Automated Driving (AD) systems answer questions like these through a process known as localization, which allows a vehicle to determine its precise relationship to its surroundings.
For effective localization, onboard sensors must work together with high-definition (HD) map data. This map data allows vehicles to match what their onboard sensors see with what’s on the map, helping them establish their exact location on the road and plan safe maneuvers.
Until recently, AD systems have only been able to link one sensor (for example, a camera) with one type of data in the map – a mono-sensor localization approach. The drawback is that if that sensor malfunctions or its map data is invalid, the vehicle’s ability to make safe maneuvers on its own is dramatically impaired.
Now, thanks to a pioneering collaboration between Valeo and TomTom, we’ve been able to successfully test a multi-sensor approach: harnessing multiple sensors and multiple layers of map data to improve the robustness of localization and the safety of AD.
Our joint solution combines the TomTom HD Map and RoadDNA localization suite with the Valeo SCALA® 3D LiDAR and Drive4U® Locate technology. We put it to the test during separate driving tests in Tokyo, Paris and San Francisco, using three vehicles with different camera and LiDAR sensor positions. All three proofs of concept successfully demonstrated multi-sensor localization: Valeo’s localization software was able to accurately match the output from the Valeo SCALA® 3D LiDAR with RoadDNA data from the TomTom HD Map.
RoadDNA consists of sensor-agnostic localization layers in the TomTom HD Map, such as traffic signs and 3D pattern information about the roadside. It enables precise localization across sensors such as camera and LiDAR. This increases safety by creating redundancy for map data and localization, and improves system robustness through the multi-sensor approach. It means that even if one sensor fails or is unable to accurately perceive its surroundings (for example, because of poor visibility caused by heavy snow or thick fog), the vehicle can still safely determine its location.
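To illustrate the redundancy principle described above, here is a minimal, hypothetical sketch of confidence-weighted multi-sensor fusion. It is not Valeo's or TomTom's actual implementation; the class, function, threshold, and numbers are all illustrative assumptions. The idea it shows is the one in the text: each sensor produces its own map-matched pose estimate, and estimates from a degraded sensor (say, a camera in thick fog) are discarded so the remaining sensors can still localize the vehicle.

```python
from dataclasses import dataclass

@dataclass
class PoseEstimate:
    x: float           # longitudinal position (m) in a local map frame (illustrative)
    y: float           # lateral position (m)
    confidence: float  # map-match quality, 0.0 (no match) to 1.0 (perfect match)

def fuse_estimates(estimates, min_confidence=0.3):
    """Confidence-weighted fusion of per-sensor pose estimates.

    Sensors whose map match falls below min_confidence are ignored,
    so one failed or degraded sensor does not corrupt the fused pose.
    Returns None if no sensor can localize (caller must fall back,
    e.g. to dead reckoning).
    """
    usable = [e for e in estimates if e.confidence >= min_confidence]
    if not usable:
        return None
    total = sum(e.confidence for e in usable)
    x = sum(e.x * e.confidence for e in usable) / total
    y = sum(e.y * e.confidence for e in usable) / total
    return PoseEstimate(x, y, max(e.confidence for e in usable))

# Hypothetical scenario: the LiDAR match against the map is strong,
# while the camera is degraded by fog and falls below the threshold.
lidar = PoseEstimate(x=105.2, y=1.8, confidence=0.9)
camera = PoseEstimate(x=112.0, y=3.0, confidence=0.1)  # ignored: below 0.3
fused = fuse_estimates([lidar, camera])
```

In this sketch the fused pose simply follows the LiDAR estimate, because the camera's low-confidence match is excluded before averaging. A production system would use far richer models (e.g. probabilistic filtering over time), but the fallback behavior is the same in spirit: losing one sensor degrades the system gracefully rather than disabling localization.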