TLDR
High-definition (HD) maps are essential for self-driving cars, but their reliability degrades as real-world conditions change due to construction, accidents, road wear, and weather. This misalignment between static maps and dynamic environments impacts safety, rider trust, and fleet efficiency. To solve this, we propose a scalable, systematic framework that leverages fleet vehicle cameras and sensors to detect new or changed road features in real time, delivering over-the-air (OTA) updates to the entire fleet. By improving map accuracy, coverage, and responsiveness to dynamic events, the solution reduces safety risks, minimizes sharp maneuvers and false braking, ensures accurate ETAs, and enhances the overall rider experience—advancing the mission of building safe, reliable autonomous vehicles for everyone, everywhere.
Problem
Design a systematic approach by which the AV map can be maintained and updated to make maps more reliable for self-driving cars.
Background
HD maps (high-definition maps) are essential for self-driving cars; they locate objects with high accuracy, down to about 5 cm. These maps solve the localization problem and provide valuable information on objects such as traffic lights, drivable lanes, bicycle lanes, road signs, speed limits, pedestrian crossings, and curbs, which is essential for path planning.
The most significant problem with HD maps is that physical features change: construction, new static objects, road wear and tear, and so on. These changes can feed incorrect environment information to the autonomous driving system and negatively impact the safety and rider experience of autonomous vehicles. Localization based on landmark alignment also suffers degraded accuracy and reliability when the map and the real world are misaligned. Therefore, HD map changes must be detected and managed effectively and at scale so that autonomous vehicles can operate safely.
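To make the change-detection problem concrete, here is a minimal sketch, under simplified assumptions, of how landmarks perceived by a fleet vehicle could be diffed against the HD map to flag candidate changes. The `Landmark` class, the nearest-neighbor matching, and the 0.5 m tolerance are illustrative assumptions, not the actual map pipeline.

```python
# Illustrative sketch: diff perceived landmarks against HD-map landmarks to
# flag candidate map changes. Classes, thresholds, and the matching strategy
# are assumptions for discussion, not the production AV map pipeline.
from dataclasses import dataclass
from math import hypot

@dataclass(frozen=True)
class Landmark:
    kind: str   # e.g. "traffic_light", "road_sign", "curb"
    x: float    # position in a shared map frame, meters
    y: float

def diff_landmarks(map_lms, observed_lms, tol_m=0.5):
    """Return (added, removed) landmarks.

    A perceived landmark with no same-kind map landmark within tol_m is
    'added'; the reverse is 'removed'. A landmark that physically moved
    shows up as one removed plus one added in this naive sketch; a real
    system would track persistent feature IDs instead.
    """
    def nearest_dist(lm, candidates):
        dists = [hypot(c.x - lm.x, c.y - lm.y) for c in candidates if c.kind == lm.kind]
        return min(dists) if dists else float("inf")

    added = [o for o in observed_lms if nearest_dist(o, map_lms) > tol_m]
    removed = [m for m in map_lms if nearest_dist(m, observed_lms) > tol_m]
    return added, removed

# Example: a mapped sign the fleet no longer observes is flagged as removed,
# and a newly perceived traffic light is flagged as added.
map_lms = [Landmark("road_sign", 10.0, 2.0)]
observed = [Landmark("traffic_light", 35.0, 1.5)]
added, removed = diff_landmarks(map_lms, observed)
```

In practice, such candidate changes would be aggregated across many fleet passes before the map is updated, to filter out perception noise and transient objects.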
Solving this problem aligns with the product strategy to build safe and reliable self-driving cars for everyone, everywhere.
Goals
Design a systematic approach by which the AV map can be managed and updated effectively, taking into account changes in physical features such as construction, new static objects, road wear and tear, and the impact of weather on the road.
The approach must be scalable, handle real-time updates, create a better rider experience for self-driving cars, and mitigate impact on fleet operations.
Metrics to Measure Success
- Improving Map Coverage and Reliability (see the measurement sketch after this list)
  - Reduce the % of unclassified objects on the road
  - Increase the % of new static objects identified on the road
  - Increase the % of moved/removed static objects identified
  - Increase the % of road wear and tear identified
  - High-confidence identification of dynamic road scenarios like construction, collisions, and potholes
- Improving Rider Experience
  - Reducing near-miss incidents
  - Reducing sharp maneuvers
  - Reducing false emergency braking
  - Reducing manual interventions
  - Meeting ETAs
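The map-coverage metrics above reduce to identification rates against a labeled audit set. Below is a minimal sketch of how such a rate could be computed; the audit set, object IDs, and the example value are purely illustrative assumptions.

```python
# Illustrative KPI computation for the map-coverage metrics above.
# Assumes a hand-labeled audit set of real-world changes (ground truth) and
# the set of changes the fleet pipeline reported; all names are hypothetical.
def identification_rate(ground_truth_ids, reported_ids):
    """Percentage of audited changes that the pipeline identified."""
    if not ground_truth_ids:
        return 100.0
    found = len(set(ground_truth_ids) & set(reported_ids))
    return 100.0 * found / len(ground_truth_ids)

# Example: 42 of 50 audited new static objects were reported by the fleet.
audited = {f"obj_{i}" for i in range(50)}
reported = {f"obj_{i}" for i in range(42)}
print(identification_rate(audited, reported))  # 84.0
```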
Market Analysis
Based on a short online survey, these are the top reasons riders cite for a better autonomous vehicle (AV) experience:
- Visibility into AV’s perception
- Visibility into AV’s short term and long-term decisions
- Ability to pull over in case of hardware or software failure
- No incidents during trip
- Getting to destination on time
Assumption: The survey is representative of general public feedback.
Based on analysis of US Department of Transportation and NHTSA data, work zone fatalities are the third-leading cause of road fatalities, after speeding and DUI. So, identifying dynamic objects in work zones and updating AV maps with them is critical for the safety of people both inside and outside the vehicle.
Here are some of the scenarios that need to be identified and updated on the AV maps for the benefit of the AV fleet:
- Detour/Road closure updates
- Construction zone updates
- Potholes, road wear and tear
- Accident or disabled vehicles on road
- Snow or ice on road
Assumption: The data set is from 2016, and the data distribution is assumed to be similar for 2019.
User Personas
Features/Hypotheses for MVP to explore
| Goal | Feature Idea | Priority | Notes |
| --- | --- | --- | --- |
| Improving Map Reliability and Coverage | Fleet camera based dynamic event detection for detour/road closure updates and construction zones | High | Create a map-update option where fleet drivers can use the camera on their mobile device for perception of dynamic events on the road. |
| | Fleet camera based dynamic object detection for new/moved/deleted static road objects | High | |
| | Fleet camera based dynamic event detection for accidents/disabled vehicles/emergency vehicles | Medium | |
| | Fleet camera based dynamic object detection for potholes and road wear and tear | Medium | Provide OTA event updates over 5G to the AV fleet. |
| Improving Rider Experience | Real-time OTA event updates to the AV fleet | High | |
| | Visibility into the AV's path plan with dynamic event updates and re-route options for riders | High | Create a visualization of the AV's path plan and overlay event updates from fleet vehicles. |
| | Real-time camera feed and simulation options for vehicle operators / validation of motion planning | Medium | Create a real-time simulation of the AV's response to dynamic event updates and provide re-route options. Use the driver's reaction to dynamic events as validation of the AV's response. |
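To illustrate what a real-time OTA event update from the table above might carry, here is a minimal sketch of an update payload. The field names, event types, and the 5G transport are assumptions rather than a defined schema.

```python
# Illustrative sketch of an over-the-air (OTA) dynamic-event update message.
# Field names and values are assumptions for discussion, not a defined schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class EventType(Enum):
    ROAD_CLOSURE = "road_closure"
    CONSTRUCTION_ZONE = "construction_zone"
    ACCIDENT = "accident"
    POTHOLE = "pothole"
    SNOW_OR_ICE = "snow_or_ice"

@dataclass
class MapEventUpdate:
    event_id: str                 # globally unique event identifier
    event_type: EventType
    lane_ids: List[str]           # affected HD-map lane segments
    confidence: float             # 0..1, fused across fleet observations
    source_vehicle_ids: List[str] = field(default_factory=list)
    expires_at_unix: int = 0      # dynamic events age out; 0 = no expiry set

# Vehicles receiving the update would overlay it on their onboard map copy
# and let the planner treat the affected lanes as blocked or speed-restricted.
update = MapEventUpdate(
    event_id="evt_123",
    event_type=EventType.CONSTRUCTION_ZONE,
    lane_ids=["lane_451", "lane_452"],
    confidence=0.92,
)
```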
Assumption: Wherever we need to maintain the map, we have an active fleet of vehicles available.
User interaction and design
Feature 1: Fleet camera based dynamic event detection for Detour/Road closure updates/Construction zones
Feature 2: Fleet camera based dynamic object detection for new/ moved/ deleted static road objects
Feature 3: Fleet camera based dynamic event detection for accidents/disabled vehicles/emergency vehicles
Feature 4: Fleet camera based dynamic object detection for potholes/ road wear and tear
Feature 5: Real time OTA event updates to AV fleet
Feature 6: Visibility into AV’s path plan with dynamic event updates and re-route options for riders
Feature 7: Real-time camera feed and simulation options for vehicle operators
Feature 8: Driver fleet data for training/validation of motion planning
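As a rough sketch of how Features 5 and 6 could fit together on the vehicle side, the snippet below overlays an incoming OTA event on the onboard map copy and flags a re-route when the planned route is affected. The map, route, and payload structures are hypothetical.

```python
# Illustrative onboard handling of an OTA event update (Features 5 and 6).
# The update payload and the map/route structures are hypothetical.
def apply_event_update(onboard_map, planned_route, update):
    """Overlay an OTA event on the onboard map and flag a re-route if needed.

    onboard_map: dict mapping lane_id -> lane attributes
    planned_route: ordered list of lane_ids the AV intends to traverse
    update: dict with at least 'event_type' and 'lane_ids' keys
    """
    # 1. Record the dynamic event against each affected lane.
    for lane_id in update["lane_ids"]:
        onboard_map.setdefault(lane_id, {}).setdefault("events", []).append(update)

    # 2. If the planned route touches an affected lane, surface a re-route
    #    option to the rider and hand the decision to the planner.
    if any(lane_id in update["lane_ids"] for lane_id in planned_route):
        return {"reroute_needed": True, "reason": update["event_type"]}
    return {"reroute_needed": False}

# Example: a construction-zone event on lane_451 intersects the planned route.
onboard_map = {}
route = ["lane_450", "lane_451", "lane_460"]
event = {"event_type": "construction_zone", "lane_ids": ["lane_451", "lane_452"]}
print(apply_event_update(onboard_map, route, event))  # {'reroute_needed': True, ...}
```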
Hypotheses Testing and Performance Measurement