In our last safety series post, we detailed the safety benefits and innovations unlocked by our zero-occupant autonomous vehicle. By beginning with a fundamentally safer design, we’re able to confidently improve on that baseline with a smaller footprint, sophisticated sensor stack, pedestrian-protecting front end, self-clearing sensors, and even an external airbag. Every detail of our hardware is developed through the lens of safety.
And even though we’re confident in the safety of our hardware design, we know it takes equally safe and intelligent autonomy to guide that hardware through the real world for the benefit of the communities in which it operates. That’s why our hardware works hand in hand with our autonomy stack to ensure we’re deploying safe technology into the world.
To explain how this symbiotic relationship between our hardware and software keeps Nuro as safe as possible, we’re going to dive into our autonomy stack and examine each of its parts from a safety perspective.
Mapping and localization
The first step to safer autonomy is creating a detailed map of the world. After all, a vehicle can’t navigate the world safely if it doesn’t know what the world looks like. Our operators carefully map areas using the same sensor stack as our bots, gathering data that is turned into high-definition, scalable maps. By building our own maps, we can make sure they’re accurate down to the centimeter and contain information tailored to our autonomy stack. Those maps also give the vehicle something to localize against: by matching what its sensors see to the mapped world, it knows precisely where it is within that world.
That level of detail may seem unnecessary, but on the road, a few centimeters can mean the difference between a road and a bike lane. Or a lane and a crosswalk. We know that, to make safe decisions, the vehicle needs a complete picture, and that’s why we devote so much energy to creating our own maps. By maintaining control of our maps, we provide a foundation for our vehicles to make safer choices.
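To make that concrete, here is a deliberately simplified sketch in Python. It is illustrative only, not our production map format or code, and the feature names and coordinates are assumptions made purely for this example. It shows why centimeter-level accuracy matters when a vehicle checks which mapped feature it is closest to:

```python
"""Toy illustration of querying a simplified HD map for the nearest semantic feature."""
from dataclasses import dataclass
import math

@dataclass
class MapFeature:
    label: str                            # e.g. "driving_lane", "bike_lane", "crosswalk"
    polyline: list[tuple[float, float]]   # centerline points in map coordinates (meters)

def point_to_segment_distance(p, a, b):
    """Distance in meters from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def nearest_feature(position, features):
    """Return the map feature closest to the vehicle's position, and its distance."""
    best, best_dist = None, float("inf")
    for f in features:
        for a, b in zip(f.polyline, f.polyline[1:]):
            d = point_to_segment_distance(position, a, b)
            if d < best_dist:
                best, best_dist = f, d
    return best, best_dist

# A few centimeters of localization error can change which feature is nearest:
hd_map = [
    MapFeature("driving_lane", [(0.0, 0.0), (50.0, 0.0)]),
    MapFeature("bike_lane",    [(0.0, 1.8), (50.0, 1.8)]),
]
for y_offset in (0.85, 0.95):   # two position estimates roughly 10 cm apart
    feature, dist = nearest_feature((10.0, y_offset), hd_map)
    print(f"lateral position {y_offset} m -> nearest feature: {feature.label} ({dist:.2f} m away)")
```

Even in this toy example, a shift of about ten centimeters in the position estimate changes whether the vehicle believes it is alongside the driving lane or the bike lane.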
Perception
Once our vehicle understands the static world in which it exists, it needs to understand what objects are within that world so it can make safer decisions about them. Our sensors and algorithms make that possible. First, our custom vehicle allows for custom sensor placement, which gives it a multimodal, 360º view of the world: the vehicle is essentially seeing the world in different ways at all times (thermal, lidar, etc.). Our machine-learning models then detect, track, and classify the objects within that view.
Accurate perception is critical to our vehicle’s decision-making and to how it interacts with objects in the real world, because when you have all the information, you can make safer decisions. Our autonomy stack is constantly absorbing information: the perception system combines data from the vehicle’s sensors with machine-learning models to identify and categorize objects in real time.
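Our actual perception models are far more sophisticated, but a small illustrative sketch can show the basic idea of merging detections from multiple sensor modalities and letting agreement between sensors strengthen a classification. All names, thresholds, and data structures below are assumptions made for clarity, not our real pipeline:

```python
"""Toy sketch of multimodal perception fusion."""
from dataclasses import dataclass
from collections import defaultdict
import math

@dataclass
class Detection:
    sensor: str                       # "lidar", "thermal", ...
    position: tuple[float, float]     # (x, y) in the vehicle frame, meters
    label: str                        # classifier output, e.g. "pedestrian", "vehicle"
    confidence: float                 # classifier confidence in [0, 1]

def fuse_detections(detections, merge_radius_m=1.0):
    """Greedily cluster detections that are close together, then vote on a label."""
    clusters: list[list[Detection]] = []
    for det in detections:
        for cluster in clusters:
            cx = sum(d.position[0] for d in cluster) / len(cluster)
            cy = sum(d.position[1] for d in cluster) / len(cluster)
            if math.hypot(det.position[0] - cx, det.position[1] - cy) < merge_radius_m:
                cluster.append(det)
                break
        else:
            clusters.append([det])

    objects = []
    for cluster in clusters:
        votes = defaultdict(float)
        for d in cluster:
            votes[d.label] += d.confidence   # sensors that agree reinforce each other
        label = max(votes, key=votes.get)
        objects.append((label, votes[label], [d.sensor for d in cluster]))
    return objects

# Lidar and thermal both see something at roughly the same spot:
frame = [
    Detection("lidar",   (12.1,  0.4), "pedestrian", 0.6),
    Detection("thermal", (12.3,  0.5), "pedestrian", 0.7),
    Detection("lidar",   (30.0, -3.2), "vehicle",    0.9),
]
for label, score, sensors in fuse_detections(frame):
    print(f"{label:10s} score={score:.1f} seen by {sensors}")
```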
To understand the magnitude of that feat, consider how many objects are likely to exist around the vehicle at any moment: hundreds of passing cars along with trees, pedestrians, motorcyclists, bicycles, buildings, dogs, and stop lights. Over the course of the day, that equates to hundreds of thousands of objects.
Understanding what those objects are is critical for safety, since raw data doesn’t make value judgments. Until the vehicle can detect and classify objects, it doesn’t understand the difference between a pile of leaves and a person, nor does it know what to do with that information. Correctly identifying objects means our software knows what to value most: people. Our software has been built to prioritize the lives of others on the road, so if the vehicle predicts potential harm involving a person, it will take every available measure to stop or steer clear.
This is true even in inclement weather. Thanks to the multiple layers of sensing described above and our machine-learning approach, safety doesn’t stop when it starts raining; even in dense fog, the vehicle can still detect and classify the objects around it. That’s vital for ensuring our vehicle behaves safely and makes decisions that prioritize people.
Prediction/planning
After our vehicle has an in-depth understanding of the world and the objects within it, it needs to be able to predict the future with a high degree of certainty. That might sound like an impossible ask, but human drivers routinely make these kinds of educated guesses about behavior. For instance, we know that oncoming traffic on the other side of a double yellow line is likely to keep moving in a straight line; it won’t suddenly veer into our lane. We assume that a person walking on a sidewalk is going to keep walking on the sidewalk. And we can tell that a car drifting toward a dotted line is preparing to change lanes. Driving defensively by making predictions about other road users allows human drivers to constantly prioritize their own safety.
That level of prediction requires a special combination of experience and behavioral understanding that’s difficult to replicate with an autonomous vehicle. Without that prediction, the vehicle can’t be prepared and can’t choose the safest plan. So, our prediction system makes multiple hypotheses per second about how others on the road will behave. It then decides how to proceed based on the scenario with the least amount of risk. The vehicle is constantly updating and refining its predictions, helping the bot more safely navigate the dynamic driving environment.
For instance, if a bicyclist suddenly veers out of the bike lane in front of the bot, the bot has already accounted for that action in one of its hypotheses. It then has to decide on the safest behavior, and the autonomy software always prioritizes the safety of other road users.
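A highly simplified sketch can illustrate how weighing several hypotheses at once shapes the chosen plan. This is not our planner; the maneuver names, probabilities, and risk values are made up purely for illustration:

```python
"""Toy sketch of hypothesis-based planning."""

# Hypotheses about the cyclist, with rough probabilities that sum to 1.
cyclist_hypotheses = {
    "stays_in_bike_lane": 0.8,
    "veers_into_our_lane": 0.2,
}

# Risk of each ego maneuver under each hypothesis (higher = worse).
# Potential harm to a person is weighted far more heavily than delay.
risk_table = {
    "maintain_speed":      {"stays_in_bike_lane": 0.0, "veers_into_our_lane": 100.0},
    "slow_and_give_space": {"stays_in_bike_lane": 1.0, "veers_into_our_lane": 5.0},
    "brake_to_stop":       {"stays_in_bike_lane": 3.0, "veers_into_our_lane": 2.0},
}

def expected_risk(maneuver):
    """Expected risk of a maneuver, averaged over the prediction hypotheses."""
    return sum(prob * risk_table[maneuver][hyp]
               for hyp, prob in cyclist_hypotheses.items())

best_plan = min(risk_table, key=expected_risk)
for maneuver in risk_table:
    print(f"{maneuver:20s} expected risk = {expected_risk(maneuver):6.2f}")
print("chosen plan:", best_plan)
```

In this toy example, the plan that leaves room for the cyclist wins even though the cyclist will probably stay in the bike lane, because the small chance of a dangerous outcome is weighted heavily.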
Of course, if you’ve ever driven a car, you know that the world is an unpredictable place. So the prediction system needs to be able to make safe choices even in challenging situations, such as when a group of bicyclists surrounds the vehicle. In that instance, the vehicle would identify the unusual behavior and take extra precautions.
Our approach to prediction and planning means that our vehicle always has a plan for how to navigate the world, and that plan keeps safety a priority, no matter the situation.
Controls
The safety of our driving plans is only as good as our ability to execute them. That’s where controls come in. The controls send signals to the vehicle’s actuators (brakes, steering, and so on) to carry out what the autonomy stack has planned. We’ve designed our control systems to be highly reliable, robust, and testable to ensure the vehicle behaves in accordance with the plan. This requires a sophisticated communication system between the software and the actuators.
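As a rough illustration of that execution loop (again, a toy sketch rather than our control software, with made-up gains and a crude vehicle model), a controller compares the planned speed with the measured speed and commands the actuators to close the gap:

```python
"""Toy sketch of a control loop tracking a planned speed."""

def speed_controller(target_mps, measured_mps, kp=0.5):
    """Proportional controller: positive output = throttle, negative = brake."""
    error = target_mps - measured_mps
    command = kp * error
    return max(-1.0, min(1.0, command))   # clamp to actuator limits

# Simulate the vehicle following a planned slowdown from 8 m/s to 3 m/s.
speed = 8.0
target = 3.0
dt = 0.1          # control loop period, seconds
for step in range(30):
    cmd = speed_controller(target, speed)
    speed += cmd * 2.0 * dt               # crude plant: command scales acceleration
    if step % 5 == 0:
        print(f"t={step*dt:3.1f}s  command={cmd:+.2f}  speed={speed:4.2f} m/s")
```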
In the event we encounter a scenario that is out of scope for the autonomy stack, Nuro has a backup that can take over operation: our teleoperations system. In the rare instances when a bot needs assistance from a human, our teleoperators can operate the vehicle from a special console that gives them a high-quality picture of the world around the bot. The system is used only rarely and in specific situations, but it’s an important safeguard.
Hitting the road
Our zero-occupant design is only the start of deploying safer technology on the road. When combined with our autonomy stack, our technology is truly differentiated in its safety approach. That’s critical because our vehicles share the road with real people: whether encountering an impromptu street hockey game or a bicyclist maneuvering around parked cars, our vehicle is taught to behave safely and prioritize others.
Safety is a priority throughout our autonomy stack, further proof that our vehicles are safe technology that will better the everyday lives of people around the world.