Autonomous driving technology is a prerequisite for significantly better and more extensive public transport services, whether the goal is to reach environmental targets or to improve quality of life.
Large fleets of minibuses can offer granular, high-frequency public transport, and thus have the potential to deter people from using their private cars. A speaker earlier in the day shared that for Oslo, larger-scale shared public transport could cut the total number of vehicles running in the streets by up to 93%. This would reduce resource consumption, but also free up space for pedestrians and improve the urban environment. But the labor cost of operating such large fleets, combined with the driver shortage observed all over Europe, makes this scenario unrealistic unless autonomous driving technology is deployed at scale in public transport. What will fuel the growth of autonomous driving is thus not the search for savings on existing drivers, but rather the opportunity to offer a better level of service, and to solve a dramatic labor shortage in this field of activity.
To reach this large-scale deployment, technology providers have to hit two maturity targets in parallel: the right level of performance (driving fast enough, with the right level of comfort and the right level of availability) and the right level of safety. This leads to the core of my session: what is good enough in terms of safety for autonomous vehicles in public transport?
Performance and safety are twin goals that have to be reached at the same time. The former can be judged by anyone, but how do we determine what is safe enough for autonomous vehicles?
At Terhills, the technology we deployed is good enough in terms of performance - vehicles drive fast enough, and with the right level of comfort to match passengers' expectations - and in terms of safety. But how does one judge the safety level of autonomous vehicles? When it comes to transport, safety is not a feeling. It is not based on personal impressions after a few hundred hours of testing. It is a real science.
And autonomous vehicles are definitely not the first to face these tricky questions. We can learn from the railway and aeronautics industries, which have dealt with the same challenge: what is safe enough, and which set of rules and standards is needed to ensure that these safety targets are reached?
When we talk about public transport, safety ultimately boils down to a brutal figure: what is the maximum number of passengers (or people around the vehicle) that can, on average, be killed or severely injured per kilometer while keeping this new transport mode acceptable to society? Airplanes and trains in their early days had to find quantitative answers to these questions: precise targets, and processes to prove that these targets were met. This is the origin of the heavy homologation processes that underpin these industries, one of the reasons why certifying new planes is so complex, and why it took such a long time for the new China-built aircraft to get the green light. Back in the autonomous vehicle industry, another way to phrase the question is: "how many Uber-like accidents can we afford before this new technology is banned from our streets?". And this question becomes more and more critical with the progressive deployment of large-scale fleets of AVs in the streets.
But there is actually a double lens to this question when we talk about autonomous vehicles, and it is specific to this technology. To gain public acceptance, we must prove that we are "as safe" as a human driver - people will most likely take that as the ultimate point of reference. The challenge is that we are dealing with figures so large that it is very hard to grasp what they mean. If I say "every year, 1.3 million people die on the roads" - a figure from a previous presentation today - it feels huge. But if we turn to the official European statistics, buses on average cause 0.17 fatalities per billion passenger-kilometers (a passenger-kilometer is the number of people transported multiplied by the distance traveled). That does not help anyone picture how safe a human driver is - at least not without a PhD in statistics. Taking rough orders of magnitude (average number of passengers in a bus, average speed, ...), we can change the unit and estimate that, on average, there are about 5 casualties every 10 billion hours of driving. Now we have a unit we can grasp, but a figure too big to be understood.
Let's try to get a feel for what that means. Imagine a group of 20 professional bus drivers who each drive 10 hours per day, every single day of their lives. The statistic above means that, on average, this group will have a deadly accident roughly once every 1,500 years.
Once every 1,500 years, with 20 buses driving every single day... This completely invalidates any "validation" of safety based on a few years of field experience. And to be able to state "we are as safe as a human driver", one has to demonstrate that the technology can reach this incredibly demanding figure.
And this cannot be achieved by collecting real-life driving data, or we would have to wait tens of thousands of years to be statistically relevant! So how can we prove that we are safe enough to deploy fully driverless vehicles in our streets?
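The back-of-the-envelope chain above can be sketched in a few lines of code. The occupancy and speed values below are my own illustrative assumptions, not official figures, and the result is very sensitive to them; the point is the order of magnitude, not the exact number.

```python
# Back-of-the-envelope: from the official per-passenger-km fatality rate
# to "how long until a small fleet sees a fatal accident?".
# AVG_PASSENGERS and AVG_SPEED_KMH are illustrative assumptions.

FATALITIES_PER_PAX_KM = 0.17e-9   # EU statistic cited above
AVG_PASSENGERS = 12               # assumed average bus occupancy
AVG_SPEED_KMH = 25                # assumed average commercial speed

# Fatal accidents per vehicle-km, then per hour of driving
per_vehicle_km = FATALITIES_PER_PAX_KM * AVG_PASSENGERS
per_driving_hour = per_vehicle_km * AVG_SPEED_KMH

# The 20-driver thought experiment: 10 hours/day, every day of the year
fleet_hours_per_year = 20 * 10 * 365
mean_years_between_fatal = 1 / (per_driving_hour * fleet_hours_per_year)

print(f"fatal accidents per driving hour: {per_driving_hour:.2e}")
print(f"mean years between fatal accidents: {mean_years_between_fatal:,.0f}")
```

Whatever exact occupancy and speed one assumes, the interval comes out in the centuries-to-millennia range, which is precisely why a few years of field testing can never statistically validate this level of safety.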
Demonstrating safety is a science, with direct consequences on some technology choices
We just saw that safety is not a feeling; it comes down to statistics and math. The same applies to demonstrating that these safety targets are reached.
For any given site where AVs are to be deployed, there are several key steps.
The first is to identify the potential risks that could arise and, for each of them, to quantify how frequently an accident could occur and how severe its consequences would be.
To take a very concrete example, consider crossing an intersection with a traffic light. If the autonomous bus makes a mistake and crosses when the light is red, an accident is very likely, and it will likely be extremely serious, with potential casualties. The safety target associated with the task "stop when the traffic light is red" will therefore be very high (the acceptable failure rate will be very low in statistical terms).
Once these quantitative figures are computed for each of the potential driving tasks - which in safety language translates into "once the ASIL level required for each safety goal has been weighted" - comes the formal demonstration that they can be reached, either (and often in combination) because the autonomous vehicle is "safe enough", or because we have reduced the level of risk by playing on the physical characteristics of the site (speed reduction, barriers to limit the risk of people crossing unexpectedly, ...).
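To make the ASIL idea concrete, here is a minimal sketch of how severity, exposure, and controllability combine into an ASIL, using the well-known "sum" shortcut that is equivalent to the ISO 26262 risk-graph table. The ratings for the red-light example are my own illustrative guesses, not a real hazard analysis.

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """ISO 26262 risk graph via the sum shortcut.

    severity S in 1..3, exposure E in 1..4, controllability C in 1..3.
    The official lookup table is equivalent to summing the three scores:
    7 -> ASIL A, 8 -> B, 9 -> C, 10 -> D, anything below 7 -> QM
    (QM = quality management only, no specific safety requirement).
    """
    total = severity + exposure + controllability
    return {7: "A", 8: "B", 9: "C", 10: "D"}.get(total, "QM")

# Illustrative rating for "fail to stop at a red light":
# S3 (fatal injuries likely), E4 (encountered every trip),
# C3 (hard for other road users to avoid the collision)
print(asil(3, 4, 3))  # -> "D", the most demanding level
```

The same function shows why mitigations on the site matter: fencing a crossing lowers exposure, and reducing speed lowers severity, which can bring the required ASIL down.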
This safety demonstration is a highly complex and rigorous exercise, a "formal proof" based on mathematical analyses, in which every statement has to be backed with evidence. That, of course, means you have to fully master what happens in your system - in your software and in your onboard computers. And, given the current maturity of the science, it mostly rules out machine-learning-based algorithms: given their "black box" effect, how could you prove that they won't make more than, say, one mistake every 10 million years of operation? When we talk about safety, it is closer to deterministic robotics, which looks like "if X then Y" lines of code.
This does not mean that there are no machine-learning-based algorithms in autonomous vehicles, of course. But to make the level of safety provable, machine-learning-based algorithms take care of interpreting what happens in the medium to long range, while deterministic algorithms take care of the short range and can, for example, trigger a last-second stop if the clever machine-learning software made a mistake. And very often the maximum speed, and the maximum level of scenario complexity, an autonomous vehicle can address are defined not by the intelligence of the machine-learning algorithms, but by the "safety layer", which is by construction a more basic one - at least if your ambition is to produce homologated, fully driverless vehicles.
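A toy sketch of such a deterministic safety layer might look like this (my own illustration, not EasyMile's software): whatever the clever planner decides, a simple rule forces an emergency stop whenever any detected obstacle is closer than the worst-case braking distance plus a margin.

```python
def braking_distance_m(speed_mps: float, decel_mps2: float = 3.0) -> float:
    """Worst-case stopping distance at the current speed: v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def safety_layer(speed_mps: float, obstacle_distances_m: list[float],
                 margin_m: float = 1.0) -> bool:
    """Deterministic "if X then Y" check that overrides the ML planner.

    Returns True if an emergency stop must be triggered: some detected
    obstacle lies within the braking distance plus a safety margin.
    """
    limit = braking_distance_m(speed_mps) + margin_m
    return any(d <= limit for d in obstacle_distances_m)

# At 5 m/s (18 km/h) the vehicle needs about 4.2 m to stop:
print(safety_layer(5.0, [3.0, 20.0]))   # obstacle inside the envelope -> True
print(safety_layer(5.0, [10.0, 20.0]))  # all clear -> False
```

Note how the maximum safe speed falls out of this basic layer, not out of the planner: the faster the vehicle goes, the larger the stopping envelope grows, regardless of how intelligent the perception software is.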
And, as you can imagine, once you have the right safety software, it must run on the right computer - one that can also reach the right level of performance in terms of failure rates. Such safety computers already exist, but with limitations. For example, the computers that run airplane autopilots are safe enough, but not powerful enough to process the huge data sets produced by sensors like LiDARs - and they cost a fortune. That is why, at EasyMile, since we did not find a ready-made safety computer that was safe enough, powerful enough, and at the right price point, we developed our own.
The road ahead
As of today, we can already deliver services that are good enough and safe enough to meet stringent requirements in environments like the one at Terhills: limited traffic, limited speed, and a road design that fits our capabilities. It is already a great step forward, and it opens commercial opportunities throughout Europe.
At EasyMile, we are already working on the next step: autonomous systems that will be performant enough, and safe enough, to address the open-road market at large, with target commercialization in 2025/2026. By adhering to strict safety protocols and constantly pushing the boundaries of innovation, we can pave the way for a safer and more efficient transportation landscape, and deliver the full promise of autonomous driving that we discussed in the introduction.