In the pre-history of robocars, it was common to say that it would be ages before they were legal. Then, at least in some countries, companies like Google discovered they were already legal by default, at least for testing. When the vehicle codes were written long ago, nobody thought to say "no robots."
A few places have laws that require specific actions by a human driver, but many don't. Even so, Google and a few others sought to make the regulatory status of the cars clearer. Soon the state of Nevada passed a bill requiring its DMV to write regulations. Several other states passed similar laws. The first regulations mostly defined testing, and some are moving toward covering operation -- first under human supervision, eventually without a person in the car.
At the same time, the National Highway Traffic Safety Administration (NHTSA), which writes the safety standards for car manufacturers -- the rules that demand seatbelts, airbags and electronic stability control -- was encouraged to see what its role should be. It published a document defining a taxonomy of vehicles and suggestions on how to regulate them.
The first reaction of many is, "of course." It's common to think that it would be strange to put these vehicles on the road without the government certifying them and laying out standards for how they should work. Contrary to that intuition, that is not the way government regulation of automotive technologies has worked, nor the way it should work. And that turns out to be even more true for robocars than for the technologies that came before them.
There are already very strong rules requiring that robocars be operated safely. They are not officially statutes or regulations, but rather they come from tort law, and by and large this says:
If your machine hurts somebody because you made a mistake in how you built it, used it or commanded it, you are responsible and must pay.
This rule has been more than enough to make every developer I have met at a large organization extremely focused on safety.
It may surprise people to learn that most automotive products were introduced and operated for a long time before anything more than this rule applied to them. Things like seatbelts, airbags, crumple zones, anti-lock brakes, electronic stability control and more were all developed by car makers and deployed into the marketplace for several years, in some cases decades, before regulations arose. In many cases those regulations didn't say how to build these tools; rather, they just told the car makers who were not already putting them in cars to get to it. Use in the market showed they made cars safer, and when that became really clear the order came that everybody had to do it.
Many of the systems in cars today that offer early tastes of robocar operation are to this day entirely unregulated, except by that tort rule above. That includes adaptive cruise control, which keeps you a fixed distance from the car ahead of you, and forward collision avoidance, which hits the brakes if you don't do it yourself when you're about to hit something. Some day, some of these may get mandated into cars too.
When regulators want to get a bit more proactive on safety, they still rarely invent technologies and say how to build them. Instead, they work through things like the star ratings in the New Car Assessment Program (NCAP). Vendors aren't legally forced to do the things in those rules, but they won't get the stars, and some car markets demand those stars. The Insurance Institute for Highway Safety, a private consortium, also does ratings and pushes car makers to do things to reach its standards.
Some of the drafted robocar regulations require that there be some effort to certify the safety of marketed robocars. The most common proposal is for manufacturer self-certification, which surprises some people, though in fact it is quite common as a method of regulation. With self-certification, the vendor declares that they have tested the vehicle and it meets listed requirements.
This may seem to be without accountability, but self-certification actually fortifies the basic accountability in the tort principle above. Having made the declaration of safety, the company is now particularly liable not just if the vehicle is unsafe, but also if they actually lied when certifying it safe. In addition, if they have been certifying falsely, they can lose their whole business.
Self-certification is popular because in most cases, even with fairly stable technologies, governments and even independent labs don't have the skills to perform any kind of meaningful evaluation or certification. Governments don't want to take on the social and political liability either. As I'll discuss below, even the companies building the cars are uncertain just how they will test their safety, so it's a certain bet that regulators don't know how to do it, and they're well aware of that.
In our current driving system, we let people take risks and assume responsibility for them. I would venture, for example, that even though it is highly illegal, we "let" millions of drunks get behind the wheel every weekend. If they are caught they are punished, and the punishment is very severe (especially after an accident), but we don't require drivers to pass a breath test before the engine will start. (Some corporations do this on business vehicles, and it is sometimes mandated for people with DUI convictions.)
We don't stop them before they take the wheel; we hope that the penalties will do the job. And drunks are well known to be highly dangerous on the roads. It seems odd that robocars, which will save lives, might not even be allowed on the roads when by all evidence they should not be nearly as dangerous as drunks.
The idea of a robocar sounds dangerous, but in fact, as of this writing, nobody has ever been harmed in what is now more than a million miles of testing, and to my knowledge, beyond a few bent fenders in off-road tests during early development, little property has been damaged. It is not generally seen as the role of the government to imagine hypothetical harms from entirely novel technologies and slow their deployment. Some people call for it, but to enact such rules would be unusual.
The typical methodology of regulation is reactive: let people deploy, see what actually goes wrong, and then forbid or correct it. It is not normally, "we imagine what they might do wrong before they do it, and forbid it in advance."
What will regulators do, then, if they don't test and certify? They can define fairly broad functional standards rather than actual operational standards. Functional standards define goals, rather than the specifics of how to achieve them. You specify that a product must not cause fires, rather than name the type of flame retardant it must use. You state what software must do, and how often it can fail, rather than specifying programming techniques. Functional standards are important when a field is changing rapidly and subject to innovation.
If specifications and standards are written, they tend to be written by groups of engineers rather than regulatory bodies. The regulatory bodies or funding rules may then, at a later date, demand compliance with the industry-created standards.
The first regulations we've seen have related to testing robocars on public roads. Famously, Google started testing their cars on public roads back in 2009, long before any regulations were even dreamed of. An examination of the California Vehicle Code indicated there was nothing in there prohibiting testing.
For testing purposes, Google has a trained safety driver sitting behind the wheel, ready to take it at any moment. Any attempt to turn the wheel or use the pedals disables the automatic systems and puts the safety driver in control. The safety drivers took special driving safety courses and were instructed to take control if they have any doubt about safe operation. For example, if the vehicle is not braking as expected when approaching a crosswalk, they take the controls immediately rather than waiting to see whether it will detect the pedestrians and stop.
The safety drivers are accompanied by a second person in the passenger seat. Known as the software operator, this person monitors diagnostic screens showing what the system is perceiving and planning, and tells the safety driver if something appears to be going wrong. The software operator also serves as an extra set of eyes on the road from time to time.
Many other developers have taken this approach, and some of the regulations written have coded something similar to it into law.
This style of testing makes sense if you consider how we train teenagers to drive. We allow them to get behind the wheel with almost no skill at all, and a driving instructor sits in the passenger seat. While not required by law, professional driving instructors tend to have their own brake pedal, and know how and when to grab the wheel if need be. They let the student learn and make minor mistakes, and correct the major ones.
The law doesn't require that, of course. After taking a simple written test, a teen is allowed to drive with a learner's permit as long as almost any licenced adult is in the car with them. While it varies from country to country, we let these young drivers get full solo licences after only a fairly simple written test and a short road test which covers only a tiny fraction of the situations they will encounter on the road. They then get their paperwork and become the most dangerous drivers on the road.
In contrast, robocar testing procedures have been much more strict, with more oversight by highly trained supervisors. The new regulations go even further, adding requirements for high insurance bonds and special permits. Both software systems and teens will make mistakes, but the reality is that the teens are more dangerous.
Everybody is fully on board with assuring that robocars are safe. In fact, generally everybody wants the cars to outperform human drivers when it comes to safety. This does not mean they will be perfect, but that overall they will improve safety, ideally by a wide margin, and thus make things better for everybody. (Well, almost everybody, as we'll see below.)
Every team wants to be able to demonstrate to themselves, their bosses, their lawyers, the public and the government that they have attained an appropriate safety level.
Everybody likes the goal, but nobody has figured out just how to test and demonstrate that safety. That's because human drivers, even though we wreak great carnage on the highways, still do pretty well individually. In the USA, the average driver has an accident of any type (including unreported parking lot dings) every 250,000 miles or so -- roughly every 6,000 hours of driving, or 25 years of living. Many human drivers never have an accident in their whole lives. There is a fatality about every 2 million hours of driving (and, obviously, the vast majority of people never have one).
Clearly, these are not numbers you can use in a testing standard. You won't say, "If you have new software, go out and drive it for 10 million hours (over 1,000 years driving 24/7) and see if you get fewer than the 5 deaths human drivers would cause." In fact, it's even hard to demand driving for 250,000 miles just to compare with the one accident a typical human would have. New revisions of car software will and should come out quite frequently.
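As a rough back-of-envelope sketch of where figures like these come from (the average speed and annual mileage below are my assumptions for illustration, not figures from the text), the arithmetic works out roughly like this:

```python
# Back-of-envelope check of the human-driver baseline figures above.
# Average speed and annual mileage are assumptions for illustration only.

AVG_SPEED_MPH = 40        # assumed mix of city and highway driving
MILES_PER_YEAR = 10_000   # assumed annual mileage for a typical driver

MILES_PER_ACCIDENT = 250_000      # any accident, including minor dings
HOURS_PER_FATALITY = 2_000_000    # roughly one fatality per 2M driving hours

hours_per_accident = MILES_PER_ACCIDENT / AVG_SPEED_MPH    # ~6,250 hours
years_per_accident = MILES_PER_ACCIDENT / MILES_PER_YEAR   # ~25 years

# The hypothetical test demand from the text: 10 million hours of driving.
test_hours = 10_000_000
expected_human_fatalities = test_hours / HOURS_PER_FATALITY  # ~5 deaths
years_driving_nonstop = test_hours / (24 * 365)              # ~1,100 years

print(f"Hours per accident:                  {hours_per_accident:,.0f}")
print(f"Years per accident:                  {years_per_accident:,.0f}")
print(f"Human deaths expected in the test:   {expected_human_fatalities:.0f}")
print(f"Years of 24/7 driving for the test:  {years_driving_nonstop:,.0f}")
```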
There have been some proposals for how to measure the safety. Google has reported 700,000 miles of testing with no accidents, but there have been times when the safety drivers needed to intervene. 700,000 straight miles without an intervention being necessary would be a better record than human beings have -- in fact, most humans don't drive 700,000 miles in their lifetime. Google is not there yet, but last reported about 80,000 miles between necessary interventions.
There are other factors which occur more frequently and can be measured. Humans, for example, are always making small mistakes and getting away with them. We look away from the road for various reasons, some expected (like changing the radio) and some illegal (like sending text messages). Sometimes we look up to see a car stopping in front of us and slam the brakes hard. Perhaps no accident this time, but this is the sort of mistake that can lead to one. We drift in our lanes and sometimes drift out of them. We even fall asleep at the wheel, sometimes for very short periods (no accident) and sometimes for longer (bad accident).
To judge our robocars, we might look for other small mistakes they make and measure their frequency. The problem is that they don't tend to make the small mistakes humans do. They are always looking (though sometimes their perception systems will temporarily fail to see or identify an obstacle on the road). They tend to stay precisely in the middle of their lane (though sometimes they may temporarily lose their precise position and think the lane is not quite where it is). They don't fall asleep, but components can fail or crash while other backup components keep them going.
While you want as much road testing as you can get (to measure whatever you are measuring), really extensive testing has to be done in simulation. There are several types of simulation testing you might do.
In virtual-world simulators, you can readily test billions of miles of virtual operation in lots of different situations if you have a large computer cluster. You can do this every night with all new revisions of the software. With real-world simulations you can't do nearly as many miles, but you can make all the miles you do test "interesting." That means you don't need to test millions of miles of plain "driving in your lane on an ordinary road." You focus all your testing on crowded roads or dangerous situations, and so encounter as many such situations in 10,000 miles of testing as an ordinary car would encounter in a million miles. You base your scenarios on every dangerous situation your cars have recorded when out cruising the roads, plus any new one that anybody dreams up. Thanks to new sensors, as well as the "dash cams" popular in many countries, we now have access to vast libraries of recordings of dangerous situations and accidents. The cars will have learned from and been tested in every bad situation ever seen, and some never seen. This is something no human ever gets.
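To make the shape of such a harness concrete, here is a minimal sketch in Python. Everything in it -- the `Scenario` record, the `run_in_simulator` stub, the collision-count pass/fail rule, the file name -- is hypothetical, a stand-in for the far more elaborate systems developers actually build.

```python
import json
from dataclasses import dataclass

@dataclass
class Scenario:
    """One 'interesting' situation, e.g. reconstructed from a logged near-miss."""
    scenario_id: str
    description: str
    parameters: dict   # positions, speeds, weather, and so on

def run_in_simulator(software_build: str, scenario: Scenario) -> dict:
    """Stand-in for the real simulator: run the driving software against one
    scenario and return metrics such as collisions and hard-braking events."""
    # A real system would launch the full perception/planning stack here.
    return {"collisions": 0, "hard_brakes": 0, "min_gap_m": 3.2}

def nightly_regression(software_build: str, scenario_file: str) -> bool:
    """Replay the whole library of dangerous scenarios against a new build
    and fail the build if any scenario ends in a collision."""
    with open(scenario_file) as f:
        scenarios = [Scenario(**s) for s in json.load(f)]

    failures = []
    for sc in scenarios:
        result = run_in_simulator(software_build, sc)
        if result["collisions"] > 0:
            failures.append(sc.scenario_id)

    print(f"{len(scenarios)} scenarios replayed, {len(failures)} failures")
    return not failures

# Example (hypothetical build name and scenario file):
# nightly_regression("build-0701", "dangerous_scenarios.json")
```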
The greatest argument for a simple scheme of regulation with no pre-regulation, self-certification and functional standards is the pace at which this technology is changing. Today's regulators willingly admit they don't know nearly enough to regulate or standardize the technology, but the truth is that even the developers and most advanced researchers don't know enough.
Regulation and standardization usually come long after a technology is developed, even a car technology, and that's the case even with fairly simple and predictable technologies.
Standardization, though useful for interoperability and competition, is generally unable to codify anything but the status quo. It's so hard to define standards for things that are not already well established that the Internet Engineering Task Force, which writes standards for the Internet, demands "rough consensus and running code" before considering standardization.
Safety standards are notoriously conservative. But that approach will steer you wrong in a computer-related technology, driven by software and improving faster and faster with each generation.
Google provides us a great example of this, from their server platforms, not their cars. When Google was young, they were building the servers to handle search. Conventional wisdom among people building mission-critical Internet servers at the time was to buy high-end "server grade" components which were expensive but had lower rates of failure.
Google did the opposite and bought low-end, inexpensive gear for their servers. They knew the components would fail more often, so they planned for that failure. They bought a few more than the minimum needed and arranged for backup approaches to take over in case of the many expected failures. When a disk drive failed, nothing special needed to be done. Later on, a staffer would do a run through the server room with a list of failed drives, pulling out the dead ones and slotting in fresh, cheap ones.
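The arithmetic behind that trade-off is simple. As an illustration (the failure rates and replica count here are invented for the example, not Google's actual figures), replicating data across several cheap, failure-prone drives can easily beat relying on a single expensive "reliable" one:

```python
# Illustrative numbers only: these failure rates are assumptions for the
# example, not measurements of any real hardware.

p_cheap_fails = 0.10     # assumed chance a cheap drive fails in a given year
p_premium_fails = 0.02   # assumed chance a "server grade" drive fails

replicas = 3             # each piece of data kept on three cheap drives

# Data on the cheap setup is lost only if every replica fails in the same
# period (ignoring prompt replacement, which makes the real risk far smaller).
p_loss_cheap = p_cheap_fails ** replicas   # 0.001, i.e. 0.1% per year
p_loss_premium = p_premium_fails           # 0.02, i.e. 2% per year

print(f"Single premium drive: {p_loss_premium:.1%} chance of loss per year")
print(f"Three cheap replicas: {p_loss_cheap:.2%} chance of loss per year")
```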
The result is one of the highest-quality and most reliable Internet server operations in the world.
Had Google been told to follow standards in building their server rooms, those rooms would have been much more expensive, and probably less reliable. Instead they came up with new systems, and made things up as they went along. You can read a hypothetical dialogue below on how that might have gone.
There is more than a passing similarity between these problems. Self-driving car software is complex, and will thus involve bigger, more complex computers and operating systems with more chance of failure. As such, many suspect the best path is not to design the systems for maximum reliability, but instead to plan for them to fail, and have backup systems in place which deal safely with the failure. The overall safety goal is attained, but not in a way any certification body or standardization group might expect it to be attained.
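A much-simplified sketch of that "plan for failure" pattern might look like the following. The component names and the fallback behaviour are hypothetical, meant only to show the shape of the idea, not any vendor's actual architecture.

```python
import time

class PrimaryPlanner:
    """Stand-in for the main (complex, failure-prone) driving software."""
    def heartbeat_ok(self) -> bool:
        # In a real system this would check that the planner process is alive
        # and producing fresh trajectories within its deadline.
        return True

class FallbackController:
    """Stand-in for a simple, independently developed backup system whose
    only job is to bring the vehicle to a safe stop."""
    def execute_minimal_risk_stop(self) -> None:
        print("Primary failed: pulling over and stopping safely.")

def supervise(primary: PrimaryPlanner, fallback: FallbackController,
              check_period_s: float = 0.1) -> None:
    """Watchdog loop: if the primary stops responding, hand control to the
    fallback instead of trying to make the primary 'never fail'."""
    while True:
        if not primary.heartbeat_ok():
            fallback.execute_minimal_risk_stop()
            break
        time.sleep(check_period_s)
```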
Today, designers of self-driving cars are still in the prototype phase. They are working on safety, but even they can't tell you precisely what final form that will take. In fact, there will not be a final form; it will be something that constantly evolves, always improving.
It just doesn't work to have people from outside lay out rules about how to be safe when the best ways to be safe are not even worked out yet, and may involve technologies not yet developed.
On a much shorter time scale, it must be realized that the software in these cars will be in a state of constant improvement, with new releases on a frequent basis. The vendors will have extensive regression testing systems of their own design to test these new releases, but it will be close to impossible for any external testing body or certifier to use their own procedures to re-certify a new software build. And the software is the core of the system. We want to encourage this constant improvement, as long as it is done with care. Otherwise, we will see what has been the norm at car companies, where many cars never receive a software update in their lives.
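A toy version of such a regression gate might simply compare a new build's aggregated safety metrics against the previous release and refuse to ship on any regression. The metric names and numbers here are invented for illustration; real systems track far more, with statistical care about noise.

```python
from typing import Dict

# Hypothetical per-build safety metrics, e.g. aggregated from simulation runs.
# Lower is better for every metric tracked here.
Metrics = Dict[str, float]

def release_gate(candidate: Metrics, baseline: Metrics,
                 tolerance: float = 0.0) -> bool:
    """Allow a new software build to ship only if no tracked safety metric
    is worse than the previous release (within an optional tolerance)."""
    regressions = {
        name: (candidate.get(name, float("inf")), baseline[name])
        for name in baseline
        if candidate.get(name, float("inf")) > baseline[name] + tolerance
    }
    for name, (new, old) in regressions.items():
        print(f"REGRESSION in {name}: {new} vs baseline {old}")
    return not regressions

# Example with invented numbers:
baseline = {"collisions_per_million_sim_miles": 0.8, "hard_brakes_per_1000_miles": 2.1}
candidate = {"collisions_per_million_sim_miles": 0.7, "hard_brakes_per_1000_miles": 2.0}
assert release_gate(candidate, baseline)
```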
In regulating driving technologies, we must be careful not to stifle a very important source of innovation, namely the tinkerers and small entrepreneurs. The auto industry started with solo tinkerers, and we don't want to legislate them out of existence. Already, some of the regulations, with their large insurance bonds and legal paperwork, provide a barrier to entry for the small player. We don't want to make that worse.
It is true that tinkerers are the least afraid of the strong deterrents that come from the tort system. They have little to lose, and so are more accepting of risk, and more willing to put others at risk.
In spite of this, we have historically stripped away many of the existing regulations on auto safety for those who are custom building their own cars. We know there is risk in that, but the risk is always on a small scale because the tinkerers are small. Every day, millions of people drive in risky ways (and cause carnage). A small number of tinkerers doing risky things is tiny in comparison, and unlike the reckless drivers, they are generally doing things to improve automobiles, and in the case of robocar technologies, to improve automotive safety.
There is a sad history of regulation where large players actually encourage regulation and participate in the regulatory process. This often results in regulations which are acceptable to the large players -- who have large resources to deal with procedures and bureaucracy -- but create a barrier to entry for small innovators. The large companies often like it just fine that way, and have been known to deliberately encourage it. In the end, to maximize innovation in safety technologies, and the eventual increase in public safety, the counterintuitive step of placing less regulation on small entities and accepting some risk can be the best strategy.
More charitably, the large corporations who tend to lobby for regulations are mostly concerned with how the regulations fit their plans, and may end up accidentally restricting new approaches they haven't thought of.
As explained above, in California and many other places, existing regulations had nothing specific to say about robocars. NHTSA, when it defined its premature taxonomy of "levels" of robocar supervision, suggested that state regulators only allow testing for now, and that if they do allow operation, they insist there be a person on standby in the car -- prohibiting the "level 4" which it expected to come later rather than first. In several cases states have elected to work on regulations only for the earlier "levels." Unfortunately, the reality is that these levels are false: the first commercial product (the Navya) is best classed as a 4, and Google, the leading company, is also putting all of its focus on 4, declaring itself uncomfortable with the problems around 3. Many writers believe that cars most like "level" 4 are the real and most meaningful target of this research.
At the same time in Europe, the existing Vienna Convention on driving rules appeared to demand a human driver, and thus to prohibit robocars. Car companies developing self-drive technology worked to get the convention amended, but they had it amended to support their own projects, which require a human driver in the car to perform certain functions. This effectively made the ban on unmanned operation clearer, in spite of the belief of many that unmanned operation is the most important goal and, contrary to the levels, the first goal rather than the last.
One might argue about the order of things, but there is a more dangerous reality here. Once a law explicitly restricts or bans a future technology, it is very difficult to reverse that ban. When there is no regulation either way, developers can build whatever they imagine, and plan to settle what the regulations should be after they and the world have had a chance to work with the technology and discover whether there are any failures that makers won't correct without regulations.
With a regulation in place, politicians and bureaucrats must decide when to relax regulations or remove bans, and the decision must be made before the new technology can be properly tested -- even if, because it has not been tested, there is no evidence either way on its safety risk.
Generally in such a situation, there will be some risk, and that is certainly the case here. Unfortunately, if regulations are relaxed by politicians, they face the danger of being blamed for any harm done afterward. If unmanned vehicles are not regulated, it is the manufacturer's fault if they do harm. If they were banned, and a politician had the courage to remove the ban, and the vehicles then harm people (as they eventually will), the politician will face blame, and guilt will be decided not in a court, but in the court of public opinion.
Because few politicians or agencies want that risk, they are highly reluctant to relax regulations once they have been put in place. It does happen, but it happens much later than it should. If the new technology improves safety, that delay costs far more lives than would have been lost had the ban never been put in place.
Some people may have been surprised at the assertion that the existing system of accountability -- the liability rule in the box above -- is all the regulation we need. In fact, in the USA, it may even be too much. US rules allow courts to issue very high damages against deep-pocketed defendants, which may scare them out of the industry.
It's almost certain, in fact, that adjudicating the first accidents for these cars will be very expensive, and makers will need to budget for that. Over time, however, it is important that juries not make robocar vendors pay more for accidents than they would make negligent humans pay when they cause them, unless there is a very good reason. Otherwise we could find ourselves in a situation where the vehicles cut accidents by 90% but cost 20 times as much per accident, and are thus not practical to insure.
This will be compounded by the fact that even when the vehicles do outperform humans in safety, the accidents they have will be different from the accidents humans have. The injured parties appearing before juries will be able to make the case that they would not have been hurt except for flaws in the robocar software. The many other people saved from human-caused accidents will just be statistics, not present in front of the court.
There is no need to act on this as yet; it would only become necessary if the awards in trials get out of hand.
In addition to safety, there has been discussion of what changes might be desired to the infrastructure, or to the vehicle codes. As far as the infrastructure goes, the most likely answer is "very little," though there is some attraction to having traffic lights broadcast their state to assure they are always correctly understood. A number of potential changes to the vehicle code might make sense, but it is far too early to finalize what they are.
Initial vehicle code changes would mostly be removal of any clauses that require a human being to be present in the vehicle if the vehicle is ruled safe enough to operate unmanned.
I cover such changes in other articles, notably Advice to Governments on Robocars and my articles on accidents and speed limits.
My long-term view is that the vehicle code method of regulation is not appropriate for robocars at all. Because you can easily get all the developers of robocars in a room together, or on a mailing list, you can work out proper behaviour with consensus and discussion, and then codify reasonable results. Unlike humans, who keep making the same mistakes over and over again, robocars will be reprogrammed after any mistake and so will not make the same mistake twice.
The other big challenge is the fact that in many countries, driving in practice requires constant violation of the rules, which is harder for robots to do.
Perversely, the most common safety question asked about robocars (aside from "who is liable in an accident") is the philosophy class trolley-problem question about what cars should do when faced with two choices, both of which are bad, like running over one person vs. another.
It's not that the question is uninteresting, but this situation is extremely rare, and the law already answers it, so regulators should not spend energy on it simply because it's a popular parlour topic. See this post on the trolley problem for details.
In the end, the debate around regulation needs to consider a crucial point, one not well handled by typical policy methodologies. Robocars promise to be one of the greatest life-saving technologies ever developed outside the medical field. The car, after all, is the second most dangerous consumer product that is legally sold. (The most dangerous requires you to set it on fire and breathe in the smoke.) This is technology to reduce a scourge that kills 1.2 million people and seriously injures millions more each year around the world.
It would be a very unusual regulatory regime that would not end up slowing down the deployment and even the development of this technology. This is what history teaches us. The goal of safety regulations will also be to save lives, and there is a clear public interest in that. But unlike most other situations, the delays caused by regulation will also come with a cost in lives, possibly a much greater one.
In our political systems, we have a hard time reconciling deaths that we might cause, deaths we might have prevented, deaths we might allow others to cause, and deaths we might stop others from preventing. We tend to be much less tolerant of deaths where there is somebody to blame than of deaths through inaction. We don't know how to account for people who were saved, or for people killed in ways we've gotten used to and consider "just part of the environment," like DUI fatalities.
It is my hope that we can make the right decisions: decisions that, while working to minimize harm from robocars, do not ignore the deaths that will be caused by accidents and human driver negligence during any delays in deployment.
Comments can be left at this blog entry.
Disclosures: Google (including the car team) has been a consulting client of mine, as well as some car companies on a limited basis, and I have involvement with a few small startups in the space.
Here's how I would imagine the discussion about Google's server design if they had been subject to outside certification against standards for mission-critical server facilities:
Certifier: So, do you use TLA certified hard drives of class 4 with an MTBF of >1 million hours?
Server Designer: No, we use cheap drives from Thailand. They are not certified by anybody and we estimate their MTBF at under 100,000 hours.
C: What? The spec requires TLA class 4 certified drives!
SD: We've designed our system to expect drive failure and survive it without a hiccup.
C: Oh. Hmmm. Yes, after looking at your design, it looks like it should do that.
SD: Great.
C: Oh no, the spec says TLA class 4 and I can't certify you if you don't have that. I would be responsible.
SD: Why does the spec say that?
C: It was written last year, when nobody was aware of your design. At the time, it matched well accepted best practices.
SD: So let's change the spec.
C: I agree. I will fast-track it and it should come up in the meeting next March.
SD: Next March? We are ready to deploy today.
C: Sorry, not if I can't certify you. Can you switch to the TLA class 4 drives?