Re: Fully-autonomous cars could be impossible without human attention, say experts

I have a very different take on this topic; the real elephant in the room is not being called out. In fact, there are two HUGE elephants in this autonomous car room.
I have no doubt that autonomous cars are on their way, though maybe it will take a little longer than originally anticipated. That tends to be the case with just about all new complex technology.
The main problem I often see in articles such as this one is that they are based on what a human might do, e.g. anticipating a kid running after a ball into the road, and how humans are most likely to behave.
Fact remains that humans are not particularly good at driving cars to start with. You only have to look at the number of accidents, deaths and so on, in any country. Even in a small country such as my home country, the Netherlands, with excellent infrastructure, mandatory driving lessons prior to taking a driving exam, well-maintained cars, well-enforced traffic rules etc., we still have some 600-700 deaths per year (population 17 million). And thousands upon thousands of major accidents, with people ending up in hospital or maimed for life.
The vast majority of these deaths are due to driver error one way or another. Mechanical failure is almost unheard of these days.
So what are some of the problems with human drivers when it comes to their (driving) behaviour:
- they might drink alcohol prior to driving (some whilst driving)
- they might be on (medical) drugs
- they might talk or text on a mobile phone
- they might have had a rough night (e.g. a parent with a young baby crying throughout the night), or just be tired
- they might be having an argument with a fellow passenger
- they might be eating and/or drinking
- they might get blinded by a low sun
- they overestimate their driving ability
- they speed
- they brake too late
- they might fall asleep behind the wheel
- their attention will wander on long boring stretches of road
- etc. etc.
The list of factors affecting human driving capability goes on and on and on. It is just human nature. In many industries this was one of the reasons automation took over (process industry, aviation, trains, etc.).
Short version: human driving performance varies wildly from one individual to the next and is affected by a huge number of factors.
So yes, humans can be pretty good at handling unexpected situations. BUT, that is certainly not a given for all drivers. Take that example of the little kid running after the ball into the road. Some drivers will, correctly, assume that when they see the ball roll into the road a kid might follow, so they slow down in a controlled fashion. Many drivers will not slow down. Some might swerve or slam on the brakes, endangering other cars/drivers.
All these “exceptions” where humans do better than a computer are just that: exceptions. It is far from a given that all humans will react well to them.
Computers/automation will give a much more consistent performance than humans in just about any task you can think of. In fact they tend to be much, much better at it. E.g. the autopilot on my little plane flies my aircraft far more precisely than I can ever hope to.
What is surprising is that these sorts of papers don’t compare how automation does against humans overall. They only look at possible exception scenarios where automation might do worse, overlooking the fact that probably 99.99% of the time the automation is much, much better than any human will ever be.
So here we have Elephant number 1:
Automation is never going to be perfect. But at what level of safety will we prefer automation over humans? What I see in these debates is that we expect automation to be perfect and to outperform humans in every situation and under all circumstances. We certainly don’t expect that from human drivers, so why do we expect it from automation?
If automation were to reduce overall fatalities by 50%, I would think that a tremendous achievement. It still means there will be accidents, and there might even be a rise in certain types of accidents, but it is a vast improvement!
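To put a rough number on what that hypothetical 50% would mean, here is a quick sketch using the Dutch figures quoted earlier (600-700 road deaths per year); the 50% reduction is the hypothetical from this post, not a measured result:

```python
# Rough impact of a hypothetical 50% reduction in road fatalities,
# using the Dutch figures quoted above (600-700 deaths per year).
deaths_low, deaths_high = 600, 700
reduction = 0.5  # hypothetical improvement from automation

saved_low = deaths_low * reduction
saved_high = deaths_high * reduction
print(f"Lives saved per year in the Netherlands alone: {saved_low:.0f}-{saved_high:.0f}")
# → Lives saved per year in the Netherlands alone: 300-350
```

That is 300-350 lives per year in one small country, even with some new accident types appearing.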
And here is Elephant number 2:
I learned about this some years ago when I was spending some time at Stanford University on a management course. As part of the course we also visited and had sessions with some of the different faculties, and we spent some time with the guys and girls working on autonomous cars. This sort of advanced R&D at Stanford won’t make its way into commercial products for 10-12 years, so they are very much ahead of the game, or rather ahead of the current state of the art.
When asked what their biggest challenge was, they told us: human interaction with the automation. The notion that humans are any good at overseeing automation is beyond ridiculous. It has been proven again and again, often at huge cost in money and lives, that humans suck at it.
Humans have a very short attention span. You can only spend so much time looking at a dashboard/road etc. If a machine is doing the actual driving, within minutes your mind will wander. When the machine/automation needs your attention, it will take several seconds before you have assessed the situation and come up with a reasonable response.
They had actually built a mathematical model showing that a semi-autonomous car needing oversight and occasional human assistance is most likely more prone to crashing than a regular car. Again, not because the automation isn’t good enough, but due to human nature and how we behave.
If you fly at cruise altitude on autopilot, the pilots’ minds will start to wander. Pilots exhibit exactly the same behaviour as drivers of cars, so some safety measures are included. During busy flight segments such as take-off, landing and descent the autopilot might be switched on, but the pilots will be very active and involved in what they are doing; keeping that up for a little while is not a problem.
In my mind, the worst scenario is relying on humans to step in when automation fails, because when you are driving a car and something happens, you need to respond pretty quickly. One of the reasons we have so many accidents is that, as it is, humans aren’t too good at responding quickly to unexpected situations. Some people are; most people are not and need a bit of time. And driving at only 50 km/h means you are travelling at almost 14 m/s.
If you were riding in your semi-autonomous car, looking down at your iPad, and the computer signals that it needs your assistance, it will take several seconds before you have looked up, looked around you, got your hands on the wheel and feet on the pedals, and started responding. You might have travelled 30-60 metres already!
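As a quick sanity check on those numbers, here is a sketch of the arithmetic; the 2-4 second takeover delay is my reading of "several seconds" above, not measured data:

```python
# Distance travelled while a distracted driver takes back control.
# Assumed takeover delay: 2-4 seconds (an assumption, not measured data).

def distance_travelled(speed_kmh: float, delay_s: float) -> float:
    """Metres covered at a constant speed during the takeover delay."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * delay_s

for delay in (2.0, 4.0):
    d = distance_travelled(50, delay)
    print(f"At 50 km/h with a {delay:.0f} s takeover delay: {d:.0f} m")
# → At 50 km/h with a 2 s takeover delay: 28 m
# → At 50 km/h with a 4 s takeover delay: 56 m
```

At motorway speeds the picture is worse still: the same delay at 120 km/h covers roughly 67-133 metres.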
Normal driving routines keep your mind occupied and your attention focussed, and routine/experience helps as well. But put your mind to a very different task than driving a car, and switching back to driving is going to take time. Much more than normal driver reaction times.
Again, the problem with these two elephants is humans. This article is also a prime example of human behaviour: it focusses on the exception rather than the overall result. That doesn’t mean you should not be looking at these exceptions, but you need to do so in a much broader context (e.g. the impact on overall accident numbers).
Humans are not particularly good at making decisions based on scientific data. So even though there might be more than ample evidence of automation outperforming humans and saving lives, many people will still prefer humans to be in control.
Of course, there is also the matter of how these autonomous cars need to be fitted into the various countries’ legal systems, insurance etc. That is up to politicians; they are humans, behave as humans (well, at least in theory) and listen to their human voters.
Jeroen
Last edited by Jeroen : 14th September 2022 at 14:35.