“Do nothing” is usually not that bad an approach to dealing with an unknown situation. I could easily see a situation where trying to back away from the person you just hit would increase the damage.
As other comments have suggested, we should wait for the video before judging whether this was really a bad choice by the autonomous car.
And “no it isn’t” isn’t a very convincing argument to the contrary.
Yes, in this particular case, maybe the car should have moved a bit. I’m talking about the general case. What are the odds that a car happens to come to a stop with its wheel exactly on top of someone’s limb, versus having that wheel finish up somewhere near the person where further movement might cause additional harm? And how can the car know which situation it’s currently in?
If you want autonomous cars outside in the real world (as opposed to artificial lab and test scenarios), then they have to deal with real world situations. This situation has happened in reality. You don’t need to ask about odds anymore.
> how can the car know which situation it’s currently in?
That is an engineering question. A good one. And again, one of those that should have been solved before they let this car out into the real world.
This situation happened, yes. Do you think this is the only time that an autonomous car will ever find itself straddling a pedestrian and need to decide which way to move its tires to avoid running over their head? You can’t just grab one very specific case and tell the car to treat every situation as if it was identical to that, when most cases are probably going to be quite different.
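For what it’s worth, the trade-off being argued here can be written down as a tiny expected-harm calculation. The sketch below is purely illustrative: the two actions, the cost numbers, and the function names are all my own invention and have nothing to do with any real autonomous-vehicle software. It only shows why “stay put” is the safer default while the car cannot estimate whether its wheel is on a limb, and why the answer flips once it can.

```python
# Toy illustration of the "do nothing under uncertainty" argument above.
# Every number and name here is hypothetical; this is NOT how any real
# autonomous-vehicle stack works. It only makes the trade-off concrete.

def expected_harm(action: str, p_on_limb: float) -> float:
    """Expected additional harm for a stopped car, given the probability
    that a wheel is currently resting on the pedestrian (made-up costs)."""
    # Hypothetical harm scores: staying put while on a limb keeps crushing it;
    # rolling off a limb risks running over something worse; rolling when the
    # wheel is merely *near* the person risks a new impact.
    costs = {
        "stay": {"on_limb": 8.0, "near_person": 0.0},
        "move": {"on_limb": 3.0, "near_person": 5.0},
    }
    c = costs[action]
    return p_on_limb * c["on_limb"] + (1.0 - p_on_limb) * c["near_person"]

def choose_action(p_on_limb: float) -> str:
    """Pick whichever action has the lower expected harm."""
    return min(("stay", "move"), key=lambda a: expected_harm(a, p_on_limb))

if __name__ == "__main__":
    for p in (0.05, 0.3, 0.8):
        print(p, choose_action(p),
              {a: round(expected_harm(a, p), 2) for a in ("stay", "move")})
    # With these made-up costs, "stay" wins when the car thinks it is probably
    # NOT on a limb, and "move" wins when it probably is.
```

With these made-up costs, the default only flips once the car is reasonably confident it is actually on the limb, which is exactly the perception question the rest of the thread argues about.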
> “Do nothing” is usually not that bad an approach to dealing with an unknown situation. I could easily see a situation where trying to back away from the person you just hit would increase the damage.
> As other comments have suggested, we should wait for the video before judging whether this was really a bad choice by the autonomous car.
That doesn’t get any truer even if you repeat it a few more times.
The truth is that a general approach was not sufficient here. This car’s programming was NOT good enough. It made a bad decision with bad consequences.
> And “no it isn’t” isn’t a very convincing argument to the contrary.
> Yes, in this particular case, maybe the car should have moved a bit. I’m talking about the general case. What are the odds that a car happens to come to a stop with its wheel exactly on top of someone’s limb, versus having that wheel finish up somewhere near the person where further movement might cause additional harm? And how can the car know which situation it’s currently in?
Wrong question.
If you want autonomous cars outside in the real world (as opposed to artificial lab and test scenarios), then they have to deal with real world situations. This situation has happened in reality. You don’t need to ask about odds anymore.
That is an engineering question. A good one. And again, one of those that should have been solved before they let this car out into the real world.
This situation happened, yes. Do you think this is the only time that an autonomous car will ever find itself straddling a pedestrian and need to decide which way to move its tires to avoid running over their head? You can’t just grab one very specific case and tell the car to treat every situation as if it was identical to that, when most cases are probably going to be quite different.