Accident fatalities in a TESLA car might have been avoided by using NEXYAD software modules: the time for the monitoring circuit has come
by NEXYAD
Passions run high around the issue of autonomous, or semi-autonomous, vehicles. Recently a TESLA car was involved in a fatal accident while in autopilot mode. NEXYAD has studied traffic safety for twenty years, and we offer some elements of reflection on this type of accident.
The processing chain of an auto-pilot system (perception, data fusion, decision-making, and automatic control of actuators) is usually very well designed and based on high-performance modules. But unfortunately, this is not enough to avoid the risk of accidents.
Indeed, to handle this risk, what is missing is a second circuit (parallel and independent) called the « monitoring » circuit.
To understand this need for a monitoring circuit, one must first understand the level of complexity of a road scene viewed from a camera.
The variability of road scenes is actually far greater than what a normal person comes to imagine. Indeed, a color image that uses eight bits for each color (hence 24 bits, as there are 3 colors) can encode 2^24 different color values per pixel (more than 16 million possible values).
An HD video frame has more than 2 million pixels.
This means that the matrix of an HD 24-bit color image can encode more than (2^24)^2,000,000 different images!
This huge number is simply unimaginable.
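To give a sense of this number, a few lines of Python make the count concrete (assuming a 1920 x 1080 frame; the exact resolution does not change the conclusion):

```python
import math

# Back-of-the-envelope count of distinct HD 24-bit color images.
# Assumption: a 1920 x 1080 frame, 8 bits per color channel, 3 channels.
pixels = 1920 * 1080              # more than 2 million pixels
values_per_pixel = 2 ** 24        # 16,777,216 possible colors per pixel

# The number of distinct images is values_per_pixel ** pixels.
# That number is far too large to print, so count its decimal digits instead.
digits = int(pixels * math.log10(values_per_pixel)) + 1
print(f"The count of possible images has about {digits:,} decimal digits")
```

The count has roughly 15 million decimal digits; by comparison, the number of atoms in the observable universe has fewer than 100 digits.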
This raises the question of the validation process of camera-based driver assistance systems (ADAS). It really is impossible to test all possible
cases of road scenes! Even one or ten million kilometers of testing represents a negligible part of the possible cases!
And please do not think that you can easily reduce this complexity by assuming that road scenes have a "SPECIFIC SHAPE" (the road in front of the car).
You may then be surprised if ever a spider settles in front of the camera lens ... or if the car is behind a truck or bus carrying an advertising poster (and in such a
case, all images are possible, including the image of a straight road in the desert, while the actual road, the one on which the vehicle drives, turns!).
Example: the "transparent" trucks proposed by Samsung
Completeness of the validation database is impossible; however, there is a solution to use in such a case. First, build the validation database using
a methodology (NEXYAD has published the « AGENDA » methodology in the field of ADAS validation, see
http://groupementadas.canalblog.com/tag/Methodology%20Agenda ). And second, use in real time a parallel « monitoring » circuit whose role is to « close »
the open world. In other words, all the cases already met during the validation process must be used to build a kind of confined space called the "known space".
Then, in real time while the vehicle is driving, one checks whether the current case (what is seen NOW by the cameras and other sensors) is INSIDE this « known space »,
so that the built-in intelligence knows whether the road scene matches cases in which it can react properly, or whether the road scene is very different (an unknown case).
In the second case, it is obvious that the embedded intelligence must take default decisions: slow down, warn the driver, etc.
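As a toy illustration of this « known space » check (the feature names, values and threshold below are purely hypothetical, not NEXYAD's actual representation), one can keep the validated cases as feature vectors and test how far the current case lies from the nearest of them:

```python
import numpy as np

# Hypothetical sketch of the "known space" check.
# Assumption: every validated case is summarized as a feature vector, and
# the current sensor frame is reduced to the same features.
known_cases = np.array([
    [0.8, 0.1, 30.0],   # e.g. [visibility score, rain level, speed km/h]
    [0.9, 0.0, 50.0],
    [0.6, 0.4, 20.0],
])

def in_known_space(current, cases, threshold=0.5):
    """Return True if the current case is close enough to a validated case."""
    # Normalize each feature so distances are comparable across units.
    scale = cases.max(axis=0) - cases.min(axis=0) + 1e-9
    dists = np.linalg.norm((cases - current) / scale, axis=1)
    return bool(dists.min() < threshold)

current = np.array([0.85, 0.05, 40.0])
if in_known_space(current, known_cases):
    action = "nominal"   # the auto-pilot may react as validated
else:
    action = "default"   # unknown case: slow down, warn the driver
```

A real system would of course use far richer features and a statistically calibrated boundary; the point is only that membership in the known space is checkable in real time.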
Therefore, in parallel with the construction of the main auto-pilot chain of perception, intelligence and control, it is necessary to build a second circuit for case monitoring.
NEXYAD has been developing three software modules aimed at building this kind of monitoring circuit.
. Visibility measurement with VisiNex onboard:
Visibility can vary greatly across road scenes: more or less light from the sky, rain, fog, lights, glare, objects (vehicles, pedestrians) of the same color as the road or the sky, etc.
VisiNex Onboard measures the visibility of the road scene. The standard model of visibility integrated into VisiNex is human vision (VisiNex gives results 100% correlated
with the marks of a human panel). VisiNex can predict, given the characteristics of the image (brightness, contrast, signal-to-noise ratio, etc.), whether or not a human observer can detect
the objects in the image. Indeed, all vision systems require a minimum image quality to be able to "see"/detect. Most of the time, machine vision systems have
greater needs than human vision (except night vision using infra-red). NEXYAD is able to « test » an artificial vision system in terms of efficiency, on a database that samples visibility
through design plans (we can test Mobileye, or NEXYAD modules such as RoadNex – road detection – or ObstaNex – obstacle detection –, …). Then NEXYAD builds the « space of known
visibility situations ». If an image leads to a visibility situation out of the bounds of this space, then
it means that no one knows whether the camera-based detection system is able to detect obstacles properly or not (even if we don't know how the detection module works: what kinds of
algorithms, etc.)!
In such a case, no detection does not mean no obstacle; it means « I don't know ». And this applies as well when using NEXYAD detection modules, MOBILEYE ones, or any other
computer vision-based detector: VisiNex Onboard does not need to know the detection algorithms because it is a functional approach.
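As a purely functional illustration (the real VisiNex model of human vision is not reproduced here; the statistics and bounds below are invented), one can imagine reducing a frame to crude measurements such as brightness and contrast and testing them against bounds learned from the validation database:

```python
import numpy as np

# Illustrative sketch only: reduce an image to crude "visibility" statistics
# and test them against bounds learned from the validation database.
def visibility_features(image):
    """image: 2-D array of gray levels in [0, 255]."""
    brightness = image.mean()
    contrast = image.std()
    return np.array([brightness, contrast])

# Hypothetical bounds of the "space of known visibility situations".
lower = np.array([40.0, 15.0])    # too dark / too flat below this
upper = np.array([220.0, 90.0])

def detection_trustworthy(image):
    f = visibility_features(image)
    return bool(np.all(f >= lower) and np.all(f <= upper))

# A nearly uniform dark frame falls outside the known space: here
# "no detection" does not mean "no obstacle", it means "I don't know".
night_frame = np.full((1080, 1920), 10.0)
print(detection_trustworthy(night_frame))
```

The functional point survives the simplification: the check never looks inside the detection algorithm, only at whether the scene resembles conditions under which the detector was validated.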
. Statistical closure of the space generated by the set of sensors (cameras, lidar, radar, etc.):
It is rare that an ADAS is based only on a camera. One often uses other sensors, such as LIDAR, RADAR, etc.
NB: by adding sensors, one increases the overall complexity of the perception signals! The idea in this case is to use what car manufacturers usually call "data fusion".
One after the other, sensors may be tricked while the others continue to operate efficiently: that is the key idea.
For example, in the case of very poor visibility, cameras (in the visible spectrum) are inoperative: e.g. by night, under heavy rain, when crossing other cars' headlights. The radar is not
supposed to be affected in such a case. But if the infrastructure contains a lot of metal (e.g. a toll gate, a metal bridge, …), then the radar will be dazzled, not the camera.
The idea of using a variety of sensors and then combining the data is correct. NB: one can see that you need a third sensor if you want to cope with poor visibility on a metal
bridge (visible-spectrum cameras are blind and the radar is dazzled): lidar, infrared, etc.
But such a « data fusion system » should be tested anyway... and as we explained before, validation would need a quasi-infinite number of tests.
NEXYAD developed a statistical data closure system that lets you know whether a multi-sensor perception system is currently meeting a « known case » or not: ReliaNex.
With ReliaNex, if the vehicle is experiencing a situation similar to the ones already encountered during testing, then your data fusion system will be considered reliable.
If not, then ReliaNex will warn you that the reliability of your data fusion system is very low.
Again, it is then possible to apply default strategies: warn the driver, slow down the car, …
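To illustrate the idea of statistical closure (the actual ReliaNex statistic is not public; the Mahalanobis-distance test below is only a stand-in, with invented data), one can measure how far the current fused perception state lies from the cloud of states recorded during validation:

```python
import numpy as np

# Illustration of statistical closure over fused sensor states.
# Fused perception states recorded during validation runs (rows = cases).
validated = np.array([
    [0.90, 0.80, 25.0],   # e.g. [camera confidence, radar confidence, range m]
    [0.70, 0.90, 40.0],
    [0.80, 0.85, 30.0],
    [0.95, 0.70, 20.0],
])

mean = validated.mean(axis=0)
# Regularize the covariance so it is always invertible.
cov = np.cov(validated, rowvar=False) + 1e-6 * np.eye(3)
cov_inv = np.linalg.inv(cov)

def fusion_reliable(state, limit=3.0):
    """Low Mahalanobis distance => state resembles validated cases."""
    d = state - mean
    m = float(np.sqrt(d @ cov_inv @ d))
    return m < limit
```

When `fusion_reliable` returns False, the system cannot claim its fused output is trustworthy, and default strategies (warn the driver, slow down) apply.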
. Onboard real-time risk assessment:
The SafetyNex NEXYAD module calculates the risk taken by the driver: the driver can be a human being or a robot (auto-pilot).
It gathers all the risk factors, including the speed of the vehicle, the danger level of the infrastructure, points of interest (school, ...),
time to collision, reliabilities, etc.
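As a sketch of what gathering such risk factors can look like (the weights and normalizations below are invented for illustration; SafetyNex's actual model is not public):

```python
# Illustrative only: combine normalized risk factors into one [0, 1] score.
def risk_score(speed_kmh, speed_limit_kmh, ttc_s, infra_danger, near_school):
    """All factor scores are mapped to [0, 1]; higher means riskier."""
    overspeed = max(0.0, min(1.0, (speed_kmh - speed_limit_kmh) / speed_limit_kmh))
    ttc_risk = max(0.0, min(1.0, 1.0 - ttc_s / 10.0))  # risky below ~10 s TTC
    poi_risk = 0.3 if near_school else 0.0             # point-of-interest factor
    # Hypothetical weights; a real system would calibrate on accident data.
    score = 0.35 * overspeed + 0.35 * ttc_risk + 0.2 * infra_danger + 0.1 * poi_risk
    return min(1.0, score)

print(risk_score(speed_kmh=70, speed_limit_kmh=50, ttc_s=2.0,
                 infra_danger=0.5, near_school=True))
```

The same score applies whether the driver is human or a robot: it is an assessment of the behavior, not of who (or what) produces it.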
NB: SafetyNex was selected in June 2016 by BMW during BMW Tech Date Challenge.
Read: http://nexyad.net/Automotive-Transportation/?p=2472
Read more : http://www.nexyad.net/automotive-Transportation