The Evolution of Deep Learning for ADAS Applications

Embedded vision solutions will be a key enabler for making automobiles fully autonomous. Giving an automobile a set of eyes – in the form of multiple cameras and image sensors – is a first step, but it is also critical for the automobile to interpret the content of those images and react accordingly. To accomplish this, embedded vision processors must be optimized in hardware for performance while achieving low power and small area, offer tools to program that hardware efficiently, and support the algorithms that run on them.

The significant automotive safety improvements of the past (e.g., shatter-resistant glass, three-point seatbelts, airbags) were passive safety measures designed to minimize damage during an accident. We now have technology that can actively help the driver avoid crashing in the first place. Advanced Driver Assistance Systems (ADAS) will help autonomous vehicles become a reality. Blind spot detection can alert a driver who tries to move into an occupied lane. Lane departure warning alerts the driver if the car is drifting out of its lane, and Lane Keep Aid actively steers the car back into the lane. Pedestrian detection notifies the driver that pedestrians are in front of or behind the car, and Automatic Emergency Braking applies the brakes to avoid a collision or pedestrian injury. As these ADAS features are combined, we get closer to autonomous vehicles, all enabled by high-performance vision processing.

Auto manufacturers are including more cameras in their cars, as shown in Figure 1. A front-facing camera can detect pedestrians or other obstacles and, with the right algorithms, assist the driver in braking. A rear-facing camera – mandatory in the United States for most new vehicles starting in 2018 – can save lives by alerting the driver to objects behind the car, outside the driver's field of view. A camera in the car's cockpit facing the driver can detect distracted driving and warn the driver. And most recently, adding four to six additional cameras can provide a 360-degree view around the car.

Figure 1: Cameras, enabled by high-performance vision processors, can "see" if objects are not in the expected place

Vision Processors for Object Detection

Since the driver is already facing forward, a front-facing camera may seem unnecessary. To be of value, then, the front-facing camera has to detect obstacles and alert the driver consistently faster than the driver could on their own. While an ADAS system can physically react faster than a human driver, it needs embedded vision to analyze the streaming video in real time and determine what to react to.
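
To make the idea of per-frame, real-time analysis concrete, the sketch below shows the basic shape of such a loop: grab a frame, run a detector, and hand any detections to the alert/braking logic. It is only an illustration using OpenCV's classic HOG pedestrian detector on a desktop; a production ADAS pipeline would run an optimized detector on the vision processor itself, and the camera index and downstream logic here are assumptions.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cam(0);                 // front-facing camera assumed at device 0
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    cv::Mat frame;
    while (cam.read(frame)) {                // process every incoming frame
        std::vector<cv::Rect> people;
        hog.detectMultiScale(frame, people); // find pedestrian candidates in this frame
        if (!people.empty()) {
            // In a real system, detections would feed the alert and
            // automatic emergency braking logic with object positions.
        }
    }
    return 0;
}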

Vision processors are based on heterogeneous processing units, meaning the programming tasks are divided among processing units with different strengths (Figure 2). Most of the code will be written in C or C++ for a traditional, easy-to-program 32-bit scalar processor. The vector DSP unit will perform most of the computation, because its very long instruction word (VLIW) architecture can execute many operations in parallel for the pixel processing of each incoming image.
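
As a rough sketch of that split, the code below separates scalar control code from a data-parallel, per-pixel kernel. The control function is the kind of C/C++ that stays on the scalar core; the inner pixel loop is the kind of regular, parallel work a programmer or compiler would map onto the wide vector DSP. The function names and the offloading are illustrative assumptions, not a specific vendor API.

#include <cstdint>
#include <cstddef>

// Data-parallel 3x3 box blur: a candidate kernel for the vector DSP,
// which can process many pixels per cycle.
static void blur_3x3(const uint8_t* src, uint8_t* dst,
                     size_t width, size_t height) {
    for (size_t y = 1; y + 1 < height; ++y) {
        for (size_t x = 1; x + 1 < width; ++x) {
            unsigned sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += src[(y + dy) * width + (x + dx)];
            dst[y * width + x] = static_cast<uint8_t>(sum / 9);
        }
    }
}

// Scalar-core control code: sequencing, I/O, and decision logic stay in
// ordinary C/C++ on the easy-to-program 32-bit scalar processor.
void process_frame(const uint8_t* frame, uint8_t* work,
                   size_t width, size_t height) {
    blur_3x3(frame, work, width, height);    // offloaded to the vector DSP in practice
    // ...further stages (gradients, detection, classification) would follow here.
}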

Read more: https://www.synopsys.com/designware-ip/technical-bulletin/deep-learning-dwtb-q217.html