An Autonomous Vehicle that drives like a good human driver
Autonomous vehicles are one of the main R&D subjects in the automotive sector. Autonomy has been described in a functional way and classified into six levels, from 0 to 5.
Today, intelligent cars that are for sale (TESLA, ...) are said to be level 3, and some car manufacturers claim to have demo cars at level 4.
The chain of the Autonomous Driving System (AD System) is almost always a feed forward chain, from sensors to actuators, and applies or may apply Artificial Intelligence algorithms and modules:
If such a chain is level 3, how can it be upgraded to level 5? For a feed forward chain, if you want it to be robust (a good enough response to almost "any" input), the only solution is for EVERY box of the chain to be nearly perfect and provide the right output to almost any input. There is no other solution for a feed forward chain.
Then, measuring the distance between what we do today (the "number" of use cases that we do not process properly, etc.), some forecasters deduce:
. the number of km to be tested (millions of billions)
. the computing power needed to run improved versions of the boxes
All those deductions lead them to think that personal vehicles will not be autonomous (level 5) before 2030.
The solution we present in this article is very simple: switch from a feed forward chain to a feedback loop. Everyone in the technical and scientific world knows that this makes the system inherently adaptive to input variations, and therefore much more robust than a pure feed forward chain. Indeed, a feedback control self-corrects the output in order to minimize an error. So if the feed forward chain fails to produce the correct output, the feedback control computes an "error" and iteratively adjusts in order to suppress or minimize it.
Pure feedback control is very lean in terms of sensors and computing, but it can act only after the error is computed. It is therefore slower than a feed forward chain that would directly give the correct output all the time. But as we explained above, the cost of such a feed forward solution tends to infinity.
The question is: how to compute an error for the AD System?
The company NEXYAD proposed to use its technology SafetyNex: SafetyNex computes in real time, at every moment, the driving risk that the driver is currently taking. If the driver is a human being, a vocal alert can be sent when the risk rises, and it has been shown that such a simple solution reduces the accident rate by at least 20%. If the driver is an AD system, it means the AD system can be "aware" of the risk it is currently taking.
NEXYAD proposed the following feedback chain:
One can see that the error is the difference between the computed driving risk (computed by SafetyNex) and the maximum accepted risk.
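To make the feedback principle concrete, here is a minimal sketch of the loop in Python. SafetyNex's actual risk computation is proprietary, so `compute_driving_risk` below is a toy stand-in (risk growing with speed), and the gain, risk threshold, and iteration count are illustrative only:

```python
# Minimal sketch of the feedback idea, not NEXYAD's actual implementation.
# `compute_driving_risk` stands in for SafetyNex's proprietary risk estimator.

def compute_driving_risk(speed_kmh: float) -> float:
    """Toy risk model: risk grows with speed, saturating at 1.0."""
    return min(speed_kmh / 130.0, 1.0)

def feedback_speed_control(speed_kmh: float,
                           max_accepted_risk: float = 0.5,
                           gain: float = 20.0,
                           steps: int = 50) -> float:
    """Iteratively adjust speed so the computed risk converges to the accepted level."""
    for _ in range(steps):
        error = compute_driving_risk(speed_kmh) - max_accepted_risk
        speed_kmh -= gain * error          # slow down when risk is too high
        speed_kmh = max(speed_kmh, 0.0)
    return speed_kmh
```

With this toy risk model, starting at 120 km/h with a maximum accepted risk of 0.5, the loop converges to about 65 km/h, the speed at which the computed risk equals the accepted risk. The point is the architecture: the feed forward chain's output is continuously corrected against the risk error, rather than having to be perfect up front.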
This solution is being integrated into several demo cars around the world (Tier One companies and car manufacturers) in order to study its robustness.
First results are stunning, even when only the mandatory inputs are used for SafetyNex. Indeed, minimizing the driving risk means, by definition, applying cautiousness rules while driving, trying never to be surprised by an emergency situation. This is called ANTICIPATION.
As shown below, there is a huge difference between reacting to an emergency (the domain of Advanced Driver Assistance Systems, ADAS) and anticipation:
Anticipation is very efficient at low cost because it applies to EVERY driving situation, whereas:
. ADAS apply only in case of a detected emergency situation: rare
. PASSIVE SAFETY applies only in case of a crash: very rare
Of course, in practice, the idea is to combine anticipation, ADAS, and passive safety. Note: it is interesting to notice that, for technical reasons, development went from passive safety to ADAS, and now to anticipation.
And the better the feed forward chain, the better the result: a poor feed forward chain would produce very large errors that may take a long time to adjust away. The better the feed forward chain, the quicker the AD system, and in real time it is very valuable to be fast.
As a conclusion, we can say that all the good work done by engineering teams worldwide to reach a level 3 feed forward chain can now be robustified by SafetyNex, using the NEXYAD figure.
We will share news of our customers' demo cars when authorized.
Such a car natively applies cautiousness rules and could pass the driving licence test, even in dense urban areas or in the countryside. We can then say that an AD system can drive like a good human driver.
And with no complexification of the feed forward chain: so it may be ready much BEFORE 2030!
Rethinking GPS: Engineering Next-Gen Location at Uber
Location and navigation using global positioning systems (GPS) is deeply embedded in our daily lives, and is particularly crucial to Uber’s services. To orchestrate quick, efficient pickups, our GPS technologies need to know the locations of matched riders and drivers, as well as provide navigation guidance from a driver’s current location to where the rider needs to be picked up, and then, to the rider’s chosen destination. For this process to work seamlessly, the location estimates for riders and drivers need to be as precise as possible.
Since the (literal!) launch of GPS in 1973, we have advanced our understanding of the world, experienced exponential growth in the computational power available to us, and developed powerful algorithms to model uncertainty from fields like robotics. While our lives have become increasingly dependent on GPS, the fundamentals of how GPS works have not changed that much, which leads to significant performance limitations. In our opinion, it is time to rethink some of the starting assumptions that were true in 1973 regarding where and how we use GPS, as well as the computational power and additional information we can bring to bear to improve it.
While GPS works well under clear skies, its location estimates can be wildly inaccurate (with a margin of error of 50 meters or more) when we need it the most: in densely populated and highly built-up urban areas, where many of our users are located. To overcome this challenge, we developed a software upgrade to GPS for Android which substantially improves location accuracy in urban environments via a client-server architecture that utilizes 3D maps and performs sophisticated probabilistic computations on GPS data available through Android’s GNSS APIs.
In this article, we discuss why GPS can perform poorly in urban environments and outline how we fix it using advanced signal processing algorithms deployed at scale on our server infrastructure.
Read more: https://eng.uber.com/rethinking-gps/
Cars that read your brain: 4 trends that will shape new vehicles of the future
Every year, some of the world’s biggest car manufacturers compete to show off the splashy new tech that they believe will wow drivers of the future (and sell cars, of course).
So what did we learn from CES 2018? Well, this year, the big trends in the motoring sphere were artificial intelligence, Brain-to-Vehicle technology, huge infotainment systems and ‘digital cockpits’. Let’s take a look at how these were presented.
Artificial Intelligence (AI)
Hyundai presented its Intelligent Personal Cockpit – a concept which employs technologies ranging from voice recognition to AI, ‘Internet of Things’ technology and even driver stress detection. Yup, that’s right, your car of the future will serve as a personal assistant and nurse as it checks your vital signs so the vehicle can take action if the driver is stressed.
This ‘Wellness Care’ function utilises two sensors placed on the steering wheel and the seat. The steering wheel bio-sensor and seat heart rate sensor monitor the heart rate for sudden changes and can detect the driver’s stress level. If the sensors detect stress, the system is equipped to take action and provide access to online visual consultation with a doctor – or simply turn on a soothing playlist while dimming the cabin lighting for a calmer drive. Super smart, if a little creepy.
Volkswagen showed off its AI-enabled features which included facial recognition for unlocking the vehicle from the outside, driver alerts for bicycles, gesture recognition for user controls, natural language understanding for flawless voice control, and gaze tracking for driver distraction alerts. Very useful and practical applications of AI by Volkswagen.
Trends in connected cars – at a glance
We are on the verge of a new generation of mobility. Development of highly or fully automated vehicles holds the promise of extremely high travel convenience and safety for vehicle users. To allow drivers to devote their full attention to other activities while underway, the vehicle must always be able to interpret the traffic situation and predict how it will develop. Current advanced driver assistance systems (ADAS) are based on a wide variety of sensors, but in the future, communications technology will play a role in the overall system and provide essential support for traffic situation analysis. The current practice of reacting to the behavior of other traffic participants can develop into a complete understanding of their intentions and further cooperation between traffic participants. Systems for vehicle-to-everything (V2X) networking are a basic prerequisite for this. All over the world, the industry now recognizes the importance of communications technology.
3 Reasons Usage Based Insurance is Driving Billions of Dollars in Global Telematics Market Growth
It’s not hard to understand why usage based insurance (UBI), otherwise known as behaviour-based insurance or pay-as-you-drive (PAYD), is becoming popular among drivers in North America and the rest of the world. With improved governmental regulations and growing technology market penetration, industry estimates expect the automotive telematics market to grow at a CAGR of 23–24% over the next few years.
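To make the growth figure concrete, compounding at a 23% CAGR can be sketched in a few lines of Python. The starting market size below is illustrative only, not a figure from the article:

```python
def project_market(current_size: float, cagr: float, years: int) -> float:
    """Compound a market size forward at a constant annual growth rate."""
    return current_size * (1 + cagr) ** years

# Illustrative example: a $40B market growing at 23% per year for 5 years
# roughly triples in size (these are made-up figures, not market data).
print(round(project_market(40.0, 0.23, 5), 1))  # -> 112.6
```

At that rate the market nearly triples in five years, which is why a "$100 billion by 2022" trajectory follows plausibly from a 23–24% CAGR.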
By using cutting-edge technology to closely monitor the driving behaviour of automotive insurance customers, insurance providers are being given a more accurate picture of how their clients drive. The result: many drivers are being rewarded for their good on-road behaviour through declining premiums and more money in their wallets at the end of the month.
Given this situation and the upward market trend of telematics reaching over $100 billion by 2022, it’s not particularly surprising that some insurance companies, such as Progressive, have seen usage based insurance programs become steadily more popular over the last couple of years. Today, Progressive has over 25 billion miles logged through Snapshot and roughly 19 million policies in force.
The rising satisfaction and retention of UBI isn’t just due to the fact that more drivers are saving money on their automotive insurance. It’s also the result of rapidly evolving automotive telematics technology, the hardware and software involved in helping insurance companies track driver behaviour in a safe, secure, and accurate way.
When automotive telematics and usage based insurance emerged a few years ago, most insurance providers used wireless devices that plugged into a vehicle’s on-board diagnostics (or OBD-II) port to receive information about a driver’s on-road behaviour. However, over the past year or so this approach has changed, with more and more insurance providers taking advantage of significant enhancements in smartphone technology. Today, smartphone telematics — and specifically the sensors found inside most popular smartphones from companies like Apple and Samsung — can help insurance providers understand more than ever about their customers’ driving habits.
With premiums declining, it’s obvious why many drivers would opt for a usage based insurance program over the traditional automotive insurance policy. But what do insurance companies have to gain from implementing behaviour-based insurance programs? Is it worth all of the hassle just to get a more accurate picture of how their customers drive? Or is there more to it than that?
Artificial Intelligence within Mobility
At the turn of the 20th century, the first motorized vehicles were starting to appear on the roads. In those early days, automobiles shared the road with horse-drawn carriages.
At the time, horses were quite reliable compared to this new ‘beta’ technology. Horsemen could easily control their horses, and they didn’t make as much noise as their mechanical successors.
Fast forward 100 years. King car rules the roads, and the only place you see horse-drawn carriages is in movies or in a museum. The point here is that the mobility industry was, is and will always be fast-paced and open to new innovation. One of the innovations currently changing our perception of what mobility means is Artificial Intelligence.
What is Artificial Intelligence in Mobility?
Mobility in this context essentially means transportation. So, we’re talking about cars, trucks, freight services and bicycles. Most people have a vague idea of what artificial intelligence means. When they hear the phrase, what comes to mind is a super smart computer. Although this isn’t far from the truth, it isn’t a fully comprehensive answer. Artificial Intelligence is a combination of Analytics, Advanced Analytics, and Machine Learning.
Analytics can be described as the ability to record information and access it when you want to. An example is recording the travel of individual trucks in a fleet in order to get the total mileage of the fleet. Advanced analytics occurs when you write algorithms that search for hidden patterns in the analytics data, for example an algorithm that clusters vehicles with similar mileage patterns. Machine learning is when the algorithm improves itself by analyzing more data: the more data it can crunch, the better it becomes at the task it was made for.
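As an illustration of the "clustering vehicles with similar mileage patterns" example, here is a minimal 1-D k-means sketch in Python. The fleet mileage figures are made up for the example:

```python
# Toy 1-D k-means: group vehicles by weekly mileage so similar usage
# patterns end up in the same cluster. Illustrative data, not real fleet logs.

def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar values into k groups by iteratively refining centroids."""
    # Spread the initial centroids across the sorted values.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            # Assign each value to its nearest centroid.
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Weekly mileage (km) for six trucks: two usage patterns emerge,
# roughly "local delivery" vs "long haul".
weekly_km = [410, 395, 420, 1180, 1250, 1210]
print(kmeans_1d(weekly_km))
```

The machine-learning point from the paragraph above carries over directly: feed this algorithm more weeks of data and the centroids become a steadily better summary of how the fleet is actually used.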
Finally, Artificial Intelligence is when the computer program can basically do things that humans can do. So, if a computer program can predict vehicle mileage and adjust the routes to reduce the mileage and save costs, it becomes Artificial intelligence.
A very important part of artificial intelligence is the data it needs to crunch to get better. This is not much of a problem; humans generate lots of data. According to Forbes, by 2020 every single human being will generate about 1.7 MB of data per second. Do the math and you will see that this is quite a lot of data. According to IDC, enterprise data is expected to grow 14-fold from 2012 to 2020. Yet we are under-utilizing this data: research from the McKinsey Global Institute shows we capture just about 50 to 60 percent of its value.
Why is Artificial Intelligence within mobility important?
Incorporating Artificial Intelligence within mobility is very important for a single reason: we are humans. Humans by nature get tired after doing an activity repeatedly for a long period of time, meaning the efficiency applied to the first task will have dropped by the time of the 1000th. A computer, on the other hand, can do the 1000th task with the same enthusiasm and tenacity as the first.
Machine vision, AI, driving an evolution in fleet safety
Video event recorders and telematics platforms are advancing rapidly with machine vision and artificial intelligence (AI).
Early versions of the products — no more than a decade old by now — are limited in comparison to newer versions that use these technologies.
Lytx, a pioneer of video-based driver safety systems, recently shared with CCJ how it uses machine vision and AI to simplify the workflow for fleets while expanding the capabilities of safety and risk management.
About eight years ago, Lytx started down a path to use supervised and unsupervised machine learning in its DriveCam platform, explains Brandon Nixon, the company’s chief executive officer.
Machine learning starts onboard the vehicle with edge computing devices that detect “trigger” events. Algorithms in these devices constantly monitor streaming video and data from integrated cameras (machine vision) as well as from the vehicle databus and sensors.
Examples of basic trigger events include rapid deceleration and speeding. When these and more complex events occur, the devices capture and transmit video and data event files to servers in the cloud.
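A toy sketch of such an onboard trigger detector is shown below. The thresholds and sampling rate are hypothetical, chosen for illustration, and are not Lytx's actual parameters:

```python
# Toy "trigger event" detector over a stream of speed samples (1 sample/s),
# inspired by the rapid-deceleration and speeding examples above.
# Thresholds are illustrative, not Lytx's real configuration.

SPEED_LIMIT_KMH = 90.0
HARSH_DECEL_KMH_PER_S = 15.0   # hypothetical deceleration threshold

def detect_triggers(speed_samples_kmh, dt_s=1.0):
    """Scan a stream of speed samples and flag (sample_index, event) triggers."""
    events = []
    for t in range(1, len(speed_samples_kmh)):
        prev, cur = speed_samples_kmh[t - 1], speed_samples_kmh[t]
        if cur > SPEED_LIMIT_KMH:
            events.append((t, "speeding"))
        if (prev - cur) / dt_s > HARSH_DECEL_KMH_PER_S:
            events.append((t, "rapid_deceleration"))
    return events

# Five seconds of driving: a brief speeding episode, then hard braking.
print(detect_triggers([80, 85, 95, 70, 68]))
```

In the real product, an event like these would trigger capture and upload of the surrounding video and sensor data to the cloud, rather than just a flag.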
UK-based AV startup Capri has been working with Bristol Airport to trial the on-site deployment of autonomous shuttle pods.
Focusing on trips of up to five miles, Capri aims to develop the next generation of autonomous pods, capable of safely navigating both pedestrian and on-road environments.
George Lunt, technical director, Aecom, said, “Connected and autonomous vehicles are predicted to make a huge impact on society, but require significant research and development to support their future commercial use. With a wide range of potential markets for on-demand mobility services, our project has clear economic benefits that will inform the business cases for these types of schemes. Our work with Bristol Airport is an important stage of the project as we look in detail at the underlying operating models required to deliver a viable service.”
Peloton launches Level 4 platooning system for heavy vehicles
Self-driving technology developer Peloton has unveiled its new Level 4 Automated Following solution, designed to double the productivity of truck drivers through platooning.
“We’ve taken a different approach to commercial introduction of automation in Class 8 vehicles,” said Josh Switkes, CEO, Peloton Technology. “We see the drivers as the world’s best sensors, and we are leveraging this to enable today’s drivers to be more productive through automated following platoons.”
Peloton’s Automated Following solution is an advanced platooning system that leverages vehicle-to-vehicle (V2V) technology to enable a single driver to operate two separate vehicles. Platooning systems work by combining V2V communications, radar-based active braking systems, and vehicle control algorithms to deliver connected driving with the aim of improving aerodynamics, fuel economy and safety.
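The gap-keeping part of such a platooning system can be sketched as a simple proportional controller on the reported inter-vehicle gap. The real system fuses V2V messages, radar, and active braking; the target gap and gain below are purely illustrative:

```python
# Simplified sketch of the gap-keeping loop in a two-truck platoon.
# A single proportional term on the gap error illustrates the principle;
# the target gap and gain are illustrative, not Peloton's parameters.

TARGET_GAP_M = 15.0
GAIN = 0.5  # illustrative proportional gain, (m/s) per metre of gap error

def follower_speed_command(lead_speed_ms: float, gap_m: float) -> float:
    """Match the leader's speed, corrected proportionally by the gap error."""
    gap_error = gap_m - TARGET_GAP_M      # positive when too far behind
    return max(0.0, lead_speed_ms + GAIN * gap_error)

print(follower_speed_command(25.0, 20.0))  # too far back: speed up to 27.5
print(follower_speed_command(25.0, 10.0))  # too close: slow down to 22.5
```

Because the gap and lead-truck speed arrive over low-latency V2V links rather than through the follower's own perception alone, this loop can react faster than a human driver, which is what makes the close following distances of platooning safe.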
Bosch and Daimler set to launch fully autonomous valet service in Stuttgart
The parking garage at the Mercedes-Benz Museum in Stuttgart, Germany, has become home to the world’s first driverless SAE Level 4 parking application, officially approved for everyday use.
Developed by Bosch and Daimler, the automated parking service is accessed via a smartphone app and requires no safety driver. Mercedes-Benz owners whose cars are equipped with the relevant self-driving technology simply pull into the allocated valet parking bay and engage the service using the app. The vehicle then drives itself to an empty spot in the parking garage. When returning, owners consult the app and collect their vehicle from the allocated collection area.
“This approval from the Baden-Württemberg authorities sets a precedent for obtaining approval in the future for the parking service in parking garages around the world,” said Dr. Michael Hafner, head of drive technologies and automated driving at Daimler. “As a pioneer in automated driving, our project paves the way for automated valet parking to go into mass production in the future.”
Dr. Markus Heyn, board of management member at Robert Bosch, added, “This decision by the authorities shows that innovations like automated valet parking are possible. Autonomous driving and parking are important building blocks for tomorrow’s mobility. The driverless parking system shows just how far we have already progressed along this development path.”
The autonomous valet application relies on the interplay between the intelligent parking garage infrastructure supplied by Bosch and Mercedes-Benz’s self-driving technology. Bosch sensors in the parking garage monitor the driving corridor and its surroundings and provide the information needed to guide the vehicle. The technology in the car converts the commands from the infrastructure into driving maneuvers. This way, cars can even drive themselves up and down ramps to move between stories in the parking garage. If the infrastructure sensors detect an obstacle, the vehicle stops immediately. As an added safety feature, turquoise lighting indicates that the vehicle is in automated driving mode and informs passers-by and other road users that the vehicle is driving itself.