Apple’s latest AI research explores the problem of mapping systems for self-driving cars
Apple’s ambitions to build a self-driving car have reportedly shifted gears over the years, but we know the company is focusing on the software side of the equation. This June, CEO Tim Cook said the iPhone maker is building autonomous systems that could power a range of different vehicles (rather than, say, working on its own Apple-branded SUVs). “We sort of see it as the mother of all AI projects,” said Cook.
Now, new research from the company’s machine learning team confirms this direction, with a paper published on pre-print server arXiv describing a mapping system that could be put to a range of uses, including powering “autonomous navigation, housekeeping robots, and augmented / virtual reality.” Though, to be clear, this is just academic research: it doesn’t indicate that Apple is working on these particular use cases.
The system in question is called VoxelNet, and it’s all about improving the data we get from the eyes of most self-driving systems: LIDAR sensors. These components are integral to lots of autonomous vehicles, and work by bouncing lasers off nearby objects to build a 3D model of their surroundings. They offer better depth information than regular cameras, but produce patchy maps, with large sections often rendered invisible by objects blocking the laser’s path. This leads to maps that are “sparse and have highly variable point density,” as Apple’s researchers put it. In other words, these maps aren’t good enough for safe self-driving.
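To get a feel for the “sparse and highly variable point density” problem, here’s a minimal sketch (not code from Apple’s paper; all numbers and names are made up for illustration). It generates a toy point cloud with a dense cluster of returns near the sensor and a few sparse returns from a distant object, then buckets the points into a cubic grid — the kind of voxel partitioning VoxelNet’s name refers to — and counts points per occupied cell:

```python
import random
from collections import Counter

random.seed(0)

def voxel_of(point, size=1.0):
    """Map a 3D point to the integer index of its cubic voxel."""
    return tuple(int(c // size) for c in point)

# Toy "scan": a dense cluster of laser returns near the sensor,
# plus a handful of sparse returns from a distant object.
near = [(random.gauss(2.0, 0.3), random.gauss(0.0, 0.3), random.gauss(0.0, 0.3))
        for _ in range(500)]
far = [(random.gauss(40.0, 1.0), random.gauss(5.0, 1.0), random.gauss(0.0, 1.0))
       for _ in range(20)]
points = near + far

# Count points per occupied 1 m voxel: almost all of the grid is empty
# (sparse), and the occupied voxels hold wildly different point counts
# (variable density).
density = Counter(voxel_of(p) for p in points)
print("occupied voxels:", len(density))
print("points per voxel: min", min(density.values()),
      "max", max(density.values()))
```

Running this shows a few heavily populated voxels near the sensor and many one-point voxels far away — the uneven input a detection network has to cope with.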
Read more: https://www.theverge.com/2017/11/22/16689810/apple-ai-research-self-driving-cars-autonomous