How Machine Learning and Digital Mapping Impact Autonomous Vehicles

September 15, 2020

To a large extent, machine learning puts the “intelligence” in Artificial Intelligence. Think of it as the technology that provides the learning capabilities to make AI tasks faster and safer — a true value-add in a world moving toward technological autonomy.

The machine learning market covers a wide swath of AI-aided activities, but according to recent research, most funding dollars back machine learning applications ($28 billion), machine learning platforms ($14 billion), and small robots and computer vision platforms (each at $7 billion).1

While risk management and performance analytics hold the top spots among use cases, machine learning is surging in automation, with 61% of machines being trained to learn.1 This isn’t surprising given the rise of mobile autonomous robotics in “smart” manufacturing and warehousing facilities, retail environments, and, to an even larger degree, autonomous vehicles.

While self-driving passenger cars generally receive the most buzz, ride-sharing vehicles, trucks, and some public transport round out the global autonomous vehicle market. What’s more, the technology that powers these innovations, such as massive Light Detection and Ranging (LiDAR) scans and maps, adds to the nearly $55 billion market size.1


How Autonomous Vehicles “See” Things

Autonomous vehicles are essentially sensor-based powerhouses, using four distinct technologies to navigate the world2 (a simplified sketch of how these inputs fit together follows the list):

  1. Optical cameras to scan the driving environment, traffic signs, and other traffic data
  2. Radar and ultrasound to determine distances between vehicles and other objects
  3. LiDAR scanners for creating 3D images of vehicle surroundings
  4. HD maps that essentially “make sense” of data gathered from the other sensors by putting details in pre-recorded map form for drivers
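
To make the division of labor concrete, here is a minimal, purely illustrative sketch of how readings from these four sources could be fused into a single picture of the road. The class and field names are assumptions made up for this example and do not come from any real autonomous-driving stack.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative only: these types are not from any real autonomous-driving
# stack; they simply mirror the four sensor roles described in the list.

@dataclass
class CameraFrame:
    detected_signs: List[str]       # e.g. ["stop", "speed_limit_35"]

@dataclass
class RangeReading:
    sensor: str                     # "radar" or "ultrasound"
    distance_m: float               # distance to the nearest object, in meters

@dataclass
class LidarScan:
    points: List[Tuple[float, float, float]]  # 3D points describing surroundings

@dataclass
class HdMapTile:
    lane_geometry: List[Tuple[float, float]]  # pre-recorded lane centerline
    speed_limit: int

@dataclass
class EnvironmentSnapshot:
    """One fused view of the world, built from all four sources."""
    camera: CameraFrame
    ranges: List[RangeReading]
    lidar: LidarScan
    map_tile: HdMapTile

    def nearest_obstacle_m(self) -> float:
        # Radar/ultrasound supply coarse distances; take the closest reading.
        return min(r.distance_m for r in self.ranges)

# The HD map tile is what "makes sense" of the raw sensor data, e.g. by
# letting the vehicle check a detected sign against the mapped speed limit.
snapshot = EnvironmentSnapshot(
    camera=CameraFrame(detected_signs=["speed_limit_35"]),
    ranges=[RangeReading("radar", 22.4), RangeReading("ultrasound", 1.8)],
    lidar=LidarScan(points=[(1.0, 2.0, 0.5)]),
    map_tile=HdMapTile(lane_geometry=[(0.0, 0.0), (0.0, 50.0)], speed_limit=35),
)
print(snapshot.nearest_obstacle_m())  # 1.8
```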

The technology and sequence are logical and form the basis of most autonomous vehicle “vision.” What MIT researchers are currently grappling with is whether such advanced, complicated, and expensive technology is actually necessary. A combination of machine learning and digital mapping may be just as efficient, if not more so.


(Machine) Learning by Doing

The theory behind essentially “skipping to Step 4” is relatively simple3 (see the sketch after this list):

  • Using a machine learning model, such as one built for image recognition, the vehicle first observes how a human driver reacts to real-life conditions
  • Data from these observations inform the vehicle about the possible choices in each environment (e.g., only a left turn is available, there is no straight-ahead option, etc.)
  • The vehicle “remembers” those choices via digital mapping when it encounters the same roadway conditions on its own routes
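
The sketch below is a toy illustration of those three steps, not the MIT researchers’ actual model: the vehicle tallies the maneuvers it observes a human driver make at each mapped location, then replays the most common choice when it returns. All names here (locations, maneuvers, the fallback behavior) are invented for the example.

```python
from collections import defaultdict, Counter

# Toy sketch of the three steps above: watch a human driver, tally which
# maneuver was chosen at each mapped location, then replay the most
# frequently observed choice when the vehicle encounters that spot again.

class LearnedRouteMemory:
    def __init__(self):
        # location -> counts of maneuvers observed from the human driver
        self._observations = defaultdict(Counter)

    def observe(self, location, maneuver):
        """Steps 1-2: record how the human reacted at this spot."""
        self._observations[location][maneuver] += 1

    def allowed_maneuvers(self, location):
        """The choices ever seen to work here (e.g., only a left turn)."""
        return set(self._observations[location])

    def recall(self, location, default="slow_down"):
        """Step 3: replay the most common human choice; fall back if unseen."""
        counts = self._observations.get(location)
        if not counts:
            return default
        return counts.most_common(1)[0][0]

# Usage: locations are plain string keys in this toy example.
memory = LearnedRouteMemory()
memory.observe("oak_st_and_3rd", "turn_left")
memory.observe("oak_st_and_3rd", "turn_left")
memory.observe("oak_st_and_3rd", "go_straight")

print(memory.allowed_maneuvers("oak_st_and_3rd"))  # {'turn_left', 'go_straight'}
print(memory.recall("oak_st_and_3rd"))             # 'turn_left'
print(memory.recall("unmapped_corner"))            # 'slow_down'
```

In a real system, the “locations” would be coordinates within a lightweight digital map rather than string keys, and the learned behavior would come from a trained model rather than simple counts.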

The researchers heading up this breakthrough in autonomous technology liken it to humanizing the vehicle’s experience: driving according to a map, localizing within an environment, and reasoning through decisions when presented with conflicting information.3

The result? Whole-world navigation that takes as little as 40 gigabytes of data... versus the mega-gig strategy that requires 4,000 gigabytes of data just to store information for the city of San Francisco alone!3

Digital mapping continues to push the boundaries of what is possible on the roadways and in other environments as commerce and manufacturing embrace autonomous robotics and vehicles. What can it do for you now and in the future? Contact the ADCi team to find out!

SOURCES
1. FinancesOnline, “55 Notable Machine Learning Statistics: 2020 Market Share & Data Analysis,” undated
2. BMW.com, “Autonomous driving: Digital measuring of the world,” May 27, 2020
3. GeoAwesomeness, “Self-driving cars that run on simple maps and video camera feeds?,” December 23, 2019

 
