We discussed mobile mapping with Bryan Leedham, product manager of enclosures and post-processing software at NovAtel, part of Hexagon's Autonomy & Positioning division.
How do you define mobile mapping?
It is getting broader in scope as more folks find reasons to map the world. The key goal is to capture data from mobile platforms and build a digital representation of reality for some large area, such as a city, a road or a factory. Most of the time, that means from a ground vehicle on public roads.
It’s also safer and faster than traditional surveying because you don’t have to stop traffic or dodge it.
Right! In an ideal world, rather than spending days setting up traditional survey equipment, you could strap some sensors on a mobile platform and gather accurate map data in minutes.
What are the key remaining technical challenges?
Picture one of Google’s or Waymo’s mapping vehicles. The first sensors that come to mind are GNSS, inertial, lidar and radar. Each of those has its own unique strengths and weaknesses. The first technical challenge that remains is to mature each of those technologies to the point where the cost is low enough to be affordable.
Right now, mobile-mapping vehicles are quite expensive, especially in areas where some of these sensors will struggle more than others. To map very dense urban spaces — with underground areas, overpasses and tall buildings where GPS is challenged — you need a very strong localization system that can survive those conditions for however long it takes to drive through them. If I’m building a car to map rural Alberta, I could choose much cheaper sensors than if I were trying to map downtown Chicago every week.
On the flip side, you must deal with the massive amounts of data collected.
Yes, that is a very large challenge. Lidar, in particular, generates very large point clouds. It’s a balancing act: more accurate, higher-resolution maps require lidar sensors that produce even denser point clouds. So you need data management and sufficient processing power to get accurate results quickly.
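One common, generic way to manage lidar point-cloud volume is voxel-grid downsampling, which keeps a single representative point per cubic cell. The sketch below is illustrative only; it is not NovAtel's pipeline, and the function name and cell size are assumptions for the example.

```python
# Illustrative voxel-grid downsampling: collapse all points inside each
# cubic voxel to a single centroid, trading resolution for data volume.
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Average all (x, y, z) points falling into each voxel of side voxel_size."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells[key].append((x, y, z))
    # Emit one centroid per occupied voxel.
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in cells.values()
    ]

# Two near-duplicate returns collapse into one point; a distant point survives.
cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.02), (5.0, 5.0, 5.0)]
reduced = voxel_downsample(cloud, voxel_size=0.5)
print(len(reduced))  # 2 points instead of 3
```

Larger voxels shrink the data faster but blur fine structure, which is exactly the accuracy-versus-volume balancing act described above.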
What are the key technical challenges in sensor fusion?
Sensor fusion is how we approach the goal of mapping as accurately as possible in increasingly difficult environments. On their own, GNSS receivers struggle in obstructed areas, but when you pair them with other sensors, they become very complementary.
Lidar and cameras, for example, are quite good at measuring the distance to nearby objects and at classifying them, but on their own they have no idea where they are in the world. Likewise, if you let an IMU [inertial measurement unit] sit in your car, it will quickly lose track of its location as errors accumulate. However, once you give it a position update, it is very good at maintaining a trajectory over a short period of time. When you combine absolute and relative localization, all the sensors play to their own strengths.
What is NovAtel’s SPAN software?
It stands for synchronous position, attitude and navigation. It is the sensor-fusion software that combines GNSS, inertial and whatever other sensor measurements are available. It is based on NovAtel's core GNSS receiver software. We can use NovAtel receivers in combination with IMUs from a wide range of manufacturers and, in the future, hopefully, other sensors from a variety of manufacturers as well.
SPAN started by blending just GNSS and inertial, but we’re now researching how to bring in such things as lidar and cameras. Autonomous Stuff, another Hexagon company, also works on the greater sensor fusion using SPAN.