
PNT by Other Means: Oxford Technical Solutions

An exclusive interview with Paris Austin, Head of Product – New Technology, Oxford Technical Solutions.


What are your title and role?

I’m the head of product for core technology at OxTS. My role now is focused on R&D innovation. So, the research side, developing prototypes and taking new technology to market effectively. One of the key things we’re examining is GNSS-denied navigation: how we can improve our inertial navigation system via other aiding sources and what other aiding sensors can complement the IMU (inertial measurement unit) to give you good navigation in all environments. Use GNSS when it’s good, don’t rely on it when it’s bad or completely absent.

We rely increasingly on GNSS but are also increasingly aware of its weaknesses and vulnerabilities. What do you see as the main challenges?

Excessive reliance on anything leads to people exploiting it, which is where the spoofing, the jamming, and the intentional denial come in. We all rely on technology nowadays to do all our menial tasks; then, if we lose the technology, we don’t have the skills to do the task ourselves and we’re in trouble. Reliance on a mass global scale on GNSS is a good and a bad thing. It is good for technology because costs come down. Access to GNSS data is increasingly easy and devices that use it are increasingly cost-effective. But if your commercial, industrial, or military operations rely too much on that one sensor, they can fall over. That’s where complementary PNT comes in: if you can put your eggs in other baskets, so that you have that resilience or redundancy, then you can continue your operation — be it survey, automotive or industrial — even if GNSS fails, whether intermittently or for a long period of time.

However, you can fully replace a GNSS only with another GNSS.

You cannot replace GNSS with anything that has all the pros and none of the cons. You could use something like lidar or an IMU to navigate relative to where you started. However, you would not know where you are in the world without reference to a map, which would have been made with respect to GNSS global coordinates. The best thing you can do is use other sources alongside GNSS to plug the gaps, so that you rely on it less often: start with a global reference, navigate relative to that for a period of time, then get another global update. In between, you can navigate either via dead reckoning or via local infrastructure that has been referenced with respect to the global frame. That way, you can transition between GNSS and localized aiding without any dropouts in your operation or functionality, and without relying on completely clean GNSS data all the time.
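The pattern described here — anchor globally, then navigate relatively until the next fix — can be sketched as a minimal 2D dead-reckoning toy (illustrative only, not OxTS code; headings and speeds are invented):

```python
import math

def dead_reckon(start, steps):
    """Propagate a 2D position from a known global fix using
    (heading_rad, speed_mps, dt_s) samples -- a toy stand-in for
    IMU-based dead reckoning between GNSS updates."""
    x, y = start
    for heading, speed, dt in steps:
        x += speed * dt * math.sin(heading)  # east component
        y += speed * dt * math.cos(heading)  # north component
    return x, y

# Start from a GNSS fix, dead-reckon through a 3-second outage;
# the next GNSS fix would then re-anchor the solution globally.
fix = (100.0, 200.0)
outage = [(0.0, 10.0, 1.0), (0.0, 10.0, 1.0), (math.pi / 2, 10.0, 1.0)]
print(dead_reckon(fix, outage))  # → (110.0, 220.0)
```

Any error in the heading or speed samples accumulates over the outage, which is why the next global update matters.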

As you say, you can’t replace it. If you do claim to be breaking free from GNSS you’re really playing a different game and just describing it in a way that sounds as good as GNSS, but in reality you’re saying, “I can navigate in this building but I don’t know where this building is” until you start saying, “Well, I’ve referenced it with respect to a survey point that used a GNSS survey pole.” At that point, you’re not breaking free from GNSS, you’re just using it differently.

INS-GNSS integration has been around for a long time and the two technologies are natural partners because each one compensates for the other’s weaknesses. What have been some of the key recent developments in that integration?

The addition of new GNSS constellations has helped a lot because you need four satellites for a position or time lock and six satellites to get RTK. The 12 to 14 satellites from GPS and GLONASS previously visible at any one time have doubled with the addition of Galileo and BeiDou. So, your requirement for six satellites at any one time has become a much more reasonable proposition in terms of maintaining that position lock in the first place. Meanwhile, IMU sensors have been coming down in price. So, you can make a more cost-effective IMU than ever, or you can spend the same and get a much better sensor than you ever could before. The IMU data between GNSS updates is also less noisy, with less random walk and more stability.

With less drift you can also go for longer periods without re-initializing your IMU.

Yeah, exactly. Your dead reckoning period can go longer, while still taking advantage of tight coupling wherein you use the ambiguity area of the IMU to reduce the search area for the satellites. So, a better IMU means that you can use GNSS more readily when you go under a bridge or go through a tunnel. You can lock on to satellites quicker again because of the advancements that have been made with the IMU technology.

What have been some of the key advances in IMU technology in the last five or ten years?

As with GNSS receivers, the market has become more competitive; there are now more options than ever before. People being disruptive in the space has allowed us to use lower-cost sensors for the same performance, or mix and match gyroscopes and accelerometers to get the most complementary IMU. Previously, you may have had an accelerometer that far outweighed the performance level of the gyroscope. So, you would have very good velocity drift over time. But if your heading drifts, you still end up in the wrong place when you haven’t had GNSS for a while.

So, that’s allowed us to pick a much more complementary combination of sensors and produce an IMU that we manufacture and calibrate ourselves, while using off-the-shelf gyroscopes and accelerometers. That allows us to make an IMU that is effectively not bottlenecked in any one major area. I think previously, with IMUs, you took what you could get and some of that technology was further ahead than others. So, it’s a good thing for us because the sensors that we’re getting do not cause single-source bottlenecks and we can achieve a higher level of performance than we ever could, without having to significantly increase our prices.

The way we’ve always seen it, either you add features or performance level and maintain the price, because the technology is maturing over time, or you disruptively lower your price with the same technology. On occasion, we have done that in the survey space. That’s where the performance level requirements are far tighter because people are moving from static survey using GNSS, where they’re used to millimeter-level surveys, into the mobile mapping space, where they still rely entirely on RTK GNSS.

However, they also rely on high-accuracy heading, pitch, and roll to georeference points from a lidar scan at a distance instead of only exactly where they are. Where new IMU technology has helped us is in getting better heading, pitch, and roll performance for georeferencing, as well as reducing the drift while we dead reckon through a GNSS outage.
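A rough illustration of why orientation accuracy matters at range (a 2D toy with a made-up scene, not the actual georeferencing pipeline — in 3D, roll and pitch errors act the same way as this heading error):

```python
import math

def georeference_2d(ins_xy, heading_deg, rng, bearing_deg=0.0):
    """Project a lidar return (range, bearing in the sensor frame)
    into the global frame using the INS position and heading."""
    az = math.radians(heading_deg + bearing_deg)
    x = ins_xy[0] + rng * math.sin(az)
    y = ins_xy[1] + rng * math.cos(az)
    return x, y

# Georeference the same 100 m lidar return with a perfect heading
# and with a 0.1-degree heading error.
true_pt = georeference_2d((0.0, 0.0), 0.0, 100.0)
err_pt = georeference_2d((0.0, 0.0), 0.1, 100.0)
offset = math.dist(true_pt, err_pt)
print(round(offset, 3))  # ≈ 0.175 m of point error at 100 m range
```

A 0.1-degree orientation error already costs roughly 17 cm per 100 m of range, which is why that spec level rules out centimeter-level point clouds.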

What is the typical performance of IMU accelerometers and gyros these days?

It boils down to what it gives us in terms of position drift or heading, pitch, and roll drift over 60 seconds. Real-time heading, pitch, and roll is heavily affected by gyroscope performance.

How much more do you have to pay to get that increase in performance?

There are definitely diminishing returns. When you look at some of the Applanix systems that have very good post-processing performance in terms of drift, you’re talking about something like $80,000 for a mobile mapping survey system that is maybe 50% better on roll and pitch in normal conditions, let alone an outage, vs. $30,000 to $40,000 for our top system, which is 0.03 degrees roll and pitch, for example. If you go down to 0.015 degrees, you can pay double for the INS. Similarly, if you go the other way and go cheaper, you can probably get a 0.1-degree roll and pitch system for $1,000.

So, it’s a very steep curve. The entry-level systems are very disruptively low priced now, but given the requirements for certain applications — particularly survey — that 0.1 degree means that you can never achieve centimeter-level point cloud georeferencing. And that’s where people are still justifying spending $80,000 or more on the INS. They also spend similar amounts on their RIEGL lidar scanners and other profilers. So, it’s complementary to the quality of the other sensors. However, it really doesn’t make sense to spend $1,000 on your INS and then $80,000 on your lidar, because you’re going to be bottlenecking the point cloud that you get out of it at the end anyway.

The same goes for autonomous vehicles, where people are now spending sub-$1,000 on their lidar or their camera, and they don’t want to spend $30,000 to $40,000 on their INS for a production-level autonomous vehicle. So, there needs to be similar complementary pricing for sensors in that space, where you can offer an INS for hundreds of dollars, for example, that performs maybe only a percentage less than INSs do today.

For an autonomous vehicle to stay in lane, it still needs these building blocks to be high accuracy, because they’ve only got tens of centimeters with which to play. However, they are doing it from the point of view that they don’t care where they are in the global frame at that moment in time to stay in their lane, only where the lane markings are. However, they will care where they are in the global frame when they come to navigate off of a map that someone else has made and they’re looking for features within the map, such as traffic signs, stoplights, and things that are out of sight or occluded by traffic, so that they know if they’re approaching them and the camera is just blocked at that time. That’s where the global georeferencing comes in and where GNSS remains critical effectively. Right?

It ranges price-wise. The top-end systems — Applanix and NovAtel — in the open road navigation sense, are not orders of magnitude better, but you do end up paying double very quickly. If you look at the datasheet, positioning in open sky conditions is identical between a £1,000 system and an £80,000 system. The differences all come in the drift specs, or the heading, pitch, and roll specs that are being achieved, because the value really comes from the IMU being used at that point.

Is most of the quality difference between these devices due to better machining, smarter electronics, or improved post-processing?

Any one of them on its own will not get you a good navigation solution. Fundamentally, you can have a good real-time GNSS-only system that will work at a centimeter level if you just use, say, a u-blox receiver, which is less than $100. Adding a low-cost IMU can fill some gaps, but not particularly intelligently, and you’ll get jumps and drop-outs or unrecoverable navigation. That’s when the algorithms come into play in terms of intelligent filtering of bad data and when to fall back on one solution versus the other and when to blend the two.

I was asking specifically within INS. When you’re talking about a $1,000 INS versus an $80,000 INS, how much of the improvement in performance is due to manufacturing, how much of it is due to smart electronics, and how much of it is due to algorithms or post processing?

Most of it is probably down to the raw sensor quality and then the calibration of the sensors. An IMU calibration is important, in terms of compensating for bias and scale factor errors, but also for the misalignment angles of the sensors. So, you need to make sure that your accelerometers and your gyros are all mounted exactly orthogonal to each other. A $1,000 sensor is very unlikely to be calibrated to the same level as an $80,000 one. That’s probably because you’d get 10% more out of calibrating the $1,000 one but you might get three times the performance out of calibrating the $80,000 one. So, you have a lot more to get out of a high-end system in terms of unlocking the potential, whereas the low-end sensors are probably already giving 80% to 90% of their potential out of the box, with no calibration at all.
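The bias/scale-factor/misalignment model mentioned here can be sketched as follows (a toy version with invented numbers; real factory calibration also handles temperature dependence and much more):

```python
def calibrate(raw, bias, scale, misalign):
    """Apply a simple calibration model to one accel or gyro triad:
    corrected = M @ ((raw - bias) / scale), where M is a
    near-identity matrix that corrects cross-axis misalignment."""
    v = [(r - b) / s for r, b, s in zip(raw, bias, scale)]
    return [sum(m * x for m, x in zip(row, v)) for row in misalign]

# Hypothetical factory-calibration terms for an accelerometer triad.
bias = [0.02, -0.01, 0.05]          # m/s^2 offsets
scale = [1.001, 0.999, 1.002]       # scale-factor errors (unitless)
misalign = [[1.0, 0.001, 0.0],      # small cross-axis terms make the
            [-0.001, 1.0, 0.002],   # axes effectively orthogonal
            [0.0, -0.002, 1.0]]

# A stationary, level unit should read ~1 g on z after correction.
raw = [0.02, -0.01, 9.86962]
print(calibrate(raw, bias, scale, misalign))
```

The corrected z-axis reading comes out close to 9.8 m/s², while the raw reading was biased and scaled away from it.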

Calibration affects such things as warm-up time. A well-calibrated system will already be modeled accurately almost as soon as you power it on. If you don’t calibrate the system, you can still have a Kalman filter or something running in real time that can model the errors live. But it will mean that you won’t be at spec-level performance as soon as you power up. When does it matter to you that you get the best data? Is it the instant you power up because you’re navigating an autonomous vehicle out of the parking garage? Or do you have 10 minutes before you need to take the data and use it for anything, and therefore you can take those 10 minutes to model the sensors live?
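The "model the errors live" idea can be sketched with a one-state Kalman filter learning a gyro bias from stationary samples (illustrative only; the bias and noise values are invented, and a real INS filter estimates many more states):

```python
import random

def estimate_bias(measurements, q=1e-6, r=1e-2):
    """One-state Kalman filter that learns a gyro bias online.
    While stationary, the true rate is zero, so each reading is
    bias + noise; q is process noise, r is measurement noise."""
    bias, p = 0.0, 1.0           # initial guess and its variance
    for z in measurements:
        p += q                   # predict: bias is near-constant
        k = p / (p + r)          # Kalman gain
        bias += k * (z - bias)   # update toward the measurement
        p *= (1.0 - k)
    return bias

random.seed(0)
true_bias = 0.01                 # rad/s, a hypothetical gyro bias
samples = [true_bias + random.gauss(0, 0.1) for _ in range(2000)]
print(estimate_bias(samples))    # settles near the true bias
```

This settling is exactly the warm-up period the answer describes: a factory-calibrated unit starts with the bias already known, while an uncalibrated one needs time to converge.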

You might save money on the electronics budget but spend it to pay the driver to do the warm-up procedure. You can reallocate where you spend your money. If you’re rolling out a fleet of 100 vehicles, though, you probably don’t want to have to have 100 drivers that are trained to do a warm-up procedure. So, you would spend the money on the electronics to have an INS that does not require a warm-up. That is an option that you can go with now. If you spend the extra you can get away from the warm-up procedure requirements, because things have been modeled during calibration instead of in real time.

Your website focuses on three areas: automotive, autonomy, and surveying and mapping. Why those and what might be next in terms of markets or end user applications?

Automotive is probably the bread-and-butter part of OxTS. For a long time, automotive users were looking for a test and validation device that could give them their ground truth data to validate onboard vehicle sensors. We were very much the golden truth sensor, making sure that the sensors they were putting into the production vehicles were fit for purpose and safe. So, if they claimed it had autonomous emergency braking, they used our sensor to say how far away it was from the target — for example, a pedestrian — when it made the vehicle stop. Did it brake with the appropriate distance between them? They had a unit in each vehicle and got centimeter accuracy between them. That was very easy to do with GNSS, because on a proving ground automotive users always have RTK.

Now the automotive world is moving into the urban environments and doing more open-road testing. So, the need for complementary PNT is more on their mind than ever. They are looking for a technology from us and our competitors that allows them to keep doing those tests that they did on the proving ground, but in real world scenarios. They may collect 1,000 hours of raw data and then only have an autonomous emergency braking (AEB) event kick in three times in those 1,000 hours. They will then look at the OxTS data at that time and say something like, “Did the dashboard light come on and then did the brake kick in at the required time to avoid the collision?”

So, they rely on the INS data to be accurate all the time. It cannot be that, of those three events in 1,000 hours, two do not meet the accuracy requirements to be your ground truth sensor. Because then they would basically say, well, we don’t know whether the AEB kicks in at the right time on the open road. They would have to fall back to the proving ground testing to have any confidence. So, that’s where the automotive world is looking to use an INS to reference its onboard sensors.

In autonomy and survey, on the other hand, the INS is used actively to feed another sensor to either georeference or, in the case of autonomy, actively navigate the vehicle. So, that data being accurate is critical because an autonomous vehicle without accurate navigation cannot move effectively and would have to revert to manual operation. There’s a lot to do with localization and perception and avoidance of obstructions and things like that.

Timing synchronization is critical. People haven’t solved a way to synchronize multiple vehicles without using GNSS and PPS. Some people are using PTP to synchronize, but they’ll often have a GNSS receiver at the heart of it with the nanosecond-accurate time to be the actual synchronization source, and everything else is a slave PTP device that operates off of that. So, if we did not give accurate timing, position and orientation, there is basically nothing that the vehicle could do to navigate other than navigating relative to where it was when it last had accurate INS time.

Often, these vehicles will enter a kind of limp mode or stop completely and require user operation to get to the next stage. It’s where you see the street drone-type small robots now, which will stop if a pedestrian walks in front of them, obviously, because it is a safety requirement. But also, if a robot doesn’t know where it is — like a Roomba operating inside that cannot localize with respect to landmarks in its map — it will effectively try to re-localize off of random movements until it can orient itself. In that scenario, an INS or an IMU can help you reduce the number of times that you’re losing absolute localization. Where the autonomy side of things comes in for us is if we can offer the navigation quality, more of the time and to a high accuracy but for acceptable cost, then the sensor is a viable one to be put into the autonomous vehicle.

In autonomy, our active and potential customers are looking to do everything for a very, very low cost base, because they know that they’re trying to reach consumers with these products rather than businesses. So, their value proposition is entirely within the algorithms that they’re selling. They’re trying to offer scalable solutions that could roll out to thousands or millions of vehicles around the world, with their algorithms at the center of them. That localization and perception stuff is where you see companies such as Nvidia getting involved, because they want to be at the heart of it. Then they say that they can support any sensor while not being tied to any one of them. However, their algorithm is always going to be there at the heart of it. They will have GNSS receivers they support, they will have IMUs, they will have cameras, lidar, and radar and all the other kinds of possible aiding sensors. But they will say that their algorithm will still function if you have any number of those being fed in at any time.

So, autonomy relates to automotive in a sense, because you have autonomous passenger vehicles, but you also have autonomous heavy industry and autonomous survey, where people are flying drones autonomously or operating Spot autonomous dog robots, things like that, which can still be a survey application where you don’t want to have a human in the loop but you still need to navigate precisely. Someone may be sending a Spot dog robot into a deactivated nuclear reactor where they don’t want to send a human, but they still need to get to a very specific point within that power station and report back. They need to avoid obstructions, they need to georeference data they collect, and then take a reading from a specific object or sensor that’s inside and come back out safely. So, accurate navigation throughout the whole process is very important.

I understand the role of OxTS in testing and development. However, are any of your systems going to be in any production vehicles?

Many of the companies that are working on autonomous passenger vehicles are realizing that they are still a long, long way away.

What about your presence in the auto market more broadly?

They are used, but as separate components. You will have GNSS, IMU, radar, cameras, and lidar but the localization and perception will all be done by the OEM or by a tier one supplier to the OEM. So, they don’t want a third-party solution that is giving them a guarantee of their position because it’s a black box. They need to have traceability and complete insight as to what each sensor is saying so that they can build in redundancy and bring the vehicle safely to a stop if one of those systems is reporting poor data. For production vehicles, we are very much used as a validation tool in the development stage, but in terms of producing the production vehicle, they need to have that visibility of the inner workings of the system. Most INSs will not give you that insight as to how they arrived at their navigation output, because that is proprietary information. As a result, many automotive customers are looking to do that themselves. However, as I said, they’re realizing that it’s very difficult, and they’re quite a long way from navigating anywhere.

Therefore, currently no OxTS products are in production vehicles.

Not for passenger autonomy. However, they are used in some of the other autonomous spaces, such as heavy industry, that take place in private, fixed spaces such as mines, quarries, and ports where there is little interaction with the public. That is not only because the vehicle price point is much higher for some of these mining vehicles and heavy industry vehicles, but also because you don’t have to have your algorithm and perception capability deal with vehicles that are not autonomous or are driven by drivers that are not trained on health and safety in the area.

In these private spaces, you can tune your systems to work with each other without having to worry about the pedestrians and the random vehicles for which you’ve not accounted in your perception algorithms. That’s where the divide comes at the moment. If there are untrained people in the area, then there’s a lot more to accommodate and that makes the proposition much more difficult.

Are you at liberty to discuss any recent end user success story with your products?

The Ordnance Survey in the UK has been using our INS to create 3D maps on which they can then use semantic segmentation to classify features within the environment and pull out all the relevant features within a survey of a city, for example. They’re blending the raw data from the OxTS INS and lidar with map data that they have to create high-accuracy 3D maps that can be used to add that third dimension to the high-accuracy 2D maps that have been their value proposition for the past few decades. They can say, “here are all the trees in the environment,” or all the traffic signs or buildings or that kind of thing that you’re going to see in Google Earth imagery. They start to reach the realms of high-accuracy map data. They’re looking to sell that map data to commercial entities to monetize it and use it on a nationwide level and then on a global level.

If you have that map data, there’s a lot that you can do with it, in terms of intelligent decision making about routing a vehicle, or many other things, such as monitoring the heat output of buildings. In the EU, there are many directives around such things as carbon emissions. If you’re being more efficient with the heat output of your buildings, you can effectively say that you’re hitting your CO2 emissions reduction goals, by running whatever initiative to insulate buildings better and things like that. It always starts with, “Where was I when I saw this object or this building?” Therefore, I can georeference that building, I can color it by thermal imaging and things like that.

They can start to produce 3D imagery that is colored by thermal output; they can do it with any number of other sensors as well, which give them metadata that allows them to sell the data to someone else. It makes what was previously a very big job very efficient. So, they can drive hundreds of kilometers in a day where previously it was a static survey done over the course of weeks on foot. It’s also changing the efficiency metric that they can deliver to their end users.

Thank you very much!
