

Part 2: Common problems with coordinate system configurations

“Why does my data look that way?”

Now that we’ve covered the basics of coordinate systems and geodetic topics relevant to GIS data collection, we can start to look at common configuration problems and how they manifest themselves in your data.

But first, we need to cover one last educational topic, and it is perhaps the most important one: how do we achieve high accuracy in GNSS data collection, and what geodetic considerations come into play in that process?

Geodetics in Differential Correction

The most common way of achieving high accuracy is through a process called differential correction. In this process, the difference between the measured and the true distance between a known location (a base station) and a satellite is calculated and then applied to GNSS measurements as a “correction”. Corrections can be applied as soon as measurements are made (i.e. in real time) or after measurements are collected (i.e. post-processing).

[Illustration explaining differential correction]

In general, correction works because most error sources (e.g. atmospheric delay) are similar across wide areas, so corrections computed at the base station can be applied to the mobile GNSS receiver’s measurements to eliminate most of the error.
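To make the arithmetic concrete, here is a toy sketch in Python. Every number is invented for illustration; a real receiver computes corrections per satellite and per epoch from code and carrier observations, not from a single lumped range error.

```python
# Toy illustration of the differential-correction idea for a single satellite.
# All values are hypothetical.

base_true_range = 20_200_000.000      # meters: geometric range from the base
                                      # station's known position to the satellite
base_measured_range = 20_200_002.150  # meters: what the base actually measured

# The base knows its own position, so the discrepancy is (mostly) shared error
# such as atmospheric delay and satellite clock/orbit error.
correction = base_true_range - base_measured_range  # -2.150 m

rover_measured_range = 21_450_001.900  # meters: the rover's measurement contains
                                       # roughly the same shared error
rover_corrected_range = rover_measured_range + correction

print(f"correction applied: {correction:+.3f} m")
print(f"rover corrected range: {rover_corrected_range:.3f} m")
```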

The most important geodetic aspect is that the result of differential correction is directly dependent on the position and reference frame of the base station used. That is, if the base station used for differential correction is accurately defined with reference to, say, ETRS89, then corrected GNSS measurements will also be referenced to ETRS89. This applies to all forms of GNSS correction, including post-processing, RTK, VRS, etc. It’s important to always know the reference frame of the correction source before starting your data collection project; in most cases, this information is available from the provider of the correction service. Although the specifics vary, the most common generalization is that local correction sources like VRS and single-base RTK networks use a local reference frame, while global correction sources like SBAS and Trimble RTX use a global reference frame.

It is typical for correction service base station infrastructure to be maintained by surveyors or other geodetic agencies, so the reference position and coordinate system information is generally kept current with the latest datums and realizations. This is often in contrast to GIS systems-of-record, which tend to use the same coordinate system over a longer period of time. For example, although the most current realization of the “standard” US coordinate system is NAD 1983 (2011), the majority of customer datasets we see still use the older NAD 1983 (CORS96) or even the original NAD 1983 (1986). In almost any GNSS data collection project, datum transformations will come into play at one or more points in the workflow, and properly identifying and configuring those transformations is one of the biggest sources of error and user frustration.
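If you are unsure which NAD 1983 realization a dataset actually references, a library such as pyproj can report the datum behind an EPSG code. A minimal sketch (EPSG:4269 and EPSG:6318 are the standard geographic 2D codes for the original NAD83 and NAD 1983 (2011); your dataset’s code may differ):

```python
# Inspect which NAD 1983 realization an EPSG code refers to.
from pyproj import CRS

for code in (4269, 6318):  # 4269 = NAD83 (original), 6318 = NAD83 (2011)
    crs = CRS.from_epsg(code)
    print(code, "|", crs.name, "| datum:", crs.datum.name)
```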

Let’s take a look at some of these configuration challenges in more detail.

Challenge 1: How do I know that what I’m comparing my data to is a valid source of truth?

When evaluating the quality (accuracy) of the data you collect, you’ll need to know what you’re comparing it to. Although the GNSS receiver may tell you it is accurate to within 1 cm, that’s of little use if you can’t validate the results against something in the real world. Ideally, you’ll be able to collect some test data over known “benchmarks” or “control points” for which you know the accurate location in an “official” reference frame. These are often maintained by regional or national geodetic agencies.

[Illustration of control points on a street map]

If you are comparing against existing data you have, you’ll want to verify that the existing data is in fact accurate; knowing its lineage is important (how and when it was collected or digitized, what the original coordinate system is, etc.). It’s not uncommon for historical GIS data to lack positional accuracy and instead be drawn to look good at a certain zoom scale.
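A simple way to quantify such a check is to compute the geodesic distance between each collected point and its published control coordinate. A minimal sketch with pyproj, using hypothetical point names and coordinates:

```python
# Compare field-collected points against published control points.
from pyproj import Geod

geod = Geod(ellps="GRS80")  # GRS80 is the ellipsoid behind NAD83 and ETRS89

control = {"CP-101": (-122.41950, 37.77490)}    # lon, lat (hypothetical)
collected = {"CP-101": (-122.41951, 37.77491)}  # field observation (hypothetical)

for name, (lon_c, lat_c) in control.items():
    lon_f, lat_f = collected[name]
    _, _, dist_m = geod.inv(lon_c, lat_c, lon_f, lat_f)  # geodesic distance
    print(f"{name}: horizontal offset {dist_m:.3f} m")
```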

One other consideration applies when using reference data from a locally adjusted survey, commonly known as a “site calibration”. In typical survey workflows, data is adjusted to optimize accuracy relative to a project or site (it may still use a standard coordinate system definition). As such, the reference frame is specific to the site and may not exactly match a “published” reference frame that would be valid across a wide region. Most GIS software has very limited tools for working with adjusted coordinate systems like this, although there are generally paths to work with custom transformations that could be provided by the surveyor, as sketched below.
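If the surveyor supplies the adjustment as explicit transformation parameters, one possible path is a custom PROJ pipeline. A sketch with pyproj, where the Helmert shifts are placeholders rather than real calibration values:

```python
# Apply a (placeholder) surveyor-supplied Helmert shift as a custom pipeline.
from pyproj import Transformer

pipeline = (
    "+proj=pipeline "
    "+step +proj=unitconvert +xy_in=deg +xy_out=rad "  # degrees -> radians
    "+step +proj=cart +ellps=GRS80 "                   # geodetic -> ECEF (XYZ)
    "+step +proj=helmert +x=0.52 +y=-0.31 +z=0.18 "    # placeholder shifts (m)
    "+step +inv +proj=cart +ellps=GRS80 "              # ECEF -> geodetic
    "+step +proj=unitconvert +xy_in=rad +xy_out=deg"   # radians -> degrees
)
transformer = Transformer.from_pipeline(pipeline)
lon, lat, h = transformer.transform(-122.4195, 37.7749, 10.0)
print(f"{lon:.7f}, {lat:.7f}, {h:.3f}")
```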

Challenge 2: My field-collected data looks correct by itself but is nowhere near my source of truth. They barely show up on the same map!

[Illustration showing data collected in one location showing up in another]

In this case, when you bring your field-collected data into your GIS, it either shows up in a completely different location or doesn’t come in at all. This happens most commonly when working with data formats that don’t carry coordinate system information with them (e.g. CSV exports), and it usually reflects a misconfiguration of one of the basics:

  • The coordinate system of the field-collected data and the coordinate system of the GIS don’t match. Perhaps the GIS is expecting projected coordinates (northing, easting) but you are trying to use geographic coordinates (longitude, latitude). Further, when working with projected coordinate systems, an incorrect setting such as the wrong zone will place coordinates far outside their valid area.
  • Projected coordinate systems also depend on units. In most parts of the world, meters are the default unit for projected coordinate systems, but in certain places like the United States there can be a mix of units in use: meters, US survey feet, and international feet (see the sketch after this list).
  • Units also need careful consideration when working with feature heights or coordinate Z values. In most GIS systems, the vertical coordinate system and unit are decoupled from the horizontal coordinate system and unit (meaning they can be set independently). Feature heights can be stored in both metadata and 3D geometries.
  • One other source of misconfiguration that may cause significant error is a missing or incorrect datum transformation. In certain parts of the world, this can result in an offset of more than 100 m. We’ll cover this more in the next section.
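The foot/meter ambiguity in particular deserves a quick illustration. The conversion factors below are exact by definition; the northing is a hypothetical state plane value:

```python
# How a foot/meter mix-up scales into position error.
INTL_FT = 0.3048            # 1 international foot, meters (exact)
US_SURVEY_FT = 1200 / 3937  # 1 US survey foot, meters (exact)

northing_us_ft = 2_100_000.0  # hypothetical state plane northing, US survey feet

as_meters_correct = northing_us_ft * US_SURVEY_FT
as_meters_wrong = northing_us_ft * INTL_FT  # misread as international feet

print(f"error from picking the wrong foot: "
      f"{abs(as_meters_correct - as_meters_wrong):.3f} m")
# ~1.28 m at this northing: a ~2 ppm difference per foot adds up at
# state-plane coordinate magnitudes.
```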

Challenge 3: My field-collected data looks close to my source of truth but is shifted by a constant amount (e.g. an offset).

This is perhaps the most common challenge in GNSS data collection projects when it comes to integrating field-collected data into a GIS system-of-record. Customers frequently report that their data looks shifted by anywhere from half a meter to several meters. Far and away the most common cause is a misconfigured datum transformation: either missing, incorrect, or applied twice.

[Illustration of a map showing collected data offset by 1-2 m]

As we alluded to earlier, datum transformations are required any time we have to work between reference frames or coordinate systems that use different datums. In a typical high-accuracy GNSS data collection workflow there are potentially four different coordinate systems in use:

  • The GIS source or system-of-record
  • The data collection project used in the field application
  • The correction source used in the field for real-time correction workflows
  • The correction source used in the office for post-processing workflows

Both the field and office software will generally provide configuration options for datum transformations between each of these as required by the workflow. In most cases, you’ll want to do some field validation of the configuration (using known control points as described earlier) prior to starting a production-level data collection project.

There are several finer points to be aware of:

  • Although similar at the parameter (mathematical) level, datum transformations are implemented differently in common GIS software. In Trimble software, a datum transformation (to the global WGS84) is generally stored with a coordinate system: when you pick NAD 1983 (2011), you are also getting a single 7-parameter datum transformation between that coordinate system and the global WGS84. Esri, on the other hand, decouples this in both its coordinate system model and its user experience: you pick the two coordinate systems first, and then pick from a list of datum transformations available between them. In the example above, Esri actually provides multiple datum transformations for working between NAD 1983 (2011) and WGS84 (the sketch after this list shows a similar list of candidates as exposed by open-source tooling).
  • In some cases a datum transformation may be provided in the software strictly for compatibility or bookkeeping purposes. These datum transformations have zero or null parameters; they exist to allow a workflow to proceed, but they won’t actually change the coordinates being calculated. An example of this in Trimble software is the NAD 1983 (Conus) datum, in which the transformation parameters (to the global WGS84) are all zero, meaning no transformation will be calculated. This is common in Esri software as well.
  • For some regions in the world, particularly those near tectonic plate boundaries, there may not be a 3- or 7-parameter transformation available for transforming coordinates between the local coordinate system and the global WGS84. This is most likely to impact users who use SBAS or RTX correction services in these regions.
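For reference, the open-source PROJ library exposes a comparable list of candidate transformations between two coordinate systems. A minimal sketch with pyproj (the candidates returned, and their accuracies, depend on the installed PROJ version and grid files):

```python
# List candidate transformations between NAD 1983 (2011) and WGS 84.
# "Ballpark" candidates are null (no-op) operations like those described above.
from pyproj.transformer import TransformerGroup

group = TransformerGroup("EPSG:6318", "EPSG:4326", always_xy=True)
for t in group.transformers:
    print(t.description, "| accuracy (m):", t.accuracy)
```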

Challenge 4: My field-collected data is very close to my source of truth, but not within the accuracy estimates the GNSS receiver is reporting.

In this case, you’ve collected good data and your accuracy estimates, either in real time or after post-processing, are within a few centimeters, but you’re still 10-20 centimeters from your control points. You’ve already confirmed you’re using the best available datum transformations, but this is as close as you can get.

[Illustration of a map showing collected data offset by <1 m]

There can be various reasons for this: some are configuration issues, while others are inherent limitations. Here are the most common ones:

  • Datum transformations can be limited in accuracy. Generally published by academia or government geospatial agencies (the authoritative sources), datum transformations are defined for a specific area, and the accuracy of the results varies within that area; the closer you are to the boundaries of the area, the lower the accuracy may be. You can consult the authoritative source of the datum for an estimate of accuracy. If your workflow involves multiple datum transformations, the accuracy of the data at the end of the workflow reflects the combined accuracy of each transformation used.
  • In most post-processing workflows, you have some control over which base station reference position is used: either the position from the base station files or the one from the Trimble base station list. The difference between these two positions can be in the 5-20 centimeter range depending on locale, and this has a direct impact on the post-processed results.
  • In tectonically active areas, it is not uncommon for correction source providers (for real-time and post-processing) to use intermediate epochs when providing reference frame information for networks or base stations. This is necessary to account for tectonic motion. For example, in California, real-time networks may use NAD 1983 (2011) epoch 2020.00 or newer, adjusted semi-annually. If your GIS software is only capable of working with static (e.g., 7-parameter) datum transformations, your ability to work with intermediate epoch data will be limited: whenever that epoch is adjusted and you collect new data, you will see an immediate shift in your data (the Pacific plate moves relative to the North American plate on the order of several centimeters annually). As we discussed in the first part of this blog series, this is one of the primary drivers for GIS data collection workflows needing to support time-dependent datum transformations: to be able to store data collected across multiple epochs in a system-of-record that uses a single epoch (see the sketch after this list).
  • When using a global, real-time correction source like Trimble RTX with a local coordinate system like NAD 1983 (2011), accuracy of the workflow can be limited and may not meet the accuracy specification of the correction service. This is the second driver for the use of time-dependent datum transformations in the data collection workflow - to be able to accurately transform between global and local datums. This is further complicated in tectonically active areas or in areas near plate boundaries where crustal deformation exists. Here, even time-dependent datum transformations can be insufficient and you would need to utilize a local deformation model to fully realize the accuracy of the corrected data through the entire workflow.
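To illustrate what a time-dependent transformation looks like in practice, here is a minimal sketch with pyproj, passing the observation epoch as a decimal-year fourth coordinate. EPSG:7912 (ITRF2014) and EPSG:6319 (NAD 1983 (2011)) are the standard geographic 3D codes; whether PROJ selects a time-dependent operation depends on the installed PROJ version and resource files:

```python
# Epoch-aware transform: ITRF2014 -> NAD 1983 (2011), time as the 4th coordinate.
from pyproj import Transformer

t = Transformer.from_crs("EPSG:7912", "EPSG:6319", always_xy=True)

lon, lat, h = -122.4195, 37.7749, 10.0  # hypothetical point in California
for epoch in (2015.0, 2025.0):          # observation epochs, decimal years
    x, y, z, _ = t.transform(lon, lat, h, epoch)
    print(f"epoch {epoch}: {x:.9f}, {y:.9f}")
# If a time-dependent operation is selected, the outputs differ between the
# two epochs, reflecting plate motion of a few centimeters per year.
```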

In the next part of this blog series, we’ll discuss how our Trimble GIS software significantly improves coordinate system workflows and helps carry the accuracy of the GNSS receiver through to the overall data collection workflow.

Continue to Part 3: What is Trimble doing to solve these problems?