Oh bella Italia, ci manchi! Our Trimble eCognition channel partner in Italy, Sysdeco Italia s.r.l., recently shared this customer story with me, and unfortunately it looks like it will be the closest I get to Italy for the time being.
The growth in mobile mapping devices and data can be seen across the industry, and of course Trimble is active in this business with a number of different sensors, including the Trimble MX9. The MX9 offers a very high point cloud density at up to 500 scans/second. In addition, a full range of imagery can be acquired, all at speeds of up to 110 km/h.
In this project, a LiDAR point cloud from the Trimble MX9 was processed in eCognition to extract various streetscape features. The project demonstrates not only the high quality of the Trimble MX9 but also the versatility of the eCognition software, which can transform such data into valuable mobile mapping deliverables in a few steps via an automated rule set.
In this case, we were investigating the extraction potential for road signs, guardrails and pavement markings. For this purpose, a mask was applied to the input data to focus the analysis on the roadway and not surrounding areas.
The .las file was imported directly into the eCognition Developer 10 software, along with a vector layer (.shp) representing the extent of the mask area, so that only the point cloud elements within the masked area were classified.
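Conceptually, the masking step keeps only those points whose x/y position falls inside the mask polygon. The sketch below illustrates that idea with a plain numpy ray-casting point-in-polygon test; it is not eCognition's actual implementation, and the sample points and polygon are invented for illustration:

```python
import numpy as np

def points_in_polygon(xy, polygon):
    """Ray-casting test: True where a 2D point falls inside the polygon."""
    x, y = xy[:, 0], xy[:, 1]
    inside = np.zeros(len(xy), dtype=bool)
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # An edge can only intersect the horizontal ray if it spans y
        crosses = (y1 > y) != (y2 > y)
        with np.errstate(divide="ignore", invalid="ignore"):
            xint = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (x < xint)
    return inside

# Hypothetical sample: two (x, y, z) points and a unit-square mask polygon
pts = np.array([[0.5, 0.5, 10.0],   # inside the mask
                [2.5, 0.5, 11.0]])  # outside the mask
mask_poly = [(0, 0), (1, 0), (1, 1), (0, 1)]
keep = points_in_polygon(pts[:, :2], mask_poly)
masked = pts[keep]  # only the first point survives
```

In practice a .las reader (and the real .shp geometry) would supply the arrays; the filtering logic stays the same.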
The first step in eCognition was to adjust the map resolution to support higher resolution rasterizations of the point cloud features. Within eCognition, the user can generate a series of raster layers (for example, intensity and height) via the rasterize point cloud algorithm.
Initially, two raster layers were derived from the point cloud: one using the maximum Z values (which leads to the DSM) and one using the minimum Z values (to create the DTM). Subsequently, the DTM and DSM layers were used as input for the nDSM layer calculation algorithm – an nDSM is a Normalized Digital Surface Model that represents the height of a pixel above ground. This layer was then used as input for segmentation, and the resulting objects were classified using height and shape attributes, arriving at two classes: guardrails and signs.
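The DSM/DTM/nDSM step can be sketched in a few lines of numpy: grid the points by cell, keep the maximum Z per cell for a DSM-like layer and the minimum for a DTM-like layer, then subtract. This is a minimal illustration of the concept, not eCognition's rasterize point cloud algorithm, and the sample coordinates are made up:

```python
import numpy as np

def rasterize_z(points, cell=1.0, reducer=max):
    """Grid (x, y, z) points and keep one z per cell via the given reducer.

    reducer=max yields a DSM-like layer, reducer=min a DTM-like one.
    Cells with no points stay NaN.
    """
    x, y, z = points.T
    col = ((x - x.min()) / cell).astype(int)
    row = ((y - y.min()) / cell).astype(int)
    grid = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, v in zip(row, col, z):
        cur = grid[r, c]
        grid[r, c] = v if np.isnan(cur) else reducer(cur, v)
    return grid

# Hypothetical points: two in one cell, one in the neighbouring cell
pts = np.array([[0.2, 0.2, 100.0],
                [0.8, 0.4, 103.5],
                [1.5, 0.3, 100.2]])
dsm = rasterize_z(pts, cell=1.0, reducer=max)   # max Z per cell
dtm = rasterize_z(pts, cell=1.0, reducer=min)   # min Z per cell
ndsm = dsm - dtm  # height above ground per cell
```

In the first cell the nDSM is 3.5 m (a raised feature such as a sign), while the neighbouring cell is flat at 0.0 m, which is exactly the attribute the segmentation step exploits.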
To identify pavement markings, the initial step was to use the automatic point cloud classification algorithm which supports about 10 classes. With this first classification it was possible to clearly distinguish some elements of the roadway including horizontal signs.
Although the auto-classification results were largely accurate, several errors required additional attention. Therefore, the parts of the point cloud classified as pavement markings were rasterized using the intensity attribute, generating a so-called intensity layer that can be used in combination with eCognition's great depth of OBIA tools. A segmentation was then applied to this layer, making it possible to better define the "signage" class.
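The intuition behind the intensity layer is that retroreflective pavement paint returns much stronger LiDAR intensity than the surrounding asphalt, so marking pixels stand out. The toy sketch below uses a simple global threshold as a stand-in; eCognition's OBIA segmentation is far richer than this, and the intensity values are invented:

```python
import numpy as np

# Hypothetical intensity raster: pavement markings reflect strongly,
# so they appear as high-intensity pixels against darker asphalt
intensity = np.array([[ 12,  15, 210, 205,  14],
                      [ 11, 215, 220,  13,  10],
                      [ 13,  14,  12,  11, 208]])

# Crude threshold in place of a real segmentation step
markings = intensity > 128
n_marking_pixels = int(markings.sum())
```

A real workflow would group the thresholded pixels into objects and filter them by shape, which is where the object-based tools come in.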
The final results are excellent, especially considering that the approach is fully automatic, with no manual editing. The automated results achieved with eCognition were compared to objects identified manually in the raw point cloud, yielding an accuracy of 86% for the feature extraction. The calculated accuracy accounts for areas where data was missing.
The rule set has been optimized for performance – processing the entire point cloud (about 50 million points and a file size of 1.8 GB) took only 10 minutes. This is an excellent example of eCognition's flexibility and how such GIS/asset monitoring features can be automatically extracted.