Segment Anything (SAM) AI Model Now Available in eCognition

You can now use the Segment Anything (SAM) AI model from Meta AI in Trimble® eCognition® to easily apply segmentation to your geospatial imagery.

Segmentation is the fundamental step in eCognition ruleset development. Selecting a good segmentation strategy is key to successful object-based analysis of geospatial data. Trimble eCognition offers an extensive list of options for segmenting your image layers, including our unique multiresolution segmentation, simple chessboard segmentation, watershed and many more.

Segment Anything (SAM) is a segmentation strategy developed by Meta AI that can identify and “cut out” individual objects in images. The AI-powered algorithm was trained on over 1 billion segmentation masks collected from 11 million images from different geographic locations.

It can be used across various domains, including the geospatial industry. SAM simplifies the task of finding buildings, cars and even individual trees in imagery. Additionally, you can combine the power of SAM and eCognition multiresolution segmentation for even more accurate delineation of the objects in the images.

You can download the ruleset here. To run this ruleset, make sure that you did not uncheck Python Support when installing eCognition. If you did, reinstall eCognition and make sure Python Support is checked during installation.

This ruleset has several sections: Setup, SelectModelType, and a section that applies SAM in three ways: pure SAM, a fusion of SAM and multiresolution segmentation, and SAM with user-defined seeds.

Configure Your Python Environment

Setup needs to be executed only once. It installs the PyTorch library into your Python environment. Before running this section of the ruleset, you need to clone your environment: go to Tools > Manage Python Environments, clone the default environment, and then activate the clone.
 
There are two setup options. Users with an NVIDIA GPU should execute the python script algorithm that installs torch-CUDA; users without an NVIDIA GPU need to execute the algorithm with the torch-CPU installation. The torch-CUDA version has significant performance benefits.
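
For orientation, both scripts amount to a standard PyTorch installation. The sketch below shows the rough shape of the install commands (the wheel index URLs are our assumption and depend on your CUDA and PyTorch versions) and how to verify which flavor is active in your cloned environment:

    # The two setup scripts boil down to PyTorch install commands along these lines
    # (index URLs are assumptions; match them to your CUDA version):
    #   pip install torch --index-url https://download.pytorch.org/whl/cu118   (torch-CUDA)
    #   pip install torch --index-url https://download.pytorch.org/whl/cpu     (torch-CPU)
    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())  # True = torch-CUDA can use your GPU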

SAM requires a very powerful GPU. If your graphics card cannot handle the segmentation, you will get a CUDA OutOfMemory error message. If you have already installed torch-CUDA, you will need to clone the default environment again and install torch-CPU.
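
Outside of eCognition's environment workflow, the generic PyTorch pattern for catching this condition looks roughly like the sketch below; the helper is hypothetical and only illustrates the failure mode, as within the ruleset the supported fix remains switching to the torch-CPU environment described above:

    import torch

    def run_with_cpu_fallback(run, model):
        """Try a SAM run on the GPU; on a CUDA out-of-memory error, retry on the CPU."""
        try:
            return run()
        except RuntimeError as err:
            if "out of memory" not in str(err):   # a different error: re-raise untouched
                raise
            torch.cuda.empty_cache()              # release the failed GPU allocations
            model.to("cpu")                       # move the model weights off the GPU
            return run()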


Installation of torch-CPU or torch-CUDA.

Select Model Type

In the section SelectModelType, you are asked to select a model type. Each model type has a different level of complexity.

Select the model type from several complexity options.

If you want to obtain fast but coarse results, execute the process that initializes the variable model_type = vit_b.

If you want to obtain the most accurate results possible and processing time is less critical for you, execute the process that initializes the variable model_type = vit_h. In this case, you will most likely need to install torch-CPU, as this model requires an extremely powerful GPU.

The option with the variable model_type = vit_l is a compromise between quality and performance.
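
For reference, in Meta AI's segment-anything Python package these names select different checkpoints. A minimal sketch of what the model_type variable controls (the checkpoint file names are the ones Meta AI publishes; their local paths are our assumption):

    from segment_anything import sam_model_registry

    # model_type trades speed for accuracy: vit_b (fast), vit_l (balanced), vit_h (best)
    model_type = "vit_b"
    checkpoints = {
        "vit_b": "sam_vit_b_01ec64.pth",   # smallest and fastest
        "vit_l": "sam_vit_l_0b3195.pth",   # compromise between quality and performance
        "vit_h": "sam_vit_h_4b8939.pth",   # largest and most accurate
    }
    sam = sam_model_registry[model_type](checkpoint=checkpoints[model_type])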

Segment Your Image

We enabled three approaches: SAM - Everything, SAM + MRS seeds, and SAM - custom seed points.

SAM - Everything, SAM + MRS seeds, and SAM - custom seed points.

The first and simplest approach, SAM - Everything, is based purely on Meta AI’s SAM algorithm. It applies the segmentation strategy to your imagery and outputs image objects.


SAM - Everything (pure SAM-based), model_type=vit_b (fast results).

SAM generates a grid of seed points based on the parameter "points_per_side"; the algorithm uses these seeds to segment images, but the seeds themselves are not meaningful. In the next two options, we give you control over the creation of the seeds that SAM uses to delineate objects.
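
At the Python level, this grid-based mode corresponds to the package's automatic mask generator. A minimal sketch (the input file name is hypothetical, and sam is the model loaded in the earlier sketch):

    import cv2
    from segment_anything import SamAutomaticMaskGenerator

    # SAM - Everything: seeds form a regular points_per_side x points_per_side grid
    mask_generator = SamAutomaticMaskGenerator(sam, points_per_side=32)

    # segment-anything expects an RGB uint8 array; OpenCV loads BGR, so convert
    image = cv2.cvtColor(cv2.imread("scene.tif"), cv2.COLOR_BGR2RGB)
    masks = mask_generator.generate(image)   # list of dicts, one 'segmentation' mask each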

The SAM + MRS seeds option applies a fusion of multiresolution segmentation and SAM. Applying multiresolution segmentation prior to SAM generates more meaningful seeds and, in some cases, results in a more accurate delineation of objects. You can change the Scale parameter in the multiresolution segmentation algorithm to generate seeds that better describe the objects of interest in your imagery. Increasing the Scale parameter also results in faster processing, as the number of seeds for the SAM segmentation becomes smaller.
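
Multiresolution segmentation itself is an eCognition algorithm, but the idea behind the fusion can be sketched with any prior segmentation: keep one representative point per image object and hand those points to SAM as seeds. The helper below is our illustration (using object centroids; the ruleset's exact seed derivation may differ):

    import numpy as np

    def seeds_from_labels(label_image):
        """Derive one (x, y) seed per object from a prior segmentation.

        label_image: 2-D integer array in which each object has a unique id > 0
        (in the ruleset, this role is played by the MRS image objects).
        """
        seeds = []
        for obj_id in np.unique(label_image):
            if obj_id == 0:                       # 0 = background / unlabeled
                continue
            ys, xs = np.nonzero(label_image == obj_id)
            seeds.append((xs.mean(), ys.mean()))  # object centroid as the seed
        return np.array(seeds)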

The third approach, SAM - custom seed points, allows full control over seed generation for SAM segmentation. You can manually create a point vector that contains the seeds or generate such a vector layer using your ruleset.
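
In both seed-driven options, the seeds end up as point prompts for SAM. A minimal sketch with the segment-anything predictor API (the coordinates are hypothetical; in the ruleset they come from the MRS objects or from your point vector layer):

    import numpy as np
    from segment_anything import SamPredictor

    predictor = SamPredictor(sam)       # sam as loaded in the model-type sketch
    predictor.set_image(image)          # RGB uint8 array, embedded once per image

    seeds = np.array([[420, 310], [512, 298]])   # hypothetical pixel coordinates
    for point in seeds:
        masks, scores, _ = predictor.predict(
            point_coords=point[None, :],   # a single seed point...
            point_labels=np.array([1]),    # ...labeled 1 = foreground
            multimask_output=False,        # one mask per seed
        )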

In the example below, you can see how the fusion of multiresolution segmentation and SAM can benefit the detection of single cars in the parking lot. A pure SAM-based approach does not find each car. Generating seeds with multiresolution segmentation results in significantly better output.


RunSegmentAnything (pure SAM-based), model_type = vit_h.

Fusion of SAM and multiresolution segmentation.


Feel free to share your experiences and questions with us at support@ecognition.com!
