Why is my SmartMatch / Scanner / UAS Project taking so long to process?

This note refers to projects run with SmartMatch and Smart Orientation processing. Sometimes these projects take longer to process than expected. A small to medium size UAV project with 200 photos typically takes less than 30 minutes to process (matching and orientation) on a modern computer. If your project is taking significantly longer than this, here are the things to consider. For the most part it comes down to project setup issues.

Computational Power

Before discussing project setup issues, let’s look at how computational power (CPU speed, number of cores, amount of memory, disk speed) affects PhotoModeler Smart processing speed.

The amount of memory is important for the MVS (dense point cloud), texturing, and ortho-photo processes, but less so for SmartMatch and SmartOrient. While these processes do use a fair amount of memory, their usage is not as heavy. Memory speed and processor cache size do make a difference, as Smart processing accesses memory frequently.

The number of processing cores helps SmartMatch considerably. A 4-core (or 8-core hyperthreaded) CPU is a good standard – an Intel i7, for example. While parts of SmartOrient are threaded and use multiple cores, the speed-up from multiple cores is not as great.

Disk speed has little effect on processing itself, but it does affect how quickly large images and large projects load.

See the Recommended Computer Requirements article for further information on this topic.

Project Setup – Matching Steps

The SmartMatch steps run after Smart feature detection and produce a preliminary matching of features across photographs.

The speed of matching is affected by computational power (speed and number of cores), but also by gaps (lack of overlap between photos) and by lack of texture. Sometimes you will see SmartMatch change the resolution of the images for sampling to try to improve matches. If there are true gaps in photographic overlap, though, no amount of changing the resolution will fix this, and it will unnecessarily slow things down.

Ensure there are no gaps in the photography and that the photos are of randomly textured subjects. See How It Works for Scanner and UAS for examples. See also the ‘Gaps in Photograph Overlap’ section below.

Project Setup – Orientation / Calibration Step

The main issue that slows down Smart Orientation is conflicting information. That is, if the input data says one thing but reality is different, the orientation algorithm ‘fights’ for a solution and that can be slow. Here are some ways the input data or assumptions may not match reality and cause these issues:

  • incorrect camera calibration
  • incorrect control data
  • incorrect control precisions
  • image data that is repetitive and not random

Incorrect camera calibration
If you are loading a pre-calibrated camera and are not using automated calibration (Smart Calibration as part of Smart Orientation), it is important that it be the correct calibration. A small variation in the calibration may not make a big difference, but a calibration file that does not match the actual camera at all can.

Incorrect control data
Control data is 3D information that comes from outside PhotoModeler to help with a) placing the project in a particular coordinate system, or b) correcting project distortions and low accuracy in weaker-geometry projects. Control data can come from a ground survey (called ground control) or from a GPS receiver with the camera (called camera station control).

If the control data entered into PhotoModeler is incorrect, or is assigned to the wrong object(s) in the project, it creates a conflict: the processing algorithm must reconcile what it computes as the correct solution with what the external control data tells it.

If you suspect this, try processing with no control to see if the processing speed becomes more reasonable. (You can use a multi-point transform to place the model in the real-world coordinate system instead of control; see this article for an explanation of the two approaches.)

Incorrect control precisions
The input precisions on control data tell the software how well you know the true 3D positions of this external data. There is always some uncertainty in the input data. GPS data, for example, may only be precise to 5 to 20 meters depending on the type of GPS device. If you input GPS control data into PhotoModeler that has a precision of 5 m but tell PhotoModeler the precision is 0.5 m, the processing algorithm may have trouble reconciling the poor position input: the GPS says one position, PhotoModeler computes another that is not within the specified tolerance, and the resulting conflict can slow processing down.
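As a rough illustration of why the stated precision matters: in a least-squares adjustment, a control observation is typically weighted by the inverse square of its precision. The function and numbers below are hypothetical (PhotoModeler's internal weighting is not documented here), but they show how badly an optimistic precision value can over-weight poor GPS data:

```python
def control_weight(sigma_m: float) -> float:
    # Typical least-squares weighting: weight = 1 / sigma^2
    return 1.0 / sigma_m ** 2

true_sigma = 5.0      # actual GPS precision in meters
claimed_sigma = 0.5   # precision mistakenly entered into the project

# The adjustment trusts the control far more than it should,
# so it 'fights' the photo geometry instead of deferring to it.
overweight = control_weight(claimed_sigma) / control_weight(true_sigma)
```

A 10x-too-optimistic precision produces a 100x over-weighting, which is why even a modest data-entry mistake here can noticeably slow the solution.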

If you are working with a drone / UAV, it will sometimes place the X and Y (or N and E) GPS position in the image EXIF header, along with either a GPS height or a more accurate height based on a barometric sensor (more common with multi-rotor drones). In this case you might have a precision in N and E of 5 m but a height precision of 0.2 m (the stored drone height is more precise). On the other hand, if you are flying a fixed-wing drone whose GPS height has a precision of 5 m or worse but tell PhotoModeler it is 0.2 m precise, this will cause a conflict in processing and may slow it down.
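For reference, EXIF headers store GPS latitude and longitude as degree/minute/second rationals plus a hemisphere reference. A minimal conversion helper (the function name is hypothetical, not a PhotoModeler API) for inspecting those values might look like:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    # EXIF GPSLatitude/GPSLongitude are stored as (deg, min, sec) rationals;
    # GPSLatitudeRef/GPSLongitudeRef give the hemisphere ('N'/'S'/'E'/'W').
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

lat = dms_to_decimal(49, 15, 36.0, "N")    # ≈ 49.26
lon = dms_to_decimal(123, 6, 0.0, "W")     # ≈ -123.1
```

Dumping the EXIF positions this way can help you sanity-check what the drone actually wrote before assigning precisions to it.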

If you suspect this, try processing without GPS control positions, or relax the precision values.

Image data that is repetitive and not random
SmartMatch does a good job of automatically matching features in random textures (sand, dirt, grass, crops, rocks, coal, gravel, bricks), but not as well with man-made structures that have many features that look exactly the same (imagine the front of a skyscraper with its repeating window frames). Features that are not very random can be incorrectly matched by SmartMatch. SmartOrient is designed to handle a certain percentage of mismatches, but if that percentage gets too high it takes longer to filter the data and arrive at the right answer.
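To see why the mismatch rate matters so much, consider the standard formula for the number of random-sampling (RANSAC-style) trials needed to find one all-inlier sample with confidence p: N = log(1 − p) / log(1 − (1 − e)^s), where e is the outlier (mismatch) fraction and s is the sample size. PhotoModeler's internal filtering is not documented here, so this is only an illustrative sketch of the general cost:

```python
import math

def ransac_iterations(outlier_ratio: float, sample_size: int, confidence: float) -> int:
    # Number of random samples needed so that, with the given confidence,
    # at least one sample contains only correct matches.
    inlier_sample_prob = (1.0 - outlier_ratio) ** sample_size
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - inlier_sample_prob))

# Assuming a 5-point relative orientation sample and 99% confidence:
clean = ransac_iterations(0.10, 5, 0.99)   # ~6 trials at 10% mismatches
messy = ransac_iterations(0.50, 5, 0.99)   # ~146 trials at 50% mismatches
```

The cost grows super-linearly with the mismatch fraction, which is why repetitive scenes that fool the matcher slow orientation down disproportionately.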

Gaps in Photograph Overlap

In addition to the conflicting-information slowdowns described above, the other issue that can cause a speed problem is gaps in photograph overlap. A gap is where some photographs in the project do not overlap, or do not overlap well, with other photographs. Photos that overlap share points – they ‘see’ the same part of the object, scene, or ground.

Imagine taking 10 photos in one area, moving far away, taking another 10, and then feeding all of them into the same project. In this extreme case SmartMatch may throw out one set of 10 photos, and SmartOrient is not greatly affected. But if there is some weak overlap, where just a couple of photos overlap, then SmartOrient may take quite a bit of time trying to connect the two sets of photos so they are consistent.
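One way to picture what SmartOrient is up against is a connectivity graph: photos are nodes, and two photos are linked when they share enough matched points. A gap shows up as more than one connected group, or as groups joined by only a weak link. The threshold and data layout below are assumptions for illustration, not PhotoModeler internals:

```python
from collections import defaultdict, deque

def photo_components(shared_points):
    """shared_points maps (photo_a, photo_b) -> number of shared matched points.
    Returns the connected groups of photos; more than one group means a
    true gap in overlap."""
    MIN_SHARED = 20  # assumed minimum matches for a reliable photo-to-photo link
    graph = defaultdict(set)
    for (a, b), n in shared_points.items():
        if n >= MIN_SHARED:
            graph[a].add(b)
            graph[b].add(a)
    seen, components = set(), []
    for start in list(graph):
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:  # breadth-first walk of this photo's group
            p = queue.popleft()
            if p in comp:
                continue
            comp.add(p)
            queue.extend(graph[p] - comp)
        seen |= comp
        components.append(comp)
    return components
```

For example, photos A–B–C linked by strong matches and D–E linked only to each other form two groups, signalling a gap between the C and D flight areas.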

These gaps can result from the physical location of the cameras (too far apart in one part of the photo set), but they can also occur for unexpected reasons. These reasons apply more to UAV/drone projects and stem from SmartMatch’s difficulty matching points on water or high tree canopy. Water moves, so while the Smart feature detector finds points, they cannot be reliably matched because they are not in the same place in each photo. Lakes, rivers, or coastlines can cause these issues. Try to minimize these in your projects and keep more of the shoreline in the photo frame rather than the water.

High-canopy trees can also cause a problem, for a similar reason. If a drone/UAV is flying close to the tops of trees, the trees and leaves/needles look very different in each photograph. It is a perspective issue. Place a box 1 m in front of you and then step sideways 10 cm. The box will look quite different – you might be able to see one side of the box from one standing position but not from the other. Now put the box 20 m away and again step 10 cm sideways. The box will look very similar from both positions. Trees that are close to the camera can look very different from photo to photo. To alleviate this issue, fly the UAV higher and/or increase the overlap so photos are taken closer to each other.
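The box example is simple trigonometry: the change in viewing angle for a sideways step is roughly atan(step / distance), so the same step produces a much larger apparent change for nearby objects. A quick check of the numbers in the text (illustrative only):

```python
import math

def apparent_shift_deg(distance_m: float, step_m: float) -> float:
    # Change in viewing direction (degrees) after stepping step_m sideways
    # while looking at an object distance_m away.
    return math.degrees(math.atan2(step_m, distance_m))

near = apparent_shift_deg(1.0, 0.1)    # box 1 m away, 10 cm step: ~5.7 degrees
far = apparent_shift_deg(20.0, 0.1)    # box 20 m away, 10 cm step: ~0.29 degrees
```

The nearby object sees roughly a 20x larger viewpoint change for the same step, which is why close tree canopy defeats matching while distant ground does not.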

Photography has to be planned carefully so that such matching gaps do not cause a failure in SmartOrient.
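When planning a flight, forward overlap can be estimated from flight height, camera geometry, and photo spacing. The formula is standard photogrammetric planning arithmetic, but all the numbers below are made up for illustration:

```python
def forward_overlap(height_m: float, focal_mm: float, sensor_mm: float, spacing_m: float) -> float:
    # Ground footprint length along track = flight height * sensor dimension / focal length
    footprint_m = height_m * sensor_mm / focal_mm
    # Fraction of one footprint shared with the next photo
    return max(0.0, 1.0 - spacing_m / footprint_m)

# e.g. flying at 100 m with an 8.8 mm focal length, an 8.8 mm sensor
# dimension along track, and a photo triggered every 20 m:
overlap = forward_overlap(100.0, 8.8, 8.8, 20.0)   # ~0.80 (80% forward overlap)
```

Running numbers like these before a flight helps confirm that the planned spacing leaves a comfortable overlap margin rather than a gap.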

Review the following if your Smart project is very slow to solve:

  • Do you have confidence that the input control data (from GPS camera positions or ground control) is correct?
  • Is the precision setting of the control data correct (e.g., a fixed-wing UAV with a standard GPS may not have good height precision)?
  • If you are using a pre-calibrated camera, is it the correct one?
  • Is the image data suitable (random textures, etc.)?
  • Are there any large gaps in the overlap between photos due to camera positioning or SmartMatch failure on water, trees, etc.?

Projects that process well and quickly have:

  • subject material that is suitable
  • good photographic overlap
  • either no control data, or control data of good quality
  • control precisions that match the true precision of the capture instrument

Please contact Technical Support if you want help with a slow project.