SfM PHOTOGRAMMETRY
Digital imagery and position data recorded by the SQUID-5 system were processed using Structure-from-Motion (SfM) photogrammetry techniques that generally follow the workflow outlined by Hatcher and others (2020). These techniques are detailed below, including specific parameter settings and the processing workflow.
The primary software used for SfM processing was Agisoft Metashape Professional, version 1.6.6, build 11715, hereafter referred to as "Metashape." Because of the large number of images in this dataset, processing was conducted on a 792-CPU-core Linux-based High-Performance Computing (HPC) cluster at the USGS Advanced Research Computing (ARC) group (https://doi.org/10.5066/P9XE7ROJ).
First, the raw images collected during the six mission days were added to a new project in Metashape. Raw images were used instead of the color-corrected images owing to their larger dynamic range, which generally resulted in more SfM tie points. The images came from only four of the five cameras on the SQUID-5 system because of a camera focusing problem (see the metadata for the raw image files for more explanation), so each camera was assigned a unique camera calibration group in the Camera Calibration settings. Within the Camera Calibration settings, the camera parameters were entered as a 0.00345 × 0.00345 mm pixel size for all camera sensors, an 8 mm focal length for the central camera (CAM13), and 6 mm focal lengths for the remaining cameras (CAM39, CAM75, CAM82). These focal lengths reflect the different lenses chosen for each camera.
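The pixel size and focal length parameters above determine, through a simple pinhole model, how much seafloor each pixel covers. The sketch below illustrates that relationship; the 2.0 m camera-to-seafloor range is a hypothetical illustration value (not from this survey), and refraction at the water interface is ignored.

```python
def ground_sample_distance(pixel_size_mm, focal_length_mm, range_m):
    """Approximate seafloor footprint of one pixel, in meters.

    Simple pinhole model: footprint scales with pixel size and range,
    and inversely with focal length. Refraction is ignored.
    """
    return pixel_size_mm / focal_length_mm * range_m

# Sensor parameters from the text: 0.00345 mm pixels, 8 mm (CAM13)
# and 6 mm (CAM39, CAM75, CAM82) lenses. The 2.0 m range is hypothetical.
gsd_cam13 = ground_sample_distance(0.00345, 8.0, 2.0)  # central camera
gsd_side = ground_sample_distance(0.00345, 6.0, 2.0)   # outer cameras
```

The shorter 6 mm lenses give the outer cameras a coarser per-pixel footprint but a wider field of view at the same range.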
Additionally, the cameras required offsets to transform the GNSS positions to each camera's entrance pupil (that is, optical center). Initial measurements of these offsets were obtained using a separate SfM technique, outlined in Hatcher and others (2020), which found the offsets to be:
Camera    X (m)     Y (m)     Z (m)
CAM13     0.034     0.011     0.840
CAM39     0.273    -0.109     0.916
CAM75     0.131     0.559     0.754
CAM82    -0.010    -0.594     0.762
Here, X and Y are the camera sensor-parallel offsets, and Z is the sensor-normal offset. The accuracy settings were set to 0.01 m for CAM13 and 0.025 m for the other three cameras. Lastly, these offsets were allowed to be adjusted using the "Adjust GPS/INS offset" option, because slight camera shifts may occur with each rebuild and use of the SQUID-5 system.
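Conceptually, each lever-arm offset shifts the measured GNSS antenna position to the corresponding camera's entrance pupil. Metashape applies these offsets internally (with the platform attitude and its own sign conventions); the sketch below is only a simplified illustration assuming a level, unrotated platform.

```python
def antenna_to_camera(antenna_enu, offset_xyz):
    """Shift a GNSS antenna position by a camera lever-arm offset.

    antenna_enu: (easting, northing, altitude) of the antenna, meters.
    offset_xyz:  per-camera (X, Y, Z) lever arm from the table above.
    A full treatment rotates the offset by the platform attitude and
    honors Metashape's GPS/INS sign convention; identity rotation is
    assumed here for illustration only.
    """
    return [a + o for a, o in zip(antenna_enu, offset_xyz)]

# CAM13 lever arm from the table above; antenna position is hypothetical.
cam13 = antenna_to_camera([500000.0, 3000000.0, -1.20], [0.034, 0.011, 0.840])
```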
The SQUID-5 GNSS antenna positions were then imported into the project and matched with each image by time. The easting and northing (in meters) were obtained from the NAD83 UTM Zone 17N data, and altitudes were obtained from the NAVD88 orthometric heights (in meters).
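Metashape performs this time-based matching internally when positions are imported; the hypothetical helper below sketches the underlying idea, pairing each image timestamp with the nearest GNSS fix in a time-sorted record list.

```python
import bisect

def match_by_time(image_times, gnss_records):
    """Pair each image timestamp with the nearest GNSS fix by time.

    gnss_records: list of (time_s, easting, northing, altitude) tuples,
    sorted by time. Illustration only; Metashape does this internally.
    """
    times = [r[0] for r in gnss_records]
    matched = []
    for t in image_times:
        i = bisect.bisect_left(times, t)
        if i == 0:
            j = 0                       # before the first fix
        elif i == len(times):
            j = len(times) - 1          # after the last fix
        else:
            # pick the closer of the two bracketing fixes
            j = i if times[i] - t < t - times[i - 1] else i - 1
        matched.append(gnss_records[j])
    return matched

# Two hypothetical fixes, one second apart
fixes = [(0.0, 100.0, 200.0, -1.0), (1.0, 100.5, 200.5, -1.1)]
paired = match_by_time([0.1, 0.9], fixes)
```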
Prior to aligning the data, the Metashape reference settings were assigned. The coordinate system was "NAD83(2011) / UTM zone 17N." The camera accuracy was set to 0.10 m in the horizontal dimensions and 0.15 m in the vertical, following an examination of the source GNSS data. Tie point accuracy was set to 1.0 pixels. The remaining reference settings were not relevant because there were no camera orientation measurements, marker points, or scale bars in the SfM project.
The data were then aligned in Metashape using the "Align Photos" workflow tool. Settings for the alignment included "High" accuracy and "Reference" preselection using the "Source" information. This latter setting allowed the camera position information to assist with the alignment process. Additionally, the key point limit was set to 50,000 and the tie point limit was assigned a value of zero, which allowed generation of the maximum number of tie points for each image. Lastly, neither the "Guided image matching" nor the "Adaptive camera model fitting" options were used. This process resulted in over 112 million tie points. The total positional errors for the cameras were reported to be 0.017 m, 0.018 m, and 0.048 m in the east, north, and altitude directions, respectively. Thus, the total positional error (the root sum of squares of the three components) was 0.054 m.
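The total positional error reported above is the root sum of squares of the three per-axis errors, which can be checked directly:

```python
import math

def total_error(e_east, e_north, e_alt):
    """Root sum of squares of the per-axis camera position errors, meters."""
    return math.sqrt(e_east**2 + e_north**2 + e_alt**2)

# Per-axis errors reported after the initial alignment
err = round(total_error(0.017, 0.018, 0.048), 3)  # 0.054
```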
To improve upon the camera calibration parameters and computed camera positions, an optimization process was conducted that was consistent with the techniques of Hatcher and others (2020), which are based on the general principles provided in Over and others (2021). First, a duplicate of the aligned data was created with the "Duplicate Chunk" tool in case the optimization process eliminated too much data. Within the new chunk, the least valid tie points were removed using the "Gradual Selection" tools. As noted in Hatcher and others (2020), these tools are used less aggressively for the underwater imagery of SQUID-5 than is common for aerial imagery owing to the differences in image quality. First, all points with a "Reconstruction Uncertainty" greater than 20 were selected and deleted. Then, all points with a "Projection Accuracy" greater than 8 were selected and deleted. The camera parameters were then recalibrated with the "Optimize Cameras" tool. Throughout this process, the only camera parameters adjusted were f, k1, k2, k3, cx, cy, p1, and p2. Once the camera parameters were adjusted, all points with "Reprojection Errors" greater than 0.4 were deleted, and the "Optimize Cameras" tool was used one final time. This optimization process resulted in slightly over 59.3 million tie points, a reduction of roughly 47 percent from the original tie points. The camera positional errors were reported to be 0.015 m, 0.015 m, and 0.047 m in the east, north, and altitude directions, respectively, and the total positional error was 0.052 m.
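The gradual-selection passes above amount to successive threshold filters on per-point quality metrics. The sketch below is a generic re-implementation for illustration (Metashape applies these filters through its Gradual Selection dialog, re-optimizing cameras between passes); the metric names and point dictionaries are hypothetical stand-ins.

```python
def gradual_selection(points, thresholds):
    """Iteratively drop tie points whose quality metrics exceed limits.

    points: dicts keyed by quality-metric name (hypothetical layout).
    thresholds: ordered (metric, limit) pairs mirroring the text.
    """
    kept = points
    for metric, limit in thresholds:
        kept = [p for p in kept if p[metric] <= limit]
        # in Metashape, "Optimize Cameras" would be re-run between passes
    return kept

# Thresholds used in the text
thresholds = [
    ("reconstruction_uncertainty", 20.0),
    ("projection_accuracy", 8.0),
    ("reprojection_error", 0.4),
]
pts = [
    {"reconstruction_uncertainty": 15.0, "projection_accuracy": 5.0, "reprojection_error": 0.3},
    {"reconstruction_uncertainty": 25.0, "projection_accuracy": 5.0, "reprojection_error": 0.3},
    {"reconstruction_uncertainty": 15.0, "projection_accuracy": 5.0, "reprojection_error": 0.6},
]
survivors = gradual_selection(pts, thresholds)  # only the first point passes
```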
The final computed arm offsets were found to be:
Camera    X (m)     Y (m)     Z (m)
CAM13     0.028     0.016     0.821
CAM39     0.274    -0.106     0.912
CAM75     0.128     0.562     0.741
CAM82    -0.015    -0.570     0.765
Following the alignment and optimization of the SQUID-5 data, mapped SfM products were generated in Metashape. For these steps, the original raw images were replaced with the color-corrected images. This replacement was conducted by redirecting each image path from the raw image to the corresponding color-corrected image.
First, a three-dimensional dense point cloud was generated using the "Build Dense Cloud" workflow tool. This was run with the "High" quality setting and "Moderate" depth filtering, and the tool was set to calculate both point colors and confidence. The resulting dense cloud contained over 5 billion points across the 0.07 square kilometer survey area, or roughly 68,000 points per square meter (6.8 points per square centimeter).
The dense points were classified by thresholding the Metashape-computed confidence values, which are equivalent to the number of image depth maps that were integrated to make each point. Points with a confidence value of one were assigned to a "low noise" class, and points with values of two and greater were assigned to an "unclassified" class. The final dense cloud was partitioned into blocks (also referred to as tiles) measuring 150 meters on a side and exported with point colors and classifications in the LAZ file format.
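The classification and tiling steps above can be sketched as simple per-point rules; the tile-origin coordinates below are hypothetical, and only the two confidence cases named in the text are distinguished.

```python
import math

def classify(confidence):
    """Map a Metashape point confidence to the classes used in the text:
    one depth map -> "low noise"; two or more -> "unclassified"."""
    return "low noise" if confidence == 1 else "unclassified"

def tile_index(easting, northing, origin_e, origin_n, tile_size=150.0):
    """Assign a point to a 150 m x 150 m export block (tile) by
    integer division of its offset from a chosen origin."""
    return (int(math.floor((easting - origin_e) / tile_size)),
            int(math.floor((northing - origin_n) / tile_size)))

# Hypothetical point and tile origin for illustration
cls = classify(1)                                          # "low noise"
idx = tile_index(400160.0, 2700310.0, 400000.0, 2700000.0)  # (1, 2)
```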