SfM PHOTOGRAMMETRY
Photographic and position data generated by the SQUID-5 system were processed using Structure-from-Motion (SfM) photogrammetry techniques that generally follow the workflow outlined by Hatcher and others (2020). These techniques are detailed here and include specific references to parameter settings and processing workflow.
The primary software used for SfM processing was Agisoft Metashape Professional, version 1.6.4, build 10928, referred to hereafter as "Metashape." For reference, the processing was conducted on a computer system with an Intel Xeon E5-2687W v4 CPU at 3.00 GHz, 256 GB of installed RAM, two GeForce GTX 1080 Ti GPUs, and the Windows 10 Pro (64 bit) operating system.
First, the raw photographs collected during JD069 and JD070 were added to a new project in Metashape. Raw photos were used instead of the color-corrected photos owing to their larger dynamic range, which generally resulted in more SfM tie points. The photos were derived from four cameras on the SQUID-5 system, so each camera was assigned a unique camera calibration group in the Camera Calibration settings. Within these settings, the pixel size was entered as 0.00345 x 0.00345 mm for all camera sensors, and the focal length was entered as 8 mm for the central camera (Cam13) and 6 mm for the remaining cameras (Cam30, Cam39, and Cam82). The different focal lengths reflect the different lenses chosen for each camera.
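Although these settings were entered through the Metashape interface, the same step can be scripted with the Metashape 1.6 Python API. The sketch below is illustrative only; the project file name and the assumption that each photo label begins with its camera name are hypothetical.

import Metashape

doc = Metashape.Document()
doc.open("squid5_jd069_jd070.psx")  # hypothetical project file name
chunk = doc.chunk

# Nominal focal length (mm) per camera; pixel pitch is 0.00345 mm for all sensors
focal_lengths = {"Cam13": 8.0, "Cam30": 6.0, "Cam39": 6.0, "Cam82": 6.0}

for label, f_mm in focal_lengths.items():
    sensor = chunk.addSensor()                 # one calibration group per camera
    sensor.label = label
    sensor.type = Metashape.Sensor.Type.Frame
    sensor.pixel_width = 0.00345               # mm
    sensor.pixel_height = 0.00345              # mm
    sensor.focal_length = f_mm                 # mm
    for camera in chunk.cameras:
        # assumes photo labels begin with the camera name (for example, "Cam13_...")
        if camera.label.startswith(label):
            camera.sensor = sensor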
Additionally, the cameras required offsets to transform the GNSS positions to each camera's entrance pupil (that is, optical center). Initial measurements of these offsets were obtained using a separate SfM technique, outlined in Hatcher and others (2020), which found the offsets to be:
Camera     X (m)      Y (m)      Z (m)
Cam13      0.036     -0.005      0.836
Cam30     -0.294     -0.077      0.921
Cam39      0.279     -0.015      0.926
Cam82     -0.016     -0.616      0.739
where X and Y are the offsets parallel to the camera sensor, and Z is the offset normal to the sensor. The accuracy settings were chosen to be 0.01 m for Cam13 and 0.025 m for the remaining three cameras. Lastly, these offsets were allowed to adjust using the "Adjust GPS/INS offset" option, because slight camera shifts may occur with each rebuild and use of the SQUID-5 system.
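A scripted equivalent of this step is sketched below. This and the following sketches assume the script is run from the Metashape Python console, where Metashape.app.document refers to the open project; the attribute names follow the Antenna class of the 1.6 API and the exact spelling of the accuracy attribute should be verified against the API reference.

import Metashape

# Initial GPS/INS offsets (X, Y, Z, in meters) and accuracies (m) from the table above
offsets = {
    "Cam13": ((0.036, -0.005, 0.836), 0.010),
    "Cam30": ((-0.294, -0.077, 0.921), 0.025),
    "Cam39": ((0.279, -0.015, 0.926), 0.025),
    "Cam82": ((-0.016, -0.616, 0.739), 0.025),
}

chunk = Metashape.app.document.chunk
for sensor in chunk.sensors:
    if sensor.label in offsets:
        (x, y, z), acc = offsets[sensor.label]
        sensor.antenna.location_ref = Metashape.Vector([x, y, z])
        sensor.antenna.location_acc = Metashape.Vector([acc, acc, acc])  # attribute name assumed
        sensor.antenna.fixed_location = False  # equivalent to enabling "Adjust GPS/INS offset"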
The SQUID-5 GNSS antenna positions were then imported into the project and matched with each photo by time. The easting and northing (in meters) were obtained from the NAD83 UTM Zone 10N data, and altitudes were obtained from the NAD83 ellipsoidal heights (in meters). These heights were converted to NAVD88 orthometric heights in Metashape using the "Conversion" tool.
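If the time-matched positions are written to a text file keyed by photo label (the time matching itself was done upstream of Metashape), the import can be scripted as sketched below; the file name, delimiter, and column order are assumptions.

import Metashape

chunk = Metashape.app.document.chunk
# Columns assumed: photo label, easting (m), northing (m), ellipsoidal height (m)
chunk.importReference(path="squid5_antenna_positions.csv",   # hypothetical file name
                      format=Metashape.ReferenceFormatCSV,
                      columns="nxyz",
                      delimiter=",",
                      skip_rows=1)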
Prior to aligning the data, the Metashape reference settings were assigned. The coordinate system was "NAD83(2011) / UTM zone 10N." The camera accuracy was set to 0.02 m in the horizontal dimensions and 0.06 m in the vertical, following an examination of the source GNSS data. Tie point accuracy was set to 1.0 pixel. The remaining reference settings were not relevant, because there were no camera orientation measurements, marker points, or scale bars in the SfM project.
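These reference settings correspond to the chunk properties sketched below; the EPSG code shown for NAD83(2011) / UTM zone 10N is an assumption and should be checked against the project, particularly because the vertical datum was converted to NAVD88.

import Metashape

chunk = Metashape.app.document.chunk
chunk.crs = Metashape.CoordinateSystem("EPSG::6339")  # NAD83(2011) / UTM zone 10N (assumed code)
chunk.camera_location_accuracy = Metashape.Vector([0.02, 0.02, 0.06])  # m (east, north, up)
chunk.tiepoint_accuracy = 1.0                                          # pixels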
The data were then aligned in Metashape using the "Align Photos" workflow tool. Settings for the alignment included "High" accuracy and "Reference" preselection using the "Source" information; the latter setting allowed the camera position information to assist with the alignment process. Additionally, the key point limit and tie point limit were both assigned a value of zero, which allows the maximum number of points to be generated for each photo. Lastly, neither the "Guided image matching" nor the "Adaptive camera model fitting" option was used. This process resulted in over 94.5 million tie points. The positional errors for the cameras were reported to be 0.0066 m, 0.0097 m, and 0.0305 m in the east, north, and altitude directions, respectively; thus, the total positional error (the root sum of squares of these components) was 0.0326 m.
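In the 1.6 Python API, these alignment settings map roughly to the sketch below ("High" accuracy corresponds to downscale=1, and "Source" is the default reference-preselection mode when reference preselection is enabled); the generic-preselection setting is not stated above and is shown disabled as an assumption.

import Metashape

chunk = Metashape.app.document.chunk
chunk.matchPhotos(downscale=1,                 # "High" accuracy
                  generic_preselection=False,  # assumption; not specified in the text
                  reference_preselection=True, # "Reference" preselection from "Source" positions
                  keypoint_limit=0,            # zero = no limit
                  tiepoint_limit=0,            # zero = no limit
                  guided_matching=False)
chunk.alignCameras(adaptive_fitting=False)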
To improve upon the camera calibration parameters and computed camera positions, an optimization process was conducted that was consistent with the techniques of Hatcher and others (2020), which are based on the general principles provided in Over and others (2021). First, a duplicate of the aligned data was created with the "Duplicate Chunk" tool, in case the optimization process eliminated too much data. Within the new chunk, the least reliable tie points were removed using the "Gradual Selection" tools. As noted in Hatcher and others (2020), these tools are applied less aggressively to the underwater imagery of SQUID-5 than is common for aerial imagery, owing to the differences in photo quality. First, all points with a "Reconstruction Uncertainty" greater than 20 were selected and deleted. Then, all points with a "Projection Accuracy" greater than 8 were selected and deleted. The camera parameters were then recalibrated with the "Optimize Cameras" tool; throughout this process, the only camera parameters adjusted were f, k1, k2, k3, cx, cy, p1, and p2. Once the camera parameters were adjusted, all points with "Reprojection Errors" greater than 0.4 were deleted, and the "Optimize Cameras" tool was used one final time. This optimization process resulted in slightly over 62.5 million tie points, a reduction of roughly one-third from the original tie points. The camera positional errors were reported to be 0.0065 m, 0.0094 m, and 0.0302 m in the east, north, and altitude directions, respectively, and the total positional error was 0.0322 m. Additionally, all of the original photos remained aligned through this process.
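The duplication, gradual-selection, and optimization sequence can also be scripted, as sketched below using the tie-point filter of the 1.6 Python API.

import Metashape

doc = Metashape.app.document
chunk = doc.chunk.copy()  # "Duplicate Chunk" safeguard; the copy is added to the project

def remove_points(chunk, criterion, threshold):
    # remove all tie points that exceed the given threshold for this criterion
    point_filter = Metashape.PointCloud.Filter()
    point_filter.init(chunk, criterion=criterion)
    point_filter.removePoints(threshold)

def optimize(chunk):
    # adjust only f, cx, cy, k1, k2, k3, p1, and p2
    chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                          fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=False,
                          fit_p1=True, fit_p2=True,
                          fit_b1=False, fit_b2=False)

remove_points(chunk, Metashape.PointCloud.Filter.ReconstructionUncertainty, 20)
remove_points(chunk, Metashape.PointCloud.Filter.ProjectionAccuracy, 8)
optimize(chunk)
remove_points(chunk, Metashape.PointCloud.Filter.ReprojectionError, 0.4)
optimize(chunk)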
The final computed camera-arm offsets, after optimization, were found to be:
Camera     X (m)      Y (m)      Z (m)
Cam13      0.035     -0.004      0.847
Cam30     -0.292     -0.077      0.932
Cam39      0.276     -0.015      0.937
Cam82     -0.017     -0.607      0.750
Following the alignment and optimization of the SQUID-5 data, mapped SfM products were generated in Metashape. For these steps, the original raw photographs were replaced with the color-corrected photos. This replacement was conducted by resetting each photo's file path from the raw image to its color-corrected counterpart.
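One way to script this path replacement is sketched below; the folder name and the assumption that the color-corrected files share the raw file names are hypothetical.

import os
import Metashape

CORRECTED_DIR = "color_corrected"  # hypothetical folder of color-corrected photos

chunk = Metashape.app.document.chunk
for camera in chunk.cameras:
    filename = os.path.basename(camera.photo.path)
    new_path = os.path.join(CORRECTED_DIR, filename)
    if os.path.exists(new_path):
        camera.photo.path = new_path  # repoint the camera at the color-corrected image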
First, a three-dimensional dense point cloud was generated using the "Build Dense Cloud" workflow tool. This tool was run with the "High" quality setting and "Moderate" depth filtering, and it was set to calculate both point colors and point confidence. The resulting dense cloud contained more than 3.6 billion points across the 0.0774-square-kilometer survey area, or roughly 46,500 points per square meter (4.65 points per square centimeter).
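In the 1.6 Python API, the dense cloud is built in the two scripted steps sketched below; "High" quality corresponds to downscale=2 for the depth maps.

import Metashape

chunk = Metashape.app.document.chunk
chunk.buildDepthMaps(downscale=2,                              # "High" quality
                     filter_mode=Metashape.ModerateFiltering)  # "Moderate" depth filtering
chunk.buildDenseCloud(point_colors=True, point_confidence=True)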
The dense points were then classified using the confidence values, which equal the number of photo depth maps that were integrated to create each point. Points with a confidence value of one were assigned the "high noise" class, and points with values of two or greater were assigned "unclassified." The final dense cloud was exported as a LAZ file with point colors, confidence, and classification.
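A scripted sketch of the classification and export step follows; the output file name is hypothetical, the class codes follow the standard LAS convention (7 for high noise, 1 for unclassified), and the behavior of assignClass on confidence-filtered points should be verified against the 1.6.4 API reference.

import Metashape

chunk = Metashape.app.document.chunk
dense = chunk.dense_cloud

dense.setConfidenceFilter(1, 1)     # show only points built from a single depth map
dense.assignClass(7)                # assumed to tag the currently visible points as high noise
dense.setConfidenceFilter(2, 255)   # points with confidence of two or greater
dense.assignClass(1)                # unclassified
dense.resetFilters()

chunk.exportPoints(path="squid5_dense_cloud.laz",   # hypothetical output name
                   source_data=Metashape.DenseCloudData,
                   format=Metashape.PointsFormatLAZ,
                   save_colors=True,
                   save_confidence=True,
                   save_classes=True)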
REFERENCES CITED
Hatcher, G.A., Warrick, J.A., Ritchie, A.C., Dailey, E.T., Zawada, D.G., Kranenburg, C., and Yates, K.K., 2020, Accurate bathymetric maps from underwater digital imagery without ground control: Frontiers in Marine Science, v. 7, article 525, https://doi.org/10.3389/fmars.2020.00525.
Over, J.R., Ritchie, A.C., Kranenburg, C.J., Brown, J.A., Buscombe, D., Noble, T., Sherwood, C.R., Warrick, J.A., and Wernette, P.A., 2021, Processing coastal imagery with Agisoft Metashape Professional Edition, version 1.6—Structure from motion workflow documentation: U.S. Geological Survey Open-File Report 2021–1039, 46 p., https://doi.org/10.3133/ofr20211039.