Images of two study sites (approximately 640 m from the shoreline and approximately 20 m apart in the alongshore direction), representing low- and high-relief seabed, were collected at 1 Hz in RAW format by a swimmer using a Sony a6300 camera fitted with a Sony SEL2418Z lens. The camera was mounted inside an underwater housing fitted with a 15.2 cm dome port and a 2.5 cm extension ring. This dome-and-extension-ring arrangement was determined through laboratory testing that identified the precise location of the image sensor within the camera and lens setup; this testing was necessary to ensure that the focal length of the camera remained constant throughout image capture.
To provide scale for the images, 15 rulers were distributed throughout both sample areas. Each ruler was affixed with two targets (one at either end, separated by a known distance), which were used as control points in the three-dimensional (3D) point clouds. To ensure sufficient coverage of the study sites, a swimmer spooled a 6 m line from a fixed central point marked by a 1.2 m vertical pole 12 cm in diameter. Three sweeps of the area were conducted with the camera positioned at three different angles to fully resolve the 3D bottom roughness.
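The ruler targets supply the length constraint that scales the otherwise dimensionless SfM model. As a minimal illustration (not the authors' actual workflow), a model-to-world scale factor can be estimated by comparing the known physical target separation against the target separation measured in model coordinates, averaged across rulers; the function and array names below are hypothetical:

```python
import numpy as np

def model_scale(model_pts_a, model_pts_b, true_length):
    """Estimate a model-to-world scale factor from paired ruler targets.

    model_pts_a, model_pts_b : (N, 3) positions of the two targets on
        each of N rulers, in unscaled model coordinates (hypothetical).
    true_length : known physical separation of the targets, in meters.
    Returns the mean scale factor (meters per model unit) across rulers.
    """
    model_lengths = np.linalg.norm(model_pts_b - model_pts_a, axis=1)
    return float(np.mean(true_length / model_lengths))

# Hypothetical example: three rulers whose targets sit 2 model units
# apart in the point cloud but 0.5 m apart in reality.
a = np.zeros((3, 3))
b = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
scale = model_scale(a, b, 0.5)  # 0.25 m per model unit
```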
Unique points in each image (key points) were identified and matched across each set of images by the structure-from-motion (SfM) software. The overarching goal was to produce several thousand key points per image that were well distributed within and throughout the image dataset. The limits on tie points and key points were set to 4,000 and 40,000 per image, respectively. To minimize errors in the photogrammetry solutions, inaccurate and poorly resolved key points positioned well above, below, or beyond the reef bathymetry were removed. Points with high reconstruction uncertainties (a non-dimensional parameter relating directional uncertainties in the point position) and high re-projection errors were also identified and removed. Through iteration, it was determined that thresholds of reconstruction uncertainty greater than or equal to 30 (which removed 8,737 points, or approximately 5.7 percent of the tie points) and re-projection error greater than or equal to 1.0 pixels (which removed an additional 5,507 points, or approximately 3.6 percent of the key points) were appropriate. The resulting key points were the basis for determining camera and scene geometries using photogrammetric principles.
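The two-threshold filtering step can be sketched as a simple mask over per-point quality metrics. This is an illustrative reimplementation, not the SfM software's internal routine; the array names and example values are hypothetical, but the thresholds mirror those reported above (reconstruction uncertainty >= 30 and re-projection error >= 1.0 pixels are discarded):

```python
import numpy as np

def filter_tie_points(uncertainty, reproj_error,
                      max_uncertainty=30.0, max_reproj_px=1.0):
    """Return a boolean mask of tie points to KEEP.

    uncertainty  : per-point reconstruction uncertainty (non-dimensional)
    reproj_error : per-point re-projection error, in pixels
    Points at or above either threshold are removed.
    """
    return (uncertainty < max_uncertainty) & (reproj_error < max_reproj_px)

# Hypothetical example: five tie points with assumed quality metrics
unc = np.array([12.0, 45.0, 29.9, 8.0, 31.0])
err = np.array([0.4, 0.2, 1.2, 0.9, 0.5])
mask = filter_tie_points(unc, err)  # only points 0 and 3 survive
```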
Optimal camera-calibration parameters were then calculated from the remaining key points, solving for the focal length (f), the optical center of the image (cx, cy), the radial distortion coefficients (k1, k2, k3), the tangential distortion coefficients (p1, p2), the pixel-aspect ratio (aspect), and the pixel skew of the lens. Because the key-point locations were also recomputed by the software during this step, an additional 5 percent of key points that did not meet the threshold requirements were identified. A subsequent camera calibration was then conducted after removing these points and re-solving for the camera-lens parameters. The mean error in the camera positions was reported to be plus or minus 0.1 m, and the RMSE of the key-point cloud was less than 40 pixels.
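The parameters listed above correspond to a Brown-Conrady-style lens model. As a hedged sketch of how such parameters map a 3D point in camera coordinates to a distorted pixel location (conventions for the tangential terms p1 and p2 vary between software packages, so this is illustrative rather than a reproduction of the SfM software's exact model; aspect and skew are assumed ideal here):

```python
def project_point(xyz, f, cx, cy, k1, k2, k3, p1, p2):
    """Project a 3D point (camera coordinates, z > 0) to pixel coordinates
    using a Brown-Conrady distortion model: radial terms k1..k3 and
    tangential terms p1, p2. Pixel-aspect ratio and skew are taken as
    1 and 0 for brevity.
    """
    x, y = xyz[0] / xyz[2], xyz[1] / xyz[2]          # normalized coords
    r2 = x * x + y * y                               # squared radius
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    y_d = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return cx + f * x_d, cy + f * y_d                # pixel coordinates

# Hypothetical example: with all distortion terms zero, the model
# reduces to a pinhole projection.
u, v = project_point((1.0, 1.0, 2.0), f=1000.0, cx=960.0, cy=540.0,
                     k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0)
```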
Following the image-alignment step, control points were added to improve the camera calibrations. The camera-calibration parameters were then re-optimized using uncertainty settings matched to the data and the camera parameters described above. Finally, dense topographic point clouds with sub-centimeter point densities were generated for the low- and high-relief sites. Both point clouds contain 5 to 10 million topographic points, and each point includes 8-bit red, green, and blue (RGB) color values sampled from the original images. Error analysis conducted using CloudCompare indicated that the model had a slightly concave form, but the displacement at the outer edge of the model, when compared to the middle of the model, was small (less than 1 cm) relative to the roughness variability at both sites. The coarse-scale cloud contains approximately 300,000 topographic points and does not have RGB color values associated with the points.
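The concavity ("doming") check described above can be illustrated by fitting a least-squares reference plane to the cloud and comparing residuals near the model edge with those near the center. This is a generic sketch of the idea, not the CloudCompare procedure the authors used; the function names and the synthetic bowl-shaped test surface are hypothetical:

```python
import numpy as np

def plane_residuals(points):
    """Fit a least-squares plane z = a*x + b*y + c to an (N, 3) cloud
    and return the signed vertical residual of each point."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return z - (a * x + b * y + c)

def edge_vs_center_bias(points, r_edge):
    """Mean residual at radius >= r_edge minus mean residual inside it.
    A bowl-shaped (concave) model yields a nonzero, systematic offset."""
    res = plane_residuals(points)
    r = np.hypot(points[:, 0], points[:, 1])
    return float(res[r >= r_edge].mean() - res[r < r_edge].mean())

# Hypothetical example: a synthetic concave surface z = 0.001 * r^2,
# where the edges sit systematically above the fitted plane.
rng = np.random.default_rng(0)
xy = rng.uniform(-6.0, 6.0, size=(2000, 2))
z = 0.001 * (xy[:, 0] ** 2 + xy[:, 1] ** 2)
bias = edge_vs_center_bias(np.column_stack([xy, z]), r_edge=5.0)
```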
Coarser-scale point-cloud data immediately surrounding the two study sites were merged with these new finer-scale data for presentation purposes. Data collection, processing, and error analysis information for the coarse-scale point-cloud data can be found in Logan and Storlazzi (2022).