Processing Images to Build a Dense Point Cloud
The following process was used to generate the dense point cloud. The images were acquired from the camera designated rgb2 during flight 1 over Coast Guard Beach, Nauset Inlet, and Nauset Marsh on 1 March 2016 (Sherwood, 2016). Thirty ground control points (GCPs) were incorporated in the photogrammetric processing. Details and locations of the images and GCPs are provided by Sherwood (2016). Processing was performed with Agisoft Photoscan Professional v. 1.2.6 build 2834 (64-bit) software on an HP Z800 workstation running the Windows 7 Enterprise SP1 operating system, with dual 6-core Xeon X5675 CPUs at 3.06 GHz and 96 GB of RAM.
Initial alignment
1) Using the “Add photos…” tool, all 1,246 photos in the directory flight_1_rgb_2 (Sherwood, 2016) were added to a single chunk. (A scripted sketch of steps 1-4 and 6 appears after this list.)
2) Using “Convert”, the coordinate system of the images (called “cameras” in Photoscan) was converted from native GPS geographic coordinates (latitude/longitude, assumed to be in the WGS84 coordinate system) to meters in NAD83 / UTM zone 19N (EPSG::26919). Camera location accuracy was left at the default 10 m (found in Reference Settings on the Reference pane).
3) “Align Photos” was selected to align all of the cameras using the following settings: Accuracy: “Medium” (which downsampled the images to one-quarter resolution by using pixels from every other row and column); Pair preselection: “Reference” (which used GPS information to identify nearby images when searching for tie points); Key point limit: 60,000; Tie point limit: 0 (unlimited). The adaptive camera model fitting option was selected. 1,127 of the cameras (images from varying viewpoints) were initially aligned.
4) “Optimize Cameras” was used to perform initial lens calibration and camera alignment. Lens-calibration parameters f, cx, cy, k1, k2, k3, b1, b2, p1, and p2 were included; higher-order parameters k4, p3, and p4 were not. These parameters define the focal length (f), the pixel coordinates of the principal point (cx, cy), affinity and skew coefficients (b1, b2), radial distortion coefficients (k1-k4), and tangential distortion coefficients (p1-p4). The software generates a metric for assessing model fit called the standard unit weight error (SUWE); values close to 1.0 are optimal. The initial SUWE was 0.165, and the overall alignment error for the cameras was 35.26 m.
5) “Optimize Cameras” was performed again, this time including parameter k4, but there was no change in the SUWE.
6) Image quality for the photos was estimated using “Estimate image quality…”. The resulting quality metrics (relative, non-dimensional measures) ranged from 0 to 1.52, with only nine images below 0.5. Five images had a quality of 0 (all were images of sand sheets), but all were aligned.
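These GUI steps can also be driven from Photoscan's built-in Python console. The following is a minimal sketch, not the script actually used: it assumes the 1.2-era API (module name, grouped fit_* arguments, and the CoordinateSystem.transform classmethod all changed or vary in later releases), and the photo path pattern is hypothetical.

    import glob
    import PhotoScan  # 1.2-era module name (renamed Metashape in later releases)

    doc = PhotoScan.app.document
    chunk = doc.addChunk()

    # 1) Add all photos from the flight directory (path pattern is hypothetical).
    chunk.addPhotos(sorted(glob.glob("flight_1_rgb_2/*.JPG")))

    # 2) Assign NAD83 / UTM zone 19N and convert the per-camera GPS reference
    #    coordinates (mirrors the GUI "Convert"; the transform classmethod is
    #    an assumption for this version).
    wgs84 = PhotoScan.CoordinateSystem("EPSG::4326")
    utm19 = PhotoScan.CoordinateSystem("EPSG::26919")
    for camera in chunk.cameras:
        if camera.reference.location:
            camera.reference.location = PhotoScan.CoordinateSystem.transform(
                camera.reference.location, wgs84, utm19)
    chunk.crs = utm19

    # 3) Match and align: Medium accuracy, Reference preselection,
    #    60,000 key points, unlimited tie points, adaptive fitting.
    chunk.matchPhotos(accuracy=PhotoScan.MediumAccuracy,
                      preselection=PhotoScan.ReferencePreselection,
                      keypoint_limit=60000, tiepoint_limit=0)
    chunk.alignCameras(adaptive_fitting=True)

    # 4) Optimize with f, cx, cy, k1-k3, b1, b2, p1, p2 (k4, p3, p4 excluded);
    #    the grouped fit_* argument names follow the 1.2-era API.
    chunk.optimizeCameras(fit_f=True, fit_cxcy=True, fit_b1=True, fit_b2=True,
                          fit_k1k2k3=True, fit_p1p2=True, fit_k4=False,
                          fit_p3=False, fit_p4=False)

    # 6) Estimate image quality; scores are stored in each photo's metadata.
    chunk.estimateImageQuality()
    low = [c.label for c in chunk.cameras
           if float(c.photo.meta["Image/Quality"]) < 0.5]
    print(len(low), "images with quality below 0.5")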
Ground control points
1) The bounding box was manually adjusted to delineate the region for further processing. The northern, southern, and western edges were based on the extent of complete photo coverage, and the eastern (seaward) boundary was placed just offshore of visible land features. The vertical extent of the bounding box was reduced to a few meters above and below the topography evident in the sparse point cloud.
2) “Detect Markers” was used to automatically identify targets in the photos, with the type set to “Cross (non-coded)” and a tolerance of 100 (on a scale of 0 to 100, with 100 being the least discriminating). A total of 54 possible markers were automatically detected, but most were identified on manual inspection as false positives and removed. All 15 of the 4-ft-square black-and-white targets deployed were automatically detected, but the other targets (black plastic trash bags and in-place features; see Sherwood, 2016) were not. The automatically generated marker labels were manually changed to match the names in the GCP location file ("CACO_ground_control_points_20160301.txt" in Sherwood, 2016) with reference to a map of the labeled GCPs.
3) “Import markers” was used to load the GCP location file ("CACO_ground_control_points_20160301.txt" in Sherwood, 2016), which assigned coordinates (northing, easting, and elevation in meters; UTM zone 19N, NAD83 horizontal datum and NAVD88 vertical datum) from the file to the detected markers and placed new markers for the GCPs that had not been auto-detected.
4) The locations of all markers were established in every image in which they appeared, except when the image of the target was too poor for the reference point on the target to be precisely determined. This was a manual process aided by the ability of the software to identify the images in which each marker appeared and to maintain a centered view at a constant zoom level across all of those images. Each of those images was inspected to verify and adjust the precise marker placement. Manual placement was a painstaking and somewhat subjective process that introduced slight uncertainties into the GCP locations in the images. However, our experience indicates that adding GCPs and pinpointing targets in as many images as possible improves the final alignment of the point cloud.
5) Two “a posteriori” GCPs were added in the marsh, as discussed in the process steps for the GCPs in Sherwood (2016). These were points placed on features visible in several images. Horizontal coordinates for these points (named "fake1" and "fake2") were determined by constructing an orthophoto mosaic from a preliminary version of the dense point cloud and extracting coordinates for the features from the mosaic. Vertical coordinates for these locations were determined from LiDAR data in the 2013-2014 U.S. Geological Survey CMGP LiDAR: Post Sandy (MA, NH, RI) dataset.
6) The camera calibration was optimized using “Optimize Cameras” with all of the lens-calibration coefficients except p3 and p4, and a tie-point accuracy of 1 pixel (set in “Reference Settings”), as sketched below.
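A hedged sketch of how the marker detection, coordinate import, and re-optimization could be scripted follows. The loadReference column and delimiter arguments are assumptions that vary among Photoscan versions, and the manual false-positive cleanup and per-image marker placement of steps 2 and 4 have no scripted equivalent.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    # 2) Detect non-coded cross targets at the least-discriminating tolerance.
    chunk.detectMarkers(type=PhotoScan.TargetType.CrossTarget, tolerance=100)

    # 3) Load the GCP coordinate file (name from Sherwood, 2016); the format,
    #    columns, and delimiter arguments are assumptions for this version.
    chunk.loadReference("CACO_ground_control_points_20160301.txt", "csv",
                        columns="nxyz", delimiter=",")

    # 6) Re-optimize with all coefficients except p3 and p4, at a
    #    tie-point accuracy of 1 pixel.
    chunk.tiepoint_accuracy = 1.0
    chunk.optimizeCameras(fit_f=True, fit_cxcy=True, fit_b1=True, fit_b2=True,
                          fit_k1k2k3=True, fit_p1p2=True, fit_k4=True,
                          fit_p3=False, fit_p4=False)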
Refinement of the sparse point cloud
The sparse point cloud representing tie points among the images consisted of approximately 1.5 million points. An iterative method developed by Tommy Noble (pers. communication, 2016) was used to identify and remove lower-quality tie points. This method involved using “Gradual Selection” of tie points, with the following criteria and target values.
* Reconstruction uncertainty – Quality based on the geometry of the reconstruction; a dimensionless ratio of the maximum to minimum axes of the three-dimensional error ellipsoid describing reconstruction uncertainty based on ray triangulation (target was 10).
* Projection accuracy – Quality of pixel matching among images; a weighted ranking (1 is best, larger numbers are worse) based on the size and sharpness of tie points (target was 3).
* Reprojection error – Estimate of residual error in tie-point location; a measure (in pixels) of the precision of calculated tie-point locations based on the geometry (target was 0.3 pixels).
“Gradual Selection” was used with the threshold set to the target value, but if more than about 20% of the points were flagged at that setting, the threshold was adjusted to select only about 10% of the points. (The total number of points and the number of flagged points were shown on screen as selections were made.) Selected points were deleted, and camera settings were optimized before the next iteration. After each iteration, the improvement in accuracy was assessed by checking the marker error for ground control points and the standard unit weight error (SUWE). The SUWE is reported in the console pane and, ideally, should be close to 1 (dimensionless; Tommy Noble, pers. communication, 2016). This procedure was repeated three times for each criterion listed above, in order; i.e., points were selected based on reconstruction uncertainty three times before selecting by projection accuracy, and so on. A final optimization was made after adjusting the tie-point accuracy to one-half pixel.
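The iterative procedure can be expressed compactly in script form. The sketch below assumes the 1.2-era Python API (PointCloud.Filter and the grouped fit_* optimization arguments); the threshold-raising loop is one possible implementation of the "select only about 10%" rule described above, not the exact manual adjustment used.

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    # Same lens-calibration settings used in the optimizations above
    # (1.2-era grouped argument names).
    OPTIMIZE = dict(fit_f=True, fit_cxcy=True, fit_b1=True, fit_b2=True,
                    fit_k1k2k3=True, fit_p1p2=True, fit_k4=True,
                    fit_p3=False, fit_p4=False)

    # (criterion, target threshold), applied in this order.
    CRITERIA = [(PhotoScan.PointCloud.Filter.ReconstructionUncertainty, 10.0),
                (PhotoScan.PointCloud.Filter.ProjectionAccuracy, 3.0),
                (PhotoScan.PointCloud.Filter.ReprojectionError, 0.3)]

    def select_capped(chunk, criterion, target, cap=0.20, fallback=0.10):
        # Select tie points exceeding the target; if more than ~20% are
        # flagged, raise the threshold until only ~10% are selected.
        f = PhotoScan.PointCloud.Filter()
        f.init(chunk, criterion=criterion)
        points = chunk.point_cloud.points
        threshold = target
        f.selectPoints(threshold)
        if sum(p.selected for p in points) > cap * len(points):
            while sum(p.selected for p in points) > fallback * len(points):
                threshold *= 1.1
                f.selectPoints(threshold)

    for criterion, target in CRITERIA:
        for _ in range(3):            # three passes per criterion, in order
            select_capped(chunk, criterion, target)
            chunk.point_cloud.removeSelectedPoints()
            chunk.optimizeCameras(**OPTIMIZE)

    chunk.tiepoint_accuracy = 0.5     # final optimization at one-half pixel
    chunk.optimizeCameras(**OPTIMIZE)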
At the end of this procedure, approximately one-third of the points had been removed, leaving 967,313 tie points; two photos were eliminated automatically because they had too few (fewer than about 200) tie points. The following values for the target metrics were obtained:
* Reconstruction uncertainty - 10 (no units)
* Projection accuracy - 9 (no units)
* Reprojection error - 0.3 (pixels)
The marker error for the ground control points was reduced to 0.021 m (0.188 pixels), and the SUWE increased from 0.146 to 0.275.
Finally, the sparse point cloud was hand-edited to remove clearly erroneous points that were offshore or significantly above or below ground level.
Dense point cloud
“Build Dense Cloud” was invoked with “High” quality and “Aggressive” depth filtering to generate the dense point cloud, and "Export points" was used to export it in .LAZ format. The resulting dense point cloud, containing 434,096,824 points, is the data product distributed here.
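Scripted, this final step might look like the following (a sketch under the same 1.2-era API assumption; the output file name is hypothetical, and the export format argument varies among versions):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                          filter=PhotoScan.AggressiveFiltering)
    chunk.exportPoints("flight_1_rgb_2_dense.laz",  # hypothetical output name
                       source=PhotoScan.DenseCloudData, format="laz")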
Estimated uncertainty of the point cloud
Uncertainty in the location of points in the dense point cloud is, in general, the quadrature sum of (a) uncertainty in the locations of the ground control points (GCPs) to which the point cloud is referenced, (b) uncertainty in the geometric reconstruction represented by the sparse point cloud, and (c) interpolation errors associated with placing the dense-cloud points in the geometric reconstruction. Uncertainty in the geometric reconstruction (b) includes uncertainty in the locations of tie points, camera locations, camera look angles, and camera lens calibrations, assuming the GCP locations are exact. Interpolation errors (c) arise when the locations of dense-cloud points between sparse-cloud points differ from the real-world locations. The Photoscan software does not provide a means to estimate (b) or (c), so we inferred that uncertainty from root-mean-square (RMS) errors in the reconstructed locations of ground control points, combined with the resolution of the images and the estimated reprojection error, all as reported by Photoscan. This is likely an underestimate of the uncertainty, because the reconstruction was optimized to match the ground control points.
We estimated (a), the horizontal and vertical precision of the surveyed GCP locations, as the RMS error for repeat measurements of survey reference marks taken at the beginning and end of the survey day, plus the reported error in the Online Positioning User Service (OPUS) solution for the primary reference point used to locate the survey. The combined horizontal and vertical uncertainties for surveyed locations of the GCPs were +/- 0.027 m and +/- 0.017 m, respectively. Our substitutes for (b) were the horizontal and vertical RMS errors associated with reconstruction of the GCP marker locations reported by Photoscan (+/- 0.019 m and +/- 0.009 m, respectively). In lieu of values for (c), we combined the unscaled reprojection error reported by Photoscan (0.3 pixels) with the resolution of the images (1 pixel equaled approximately 0.04 m in nadir views) to derive a reprojection error of 0.012 m.
We combined these uncertainties in quadrature to establish minimum estimates of the horizontal and vertical uncertainties in real-world coordinates of the reconstructed points as +/- 0.035 m and +/- 0.022 m, respectively.
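The quadrature combination can be checked directly from the values above; a minimal worked example:

    from math import sqrt

    pixel_m = 0.04            # approximate ground footprint of one pixel (nadir)
    reproj_m = 0.3 * pixel_m  # (c) 0.3-pixel reprojection error -> 0.012 m

    # Quadrature sum of (a) GCP survey, (b) reconstruction, and (c) reprojection:
    horiz = sqrt(0.027**2 + 0.019**2 + reproj_m**2)  # ~0.0351 m
    vert = sqrt(0.017**2 + 0.009**2 + reproj_m**2)   # ~0.0227 m
    print("horizontal %.4f m, vertical %.4f m" % (horiz, vert))

These results are consistent, to within rounding, with the +/- 0.035 m and +/- 0.022 m values reported above.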
One check on these values comes from a comparison of 144 field measurements against the digital elevation model (DEM) generated from the point cloud. The field measurements are provided as transect points in Sherwood (2016); the DEM is not provided in this release. The independent elevation measurements were made with survey-grade GPS and were compared with values extracted from the closest point in the DEM. The mean vertical difference was 0.001 m, the RMS difference was 0.06 m, and the minimum and maximum differences were -0.23 m and 0.16 m.
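For completeness, these difference statistics could be computed as in the following sketch; the file names and column layout are hypothetical, and the sign convention (DEM minus GPS) is an assumption.

    import numpy as np

    # Hypothetical inputs: GPS-surveyed elevations and the elevation of the
    # closest DEM point for each of the 144 transect points.
    gps_z = np.loadtxt("transect_points.txt", usecols=(2,))  # hypothetical layout
    dem_z = np.loadtxt("dem_nearest.txt")

    diff = dem_z - gps_z  # assumed sign convention: DEM minus GPS
    print("mean %.3f m, RMS %.2f m, min %.2f m, max %.2f m"
          % (diff.mean(), np.sqrt(np.mean(diff**2)), diff.min(), diff.max()))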