Point cloud data of Lake Tahoe near Dollar Point


Frequently anticipated questions:


What does this data set describe?

Title: Point cloud data of Lake Tahoe near Dollar Point
Abstract:
Three-dimensional point clouds (LAZ format) were developed from underwater images collected near Dollar Point in Lake Tahoe, California, and processed using Structure-from-Motion (SfM) photogrammetry techniques. Point cloud data include x,y,z positions, RGB colors, Metashape-computed confidence values, and a two-class classification ('unclassified' and 'high noise') derived from the confidence values. LAZ is an open format developed for the efficient use of point cloud lidar data. A description of the LAZ format and links to software tools for using LAZ files are provided at the USGS website: https://www.usgs.gov/news/3d-elevation-program-distributing-lidar-data-laz-format
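The LAZ tiles can be read directly with open software; as a minimal illustration (not part of the original data release), the following Python sketch uses the laspy library, assuming laspy 2.x with a LAZ backend (for example, lazrs) is installed. The tile name is taken from the distribution section of this record.

  import laspy

  # Read one tile; coordinates are NAD83(2011) UTM zone 10N eastings/northings
  # and NAVD88 elevations, in meters.
  las = laspy.read("SQUID5_Tahoe_2021_PointCloud-A-1.laz")

  print(las.header.point_count)                  # number of points in the tile
  print(list(las.point_format.dimension_names))  # x, y, z, RGB, classification, extras

  x, y, z = las.x, las.y, las.z
  red, green, blue = las.red, las.green, las.blue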
Supplemental_Information:
Additional information about the field activity from which these data were derived is available online at: https://cmgds.marine.usgs.gov/fan_info.php?fan=2021-607-FA
Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
  1. How might this data set be cited?
    Warrick, Jonathan A., Hatcher, Gerald A., and Kranenburg, Christine J., 20211217, Point cloud data of Lake Tahoe near Dollar Point: data release DOI: 10.5066/P9934I6U, U.S. Geological Survey - Pacific Coastal and Marine Science Center, Santa Cruz, California.

    Online Links:
      • https://doi.org/10.5066/P9934I6U

    This is part of the following larger work.

    Warrick, Jonathan A., Hatcher, Gerald A., and Kranenburg, Christine J., 2021, Point clouds, bathymetric maps, and orthoimagery generated from overlapping lakebed images acquired with the SQUID-5 system near Dollar Point, Lake Tahoe, CA, March 2021: data release 10.5066/P9934I6U, U.S. Geological Survey - Pacific Coastal and Marine Science Center, Santa Cruz, California.

    Online Links:
      • https://doi.org/10.5066/P9934I6U

  2. What geographic area does the data set cover?
    West_Bounding_Coordinate: -120.10407
    East_Bounding_Coordinate: -120.09931
    North_Bounding_Coordinate: 39.18048
    South_Bounding_Coordinate: 39.17705
  3. What does it look like?
    Tahoe_2021_Survey_AllCams_REVISED_sml.gif (gif)
    Full resolution sample view of larger point cloud data set.
  4. Does the data set describe conditions during a particular time period?
    Beginning_Date: 10-Mar-2021
    Ending_Date: 11-Mar-2021
    Currentness_Reference:
    ground condition
  5. What is the general form of this data set?
    Geospatial_Data_Presentation_Form: vector digital data
  6. How does the data set represent geographic features?
    1. How are geographic features stored in the data set?
      This is a Point data set. It contains the following vector data types (SDTS terminology):
      • Entity point
    2. What coordinate system is used to represent geographic features?
      Grid_Coordinate_System_Name: Universal Transverse Mercator
      Universal_Transverse_Mercator:
      UTM_Zone_Number: 10N
      Transverse_Mercator:
      Scale_Factor_at_Central_Meridian: 0.9996
      Longitude_of_Central_Meridian: -123
      Latitude_of_Projection_Origin: 0.0
      False_Easting: 500000.0
      False_Northing: 0.0
      Planar coordinates are encoded using coordinate pair
      Abscissae (x-coordinates) are specified to the nearest 0.001
      Ordinates (y-coordinates) are specified to the nearest 0.001
      Planar coordinates are specified in Meters
      The horizontal datum used is North American Datum of 1983 (2011).
      The ellipsoid used is GRS 1980.
      The semi-major axis of the ellipsoid used is 6378137.000000.
      The flattening of the ellipsoid used is 1/298.257222101.
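      As an informal illustration (not part of the original record), the bounding coordinates above can be converted to this projected coordinate system with the pyproj Python library. The EPSG codes used below (6318 for NAD83(2011) geographic coordinates, 6339 for NAD83(2011) / UTM zone 10N) are assumptions that should be verified before use in any production workflow.

        from pyproj import Transformer

        # Geographic NAD83(2011) longitude/latitude to UTM zone 10N easting/northing.
        to_utm = Transformer.from_crs("EPSG:6318", "EPSG:6339", always_xy=True)

        # Southwest corner of the bounding box given in this metadata record.
        easting, northing = to_utm.transform(-120.10407, 39.17705)
        print(easting, northing)   # meters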
  7. How does the data set describe geographic features?
    Entity_and_Attribute_Overview:
    Points represent three-dimensional locations of the mapped lakebed with horizontal position in meters projected in NAD83(2011) UTM Zone 10N and elevation in meters relative to NAVD88. Points additionally have values for 8-bit RGB color derived from the color-corrected photographs, Metashape confidence values, and a classification of either high noise or unclassified.
    Entity_and_Attribute_Detail_Citation: U.S. Geological Survey
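    As an informal illustration (not part of the original record), the classification can be used to drop the "high noise" points from a tile with the laspy Python library. The numeric class codes (1 for unclassified and 18 for high noise, following the ASPRS LAS 1.4 convention) and the availability of a LAZ write backend are assumptions; the Metashape confidence values are stored as an extra point dimension whose exact name is not stated in this record and can be listed with las.point_format.dimension_names.

      import laspy
      import numpy as np

      las = laspy.read("SQUID5_Tahoe_2021_PointCloud-A-1.laz")

      classification = np.asarray(las.classification)
      keep = classification != 18          # assumed code for the "high noise" class
      print(f"{keep.sum()} of {keep.size} points retained")

      # Write the retained points to a new LAZ file, reusing the original header.
      cleaned = laspy.LasData(las.header)
      cleaned.points = las.points[keep]
      cleaned.write("SQUID5_Tahoe_2021_PointCloud-A-1_unclassified-only.laz")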

Who produced the data set?

  1. Who are the originators of the data set? (may include formal authors, digital compilers, and editors)
    • Jonathan A. Warrick
    • Gerald A. Hatcher
    • Christine J. Kranenburg
  2. Who also contributed to the data set?
  3. To whom should users address questions about the data?
    U.S. Geological Survey, Pacific Coastal and Marine Science Center
    Attn: PCMSC Science Data Coordinator
    2885 Mission Street
    Santa Cruz, CA

    831-460-4747 (voice)
    pcmsc_data@usgs.gov

Why was the data set created?

The underwater images and associated location data were collected to assess the accuracy, precision, and effectiveness of the new SQUID-5 camera platform to collect contiguous imagery for use in Structure-from-Motion (SfM) data processing of an area similar in size to an individual coral reef.

How was the data set created?

  1. From what previous works were the data drawn?
    raw images (source 1 of 1)
    Hatcher, Gerald A., Warrick, Jonathan A., Kranenburg, Christine J., and Dal Ferro, Peter, 2021, Overlapping lakebed images and associated GNSS locations acquired near Dollar Point, Lake Tahoe, CA, March 2021.

    Online Links:
      • https://doi.org/10.5066/P9V44ZYS

    Other_Citation_Details:
    Hatcher, G.A., Warrick, J.A., Kranenburg, C.J., and Dal Ferro, P., 2021, Overlapping lakebed images and associated GNSS locations acquired near Dollar Point, Lake Tahoe, CA, March 2021: U.S. Geological Survey data release, https://doi.org/10.5066/P9V44ZYS.
    Type_of_Source_Media: digital images
    Source_Contribution:
    raw images to which Structure-from-Motion (SfM) techniques were applied
  2. How were the data generated, processed, and modified?
    Date: 01-Sep-2021 (process 1 of 2)
    PHOTOGRAPH COLOR CORRECTION Because of the strong color modifications caused by light absorption and scattering in underwater photographs, a color correction process was conducted on the raw images. The color correction was a twofold process. First, images were corrected for the high absorption (and low color values) in the red band using the color balancing techniques of Ancuti and others (2017). For this, the red channel was modified using the color compensation equations of Ancuti and others (2017, see equation 4 on page 383), which use both image-wide and pixel-by-pixel comparisons of red brightness with respect to green brightness. After compensation, the images were white balanced using the "greyworld" assumption that is summarized in Ancuti and others (2017). Combined, these techniques ensured that each color band histogram was centered on similar values and had a similar spread of values. The remaining techniques of Ancuti and others (2017), which include sharpening and a multi-product fusion, were not employed. The resulting images utilized only about a quarter to a half of the complete 0-255 dynamic range of the three color bands. Thus, the brightness values of each band were stretched linearly over the complete range while allowing the brightest and darkest 0.05 percent of the original image pixels (that is, 2506 of the 5.013 million pixels) to be excluded from the histogram stretch. This final element was included to ensure that light or dark spots in the photos, which often resulted from water column particles or image noise, did not exert undue control on the final brightness values. Final corrected images were output with the same file names and file types as the originals to make replacement within an SfM photogrammetry project easy. An illustrative sketch of these color-correction steps is provided below.
    REFERENCE CITED Ancuti, C.O., Ancuti, C., De Vleeschouwer, C., and Bekaert, P., 2017, Color balance and fusion for underwater image enhancement: IEEE Transactions on Image Processing, v. 27, p. 379-393, https://doi.org/10.1109/TIP.2017.2759252.
    Data sources used in this process:
    • raw images
    Data sources produced in this process:
    • corrected images
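    The following Python/NumPy sketch illustrates the three color-correction steps described in this process step: red-channel compensation following the form of Ancuti and others (2017, equation 4), a gray-world white balance, and a linear stretch that excludes extreme pixel values. It is an illustration only, not the USGS implementation; the compensation factor alpha, the per-band clipping fractions, and the imageio-based file handling are assumptions.

      import numpy as np
      import imageio.v3 as iio

      def color_correct(path_in, path_out, alpha=1.0):
          img = iio.imread(path_in).astype(np.float64) / 255.0   # RGB in [0, 1]
          r, g, b = img[..., 0], img[..., 1], img[..., 2]

          # 1. Compensate the strongly absorbed red band against green, using both
          #    image-wide means and a pixel-by-pixel term (form of Ancuti et al., eq. 4).
          r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
          img = np.stack([r_comp, g, b], axis=-1)

          # 2. Gray-world white balance: scale each band toward the overall mean.
          img *= img.mean() / img.mean(axis=(0, 1))

          # 3. Linear stretch over the full range, excluding the brightest and
          #    darkest 0.05 percent of pixels from the stretch limits.
          lo = np.percentile(img, 0.05, axis=(0, 1))
          hi = np.percentile(img, 99.95, axis=(0, 1))
          img = np.clip((img - lo) / (hi - lo), 0.0, 1.0)

          iio.imwrite(path_out, (img * 255).astype(np.uint8))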
    Date: 01-Sep-2021 (process 2 of 2)
    SfM PHOTOGRAMMETRY Photographic and position data generated by the SQUID-5 system were processed using Structure-from-Motion (SfM) photogrammetry techniques that generally follow the workflow outlined by Hatcher and others (2020). These techniques are detailed here and include specific references to parameter settings and processing workflow. The primary software used for SfM processing was Agisoft Metashape Professional, version 1.6.4, build 10928, which will be referred to as "Metashape" in the discussion herein. For reference, the processing was conducted on a computer system with an Intel Xeon CPU E5-2687W v4 at 3.00 GHz, 256 GB of installed RAM, two GeForce GTX 1080 Ti GPUs, and the Windows 10 Pro (64-bit) operating system. First, the raw photographs collected during JD069 and JD070 were added to a new project in Metashape. Raw photos were used instead of the color-corrected photos, owing to their larger dynamic range, which generally resulted in more SfM tie points. The photos were derived from four cameras on the SQUID-5 system, so each camera was assigned a unique camera calibration group in the Camera Calibration settings. Within the Camera Calibration settings, the camera parameters were also entered as 0.00345 x 0.00345 mm pixel sizes for all camera sensors, 8 mm focal length for the central camera (Cam13), and 6 mm focal lengths for the remaining cameras (Cam30, Cam39, Cam82). These different focal lengths represented the different lenses chosen for each camera. Additionally, the cameras required offsets to transform the GNSS positions to each camera's entrance pupil (that is, optical center). Initial measurements of these offsets were obtained using a separate SfM technique, outlined in Hatcher and others (2020), which found the offsets to be:
    Camera    X (m)     Y (m)     Z (m)
    Cam13     0.036    -0.005     0.836
    Cam30    -0.294    -0.077     0.921
    Cam39     0.279    -0.015     0.926
    Cam82    -0.016    -0.616     0.739
    In this table, X and Y are the offsets parallel to the camera sensor, and Z is the offset normal to the sensor. The accuracy settings were chosen to be 0.01 m for Cam13 and 0.025 m for the remaining three cameras. Lastly, these offsets were allowed to be adjusted using the "Adjust GPS/INS offset" option, because slight camera shifts may occur with each rebuild and use of the SQUID-5 system. The SQUID-5 GNSS antenna positions were then imported into the project and matched with each photo by time. The easting and northing (in meters) were obtained from the NAD83 UTM Zone 10N data, and altitudes were obtained from the NAD83 ellipsoidal heights (in meters). These heights were converted to NAVD88 orthometric heights in Metashape using the "Conversion" tool. Prior to aligning the data, the Metashape reference settings were assigned. The coordinate system was "NAD83(2011) / UTM zone 10N". The camera accuracy was set to 0.02 m in the horizontal dimensions and 0.06 m in the vertical, following an examination of the source GNSS data. Tie point accuracy was set at 1.0 pixel. The remaining reference settings were not relevant, because there were no camera orientation measurements, marker points, or scale bars in the SfM project.
    The data were then aligned in Metashape using the "Align Photos" workflow tool. Settings for the alignment included "High" accuracy and "Reference" preselection using the "Source" information. This latter setting allowed the camera position information to assist with the alignment process. Additionally, the key point limit and tie point limit were both assigned a value of zero, which allows for the generation of the maximum number of points for each photo. Lastly, neither the "Guided image matching" nor the "Adaptive camera model fitting" option was used. This process resulted in over 94.5 billion tie points. The camera positional errors were reported to be 0.0066 m, 0.0097 m, and 0.0305 m in the east, north, and altitude directions, respectively. Thus, the total positional error was 0.0326 m. To improve upon the camera calibration parameters and computed camera positions, an optimization process was conducted that was consistent with the techniques of Hatcher and others (2020), which are based on the general principles provided in Over and others (2021). First, a duplicate of the aligned data was created using the "Duplicate Chunk" tool, in case the optimization process eliminated too much data. Within the new chunk, the least valid tie points were removed using the "Gradual Selection" tools. As noted in Hatcher and others (2020), these tools are used less aggressively for the underwater imagery of SQUID-5 than is common for aerial imagery, owing to differences in photo quality. First, all points with a "Reconstruction Uncertainty" greater than 20 were selected and deleted. Then, all points with a "Projection Accuracy" greater than 8 were selected and deleted. The camera parameters were then recalibrated with the "Optimize Cameras" tool. Throughout this process the only camera parameters that were adjusted were f, k1, k2, k3, cx, cy, p1, and p2. Once the camera parameters were adjusted, all points with "Reprojection Errors" greater than 0.4 were deleted, and the "Optimize Cameras" tool was used one final time (a scripted sketch of this gradual-selection and optimization sequence is provided at the end of this process step). This optimization process resulted in slightly over 62.5 billion tie points, a reduction of roughly one-third of the original tie points. The camera positional errors were reported to be 0.0065 m, 0.0094 m, and 0.0302 m in the east, north, and altitude directions, respectively, and the total positional error was 0.0322 m. Additionally, all original photos were aligned through this process. The final computed arm offsets were found to be:
    Camera    X (m)     Y (m)     Z (m)
    Cam13     0.035    -0.004     0.847
    Cam30    -0.292    -0.077     0.932
    Cam39     0.276    -0.015     0.937
    Cam82    -0.017    -0.607     0.750
    Following the alignment and optimization of the SQUID-5 data, mapped SfM products were generated in Metashape. For these steps, the original raw photographs were replaced with the color-corrected photos by resetting each photo path from the raw photo to the corresponding color-corrected photo. First, a three-dimensional dense point cloud was generated using the "Build Dense Cloud" workflow tool. This was run with the "High" quality setting and "Moderate" depth filtering, and the tool was set to calculate both point colors and confidence. The resulting dense cloud contained over 3.6 billion points over the 0.0774-square-kilometer survey area, or roughly 46,500 points per square meter (4.65 points per square centimeter). The dense points were classified using the confidence values, which are equivalent to the number of photo depth maps that were integrated to make each point. Values of one were assigned "high noise", and values of two and greater were assigned "unclassified." The final dense cloud was divided into 9 tiles, arranged as a 3-by-3 grid, and exported with point colors, confidence, and classification as LAZ files. Each tile measures 150 meters on a side and is labeled SQUID5_Tahoe_2021_PointCloud-col-row.laz, where col represents the column name and can have a value of A, B, or C, and row is the row number and can have a value of 1, 2, or 3. Note that the C-1 tile is empty, resulting in 8 point cloud data files.
    REFERENCES CITED
    Hatcher, G.A., Warrick, J.A., Ritchie, A.C., Dailey, E.T., Zawada, D.G., Kranenburg, C., and Yates, K.K., 2020, Accurate bathymetric maps from underwater digital imagery without ground control: Frontiers in Marine Science, v. 7, article 525, https://doi.org/10.3389/fmars.2020.00525.
    Over, J.R., Ritchie, A.C., Kranenburg, C.J., Brown, J.A., Buscombe, D., Noble, T., Sherwood, C.R., Warrick, J.A., and Wernette, P.A., 2021, Processing coastal imagery with Agisoft Metashape Professional Edition, version 1.6—Structure from motion workflow documentation: U.S. Geological Survey Open-File Report 2021–1039, 46 p., https://doi.org/10.3133/ofr20211039.
    Data sources used in this process:
    • corrected images
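    The gradual-selection and camera-optimization sequence described above can be scripted. The following sketch is written against the Agisoft Metashape 1.6 Python API (Metashape.PointCloud.Filter and Chunk.optimizeCameras) and uses the thresholds stated in this process step; it is an illustrative outline, not the original USGS script, and the API calls should be verified against the Metashape version in use.

      import Metashape

      # Work on a duplicate of the aligned chunk, as in the workflow above.
      chunk = Metashape.app.document.chunk.copy()

      def remove_above(criterion, threshold):
          # Select and delete sparse (tie) points exceeding a gradual-selection threshold.
          f = Metashape.PointCloud.Filter()
          f.init(chunk, criterion=criterion)
          f.removePoints(threshold)

      def optimize():
          # Only f, cx, cy, k1-k3, p1, and p2 were adjusted in this workflow.
          chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                                fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=False,
                                fit_p1=True, fit_p2=True, fit_b1=False, fit_b2=False)

      remove_above(Metashape.PointCloud.Filter.ReconstructionUncertainty, 20)
      remove_above(Metashape.PointCloud.Filter.ProjectionAccuracy, 8)
      optimize()
      remove_above(Metashape.PointCloud.Filter.ReprojectionError, 0.4)
      optimize()

      # Per-axis camera errors combine as a root sum of squares, for example
      # (0.0065**2 + 0.0094**2 + 0.0302**2) ** 0.5, about 0.032 m, which is
      # consistent with the total positional error reported above.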
  3. What similar or related data should the user be aware of?
    Hatcher, Gerald A., Warrick, Jonathan A., Ritchie, Andrew C., Dailey, Evan T., Zawada, David G., Kranenburg, Christine, and Yates, Kimberly K., 2020, Accurate bathymetric maps from underwater digital imagery without ground control.

    Online Links:
      • https://doi.org/10.3389/fmars.2020.00525

    Other_Citation_Details:
    Hatcher, G.A., Warrick, J.A., Ritchie, A.C., Dailey, E.T., Zawada, D.G., Kranenburg, C., and Yates, K.K., 2020, Accurate bathymetric maps from underwater digital imagery without ground control: Frontiers in Marine Science, v. 7, article 525, https://doi.org/10.3389/fmars.2020.00525.

How reliable are the data; what problems remain in the data set?

  1. How well have the observations been checked?
    The accuracy of the position data used for SfM data processing is based on the accuracy of the post-processed GNSS navigation data, which produced a 1-Hz vehicle trajectory with an estimated 2-sigma accuracy of 10 cm horizontal and 15 cm vertical. The horizontal and vertical accuracies of the surface models generated by SfM were assessed from the camera positional errors and found to be less than 1 cm in the horizontal dimensions and less than 4 cm in the vertical.
  2. How accurate are the geographic locations?
    Previous SfM-based measurements of the field-based Sediment Elevation Table (SET) stations from USGS field sites in the Florida Keys were within 3 cm of the total uncertainty of the field-based GPS measurements. Additionally, the average horizontal scaling of the models was found to be between 0.016 percent and 0.024 percent of water depth. No independent assessment of horizontal accuracy was possible from the Lake Tahoe field site.
  3. How accurate are the heights or depths?
    Previous SfM-based measurements of the field-based Sediment Elevation Table (SET) stations from USGS field sites in the Florida Keys were within 3 cm of the total uncertainty of the field-based GPS measurements. The average vertical scaling of the models is between 0.016 percent and 0.024 percent of water depth. No independent assessment of vertical accuracy was possible from the Lake Tahoe field site.
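    For illustration, the depth-proportional scaling uncertainty quoted above can be expressed in absolute terms; the 10-meter depth used in the sketch below is a hypothetical value, not a depth from this survey.

      depth_m = 10.0                        # hypothetical water depth, meters
      low = 0.00016 * depth_m               # 0.016 percent of depth
      high = 0.00024 * depth_m              # 0.024 percent of depth
      print(f"{low * 1000:.1f}-{high * 1000:.1f} mm")   # 1.6-2.4 mm at 10 m depth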
  4. Where are the gaps in the data? What is missing?
    Dataset is considered complete for the information presented, as described in the abstract. Users are advised to read the rest of the metadata record carefully for additional details.
  5. How consistent are the relationships among the observations, including topology?
    All data fall within expected ranges.

How can someone get a copy of the data set?

Are there legal restrictions on access or use of the data?
Access_Constraints: None
Use_Constraints:
USGS-authored or produced data and information are in the public domain from the U.S. Government and are freely redistributable with proper metadata and source attribution. Please recognize and acknowledge the U.S. Geological Survey as the originator of the dataset and in products derived from these data. This information is not intended for navigation purposes.
  1. Who distributes the data set? (Distributor 1 of 1)
    U.S. Geological Survey - CMGDS
    2885 Mission Street
    Santa Cruz, CA

    1-831-427-4747 (voice)
    pcmsc_data@usgs.gov
  2. What's the catalog number I need to order this data set?
    These data are available in the compressed LAZ format for eight blocks (also referred to as tiles) of the survey area. Each tile measures 150 meters on a side and is labeled SQUID5_Tahoe_2021_PointCloud-col-row.laz, where col represents the column name and can have a value of A, B, or C, and row is the row number and can have a value of 1, 2, or 3. Note that the C-1 tile is empty, resulting in 8 point cloud data files.
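    As an informal illustration, the eight expected file names can be generated from this naming convention in Python, omitting the empty C-1 tile:

      tiles = [f"SQUID5_Tahoe_2021_PointCloud-{col}-{row}.laz"
               for col in "ABC" for row in (1, 2, 3)
               if not (col == "C" and row == 1)]
      print(len(tiles))   # 8 files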
  3. What legal disclaimers am I supposed to read?
    Unless otherwise stated, all data, metadata and related materials are considered to satisfy the quality standards relative to the purpose for which the data were collected. Although these data and associated metadata have been reviewed for accuracy and completeness and approved for release by the U.S. Geological Survey (USGS), no warranty expressed or implied is made regarding the display or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty.
  4. How can I download or order the data?
    • Availability in digital form:
      Data format: LAZ is an open-source, directly accessible, ready-to-use format that also provides compression. Individual LAZ files available for download range in size from 1.0 GB to 11.0 GB. (format LAZ; Size: 11000)
      Network links: https://doi.org/10.5066/P9934I6U
    • Cost to order the data: none


Who wrote the metadata?

Dates:
Last modified: 17-Dec-2021
Metadata author:
U.S. Geological Survey, Pacific Coastal and Marine Science Center
Attn: PCMSC Science Data Coordinator
2885 Mission Street
Santa Cruz, CA

831-460-4747 (voice)
pcmsc_data@usgs.gov
Metadata standard:
Content Standard for Digital Geospatial Metadata (FGDC-STD-001-1998)

This page is <https://cmgds.marine.usgs.gov/catalog/pcmsc/DataReleases/CMGDS_DR_tool/DR_P9934I6U/SQUID5_Tahoe_2021_PointCloud_metadata.faq.html>
Generated by mp version 2.9.50 on Fri Dec 17 17:57:05 2021