Labeled satellite imagery for training machine learning semantic segmentation models of coastal shorelines.


Frequently anticipated questions:


What does this data set describe?

Title:
Labeled satellite imagery for training machine learning semantic segmentation models of coastal shorelines.
Abstract:
A dataset of Landsat, Sentinel, and PlanetScope satellite images of coastal shoreline regions, and corresponding semantic segmentations. The dataset consists of folders of images and label images. Label images are images in which each pixel is given a discrete class by a human annotator, among the following classes: a) water, b) whitewater/surf, c) sediment, and d) other. These data are intended only to be used as a training and validation dataset for a machine learning based image segmentation model that is specifically designed for the task of coastal shoreline satellite image semantic segmentation.
Supplemental_Information:
This data release was funded by the USGS Coastal and Marine Hazards and Resources Program. Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government. This data release contains modified PlanetScope imagery, provided under the NASA (National Aeronautics and Space Administration) CSDA (Commercial Smallsat Data Acquisition) program under the standard Scientific Use License available at https://cdn.earthdata.nasa.gov/conduit/upload/14226/PlanetEULA042220.pdf, and the End User License Agreement available at https://earthdata.nasa.gov/s3fs-public/2022-02/Planet_Expanded_EULA_06-21.pdf. This license permits redistribution of imagery in significantly modified form. We provide only the visible (R, G, and B) bands of small sub-portions of downloaded tiles, in PNG format. As such, the original imagery (multispectral scenes in GeoTIFF format) has been cropped, had its geospatial information removed, and been re-encoded into 8-bit PNG format.
  1. How might this data set be cited?
    Buscombe, Daniel, Lundine, Mark A., Janda, Catherine N., and Batiste, Sharon, 20250325, Labeled satellite imagery for training machine learning semantic segmentation models of coastal shorelines.: data release DOI:10.5066/P13EOBZQ, U.S. Geological Survey, Pacific Coastal and Marine Science Center, Santa Cruz, California.

    Online Links:

    Other_Citation_Details:
    Suggested Citation: Buscombe, D., Lundine, M.A., Janda, C.N., and Batiste, S., 2025, Labeled satellite imagery for training machine learning semantic segmentation models of coastal shorelines, U.S. Geological Survey data release, https://doi.org/10.5066/P13EOBZQ.
  2. What geographic area does the data set cover?
    West_Bounding_Coordinate: -180.00000
    East_Bounding_Coordinate: 180.00000
    North_Bounding_Coordinate: 90.00000
    South_Bounding_Coordinate: -90.00000
  3. What does it look like?
  4. Does the data set describe conditions during a particular time period?
    Beginning_Date: 1984
    Ending_Date: 2024
    Currentness_Reference:
    collection years of satellite imagery.
  5. What is the general form of this data set?
    Geospatial_Data_Presentation_Form: JPEG
  6. How does the data set represent geographic features?
    1. How are geographic features stored in the data set?
      Indirect_Spatial_Reference:
      Data were generated within a numerical model scheme. The model training data presented are not for a particular geographic area.
    2. What coordinate system is used to represent geographic features?
  7. How does the data set describe geographic features?
    Each image name contains a string that includes the date and time, as well as the name of the sensor. PS denotes PlanetScope imagery. S2 denotes Sentinel 2A imagery. L5 denotes Landsat 5. L7 denotes Landsat 7. L8 denotes Landsat 8. L9 denotes Landsat 9. The date and time of the projected data (UTC) are in yyyy-mm-dd-hh-MM-SS format (where yyyy is the 4-digit year, mm is the 2-digit month, dd is the 2-digit day, hh is the 2-digit hour in 24-hour format, MM is the 2-digit minutes, and SS is the 2-digit seconds). For example, the file name ‘formodel_dataset2_Duck_2014-02-28-15-41-42_L8.jpg’ is for an RGB image from Landsat 8 collected on 2014-02-28 at 15:41:42 UTC. File names ending with ‘label’ indicate images containing labeled classes.
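The naming convention above can be parsed programmatically. The sketch below is a hypothetical helper (not part of this data release) that recovers the site name, UTC timestamp, and sensor from a file name, assuming the site, date-time, and sensor code appear in that order, separated by underscores, as in the example above:

```python
import re
from datetime import datetime

# Sensor codes as defined in this metadata record.
SENSORS = {"PS": "PlanetScope", "S2": "Sentinel 2A",
           "L5": "Landsat 5", "L7": "Landsat 7",
           "L8": "Landsat 8", "L9": "Landsat 9"}

def parse_name(filename):
    """Extract site, UTC timestamp, and sensor from a dataset file name.

    Illustrative only: assumes the ..._site_yyyy-mm-dd-hh-MM-SS_sensor
    layout shown in this record's example file name."""
    m = re.match(
        r".*_(?P<site>[^_]+)"
        r"_(?P<ts>\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2})"
        r"_(?P<sensor>PS|S2|L[5789])",
        filename)
    if m is None:
        raise ValueError(f"unrecognized file name: {filename}")
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d-%H-%M-%S")
    return m.group("site"), ts, SENSORS[m.group("sensor")]
```

For the example file name above, this returns the site "Duck", the timestamp 2014-02-28 15:41:42, and the sensor "Landsat 8".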
    JPEG files of RGB imagery and their associated labeled imagery (Source: Producer Defined)
    Label
    Image pixel class labeled using Doodler (Buscombe, 2022) (Source: Producer defined)

    Value  Definition
    0      Integer pixel value used for the water class
    1      Integer pixel value used for the whitewater (surf) class
    2      Integer pixel value used for the sediment class
    3      Integer pixel value used for the ‘other’ class, which is everything not covered by the previous three classes
    Entity_and_Attribute_Overview:
    These data are designed to train and test a Machine Learning model that is tasked with semantic segmentation of imagery into 4 discrete classes, a) water, b) whitewater/surf, c) sediment, and d) other, for the purposes of shoreline detection. These images have been labeled using Doodler (Buscombe and others, 2022). No positions are encoded in the files because these data are intended solely to train a Machine Learning model to identify imagery suitable for shoreline analysis (such as imagery in which the shoreline is visible to the human eye). These images and associated labels are designed to be used to create a segmentation model following the methods of Buscombe and Goldstein (2022). This model would then carry out semantic satellite image segmentation for the principal purpose of finding water and sand pixels, so the shoreline can be identified automatically from satellite imagery. The imagery has been organized into three zipped folders. The zipped folder images_and_labels_RGB.zip contains RGB imagery and associated labels, with separate subsets for Alaska-only (images_AK_RGB.zip and labels_AK_RGB.zip), all locations (images_ALL_RGB.zip and labels_ALL_RGB.zip), non-Alaska locations (images_nonAK_RGB.zip and labels_nonAK_RGB.zip), and locations in the vicinity of Duck, North Carolina (images_Duck_RGB.zip and labels_Duck_RGB.zip). The zipped folder images_and_labels_NDWI.zip contains NDWI imagery and associated labels, with separate subsets for Alaska-only (images_AK_NDWI.zip and labels_AK_NDWI.zip) and all locations (images_ALL_NDWI.zip and labels_ALL_NDWI.zip). The zipped folder images_and_labels_MNDWI.zip contains MNDWI imagery and associated labels, with separate subsets for Alaska-only (images_AK_MNDWI.zip and labels_AK_MNDWI.zip) and all locations (images_ALL_MNDWI.zip and labels_ALL_MNDWI.zip).
    Entity_and_Attribute_Detail_Citation: U.S. Geological Survey
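As a quick sanity check of the label encoding described above, a label image loaded as an integer array (for example, with Pillow or imageio) can be summarized by class. The helper below is an illustrative sketch, not part of the release; the class-integer mapping follows this record's attribute table:

```python
import numpy as np

# Class integers as defined in this record's attribute table.
CLASSES = {0: "water", 1: "whitewater/surf", 2: "sediment", 3: "other"}

def class_fractions(label):
    """Return the fraction of pixels in each class for one label array.

    `label` is an integer array, e.g. loaded from a label image with
    Pillow (np.asarray(Image.open(path))) or imageio."""
    label = np.asarray(label)
    values, counts = np.unique(label, return_counts=True)
    # Flag any value outside 0-3 rather than silently dropping it.
    return {CLASSES.get(int(v), f"unexpected ({v})"): c / label.size
            for v, c in zip(values, counts)}
```

Checking that every label file yields only the "water", "whitewater/surf", "sediment", and "other" keys is a simple way to validate a download before training.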

Who produced the data set?

  1. Who are the originators of the data set? (may include formal authors, digital compilers, and editors)
    • Daniel Buscombe
    • Mark A. Lundine
    • Catherine N. Janda
    • Sharon Batiste
  2. Who also contributed to the data set?
    This data release was funded by the USGS Coastal and Marine Hazards and Resources Program.
  3. To whom should users address questions about the data?
    U.S. Geological Survey, Pacific Coastal and Marine Science Center
    Attn: PCMSC Science Data Coordinator
    2885 Mission Street
    Santa Cruz, CA

    831-427-4747 (voice)
    pcmsc_data@usgs.gov

Why was the data set created?

These data provide resources for automatically detecting coastal shoreline position for resource managers, science researchers, students, and the general public. These data can be used with image viewing software and within specialist software for the purposes of training Machine Learning models to segment imagery into water, whitewater/surf, sediment, and 'other' classes for the purposes of shoreline mapping. Other potential uses of such data include mapping sand and whitewater/surf from geospatial imagery. The imagery is organized into three folders, which constitute three separate versions of the dataset. One folder contains visible-spectrum or RGB (Red-Green-Blue) imagery and corresponding label images. The second folder contains NDWI (Normalized Difference Water Index) imagery and associated labels. The third folder contains MNDWI (Modified Normalized Difference Water Index) imagery and associated labels. MNDWI and NDWI spectral index imagery are commonly used for water and shoreline detection. These data are used to train a Machine Learning model to carry out the task of semantic image segmentation. Labels were created using the image annotation tool Doodler (Buscombe, 2022; Buscombe and others, 2022).

How was the data set created?

  1. From what previous works were the data drawn?
    Landsat imagery (source 1 of 4)
    U.S. Geological Survey, 2025, Landsat imagery (from Landsat 8-9): U.S. Geological Survey, online.

    Online Links:

    Type_of_Source_Media: online database
    Source_Contribution:
    The archive of Landsat 8 and 9 satellite imagery was accessed through Google Earth Engine.
    Landsat imagery (source 2 of 4)
    U.S. Geological Survey, 2025, Landsat imagery (from Landsat 7): U.S. Geological Survey, online.

    Online Links:

    Type_of_Source_Media: online database
    Source_Contribution:
    The archive of Landsat 7 satellite imagery was accessed through Google Earth Engine.
    Landsat imagery (source 3 of 4)
    U.S. Geological Survey, 2025, Landsat imagery (from Landsat 5): U.S. Geological Survey, online.

    Online Links:

    Type_of_Source_Media: online database
    Source_Contribution:
    The archive of Landsat 5 satellite imagery was accessed through Google Earth Engine.
    PlanetScope imagery (source 4 of 4)
    Planet Labs PBC, 2025, PlanetScope imagery: Planet Labs PBC, online.

    Type_of_Source_Media: online database
    Source_Contribution:
    The archive of PlanetScope satellite imagery was accessed through the PlanetScope Application Programming Interface.
  2. How were the data generated, processed, and modified?
    Date: 01-Jul-2024 (process 1 of 4)
    Set up the CoastSeg toolbox (Fitzpatrick and others, 2024) for implementation along the regions of interest. The toolbox was set up in Python 3.10 to run for geography spanning the coastline of numerous worldwide locations, for the time period of 01 March 1984 to 31 December 2024. Visible-band (RGB) images were constructed by concatenating the Red, Green, and Blue bands and saving in JPEG format. NDWI images were computed using the Near-Infrared (NIR) and Green bands of the original multispectral imagery, and saved in JPEG format. MNDWI images were computed using the Shortwave-Infrared (SWIR) and Green bands of the original multispectral imagery, and saved in JPEG format. Images were then labeled using Doodler (Buscombe, 2022; Buscombe and others, 2022). MNDWI images were not computed for PlanetScope imagery because no Shortwave-Infrared (SWIR) band is available.
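The NDWI and MNDWI computations referenced in this step follow the standard normalized-difference formulas, NDWI = (Green - NIR) / (Green + NIR) and MNDWI = (Green - SWIR) / (Green + SWIR). A minimal NumPy sketch of these indices (illustrative only; this record does not specify the exact scaling applied before JPEG encoding):

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters):
    (Green - NIR) / (Green + NIR). Positive values indicate water."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # Small epsilon guards against divide-by-zero over dark pixels.
    return (green - nir) / (green + nir + 1e-12)

def mndwi(green, swir):
    """Modified NDWI (Xu): (Green - SWIR) / (Green + SWIR).
    Requires a SWIR band, which PlanetScope lacks (hence no
    MNDWI products for PlanetScope in this release)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir + 1e-12)
```

Both functions accept full band arrays (H x W) and return an index image in the range [-1, 1], which would then be rescaled to 8-bit for JPEG output.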
    Date: 01-Aug-2024 (process 2 of 4)
    Ran the CoastSeg toolbox (Fitzpatrick and others, 2024) on Landsat and Sentinel-2A imagery available through Google Earth Engine (Gorelick and others, 2017) for the geography and time period of interest. PlanetScope imagery was downloaded using CoastSeg from the Planet Application Programming Interface. Imagery had a horizontal resolution of between 4 and 30 m depending on source. Imagery with an original horizontal resolution of 30 m was pan-sharpened to 15 m. The geospatial information has been removed; it is not necessary for the intended purpose of training Machine Learning models to discriminate between suitable and unsuitable imagery. Only the red, green, and blue channels have been extracted from the original multispectral imagery and, after pan-sharpening, these three bands are saved as a PNG format image. Images were labeled using the Doodler software program (Buscombe and others, 2022). The classes used to label imagery were: a) water, b) whitewater (surf), c) sediment, and d) other. Label images are such that each pixel represents a class: the integer 0 is used for the water class, 1 for the whitewater (surf) class, 2 for the sediment class, and 3 for the 'other' class, which is everything not covered by the previous three classes. Data sources used in this process:
    • Sentinel imagery
    • Landsat imagery
    • PlanetScope imagery
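Segmentation models trained on these labels typically expect the integer classes described in this step (0 through 3) expanded to one-hot form. A minimal sketch, assuming NumPy and the four-class scheme above (a common convention, not a prescribed part of this release):

```python
import numpy as np

def one_hot_labels(label, n_classes=4):
    """Convert an integer label image with values 0..n_classes-1 into a
    one-hot stack of shape (H, W, n_classes).

    Uses identity-matrix row indexing: row v of eye(n_classes) is the
    one-hot vector for class v."""
    label = np.asarray(label, dtype=int)
    return np.eye(n_classes, dtype=np.uint8)[label]
```

For example, a pixel labeled 0 (water) becomes the vector [1, 0, 0, 0] and a pixel labeled 3 ('other') becomes [0, 0, 0, 1].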
    Date: 01-Aug-2024 (process 3 of 4)
    Checked output to ensure quality results.
    Date: 01-Aug-2024 (process 4 of 4)
    Organized image data into folders of images and associated label images in JPEG format. Originating imagery dates/times are included in the file names. No positions are encoded in the files because these data are intended solely to train a Machine Learning model to identify imagery suitable for shoreline analysis (such as imagery in which the shoreline is visible to the human eye).
  3. What similar or related data should the user be aware of?
    Buscombe, Daniel D., 2022, Doodler--A web application built with plotly/dash for image segmentation with minimal supervision: software release DOI:10.5066/P9YVHL23, U.S. Geological Survey, Pacific Coastal and Marine Science Center, Santa Cruz, California.

    Online Links:

    Other_Citation_Details:
    Buscombe, D.D., 2022, Doodler--A web application built with plotly/dash for image segmentation with minimal supervision: U.S. Geological Survey software release, https://doi.org/10.5066/P9YVHL23
    Buscombe, Daniel D., Goldstein, Evan B., Sherwood, Chris R., Bodine, Cameron, Brown, Jenna A., Favela, Jaycee, Fitzpatrick, Sharon, Kranenburg, Christine J., Over, Jin-Si R., Ritchie, Andrew C., and Warrick, Jonathan A., 2022, Human‐in‐the‐loop segmentation of Earth surface imagery.

    Online Links:

    Other_Citation_Details:
    Buscombe, D., Goldstein, E.B., Sherwood, C.R., Bodine, C., Brown, J.A., Favela, J., Fitzpatrick, S., Kranenburg, C.J., Over, J.R., Ritchie, A.C. and Warrick, J.A., 2022. Human‐in‐the‐loop segmentation of Earth surface imagery. Earth and Space Science, 9(3), p.e2021EA002085.
    Buscombe, D., and Goldstein, E.B., 2022, A reproducible and reusable pipeline for segmentation of geoscientific imagery.

    Online Links:

    Other_Citation_Details:
    Buscombe, D., and Goldstein, E.B., 2022, A reproducible and reusable pipeline for segmentation of geoscientific imagery: Earth and Space Science, v. 9, e2022EA002332.
    Fitzpatrick, S., Buscombe, D., Warrick, J.A., Lundine, M.A., and Vos, K., 2024, CoastSeg: an accessible and extendable hub for satellite-derived-shoreline (SDS) detection and mapping.

    Online Links:

    Other_Citation_Details:
    Fitzpatrick, S., Buscombe, D., Warrick, J.A., Lundine, M.A., and Vos, K., 2024, CoastSeg: an accessible and extendable hub for satellite-derived-shoreline (SDS) detection and mapping. Journal of Open Source Software, 9(99), 6683
    Vos, K., Harley, M.D., Splinter, K.D., Simmons, J.A., and Turner, I.L., 2019, Sub-annual to multi-decadal shoreline variability from publicly available satellite imagery.

    Online Links:

    Other_Citation_Details:
    Vos, K., Harley, M.D., Splinter, K.D., Simmons, J.A., and Turner, I.L., 2019, Sub-annual to multi-decadal shoreline variability from publicly available satellite imagery: Coastal Engineering, v. 150, p. 160-174.
    Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., and Moore, R., 2017, Google Earth Engine: Planetary-scale geospatial analysis for everyone.

    Online Links:

    Other_Citation_Details:
    Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., and Moore, R., 2017, Google Earth Engine: Planetary-scale geospatial analysis for everyone: Remote Sensing of Environment, v. 202, p. 18-27.

How reliable are the data; what problems remain in the data set?

  1. How well have the observations been checked?
  2. How accurate are the geographic locations?
  3. How accurate are the heights or depths?
  4. Where are the gaps in the data? What is missing?
    Data set is considered complete for the information presented. Users are advised to read the rest of the metadata record carefully for additional details.
  5. How consistent are the relationships among the observations, including topology?
    Data have undergone QA/QC and fall within expected/reasonable ranges.

How can someone get a copy of the data set?

Are there legal restrictions on access or use of the data?
Access_Constraints No access constraints
Use_Constraints USGS-authored or produced data and information are in the public domain from the U.S. Government and are freely redistributable with proper metadata and source attribution. Please recognize and acknowledge the U.S. Geological Survey as the originator(s) of the dataset and in products derived from these data. Users are advised to read the rest of the metadata record carefully for additional details.
  1. Who distributes the data set? (Distributor 1 of 1)
    U.S. Geological Survey - CMGDS
    2885 Mission Street
    Santa Cruz, CA

    831-427-4747 (voice)
    pcmsc_data@usgs.gov
  2. What's the catalog number I need to order this data set? Image data are organized into folders of images and associated label images in JPEG format. These data are ready to be ingested into a deep learning or machine learning model training pipeline (for example, Buscombe and Goldstein, 2022), using software such as TensorFlow, Keras, or PyTorch.
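Before ingesting the data into a training pipeline, each image must be paired with its label counterpart. The sketch below is a hypothetical example assuming label files share the image's name with a '_label' suffix; the exact separator in the release may differ, so inspect the unzipped folders and adjust the pattern accordingly:

```python
from pathlib import Path

def pair_images_and_labels(image_dir, label_dir):
    """Pair each JPEG image with its label image by matching file stems.

    Assumes label files are named <image stem>_label.jpg; this suffix is
    an assumption for illustration, not a documented convention."""
    pairs = []
    for img in sorted(Path(image_dir).glob("*.jpg")):
        lab = Path(label_dir) / f"{img.stem}_label.jpg"
        if lab.exists():
            pairs.append((img, lab))
    return pairs
```

The resulting list of (image path, label path) tuples can feed a tf.data pipeline or a PyTorch Dataset directly.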
  3. What legal disclaimers am I supposed to read?
    Unless otherwise stated, all data, metadata and related materials are considered to satisfy the quality standards relative to the purpose for which the data were collected. Although these data and associated metadata have been reviewed for accuracy and completeness and approved for release by the U.S. Geological Survey (USGS), no warranty expressed or implied is made regarding the display or utility of the data on any other system or for general or scientific purposes, nor shall the act of distribution constitute any such warranty.
  4. How can I download or order the data?
  5. What hardware or software do I need in order to use the data set?
    These data can be viewed with image (picture) viewing software or numerical processing software such as Python or MATLAB.

Who wrote the metadata?

Dates:
Last modified: 25-Mar-2025
Metadata author:
U.S. Geological Survey, Pacific Coastal and Marine Science Center
Attn: PCMSC Science Data Coordinator
2885 Mission Street
Santa Cruz, CA

831-427-4747 (voice)
pcmsc_data@usgs.gov
Metadata standard:
Content Standard for Digital Geospatial Metadata (FGDC-STD-001-1998)

This page is <https://cmgds.marine.usgs.gov/catalog/pcmsc/DataReleases/CMGDS_DR_tool/DR_P13EOBZQ/images_and_labels_metadata.faq.html>
Generated by mp version 2.9.51 on Tue Apr 1 10:12:09 2025