3. Perl, AWK, Python, and Shell scripts were used to extract and reformat the navigation fixes stored in the CARIS HIPS database and load them into a SQLite database (version 3.7.9) that was geospatially extended with SpatiaLite (version 3.0.1). The processing flow for this step was as follows:
A. Navigation for each line in the CARIS HDCS directory was extracted using the Perl script do_CARIS_nav3_2012_024.pl, which runs the CARIS program printfNav on every line in the HDCS directory. Each extracted navigation file is tab-delimited in the format YYYY-JD HH:MM:SS:FFF DD.LAT DD.LONG SSSSS_VVVVV_YYYY-JD_LLLL AR, where YYYY = year, JD = Julian day, HH = hour, MM = minute, SS = seconds, FFF = fractions of a second, DD.LAT = latitude in decimal degrees, DD.LONG = longitude in decimal degrees, SSSSS = survey name, VVVVV = vessel name, LLLL = line name, and AR = accepted or rejected navigation fix. This step creates a directory of TXT navigation files, one for each survey line in the CARIS project.
B. The TXT files output by the printfNav process are parsed with Shell and AWK scripts to remove rejected navigation records and then reformatted into CSV files containing additional fields for survey ID, vessel name, and system name.
C. A Python script (pySQLBathNav) runs on each reformatted CSV file, parsing each record and adding its point to the SQLite database (which is created if it does not already exist). The pySQLBathNav script creates both point and trackline navigation features for each survey line.
D. When all of the survey lines have been processed into the SQLite database, a polyline shapefile is exported from the database using spatialite_tools (version 4.1.1).
Steps B, C, and D were all run from a single shell script called printfnavconv.
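The per-record logic of steps B and C can be sketched as follows. This is an illustrative reconstruction, not the actual pySQLBathNav script: the record layout follows the printfNav format described in step A, but the sample records, table names, function name, and the use of WKT text columns in place of true SpatiaLite geometry columns are all assumptions made for the sketch.

```python
import sqlite3

# Hypothetical printfNav-style records (tab-delimited), per the format in step A:
# YYYY-JD  HH:MM:SS:FFF  DD.LAT  DD.LONG  SSSSS_VVVVV_YYYY-JD_LLLL  AR
SAMPLE_RECORDS = [
    "2012-024\t14:02:10:500\t41.5271\t-70.6712\tSURVEY_VESSEL_2012-024_0001\tA",
    "2012-024\t14:02:11:500\t41.5273\t-70.6710\tSURVEY_VESSEL_2012-024_0001\tA",
    "2012-024\t14:02:12:500\t41.5275\t-70.6708\tSURVEY_VESSEL_2012-024_0001\tR",
    "2012-024\t14:02:13:500\t41.5277\t-70.6706\tSURVEY_VESSEL_2012-024_0001\tA",
]

def load_nav(records, db_path=":memory:"):
    """Parse printfNav records for one survey line, drop rejected fixes
    (step B), and load point and trackline rows into SQLite (step C).
    Geometries are stored here as WKT text for simplicity; the actual
    workflow used SpatiaLite geometry columns."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS nav_points "
                "(survey TEXT, vessel TEXT, line TEXT, fix_time TEXT, wkt TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS nav_lines "
                "(survey TEXT, vessel TEXT, line TEXT, wkt TEXT)")
    coords = []
    for rec in records:
        date, time, lat, lon, line_id, flag = rec.split("\t")
        if flag != "A":              # step B: discard rejected navigation fixes
            continue
        # Assumes no embedded underscores in the survey or vessel names
        survey, vessel, _, line = line_id.split("_")
        con.execute("INSERT INTO nav_points VALUES (?,?,?,?,?)",
                    (survey, vessel, line, f"{date} {time}",
                     f"POINT({lon} {lat})"))
        coords.append(f"{lon} {lat}")
    # One trackline per survey line, built from the accepted fixes in order
    con.execute("INSERT INTO nav_lines VALUES (?,?,?,?)",
                (survey, vessel, line,
                 "LINESTRING(" + ", ".join(coords) + ")"))
    con.commit()
    return con

con = load_nav(SAMPLE_RECORDS)
print(con.execute("SELECT COUNT(*) FROM nav_points").fetchone()[0])  # 3 accepted fixes
```

Step D would then correspond to exporting the nav_lines geometries to a polyline shapefile, which the workflow did with spatialite_tools rather than from Python.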