donav:

    files=`ls *.RAW | cut -d. -f1`
    for file in $files
    do
        awk -f awkit $file.RAW > $file.navdep
    done

The AWK script "awkit" simply extracts any line of the HYPACK file that contains either the string "GPGGA" or "SDDBT" and writes that line to a new file with the extension .navdep.
awkit:

    { if ($0 ~ /GPGGA|SDDBT/) { print $0 } }

This process step and all subsequent process steps were performed by the same person, VeeAnn A. Cross.
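The line filtering that donav and awkit perform together can be sketched in Python; the function name and the sample NMEA sentences below are hypothetical illustrations, not taken from the survey data:

```python
import re

def extract_nav_depth(lines):
    """Keep only NMEA sentences carrying position (GPGGA) or depth (SDDBT)."""
    pattern = re.compile(r"GPGGA|SDDBT")
    return [line for line in lines if pattern.search(line)]

# Hypothetical sample of a HYPACK .RAW file's contents
raw = [
    "$GPGGA,130713,3836.1536,N,07504.2568,W,2,09,0.9,1.1,M,,,,*32",
    "$GPVTG,054.7,T,034.4,M,005.5,N,010.2,K",
    "$SDDBT,5.4,f,1.6,M,0.9,F*0A",
]
kept = extract_nav_depth(raw)  # drops the GPVTG line
```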
doholdhypack:

    files=`ls *.navdep | cut -d. -f1`
    for file in $files
    do
        awk -f awkholdhypack $file.navdep > $file.holdhypack
    done

The AWK script awkholdhypack:
BEGIN { FS = "," } { FS = "," depth = -9999 if ($1 ~ /GPGGA/) { utctime = $2 latdeg = substr($3,1,2) latmin = substr($3,3,6) declat = latdeg + (latmin/60) londeg = substr($5,1,3) lonmin = substr($5,4,6) declon = -1 * (londeg + (lonmin/60)) if (NR==1) { holddepth = -9999 } else { printf("%s, %9.6f, %9.6f, %5.1f\n", holdutctime, holddeclon, holddeclat, holddepth) } holdutctime = utctime holdutcdate = utcdate holddeclon = declon holddeclat = declat holddepth = -9999 } if ($1 ~ /SDDBT/) { if ($4 != "") { depthreal = $4 holddepth = depthreal } else { depthreal = -9999 holddepth = -9999 } } } END { printf("%s, %9.6f, %9.6f, %5.1f\n", holdutctime, holddeclon, holddeclat, holddepth) }
    cat 201_1213.holdhypack \
        202_1414.holdhypack \
        203_1441.holdhypack \
        204_1600.holdhypack > jd103hypack.csv
awkbathy_fmt:

    BEGIN { FS = "," }
    {
        hr = substr($1,1,2)
        min = substr($1,3,2)
        sec = substr($1,5,2)
        jday = $5
        longitude = $2
        latitude = $3
        depth = $4
        printf("%s\t%s\t%s\t%s\t%9.6f\t%9.6f\t%3.1f\n", jday, hr, min, sec, longitude, latitude, depth)
    }
    {
        year = substr($3,1,4)
        month = substr($3,6,2)
        day = substr($3,9,2)
        hr = substr($4,1,2)
        min = substr($4,4,2)
        utchr = hr + 5
        tideval = $6
        tideval_meters = tideval * 0.3048
        printf("1\t%s\t%s\t%02d\t%02d\t%02d\t%02d\t%02.4f\t%2.3f\n", $2, year, month, day, utchr, min, tideval_meters, tideval)
    }

The first element written to the output file is the number 1. The value of this number isn't important, but the MATLAB script does expect the first element to be a number. Another important detail of this AWK script is that 5 is added to the hours in the tide file to convert from EST to UTC, making the times compatible with the bathymetry data. Also, the original tide data were recorded in feet, but the work here is done in meters, so the feet-to-meters conversion is handled in the same script.
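The two conversions in this step can be checked with a small Python sketch; the function name and sample values are hypothetical. Note that, like the AWK script, this sketch adds 5 to the hour without carrying into the date, which is only safe for readings before 19:00 EST:

```python
def tide_to_utc_meters(est_hour, tide_feet):
    """Shift an EST hour to UTC (+5 h) and convert a tide reading
    from feet to meters (1 ft = 0.3048 m)."""
    utc_hour = est_hour + 5          # no day rollover, mirroring the AWK script
    tide_meters = tide_feet * 0.3048
    return utc_hour, tide_meters

# Hypothetical reading: 2.0 ft at 13:00 EST
utc, m = tide_to_utc_meters(13, 2.0)
```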
    %Step 1. Outside of Matlab, format the ASCII tide gauge data.
    %Load the tide data.
    load ir_tides.dat
    load rd_tides.dat
    greg_ir=ir_tides(:,3:7);  %year month day hour minute needs to be in columns 3,4,5,6,7
    greg_ir(:,6)=zeros(length(greg_ir(:,1)),1);  %adding secs bc the tide file doesn't have any
    greg_rd=rd_tides(:,3:7);
    greg_rd(:,6)=zeros(length(greg_rd(:,1)),1);
    %then establish the julian timebase for the observed tide data
    jd_ir=julian(greg_ir);
    jd_rd=julian(greg_rd);
    h_ir=ir_tides(:,8);  %observed tide in meters (in this case above NGVD29)
    h_rd=rd_tides(:,8);
    %Step 2: outside of Matlab I reformatted my bathy data. The original data was exported
    %from a shapefile using XTools and then needed to be reformatted for my needs here
    load bathy.dat  %there are 2 bathy.dat files in 2 separate folders
    jd0=julian([2010 1 1 0 0 0]);  %starting point for yearday
    %setting the julian time base for the ship stuff
    %this assumes column 1=jd, 2=hour, 3=min, 4=sec
    jdtrack=jd0+bathy(:,1)-1+(bathy(:,2)+bathy(:,3)/60+bathy(:,4)/3600)/24;
    x=bathy(:,5);  %longitude in decimal degrees in column 5
    y=bathy(:,6);  %latitude in decimal degrees in column 6
    htrack=bathy(:,7);
    %Step 3: interpolate observed tide data onto the ship track timebase using a linear
    %interpolation. Splining the tide data could result in overshoots
    tidetrack_ir=linterp(jd_ir,h_ir,jdtrack);
    tidetrack_rd=linterp(jd_rd,h_rd,jdtrack);
    h_corrected_ir=htrack-tidetrack_ir;
    h_corrected_rd=htrack-tidetrack_rd;
    %Step 4: output the information I want into a text file.
    %I decided I basically want my original bathymetry data, plus the tide value
    %and the corrected value
    %I have to use the ' to get the columns flipped so the append thing works
    alltidestuff=[bathy'; tidetrack_ir'; h_corrected_ir'; tidetrack_rd'; h_corrected_rd'];
    outfile=['veetest'];
    fid=fopen(outfile,'w');
    fprintf(fid,'%03d:%02d:%02d:%02d, %9.6f, %9.6f, %3.1f, %3.1f, %3.1f, %3.1f, %3.1f \n',alltidestuff);
    fclose(fid);

The result is a comma-delimited text file with 8 columns of information: jdtime, longitude, latitude, depth_m, ir_tide, ir_cor, rd_tide, rd_cor. The definition of this header information is as follows:
jdtime: julian day and time from the GPS
longitude: longitude from the GPS
latitude: latitude from the GPS
depth_m: depth in meters from the fathometer (as recorded by the GPS)
ir_tide: interpolated tide value at the GPS time from the Indian River Inlet tide station
ir_cor: the corrected depth in meters of the fathometer based on the Indian River tide value (depth_m - ir_tide)
rd_tide: interpolated tide value at the GPS time from the Rosedale Beach tide station
rd_cor: the corrected depth in meters of the fathometer based on the Rosedale tide value (depth_m - rd_tide)

The header line was added to the veetest file, which was then saved under two new file names: hypbathytides.csv and jd105_moredepths.csv.
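The linear interpolation in Step 3 and the depth correction can be reproduced outside MATLAB with NumPy's `interp`, which is also linear and therefore avoids the overshoots a spline could introduce. The variable names follow the MATLAB script, but the sample times and heights below are hypothetical:

```python
import numpy as np

# Observed tide at one gauge: times (decimal julian days) and heights (meters)
jd_tide = np.array([103.0, 103.5, 104.0])
h_tide = np.array([0.2, 0.8, 0.3])

# Ship-track times at which soundings were recorded, and raw depths (meters)
jdtrack = np.array([103.25, 103.75])
htrack = np.array([5.0, 6.0])

# Interpolate the tide onto the ship-track timebase, then correct the depths
tidetrack = np.interp(jdtrack, jd_tide, h_tide)
h_corrected = htrack - tidetrack
```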
wt_tide = ((1 - ([dist_rose] / ([dist_rose] + [dist_inlet]))) * [rd_tide]) + ((1 - ([dist_inlet] / ([dist_rose] + [dist_inlet]))) * [ir_tide])
This gives a weighted tide value for each point based on its distance to each tide station: each station's weight is one minus that station's share of the total distance, so the nearer station contributes more and the two weights sum to one. The tide correction is then applied:

tideadded = [depth_m] - [wt_tide]
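The weighting and correction above can be sketched in Python; the function name and the sample distances, tides, and depth are hypothetical:

```python
def weighted_tide(dist_rose, dist_inlet, rd_tide, ir_tide):
    """Distance-based weighting: the nearer station gets the larger weight."""
    total = dist_rose + dist_inlet
    w_rd = 1.0 - dist_rose / total    # equals dist_inlet / total
    w_ir = 1.0 - dist_inlet / total   # equals dist_rose / total
    return w_rd * rd_tide + w_ir * ir_tide

# A point three times closer to Rosedale leans mostly on the Rosedale value
wt = weighted_tide(dist_rose=1.0, dist_inlet=3.0, rd_tide=0.4, ir_tide=0.8)
corrected = 5.0 - wt   # depth_m - wt_tide
```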
Inputs:
    bounding_poly; ; boundary
    jd105_moredepths_utm18; wttide_dep; PointElevation
    hypbathytides_utm18; wttide_dep; PointElevation
Output: bestbathy
Cell size: 25
Drainage enforcement: no_enforce
Primary type of input data: spot

Everything else was left to defaults.
The bounding polygon was a quick polygon created around the survey area to limit the extent of extrapolation in areas where no data were available.
Input: bestbathy
Output: bestbat_fil
Filter type: low
The box next to "ignore NODATA in calculations" was checked.
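The effect of a low filter with NODATA ignored can be approximated as a 3x3 neighborhood mean that skips NODATA cells. This is a rough sketch of that behavior, not an exact emulation of the GIS tool; the function name, NODATA sentinel, and sample grid are assumptions:

```python
import numpy as np

NODATA = -9999.0

def lowpass3x3(grid):
    """3x3 mean filter that skips NODATA cells when averaging and
    leaves NODATA cells themselves unchanged."""
    out = np.full(grid.shape, NODATA)
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] == NODATA:
                continue                              # leave NODATA untouched
            window = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            valid = window[window != NODATA]          # drop NODATA neighbors
            out[r, c] = valid.mean()
    return out

# Hypothetical 2x2 grid with one NODATA cell
g = np.array([[1.0, 2.0], [NODATA, 4.0]])
smoothed = lowpass3x3(g)
```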