Anchoring Depth

Sorry, what is crowd sourced data?
My understanding - you record your transit across a stretch of water, the data will include GPS location and depth (I thought it included an offset for tide, automatically), and you submit the data to some central organisation. All the data collected by like-minded individuals is then combined and collated into a chart. You then access the new chart and, I think, can overlay the chart or data on your chart plotter. I thought both C-Map and Navionics had similar systems. Applying the tide correction seems like a simple bit of maths: the time is known from the GPS, and the tide data is included on most chart plotters.
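To illustrate the sort of sum I mean (a minimal sketch only - the function name, the offset convention and the numbers are mine, not from C-Map or Navionics):

```python
# Minimal sketch of reducing a logged sounding to chart datum.
# Assumes the height of tide at the time of the GPS fix is known,
# and that the sounder reports depth below the transducer.

def reduce_to_chart_datum(sounder_depth_m: float,
                          height_of_tide_m: float,
                          transducer_below_waterline_m: float = 0.0) -> float:
    """Depth referred to chart datum (all values in metres)."""
    depth_below_surface = sounder_depth_m + transducer_below_waterline_m
    return depth_below_surface - height_of_tide_m

# Example: sounder reads 7.4 m, transducer is 0.4 m below the waterline,
# and the tide is 2.6 m above chart datum at the fix time:
print(reduce_to_chart_datum(7.4, 2.6, 0.4))  # approx. 5.2 m at chart datum
```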

It has been popular with amateur fishermen, often operating from kayaks, as they can find the holes in which big fish, apparently, lurk. To me it seems a way of ensuring all the sizeable fish are caught (if they really do lurk), and you are possibly better off finding a less popular place to fish.

But I am known as being contrary :)

Again - correct me if I have this wrong - I have only had a very superficial interest.

It is assumed that the transducer that provides the data is accurately located, i.e. not at an angle. It also ignores atmospheric conditions.

Jonathan
 
Sorry, what is crowd sourced data?
Data collected by members of the public and contributed to a data bank.

It's a very successful model for some things; for example, OpenStreetMap works very well, and there are many special-interest maps (e.g. cycling trails) that are equally successful. But the conditions under which it works well don't apply to chart data. Summarizing very hastily (I'd have to write an essay to cover everything), the necessary conditions for success are:

  1. Sufficient duplicate measurements to eliminate bad or malicious data submissions.
  2. An underlying model that data can be fitted to. For example, a road is defined by a series of linked points, and any point gathered within a certain small distance of an existing feature probably belongs to that feature. Roads join together; an isolated road that doesn't join to any other road is probably spurious.
  3. A community with a strong interest in correct data.
  4. A strong model for interpolating between measured points.
There's lots more, but depth information, except in fairly limited cases, doesn't meet any of these conditions except 3. Examples given here and elsewhere show breakdowns in points 1, 2 and 4. The problem is that there is no GENERAL way to validate the data - an isolated shallow or deep measurement could be an error, or it could be real, and you and the data processor have no way of knowing which. The data processor has no way of controlling the 2D distribution of measurements, so there simply isn't a reliable means of interpolating data. Interpolating uncritically, as Navionics appear to do, will certainly result in unsafe results, as demonstrated above.
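To make the interpolation point concrete, here's a toy sketch using inverse-distance weighting (chosen purely for illustration - I'm not claiming this is the algorithm any vendor actually uses):

```python
# Toy illustration: uncritical interpolation blends a single bad
# sounding into a smooth, plausible-looking (and unsafe) surface.
import math

def idw(samples, x, y, power=2.0):
    """Inverse-distance-weighted depth at (x, y).
    samples: list of (x, y, depth) tuples."""
    num = den = 0.0
    for sx, sy, depth in samples:
        dist = math.hypot(x - sx, y - sy)
        if dist == 0.0:
            return depth  # exactly on a sample point
        w = 1.0 / dist ** power
        num += w * depth
        den += w
    return num / den

# Four honest 10 m soundings, plus one erroneous 0.5 m submission:
soundings = [(0, 0, 10.0), (2, 0, 10.0), (0, 2, 10.0), (2, 2, 10.0),
             (1, 1, 0.5)]
print(round(idw(soundings, 1.0, 0.9), 1))  # ~0.7: a spurious "shoal"
# Nothing in the data itself says whether 0.5 m is a pinnacle or a typo.
```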

I write all this as a professional in geographic data management with over 30 years' experience!

Sooner or later, someone will be wrecked as a result of relying on such data, and the resulting court cases will be interesting.
 
Sorry, what is crowd sourced data?
There is a way that boats can send their depth data back to Navionics, who then get their best wand out and crunch the numbers. I assume they take into account height of tide; I hope there is some agreed convention for total depth of water versus depth under the keel; and I have no idea what they do about atmospheric pressure.

Given enough data they come up with a more up-to-date chart of the sea bed.

I attended a meeting of the Royal Institute of Navigation about 18 months ago. One or two of the audience struggled with the concept that, while GPS was very accurate, there were bits of the coast last charted in 1850, and those might not be as accurate as some would like/need. Personally, I'm a bit more relaxed.
 
There is a way that boats can send their depth data back to Navionics, who then get their best wand out and crunch the numbers. I assume they take into account height of tide; I hope there is some agreed convention for total depth of water versus depth under the keel; and I have no idea what they do about atmospheric pressure.

Given enough data they come up with a more up-to-date chart of the sea bed.

I attended a meeting of the Royal Institute of Navigation about 18 months ago. One or two of the audience struggled with the concept that, while GPS was very accurate, there were bits of the coast last charted in 1850, and those might not be as accurate as some would like/need. Personally, I'm a bit more relaxed.
Just to note that Navionics do correct for tide, but obviously cannot correct for meteorological effects on the tide. It also depends on people having set their depth sounder up accurately - sounders can be set to various offsets, such as depth below the keel.

In areas where there are plenty of measurements, this works. But the weakness is that in less frequented waters, a single badly adjusted set of measurements can produce spurious results - I think the classic case was "canals" across Dutch shoals, where obviously a shallow-draught vessel with an incorrectly set-up depth sounder had sent in data.
 
Data collected by members of the public and contributed to a data bank.

One of OpenStreetMap's main data providers for the UK and EU is Amazon. Their package-delivery planning software uses OSM and relies on having all roads, one-way streets, low-traffic neighbourhoods etc. correct. They have a team checking driver feedback and user error tags against satellite imagery, images tagged in Facebook etc. to correct it. They even put long driveways in.

I don't think that, without a similar use case, the effort to collate and verify the Navionics data will be as effective.
 
Presumably when a chart, cruising-area notes etc. say "anchor in 5 m of water", usually along with other directions, that means 5 m plus tide height at the time of anchoring - so if the tide height is 0.8 m at the time of arrival, one should be looking for 5.8 m. Is that correct?
I personally do it the other way around: before entering an anchorage, I compute the present height of tide and determine a safe clearance for the following LW(s). That gives a minimum sounder depth to look for while circling the area to find a suitable spot.
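Put as a sum (a minimal sketch of my method - the names and numbers are illustrative, and meteorological effects on the tide are ignored):

```python
# What must the sounder read NOW so that draught plus a safety
# clearance still remain under the boat at the next low water?

def min_sounder_depth_now(draught_m: float,
                          clearance_m: float,
                          tide_now_m: float,
                          tide_next_lw_m: float) -> float:
    fall_to_lw = tide_now_m - tide_next_lw_m  # how far the water will drop
    return draught_m + clearance_m + fall_to_lw

# 1.8 m draught, 1.0 m clearance, tide now 3.2 m, next LW 0.8 m:
print(min_sounder_depth_now(1.8, 1.0, 3.2, 0.8))  # approx. 5.2 m on the sounder
```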
 
And subject to many potential errors, some of which have been noted, with examples, on these fora. It's OK in well frequented areas, in which case the errors become self-correcting, but in less well frequented areas I'd not trust it. There are potential errors in both data collection and data processing. Remember that charts, for all the deficiencies of the original surveys, are designed to ensure that if you navigate using them, you will be in safe water. The depth might not be what the chart shows, but it is unlikely to be less. Crowd-sourced data can't include limiting danger lines, such as those used to mark dangerous reefs. Crowd-sourced data does not have the same safeguards.
Data collected by members of the public and contributed to a data bank.

It's a very successful model for some things; for example, OpenStreetMap works very well, and there are many special-interest maps (e.g. cycling trails) that are equally successful. But the conditions under which it works well don't apply to chart data. Summarizing very hastily (I'd have to write an essay to cover everything), the necessary conditions for success are:

  1. Sufficient duplicate measurements to eliminate bad or malicious data submissions.
  2. An underlying model that data can be fitted to. For example, a road is defined by a series of linked points, and any point gathered within a certain small distance of an existing feature probably belongs to that feature. Roads join together; an isolated road that doesn't join to any other road is probably spurious.
  3. A community with a strong interest in correct data.
  4. A strong model for interpolating between measured points.
There's lots more, but depth information, except in fairly limited cases, doesn't meet any of these conditions except 3. Examples given here and elsewhere show breakdowns in points 1, 2 and 4. The problem is that there is no GENERAL way to validate the data - an isolated shallow or deep measurement could be an error, or it could be real, and you and the data processor have no way of knowing which. The data processor has no way of controlling the 2D distribution of measurements, so there simply isn't a reliable means of interpolating data. Interpolating uncritically, as Navionics appear to do, will certainly result in unsafe results, as demonstrated above.

I write all this as a professional in geographic data management with over 30 years' experience!

Sooner or later, someone will be wrecked as a result of relying on such data, and the resulting court cases will be interesting.
The above sums up why Admiralty chart data is such a valuable asset (and why it’s not given away in the way US charting is).

I spent a very happy year or so with a survey vessel of the RN and saw at first hand the extreme lengths that they went to in ensuring the data was accurate. You don't calculate the tide; you set a tide gauge that continuously transmits tidal height data to the surveying vessel.
Once (in Antarctica) we spent most of a day doing lines to prove that 'breakers reported - 1892' wasn't a rock, but that the sailors had probably seen waves over a bit of ice.
IIRC in the US, anyone can submit data and the checks of accuracy (I am told) are not rigorous.
 
The above sums up why Admiralty chart data is such a valuable asset (and why it’s not given away in the way US charting is).

I spent a very happy year or so with a survey vessel of the RN and saw at first hand the extreme lengths that they went to in ensuring the data was accurate. You don't calculate the tide; you set a tide gauge that continuously transmits tidal height data to the surveying vessel.
Once (in Antarctica) we spent most of a day doing lines to prove that 'breakers reported - 1892' wasn't a rock, but that the sailors had probably seen waves over a bit of ice.
IIRC in the US, anyone can submit data and the checks of accuracy (I am told) are not rigorous.
And despite you proving it doesn't exist, if the rock was named, the name will still appear in the UK list of Antarctic place names. Getting the APC to delete a placename is almost impossible, even if the place patently doesn't exist. All they'll do is put a note saying that HMS Endurance found nothing in 19xx.

The funny thing is rocks that do exist despite being miles from land. I always remember being rung by the lady responsible for coordinating data for the GEBCO 2000 bathymetric chart of the oceans. She asked "does the rock called abcd exist at X,Y?" When I checked and told her yes, it did, she said "Oh, dear, we've got a 2000m contour just there!" They fixed that one!

I was at the other end of the process for Antarctic charts - the UKHO usually sent copies of Antarctic charts to us for checking, and we usually found a few things that needed attention. We also usually provided coastline and terrestrial data. But there are large areas in Antarctica that have the modern equivalent of "Here be dragons" - "Unsurveyed because unsafe for small boats"
 
One of OpenStreetMap's main data providers for the UK and EU is Amazon. Their package-delivery planning software uses OSM and relies on having all roads, one-way streets, low-traffic neighbourhoods etc. correct. They have a team checking driver feedback and user error tags against satellite imagery, images tagged in Facebook etc. to correct it. They even put long driveways in.

I don't think that, without a similar use case, the effort to collate and verify the Navionics data will be as effective.
Indeed. There's another traffic-related thing that has appeared on my latest car. It reports speed and location back to whoever provides the navigation software for VW, and they extract traffic congestion data from it. That seems like a valid use of crowd-sourcing.

Ultimately it's about degrees of freedom in the data. Roads are quite well-constrained; you also know that most vehicles can't operate except on a road. The signal is simply road or not road. Then you can also assume that roads will join other roads. Lots of things to constrain and validate the data.

Go to sea, and you have no horizontal constraints, and depths are continuously variable from 0 down to 10,000 m plus. Further, the seabed has holes and pinnacles, so you can't even say that adjacent measurements are a constraint. They might be in some places, but not universally. And in shallower water, you depend on ancillary data that relies on the time of day being recorded accurately and the transducer offset being reported correctly. Time-zone offsets must also be taken into account - something else that may be set incorrectly.

Finally, crowd sourcing works best when the person providing the data has no influence on the data being reported. That's not the case for depth data; the user has to provide things like transducer offset. I did once look into it, and it would be very easy to set that incorrectly!
 
And despite you proving it doesn't exist, if the rock was named, the name will still appear in the UK list of Antarctic place names. Getting the APC to delete a placename is almost impossible, even if the place patently doesn't exist. All they'll do is put a note saying that HMS Endurance found nothing in 19xx.

The funny thing is rocks that do exist despite being miles from land. I always remember being rung by the lady responsible for coordinating data for the GEBCO 2000 bathymetric chart of the oceans. She asked "does the rock called abcd exist at X,Y?" When I checked and told her yes, it did, she said "Oh, dear, we've got a 2000m contour just there!" They fixed that one!

I was at the other end of the process for Antarctic charts - the UKHO usually sent copies of Antarctic charts to us for checking, and we usually found a few things that needed attention. We also usually provided coastline and terrestrial data. But there are large areas in Antarctica that have the modern equivalent of "Here be dragons" - "Unsurveyed because unsafe for small boats"
Indeed. At a time that I was used to sailing in waters that were extremely well charted, there was something special about putting lines of soundings on an area of the chart that said ‘Unsurveyed’.
 
Likewise, add interpolation to a plainly wrong representation of charted objects:
first image: the "normal" Navionics chart (note the dotted rectangle)
second image: adds the usual very nice interpolated contours
third image: the dotted rectangle of the first chart appears as what it is, a barely submerged stretch of bottom; these are non-tidal waters, and anyone crossing over it would run aground
Seems like a hostage to fortune not having some buoys on that shoal.
 
Indeed. At a time that I was used to sailing in waters that were extremely well charted, there was something special about putting lines of soundings on an area of the chart that said ‘Unsurveyed’.
You don't have to go far away to find "unsurveyed" locations - you can find that on a current UKHO chart less than 10 miles from the busy ferry port of Mallaig.
 
Seems like a hostage to fortune not having some buoys on that shoal.
Many / most locations in the world don't have navigation buoys for all or even the majority of hazards.
In West and North Scotland, buoys tend only to exist for hazards close to ferry and main commercial shipping routes.
Probably 99% of shoals and rocks are unmarked. It is navigator beware, and best to look at your chart (including the last-surveyed date, which could be two centuries back, with rowing boat and lead line).

Crowd-sourced data can work where there is LOTS of traffic, and careful computer analysis could correct for tide and incorrect keel offset by noting recorded depth over certain known control spots. But places with lots of traffic are generally well charted anyway. And it might take 100 years to get sufficient data in the less well charted, and less frequented, locations.
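As a sketch of that correction idea (hypothetical names and numbers - just the shape of the analysis, not anyone's actual pipeline):

```python
# Estimate a vessel's constant sounder bias from passes over
# well-surveyed "control spots", then apply it to the rest of
# its track. Median keeps the odd outlier from skewing the estimate.
from statistics import median

def estimate_offset(reported_m, control_depths_m):
    """Median of (tide-corrected reported depth - known control depth)."""
    return median(r - t for r, t in zip(reported_m, control_depths_m))

def correct_track(track_depths_m, offset_m):
    return [d - offset_m for d in track_depths_m]

# A vessel consistently reads about 0.9 m too deep over three controls:
offset = estimate_offset([5.9, 8.4, 3.1], [5.0, 7.5, 2.2])
print(round(offset, 1))                        # 0.9
print(correct_track([4.9, 6.1, 2.8], offset))  # roughly [4.0, 5.2, 1.9]
```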
 
Many / most locations in the world don't have navigation buoys for all or even the majority of hazards.
In West and North Scotland, buoys tend only to exist for hazards close to ferry and main commercial shipping routes.
Probably 99% of shoals and rocks are unmarked. It is navigator beware, and best to look at your chart (including the last-surveyed date, which could be two centuries back, with rowing boat and lead line).

Crowd-sourced data can work where there is LOTS of traffic, and careful computer analysis could correct for tide and incorrect keel offset by noting recorded depth over certain known control spots. But places with lots of traffic are generally well charted anyway. And it might take 100 years to get sufficient data in the less well charted, and less frequented, locations.
In fact, they do correct for incorrect data as you suggest, and it does, of course, work fine in frequented areas - exactly where you don't need it!
 
You don't have to go far away to find "unsurveyed" locations - you can find that on a current UKHO chart less than 10 miles from the busy ferry port of Mallaig.
Indeed. The exciting bit was that we were actually submitting the data for the first time...

I am reminded that it's been argued that many navigators who go aground thought they knew where they were on the chart, or simply trusted the chart's accuracy more than what their eyes and the echo sounder were telling them.
 
