Rocks

Yealm

On Navionics (Garmin now) app chart, rocks always underwater are crosses with red hatched background.

My question is, why doesn’t the app give the minimum depth of water above the rock? So one can know if it’s a ‘bad’ rock or one not to worry about.

Or is there a convention that displayed rocks are always less than a specified depth and always a danger to leisure sailors?

Many thanks :)
 

Attachment: IMG_8237.jpeg (screenshot of the chart)
Don't know about Navionics, but there are four IHO symbols for rocks: a dot is a rock that is always visible; an asterisk is a rock that covers and uncovers depending on the height of tide; a cross with four dots is a rock just awash when the water height is at LAT; and the plain cross is a generic rock that is always underwater but dangerous to navigation. Depending on the height of tide they pose different levels of danger, so using a single symbol for all of them would lose some information.
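For reference, the four symbols described above can be tabulated as a simple lookup. This is only an illustrative sketch: the dictionary keys are informal labels of my own, not official INT1 identifiers.

```python
# Illustrative lookup of the four IHO rock symbols described above.
# The keys are informal labels, not official INT1 codes.
IHO_ROCK_SYMBOLS = {
    "dot": "rock which does not cover (always visible)",
    "asterisk": "rock which covers and uncovers with the height of tide",
    "cross with four dots": "rock awash at the level of chart datum (LAT)",
    "plain cross": "underwater rock, dangerous to surface navigation",
}

def describe(symbol: str) -> str:
    """Return the meaning of an informal symbol label, or a fallback."""
    return IHO_ROCK_SYMBOLS.get(symbol, "unknown symbol")

print(describe("plain cross"))  # underwater rock, dangerous to surface navigation
```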
 
Yes the cross corresponds to the Navionics cross.
I guess I’m interested in the sort of depth that corresponds to ‘dangerous to navigation’, and am surprised it’s not just quantified with a depth number attached to the cross.
 
What does the official UKHO chart show? If it doesn’t show a specific depth then it isn’t there for other chart publishers to show either.
 
It refers to "charted depths"; your concern is about the absence of a charted depth, is it not?
And now a future AI scan might think that is the correct answer to the OP's question. I do wish the mods would delete any direct quotes of AI on these threads.
 
One of the recent Nobel laureates in physics described a long discussion/training session he had with an AI, where eventually the AI said "yes, you are right, 2+2=3" :)
 
Can you expand upon that?

I don’t understand AI, but how can 2+2=3 be concluded?

And how was the physicist “trained” by AI?
 
I am afraid you'll have to ask the experts; all I know is that AIs can be "trained", their content modified depending on interactions with users. As an example, young maths/physics/engineering students are hired to "train" the AI. My daughter (one of them) sometimes spends a couple of hours asking and discussing scientific questions with the AI; when she finds an error she reports it to the AI managers and is paid a couple of hundred euro. I think they then integrate the correct solutions so the AI won't make the same mistake again. Either way, I know nothing about the subject, just reporting what these young people do for pocket money. No more babysitting, it seems.
As to the 2+2, maybe the physicist began discussing with the AI different space/time universes and the like, where everything is possible :)

edit/add: judging by the amount of money these young students are making (being paid to find errors), it's probably best not to follow AI suggestions right now for this kind of hi-tech/science matter.
 
AI is a broad range of stuff, but most of what is currently getting people excited is one particular type of AI called a large language model (LLM). LLMs aren’t actually intelligent - but they do a good job of “sounding” intelligent to the average reader. They use statistics to guess what the next word in the answer should be, based on lots of information harvested from the internet. Problem is, an LLM has no idea whether the information in its training set is right or wrong. Not long ago ChatGPT couldn’t tell you correctly how many r’s were in “strawberry” (I think that is now fixed) - it had obviously learned on data for teaching the double R to children and ignored the first R. If you pointed it out, it would apologise and correct itself, but if you asked the same question the next day it would be wrong again.

It can be very good at some things - but it is actually a bit dangerous for something like chart symbols as it is just using random guesses to make plausible sounding answers.
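The “statistics to guess the next word” idea above can be sketched in a few lines. This is a toy illustration only: real LLMs use neural networks over tokens, not raw word counts, and the corpus here is made up.

```python
# Toy sketch of next-word prediction by frequency statistics:
# count which word follows which in a tiny corpus, then always
# pick the most frequent follower.
from collections import Counter, defaultdict

corpus = "the rock is awash the rock is dangerous the rock covers".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "rock" follows "the" most often in this corpus
```

Note the model is confident even when it has only one example to go on, and it has no notion of whether its answer is true - it just reflects whatever its corpus happened to contain, which is the failure mode described above.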
 
Thank you.

I am not familiar with AI at all.

I searched and found “ChatGPT”, made by “OpenAI”, and thought it might be open source, but it appears to be a four-tier system where only tier one is free (one tier is £200 per month), so I have not signed up and found out anything about AI yet.

All very new to me.
 
Thank you.

I just posted something I had found out about “ChatGPT” and it is not encouraging to me (as a bit of a dinosaur, I will concede).
 