weustace
Well-Known Member
I would suggest that analysing noise in measurements is a well-respected science when dealing with large numbers of samples.
A cocked hat is three measurements to determine two variables.
That is not a statistical process, it's a measurement with a vague check.
Statistics-wise, it's like an opinion poll of 3 people.
In the same street.
Dealing with single measurements, a proper approach is to put absolute limits on each indicated value.
It depends what you want the answer for.
If you need to be absolutely sure you are avoiding a hazard, that is a different requirement from wanting an indication of progress.
How do you put an "absolute limit" on the measured values? The most common approach in scientific measurement, in my (admittedly rather limited) experience, is to ascribe what is referred to as an "absolute uncertainty". The manipulations of uncertainty that one is taught in school science lessons, and, I suspect, that underlie most less sophisticated experimental analysis, work (I think) on the principle that the measurement is subject to zero-mean noise from a Gaussian distribution (if there were a mean error, then calibrating your instrument should remove it). The "absolute uncertainty" is then the standard deviation of this noise, or some multiple thereof.
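To make that concrete, here is a minimal sketch of the model I mean: a single reading is the true value plus zero-mean Gaussian noise, and the quoted "absolute uncertainty" is a multiple of the noise standard deviation. The bearing, sigma, and the 2-sigma convention are all illustrative assumptions, not figures from any real instrument.

```python
import random

# Assumed (made-up) values for illustration:
TRUE_BEARING = 47.0   # degrees, the "true" bearing we are trying to measure
SIGMA = 2.0           # degrees, assumed std dev of the compass noise

def measure(rng):
    """One noisy reading: true value plus zero-mean Gaussian noise."""
    return TRUE_BEARING + rng.gauss(0.0, SIGMA)

rng = random.Random(1)
reading = measure(rng)
# Quote the single reading with a 2-sigma (~95%) absolute uncertainty:
print(f"bearing = {reading:.1f} deg +/- {2 * SIGMA:.1f} deg")
```

The point is only that the uncertainty attaches to a single measurement; no averaging is involved yet.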
Of course, when combining a lot of data points, the Central Limit Theorem will apply and push the result towards a Gaussian (hopefully), which is one of the other reasons this distribution is commonly used—but I think it still represents a reasonable model for the noise in single measurements, subject to the comment on tail probabilities in my previous post.
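The Central Limit Theorem effect is easy to demonstrate: averages of decidedly non-Gaussian (here uniform) noise come out with the spread Gaussian theory predicts. The sample sizes below are arbitrary.

```python
import random
import statistics

rng = random.Random(0)
N_MEANS, N_PER_MEAN = 2000, 50

# Each entry is the mean of 50 uniform(-1, 1) draws:
means = [statistics.fmean(rng.uniform(-1.0, 1.0) for _ in range(N_PER_MEAN))
         for _ in range(N_MEANS)]

# Uniform(-1, 1) has variance 1/3, so each mean should have
# std dev sqrt((1/3) / 50) ~= 0.0816, per the CLT.
print(round(statistics.stdev(means), 3))
```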
Your other point ("in the same street") is that there is likely to be some correlation between the errors in each measurement. Thinking about this in terms of a coastal three-point fix, this is certainly possible if, for example, the compass is held off-horizontal by some amount for each measurement. On the other hand, I usually take some care to set myself again for each bearing—so I don't think the errors need be strongly correlated.
I understood the whole point of Bayesian inference to be that one can take a noisy observation and make profitable inferences from it; in the case of a single observation, the confidence in that observation might be quite low (i.e. the variance of the noise quite high)—but so long as one proceeds without violating the axioms of probability, one should hopefully end up with something fairly sensible.
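For the Gaussian case this single-observation update even has a closed form (the conjugate-prior result, which is also the scalar Kalman update). A sketch, with all the numbers invented for illustration: even a high-variance observation moves the estimate a little and shrinks the uncertainty.

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Posterior of a Gaussian prior after one Gaussian-noise observation."""
    k = prior_var / (prior_var + obs_var)      # gain: how much to trust the obs
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1.0 - k) * prior_var
    return post_mean, post_var

# Vague prior over, say, distance along track; one equally vague observation:
mean, var = gaussian_update(prior_mean=10.0, prior_var=25.0,
                            obs=16.0, obs_var=25.0)
print(mean, var)  # -> 13.0 12.5: estimate shifts halfway, variance halves
```

So long as the update obeys the probability axioms, a low-confidence observation simply gets a small gain—it never makes things worse in expectation.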
I'm in no position to lecture on this, by the way—it's an interesting debate, and I suspect you may well know more about it than I do, so please take above points in that spirit...
Regards
William