lustyd
Well-known member
You need to understand what’s happening here. You didn’t get an answer from an AI. This is RAG (retrieval-augmented generation): your question was interpreted by the language model to determine intent, the intent was searched in a standard search engine (probably Google or Bing), and the results were interpreted by the language model to explain to you what it found.

As a curiosity (my first time ever using AI tools) I asked both ChatGPT and Claude (Anthropic’s AI tool)
"how to cross the ITCZ on a sailboat?", two friends had access to the professional versions.
"Claude" derived a lot of its descriptive content from Meteo France "Météorologie tropicale" textbook, whereas Chatgpt answer was mainly based on NOAA, fair enough; but, among Chatgpt suggestions there was "You will need the most accurate weather forecast you can get, like NOAA and [hear hear] Predictwind". Should anyone take this seriously ?
The “AI” knows nothing and makes things up if allowed. It’s just a statistical language model, and what you got in the results was a précis of the internet results it found, nothing more.
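To make the RAG loop above concrete, here is a toy sketch of the interpret → search → summarise pipeline. This is purely illustrative: real systems use an LLM for both "interpret" steps and a live search index, whereas here both are stubbed with simple keyword logic, and the document names and contents are invented for the example.

```python
# Toy sketch of the RAG loop: interpret intent, search, summarise.
# The "LLM" and "search engine" are both stubbed with keyword logic.

def interpret_intent(question: str) -> str:
    """Stand-in for the LLM turning a question into a search query."""
    stopwords = {"how", "to", "on", "a", "the", "what", "is"}
    return " ".join(w for w in question.lower().split() if w not in stopwords)

# Stand-in corpus for the search engine's index (invented entries).
DOCUMENTS = {
    "noaa-itcz": "NOAA guidance on crossing the ITCZ under sail",
    "meteo-france": "Météo-France tropical meteorology notes",
    "recipes": "How to bake bread at home",
}

def search(query: str, k: int = 2) -> list[str]:
    """Stand-in for Google/Bing: rank documents by keyword overlap."""
    terms = set(query.split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda kv: -len(terms & set(kv[1].lower().split())),
    )
    return [doc_id for doc_id, _ in scored[:k]]

def answer(question: str) -> str:
    """Retrieve, then 'generate' a précis of what was found."""
    hits = search(interpret_intent(question))
    return "Based on: " + ", ".join(hits)

print(answer("how to cross the ITCZ on a sailboat"))
# → Based on: noaa-itcz, meteo-france
```

The point of the sketch is that the "answer" is only ever a digest of whatever the retrieval step surfaced; nothing in the loop checks whether the retrieved sources are any good.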
AI weather models will ultimately also be statistical in nature. Thousands of parameters will be considered to determine the statistically most likely weather coming up. While theoretically less accurate than physics models (although those are deeply flawed for other reasons), the results will likely be better because more parameters can be considered and far less compute is required.
As an example, the physics models don’t consider crop yield from the previous year, and traditionally minded weathermen will tell you that’s because it’s irrelevant. The same goes for many of the data sets that will be added to AI models, since we no longer need to know the why of it, just whether the variables statistically align.
I found out recently that some of the physics models use a square grid