Problems downloading ECMWF forecast

Going back to #36

everyone and their grandmother is running a WRF model these days....

Two that I have had success with are the HRRR from NOAA and AROME from Météo-France - I think they are both based on WRF.
You are correct in identifying “official” LAMs. These will have far better data analyses than “non-official” ones. The latter, as far as I know, start with analyses interpolated from far coarser grids, often the GFS 25 km output. LAMs run by NOAA will be based on the WRF (Weather Research and Forecasting) model. European countries do their own development, individually or jointly.

Further, there are no secrets in the “official” modelling community. Through WMO meetings and those organised by ECMWF and others, ideas and knowledge are exchanged freely. I once asked PW how many levels they used in their “proprietary” model. They would not say - a trade secret!

HRRR obviously in the USA - I have seen it be spectacularly good, but it's inconsistent.

Not surprising. I have not seen ensemble data from a LAM but would expect there to be a wide range of results, especially in convective situations. On the basis of the U.K. weather app, which only shows rain (strictly speaking, radar reflectivity) projections, some forecasts will be spectacularly good and some equally bad. Ask my wife, who asks me whether she can get three hours of dry weather to dry the washing.

It is the only model I know of that uses weather radar data for the initialization - their ultimate goal is to forecast individual convective cells.

That is one hell of an ask. It is impossible to predict where the next storm will form until it is already large enough to be seen on weather radar. Once it is that big, the lifetime of an individual cloud is a few hours, 3 or 4 maybe. Individual cells have shorter lifetimes. Just watch a weather radar sequence of actual storms or rain cells in a front. They are ever-changing.
What I find is that sometimes when I download HRRR, the first time step matches what I see on the water (wind speed and direction), and sometimes it doesn't. If it matches pretty well, I use it pretty much exclusively for as long as that's the case.
This is, effectively, nowcasting. Useful, no doubt, in a limited sense if the user can react. For most of us sailors, not relevant. On passage, how would I get the information and, more importantly, in my 5 knot yacht, what would I be able to do? The last time that we were badly caught out was about 3 years ago on passage Lezardrieux to St Peter Port. A forecast F5-6 became a F8-9 when west of Roches Douvres. Even with two or three hours' warning, we could have done little.
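The check described a few posts up - trusting a HRRR run only when its first time step matches what you observe on the water - can be sketched numerically. This is a hedged illustration with invented numbers: the vector-wind-error metric is a standard verification measure, but the 5-knot trust threshold is an arbitrary choice for the example, not anyone's official rule.

```python
import math

# Sketch: compare the first GRIB time step against the wind observed on
# the water, using the magnitude of the vector difference between the
# forecast and observed wind vectors.

def wind_to_uv(speed_kt, direction_deg):
    """Meteorological convention: direction is where the wind comes FROM."""
    rad = math.radians(direction_deg)
    return -speed_kt * math.sin(rad), -speed_kt * math.cos(rad)

def vector_error(fc_speed, fc_dir, ob_speed, ob_dir):
    fu, fv = wind_to_uv(fc_speed, fc_dir)
    ou, ov = wind_to_uv(ob_speed, ob_dir)
    return math.hypot(fu - ou, fv - ov)

# Invented example: forecast 12 kt from 270°, observed 10 kt from 250°
err = vector_error(12.0, 270.0, 10.0, 250.0)
print(f"vector wind error: {err:.1f} kt")
print("trust this run" if err < 5.0 else "treat this run with caution")
```

A small direction error and a small speed error can still add up to a sizeable vector error, which is why the vector form is harsher (and more honest) than comparing speeds alone.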
 
You are correct in identifying “official” LAMs. These will have far better data analyses than “non-official” ones.

I'm actually not convinced by this! Hear me out...
As much of a model's quality is down to its data assimilation, it's worth noting that there are many private companies now with a huge quantity of additional observation data (which is not shared globally with NMHSs). Spire is a private company with a network of satellite observations, as is Weathernews in Japan. Moji collects crowd-sourced surface pressure data from millions of mobile phones, for example. They can either learn from this data via machine learning/AI, or apply bias corrections to smaller LAMs. They often also have access to supercomputers now. The advent of cloud computing is one of the reasons that Laser_310 is correct in saying everyone and their grandmother is running WRF. WRF even has a 'DA' (data assimilation) module, so an advanced user (rather than the grandmother ;-) ) may wish to continuously assimilate additional observations.
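The simplest form of "applying bias corrections to a LAM" from extra observations can be sketched as below. Everything here is synthetic and hypothetical - a real system would use quality-controlled observations and proper assimilation (e.g. the WRFDA module mentioned above) rather than a single mean offset - but it shows why even crude crowd-sourced surface pressure data has value.

```python
import numpy as np

# Sketch: estimate a systematic model bias from (noisy) crowd-sourced
# surface-pressure observations and subtract it. All values are synthetic.

rng = np.random.default_rng(42)

# "True" surface pressure at 5 observation sites (hPa)
truth = rng.uniform(1000, 1020, size=5)
model = truth + 2.0 + rng.normal(0, 0.3, size=5)   # forecast with a +2 hPa bias
obs = truth + rng.normal(0, 0.5, size=5)           # noisy phone-derived obs

bias = np.mean(model - obs)        # estimate the systematic offset
corrected = model - bias           # remove it everywhere

print(f"estimated bias: {bias:.2f} hPa")
print(f"RMSE before: {np.sqrt(np.mean((model - truth)**2)):.2f} hPa")
print(f"RMSE after:  {np.sqrt(np.mean((corrected - truth)**2)):.2f} hPa")
```

Even with observation noise larger than the model's random error, averaging over sites recovers the bias well, which is the statistical leverage those millions of phone barometers provide.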

Many years ago, Panasonic published an article claiming they were better than ECMWF in some scenarios - and it's not untrue... in those scenarios. In this case, they had access to a wealth of additional aircraft observations (which we didn't have) that significantly improved their forecast (which was still initialised with ECMWF analysis data). These observations gave them a statistically significant advantage.

Even more so now, users are taking ECMWF historical data and applying machine learning (ML) - in this scenario, ML can predict the conditions with limited need to understand the actual physics! Especially now that we have given historical data an open licence (although downloading it requires a fee).
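The "limited need to understand the physics" point can be made with a toy example. The sketch below is deliberately minimal and entirely synthetic: a one-coefficient statistical fit on an autoregressive time series standing in for historical model output. Real ML emulators trained on archives such as ERA5 use deep networks, but the principle is the same - the predictor is learned from data, with no physical equations anywhere.

```python
import numpy as np

# Sketch: learn a one-step "forecast" purely from historical data.
# The data are a synthetic AR(1) series with persistence 0.8, standing
# in for a reanalysis time series of, say, temperature anomalies.

rng = np.random.default_rng(0)

n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(0, 1)

# Fit tomorrow-from-today by least squares (no intercept), no physics used
X, y = x[:-1], x[1:]
coef = np.dot(X, y) / np.dot(X, X)

print(f"learned persistence: {coef:.2f}")  # close to the true 0.8
```

The fit recovers the dynamics that generated the data without ever being told what they were - which is exactly the bet the ML emulators are making at vastly larger scale.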

Then another thing to consider is forecast blending - using ensemble data from many, many different weather models gives a reasonable forecast, as it covers more possibilities. In fact, NOAA studies this as well: they evaluate every ensemble dataset and which one deserved more weighting on any given day.
Ensemble forecasting is a very useful way to look at the forecast; more so than a single deterministic run (in my opinion). Our current ENS model is only 18 km, but as of next year, it will be the same resolution as our HRES model (9 km).
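One common form of the blending-with-weighting idea described above can be sketched as follows. This is a hedged illustration with invented error statistics: inverse-MSE weighting is a standard simple scheme, but the numbers, and the idea that these three models exist side by side, are just for the example.

```python
import numpy as np

# Sketch: blend deterministic forecasts from several models, weighting
# each by its recent skill (inverse mean-squared error). All numbers
# are invented for illustration.

# Recent MSE of three hypothetical models against verifying observations
mse = np.array([1.0, 4.0, 2.0])            # model A has been best lately

weights = (1.0 / mse) / np.sum(1.0 / mse)  # inverse-MSE weights, sum to 1

# Today's forecasts from each model (e.g. wind speed in knots)
forecasts = np.array([15.0, 18.0, 16.0])
blend = np.dot(weights, forecasts)

print(f"weights: {np.round(weights, 2)}")
print(f"blended forecast: {blend:.1f} knots")
```

The blend leans towards the recently skilful model without discarding the others - which is roughly what the day-by-day re-weighting evaluation described above amounts to.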
 
I'm actually not convinced by this! Hear me out...
Thank you, I will certainly hear you out as someone nearer the action and meteorologically better informed than I am. My statement about better analyses by NWSs was based on their ability to use radar and satellite imagery and 4-D assimilation of the mass of satellite data in their LAMs. Weighting of the different data and their use is, I believe, a major problem that has exercised some exceedingly clever people in the past. I still find it difficult to believe that, starting with the GFS 25 km data, use of one or more of these data sources can really produce better analyses.

Of course, the ability to predict weather is highly (totally) dependent on the major national and international players. What I do find disturbing is that Spire, for example, is happy to take models and data provided by the “official” meteorological community and then add data that they do not share. A little while ago, I saw that Spire was providing GPSRO data for use by ECMWF and others. That has now ceased. Meteorology has always been an open science in its development and operation. Use of private data while depending on data provided freely, internationally, is anathema to me.


But, going back to your most informative post, back to the drawing board with a big block of ice.
 
I'm actually not convinced by this! Hear me out...
That is all fascinating and it is difficult to get one’s mind around the possibilities you describe. For many purposes, it all sounds well and good. I could see commercial concerns, stock market traders, the military, RTW sailors etc finding value in such services.

As a simple sailor, I know how wind and weather can vary in space and time. I know about the inherent uncertainty built into the weather. Even if such a system could be harnessed to give me a forecast from Dartmouth to Roscoff or, even, just to Salcombe, how much better off would I be than studying GFS, ICON, ICON-EU or any other GRIB data plus, of course, GMDSS forecasts? Would I be able to detect an improvement, let alone benefit? Predictability places a limit on what can be achieved for specific purposes.

I am far from convinced that computer models will ever match the atmosphere in complexity. OK, maybe I was too sweeping in saying that “non-official” detailed data analyses would not be as good as “official” ones, but I have usually also said that, after about three or so hours from data time, small differences in data analysis will not greatly matter. Small weather details have short lifetimes. A large thunderstorm cloud has a total lifetime of less than 6 hours. A small shower cloud could throw a spanner in the works for the kind of detail that you are describing. I do not subscribe to butterfly-flapping-wings effects.
 