Overview of GLERL's AHPS for Meteorology Data Analysis
GLERL's AHPS provides a comprehensive analysis of meteorological data for 1948–2014, including both locked historical data and provisional updates. The process involves running the models from stored initial conditions, updating the provisional meteorology with station observations before a forecast, and running forecast scenarios. This supports accurate analysis and forecasting for informed decision-making.
Presentation Transcript
GLERL's AHPS: Brief Overview (June 28, 2016)
Some of the data is considered locked down or historical. Some is considered provisional and subject to update. The line of demarcation is called the base date (1/1/2013 in this case).
The models are run for the entire historical period, starting from arbitrary but reasonable initial conditions. At the end of these runs we have the state of the system on the base date, i.e. initial conditions for any subsequent runs. We also have model output for that period, to be used as appropriate; however, the first two years of output are considered spin-up and are not usable.
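As a minimal illustration of how the spin-up exclusion might be applied, the sketch below trims the first two years of a historical run before the output is used. It assumes daily output in a pandas DataFrame indexed by date; the constants and the function name are hypothetical, not part of AHPS.

    import pandas as pd

    HISTORICAL_START = pd.Timestamp("1948-01-01")   # start of the historical run
    SPINUP_YEARS = 2                                # first ~2 years are spin-up
    BASE_DATE = pd.Timestamp("2013-01-01")          # locked/provisional demarcation

    def usable_historical_output(model_output: pd.DataFrame) -> pd.DataFrame:
        """Drop spin-up output; keep everything from the first usable date to the base date."""
        first_usable = HISTORICAL_START + pd.DateOffset(years=SPINUP_YEARS)
        return model_output.loc[first_usable:BASE_DATE]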
When we wish to run a forecast, we first need to update our provisional subbasin meteorology through at least the day before the forecast start date. This is typically done by inputting station observations.
The models are then run for the provisional period, up to the day before the forecast start date, using those stored base-date initial conditions. At the end of these runs we have the state of the system on the day before the forecast start date, i.e. initial conditions for the forecast. We also have model output for the provisional period, to be used as appropriate.
Now we can run forecast scenarios using daily meteorology for those seven necessary input variables. The source of that input meteorology could be ANYTHING. Standard AHPS procedure has been to use sequences from the past, using the same time period for each lake basin.
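To make the scenario loop concrete, here is a hypothetical Python sketch of one forecast run per historical met sequence. Nothing in it is actual AHPS code; run_models below is just a placeholder for the LBRM/LLTM runs so the loop structure is runnable on its own.

    from datetime import date

    def run_models(initial_state, start: date, meteorology):
        """Placeholder for the real model runs (LBRM, LLTM); returns a dummy summary."""
        return {"start": start, "n_days": len(meteorology)}

    def run_forecast_ensemble(base_state, forecast_start: date, met_sequences: dict):
        """One forecast run per historical met sequence, keyed by source year."""
        return {year: run_models(base_state, forecast_start, met)
                for year, met in met_sequences.items()}

    # Example with two fake one-week sequences of daily met records
    fake_met = {1948: [{}] * 7, 1949: [{}] * 7}
    ensemble = run_forecast_ensemble(base_state=None,
                                     forecast_start=date(2013, 1, 2),
                                     met_sequences=fake_met)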
The result is an ensemble of outputs, each corresponding to one of the input sequences. These can then be used, along with some sort of weighting, to produce a probabilistic outlook. Note that when running the models, each ensemble member depends on a correlated set of met inputs (e.g. all from 1948 or all from 1949), but when weighting, the ensemble members are treated as independent.
What GLERL is producing:
1) NBS ensembles from AHPS using historical met sequences from the past as future meteorology (exactly as described), using the old LBRM, which has not been re-calibrated for a long time.
2) NBS ensembles from AHPS using historical met sequences from the past as future meteorology (exactly as described), but using a slightly reformulated LBRM that will be newly calibrated.
3) NBS ensembles from AHPS using 19 meteorological sequences extracted from CMIP5 model output as the future meteorology. These will also be run using the new LBRM.
To Weight or Not To Weight
Model spin-up for both LBRM and LLTM is ~2 years.
What IS Weighting? Assume 5 ensemble members with the following values and weights. Note that the sum of the weights equals the member count:

Mem #   Value   Weight
  1       47     0.2
  2       24     1.4
  3       82     0.2
  4       76     2.8
  5       31     0.4
What IS Weighting? We construct an unordered sample of N members, where each member is duplicated an appropriate number of times. N = 100, so the duplication factor is 20:

Mem #   Value   Weight   Count
  1       47     0.2       4
  2       24     1.4      28
  3       82     0.2       4
  4       76     2.8      56
  5       31     0.4       8
What IS Weighting? The new sample is reordered based on the value:

Unordered                     Reordered by value
Mem #   Value   Count         Mem #   Value   Count
  1       47      4             3       82      4
  2       24     28             4       76     56
  3       82      4             1       47      4
  4       76     56             5       31      8
  5       31      8             2       24     28
What IS Weighting? Finally, the cumulative distribution function of the reordered sample is used to set the probabilistic values:

Mem #   Value   Count         Quantile   Value
  3       82      4             99%        82
  4       76     56             95%        76
  1       47      4             90%        76
  5       31      8             70%        76
  2       24     28             50%        76
                                40%        47
                                30%        31
                                20%        24
                                10%        24
                                 5%        24
                                 1%        24
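As a concrete check on the worked example, the Python sketch below (not GLERL code) expands the five members according to their weights, sorts the expanded sample, and reads quantiles off it. The values, weights, N = 100, and quantile levels come from the slides above; the variable names and indexing convention are my assumptions.

    import numpy as np

    values  = np.array([47, 24, 82, 76, 31])
    weights = np.array([0.2, 1.4, 0.2, 2.8, 0.4])   # weights sum to the member count (5)

    N = 100                                          # size of the expanded sample
    counts = (weights * N / weights.sum()).round().astype(int)   # duplication factor = 20
    sample = np.sort(np.repeat(values, counts))      # expanded, ordered sample of N values

    for q in (99, 95, 90, 70, 50, 40, 30, 20, 10, 5, 1):
        # q-th value of the ascending-ordered sample ~ q% quantile
        print(f"{q:>3}%  {sample[int(np.ceil(q / 100 * N)) - 1]}")

Running the sketch reproduces the quantile/value table: 99% gives 82, 50% gives 76, 40% gives 47, 30% gives 31, and 1% gives 24.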