Approaches to Isolate Forced Changes


In both models and observations, forced changes are isolated using approaches such as linear trends, regression on forcing-related time series, and more advanced statistical methods. Structural uncertainty, forcing uncertainty, and time-limited observations pose challenges for model evaluation. Nonetheless, there are silver linings and clear avenues for improvement in attributing extreme events and evaluating models.


Uploaded on Feb 24, 2025



Presentation Transcript


  1. Synthesis of group discussions. Questions: (1) Approaches to isolate forced changes? (2) How are models evaluated? (3) Sources of uncertainty in projections?

  2. 1. Approaches to isolate forced changes. In models and observations: linear trend (not always justified); regression on a time series more directly related to the forcing (assumes other forcings do not matter); advanced statistical methods such as signal-to-noise-maximizing EOFs, dynamical adjustment, and low-frequency component analysis. In models only: the ensemble mean.
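The two simplest approaches above, the ensemble mean (models only) and a linear trend fit, can be sketched on synthetic data; the warming rate, noise level, and ensemble size here are illustrative assumptions, not values from the slides.

```python
import numpy as np

# Hypothetical large ensemble: 30 members x 100 years of a temperature
# anomaly with an imposed linear forced response (all values synthetic).
rng = np.random.default_rng(0)
years = np.arange(1950, 2050)
forced = 0.02 * (years - years[0])                  # imposed forced warming
ensemble = forced + rng.normal(0.0, 0.3, (30, years.size))

# Ensemble mean: averaging members cancels internal variability,
# leaving an estimate of the forced response (possible in models only).
forced_estimate = ensemble.mean(axis=0)

# Linear trend: a least-squares fit to a single realization; only
# justified when the forced response is approximately linear in time.
slope, intercept = np.polyfit(years, ensemble[0], 1)
```

With 30 members the ensemble mean suppresses internal noise by roughly a factor of sqrt(30), while the single-member trend estimate retains sampling uncertainty from internal variability.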

  3. Hurdles (1). Time-limited and sparse observations. Structural uncertainty: do models reproduce forced responses and variability correctly? Forcing uncertainty is a big challenge that we sometimes gloss over in large ensembles. Some forcings are missing, e.g. land use at the regional scale.

  4. Hurdles (2). Time- and spatially evolving responses (e.g. to anthropogenic aerosols) are difficult to detect. Is observed multi-decadal variability forced or unforced? Some observational records are short, e.g. troposphere, deep ocean, and BGC. Consider sampling errors in some observations, e.g. BGC.

  5. Silver lining. It is well established that some fields have higher confidence for detection/emergence (temperature and other thermodynamic variables, CO2 invasion into the ocean, and perhaps Hadley Cell width and monsoons), while other fields have lower confidence (precipitation, forced changes in modes of variability).
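The notion of detection/emergence can be made concrete with a toy time-of-emergence calculation: a forced signal is said to emerge once it exceeds the internal-variability noise by a chosen factor (2 sigma below). The trend and noise level are assumed placeholders.

```python
import numpy as np

# Time of emergence: first year the forced signal clears 2x the noise.
# Trend and noise magnitudes are synthetic, for illustration only.
years = np.arange(2000, 2100)
signal = 0.03 * (years - years[0])   # assumed linear forced warming
noise_sigma = 0.4                    # interannual std, e.g. from a control run

snr = signal / noise_sigma
emerged = years[snr > 2.0]
toe = int(emerged[0]) if emerged.size else None   # first year with S/N > 2
```

High-confidence fields like temperature correspond to large signal-to-noise ratios (early emergence); low-confidence fields like precipitation have small ratios, so emergence is late or absent over the analysis period.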

  6. Attribution of extreme events. Multiple methodologies exist, with a lack of agreement on best practices. It is unclear whether large ensembles are widely used. What is the role of large ensembles?
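One of the methodologies in question, the probability-ratio (risk-based) comparison between a "factual" forced ensemble and a "counterfactual" ensemble without anthropogenic forcing, can be sketched as follows; the distributions and event threshold are synthetic placeholders.

```python
import numpy as np

# Risk-based event attribution with two hypothetical large ensembles.
rng = np.random.default_rng(2)
factual = rng.normal(1.0, 1.0, 10_000)         # e.g. shifted Tmax anomalies
counterfactual = rng.normal(0.0, 1.0, 10_000)  # preindustrial-like world

threshold = 2.0                                 # the observed extreme event
p1 = (factual > threshold).mean()               # exceedance prob. with forcing
p0 = (counterfactual > threshold).mean()        # exceedance prob. without

pr = p1 / p0                                    # probability ratio
far = 1.0 - p0 / p1                             # fraction of attributable risk
```

Large ensembles matter here because both exceedance probabilities are small and their sampling uncertainty dominates the attribution statement when only a few realizations are available.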

  7. Hurdles Some extremes are not resolved, e.g. hurricanes, Structural uncertainty, Shortness of observational record.

  8. 2) How are models evaluated? Structural uncertainty should be the main concern going forward. Can we do anything different from CMIP? Most studies assume that the variability is realistic; this is valid for atmospheric variability, but less so for ENSO or decadal modes. ENSO evaluation needs to be more comprehensive. In some fields, e.g. BGC, there are often very large biases, so users are satisfied with capturing basic features (e.g. gradients) correctly. More of a qualitative validation.

  9. Can we improve on this? Pay more attention to the realism of the mean state and variability during model development. Provide model developers with metrics to assess variability and its significance; this is already done for some things, e.g. the CVDP.

  10. Sources of projection uncertainty. Consider all sources of uncertainty in large ensembles. Global: macro and micro initialization, perturbed physics/parameters, eddy-resolving/permitting ocean models, emission scenarios including full carbon-cycle feedbacks.

  11. Sources of projection uncertainty. Consider all sources of uncertainty in large ensembles. Regional: higher resolution, grid refinement, and regional models, important for extremes; short (20-40 year) LEs with more complex process representation and/or higher resolution; external forcings, e.g. land-use changes on smaller regional scales.

  12. Open research questions: Develop metrics that could be used by model developers in model improvement; the CVDP is a good step forward. Is observed multi-decadal variability forced or unforced? Involve the paleoclimate community, also for evaluating models in terms of decadal and longer variability? Use the new MMLE to evaluate the advanced statistical methods used to extract forced changes.

  13. Open research questions: What is the importance of large ensembles for the attribution of extreme events? Use the new MMLE to assess return times of extremes. Detection of forced ENSO changes. Hawkins-and-Sutton-type analyses for multi-variate applications, e.g. agriculture or ocean ecosystem stressors (scenario uncertainty is dominant for acidification, while model and internal variability are dominant for de-oxygenation).
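A Hawkins-and-Sutton-type partitioning splits total projection uncertainty into scenario, model (structural), and internal-variability components. A minimal sketch on synthetic data follows; the ensemble dimensions and spreads are assumptions for illustration.

```python
import numpy as np

# Synthetic "MMLE": 3 scenarios x 5 models x 10 members of a projected change.
rng = np.random.default_rng(3)
n_scen, n_model, n_member = 3, 5, 10
scen_offsets = np.array([0.5, 1.5, 3.0])          # scenario spread
model_offsets = rng.normal(0.0, 0.5, n_model)     # structural spread
data = (scen_offsets[:, None, None]
        + model_offsets[None, :, None]
        + rng.normal(0.0, 0.3, (n_scen, n_model, n_member)))

internal = data.var(axis=2).mean()                # spread across members
model_u = data.mean(axis=2).var(axis=1).mean()    # spread across models
scenario_u = data.mean(axis=(1, 2)).var()         # spread across scenarios
total = internal + model_u + scenario_u
fractions = {k: v / total for k, v in
             [("scenario", scenario_u), ("model", model_u),
              ("internal", internal)]}
```

The fractional contributions sum to one by construction; in a real application they would be computed per lead time and region, which is where large ensembles help separate internal variability from the other terms.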

  14. Food for thought. Downscaling: running regional models continues to be challenging; make sure to continue outputting the variables needed to drive regional models. CORDEX? Members with higher or variable resolution? Long high-emission runs to understand how internal variability could change: a MIP?

  15. Food for thought. LEs are important tools for shedding light on the drivers of short-term and regional changes. How do we communicate this to the public?
