
Mitigation Methodology for Ground Clutter in GPM Combined Algorithm
Ground clutter mitigation methodology for the GPM combined algorithm
Mircea Grecu (1,2) and David T. Bolvin (2,3)
1) Morgan State University, 2) NASA GSFC, 3) SSAI
Approach
- Use near-nadir DPR reflectivity profiles to construct a database of:
  - X: features that are available at large incidence angles
  - Y: features that need to be predicted operationally
- Derive an empirical Y = f(X) prediction procedure using off-the-shelf libraries (e.g., scikit-learn, TensorFlow).
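The database-construction step above can be sketched as follows. This is a minimal illustration only: the synthetic profiles, the number of range bins, and the split between clutter-free and clutter-affected bins are assumptions for demonstration, not the actual DPR data layout.

```python
import numpy as np

# Stand-in for near-nadir DPR reflectivity profiles (dBZ); in practice these
# would be read from DPR granules. Shapes and values are illustrative.
rng = np.random.default_rng(0)
profiles = rng.normal(30.0, 5.0, size=(1000, 20))  # 1000 profiles x 20 bins

# X: upper bins that would remain clutter-free at large incidence angles
X = profiles[:, :15]
# Y: near-surface bins that ground clutter would obscure off-nadir
Y = profiles[:, 15:]

print(X.shape, Y.shape)  # (1000, 15) (1000, 5)
```

An empirical Y = f(X) predictor is then fit on this (X, Y) database, as shown on the next slide.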
Operational considerations

    from sklearn.ensemble import RandomForestRegressor

    # Train a random forest to predict the clutter-affected features (Y)
    # from the clutter-free features (X)
    rfr = RandomForestRegressor(n_estimators=15, random_state=1)
    rfr.fit(X_train, y_train)
    y_ = rfr.predict(X_valid)

    # Translate a fitted decision tree into C-like source code so the model
    # can be embedded in the operational combined algorithm. The arrays
    # (left, right, threshold, features) come from the fitted tree's tree_
    # attribute; `value` and `tree` are likewise taken from the fitted tree.
    def recurse(left, right, threshold, features, node, s1, scode):
        if threshold[node] != -2:  # internal node
            scode += "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {"
            if left[node] != -1:
                s1, scode = recurse(left, right, threshold, features, left[node], s1, scode)
            scode += "} else {\n"
            if right[node] != -1:
                s1, scode = recurse(left, right, threshold, features, right[node], s1, scode)
            scode += "}\n"
        else:  # leaf node: emit the predicted value
            scode += "*value=" + str(value[node][0][0]) + ";\n"
            s1 += tree.tree_.n_node_samples[node] / (tree.tree_.n_node_samples[0] + 0.)
        return s1, scode
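The tree-to-code translation relies on the internal arrays of a fitted scikit-learn tree. A self-contained sketch of extracting those arrays and emitting C-like source is given below; the tiny synthetic dataset, the feature names, and the simplified `to_c` helper are assumptions for illustration, not the operational code.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Fit a tiny tree on synthetic data just to have something to translate.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 3))
y_train = X_train[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)

tree = DecisionTreeRegressor(max_depth=2, random_state=1).fit(X_train, y_train)

t = tree.tree_
left, right = t.children_left, t.children_right
threshold, value = t.threshold, t.value
features = ["x%d" % i for i in t.feature]  # hypothetical variable names

def to_c(node, scode=""):
    """Recursively translate one tree node into C-like source."""
    if left[node] != -1:  # internal node (children_left is -1 only at leaves)
        scode += "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {\n"
        scode = to_c(left[node], scode)
        scode += "} else {\n"
        scode = to_c(right[node], scode)
        scode += "}\n"
    else:  # leaf: emit the prediction
        scode += "*value=" + str(value[node][0][0]) + ";\n"
    return scode

print(to_c(0))
```

The generated if/else source can then be compiled into the operational (non-Python) combined algorithm, avoiding a runtime dependency on scikit-learn.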
What's next?
- Reflectivity vs. PSDs (PWC, Dm, Nw): is it better to predict Z and then apply the retrieval algorithms, or to directly predict the variables of interest?
- Do more advanced methods have substantial benefits? In theory they are more capable, but they are more difficult to embed operationally.
- We still need to figure out the big picture.