SAMPLE OPTIMIZATION OF ENSEMBLE FORECAST TO SIMULATE TROPICAL STORMS (MERBOK, MAWAR, AND GUCHOL) USING THE OBSERVED TRACK

Funding:

Science and Technology Planning Project of Guangdong Province 2017B020244002

Science and Technology Planning Project of Guangdong Province 2018B020208004

Science and Technology Planning Project of Guangdong Province 2017B030314140

Natural Science Foundation of Guangdong Province 2019A1515011118

National Natural Science Fund 41705089

Science and Technology Project of Guangdong Meteorological Service GRMC2017Q01


doi: 10.16555/j.1006-8775.2020.002


Figure 6.  The change in temperature anomaly (℃) with time and height (m). (a) (SOno) and (b) (SO20 (track)) represent Merbok (1800 UTC on June 10, 2017 to 0000 UTC on June 11, 2017). (c) (SOno) and (d) (SO20 (track)) represent Mawar (0600 UTC to 1200 UTC on September 1, 2017). (e) (SOno) and (f) (SO20 (track)) represent Guchol (1800 UTC on September 4, 2017 to 0000 UTC on September 5, 2017). The x-axis is time (hour), and the y-axis is height (m).

    Figure 7.  The change in temperature anomaly (℃) with height (hPa) determined by analysis. (a) Merbok (0000 UTC June 11, 2017). (b) Mawar (1200 UTC September 1, 2017). (c) Guchol (0000 UTC September 5, 2017).

    Figure 8.  Tracks of (a) Merbok (0000 UTC Jun 11, 2017 to 0000 UTC Jun 13, 2017), (b) Mawar (1200 UTC Sep 1, 2017 to 0000 UTC Sep 4, 2017), and (c) Guchol (0000 UTC Sep 5, 2017 to 0900 UTC Sep 7, 2017) as per WRF data.

    Figure 9.  The minimum SLP (hPa) errors for (a) Merbok (0000 UTC Jun 11, 2017 to 0000 UTC Jun 13, 2017), (b) Mawar (1200 UTC September 1, 2017 to 0000 UTC September 4, 2017), and (c) Guchol (0000 UTC September 5, 2017 to 0900 UTC September 7, 2017) as per WRF data.

    Figure 10.  The maximum wind speed (m s-1) errors for (a) Merbok (0000 UTC Jun 11, 2017 to 0000 UTC Jun 13, 2017), (b) Mawar (1200 UTC September 1, 2017 to 0000 UTC September 4, 2017), and (c) Guchol (0000 UTC September 5, 2017 to 0900 UTC September 7, 2017) according to WRF data.



Table 3.  Track errors and mean absolute errors of minimum sea level pressure (SLP) and maximum wind velocity in the observations, SOno, and SO20 (track) for Merbok (0000 UTC Jun 11, 2017 to 0000 UTC Jun 13, 2017), Mawar (1200 UTC September 1, 2017 to 0000 UTC September 4, 2017), and Guchol (0000 UTC September 5, 2017 to 0900 UTC September 7, 2017). The numbers in bold indicate the errors in SO20 (track) that are lower than those in SOno.

    Track (km) Minimum SLP (hPa) Maximum wind velocity (m s-1)
    Merbok Mawar Guchol Merbok Mawar Guchol Merbok Mawar Guchol
    SOno 77.79 88.21 159.80 6.07 3.86 3.77 10.83 7.10 4.13
    SO20(track) 55.10 81.91 92.50 5.63 5.16 3.00 4.40 2.83 7.51
  • [1] SAETRA Ø, HERSBACH H, BIDLOT J R, et al. Effects of observation errors on the statistics for ensemble spread and reliability[J]. Mon Wea Rev, 2004, 132(6): 1487-1501, https://doi.org/10.1175/1520-0493(2004)132<1487:EOOEOT>2.0.CO;2.
    [2] TAILLARDAT M, MESTRE O, ZAMO M, et al. Calibrated ensemble forecasts using quantile regression forests and ensemble model output statistics[J]. Mon Wea Rev, 2016, 144(6): 2375-2393, https://doi.org/10.1175/MWR-D-15-0260.1.
    [3] GUO Rong, QI Liang-bo, GE Qian-qian, et al. A study on the ensemble forecast real-time correction method[J]. J Trop Meteor, 2018, 24(1): 42-48, https://doi.org/10.16555/j.1006-8775.2018.01.004.
    [4] WANG Chen-xi. Ensemble prediction experiments of typhoon track based on the stochastic total tendency perturbation[J]. J Trop Meteor, 2016, 22(3): 305-317, https://doi.org/10.16555/j.1006-8775.2016.03.005.
    [5] RICHARDSON D S. Skill and relative economic value of the ECMWF Ensemble Prediction System[J]. Q J Roy Meteor Soc, 2000, 126(563): 649-667, https://doi.org/10.1002/qj.49712656313.
    [6] PALMER T N. The economic value of ensemble forecasts as a tool for risk assessment: From days to decades[J]. Q J Roy Meteor Soc, 2002, 128(581): 747-774, https://doi.org/10.1256/0035900021643593.
    [7] HAMILL T M, COLUCCI S J. Evaluation of Eta-RSM ensemble probabilistic precipitation forecasts[J]. Mon Wea Rev, 1998, 126(3): 711-724, https://doi.org/10.1175/1520-0493(1998)126<0711:EOEREP>2.0.CO;2.
    [8] STENSRUD D J, BAO J W, WARNER T T. Using initial condition and model physics perturbations in short-range ensemble simulations of mesoscale convective systems[J]. Mon Wea Rev, 2000, 128(7): 2077-2107, https://doi.org/10.1175/1520-0493(2000)128<2077:UICAMP>2.0.CO;2.
    [9] VUKICEVIC T, JANKOV I, MCGINLEY J. Diagnosis and optimization of ensemble forecasts[J]. Mon Wea Rev, 2008, 136(3): 1054-1074, https://doi.org/10.1175/2007MWR2153.1.
    [10] JANKOV I, SCHULTZ P J, ANDERSON C J, et al. The impact of different physical parameterizations and their interactions on cold season QPF in the American River basin[J]. J Hydrometeorol, 2007, 8(5): 1141-1151, https://doi.org/10.1175/JHM630.1.
    [11] JANKOV I, GALLUS Jr W A, SEGAL M, et al. Influence of initial conditions on the WRF-ARW model QPF response to physical parameterization changes[J]. Wea Forecasting, 2007, 22(3): 501-519, https://doi.org/10.1175/WAF998.1.
    [12] HAMILL T M, WHITAKER J S. Global ensemble predictions of 2009's tropical storms initialized with an ensemble Kalman filter[J]. Mon Wea Rev, 2011, 139(2): 668-688, https://doi.org/10.1175/2010MWR3456.1.
    [13] YAMAGUCHI M, SAKAI R, KYODA M, et al. Typhoon ensemble prediction system developed at the Japan Meteorological Agency[J]. Mon Wea Rev, 2009, 137(8): 2592-2604, https://doi.org/10.1175/2009MWR2697.1.
    [14] QI L, YU H, CHEN P. Selective ensemble-mean technique for tropical storm track forecast by using ensemble prediction systems[J]. Q J Roy Meteor Soc, 2014, 140(680): 805-813, https://doi.org/10.1002/qj.2196.
    [15] DONG L, ZHANG F. OBEST: An observation-based ensemble subsetting technique for tropical storm track prediction[J]. Wea Forecasting, 2016, 31(1): 57-70, https://doi.org/10.1175/WAF-D-15-0056.1.
    [16] LI J H, WAN Q L, GAO Y D. The effect of sample optimization on the ensemble Kalman filter in forecasting Typhoon Rammasun[J]. J Trop Meteor, 2018, 24(4): 433-447, https://doi.org/10.16555/j.1006-8775.2018.04.003.
    [17] HOUTEKAMER P L. Global and local skill forecasts[J]. Mon Wea Rev, 1993, 121(6): 1834-1846, https://doi.org/10.1175/1520-0493(1993)121<1834:GALSF>2.0.CO;2.
    [18] WHITAKER J S, LOUGHE A F. The relationship between ensemble spread and ensemble mean skill[J]. Mon Wea Rev, 1998, 126(12): 3292-3302, https://doi.org/10.1175/1520-0493(1998)126<3292:TRBESA>2.0.CO;2.
    [19] GRIMIT E P, MASS C F. Measuring the ensemble spread-error relationship with a probabilistic approach: Stochastic ensemble results[J]. Mon Wea Rev, 2007, 135(1): 203-221, https://doi.org/10.1175/MWR3262.1.
    [20] HOPSON T M. Assessing the ensemble spread-error relationship[J]. Mon Wea Rev, 2014, 142(3): 1125-1142, https://doi.org/10.1175/MWR-D-12-00111.1.
    [21] LI J, GAO Y, WAN Q. Sample optimization of ensemble forecast to simulate tropical cyclone using the observed track[J]. Atmos-Ocean, 2018, 56(3): 162-177, https://doi.org/10.1080/07055900.2018.1500881.
    [22] GRELL G A, DEVENYI D. A generalized approach to parameterizing convection combining ensemble and data assimilation techniques[J]. Geophys Res Lett, 2002, 29(14): 381-384, https://doi.org/10.1029/2002GL015311.
    [23] HONG S, DUDHIA J, CHEN S. A revised approach to ice microphysical processes for the bulk parameterization of clouds and precipitation[J]. Mon Wea Rev, 2004, 132(1): 103-120, https://doi.org/10.1175/1520-0493(2004)132<0103:ARATIM>2.0.CO;2.
    [24] NOH Y, CHEON W G, HONG S Y, et al. Improvement of the K-profile model for the planetary boundary layer based on large eddy simulation data[J]. Bound-Layer Meteor, 2003, 107(2): 401-427, https://doi.org/10.1023/A:1022146015946.
    [25] ZHU L, WAN Q, SHEN X, et al. Prediction and predictability of high-impact western Pacific landfalling Tropical Storm Vicente (2012) through convection-permitting ensemble assimilation of Doppler radar velocity[J]. Mon Wea Rev, 2016, 144(1): 21-43, https://doi.org/10.1175/MWR-D-14-00403.1.
    [26] MENG Z, ZHANG F. Tests of an ensemble Kalman filter for mesoscale and regional-scale data assimilation, Part III: Comparison with 3DVAR in a real-data case study[J]. Mon Wea Rev, 2008, 136(2): 522-540, https://doi.org/10.1175/2007MWR2106.1.
    [27] MENG Z, ZHANG F. Tests of an ensemble Kalman filter for mesoscale and regional-scale data assimilation, Part IV: Performance over a warm-season month of June 2003[J]. Mon Wea Rev, 2008, 136(10): 3671-3682, https://doi.org/10.1175/MWR3352.1.
    [28] BARKER D M, HUANG W, GUO Y R, et al. A three-dimensional variational data assimilation system for MM5: Implementation and initial results[J]. Mon Wea Rev, 2004, 132(4): 897-914, https://doi.org/10.1175/1520-0493(2004)132<0897:ATVDAS>2.0.CO;2.
    [29] ZHANG F, SNYDER C, SUN J. Tests of an ensemble Kalman filter for convective-scale data assimilation: Impact of initial estimate and observations[J]. Mon Wea Rev, 2004, 132(5): 1238-1253, https://doi.org/10.1175/1520-0493(2004)132<1238:IOIEAO>2.0.CO;2.
    [30] WHITAKER J S, HAMILL T M. Ensemble data assimilation without perturbed observations[J]. Mon Wea Rev, 2002, 130(7): 1913-1924, https://doi.org/10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.

LI Ji-hang, GAO Yu-dong, WAN Qi-lin, et al. SAMPLE OPTIMIZATION OF ENSEMBLE FORECAST TO SIMULATE TROPICAL STORMS (MERBOK, MAWAR, AND GUCHOL) USING THE OBSERVED TRACK [J]. Journal of Tropical Meteorology, 2020, 26(1): 14-26, https://doi.org/10.16555/j.1006-8775.2020.002

Manuscript History

Manuscript received: 11 December 2018
Manuscript revised: 06 November 2018
Manuscript accepted: 15 February 2020


  • Corresponding author: WAN Qi-lin, e-mail: qlwan@gd121.cn

Abstract: Ensemble forecasting is now widely used in numerical weather prediction (NWP). However, with a limited number of members, some of which deviate significantly from the true atmospheric state, an ensemble may not produce a perfect Gaussian probability distribution, and event samples with small probabilities may degrade the accuracy of an ensemble forecast. In this study, the evolution of tropical storms (weak typhoons) was investigated, and the observed tropical storm track was used to limit the probability distribution of the samples; the ensemble forecast method used pure observation data instead of assimilated data. The prediction results for three tropical storms, Merbok, Mawar, and Guchol, showed that track and intensity errors could be reduced through sample optimization. The vertical structures of these tropical storms were also compared, and different thermal structures were found; sample optimization is one possible cause of these structural differences, and it may affect storm intensity and track.

Citation: LI Ji-hang, GAO Yu-dong, WAN Qi-lin, et al. SAMPLE OPTIMIZATION OF ENSEMBLE FORECAST TO SIMULATE TROPICAL STORMS (MERBOK, MAWAR, AND GUCHOL) USING THE OBSERVED TRACK [J]. Journal of Tropical Meteorology, 2020, 26(1): 14-26, https://doi.org/10.16555/j.1006-8775.2020.002
  • Ensemble forecasting is a method in which the initial state of a deterministic forecast is perturbed and a number of ensemble members are integrated [1]. In recent years, ensemble forecasting has gained popularity in numerical weather prediction (NWP) because it provides greater economic and social benefits than a single "best guess" forecast does. However, several studies have found that ensembles built on initial-condition perturbations alone generally have insufficient dispersion [7]. Furthermore, some studies suggest that prediction uncertainty is best simulated using ensembles that account for both modeling uncertainties and the influence of initial conditions [8]. However, there is usually no reliable reference for representing initial uncertainties, because different ensemble systems simulate different uncertainties [9].

    With limited members, an ensemble may not produce a perfect Gaussian probability distribution; however, each ensemble member can be regarded as a sample from an approximate Gaussian probability distribution, including samples with close-to-zero probability that fall outside the distribution range [9]. Ideally, samples should reflect the true state of the atmosphere (the observation); in practice, they do not. In particular, some samples are outliers corresponding to small-probability events, and at times they deviate greatly from the verifying observations. Although such impossible or outlier samples enter the statistics, they may degrade the accuracy of ensemble forecasts. One way to address this problem is to transform the probability density function (PDF) for the truth into a PDF for the observations to be verified [1]. In this study, observation data were used to limit the probability distribution of the samples; this method is different from data assimilation. When members are combined, equal weights amount to a simple arithmetic average, whereas unequal weights are determined using more complex techniques. This principle can be applied to obtain optimal ensembles, and it has been applied successfully to forecast both warm-season mesoscale convective systems [10] and cold-season topographical forcing [11].
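The equal-weight versus unequal-weight combination described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the sample values are hypothetical, and the paper itself uses a simple arithmetic average of member positions:

```python
import numpy as np

def ensemble_mean(members, weights=None):
    """Combine ensemble members into a single forecast.

    members: array of shape (n_members, ...) with one forecast per member.
    weights: optional per-member weights; equal weighting (a plain
             arithmetic average) is used when omitted.
    """
    members = np.asarray(members, dtype=float)
    if weights is None:
        return members.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so the weights sum to 1
    return np.tensordot(w, members, axes=1)

# Equal weights reproduce the arithmetic average; unequal weights
# favour members judged more reliable.
fcst = [1012.0, 1008.0, 1004.0]          # e.g. minimum SLP (hPa) per member
print(ensemble_mean(fcst))               # 1008.0
print(ensemble_mean(fcst, [0.5, 0.3, 0.2]))
```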

    Track forecasts and maximum surface wind speed forecasts can be used to quantitatively assess risks from tropical storms (TSs), allowing earlier and more appropriate decisions on coastal evacuations [12]. TSs typically originate far from land, where data assimilation is not very effective. Over the past 20 years, track forecast errors at lead times of 1 to 5 days have been reduced by more than 50%; nevertheless, further improvements in NWP are still required to reduce TS track errors. Sample optimization, also known as observation-based ensemble subsetting, has been the objective of previous works [13-16]. Qi et al. [14] first proposed a method for obtaining ensemble mean track forecasts by weighting members as a function of their short-term track forecast error. Dong and Zhang [15] applied this method to a single ensemble after modifying the subset selection. The China Meteorological Administration found that, for a given forecast ensemble, some samples produce small track forecast errors ("good" samples), whereas others produce poor forecast tracks ("bad" samples) that deviate substantially from the truth. Good members should therefore be identified objectively by some standard. In this study, samples close to the expected values (observations) were identified as "good", and those with small probabilities were regarded as "bad"; the bad samples were replaced by good ones so that the ensemble reflects the true state as closely as possible. To determine the effect of sample optimization on ensemble forecasting, some studies, such as Dong and Zhang [15], used the best-track positions of hurricanes that occurred during 2012-2013. However, best-track data are not available in real time. Therefore, TS track data from the Central Meteorological Station (http://typhoon.nmc.cn/web.html) were used in this study as the criterion for restricting the probability distribution.

    Houtekamer [17] showed that spread has the most predictive value when it is "extreme", that is, very large or very small compared with the corresponding mean value. Whitaker [18] pointed out that the more the spread departs from its climatological mean value, the more useful it is as a predictor of skill. Grimit and Mass [19] used a statistical model based on an ensemble prediction system under a perfect-forecast assumption. Generally, a larger (smaller) ensemble dispersion implies more (less) uncertainty in the forecast [20]. However, replacing too many bad samples with good ones may lead to an extremely small ensemble spread, which produces inaccurate ensemble mean values; conversely, replacing too few bad samples may prevent the adjustment and improvement of the ensemble mean. Therefore, the number of samples to be replaced should be chosen carefully so that the ensemble mean is improved while the ensemble spread is not excessively reduced.

    Relevant work in this field has been done by Li [21]; however, the subject of that study was a typhoon (a stronger tropical cyclone). To balance the relationship between ensemble spread and track error, Li replaced approximately ten to twenty-five samples, which showed good performance [21]. Therefore, in this study, twenty bad samples were replaced by good ones for convenience, and the subject of the research was a tropical storm (a weaker tropical cyclone). The effectiveness of sample optimization was then tested.

    This paper is organized as follows. The scheme, datasets, and analysis of the cycles are presented in Section 2. The analysis of the results is described in Section 3, and concluding remarks are given in Section 4.

  • According to Li [21], sample optimization for ensemble forecasts involves two main steps, i.e., sample selection and sample supplementation.

    In the first step, samples were selected and their quality was evaluated. Because track is a very important indicator of a TS, the observed track data from the China Meteorological Administration were selected as the standard against which the samples were compared. The track was used to restrict the probability density distribution of the samples, based on the absolute track forecast error, defined as the great-circle distance between the forecast position and the observed-track position of a TS at a given time. By this definition, a sample is considered good when it is close to the observed position. The underlying premise of this technique is that samples with smaller track errors perform better, on average, at longer forecast lead times. It is worth mentioning that only track was chosen as the selection standard; other possible criteria, such as radar echo and intensity, will be investigated in future studies. In this study, we focused only on whether good and bad samples influence ensemble forecasts, using the following procedure. First, the track error between each sample and the observation was calculated and the samples were ranked. Samples with lower track errors were identified as good samples, and samples with higher track errors were identified as bad samples. The good samples were copied, and the bad ones were removed; samples that are neither good nor bad are called moderate samples.
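The selection step (great-circle track error, then ranking into good, bad, and moderate samples) can be sketched as follows. The function names, the 5-member example, and the positions are hypothetical; the haversine formula is used for the great-circle distance:

```python
import math

EARTH_RADIUS_KM = 6371.0

def track_error_km(lat_f, lon_f, lat_o, lon_o):
    """Great-circle distance (km) between a forecast TS position and the
    observed-track position at the same time (haversine formula)."""
    p1, p2 = math.radians(lat_f), math.radians(lat_o)
    dp = math.radians(lat_o - lat_f)
    dl = math.radians(lon_o - lon_f)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def classify_members(errors, m):
    """Rank members by track error: the m smallest are 'good', the m
    largest are 'bad', and the rest are 'moderate'."""
    order = sorted(range(len(errors)), key=lambda i: errors[i])
    return order[:m], order[-m:], order[m:-m]

# Hypothetical TS positions for a 5-member ensemble against one observation.
obs = (21.0, 114.0)                       # (lat, lon) of the observed position
members = [(21.1, 114.2), (21.0, 114.1), (22.5, 115.5),
           (20.9, 113.9), (23.0, 116.0)]
errors = [track_error_km(la, lo, *obs) for la, lo in members]
good, bad, moderate = classify_members(errors, m=1)
print(good, bad)                          # member 1 is closest, member 4 farthest
```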

    In the second step, after the bad samples were eliminated, copies of the good samples were added to maintain the total number of members in the ensemble forecast. Perturbations were then introduced into the supplemented samples through the EnKF (EnSRF); that is, the copied samples acquired assimilation increments.

    Figure 1 is a schematic diagram of the sample optimization process for ensemble prediction in one cycle. There is a 1-h lag between the time when the sample optimization spins up and the time at which TS track data first become available at the observed position. Although the ensemble comprised a total of N samples in this study (N = 60), for convenience the schematic shows only 8 samples (N = 8), and the forecast track is a simple arithmetic average of the positions of the selected samples. Conventional ensemble forecasts are called SOno, and ensemble forecasts with sample optimization are called SO. In the schematic, ensemble members 4 and 5 (i.e., M = 2 and N = 8) are good samples, judged against the observed position 1 h after the spin-up time. The number of good samples selected is based on past performance and is constant within each experiment (introduced in Section 2b). The method used in this study differs from that of Dong and Zhang [15], in which the ensemble subset contains only the selected good samples and both M and N vary; here N is fixed and only M varies. M bad samples are replaced by M good samples, leaving N - 2M moderate samples.
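The fixed-N replacement described above (delete the M worst members, fill the gaps with copies of the M best, so the ensemble size never changes) can be sketched as follows; the function name and the letter-labeled members are hypothetical stand-ins for full model states:

```python
import copy

def optimize_samples(members, errors, m):
    """Fixed-size sample optimization: delete the m worst members (largest
    track error) and fill the gaps with copies of the m best, so the
    ensemble size N never changes and n - 2m moderate members remain."""
    n = len(members)
    assert 2 * m <= n, "cannot replace more than half the ensemble"
    order = sorted(range(n), key=lambda i: errors[i])
    good, bad = order[:m], set(order[-m:])
    optimized = [members[i] for i in range(n) if i not in bad]
    # Supplement with deep copies so later perturbations stay independent.
    optimized += [copy.deepcopy(members[i]) for i in good]
    return optimized

# N = 8, M = 2 as in the paper's schematic: members with the two smallest
# errors are 'good', those with the two largest are 'bad'.
errors = [120.0, 95.0, 80.0, 60.0, 10.0, 15.0, 140.0, 110.0]
new_members = optimize_samples(list("ABCDEFGH"), errors, m=2)
print(new_members)   # 'A' and 'G' dropped; 'E' and 'F' duplicated; still 8
```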

    Figure 1.  Schematic illustration of the 1-h difference between the time when ensemble members arrive and the time when the ensemble mean arrives.

  • In this study, three TSs were considered: Merbok, Mawar, and Guchol. They were the second, sixteenth, and seventeenth TSs of 2017, respectively, and none reached typhoon level.

    Data for Merbok were compiled by the Central Meteorological Observatory at 0000 UTC June 10, 2017. Merbok then intensified into a TS at 0600 UTC June 11, 2017 and landed on the coast of the Dapeng Peninsula in Shenzhen at around 1500 UTC June 12, 2017. At that time, it had a maximum wind speed of 25 m s-1.

    Data for Mawar were compiled on September 1, 2017. At around 21:30 on September 3, 2017, Mawar made landfall in Lufeng City, Guangdong Province.

    From September 6 to 7, Guchol crossed the Bashi Channel and entered the Taiwan Strait. There, it combined with the cold air front and turned into an extratropical cyclone.

    To examine the impact of sample optimization on ensemble forecasts, seven experiments were designed for each TS. All experiments were performed with version 3.4.1 of the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) model. In this study, the ensemble for each TS was initialized using ECMWF 0.125° × 0.125° analysis data.

    Figure 2 shows the tracks of the three TCs. The horizontal grid spacing is 3 km. The Merbok, Mawar, and Guchol domains have 298 × 667, 304 × 262, and 409 × 445 grid points, respectively, all with 35 vertical levels. The Grell-Devenyi cumulus scheme [22], the WRF single-moment 6-class (WSM6) microphysics scheme with graupel [23], and the Yonsei University (YSU) planetary boundary layer (PBL) scheme [24] were used in this research.

    Figure 2.  The tracks of three TCs. (a) Merbok, (b) Mawar, and (c) Guchol.

    In particular, the sample optimization experiments comprised three steps:

    (1) Spin-up (a 6-h forecast without data assimilation). The initial value (created using the WRF-3DVAR method) and the major physical parameters were the same as those in Zhu et al. [25]. The WRF-based EnKF system was first developed for regional-scale data assimilation by Meng and Zhang [26-27]. The control variables are stream function, velocity potential, and unbalanced pressure. The initial ensemble members were generated by adding perturbations randomly sampled from the default "cv3" background error covariance option in the WRF-3DVAR package [28]. Similar perturbations were used to represent the boundary condition uncertainties of the ensemble. The covariance relaxation method was the same as that of Zhu et al. [25].

    The perturbed variables included the horizontal wind components (u, v), potential temperature, and the water vapor mixing ratio, with standard deviations of 2 m s-1 for wind, 1 K for temperature, and 0.5 g kg-1 for the mixing ratio [26-27]. The covariance relaxation method proposed by Zhang [29], with a relaxation coefficient of 0.8, was used to inflate the background error covariance. The prognostic variables of perturbation potential temperature (T), vertical velocity (W), horizontal wind components (u and v), water vapor mixing ratio (QVAPOR), cloud water (QCLOUD), rainwater (QRAIN), perturbation geopotential (PH), perturbation dry air mass in column (MU), surface pressure (PSFC), and perturbation pressure (P) were updated following Zhu et al. [25]. The horizontal covariance localization length was set to 30 km and the vertical length to 6 layers.
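The initial-ensemble generation can be sketched as below. In the paper the perturbations are drawn from the "cv3" background error covariance via WRF-3DVAR; as an illustrative stand-in, this sketch uses uncorrelated Gaussian noise with the standard deviations quoted above (the variable names, grid shape, and analysis values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard deviations quoted in the text for the perturbed variables.
SIGMA = {"u": 2.0, "v": 2.0,        # wind components (m s-1)
         "theta": 1.0,              # potential temperature (K)
         "qvapor": 0.5e-3}          # water-vapour mixing ratio (kg kg-1)

def perturb(state, sigma=SIGMA, rng=rng):
    """Add zero-mean Gaussian noise, field by field, to one member's state.
    `state` maps variable names to NumPy arrays on the model grid.
    (A stand-in for sampling the 'cv3' background error covariance.)"""
    return {name: field + rng.normal(0.0, sigma[name], field.shape)
            for name, field in state.items()}

# Generate a 60-member initial ensemble around a single analysis state
# (35 vertical levels on a small hypothetical 10 x 10 horizontal grid).
analysis = {"u": np.zeros((35, 10, 10)), "v": np.zeros((35, 10, 10)),
            "theta": np.full((35, 10, 10), 300.0),
            "qvapor": np.full((35, 10, 10), 0.012)}
ensemble = [perturb(analysis) for _ in range(60)]
```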

    (2) Data assimilation was performed using the WRF-EnKF method for 6-h cycles after the spin-up. The configuration was the same as that of Zhu et al. [25], except that each cycle included three steps: selection, assimilation, and integration. In other words, the replacement of bad samples with good ones (proposed in Section 2a: Sample optimization for ensemble forecasts) was conducted before the EnKF was applied, as in Zhu et al. [25].

    In the EnKF, α is a constant, and HPbHT and R are scalars representing the background and observational error variances at the observation location [30]:

    $$ \alpha = {\left( {1 + \sqrt {\frac{R}{{H{P^{\rm{b}}}{H^{\rm{T}}} + R}}} } \right)^{ - 1}} $$ (1)

    In this work, some differences remained between the supplemented samples and the good ones: during assimilation, α is nonzero for the good samples but is set to 0 for the supplemented ones.
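Eq. (1) can be evaluated directly; the sketch below (function name hypothetical) shows its two limiting behaviours, which follow from the formula itself:

```python
import math

def ensrf_alpha(hpbht, r):
    """EnSRF weighting factor from Eq. (1):
    alpha = (1 + sqrt(R / (HP^bH^T + R)))^-1.
    hpbht: background-error variance at the observation location (HP^bH^T).
    r:     observational-error variance (R)."""
    return 1.0 / (1.0 + math.sqrt(r / (hpbht + r)))

# A perfect observation (R = 0) gives alpha = 1; a nearly useless one
# (R >> HP^bH^T) drives alpha toward 1/2.
print(ensrf_alpha(1.0, 0.0))              # 1.0
print(round(ensrf_alpha(1e-6, 1.0), 3))   # 0.5
```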

    According to Table 1, SOno was the conventional ensemble forecast experiment, and SO20 (track) was the sample optimization experiment, in which twenty good samples were replicated, twenty bad samples were deleted, and twenty moderate samples were retained. The ensemble size was maintained at 60 throughout the sample optimization process.

    Experiments Definitions
    SOno Conventional ensemble forecast experiment.
    SO20(track) Sample optimization based on the conventional ensemble forecast, where twenty good samples were reproduced and twenty bad samples were deleted, and twenty moderate samples were retained.

    Table 1.  The definitions of the abbreviations for SOno and SO20(track).

    (3) The initial value (the ensemble average) was generated from the final analysis in step 2, and the forecast was then run to the end time. Fig. 3 is a flowchart of the process.

    Figure 3.  Flowchart for sample optimization experiments.

    Table 2 shows the start and end times of the ensemble forecast and sample optimization; for example, 2017061012 denotes 1200 UTC on June 10, 2017. The Merbok forecast began at 1200 UTC on June 10, 2017 and ended at 0000 UTC on June 13, 2017; Mawar started at 0000 UTC on September 1, 2017 and ended at 0000 UTC on September 4, 2017; Guchol started at 1200 UTC on September 4, 2017 and ended at 1200 UTC on September 7, 2017. Their sample optimization began at 1800 UTC on June 10, 2017, 0600 UTC on September 1, 2017, and 1800 UTC on September 4, 2017, and ended at 0000 UTC on June 11, 2017, 1200 UTC on September 1, 2017, and 0000 UTC on September 5, 2017, respectively, in all experiments.

    Start time of ensemble forecast Start time of sample optimization End time of sample optimization End time of ensemble forecast
    Merbok 2017061012 2017061018 2017061100 2017061300
    Mawar 2017090100 2017090106 2017090112 2017090400
    Guchol 2017090412 2017090418 2017090500 2017090712

    Table 2.  The start and end time (UTC) of ensemble forecast and sample optimization.
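The compact timestamps in Table 2 (yyyymmddhh) are easy to expand into individual cycle times. The sketch below assumes hourly cycles within the 6-h optimization window, consistent with the six cycles shown in Fig. 5; the function name is hypothetical:

```python
from datetime import datetime, timedelta

FMT = "%Y%m%d%H"   # e.g. '2017061012' -> 1200 UTC on June 10, 2017

def cycle_times(start, end, step_hours=1):
    """Expand a start/end pair from Table 2 into the individual cycle
    times, inclusive of both endpoints."""
    t0 = datetime.strptime(start, FMT)
    t1 = datetime.strptime(end, FMT)
    times = []
    while t0 <= t1:
        times.append(t0)
        t0 += timedelta(hours=step_hours)
    return times

# Merbok's sample-optimization window: 1800 UTC June 10 to 0000 UTC June 11,
# i.e. seven hourly timestamps bounding six 1-h cycles.
so_window = cycle_times("2017061018", "2017061100")
print(len(so_window))   # 7
```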

  • The influence of sample optimization on ensemble spread was also investigated. Fig. 4 shows the 6-h evolution of the ensemble spread and of the mean track errors in the three TS experiments.

    Figure 4.  (a) 6-h evolution of ensemble spread and (b) 6-h evolution of mean track errors. Merbok (1900 UTC on June 10, 2017 to 0000 UTC on June 11, 2017), Mawar (0700 UTC to 1200 UTC on September 1, 2017) and Guchol (1900 UTC on September 4, 2017 to 0000 UTC September 5, 2017).

    According to the figure, sample optimization clearly reduces the ensemble spread. The spread decreased rapidly during the first three cycles but relatively mildly during the next three; by the last cycle, the ensemble spreads of the three experiments were very close.

    Fig. 4b shows that sample optimization reduces the track error in each cycle. The trend of the track errors in Fig. 4b closely resembles that of the ensemble spread in Fig. 4a: with continued sample optimization, the track errors in the last three cycles fall below those in the first three.

    Figure 5 displays the relationship between ensemble spread and track error in each cycle. The two quantities clearly decrease together, and the size of the decrease shrinks with time; both are approximately 20 km in the last two cycles. Further sample optimization would therefore bring little benefit, and a 6-h cycle appears sufficient.

    Figure 5.  The relationship between ensemble spread (km) and track error (km) in the 6-h cycles. Panels (a) to (f) correspond to the first to the sixth cycle. For markers of the same color within a panel, the lower-left one shows the values before optimization and the upper-right one the values after optimization.
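The two quantities compared in Fig. 5 can be computed from member storm-center positions. The sketch below is an assumed formulation, not the paper's exact code: ensemble spread as the mean great-circle distance of member TS centers from the ensemble-mean center, and track error as the distance of that mean center from the observed position.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def spread_and_track_error(member_centers, obs_center):
    """member_centers: list of (lat, lon) TS centers, one per member.
    Returns (ensemble spread, track error), both in km."""
    n = len(member_centers)
    mlat = sum(lat for lat, _ in member_centers) / n
    mlon = sum(lon for _, lon in member_centers) / n
    spread = sum(haversine_km(lat, lon, mlat, mlon)
                 for lat, lon in member_centers) / n
    error = haversine_km(mlat, mlon, obs_center[0], obs_center[1])
    return spread, error
```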

    Figure 6 shows the change in temperature anomaly with time and height over the 6-h cycles. Sample optimization effectively weakened the warm cores. A comparison of Figs. 6a and 6b shows an obvious positive temperature anomaly below 3 km in both SOno and SO20 (track); however, the stronger positive anomaly at a height of 12 km in SOno disappeared after sample optimization. Analogously, in Fig. 6c, large-scale positive temperature anomalies appear in the middle and upper layers for SOno, whereas the positive anomaly above 1 km disappears completely in SO20 (track). The change for Guchol is similar to those for Merbok and Mawar and is not discussed in detail here.

    Figure 6.  The change in temperature anomaly (℃) with time and height (m). (a) (SOno) and (b) (SO20 (track)) represent Merbok (1800 UTC on June 10, 2017 to 0000 UTC on June 11, 2017). (c) (SOno) and (d) (SO20 (track)) represent Mawar (0600 UTC to 1200 UTC on September 1, 2017). (e) (SOno) and (f) (SO20 (track)) represent Guchol (1800 UTC on September 4, 2017 to 0000 UTC on September 5, 2017). The x-axis is time (hour), and the y-axis is height (m).
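The warm-core anomaly plotted in Fig. 6 is the storm-center temperature minus an environmental reference at each level. A minimal sketch, assuming center and environmental-mean profiles on the same height levels (the function and variable names are illustrative):

```python
def warm_core(center_temps, env_temps, heights):
    """Warm-core temperature anomaly: storm-center temperature minus the
    environmental mean at each height level. Returns the anomaly profile
    and the height of the anomaly maximum (the warm-core height)."""
    anomaly = [tc - te for tc, te in zip(center_temps, env_temps)]
    k = max(range(len(anomaly)), key=anomaly.__getitem__)
    return anomaly, heights[k]

# Toy profile: a 4 degC anomaly at 12 km, echoing the upper-level warm
# core discussed for SOno.
anom, h = warm_core([30.0, 20.0, -40.0], [28.0, 19.0, -44.0], [0, 3000, 12000])
print(anom, h)
```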

    Figure 7 shows the change in temperature anomaly (℃) with height (hPa) according to the reanalysis data. Note that reanalysis data are available only at 0000 and 1200 UTC; thus, for each TS, the time in Fig. 7 corresponds to the last time in Fig. 6.

    Figure 7.  The change in temperature anomaly (℃) with height (hPa) from the reanalysis data. (a) Merbok (0000 UTC June 11, 2017). (b) Mawar (1200 UTC September 1, 2017). (c) Guchol (0000 UTC September 5, 2017).

    The relationships between air pressure and altitude are roughly as follows: 1000 hPa for approximately 0 km, 700 hPa for approximately 3 km, 500 hPa for approximately 5.5 km, 300 hPa for approximately 9 km, 200 hPa for approximately 12 km, and 100 hPa for approximately 16 km.
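The pressure-to-altitude pairs listed above follow approximately from the standard-atmosphere relation for the troposphere, h = (T0/L)(1 - (p/p0)^(RL/g)). A sketch using ICAO standard-atmosphere constants (strictly valid below about 11 km; above that, e.g. at 200 and 100 hPa, the values quoted in the text are rougher approximations):

```python
def isa_height_km(p_hpa, p0=1013.25, T0=288.15, L=0.0065):
    """Height (km) of a pressure level in the ICAO standard atmosphere:
    h = (T0/L) * (1 - (p/p0)^(R*L/g)), with lapse rate L in K/m.
    Valid in the troposphere (below ~11 km)."""
    g = 9.80665   # gravitational acceleration, m s-2
    R = 287.05    # specific gas constant for dry air, J kg-1 K-1
    return (T0 / L) * (1.0 - (p_hpa / p0) ** (R * L / g)) / 1000.0

# 700 hPa ~ 3 km, 500 hPa ~ 5.5 km, 300 hPa ~ 9 km, as quoted above.
for p in (700, 500, 300):
    print(f"{p} hPa -> {isa_height_km(p):.1f} km")
```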

    A comparison of Figs. 6a, 6b, and 7a shows that below 2 km both simulations produced a larger temperature anomaly than the reanalysis data, with the anomaly in SOno much larger. Between 6 km and 9 km, the anomaly simulated in SO20 (track) exceeded the reanalysis value, which in turn exceeded that of SOno. Between 12 km and 15 km, the reanalysis anomaly was approximately in the range 2~4℃; the anomaly in SO20 (track) was close to this range, while that in SOno was smaller.

    A comparison of Figs. 6c and 6d with Fig. 7b shows that the anomaly simulated in SOno changed only slightly above 1 km, remaining in the 1.5~2℃ range, while below 1 km it reached about 5~6℃; both values deviate considerably from the reanalysis data. The anomaly simulated in SO20 (track) was similar to the reanalysis data below 1 km and around 9 km, but larger around 3 km and 12 km.

    From Figs. 6e, 6f, and 7c, it is obvious that above 9 km the temperature anomaly simulated in SO20 (track) was much larger than that in SOno and close to the reanalysis data.

    Therefore, sample optimization brings the vertical structure of a TS closer to reality.

    In summary, sample optimization is helpful for adjusting the track, intensity, and structure of TSs.

    A comparative analysis of the prediction results was performed. Fig. 8 shows the observed and simulated TS tracks. The tracks simulated in SO20 (track) are closer to the observations than those simulated in SOno, especially near the starting time.

    Figure 8.  Tracks of (a) Merbok (0000 UTC Jun 11, 2017 to 0000 UTC Jun 13, 2017), (b) Mawar (1200 UTC Sep 1, 2017 to 0000 UTC Sep 4, 2017), and (c) Guchol (0000 UTC Sep 5, 2017 to 0900 UTC Sep 7, 2017) as per WRF data.

    Table 3 shows the average track and intensity errors between the observations and simulations. All track errors in SO20 (track) were smaller than those in SOno; the track error for Guchol decreased by approximately 42% after sample optimization.

                 Track (km)                  Minimum SLP (hPa)          Maximum wind velocity (m s-1)
                 Merbok   Mawar   Guchol     Merbok   Mawar   Guchol    Merbok   Mawar   Guchol
    SOno         77.79    88.21   159.80     6.07     3.86    3.77      10.83    7.10    4.13
    SO20(track)  55.10    81.91   92.50      5.63     5.16    3.00      4.40     2.83    7.51

    Table 3.  Track errors and mean absolute errors of minimum sea level pressure (SLP) and maximum wind velocity in the observations, SOno, and SO20 (track) for Merbok (0000 UTC June 11, 2017 to 0000 UTC June 13, 2017), Mawar (1200 UTC September 1, 2017 to 0000 UTC September 4, 2017), and Guchol (0000 UTC September 5, 2017 to 0900 UTC September 7, 2017). The numbers in bold indicate the errors in SO20 (track) that are lower than those in SOno.

    In the study by Li (2018), the SO20 (track) track errors for typhoons Rammasun, Nida, and Megi decreased by approximately 12.7%, 4.3%, and 16.4%, respectively. In the present study, the track errors for TSs Merbok, Mawar, and Guchol decreased by approximately 29.2%, 7.1%, and 42.1%, respectively.
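These percentages follow directly from the mean track errors in Table 3; a quick check:

```python
def pct_reduction(err_before, err_after):
    """Relative track-error reduction after sample optimization (%)."""
    return 100.0 * (err_before - err_after) / err_before

# Mean track errors (km) from Table 3: SOno vs SO20(track).
errors = {"Merbok": (77.79, 55.10),
          "Mawar":  (88.21, 81.91),
          "Guchol": (159.80, 92.50)}
for name, (before, after) in errors.items():
    print(f"{name}: {pct_reduction(before, after):.1f}%")
# Merbok: 29.2%, Mawar: 7.1%, Guchol: 42.1%
```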

    To some extent, the track errors for weaker tropical cyclones reduced more significantly than those for stronger tropical cyclones in the case of SO20 (track).

    Also, in the majority of cases, the intensity error reduced in SO20 (track).

    Figures 9 and 10 show the evolution of TS intensity (minimum sea level pressure (SLP) and maximum wind speed). The two figures indicate that the intensity adjustment is imperfect. Initially, the simulated minimum SLP was lower than the observation for all three TSs; by the end, the minimum SLP in SO20 (track) was higher than the observation in these cases. Nevertheless, the evolution of the minimum SLP in SO20 (track) resembled that of the observation. Figs. 10a and 10b show that the simulated maximum wind speed was much larger than the observation, but sample optimization improved it successfully. The exception is Fig. 10c, where the maximum wind speed simulated in SO20 (track) remained far from the observation. Therefore, SO20 (track) can improve the accuracy of track prediction but does not reliably improve intensity prediction, leaving room for further development of this sample optimization scheme.

    Figure 9.  The minimum SLP (hPa) errors for (a) Merbok (0000 UTC Jun 11, 2017 to 0000 UTC Jun 13, 2017), (b) Mawar (1200 UTC September 1, 2017 to 0000 UTC September 4, 2017), and (c) Guchol (0000 UTC September 5, 2017 to 0900 UTC September 7, 2017) as per WRF data.

    Figure 10.  The maximum wind speed (m s-1) errors for (a) Merbok (0000 UTC Jun 11, 2017 to 0000 UTC Jun 13, 2017), (b) Mawar (1200 UTC September 1, 2017 to 0000 UTC September 4, 2017), and (c) Guchol (0000 UTC September 5, 2017 to 0900 UTC September 7, 2017) according to WRF data.

  • In this study, we performed a series of simulations to achieve samples that are as close to the true state as possible. We took the observed track data from the China Meteorological Administration as standard during the process of restricting the probability distribution of the samples. We then investigated the influence of sample optimization on ensemble prediction. Finally, we conducted a comparative analysis of the results to determine the effect of sample optimization on the vertical structure of TSs.

    In previous works, typhoons (stronger tropical cyclones) were the objects of research, and good results were obtained. Therefore, in order to demonstrate the broader applicability of sample optimization, we considered three TSs in our simulations.

    Over the 6-h cycles, the track, intensity, and vertical structure were all adjusted toward the observation. The vertical structures in SO20 (track) were more reasonable and closer to the reanalysis data than those in SOno. It may therefore be concluded that the different thermal structures resulting from sample optimization lead to different TS evolutions.

    The prediction results show that sample optimization outperforms conventional ensemble forecasts (SOno) with respect to TS track simulations. Thus, sample optimization was found to be beneficial for updating tracks of TSs.

    Sample optimization can be used to simulate both stronger tropical cyclones (such as typhoons) and weaker ones (such as tropical storms).

    However, despite the better results shown in this study, this optimization method has potential drawbacks. For instance, the criterion of sample optimization is based only on the distance between the forecast position of a TS and the corresponding observed track position, and the intensity of TSs was not improved significantly. Thus, intensity or other indicators should be included in future work.
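The distance-only criterion described above can be sketched as a simple member filter; the function names, the threshold parameter, and the pluggable distance function are assumptions of this illustration, not the paper's exact implementation:

```python
def select_members(member_positions, obs_position, threshold_km, distance_fn):
    """Keep only the ensemble members whose forecast TS center lies
    within `threshold_km` of the observed track position.  This is the
    distance-only criterion discussed in the text; intensity is not
    considered.  Returns the indices of the retained members."""
    return [i for i, pos in enumerate(member_positions)
            if distance_fn(pos, obs_position) <= threshold_km]

# Toy example with a planar distance; in practice a great-circle
# distance on (lat, lon) pairs would be used.
planar = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
members = [(0.0, 0.0), (3.0, 4.0), (10.0, 0.0)]
print(select_members(members, (0.0, 0.0), 5.0, planar))  # members 0 and 1 kept
```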

    In short, a dual criterion (such as track plus intensity) may be applied in the future.

    Acknowledgments: The authors would like to thank all the members of the Guangzhou Institute of Tropical and Marine Meteorology for their constructive remarks.
