Peter Dailey says RMS’ loss estimates from last year’s storms were credible. It’s up to all modeling firms to be transparent about how they calculate their post-event loss estimates.

The significant “outlier” loss estimates that emerged in the aftermath of Harvey, Irma and Maria (HIM) last year, and the disparity among the loss estimates produced by model vendors, continue to generate comment and raise concern within the risk community.

It is obvious that vendors who approach the modeling differently will generate different estimates. But rather than ignoring the factors behind these differences and simply moving on, we feel it is critical to acknowledge and understand them. In fact, it is our duty as modelers to investigate further.

At RMS, we develop probabilistic models that cover many perils and regions, and we deliver that risk insight to our clients. Uncertainty is inherent in the modeling process for any natural hazard, and multiple components contribute to differences in loss estimates, including the scientific approaches and technologies used and the granularity of the exposure data. Probabilistic modeling means that we do not simply rely on past events or wait for new events to validate models; rather, we simulate the full range of plausible future events.
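To make that contrast with a purely historical approach concrete, here is a minimal sketch of a probabilistic event-loss simulation. It is illustrative only: the event frequency, wind-speed distribution, vulnerability curve, and exposure value below are hypothetical placeholders, not RMS model components.

```python
import numpy as np

rng = np.random.default_rng(7)

N_YEARS = 10_000   # simulated years: far beyond the historical record
EVENT_RATE = 1.7   # hypothetical annual frequency of damaging storms
EXPOSURE = 5e9     # hypothetical total insured value (USD)

annual_loss = np.zeros(N_YEARS)
for yr in range(N_YEARS):
    # Draw a Poisson event count for this simulated year.
    for _ in range(rng.poisson(EVENT_RATE)):
        # Hypothetical hazard: peak wind speed (mph) for this event.
        wind = rng.lognormal(mean=np.log(95.0), sigma=0.25)
        # Hypothetical vulnerability curve: mean damage ratio rises with
        # wind above a 70 mph threshold and saturates at 100% of exposure.
        mean_dr = np.clip((wind - 70.0) / 120.0, 0.0, 1.0)
        # Secondary uncertainty: scatter around the mean damage ratio.
        dr = np.clip(rng.normal(mean_dr, 0.1 * mean_dr + 0.01), 0.0, 1.0)
        annual_loss[yr] += dr * EXPOSURE

# Exceedance-probability view: return-period losses from the simulation.
for rp in (10, 50, 100, 250):
    print(f"{rp}-year loss: ${np.quantile(annual_loss, 1 - 1 / rp):,.0f}")
```

Each component in this toy pipeline (event frequency, hazard, vulnerability, exposure) is a place where two vendors can legitimately differ, and differences there compound into the spread of loss estimates the market sees after an event.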

Everything boils down to transparency, and as modelers, we must be fully transparent in our loss estimation approach. Every modeler applies scientific and engineering knowledge to detailed exposure datasets to generate the best possible estimates given the skill of the model. Yet when events happen, the models always produce a range of opinion, and sometimes that range is wider than expected. Clients must know exactly what steps we take, what data we rely upon, and how we apply the models to produce our estimates as events unfold. Only then can stakeholders conduct the due diligence needed to understand the reasons for the differences and make important financial decisions accordingly.

Learning from Outliers

There were some notable “outlier” estimates during HIM, particularly for Hurricane Maria, and these estimates must be scrutinized in greater detail. The onus lies on the individual modeler to acknowledge the disparity, to be fully transparent about the factors that contributed to it, and, most importantly, to explain how such disparity is being addressed going forward.

Without clear explanation, a “big miss” in a modeled loss estimate generates market disruption and damages the credibility of all catastrophe models. Looking at how RMS models performed during Maria, for instance, we believe they stood up well. One reason was our detailed local knowledge of the building stock and engineering practices in Puerto Rico, built through strong relationships developed over years and multiple visits to the island; the payoff for us and our clients comes when events like Maria happen.

In fact, our estimates for all three storms were sufficiently robust in the immediate aftermath to stand the test of time: in the months following HIM, we have not needed to significantly revise our initial loss figures, even though they were produced when uncertainty was at its peak as the storms unfolded in real time. These events gave us confidence in our models. There is much to take on board from HIM, but however we evolve our modeling, our transparent approach to explaining our estimates ensures we will continue to build client confidence.
