Catastrophe Modeling: Feeding the Risk Transfer Food Chain

February 25, 2002

The increasingly sophisticated business of constructing catastrophe models is driven by three main questions: what are the chances of a given event happening; what are the upper and lower limits of a catastrophic occurrence; and what effect, depending on its intensity, will it have in any given area?

The models don’t predict exactly when and where a disaster will strike. They attempt to define the risks involved in terms of mathematical probabilities that can then be used to help insurance companies and their clients more accurately assess a given situation. The more precise and the more sophisticated the model, the more help it is going to provide.

Applied Insurance Research (AIR), founded in Boston in 1987 by current President and CEO Karen Clark, pioneered the use of advanced technology to help the insurance industry manage catastrophe risk.

“We were the first modeling company, and still the only privately owned one,” said AIR Senior Vice President Uday Virkud. “From the start we’ve focused on technology. We produced the first hurricane model accepted in Florida and the first viable earthquake model.”

Oakland’s EQE International, another leader in the field, was also founded in the ’80s. It describes the uses and capabilities of its EQECAT series of models as “advanced natural hazard modeling software [which] enables insurers and reinsurers to assess wind, earthquake, flood and other natural hazards risk exposures worldwide for individual sites and entire portfolios.” The program combines elements of “risk management software and consulting services that incorporate science, engineering, insurance, financial and computer science expertise.”

According to a 1999 research report by TowerGroup, Risk Management Solutions (RMS), located in Newark, Calif., currently holds 50 percent of the market for catastrophe modeling and services, making it the largest in the field. It was the first to develop modeling products for the weather derivative market.

The programs provided by these three companies are made possible by the rapid development of sophisticated computer software. They constitute the cutting edge in risk management assessment, and their models have become indispensable tools not only for calculating catastrophe risks, but also for pricing weather derivative products, reinsurance and alternative risk transfers.

AIR’s Virkud characterized the work as “feeding the risk transfer food chain,” describing how risk assessment moves from a single home, building or factory, to an insurance agent or broker, to the primary carrier, and ultimately to the reinsurance market.

“Cat models are used at each stage; as you go up, a broader view is needed, and the tools you use are different in response to the need, but the underlying models are the same,” Virkud said.

Their efforts have historically concentrated on modeling natural catastrophes, chiefly earthquakes and severe windstorms (typhoons and hurricanes), as these pose the greatest threat to lives and property. Their work now also encompasses tornadoes and hailstorms, inland and seacoast floods, wildfires and the new threat of terrorist attacks on urban centers, which can be equally destructive and can be analyzed using the experience gained in dealing with earthquakes and hurricanes.

How to build one
Catastrophe models don’t just happen overnight; constructing them is an intricate and complex task. “We commonly spend from six months to three years building one,” said Fatia Grandjean, an RMS principal in the Paris office. “It depends on the complexity of the data and the risks involved.” RMS currently has “around 50” catastrophe models included in the 174 basic products it offers to its clients.

Grandjean explained that a basic cat model has four major components. “The first step is to construct the Hazard Model, that is, to determine what the peril is.” In the case of earthquakes one has to examine “the whole spectrum of events. What is its history? What’s the location? How has the phenomenon itself behaved in the past?”

Virkud stressed the difference between relying solely on historical experience, as a company would commonly do in analyzing auto coverage, and the need to go deeper using engineering and scientific analysis. “History is too short for an actuarial model to see what’s possible,” he said.

The ultimate goal is to create a “probabilistic model,” the form now universally employed. It takes into account “all the possible events that could happen within a given area. Some events are more likely to occur than others,” Grandjean said. While early calculations were largely centered on “deterministic models” (i.e., examining what the results would be today if an event such as the 1906 San Francisco earthquake happened again), these types of models are no longer used. “We’ve learned that a model must include all the possibilities which may occur and not just the ones that have already happened,” she said.
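To make the distinction concrete, the sketch below (in Python, with invented event rates and losses rather than any vendor’s actual event set) shows the core idea of a probabilistic model: a catalog of possible events, each with an annual rate of occurrence and an associated loss, is simulated over many years to produce a distribution of outcomes rather than a single historical scenario.

```python
import math
import random

# Illustrative stochastic event set: each entry is (annual rate of occurrence,
# loss in $ millions). The figures are invented for illustration; a real event
# set contains thousands of physically simulated earthquakes or storms.
EVENT_SET = [
    (0.20, 50),    # frequent, moderate event
    (0.05, 400),   # rarer, more damaging event
    (0.01, 3000),  # very rare, extreme event (a 1906-type scenario)
]

def sample_poisson(rate, rng):
    """Knuth's method for sampling a Poisson count; adequate for small rates."""
    threshold = math.exp(-rate)
    count, product = 0, 1.0
    while True:
        product *= rng.random()
        if product <= threshold:
            return count
        count += 1

def simulate_year(rng):
    """One simulated year: each event occurs a Poisson number of times."""
    return sum(sample_poisson(rate, rng) * loss for rate, loss in EVENT_SET)

rng = random.Random(42)
annual_losses = [simulate_year(rng) for _ in range(100_000)]
print(f"Average annual loss: {sum(annual_losses) / len(annual_losses):.1f}M")
```

A deterministic analysis, by contrast, would simply price the loss from one chosen scenario; the probabilistic approach weighs every scenario in the catalog by how likely it is to occur.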

In fact one of the key events that spurred the growth of cat modeling was Hurricane Andrew, which struck Florida in 1992. “Nobody believed us when we told them that a hurricane could cause $13 billion in losses,” said Virkud. “Andrew was a defining moment, because people then realized you had to go beyond past experience and examine what a given event could do.” Since Andrew, U.S. insurers have increasingly sought to employ ever more sophisticated models to more accurately gauge their risks.

Europe received a similar wake-up call in 1999 when the windstorms Lothar and Martin hit France, Germany, Switzerland and Spain with extreme force. The services of the catastrophe modelers have been in great demand there ever since.

“The second component in a cat model is an assessment of local conditions; you can call them ‘site conditions,'” said Grandjean. This includes a study of soil structure and the degree of urbanization in any given area. “While most catastrophe models are done by country, for example, earthquake models of Italy or Taiwan, California requires a more detailed level of study,” she added.

Insurers are basically interested in assessing their risks within a certain finite space, and the models they require are therefore constructed to coincide with their needs. RMS recently designed parametric triggers for earthquake risk in Japan and California for the SCOR Group’s Atlas II cat bond. For the Bay Area and L.A. it developed a model that is “based on reported earthquake magnitude within 12 geographical ‘boxes’ surrounding sources of major tectonic activity.”
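The mechanics of a parametric trigger of this kind can be illustrated with a short sketch. The box coordinates, magnitude thresholds and example events below are invented purely for illustration and are not the actual terms of the Atlas II transaction.

```python
# Hypothetical parametric trigger: a payout is due if a reported earthquake
# magnitude meets or exceeds a threshold inside one of the defined boxes.
# Coordinates and thresholds are invented for illustration only.

BOXES = [
    # (min_lat, max_lat, min_lon, max_lon, trigger_magnitude)
    (37.0, 38.5, -123.5, -121.5, 7.0),   # illustrative "Bay Area" box
    (33.5, 34.5, -119.0, -117.5, 7.2),   # illustrative "L.A." box
]

def trigger_payout(magnitude: float, lat: float, lon: float) -> bool:
    """Return True if the reported event magnitude reaches the trigger
    level within any of the defined geographical boxes."""
    for min_lat, max_lat, min_lon, max_lon, m_trigger in BOXES:
        inside = min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
        if inside and magnitude >= m_trigger:
            return True
    return False

# A magnitude 7.1 event reported inside the first box triggers payment.
print(trigger_payout(7.1, 37.8, -122.4))   # True
print(trigger_payout(6.5, 37.8, -122.4))   # False (below threshold)
```

The appeal of a parametric structure is that the payout depends only on reported physical parameters, not on a lengthy claims adjustment process.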

After calculating the hazard level and incorporating specific data defining the site conditions, a cat model then assesses the “vulnerability level” in the area. To do this the model examines the nature of the structures—What percentage are houses, office buildings, factories and stores? What’s the density? How are they constructed? What forces are they designed to withstand? What are they used for? How do they compare to other, similar structures?
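In code terms, the vulnerability step amounts to a set of damage functions that map hazard intensity and construction class to an expected damage ratio, which is then applied to the insured value. The sketch below uses invented damage ratios purely for illustration; it is not any vendor’s actual curve set.

```python
# Illustrative vulnerability module: maps construction class and hazard
# intensity to a mean damage ratio (share of insured value expected to be
# lost). All values are invented for illustration only.

# Mean damage ratio by construction class at four increasing intensity
# levels (e.g. Modified Mercalli VI, VII, VIII, IX in an earthquake model).
DAMAGE_CURVES = {
    "wood_frame":            [0.01, 0.05, 0.15, 0.35],
    "unreinforced_masonry":  [0.05, 0.20, 0.45, 0.80],
    "steel_frame":           [0.00, 0.02, 0.08, 0.20],
}

def expected_loss(construction: str, intensity_index: int, insured_value: float) -> float:
    """Expected loss for one structure: damage ratio at the given
    intensity level multiplied by the insured value."""
    ratio = DAMAGE_CURVES[construction][intensity_index]
    return ratio * insured_value

# Example: a $500,000 unreinforced masonry building at the third
# intensity level loses an expected 45 percent of its value.
print(expected_loss("unreinforced_masonry", 2, 500_000))  # 225000.0
```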

The data used by cat modelers comes from many sources, and is crucial in constructing a viable model. “We use census data, building surveys, past occurrences and any other data that will help us learn what is actually at risk,” said Grandjean.

The end product is the catastrophe model itself. It is designed to show both the frequency and severity of any given event: how often it will happen and how often a certain amount of loss will result. Grandjean characterized its output as “an exceedance probability curve that tells you what the probabilities are, greater than ‘X’ [a given loss level].”

The curves themselves are plotted on two axes: the horizontal axis usually indicates the return period of an event in years, and the vertical axis the loss that would occur in each case. Most curves rise sharply as the return period increases, reflecting the greater damage caused by infrequent, but more violent, events.
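Here is a minimal sketch of how such a curve is read off a set of simulated annual losses: sort the losses, and the loss at a given return period is the value exceeded with the corresponding annual probability. The “losses” below are random placeholders, not output from any real model.

```python
import random

# Placeholder annual losses (in $ millions); in practice these come from the
# model's simulation step, not from a simple random draw as here.
rng = random.Random(0)
annual_losses = sorted(rng.expovariate(1 / 40.0) for _ in range(10_000))

def loss_at_return_period(sorted_losses, years):
    """Loss level exceeded on average once every `years` years,
    i.e. with an annual exceedance probability of 1 / years."""
    exceedance_probability = 1.0 / years
    index = int(len(sorted_losses) * (1.0 - exceedance_probability))
    return sorted_losses[index]

for rp in (10, 50, 100, 250):
    print(f"{rp}-year loss: {loss_at_return_period(annual_losses, rp):.0f}M")
```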

How they are used
Once a catastrophe model has been constructed, it is ready to be used by any of the participants in the “risk transfer food chain,” including the insurance agent sitting in his office in Los Angeles or Miami. “Our models are used mainly by insurers and reinsurers, banks, corporate clients and governments,” said Grandjean. “Brokers and primary insurers use them to advise their clients.”

The use of catastrophe models to assess risks has become a matter of routine. Grandjean indicated that most of the larger insurance and reinsurance brokers “rent” models from RMS. “They have our models ‘in house,’ and we’re not even directly involved in using them,” she said.

As many analysts have pointed out, each model has variations and built-in assumptions that affect its conclusions. The geographic scope a model covers also affects its results. Guillaume Gorge of AXA Cessions, at a conference in Paris sponsored by EQECAT in 2000, divided models into “aggregate models” and “detailed models,” and explained the advantages and disadvantages of each. Basically, an aggregate model, which covers a larger area, is a useful tool for “portfolio analysis and for rating mass direct risks,” but not for assessing the exposure of individual policies.

Detailed models, which are mainly available in the U.S., give better analysis of individual policy risks.
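The practical difference shows up in the exposure data each type of model consumes, as the illustrative records below suggest (field names and figures are invented): an aggregate model works from insured values summed by zone, while a detailed model works from individual locations with their building attributes.

```python
# Illustrative exposure inputs (all field names and figures invented).

# Aggregate model input: total insured value summed by zone,
# e.g. a CRESTA zone or postcode.
aggregate_exposure = [
    ("ZONE-07", 1200.0),   # (zone identifier, total insured value in $M)
    ("ZONE-13", 860.0),
]

# Detailed model input: one record per insured location, with the
# attributes the vulnerability module needs.
detailed_exposure = [
    {
        "lat": 34.05, "lon": -118.25,
        "construction": "steel_frame",
        "occupancy": "office",
        "year_built": 1978,
        "insured_value_usd": 25_000_000,
    },
]
```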

Do they actually work? RenaissanceRe, one of the world’s most sophisticated and successful property/catastrophe reinsurers, relies heavily on them. According to Martin Merritt, financial vice president, “We try to use [modeling capabilities] for every single risk we write, and we get as many details from the broker as we can.” He estimated that RenRe has spent more than $20 million to develop its models, and employs eight to 10 people on a full-time basis to update them and construct new ones. “We have dozens of them,” Merritt said, “and we use as many as we can on each risk.” He explained that every model has a bias: “Using only one model works against you; by using more than one you can reduce this effect.”

This approach perhaps explains why RenRe had far less exposure to the property losses caused by the Sept. 11 attacks than did some of its peers. By using models it managed to diversify its risks and reduce its exposures. Models might also help explain how RenRe consistently manages to earn between 18 and 20 percent on shareholders’ equity, far above the industry average.

Models are frequently updated
Once a model has been prepared, it does not remain static. All models are constantly upgraded as new data is added and incorrect or incomplete assumptions are modified. “All our U.S. models are being updated all the time,” Grandjean said. “New data is added at a minimum of once a year, and usually more often. It depends on the model; it may be new hazard data, financial data—whatever affects the primary model.”

AIR recently released the 2002 versions of its earthquake and typhoon models for the Asia-Pacific region, in time for the April 1 renewals of reinsurance coverage for Japanese insurance companies. The upgrades affected its CATMAP®/2 and CATRADER® software systems. “A highlight of the release is the updated Japan earthquake model, which features the capability to estimate losses from earthquake-induced fires,” said the announcement.

The upgrades now enable the “damage estimation module” to use “a separate dynamic simulation to calculate the rate of fire ignitions as a function of earthquake intensity and construction type, the spread of the fire as affected by wind conditions and building density, and the capacity of available firefighting resources to respond to and suppress fires.”

AIR also released an updated Australia typhoon model featuring “an enhanced methodology for estimating changes in primary model parameters, such as central pressure, forward speed, and radius of maximum winds, along fully probabilistic, curving and recurving storm tracks.”

Dr. Jayanta Guin, vice president of AIR Research and Modeling, also indicated that, “[i]n addition to a significant upgrade of the model’s hazard component, new damage functions have been developed for a wider variety of construction types and occupancy classes.” As the models improve, they are able to provide more precise data for insurers to use in assessing risks.

Future applications
While severe windstorms and earthquakes are the main focus of catastrophe models, they are also being employed to assess other perils. The close correlation of windstorm phenomena with related weather events, such as hailstorms, tornadoes or seacoast flooding, can also be analyzed in terms of probability. “We’ve done several cat models for floods and coastal storm surges, as they correlate with winds and tides,” said Grandjean. She indicated, however, that floods were very hard to model, as a huge amount of local data must be used to achieve accuracy.

The same requirement applies to hailstorms and tornadoes. While devastating (a hailstorm in Sydney caused half a billion dollars in damage in April 1999), such events are extremely local, and their occurrence is very hard to predict. “There are literally thousands of scenarios for hail and tornadoes,” said Virkud. “What we can do is give them [insurers, reinsurers and other clients] an idea of what they can expect. We can assign probabilities from mathematical models to get damage estimates.”

How to assess terrorist risks is an even more daunting task. The possibility of creating useful models nevertheless exists, and the need is great.

“The first question we got after Sept. 11 was, ‘How much risk do I have in that area?'” Grandjean said. In response, RMS is working on developing models to assess “urban catastrophes.” They include evaluations of building values, assets, workers’ compensation and business interruption. “It’s very difficult to work on the frequency of the event,” Grandjean continued, “so we work on analyzing the main urban concentrations to try and determine what will happen, if it happens.”

Virkud indicated that while the historical data is sketchy, it does exist. “The government has a lot of technical data and historical information; we can start there,” he said. AIR is working on post-Sept. 11 models focused mainly on workers’ comp. “We’re looking at what’s happened in cities,” said Virkud, adding that the concentration is on “landmark properties, bridges and tunnels. We hope to come up with a model that can give answers to certain scenarios, the frequencies and the losses.”

Sept. 11 may well turn out to have been a seminal event in catastrophe modeling, in the way that Andrew and the European storms were. It has made a whole new segment of the insurance industry highly aware of a vast area of risk that cries out for better analysis. Catastrophe modeling will be the main tool employed to conduct that analysis, and the three companies most closely involved are gearing up to meet the challenge.
