Exam 7 (CAS)

a) Identify a plot that can be used to determine outliers. b) Briefly describe how to adjust a GLM to remove the influence of outliers. c) An actuary removes an outlier from a GLM caused by a major catastrophe. Briefly describe the danger in removing this outlier if no other action occurs.

(a) Plot of standardized Pearson residuals (b) Give the outlier zero weight in the regression. (c) If removed from the model, the cost of those catastrophes needs to be captured somewhere else. Otherwise, the model fails to recognize the possible impact of such an event.

Describe the three evolutionary steps of the decision analysis process.

1. Deterministic project analysis: uses a single deterministic forecast to project cash flows; uncertainty is not handled stochastically. 2. Risk analysis: produces a distribution of future cash flows. 3. Certainty equivalent: incorporates corporate risk preference to get a consistent application of judgment.

Define operational risk.

Operational risk is the risk of loss from failed or inadequate systems, processes, or people, or from external events. It includes legal risk but excludes strategic or reputational risk.

What is naive cycle management? On the flip side, what is effective cycle management?

Naive: Writing business at inadequate prices during a soft market in order to maintain market share. Effective: Decrease written premium volume during a soft market (don't write at inadequate rates), and increase it during a hard market.

An actuary is building a stochastic chain ladder model and is considering the following distributions: 1) Over-dispersed Poisson 2) Over-dispersed Negative Binomial 3) Normal The actuary wants to use a model where the connection to the chain ladder method is immediately apparent. Identify and briefly explain which of the three models under consideration would achieve this.

The over-dispersed negative binomial. Its lambda parameter is effectively a loss development factor applied to the losses to date, so the connection to the chain ladder method is immediately apparent.

Describe how competition can influence the UW cycle.

Not all competitors have the same view of the future. Inexperienced firms may have poorer loss forecasts than mature firms. As a result, inexperienced firms may drop prices based on poor forecasts. This eventually pushes the market toward lower rates.

Formula for Spearman's rank correlation coefficient

T = 1 - 6 x Sum(d_i^2) / (n x (n^2 - 1)), where d_i is the difference between the two ranks for observation i and n is the number of observations.
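
For concreteness, a minimal Python sketch of this formula (the function name and data are made up, and ties are ignored):

```python
# Spearman's rank correlation; assumes no tied values for simplicity.
def spearman(x, y):
    n = len(x)
    rx = {v: i + 1 for i, v in enumerate(sorted(x))}  # rank of each x value
    ry = {v: i + 1 for i, v in enumerate(sorted(y))}  # rank of each y value
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman([1, 2, 3, 4], [10, 30, 20, 40]))  # 0.8
```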

A company's plan loss ratio determination process is considered the "fulcrum" of operational risk. Fully describe a bridging model that could lead to the financial downfall of the company. In your discussion, include three explanations for the company's downfall and explain how they are related to operational risk.

The plan loss ratio is a forecast of the loss ratio for the upcoming underwriting period. A bridging model determines the plan loss ratio by bridging forward more mature prior-year ultimate loss ratios using year-over-year loss cost and price level changes. If the Bornhuetter/Ferguson (BF) method is used for immature prior years with an ELR equal to the initial plan loss ratio for the year, the prior-year ultimate loss ratio will remain close to its plan loss ratio. Once older prior years begin to deteriorate, the BF ELRs for the more recent prior years will increase via the bridging. This could lead to a booked reserve deficiency, a possible rating downgrade, and a large exodus of policyholders. 1) The plan loss ratio and reserve models could not accurately forecast the loss ratio and reserves. - Could be UW or operational risk (depends if competitors are having similar problems). 2) The plan loss ratio and reserve models could have accurately forecasted the loss ratio and reserves, but the models were not properly used. - Operational risk due to people failure. 3) The plan loss ratio and reserve models did accurately forecast the loss ratio and reserves, but the indications were ignored. - Operational risk due to process and governance failure.

What is the net insurance charge?

The charge for the maximum retro premium net of the savings for the minimum retro premium; i.e., the expected portion of losses that does not flow through premium because of the min/max premium caps.

What are three advantages of using Clark's method to organize loss data rather than organizing it in a triangle?

- Data does not have to be at the same age; e.g., data evaluated at 12 months can be combined with data evaluated at 9 months - A full triangle is not needed; a subset of the triangle, such as the last several calendar years, can be used - Output is a smooth payment curve

Discuss two relationships regarding loss development excess of the aggregate limit.

• Aggregate excess loss development drops off faster for smaller per-occurrence deductible limits - Because later development is more likely for larger claims that already are over the deductible (so they don't impact the aggregate limit) • Higher aggregate limits have more leveraged LDFs - Because there are fewer losses excess the aggregate limit when the aggregate limit is high, especially at earlier periods

What are the key aspects of ERM? (7)

• An ERM program should be a regular process • Risks should be considered across the whole enterprise (insurance hazard, financial, operational and strategic risks) • An ERM program should focus on risks posing the most significant impact to the value of the firm • Risk can be positive or negative • Risks must be quantified where possible to measure the impact of the risk on the enterprise as a whole, including correlations between risks • An ERM program should develop and implement strategies to manage risk by avoiding, mitigating or exploiting risks • Risk Management strategies should be evaluated for risk/return to maximize firm value

Identify, for both a long-tailed and short-tailed portfolio, whether the premium liability CoV should be higher or lower than the outstanding claim CoV.

• Long-tailed portfolios: Premium Liability CoV should be higher than Outstanding Claim CoV • Short-tailed portfolios: Premium Liability CoV should be lower than Outstanding Claim CoV

Briefly describe three testable implications of the Mack assumptions according to Venter. For each of the three implications, briefly discuss how the implication could be tested.

1. The f(d) factor, the LDF, is significant 2. The factor model is linear 3. There are no particularly high or low diagonals. How to test: 1. Run a linear regression of incremental losses vs prior cumulative losses. Check if the coefficient is significant. 2. Create a residual plot against the prior cumulative loss. If there is a pattern in the residuals, this is evidence that emergence is a non-linear function of prior losses. 3. Run the high/low diagonal test.
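
As a rough illustration of test 1, a hedged Python sketch that regresses incremental losses on prior cumulative losses (through the origin for simplicity) and checks the t-statistic of the fitted factor; all numbers are made up:

```python
import numpy as np

prior_cum = np.array([1000., 1200., 900., 1100.])  # prior cumulative losses
incremental = np.array([480., 630., 410., 560.])   # next-period incrementals

f = (prior_cum @ incremental) / (prior_cum @ prior_cum)  # least-squares factor
resid = incremental - f * prior_cum
sigma2 = resid @ resid / (len(prior_cum) - 1)            # error variance
se = np.sqrt(sigma2 / (prior_cum @ prior_cum))           # std error of f
print(f, f / se)  # treat |t| > ~2 as evidence the factor is significant
```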

Provide three characteristics of a good ERM model.

1. The model shows the balance between risk and reward from different strategies (think of reinsurance programs) 2. The model reflects the relative importance of various risks to business decisions. 3. The model incorporates dependencies between risks.

What is control self assessment?

A process to examine and assess the effectiveness of internal controls with a goal of providing reasonable assurance that all business objectives will be met. Goals of internal controls: • Reliability and integrity of information • Compliance with policies, plans, procedures, laws, regulations, and contracts • Safeguarding of assets • Economical and efficient use of resources • Accomplishment of established objectives and goals for operations or programs

What do f(d) and h(w) represent in Venter?

f(d): the development period effect - the expected fraction of losses emerging at development age d. h(w): the accident year effect - the expected ultimate loss level for accident year w. Expected emergence is modeled as f(d) x h(w).

Allocation of risk capital is ______?

Allocation of risk capital is theoretical - no capital is transferred to a policy when it is written - the entire capital of the firm stands behind every policy.

In Hurlimann's methodology, what is the burning cost of ultimate losses?

Burning cost = ELR * EP

Provide four reasons for holding sufficient capital.

Capital must be sufficient to: - Sustain current underwriting - Provide for adverse reserve changes - Provide for declines in assets - Support growth - Satisfy regulators, shareholders, and policyholders.

When plotting return by risk, the accounting systems should be ____?

Consistent. An example of a violation would be using a statutory accounting measure for risk with a GAAP measure for return.

Variance assumption in least squares.

Constant

Definition of, examples of, and a method for medium-tailed reinsurer reserves

Defn: Any exposure for which claims are almost completely settled within five years and with average aggregate claims dollar report lag of one to two years

Definition of, examples of, and two methods for short-tailed reinsurer reserves

Defn: Losses are both reported and settled quickly Examples: treaty property proportional, treaty property catastrophe, treaty property excess, facultative property. Method 1: Set IBNR equal to some percentage of the latest-year EP Method 2: Reserve up to a selected loss ratio (especially for new LOBs), where the selected loss ratio is larger than the one computed from reported non-cat claims

A public insurer uses a simple ERM model to make decisions about different lines of business. The model uses deterministic inputs to estimate the internal rate of return by line. Management makes decisions based on these figures. What corporate decision-making approach is the insurer using? What would be a more appropriate approach? Why?

Deterministic project analysis. A more appropriate approach would be risk analysis (DFA): this approach uses forecasted probability distributions for the input variables. A Monte Carlo simulation then calculates a distribution of the present value of the cash flows and the IRR, and risk judgment is intuitively applied by decision makers. This is a better approach because it incorporates the uncertainty of the input variables.

Seven categories of external systemic risks.

Eccentric Late Caleb Exceeds Every Large Race -Economic/Social -Legislative/political -Claim management process -Expense risk -Event risk -Latent claim risk -Recovery risk

Discuss the goal of sensitivity testing in the context of selecting a risk margin, as well as the steps to performing it.

Goal: Gain insight into how our risk margin varies with assumptions. Steps: 1) Vary key assumptions (CoVs and correlations) and monitor the impact on the risk margin. 2) Review key assumptions

What is a good measure for reinsurance, and what is a bad measure?

Good: net present value of earnings from the program. Bad: combined ratio, which distorts the effects of reinsurance on earnings.

Briefly describe an advantage of the collective loss ratio claims reserve over the traditional BF method.

Different actuaries will come to the same results provided they use the same premiums (i.e. no judgmentally selecting an ELR)

Discuss two advantages and disadvantages of the FCFE method.

Advantages • Relatively simple to understand • Focused on the firm's net cash flow generating capacity Disadvantages • Adjusting projected net income to calculate forecasted free cash flows makes the interpretation of FCFE difficult; FCFE may bear little resemblance to internal forecasts • May be difficult to assess the reasonableness of cash flows/growth rates

What is the credibility weighting of the Stanard-Buhlmann and chain ladder methods as presented by Patrik?

Z(ay) = Cred Factor x Lag(ay). Then: IBNR_cred(ay) = Z(ay) x IBNR_CL(ay) + (1 - Z(ay)) x IBNR_SB(ay). Each AY gets a different credibility weight Z(ay); more mature AYs (higher lag) give more weight to the chain ladder IBNR.
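
A small Python sketch of this weighting (all inputs are illustrative):

```python
# Patrik-style credibility IBNR: Z grows with the AY's report lag (maturity).
def cred_ibnr(ibnr_cl, ibnr_sb, lag, cred_factor):
    z = cred_factor * lag
    return z * ibnr_cl + (1 - z) * ibnr_sb

print(cred_ibnr(ibnr_cl=500.0, ibnr_sb=450.0, lag=0.6, cred_factor=0.5))
# Z = 0.3, so IBNR = 0.3*500 + 0.7*450 = 465.0
```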

What is the deductible loss charge? Formula and defn.

DLC = Prem x ELR x XS Ratio: the expected losses that exceed the deductible at an occurrence level.

Identify a copula that is more appropriate than the normal copula for modeling insurance loss at the portfolio level, and briefly describe a feature of this copula that makes it more appropriate than the normal distribution for modeling insurance loss data.

Heavy right tail copula. Reflects high correlation in the right tail and lower correlation in the left tail. This will reflect the increase in correlation of insurance losses in the right tail during extreme events.

What is heteroecthesious data? Discuss the two most common types of this issue, and what to do about them.

Heteroecthesious data refers to incomplete or uneven exposures at interim evaluation dates 1) Partial first development period data - First development column has a different exposure period than the rest of the columns (e.g. first column is 6 mo., then 18, then 30...). - when running the bootstrapping process, the most recent AY needs to be adjusted to remove exposures beyond the evaluation date during the simulation process. 2) Partial last calendar period -when running the bootstrapping process, one approach is to annualize the exposures in the latest diagonal and make them consistent with the rest of the triangle. Then, during the simulation process, age-to-age factors can be calculated from the annualized sample triangles and interpolated. Next, we de-annualize the exposures in the latest diagonal of the sample triangles to reduce the latest diagonal back to its original form. The interpolated factors can then be used to project these values to ultimate and produce reserve point estimates.

Meyers shows that the ODP model is biased ____?

High. On paid data, the ODP bootstrap model's estimates are biased high (actual outcomes fall at low percentiles of the predictive distribution too often).

To project future losses, an actuary fit a trend line to historical data. Using standard statistical procedures, the actuary placed prediction intervals around the projected losses. Explain why these prediction intervals may be too narrow.

Historical data is often based on estimates of past claims which have not yet settled. In the projection period, the projection uncertainty is a combination of the uncertainty in each historical point AND the uncertainty in the fitted trend line. Thus, the actuary's prediction intervals may be too narrow due to the missing uncertainty associated with the historical data

What are four common approaches to setting capital?

Holding enough capital ... 1) so the probability of default is remote. 2) to maximize franchise value. 3) to continue to service renewals. 4) so the insurer not only survives a major CAT but thrives in its aftermath

We are told that the optimal credibility for a LOB at age 24 months is 0.3. What can we say about the error around the Benktander estimate?

If p_k, the Benktander credibility weight (the expected percentage of losses emerged at 24 months), is in the interval [0, 2c] = [0, 0.6], then MSE(Benktander) <= MSE(BF), where c = 0.3 is the optimal credibility weight.

Discuss the importance of being cautious with using a parameterized curve, according to Clark.

If you are fitting a curve to a long-tailed line, there will be significant extrapolation of losses beyond the latest age in your data. Consider using a truncation point to limit the extrapolation, or use a lighter-tailed curve (Weibull) rather than a heavier-tailed one (Loglogistic) to get less extrapolation past the latest age of your data.

Describe why it is reasonable to assume that there is positive correlation between lines of business for internal systemic risk for claims liabilities.

If the same actuary is doing the reserving for multiple lines of business, the estimates might be subject to aggressive or conservative bias.

Briefly discuss why some copulas are better than others for use in an ERM model.

If there is high tail dependency between two risks, a copula with greater joint probabilities in the tail (high losses for both risks) would be more appropriate. In this case, you might want to use the Heavy Right Tail copula rather than the normal copula.

Discuss how an actuary would add process variance to the forecasted incremental losses in an ODP bootstrap model. How might it differ if the residuals are heteroscedastic?

IncLoss(w,d) ~ Gamma(mean = projected incremental loss, variance = scale parameter x projected incremental loss). If adjusting for heteroscedasticity, use the scale parameter for the development period of that incremental loss.
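
A minimal Python sketch of this step, assuming a gamma parameterized to hit the stated mean and variance (numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_incremental(m, phi):
    # Gamma(shape=k, scale=theta) has mean k*theta and variance k*theta^2,
    # so theta = phi and k = m/phi give mean m and variance phi*m.
    return rng.gamma(shape=m / phi, scale=phi)

# With heteroscedastic residuals, pass the phi for that development period:
print(sample_incremental(m=1000.0, phi=50.0))
```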

In the context of a retrospectively rated policy, explain how a positive fourth adjustment to the incremental premium could follow a third adjustment of $0.

Incremental losses from third adjustment could have been from a claim above the cap, while loss development in the fourth period could be from losses below the cap.

Why might historical PDLD ratios be volatile after the first retro adjustment?

Incremental premium development may reflect incremental loss development on only a small number of policies.

Formulas for risk margin CoV calculations (independent, internal systemic, external systemic)

Independent: CoV^2 = Σ wt_i^2 x CoV_ind,i^2. External systemic: CoV^2 = CoV_ext^2. Internal systemic: CoV^2 = Σ_i Σ_j Corr_i,j x (wt_i x CoV_i) x (wt_j x CoV_j).
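
A hedged numeric sketch in Python of these aggregations for two valuation classes, assuming the three risk sources combine independently (all weights, CoVs, and correlations are made up):

```python
import numpy as np

w = np.array([0.6, 0.4])                  # each class's share of liabilities
cov_ind = np.array([0.08, 0.12])          # independent CoVs by class
cov_int = np.array([0.05, 0.07])          # internal systemic CoVs by class
rho = np.array([[1.0, 0.5], [0.5, 1.0]])  # internal systemic correlations
cov_ext = 0.06                            # external systemic CoV (aggregate)

ind2 = np.sum(w ** 2 * cov_ind ** 2)
int2 = (w * cov_int) @ rho @ (w * cov_int)
total_cov = np.sqrt(ind2 + int2 + cov_ext ** 2)
print(total_cov)
```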

Difference between systemic and independent risk.

Independent risks are risks arising from randomness inherent to the insurance process. Systemic risks are risks that are potentially common across multiple valuation classes or claims groups.

Identify two strategic risks to which an insurance company is exposed.

Industry risk - includes capital intensiveness, overcapacity, commoditization, deregulation, and cycle volatility. These are all significant risks for an insurer. Competitor risk - includes global rivals, gainers, and unique competitors. For an insurer, this might be aggressive or predatory pricing that drives market price levels down below adequate levels.

Briefly describe the variance assumption underlying the ODP cross-classified model.

It assumes the Y_kj follow an ODP distribution with dispersion parameters φ_kj identical for all cells (φ_kj = φ), so that Var(Y_kj) = φ x E[Y_kj].

When variance is constant, we weight the age-to-age factors by what?

Loss^2. When the variance is constant, the minimum-variance estimate weights each age-to-age factor by the square of the prior cumulative loss (equivalently, f = Σxy / Σx^2).

What's one conclusion that Meyers reached regarding the Mack model on incurred losses? On paid losses?

On incurred losses, the Mack model understates the variability of ultimate losses (its predictive distribution has tails that are too light), in part because it treats each accident year's level as fixed and assumes independence between accident years. On paid losses, the Mack model's estimates are biased high.

Briefly discuss a macroeconomic example and an insurance example of tail dependencies an ERM model should incorporate.

Macroeconomic example: inflation would impact both underwriting losses and loss reserve development Insurance: Home and Auto generally have low correlation but an extreme event like a snowstorm might cause large losses for both.

Cons of incentivizing management with a percentage of the increase in market cap over x years

Management might take too much risk: there is high upside, but they are gambling with others' money (no downside).

Given per-occurrence charges at 250k and 1M, how would you estimate ultimate losses in the layer between 250k and 1M using Siewart's loss ratio approach?

Per-occurrence charges are our XS ratios. Ult,250k-1M = Prem * ELR * (Proportion of losses between 250k and 1M) =Prem * ELR * (Basic ratio at 1M - Basic ratio at 250k) =Prem * ELR * ((1-per-occ at 1M) - (1-per-occ at 250k))
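
A trivial Python sketch of this layer calculation (premium, ELR, and charges are made up):

```python
prem, elr = 10_000_000, 0.65
xs_250k, xs_1m = 0.35, 0.15  # per-occurrence charges (XS ratios)

# Share of ground-up losses in the layer = basic ratio at 1M - basic ratio at 250k
ult_layer = prem * elr * ((1 - xs_1m) - (1 - xs_250k))
print(ult_layer)  # 10M * 0.65 * 0.20 = 1,300,000
```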

Explain the difference between private valuation and equilibrium market valuation.

Private valuation assumes investors have their own view of risk, so the investments are analyzed relative to their current portfolio and will have different values for different investors. Equilibrium market valuation assumes that all investors hold the same portfolio and assess "risk" in identical fashions. Thus, investments will have the same value for all investors

Given report lags and reported losses as of the end of 2015Q4 and of 2016Q1 for multiple AYs, how would you calculate the difference between actual and predicted reported losses?

Predicted Loss(ay) = Reported Loss(ay, prior eval) + IBNR(ay, prior eval) x [Lag(current eval) - Lag(prior eval)] / [1 - Lag(prior eval)]. The difference is then Actual Reported - Predicted.
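
A small Python sketch of this monitoring calculation (numbers illustrative):

```python
def predicted_reported(rep_prior, ibnr_prior, lag_prior, lag_now):
    # Expected emergence is the prior IBNR earned in proportion to the
    # change in report lag over the remaining unreported portion.
    return rep_prior + ibnr_prior * (lag_now - lag_prior) / (1 - lag_prior)

pred = predicted_reported(rep_prior=800.0, ibnr_prior=200.0,
                          lag_prior=0.80, lag_now=0.85)
print(pred)  # 800 + 200*0.05/0.20 = 850; compare to the actual reported
```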

Prediction error formula in the context of a stochastic Bayesian model.

Prediction error % = Bayesian Std Dev / Bayesian Mean

Describe the difference between the prediction error and the standard error.

Prediction variance = Process Variance + Estimation Variance Standard Error ^ 2 = Estimation Variance So, prediction error incorporates both process variance and parameter variance, whereas the standard error incorporates just parameter variance, ignoring process variance.

What is net leverage?

(Premium + Reserves) / Surplus.

Formula for net income

Premium - [ Expenses + Paid Claims + Unpaid Claims]

Explain why the slope factor in Fitzgibbon's linear regression method is typically not exactly unity.

Premium is not 1-to-1 with losses due to capping of losses in the retro formula, the loss conversion factor, and the tax multiplier.

Two examples of leverage ratios

Premium to surplus. Want this to be low. Reserves to surplus. Also want this to be low.

How do probability transforms measure risk?

Probability transforms measure risk by shifting the probability towards the unfavorable outcomes and then computing a risk measure with the transformed probabilities

Compare and contrast the process and parameter variances of the Cape Cod method and the LDF method.

Process variance: the LDF method's can be higher or lower than the Cape Cod method's. Parameter variance: Cape Cod's is lower, because it uses the exposure information and estimates fewer parameters (a single ELR rather than a separate ultimate for each AY).

Frank copula

Produces weak correlation in the tails. The correlated pair graph looks more spread out than for the other copulas, like a rectangle extending at a 45-degree angle from 0 to 1.

What is a risk of using market multiples to value an insurer, and what is a way to counteract that risk?

Risk profiles can vary greatly by P&C company due to primary lines written, types of coverage, etc. The ratios for one company might not reflect the ratios of another. Identify pure players that operate in only one LOB and use this to value the insurer in pieces based on premium volume.

What are RBC models and how do they differ from leverage ratios?

Risk-based capital models and leverage ratios both seek to evaluate capital adequacy. Leverage ratios use a variety of separate numbers, while RBC models try to combine the risks into a single number.

Describe how data lags might influence the underwriting cycle.

Since insurance pricing involves forecasting based on historical results, there are time lags between the compilation of the historical data and the implementation of the new rates. One theory is that these time lags lead to poor extrapolation by actuaries during the ratemaking process. Due to the lags, historical data may suggest that further rate increases are needed when rates have actually returned to adequate levels

Standardized deviance residual formula for ODP Cross-classified model [Taylor]

The standardized deviance residual is R_i = sgn(Y_i - Ŷ_i) x (d_i / φ̂)^0.5, where sgn is the sign (positive/negative), φ̂ is the estimated dispersion factor, and d_i is the contribution of that incremental paid loss to the total deviance.

Reasons why valuing equity as a call option isn't practical for an insurance company

The value would be a call option on the company's assets with a strike price equal to the face value of debt. Problems: • "Debt" isn't well defined for an insurance company. Policyholder liabilities are indistinguishable from other forms of debt for the equity holders • There's no clear single expiration date for an insurer's debt, considering policyholder liabilities

Briefly discuss a risk related to the UW cycle that should be considered within enterprise risk management.

There is the risk that pricing in the market will change (e.g., the market softens and prices fall), and the insurer makes the wrong decision about how to respond (e.g., trying to hold market share to make plan). This is a strategic risk that should be considered within ERM.

Why are high-growth/low-discount rate and low-growth/high-discount rate combinations unlikely?

These assumptions aren't independent. Rapid growth is unlikely without increased risk.

Briefly describe a situation in which external benchmarking is useful.

External benchmarking is beneficial when little information is available for analytics

Define external systemic risk and internal systemic risk. Provide two examples of each.

External systemic risk is risk from outside the valuation process that impacts all valuation classes and claim groups. - Economic risks such as inflation and interest rates - Event risks such as earthquakes and hurricanes. Internal systemic risk is risk internal to the liability valuation process that impacts all valuation classes and claim groups. - Specification error: it is not possible to fully replicate the insurance process, so any model we build will differ from the true process. - Data error: the risk that, due to imperfect data, the model will be inaccurate. - Parameter selection error: the risk that the model will not capture all the parameters and trends.

Define free cash flow.

FCF is all cash that could be paid out as dividends. It's usually net of any cash flow that is required to invest to maintain operations and generate company growth at the rate that is assumed in the forecasts.

Why is the free cash flow to equity method preferable to the free cash flow to the firm method?

FCFE is measured after any payments to bondholders, so that source of leverage is removed. If we use FCFF, we must also consider the leverage of debt issued; but then we must also consider leverage from reserves held for the benefit of policyholders, and the distinction between the two sources of leverage is arbitrary. The additional leverage complicates the calculation, so we prefer FCFE.

Formula for FCFE?

FCFE = Net Income + Non-Cash Charges (excl. Reserve Changes) - Net Working Capital Investment - Increase in Required Capital + Net Borrowing
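
Applied to made-up figures, a one-line Python check of the formula:

```python
net_income, non_cash, wc_invest, cap_increase, net_borrowing = 120.0, 30.0, 10.0, 25.0, 5.0
fcfe = net_income + non_cash - wc_invest - cap_increase + net_borrowing
print(fcfe)  # 120.0
```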

Formula for FCFE

FCFE = Net Income + Non-Cash Charges - Net Working Capital Investment - Increase in Required Capital + Net Borrowing

What are the two main examples of the DCF model and what is its implicit assumption?

FCFE and FCFF. Implicit assumption is that the free cash flow not paid as a dividend is reinvested to earn an appropriate return.

Formula for g in dividend growth model

g = plowback ratio * ROE

According to Goldfarb, how would you come up with a growth rate, g?

g = reinvestment rate x ROE. Equivalently, g = Reinvested Capital / Beginning Capital = (NI - FCFE) / Beginning Capital, since reinvestment rate = (NI - FCFE) / NI and ROE = NI / Beginning Capital. The reinvestment rate might be the plowback ratio; otherwise, reinvestment rate = Change in Capital / Net Income.

Explain how the abnormal earnings method represents an improvement over the DCF method.

Usually the DCF method assumes the free cash flow grows in perpetuity. In reality this is unlikely, as competitors will enter the market and reduce the incumbents' profit. The abnormal earnings method assumes the abnormal earnings last only for a specified period of time, which is more realistic.

Formula for VHM, EVPV, and Z in Bayesian Credibility in a Changing System

VHM = (E[x/y] x σ(y))^2; EVPV = σ(x/y)^2 x (σ(y)^2 + E[y]^2); Z = VHM / (VHM + EVPV).
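
A short Python sketch computing Z from these quantities (inputs illustrative; y is the size variable and x/y the ratio):

```python
def cred_z(mean_ratio, sd_ratio, mean_y, sd_y):
    vhm = (mean_ratio * sd_y) ** 2                    # variance of hypothetical means
    evpv = sd_ratio ** 2 * (sd_y ** 2 + mean_y ** 2)  # expected process variance
    return vhm / (vhm + evpv)

print(cred_z(mean_ratio=0.7, sd_ratio=0.1, mean_y=100.0, sd_y=20.0))  # ~0.653
```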

Define VaR and TVaR, and explain one significant limitation of the measure in the context of an ERM application.

VaR is the percentile of a distribution: the value a such that P(X <= a) equals the specified probability level. TVaR = E[X | X > a], the expected value of the distribution given that it exceeds the specified percentile. A weakness of VaR is that it is only a single point in the distribution and does not account for risk in the tail beyond that point. A weakness of TVaR is that it is linear in the tail and does not reflect the risk-averse attitude that a loss twice as large might be more than twice as bad.

A Tweedie Mack model with p = 1.48 is fit on a triangle of incremental paid losses. Briefly describe the frequency and severity distributions.

When 1 < p < 2, we have a compound Poisson distribution (i.e. Poisson frequency) with a Gamma severity distribution

Mean squared error for collective/individual loss reserve methods for a given AY.

mse(Z) = E[α^2(U)] x [ Z^2/p + 1/q + (1-Z)^2/t ]. The individual method corresponds to Z = 1 and the collective method to Z = 0.
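
A small Python sketch evaluating this at Z = 1 (individual) and Z = 0 (collective); the parameter values are made up:

```python
def mse(z, p, q, t, e_alpha2):
    # e_alpha2 plays the role of E[alpha^2(U)] in the formula above
    return e_alpha2 * (z ** 2 / p + 1 / q + (1 - z) ** 2 / t)

p, q, t, e_alpha2 = 0.4, 0.6, 1.2, 1.0
print(mse(1, p, q, t, e_alpha2))  # individual (chain-ladder-style) reserve
print(mse(0, p, q, t, e_alpha2))  # collective (burning-cost-style) reserve
```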

b) Briefly describe two tests for stability. c) Provide two methods for improving stability in development factors.

b) - Plot incremental residuals or age-to-age factors against time (e.g., AY) - State-space models: compare the degree of instability of the observations around the current mean to the degree of instability in the mean itself over time. c) Use a weighted average with more weight given to more recent years. Adjust the triangle for measurable instability (e.g., Berquist-Sherman adjustment to the latest claims settlement rate).

a) Provide two advantages of bootstrapping. b) Provide one disadvantage of bootstrapping.

a) - They allow us to calculate how likely it is that the ultimate value of the claims will exceed a certain amount - They are able to reflect the general skewness of insurance losses b) They are more complex than other models and more time consuming to create

An insurer with a substantial book of long-tailed business is using a plan loss ratio model to determine premium growth targets. The plan loss ratio is also used in the reserve review process as the expected loss ratio. a) Identify two potential negative consequences of an optimistic plan loss ratio to the company's financial results. b) Explain why it is difficult to separate operational risk from underwriting risk when explaining the impact of an optimistic plan loss ratio on the company's financial results in retrospect.

a) - Reserves may be set too low, and since it is a long-tailed line, it will take years to correct. - An optimistic plan loss ratio used as the ELR can lead to underpricing business, rating downgrades, policyholder exodus, claim-paying difficulty, and insolvency issues. b) Underwriting risk incorporates the random volatility in insurance; operational risk incorporates inadequate or failed internal processes and people. One could argue that the loss ratio deterioration is due to UW risk that could not have been modeled, or alternatively that the models were not appropriately used (operational risk).

An insurer with a large, stable book of personal auto insurance is building an internal model for solvency. The actuary building the model assumes that the loss ratios are positively correlated from one year to the next. a) Provide two reasons why this assumption is reasonable. b) Describe how the model should incorporate this assumption.

a) 1. Company will likely have many of the same customers from one year to the next 2. The pricing process takes considerable time to recognize changes and to take underwriting actions. b) If the model is forecasting a few years in the future, each year's loss ratio should consider the long-term mean, the previous year's forecast loss ratio, and a random element.

a) Describe the four stages of the evolution of a single LOB b) For each stage, state whether the primary driver is competition, data lags or both.

a) 1. Emergence: when a new LOB emerges, data is thin, demand grows quickly, and price cycles are highly volatile. 2. Control: stabilization results from restricting entry, standardizing insurance products, stabilizing market shares, DOIs, and regulation. 3. Breakdown: due to technological and societal changes, new types of competitors enter the market and take business away. 4. Reorganization: a return to the emergence stage, as a new version of the LOB has emerged. b) 1. Emergence is driven by competition. 2. Control is driven by data lags. 3. Breakdown is driven by both competition and data lags. 4. Reorganization is driven by competition.

Formula for slope b in the least squares development method and formula for a.

b = (E[xy] - E[x]*E[y]) / (E[x^2] - E[x]^2); a = E[y] - b*E[x]
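
A minimal Python sketch of the fit (data made up; chosen so that a > 0 and b ~ 1):

```python
def ls_fit(x, y):
    n = len(x)
    ex, ey = sum(x) / n, sum(y) / n
    exy = sum(a * b for a, b in zip(x, y)) / n
    ex2 = sum(a * a for a in x) / n
    b = (exy - ex * ey) / (ex2 - ex ** 2)
    a = ey - b * ex
    return a, b

a, b = ls_fit([100.0, 120.0, 110.0], [150.0, 170.0, 160.0])
print(a, b)  # a ~ 50, b ~ 1; the estimate of y given x is a + b*x
```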

In the least squares method, what should you use in the following cases: 1) a<0 2) b<0

1) When a<0, use chain ladder 2) When b<0, use budgeted loss

Briefly describe two reasons why historical PDLD ratios may differ from the retro formula PDLD ratios.

1) Worse (or better) than expected loss experience can cause more (or less) losses to be capped, resulting in historical PDLD ratios that are smaller (or larger) than the retro formula PDLD ratios. 2) Average retro rating parameters may change over time.

Risk margin based on the lognormal distribution.

1) σ^2 = ln(1 + CoV_total^2) 2) Risk margin = exp(z x σ - σ^2/2) - 1, where z is the standard normal quantile at the desired adequacy level.
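
A quick Python sketch at the 75th percentile (z ~ 0.6745) for a 20% total CoV; both inputs are illustrative:

```python
import math

cov = 0.20
z = 0.6745  # standard normal 75th percentile
sigma = math.sqrt(math.log(1 + cov ** 2))
margin = math.exp(z * sigma - sigma ** 2 / 2) - 1
print(margin)  # risk margin as a % of the central estimate, ~12%
```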

Identify three shortcomings of the Mack model. Propose an alternative model and briefly explain a feature of the alternative model that addresses each shortcoming.

1) The Mack model uses the cumulative loss reported to date to estimate the ultimate loss level for each AY; the reported-to-date amount is fixed, so the row acts as a fixed parameter. By allowing the row level to vary, we can add more volatility; the leveled chain ladder (LCL) model does this. 2) Mack assumes AYs are independent. If we remove this assumption, allow AYs to be correlated, and vary the level of the prior loss, we get the correlated chain ladder (CCL) model, which passes the K-S test due to the increased variation in the loss projection. 3) Mack only provides a mean and variance, not a full distribution. An alternative is the ODP bootstrap, whose output provides a full distribution in the form of the simulated results. 4) Mack doesn't capture the speed-up in settlement rates in today's environment. The changing settlement rate (CSR) model incorporates a settlement-rate variable and lets it change over time. 5) Mack doesn't allow for the incorporation of expert opinion or a valid prediction error around it. An alternative is a Bayesian credibility model, which allows for expert opinion and valid prediction error estimates.

Taylor's three theorems and their implications

1) If the Mack assumptions hold, the EDF Mack model's LDFs match the chain ladder's and are unbiased. Restricting to ODP, the LDFs, fitted cumulative losses, and reserves are minimum variance unbiased estimators (MVUEs). Implication: stronger than the non-parametric result; the LDF/reserve estimates are MVUE among all unbiased estimators, not just linear combinations of the age-to-age factors. 2) The MLE fitted incrementals match the chain ladder results. 3) Assuming the fitted incrementals and reserves are corrected for bias, they are MVUEs. Implications of 2 and 3: forecasts from the ODP Mack and ODP cross-classified models are the same, so we can work with the ODP Mack model without considering the cross-classified model directly.

Briefly describe three options for a risk-free rate in CAPM.

1) 90-day T-bills: free of both credit and reinvestment risk. 2) Maturity-matched T-notes: maturity matches the average maturity of the cash flows. 3) T-bonds: the most stable option, makes the most sense for corporate decision making, and can better match the duration of the cash flows and the market portfolio; here, risk-free rate = 20-year T-bond yield - term premium.

Discuss the steps to mechanical hindsight.

1) Apply the chain ladder method to data as of the valuation date to calculate a current unpaid estimate. 2) Iteratively, remove a diagonal of data and apply the same method to calculate unpaid estimates for prior valuation dates. 3) Compare the past estimates to the current estimate for the same accident years. The relevant payments made between the past valuation dates and the current valuation date should be added to the current unpaid estimate.

Briefly describe three meaningful reference points for setting capital requirements.

1) At a level where the company can service renewal business. 2) Sufficient capital to withstand and thrive after a catastrophe. 3) Rating agency requirement 4) Maximizes franchise value 5) At a multiple of a lower percentile (i.e. more reliable) loss.

Describe a meaningful reference point for setting capital requirements, and then develop a minimum capital requirement that relates a maximum capital loss tolerance to a TV@R measurement.

1) At a level where the company can service renewal business. Suppose renewals are 80% of the book and that the insurer can suffer capital loss at this level once in 10 years (90th pctl). To service 80% of the book, it can lose 20% of capital, so it needs 5 times as much capital as a 1-in-10 year loss = 5 * TV@R,90 2) To hold enough capital to not only survive a major CAT but thrive in its aftermath. Set minimum capital requirement to 6*TV@R,95. This ensures that a 1-in-20 event will deplete only 1/6th of the insurer's capital. So even after this event the insurer will have enough remaining capital to thrive.

In a 10x10 triangle, how would you determine whether the triangle has significant correlation (say at 10% where the t-value is 3.33)? Give the mean/var of the binomial distn

m = C(n-3, 2) = C(7, 2) = 21 testable pairs. The count of significant pairs is approximately Binomial(m, p) with p = 10% (the significance level): Mean = m x p, Variance = m x p x (1 - p). Flag the triangle as significantly correlated if the count exceeds Threshold = m x p + 3.33 x sqrt(m x p x (1 - p)).
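
A short Python sketch of the threshold calculation for the 10x10 case, using the card's 3.33 multiplier:

```python
import math

n, p, mult = 10, 0.10, 3.33
m = math.comb(n - 3, 2)              # 21 testable column pairs
mean, var = m * p, m * p * (1 - p)   # binomial mean and variance
print(mean + mult * math.sqrt(var))  # flag correlation if the count of
                                     # significant pairs exceeds this
```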

Expected percentile in creating a PP plot

100 x i / (n + 1)

Describe the connection between a GLM and a weighted linear regression model.

A weighted linear regression model is a GLM with the identity link function and normally distributed errors whose variances vary from observation to observation.

According to Portfolio theory, investors are only compensated for what type of risk? Why is this not practical?

According to portfolio theory, investors are only compensated for systematic (market) risk, not firm-specific risk. However, this is impractical: - Management cannot cleanly distinguish systematic from firm-specific risk. - Instantaneous risk: some risks have no time lag, while portfolio theory manages risk through discounting over time. - Market-based information is too noisy for management to use for cost-benefit analysis.

What should a histogram show if the model is a good fit?

All bars should have equal heights, meaning that the percentiles are uniformly distributed. A model that is a bad fit would have more high/low percentiles than in the middle.

Identify five advantages of a high deductible program.

- Achieves price flexibility while passing additional risk to larger insureds - Reduces residual market charges and premium taxes - Gives cash flow advantages to the insured - Provides an incentive for insureds to control losses while protecting them from large losses - Allows "self-insurance" without subjecting insureds to demanding state requirements

Describe two ways to handle ALAE under a high deductible program.

Approach 1: Account manages expense itself (i.e. ALAE not covered) - development patterns reflect losses only Approach 2: ALAE is treated as loss and subjected to applicable limits - development patterns reflect a combination of losses and expenses

Six components of a reinsurer's loss reserve.

Component 1: Case reserves reported by the ceding companies Component 2: Reinsurer additional reserves on individual claims Component 3: Actuarial estimate of future development on components 1 and 2 Component 4: Actuarial estimate of pure IBNR Component 5: Discount for future investment income Component 6: Risk load

Briefly describe two Bayesian models that include payment year trend and can be used to model paid losses.

Correlated incremental trend (CIT) model - models incremental losses using a payment year trend and a mixed lognormal-normal distribution, and allows for correlation between accident years. Leveled incremental trend (LIT) model - models incremental losses using a payment year trend and a mixed lognormal-normal distribution, but does NOT allow for correlation between accident years.

What are the seven types of external systemic risk?

Economic and Social Risks - inflation, social trends. Legislative, Political, and Claim Inflation Risks - changes in law, frequency of settlement vs. suits pursued to completion, loss trend. Claim Management Process Change Risk - changes in the process of managing claims. Expense Risk - the cost of managing unpaid claims. Event Risk - catastrophes or man-made events. Latent Claim Risk - claims that arise from a source not currently considered to be covered. Recovery Risk - recoveries from reinsurers or non-reinsurers.

Two adjustments to book value before using in abnormal earnings method

Eliminate systematic bias in reported asset and liability values Adjust reported book value to reflect tangible book value, which removes the impact of intangible assets such as goodwill (i.e. brand, reputation, etc.)

Provide four questions that must be answered before an econometric model can be built.

How do economic factors (ex. interest rates, inflation, cost of capital) influence the supply and demand curves? How does capital influence the supply and demand curves? How do the supply and demand curves jointly determine price and quantity? How does profitability affect external capital flows?

Provide two advantages of using the over-dispersed Poisson distribution to model the actual loss emergence.

Inclusion of scaling factors allows us to match the first and second moments of any distribution. Thus, there is high flexibility Maximum likelihood estimation produces the LDF and Cape Cod estimates of ultimate losses. Thus, the results can be presented in a familiar format

Provide three examples of economic drivers that affect insurance profitability.

Insurance profitability is linked to investment income The cost of capital is linked to the wider economy Expected losses in some LOBs are affected by inflation

Briefly describe how the leveled chain ladder (LCL) model and the correlated chain ladder (CCL) model performed on the incurred loss data analyzed in the paper.

LCL model - increased variability relative to the standard Mack model but still understated variability by producing light tails. Failed the KS test CCL model - increased variability relative to the LCL model and passed the KS test

Briefly describe two formulations for the skew normal distribution.

One formulation produces the skew normal distribution by expressing it as a mixed truncated normal-normal distribution Another formulation produces the skew normal distribution by expressing it as a mixed lognormal-normal distribution

Briefly describe three problems with the current application of trend rates.

Tend not to vary between accident periods Trend that occurs in the development period or calendar period direction is often not considered Tend not to vary by claims layer

Briefly describe two consequences of including a payment year trend in a model.

The model should be based on incremental paid loss amounts rather than cumulative paid loss amounts. This is because cumulative losses include settled claims which do not change over time Incremental paid loss amounts tend to be skewed to the right and are occasionally negative. We need a loss distribution that allows for these features

Briefly describe two classes for required capital.

Theoretical models - those that derive required capital and changes in it based on the calculated risk metrics from the enterprise risk model (such as VaR, TVaR, etc.) Practical models - those that derive required capital based on rating agencies (ex. BCAR, S&P CAR), regulatory requirements (ex. RBC, ICAR), or actual capital

Given the Tweedie subfamily, identify the distribution for p = 0, 1, 2, 3, and 1 < p < 2.

When p = 0, Y is a normal distribution. When p = 1, Y is an over-dispersed Poisson distribution (not the same as the standard Poisson). When p = 2, Y is a gamma distribution. When p = 3, Y is an inverse Gaussian distribution. When 1 < p < 2, Y is a compound Poisson distribution with a gamma severity distribution (common for insurance losses).

Company ABC is a growing regional insurer in the East Coast that primarily writes personal lines. ABC is setting up a new ERM program and its actuary wants to ground the program in current best practices. Define Enterprise Risk Management and discuss the key aspects of this definition that insurer ABC should keep in mind when creating a new ERM program

"ERM is the process of systematically and comprehensively identifying critical risks, quantifying their impacts and implementing integrated strategies to maximize enterprise value." ABC should keep in mind: - The program should be a regular process, not a one-time project - Risks should be considered enterprise-wide, focusing on those with the most significant impact to the firm's value (consider insurance, financial, strategic, and operational risks) - Should quantify risks where possible and incorporate correlations between them

Standardized Pearson residual formula for ODP cross-classified model [Taylor, see MP1]

(Act - Exp) / sqrt(dispersion factor x E[IncLoss]), where E[IncLoss] = alpha x beta for the corresponding AY and development age. *Note that this differs from Shapland's standardized Pearson residual: there, you calculate the unscaled Pearson residual as (Act - Exp) / sqrt(Exp), then multiply by the hat matrix adjustment sqrt(1 / (1 - H_ii)).

(a) Briefly describe the assumptions underlying the non-parametric Mack model. (b) Briefly describe the two results of the non -parametric Mack model. (c) Briefly describe how the EDF Mack model differs from the non-parametric Mack model.

(a) - AYs are stochastically independent - For each accident year k, the cumulative losses X_k,j form a Markov chain - For each AY and dev period: E[Loss] = LDF x Prior Loss and Var(Loss) = σ^2_j x Prior Loss. Note that these are the same assumptions as the standard Mack model. (b) 1. The conventional chain ladder LDF estimators are (i) unbiased and (ii) minimum variance among estimators that are unbiased linear combinations of the triangle's age-to-age factors. 2. The conventional chain ladder reserve estimator is unbiased. (c) The variance assumption of the non-parametric Mack model is replaced with the condition that Y_k,j+1 | X_k,j ~ EDF, where EDF is the exponential dispersion family (likely Tweedie).

a) Briefly describe three external systemic risks. b) For each external systemic described, briefly describe how to analyze it for CoV determination.

(a) - Claim management process change - Expense risk - Event risk (b) 1. Analyze past experience to identify past systemic episodes, discuss with claim managers to help identify. Sensitivity testing of key valuation assumptions is useful in the assessment of CoVs. 2. Discuss with product and claim management to better understand the key drivers of policy maintenance and claim handling expenses. Analyze past experience. 3. For outstanding claim liabilities, discussions with claim management should help set expectations on claim costs. For premium liabilities, output from proprietary CAT models can help as well as analysis around perils not covered by CAT models.

Briefly describe how the following components of a loss reserve could vary for a reinsurer versus a primary insurer. (a) Additional case reserves on individual claims (b) Pure IBNR (c) Risk load

(a) Reinsurers might add additional reserves on individual claims. Primary insurers don't need to add additional reserves since they booked the original reserves themselves. (b) A claim that is known to the primary insurer may not be known to the reinsurer. Thus, this claim is pure IBNR for the reinsurer but not the primary insurer. So the reinsurer will likely have a higher pure IBNR. (c) Risk load not typically present in primary insurer. Reinsurer will add risk loads for catastrophe protection (due to geographical concentration), to not recognize uncertain profits too quickly, because they'll need to carry a higher percentage risk load than the primary due to typically covering more volatile excess losses, etc.

(a) Give the formula for SSE. (b) Give the formula for adjusted SSE, and the values for the BF, CL, and CC methods.

(a) SSE = Σ (E[IncLoss_k] - IncLoss_k)^2. Do not include the first column in the calculation (it has no prior losses to predict from). (b) Adj SSE = SSE / (n - p)^2, where n = # of predicted incremental loss observations; p for BF = 2 x (# AYs) - 2; p for CL = # AYs - 1; p for CC is always the same as for CL.

A long-tailed book of business has a 1M limit. The average calendar year trend over exposure periods in the loss triangle is 7%. The average claim size at ultimate for the book of business is $470k as of the latest exposure period. (a) Discuss why the average LDFs calculated directly from the loss triangle limited to 1M must be adjusted before using to estimate unreported loss for the book of business at the limited layer. (b) Describe in what circumstances the difference between the adjusted LDFs and the LDFs calculated directly from the limited loss triangle would be minimal.

(a) There's a relationship between claims development, loss trend, and the claims size model. There is a large loss cost trend and the policy limit is relatively close to the average claim size. Also the book is long-tailed. This means the development pattern will need to be adjusted significantly to get appropriate CDFs for each exposure period. (b) If the policy/data limit was well above the working layer (e.g. claim size ~ $50k), the trend rate was low and there was a short development pattern, then the difference would be small.

Briefly describe two benefits to using copulas to express correlation from joint loss distributions.

- A copula can join any distributions, regardless of what family they are from - A copula can reflect increased correlation between the distribution in the tail - Copulas facilitate simulations of events, which can help in understanding how to mitigate risks.

Two limitations of DDM

- Actual dividend payments are discretionary and can be difficult to forecast - Due to increased use of stock buybacks as a vehicle for returning funds to shareholders, we may need to redefine "dividend"

One issue with the PDLD method is the failure to separate the basic premium ratio from the first PDLD ratio. Fully describe two problems that arise from the failure to separate these two items

- Cannot tell how much each line item contributes to the total slope of the first adjustment - Hard to analyze changes over time: is it due to changes in the average basic premium or premium responsiveness?

An insurer writes commercial and personal auto, as well as umbrella. Identify a source of external systemic risk with a relatively low correlation between two of the above lines of business. Name the two lines and provide a brief reason for the low correlation.

Latent claim risk, for personal auto and umbrella: a latent claim source such as asbestos could affect umbrella but is very unlikely to affect personal auto, so the correlation between these two lines for this risk source is low. (Claim management process change risk or legislative/political risk could be argued for other pairs.)

Identify two requirements of claim size models.

- Claim size model parameters can be adjusted for the impact of inflation - Limited expected values and unlimited means can be easily calculated

When does a marginal method of capital allocation work?

- Company growth must be homogeneous. - Risk measure is scalable.

Briefly discuss three different types of strategic decisions ERM models can help insurers with.

- Determine capital requirements to support its risk or maintain a credit rating. - Decide between different reinsurance programs to manage risk. - Identify risk sources that significantly contribute to the most adverse outcomes and the cost of capital to support them.

Briefly explain two benefits of using the retrospective rating formula method over using historical data when calculating the PDLD ratio.

- Formula can reflect pricing parameters that are currently being sold. Terms of policies may have changed since historical policies were written. - PDLD ratios calculated by the retro formula are more stable than those from empirical data. Patterns in historical data can be extremely volatile.

Briefly describe two reasons why it may be more appropriate to consider the correlation of a loss development triangle as a whole instead of correlations between pairs of columns.

- It's more important to know whether correlations globally prevail than to find a small part of the triangle with correlations. - Depending on the level of the significance used, some pairs of columns might show up as significant just by randomness. A single significant correlation would not be a strong indication of correlation within the triangle.

Briefly describe how parameter risk can be reduced.

- Having more data and better data quality - More advanced regression procedures - Sensitivity testing - Using expert judgment - Testing models

Two problems with the normal distribution as an approximation to the reserve.

- If data is skewed, it is a poor approximation - Allows for negative reserve estimates, even if a negative reserve is not possible.

Weighting various model results may produce negative IBNR. Describe two methods for adjusting the results for negative IBNR.

- If the distribution's shape and width are appropriate, add a fixed amount of IBNR to every simulated value - If just the distribution's shape is appropriate, multiply every simulated value by a factor to produce positive IBNR

When would you use a GLM bootstrap model over an ODP bootstrap model?

- If there are similar loss levels, we can group accident years. Reduces the # of parameters, improving AIC. - If there are calendar year effects. GLM can include a diagonal parameter but ODP cannot.

Provide two examples of calendar year effects that may cause loss development data not to be independent by accident year.

- Inflation - New claims processing system that speeds up the settling of claims

When there are negative incremental values in loss development data, the log-link function used in the GLM framework will fail to yield values usable to parameterize the model. Describe two modifications to the log-link function that address this issue.

- Instead of modeling ln(q), model -ln(-q). When q=0, model 0. - Shift the triangle up by a constant equal to the lowest negative value. When the model is run, subtract the constant from the model results.

Identify two reasons why loss reserve models often do not accurately predict the distribution of outcomes.

- Insurance loss environment is too dynamic to be captured in a single model. - There could be other models that better fit the existing data - The insurance loss environment has experienced changes that are not observable at the current time. - Data used to calibrate the model is missing crucial information (e.g. changes in claims process) - Because we only use a small sample of universe data we are likely to misestimate parameters.

Briefly explain a practical weakness with the estimation of free cash flows.

- It uses adjusted accounting measures that do not align with the balance sheet or other financial statements, which may make it difficult for management to understand. - It has a large terminal value, so it puts a lot of weight on the expected growth and discount rates.

Identify three sources of external systemic risk that may be material in a risk margin analysis and give an example of each.

- Legislative risk: changes in the legislative/political environment will impact claims (e.g., new regulation stipulating benefits in WC) - Claims management risk: changes in claim reporting, payment, or estimation processes (e.g., the claims department moves to a new claims management platform) - Latent claim risk: uncertainty from losses that may arise in the future from a source not currently considered covered (e.g., asbestos impacting WC).

Three methods of evaluating capital adequacy.

- Leverage ratios - Risk-based capital models: includes additional risks such as accumulation risk, reserve risk, credit risk. - Scenario testing: incorporates dependencies among risks, probability distributions.

Why do quantitative approaches to measuring independent risk not enable a complete analysis?

- Outcome will depend significantly on actual episodes of risk, not all the potential ones - Model is unlikely to pick up uncertainty internal to the valuation process

Discuss two consistency or reasonableness checks that an actuary could perform when analyzing independent risk CoV selections.

- Portfolio size: larger portfolios should have smaller CoVs due to less volatility from random effects. - Length of claim runoff: Longer tailed lines have higher CoVs due to more time for random effects to have an impact

Disadvantage of using a practical model for required capital. How can it be overcome?

- Practical models for risk capital (e.g. RBC as opposed to TVaR/EPD which are calculated with an ERM model) use proxies for risk such as premium and reserves. So a reinsurance program may not result in a large change in required capital if it doesn't affect premium/reserves by that much. - We can compensate for this disadvantage by building the practical models into the enterprise risk model. A capital score can be calculated for each scenario and a probability distribution of capital scores can be produced. Then, required capital can be set at different probability levels

Give two reasons why the choice of accounting rules is still important

- Reason 1: Accounting rules can influence the perception of the business' performance by those performing the valuation, resulting in an incorrect equity value - Reason 2: A more accurate accounting system will reflect more of the firm value in the book value rather than the terminal value

What improvement does the correlated chain ladder have over the Mack model?

- Recognizes more variability than the Mack model because it has a random accident year parameter and allows for correlation between accident years.

Describe three soft approaches to modeling the underwriting cycle.

- Scenarios - Create numerous scenarios for firms to think through how they might respond - Delphi method - obtaining expert consensus on an issue. Aggregate questionnaires from experts to get a consensus. - Competitor analysis - collect from competitors key financials, news items and behavioral metrics. Identify unusually profitable or distressed financial conditions over a large number of firms.

a) Briefly describe how the terminal value is calculated under the abnormal earnings method. b) Explain why this is an appealing way to calculate the terminal value.

- The abnormal earnings method assumes that abnormal earnings do NOT continue into perpetuity. Instead, they should decline to zero as new competition enters the market to capture some of those abnormal earnings - This is an appealing quality of the AE method since it forces analysts to explicitly consider the limits of growth from a value perspective (i.e. growth in earnings does not drive growth in value; we only grow in value if we exceed expected returns)

An actuary is reviewing results from an ODP bootstrap model. What should the actuary look for in reviewing standard deviations and coefficients of variation?

- The coefficients of variation should decrease from older to more recent years. The CoV for all years should be lower than any individual year. - The standard deviation should increase from older to more recent years, in line with the increase in expected unpaid loss.

Briefly describe two advantages of scenario planning.

- The insurer can plan responses to different potential market scenarios, so it has strategic action plans ready ahead of time for each scenario.
- There's less organizational inertia because there's some flexibility in the plan (e.g. if prices fall below rate adequacy, there's less pressure to meet the original plan numbers).

Discuss some of the key elements that differentiate the quality of a model.

- The model should reflect the relative importance of each risk
- The modelers should have a deep, fundamental understanding of the risks
- The modelers should have a trusted relationship with senior management

Six forms of traditional risk management.

- Transfer
- Avoidance
- Reduction in frequency
- Mitigation of the severity
- Acceptance
- Diversification
(Mnemonic: TARMAD)

Advantages of economic capital

- Unifying measure for all risks in an organization - More meaningful to management than RBC or Capital Adequacy ratios - Firm must quantify risks and put them into a probability distribution

Discuss five assumptions that must be met in order to implement the reserving procedure described in Sahasrabuddhe's paper.

- We need a basic limit (a level at which we consider the data credible)
- We need a claims size model
- We need a triangle of trend indices
- The procedure requires claim size models at maturities prior to ultimate
- The procedure requires that the data triangle be adjusted to a basic limit and a common cost level

What are the seven types of strategic risks? What are their risk levels to the insurer?

- Industry: capital intensiveness, overcapacity, commoditization, deregulation, cycle volatility. Risk: very high.
- Technology: technology shift, patents, obsolescence. Risk: low.
- Brand: erosion or collapse. Risk: moderate.
- Competitor: global rivals, gainers, unique competitors. Risk: moderate.
- Customer: priority shift, power, concentration. Risk: moderate.
- Project: failure of R&D, IT, business development or M&A. Risk: high.
- Stagnation: flat or declining volume, price decline, weak pipeline. Risk: high.

What is the main problem with Fitzgibbon's method?

- It ignores actual loss emergence, unlike the PDLD method.
- It also assumes premium responsiveness is constant (a linear relationship), when responsiveness actually declines as losses mature.

How do you model independent risk?

- Model process risk and the random component of parameter risk.
- Use the Mack method, bootstrapping, stochastic CL, GLMs, etc.
- Can model frequency and severity separately, determine CoVs for each, and combine them afterwards.

What are the components of independent risk?

-Process risk: randomness, tossing a die -Random component of parameter risk: we will not get the parameters correct due to past process risk

Insurer ABC is a large, multi-line insurer that writes both personal and commercial lines. ABC recently rolled out a new personal auto telematics product into 15 states. ABC is also in talks to acquire a small regional insurer. While the insurance industry has been in a hard market for years, the underwriting cycle has shown signs of softening. Identify and describe two strategic risks that ABC faces.

-Project risk - Failure of M&A: ABC faces the risk that the acquisition will destroy franchise value by clashes in culture, integration costs, etc. -Competitor risk - Rollout of a new product -Stagnation risk - There is a risk around the organizational response to the softening market, especially if there is a strong effort to maintain market share amid falling prices

What are the two sources of independent risk?

-Random component of process risk: pure effect of randomness in insurance process -Random component of parameter risk: past process risk compromises our ability to select appropriate parameters

Examples of additional analyses once risk margins have been selected

- Sensitivity testing
- Scenario testing (what event would lead us to consume the entire risk margin?)
- Internal benchmarking (compare against other valuation groups)
- External benchmarking
- Hindsight analysis (compare current estimates of liabilities to past estimates of those liabilities)
- Mechanical hindsight (value liabilities using a mechanical method, e.g. 5-yr LDFs, and repeat with information that is 1 yr old, 2 yrs old, etc.)

Describe two drawbacks to using default avoidance as the reference point for setting capital requirements in ERM.

-Shareholders would suffer at losses well before the company goes into default -The loss distribution is least reliable at extreme probabilities

Describe a risk of using market multiples to value an insurer.

-They use market values which can be volatile -Risk profiles can vary greatly by P&C company due to primary lines written, types of coverage, etc. The ratios for one company might not reflect the ratios of another.

What is the goal of independent risk assessment?

Use a model to "fit away" past systemic episodes, so that the residual volatility can be analyzed to derive the CoVs for independent risk.

Given percentiles of a model output, how would you perform the Kolmogorov-Smirnov test at 5% significance?

1) Calculate the critical value: 136/sqrt(n). 2) Sort the percentiles in ascending order. 3) Calculate the expected percentiles: 100*{1/n, 2/n, ..., n/n}. 4) Take the maximum absolute difference between actual and expected percentiles. 5) Compare to the critical value; if the maximum difference exceeds it, reject the hypothesis of uniformity.
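
A minimal Python sketch of these steps, assuming the percentiles are given on a 0-100 scale:

```python
import numpy as np

def ks_test_5pct(percentiles):
    """K-S test on model-output percentiles at 5% significance.
    Returns (max difference D, critical value, reject uniformity?)."""
    p = np.sort(np.asarray(percentiles, dtype=float))
    n = len(p)
    expected = 100.0 * np.arange(1, n + 1) / n
    d = np.max(np.abs(p - expected))
    crit = 136.0 / np.sqrt(n)
    return d, crit, d > crit
```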

Given Table M insurance charges and savings corresponding to the retro maximum/minimum and the loss elimination ratio for the per-accident limit, how would you calculate the incremental loss capping ratios that can be used in the PDLD retro formula?

1) Calculate the cumulative loss capping ratios: CL/L = 1 - Net Ins. Chg - LER = 1 - (Charge - Savings) - LER 2) Calculate the incremental loss capping ratios from the cumulative ratios and the percent of total losses emerged: IncrLossCapRatio,n = ( CL/L,n * %Loss,n - CL/L,n-1 * %Loss,n-1 ) / ( %Loss,n - %Loss,n-1 )
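
A short Python sketch of step 2, assuming cum_ratio and pct_loss are equal-length lists of cumulative capping ratios and cumulative percent of losses emerged (with no emergence before the first period):

```python
def incremental_capping_ratios(cum_ratio, pct_loss):
    """Incremental loss capping ratios from cumulative ratios (CL/L)
    and cumulative % of total losses emerged at each adjustment."""
    incr = [cum_ratio[0]]  # first period: cumulative = incremental
    for n in range(1, len(cum_ratio)):
        num = cum_ratio[n] * pct_loss[n] - cum_ratio[n - 1] * pct_loss[n - 1]
        incr.append(num / (pct_loss[n] - pct_loss[n - 1]))
    return incr
```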

Next year's plan premium for an insurer's auto book is $9,600,000 with an expected loss ratio of 67.2%. From a reserve analysis, the Cape Cod method is used with LDF curve fitting. Process variance is calculated with a variance/mean ratio of 74k. You are given a covariance matrix of ELR, ω, and θ. What are the steps to calculating the std deviation of losses for the prospective year?

1) Calculate the expected losses for the prospective year. - E[Loss,prosp]=Prem*ELR 2) Process variance = σ^2 ⋅E[Loss,prosp] 3) Parameter variance = Var(ELR)⋅Prem^2 4) StdDev( Loss,prosp ) = sqrt(ProcessVar + ParameterVar)
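
A worked sketch in Python, with a hypothetical Var(ELR) standing in for the value read off the covariance matrix:

```python
import math

premium, elr = 9_600_000, 0.672
sigma2 = 74_000    # process variance/mean ratio
var_elr = 0.002    # hypothetical Var(ELR) from the covariance matrix

expected_loss = premium * elr                     # step 1
process_var = sigma2 * expected_loss              # step 2
parameter_var = var_elr * premium ** 2            # step 3
std_dev = math.sqrt(process_var + parameter_var)  # step 4
print(f"std dev of prospective losses: {std_dev:,.0f}")
```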

Given table of incremental losses and a hat matrix from an ODP model (simplified GLM), what are the steps to calculating the scale parameter and standardized Pearson residuals?

1) Calculate the fitted incremental loss triangle: calculate LDFs from the empirical triangle, then start with the latest diagonal of cumulative losses and divide backwards by the LDFs.
2) Unscaled Pearson residuals = (Act - Exp) / Sqrt(Exp^z), where z depends on the variance assumption (0 = normal, 1 = ODP, 2 = Gamma).
3) Scale parameter = Sum(unscaled Pearson residuals^2) / (n - p), where n = # of cells and p = # of parameters (#LDFs + #AYs for a simple GLM).
4) Re-order the hat matrix diagonal into a triangle (AYs first, then dev periods).
5) Calculate the hat matrix adjustment factors: f,w,d = sqrt( 1 / (1 - H,w,d) )
6) Scaled (standardized) Pearson residuals = f,w,d * unscaled Pearson residuals
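
A compact Python sketch of steps 2-6, assuming the triangle cells and the hat-matrix diagonal have already been flattened into matching arrays:

```python
import numpy as np

def standardized_residuals(actual, fitted, hat_diag, n_params, z=1):
    """Standardized Pearson residuals for a bootstrap (z=1 for ODP)."""
    actual = np.asarray(actual, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    r = (actual - fitted) / np.sqrt(fitted ** z)     # unscaled residuals
    phi = np.sum(r ** 2) / (len(r) - n_params)       # scale parameter
    f = np.sqrt(1.0 / (1.0 - np.asarray(hat_diag)))  # hat-matrix factors
    return f * r, phi
```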

Steps to Sahasrabuddhe's simplifying method

1) Calculate the ult ratio = LEV(X)/LEV(B) 2) Calculate the ratio at other dev ages: Ult Ratio + (1 - Ult Ratio) * Decay factor 3) Calculate the adjusted CDFs: CDF,X,k = CDF,B,k * (Ult Ratio) / (Ratio at age k)

Steps to calculating internal systemic CoVs using a balanced scorecard.

1) Calculate the weighted average score. 2) Look up the average score in the CoV scale. Note: calculate the weighted average score first and then look up the CoV - do not weight the CoVs together.

Identify three causes of P&C company impairments.

1) Deficient loss reserves 2) Underpricing 3) Rapid growth

Steps to Caseload Effect.

1) Define the credibility inputs E[Y], σ(Y), E[X/Y], σ(X/Y). (These are not the caseload effect itself - think of the caseload-adjusted development estimate as the "CL" piece and these prior expectations as the "BF" piece.)
2) VHM = ( σ(Y) * E[X/Y] )^2; EVPV = σ(X/Y)^2 * ( E[Y]^2 + σ(Y)^2 ); Z = VHM / ( VHM + EVPV )
3) Set up two equations of the form E[X | Y=y] = x,0 + d*y:
- 1st equation: losses at age k assuming the caseload effect = (ult assuming caseload) * d + x,0
- 2nd equation: losses at age k assuming no caseload effect = (ult assuming no caseload) * d + x,0
Solve for x,0 and d.
4) Credibility-weight the caseload CL ultimate, (x - x,0)/d, against the a priori ultimate assuming no caseload.

Discuss two key relationships between Sahasrabuddhe's adjustments to CDFs and unadjusted CDFs.

1) Differences are greater for larger expected unlimited claim size which increases the expected loss in the layer between the basic limit and the new limit. 2) Differences are greater where trend and/or loss development act over longer time periods (long tailed lines) or when the loss trend is higher.

What are the four characteristic stages of the UW cycle?

1) Emergence - a new line of business emerges with little data. There's a cycle of price wars, leading to insolvencies and price corrections, and then resulting in new competition. 2) Control - stabilization of the market comes as DOIs control price changes 3) Breakdown - technological/social change breaks down the structure and new competitors enter to grab market share 4) Reorganization - a new version of the line/marketplace emerges

Four components of parameter risk.

1) Estimation risk. Estimating the form/parameters of freq/sev requires data, but there will never be enough of it. 2) Projection risk. Uncertainty over time and the uncertainty in projecting these changes. 3) Event risk. Causal link between a large unpredicted event and losses to the insurer. 4) Systematic risk. Risks that operate simultaneously on a large number of policies (e.g. inflation) and cannot be diversified away.

Steps to calculating premium asset.

1) Expected future loss emergence = Ult loss - Loss at prior booking 2) Calculate CPLD,n. Weight the future PDLD ratios by %Loss emerged. 3) Expected future prem = Exp fut loss * CPLD 4) Ult prem = Exp fut prem + Premium at latest eval 5) Premium asset = Ult prem - Premium at prior booking
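
These steps reduce to a few lines of arithmetic; a sketch with hypothetical argument names:

```python
def premium_asset(ult_loss, loss_prior, cpld, prem_latest, prem_prior):
    """Premium asset from the PDLD framework (names are placeholders)."""
    future_loss = ult_loss - loss_prior   # expected future loss emergence
    future_prem = future_loss * cpld      # step 3
    ult_prem = future_prem + prem_latest  # step 4
    return ult_prem - prem_prior          # step 5: premium asset
```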

Briefly describe two reasons why the Free Cash Flow to Equity method is preferred over the Free Cash Flow to the Firm method when valuing property and casualty insurance companies.

1) FCFF takes the total value of the firm then subtracts off debt. It is hard to distinguish policyholder liability from other debt. 2) FCFF uses a weighted average cost of capital (WACC) which requires the average duration of the debt, which is hard to calculate for policyholder liabilities.

Describe six implications of Mack's assumptions.

1) Factor f(d) is statistically significant 2) Superiority to other emergence patterns 3) Residuals imply linear pattern 4) Test residuals for stability over time 5) No correlation among columns 6) No particularly high/low diagonals (CY effects)

Discuss two options for estimating a firm's beta. For each option, briefly describe an advantage and disadvantage.

1) Firm beta. Computed using a linear regression on firm returns vs market returns. Takes into account information specific to the firm, such as its risk profile and the amount of leverage it has. However, it's often unreliable for individual firms due to statistical issues and changes in the firm's risk over time 2) Industry beta. Mean or median beta for the industry. More stable and reliable, but doesn't necessarily reflect the risk profile or leverage a firm has.

An actuary specifies prior distributions for AY ults as a Gamma distribution. Incremental losses follow an ODP model with a given dispersion factor. What are the steps to calculating the estimated unpaid losses using a Bayesian model for the BF method.

1) First, define the ODP model: E[IncLoss] = xi * yj = Ult,ay * %Pd,dev 2) Calculate the Gamma parameters for each AY: Mean = alpha/beta, Var = alpha/beta^2 3) Get the yj parameters for the ODP model (using volume-wtd LDFs), i.e. the % paid 4) Calculate the credibility weight, Z, for each cell: Z = Cum%Pd in prior dev period / ( Beta,AY * DispFactor + Cum%Pd in prior dev period ) 5) Calculate the expected losses for each future cell: E[IncLoss] = xi * yj, where xi comes from the actuary's prior distribution and yj comes from the volume-wtd LDFs 6) Credibility-weight the CL estimates of incremental losses with the BF estimates. Do it iteratively, feeding the cred-wtd incremental losses into the cumulative losses used for the next CL incremental estimate.

What are the two types of ODP models? Mean assumptions.

1) GLM approach - E[IncLoss] = mi,j = exp(c+αi +βj) 2) Row-column form -E[IncLoss]= xi * yj xi - Expected ultimate loss for accident year i up to the last development period of the triangle yj - % of ultimate loss emerging in development period j

Steps to heteroscedasticity fix: standard deviation in Shapland

1) Group together the ages or AYs with similar standard deviations. Calculate stddev(All)/stddev(Group) = hetero adjustment factor. 2) Adjust each sampled residual by: Residual * hetero(AY/age) / hetero(AY/age that the residual is being applied to) 3) Calculate the sampled loss as: Sampled loss = E[IncLoss,ay,k] + residual * sqrt(E[IncLoss,ay,k])
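
A minimal sketch of step 1, assuming the residuals and their group labels (e.g. development-age buckets) are given as equal-length arrays:

```python
import numpy as np

def hetero_adjustment_factors(residuals, groups):
    """Hetero adjustment factor per group: stddev(all) / stddev(group)."""
    residuals = np.asarray(residuals, dtype=float)
    groups = np.asarray(groups)
    sd_all = residuals.std(ddof=1)
    return {g: sd_all / residuals[groups == g].std(ddof=1)
            for g in np.unique(groups)}
```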

Outline and briefly discuss the five steps suggested by Mango and Venter for managing operational risk.

1) Identify exposure bases for each key operational risk source 2) Measure the exposure level for each business unit for each risk source 3) Estimate the loss potential (freq and sev) per unit of exposure for each operational risk 4) Combine steps 2 and 3 to produce modeled business unit loss freq and sev distributions 5) Estimate the impact of mitigation, process improvements or risk transfer on the business unit loss freq and sev distributions

Steps to scenario planning

1) Identify range of scenarios 2) Create responses to these scenarios 3) Implement a response according to the actual state of the market.

Options to handle extreme outcomes caused by negative incremental losses when using an ODP bootstrap model.

1. Identify and remove extreme iterations from results 2. Recalibrate the model to correct the source of negative incremental losses (e.g. model salvage and subrogation separately) 3. Limit incremental losses to a minimum of zero

Discuss two disadvantages of the Fitzgibbon method.

1) If the retro premium-to-date has been less responsive than expected, there's no way to get "back on track" because the method only estimates ultimate premium based on cumulative losses-to-date. 2) Fitzgibbon uses a constant slope, but we'd expect the slope to decrease for more mature losses and higher overall loss ratios because of loss limits and maximum premiums.

Discuss five issues in data that should be accounted for when implementing the ODP bootstrap model. For each data issue identified, briefly describe an adjustment to the ODP bootstrap model to address the issue.

1) Increasing exposures. -Divide losses by exposures and run model on pure premiums. 2) Last diagonal is not a complete year. -Annualize the losses, calculate LDFs, and then de-annualize the results. 3) First evaluation period is not a complete year (e.g. 6 months is the eval, but only 3 months have passed) -Calculate LDFs as normal. Estimate the losses for the most recent AY by dividing in half to approximate 50% earned that year. 4) Missing data - Impute based on the expected amount. When sampling, do not sample residuals from these cells. 5) Outliers - Exclude the outlier from the LDFs and the residuals. When doing simulations, sample a value in this cell.

Three key assumptions of Clark. For each, describe a graphical test to validate the assumption.

1) Incremental losses are iid. Plot normalized residuals against increment age. 2) The variance/mean ratio is constant. Plot normalized residuals against expected incremental loss. 3) Variance is approximated by the Rao-Cramer lower bound (i.e. the variance estimate is as low as it can be). Plot normalized residuals against calendar year: diagonal effects (e.g. a high-inflation year) would undermine the maximum likelihood framework on which the Rao-Cramer approximation relies.

Define operational risk, and discuss the seven types of operational risks.

1) Internal fraud - Employee theft, insider trading, claim falsification 2) External fraud - Computer hacking and claims fraud 3) Employment practices and workplace safety - Discrimination claims, repetitive stress, WC claims 4) Clients, products, and business practices - Client privacy, fiduciary breaches, money laundering 5) Damage to physical assets - Natural disasters, terrorism, vandalism 6) Business disruption and system failures - Hardware and software failures 7) Execution, delivery, and process management - Policy processing and claim payment errors

Identify three questions an actuary might ask if there was higher loss development than expected.

1) Is the deviation pure randomness? 2) Was the beginning IBNR reserve too small? 3) Are the lags (payment patterns) too short?

How should reinsurance data be partitioned?

1) LOB (property vs casualty) 2) Contract type (treaty vs facultative) 3) Reinsurance cover (quota share, XS per risk, XS per occurrence, XS of aggregate) 4) Attachment point

Discuss three variables for partitioning a reinsurance claims portfolio.

1) Line of business (property, casualty, ...) 2) Type of contract (facultative, treaty, ...) 3) Type of reinsurance cover (quota share, excess per-occurrence, CAT, ...) 4) Primary line of business - for casualty 5) Attachment point - for casualty

Discuss Siewert's four loss development methods for excess reserves. Give advantages and disadvantages for each.

1) Loss ratio method.
- Advantages: useful for immature AYs with little data; loss ratio estimates can be consistent with pricing estimates.
- Disadvantages: ignores actual loss experience; may not properly reflect account characteristics if development emerges differently from written exposures.
2) Implied development method (difference between unlimited and limited ultimates).
- Advantages: can estimate excess losses at early maturities, even before any excess losses have emerged; limited LDFs are more stable than the excess LDFs used for direct development.
- Disadvantage: we want to explicitly recognize excess loss development.
3) Direct development method (apply XS LDFs to XS losses).
- Advantage: focuses explicitly on excess losses.
- Disadvantage: cannot be used if no excess losses have emerged.
4) Credibility weighting (Bornhuetter-Ferguson): credibility-weight the direct development and loss ratio estimates, with Z = 1/XSLDF.
- Advantage: can tie to the pricing estimate in immature periods where no excess losses have emerged.
- Disadvantage: ignores actual loss experience to the extent of the credibility complement.

Discuss the four stochastic methods for the Chain Ladder.

1) Mack model. Mean = ldf * cumLoss. Var = o^2 * cumLoss -Easy to implement, but separate parameters for variance + LDFs. Also no predictive distn. 2) ODP. - Advantages: doesn't necessarily break down if there are some negative incremental values, gives same reserve est. as CL. -Disadvantages: connection to CL is not immediately apparent. 3) OD Negative Binomial. Mean = (LDF-1)*CumLoss. Var = disp x LDF x mean - Advantages: Same as ODP - Disadvantages: Column sums of incremental losses must be positive (or variance would be negative). 4) Normal distribution - Mean = (LDF-1)*CumLoss. Var = Disp * CumLoss - Advantages: can handle negative incremental losses - Disadvantages: Need to estimate separate parameters,ϕ j , for the variance apart from the LDFs.

Discuss three graphs Clark advises to review as diagnostics.

1) Normalized Residuals vs. Increment Age
- Tests how well the loss emergence curve fits incremental losses at different development periods.
2) Normalized Residuals vs. Expected Incremental Loss (µi)
- Tests the variance/mean ratio σ^2: if the ratio is not constant, residuals will cluster closer to zero at either high or low expected incremental losses.
3) Normalized Residuals vs. Calendar Year
- Tests whether there are diagonal effects (e.g. high inflation in a calendar year).
For all graphs, residuals should be centered around zero with no patterns or autocorrelations.

What are the steps to adjusting for heteroscedastic residuals in an ODP bootstrap model using scale parameters?

1) Overall scale parameter for the triangle = sum(unscaled residuals^2) / (N-p), where N = # of incremental values and p = # of parameters (#AYs + #LDFs + #hetero parameters) 2) Scale parameter for each heteroscedasticity group = [ N/(N-p) ] * sum(residuals^2 in group) / n, where n = # of values in the group (this reduces to the overall formula when there is a single group) 3) Hetero-adjustment factor h = sqrt( scale param overall / scale param of group )

Identify the three steps involved in reinsurer loss reserving.

1) Partition the reinsurance portfolio into reasonably homogeneous exposure groups that are relatively consistent over time with respect to mix of business 2) Analyze the historical development patterns. If possible, consider individual case reserve development and the emergence of IBNR claims separately 3) Estimate the future development. If possible, estimate the bulk reserves for IBNER and pure IBNR separately

Steps to Mack's methodology to test for correlations between subsequent development factors using a 50% CI.

1) Rank the development factors in each column, along with the corresponding ranks from the prior column; Sk = sum of squared rank differences 2) Calculate Tk = 1 - Sk / ( n*(n^2-1)/6 ) 3) Weight the Tk's together to get T, with Weight,k = #AYs - k - 1 4) E[T] = 0, Var(T) = 1 / [ (#AYs-2)*(#AYs-3)/2 ] 5) CI = 0 +- z * sqrt(Var(T)), with z = 0.67 for a 50% CI
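
A sketch of steps 3-5, assuming the column statistics Tk and their weights have already been computed (z = 0.67 for the 50% CI):

```python
import math

def mack_factor_correlation_test(t_k, weights, n_ays, z=0.67):
    """Mack's development-factor correlation test with a 50% CI.
    t_k: Spearman T statistics by column; weights: #AYs - k - 1."""
    T = sum(t * w for t, w in zip(t_k, weights)) / sum(weights)
    var_T = 1.0 / ((n_ays - 2) * (n_ays - 3) / 2)
    half = z * math.sqrt(var_T)
    return T, (-half, half), abs(T) > half  # True => correlation suspected
```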

Describe four aspects of reinsurance loss reserving that make it somewhat more difficult than primary loss reserving.

1) Report lag. Claim must first enter cedant's system, then a reinsurance department, then an intermediary, then the reinsurer. 2) Non-homogeneous data. Claims may be from different underlying types of business, have different attachment points, deductibles, and limits. These differences make it difficult to group the data into homogeneous groups. 3) Lack of detail. Cedant may not report all the detail of the claim, making it difficult to do a thorough analysis. 4) Systems. Reinsurers have problems coding loss data, complexity of reinsurance grows faster than the systems can handle.

Discuss three considerations in selecting an appropriate equity risk premium for CAPM.

1) Short-term vs long-term risk-free rates as a benchmark. Want to use the same risk-free rate in CAPM formula. 2) Arithmetic vs geometric averages. Arithmetic is best for single-period forecasts, geometric for multi-period forecasts. 3) Historical vs implied risk premiums. - Historical average-> need to select an appropriate time period - Implied -> based on current market conditions

Describe two sources of internal systemic risk. Briefly explain why each one might lead to a higher CoV selection for umbrella than for, say, commercial and personal Auto.

1) Specification error - the risk that the underlying process is too complex to be captured by the selected model. Umbrella claims are inherently more variable due to their high attachments and longer tail. 2) Parameter selection error - the risk that the model cannot accurately measure the predictors of claim cost, or trends in those predictors. Certain trends, like severity trend, are leveraged in excess layers, so for umbrella it is both more important and harder to get those factors right. 3) Data error - the risk of errors in the data, unreliable data, or a lack of knowledge of the underlying process. Umbrella is a more nuanced line than personal or commercial auto, with fewer industry statistics and benchmarks and less industry expertise, so data and knowledge gaps are much more likely.

An insurer is setting the groundwork to create and implement a new ERM model. What are the four considerations Brehm recommends for implementing an ERM model?

1) Staffing/scope - organization chart, resource commitment, roles and responsibilities, purpose, scope - Recommendations: team leader should have a rep for fairness and balance, team should commit full-time to the project, control of inputs should be similar to the general ledger, scope should be defined (prospective UW year only?) 2) Parameter development - Modeling Software: determine how much is pre-built and how much needs to be built - Developing Input Parameters: include expert opinion - Correlations: modeling team makes recommendations, corporate owns them - Validation and Testing: on an extended period 3) Implementation • Priority setting - Top management should set the priority for implementation • Communications - Regular communication to broad audiences • Pilot Testing - Do pilot testing to prepare stakeholders for the magnitude of the change • Education - Bring leadership to a base level of understanding about the model 4) Integration and maintenance • Cycle - Integrate into the corporate calendar (at least for planning) • Updating - Major updates to inputs no more frequently than semi-annually • Controls - Maintain centralized control of inputs, outputs and templates

Variance assumptions of the following three models: 1) Over-dispersed Poisson 2) Over-dispersed Negative Binomial 3) Normal

1) Var(IncLoss,ay,k) = Dispersion factor * E[IncLoss,ay,k] 2) Var(IncLoss,ay,k) = DispersionFactor x LDF,k x E[IncLoss,ay,k] 3) Var(IncLoss,ay,k) = o^2 * CumulativeLossesToDate Note that (1) is Shapland, (2) is Verrall, and (3) is Mack's model.

Identify three reasons why the current booked premiums may not equal the booked premium for the prior retro adjustment.

1) The timing of retro adjustments 2) Minor premium adjustments 3) Interim premium booking that occurs between the regularly scheduled retro adjustments

Four essential elements of the enterprise risk model.

1) UW risk: loss distributions, pricing risk (uncertainty in getting adequate premiums due to market conditions), parameter risk, and CAT modeling risk. 2) Reserving risk 3) Asset risk 4) Dependencies

Briefly discuss two potential shortcomings of a traditional unilateral planning approach in which one version of the plan is set.

1) Underwriters might chase policies at inadequate rates should market conditions not align with the plan. 2) Does not allow the company the flexibility to reduce market share when prices are low, which then robs it of the capital needed to increase market share when prices are high. 3) A suboptimal portfolio mix may result. If management had known that the true loss ratio would be worse than plan for a segment, they would have lowered the target premium volume for that segment. 4) Loss ratios might be too optimistic. If the company uses a BF approach to project the reserves, this can lead to adverse development later on.

In addition to valuing firms, P-E ratios have alternative uses. Briefly describe three alternative uses of P-E ratios

1) Validation of assumptions - if differences in P-E ratios between firms with comparable growth rates, dividend payout ratios, etc. cannot be explained by these key variables, we may need to revisit our assumptions 2) Shortcut to valuation - when industry average performance is expected, we can select a group of peer companies and use their mean or median P-E 3) Terminal value - we may rely on peer P-E ratios to guide the terminal value calculation

An insurer writes commercial property and excess casualty, and is considering using StdDev, TV@R, and WTV@R to allocate capital. Identify three other risk measures that would be suitable for this capital allocation and briefly describe their appropriateness for this portfolio.

1) Value of put option. This would take into account the market value to protect against our extreme events. So risk measure is proportional to market value, which is what we want. 2) Exponential moment. This considers all losses in the distribution, not just the tails. This is good since the company may suffer medium-sized losses not captured by TV@R. It also reflects skewness of the distribution, unlike standard deviation, so it works well for this portfolio. 3) Expected policyholder deficit on the transformed probabilities is a tail measure that would also address attitudes toward risk, which is important for a book with higher likelihood in the right tail.

An insurer writes Personal Automobile and Homeowners insurance in multiple geographic locations. Identify three questions an actuary could ask company management to help determine how to segment the claims portfolio into appropriate classes for estimating unpaid claims liabilities. For each question, briefly explain why it should be asked.

1. Are we selling a wide range of policy limits? Losses at different limits will develop differently, and we may want to group them. 2. Are we writing a lot in CAT-prone areas? CAT losses develop differently; we may want to separate CAT and ex-CAT losses. 3. What coverages are written on each line? Losses under different coverages develop differently; for instance, fire losses are more short-tailed than liability losses for Home.

Steps to calculating the variance of reserves in Clark using LDF method *Assume o^2 not given.

1. Calculate G(x) at each average age. 2. Calculate LDFs by taking G(trunc)/G(x) 3. Calculate each AY's ultimate loss using those LDFs. 4. Since o^2 is not given, calculate the expected incremental loss triangle. 5. o^2 = 1/ [ n-p ] * sum[ (Actual incloss - expected)^2 / expected ] Note: n = number of cells in loss triangle p (in LDF method) = #AYs + #Params in G(x) 6. Process variance = o^2 * Total Reserve 7. StdDev(Resv) = sqrt(process var + parameter var)
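
A sketch of steps 5-6 (σ^2 and the process standard deviation), assuming the actual and expected incremental triangles have been flattened into matching arrays and the parameter variance is handled separately:

```python
import numpy as np

def clark_sigma2_and_process_sd(actual_incr, expected_incr, n_params, reserve):
    """Clark LDF method: sigma^2 and process std dev of the reserve.
    n_params = #AYs + #parameters in G(x)."""
    a = np.asarray(actual_incr, dtype=float)
    e = np.asarray(expected_incr, dtype=float)
    sigma2 = np.sum((a - e) ** 2 / e) / (len(a) - n_params)
    return sigma2, np.sqrt(sigma2 * reserve)  # process variance piece only
```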

Steps to calculating goodness of fit in Venter. Want to compare to CL. Given f(d) and h(w), IncLoss triangle. Use adjusted SSE for comparison.

1. Calculate expected incremental loss triangles for both CL and the parameterized BF. 2. Calculate SSE = sum( (act - exp)^2 ) 3. Calculate adjusted SSE = SSE / (n-p)^2, where n = # of incremental loss observations (excluding the first column) and p = # of parameters in the model. The model with the lower adjusted SSE fits better.
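
A minimal sketch, assuming actual and expected are flattened lists of the qualifying cells (first column excluded):

```python
def adjusted_sse(actual, expected, n_params):
    """Venter's adjusted SSE = SSE / (n - p)^2."""
    sse = sum((a - e) ** 2 for a, e in zip(actual, expected))
    return sse / (len(actual) - n_params) ** 2
```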

Two important distinctions between AE and the DDM/DCF models.

1. DDM and DCF use accounting-adjusted net income to measure cash flows. This removes accounting distortions that push recognition of expenses/revenue into the future. It is not always an improvement, though: unadjusted accounting valuations may be more accurate over a finite horizon. 2. AE focuses on the source of value creation - the firm's ability to earn returns in excess of the required return. DDM/DCF focus on the effect of this value creation: the firm's ability to pay cash flows to its owners.

Steps to Siewert's loss ratio method to calculate the estimated ultimate excess losses given premium, the full coverage LR, the per-occurrence charge, and the per-aggregate charge. In this case, the workers comp policy has a 250k deductible and an aggregate limit. What we want is to calculate the ultimate loss given these two considerations.

1. Calculate the per-occurrence deductible loss charge. - Deductible Loss Charge = Prem * ELR * Per-Occ Charge (i.e. excess ratio) 2. Calculate the aggregate loss charge. - Agg Loss Charge = Prem * ELR * (1-Per-Occ Charge) * Per-Agg Charge 3. Expected XS losses = Deductible loss charge + Aggregate loss charge Because the WC policy has an aggregate charge as well as a deductible, we need to calculate both the excess losses and the basic losses that would be paid after the policy hits the agg limit.
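
A worked sketch of the arithmetic with hypothetical inputs:

```python
premium, elr = 1_000_000, 0.70  # hypothetical inputs
per_occ_charge = 0.25           # excess ratio at the 250k deductible
per_agg_charge = 0.10

deductible_charge = premium * elr * per_occ_charge
aggregate_charge = premium * elr * (1 - per_occ_charge) * per_agg_charge
expected_excess = deductible_charge + aggregate_charge
print(f"expected excess losses: {expected_excess:,.0f}")
```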

The amount of capital an insurance company holds is a function of what things?

1. Customer reaction. Declines in financial rating lead to declines in amount of business. 2. Capital requirements of rating agencies. 3. Comparative profitability of new vs renewal business. Renewal business is more predictable.

a) Three integration and maintenance details that need to be addressed when developing an internal model. b) For each integration and maintenance detail, provide a recommended course of action.

1. Cycle - integrate into planning calendar at a minimum 2. Updates - determine frequency/magnitude of updates, perform major input review no more frequently than twice a year 3. Controls - maintain centralized control of inputs, outputs and application templates

Company ABC is a growing regional insurer in the East Coast that primarily writes personal lines. ABC is setting up a new ERM program and its actuary wants to ground the program in current best practices. Discuss the four steps of the ERM process and how they relate to insurer ABC

1. Diagnose: Conduct a high-level risk assessment to identify the risks posing the most threat (e.g. event risk for hurricanes) 2. Analyze: Model critical risks where possible and incorporate dependencies between risks (e.g. use a CAT model) 3. Implement: Implement activities to manage risk such as avoiding it, reducing its occurrence, or mitigating its effects (e.g. purchase reinsurance) 4. Monitor: Continually review the program, compare to expectations, update and improve. (e.g. review reinsurance options)

What are the four aspects of parameter risk?

1. Estimation risk - Risk that form and parameters don't reflect the true form and parameters 2. Projection risk - Uncertainty in projecting from extrapolating from past trends to future 3. Event risk - Added uncertainty due to large, unpredictable events outside of company's control 4. Systematic risk - Impact a large number of policies and can't be diversified away. Example would be inflation.

Four types of parameter risk in an ERM model

1. Estimation risk - misestimation of model parameters due to imperfect data 2. Projection risk - refers to changes over time and the uncertainty in the projection of these changes. Examples of projections include trending frequency and severity to future periods AND loss development. Unexpected changes in risk conditions (ex. increase in driving due to cheaper fuel, criminals attack security vehicles because banks are more secure) also contribute to projection risk 3. Event risk - refers to situations in which there is a causal link between a large unpredicted event (outside of the company's control) and losses to the insurer. Examples include class-action lawsuits, latent exposures (asbestos), and legal decisions on policy wording (court decides to ban policy exclusion) 4. Systematic risk - refers to risks that operate simultaneously on a large number of individual policies. Thus, they are non diversifying and do not improve with added volume. Examples of systematic risk include inflation, as well as all of the previously discussed parameter risks

Describe the four types of parameter risks.

1. Event risk: large unpredictable event resulting in losses (catastrophe) 2. Estimation risk: wrongly estimating parameters due to imperfect data 3. Projection risk: changes over times and the uncertainty with projecting into the future 4. Systematic risk: risks that operate simultaneously on many different policies and cannot be diversified away (inflation)

[Venter] State three basic chain-ladder assumptions needed for least squares optimality.

1. Expected losses are proportional to losses reported to date. 2. Variance of incremental losses is a function of only age and losses reported to date. 3. Losses are independent between accident years.

How to renormalize ODP cross-classified parameters

1. Exponentiate the fitted alphas and betas.
2. alpha_norm = alpha * sum(betas); beta_norm = beta / sum(betas)
The purpose of re-normalizing is to put the parameters in the form of the regular ODP cross-classified model, with the βj parameters summing to 1.
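
A two-line sketch, assuming the fitted parameters are given on the log scale:

```python
import numpy as np

def renormalize(log_alphas, log_betas):
    """Exponentiate fitted GLM parameters and rescale so betas sum to 1."""
    a, b = np.exp(log_alphas), np.exp(log_betas)
    s = b.sum()
    return a * s, b / s  # expected cell = alpha_norm[i] * beta_norm[j]
```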

Difference between FCFF and FCFE

1. FCFE focuses on only cash flows to equity holders, while FCFF focuses on free cash flow to the entire firm, prior to accounting for any debt payments or tax associated with payments. 2. FCFF uses WACC to get a discount rate; FCFE uses CAPM (or something similar) to get a discount rate, which reflects risk to equity holders only.

Discuss three steps for strategy testing with an Enterprise Risk Model.

1. Identify key variables used to make strategic decisions 2. Define desirable goals to optimize (e.g. net income, economic value of the firm) and downside constraints (e.g. Maximum TV@R) 3. Come up with actions for possible scenarios (e.g. the response management would take in different parts of the underwriting cycle)

Types of strategic risk

1. Industry: capital intensiveness, overcapacity, deregulation. Risk is very high. Examples include U/W cycle, insurance as a commodity. 2. Technology: Technology shift, patents, obsolescence 3. Brand: erosion or collapse 4. Competitors: Global rivals, gainers, unique competitors 5. Customer: Priority shift, power, concentration 6. Project: failure of R&D, IT, business development, M&A 7. Stagnation: Flat or declining volume, price decline, weak pipeline

Describe the four risks insurers face.

1. Insurance hazard: risk assumed by insurer in exchange for premium 2. Asset: risk in the portfolio related to interest rates, foreign exchange rates, equity prices, etc. 3. Operational: risk associated with the execution of the business, e.g. IT systems, policy service systems, etc. 4. Strategic: Risk associated with making the wrong decision given the current market conditions

What are three methods for measuring the capital adequacy of insurers?

1. Leverage ratios 2. RBC and Rating Agency Capital 3. Scenario Testing

Three assumptions of chain ladder method and how you would test them.

1. Losses are independent between accident years. This implies no significant CY effects impacting losses from multiple AYs; test with Mack's calendar-year test. 2. Expected losses in the next development period are proportional to losses-to-date. Plot cumulative losses at one age against losses at the next age; the points should fall along a line through the origin. 3. The variance of losses in the next development period is proportional to losses-to-date, with a proportionality constant αk^2 that varies by age. Plot the weighted residuals against losses-to-date; they should be centered around zero with no trends.

Identify four common data issues associated with applying an over-dispersed Poisson bootstrap model and provide one adjustment for each.

1. Negative paid losses. If one of the LDFs is < 1, the expected incremental losses in that column are negative, which creates problems when calculating residuals. Limit incremental losses to zero, remove the row from the triangle, or use a Gamma distribution (see Shapland). 2. Missing data - Impute based on the expected amount; when sampling, do not sample residuals from these cells. 3. Heteroscedastic residuals - Group residuals into similarly sized groups, scale the residuals so the standard deviations match, and sample from these adjusted residuals. 4. Non-zero sum of residuals - Sampling from the residuals introduces a slight bias, which is often tolerable; to remove it, add a constant to each residual so they sum to zero.

How to quantify the paradigm that reinsurance can be viewed as capital?

1. Net benefit = Cost of capital reduction - Net reins. cost 2. Marginal ROE = Net reins. cost / Change in req. capital

What four things should risk management decisions be?

1. Objective 2. Consistent 3. Repeatable 4. Transparent

How would we quantify the paradigm that reinsurance provides stability? (4)

1. Probability distributions - Look at which reinsurance programs protect from the worst losses in different areas of the probability distributions. 2. Box/Space Needle View - Shows the probability in different ranges. Compare different programs based on which protect from the most unfavorable outcomes and which sacrifice profitable good years. 3. Cost-Benefit Diagram -For selected probability levels, the loss amount (net premium - net loss) is plotted against the Net Reinsurance Cost for the programs. - Review this diagram to see which programs deliver the best results at the various profitability levels. If one program is more expensive, but has worse losses at each of the probability levels, it's inefficient and shouldn't be considered. 4. Efficient Frontier - Graph the risk and return of different reinsurance programs at various probability levels. Look for programs that are clearly inefficient and below the efficient frontier.

Three reasons for purchasing reinsurance, despite the fact that it produces a net loss for the cedant.

1. Provides stability 2. Frees up capital 3. Adds market value to firm

What are four diagnostic tests to evaluate an unpaid CL GLM model for reasonableness? Assume you are given the output of mean unpaid, the standard error, and the CoV for five AYs.

1. The standard error should decrease for older AYs (increase for more recent AYs), since fewer losses are outstanding for older years. 2. The CoV should increase for older AYs: with few outstanding claims, the unpaid estimate is highly variable relative to its mean. Equivalently, the CoV should decrease for more recent AYs, which are dominated by the steadier stream of incremental claim payments. 3. The total standard error for all AYs combined should be higher than that of any individual AY. 4. The CoV for the total should be less than any individual AY's CoV.

Describe two procedures for adjusting for heteroscedasticity in an ODP bootstrap model.

1. Stratified sampling: Group residuals together based on the size of their variance. Only sample residuals from these groups. 2. Hetero adjustment. Group residuals together based on the size of their variance. Calculate adjustment factor as StdDev(All)/StdDev(Group). Multiply all residuals in group by that factor, then sample residuals, then divide by factors before using to calculate pseudo triangles.

Within the context of a stochastic framework, briefly describe two types of expert opinions that can be used to override the calculated parameters in a predictive reserving model.

1. Selecting LDFs - this may be used when payment patterns are changing due to a change in process and these changes haven't made it into the historical data yet. 2. Selecting row parameters, often expected ultimate losses. This may be useful when there is a change in expected losses that has not yet shown up in the data.

One of the Mack assumptions is that expected future incremental loss emergence is proportional to losses emerged to-date. a) Identify and briefly discuss three testable implications of the assumption above. b) For each implication, fully describe one test you could perform to evaluate whether the assumption is violated.

1. Significance of LDFs. Run a regression of IncLoss,k+1 against Loss,k and test whether the development factor (slope) estimates are statistically significant. 2. Superiority to alternative loss emergence patterns. If loss emergence is proportional to losses-to-date, this emergence pattern should outperform alternatives (e.g. the BF method). Compare the adjusted SSE between the CL model and the alternative models (e.g. BF/CC); if an alternative has a lower adjusted SSE, CL is not superior. 3. Linearity of the model. Plot the residuals against prior cumulative losses; they should be scattered randomly around zero with no trends or autocorrelation.

Identify and define the three main components of internal systemic risk when determining liability risk margins for a book of business.

1. Specification error: error from not perfectly replicating the insurance process 2. Parameter selection error: error in selecting parameters 3. Data error: not enough data to build an accurate model or knowledge of the underwriting process

What are two moment-based risk measures?

1. Standard deviation of change in capital - Pros: Simple to measure - Cons: Favorable deviations (increase to capital) are treated the same as unfavorable deviations 2. Exponential moment - Pros: Reflects all losses & places more weight on larger losses - Cons: Typically can't be calculated unless there are policy limits or maximum possible losses.

How would you set up a matrix to be used in linear regression to be used for testing for diagonal effects?

1. Y values = incremental losses (do not include the first column). 2. X values = prior cumulative losses, with a separate column for each age, plus dummy columns for the diagonals being tested.
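
A sketch of building such a design matrix, with hypothetical inputs (one row per incremental-loss observation, labeled with its age and diagonal):

```python
import numpy as np

def diagonal_design_matrix(cum_prior, ages, diagonals):
    """Design matrix for a diagonal-effects regression (a sketch):
    one slope column per age (prior cumulative losses) plus one
    dummy column per calendar-year diagonal."""
    cum_prior = np.asarray(cum_prior, dtype=float)
    ages, diagonals = np.asarray(ages), np.asarray(diagonals)
    age_cols = [np.where(ages == a, cum_prior, 0.0) for a in np.unique(ages)]
    diag_cols = [(diagonals == d).astype(float) for d in np.unique(diagonals)]
    return np.column_stack(age_cols + diag_cols)
```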

Implication #5: Correlation of Development factors

> Columns of incremental losses should be independent > Determine the correlation for each pair of columns and conduct a T-test at 10% significance > The number of significant pairs is then approximately Binomial(m, 0.1), where m = # of pairs, with mean 0.1m and standard deviation 0.3*sqrt(m); more than about 0.1m + 0.6*sqrt(m) significant pairs (two standard deviations above the mean) suggests real correlation

Implication #4: Test of stability

> Compares empirical LDFs down a column > if factors are stable, prefer to use entire history > if factors are unstable/show trend, prefer to take more recent average

Briefly describe the bolt-on approach to determining risk margins.

A bolt-on approach occurs when separate analyses are completed to develop the central estimate of insurance liabilities and/or the risk margins. It is called a "bolt-on" approach because it does not involve a single unified analysis of the entire distribution of possible future claim costs.

In the context of a retrospectively rated policy, explain how observed premium development to loss development could become negative. (i.e. you have positive incremental losses, but negative incremental premium).

A claim below the loss cap could have a reduction in reserves, reducing the capped losses and hence the premium (a negative incremental premium charge). Meanwhile, development on a loss above the cap could be larger than that reduction, resulting in positive incremental losses.

Higher leverage should result in what in the context of CAPM? How would you get an apples-to-apples comparison for a firm and market?

A higher beta. De-lever the firm's equity beta to obtain an all-equity beta that can be compared to the industry.

Briefly describe a key aspect of asset modeling.

A key aspect of asset modeling is modeling scenarios consistent with historical patterns. When generating scenarios against which to test an insurer's strategy, the more probable scenarios should be given more weight.

Discuss why management should be concerned about significant partial losses in capital.

A significant partial capital loss can damage franchise value, such as its reputation and agency relationships, resulting in a greater loss than just the financial loss to shareholder value. Rating downgrades below certain levels could be devastating even if the insurer survives.

Discuss an external systemic risk for which the assumption of constant correlation between lines of business across their loss distributions might not hold.

A snowstorm in Colorado would result in higher-than-normal correlation between Auto and Home.

AIC and BIC approximations

AIC ~ SSE * e^(2p/n)
BIC ~ SSE * n^(p/n)
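
As a one-line check, these approximations in Python:

```python
import math

def aic_bic_approx(sse, n, p):
    """Small-sample approximations: AIC ~ SSE*e^(2p/n), BIC ~ SSE*n^(p/n)."""
    return sse * math.exp(2 * p / n), sse * n ** (p / n)
```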

Types of real options.

- Abandonment option: end a project early and recover the net liquidation proceeds
- Expansion option: expand the scope of a successful project to capture more profits
- Contraction option: scale back the scope of a project
- Option to defer: hold off on a project until there is more information
- Option to extend: extend the life of a project by paying a fixed price

Abnormal earnings method

Abnormal earnings = NI - ReqReturn * Beg.BV Value = Beg. BV + PV(AE)

Formula for abnormal earnings

AbnormalEarnings = NI - k * BeginningBookVal
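
A sketch combining the two cards above, assuming (for simplicity) that all earnings are retained so book value grows by net income, and that abnormal earnings stop after the forecast horizon:

```python
def abnormal_earnings_value(bv0, net_incomes, k):
    """Value = beginning BV + PV of abnormal earnings (AE = NI - k*BV)."""
    value, bv = bv0, bv0
    for t, ni in enumerate(net_incomes, start=1):
        ae = ni - k * bv            # abnormal earnings for year t
        value += ae / (1 + k) ** t  # discount at the required return k
        bv += ni                    # roll book value forward (no dividends)
    return value
```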

If a GLM residual plot displays heteroscedasticity, what should be done to the weights?

Adjust them in inverse proportion to the variance of the residuals.

Briefly discuss one disadvantage of the FCFE method.

Adjusting projected Net Income to calculate forecasted free cash flows makes the interpretation of FCFE difficult (FCFE may bear little resemblance to internal forecasts).

Briefly describe one advantage and one disadvantage of using the chain ladder method on historical earned premium triangles.

Adv: Explicitly estimates premium asset, consistent with the way we develop losses Disadv: Premium data/adjustments are often lagged around 9 months and since premium depends on losses, you'll get quicker results by using losses instead.

Briefly describe an advantage and disadvantage of using an over-dispersed negative binomial distribution.

Advantage - The form of the mean is the same as the chain ladder method. Disadvantage - Column sums of incremental losses must be positive (or variance would be negative).

Discuss an advantage and a disadvantage to using the retro formula to calculate PDLD ratios, as opposed to historical data.

Advantage - responds to changes in the retro parameters for sold policies Disadvantage - must select retro rating parameters, which can be difficult as parameters will vary between policies sold.

Describe one advantage and one disadvantage of the Stanard-Buhlmann method over the chain-ladder method.

Advantage: More stable, not as responsive to high/low losses in early development periods. Disadvantage: Must on-level the premium.

Normal copula

Advantages of the normal copula:
- Easy simulation method
- Generalizes to multiple dimensions (i.e. more than two)
Tail behavior: the right tail is lighter than the Gumbel and HRT copulas, but heavier than the Frank copula.

Two advantages and disadvantages of the GLM bootstrap model

Advantages: - Can tailor the model to the statistical features of the data (i.e. incorporate CY effects with a diagonal parameter) - Can use fewer parameters (by grouping AYs) Disadvantages: - Simulation is slower because the GLM must be solved for - Can't directly explain the model using LDFs

Two advantages and disadvantages of ODP bootstrap model.

Advantages: - Using LDFs makes model more easily explainable. - The GLM uses a log-link and may not work with negative incrementals, but the simplified GLM will still get a solution. Disadvantages: - Unable to adjust for CY effects - Requires many parameters, might overfit the data.

Briefly describe two advantages and disadvantages for the GLM bootstrap model.

Advantages: • Can tailor the model to the statistical features of the data (e.g. can add a diagonal parameter, γ k ) • Can use fewer parameters to avoid over-parameterization (e.g. fewer AY parameters) Disadvantages: • Simulation is slower because the GLM must be solved for in each iteration • Can't directly explain the model using LDFs

Briefly describe two advantages and disadvantages for the ODP bootstrap model.

Advantages: • Using LDFs makes the model more easily explainable to others • The GLM uses a log-link and may not work with negative incrementals, but the simplified GLM will still get a solution Disadvantages: • Unable to adjust for calendar-year effects • Requires many parameters and can over-fit the data

An actuary sets up a Bayesian model using the Bornhuetter-Ferguson method, where incremental losses are assumed to follow an over-dispersed negative binomial distribution. Briefly describe an advantage of this model compared to the Bayesian Bornhuetter-Ferguson model that's based on the over-dispersed Poisson distribution, as described by Verrall.

An advantage is that this is a fully stochastic model in both the row and column parameters. The other model described by Verrall uses static column factors, the LDFs, calculated from the loss triangle.

A Tweedie Mack model is fit to loss data and produces more widely dispersed residuals than expected. Identify whether an increase or decrease in the power p underlying the model is warranted.

An increase in p is warranted.

Allocating capital is ____ and _____.

Arbitrary and artificial. Arbitrary because different risk measures give us different allocations. Artificial because the business unit has access to the entire capital of the firm.

Explain why parameter risk may be more significant for small insurers than large insurers.

As an insurer gets larger the process risk is reduced due to the law of large numbers. Parameter risk is not reduced significantly, and thus it becomes a more important part of total risk.

What is the point of the decay factors in Sahasrabuddhe's simplifying method?

At early maturities, few losses have pierced the upper limit of the lower layer, so the expected losses in the lower layer and the higher layer are almost the same. As more experience emerges, more losses pierce that limit and the expected losses in the two layers diverge. The decay factors ensure the ratio follows this pattern, grading from near 1 at early maturities to the ultimate ratio.

When building a model, various rules and metrics are used to select the best model form. However, the selected form may still be wrong. Describe a process to overcome this problem.

Basically, create a simulation mixture of all the better-fitting models. - Assign probabilities of being "right" to each of the models. - Use a simulation model to select a distribution from the better-fitting models - Select the parameters of the joint lognormal distribution. - Simulate losses - Start the process over again

Describe one advantage that a Bayesian approach has over a bootstrapping algorithm and one advantage that a Bayesian approach has over the Mack method.

Bayesian over bootstrapping: allows the actuary to provide expert opinion on the unpaid losses, while maintaining the integrity of the variance estimate of the unpaid losses. Bayesian over Mack: Mack provides a mean and variance for the unpaid losses (for each AY). The Bayesian method provides a full distribution of the unpaid losses, not just the first two moments.

Given bond yields and liquidity premium, risk-free rate to be used in CAPM is what?

Bond yield - liquidity premium

Describe why it is important to adjust for heteroscedasticity when using a bootstrap model.

Bootstrapping assumes residuals are independent and identically distributed. Heteroscedasticity violates this assumption because the residuals do not have constant variance. Put another way, the bootstrap model samples from all observed residuals to create new triangles from which to calculate LDFs; if residuals from different development periods or AYs have different variances, sampling them interchangeably is not appropriate.

-Insurer A is a small monoline workers compensation writer. -Insurer B is a large insurer that primarily writes personal lines, mostly Home, and has a large concentration of exposures on the East Coast. Compare and contrast the risks that are most significant to the two insurers.

Both face the following: - Asset risks due to market, liquidity, and credit risks - Operational risks (might be greater for A because of fewer controls in place) Insurance hazards vary: - Catastrophe modeling uncertainty will be larger for B - Reserving risk will be greater for A, because WC is a long-tailed line and the threat of reserve deterioration is much greater.

An actuary is developing a confidence interval for the unpaid losses and is deciding between using the Mack model and a stochastic Bayesian model. Discuss the similarities and differences between the two approaches, including their relative advantages and disadvantages.

Both methods can be used to estimate the mean and variance of the loss reserve. The Mack model is a non-Bayesian method that is based on the Mack assumptions. An advantage is that it is easy to implement. A disadvantage is that it doesn't produce a predictive distribution and additional parameters need to be calculated. A stochastic Bayesian model is more flexible than the Mack model. An advantage is that it creates the full predictive distribution of loss reserves instead of just the mean and variance (like Mack). Also, we can incorporate expert opinion without violating the assumptions underlying the model. A disadvantage is that it's more complicated and requires statistical software.

Compare and contrast the Fitzgibbon method and PDLD methods.

Both methods use loss development to estimate the premium asset. The Fitzgibbon method relates the ultimate loss ratio estimate to the estimated retro premium ratio linearly. Premium responsiveness is constant. In contrast, the PDLD method focuses on future expected loss to estimate future expected premium. Premium responsiveness falls over time as more losses are capped in the retro formula.

Plot to identify outliers

Box and whisker plot

Formula: Development factor for layer X given CDFs at basic limit [Sahasrabuddhe]

CDF(X; ay) = CDF(B; latest ay) * [ LEV(X; ay, ult) / LEV(B; latest ay, ult) ] / [ LEV(X; ay, k) / LEV(B; latest ay, k) ]
B ~ basic limit, X ~ layer of interest, k ~ development period
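A numeric sketch of the formula; all CDF and LEV inputs below are hypothetical:

```python
# Hypothetical inputs for one accident year at development age k.
cdf_basic = 2.50        # basic-limit CDF for the latest AY
lev_X_ult = 95_000      # LEV at limit X, AY cost level, at ultimate
lev_B_ult = 60_000      # LEV at basic limit, latest-AY cost level, at ultimate
lev_X_k   = 80_000      # LEV at limit X, AY cost level, at age k
lev_B_k   = 55_000      # LEV at basic limit, latest-AY cost level, at age k

cdf_X = cdf_basic * (lev_X_ult / lev_B_ult) / (lev_X_k / lev_B_k)
print(round(cdf_X, 3))  # 2.721
```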

How could you show that the actuary's belief is not very strong in a Bayesian approach?

Calculate Z, the credibility weight between the CL and BF estimates. If Z > 0.5, more weight is given to the data (CL) than to the prior, which signals the actuary is not that confident in the prior distribution.

What is the benefit of using a Bayesian approach to incorporate expert opinion?

Can do so within a stochastic model and get a reserve range without compromising any underlying assumptions.

An insurer writes commercial and personal auto, as well as umbrella. Identify a source of external systemic risk with a relatively high correlation between two of the above

Catastrophe risk would affect both personal and commercial auto: a catastrophe affects an entire area, so if both lines have exposures in that area, both will experience large losses at the same time.

What improvement does the CSR model have over the Mack model?

Changing settlement rate (CSR) model: incorporates an additional parameter to account for a change in claim settlement rates over time.

What is the co-TVAR?

Co-measures are risk measures for components that, when summed, equal the overall risk measure. Co-TVaR is the TVaR co-measure: the average loss of a given LOB or business unit in the scenarios where total company losses exceed the TVaR threshold.

What are some internal benchmarks for reviewing external systemic risks?

CoVs should be higher for long-tailed lines, except for event risk for Property and liability risk for Home.

What are some internal benchmarks for reviewing internal systemic risks?

Compare CoVs by valuation class. - If template models are used for similar valuation classes, we would expect similar CoVs. - If similar valuation methodology is used on both long and short-tailed lines, we would expect a higher CoV for the long-tailed class.

What does basic premium cover?

Covers company expenses, the insurance charge for the max/min premium, and the excess loss charge for a per-accident loss limit

What is the loss conversion factor?

Covers loss adjustment expenses

[Venter] Given a table of incremental losses, how would you set up a table of the data that can be used in a regression model to test for diagonal effects?

Create a table so that the regression models the incremental losses as the dependent variable. The independent variables in the model are the prior cumulative losses and the diagonal number. -Include a dummy variable for the diagonal (1 or 0). Do not include the first two diagonals, to avoid overparameterizing. -The 1st column shows the incremental losses (dependent var), excluding the 1st dev period, since there are no prior cumulative losses. -There is a column for each development period to show the prior cumulative losses of the incremental losses in column 1.

Provide four sources of competitor intelligence that could be used to inform an underwriting cycle model

Customer surveys, trade publications, news scanning and rate filings.

State (in words) Sahasrabuddhe's key finding.

Development factors at different cost levels and different layers are related to each other based on claim size models and trend

Formula for deviance.

Deviance = 2 * (loglikelihood of saturated model − loglikelihood of fitted model) Choose the model with the lowest deviance. The saturated model has a parameter for each observation, so it fits the observations perfectly.
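A quick sketch for a Poisson error structure, where the saturated model sets each fitted mean equal to the observation (values illustrative):

```python
import numpy as np

y = np.array([12.0, 7.0, 9.0, 4.0])     # observed values
mu = np.array([11.0, 8.0, 8.0, 5.0])    # fitted means from the candidate model

# For a Poisson GLM, the saturated model has mu_i = y_i, so the (unscaled)
# deviance reduces to 2 * sum[y*ln(y/mu) - (y - mu)].
deviance = 2 * np.sum(y * np.log(y / mu) - (y - mu))
print(round(deviance, 4))   # ~0.5538
```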

Formula for earned risk pure premium

ERPP = prem - (reins comm + brok. fees + expenses)

Formula for economic value added

EVA = NPV Return - Cost of Capital

Formula for E[Z] and Var(Z) in Mack's CY effects test

E[Z] = n/2 − (n−1 choose m) * n/2^n
Var(Z) = n(n−1)/4 − (n−1 choose m) * n(n−1)/2^n + E[Z] − E[Z]^2
where n is the number of relevant elements on the diagonal and m = floor((n−1)/2)
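A small helper to evaluate these moments, with m = floor((n−1)/2) as in Mack (1994):

```python
from math import comb, floor

def z_moments(n):
    """E[Z] and Var(Z) for a diagonal with n relevant elements (Mack 1994)."""
    m = floor((n - 1) / 2)
    ez = n / 2 - comb(n - 1, m) * n / 2 ** n
    var = (n * (n - 1) / 4
           - comb(n - 1, m) * n * (n - 1) / 2 ** n
           + ez - ez ** 2)
    return ez, var

print(z_moments(5))   # (1.5625, 0.37109375)
```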

What is earned risk pure premium? What is adjusted premium?

Earned risk pure premium is calculated by taking the actual premium and netting out reinsurance commissions, brokerage fees, and internal expenses. Adjusted premium is calculated by on-leveling premium.

How would you use options to value a firm? Why is it difficult to implement? When is it useful?

Equity value is equal to a call option on the firm's assets with a strike price equal to the undiscounted value of the liabilities. There are practical limitations: different classes of debt, questions around which inputs to model. Useful for valuing projects (real options)

An actuary is modeling the variability of an insurer's homeowners ultimate losses for the next accident year as part of an internal solvency model. Identify and briefly explain three key elements of uncertainty inherent to the loss modeling process.

Estimation risk - uncertainty in estimating the parameters Model risk - We may have not chosen the correct model which will lead to additional error Projection risk - Our projections may be off due to systemic changes; that is, the current underlying parameters may change in the future.

Briefly discuss two problems with the FCFF model.

FCFF values equity indirectly as the value of the firm net the market value of debt. 1) Policyholder Liabilities vs. Debt - FCFF treats debt similarly to equity as a source of capital. The distinction between policyholder liabilities and debt is arbitrary and there's no good reason to treat them differently. 2) WACC and APV - FCFF uses the weighted average cost of capital (WACC) to discount free cash flows. This reflects risk to both debt and equity holders, but it's difficult to define WACC because of policyholder liabilities.

An actuary is modeling the variability of an insurer's homeowners ultimate losses for the next accident year as part of an internal solvency model. The actuary uses a normal distribution to model the losses, with the parameters selected using five years of internal loss history. The actuary has just learned of a recent court ruling affecting homeowners coverage triggers in the state where the insurer writes the most premium. For each of the three elements of uncertainty (estimation risk, model risk, projection risk), suggest an improvement to the actuary's modeling process that would decrease the overall uncertainty in modeled losses.

Estimation risk - use data from other similar books of business to select parameters or use a longer time period Model risk - consider using a fatter tail distribution (normal is typically too thin-tailed) Projection risk - attempt to consider how the trigger change will affect the selected parameters.

Cost of issuing a surplus note.

Face Value * (Pre-tax coupon rate - Bond Yield)

Within the context of using Gamma models to reflect expert opinions, discuss how the value of Beta influences the trade-off between the chain-ladder and BF estimates in a Bayesian model.

For a Gamma distn: Mean = alpha/beta and Var = alpha/beta^2 A large Beta means a smaller variance. In this case, if it's used to describe a parameter, a large Beta will give less weight to the data and more weight to the expert opinion (i.e. a priori for BF)

What are the similarities and differences between the ODP bootstrap and GLM bootstrap models?

GLM and ODP bootstrap produce the same results when: - There's a separate parameter for each AY/dev period. - There are no CY parameters. - The GLM bootstrap model uses a log-link function and ODP error distn. The GLM bootstrap model fits a GLM to an incremental loss triangle. For each iteration, the GLM is re-fit to the sample loss triangle. The ODP bootstrap uses volume-wtd LDFs to calculate expected incremental losses from the original triangle. The CL method is then reapplied to each sample loss triangle to get expected incremental losses.

Discuss how different investment strategies and reinsurance options can be compared using the ALM analysis to help management improve investment and reinsurance strategy.

For each investment/reinsurance strategy, run thousands of simulations with the model and plot the return vs risk metrics to create an efficient frontier. Management should consider moving the company strategy toward one of the portfolios on the frontier.

A Tweedie distribution with p=1.65 is a mixture of which two distributions

Gamma and Poisson

Describe Gron's supply curve in two states: initial and post-shock. Explain the curves in words.

Gron sets a minimum price level; there is also an asymptotic maximum price. Up to a certain quantity, companies sell products at the minimum price. At some point, a capacity threshold is reached and companies must raise the price to take on more business (more risk capital is needed to support more business). Eventually, the price (i.e. the premium) reaches a point where it can adequately fund the business. When capital is restricted, the capacity threshold is reached sooner (the post-shock curve).

What is the dividend discount model extremely sensitive to?

Growth and discount rates because the majority of the valuation can be in the terminal value.

Acronym for strategic risks

Icy Turgid Beliefs Change Common Pretty Sponges

Industry - capital intensiveness, overcapacity, commoditization, deregulation, cycle volatility
• Insurer risk: Very high
• Examples: underwriting cycle; insurance as a commodity

Technology - technology shift, patents, obsolescence
• Insurer risk: Low
• Examples: data management; innovations in distribution over the internet

Brand - erosion or collapse
• Insurer risk: Moderate
• Examples: reputation loss through bad press or class action lawsuits

Competitor - global rivals, gainers, unique competitors
• Insurer risk: Moderate
• Examples: predatory pricing from competitors; entrance into new markets with inadequate expertise or systems

Customer - priority shift, power, concentration
• Insurer risk: Moderate
• Examples: an issue with large commercial insurance business

Project - failure of R&D, IT, business development or M&A
• Insurer risk: High
• Examples: value-destroying mergers and acquisitions (ignoring integration costs, cultural incompatibilities, reserve deficiencies, ...); underinvesting in R&D and IT

Stagnation - flat or declining volume, price decline, weak pipeline
• Insurer risk: High
• Examples: response to changes in the underwriting cycle (e.g. maintaining market share at inadequate prices)

Explain why heteroscedastic residuals might cause issues when using a bootstrapping technique to estimate the variance of unpaid claim estimates.

In bootstrapping, we assume the residuals are independent and identically distributed (iid). This allows us to draw residuals from any part of the triangle and apply them to any other part of the triangle. If the residuals are heteroscedastic, they are no longer identically distributed. We will end up with too much variance applied in some portions of the triangle and not enough in others.

An ODP cross-classified GLM is fit to a triangle of incremental paid losses. A plot of standardized Pearson residuals shows that variance is increasing with development age. How would you allow for unequal variances in the GLM set up?

Instead of a constant φ where observations are given equal weight, we can allow for unequal variances in a GLM by giving less weight to the observations with larger variances. For example, suppose the residuals separate into two distinct standard deviation groups, with group 2's standard deviation 2.5x that of group 1. Since weights are inversely proportional to variance, the weights for group 2 should be set to 1/2.5^2.
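A sketch of the weighting, assuming a hypothetical split of development ages into two variance groups:

```python
import numpy as np

# Development ages of the observations in a flattened incremental triangle.
dev_age = np.array([1, 2, 3, 4, 1, 2, 3, 1, 2, 1])

# Suppose a residual plot shows ages 1-3 in one variance group and age 4 in a
# second group whose standard deviation is 2.5x as large. Weights are inversely
# proportional to variance, so the noisier group gets weight 1/2.5^2 = 0.16.
weights = np.where(dev_age <= 3, 1.0, 1.0 / 2.5 ** 2)
print(weights)
```

These weights would then be supplied to the GLM fitting routine (for instance, the var_weights argument of statsmodels' GLM).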

An actuary is creating an ODP bootstrap model, but is concerned that the model output for extreme simulations will be less extreme than she believes is appropriate. Discuss an adjustment to the model that would better incorporate the possibility of extreme events that are outside historical experience.

Instead of sampling directly from the historical residuals, fit a distribution to the residuals (e.g. a normal distribution) and sample residuals from the fitted distribution. This allows more extreme residuals to be sampled in the simulations than appear in the historical residual triangle. This method is parametric bootstrapping.
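A side-by-side sketch of the two sampling approaches, using illustrative residuals and a normal fit:

```python
import numpy as np

rng = np.random.default_rng(42)

# Standardized residuals from the fitted ODP model (illustrative values).
residuals = np.array([-1.2, -0.4, 0.1, 0.3, 0.8, 1.5, -0.6, 0.2])

# Standard (non-parametric) bootstrap: resample observed residuals.
sampled_np = rng.choice(residuals, size=residuals.size, replace=True)

# Parametric bootstrap: fit a normal distribution to the residuals and sample
# from it, allowing draws more extreme than anything actually observed.
mu, sigma = residuals.mean(), residuals.std(ddof=1)
sampled_p = rng.normal(mu, sigma, size=residuals.size)

print(sampled_np.round(2), sampled_p.round(2))
```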

When estimating unpaid losses for a long‐tailed line of business, the calendar year trend becomes important in estimating the mean of the unpaid losses, as well as the variability. One model to forecast future calendar year trend is to estimate future trend, for example, from a Normal distribution. A weakness of this method is that it does not provide enough variability in the results. Provide an alternative model for future calendar year trend and explain how it resolves this weakness.

Instead of using a single future trend, model a series of future loss trends, possibly with an AR(1) series. This allows for each future calendar year to have a different level of trend, which is more realistic, and is more variable than a single future trend.
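A minimal sketch of such an AR(1) trend series; phi, mu, and sigma are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)

n_years, phi, mu, sigma = 10, 0.6, 0.03, 0.01   # assumed AR(1) parameters

trend = np.empty(n_years)
trend[0] = mu
for t in range(1, n_years):
    # Each future CY gets its own trend, pulled toward mu but correlated
    # with the prior year's trend.
    trend[t] = mu + phi * (trend[t - 1] - mu) + rng.normal(0, sigma)

print(np.round(trend, 4))
```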

Describe how the choice of valuation classes within a claims portfolio can affect internal systemic risk.

It is important to have homogeneous valuation classes, especially over time. If a book of business with a different development pattern than the rest of the valuation class is growing quickly, the class's development factors will be based on the old mix of business and applied to the new mix of business. This could lead to systemic errors in estimating the reserves.

An insurer is currently holding capital at the 1-in-4256 VaR level. Given this information, explain why the insurer might select the 1-in-4000 VaR as its target capital level.

It's a round number, and a lower return period (1-in-4000 vs. 1-in-4256) is a less extreme probability level that can be estimated more reliably.

Why would we use a deviance residual over a Pearson residual when validating a GLM?

Pearson residuals will display non-normality if the data is skewed. Deviance residuals remove much of the non-normality present in Pearson residuals.

What improvement does the leveled chain ladder have over the Mack model?

LCL uses random level parameters for each accident year (the Mack model uses fixed losses-to-date)

In least squares development, what are the components for weighting between budgeted loss and link ratio estimates?

LDF = E[y] / E[x]
Z = b / LDF
Link ratio estimate = LDF * x (the data point you're developing)
Budgeted loss estimate = E[y]

What is the link ratio method?

LDF = E[y]/E[x]

In the least squares method, what is the LDF you would use for the link ratio method? And what is the credibility factor for link ratio method?

LDF = mean(y)/mean(x) Z = b/LDF

An actuarial student sums the squared standard errors of the loss reserves by accident year to estimate the total reserve confidence interval. Discuss what's wrong with this approach and briefly discuss whether the correct confidence interval or the total reserve will be larger or smaller than the actuarial student's estimate.

Larger, because the student is not accounting for the correlation between AYs. Since the same LDFs are used for each accident year, the reserve estimates are correlated.

Given incremental paid losses, expected ults for each AY, and col parameters, what are the steps to calculating the new sets of parameters for a fully stochastic version of the BF method?

Let the parameters be Y,ay 1) Set Y,1 = 1.0 2) Y,ay = 1 + E[Ult,ay]*%Unpaid / Sum(IncLosses from prior AYs for future dev periods) 3) Calculate the expected future incremental losses for the AY as you go (Fill in the triangle): E[IncLoss] = (Y,ay - 1) * Sum(IncLosses from prior AYs for the given dev period) Repeat steps 2 and 3 until all AY parameters are calculated.

An actuary is using a GLM to model the chain-ladder method stochastically. She only wants to use the most recent m experience years in the model. Briefly describe an adjustment to the GLM to meet this requirement.

Like normal reserving, we may only want to use the latest m experience years in the model. To do this, we simply set the weight of the observations before the latest m experience years to zero in the model.

Discuss why limited LDFs should be calculated from limited data that uses inflation-indexed limits.

Limited LDFs should be calculated with indexed limits so that the proportion of deductible losses-to-excess losses is consistent around the limit from year-to-year. This way, we can use data from all accident years to calculate the limited LDFs.

Least Squares method is a credibility weighting of what two methods? What is the formula for the LSE?

Link Ratio and Budgeted Loss methods. y^ = Z*LDF*x + (1-Z)*mean(y)

What is Brehm's formula for marginal ROE? How should it be used?

Marginal ROE = Net Benefit or Cost / Capital Consumed or Released If capital is consumed (a cost to the insurer), select the reinsurance program with the highest marginal ROE that is also greater than the cost of capital. If capital is released (a benefit to the insurer), select the option with the lowest marginal ROE that is also lower than the cost of capital.

Briefly describe two considerations when using an industry beta.

May need to adjust the industry beta to reflect firm-specific characteristics like: 1) Leverage/debt 2) Mix of business or lines written

Give the mean of a Tweedie distribution.

Mean(k,j) = [ (1−p) * Theta(k,j) ] ^ [ 1/(1−p) ]
Theta is called the location parameter

Discuss a modification to the Bayesian framework for the chain-ladder so that it applies to the Bornhuetter-Ferguson method.

Model incremental losses as row times column parameter. The column parameters are deterministic, based on chain ladder. The row parameters have a small variance, so that the a priori is given a strong weight.

How to go from unscaled Pearson residual to scaled

Multiply unscaled by degrees of freedom adjustment: sqrt(N / (N-p) )

Meyers concludes that the Mack model is biased ____ and the ODP model is biased _____. Why?

The ODP model based on *paid* data is biased high. The Mack model based on *incurred* data is biased low because it understates variability: it treats the observed losses-to-date as fixed AY parameters and it doesn't incorporate correlation between AYs.

Discuss why negative incremental losses is an issue in an ODP bootstrap model, and two types of adjustments to fix this.

The log link cannot accommodate negative values. Also, if we're incorporating process variance, the variance (scale parameter * projected incremental loss) cannot be negative. 1) If the sum of the development column is > 0, take ln(incLoss) for positive values and −ln|incLoss| for negative values. This is the simple adjustment. 2) If the sum of the development column is < 0, shift the incremental losses up by the negative amount, run the model, then shift the losses back down afterward.

Verrall's method of incorporating expert opinion uses what distribution? How would you set up the model used to describe incremental losses?

Over-dispersed negative binomial.
E[IncLoss(ay,k)] = (LDF(k) − 1) * CumLoss(ay,k−1)
Var(IncLoss(ay,k)) = Dispersion factor * LDF(k) * E[IncLoss(ay,k)]

Formula for Price to BV ratio

P/BV = 1 + (ROE - k) / (k-g)

Using a modification of the P/BV formula, what is the formula for the value of a firm assuming that the ROE will decline to the cost of capital after n years?

P/BV = 1 + (ROE - k) / (k-g) * [ 1 - ( (1+g)/(1+k) ) ^ n ]
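A quick numeric sketch with assumed inputs:

```python
roe, k, g, n = 0.13, 0.10, 0.04, 10   # assumed ROE, cost of equity, growth, horizon
bv = 500.0                            # current book value (assumed)

# ROE reverts to the cost of capital after n years, so excess returns are
# earned only over a finite horizon.
p_to_bv = 1 + (roe - k) / (k - g) * (1 - ((1 + g) / (1 + k)) ** n)
print(round(p_to_bv, 4), round(bv * p_to_bv, 2))
```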

Formula for P/E Ratio

P/E Ratio = (1 - Plowback Ratio) / (discount rate - growth rate) where growth rate equals ROE * plowback ratio

Relationship between P-BV and abnormal earnings method.

PBV ratio should reflect the ability of a company to earn a return on equity capital that exceeds its cost of equity. Similarly, the abnormal earnings method considers the firm's ability to earn a return greater than the expected return. The difference is the time period that each valuation method assumes: PBV - perpetuity and AE - limited horizon

Formula for first PDLD ratio. Formula for nth PDLD ratio.

PDLD,1 = (BP/SP) * TM / (ELR * %Loss,1) + (CL/L) * TM * LCF
PDLD,n = (CL/L) * TM * LCF
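A numeric sketch with assumed retro parameters:

```python
# Assumed retro-rating parameters for illustration.
bp_sp = 0.20    # basic premium / standard premium
tm    = 1.04    # tax multiplier
lcf   = 1.10    # loss conversion factor
elr   = 0.65    # expected loss ratio
pct_1 = 0.15    # % of ultimate loss emerged at the first adjustment
cl_l  = 0.90    # capped-to-total loss ratio for the period

pdld_1 = (bp_sp * tm) / (elr * pct_1) + cl_l * tm * lcf
pdld_n = cl_l * tm * lcf
print(round(pdld_1, 3), round(pdld_n, 3))   # 3.163 1.03
```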

Explain why parameter risk is a key source of uncertainty in enterprise risk management.

Parameter risk is a key source of uncertainty because if an estimated parameter does not accurately reflect the true underlying parameter, then the projected results could be highly underestimated or overestimated. Further, parameter risk does not diversify.

Formula for parameter variance in the Cape Cod method.

Parameter variance = Var(ELR)⋅Prem^2

Disadvantage of using the retro formula to estimate the PDLD ratio

Potential bias exists since the formula approach uses the average parameters for the LCF, tax multiplier, maximum, minimum and per accident limit. We should retrospectively test PDLD ratios against actual emergence to check for bias.

What does an efficient frontier graph look like?

Profit (reward) on the Y-axis. On the X-axis, some measure of risk: VaR or TVaR at some probability level, probability of making plan, probability of default/distress, etc.

Discuss the problems with allocating capital proportionally.

Proportional allocation ignores the contribution of each line to the overall risk measure. Instead, the allocation looks at each line in isolation. This can lead to inappropriate strategic decisions. For instance, if most of the overall risk capital is driven by the hurricane and earthquake risk of Home, then the proportional method under-allocates to Home and its profitability is overestimated.

Discuss whether quantitative methods should be used to assess correlation effects for sources of risks within or between valuation classes.

Quantitative methods shouldn't be used for correlation effects because: 1) They would require a lot of data and the time and effort may outweigh the benefits. 2) Calculated correlations would be driven by past correlations, but future external systemic risks may be different than past episodes. 3) The results are unlikely to be split between independent and internal/external systemic risk, as it's difficult to separate those.

Formula for individual loss reserve and collective loss reserve.

R,ind = Loss * q / p
R,coll = ELR * Prem * q
where p = % paid and q = % unpaid

Right tail concentration function formula Left tail concentration function formula

R(z) = P(U > z|V > z) = [1 − 2z + C(z, z)]/(1 − z) L(z) = P(U < z|V < z) = C(z, z)/z
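These are straightforward to evaluate once a copula C is chosen. A sketch using a Clayton copula (the copula family and theta are illustrative; Clayton has left-tail but no right-tail dependence):

```python
def clayton(u, v, theta=2.0):
    """Clayton copula C(u, v); exhibits left-tail (not right-tail) dependence."""
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

def right_tail(z, C=clayton):
    return (1 - 2 * z + C(z, z)) / (1 - z)

def left_tail(z, C=clayton):
    return C(z, z) / z

for z in (0.5, 0.9, 0.99):
    print(z, round(right_tail(z), 4), round(left_tail(z), 4))
```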

Given FCFE, net income, and beg/end regulatory capital, how is ROE calculated?

ROE = NI / Beg. Capital

Formula for return on risk-adjusted capital, and for economic value added

RORAC = NPV Return / Risk-Adjusted (allocated) Capital, compared against the hurdle rate
EVA = NPV return − Cost of Capital, where Cost of Capital = allocated capital * hurdle rate

How are increases in loss/LAE reserves treated in the discounted cash flow model?

Reflected in net income.

The Fitzgibbon method fits a linear regression to what?

Retrospective premium ratio (Y) and the overall incurred loss ratio (X).

Assuming all claims are greater than the plan minimum, explain how a push to settle small claims faster would impact PDLD ratios.

SA1: A push to settle small claims faster would increase the amount of claims in the early periods that fall within plan limitations, since these are not subject to the per-occurrence limit. This would likely increase early PDLD ratios. On the other hand, settling smaller claims earlier means later loss emergence comes mostly from large claims, so the PDLD ratios for later adjustments will drop. This is because large-claim development likely occurs outside the plan parameters. SA2: Settling small claims faster (assuming small means below the cap) will increase loss and capped loss by the same amount, so the ratio CL/L will increase toward one and earlier PDLD ratios will be higher. Later development will then come only from the larger claims (which can hit the cap), so later PDLD ratios will decrease. Basically, more uncapped (i.e. smaller) claims result in higher PDLD ratios initially, and the greater proportion of capped (i.e. larger) claims causes later PDLD ratios to decrease.

Briefly describe how scenarios and the Delphi method feed off of each other.

Scenarios and the Delphi method are both soft approaches to modeling the UW cycle. Scenarios allow firms to think through how they might respond while the Delphi method obtains expert consensus. A Delphi process can create a set of scenarios and scenarios can form the input to a Delphi assessment about the likelihood of each scenario

A company's growth rate is 4% and its beta is .75. The industry's is 5.5%, while the industry beta is .89. Assess the reasonableness of the company beta.

Since the firm's growth rate is less than the industry average, this suggests the firm may be less risky than the industry. Thus, the lower beta is reasonable.

Discuss one analysis you should perform after valuing a company with DDM or DCF models.

Sensitivity analysis, using a range of discount rates and growth rates beyond the forecast horizon.

What is service revenue?

Service revenue is revenue the insurance company gets by servicing claims for an insured with a high deductible program. The service revenue asset is the expected ultimate service revenue minus the recoveries for service revenue to-date.

What is the budgeted loss method?

Set the new loss equal to the mean of all the losses in the same development period.

Discuss two relationships regarding severity relativities.

Severity relativity ~ severity,lim / severity,unlim • Severity relativity should decrease as age increases - This is because more losses are capped at the per-occurrence limit as age increases • Severity relativity should be higher for a larger limit - This is because a higher limit means less of the loss is capped, so the relativity is higher

FCFE represents the cash flow that could be paid to whom?

Shareholders.

Risk indicators for internal systemic risk (broken out by its components).

Specification Error: • Number of independent models used • Range of results produced by the models • Checks made on reasonableness of results Parameter Selection Error: • Best predictors have been identified • Best predictors are stable over time (or change due to process changes) • Value of the predictors used (predictors used are close to the best predictors) Data Error: • Good knowledge of past processes affecting predictors • Extent, timeliness, consistency and reliability of information from business • There are appropriate reconciliations and quality control for the data

Difference between standard and manual premium.

Standard premium is manual premium adjusted for experience rating.

Formula for alpha and beta in the ODP cross-classified model.

Start with the oldest AY.
alpha(k) = latest cumulative loss for AY k / (1 − sum of the betas not yet observed for AY k)
beta(j) = sum of incremental losses in column j from AYs with data there / sum of the corresponding alphas
Alternate between the two, working from the oldest AY and the latest development period inward.
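A numeric sketch of the recursion on a hypothetical 3x3 incremental triangle; note the fitted alpha for the latest AY reproduces its chain-ladder ultimate, as expected:

```python
import numpy as np

# Hypothetical 3x3 incremental triangle (rows = AY, cols = dev period);
# NaN marks unobserved cells.
tri = np.array([[100.0, 50.0, 25.0],
                [110.0, 55.0, np.nan],
                [120.0, np.nan, np.nan]])
n = tri.shape[0]

alpha = np.zeros(n)   # AY (row) parameters
beta = np.zeros(n)    # dev-period (column) parameters; all betas sum to 1

# Oldest AY is fully developed, so no betas remain: alpha is its cumulative loss.
alpha[0] = np.nansum(tri[0])

for step in range(1, n):
    j = n - step      # latest column not yet estimated
    beta[j] = sum(tri[k, j] for k in range(step)) / alpha[:step].sum()
    # Next AY: latest cumulative / (1 - sum of its unobserved betas).
    alpha[step] = np.nansum(tri[step]) / (1 - beta[j:].sum())

beta[0] = 1 - beta[1:].sum()
print(alpha)   # [175. 192.5 210.] -- alpha for AY3 equals its chain-ladder ultimate
```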

Static vs stochastic scenarios

Static = predefined by management/the firm Stochastic scenarios generated by a stochastic process.

Discuss the difference between strategic risk and operational risk for an insurer.

Strategic risk is the risk associated with the strategic decisions of the company (e.g. choosing the wrong plan). Operational risk is the risk in executing those decisions (e.g. internal fraud).

A company has excluded motorcycle liability from its ERM model because it is a short-tailed line of business, represents a small portion of its book, and is in run-off. Briefly describe whether this is a strength or weakness of the ERM program.

Strength. The book is not material and modeling it would be a waste of resources.

Formula for XTVaR (excess tail value at risk)

TVaR - overall mean

What is the Feldblum enhancement in calculating premium asset? When will you arrive at different/same ultimate premium estimates when comparing to the original T&P method?

Feldblum takes the basic premium portion out of CPDLD,1 because the basic premium doesn't vary with losses. So from CPDLD,1 you subtract BasicPremFactor * TM / ELR. To get ultimate premium, multiply expected future loss by this new CPDLD,1 ratio, then add back the basic premium. You arrive at the same ultimate premium if expected future loss = a priori expected loss (as determined in pricing). If, for example, expected future loss > a priori expected loss, then the original T&P method results in a higher ultimate premium (since it applies the CPDLD factor to the basic premium as well).

When b=1, what does the least squares estimate equal?

The BF method.

Why should the DDM and FCFE use different discount rates?

The DDM and FCFE models should use different discount rates due to the riskiness of the cash flows paid to shareholders. With DDM, assumed dividends are paid to shareholders, and remaining income is reinvested in marketable securities. In contrast, the FCFE pays out all free cash flow to shareholders. This means that the DDM's measure of risk is impacted by a larger proportion of the risk coming from marketable securities than from underwriting risk

Transaction multiples offer an alternative method of valuation to market multiples. Briefly describe what they are, as well as one advantage and two weaknesses of using transaction multiples.

With the transaction multiples method, you look at a group of companies similar to the one you are valuing, see what prices they have been bought and sold for, and apply similar multiples to the target company. Advantages: - Not subject to random market fluctuations; transactions should have been valued by careful analysis - Transactions are done by experts involved in the companies Disadvantages: - Data is dated; economic conditions could be different now - IPOs have historically been underpriced - Transaction multiples typically include some optimism for synergies created by the merger.

An actuary believes the root reason for insurer failures is reserve deficiency. State whether or not the actuary is correct. If the actuary is incorrect, provide the root reason for insurer failures.

The actuary is incorrect. The root reason for insurer failures is the accumulation of too much exposure for the supporting asset base.

An actuary is reviewing residual plots from ODP bootstrap model. In reviewing a plot of residuals vs development periods, the actuary notices that the residuals appear to have larger absolute values at lower maturities. The actuary argues that this is to be expected, because the incremental values are much larger in the earlier development periods and hence these incremental values should have a higher variance. Assess the validity of the actuary's reasoning.

The actuary's reasoning is not sound. Each residual is divided by the square root of its expected variance under the ODP model (which is proportional to the expected incremental loss), so the size of the incremental values is already accounted for. Any remaining pattern in the residuals reflects unexpected changes in variance, and we need to make an adjustment to the model.

As the retro adjustment period increases and losses mature, we would expect what to happen to the cumulative loss capping ratios?

The cumulative loss capping ratios should decrease, because more losses are capped due to the maximum premium limit (the net insurance charge increases) and more individual losses are capped at the per-accident limit (the LER increases).

Explain why an investor might prefer the abnormal earnings valuation method over the dividend discount model. Give an example of when this would make sense.

The dividend discount model does well when income is stable; firms with a long history suit this model. Abnormal earnings focuses on the current book equity plus any added value (abnormal earnings), so less of the valuation is leveraged on distant projections. Example: A company is entering a niche market. DDM assumes dividend growth in perpetuity, while AEM assumes abnormal earnings will decrease to zero as more competition enters the market. AEM is more suitable in this case.

Define EPD

The expected policyholder deficit with a time horizon of one year is defined as the expected value of the amount by which available assets, including allocated capital, will be inadequate to satisfy all claims one year in the future.

In the context of a retrospectively rated policy, discuss why incremental premium can be greater than incremental losses at the first adjustment.

The first premium collected has to cover the basic premium, which is the premium charged even if no losses occur - to cover fixed expenses. Also, any losses are multiplied by the tax multiplier and the loss conversion factor, increasing the premium.

What is the idea behind the Fitzgibbon method? What's the main assumption behind it?

The idea of the Fitzgibbon method is that the retro formula is essentially a linear equation and instead of matching the individual policy retro rating parameters to the individual loss experience, we can run a linear regression to get the average retro rating parameters for the linear equation. The key assumption is that premium is assumed to be a linear function of incurred losses

What is the value of default option?

The market cost of insuring for losses over VaR

What might happen if a firm implements a poor ERM model?

The model will exaggerate or underestimate certain risks, which will lead to overly aggressive or cautious corporate strategic decisions.

An actuary is building a stochastic chain ladder model and is considering the following distributions: 1) Over-dispersed Poisson 2) Over-dispersed Negative Binomial 3) Normal The loss development triangle being used has a column of incremental values with a negative sum. The actuary wants to use a model that does not require adjustments to the data. Identify and briefly explain which of the three models under consideration would achieve this.

The normal model. It is the only one of the three models that can have a negative value as an output.

What is estimation risk?

The risk that the parameter estimates used are not the "true" parameter estimates for the underlying process

The selection of a GLM consists of four components. Identify these components.

The selection of a GLM consists of the following four components:
1. Selection of a cumulant function, controlling the model's assumed error distribution
2. Selection of an index p, controlling the relationship between the model's mean and variance
3. Selection of the covariates x (i.e. the variables that explain µi)
4. Selection of a link function, controlling the relationship between the mean µi and the associated covariates

What is the simplified GLM approach? How does it differ from the normal approach?

The simplified GLM approach can be used if the GLM bootstrap model has a parameter for each accident year and development period and uses an over-dispersed Poisson error distribution. In that case the GLM fitted values match the chain-ladder fitted values, so volume-weighted LDFs can be used in place of re-fitting the GLM at each iteration (i.e. the standard ODP bootstrap mechanics).

How are the forecasts from the ODP Cross-Classified model [Taylor] different from those of the Chain Ladder and the ODP Mack models?

They are not. The forecasts are the same.

How do increases in reserves impact FCFE?

They do not. They show up both as Non-Cash Charges and as Increased Required Capital, so they cancel out in the adjustments to Net Income in the formula. The increase in required capital is the difference between the required minimum capital at year-end to maintain the insurer's credit rating and the beginning capital for the year.

Discuss the steps to Sahasrabuddhe's simplifying method to adjusting basic limit CDFs to policy limit CDFs.

This simplifying assumption allows us to avoid needing a claims size model by development period. 1) Trend back the mean to prior AYs 2) Calculate the LEVs at the policy limits and the basic limits. Calculate the ult ratio by dividing LEV[X]/LEV[B] for each AY. 3) Calculate the Selected Ratio for each AY: Selected Ratio = UltRatio + (1-UltRatio)*Decay 4) CDF,x = CDF,b * UltRatio/SelectedRatio

Briefly describe how return on risk-adjusted capital (RORAC) is calculated and explain how RORAC can be used to determine if an activity is worth pursuing.

To calculate RORAC, allocate risk capital to portfolio elements (e.g. allocate the total to Home, Auto, WC) and divide each element's return by its allocated capital. Calculate economic value added by subtracting the cost of allocated capital (allocated capital * hurdle rate) from the NPV of the portfolio element. If the EVA is positive, then pursue.
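A minimal sketch with assumed allocations, returns, and hurdle rate:

```python
hurdle = 0.12                   # hurdle rate (assumed)
units = {                       # allocated risk capital and NPV return (assumed)
    "Home": (200.0, 30.0),
    "Auto": (150.0, 16.0),
    "WC":   (100.0, 13.0),
}

for name, (capital, npv_return) in units.items():
    eva = npv_return - capital * hurdle     # EVA = NPV return - cost of capital
    print(f"{name}: RORAC = {npv_return / capital:.1%}, EVA = {eva:+.1f}"
          f" -> {'pursue' if eva > 0 else 'do not pursue'}")
```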

What are non-cash charges?

Typically, expenses that are deducted under U.S. GAAP accounting but do not represent actual cash expenditures are added back to the reported net income to determine the cash flow available to be paid to equity holders. These amounts are referred to as non-cash charges.

Discuss why it is better to estimate the retrospective premium asset with the PDLD procedure as opposed to estimating it directly with the chain ladder method.

Ultimate loss can be estimated soon after policy expiration, but you need to wait longer for retrospective adjustments to premiums to be booked. Retro premium depends on incurred losses, so the PDLD method produces a premium asset estimate sooner. The PDLD estimate of the premium asset can be updated quarterly with new loss information, but a premium chain ladder method could only be updated annually after retro adjustments.

Given basic limit ILFs to 250k and 1M, how would you get the ultimate severity relativity for these two limits

Ultimate relativity = ILF,250k/ILF,1M

What is model risk?

Uncertainty about which fitted distribution form is the "correct" distribution for the underlying process.

In the ODP Mack GLM, how would you set up the matrices?

Under the ODP Mack model, (f(k,j) − 1) | X(k,j) ~ ODP(f(j) − 1, φ(j)/X(k,j)), where X(k,j) is cumulative loss.
Y = age-to-age factors − 1
X = design matrix with a column for each development period; each observation has a 1 in the column for its development period
W = weight matrix, where the weights are cumulative losses
µ = h^−1(Xβ), where h is the identity function; thus µ = Xβ

Briefly describe the difference in the variance assumptions between the non-Parametric Mack model and the ODP Mack model

Under the non-parametric Mack model, the variance of the next cumulative loss is a function of the age and the cumulative losses to date. Under the ODP Mack model, we assume the next incremental loss is ODP distributed with dispersion parameter φ(j). In general, we assume the dispersion factor for the ODP Mack model is constant across the triangle.

What is a better way to allocate capital than a proportional method?

Use Co-TV@R 98 which is a marginal decomposition of the overall risk measure. This approach looks at the contribution of each line to overall risk measure. Co-TV@R 98 is the average loss of a line when the company loss is greater than the 98th percentile.

Better capital requirement than TV@R,99.5?

Use a multiple of a lower probability level, such as TV@R,95. This reference point is easier to model and protects shareholder value better.

How do you model internal systemic risk?

Use balanced scorecard technique. Grade the methodology/model used from 1-5, for each of several risk indicators: 1. Specification error - # independent models used, ability to detect trends, range of results produced by models 2. Parameter selection error - Best predictors have been identified, they're stable over time, and you're using them in your model 3. Data error - data is reconciled and has quality control; extent, timeliness, consistency, and reliability of information from business

Fully describe a goodness-of-fit measure used to assess GLMs

Use deviance. This measures the difference between a saturated model (a model with a parameter for every observation) and the actual model. When this difference is small, then the fitted values are close to the actual values. Thus, we want to determine parameters that minimize the unscaled deviance.

An actuary wants to fit a Tweedie Mack model to loss data. He has no insight into the shape of the loss distribution. Suggest a power of p for the Tweedie Mack model and explain why that power is reasonable

Use p = 1. This corresponds to an ODP distribution which is useful when little is known of the subject distribution.

An actuary is developing a confidence interval for the unpaid losses and is deciding between using the Mack model and a stochastic Bayesian model. The actuary believes the 5-year LDFs are more suitable for use than the all-year LDFs. Make a recommendation for which model the actuary should use and support your recommendation.

Use the Bayesian stochastic model. This model can incorporate the selected LDFs (5-year) by defining the prior distributions. The Mack model assumes all-year LDFs. Also, a Bayesian stochastic model will produce the full predictive distribution, so we can calculate confidence intervals without having to make an assumption about the reserve distribution.

How should different regulatory constraints be treated in the calculation of FCFE?

Use the most binding capital constraint.

What are valuation classes? What are claims groups?

Valuation classes: How you would initially split a claims portfolio (e.g. home vs auto) Claims groups are how you would further segment valuation classes (e.g. splitting Home into CAT vs xCAT)

Variance of incremental loss for an ODP cross-classified model

Var(IncLoss) = Dispersion Factor * E[IncLoss] E[IncLoss] = alpha * beta for the corresponding AY, dev age Note that this is the same variance assumption as a regular ol' ODP model

Variance assumption of an ODP GLM model.

Var(IncLoss) = Scale parameter * E[IncLoss]

Variance assumption of the following error distributions: 1) Normal 2) Poisson 3) Gamma

Var(IncLoss) = Scale parameter * mean^z where: z=0 if Normal z=1 if Poisson z=2 if Gamma

Variance of incremental loss in ODP bootstrap

Var(q(w,d)) = scale parameter * mean
scale parameter = sum(residuals^2) / (n − p)
where n = # cells in the triangle and p = 2 * (# AYs) − 1
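A quick sketch for a 4-AY triangle (residual values illustrative):

```python
import numpy as np

n_ays = 4
residuals = np.array([0.8, -1.1, 0.3, 0.5, -0.2, 0.9, -0.7, 0.1, -0.4, 0.6])

n = residuals.size        # 10 cells in a 4-AY triangle
p = 2 * n_ays - 1         # one parameter per AY plus one per dev period, less one

scale = (residuals ** 2).sum() / (n - p)
print(round(scale, 4))    # 1.3533
```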

Variance function for Tweedie

Var(µ) = µ^p

Discuss the relationship between capital flows and profit.

When profit is low (i.e. operating losses), capital is used to pay claims. Over time, this leads to firms exiting the market and/or insolvencies. When profit is "normal", retained earnings add to the capital stock and firms pay out capital in the form of dividends to shareholders. At some point, profit hits a threshold where external capital infusion occurs (capital infusion tends to happen when capacity is limited and profit expectations are high). This capital funds existing firms and new entrants into the market.

a) Briefly describe why the sum of the squared error for the overall reserve is not the sum of the individual (s.e.(R,i))^2 b) Briefly describe the allocation procedure Mack uses to allocate the confidence interval for the overall reserve to confidence intervals for each individual accident year reserve.

a) Each estimator, Ri, is influenced by the same age-to-age factor, so they are not independent b) Determine the overall reserve confidence interval, then adjust the individual AY reserves by varying the t or z value so that the sum of the upper limits of the individual reserves balances to the upper limit of the total reserve.

Formula for weighted residuals [ Mack - 1994 ]. What do you use these for?

Wtd residual = (Loss,k+1 − LDF * Loss,k) / sqrt(variance assumption), where the variance assumption is proportional to 1, Loss,k, or Loss,k^2. Plot these residuals against Loss,k: the variance assumption (and corresponding LDF estimator) whose residual plot looks most random is the appropriate one.

Formula for weighted residuals in Mack

Wtd residual = (Loss,k+1 - Loss,k * LDF) / sqrt(Loss,k)

Formula relating XSLDF, unlimited LDF, and the severity relativities at the two different ages.

XSLDF = LDF * (1 - Relativity at T) / (1 - Relativity at T-1) Can also think of this as LDF,unlim * XS relativity at T / XS relativity at T-1

GLM set up of an ODP Mack model [Taylor]

Y = age-to-age factors − 1
W = weight matrix of corresponding cumulative losses
X = design matrix with a column for each development-period parameter; within the block of observations for each AY, 1s run along the development-period columns (there are no AY parameters)
µ = h^−1(Xβ)
h = identity function
(age-to-age factor − 1) ~ ODP

GLM set up of the ODP Cross-Classified model [Taylor]

Y = incremental loss observations
X = design matrix with a column for each AY (row) parameter and each development-period (column) parameter; each observation has a 1 in its AY column and its development-period column
µ = h^−1(Xβ)
h = log function (log link)
Incremental losses ~ ODP

How would you create a graph of Fitzgibbon's regression method?

Y-axis: retro premium as % of standard premium X-axis: incurred losses as % of standard premium The y-intercept represents the retro premium when there are no incurred losses. This is known as the basic premium charge.

An insurer primarily writes personal auto and is considering a project to create a predictive model to better identify claims fraud. Before moving forward, senior management is interested in estimating the impact the fraud model might have on reducing losses to fraud. a. Briefly describe the necessary steps to model operational risks as part of operational risk management. b. Based on the steps from part a, give examples of the specific steps the insurer could take to model claims fraud risk and estimate the impact of improving the claims process.

a) 1. Identify exposure bases for each key operational risk source (typically a KRI). 2. Measure the exposure level for each operational risk source. 3. Estimate frequency and severity of loss potential per unit of exposure based on current processes. 4. Combine 2 and 3 to create loss frequency and severity distributions. 5. Estimate the impact of risk mitigation, process improvement or risk transfer on the frequency/severity distributions. b) 1. Identify exposures for claims fraud such as claim count. 2. Measure claim count for personal auto. 3. Estimate the % of claims that are fraudulent (frequency) and the loss severity of fraudulent claims. 4. Combine 2 and 3 to create claims fraud frequency/severity distributions. 5. Estimate how much the predictive model will reduce fraud frequency (e.g. 10% reduction) and fraud severity (by predicting the most serious cases of fraud). This will give an estimate of the benefit of the model.
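A sketch of steps 1-5 for the claims-fraud example; the exposure count, fraud rate, severity parameters, and 10% mitigation effect are all assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

n_claims = 50_000              # step 2: measured exposure (annual claim count)
fraud_rate = 0.02              # step 3: estimated frequency of fraud per claim
sev_mu, sev_sigma = 9.0, 1.0   # step 3: lognormal severity of fraudulent claims

def annual_fraud_loss(rate, n_sims=2_000):
    # Step 4: combine exposure, frequency and severity into an aggregate
    # annual fraud loss distribution.
    freq = rng.poisson(n_claims * rate, size=n_sims)
    return np.array([rng.lognormal(sev_mu, sev_sigma, k).sum() for k in freq])

# Step 5: re-run with the predictive model assumed to cut fraud frequency 10%.
base = annual_fraud_loss(fraud_rate)
mitigated = annual_fraud_loss(fraud_rate * 0.90)
print(f"estimated annual benefit: {base.mean() - mitigated.mean():,.0f}")
```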

a) Four parameter development details that need to be addressed when developing an internal model. b) For each parameter development detail, provide a recommended course of action.

a) 1. Modeling software: capabilities, scalability, learning curve, integration with other systems 2. Developing input parameters: process should be heavily data-driven and incorporate expert opinion 3. Correlations 4. Validation and testing: no existing internal model with which to compare, multi-metric testing required b) 1. Ensure final software choice aligns with capabilities of the team 2. Include product expertise from multiple groups 3. Modeling team should suggest, corporate should own 4. Validate/test over multiple periods, educate others on probability and stats

a) Four model implementation details that need to be addressed when developing an internal model. b) For each model implementation detail, provide a recommended course of action.

a) 1. Priority setting 2. Interest and impact 3. Pilot test 4. Education process b) 1. Have top management set the priority for implementation 2. Plan for regular communication to broad audiences 3. Assign a multidisciplinary team to analyze real company data; prepare the company for the magnitude of change resulting from using an internal model 4. Target training to bring leadership to a similar base level of understanding

a) Four organizational details that need to be addressed when developing an internal model. b) For each organizational detail, provide a recommended course of action.

a) 1. Reporting relationship: modeling team reporting line 2. Resource commitment: mix of skill set (e.g. actuarial vs IT), full-time vs part time 3. Inputs and outputs: control of input parameters, control of output data, analyses and uses of output 4. Initial scope: prospective UW year only, including reserves, assets, operational risks? b) 1. Report to a leader who is fair 2. It's best to transfer internal employees or hire external employees for full-time positions 3. Controlled in a manner similar to that used for general ledger or reserving systems 4. Define prospective UW period, variation around plan

a) Explain how accumulated risk is created. Provide an example of something that produces accumulated risk. b) Briefly describe the concept of as-if loss reserves and explain how it can be used to approximate accumulated risk. c) Provide two advantages of as-if loss reserves d) Briefly describe how the tails of the distribution of underwriting results changes when including accumulated risk.

a) Accumulated risks result from elements of an insurer's business that absorb capital over multiple periods. Loss reserves are an example of something that produces accumulated risk. b) For an accident year of new business, the as-if loss reserves are the reserves that would exist at the beginning of the accident year, if that business had been written in a steady state in all prior years. This acts as a proxy of the accumulated risk from the prior years of reserves c) -They can measure the impact of accumulated risk caused by correlated risk factors across accident years - The reinsurance being considered can be applied to the accident year and as-if reserves, providing a more valid measure of the impact of reinsurance on accumulated risk and on capital absorbed over the full life of the accident year. d) By including accumulated risk, the distribution of underwriting results is less compressed and has bigger tails

Briefly describe how premium responsiveness changes with the following: a) Maturity of the book b) Loss ratio

a) As a book of business matures, premium responsiveness on loss-sensitive contracts declines b) At higher loss ratios, premium responsiveness on loss-sensitive contracts declines

Explain how investment risk differs in each of the following asset/liability mixtures: a) Asset portfolio with no liabilities b) Asset portfolio with fixed duration liabilities c) Asset portfolio with variable duration liabilities

a) Asset portfolio with no liabilities - In this case, short-term treasuries are considered risk-free while high-yield assets are considered risky b) Short-term treasuries have durations shorter than the liabilities and introduce reinvestment risk to the equation. If interest rates drop, total investment income may not be sufficient to cover the liabilities. If interest rates rise, longer-term investments (with durations longer than the liabilities) introduce risk as well if depressed assets have to be liquidated to fund liabilities. Duration matching would be a good strategy to neutralize interest rate changes. c) In this case, duration matching is no longer possible because the duration of the liabilities is unknown. A model incorporating asset and liability fluctuations would be needed at this point to determine the optimal investment portfolio.

a) What is the relationship between EPD and the Value of the Default Put Option? b) Describe how the EPD calculation can be used to estimate the Value of the Default Put

a) At any capital level B, EPD is the expected loss to the firm excess of the capital held. Without a risk load, EPD would be the cost of purchasing a stop loss attaching at B. The Value of the Default Put is the market cost of such a stop loss contract. b) Apply a Transform to the Distribution, then calculate the EPD based on the transformed probabilities. This will add a riskload to the EPD.

If there is rapid exposure growth, how would you set up the GLM model?

a) Divide all loss data by exposures for each accident year to get pure premium. b) Run the model based on pure premium. c) At the end, multiply results by the exposure level to get the total value.

a) Briefly define enterprise risk management. b) Briefly describe four aspects of the definition above.

a) Enterprise risk management is defined as the process of systematically and comprehensively identifying critical risks, quantifying their impacts and implementing integrated strategies to maximize enterprise value. b) - Should be a regular process, not a one-time event - Risks should be considered on an enterprise-wide basis - Focus on risks that are material - Quantify the risks and incorporate dependencies between them

a) Identify two sources of risk that can be fully analyzed using modeling techniques such as bootstrapping or a stochastic chain-ladder model. b) Identify two sources of risk that cannot be fully analyzed using modeling techniques such as bootstrapping or a stochastic chain-ladder model. c) Briefly explain why traditional modeling techniques cannot capture all sources of uncertainty.

a) Independent risk + *historical* external systemic risk b) Internal risk + *future* external systemic risk c) Since models fit past data, we can only capture past systemic risk, not future systemic risk.

a) Define claim report lag R(t) in terms of the standard chain-ladder age-to-ultimate development factor. b) Explain how the claim report lag can be interpreted as a probability cumulative distribution function. Give one reason why this interpretation is useful.

a) R(t) = 1 / (age-to-ultimate development factor), i.e. the inverse of the CDF. b) The claim report lag can be read as the probability that any particular claims dollar will be reported to the reinsurer by time t. This view allows us to compute statistics of the claims reporting process, enabling us to compare one claim report pattern with another.

a) Briefly explain why we need to index limits for inflation when calculating development factors for various deductibles. b) Provide two methods for indexing the limits.

a) It keeps the proportion of deductible/excess losses constant about the limit from year to year b) - Fit a line to average severities over a long-term history - Use an index that reflects the movement in annual severity changes

a) Provide two quantities that could be used as the dependent variable in an underwriting cycle model. b) Provide four quantities that could be used as independent variables in an underwriting cycle model.

a) Loss ratio, combined ratio b) Historical combined ratio, reserves, inflation, and GNP

a) Briefly describe what it means for a risk decomposition method to be "marginal." b) Provide two reasons why the marginal property is desirable. c) Describe two required conditions for a marginal decomposition.

a) Marginal means that the change in overall company risk due to a small change in a business unit's volume should be attributed to that business unit b) - Links financial theory of pricing proportionally to marginal costs - It ensures that when a business unit with an above-average ratio of profit to risk increases its volume, then the overall company ratio of profit to risk increases as well c) - Business units must change volume in a homogeneous fashion - The risk measure must be scalable (ρ(aY ) = aρ(Y ))

a) Provide three different variance assumptions for Cik. b) For each variance assumption, provide the formula for the corresponding estimator for fk and briefly describe the estimator in words. c) Describe a graphical variance test that can be employed in order to determine the appropriate fk estimator to use.

a) Proportional to 1, C,ik, or Cik^2
b) If proportional to 1: f = Sum(C,ik * C,i,k+1) / Sum(C,ik^2), i.e. a regression through the origin that weights the age-to-age factors by C,ik^2. If proportional to C,ik: the normal volume-weighted average. If proportional to Cik^2: a simple average of the age-to-age factors.
c) For each variance assumption, plot the corresponding weighted residuals against C,ik. The appropriate estimator is the one whose residual plot appears most random (no trend in the spread).

a) Assuming risk capital itself has been allocated, explain how cost-benefit analysis can be used to determine which risk mitigation strategies should be pursued. b) Assuming the cost of capital has been allocated, explain how cost-benefit analysis can be used to determine which risk mitigation strategies should be pursued.

a) Pursue activities where the benefit (i.e. decrease in required capital) exceeds the costs of implementation b) Pursue activities that produce positive incremental EVA

An actuary is reviewing residual plots from ODP bootstrap model. The actuary chooses to review a plot of the residuals vs. development periods. a) Identify two other residual plots the actuary might choose to review. b) Briefly describe two features of residual plots that would suggest a need for the actuary to adjust the model.

a) Residuals vs AY, vs CY, vs predicted loss. Normality plot, Box and Whisker plot. b) -Trends in residuals, e.g. early AYs having negative residuals and later AYs having positive residuals. -Outliers in residuals -Normality plot: residuals are not tightly grouped around a straight 45 degree line

a) Briefly describe key risk indicators. b) Briefly explain the difference in review frequency between self-assessments and key risk indicator measurement. c) Briefly explain the difference in key risk indicators and historical losses. d) Provide four insurer-specific examples of key risk indicators.

a) Risk indicators are measures used to monitor the activities and status of the control environment of a particular business area for a given operational risk category b) While typical control self-assessment processes occur only periodically, key risk indicators can be measured daily c) Key risk indicators are forward-looking indicators of risk, whereas historical losses are backward-looking d) Production - hit ratios; Internal controls - audit results; Staffing - employee turnover; Claims - claim frequency

a) In most cases, firms allocate capital directly. Briefly describe how a firm can allocate the cost of capital. b) Explain how a business unit's right to access capital can be viewed as a stop-loss agreement. c) Provide one approach for calculating the value of the stop-loss agreement.

a) Set the minimum profit target of a business unit equal to the value of its right to call upon the capital of the firm. Then, the excess of the unit's profits over this cost of capital is added value for the firm. Essentially, we are allocating the overall firm value (rather than the cost of capital) to each business unit b) Since the business unit has the right to access the insurer's entire capital, it essentially has two outcomes - make money or break-even. This is how a stop-loss agreement works as well c) Calculate the expected value of a stop-loss for the business unit at the break-even point

Given the following exposure types: - Short-tailed exposures - Medium-tailed - Long-tailed exposures a) Identify one reinsurance loss reserve estimation method for each exposure type above. b) Provide two reinsurance examples of each exposure type above

a) Short-tailed: set IBNR equal to some percentage of the latest year EP Medium-tailed: chain ladder Long-tailed: Cape Cod method b) Short-tailed: treaty property proportional, treaty property catastrophe Medium-tailed: treaty property excess higher layers, construction risk Long-tailed: treaty casualty excess, asbestos

a) Identify three styles of modeling the underwriting cycle. b) The styles identified in part a. vary by three dimensions. Briefly describe the three dimensions. For each dimension, state how each style compares.

a) Soft approaches, behavioral modeling, and technical modeling b) - Dimension 1 - data quantity, variety and complexity: soft > behavioral > technical - Dimension 2 - recognition of human factors: soft > behavioral > technical - Dimension 3 - mathematical formalism and rigor: technical > behavioral > soft

Describe how the following things affect the amount of capital held by an insurance company: a) Customer reaction b) Capital requirements of rating agencies c) Comparative profitability of new and renewal business

a) Some customers care about capital/ratings. Rating declines can result in loss of customers. b) Different rating agencies require different amounts of capital. c) Renewal business is more profitable than new business, so it's more important to retain renewal business. A company might hold 80% of its capital to be able to service the renewals that comprise 80% of its book.

a) In terms of risk charges, fully describe one reason why a regulatory RBC model might differ significantly from a rating agency RBC model. b) In terms of risk charges, fully describe one reason why two rating agency models might differ significantly.

a) The models may target different time horizons: some are used to determine whether the company will be solvent in the long run, while others are more short-term focused, leading to different risk charges. b) Some models incorporate covariance adjustments, so that the total risk charge is less than the sum of the individual risk components, while others simply sum the charges.

For each of the following accounting systems, explain how bonds and liabilities are valued and state whether assets successfully hedge against liabilities: a) Statutory accounting b) GAAP accounting c) Economic accounting

a) Statutory accounting - bonds are amortized and liabilities are not discounted. Assets provide little hedging to liabilities b) GAAP accounting - bonds are marked to market and liabilities are not discounted. Assets provide little hedging to liabilities. c) Economic accounting - bonds are marked to market and liabilities are discounted. Assets hedge against liabilities.

An insurer writes Auto and Home and is planning its portfolio mix for the upcoming year. The industry is currently in a soft market, especially for Homeowners, but competitor and industry analysis indicates the market may harden in the upcoming year. a) Briefly discuss a problem with traditional insurer planning and how creating one, detailed plan may lead to suboptimal financial results. b) Recommend how the insurer could incorporate scenario planning into the planning process. c) Briefly discuss two advantages of using scenario planning compared to traditional planning.

a) The key problem with traditional planning is that a single, fixed plan is created. However, market conditions can change, making the original plan inappropriate. If the UW cycle changes relative to expectations, underwriters may still be incentivized to "make plan", resulting in a portfolio mix that is more heavily weighted toward underpriced lines and therefore worse results. b) 1. Create different scenarios of potential states of the UW cycle, each with a probability of occurrence. 2. Decide on a response plan for the portfolio mix in each scenario, with enough detail to be actionable. 3. Monitor market conditions to see which scenario arises and respond appropriately. c) 1. The insurer thinks through responses ahead of time instead of during a crisis. 2. Reduces organizational inertia by adding flexibility into the planning instead of focusing on "making the numbers" at all costs.

A firm is setting next year's plan for a LOB. a) Describe the traditional planning approach based on "plan estimates." b) Briefly describe two issues that may be caused by using the traditional approach.

a) The traditional planning approach is based on single-point estimates. These estimates are often overly optimistic due to the need to meet overall corporate profit or premium volume targets. When actuals deviate from the overly optimistic plan, managers are reluctant to deviate from the plan numbers. This results in booked numbers that are unrealistic for far too long. b) - Unforeseen reserve deficit - The overall portfolio mix (combination of written premium and corresponding written loss ratios) may not be what is intended. For example, if leadership had known that the loss ratio would be xx.x% during the planning phase, the target premium volume may have differed.

An insurer writes predominantly long-tailed lines of business in a highly competitive environment. The company's incentive plan is structured around achieving both top line growth and target CY combined ratios. Recently the company has seen a number of accounts go to competitors for lower rates and increased coverage. a) State a goal of agency theory. b) In the context of agency theory, discuss the problems this insurer may be facing. c) Briefly describe two actions the company can take to implement effective underwriting cycle management.

a) To align management and owner interests. b) Management has an incentive to write business at low rates. Unfortunately the market price is low now and so in looking to meet the incentive plan goals, management is incentivized to write unprofitable business. c) Focus on intellectual property by maintaining investments in talent pipelines, systems, processes. Educate owners that in times of soft markets we do not want to write at unprofitable levels.

a) Briefly describe how risk-based capital (RBC) models differ from leverage ratios. b) Provide the four main sources of risk contemplated in RBC models. c) Briefly describe how RBC models quantify these sources of risk.

a) Unlike leverage ratios, risk-based capital (RBC) models combine measures of different aspects of risk into a single number b) - Invested asset risk - Credit risk - Premium risk - Reserve risk c) Each of these risks is measured by multiplying factors by accounting values. The magnitude of the factor varies by the quality and type of asset or the line of business

An actuary is estimating ultimate loss ratios by accident year using a bootstrap model. a) Briefly describe how the actuary can estimate the complete variability in the loss ratio. b) Briefly describe how the actuary can estimate the future variability in the loss ratio.

a) Sample all values in the triangle, so the simulated loss ratios reflect variability in both past and projected losses b) Hold the actual losses to date fixed and sample only the projected future values

For each of the following external systemic risks, give an example of a line of business for which the risk would be important, and explain why it is important for that line of business. a. Legislative risk b. Event risk c. Latent claim risk

a. Workers comp: the coverages provided in this product are determined by law, and thus any changes to the coverage law will impact the liabilities of the insurer. b. Property: catastrophes such as earthquakes generate significant risk of property loss. c. Casualty: latent claims (claims where coverage was not originally intended) can surface many years after the fact, and thus affect many policy years at the same time.

a) Briefly explain how agency theory relates to operational risk. b) Describe a situation where the interests of the firm's owners and management may not be aligned. c) Quantifying this operational risk can be extremely difficult. Provide an alternative solution for managing this risk.

a) When the interests of management and the interests of a firm's owners diverge, management may make decisions that are not supported by the firm's owners. This is an operational risk. b) A company can agree to pay management a percentage of the increase in its market cap after five years. Although this ties manager compensation to the firm's performance, management may be more willing to take on risky investments. In their mind, they could either end up incredibly wealthy or right where they are now. This allows them to gamble with the owner's money. c) Rather than trying to quantify this risk, we should study the incentive plan and make adjustments if necessary.

a. Identify and briefly describe three themes in the theories of what drives the UW cycle. b. For each theme, provide an example of how the UW cycle could be affected.

a. (1) Institutional factors: historical data is projected for future pricing, and there are regulatory and reporting delays. These all create time lags that help drive the cycle. (e.g. historical, immature losses are used to estimate rate changes that go into effect in the future, with a further delay for regulatory approval) (2) Competition: insurers lower rates to unprofitable levels, leading to crisis and price correction. (e.g. lowering rates in personal auto, a competitive line) (3) Supply and demand: shocks that increase or decrease capacity (e.g. CATs) impact the supply and thus the price of insurance.

An insurance company has decided to manage the underwriting cycle by reducing market share when pricing is soft and expanding market share when pricing is hard. (a) Outline and justify an asset management strategy that could reduce the company's earnings volatility. (b) Discuss a risk that would increase if this strategy were implemented.

a. The insurer needs assets that can weather long periods of low premium. Equity has high return expectations but can be very volatile; long bonds classified as available for sale have the same problem. A portfolio with a significant component of short-duration (<5 years) bonds, say 80%, and the rest in riskier securities, should reduce earnings volatility. b. Since the above portfolio holds very little equity, if inflation (and thus interest rates) were to increase, it would not perform as well as one with more equity. That is, interest rate risk increases.

What is d1 in real options valuation? What is d2? What is N(d1) and N(d2)?

d1 = [ ln(assets/liabilities) + (r,rf + σ^2/2)*T ] / (σ*sqrt(T))
d2 = d1 - σ*sqrt(T)
N(d1) and N(d2) are the standard normal CDF evaluated at d1 and d2 (the probability weights applied to assets and liabilities in the Black-Scholes formula)
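A small Python sketch of these formulas with invented inputs:

from math import log, sqrt, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

assets, liabs, rf, vol, T = 120.0, 100.0, 0.03, 0.20, 1.0  # hypothetical
d1 = (log(assets / liabs) + (rf + vol**2 / 2) * T) / (vol * sqrt(T))
d2 = d1 - vol * sqrt(T)
print(norm_cdf(d1), norm_cdf(d2))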

Briefly describe two underlying causes of each of the following technical problems with reinsurance reserving: i. Persistent upward development of claims ii. Industry statistics not as useful. iii. Reports reinsurers receive are lacking important information. iv. Data coding and IT systems problems.

i) Primary insurers have a tendency to under-reserve ALAE. Trend has a greater impact on excess losses. ii) Schedule P groups data into groups that are not homogeneous enough. The lag in reporting of claims goes up with attachment points. iii) May not have AY information. Only summary information might be given. iv) Heterogeneity makes data coding challenging. Systems not updated quickly enough to keep up with changing needs.

What are the three approaches to modeling the underwriting cycle?

i) Soft: Gathers a wide variety of data about the market and competitor intelligence, then uses it to identify or predict a turn in the U/W cycle. This approach recognizes human factors and complexity of the UW cycle. ii) Behavioral: the middle of technical and soft. Uses supply and demand curve of insurance and capital flows. iii) Technical: models the UW cycle as a time series. Future values can be simulated from this.

i. Formula for first PDLD ratio. ii. Formula for PDLD ratios after the first.

i. PDLD,1 = (BP/SP) * TM / (ELR * %Loss,1) + (CL,1/L,1) * TM * LCF ii. PDLD,n = (ΔCL,n/ΔL,n) * TM * LCF, i.e. incremental converted losses over incremental losses in period n
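A hedged Python sketch of the first PDLD ratio, with invented retro parameters:

BP, SP = 0.20, 1.00        # basic premium and standard premium (hypothetical)
TM, LCF, ELR = 1.03, 1.10, 0.70
pct_loss_1 = 0.25          # expected % of losses emerged by first adjustment
cl_over_l_1 = 0.90         # CL,1/L,1: share of losses entering the retro formula

pdld_1 = (BP / SP) * TM / (ELR * pct_loss_1) + cl_over_l_1 * TM * LCF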

For each of the following categories of strategic risk, briefly discuss the magnitude of risk and describe an example: i. Industry ii. Brand iii. Stagnation

i. Industry: Very high magnitude risk, insurance is capital-intensive. E.g. the UW cycle is an industry risk, especially how an insurer decides to respond. ii. Brand: Risk of brand erosion is moderate. E.g. unfair claims handling and bad press could deteriorate an insurer's brand. iii. Stagnation: Risk of stagnating business is high. E.g. strategic response to changes in the UW cycle such as maintaining market share in a soft market.

A mutual insurance company is setting up an asset-liability analysis using its enterprise risk model to help inform its investment strategy. Management's key concern is maintaining a strong balance sheet and high credit rating so that the policyholders are protected. Propose recommendations for each of the following considerations when setting up the ALM analysis and briefly discuss why they're appropriate: i) Risk metric for the analysis ii) Return metric iii) Time horizon iv) Relevant constraints

i. Since it's a mutual company, I recommend using a statutory basis for accounting. Use EPD as the risk metric since management is more concerned with the balance sheet. ii. I recommend using terminal value of surplus as the return metric, since its focus is the balance sheet. iii. I recommend using a multi-year model since it's more realistic and because this is a mutual insurer (more focused on protecting policyholders). iv. Should add investment constraints both from regulators and to keep the credit rating at an acceptable level for management.

Formulas for nj, mj, Zj in Mack's CY test.

nj = Sj + Lj = number of small plus large factors on diagonal j (ties excluded)
mj = (nj - 1)/2, ROUNDED DOWN
Zj = min(Sj, Lj), where Sj and Lj count the development factors on the diagonal that are small/large relative to the median of their development-period column
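A tiny Python sketch for one diagonal, with hypothetical small/large labels:

diagonal = ['S', 'L', 'L', 'S', 'L']   # hypothetical S/L labels on one diagonal
S_j, L_j = diagonal.count('S'), diagonal.count('L')
n_j = S_j + L_j                        # ties ('*') would be excluded
m_j = (n_j - 1) // 2                   # integer division rounds down
Z_j = min(S_j, L_j)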

Formula for process variance if interested in variance of reserves.

σ^2 * Reserve, where σ^2 is the variance/mean scale parameter

[Venter - Correlation of development factors] Formula for r, the correlation coefficient, and for the T statistic. Include the DoF for the T statistic.

r = ( E[XY] - E[X]*E[Y] ) / ( StdDev(X) * StdDev(Y) )
T = r * sqrt[ (n-2)/(1-r^2) ]
T has n-2 degrees of freedom
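A short Python sketch with hypothetical development factors:

from math import sqrt
from statistics import mean, pstdev

x = [1.10, 1.25, 1.05, 1.30]   # hypothetical factors at one age
y = [1.04, 1.12, 1.02, 1.15]   # factors at the next age, same accident years
n = len(x)

r = (mean(a * b for a, b in zip(x, y)) - mean(x) * mean(y)) / (pstdev(x) * pstdev(y))
T = r * sqrt((n - 2) / (1 - r**2))   # compare to a t-table with n-2 dof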

Provide two co-measures of standard deviation.

r(X,j) = StdDev(Y) * E[X,j] / E[Y]
r(X,j) = Cov(X,j, Y) / StdDev(Y)
- The first spreads the standard deviation in proportion to the mean of the components
- The second decomposes the standard deviation in proportion to the covariance of each component with the total, so the pieces sum to StdDev(Y). This is the preferred co-measure.
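A simulation sketch of the covariance version (all numbers hypothetical):

import random
from statistics import mean, pstdev

random.seed(0)
x1 = [random.gauss(100, 20) for _ in range(10000)]  # hypothetical unit 1 losses
x2 = [random.gauss(50, 10) for _ in range(10000)]   # hypothetical unit 2 losses
y = [a + b for a, b in zip(x1, x2)]                 # total company losses

def co_sd(xj, total):
    mx, mt = mean(xj), mean(total)
    cov = mean((a - mx) * (b - mt) for a, b in zip(xj, total))
    return cov / pstdev(total)

# co_sd(x1, y) + co_sd(x2, y) reproduces pstdev(y) up to simulation noise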

MSE of individual and collective loss reserve

t = sqrt(p)
mse(R,coll) = E[alpha,i^2(U,i)] * q * (1 + q/t)
mse(R,ind) = E[alpha,i^2(U,i)] * q / p

Variance assumption of Tweedie.

φ * µ^p, where µ is the expected incremental loss and φ is a dispersion parameter that varies with AY/age. This ties out with the variance assumptions for ODP and Gamma: when p = 1, Tweedie reduces to ODP (variance = dispersion factor * expected loss), and when p = 2 it reduces to the Gamma.

Losses follow a distribution, F(x). ALAE ~ G(y). The insurer would like to model the relationship between losses and ALAE with the following copula: C(u, v) = min(u, v)^0.3(uv)^0.7 How would you calculate the probability that losses are less than $12,000 and ALAE is less than $250?

u = F(12,000) and v = G(250) Then Pr(X < 12,000, Y < 250) = C(u,v) = min(u,v)^0.3 * (uv)^0.7 - just plug in the values
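In Python, with hypothetical CDF values:

u, v = 0.80, 0.65                          # hypothetical F(12000) and G(250)
prob = min(u, v) ** 0.3 * (u * v) ** 0.7   # = Pr(Loss < 12000, ALAE < 250)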

What are six components of a reinsurer's loss reserves?

• Case reserves reported by the ceding companies • Reinsurer additional reserves on individual claims • Actuarial estimate of future development on known case reserves (IBNER) • Actuarial estimate of pure IBNR - Usually grouped with IBNER • Discount for future investment income • Risk load

Discuss three strategic decisions an ERM model can help with.

• Determining capital need (to support risk, maintain rating, ...) • Identifying sources of significant risk and the cost of capital to support them • Selecting reinsurance strategies • Planning growth • Managing asset mix • Valuing companies for mergers and acquisitions

Things to consider when reviewing a CoV scale for an internal systemic risk balanced scorecard.

• Minimum CoV for best practice is unlikely to be less than 5%. • Maximum CoV for worst practice could be greater than 20% (e.g. single, aggregated model with limited data) • Scale should not be linear - Model improvements show diminishing returns o Improvement from a poor to fair model is greater than from a fair to good model • CoVs for long-tail lines are higher than short-tail CoVs for the same score o It's more difficult to model the underlying process of a long-tail line and predictors are less stable. • It's reasonable to use the same scale for outstanding claim liabilities and premium liabilities.

Why is both quantitative and qualitative analysis necessary to properly assess risk margins?

• Quantitative analysis can only reflect uncertainty in historical experience and can't capture adequately all possible sources of future uncertainty • Judgment is necessary to estimate future uncertainty

What are the sources of uncertainty that quantitative modeling is best able to assess?

• Quantitative modeling is best for analyzing independent risk and past episodes of external systemic risk. • Quantitative modeling must be supplemented with other qualitative or quantitative analysis to incorporate internal systemic risk and external systemic risk (Future external systemic risk may differ from past episodes).

Describe two adjustments to beginning book value for the abnormal earnings method

• Remove any systematic bias in reported assets and liabilities (e.g. restating reported loss reserve) • Remove intangible assets (e.g. goodwill) to isolate tangible book value

Discuss two reasons why quantitative methods might not be appropriate for assessing correlation effects

• Techniques tend to be complex and require substantial data (Time/effort required may outweigh benefits). • Correlations would be heavily influenced by past correlations • Difficult to separate past correlation effects between independent risk and systemic risk or identify the effects of past systemic risks • Internal systemic risk can't be modeled with standard correlation risk techniques • Results unlikely to be aligned with the framework, which splits between independent, internal systemic, and external systemic risk

We can identify key potential sources of external systemic risk through discussions with business experts as part of the valuation process. These discussions should consider what? How then should CoVs be selected?

• Underwriting and risk selection • Pricing • Claims Management • Expense Management • Emerging portfolio trends • The environment in which the portfolio operates CoVs for each external systemic risk category should be selected using a mix of quantitative analysis and qualitative judgment

What are the three plots that should be created to evaluate Mack's variance assumption?

• Variance assumption - plot weighted residuals against Cik in order to see if the residuals appear random. Three separate residual plots should be analyzed to ensure that the usual age-to-age factors are appropriate:
Plot 0: (Ci,k+1 - Cik*fk0) against Cik
Plot 1: (Ci,k+1 - Cik*fk1)/sqrt(Cik) against Cik
Plot 2: (Ci,k+1 - Cik*fk2)/Cik against Cik
ALWAYS PLOT AGAINST THE OLDER CUMULATIVE LOSS.
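A matplotlib sketch of the three plots (losses and factor estimates invented):

import matplotlib.pyplot as plt

c_k  = [1000.0, 1200.0, 900.0]    # hypothetical cumulative losses at age k
c_k1 = [1500.0, 1900.0, 1300.0]   # cumulative losses at age k+1
fks  = [1.55, 1.57, 1.56]         # hypothetical fk0, fk1, fk2 estimates

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, fk, p in zip(axes, fks, [0.0, 0.5, 1.0]):
    resid = [(b - a * fk) / a**p for a, b in zip(c_k, c_k1)]
    ax.scatter(c_k, resid)        # want: random scatter around zero
    ax.axhline(0)
plt.show()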

Briefly describe the three steps to reinsurance loss reserving.

• Partition the reinsurance portfolio into reasonably homogeneous exposure groups that are relatively consistent over time with respect to mix of business
• Analyze the historical development patterns. If possible, consider individual case reserve development and the emergence of IBNR claims separately
• Estimate the future development. If possible, estimate the bulk reserves for IBNER and pure IBNR separately

Process variance vs parameter variance

o Process variance - uncertainty due to randomness o Parameter variance - uncertainty in expected value

Why is TVaR often criticized? What is a solution to overcome this?

It is linear in the tail; a loss twice as large is considered twice as bad. Weighted TVaR considers that loss to be more than twice as bad.

What are leverage ratios? Pros/cons?

Leverage ratios (e.g. Net Written Premium-to-Surplus or Net Reserves-to-Surplus) are the simplest way regulators monitor capital adequacy. Leverage ratios are compared to a threshold which is used to trigger regulator attention. In the US, the IRIS ratios are still used to measure capital adequacy. Pros: easy to calculate/monitor. Cons: doesn't distinguish between different lines of business, and ignores risks other than UW risk.

When a=0 in the least squares development method, this equals what method?

Link ratio method

What is a general procedure for estimating a reinsurer's loss reserve?

Step 1: Partition the reinsurance portfolio into reasonably homogeneous exposure groups that are relatively consistent over time with respect to mix of business. Step 2: Analyze the historical development patterns. If possible, consider individual case reserve development and the emergence of IBNR claims separately. Step 3: Estimate the future development. If possible, estimate the bulk reserves for IBNER and pure IBNR separately.

Five major elements in internal risk modeling

Step 1: Starts with an aggregate loss distribution, with many sources of risk (such as lines of business) Step 2: Quantifies the impact of the possible aggregate loss outcomes on the corporation Step 3: Assigns a cost to each amount of impact Step 4: Attributes the costs back to the risk sources Step 5: Determine corporate risk tolerance, cost of capital allocation, and cost-benefit analysis.

Heavy right tail (HRT) and joint Burr

The HRT copula produces less correlation in the left tail and more correlation in the right tail Like a cone from 0 to 1, with the tip at 1.

What is internal systemic risk?

Uncertainty because the valuation models are imperfect representations of the insurance process. These are risks internal to the liability valuation process.

What to look for in a graph of Normalized Residuals vs. Expected Incremental Loss when testing a loss emergence curve [Clark]

Use this graph to check the assumption that the variance/mean scale parameter, σ^2, is constant. If the residuals are clustered closer to zero at either high or low expected incremental losses, this assumption may not be appropriate.

What to look for in a graph of Normalized Residuals vs. Calendar Year when testing a loss emergence curve [Clark]

Use this graph to test for diagonal effects. You might see particularly high or low residuals for a specific calendar year. For example, if CY 2014 has negative residuals, this might be evidence of CY effects that are resulting in lower losses than expected.

What to look for in a graph of Normalized Residuals vs. Increment Age when testing a loss emergence curve [Clark]

Use this graph to test how well the loss emergence curve G(x) fits incremental losses at different development periods. See whether the curve overestimates for some development periods and underestimates for others.

G(x) formula for Weibull

G(x) = 1 - exp[-(x/θ)^ω]

G(x) formula for Loglogistic

G(x) = x^ω / (x^ω +θ^ω)
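Both emergence curves as small Python functions (parameters invented for illustration):

from math import exp

def g_weibull(x, theta, omega):
    return 1 - exp(-((x / theta) ** omega))

def g_loglogistic(x, theta, omega):
    return x**omega / (x**omega + theta**omega)

# hypothetical parameters: expected % of ultimate emerged by 24 months
print(g_weibull(24.0, 30.0, 1.5), g_loglogistic(24.0, 30.0, 1.5))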

Formula for collective loss ratio claims reserve.

%Unpaid * Premium * ELR

Credibility weights for (1) Benktander, (2) Neuhaus, and (3) Optimal credible loss ratio claims reserve

(1) Z = p,i (2) Z = p,i * ELR (3) Z = p,i / [p,i + sqrt(p,i)], where p,i is the expected % of losses paid (or reported) to date
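A Python sketch combining the optimal weight with the individual and collective reserves defined on nearby cards (all inputs hypothetical):

premium, elr = 10000.0, 0.80   # hypothetical premium and expected loss ratio
p = 0.40                       # expected % of ultimate paid to date
paid = 3500.0                  # actual paid losses to date

r_coll = (1 - p) * premium * elr   # collective loss ratio claims reserve
r_ind = (1 - p) * paid / p         # individual loss ratio claims reserve
z = p / (p + p ** 0.5)             # optimal credibility weight
reserve = z * r_ind + (1 - z) * r_coll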

Formula for EPD (expected policyholder deficit).

(TVaR - VaR) * (complement of the specified probability level). (TVaR - VaR) is the expected value of the defaulted losses given that a default occurs; multiplying by the default probability gives the expected policyholder deficit.
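Numerically, with hypothetical 99%-level metrics:

tvar_99, var_99 = 500.0, 300.0          # hypothetical TVaR and VaR at 99%
epd = (tvar_99 - var_99) * (1 - 0.99)   # = 2.0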

In a soft market, what should an insurer focus on?

- Intellectual property: maintain talent, invest in tech, etc. - Underwriter incentives: incentive plans should be flexible, changing with the market. - Market overreaction: don't write at inadequate rates. - Owner education: educate stockholders/policyholders that financial figures will change in a soft market.

Why is the Benktander method superior to the BF and CL methods?

-Lower mean squared error -Better approximation of the exact Bayesian procedure -Superior to CL since it gives more weight to an a priori expectation of ultimate losses -Superior to BF since it gives more weight to actual loss experience.

What characteristics does a good ERM system have?

-Balance between risk/reward -The model must recognize its own imperfections -Reflect the relative importance of various risks to business decisions -Quantify the risks

What risks do insurers face?

-Insurance hazard: the risk assumed by the insurer in exchange for premium. Underwriting (non-CAT losses from current exposures), Accumulation (CAT losses from current exposures), and Reserve deterioration (losses from past exposures) -Financial risks (investments, equity prices, interest rates) -Operational (e.g. IT systems) -Strategic

Three types of insurance hazard risk

1. U/W risk 2. Accumulation/CAT risk 3. Reserve deterioration risk

What are the steps to calculating the variance of prospective losses using the CC method with LDF curve fitting [Clark] given premium, ELR, the variance/mean ratio, and a covariance matrix of ELR, θ, and ω

1) Calculate expected losses for the prospective year: E[Prospective Loss] = Prem * ELR 2) Process variance = E[Prospective Loss] * σ^2, where σ^2 is the variance/mean ratio 3) Parameter variance = Var(ELR) * Prem^2 4) StdDev(Prospective Loss) = sqrt(Process var + Param var)
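These steps in Python, with invented inputs:

from math import sqrt

prem, elr = 10000.0, 0.70   # hypothetical on-level premium and Cape Cod ELR
sigma2 = 250.0              # hypothetical variance/mean ratio
var_elr = 0.0004            # hypothetical Var(ELR) from the covariance matrix

expected = prem * elr                 # step 1
process_var = sigma2 * expected       # step 2
param_var = var_elr * prem**2         # step 3
sd = sqrt(process_var + param_var)    # step 4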

How can risk allocation be done?

1. Allocate the overall risk to individual business units based on the business unit's individual risk. 2. Allocate it based on the business unit's contribution to the overall risk.

The three steps to decision analysis.

1. Deterministic project analysis: uses a single deterministic forecast to produce an objective function like IRR or PV. Risk is handled judgmentally rather than stochastically. 2. Risk analysis: forecasts of distributions of critical variables are input into a Monte Carlo simulation process to produce a distribution of the present value of cash flows. Risk judgment is still applied intuitively. 3. Certainty equivalent: expands upon risk analysis by quantifying the intuitive risk judgment using a utility function (i.e. corporate risk preference). The utility function does not replace judgment; instead, it formalizes judgment so that it can be consistently applied.

What steps are in the ERM process?

1. Diagnose all risks (general environment, industry, and firm-specific risks) 2. Analyze those risks (quantify, identify correlations, etc.) 3. Implement a risk-mitigation strategy. 4. Monitor the actual outcomes.

Assumption and disadvantage of the discounted cash flow model.

Assumption: We assume that free cash flow not paid immediately as dividends can be invested to earn an appropriate risk-adjusted return. Disadvantage: Adjusting projected Net Income to calculate forecasted free cash flows makes the interpretation of FCFE difficult (FCFE may bear little resemblance to internal forecasts).

Assumptions of the dividend discount model. Disadvantages.

Assumptions: • Expected dividends in the forecast horizon • Dividend growth rate beyond the forecast horizon (for the terminal value) • Growth rate is linked to the risk-adjusted discount rate • Risk-adjusted discount rate Disadvantages: • Actual dividend payments are discretionary and difficult to forecast • Terminal value is sensitive to assumptions and can represent the majority of the overall valuation

Why is the Benktander method a mixture of the CL method and the BF method?

Both the BF and CL methods represent extreme positions: CL fully believes the current losses Ck, while BF gives them no credibility. The Benktander method credibility-weights the two.

How does the discounted cash flow model overcome some of the limitations of the dividend discount model?

DCF uses free cash flow instead of dividends. Free cash flow is all the cash that could have been paid as dividends, even if it was instead retained or used to pay other capital providers, which avoids relying on discretionary dividend decisions.

What is economic capital? Pros/cons when it comes to setting capital?

Economic capital is the capital level required so that the probability of insolvency is very remote (e.g. 1 in 3000). Pros: Unifying measure for all risks the insurer has, aggregates risks, more meaningful than RBC or leverage Cons: not very reliable at remote probability levels, choice of a given probability level is questionable

Examples of violations of the assumption underlying the Mack model of losses are independent between AYs.

Examples of calendar year effects: • Major changes in claims handling practices • Major changes in setting case reserves • Unexpectedly high (or low) inflation • Significant changes due to court decisions

Reinsurer ABC writes an excess of loss treaty with a primary insurer. • The treaty covers losses in the $1,000,000 excess of $500,000 layer. • The contract requires notification of loss occurrences with a reported Loss & ALAE that exceeds 50% of the retention. Discuss the steps involved from the time an accident is reported to the primary insurer to the time it's reported to the reinsurer.

First, loss occurrence would need to be reserved at 250k or more (50% retention) before the primary insurer would need to report it to the reinsurer. Then, the claim must go through the cedant's reporting system to the reinsurance accounting department before it is reported to the reinsurer, possibly through an intermediary. Finally, the loss would be booked and show up in the reinsurer's claims system. The result is a longer reporting lag than primary insurance.

Steps to calculating normalized residuals [Clark]

Follow the same steps as for calculating the variance of the reserves (fit G(x), calculate LDFs, and build the expected incremental loss triangle). Then: Norm. Resid = (Actual incLoss - Exp. incLoss) / sqrt(σ^2 * Exp. incLoss)
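In Python, for a single cell (values hypothetical):

from math import sqrt

actual, expected, sigma2 = 480.0, 520.0, 250.0   # hypothetical cell values
norm_resid = (actual - expected) / sqrt(sigma2 * expected)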

Why are dependencies in an ERM model important? What is the difference between correlation and dependencies?

If the dependencies in the ERM model aren't realistic, then each individual component of the ERM model could be realistic, but combined model of the company as a whole will be unrealistically stable. Sources of Dependency: • Macroeconomic conditions may impact many risks simultaneously. For example: o Inflation impacts underwriting losses AND adverse reserve development • Catastrophes may impact multiple lines that are usually uncorrelated. For example: o Home and commercial property are impacted by a hurricane • Underwriting cycles, loss trends and reserve development may impact multiple lines Correlation uses a single value and doesn't differentiate between different levels of dependency. A copula is better for this.

Identify and briefly describe two additional sources of external systemic risk that may be material in a risk margin analysis and give an example of each.

Legislative, political and claims inflation risk - Changes to the legislative/political environment and trends in the level of claim settlement (e.g. impact of recent legislation or changes in court interpretation) Claim management process change risk - Changes to claim reporting, payment or estimation processes (e.g. claims department moves to a new claims management platform)

Advantages of Hurlimann's optimal credibility weighting

Minimizes MSE and variance of the loss reserve estimate

Gumbel copula

More tail concentration than Frank's copula Asymmetric with more weight in the right tail. Graph looks like a hard 45 degree line running from 0 to 1, then more spread out upwards from there.

How would you test whether the variance of cumulative losses at 36 months is proportional to cumulative losses at 24 months [ Mack - 1994 ]?

Plot weighted residuals vs. cumulative losses. The residuals should be random around zero and shouldn't have significant trends or patterns.

What are some problems with reinsurance loss reserving? (7)

Problem 1: Claim report lags to reinsurers are generally longer, especially for casualty excess losses Problem 2: There is a persistent upward development of most claim reserves, caused by inflation, the tendency of claims adjusters to reserve at modal values, and the tendency to under-reserve ALAE Problem 3: Claims reporting patterns differ greatly by reinsurance line, by type of contract, by specific contract terms, by cedant and possibly by intermediary Problem 4: Because of the heterogeneity stated in Problem 3, industry statistics are not very useful Problem 5: Information given to reinsurers is often insufficient Problem 6: Because of the heterogeneity in coverage and reporting requirements, reinsurers often have data coding and IT systems problems Problem 7: The size of an adequate loss reserve compared to surplus is greater for a reinsurer

Fully discuss the similarities and differences between the ODP bootstrap and GLM bootstrap models.

The GLM and ODP bootstrap produce the same results when: • There is a separate parameter for each AY and development period • There are no calendar year parameters • The GLM bootstrap model uses a log-link function and ODP error distribution The GLM bootstrap fits a GLM to the incremental loss triangle to estimate expected incremental losses mw,d. For each iteration, the GLM is re-fit to the sample loss triangles and the parameters are used to estimate expected incremental losses for projected losses. The GLM bootstrap allows for flexibility in grouping parameters or adding new ones. In contrast, the ODP bootstrap uses volume-weighted LDFs to calculate expected incremental losses, mw,d, from the original triangle. The Chain Ladder method is used again on each sample loss triangle to calculate expected incremental losses for projected losses.

When is the budgeted loss ratio method equal to the least squares development method?

When b=0 (no AY effects). Recall that the budgeted loss method sets the new loss equal to the mean of all the losses in the same development period.

Formula for individual loss ratio claims reserve.

[ %Unpaid * Cumulative Loss ] / %Paid

Insurance company ABC writes Personal Auto insurance. Over the past year the claims department was re-organized and new policies were implemented to focus on closing small claims faster. Recently the reserving department has noticed a significant uptick in Bodily Injury inflation for the latest calendar year, more than expected. a) State the three assumptions of the Mack Model. b) State whether these assumptions are violated.

a) • Expected incremental loss is proportional to cumulative loss-to-date • Losses are independent between accident years • Variance of the next incremental loss is proportional to cumulative loss-to-date, with a factor varying by age b) The first assumption is violated if there is a significant speed-up of settling claims because the historical data is no longer appropriate to estimate development factors. • The second assumption is violated because of the calendar year effect of unexpected inflation. • There's no indication if the third assumption is violated or not, but this can be tested by looking at a plot of the normalized residuals calculated using different variance assumptions.

Forms of risk management (ARMER)

• Avoidance of the risk • Reduction in the chance of occurrence (lower the frequency) • Mitigation of the effect of the risk (if it occurs) • Elimination/Transfer of the risk (e.g. reinsuring losses) • Retention of some or all of the risk (assuming the risk)

Advantages to using parameterized curves like Loglogistic and Weibull to describe a loss emergence pattern

• Estimation is simple since we only have to estimate two parameters • We can use data that is not from a triangle with evenly spaced evaluation data - such as the case in which the latest diagonal is only nine months from the second latest diagonal • The final pattern is smooth and does not follow random movements in the historical age-to-age factors

What is agency theory?

• How to align management and owner interests • Understanding the impacts when management and owner interests are different

Key assumptions to Clark variance

• Incremental losses are iid o Independent - One period doesn't impact surrounding periods (this assumption fails if there are calendar year effects such as inflation) o Identically distributed - Assume the same emergence pattern, G(x), for all accident years (this assumption fails if the mix of business or claims handling changes) • Variance/Mean scale parameter, σ^2, is fixed and known o We ignore the variance of σ^2 • Variance estimates use an approximation to the Rao-Cramer lower bound

An actuary fit a Weibull curve to incremental losses and estimated the coefficient of variation of unpaid claims with the Cape Cod method. What are two assumptions underlying the model?

• Incremental losses are iid - One period doesn't affect surrounding periods and the emergence pattern is the same for all accident years. • The Variance/Mean scale parameter (σ^2 ) is fixed and known.

The three main assumptions of the Clark model (i.e. using Weibull to describe loss emergence)

• Incremental losses are independent and identically distributed. • The Variance/Mean scale parameter (σ^2) is fixed and known. • Variance estimates are based on an approximation to the Rao-Cramer lower bound.

Insurer ABC has a portfolio of personal auto, homeowners, personal umbrella and workers compensation business. The actuary is setting up a new model to calculate the company's risk margin for the outstanding claims and premium liabilities. a. Describe the three sources of uncertainty for ABC. b. For each source of uncertainty, briefly discuss an example of a risk that the actuary should account for when calculating the risk margin. c. Discuss what correlation effects the actuary should incorporate between the sources of uncertainty when calculating the risk margin.

• Independent risk (e.g. randomness of insurance): represents risks due to the randomness of the insurance process, from parameter risk and process risk. • Internal systemic risk (e.g. the model does not fully represent the insurance process): risks internal to the valuation process that are common across claim groups; reflects the fact that the valuation model imperfectly represents the insurance process. • External systemic risk (e.g. legislative decisions): risks external to the valuation process that may result in actual experience differing from what's currently expected. c. Independent risk is uncorrelated with the other sources. Internal systemic risk is uncorrelated with independent risk, but the same-actuary effect (the same model is used) creates correlation between outstanding claim and premium liabilities. External systemic risk is uncorrelated between risk categories; correlation effects may arise between valuation classes and between premium and outstanding claim liabilities, depending on the risk.

Types of financial risks

• Interest rates • Foreign exchange rates • Equity prices • Credit quality • Liquidity

LDF vs Cape Cod

• LDF method over-parameterizes the model, fits to the "noise" in the data o For a 10-yr triangle, there are 12 parameters to estimate and only 55 data points • Cape Cod has lower parameter variance because there are fewer parameters to estimate and it uses more information (on-level premium)

Why is it difficult for outsiders to create reliable projections of a company?

• Outsiders may not have sufficiently detailed data to create projections • It may be difficult to estimate growth and rate adequacy • It may be difficult to forecast financials for even relatively short forecast horizons (e.g. 5yr)

What is the difference in opinion between how regulators and shareholders think capital should be used?

• Regulators and rating agencies require enough capital to protect policyholders from default • Shareholders require that capital is used efficiently

What are some potential reference points in how capital should be set?

• Set capital so that the probability of default is appropriately remote (default avoidance) • Set capital to support renewal business • Set capital so that an insurer can thrive after a catastrophe

Advantages of Hurlimann method (cred wtg individual and collective reserves) over Mack-2000 (Benktander)

• Straightforward calculation of the optimal credibility weight • Different actuaries get the same result using the collective loss ratio claims reserve with the same premiums (BF method requires an ELR assumption)

When is least squares development appropriate?

• The least squares fit does NOT make sense if year to year changes in loss experience are due largely to systematic shifts or distortions in the book of business • The least squares fit may be appropriate if year to year changes are due largely to random chance

