Robertson - NCCI's 2007 HG mapping


Summarize the process used in the 2007 NCCI HG mapping study

1. Developed excess ratios for each class at five selected limits
2. Grouped classes with similar credibility-weighted excess ratio vectors (not standardized) using weighted k-means cluster analysis
3. Enhanced the groupings using principal components (PC) analysis
4. Determined the optimal number of groups (seven) using weighted k-means cluster analysis and test statistics
5. Had an underwriter panel review the initial groups, and revised the groupings based on their input


Why were five limits selected (rather than all 17)?

1. Excess ratios at any pair of limits are highly correlated, so a few limits capture most of the information

2. Limits below $100K are heavily represented in the list of 17 limits, so using all 17 would give undue weight to the lower limits


Why standardize?
When is standardization appropriate?

Why? When variables have different units or spreads, standardizing prevents a variable with large values from exerting undue influence on the cluster results.

It is appropriate when the spread of values is due to normal random variation.

It is not appropriate when the spread is due to the presence of sub-classes, since the spread itself is then informative.


Why did NCCI decide not to standardize?

1. Excess ratios share a common unit of measure ($ of excess loss / $ of total loss); standardizing would produce a new variable without that common-unit interpretation
2. Standardizing could result in values outside the [0, 1] range of excess ratios
3. Standardizing would have reduced the influence of the lower loss limits, where the bulk of the data is
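Point 2 can be seen with a quick sketch (the excess ratios below are made up): standardizing, i.e., subtracting the mean and dividing by the standard deviation at a limit, pushes values outside the [0, 1] range in which excess ratios live.

```python
import numpy as np

# Hypothetical excess ratios for four classes at a single loss limit
r = np.array([0.10, 0.20, 0.30, 0.60])

# Standardize: subtract the mean, divide by the standard deviation
z = (r - r.mean()) / r.std()

# Standardized values fall outside [0, 1], losing the excess ratio interpretation
print(z)
```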


How the k-means algorithm works

1. Assign classes to k arbitrary groups
2. Calculate the centroid of each group (the weighted average excess ratio vector)
3. Compare the excess ratio vector of each class to every centroid
4. Move each class to the group with the closest centroid
5. If any class moved, go back to step 2 and repeat

- This is analogous to maximizing R^2 in linear regression
- It minimizes the within-group variance and maximizes the between-group variance
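The loop above can be sketched as a minimal weighted k-means (the class weights, excess ratio vectors, and choice of k below are illustrative, not NCCI's actual data):

```python
import numpy as np

def weighted_kmeans(X, w, k, n_iter=100, seed=0):
    """Weighted k-means on the rows of X (one excess ratio vector per class).

    X : (n_classes, n_limits) array of excess ratios
    w : (n_classes,) array of weights (e.g., class premium)
    """
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))      # step 1: arbitrary groups
    for _ in range(n_iter):
        # step 2: weighted centroid (average excess ratio vector) per group
        centroids = np.array([
            np.average(X[labels == j], axis=0, weights=w[labels == j])
            if np.any(labels == j)
            else X[rng.integers(len(X))]          # reseed an empty group
            for j in range(k)
        ])
        # steps 3-4: move each class to the group with the closest centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):    # step 5: stop when no class moves
            break
        labels = new_labels
    return labels, centroids

# Two classes with low excess ratios, two with high: they separate cleanly
X = np.array([[0.10, 0.05], [0.12, 0.06], [0.50, 0.40], [0.52, 0.42]])
w = np.array([1.0, 2.0, 1.0, 2.0])
labels, centroids = weighted_kmeans(X, w, k=2)
print(labels)
```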


How did the NCCI decide to use seven as the new number of HGs?

1. Two test statistics were selected:
- Calinski-Harabasz (CH)
- Cubic Clustering Criterion (CCC)

For both tests, a higher statistic indicates a better clustering.

2. Three scenarios were tested:
- all classes
- only classes with over 50% credibility
- only 100%-credibility classes

3. The number of groups tested ranged from 4 to 9.

4. Seven groups were indicated in 5 of the 6 tests.

5. The exception was the CCC test on all classes, which indicated 9 groups.
This was given little emphasis because:
- the CH test outperforms CCC
- CCC deserves less weight when correlation is present, as is the case in all NCCI scenarios
- the selection should be driven by the large, credible classes
- there was crossover in the ELFs of the 9 HGs


On what basis does NCCI define HGs?
Why are HGs defined on a countrywide basis rather than varying by state?

A HG is a collection of WC classifications that have similar ELFs over a wide range of limits.

NCCI defines HGs on a countrywide basis; HGs do not vary by state. NCCI takes the view that classes are homogeneous with respect to the operations of the insureds, and therefore the relative mix of injuries within a class should not vary much from state to state.


Describe the desirable optimality properties that result from k-means to determine clusters

It is equivalent to maximizing R-squared in linear regression. It maximizes the variance between groups while minimizing the variance within groups.


Credibility by class is determined by the following formula: Z = min(1.5 * n/(n + k), 1)
What is one consideration when deciding whether to use this credibility formula?
Describe two alternative methods.

Consideration: what size of class is required to achieve full credibility.

Here n is the number of claims in the class, and k is the average number of claims per class.

Alternative methods: (1) exclude medical-only claims from the claim counts; (2) replace k with the median number of claims per class.
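As a minimal sketch of the formula (the claim counts used below are illustrative): solving 1.5n/(n + k) >= 1 shows that full credibility requires n >= 2k, i.e., a class needs at least twice the average claim count.

```python
def credibility(n, k):
    """Z = min(1.5 * n / (n + k), 1).

    n: number of claims in the class
    k: average number of claims per class
    Solving 1.5 * n / (n + k) >= 1 gives n >= 2k, so a class reaches
    full credibility once it has at least twice the average claim count.
    """
    return min(1.5 * n / (n + k), 1.0)

print(credibility(50, 100))   # small class: partial credibility
print(credibility(200, 100))  # n = 2k: full credibility
```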


One advantage of PC analysis over GLM

PC analysis identifies the variables (or combinations of variables) that capture most of the variation, allowing one to eliminate other correlated variables from the model. This makes the model simpler without much loss of predictive power.


Describe two test statistics that could be used to determine the optimal number of groups from the cluster analysis.

1. Calinski-Harabasz (CH): measures the between-group variance divided by the within-group variance.

2. Cubic Clustering Criterion (CCC): compares the variance explained by the chosen clusters to that explained by randomly assigned clusters.
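A minimal, unweighted sketch of the Calinski-Harabasz statistic (the study's version weights classes, and the data below are made up):

```python
import numpy as np

def calinski_harabasz(X, labels):
    """CH = (between-group SS / (k - 1)) / (within-group SS / (n - k)).

    Higher values indicate tighter, better-separated clusters.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n, grand = len(X), X.mean(axis=0)
    groups = np.unique(labels)
    k = len(groups)
    between = sum((labels == g).sum()
                  * ((X[labels == g].mean(axis=0) - grand) ** 2).sum()
                  for g in groups)
    within = sum(((X[labels == g] - X[labels == g].mean(axis=0)) ** 2).sum()
                 for g in groups)
    return (between / (k - 1)) / (within / (n - k))

# A well-separated grouping scores far higher than a mixed-up one
X = np.array([[0.10, 0.05], [0.12, 0.06], [0.50, 0.40], [0.52, 0.42]])
print(calinski_harabasz(X, [0, 0, 1, 1]) > calinski_harabasz(X, [0, 1, 0, 1]))  # True
```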