GLMs - Anderson Flashcards (Exam 8)
1

Problems with One-way analysis

1. Potentially distorted by correlations among rating variables (see the sketch after this list).

2. Does not consider interdependencies between rating variables in the way they impact the quantity being modeled (i.e., interaction effects).
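
A toy numeric sketch of problem 1, assuming hypothetical multiplicative effects (base 100, young x2.0, urban x1.5) and an exposure mix in which young drivers are concentrated in the urban territory; none of these numbers come from the source. A model with both factors would recover the true relativities, while the one-way view does not.

```python
import pandas as pd

# Hypothetical exposures and true multiplicative pure premiums.
cells = {("young", "urban"): 90, ("young", "rural"): 10,
         ("old", "urban"): 10, ("old", "rural"): 90}
rows = []
for (age, terr), exposure in cells.items():
    mean = 100 * (2.0 if age == "young" else 1.0) * (1.5 if terr == "urban" else 1.0)
    rows.append({"age": age, "exposure": exposure, "loss": mean * exposure})
df = pd.DataFrame(rows)

# One-way relativity for age: ratio of exposure-weighted average losses.
g = df.groupby("age")[["loss", "exposure"]].sum()
avg = g["loss"] / g["exposure"]
print(avg["young"] / avg["old"])  # ~2.76: overstates the true 2.0 age effect,
# because the one-way view also picks up the correlated urban effect
```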

2

Problems with Classical Linear Models

1. It is difficult to assert Normality and constant variance for response variables.

2. The values of the dependent variable (the Y variable) may be restricted to positive values, but the Normal distribution allows negative values, violating this restriction.

3. If Y is always positive, then intuitively the variance of Y moves toward zero as the mean of Y moves toward zero, so the variance is related to the mean.

4. The linear model only allows for additive relationships among the predictor variables, which may be inadequate to describe the response variable (many insurance effects are multiplicative).

3

Benefits of GLMs

1. The statistical framework allows for explicit assumptions about the nature of the data and its relationship with the predictor variables.

2. The method of solving GLMs (maximum likelihood estimation) is more technically efficient than iterative methods such as the minimum bias procedures.

3. GLMs provide statistical diagnostics which aid in selecting only significant variables and in validating model assumptions.

4. GLMs adjust for correlations between variables and allow for interaction effects.

4

Steps to solving a Classical Linear Model

1. Set up the general equation in terms of $Y$, the $\beta$'s, and the $X$'s.

2. Write down an equation for each observation by replacing the $X$'s and $Y$ with the observed values in the data. You will have as many equations as observations in the data. For observation $i$, the equation may contain some $\beta$ values and will contain the error term $\varepsilon_i$.

3. Solve each equation for $\varepsilon_i$.

4. Form the Sum of Squared Errors (SSE) by plugging in the $\varepsilon_i$ expressions: $SSE = \sum_{i=1}^{n} \varepsilon_i^2$.

5. Minimize the SSE by taking its derivative with respect to each $\beta$ and setting the derivatives equal to 0.

6. Solve the resulting system of equations for the $\beta$ values.
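
A minimal numerical sketch of steps 4 through 6, on toy data (all names and values hypothetical): setting the SSE derivatives to zero yields the normal equations $X^T X \beta = X^T Y$, which are solved directly below.

```python
import numpy as np

# Toy data: Y = 1 + 2*X1 + noise (coefficients chosen for illustration).
rng = np.random.default_rng(0)
X1 = rng.uniform(0, 10, size=50)
Y = 1.0 + 2.0 * X1 + rng.normal(0, 0.5, size=50)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(X1), X1])

# Steps 4-6: minimizing SSE = sum((Y - X @ beta)**2) gives the
# normal equations X.T @ X @ beta = X.T @ Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)  # approximately [1.0, 2.0]
```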

5

Components of Classical Linear Models

1. Systematic - The $p$ covariates are combined to give the "linear predictor" $\eta$, where $\eta = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \cdots + \beta_p X_p$.

2. Random - The error term is Normally distributed with mean zero and variance $\sigma^2$, so $\mathrm{Var}(Y_i) = \sigma^2$ (constant across observations).

3. Link function - Equal to the identity function.

6

Components of Generalized Linear Models

1. Systematic - The $p$ covariates are combined to give the "linear predictor" $\eta$, where $\eta = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \cdots + \beta_p X_p$.

2. Random - Each $Y_i$ is independent and from the exponential family of distributions, with $\mathrm{Var}(Y_i) = \phi \, V(\mu_i) / \omega_i$, where $\phi$ is the scale parameter, $V(\cdot)$ is the variance function, and $\omega_i$ is the prior weight for observation $i$.

3. Link function - Must be differentiable and monotonic.
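
As an illustrative sketch only (the source names no software), the three components map directly onto a GLM fit in Python's statsmodels: the family supplies the random component and its variance function, and the link is specified explicitly.

```python
import numpy as np
import statsmodels.api as sm

# Toy severity-like data; all values are illustrative assumptions.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=200)
mu = np.exp(1.0 + 0.8 * x)                # systematic component via a log link
y = rng.gamma(shape=2.0, scale=mu / 2.0)  # random component: Gamma with E[y] = mu

X = sm.add_constant(x)  # linear predictor eta = b0 + b1*x
model = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()))
result = model.fit()
print(result.params)  # roughly [1.0, 0.8]
print(result.scale)   # estimated scale parameter phi
```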

7

Common exponential family distribution variance functions

Error Distribution : Variance Function

Normal : $V(x) = 1$ (as in a Classical Linear Model)

Poisson : $V(x) = x$

Gamma : $V(x) = x^2$

Binomial : $V(x) = x(1 - x)$

Inverse Gaussian : $V(x) = x^3$

Tweedie : $V(x) = (1/\lambda) \, x^p$, where $p < 0$, $1 < p < 2$, or $p > 2$
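
These variance functions are simple enough to encode directly; the sketch below is a hypothetical helper (not from the source) that makes each family's mean-variance relationship explicit.

```python
# Variance functions V(x) for common exponential-family distributions.
variance_functions = {
    "normal": lambda x: 1.0,
    "poisson": lambda x: x,
    "gamma": lambda x: x ** 2,
    "binomial": lambda x: x * (1 - x),
    "inverse_gaussian": lambda x: x ** 3,
    "tweedie": lambda x, p=1.5, lam=1.0: (1.0 / lam) * x ** p,
}

# E.g., for a Gamma model the variance grows with the square of the mean,
# so the coefficient of variation is constant.
mu = 50.0
print(variance_functions["gamma"](mu))  # 2500.0
```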

8

Methods of estimating the scale parameter

1. Maximum likelihood (not feasible in practice)

2. The moment estimator (Pearson $\chi^2$ statistic): $\hat{\phi} = \frac{1}{n-p} \sum_{i=1}^{n} \frac{\omega_i (Y_i - \mu_i)^2}{V(\mu_i)}$

3. The total deviance estimator: $\hat{\phi} = D / (n - p)$, where $D$ is the total deviance
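
A minimal numpy sketch of estimators 2 and 3, with hypothetical argument names: `V` is the variance function, `weights` the $\omega_i$, and `n_params` is $p$, the number of fitted parameters.

```python
import numpy as np

def pearson_phi(y, mu_hat, V, weights, n_params):
    """Moment (Pearson chi-square) estimate of the scale parameter phi."""
    chi2 = np.sum(weights * (y - mu_hat) ** 2 / V(mu_hat))
    return chi2 / (len(y) - n_params)

def deviance_phi(total_deviance, n_obs, n_params):
    """Total-deviance estimate of phi."""
    return total_deviance / (n_obs - n_params)

# Example with a Gamma variance function V(x) = x**2 (toy numbers).
y = np.array([120.0, 80.0, 150.0, 95.0])
mu_hat = np.array([110.0, 90.0, 140.0, 100.0])
w = np.ones_like(y)
print(pearson_phi(y, mu_hat, lambda x: x ** 2, w, n_params=2))
```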

9

Common Link Functions

Link Function : $g(x)$, & Inverse $g^{-1}(x)$

Identity : $g(x) = x$, $g^{-1}(x) = x$

Log : $g(x) = \ln(x)$, $g^{-1}(x) = e^x$

Logit : $g(x) = \ln(x / (1 - x))$, $g^{-1}(x) = e^x / (1 + e^x)$

Reciprocal : $g(x) = 1/x$, $g^{-1}(x) = 1/x$

The link function relates the mean to the linear predictor: $\eta_i = g(\mu_i)$, so $\mu_i = g^{-1}(\eta_i)$.
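
A small sketch encoding these pairs as Python callables (a hypothetical mapping, not from the source); the round trip $g^{-1}(g(\mu)) = \mu$ checks each inverse.

```python
import numpy as np

# Link functions g and their inverses.
links = {
    "identity":   (lambda x: x,                   lambda e: e),
    "log":        (np.log,                        np.exp),
    "logit":      (lambda x: np.log(x / (1 - x)), lambda e: np.exp(e) / (1 + np.exp(e))),
    "reciprocal": (lambda x: 1 / x,               lambda e: 1 / e),
}

# Round trip: g_inverse(g(mu)) should recover mu.
mu = 0.3
for name, (g, g_inv) in links.items():
    assert np.isclose(g_inv(g(mu)), mu), name
```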

10

Common model forms for insurance data

1. Claim frequencies/counts - Multiplicative Poisson (Log link function, Poisson error term)

2. Claim severity - Multiplicative Gamma (Log link function, Gamma error term)

3. Pure Premium - Tweedie (Log link function; the Tweedie error term is a compound Poisson-Gamma distribution, combining the frequency and severity models above)

4. Probability (e.g., of policyholder retention) - Logistic (Logit link function, Binomial error term)
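
As a sketch of how two of these forms might be specified in statsmodels (a software choice assumed here, not made in the source), on synthetic data with hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical toy policy-level data; all names and values are illustrative.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "age_group": rng.choice(["young", "old"], size=n),
    "territory": rng.choice(["urban", "rural"], size=n),
    "exposure": rng.uniform(0.5, 1.0, size=n),
})
lam = 0.1 * np.where(df["age_group"] == "young", 2.0, 1.0)
df["claim_count"] = rng.poisson(lam * df["exposure"])
df["retained"] = rng.binomial(1, np.where(df["age_group"] == "young", 0.70, 0.85))

# 1. Claim frequency: multiplicative Poisson (log link is the Poisson default).
freq = smf.glm("claim_count ~ age_group + territory", data=df,
               family=sm.families.Poisson(), exposure=df["exposure"]).fit()
print(np.exp(freq.params))  # fitted multiplicative relativities

# 4. Retention probability: logistic (Binomial error term, logit link).
ret = smf.glm("retained ~ age_group + territory", data=df,
              family=sm.families.Binomial()).fit()
print(ret.params)
```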

11

Aliasing and near-aliasing

Aliasing is when there is a linear dependency among the covariates in the model. Types of aliasing:

1. Intrinsic aliasing - When the linear dependency occurs by the definition of the covariates (e.g., indicator columns for every level of a categorical factor sum to the intercept column; see the sketch after this list).

2. Extrinsic aliasing - When the linear dependency occurs because of the nature of the data (e.g., two levels of different factors that happen to occur on exactly the same records).

3. Near-aliasing - When covariates are nearly, but not perfectly, linearly dependent; this can cause convergence problems and unstable parameter estimates.
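
A minimal numpy sketch of the intrinsic case (hypothetical design matrix): with an intercept plus an indicator for every level of a two-level factor, the columns are linearly dependent by definition.

```python
import numpy as np

intercept = np.ones(6)
is_young = np.array([1, 1, 1, 0, 0, 0])
is_old = 1 - is_young  # is_young + is_old == intercept, by definition

X = np.column_stack([intercept, is_young, is_old])
print(np.linalg.matrix_rank(X))  # 2, not 3: one column is aliased

# Standard fix: drop one level (the base level) from the design matrix.
X_ok = np.column_stack([intercept, is_young])
print(np.linalg.matrix_rank(X_ok))  # 2: full column rank
```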

12

Ways to decide whether to include a factor in the model

1. Size of confidence intervals (usually viewed graphically in practice)

2. Type III testing

3. Check whether the parameter estimates are consistent over time

4. Intuition that the factor should impact the result

13

Type III test statistics

1. $\chi^2$ test statistic = $D_1^* - D_2^* \sim \chi^2_{(df_1 - df_2)}$

2. F test statistic = $\frac{(D_1 - D_2) / (df_1 - df_2)}{D_2 / df_2} \sim F_{(df_1 - df_2),\, df_2}$

Here model 1 is nested within model 2, $D^*$ denotes scaled deviance, $D$ denotes unscaled deviance, and $df$ denotes degrees of freedom.
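
A small sketch of both statistics using scipy (function names and the toy deviances are hypothetical), with model 1 nested in model 2:

```python
from scipy import stats

def type3_chi2(D1_star, D2_star, df1, df2):
    """Chi-square test on scaled deviances of nested models (1 within 2)."""
    stat = D1_star - D2_star
    return stat, stats.chi2.sf(stat, df1 - df2)

def type3_f(D1, D2, df1, df2):
    """F test on unscaled deviances, for use when phi must be estimated."""
    stat = ((D1 - D2) / (df1 - df2)) / (D2 / df2)
    return stat, stats.f.sf(stat, df1 - df2, df2)

# Toy numbers (illustrative only): statistic and p-value for each test.
print(type3_chi2(250.0, 240.0, 96, 92))  # stat 10.0 on 4 degrees of freedom
print(type3_f(250.0, 240.0, 96, 92))
```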