Lecture 6 Flashcards

(21 cards)

1
Q

What is assumption 1 for Multiple Linear Regression (MLR1)

A

The (true) population model is a linear function of the explanatory variables
y = β0 + β1x1 + β2x2 + β3x3 + … + βkxk + u

2
Q

Assumption 2 for MLR (MLR2)

A

The sample of size n, {(yi, xi1, xi2, xi3, …, xik): i = 1, 2, …, n}, taken from the population and used to generate the estimates, is random, and n > k + 1

3
Q

Assumption MLR3: No perfect collinearity

A
  • There is variation in all the variables
  • There are no exact linear relationships among the independent variables
4
Q

MLR4: Zero conditional mean assumption

A

The disturbance term, 𝑒, has an expected value of zero given any values of the independent variables
𝐸(𝑒|π‘₯_1,π‘₯_2,π‘₯_3,…, π‘₯_π‘˜ )=0

5
Q

Why is the zero conditional mean assumption often not valid

A

Because factors that affect y and are correlated with the x variables have been omitted from the model

6
Q

Why is the zero conditional mean assumption more likely to be valid in the MLR model

A

Because more variables are explicitly included in the model, so fewer relevant factors are left in the error term

7
Q

If two variables are perfectly collinear, what is the solution for MLR

A

Exclude one of the explanatory variables from the model

8
Q

Assumption MLR5: Homoscedasticity

A

The errors have the same variance irrespective of the explanatory variables

π‘‰π‘Žπ‘Ÿ(𝑒|𝒙)=𝜎^2

9
Q

Under the 5 assumptions, what are the variances of the OLS estimators (β̂1, β̂2, β̂3, …, β̂k)

A

Var(β̂j) = σ^2 / [SSTj(1 − Rj^2)], for j = 1, 2, …, k,

where SSTj = Σi (xij − x̄j)^2 is the total sample variation in xj, and Rj^2 is the R-squared from regressing xj on all of the other explanatory variables, i.e., all the other xs.
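Under the five assumptions, Var(β̂j) = σ^2 / [SSTj(1 − Rj^2)] equals the j-th diagonal element of σ^2 (X'X)^-1, so the formula can be sanity-checked numerically. A minimal sketch on simulated data (the data-generating process and all numbers are hypothetical):

```python
import numpy as np

# Hypothetical simulated data: two correlated regressors plus an intercept.
rng = np.random.default_rng(0)
n, sigma2 = 200, 4.0                          # assumed true error variance
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)            # x2 correlated with x1
X = np.column_stack([np.ones(n), x1, x2])

# Var(beta_hat_1) via the formula: sigma^2 / [SST_1 * (1 - R_1^2)]
SST1 = np.sum((x1 - x1.mean()) ** 2)
Z = np.column_stack([np.ones(n), x2])         # regress x1 on the other x's
coef = np.linalg.lstsq(Z, x1, rcond=None)[0]
R2_1 = 1 - np.sum((x1 - Z @ coef) ** 2) / SST1
var_formula = sigma2 / (SST1 * (1 - R2_1))

# Same quantity from the matrix expression Var(beta_hat) = sigma^2 (X'X)^{-1}
var_matrix = sigma2 * np.linalg.inv(X.T @ X)[1, 1]

assert np.isclose(var_formula, var_matrix)    # the two agree
```

The agreement is exact (up to floating point): by the Frisch-Waugh-Lovell logic, SSTj(1 − Rj^2) is the residual sum of squares from regressing xj on the other regressors, which is what the matrix inverse picks up.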

10
Q

How do the variance of the error, the sample size and the variation in the explanatory variables (x) affect the variances of the OLS estimators

A

  • Larger, the larger σ^2, the variance of the error (disturbance) terms
  • Smaller, the larger n, the size of the sample used in the estimation
  • Smaller, the greater the variation in the explanatory variables x
11
Q

How does the correlation between the explanatory variables (Rj^2) impact the variances of the OLS estimators

A

Larger, the greater the correlation between the explanatory variables x (i.e., the closer Rj^2 is to 1)

12
Q

In the model
y = β0 + β1x1 + β2x2 + u, how does a greater correlation between x1 and x2 impact Var(β̂1)

A
  • The greater the correlation between x1 and x2, the closer R1^2 gets to its maximum value of 1
  • The closer R1^2 gets to 1, the closer Var(β̂1) = σ^2 / [SST1(1 − R1^2)] gets to ∞
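This blow-up is easy to see by simulation. A sketch on hypothetical data, computing Var(β̂1) for increasingly correlated regressors (the correlations and sample size are illustrative choices):

```python
import numpy as np

# As corr(x1, x2) -> 1, R_1^2 -> 1 and Var(beta_hat_1) grows without bound.
rng = np.random.default_rng(1)
n, sigma2 = 500, 1.0

def var_beta1(rho):
    x1 = rng.normal(size=n)
    # construct x2 so that corr(x1, x2) is approximately rho
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    return sigma2 * np.linalg.inv(X.T @ X)[1, 1]

vars_ = [var_beta1(r) for r in (0.0, 0.9, 0.99, 0.999)]
# variances increase sharply as the correlation approaches 1
assert vars_[0] < vars_[1] < vars_[2] < vars_[3]
```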
13
Q

Which assumption is violated if R1^2 = 1

A

MLR3 (no perfect collinearity) would be violated, since R1^2 = 1 means x1 and x2 are perfectly collinear

14
Q

Diagram showing how multicollinearity increases the variance of the estimators

A

(Diagram from the lecture; not reproduced in this export)
15
Q

How much of a problem is multicollinearity

A
  • It does not cause bias in the estimators
  • It only seriously inflates Var(β̂j) when the correlation is very high
  • It is tempting to remove one (or more) of the explanatory variables, but this can lead to (omitted-variable) bias in the estimators
  • Increasing the sample size, n, helps
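The last point can be checked by simulation: with the correlation held fixed at a high value, a larger n still shrinks Var(β̂1). A sketch on hypothetical data (ρ = 0.95 and the sample sizes are illustrative choices):

```python
import numpy as np

# Even with highly correlated regressors, increasing n shrinks Var(beta_hat_1).
rng = np.random.default_rng(4)
rho = 0.95

def var_beta1(n, sigma2=1.0):
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    return sigma2 * np.linalg.inv(X.T @ X)[1, 1]

v_small, v_large = var_beta1(100), var_beta1(10_000)
assert v_large < v_small   # larger sample => smaller variance
```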
16
Q

What is used, like in the SLR, as an estimator for σ^2

A

the variance of the residuals, with a degrees-of-freedom adjustment: σ̂^2 = SSR/(n − k − 1) = Σi ûi^2 / (n − k − 1)
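A minimal sketch of this estimator on simulated data (the data-generating process is hypothetical); with a true error standard deviation of 3, σ̂^2 should land near 9:

```python
import numpy as np

# sigma_hat^2 = SSR / (n - k - 1), the degrees-of-freedom adjusted
# variance of the OLS residuals.
rng = np.random.default_rng(2)
n, k = 1000, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta = np.array([1.0, 2.0, -0.5])            # hypothetical true coefficients
u = rng.normal(scale=3.0, size=n)            # true sigma^2 = 9
y = X @ beta + u

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k - 1)     # close to 9 in a draw this size
```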

17
Q

What are the measures of the dispersion of the OLS estimators β̂1 to β̂k, in the MLR

A

Their standard deviations: sd(β̂j) = √Var(β̂j) = σ / √(SSTj(1 − Rj^2))

18
Q

What is the standard error of the regression

A

πœŽΜ‚=√(πœŽΜ‚^2 )

19
Q

What are the standard errors of the OLS estimators β̂1 to β̂k, in the MLR

A

se(β̂j) = σ̂ / √(SSTj(1 − Rj^2)), i.e., the standard deviation formula with the unknown σ replaced by its estimate σ̂
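These standard errors are the square roots of the diagonal of σ̂^2 (X'X)^-1. A sketch on simulated (hypothetical) data:

```python
import numpy as np

# Standard errors of the OLS estimators from sigma_hat^2 * (X'X)^{-1}.
rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)  # hypothetical DGP

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])   # n - k - 1, with k = 2
se = np.sqrt(sigma2_hat * np.diag(np.linalg.inv(X.T @ X)))
# se[1] is the standard error of beta_hat_1, se[2] of beta_hat_2
```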

20
Q

Acronym for the Gauss-Markov Theorem (BLUE)

A

Best (smallest variance)
Linear (linear function of the data)
Unbiased (E(β̂j) = βj)
Estimators

21
Q

As long as the Gauss-Markov assumptions are valid, is there any better estimator

A

No. By the Gauss-Markov theorem, the OLS estimators are BLUE: among all linear unbiased estimators, none has a smaller variance.