Ex ante vs Ex post return distributions
Ex post returns are realized outcomes.
Future possible returns and their probabilities are referred to as expectational or ex ante returns.
Two properties needed for past return behaviour to predict future potential returns
Normal vs Lognormal
Benefit of lognormal (log) returns over a normal distribution of discretely compounded returns
Modeling discretely compounded returns as normally distributed over a particular time interval -> the model will not be valid for any other choice of time interval, because multi-period returns compound multiplicatively and products of normals are not normal.
The normal distribution replicates additively; thus, if the log returns over one time interval can be modeled as normally distributed, then the log returns over all time intervals will also be normally distributed (making gross returns lognormal), as long as they are statistically independent through time.
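The additive property can be sketched in Python (the per-period return values are assumed, purely for illustration):

```python
import math

# Hypothetical per-period simple (discretely compounded) returns
r1, r2 = 0.05, -0.02

# Simple returns compound multiplicatively across periods: NOT r1 + r2
two_period_simple = (1 + r1) * (1 + r2) - 1

# Log returns add across periods: log((1+r1)(1+r2)) = log(1+r1) + log(1+r2)
log1, log2 = math.log(1 + r1), math.log(1 + r2)
two_period_log = math.log((1 + r1) * (1 + r2))
```

Because sums of independent normals are normal, normality of one-period log returns carries over to every horizon; no analogous closure property holds for simple returns.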
Skewness
Measures the asymmetry of a distribution around its mean: positive skew -> long right tail; negative skew -> long left tail.
Kurtosis
Captures the fatness of the tails of a distribution. High value = fatter tails.
Shapes
1. Leptokurtic (Tallest): positive excess kurtosis -> fat tails (higher probabilities of extreme outcomes)
2. Mesokurtic (Normal)
3. Platykurtic (Lowest): negative excess kurtosis -> skinny tails (lower probabilities of extreme outcomes)
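A minimal sketch of sample excess kurtosis using only the standard library (the two sample datasets are assumed, chosen to show each shape):

```python
from statistics import mean

def excess_kurtosis(xs):
    # Fourth standardized moment minus 3 (the normal distribution's kurtosis)
    m, n = mean(xs), len(xs)
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3

# Flat, uniform-like data: platykurtic (negative excess kurtosis)
flat = list(range(10))
# Mostly small values with rare extremes: leptokurtic (positive excess kurtosis)
fat_tailed = [0] * 20 + [10, -10]
```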
Covariance
Directional relationship between the returns on two assets.
Covariance is calculated by
1. averaging the products of the two assets' return surprises (deviations from their expected returns), OR
2. multiplying the correlation between the two random variables by the standard deviation of each variable.
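Both calculation routes can be sketched with the standard library (the two return series are assumed values for illustration):

```python
from statistics import mean, pstdev

# Hypothetical return series for two assets
a = [0.04, 0.02, -0.01, 0.03, 0.00]
b = [0.03, 0.01, -0.02, 0.02, 0.01]

# Route 1: average the products of the return surprises
ma, mb = mean(a), mean(b)
cov_surprises = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Route 2: correlation times the product of the standard deviations
sa, sb = pstdev(a), pstdev(b)
corr = cov_surprises / (sa * sb)      # definition of the correlation coefficient
cov_from_corr = corr * sa * sb        # recovers the same covariance
```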
Correlation Coefficient
Strength of association between two variables
Perfect Linear Negative Correlation (-1): the 2 assets move in exactly opposite directions in the same proportion
Perfect Linear Positive Correlation (+1): the 2 assets move in exactly the same direction in the same proportion
0: no linear association between returns of two assets
Covariance vs Variance
Both variance and covariance measure how data points are distributed around a calculated mean. However, variance measures the spread of data along a SINGLE AXIS, while covariance examines the DIRECTIONAL RELATIONSHIP between TWO VARIABLES.
Covariance vs Correlation
Covariance measures the DIRECTION of a relationship between two variables, while correlation measures the STRENGTH of that relationship.
Beta
Beta is a measure of a stock’s volatility in relation to the overall market.
Textbook: covariance between asset return and index return, divided by variance of index return
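The textbook formula can be sketched directly (both monthly return series are assumed values for illustration):

```python
from statistics import mean

# Hypothetical monthly returns for a stock and a market index
stock = [0.05, -0.02, 0.03, 0.01, -0.01]
index = [0.03, -0.01, 0.02, 0.01, 0.00]

ms, mi = mean(stock), mean(index)
cov = sum((s - ms) * (i - mi) for s, i in zip(stock, index)) / len(index)
var_index = sum((i - mi) ** 2 for i in index) / len(index)

# Covariance of asset and index returns, divided by variance of index returns
beta = cov / var_index
```

Here beta works out to 1.8: a beta above 1 means the stock has tended to amplify the index's moves.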
Relationship between Variance and Standard Deviation
SD = Square root of Variance
Variance = SD squared
Autocorrelation
Relationship between Second-Order vs First-Order Autocorrelation
E.g. first-order autocorrelation: 0.7. If there is no further causality beyond one period, then the second-order autocorrelation will be 0.7 x 0.7 = 0.49. If it is more than 0.49, then the return at T-2 carries additional positive predictive power for T, beyond what the correlation of T with T-1 implies.
Partial Autocorrelation Coefficient
Adjusts autocorrelation coefficients to isolate the portion of the correlation in a time series attributable directly to a particular higher-order relation.
E.g. "removes" the effect of first-order autocorrelation from the second-order coefficient to isolate the marginal effect of the T-2 return on T.
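The lag-1 vs lag-2 relationship can be illustrated with a simulated AR(1) series (process parameters and seed are assumed, purely for illustration):

```python
import random
from statistics import mean

random.seed(0)

# Simulate an AR(1) process r_t = 0.7 * r_{t-1} + noise, where ALL of the
# influence of r_{t-2} on r_t flows through r_{t-1}
phi, n = 0.7, 20000
r = [0.0]
for _ in range(n):
    r.append(phi * r[-1] + random.gauss(0, 1))

def autocorr(xs, lag):
    m = mean(xs)
    num = sum((xs[t] - m) * (xs[t - lag] - m) for t in range(lag, len(xs)))
    return num / sum((x - m) ** 2 for x in xs)

rho1, rho2 = autocorr(r, 1), autocorr(r, 2)
# rho1 comes out near 0.7 and rho2 near 0.7**2 = 0.49, so the lag-2
# partial autocorrelation (what lag 2 adds beyond lag 1) is near zero
```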
Durbin Watson test
Test to detect autocorrelation in the residuals from a statistical or regression model.
Always has a value between 0 and 4: a value near 2 indicates no first-order autocorrelation, below 2 suggests positive autocorrelation, and above 2 suggests negative autocorrelation.
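A minimal sketch of the statistic itself, computed from residuals (the residual values are assumed for illustration):

```python
# Durbin-Watson statistic: sum of squared successive differences of the
# residuals, divided by the sum of squared residuals
def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    return num / sum(e ** 2 for e in residuals)

# Trending residuals (positive autocorrelation) push DW toward 0 ...
dw_pos = durbin_watson([1.0, 1.1, 1.2, 1.3, 1.4, 1.5])
# ... alternating residuals (negative autocorrelation) push DW toward 4
dw_neg = durbin_watson([1, -1, 1, -1, 1, -1])
```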
Define and explain standard deviation (volatility)
Typical amount by which actual return deviates from the average
Describe the properties of variance
Describe the properties of standard deviation
Why are some returns non-normal?
1. A____
2. I____
3. N__-L___
Test for Normality
Jarque-Bera
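A sketch of the Jarque-Bera statistic from its definition, JB = n/6 * (S^2 + (K-3)^2 / 4), where S is sample skewness and K is sample kurtosis:

```python
from statistics import mean

def jarque_bera(xs):
    # Large JB -> reject normality (under H0, JB is roughly chi-square
    # with 2 degrees of freedom)
    n, m = len(xs), mean(xs)
    var = sum((x - m) ** 2 for x in xs) / n
    skew = (sum((x - m) ** 3 for x in xs) / n) / var ** 1.5
    kurt = (sum((x - m) ** 4 for x in xs) / n) / var ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
```

For example, a sample dominated by a single extreme outlier produces a far larger JB than a symmetric, evenly spread sample.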
Heteroskedasticity vs Homoskedasticity
Heteroskedasticity happens when the variance of a variable's error terms, observed over a period of time, is non-constant.
Homoskedasticity is when the variance of a variable is constant.
With heteroskedasticity, the tell-tale sign upon visual inspection of the residual errors is that they tend to fan out over time. IMPACT: violates the assumptions of linear regression models, such as those used to estimate CAPM.
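The "fanning out" pattern can be simulated (the error process and its parameters are assumed for illustration):

```python
import random
from statistics import pvariance

random.seed(1)

# Residuals whose standard deviation grows with time: the classic
# fanning-out signature of heteroskedasticity
n = 2000
resid = [random.gauss(0, 1 + 3 * t / n) for t in range(n)]

var_early = pvariance(resid[: n // 2])   # variance of the first half
var_late = pvariance(resid[n // 2 :])    # variance of the second half
# Under homoskedasticity the two halves would have similar variance;
# here var_late is several times var_early
```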
Generalized autoregressive conditional heteroskedasticity (GARCH) method:
Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) is a statistical model used in analyzing time-series data where the variance error is believed to be serially autocorrelated. GARCH models assume that the variance of the error term follows an autoregressive moving average process.
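A minimal GARCH(1,1) variance-recursion sketch (parameter values are assumed for illustration, not estimated):

```python
import random
from statistics import mean

random.seed(2)

# GARCH(1,1): next period's variance is driven by this period's squared
# shock and this period's variance
omega, alpha, beta = 0.1, 0.1, 0.8    # alpha + beta < 1 -> stationary
sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
variances = []
for _ in range(1000):
    eps = random.gauss(0, 1) * sigma2 ** 0.5   # shock scaled by current vol
    variances.append(sigma2)
    sigma2 = omega + alpha * eps ** 2 + beta * sigma2
```

Large shocks raise next period's variance and the effect decays at rate alpha + beta, producing the volatility clustering GARCH is designed to capture.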