Quantitative Methods 8% - 12%


When sampling from the distributions below, which test statistic should you choose for a small and for a large sample size? - Normal distribution with a known variance: - Normal distribution with an unknown variance: - Nonnormal distribution with a known variance: - Nonnormal distribution with an unknown variance: When should you use the t stat? When should you use the z stat?

***Note: An unknown population variance means use the t test. If you are given the sample variance, you still don't know the population variance.
- Normal distribution w/ known population variance: use z stat for both small & large samples
- Normal distribution w/ unknown population variance: small sample use t stat ; large sample use t stat**
- Nonnormal distribution w/ known population variance: small sample = no test ; large sample use z stat
- Nonnormal distribution w/ unknown population variance: small sample = no test ; large sample use t stat**
- Nonnormal distribution & small sample = no test
**t stat is preferred, but z stat can be used
Use t stat: Normal or nonnormal distribution & population variance is unknown. Df = n - 1
Use z stat: Normal or nonnormal distribution & population variance is known

Explain: - Combination formula: - Permutation formula: - Labeling formula:

Combination formula: # of ways we can choose r objects from a total of n objects when order doesn't matter. Look for the words "choose", "select" or "combination". Use the nCr key on the calc; input the larger # first, then the smaller #.
Permutation formula: # of ways we can choose r objects from a total of n objects when order matters. The nPr answer is always ≥ the nCr answer. Look for "order" being important. Ex) An investor has 5 stocks & wants to sell 3, one at a time: use the nPr key on the calc. Press 5, 2nd, nPr, 3, =. Input the larger # (5) first, then the smaller # (3).
Multiplication rule for labeling: 3+ subgroups of predetermined size. Each element must be assigned a place/label in one of three or more subgroups. # of ways n objects can be labeled with k different labels, with n₁ of the 1st type, n₂ of the 2nd type, etc. = n! / (n₁! n₂! ... nk!)
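A quick way to check these counts is Python's math module; this is just an illustrative sketch (the 10-stock labeling example and its group sizes are made up, not from the card):

import math
from math import factorial

# Combination: choose 3 of 5 stocks, order doesn't matter
print(math.comb(5, 3))        # 10

# Permutation: sell 3 of 5 stocks one at a time, order matters
print(math.perm(5, 3))        # 60, always >= the combination count

# Labeling (multinomial): label 10 stocks as 4 "buy", 3 "hold", 3 "sell"
n, groups = 10, (4, 3, 3)
labels = factorial(n) // math.prod(factorial(k) for k in groups)
print(labels)                 # 10! / (4! 3! 3!) = 4200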

7 steps of hypothesis testing: Explain: - Type I error: - Type II error: - Significance level: - Power of test: If you decrease the significance level, what happens to Type I and Type II errors?

Hypothesis testing 7 steps: 1. State the hypothesis 2. Identify the test stat & its distribution 3. Specify the significance level 4. State the decision rule 5. Collect the data & perform the calculations 6. Make the statistical decision (reject / fail to reject) 7. Make the economic or investment decision based on the test result
Type I Error: Rejecting a true null. The prob of a Type I error is the test's significance level (rejection area). You reject Hₒ when it's true. Ex) Convicting an innocent person
Type II Error: Failing to reject a false null. You don't reject Hₒ when it's false. Ex) Failing to convict a guilty person
Significance level: Prob of a Type I error, i.e., rejecting the null when it's true
Power of test: 1 - Prob(Type II error) = Prob(rejecting a false null). Ex) Convicting a guilty person
If significance level↓ (smaller): Prob of Type I error↓ & prob of Type II error↑

What are the 3 assumptions of linear regression? Explain the simple linear regression formula: Use this example to explain below: Ex) r̂ stock = 0.5% + 0.9r̂market - Predicted intercept formula? How do you interpret regression coefficient intercept? - Predicted slope coefficient formula? How do you interpret regression coefficient slope? - Residual: Also known as?

Linear Regression Assumptions: - Linear relationship b/w dependent & independent variables (the dependent variable's variation is explained by the independent variable's variation) - Variance of residual terms is constant/same across all observations (homoskedasticity) - Error terms are normally & independently distributed (uncorrelated) w/ each other
Linear regression model: Yᵢ = b₀ + b₁(Xᵢ) + εᵢ ; predicted value: Ŷᵢ = b̂₀ + b̂₁(Xᵢ)
Yᵢ = Dependent variable (vertical y-axis) ; b̂₀ = Predicted intercept term ; b̂₁ = Predicted slope coefficient ; Xᵢ = Independent variable (horizontal x-axis) ; εᵢ = Residual for the ith observation (disturbance/error term)
Ex) r̂ stock = 0.5% + 0.9r̂market ; r̂ stock = Y, r̂market = X, ^ = predicted value
Residual/Error term: Difference b/w Y's actual & predicted value = (Yᵢ - Ŷᵢ)
Predicted intercept: b̂₀ = Ȳ - b̂₁(X̅). The line intersects the y-axis at X = 0; it is the value of Y when X = 0 ~ Predicted stock return is 0.5% when the market return is 0
Predicted slope coefficient for the regression line: △Y for a one-unit △X: b̂₁ = COVx,y / σₓ² (i.e., variance of X) - Interpretation: The stock return is expected to ↑ by 0.9% for a 1% ↑ in the market return
Predicted values confidence interval: [Ŷ - (Tc • Sf)] < Y < [Ŷ + (Tc • Sf)] ; Tc = two-tailed critical value w/ df = n - 2 ; Sf = standard error of the forecast = Predicted value +/- (critical value w/ n-2 df)(standard error of forecast)
Ex) Given the standard error of the forecast & asked for a 95% confidence interval: use the outline 95% confidence interval = Mean +/- (1.96)(SD), but w/ the above formula
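As a sanity check on the slope and intercept formulas (b̂₁ = Cov(X,Y)/Var(X), b̂₀ = Ȳ − b̂₁X̄), here is a minimal sketch with made-up market and stock returns (hypothetical data, Python 3.10+ for statistics.covariance):

from statistics import mean, covariance, variance

# Hypothetical monthly returns (%) for the market (X) and a stock (Y)
x = [1.0, -2.0, 3.0, 0.5, 2.5]
y = [1.4, -1.3, 3.2, 0.9, 2.8]

b1 = covariance(x, y) / variance(x)     # predicted slope = Cov(X,Y) / Var(X)
b0 = mean(y) - b1 * mean(x)             # predicted intercept = Ybar - b1 * Xbar
print(round(b0, 3), round(b1, 3))

# Predicted Y for a 1% market return
print(round(b0 + b1 * 1.0, 3))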

Explain: - Regression line: What does it do? - Ordinary least squares (OLS) regression: - Sum of squared errors (SSE): - Regression sum of squares (RSS): - Total sum of squares (SST): In an ANOVA table: - Is the regression explained or unexplained? Df? - Is the residual explained or unexplained? Df? - How many df are in SST? How do you calculate: - R²: What is this? - Standard error of estimate (Se): - Mean square regression (MSR): - Mean square error (MSE):

Regression line: Minimizes SSE, i.e., the vertical differences b/w actual Y & Ŷ along the regression line.
Linear regression confidence intervals: Df = n - 2. T test for a regression coefficient = [(b̂₁ - b₁)/(Sb̂₁)] ; Sb̂₁ = standard error of b̂₁ - Simple linear regression only: T test for correlation = [r⋅√(n-2) / √(1 - r²)]
Ordinary Least Squares (OLS) Regression: Fits the line that minimizes prediction errors (minimizes SSE/residuals). The best regression line minimizes SSE. OLS slope = b̂₁
Sum of Squared Errors (SSE): Σ(Yᵢ - Ŷᵢ)². Unexplained variation in Y; difference b/w Y's actual value & Ŷ. Df = n - 2 - Mean Square Error (MSE) = SSE / (n - 2)
Standard Error of Estimate (SEE): SD of the residuals. Low is best. = √MSE
Regression Sum of Squares (RSS): Σ(Ŷᵢ - Ȳ)². Variation in Y explained by X; sum of squared differences b/w Y's predicted value & Y's mean. Explained variation in Y. Df = k (= 1 for simple regression)
Mean Squared Regression (MSR) = RSS / k
F Stat: Tests whether the regression has significant explanatory power; F = MSR/MSE
Sum of Squares Total (SST): Σ(Yᵢ - Ȳ)². Total variation in Y; sum of squared differences b/w actual values & Ȳ. Df = n - 1. SST = SSE + RSS
Coefficient of Determination (R²): % of Y's total variation explained by X's variation. High is best. R² = (Correlation)² in simple regression ; R² = RSS/SST (Explained/Total)
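The ANOVA pieces fit together as SST = RSS + SSE; a small sketch that computes R², SEE, MSR, MSE, and the F stat (the actual and fitted values are hypothetical, so the identity only holds approximately here):

from statistics import mean

# Hypothetical actual and fitted (predicted) values of Y
y     = [2.0, 3.1, 4.2, 4.8, 6.1]
y_hat = [2.2, 3.0, 4.0, 5.0, 5.9]
n, k  = len(y), 1                     # k = 1 slope coefficient in simple regression

y_bar = mean(y)
sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # unexplained variation
rss = sum((fi - y_bar) ** 2 for fi in y_hat)            # explained variation
sst = sum((yi - y_bar) ** 2 for yi in y)                # total variation (= RSS + SSE for a true OLS fit)

r2  = rss / sst            # coefficient of determination
mse = sse / (n - 2)        # mean square error, df = n - 2
msr = rss / k              # mean square regression, df = k
see = mse ** 0.5           # standard error of estimate = sqrt(MSE)
f   = msr / mse            # F statistic
print(round(r2, 3), round(see, 3), round(f, 2))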

How to calculate: - Absolute frequency: - Relative frequency: - Joint frequency: - Measures of dispersion: - Measures of central tendency:

Absolute frequency: # of occurrences, not % of occurrences = # of observations tallied in the interval
Relative frequency: Absolute frequency as a % of the total = Absolute frequency / Total # of observations in the set
Joint frequency: # of observations w/ both attributes at the same time. Ex) Voters who support two candidates
Measures of Dispersion: Variability around the central tendency. Measures risk.
Measures of Central tendency: Where the data are centered (mean, median, mode). Measures reward.

Explain: - Unconditional probability: Also called? - Conditional probability: Also called? - Joint probability: Formula? - Addition rule of probability: Formula? - Total probability rule: What is it used to do? Formula?

Unconditional (marginal) Prob: P(A), probability an event occurs regardless of other event outcomes
Conditional Prob (likelihood): P(A | B), events are dependent. Prob of A happening, given B has already occurred. P(R | RI) = prob of recession given an ↑ in rates
Joint Prob (multiplication rule): P(AB), prob of both events occurring = P(AB) = P(A) x P(B | A) - For independent events: P(AB) = P(A) x P(B) - "And": use the multiplication rule of probability
Addition Rule of Prob: Prob A or B will occur = P(A or B) = P(A) + P(B) - P(AB) ** If A & B are mutually exclusive, then P(AB) = 0 - "Or": use the addition rule of probability
Total Prob Rule: Unconditional prob of an event occurring, given conditional probabilities = P(A) = [P(A|B₁) • P(B₁)] + [P(A|B₂) • P(B₂)] + .... - B₁, B₂, ... Bn are mutually exclusive & exhaustive events
Event has a 0.125 prob of occurrence: the event will occur 1 out of 8 times (1/8 = 0.125)
- Odds the event will occur: prob the event occurs divided by the prob it doesn't occur, i.e., odds for E = P(E) / [1 - P(E)]. Ex) 0.125 / (1 − 0.125) = (1/8) / (7/8) = 1/7, or 1 to 7
- Odds against the event occurring: reciprocal of the odds for the event occurring, i.e., 7 to 1

What is an annuity? When is it not an annuity? What is the difference between an annuity and a perpetuity? What is an example of a perpetuity and how do you calculate it? For an annuity: - What is an ordinary annuity: - What is an annuity due: - When do you set your calculator to Begin Mode vs End Mode? - When is the payment an asset vs a liability?

Annuity: Receiving the same CF amount at equal intervals over a given period
Not an annuity: If cash flows are different each period
Perpetuity: A perpetual annuity that goes on forever. Formula: PMT / rate of return (this shows how much you should pay for the stock). Ex) A preferred stock (bc it has no maturity date, i.e., the dividend goes on forever)
Ordinary Annuity: Cash flows occur at the end of each compounding period. Use END Mode.
Annuity Due: Required pmts at the beginning of each period, i.e., the first payment is today at t = 0. Use BGN Mode.
Asset vs liability: To the individual required to make the payments, the annuity is a legal debt liability; to the individual receiving the payments, it legally represents an asset.
Begin Mode vs End Mode: Set BGN Mode when payments occur at the beginning of each period (annuity due); set END Mode when payments occur at the end of each period (ordinary annuity).
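A minimal sketch of the PV math behind the calculator modes (payment, rate, and term below are hypothetical):

def pv_ordinary_annuity(pmt, r, n):
    # Payments at the END of each period (calculator END mode)
    return pmt * (1 - (1 + r) ** -n) / r

def pv_annuity_due(pmt, r, n):
    # Payments at the BEGINNING of each period (calculator BGN mode):
    # each payment arrives one period earlier, so multiply by (1 + r)
    return pv_ordinary_annuity(pmt, r, n) * (1 + r)

def pv_perpetuity(pmt, r):
    # Level payment forever, e.g. a preferred stock dividend
    return pmt / r

print(round(pv_ordinary_annuity(100, 0.05, 10), 2))  # 772.17
print(round(pv_annuity_due(100, 0.05, 10), 2))       # 810.78
print(round(pv_perpetuity(8, 0.10), 2))              # 80.0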

How do you calculate: - The expected value of a variable?: - Bayes' formula: In a probability model, how do you calculate: - Expected return: - Variance: - Standard deviation: - Covariance: How do you calculate correlation or correlation coefficient? Does order of covariance matter? Ie CovAB vs CovBA? How do you get the standard deviation from variance? How do you calculate the variance of risky stocks or the variance of portfolio returns? What does the covariance matrix show and how do you calculate it?

Bayes' Formula: Updates prior probabilities when given new information. P(Event | NewInfo) = [P(Event)•P(NewInfo | Event)] / P(NewInfo) = [(prob of the new info given the event) / (unconditional prob of the new info)] x (prior prob of the event)
Unconditional prob: Prob of the event occurring regardless of other event outcomes
** An easier way to calculate is to build a tree. Each pair of branches should sum to 100%. To find the end % of each branch, multiply the 1st line x the 2nd line. To answer the problem's probability question, take (% of the branch where it happens) / (total % of both branches in that category, found by adding them)
Expected Value/Return (e.g., expected EPS): Weighted avg of the possible outcomes. Build a tree: multiply the branches, then sum the answers = [P(X₁)•R(X₁)] + .... ; P(Xn) = given probability for X
** Var = σ². SD = σ. SD is √Var. Var = SD²
Variance of Prob Returns: Multiply by the prob (instead of dividing by n - 1)
- S1) Calculate the expected value/return = [P(X₁)•R(X₁)] + ....
- S2) Calculate the variance, multiplying by the prob (instead of dividing by n - 1) = P(R₁)•[R₁ - E(R)]² + .... ; E(R) = the value from Step 1
- S3) To get the SD of returns: √answer
Joint Prob Table: 3 pairs of outcomes for P & Q. The #s on the diagonal are the probabilities: prob of 15 & 7 is 20%, prob of 12 & 4 is 20%, prob of 0 & 0 is 60%.
         Q = 7   Q = 4   Q = 0
P = 15    0.2     0       0
P = 12     0     0.2      0
P = 0      0      0      0.6
Covariance in a probability model = Σ Prob • [(Ra - E(Ra)) • (Rb - E(Rb))]
- Step 1) Find the expected return for each: E(Ra) = (Prob₁)(Return₁) + (Prob₂)(Return₂) + ... ; E(Rb) = (Prob₁)(Return₁) + (Prob₂)(Return₂) + ...
- Step 2) For each outcome, multiply its probability by (Ra - E(Ra))(Rb - E(Rb)), then add the answers together = CovAB
Correlation = Correlation coefficient = ρAB = CovAB / (σA • σB) ; CovAB = CovBA, order doesn't matter
Return & Risk of a 2-Asset Portfolio, Portfolio variance: = (Wa²)(σ²a) + (Wb²)(σ²b) + 2[(Wa)(Wb)(ρAB)(σA)(σB)] ; (σ²a) = variance of A - If asked for the SD: √answer - Note** Var is σ²; if given the variance, don't square it again
Covariance Matrix: Covariances b/w asset returns. The diagonal entries are each asset's covariance with itself, i.e., its variance; the off-diagonal entries are the covariances (CovAB = CovBA, so they match). Correlation = CovAB (off-diagonal entry) / (σA x σB)
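A sketch of the probability-weighted calculations using the joint table above (same 0.2 / 0.2 / 0.6 probabilities; Python, standard library only):

# Joint outcomes from the table above: (probability, P return, Q return)
outcomes = [(0.2, 15, 7), (0.2, 12, 4), (0.6, 0, 0)]

e_p = sum(prob * p for prob, p, q in outcomes)          # expected value of P
e_q = sum(prob * q for prob, p, q in outcomes)          # expected value of Q
var_p = sum(prob * (p - e_p) ** 2 for prob, p, q in outcomes)
var_q = sum(prob * (q - e_q) ** 2 for prob, p, q in outcomes)
cov_pq = sum(prob * (p - e_p) * (q - e_q) for prob, p, q in outcomes)
corr = cov_pq / (var_p ** 0.5 * var_q ** 0.5)           # correlation coefficient
print(e_p, e_q, round(cov_pq, 2), round(corr, 3))       # 5.4, 2.2, 18.72, ~0.981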

Explain Bernoulli trials. How would you calculate P(4) from 5 trials with a 60% probability of success? Explain shortfall risk: What is Roy's safety-first ratio? What is the formula? Do we want a high or low safety-first ratio? Monte Carlo simulation:

Binomial prob distribution (Bernoulli trials): Only two possible outcomes (success or failure) for each trial - The random variable is the # of successes - The prob of success remains constant from one trial to another & the trials are independent - Mean (expected value) of a binomial variable = np - Variance = np(1 - p)
Prob of exactly x successes in n total trials: = (nCx)⋅(pˣ)⋅(1 - p)ⁿ⁻ˣ ; p = prob of success on each trial, n = total # of trials, x = desired # of successes
- Ex) Probability of 4 successes in 5 trials: P(4) = 5nCr4⋅(0.60)⁴⋅(1 - 0.60)⁵⁻⁴ = 0.2592
Shortfall Risk: Prob(return is < the threshold target) - S1) Look at the Z-table & locate the SFRatio answer - S2) Shortfall prob = 1 - (Z-table value for the SFRatio answer)
Roy's Safety-First Ratio: If the client's minimum acceptable return is > the RFR, the SFRatio will give the optimal portfolio. A portfolio that maximizes the SFRatio minimizes the prob that the return will be < the minimum acceptable return, if we assume returns are normally distributed. - Answer = # of SDs the threshold is below E(Rp) - Choose the portfolio w/ the highest ratio. Higher is safer (lower prob of a below-target return). SFRatio = [E(Rp) - Threshold] / σp ; Rl = threshold/target return ; Rp = portfolio return ; σp = SD of the portfolio
Monte Carlo Simulation: Repeatedly generating values of the risk factors that affect security values. Based on statistics; answers "what if" questions. - 1. Specify the prob distributions of stock prices, interest rates, etc. & their parameters (mean, variance) - 2. The computer randomly generates values for each risk factor - 3. Value the securities for each set of risk-factor values - 4. Repeat thousands of times & calculate the mean/variance of the resulting values - Limitation: Can't independently specify cause & effect relationships.
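The P(4) example checks out with a short binomial sketch (standard-library Python only):

from math import comb

def binom_pmf(x, n, p):
    # Probability of exactly x successes in n independent trials
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

n, p = 5, 0.60
print(binom_pmf(4, n, p))          # 0.2592
print(n * p, n * p * (1 - p))      # mean = np = 3.0, variance = np(1-p) = 1.2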

Explain: - Contingency table: - Marginal frequency: - Mode interval: - Confusion matrix: Explain each # in the confusion matrix chart: (1) Yes (2) No (3) Yes (4) No. What do these mean?: - In (4) No and (1) Yes: - In (2) No and (3) Yes:

Contingency table: Two-dimensional array showing the joint frequencies of two variables (# of observations w/ both attributes). Analyzes two variables at the same time. Values must be finite. Uses the chi-squared test statistic.
Marginal frequency: Total # of frequencies in a row or column. How to remember? The total is at the margin of the page, at the end of the chart/column
Mode interval: Interval w/ the greatest frequency
Confusion matrix: Two-dimensional array w/ two variables showing the # of predicted vs observed (actually happened) outcomes.
- How to read the confusion matrix chart: (1) Yes (2) No (3) Yes (4) No ; 1 = Yes, it happened ; 2 = No, it didn't happen ; 3 = Was predicted to happen ; 4 = Was not predicted to happen
- In (4) No & (1) Yes: The model predicted the event wouldn't occur, but it did
- In (2) No & (3) Yes: The model predicted the event would occur, but it didn't

What is central limit theorem? What is the standard error of the sample mean? When should you and how do you calculate it? Explain these estimators: - Unbiased: - Efficient: - Consistent:

Central Limit Theorem: As random sample sizes get larger (n > 30), the distribution of sample means approaches a normal distribution. The theorem holds for any underlying distribution (regardless of skew, shape, etc.) once n > 30. - The population mean & the mean of all possible sample means are equal. - For large samples, the sample mean is distributed approximately normally regardless of the distribution of the underlying population.
Standard Error of the Sample Mean: SD of the distribution of sample means. Used to create a confidence interval for the sample mean: = σ/√n = (population SD / √sample size). If the population SD is unknown, use the sample SD: s/√n
Resampling methods to estimate the standard error of the sample mean:
- Jackknife: Calculate multiple means from the same sample, each with one observation removed, & calculate the SD of those means. Put the observation back & repeat with the next one.
- Bootstrap: Draw repeated samples of the same size (with replacement) from the same data set & calculate the SD of the resulting sample means.
Confidence interval for a single value/observation (n = 1), used to estimate the range of possible values for next period's return: = μ +/- critical value • SD ; population mean μ = mean of all sample means
Confidence interval for the population mean: When n > 1 & the sample mean is used to infer the population parameter value
Hypothesis testing & confidence intervals are closely related: saying a confidence interval contains the value 0 is equivalent to saying we cannot reject the null hypothesis that the parameter equals 0
Unbiased Estimator: Expected value = true parameter value
Efficient Estimator: Has the smallest variance (tightest sampling distribution) of all unbiased estimators
Consistent Estimator: Becomes more accurate as the sample size increases
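A minimal sketch of the two resampling ideas, following the card's "SD of the resampled means" description (the return sample is made up; this is one simple way to implement it, not the only one):

import random
from statistics import mean, stdev

sample = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8]   # hypothetical returns

# Jackknife: recompute the mean leaving one observation out each time
jack_means = [mean(sample[:i] + sample[i + 1:]) for i in range(len(sample))]
print(round(stdev(jack_means), 4))

# Bootstrap: draw repeated samples of the same size WITH replacement
random.seed(0)
boot_means = [mean(random.choices(sample, k=len(sample))) for _ in range(1000)]
print(round(stdev(boot_means), 4))

# Analytical standard error of the sample mean: s / sqrt(n)
print(round(stdev(sample) / len(sample) ** 0.5, 4))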

What do these measure?: - Covariance: Formula? - Correlation coefficient: Formula? - Correlation formula? What is: - Spurious correlation: - Outliers: - The covariance of a random variable with itself? What is a problem with interpreting a correlation coefficient? What does correlation not imply?

Covariance: How two variables move together = [(X - X̄)(Y - Ȳ) + ... ] / (n - 1). The covariance of a random variable with itself is its variance.
Correlation Coefficient: Strength of the linear relationship b/w two variables = CovAB / (σA)(σB). Shows linear association b/w outcomes, but doesn't imply one causes the other (correlation does not imply causation). - Bounded by +1 (perfect positive correlation) & -1 (perfect negative correlation). Outliers & spurious correlation cause interpretation problems.
Spurious Correlation: Correlation that arises by chance or from a relationship to a third variable; there is no true relationship to capture.
Outliers: Data points significantly different from the others; extreme values lying outside the overall data pattern.

Formulas & explain: - Geometric mean (when do you use it): - Harmonic mean: - Compound annual rate of return: - Compound annual growth rate: - Portfolio return: - Account return: - Trimmed mean: - Winsorized mean: - Box and whisker plot:

Geometric Mean: Used for performance, i.e., the compound annual rate of return over multiple years = [(1 + R₁)•(1 + R₂) •....• (1 + Rn)]¹ᐟⁿ - 1
Harmonic Mean: Avg price per share you'd pay if you invest the same dollar amount each period = N / Σ(1/priceᵢ)
Compound annual rate of return: When periodic return rates vary from period to period. Uses the geometric mean. = [(1 + R₁)•(1 + R₂) •....• (1 + Rn)]¹ᐟⁿ - 1
Compound (annual) growth rate = (EndV / BeginV)¹ᐟⁿ - 1. Ex) Given dividends for 5 years & asked for the compound annual growth rate over the period: Yr 1 = BeginV, Yr 5 = EndV. The dividends in the middle years are irrelevant; don't use them. N = 4 bc there are 4 compounding periods, not 5 (even though there are 5 years; draw a timeline if confused)
Portfolio/Account Return: E(Rp) = Wa•E(Ra) + Wb•E(Rb)
Trimmed mean: Removes outliers by excluding the chosen % of the highest & lowest observations. A 1% trimmed mean removes 0.5% from the top & 0.5% from the bottom
Winsorized mean: Removes the effect of outliers by substituting values for the chosen % of the highest & lowest observations. Ex) A 90% winsorized mean substitutes the 95th percentile for all larger observations & the 5th percentile for all smaller observations
Arithmetic ≥ Geometric ≥ Harmonic
Box & whisker plot: Shows a data set based on its quartiles
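A quick sketch of the mean formulas (the return, price, and growth numbers below are hypothetical):

from statistics import mean, geometric_mean, harmonic_mean

returns = [0.30, 0.12, 0.25, 0.20, 0.23]          # hypothetical annual returns

# Geometric mean return: compound growth per period
geo = geometric_mean(1 + r for r in returns) - 1
print(round(geo, 4))

# Harmonic mean: avg cost per share when investing equal dollars each period
prices = [10.0, 12.5, 8.0]                        # hypothetical purchase prices
print(round(harmonic_mean(prices), 4))            # = 3 / (1/10 + 1/12.5 + 1/8)

# Compound annual growth rate from beginning and ending values over n periods
begin, end, n = 1.00, 1.75, 4
print(round((end / begin) ** (1 / n) - 1, 4))

# Check the ordering: arithmetic >= geometric
print(mean(returns) >= geo)                       # True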

In a linear model, what leads to biased predictions? For each model below, what is the dependent variable, independent variable, and slope interpretation? - Loglin: - Linlog: - Loglog:

A linear model creates biased predictions if the relationship b/w X & Y is not linear. To make the relationship linear, take the ln of the nonlinear variable(s).
Linear regression formula: Yᵢ = b₀ + b₁(Xᵢ) + εᵢ
Relative △ = the transformed (ln) variable ; Absolute △ = the variable that stays the same
Dependent variable = Y ; Independent variable = X
Loglin: ln(Yᵢ) = b₀ + b₁(Xᵢ) + εᵢ ; b₁ interpretation: relative △ in the dependent variable (ln Y) for an absolute △ in the independent variable (X). Forecast of Y is e^(predicted ln Y)
Linlog: Yᵢ = b₀ + b₁ln(Xᵢ) + εᵢ ; b₁ interpretation: absolute △ in the dependent variable (Y) for a relative △ in the independent variable (ln X)
Loglog: ln(Yᵢ) = b₀ + b₁ln(Xᵢ) + εᵢ ; b₁ interpretation: relative △ in the dependent variable (ln Y) for a relative △ in the independent variable (ln X). Forecast of Y is e^(predicted ln Y)
Check for improvements in goodness of fit (R² & F stat) w/ the transformed variable(s), & w/ the SEE (if Y isn't transformed)

For lognormal distribution: - It is always ___ and used for? - How is it skewed? - What is it always? What is the continuously compounded rate? How do you calculate it when given HPR? How do you calculate it when given price?

Lognormal Distribution: Generated by eˣ where X is normally distributed. Always positive; bounded from below by zero (can't be negative). Skewed to the right (hump on the left). Can model asset values if returns are normally distributed.
Continuously Compounded Rate: Stated rate w/ continuous compounding = ln(1 + HPR) = ln(End price / Beg price)
Ex) Given an 18-month holding period but asked for the annual rate of return w/ continuous compounding: = (12/18) • ln(EndP / BegP)
EAY w/ continuous compounding = e^(i⋅n) - 1 ; i = stated interest rate (for the annual yield, n = 1, so EAY = eⁱ - 1)

Explain the: - Parametric tests: - Nonparametric tests: What does the parametric test of correlation test? What is the formula? How many degrees of freedom? Test of independence from a contingency table (chi-squared test statistic): How do you calculate the expected frequency if independent, and then the test statistic? What is the Spearman rank correlation test? How do you calculate it? What do you do if n > 30?

Parametric tests: Make assumptions about the distribution of & the parameters of a population (t, z, & F tests)
Nonparametric tests: Don't consider specific parameters or make few assumptions about the distribution. Test other things such as rank correlation (a firm's rank in one period vs its rank in the next period) & runs tests (randomness of a sample). Use a contingency table.
Parametric test of correlation: Tests whether the population correlation coefficient = 0. Two-tailed t test, df = n - 2 (how to remember: correlation has 2 variables, so df = n - 2). T statistic = [r⋅√(n-2) / √(1 - r²)] ; r = sample correlation coefficient
- Spearman Rank Correlation Test: Tests whether two sets of ranks are correlated (does being ranked #1 in one year say anything about the next year?). Nonparametric test. If n > 30, use the t table w/ df = n - 2 & the same t statistic, w/ r = rank correlation
Test of independence from a contingency table (chi-squared test statistic): The table shows the # of sample observations w/ each combination of two characteristics. Df = (r - 1)(c - 1). Use the actual observations in the contingency table to calculate the expected observations in each cell if the characteristics (e.g., recommendations) are independent, then calculate the chi-square statistic to test the hypothesis of independence. Independent variables are uncorrelated, but uncorrelated (no linear relationship) variables are not necessarily independent.
Step 1) Calculate the expected # of observations in each cell if independent = (Row i total ⋅ Column j total) / (overall total)
Step 2) Calculate the chi-square statistic based on the differences b/w the actual & expected # of observations: χ² = Σ [(Oᵢ,ⱼ - Eᵢ,ⱼ)² / Eᵢ,ⱼ] ; Oᵢ,ⱼ = observed frequency in cell i,j ; Eᵢ,ⱼ = expected frequency if independent from Step 1. Df = (r - 1)(c - 1)
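A sketch of the two steps for the contingency-table test of independence (the 2x2 counts are hypothetical):

# Hypothetical contingency table: rows = recommendation, cols = outcome
observed = [[35, 15],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand      = sum(row_totals)

# Step 1: expected count in each cell if the two characteristics are independent
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Step 2: chi-square statistic, df = (rows - 1)(cols - 1)
chi_sq = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
             for i in range(len(observed)) for j in range(len(observed[0])))
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(round(chi_sq, 2), df)   # compare chi_sq to the chi-square critical value for df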

Normal distribution: - Is completely described by: - What is more likely, large or small deviations from the mean? - Mean = ? - Skew = ? - When do probabilities decrease?: - What is multivariate normal: What do you need to calculate? Explain: - Confidence interval: - Standard normal distribution: - How do you standardize a random variable? What is the formula? - When given the SD and asked for the probability of X being greater than another number, how do you calculate it? How do you calculate the confidence intervals for: - 68% confidence interval: - 90% confidence interval: - 95% confidence interval: - 99% confidence interval:

Normal distribution: Completely described by its mean (μ) & variance/SD. Symmetric distribution bc skew = 0. Kurtosis = 3. Linear combinations of 2+ normal random variables are also normally distributed. - Large deviations from the mean are less likely than small deviations. - Probabilities of outcomes farther above or below the mean get smaller & smaller; the tails go on forever (but are very thin)
Univariate distribution: A single normal random variable
Multivariate normal (distribution): Specifies probabilities for a group of related random variables. Must know the means, variances & correlation coefficients.
Standard Normal Distribution: A standardized normal distribution w/ a mean of 0 & SD of 1
- To standardize a random variable: Calculate the z value, which is the # of SDs from the mean: Z = (X - μ) / σ ; X = the given value of interest ; μ = the population mean ; σ = the SD
Ex of standardizing a random variable: EPS is expected to be $5 w/ an SD of $1. What's the probability EPS will be $6 or more? Z = ($6 - $5) / $1 = 1 ; Probability ≈ 16% - Interpretation: A z of 1 means 68% of observations fall within +/- 1 SD of the mean, leaving 32% in the two tails, or 16% in each tail, so P(EPS ≥ $6) ≈ 16%. If asked for P(EPS < $6), it's the rest of the distribution, ≈ 84%.
Confidence interval: Range of values around an expected outcome. "Outcomes will be b/w this & this, X% of the time." Asserts that w/ a given prob (1 - α) it will contain the parameter it's intended to estimate. Assumes the random variable is normally distributed (or n > 30). These are for two-tailed tests:
- 68% = within 1 SD = Mean +/- (1.00)(SD)
- 90% confidence interval = Mean +/- (1.645)(SD)
- 95% confidence interval = Mean +/- (1.96)(SD)
- 99% confidence interval = Mean +/- (2.58)(SD)
95% confidence (5% significance) one-tailed test: Mean +/- (1.645)(SD)
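The EPS example in code, using the standard library's normal distribution (just a sketch; a z-table lookup gives the same answer):

from statistics import NormalDist

eps = NormalDist(mu=5.0, sigma=1.0)        # EPS ~ Normal($5, $1)

z = (6.0 - 5.0) / 1.0                      # standardize: z = (X - mu) / sigma
print(z)                                   # 1.0 -> one SD above the mean

print(round(1 - eps.cdf(6.0), 3))          # P(EPS >= $6) ~ 0.159 (about 16%)
print(round(eps.cdf(6.0), 3))              # P(EPS <  $6) ~ 0.841 (about 84%)

# Memorized two-tailed confidence intervals around the mean
for z_crit in (1.00, 1.645, 1.96, 2.58):
    print(5.0 - z_crit, 5.0 + z_crit)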

Explain: - Null hypothesis (Ho): - Alternative hypothesis (Ha): - When do you reject the null on a two tailed test? - When do you reject the null on a one tailed test? - What are the critical values like?

Null Hypoth (Hₒ): Must contain =, ≤, or ≥. The null is what is tested. Hₐ is the claim the researcher wants to support; Hₒ is the hypothesis they must test (but hope to reject). If the researcher rejects the null, Hₐ is supported. If you want to test whether X < 3, Hₒ will be X ≥ 3 and Hₐ will be X < 3; if the null is rejected, Hₐ is supported. Ex) Will returns be less than 5% next year? - You can't test whether returns will be < 5% directly. You test whether returns will be ≥ 5%; then, if you reject the null, Hₐ is supported.
Alternative hypothesis (Hₐ): Supported if the null is rejected. Contains <, >, or ≠
Critical value: The table value the test statistic is compared against (e.g., the memorized values 1.645, 1.96, 2.58 for z); the calculated answer (t, z, F, chi-square) is the test statistic.
- Reject the null: If the test statistic is more extreme than the critical value. - Don't reject the null: Otherwise.
Two-tailed test, reject if: |test statistic| > critical value. The rejection prob is in both tails; the significance level is split (2.5% in one tail & 2.5% in the other). Uses the words "equal to". Null hypoth: μ = μ₀
Reject Null ← | Accept Null | → Reject Null ; Accept Null = inside the critical values ; Reject Null = outside the critical values (exceeds the left or right tail)
One-tailed test, reject if: The test statistic exceeds the critical value in the hypothesized direction. The rejection prob is in one tail. Null hypoth: μ ≤ 0 ; Accept Null | → Reject Null, or Reject Null ← | Accept Null
- One-tailed tests use the words "less than or equal to" (or "greater than or equal to")
- One-tailed test w/ a 5% significance level: use the 90% confidence interval critical value = Mean +/- (1.645)(SD). (Use 1.645, not 1.96, bc all 5% goes in one tail)
Use a t test or z test when: Hypothesizing about a single mean & the sample is ≥ 30, or < 30 if the distribution is (approximately) normal
- Z test & T test = [(x̄ - μ₀) / (σ/√n)] ; μ₀ = null hypothesis mean ; σ = sample SD for the t test, population SD for the z test

What is the percentile position formula? Ex) What is the fourth quintile based on sample observations of 24, 18, -31, 13, 9? What does it mean to be ranked in the third quartile and what is the 75th percentile? What is: - Interquartile range: - Range: - Mean absolute deviation (MAD): Ex) What's the data's MAD? Annualized returns: [30%, 12%, 25%, 20%, 23%]

Observation's position at a given percentile: = (n + 1)•(Percentile / 100)
1st) Rearrange the #s in order from smallest to largest: -31, 9, 13, 18, 24
2nd) Use the percentile formula: (5 + 1) x (80/100) = 4.8
3rd) Count to the 4th observation: position 4.8 falls b/w 18 (4th value) & 24 (5th value)
4th) Interpolate: 18 + 0.8(24 - 18) = 22.8
Quartiles: Quarters. The 3rd quartile is the 75th percentile (i.e., you beat 75% of the class, ranking you in the class' top 25%)
Quintiles: Fifths (divide the distribution into fifths)
Deciles: Tenths (the 3rd decile is the 30th percentile)
Percentiles: Hundredths
Interquartile range: Difference b/w the third quartile (75th percentile) & the first quartile (25th percentile)
Range: Difference b/w the highest & lowest values in the data set = (Max value − Min value)
MAD: Use the absolute value of each deviation = [|Return₁ - Mean| + |Return₂ - Mean| + ... ] / N
Ex) Annualized returns: [30%, 12%, 25%, 20%, 23%] ; (30+12+25+20+23) / 5 = 22% mean ; [|30−22|+|12−22|+|25−22|+|20−22|+|23−22|] / 5 = 4.8% MAD
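Both worked examples above in a short sketch:

# Percentile position: fourth quintile (80th percentile) of the observations
obs = sorted([24, 18, -31, 13, 9])                 # -31, 9, 13, 18, 24
pos = (len(obs) + 1) * 80 / 100                    # = 4.8
lower = obs[int(pos) - 1]                          # 4th value = 18
upper = obs[int(pos)]                              # 5th value = 24
value = lower + (pos - int(pos)) * (upper - lower) # 18 + 0.8 * (24 - 18) = 22.8
print(pos, value)

# Mean absolute deviation of the annualized returns
returns = [30, 12, 25, 20, 23]
m = sum(returns) / len(returns)                    # 22
mad = sum(abs(r - m) for r in returns) / len(returns)
print(m, mad)                                      # 22.0 4.8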

How do you calculate the principal portion of a loan payment?

Principal Calculation: The principal is the money you originally agreed to pay back (i.e., on a $25K loan w/ 8% interest, your principal is the $25K). To calculate the principal portion of a month's payment, take the monthly payment (interest + principal) and subtract the interest portion of the payment.
Ex) Your monthly car loan payment is $610.32 on a $25,000 car loan at 8% annual interest. The monthly rate is 8%/12 = 0.667% = 0.00667, so the interest portion is 0.00667 x $25,000 = $166.67 per payment. $610.32 (monthly payment) - $166.67 (interest portion) = $443.65 (principal portion for that month).
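The car-loan example as a sketch (the $610.32 payment is taken as given, just as in the card):

balance      = 25_000.00        # loan principal
annual_rate  = 0.08             # 8% stated annual rate
monthly_rate = annual_rate / 12 # ~0.00667 per month
payment      = 610.32           # fixed monthly payment (given)

interest  = balance * monthly_rate      # interest portion ~ $166.67
principal = payment - interest          # principal portion ~ $443.65
balance  -= principal                   # remaining balance after the payment
print(round(interest, 2), round(principal, 2), round(balance, 2))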

Explain: - Probability distribution: - Random variable: - Mutually exclusive events: - Exhaustive events: - Empirical probability: - A priori probability: - Subjective probability:

Probability Distribution: Specifies the probabilities of all possible outcomes of a random variable
Random Variable: A variable whose outcome is uncertain
Independent Events: P(A | B) = P(A); the occurrence of one event tells us nothing about the other, they're not related
Mutually Exclusive events: Both events can't happen at the same time, i.e., P(A and B) = 0
Exhaustive Events: Include all possible outcomes
Empirical Prob: Prob based on analyzing past data/outcomes. Data + analysis
A Priori Prob: Prob based on formal reasoning & an inspection process (not data). No data or analysis, based solely on reasoning. Ex) Inspecting a coin & reasoning the prob of each side coming up when flipped
Subjective Prob: Prob based on personal judgment or experience rather than formal analysis. Ex) A report is adjusted to reflect the analyst's "perception of changes in quality"

Explain: - Probability function: - Cumulative Distribution Function (CDF) - Discrete uniform distribution: For a discrete uniform distribution: Ex) p(x) = 0.2 for X = {1,2,3,4,5} What is: p(2), F(3), and Prob (2 ≤ X ≤ 4)? In continuous, any single variable has a probability of what and how do we show the probability? If given interval 2 to 7. What is the probability between 3 and 6?

Probability Function, p(x): The probability a random variable will take on the value x; p(x) covers all possible outcomes of the random variable. - Key properties: 0 ≤ p(x) ≤ 1 & the sum of p(x) over all possible values of X = 1. When X takes on x, p(x) = P(X = x). Ex) What is the probability of 3 if p(x) = x / 15 for X = {1,2,3,4,5}? p(3) = 3/15 = 20%
Probability Density Function, f(x): For a continuous variable, any single value has a prob of 0, so probabilities are evaluated over a range/interval = (interval requested) / (interval given). Ex) Given the interval 2 to 7 & asked for the prob b/w 3 and 6: (6 - 3) / (7 - 2) = 60%
Cumulative Distribution Function, F(x): Prob a random variable is ≤ a given value x: F(x) = P(X ≤ x). Ex) For the probability function p(x) = x / 15 for X = {1,2,3,4,5}, calculate F(3): add all values ≤ 3: F(3) = 1/15 + 2/15 + 3/15 = 6/15 = 40%
Discrete uniform distribution: A finite # of countable possible outcomes w/ equal probabilities. For p(x) = 0.2 for X = {1,2,3,4,5}: p(2) = probability of 2 = 20% (bc it's discrete uniform & all outcomes are equally likely, 100% / 5 = 20% each) ; F(3) = probability of 3 or less = Prob(1, 2, or 3) = 60% ; Prob(2 ≤ X ≤ 4) = 60% bc X can be 2, 3, or 4
Continuous (Uniform) Distribution: The cumulative graph is a straight line from 0% to 100% over the interval; the prob of being below the lower bound is zero & the prob of any particular single outcome is zero.

Explain both steps of stratified random sampling: What does it preserve? Cluster sampling: - What is it? - Explain one stage cluster sampling: - Explain two stage cluster sampling: Explain: - Convenience sampling: - Judgement sampling: What are some sample size issues?

Probability sampling: Sampling where the probability of each population member being selected is known
Simple Random Sampling: Each item in the population has an equal prob of being selected
Systematic Sampling: Choosing every kth element
Nonprobability Sampling: Selecting sample items based on researcher judgment or on low-cost, readily available data
- Convenience Sampling: Uses readily available, low-cost data. Prone to sampling error (the data may not be random). Nonprobability sampling method. Used for preliminary investigation.
- Judgement Sampling: Selects observations based on the analyst's judgment & experience. Nonprobability sampling method.
Stratified Random Sampling: 1) Use a classification system to divide the population into subgroups; 2) select random samples from each subgroup in proportion to the subgroup's size. Preserves each subgroup's characteristics.
Cluster Sampling: Each subset (cluster) is taken to represent the overall population. Ex) Personal incomes of residents by county
One-stage cluster sampling: A random sample of clusters is selected; all data in those clusters make up the sample.
Two-stage cluster sampling: The sample is created by choosing random samples from within each selected cluster.
Sample size issues: Additional data increases costs, & including more data points from a population w/ different parameters won't improve your estimate.

Formulas & def: - Real risk-free interest rate: - Effective annual rate (EAR): - Compounding effective annual rate (EAR): - Nominal risk free rate: When is EAR higher? When should you compute EAR? Stated annual rate is also called: When are stated interest rate and the actual interest rate equal? Future value is also called: Discount rate can also be called: What is a discount rate? Explaining cash flow on a timeline

Real Rf Rate = Nominal Rf rate - Expected inflation rate
Effective Annual Rate (EAR): Already includes compounding; use it to compare different compounding frequencies. The actual return earned after adjusting for compounding within the year. Not the same as the stated annual rate unless compounding is annual.
EAR = (1 + StatedAR/n)ⁿ - 1
Given a 12% StatedAR & need the quarterly rate for a FV calculation → 12/4 = 3 for I/Y (since it's not asking for the EAR)
Given a 12% StatedAR compounded quarterly & asked for the EAR → (1 + 0.12/4)⁴ - 1
Given an 8% EAR & need the monthly rate for a FV calculation → (1 + 0.08)¹ᐟ¹² - 1. Don't use 8/12 = 0.667 for I/Y
Given an EAR & need the periodic rate → (1 + EAR)¹ᐟⁿ - 1
Given end & beginning amounts & asked for the EAR = (End / Beg)¹ᐟⁿ - 1
StatedAR (periodic) ≠ actual (EAR, EAY) rate/return unless interest is compounded annually; if it is, then StatedAR = EAR
An EAR of 9% is already an annual effective figure; to get the equivalent monthly rate, use the formula (1 + EAR)¹ᐟ¹² - 1
Maturity Risk Premium: LT bond prices are more volatile than ST. LT bonds require a maturity risk premium bc they have a longer time to maturity & more maturity risk than ST bonds.
Compounding real & inflation rates: (1 + real)(1 + expected inflation) = (1 + nominal)
Nominal Rf rate = Real RFR + Inflation premium (a risky security's required rate adds default risk, liquidity & maturity premiums)
Stated annual rate = Nominal rate = stated rate
EAR: Higher w/ more frequent compounding. EAR > stated rate unless compounding is annual (then they are equal)
Stated interest & actual interest are equal when interest is compounded annually
FV is also called: Compound value
Discount rate: Interest rate used to determine the PV of future CFs through discounted CF (DCF) analysis. Discount rate = Opportunity cost = Required rate of return = Cost of capital
End of one period = Beginning of the next period. Ex) The end of Yr 2 (t = 2) is the same as the beginning of Yr 3; beginning-of-Yr-3 CFs appear at time t = 2 on the timeline
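A sketch of the rate conversions described above (the values are the ones used in the card's examples):

# Stated annual rate of 12% compounded quarterly -> EAR
stated, m = 0.12, 4
ear = (1 + stated / m) ** m - 1
print(round(ear, 6))                            # 0.125509

# EAR of 8% -> equivalent monthly rate (don't just divide by 12)
monthly = (1 + 0.08) ** (1 / 12) - 1
print(round(monthly, 6))                        # 0.006434

# EAR from beginning and ending values over n years (hypothetical amounts)
begin, end, n = 100.0, 133.10, 3
print(round((end / begin) ** (1 / n) - 1, 4))   # 0.10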

Explain the different types of bias: - Data snooping bias: - Sample selection bias: - Survivorship bias: - Look ahead bias: - Time period bias: - Self selection bias: - Backfill bias:

Sampling error: Difference b/w a sample statistic & the true population parameter = (Sample mean - μ) ; μ = true population mean. Ex) The mean age of all CFA candidates is 28 years. A random sample of 100 candidates has a mean age of 26.5 years. The 1.5-yr difference is the sampling error
Nonprobability sampling: May lead to greater sampling error than probability sampling
Sampling distribution: The distribution of sample means around the true mean; the distribution of all possible values a statistic can take when computed from equal-sized random samples of the same population
Data snooping (mining) bias: Running repeated tests on the same data for various strategies or patterns until a pattern emerges by chance; the remedy is to test the pattern on out-of-sample data
Sample selection bias: The sample isn't really random
- Self-selection bias: Sample members choose whether to be included (e.g., funds that choose to report their results to a database); a form of sample selection bias
Survivorship bias: Sampling only surviving firms/funds
Look-ahead bias: Using information not available during the time period over which the sample was constructed. Ex) Using book value or price-to-earnings at the beginning of the year when companies don't file until late spring
Time-period bias: The relationship exists only during the sample data's time period; the results are time-specific & not reliable outside the sample period
Backfill bias: Backfilling previous years of data when producing index returns, i.e., filling in data that wasn't available/reported during that time period. Similar to survivorship bias

Explain: - Skew: - Normal symmetry: - Negatively skewed distribution: - Positively skewed distribution: - When mean is most affected by outliers: - What mean & variance measure: - Kurtosis: - Sample kurtosis: - Excess kurtosis: - Interpret kurtosis: - Leptokurtic: - Platykurtic: - Mesokurtic distribution:

Skew: Deviation from normal symmetry. Measures symmetry.
- Normal Symmetry: Symmetrical distribution; skew = 0. Mean = Mode = Median, & the probability is highest at that central value.
- Negatively Skewed: Skewed to the left (hump is on the right). Skew < 0 ; Mean < Median < Mode
- Positively Skewed: Skewed to the right (hump is on the left). Skew > 0 ; Mean > Median > Mode
The mean is the measure most affected by outliers, in both negatively & positively skewed distributions
- Mean measures: Where the distribution is centered
- Variance/SD measures: How spread out it is
Kurtosis: Shape of the distribution (not skew) relative to a normal distribution; the degree to which it is more or less peaked than a normal distribution
Sample kurtosis: Deviations raised to the 4th power = [Σ(x - x̄)⁴ / n] / (SD)⁴
Excess Kurtosis: Kurtosis in excess of a normal distribution's kurtosis (which is 3)
Interpret Kurtosis: Relative to the normal distribution's kurtosis. Excess kurtosis = Kurtosis - 3
Leptokurtic: Positive excess kurtosis (kurtosis > 3). Fat tails, more peaked. High kurtosis = high prob in the tails
Platykurtic: Negative excess kurtosis (kurtosis < 3). Thin tails, less peaked
Mesokurtic Distribution: Kurtosis of a normal distribution (i.e., the bell curve); excess kurtosis = 0

Student's T Distribution: - What is it defined by and how is it shaped? - What are the degrees of freedom? - What are the tails like and what does this mean? - How does t distribution approach normal distribution? Chi square distribution & F distribution: - What is each definition? - How is F distribution described? - How are they different? - Are they symmetrical or asymmetrical? - What are the degrees of freedom? - What happens when df increases? - How are they bounded? Explain these tests for hypothesis testing: - Chi square test: One tail or two? - F tests: Formula? - T tests: What type of tailed test and df?

Student's T Distribution: Defined by a single parameter, df = n - 1
- Symmetric (bell shaped) regardless of df, so skew = 0
- More prob in the tails (fatter tails) than the normal distribution & a greater prob of extreme outcomes
- Wider confidence intervals
- As df increases, the t distribution approaches the normal distribution
Chi Square Distribution: The sum of squared values of k independent, standard normal random variables
- Df = k ; df = n - 1
- Asymmetric, but approaches the normal (bell) shape as df gets larger
- Bounded from below by zero (can't be negative)
Chi Square Test: Tests whether a normal population's variance equals a hypothesized variance (i.e., a hypothesis test of a normally distributed population variance). Uses a single variance from one population. Two-tailed test. Df = n - 1
F Distribution: The ratio of two independent chi-square variables/distributions. Uses two variances (one from each sample).
- Described by the df in the numerator & the denominator (each has its own df bc there are 2 sample sizes); df for the numerator & denominator = n - 1 for each sample
- Asymmetric, but approaches the normal (bell) shape as df gets larger
- Bounded below by zero (can't be negative)
F Test: Tests whether two normal variances are equal to each other (equality of two normal variances) = Var A / Var B. For a two-tailed test, always put the larger variance in the numerator. Df = n₁ - 1 & n₂ - 1

What is the z statistic formula? How do you calculate the adjusted significance for each test when there are multiple tests? Explain the: - Difference in means test - Mean differences test: Another name for it? What type of test are both of these? What is the p value?

Use T test or Z test when: Hypothesizing about a single mean & the sample is ≥ 30, or < 30 w/ an (approximately) normal distribution
- Z test & T test = (x̄ - μ₀) / (σ/√n) ; T test df = n - 1 ; μ₀ = null hypothesis mean ; σ = sample SD for the t test, population SD for the z test
P-value: The smallest significance level at which the null can be rejected. For a two-tailed test, count both tails: p-value = 2 x the prob in one tail. Reject the null if the p-value is < the significance level.
Ex) A two-tailed test p-value is 0.0213. At what significance levels can & can't you reject? 0.0213 = 2.13% - Reject the null at the 5% significance level (95% confidence) & at the 3% level, bc 2.13% is < both 3% & 5%. Can't reject at 1% (99% confidence) bc 2.13% > 1%.
Ex) 5% significance level w/ p-values of 0.25, 0.04, & 0.01: reject for 0.04 & 0.01 (both < 5%); fail to reject for 0.25.
Multiple hypothesis tests: For all tests that reject Hₒ:
- Step 1) Rank the p-values (lowest to highest)
- Step 2) Calculate the adjusted significance for each test: Adj sig = α x (p-value rank) / (total # of tests) ; α = significance level (i.e., 10%, 5%, etc.) ; p-value rank = the test's rank from lowest to highest
- Step 3) If the p-value ≤ the adjusted significance, keep the rejection; if the p-value > the adjusted significance, don't reject.
Difference in Means test: Two independent samples. Calculate each sample's mean, then test the difference b/w the two means. Assumes the variances are equal & uses a pooled estimate of the variance from both samples. T test, df = n₁ + n₂ - 2.
Mean Differences (Paired comparison) test: Two dependent samples w/ the same sample sizes & time periods. Calculate the difference b/w the samples for each period, then find the avg of all the differences & use that in a t test. Df = n - 1. - Used to test whether the means of the two dependent normal variables are equal - Numerator: avg difference b/w the paired observations. Denominator: standard error of the mean difference.
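A sketch of the multiple-testing adjustment steps above, following the card's rule and reusing its example p-values (this is only an illustration of that rule, not a full treatment of multiple-comparison procedures):

alpha   = 0.05                       # significance level
pvalues = [0.25, 0.04, 0.01]         # p-values, one per test

# Step 1: rank from lowest to highest
ranked = sorted(pvalues)

# Steps 2-3: adjusted significance = alpha * rank / total tests;
# a rejection stands only if the p-value <= its adjusted significance
total = len(ranked)
for rank, p in enumerate(ranked, start=1):
    adj = alpha * rank / total
    print(p, round(adj, 4), "reject" if p <= adj else "fail to reject")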

Explain data type: - Time series data: - Cross section data: - Panel data: - Structured data: - Unstructured data: - Discrete data: - Continuous data: - Nominal data: - Ordinal data:

Time series data: Observations taken over multiple periods. Ex) EPS over 14 quarters
Cross-sectional data: A set of comparable observations for multiple companies taken in the same time period. Ex) Last quarter's EPS for 12 firms; 2019 stock returns for 3 firms
Panel data: Time series & cross-sectional data together
Structured data: Organized in a defined way, i.e., you can put it in an Excel sheet. Ex) Market data, closing prices, EPS, time series, cross-sectional & panel data, etc.
Unstructured data: Text, audio, video, etc.; can be analyzed w/ AI
Discrete Data: Countable; numerical data that can be counted
Continuous Data: An unlimited (infinite) # of possible outcomes/values; can take on any fractional value within a range
Nominal data: Categorical w/ no logical order; only the category names matter
Ordinal data: Categorical w/ a logical order; the ranking/order matters. Ex) Ranking stocks from best to worst

What is and how do you calculate: - Sample variance: When is it used? - Sample standard deviation: - Target semideviation: Also known as? - Relative dispersion: How do you calculate? - Coefficient of variation (CV): What does it measure? Do you want a high or low CV?

Variance: Avg of the squared deviations around the mean
Sample variance (s²): Dispersion estimated from a sample of the population (sample or historical data): s² = Σ(x - x̄)² / (n - 1) ; SD = √Var
Sample SD = √[Σ(x - x̄)² / (n - 1)]
Target Downside (semi)Deviation: Uses only the observations below the given target = √[Σ(Xᵢ - Target)² / (n - 1)], summing only over observations w/ Xᵢ < target
Coefficient of variation (CV): Measures risk; risk per unit of return. Low is best. The amount of dispersion relative to the mean (relative dispersion). CV = (SD of x) / x̄
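A sketch of the dispersion formulas (the returns and the 5% target are hypothetical):

from statistics import mean

returns = [8.0, -2.0, 6.0, 12.0, 4.0]      # hypothetical % returns
target  = 5.0

x_bar = mean(returns)
s2 = sum((r - x_bar) ** 2 for r in returns) / (len(returns) - 1)   # sample variance
s  = s2 ** 0.5                                                     # sample SD

# Target downside (semi)deviation: only observations BELOW the target
below = [r for r in returns if r < target]
semidev = (sum((r - target) ** 2 for r in below) / (len(returns) - 1)) ** 0.5

cv = s / x_bar                              # coefficient of variation: risk per unit of return
print(round(s2, 2), round(s, 2), round(semidev, 2), round(cv, 3))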

Explain: - Scatter plot: - Scatter plot matrix: How many variables does it measure? - Heat map: - Tree map: - Histogram: - Frequency polygons: - Bubble line chart: - Cumulative frequency (absolute or relative):

Visualize relationships w/: Heat maps and
- Scatter plots: Observations of two variables; can reveal nonlinear relationships not captured by correlation
- Scatter plot matrix: Pairwise scatter plots, e.g., three scatter plots each presenting two of three variables
Compare among categories & distributions w/ categorical data: Tree maps (value differences across categorical data/groups), heat maps (e.g., correlation b/w two variables, w/ darker shading where values or frequencies are highest) & bar charts
Visualize comparisons over time: Line charts, dual-scale line charts & bubble line charts (show data for two or more variables over time)
Visualize distributions w/ text: Word clouds
Visualize distributions w/ numerical data:
- Histogram: Shows a frequency distribution using bar height to represent the absolute frequency in each return interval. Shows the shape, center & spread of a numerical data distribution
- Frequency Polygons: Plot the midpoint of each return interval on the x-axis & the absolute frequency for that interval on the y-axis; the frequencies at the interval midpoints are joined w/ line segments
Cumulative Distribution Chart: A bar or line chart of absolute or relative frequencies; reading across, it shows the probability of occurrences ≤ a given value
- Cumulative Frequency (absolute or relative): Sum of all frequencies ≤ a given value, starting at the lowest interval & moving up. Ex) Cumulative relative frequency for the bin -1.71% < x ≤ 2.03%: include all bins ≤ 2.03%

