
Purpose of Regression Analysis
• Regression analysis is used primarily to model causality and provide prediction
 – Predicts the value of a dependent (response) variable based on the value of at least one independent (explanatory) variable
 – Explains the effect of the independent variables on the dependent variable

Types of Regression Models
[Scatter diagrams: Positive Linear Relationship, Negative Linear Relationship, Not Linear, No Relationship]

Simple Linear Regression Model
• The relationship between the variables is described by a linear function
• A change in one variable causes a change in the other variable
• A dependency of one variable on the other

Population Linear Regression
The population regression line is a straight line that describes the dependence of the average value (conditional mean) of one variable on the other:

Yᵢ = β₀ + β₁Xᵢ + εᵢ

where Yᵢ is the dependent (response) variable, Xᵢ the independent (explanatory) variable, β₀ the population Y-intercept, β₁ the population slope coefficient, εᵢ the random error, and β₀ + β₁Xᵢ the population regression line (conditional mean).

Population Linear Regression (continued)
[Graph: an observed value of Y equals the conditional mean β₀ + β₁Xᵢ plus the random error εᵢ, plotted against X]

Sample Linear Regression
The sample regression line provides an estimate of the population regression line as well as a predicted value of Y:

Yᵢ = b₀ + b₁Xᵢ + eᵢ,  Ŷᵢ = b₀ + b₁Xᵢ

where b₀ is the sample Y-intercept, b₁ the sample slope coefficient, eᵢ the residual, and Ŷᵢ the sample regression line (fitted regression line, predicted value).

Sample Linear Regression (continued)
• b₀ and b₁ are obtained by finding the values that minimize the sum of the squared residuals, Σ(Yᵢ − Ŷᵢ)² = Σeᵢ²
• b₀ provides an estimate of β₀
• b₁ provides an estimate of β₁

Sample Linear Regression (continued)
[Graph: observed values of Y plotted against X with the fitted regression line]

Interpretation of the Slope and the Intercept
• β₀ is the average value of Y when the value of X is zero
• β₁ measures the change in the average value of Y as a result of a one-unit change in X

Interpretation of the Slope and the Intercept (continued)
• b₀ is the estimated average value of Y when the value of X is zero
• b₁ is the estimated change in the average value of Y as a result of a one-unit change in X

Simple Linear Regression: Example
You want to examine the linear dependency of the annual sales of produce stores on their size in square footage. Sample data for seven stores were obtained. Find the equation of the straight line that fits the data best.

Store  Square Feet  Annual Sales ($1000)
1      1,726        3,681
2      1,542        3,395
3      2,816        6,653
4      5,555        9,543
5      1,292        3,318
6      2,208        5,563
7      1,313        3,760
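
A minimal sketch of the least-squares computation for this example, in plain Python (the variable names are mine; the data come from the table above):

```python
# Least-squares fit for the produce-store example:
# b1 = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2),  b0 = y_bar - b1 * x_bar
square_feet = [1726, 1542, 2816, 5555, 1292, 2208, 1313]
sales = [3681, 3395, 6653, 9543, 3318, 5563, 3760]   # annual sales, $1000

n = len(square_feet)
x_bar = sum(square_feet) / n
y_bar = sum(sales) / n

s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(square_feet, sales))
s_xx = sum((x - x_bar) ** 2 for x in square_feet)

b1 = s_xy / s_xx            # sample slope
b0 = y_bar - b1 * x_bar     # sample Y-intercept
print(f"Y_hat = {b0:.3f} + {b1:.6f} X")  # matches the Excel output below: 1636.415 + 1.486634 X
```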

Scatter Diagram: Example
[Excel scatter plot of annual sales vs. square feet]

Equation for the Sample Regression Line: Example
From the Excel printout:

Ŷᵢ = 1636.415 + 1.487Xᵢ

Excel Output

Regression Statistics
Multiple R         0.970557
R Square           0.941981
Adjusted R Square  0.930378
Standard Error     611.7515
Observations       7

ANOVA
            df  SS        MS        F         Significance F
Regression  1   30380456  30380456  81.17909  0.000281
Residual    5   1871200   374239.9
Total       6   32251656

              Coefficients  Standard Error  t Stat    P-value   Lower 95%  Upper 95%
Intercept     1636.415      451.4953        3.624433  0.015149  475.8109   2797.019
X Variable 1  1.486634      0.164999        9.009944  0.000281  1.06249    1.910777

Graph of the Sample Regression Line: Example
[Scatter plot with the fitted line Ŷᵢ = 1636.415 + 1.487Xᵢ]

Interpretation of Results: Example
The slope of 1.487 means that for each increase of one unit in X, we predict the average of Y to increase by an estimated 1.487 units. The model estimates that for each increase of one square foot in the size of the store, expected annual sales are predicted to increase by $1,487.

How Good Is the Regression?
• r²
• Confidence intervals
• Residual plots
• Analysis of variance
• Hypothesis (t) tests

Measure of Variation: The Sum of Squares

SST = SSR + SSE
Total sample variability = Explained variability + Unexplained variability

Measure of Variation: The Sum of Squares (continued)
• SST = total sum of squares
 – Measures the variation of the Yᵢ values around their mean Ȳ
• SSR = regression sum of squares
 – Explained variation attributable to the relationship between X and Y
• SSE = error sum of squares
 – Variation attributable to factors other than the relationship between X and Y

Measure of Variation: The Sum of Squares (continued)

SST = Σ(Yᵢ − Ȳ)²,  SSR = Σ(Ŷᵢ − Ȳ)²,  SSE = Σ(Yᵢ − Ŷᵢ)²

[Graph: the three deviations shown for one point (Xᵢ, Yᵢ) around the fitted line and Ȳ]

The Coefficient of Determination

r² = SSR / SST

Measures the proportion of variation in Y that is explained by the independent variable X in the regression model.
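
As a small sketch, r² for the produce-store example can be reproduced directly from the ANOVA sums of squares in the Excel output above:

```python
# r^2 for the produce-store example, using the ANOVA sums of squares
# from the Excel output above (all in ($1000)^2 units)
ssr = 30380456   # regression (explained) sum of squares
sse = 1871200    # error (unexplained) sum of squares
sst = ssr + sse  # total sum of squares = 32251656

r2 = ssr / sst   # equivalently: 1 - sse / sst
print(f"r^2 = {r2:.6f}")   # ≈ 0.941981, matching "R Square" in the Excel output
```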

Coefficients of Determination (r²) and Correlation (r)
[Four panels, each with a fitted line Ŷᵢ = b₀ + b₁Xᵢ: r² = 1, r = +1; r² = 1, r = −1; r² = 0.8, r = +0.9; r² = 0, r = 0]

Linear Regression Assumptions
1. Linearity
2. Normality
 – Y values are normally distributed for each X
 – Probability distribution of error is normal
3. Homoscedasticity (constant variance)
4. Independence of errors

Residual Analysis
• Purposes
 – Examine linearity
 – Evaluate violations of assumptions
• Graphical analysis of residuals
 – Plot residuals vs. Xᵢ, Ŷᵢ, and time

Residual Analysis for Linearity
[Scatter and residual plots illustrating a relationship that is not linear]

Residual Analysis for Homoscedasticity
[Scatter and standardized-residual (SR) plots under heteroscedasticity vs. homoscedasticity]

Variation of Errors around the Regression Line
• Y values are normally distributed around the regression line.
• For each X value, the "spread" or variance around the regression line is the same.
[Graph: normal error distributions f(e) at X₁ and X₂ around the sample regression line]

Residual Analysis: Excel Output for Produce Stores Example
[Excel residual output and residual plot]

Residual Analysis for Independence
Graphical approach: the residual is plotted against time to detect any autocorrelation.
[Panels: e vs. time, a cyclical pattern (not independent) vs. no particular pattern (independent)]

Inference about the Slope: t Test
• t test for a population slope
 – Is there a linear dependency of Y on X?
• Null and alternative hypotheses
 – H₀: β₁ = 0 (no linear dependency)
 – H₁: β₁ ≠ 0 (linear dependency)
• Test statistic
 – t = b₁ / s_b₁, with df = n − 2

Example: Produce Store Data for Seven Stores

Store  Square Feet  Annual Sales ($000)
1      1,726        3,681
2      1,542        3,395
3      2,816        6,653
4      5,555        9,543
5      1,292        3,318
6      2,208        5,563
7      1,313        3,760

Estimated regression equation: Ŷᵢ = 1636.415 + 1.487Xᵢ. The slope of this model is 1.487. Is the square footage of the store affecting its annual sales?

Inferences about the Slope: t Test Example
H₀: β₁ = 0, H₁: β₁ ≠ 0, α = .05, df = 7 − 2 = 5, critical values ±2.5706
Test statistic: from the Excel printout, t = 9.0099 (p = 0.000281)
Decision: reject H₀.
Conclusion: there is evidence that square footage affects annual sales.
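
A short sketch reproducing this t statistic from the coefficient and its standard error in the Excel output above:

```python
# t test for the slope in the produce-store example,
# rebuilt from the Excel output above
b1 = 1.486634          # estimated slope
s_b1 = 0.164999        # standard error of the slope
t_stat = b1 / s_b1     # ≈ 9.0099, matching the "t Stat" column (up to input rounding)
t_crit = 2.5706        # two-tailed critical value, df = n - 2 = 5, alpha = .05

print(f"t = {t_stat:.4f}; reject H0: {abs(t_stat) > t_crit}")
```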

The Multiple Regression Model
The relationship between one dependent and two or more independent variables is a linear function:

Yᵢ = β₀ + β₁X₁ᵢ + β₂X₂ᵢ + … + βₖXₖᵢ + εᵢ

where Yᵢ is the dependent (response) variable, X₁ᵢ…Xₖᵢ the independent (explanatory) variables, β₀ the population Y-intercept, β₁…βₖ the population slopes, and εᵢ the random error; in the sample model the error term is replaced by the residual eᵢ.

Population Multiple Regression Model
Bivariate model: Yᵢ = β₀ + β₁X₁ᵢ + β₂X₂ᵢ + εᵢ
[Graph: population regression plane]

Sample Multiple Regression Model
Bivariate model: Ŷᵢ = b₀ + b₁X₁ᵢ + b₂X₂ᵢ
[Graph: sample regression plane]

Simple and Multiple Regression Compared
• Coefficients in a simple regression pick up the impact of that variable plus the impacts of other variables that are correlated with it and with the dependent variable.
• Coefficients in a multiple regression net out the impacts of other variables in the equation.

Simple and Multiple Regression Compared: Example
• Two simple regressions: Ŷ = b₀ + b₁X₁ and Ŷ = b₀ + b₂X₂
• Multiple regression: Ŷ = b₀ + b₁X₁ + b₂X₂

Multiple Linear Regression Equation
Too complicated by hand! Ouch!

Interpretation of Estimated Coefficients
• Slope (bᵢ)
 – The average value of Y is estimated to change by bᵢ for each one-unit increase in Xᵢ, holding all other variables constant (ceteris paribus)
 – Example: if b₁ = −2, then fuel oil usage (Y) is expected to decrease by an estimated 2 gallons for each 1 degree increase in temperature (X₁), given the inches of insulation (X₂)
• Y-intercept (b₀)
 – The estimated average value of Y when all Xᵢ = 0

Multiple Regression Model: Example
Develop a model for estimating the heating oil used (gallons) for a single-family home in the month of January, based on average temperature (°F) and amount of insulation (inches).

Sample Multiple Regression Equation: Example
[Excel output]
For each degree increase in temperature, the estimated average amount of heating oil used decreases by 5.437 gallons, holding insulation constant. For each one-inch increase in insulation, the estimated average use of heating oil decreases by 20.012 gallons, holding temperature constant.

Confidence Interval Estimate for the Slope
Provide the 95% confidence interval for the population slope β₁ (the effect of temperature on oil consumption):

−6.169 ≤ β₁ ≤ −4.704

The estimated average consumption of oil is reduced by between 4.7 and 6.17 gallons for each 1°F increase in temperature.

Coefficient of Multiple Determination
• Proportion of total variation in Y explained by all X variables taken together
 – r² = SSR / SST
• Never decreases when a new X variable is added to the model
 – Disadvantage when comparing models

Adjusted Coefficient of Multiple Determination
• Proportion of variation in Y explained by all X variables, adjusted for the number of X variables used
 – r²_adj = 1 − (1 − r²)(n − 1)/(n − k − 1)
 – Penalizes excessive use of independent variables
 – Smaller than r²
 – Useful in comparing among models

Coefficient of Multiple Determination: Excel Output
• Adjusted r² reflects the number of explanatory variables and the sample size
• Adjusted r² is smaller than r²

Interpretation of Coefficient of Multiple Determination
• r² = 0.9656: 96.56% of the total variation in heating oil can be explained by differences in temperature and amount of insulation
• r²_adj = 0.9599: 95.99% of the total fluctuation in heating oil can be explained by differences in temperature and amount of insulation, after adjusting for the number of explanatory variables and the sample size
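
A small sketch of the adjustment formula given above. Note that the sample size n = 15 is an assumption on my part, chosen because it reproduces the reported adjusted value:

```python
# Adjusted r^2: r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """r2: coefficient of multiple determination; n: sample size; k: number of X variables."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Heating-oil example: r2 = 0.9656 with k = 2 explanatory variables.
# n = 15 is assumed here; it is the sample size consistent with the
# reported adjusted value of 0.9599.
print(f"{adjusted_r2(0.9656, 15, 2):.4f}")   # ≈ 0.9599
```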

Using the Model to Make Predictions
Predict the amount of heating oil used for a home if the average temperature is 30°F and the insulation is six inches. The predicted heating oil used is 278.97 gallons.
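
A sketch of the prediction. The slopes are quoted from the output above; the intercept (562.151) is back-solved from the reported prediction of 278.97 gallons and is therefore an inferred value, not quoted output:

```python
# Prediction from the fitted heating-oil model. Slopes are from the Excel
# output above; the intercept is back-solved from the reported prediction
# (562.151 ≈ 278.97 + 5.437*30 + 20.012*6), so treat it as inferred.
b0, b_temp, b_insul = 562.151, -5.437, -20.012

def predict_oil(temperature_f: float, insulation_in: float) -> float:
    """Predicted January heating-oil use, in gallons."""
    return b0 + b_temp * temperature_f + b_insul * insulation_in

print(f"{predict_oil(30, 6):.2f} gallons")   # ≈ 278.97
```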

Testing for Overall Significance
• Shows if there is a linear relationship between all of the X variables together and Y
• Uses the F test statistic
• Hypotheses:
 – H₀: β₁ = β₂ = … = βₖ = 0 (no linear relationship)
 – H₁: at least one βᵢ ≠ 0 (at least one independent variable affects Y)
• The null hypothesis is a very strong statement
• Almost always reject the null hypothesis
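
The F statistic is F = MSR / MSE = (SSR / k) / (SSE / (n − k − 1)). As a sketch, the value in the simple-regression Excel output above can be reproduced:

```python
# Overall F test, illustrated with the simple-regression ANOVA table above:
# F = MSR / MSE = (SSR / k) / (SSE / (n - k - 1))
ssr, sse = 30380456, 1871200   # from the Excel output
k, n = 1, 7                    # one X variable, seven stores

msr = ssr / k
mse = sse / (n - k - 1)
f_stat = msr / mse
print(f"F = {f_stat:.4f}")     # ≈ 81.179, matching "F" in the Excel output
```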

Test for Significance: Individual Variables
• Shows if there is a linear relationship between the variable Xᵢ and Y
• Uses the t test statistic
• Hypotheses:
 – H₀: βᵢ = 0 (no linear relationship)
 – H₁: βᵢ ≠ 0 (linear relationship between Xᵢ and Y)

Residual Plots
• Residuals vs. X₁
 – May need to transform the X₁ variable
• Residuals vs. X₂
 – May need to transform the X₂ variable
• Residuals vs. time
 – May have autocorrelation

Residual Plots: Example
[Residual plots: one suggesting maybe some nonlinear relationship, another with no discernible pattern]

The Quadratic Regression Model
• Relationship between one response variable and two or more explanatory variables is a quadratic polynomial function
• Useful when the scatter diagram indicates a nonlinear relationship
• Quadratic model: Yᵢ = β₀ + β₁X₁ᵢ + β₂X₁ᵢ² + εᵢ
• The second explanatory variable is the square of the first variable

Quadratic Regression Model (continued)
Quadratic models may be considered when the scatter diagram takes on one of the following shapes:
[Panels: upward-opening curves (β₂ > 0) and downward-opening curves (β₂ < 0), where β₂ is the coefficient of the quadratic term]
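
A minimal sketch of fitting a quadratic model by least squares, adding X² as a second column of the design matrix; the data here are made up purely for illustration:

```python
import numpy as np

# Fit Y = b0 + b1*X1 + b2*X1^2 + e by treating X1^2 as a second
# "explanatory variable". Hypothetical curved data for illustration.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 + 2 * x - 0.4 * x**2 + rng.normal(0, 1, x.size)

X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix: 1, X1, X1^2
b, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares coefficients
print(f"b0={b[0]:.3f}  b1={b[1]:.3f}  b2={b[2]:.3f}")  # b2 < 0: curve opens downward
```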

Dummy Variable Models
• Categorical explanatory variable (dummy variable) with two or more levels:
 – Yes or no, on or off, male or female
 – Coded as 0 or 1
• Only intercepts are different
 – Assumes equal slopes across categories
• The number of dummy variables needed is (number of levels − 1)
• Regression model has the same form

Dummy-Variable Models (with 2 Levels)
Given: Y = assessed value of house; X₁ = square footage of house; X₂ = desirability of neighborhood, coded 0 if undesirable and 1 if desirable. Same slopes.

Dummy-Variable Models (with 2 Levels) (continued)
[Graph of assessed value vs. square footage: two parallel lines, intercept b₀ for an undesirable location and b₀ + b₂ for a desirable location. Intercepts different, same slopes.]

Interpretation of the Dummy Variable Coefficient (with 2 Levels)
Example: Y = annual salary of college graduate in thousand $; X₁ = GPA; X₂ = 0 for female, 1 for male.
On average, male college graduates are making an estimated six thousand dollars more than female college graduates with the same GPA.

Dummy-Variable Models (with 3 Levels)
[Model setup: two dummy variables distinguishing Split-level, Ranch, and Condo]

Interpretation of the Dummy Variable Coefficients (with 3 Levels)
With the same footage, a Split-level will have an estimated average assessed value of 18.84 thousand dollars more than a Condo. With the same footage, a Ranch will have an estimated average assessed value of 23.53 thousand dollars more than a Condo.
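
A sketch of the dummy coding for a three-level variable, with Condo as the base category; the data values and column names are hypothetical:

```python
import numpy as np

# Coding a 3-level house-style variable with (levels - 1) = 2 dummy
# variables, using Condo as the base category. All data are hypothetical.
styles = ["condo", "ranch", "split_level", "ranch", "condo", "split_level"]
footage = np.array([1200.0, 1500.0, 1700.0, 1400.0, 1100.0, 1600.0])

d_ranch = np.array([1.0 if s == "ranch" else 0.0 for s in styles])
d_split = np.array([1.0 if s == "split_level" else 0.0 for s in styles])

# Design matrix for: value = b0 + b1*footage + b2*d_ranch + b3*d_split + e.
# b2 and b3 then estimate how much more a Ranch / Split-level is assessed
# than a Condo with the same footage (23.53 and 18.84 thousand $ on the slide).
X = np.column_stack([np.ones_like(footage), footage, d_ranch, d_split])
print(X)
```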

Dummy Variables
• Predict weekly sales in a grocery store
• Possible independent variables:
 – Price
 – Grocery chain
• Data set: Grocery.xls
• Interaction effect?

Interaction Regression Model
• Hypothesizes interaction between pairs of X variables
 – Response to one X variable varies at different levels of another X variable
• Contains two-way cross-product terms, e.g. Yᵢ = β₀ + β₁X₁ᵢ + β₂X₂ᵢ + β₃X₁ᵢX₂ᵢ + εᵢ
• Can be combined with other models, e.g. the dummy-variable model

Effect of Interaction
• Given: Yᵢ = β₀ + β₁X₁ᵢ + β₂X₂ᵢ + β₃X₁ᵢX₂ᵢ + εᵢ
• Without the interaction term, the effect of X₁ on Y is measured by β₁
• With the interaction term, the effect of X₁ on Y is measured by β₁ + β₃X₂
• The effect changes as X₂ increases

Interaction Example
Y = 1 + 2X₁ + 3X₂ + 4X₁X₂
When X₂ = 1: Y = 1 + 2X₁ + 3(1) + 4X₁(1) = 4 + 6X₁
When X₂ = 0: Y = 1 + 2X₁ + 3(0) + 4X₁(0) = 1 + 2X₁
[Graph: both lines plotted over X₁ from 0 to 1.5, with Y-intercepts 4 and 1]
The effect (slope) of X₁ on Y does depend on the X₂ value.

Interaction Regression Model Worksheet
Multiply X₁ by X₂ to get X₁X₂. Run the regression with Y, X₁, X₂, X₁X₂.
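
A sketch of that worksheet step, recovering the coefficients of the slide's example (Y = 1 + 2X₁ + 3X₂ + 4X₁X₂) from noise-free points:

```python
import numpy as np

# Build the cross-product column and fit; the points lie exactly on the
# slide's example surface, so the fit recovers its coefficients.
x1 = np.tile(np.array([0.0, 0.5, 1.0, 1.5]), 2)
x2 = np.repeat(np.array([0.0, 1.0]), 4)
y = 1 + 2 * x1 + 3 * x2 + 4 * x1 * x2

X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])  # last column: X1*X2
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b.round(3))                                   # [1. 2. 3. 4.]
print(f"slope of X1 when X2=1: {b[1] + b[3]:.1f}")  # 2 + 4 = 6, as in the plot
```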

Evaluating Presence of Interaction
• Hypothesize interaction between pairs of independent variables
• Contains two-way product terms

Using Transformations
• Requires data transformation
• Either or both independent and dependent variables may be transformed
• Can be based on theory, logic, or scatter diagrams

Inherently Linear Models
• Non-linear models that can be expressed in linear form
 – Can be estimated by least squares in linear form
• Require data transformation

Transformed Multiplicative Model (Log-Log)
Yᵢ = β₀X₁ᵢ^β₁ εᵢ  →  ln Yᵢ = ln β₀ + β₁ ln X₁ᵢ + ln εᵢ (similarly for X₂)

Square Root Transformation
Yᵢ = β₀ + β₁√X₁ᵢ + εᵢ (similarly for X₂)
[Curves: increasing for β₁ > 0, decreasing for β₁ < 0]
Transforms one of the above models to one that appears linear. Often used to overcome heteroscedasticity.

Linear-Logarithmic Transformation
Yᵢ = β₀ + β₁ ln X₁ᵢ + εᵢ (similarly for X₂)
[Curves: increasing for β₁ > 0, decreasing for β₁ < 0]
Transformed from an original multiplicative model.

Exponential Transformation (Log-Linear)
Original model: Yᵢ = e^(β₀ + β₁X₁ᵢ + β₂X₂ᵢ) εᵢ
[Curves: increasing for β₁ > 0, decreasing for β₁ < 0]
Transformed into: ln Yᵢ = β₀ + β₁X₁ᵢ + β₂X₂ᵢ + ln εᵢ
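
A sketch of these transformations on hypothetical data; each reduces the model to one that is linear in its parameters and can be fitted by ordinary least squares:

```python
import numpy as np

# Hypothetical positive data generated from a multiplicative model
x = np.linspace(1, 10, 30)
y = 2.0 * x**1.5 * np.exp(np.random.default_rng(1).normal(0, 0.05, x.size))

# Multiplicative (log-log): Y = b0 * X^b1 * e  ->  ln Y = ln b0 + b1 ln X
b1_loglog, ln_b0 = np.polyfit(np.log(x), np.log(y), 1)

# Square-root: Y = b0 + b1 * sqrt(X) + e
b1_sqrt, b0_sqrt = np.polyfit(np.sqrt(x), y, 1)

# Linear-logarithmic: Y = b0 + b1 * ln(X) + e
b1_linlog, b0_linlog = np.polyfit(np.log(x), y, 1)

# Exponential (log-linear): Y = e^(b0 + b1 X) * e  ->  ln Y = b0 + b1 X
b1_exp, b0_exp = np.polyfit(x, np.log(y), 1)

print(f"log-log slope ≈ {b1_loglog:.2f} (true exponent 1.5)")
```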

Model Building / Model Selection
• Find "the best" set of explanatory variables among all the ones given
• "Best subset" regression (only linear models)
 – Requires a lot of computation (2^N regressions)
• "Stepwise regression"
• "Common sense" methodology
 – Run a regression with all variables
 – Throw out variables that are not statistically significant
 – "Adjust" the model by including some excluded variables, one at a time
• Tradeoff: parsimony vs. fit

Association ≠ Causation!

Regression Limitations
• R² measures the association between independent and dependent variables: association ≠ causation!
• Be careful about making predictions that involve extrapolation
• Inclusion / exclusion of independent variables is subject to a type I / type II error

Multi-collinearity
• What?
 – When one independent variable is highly correlated ("collinear") with one or more other independent variables
 – Examples:
  • Square feet and square meters as independent variables to predict house price (1 sq ft is roughly 0.09 sq m)
  • "Total rooms" and bedrooms plus bathrooms for a house
• How to detect?
 – Run a regression with the "not-so-independent" independent variable (in the examples above: square feet and total rooms) as a function of all other remaining independent variables, e.g.:
  X₁ = β₀ + β₂X₂ + … + βₖXₖ
 – If the R² of this regression is > 0.8, then one suspects multicollinearity to be present
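
A sketch of this detection rule: the auxiliary-regression R² is computed for a hypothetical square-feet / square-meters pair (the function name and data are mine):

```python
import numpy as np

def aux_r2(x_target: np.ndarray, x_others: np.ndarray) -> float:
    """R^2 of regressing one independent variable on the remaining ones
    (the slide's multicollinearity check; > 0.8 is the suggested red flag).
    The related variance inflation factor is VIF = 1 / (1 - R^2)."""
    X = np.column_stack([np.ones(len(x_target)), x_others])
    b, *_ = np.linalg.lstsq(X, x_target, rcond=None)
    resid = x_target - X @ b
    centered = x_target - x_target.mean()
    return 1 - (resid @ resid) / (centered @ centered)

# Hypothetical example: square feet vs. square meters, nearly perfectly collinear
rng = np.random.default_rng(2)
sq_ft = rng.uniform(800, 3000, 40)
sq_m = 0.0929 * sq_ft + rng.normal(0, 1, 40)   # 1 sq ft ≈ 0.0929 sq m, plus tiny noise

r2 = aux_r2(sq_ft, sq_m.reshape(-1, 1))
print(f"auxiliary R^2 = {r2:.4f}")   # ≈ 1, so multicollinearity is suspected
```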

Multi-collinearity (continued)
• What effect?
 – Coefficient estimates are unreliable
 – The model can still be used for predicting values of Y
 – If possible, delete the "not-so-independent" independent variable
• When to check?
 – When one suspects that two variables measure the same thing, or when the two variables are highly correlated
 – When one suspects that one independent variable is a (linear) function of the other independent variables