Multiple Regression 1
Introduction • In this chapter we extend the simple linear regression model, and allow for any number of independent variables. • We expect to build a model that fits the data better than the simple linear regression model. 2
Introduction • We shall use the computer printout to – Assess the model • How well it fits the data • Is it useful? • Are any required conditions violated? – Employ the model • Interpreting the coefficients • Predictions using the prediction equation • Estimating the expected value of the dependent variable 3
Model and Required Conditions • We allow for k independent variables to potentially be related to the dependent variable: y = β0 + β1x1 + β2x2 + … + βkxk + ε, where y is the dependent variable, x1, …, xk are the independent variables, β0, β1, …, βk are the coefficients, and ε is the random error variable. 4
Multiple Regression for k = 2, Graphical Demonstration - I • The simple linear regression model allows for one independent variable x: y = β0 + β1x + ε, and its deterministic part y = β0 + β1x is a straight line. • The multiple linear regression model allows for more than one independent variable; with k = 2, y = β0 + β1x1 + β2x2 + ε, and the straight line becomes a plane over the (x1, x2) plane. 5
Multiple Regression for k = 2, Graphical Demonstration - II • Note how the parabola y = β0 + β1x² becomes the parabolic surface y = β0 + β1x1² + β2x2. 6
Required conditions for the error variable • The error ε is normally distributed. • Its mean is equal to zero and its standard deviation σε is constant for all values of y. • The errors are independent. 7
Estimating the Coefficients and Assessing the Model • The procedure used to perform regression analysis: – Obtain the model coefficients and statistics using statistical software. – Diagnose violations of the required conditions. Try to remedy problems when identified. – Assess the model fit using statistics obtained from the sample. – If the model assessment indicates a good fit to the data, use it to interpret the coefficients and generate predictions. 8
Estimating the Coefficients and Assessing the Model, Example • Example 18.1: Where to locate a new motor inn? – La Quinta Motor Inns is planning an expansion. – Management wishes to predict which sites are likely to be profitable. – Predictors of profitability can be identified in several areas: • Competition • Market awareness • Demand generators • Demographics 9
Estimating the Coefficients and Assessing the Model, Example [Diagram: predictors of profitability] • Profitability: Operating Margin • Competition: Rooms – number of hotel/motel rooms within 3 miles of the site • Market awareness: Nearest – distance to the nearest La Quinta inn • Customers: Office space; College enrollment • Community: Income – median household income • Physical: Disttwn – distance to downtown 11
Estimating the Coefficients and Assessing the Model, Example • Data were collected from 100 randomly selected inns that belong to La Quinta (data file Xm18-01), and the following model was estimated: Margin = β0 + β1Rooms + β2Nearest + β3Office + β4College + β5Income + β6Disttwn + ε 12
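The printout on the next slide comes from Excel's regression tool; as a rough equivalent, here is a minimal sketch of how the same model could be fit in Python with statsmodels. The file name "Xm18-01.csv" and the column names are assumptions about how the Xm18-01 data might be exported, not part of the original example.

```python
# Minimal sketch: fit the La Quinta multiple regression model.
# Assumes the Xm18-01 data were exported to "Xm18-01.csv" with columns
# Margin, Number, Nearest, OfficeSpace, Enrollment, Income, Distance
# (hypothetical file/column names).
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("Xm18-01.csv")
X = sm.add_constant(data[["Number", "Nearest", "OfficeSpace",
                          "Enrollment", "Income", "Distance"]])
y = data["Margin"]

model = sm.OLS(y, X).fit()
print(model.summary())   # coefficients, standard error, R-squared, ANOVA F-test
```

The later sketches in this chapter reuse the fitted `model`, `X`, and `y` objects defined here.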
Regression Analysis, Excel Output • This is the sample regression equation (sometimes called the prediction equation): Margin = 38.14 - 0.0076 Number + 1.65 Nearest + 0.020 Office Space + 0.21 Enrollment + 0.41 Income - 0.23 Distance 13
Model Assessment • The model is assessed using three tools: – The standard error of estimate – The coefficient of determination – The F-test of the analysis of variance • The standard error of estimate participates in building the other tools. 14
Standard Error of Estimate • The standard deviation of the error is estimated by the standard error of estimate: sε = sqrt[SSE / (n - k - 1)]. • The magnitude of sε is judged by comparing it to the sample mean of y. 15
Standard Error of Estimate • From the printout, sε = 5.51. • Comparing this to the mean value of y, it seems sε is not particularly small. • Question: Can we conclude the model does not fit the data well? 16
Coefficient of Determination • The definition is R² = 1 - SSE / SS(Total). • From the printout, R² = 0.5251: 52.51% of the variation in operating margin is explained by the six independent variables; 47.49% remains unexplained. • When adjusted for degrees of freedom, Adjusted R² = 1 - [SSE/(n-k-1)] / [SS(Total)/(n-1)]. 17
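For reference, a short sketch of how these three fit measures relate to SSE and SS(Total), reusing the fitted `model` object from the earlier statsmodels sketch; if the same data are used, the printed values should roughly match the printout figures quoted above.

```python
# How s_e, R-squared, and adjusted R-squared relate to SSE and SS(Total).
# Reuses the fitted statsmodels result "model" from the earlier sketch.
import numpy as np

n, k = int(model.nobs), int(model.df_model)   # 100 observations, 6 predictors
sse = model.ssr                               # residual (error) sum of squares, SSE
ss_total = model.centered_tss                 # total sum of squares, SS(Total)

s_e = np.sqrt(sse / (n - k - 1))              # standard error of estimate
r2 = 1 - sse / ss_total                       # coefficient of determination
adj_r2 = 1 - (sse / (n - k - 1)) / (ss_total / (n - 1))

print(s_e, r2, adj_r2)                        # compare with model.rsquared, model.rsquared_adj
```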
Testing the Validity of the Model • We pose the question: Is there at least one independent variable linearly related to the dependent variable? • To answer the question we test the hypotheses H0: β1 = β2 = … = βk = 0 versus H1: at least one βi is not equal to zero. • If at least one βi is not equal to zero, the model has some validity. 18
Testing the Validity of the La Quinta Inns Regression Model • The hypotheses are tested by an ANOVA procedure (see the Excel output). The ANOVA table has the form:
Source       df          SS          MS                    F
Regression   k           SSR         MSR = SSR/k           F = MSR/MSE
Residual     n - k - 1   SSE         MSE = SSE/(n-k-1)
Total        n - 1       SS(Total)
19
Testing the Validity of the La Quinta Inns Regression Model • [Total variation in y] = SSR + SSE. • A large F results from a large SSR; then much of the variation in y is explained by the regression model, the model is useful, and thus the null hypothesis should be rejected. • Therefore, the rejection region is F > Fα,k,n-k-1. 20
Testing the Validity of the La Quinta Inns Regression Model • Fα,k,n-k-1 = F0.05,6,100-6-1 = 2.17 and F = 17.14 > 2.17; also, the p-value (Significance F) = 0.0000, so we reject the null hypothesis. • Conclusion: There is sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis. At least one of the βi is not equal to zero; thus, at least one independent variable is linearly related to y. This linear regression model is valid. 21
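A small sketch of the same F-test computed from the fitted `model` above; scipy is used only to obtain the critical value, and the degrees of freedom (6 and 93) follow the slide.

```python
# F-test of overall validity: F = MSR/MSE compared with the critical
# value F(alpha, k, n-k-1). Reuses the fitted "model" from above.
from scipy import stats

msr = model.mse_model               # SSR / k
mse = model.mse_resid               # SSE / (n - k - 1)
f_stat = msr / mse                  # same value as model.fvalue

f_crit = stats.f.ppf(0.95, dfn=6, dfd=93)   # critical value F(0.05, 6, 100-6-1)
reject_h0 = f_stat > f_crit                 # or equivalently: model.f_pvalue < 0.05
print(f_stat, f_crit, reject_h0)
```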
Interpreting the Coefficients • b0 = 38.14. This is the intercept, the value of y when all the independent variables take the value zero. Since the data ranges of the independent variables do not cover the value zero, do not interpret the intercept. • b1 = -0.0076. In this model, for each additional room within 3 miles of the La Quinta inn, the operating margin decreases on average by 0.0076% (assuming the other variables are held constant). 22
Interpreting the Coefficients • b2 = 1.65. In this model, for each additional mile that the nearest competitor is from a La Quinta inn, the operating margin increases on average by 1.65% when the other variables are held constant. • b3 = 0.020. For each additional 1000 sq-ft of office space, the operating margin increases on average by 0.02% when the other variables are held constant. • b4 = 0.21. For each additional thousand students, the operating margin increases on average by 0.21% when the other variables are held constant. 23
Interpreting the Coefficients • b5 = 0.41. For each additional $1000 increase in median household income, the operating margin increases on average by 0.41% when the other variables are held constant. • b6 = -0.23. For each additional mile to the downtown center, the operating margin decreases on average by 0.23% when the other variables are held constant. 24
Testing the Coefficients • The hypotheses for each βi are H0: βi = 0 versus H1: βi ≠ 0. • Test statistic: t = (bi - βi) / s(bi) with d.f. = n - k - 1 (see the Excel printout). 25
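A brief sketch of how the individual t-tests can be read off the fitted statsmodels result from the earlier example; statsmodels already reports t = bi / s(bi) under H0 and the two-sided p-values with n - k - 1 degrees of freedom.

```python
# Individual coefficient tests for the fitted "model" from the earlier sketch.
t_stats = model.params / model.bse    # t = b_i / s(b_i); same as model.tvalues
print(model.tvalues)
print(model.pvalues)                  # reject H0: beta_i = 0 when the p-value < 0.05
```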
Using the Linear Regression Equation • The model can be used for making predictions by – producing a prediction interval estimate for a particular value of y, for given values of the xi; – producing a confidence interval estimate for the expected value of y, for given values of the xi. • The model can also be used to learn about the relationships between the independent variables xi and the dependent variable y, by interpreting the coefficients bi. 26
La Quinta Inns, Predictions • Predict the average operating margin of an inn at a site with the following characteristics: – 3815 rooms within 3 miles, – closest competitor 0.9 miles away, – 476,000 sq-ft of office space, – 24,500 college students, – $35,000 median household income, – 11.2 miles distance to downtown center. MARGIN = 38.14 - 0.0076(3815) + 1.65(0.9) + 0.020(476) + 0.21(24.5) + 0.41(35) - 0.23(11.2) = 37.1% 27
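A sketch of how both interval estimates (a prediction interval for a particular inn and a confidence interval for the expected margin) could be produced for this site with statsmodels; the column names follow the earlier hypothetical sketch, and the units match the slide's equation (office space in thousands of sq-ft, enrollment and income in thousands).

```python
# Prediction interval and confidence interval for the site described above.
# Reuses the fitted "model"; column names follow the earlier hypothetical sketch.
import pandas as pd
import statsmodels.api as sm

site = pd.DataFrame({"Number": [3815], "Nearest": [0.9], "OfficeSpace": [476],
                     "Enrollment": [24.5], "Income": [35], "Distance": [11.2]})
site = sm.add_constant(site, has_constant="add")   # add the intercept column

pred = model.get_prediction(site)
print(pred.summary_frame(alpha=0.05))   # mean_ci_* = confidence interval, obs_ci_* = prediction interval
```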
Assessment and Interpretation: MBA Program Admission Policy • The dean of a large university wants to raise the admission standards to the popular MBA program. • She plans to develop a method that can predict an applicant’s performance in the program. • She believes a student’s success can be predicted by: – Undergraduate GPA – Graduate Management Admission Test (GMAT) score 28
MBA Program Admission Policy • A random sample of students who completed the MBA program was selected. • Develop a plan to decide which applicants to admit. 29
MBA Program Admission Policy • Solution – The model to estimate is: y = β0 + β1x1 + β2x2 + β3x3 + ε, where y = MBA GPA, x1 = undergraduate GPA [UnderGPA], x2 = GMAT score [GMAT], x3 = years of work experience [Work]. – The estimated model: MBA GPA = b0 + b1 UnderGPA + b2 GMAT + b3 Work 30
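The same estimation pattern used for the La Quinta example applies here; a minimal sketch, assuming the GPA data sit in a file named "mba.csv" with columns MBA_GPA, UnderGPA, GMAT, and Work (hypothetical file/column names).

```python
# Minimal sketch: estimate the MBA admissions model with statsmodels.
import pandas as pd
import statsmodels.api as sm

mba = pd.read_csv("mba.csv")                      # hypothetical export of the MBA data
X_mba = sm.add_constant(mba[["UnderGPA", "GMAT", "Work"]])
fit_mba = sm.OLS(mba["MBA_GPA"], X_mba).fit()
print(fit_mba.summary())                          # coefficients, R-squared, F-test, t-tests
```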
MBA Program Admission Policy – Model Diagnostics • We estimate the regression model and then check the normality of the errors (histogram of the residuals). 31
MBA Program Admission Policy – Model Diagnostics • We estimate the regression model and then check that the variance of the error variable is constant (plot of the residuals versus the predicted values). 32
MBA Program Admission Policy – Model Diagnostics 33
MBA Program Admission Policy – Model Assessment • 46.35% of the variation in MBA GPA is explained by the model. • The model is valid (p-value = 0.0000…). • GMAT score and years of work experience are linearly related to MBA GPA. • There is insufficient evidence of a linear relationship between undergraduate GPA and MBA GPA. 34
Regression Diagnostics - II • The conditions required for the model assessment to apply must be checked. – Is the error variable normally distributed? Draw a histogram of the residuals. – Is the error variance constant? Plot the residuals versus the predicted values ŷ. – Are the errors independent? Plot the residuals versus the time periods. – Can we identify outliers? – Is multicollinearity (intercorrelation) a problem? (A residual-check sketch follows below.) 35
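A compact sketch of the first three residual checks applied to the fitted La Quinta `model` from the earlier example; the interpretation notes in the comments are common rules of thumb, not part of the slides.

```python
# Residual checks for the fitted "model" from the earlier sketch.
import matplotlib.pyplot as plt
from statsmodels.stats.stattools import durbin_watson

resid = model.resid
fitted = model.fittedvalues

plt.hist(resid, bins=15)          # normality: is the histogram roughly bell-shaped?
plt.show()

plt.scatter(fitted, resid)        # constant variance: no funnel shape in residuals vs y-hat?
plt.axhline(0)
plt.show()

print(durbin_watson(resid))       # independence: a value near 2 suggests no autocorrelation
```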
Diagnostics: Multicollinearity • Example: Predicting house price (Xm18-02) – A real estate agent believes that a house selling price can be predicted using the house size, number of bedrooms, and lot size. – A random sample of 100 houses was drawn and the data recorded. – Analyze the relationship among the four variables. 36
Diagnostics: Multicollinearity • The proposed model is PRICE = β0 + β1BEDROOMS + β2H-SIZE + β3LOTSIZE + ε • The model is valid, but no single variable is significantly related to the selling price?! 37
Diagnostics: Multicollinearity • Multicollinearity is found to be a problem. • Multicollinearity causes two kinds of difficulties: – The t statistics appear to be too small. – The b coefficients cannot be interpreted as slopes (marginal effects), because the correlated independent variables do not vary independently of one another. (A VIF sketch follows below.) 38
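One common way to quantify the problem is the variance inflation factor (VIF); a sketch, assuming the Xm18-02 data were exported to a CSV with hypothetical column names Price, Bedrooms, HouseSize, and LotSize.

```python
# Variance inflation factors for the house-price predictors.
# Assumes "Xm18-02.csv" with columns Price, Bedrooms, HouseSize, LotSize
# (hypothetical file/column names).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

houses = pd.read_csv("Xm18-02.csv")
X_h = sm.add_constant(houses[["Bedrooms", "HouseSize", "LotSize"]])

for i, name in enumerate(X_h.columns[1:], start=1):   # skip the constant column
    print(name, variance_inflation_factor(X_h.values, i))
# Large VIFs (a common rule of thumb is values above about 10) signal multicollinearity.
```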
Remedying Violations of the Required Conditions • Nonnormality or heteroscedasticity can be remedied using transformations on the y variable. • The transformations can improve the linear relationship between the dependent variable and the independent variables. • Many computer software systems allow us to make the transformations easily. 39
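As an illustration of such a transformation, a minimal sketch that refits the earlier La Quinta model with log(y) as the response and then rechecks the residual plots; the log is only one possible choice and assumes y is strictly positive. It reuses `X` and `y` from the first statsmodels sketch.

```python
# One common remedy: refit with a transformed response, e.g. log(y),
# then repeat the residual diagnostics on the new fit.
import numpy as np
import statsmodels.api as sm

log_fit = sm.OLS(np.log(y), X).fit()   # y must be positive for the log transformation
print(log_fit.summary())
```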