

  • Number of slides: 30

Introduction to Linear and Logistic Regression

• Basic Ideas
• Linear Transformation
• Finding the Regression Line
• Minimize sum of the quadratic residuals
• Curve Fitting
• Logistic Regression
• Odds and Probability

Basic Ideas

• Jargon:
  • IV = X = Predictor (pl. predictors)
  • DV = Y = Criterion (pl. criteria)
  • Regression of Y on X
• Linear Model: the relation between IV and DV is represented by a straight line (population values).
• A score on Y has 2 parts: (1) a linear function of X and (2) error.

Basic Ideas (2)

• Sample values:
  • Intercept: the value of Y where X = 0
  • Slope: the change in Y when X changes by 1 unit
• If error is removed, we have a predicted value for each person at X (the line).
• Suppose on average houses are worth about 50.00 Euro a square meter. Then the equation relating price to size would be Y' = 0 + 50X. The predicted price for a 2000 square meter house would be 100,000 Euro.
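The slide's house-price line can be checked in a few lines of code. This is a minimal sketch using the slide's numbers (50 Euro per square meter, zero intercept); the function name `predicted_price` is illustrative, not from the slides.

```python
# Minimal sketch of the slide's example line Y' = 0 + 50*X,
# assuming a price of 50 Euro per square meter (values from the slide).
# The function name `predicted_price` is made up for illustration.
def predicted_price(size_m2, intercept=0.0, slope=50.0):
    """Linear prediction without error: Y' = intercept + slope * X."""
    return intercept + slope * size_m2

print(predicted_price(2000))  # 100000.0 Euro for a 2000 square meter house
```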

Linear Transformation

• 1-to-1 mapping of variables via a line
• Permissible operations are addition and multiplication (interval data):
  • Add a constant
  • Multiply by a constant

Linear Transformation (2): Centigrade to Fahrenheit

[Plot: Degrees C on the x-axis (0 to 120), Degrees F on the y-axis (0 to 240); marked points (0 degrees C, 32 degrees F) and (100 degrees C, 212 degrees F)]

• Note the 1-to-1 map
• Intercept? Slope?
• The intercept is 32: when X (Cent) is 0, Y (Fahr) is 32.
• The slope is 1.8: when Cent goes from 0 to 100 (run), Fahr goes from 32 to 212 (rise), and 212 - 32 = 180. Then 180/100 = 1.8, rise over run, is the slope.
• Y = 32 + 1.8X, i.e. F = 32 + 1.8C.
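The temperature conversion above is a linear transformation with intercept 32 and slope 1.8, which a one-line function makes concrete:

```python
def c_to_f(c):
    """Linear transformation from Centigrade to Fahrenheit: F = 32 + 1.8*C."""
    return 32 + 1.8 * c

print(c_to_f(0))    # 32.0  (the intercept)
print(c_to_f(100))  # 212.0 (a rise of 180 over a run of 100)
```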

Standard Deviation and Variance

• The standard deviation is the square root of the variance, which is the sum of squared distances between each value and the mean, divided by the population size (finite population).
• Example: 1, 2, 15. The mean is 6, so the squared deviations are 25, 16, and 81, and the standard deviation is the square root of 122/3, approximately 6.38.
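The slide's example can be reproduced directly from the population formula above:

```python
import math

def population_sd(values):
    """Population standard deviation: sqrt(sum of squared deviations / N)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return math.sqrt(variance)

data = [1, 2, 15]  # the slide's example; mean = 6
print(round(population_sd(data), 2))  # 6.38
```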

Correlation Analysis

• Correlation coefficient r(X, Y), also called Pearson's product-moment coefficient.
• If r(X, Y) > 0, X and Y are positively correlated (X's values increase as Y's do). The higher the value, the stronger the correlation.
• r(X, Y) = 0: uncorrelated (independent variables have r = 0, but r = 0 does not by itself imply independence); r(X, Y) < 0: negatively correlated.
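A direct implementation of the product-moment coefficient, shown on two tiny synthetic data sets (one perfectly positive, one perfectly negative):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)) * \
          math.sqrt(sum((y - my) ** 2 for y in ys))
    return num / den

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0  (perfect positive)
print(round(pearson_r([1, 2, 3], [3, 2, 1]), 6))        # -1.0 (perfect negative)
```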

Regression of Weight on Height

Ht   Wt
61   105
62   120
63   120
65   160
65   120
68   145
69   175
70   160
72   185
75   210

N = 10; mean(Ht) = 67, mean(Wt) = 150; SD(Ht) = 4.57, SD(Wt) = 33.99.
Correlation: r = .94. Regression equation: Y' = -316.86 + 6.97X.
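The slide's coefficients can be recomputed from the raw data with the usual least-squares formulas:

```python
# Recompute the slide's regression of weight on height from the raw data.
ht = [61, 62, 63, 65, 65, 68, 69, 70, 72, 75]
wt = [105, 120, 120, 160, 120, 145, 175, 160, 185, 210]

n = len(ht)
mean_x, mean_y = sum(ht) / n, sum(wt) / n
# Least-squares slope: b = sum((x - mx)(y - my)) / sum((x - mx)^2)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(ht, wt)) \
    / sum((x - mean_x) ** 2 for x in ht)
a = mean_y - b * mean_x  # intercept: a = mean(Y) - b * mean(X)

print(round(b, 2), round(a, 2))  # 6.97 -316.86, matching Y' = -316.86 + 6.97X
```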

Predicted Values & Residuals

Numbers for the linear part and the error:
• Y' is called the predicted value
• Y - Y' is the residual (RS)
• The residual is the error
• The mean of Y' and the mean of Y are the same
• The variance of Y is equal to the variance of Y' plus the variance of RS

N     Ht    Wt       Y'       RS
1     61    105      108.19   -3.19
2     62    120      115.16    4.84
3     63    120      122.13   -2.13
4     65    160      136.06   23.94
5     65    120      136.06  -16.06
6     68    145      156.97  -11.97
7     69    175      163.94   11.06
8     70    160      170.91  -10.91
9     72    185      184.84    0.16
10    75    210      205.75    4.25
mean  67    150.00
SD    4.57  33.99    31.85    11.89
V     20.89 1155.56  1014.37  141.32
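The three claims in the bullets can be verified numerically. Note that refitting with full-precision coefficients gives component variances that differ slightly from the slide's rounded table, but the decomposition itself holds exactly:

```python
# Check the slide's claims about predicted values and residuals,
# refitting the same height/weight data by least squares.
ht = [61, 62, 63, 65, 65, 68, 69, 70, 72, 75]
wt = [105, 120, 120, 160, 120, 145, 175, 160, 185, 210]

n = len(ht)
mx, my = sum(ht) / n, sum(wt) / n
b = sum((x - mx) * (y - my) for x, y in zip(ht, wt)) / sum((x - mx) ** 2 for x in ht)
a = my - b * mx
pred = [a + b * x for x in ht]              # Y'
resid = [y - p for y, p in zip(wt, pred)]   # RS = Y - Y'

def svar(v):
    """Sample variance (divisor n - 1), matching the slide's V row."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

print(round(abs(sum(resid)), 6))           # 0.0: residuals sum to zero
print(round(sum(pred) / n, 2))             # 150.0: mean of Y' equals mean of Y
print(round(svar(pred) + svar(resid), 2))  # 1155.56: Var(Y') + Var(RS) = Var(Y)
```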

Finding the Regression Line

• Need to know the correlation, the standard deviations, and the means of X and Y.
• Slope: b = r * (SD_Y / SD_X)
• Intercept: a = mean(Y) - b * mean(X)
• Example: suppose r = .50, SD_X = .5, mean(X) = 10, SD_Y = 2, mean(Y) = 5. Then the slope is b = .50 * (2 / .5) = 2 and the intercept is a = 5 - 2 * 10 = -15.
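The worked example above can be sketched as a small helper (the function name is illustrative):

```python
def regression_line(r, sd_x, mean_x, sd_y, mean_y):
    """Slope b = r * (SD_Y / SD_X); intercept a = mean(Y) - b * mean(X)."""
    b = r * sd_y / sd_x
    a = mean_y - b * mean_x
    return a, b

a, b = regression_line(r=0.50, sd_x=0.5, mean_x=10, sd_y=2, mean_y=5)
print(b, a)  # 2.0 -15.0
```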

Line of Least Squares

• Assume a linear relation is reasonable, so the 2 variables can be represented by a line. Where should the line go?
• Place the line so the errors (residuals) are small.
• The line we calculate has a sum of errors = 0.
• It has a sum of squared errors that is as small as possible; the line provides the smallest sum of squared errors, or least squares.

Minimize sum of the quadratic residuals

• Minimize S(a, b) = Σ (Yi - a - bXi)²
• Set the partial derivatives of S with respect to a and b equal to 0.

• The coefficients a and b are found by solving the following system of linear equations (the normal equations):

  n·a + b·ΣXi = ΣYi
  a·ΣXi + b·ΣXi² = ΣXiYi
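Solving this 2x2 system numerically, using the height/weight data from the earlier slides, reproduces the same coefficients as before:

```python
import numpy as np

# Set up and solve the 2x2 normal equations for y = a + b*x,
# using the earlier height/weight data from the deck.
x = np.array([61, 62, 63, 65, 65, 68, 69, 70, 72, 75], dtype=float)
y = np.array([105, 120, 120, 160, 120, 145, 175, 160, 185, 210], dtype=float)

n = len(x)
A = np.array([[n,       x.sum()],
              [x.sum(), (x ** 2).sum()]])
rhs = np.array([y.sum(), (x * y).sum()])
a, b = np.linalg.solve(A, rhs)
print(round(float(a), 2), round(float(b), 2))  # -316.86 6.97
```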

Curve Fitting

• Linear Regression
• Exponential Curve
• Logarithmic Curve
• Power Curve

• The coefficients a and b are again found by solving the same system of linear equations

with the following substitutions (the nonlinear curves are linearized before fitting):

• Linear Regression: Y = a + bX (fit Y against X directly)
• Exponential Curve: Y = a·e^(bX), so ln Y = ln a + bX (fit ln Y against X)
• Logarithmic Curve: Y = a + b·ln X (fit Y against ln X)
• Power Curve: Y = a·X^b, so ln Y = ln a + b·ln X (fit ln Y against ln X)
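As an example of this linearization trick, here is an exponential fit on synthetic data (chosen to lie exactly on y = 2e^x, so the fit should recover a = 2, b = 1):

```python
import math

# Exponential curve y = a * exp(b*x), fitted by linearization:
# take ln(y) = ln(a) + b*x and run ordinary least squares on (x, ln y).
x = [0.0, 1.0, 2.0, 3.0]
y = [2.0 * math.exp(xi) for xi in x]  # synthetic data: exactly y = 2*e^x

ly = [math.log(v) for v in y]
n = len(x)
mx, mly = sum(x) / n, sum(ly) / n
b = sum((xi - mx) * (li - mly) for xi, li in zip(x, ly)) \
    / sum((xi - mx) ** 2 for xi in x)
a = math.exp(mly - b * mx)
print(round(a, 6), round(b, 6))  # 2.0 1.0
```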

Multiple Linear Regression

• For Y = a + bX1 + cX2, the coefficients a, b and c are found by solving the following system of linear equations (the normal equations):

  n·a + b·ΣX1 + c·ΣX2 = ΣY
  a·ΣX1 + b·ΣX1² + c·ΣX1X2 = ΣX1Y
  a·ΣX2 + b·ΣX1X2 + c·ΣX2² = ΣX2Y
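In practice this system is solved as a least-squares problem on a design matrix. A sketch with synthetic data lying exactly on the plane y = 1 + 2x1 + 3x2, so the fit should recover those coefficients:

```python
import numpy as np

# Multiple linear regression y = a + b*x1 + c*x2 via least squares.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y = 1.0 + 2.0 * x1 + 3.0 * x2  # synthetic: exactly on the plane

X = np.column_stack([np.ones_like(x1), x1, x2])  # design matrix [1, x1, x2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 6))  # [1. 2. 3.]  -> a = 1, b = 2, c = 3
```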

Polynomial Regression

• For Y = a + bX + cX², the coefficients a, b and c are found by solving the following system of linear equations (the normal equations):

  n·a + b·ΣX + c·ΣX² = ΣY
  a·ΣX + b·ΣX² + c·ΣX³ = ΣXY
  a·ΣX² + b·ΣX³ + c·ΣX⁴ = ΣX²Y
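A quadratic fit can be sketched with numpy's polynomial fitter, again on synthetic data chosen so the true coefficients are known:

```python
import numpy as np

# Polynomial (quadratic) regression y = a + b*x + c*x^2 with numpy.polyfit.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 1.0 + 2.0 * x + 3.0 * x ** 2  # synthetic: exactly y = 1 + 2x + 3x^2

c2, c1, c0 = np.polyfit(x, y, deg=2)  # coefficients, highest degree first
print(round(float(c0), 6), round(float(c1), 6), round(float(c2), 6))  # 1.0 2.0 3.0
```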

Logistic Regression

• The dependent variable is binary (a categorical variable that has two values, such as "yes" and "no") rather than continuous.
• Binary DV (Y): either 0 or 1.
• For example, we might code a successfully kicked field goal as 1 and a missed field goal as 0; or code yes as 1 and no as 0; or admitted as 1 and rejected as 0; or Cherry Garcia flavor ice cream as 1 and all other flavors as 0.

• If we code like this, then the mean of the distribution is equal to the proportion of 1s in the distribution.
• For example, if there are 100 people in the distribution and 30 of them are coded 1, then the mean of the distribution is .30, which is the proportion of 1s.
• The mean of a binary distribution so coded is denoted P, the proportion of 1s.
• The proportion of zeros is (1 - P), which is sometimes denoted Q.
• The variance of such a distribution is PQ, and the standard deviation is sqrt(PQ).
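These facts are easy to confirm on the slide's own example of 100 people with 30 coded 1:

```python
# Check the slide's facts about a 0/1-coded distribution:
# mean = P (proportion of 1s), variance = P*Q, SD = sqrt(P*Q).
data = [1] * 30 + [0] * 70   # 100 people, 30 coded 1, as in the slide

n = len(data)
P = sum(data) / n            # proportion of 1s = mean
Q = 1 - P                    # proportion of 0s
variance = sum((d - P) ** 2 for d in data) / n  # population variance

print(P, round(Q, 2))                       # 0.3 0.7
print(round(variance, 4), round(P * Q, 4))  # 0.21 0.21
```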

• Suppose we want to predict whether someone is male or female (DV, M = 1, F = 0) using height in inches (IV).
• We could plot the relation between the two variables as we customarily do in regression. The plot might look something like this:

[Scatterplot: height in inches (X) vs. sex coded 0/1 (Y)]

• None of the observations (data points) fall on the regression line.
• They are all zero or one.

• Predicted values (DV = Y) correspond to probabilities.
• If linear regression is used, the predicted values will become greater than one or less than zero if one moves far enough along the X-axis.
• Such values are theoretically inadmissible.
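Logistic regression avoids this by passing the linear predictor through the logistic (sigmoid) function, which maps any value of a + bX into (0, 1). A sketch with made-up coefficients (a and b below are purely illustrative, not fitted to any data):

```python
import math

# The logistic (sigmoid) function keeps predicted probabilities admissible,
# unlike a straight line. The coefficients a and b are made up for illustration.
def logistic(x, a=-45.0, b=0.65):
    """P(Y = 1 | x) = 1 / (1 + exp(-(a + b*x)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

for height in (60, 69, 80):
    p = logistic(height)
    print(height, round(p, 3))  # probabilities near 0, 0.5, and 1 respectively
```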

Linear vs. Logistic Regression

Odds and Probability

• Odds = P / (1 - P); conversely, P = odds / (1 + odds).
• The log-odds, ln(P / (1 - P)), is modeled as a linear function of X: this is where linear regression reappears inside logistic regression.
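The conversions between probability, odds, and log-odds (the logit) are short enough to sketch directly; the helper names are illustrative:

```python
import math

# Converting between probability, odds, and log-odds (the logit).
# In logistic regression, the logit is modeled as a linear function of X.
def odds(p):
    return p / (1 - p)

def logit(p):
    return math.log(odds(p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

p = 0.75
print(odds(p))                        # 3.0  (3-to-1 odds)
print(round(inv_logit(logit(p)), 6))  # 0.75 (round trip back to probability)
```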

• Basic Ideas
• Linear Transformation
• Finding the Regression Line
• Minimize sum of the quadratic residuals
• Curve Fitting
• Logistic Regression
• Odds and Probability