Wednesday 4 May 2016

Goodness of fit and the coefficient of determination r2 (R-squared)


Simple Linear Regression
Linear regression is one of the most widely used prediction techniques today. The term "regression" was introduced by Francis Galton. In his paper, Galton found that "although there was a tendency for tall parents to have tall children and for short parents to have short children, the average height of children born of parents of a given height tended to move or 'regress' towards the average height in the population as a whole." In other words, the heights of the children of unusually tall or unusually short parents tend to move towards the average height of the population. His friend Karl Pearson collected more than a thousand records of heights of members of family groups and confirmed Galton's theory of universal regression.

The modern interpretation of regression is, however, quite different. Linear regression is concerned with the study of two variables, X and Y, under the assumptions that Y depends on the variable X, that X and Y have a linear relationship (a straight line, with constant slope), and that the deviations of the data from that line are normally distributed.

In my discussion, I will use the term "variable" for a quantity that varies (if it didn't vary, it would be a constant). There are two types of variables: the dependent variable, which depends on one or more other variables, and the explanatory or independent variables on which it depends. These variables do not vary independently of one another in a statistical sense; they tend to vary together. Depending on the context, an independent variable is also known as a "predictor variable," "regressor," "controlled variable," "manipulated variable," "explanatory variable," "exposure variable," or "input variable." A dependent variable is also known as a "response variable," "regressand," "measured variable," "observed variable," "responding variable," "explained variable," "outcome variable," "experimental variable," or "output variable."

In a linear model, we try to "fit" the data to a straight-line (linear) function; in other words, the variable Y varies as a straight-line function of another variable X. Data points do not always follow a linear model. A measure of the absolute amount of "variability" in a variable is its variance, defined as its average squared deviation from the mean. A linear regression line has an equation of the form:
E(Y | x) = b0 + b1x


where x is the explanatory variable and Y is the dependent variable. Here E, read "the expected value of," indicates a population mean, and Y | x, read "Y given x," indicates that we are looking at the possible values of Y when x is restricted to some single value. The slope of the line is b1, and b0 is the intercept (the value of Y when x = 0).
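For example, with invented values b0 = 2 and b1 = 0.5, the population mean of Y at x = 10 would be E(Y | 10) = 2 + 0.5 × 10 = 7.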

However, the coefficients b1 and b0 are calculated from the data available, whereas in real life "prediction" means forecasting values of Y outside the range of the given X values, using the equation and the coefficient values calculated earlier. The X value is "extrapolated," assuming the relationship remains linear. The model is essentially the assumption of "linearity," at least within the range of the observed explanatory data.

There are two common ways to compute the values of the parameters that fit a function: 1) the Ordinary Least Squares (OLS) method, and 2) the Maximum Likelihood (ML) method. Nowadays, the least squares (OLS) method is widely used to find the numerical values of the parameters that fit a function to a set of data. It is one of the oldest methods in statistics; it was first published by the French mathematician Legendre in 1805. After the publication of Legendre's memoir, the famous German mathematician Carl Friedrich Gauss published another memoir in which he mentioned that he had used the method as early as 1795.

Since many statistics books explain the calculation of the parameters b0 and b1, I will not attempt to repeat it here. You can refer to any statistics textbook for the detailed calculation of the parameters (estimates).
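That said, the closed-form formulas for the simple linear regression estimates are short enough to sketch in code: b1 = Σ(xi − x̄)(Yi − Ȳ) / Σ(xi − x̄)² and b0 = Ȳ − b1x̄. Here is a minimal Python illustration; the data values are invented purely for the example.

    import numpy as np

    # Invented example data
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    x_bar, y_bar = x.mean(), y.mean()

    # Closed-form OLS estimates
    b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    b0 = y_bar - b1 * x_bar
    print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")

    # "Prediction" at a new x value; extrapolation assumes linearity still holds there
    x_new = 6.0
    print(f"predicted Y at x = {x_new}: {b0 + b1 * x_new:.3f}")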




Sometimes the symbols β̂1 and β̂0 are used to represent b1 and b0. Even though these symbols contain Greek letters, the "^" (the "hat") over them tells us that we are dealing with statistics (estimates computed from a sample), not the population parameters themselves.

A linear regression model will attempt, using the least squares formulas, to fit a straight line to any set of data, even when it is clear that the association between x and y is not linear.





The Least Squares (OLS) method is based on the following assumptions:
  1. The regression model is linear in the parameters.
  2. In the given data, the independent variables (X) are independent of one another.
  3. At every value of X, the observed points follow a roughly normal distribution centered at the fitted value of Y.
  4. Homoscedasticity, or equal variance: the variance of Y is the same at every value of X (an informal check is sketched below).
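Assumptions 3 and 4 can be checked informally by fitting the line and examining the residuals, the vertical deviations of the points from the fitted line. A minimal Python sketch with invented data; for homoscedasticity, the residual spread should look similar across the range of X.

    import numpy as np

    # Invented example data
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1])

    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    residuals = y - (b0 + b1 * x)

    # Informal homoscedasticity check: compare residual spread in the
    # lower and upper halves of the X range
    half = len(x) // 2
    print("residual std (low X): ", residuals[:half].std())
    print("residual std (high X):", residuals[half:].std())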


Hence, in the above method, the objective is not only to find the values of b1 and b0, but also to know how close those values are to their counterparts in the true, real-world population. In other words, how good is our "prediction" model? We therefore need some measure of the "reliability" or "precision" of the estimators b1 and b0. In statistics, the precision of an estimate is measured by what is called the "standard error (se)".
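For simple linear regression, the standard textbook formulas (see, e.g., Gujarati) are se(b1) = √(σ̂² / Σ(xi − x̄)²) and se(b0) = √(σ̂² Σxi² / (n Σ(xi − x̄)²)), where σ̂² = Σui² / (n − 2) estimates the error variance from the residuals ui. A minimal Python sketch, with invented data:

    import numpy as np

    # Invented example data
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
    n = len(x)

    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    residuals = y - (b0 + b1 * x)

    sigma2_hat = np.sum(residuals ** 2) / (n - 2)  # RSS / (n - 2)
    Sxx = np.sum((x - x.mean()) ** 2)

    se_b1 = np.sqrt(sigma2_hat / Sxx)
    se_b0 = np.sqrt(sigma2_hat * np.sum(x ** 2) / (n * Sxx))
    print(f"se(b1) = {se_b1:.4f}, se(b0) = {se_b0:.4f}")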



In my next section, I will explain "goodness of fit" and the coefficient of determination r2.



In the above section, I discussed how to calculate the regression coefficients and their standard errors. We now consider how well the points fit a line, also called "goodness of fit". By plotting a scatter plot of the X and Y variables, you will be able to analyze the relationship between the two paired variables. It is clear from Figure 1 that if all the observations were to lie on the regression line, we would obtain a perfect fit: there would be no difference, with the points plotting right on the line.

Figure 1

However, this is rarely true in the real world. Some points (u1, u2) may lie above the line (positive residuals) and some points (u3, u4) may lie below it (negative residuals). We want this error, called the residual error, to be as small as possible.
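In symbols, the residual for observation i is ui = Yi − Ŷi = Yi − (b0 + b1xi), the vertical distance from the point to the fitted line; the least squares method chooses b0 and b1 precisely so that the sum of squared residuals Σui² is as small as possible.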


The coefficient of determination, r2, indicates the extent to which the dependent variable is predictable from the explanatory variable. Let me explain this with the help of a Venn diagram, as shown in Figure 2. In this figure, circle X represents the variation in X and circle Y represents the variation in Y.

The overlap of the two circles (the shaded area) indicates the extent to which the variation in Y is explained by the variation in X. The greater the overlap, the greater the variation in Y explained by X. r2 is simply a measure of that overlap: when there is no overlap, r2 is zero, and when the overlap is complete, r2 is 1, since 100 percent of the variation in Y is then explained by X.
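In formula terms, r2 = 1 − RSS/TSS, where TSS = Σ(Yi − Ȳ)² is the total variation in Y and RSS = Σ(Yi − Ŷi)² is the part the fitted line leaves unexplained. A minimal Python sketch, again with invented data:

    import numpy as np

    # Invented example data
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    y_hat = b0 + b1 * x

    rss = np.sum((y - y_hat) ** 2)     # unexplained (residual) sum of squares
    tss = np.sum((y - y.mean()) ** 2)  # total sum of squares
    print(f"r^2 = {1 - rss / tss:.4f}")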

References:
  1. Galton, Francis, "Family Likeness in Stature," Proceedings of the Royal Society of London, Vol. 40, 1886, pp. 42-72.
  2. Pearson, K., and Lee, A., "On the Laws of Inheritance," Biometrika, Vol. 2, Nov. 1903, pp. 357-462.
  3. Dodge, Y., The Oxford Dictionary of Statistical Terms, OUP, 2003, ISBN 0-19-920613-9.
  4. Gujarati, D., Basic Econometrics, Tata McGraw-Hill, 2004.
  5. Plackett, R.L., "The Discovery of the Method of Least Squares," Biometrika, Vol. 59, 1972, pp. 239-251.
  6. Seal, H.L., "The Historical Development of the Gauss Linear Model," Biometrika, Vol. 54, 1967, pp. 1-23.
  7. Kennedy, P., "Ballentine: A Graphical Aid for Econometrics," Australian Economic Papers, Vol. 20, 1981, pp. 414-416.



