Regression diagnostic
In statistics, a regression diagnostic is one of a set of procedures available for regression analysis that seek to assess the validity of a model in any of several ways.[1] This assessment may be an exploration of the model's underlying statistical assumptions, an examination of the structure of the model by considering formulations that have fewer, more or different explanatory variables, or a study of subgroups of observations, looking for those that are either poorly represented by the model (outliers) or that have a relatively large effect on the regression model's predictions (influential observations).
A regression diagnostic may take the form of a graphical result, informal quantitative results or a formal statistical hypothesis test,[2] each of which provides guidance for further stages of a regression analysis.
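As a concrete illustration of these three forms, the following sketch fits an ordinary least squares model to simulated data and produces a graphical diagnostic (residuals against fitted values), an informal quantitative summary, and a formal hypothesis test. It is a minimal sketch using Python with statsmodels, scipy and matplotlib; the simulated data, the variable names and the choice of the Shapiro–Wilk test are illustrative assumptions, not part of any standard recipe.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
import matplotlib.pyplot as plt

# Simulated data with one explanatory variable (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)

X = sm.add_constant(x)           # design matrix with an intercept column
results = sm.OLS(y, X).fit()     # ordinary least squares fit
resid = results.resid

# 1. Graphical diagnostic: residuals plotted against fitted values.
plt.scatter(results.fittedvalues, resid)
plt.axhline(0, color="grey")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.savefig("residuals_vs_fitted.png")

# 2. Informal quantitative diagnostics: summary measures of fit.
print("R-squared:", results.rsquared)
print("Residual standard error:", np.sqrt(results.mse_resid))

# 3. Formal hypothesis test: Shapiro-Wilk test of residual normality.
shapiro_stat, shapiro_p = stats.shapiro(resid)
print("Shapiro-Wilk p-value:", shapiro_p)
```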
Introduction
Many regression diagnostics were developed, or first proposed, in the context of linear regression and, more particularly, ordinary least squares. As a result, many formally defined diagnostics are available only in these contexts.
Assessing assumptions
- Distribution of model errors
- Homoscedasticity
- Correlation of model errors
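A minimal sketch of how these three assumptions are commonly checked from the residuals of an ordinary least squares fit, using Python with statsmodels and scipy. The simulated data and the particular tests chosen here (Shapiro–Wilk for the error distribution, Breusch–Pagan for homoscedasticity, Durbin–Watson for serial correlation of the errors) are illustrative choices among several available.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson
from scipy import stats

# Illustrative data; in practice y and X come from the problem at hand.
rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=200)

results = sm.OLS(y, X).fit()
resid = results.resid

# Distribution of model errors: Shapiro-Wilk test of normality on the
# residuals (a Q-Q plot is the usual graphical counterpart).
shapiro_stat, shapiro_p = stats.shapiro(resid)
print("Shapiro-Wilk p-value:", shapiro_p)

# Homoscedasticity: the Breusch-Pagan test regresses squared residuals on
# the explanatory variables; a small p-value suggests heteroscedastic errors.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print("Breusch-Pagan p-value:", lm_pvalue)

# Correlation of model errors: Durbin-Watson statistic; values near 2 are
# consistent with uncorrelated errors (relevant mainly for ordered data).
print("Durbin-Watson statistic:", durbin_watson(resid))
```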
Assessing model structure
- Adequacy of existing explanatory variables
  - Partial residual plot
  - Ramsey RESET test
  - F test, for use when there are replicated observations, comparing the lack-of-fit sum of squares with the pure error sum of squares, under the assumption that the model errors are homoscedastic and normally distributed.
- Adding or dropping explanatory variables
  - Partial regression plot
  - Student's t test for testing the inclusion of a single explanatory variable, or the F test for testing the inclusion of a group of variables, both under the assumption that the model errors are homoscedastic and normally distributed.
- Change of model structure between groups of observations
- Comparing model structures
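Several of the tests listed above can be framed as comparisons between nested ordinary least squares fits. The sketch below, in Python with statsmodels on simulated data with hypothetical variables x1 and x2, shows the t test for a single added variable, the F test for adding a group of variables, and a hand-rolled version of the Ramsey RESET test built from powers of the fitted values; it illustrates the idea rather than providing a reference implementation.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data with two candidate explanatory variables.
rng = np.random.default_rng(2)
x1 = rng.normal(size=150)
x2 = rng.normal(size=150)
y = 1.0 + 2.0 * x1 + rng.normal(size=150)   # x2 has no real effect

# Restricted model (x1 only) and full model (x1 and x2).
X_restricted = sm.add_constant(x1)
X_full = sm.add_constant(np.column_stack([x1, x2]))
fit_restricted = sm.OLS(y, X_restricted).fit()
fit_full = sm.OLS(y, X_full).fit()

# t test for including a single explanatory variable: read off the
# p-value of the coefficient on x2 in the full model.
print("t-test p-values (const, x1, x2):", fit_full.pvalues)

# F test for including a group of variables: compare the nested fits.
f_stat, p_value, df_diff = fit_full.compare_f_test(fit_restricted)
print("F test for adding x2: p =", p_value)

# Ramsey RESET test written out as a nested-model F test: augment the
# restricted model with powers of its fitted values and test their joint
# significance; a small p-value suggests an inadequate functional form.
fitted = fit_restricted.fittedvalues
X_reset = np.column_stack([X_restricted, fitted ** 2, fitted ** 3])
fit_reset = sm.OLS(y, X_reset).fit()
print("RESET p-value:", fit_reset.compare_f_test(fit_restricted)[1])
```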
Important groups of observations
- Outliers
- Influential observations
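A short sketch of common numerical screens for these two groups of observations, using statsmodels' influence measures on simulated data with one deliberately distorted point. The 4/n cut-off for Cook's distance is a common rule of thumb rather than a formal test, and the variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

# Illustrative data with one deliberately distorted observation.
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=60)
y = 1.0 + 0.8 * x + rng.normal(size=60)
y[0] += 8.0                      # make the first point an outlier

results = sm.OLS(y, sm.add_constant(x)).fit()
influence = OLSInfluence(results)

# Outliers: externally studentized residuals; large absolute values flag
# observations that are poorly represented by the model.
student = influence.resid_studentized_external
print("Largest |studentized residual|:", np.abs(student).max())

# Influential observations: leverage (hat values) and Cook's distance,
# which combines leverage with residual size to measure each observation's
# effect on the fitted model.
leverage = influence.hat_matrix_diag
cooks_d, cooks_p = influence.cooks_distance
print("Observations with Cook's distance > 4/n:",
      np.where(cooks_d > 4 / len(y))[0])
```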