One way to compare how well-specified the logarithmic and checkmark size terms are is to plot their respective augmented component-plus-residual (i.e., augmented partial residual) graphs and check for linearity.[*] I do that in Graphics B1 and B2. In each case, the solid line represents the regression line for the size term of interest and the dashed line is a smoothed Lowess[18] fit. The closer the Lowess line is to the size term’s linear regression line, the better that size term is as a predictor of district spending.

A quick look at **Graphic B1** reveals that the logarithmic size model is a poor predictor of district spending. The checkmark model tested in **Graphic B2** demonstrates a noticeably better (though not ideal) fit for the data.

**Graphic B1: Testing for Linearity of ƒ(Size) = ln(Size)
(Logarithmic Model)**

**Graphic B2: Testing for Linearity of ƒ(Size) = ln²(Size) − a·ln(Size)
(Checkmark Model)**

Further investigation lends additional support to the view that the logarithmic model is misspecified while the checkmark model is well specified. Applying the Ramsey RESET test for omitted variables to the logarithmic model (see below) produces a statistically significant result, so we reject the null hypothesis that the model has no omitted variables.

Logarithmic model

Ramsey RESET test using powers of the fitted values of curexp

Ho: model has no omitted variables

F(3, 2608) = 9.57

Prob > F = 0.0000

However, applying the RESET test to the checkmark model (see below) produces an insignificant result, so we cannot reject the null hypothesis that the model has no omitted variables; the test finds no evidence of misspecification.

Checkmark model

Ramsey RESET test using powers of the fitted values of curexp

Ho: model has no omitted variables

F(3, 2608) = 1.36

Prob > F = 0.2517

We can conduct a further specification test of the checkmark model using Stata's linktest command, which regresses the dependent variable on the model's predicted value (_hat) and the predicted value squared (_hatsq). The first of these terms should of course be significant, but the second should be significant only if the model is misspecified. The linktest output shows an insignificant coefficient on _hatsq, so we find no evidence of omitted variables.

**Graphic B3: Linktest on Checkmark Model**

To test for multicollinearity, I calculate variance inflation factors for the variables, which are as follows:

**Graphic B4: Variance Inflation Factors**

Since all the VIFs are below 10, multicollinearity is not a major concern with the checkmark model. Heteroskedasticity is also not a problem because I have used robust standard errors.

To test for the normality of the residuals, another OLS assumption, I graph the distribution of the residuals against a normal curve in Graphic B5. The upshot of that graph is that while the residuals are not perfectly normal, the deviation from normality is not enormous. What are the ramifications of this imperfection in the model? It should perhaps raise the level of uncertainty around the estimate of the size effect on district spending, but it is not sufficiently egregious to call into question the huge effect of aggregate income per pupil squared on district spending (see the section titled “Evaluating Public Choice Theory” in the body of the text for a discussion of the magnitude of that effect).

**Graphic B5: Testing for Normality of Residuals
(Checkmark Model)**

[*] This was done using Stata’s acprplot command, which is based on C. L. Mallows, “Augmented Partial Residuals,” Technometrics, vol. 28 (1986), pp. 313–319.