In a critique of our recently published study on the relationship between school spending and academic achievement, Bruce Baker, a professor at Rutgers University, raises technical concerns that lead him to question our empirical methodology and qualitative conclusions. The nature of his comments suggests that a select group of previous studies, which differ from ours in both empirical approach and qualitative findings, is methodologically superior and shows a positive relationship between per-pupil spending and student achievement. We address both the general and technical concerns Baker raises and explain why our research improves on the earlier papers by Papke and Roy.
Baker seeks to make the case that Michigan school districts today are neither significantly better funded nor more equitably funded than in the past, in order to bolster the possibility that earlier findings by Papke and Roy about the impact of funding on relatively low-spending districts might still apply to most Michigan school districts. First, making additional adjustments to the financial data using what appears to be a labor cost model, he contends that “Michigan per-pupil spending has risen slightly over time.” Yet the Census Bureau data cited in his analysis show an overall inflation-adjusted per-pupil spending increase of 14 percent, without any labor cost adjustment.[i]
Second, Baker looks at deviations of district-level per-pupil spending as “a way of looking at resource equity.” But his analysis lumps together all revenue Michigan districts receive: local, state and federal. A superior approach would exclude federal revenues, which fall outside the purview of the state policymakers our study aims to inform, and focus instead on local and state revenue alone. Viewed through these two sources, the evidence strongly suggests that, with the resources state policymakers command, Michigan school districts are funded more equitably than they were in the past.
In the 2007-08 school year, the first year of academic and financial data analyzed in our study, the Legislature reinstated a formula that gives the lowest-funded districts twice the per-pupil foundation allowance increase of standard-funded districts. Setting aside the tiny percentage of the state’s “hold harmless” districts, the inflation-adjusted gap in school aid funding between the highest- and lowest-funded districts is now more than four times smaller than it was in 1994.[ii]
To estimate the impact of per-pupil spending on student academic performance in Michigan, we use a regression model based on panel data from Michigan public schools covering grades kindergarten through 12 over the period 2007 to 2013. Since our data varies along two dimensions, years and schools, we are able to incorporate year and school fixed effects in the estimations, which is standard in econometric analysis using panel data. These fixed effects are important to control for unobservable variables that may lead to a spurious relationship between school spending and educational outcomes. Such variables include baseline measures of school quality and rates of student free-lunch eligibility.[iii] The exclusion of either of these fixed effects can result in spurious findings.
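To illustrate why these fixed effects matter, the sketch below simulates a panel in which unobserved school quality drives both spending and scores. The data and parameters are purely illustrative (not our actual dataset): pooled OLS overstates the spending effect because of the omitted quality variable, while a two-way fixed effects regression recovers the true coefficient.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: 50 schools observed 2007-2013 (illustrative only).
rng = np.random.default_rng(42)
n_schools, years = 50, range(2007, 2014)
school_quality = rng.normal(0, 2, n_schools)       # unobserved school quality
year_shock = {y: rng.normal(0, 1) for y in years}  # statewide shocks (e.g., test changes)

rows = []
for s in range(n_schools):
    for y in years:
        # Better schools tend to spend more: a classic omitted variable.
        spending = 8 + 0.5 * school_quality[s] + 0.1 * (y - 2007) + rng.normal(0, 1)
        score = 60 + 2.0 * spending + school_quality[s] + year_shock[y] + rng.normal(0, 0.5)
        rows.append({"school": s, "year": y, "spending": spending, "score": score})
df = pd.DataFrame(rows)

# Pooled OLS: biased upward because school quality is omitted.
pooled_coef = smf.ols("score ~ spending", data=df).fit().params["spending"]

# Two-way fixed effects absorb school quality and common year shocks.
fe_coef = smf.ols("score ~ spending + C(school) + C(year)", data=df).fit().params["spending"]

print(f"true effect = 2.0, pooled OLS = {pooled_coef:.2f}, two-way FE = {fe_coef:.2f}")
```

In this simulation only the fixed-effects specification comes close to the true coefficient of 2.0, which is exactly the protection against spurious findings described above.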
Papke uses panel data to test the relationship between aggregate spending and pass rates on the math test for 4th graders. Her Ordinary Least Squares regression models include school fixed effects but exclude year fixed effects, which puts her findings at risk of omitted variable bias. To address this problem, Papke employs an Instrumental Variable estimation strategy, using as an instrument the foundation allowance, the per-pupil funding amount that Proposal A assigns to each district in Michigan. However, Hyman, in a 2014 working paper, attempts to replicate Papke’s econometric model and finds that the foundation allowance is a weak instrument in the IV framework, which makes her IV findings less reliable. Hyman also finds no statistical relationship between education spending and math achievement for 4th graders when he adds more control variables to Papke’s main empirical model, suggesting that her model was econometrically misspecified.
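The weak-instrument critique can be illustrated with a small simulation (synthetic data; the instrument here is a stand-in, not the actual foundation allowance). With a strong instrument, the first-stage F-statistic is large and the two-stage least squares estimate is precise; with a weak one, the F-statistic falls below the conventional rule-of-thumb threshold of 10 and the estimate becomes unreliable.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
true_beta = 1.0
confounder = rng.normal(0, 1, n)  # unobserved factor driving both spending and scores

def simulate_and_estimate(first_stage_strength):
    """Return (2SLS estimate, first-stage F) for one simulated sample."""
    z = rng.normal(0, 1, n)  # instrument (e.g., a funding formula)
    x = first_stage_strength * z + confounder + rng.normal(0, 1, n)  # endogenous spending
    y = true_beta * x + confounder + rng.normal(0, 1, n)

    # First-stage F for a single instrument: F = R^2 / (1 - R^2) * (n - 2).
    r = np.corrcoef(z, x)[0, 1]
    f_stat = r**2 / (1 - r**2) * (n - 2)

    # 2SLS with one instrument reduces to the simple IV (Wald) ratio.
    zc, xc, yc = z - z.mean(), x - x.mean(), y - y.mean()
    beta_iv = (zc @ yc) / (zc @ xc)
    return beta_iv, f_stat

beta_strong, f_strong = simulate_and_estimate(1.0)
beta_weak, f_weak = simulate_and_estimate(0.05)
print(f"strong: F={f_strong:.0f}, beta={beta_strong:.2f}; weak: F={f_weak:.1f}, beta={beta_weak:.2f}")
```

In the weak case the first stage barely moves spending, so the IV ratio divides by a near-zero quantity and the estimate can land almost anywhere, which is the essence of Hyman’s objection.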
Roy evaluates the impact of Proposal A on 4th grade math and reading scores using panel data on 520 school districts over the period 1993-2001. He finds that reading and math scores both improved after Proposal A was implemented in the districts with the lowest per-pupil spending. However, most of these estimates are only weakly statistically significant, and Roy also finds that districts with relatively high per-pupil spending experienced a reduction in math and reading scores. Although his empirical model includes district fixed effects, it excludes year fixed effects, which could explain these seemingly conflicting quantitative results.
We have addressed Baker’s concerns with a thorough consideration of the literature on the statistical relationship between school finance and student achievement and an explanation of our empirical framework. Both the Papke and Roy papers suffer from omitted variable bias, which makes their findings less reliable. Hyman’s inability to replicate Papke’s results further supports this point and suggests that her econometric methodology is misspecified. Our own methodology guards against omitted variable bias by including both year and school fixed effects in the estimations.
Our study’s methodology and findings are at least as robust as those of the previous work on the relationship between school spending and student achievement in Michigan. Taken in the context of the broader academic literature we identified, these results cast doubt on the idea that continued K-12 funding increases alone will drive measurable improvements in student achievement in Michigan.
Papke, L. (2005). The Effects of Spending on Test Pass Rates: Evidence from Michigan. Journal of Public Economics, 89, pp. 821-839.
Hyman, J. (2014). Does Money Matter in the Long Run? Effects of School Spending on Educational Attainment. Working Paper.
Roy, J. (2011). Impact of School Finance Reform on Resource Equalization and Academic Performance: Evidence from Michigan. Education Finance and Policy, 6, pp. 137-167.
[i] U.S. Census Bureau data, http://www.census.gov/govs/school/. The 1993 per-pupil spending figure of $5,967 adjusts to $9,619.75 in 2013 dollars, compared to $10,948 in actual per-pupil spending for 2013. See https://www.bls.gov/data/inflation_calculator.htm for inflation adjustments.
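As a quick arithmetic check, the figures in this note imply the 14 percent real increase cited above:

```python
# Inflation-adjusted per-pupil spending comparison, using the figures in this note.
spending_1993_in_2013_dollars = 9619.75  # $5,967 in 1993, per the BLS CPI adjustment
spending_2013 = 10948.00

real_increase_pct = 100 * (spending_2013 / spending_1993_in_2013_dollars - 1)
print(f"real per-pupil spending increase: {real_increase_pct:.1f}%")  # ~14%
```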
[ii] Senate Fiscal Agency, Figures 8 through 8d; House Fiscal Agency, https://www.house.mi.gov/hfa/PDF/Briefings/School_Aid_BudgetBriefing_fy15-16.pdf, slides 25 through 28. The 1994 gap of $2,300 adjusts to $3,713 with inflation, making both the $848 gap for 2014-15 and the $778 gap for 2015-16 more than four times smaller.
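A quick check of the ratios behind the “more than four times smaller” claim, using the figures in this note:

```python
# Funding gap comparison, using the figures in this note.
gap_1994_inflation_adjusted = 3713.0  # $2,300 in 1994 dollars, adjusted
gap_2014_15 = 848.0
gap_2015_16 = 778.0

print(gap_1994_inflation_adjusted / gap_2014_15)  # > 4
print(gap_1994_inflation_adjusted / gap_2015_16)  # > 4
```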
[iii] Student racial demographics were only available for a limited set of years in the analysis. Though not included in the official regression analysis, a test of adding these effects to the smaller sample of years revealed no discrepancy in our findings about the impacts of spending on student achievement.
Permission to reprint this blog post in whole or in part is hereby granted, provided that the author (or authors) and the Mackinac Center for Public Policy are properly cited.