The bulk of the academic research suggests that there is no statistically meaningful correlation between school spending and student outcomes. In cases where the correlation is positive and statistically significant, the effects are quite small — suggesting that even large increases in spending are likely to translate into only small academic effects, on average.

One of the nation’s foremost experts on this issue is Stanford University’s Eric Hanushek. In 1997, Hanushek comprehensively reviewed the academic research on the relationship between school resources and student outcomes, which are most often measured by standardized test scores but sometimes include measures such as graduation rates and future earnings. Hanushek examined 377 studies that attempted to measure the relationship between school resources (per-pupil spending, teacher-pupil ratios, teacher experience, etc.) and student performance while controlling for factors known to affect student achievement, such as family background. Of the 377 studies, 163 specifically measured the impact of per-pupil spending on student performance. Twenty-seven percent of these 163 studies found a statistically significant positive relationship. Another 7 percent found a statistically significant negative correlation, meaning the more schools spent, the worse students tended to perform on average. The remaining 66 percent, roughly two-thirds, found no statistically significant correlation between per-pupil spending and student achievement.[1]

Of these 163 studies, 83 measured per-pupil spending and student performance at the school-building level, as this analysis does. Of these 83, Hanushek reported that just 17 percent found a positive and statistically significant correlation between spending and achievement, while 7 percent found a statistically significant but negative one. The remaining 76 percent, more than three-quarters, found no statistically significant correlation between spending and achievement.[2]

A few more recent studies do deviate from this general finding, however. One study, for instance, found that the court-ordered funding increases of the 1970s and ’80s led to long-term, measurable improvements in life outcomes for students.[3] This research suggests that it may be possible to boost student achievement by spending more on certain types of schools, but it has limitations.[*] For instance, it finds statistically meaningful positive outcomes for some students only after they were exposed to a 10 percent increase in spending every year for 12 consecutive years of schooling.

There are similar findings from studies of Michigan’s drastic change to school funding just over 20 years ago. In 1994, Michigan voters passed Proposal A, which overhauled the state’s school finance system and created new per-pupil funding guarantees to school districts. Since a large number of districts were set to receive a substantial increase in funding, this created a natural experiment through which researchers could measure the effects of significant changes in per-pupil spending.[†]

Two studies of Proposal A’s impact on student outcomes suggest that increased spending by previously low-funded districts resulted in statistically significant positive gains in test scores.[‡] One of the studies found that the increased funding boosted the pass rates for both fourth- and seventh-grade math, but only by a small amount: A 10 percent increase in spending was correlated with less than a one percentage point increase in pass rates.[§] A later study of roughly the same period found a similar result, but only for previously low-spending school districts and only for fourth- and seventh-grade achievement.[4]

These two studies have limited relevance to the current debates about school funding in Michigan, however. It is unlikely that public schools would again receive funding increases as large as the ones these studies analyzed; current policy debates about school resources concern only marginal changes to funding levels. Additionally, the studies’ findings show the largest gains for relatively low-spending schools and little or no gains for relatively high-spending ones. Per-pupil funding has increased in real terms since the period these studies examined, and most Michigan schools today would count as high-spending if compared to the schools these studies analyzed.[5]

As of now, the preponderance of evidence supports Hanushek’s findings, which are probably the most relevant ones for Michigan policymakers who face decisions about school funding. Hanushek summarizes their significance as follows:

The studies, of course, do not indicate that resources never make a difference. Nor do they indicate that resources could not make a difference. Instead they demonstrate that one cannot expect to see much if any improvement simply by adding resources to the current schools.[6]


[*] For one critique of this study’s methodology and the meaningfulness of its findings, see: Jay P. Greene, “Does School Spending Matter After All?” (Jay P. Greene’s Blog, May 29, 2015), https://perma.cc/F3XS-99W4.

[†] For more information about Proposal A, see: Patrick L. Anderson, “Proposal A: An Analysis of the June 2, 1993, Statewide Ballot Question” (Mackinac Center for Public Policy, May 1, 1993), https://perma.cc/9BB5-6MFB.

[‡] In addition to these published studies, there is a working paper that finds positive long-run effects of large increases to school funding as a result of Proposal A. Specifically, the research finds that students who were exposed to a 12 percent increase in per-pupil funding each year (or about $1,000 per student) from grades four through seven had a 3.9 percentage point higher college enrollment rate and a 2.5 percentage point higher college graduation rate. Joshua Hyman, “Does Money Matter in the Long Run? Effects of School Spending on Educational Attainment,” Sept. 15, 2014, https://perma.cc/73X6-UUC7.

[§] Leslie E. Papke, “The Effects of Spending on Test Pass Rates: Evidence from Michigan,” Journal of Public Economics 89, no. 5–6 (June 2005): 821–839, https://perma.cc/F3Q2-RA92. Papke later revised these estimates and found that “[g]iven a 10% increase in four-year averaged spending, the estimated average effect on the pass rate varies from about three to six percentage points.” Leslie E. Papke and Jeffrey M. Wooldridge, “Panel Data Methods for Fractional Response Variables with an Application to Test Pass Rates,” Journal of Econometrics 145, no. 1–2 (2008): 121–133, https://perma.cc/H57Z-JW7V.