To measure the impact of Michigan’s state economic development efforts over recent decades, we obtained a census of businesses in the United States and matched it to a dataset of our own creation. Later, we created and used a second, much smaller dataset, constructed with data from the Michigan Economic Development Corporation, to take an additional and closer look at the performance of the Michigan Business Development Program.
The dataset created for this research contains more than 7,300 records, or incentive deals, taken from state reports dating back to the 1980s. We whittled this dataset down by excluding deals from programs whose incentives averaged less than $100,000, which limited the range of program areas in which we were working. For example, the state’s “Export Program” provided an average incentive of only $3,613 across 732 deals. We judged such amounts too trivial to include in a more focused analysis of programs and their possible impact. Eliminating deals from these small-average programs left 4,217 entries in our database.
Some of these deals, 215 in all, did not have an assigned approval date. Unsure when these deals were approved, we removed them from our analysis, leaving records for 4,002 deals. We also removed deals for which the recipient’s DUNS number was not available, reducing the number of deals to be analyzed to 2,997.[*] We then attempted to match the establishments offered these deals to establishments in the NETS database and dropped 695 deals that lacked a matching DUNS number, leaving 2,302.
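These whittling steps amount to a sequence of filters and a join. The sketch below, in Python, is purely illustrative; the file names and column names (program, incentive_amount, approval_date, duns) are hypothetical, not taken from the actual data.

```python
import pandas as pd

# Load the incentive-deal records (hypothetical file and column names).
deals = pd.read_csv("michigan_incentive_deals.csv")

# Keep only deals from programs whose incentives average $100,000 or more.
program_means = deals.groupby("program")["incentive_amount"].transform("mean")
deals = deals[program_means >= 100_000]          # ~4,217 deals remain

# Drop deals without an assigned approval date.
deals = deals.dropna(subset=["approval_date"])   # ~4,002 deals remain

# Drop deals whose recipient has no DUNS number.
deals = deals.dropna(subset=["duns"])            # ~2,997 deals remain

# Keep only deals whose DUNS number matches an establishment in NETS.
nets = pd.read_csv("nets_establishments.csv")
deals = deals[deals["duns"].isin(nets["duns"])]  # ~2,302 deals remain
```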
Of the 2,302 deals that remained, 1,890 were associated with firms that had struck just a single incentive deal. We limited our initial analysis to these single-deal firms because of the complications of estimating impacts when a firm strikes multiple deals, with different incentives, in different years. We analyzed this group first and then, as a robustness check, added back the remaining 412 companies that had received more than one incentive.
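Separating the single-deal firms is a simple per-firm count. Continuing the hypothetical frame from the sketch above:

```python
# Count deals per firm (using the DUNS number as the firm key) and
# split the sample into single-deal and multiple-deal groups.
deal_counts = deals.groupby("duns")["duns"].transform("count")

single_deal = deals[deal_counts == 1]  # ~1,890 single-deal firms
multi_deal = deals[deal_counts > 1]    # deals from the ~412 multi-deal firms
```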
For a comparative analysis of the performance of incentivized and nonincentivized establishments, we created a control group using what statisticians call “propensity score matching.” That is, we matched each incentivized firm to similar but nonincentivized firms and compared the two.
Each firm offered an incentive by the state was matched to five controls. These controls were identified using variables such as a shared Standard Industrial Classification code, establishment category (branch, headquarters, etc.), subsidiary status and establishment size.
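The report does not specify the matching estimator, but one common implementation of one-to-five propensity score matching is sketched below, again with hypothetical column names (sic_code, establishment_category, is_subsidiary, employees) and a treated flag marking incentivized establishments.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical combined frame: incentivized establishments (treated = 1)
# plus the universe of candidate controls from NETS (treated = 0).
df = pd.read_csv("matching_universe.csv")

# Encode the matching variables; categorical fields become dummies.
X = pd.get_dummies(
    df[["sic_code", "establishment_category", "is_subsidiary", "employees"]],
    columns=["sic_code", "establishment_category"],
)

# Step 1: estimate each establishment's propensity to receive an incentive.
propensity = LogisticRegression(max_iter=1000).fit(X, df["treated"])
df["pscore"] = propensity.predict_proba(X)[:, 1]

# Step 2: match each treated establishment to the five untreated
# establishments with the nearest propensity scores.
treated = df[df["treated"] == 1]
controls = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=5).fit(controls[["pscore"]])
_, neighbor_idx = nn.kneighbors(treated[["pscore"]])
matched_controls = controls.iloc[neighbor_idx.ravel()]
```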
Our techniques and model identification strategy follow those used in the study “Striking a Balance: A National Assessment of Economic Development Incentives,” by Mary Donegan, T. William Lester and Nichola Lowe. One notable difference is that our analysis factors in firms that received multiple incentive deals, whereas that study captures only a firm’s first incentive deal.
We designed the first model to measure any impact that incentives may have had on employment and sales; it represents our baseline estimates. In all, we ran seven models that provide alternative specifications of that baseline. In the results section that follows, we report the findings from our preferred specification, model two, whose impact estimates fall roughly in the middle of the other models’ output.
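The seven specifications are not reproduced here. As a purely illustrative sketch of the kind of baseline estimate involved, assuming a matched sample and hypothetical variable names, an ordinary least squares regression of an employment outcome on an incentive indicator might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical matched sample of treated and control establishments,
# with an outcome such as employment growth after the deal year.
matched = pd.read_csv("matched_sample.csv")

# Illustrative baseline: the coefficient on 'treated' estimates the
# average effect of receiving an incentive on the outcome.
baseline = smf.ols(
    "emp_growth ~ treated + C(sic_code) + C(establishment_category)"
    " + is_subsidiary + employees",
    data=matched,
).fit()
print(baseline.summary())
```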
[*] A DUNS number is a unique, firm-specific identifier assigned under the Data Universal Numbering System developed by Dun & Bradstreet.