Monitoring is the process by which agencies oversee and check the contractor's performance to ensure it meets the contract's performance standards. It is the chief means of guarding against contracting problems once the contract is signed. Without monitoring, there is no way of knowing whether the contractor's work is faithful to the contract terms or whether citizens and agency officials are satisfied with the service.

A key question for government should be, "Who should monitor the contract?" Several types of employees can be assigned responsibility for checking the contractor's work. One option is to use employees from the department that once performed the service, called "line" or "operating department" monitors. These employees are likely to be very familiar with the program or service. However, having a strong familiarity with operating the program does not necessarily mean that the person will make a good monitor. Supervising a waste disposal unit, for example, is not the same as inspecting contract work, interpreting the contract, or dealing with balky contractors.


An alternative is to use centralized monitoring, meaning the monitor(s) comes from the office that arranged and awarded the contract, usually the purchasing or procurement office. These employees typically know the contract and its provisions well, but are less familiar with the specific program's operation.

There are several advantages to using centralized monitors, instead of line or operating department monitors:

  • Being more removed from the program, they are more likely to be disinterested, objective monitors and to treat contractors more consistently;

  • They can become the basis of an experienced cadre of contracting officers; and

  • The possibility of collusion between program officers and the contractor is reduced.


A comprehensive monitoring system has three main components:

  • Contractor Reports;

  • Inspections; and

  • Citizen Complaints and Surveys.


    Contractor reports are contractor-generated statements of progress. The report details work completed to date; compares work with the contract requirements and previous periods; gives expenditures to date; forecasts work for the entire contract period; and gives a narrative account of problems encountered. It also mentions any contract adjustments believed necessary. When verified by the government monitor, the report becomes the formal statement of contract compliance. Verification is more than a cursory review of the data — it normally requires independent inspections and confirmation of accuracy.


    Inspections and observations vary greatly, depending on a number of considerations such as: the function contracted; the interest of the unit in serious monitoring; and the type of monitoring conducted. Some functions such as solid waste collection may require little monitoring, since poor performance will trigger citizen complaints.[10] However, even with this service, most agencies should check performance by some formal means, such as spot-checking the number of disposal bins left unemptied. Other services, such as nursing home care, may require surprise inspections, while still others, fleet maintenance for instance, require periodic or individual inspection. In any case, monitoring must be flexible. For example, a swimming pool inspection should not inconvenience users on hot summer weekends.

    Many inspections use a rating or scorecard system that indicates, for example, the number of waste disposal spills, or the cleanliness of streets (by a visual rating scorecard). Rating scores and other formal evidence of contractor performance reduce the possibility of arbitrary inspector action.
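    As a minimal sketch of how such a scorecard might be tallied, the example below combines weighted category ratings into one overall score. The categories, weights, and 1-4 visual rating scale are illustrative assumptions, not drawn from any actual agency form.

```python
# Hypothetical inspection scorecard for a street-cleanliness contract.
# Categories, weights, and the 1-4 rating scale are illustrative
# assumptions, not taken from any real agency's inspection form.

CATEGORIES = {
    "litter": 0.4,             # weight of each category in the overall score
    "spills": 0.4,
    "sweeping_quality": 0.2,   # weights sum to 1.0
}

def score_inspection(ratings):
    """Combine per-category ratings (1 = poor ... 4 = excellent)
    into one weighted score on the same 1-4 scale."""
    return sum(CATEGORIES[c] * r for c, r in ratings.items())

# One inspector's ratings for a single street segment:
segment = {"litter": 3, "spills": 4, "sweeping_quality": 2}
print(round(score_inspection(segment), 2))  # prints 3.2
```

    Recording a number rather than a free-form impression gives the monitor the kind of formal evidence described above, and makes ratings from different inspectors comparable.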


    The third major type of monitoring activity is conducted through citizen complaints and surveys. Complaints should be formally documented and can either be taken by the contractor and forwarded to the agency, or taken by the agency and forwarded to the contractor. In the former case, the contractor has a chance to handle the matter first, although the monitor should be alerted that a complaint has been registered. Complaints can be supplemented by citizen surveys. Surveys are useful because they also measure citizen satisfaction with the service, while complaints measure only dissatisfaction.

    Some government units rely almost entirely on complaints for contract monitoring. This is done for a variety of reasons, including: 1) the unit does not know how, or does not want, to monitor; 2) the unit does not believe monitoring is important; or 3) the unit feels that complaints alone provide enough control (as in waste disposal and other high-visibility functions, where it is assumed that if citizens don't complain, things are going well). While complaints and surveys are useful, they should not, in most cases, entirely take the place of actual inspections and contractor reports.


    The Role of Performance Standards

    The best way to monitor programs is to set reasonable but explicit performance standards in the contract and inspect closely enough to ensure that the contractor meets these standards. Performance standards are specific indicators of the level of contractor performance. Without careful attention to standards, it is impossible to determine if the contractor's performance meets contract specifications. Standards also help contractors by protecting them from arbitrary monitoring.

    There are several ways to measure whether performance standards are being met. One way is by using output measures. Example: the number of street miles cleaned or tons of solid waste collected. Standards can also rely on more complex measures such as patient satisfaction with nursing care.

    Another option is to use input measures such as the number of registered nurses per shift at a nursing home. The problem with relying primarily on input standards, however, is that they fail to measure the actual performance of the contractor. Having five registered nurses on a shift may meet state standards and the agency contract. It does not, however, prove that patients are being well cared for. Therefore, performance measures based on input standards should be avoided whenever possible.

    Lastly, performance can be measured by using efficiency or effectiveness measures. Efficiency measures demonstrate how inputs relate to outputs. For instance, hours expended mowing lawns versus the actual acres mowed (see Table 5). Effectiveness measures, on the other hand, assess the impact of the service on customers. Efficiency and effectiveness measures are usually better criteria than output or input measures for judging contractor performance. However, they are also more complex, thereby making them more difficult and time-consuming to calculate.

    Table 5

    Function               Input Measure        Efficiency Measure       Effectiveness Measure
    Park Maintenance       Weekly Mowing        Cost per Acre Mowed      Citizen Satisfaction
    Library Circulation    Hours of Operation   Cost per Book Borrowed   Client Usage
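    An efficiency measure of the kind shown in Table 5 is simply a ratio of an input to an output. As a minimal sketch, using made-up invoice and acreage figures, cost per acre mowed can be computed as:

```python
# Efficiency measure: cost per acre mowed (input dollars / output acres).
# The cost and acreage figures below are made up for illustration.

def cost_per_unit(total_cost, units):
    """Efficiency ratio: input cost per unit of output delivered."""
    if units == 0:
        raise ValueError("no output recorded for the period")
    return total_cost / units

monthly_mowing_cost = 12_000.0   # contractor's invoice for the month
acres_mowed = 480.0              # acreage reported and verified by the monitor

print(cost_per_unit(monthly_mowing_cost, acres_mowed))  # prints 25.0
```

    Tracking this ratio from period to period lets the monitor spot efficiency drift even when the raw output figures alone look acceptable.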

    The best monitored contracts generally have two key elements. First, they are well written with the expected contractor performance spelled out. Without clearly drawn specifications, there is no legal authority to hold the contractor to certain standards. Figure 3 is an example of well-written specifications for a floor-finishing contract.

    Figure 3

    Sample Floor-Finishing Specifications


    Finishing: Apply a minimum of four coats of floor finish, allowing sufficient drying time between each coat.  The last coat only should be applied up to but not touching the baseboard.  All other coats should be applied to within four inches of the baseboard.  (Note: Should there be more than eight hours delay before applying finish after the floor has been cleaned or between coats, the areas must again be cleaned to remove surface dirt and scuff marks before applying finish).

    Source: John Rehfuss, Contracting Out in Government (San Francisco: Jossey-Bass, 1989).

    The second important element that usually goes hand-in-hand with well-monitored contracts is the department having a strong reason to monitor the contract closely. One example is the Los Angeles County Contract City Plan, in which the county provides a variety of services ranging from police patrol to street sign maintenance to 70 cities. In this case there are built-in incentives for diligent contract monitoring because cities are jealous of their municipal prerogatives. Since many citizens and elected officials would rather hire their own staff than continue to contract with the county, careful attention is paid to the efficiency and quality of county contract services. City officials and citizens zealously watch performance and are quick to complain verbally at any lapse.[11]

    Where there is little or no incentive to monitor the service assiduously, corruption can result. Example: In 1979, the Department of Energy entered into a number of consulting contracts. One of the contracts — a $29,000 contract for "Assessing the technology base" — was actually a contract for typing. Another, a $453,000 contract for assessing industry research, was given to a sole-source contractor who was supposed to perform the work himself, but instead had it done entirely by a subcontractor for $300,000. Only six of the 20 consulting contracts were competitively bid, and performance results were not even used in six of the contracts. The contracts were not monitored, but then again, management had never intended to examine them.[12] Most of the contracts were simply intended as a means to expend all available funds before the fiscal year ended.