Information about academic outcomes and other measures of student success is highly variable for students with disabilities, adjudicated youth, and at-risk students. Large numbers of these students are often excluded from standardized tests. Most educational programs for difficult-to-educate students are individualized and downplay objective standards of academic achievement. Available information about student achievement is rarely gathered in such a way that it is useful for comparing the effectiveness of different programs in serving a particular type of student. When student performance is reported, it is often measured and reported by the agency or school providing the education; third-party evaluations would be preferable.
Lacking reliable data about results, measures of program quality tend to center on inputs such as staff certification, student-teacher ratios, staff training, facility design, availability of different programs, and school accreditation. While these characteristics may indicate school quality, and hence, student achievement, such input measures ultimately cannot pass for concrete information about student outcomes.
Measuring student performance and comparing it across different placements is complicated by the fact that difficult-to-educate students have widely varying abilities and characteristics. How does one compare a school serving students who are chemically addicted, for example, with one that enrolls a mix of at-risk students, including those with drug and alcohol abuse problems? Are students with multiple physical and mental disabilities comparable to one another in terms of potential achievement, or are their characteristics so unique that what is learned from past experience with one student cannot be applied to any other? In rare cases, a student's disabilities are so severe and unique that perhaps only a handful of schools in the country can accommodate the student.
Nor can schools or students always be judged by the same performance measures. High-school graduation rates or the percentage of graduates living independently would be inappropriate measures for a school serving students with severe mental retardation, for example, but could be appropriate for at-risk students of average intelligence.
Some placements serve students for relatively short periods of time. Programs for adjudicated youth serve students only for the length of each student's sentence or probation. Emergency shelter programs are by their nature short-term placements. A psychiatric treatment and education program may be one crisis intervention in a string of placements for a student. Performance measures should apply to these settings, too, although the short-term nature of these placements obscures their contribution to the student's performance.
Schools that do report student results often use their own criteria for evaluating student success. Not only do the assessment criteria selected tend to be those that portray the school most favorably, but the criteria may not be comparable across different settings. For example, one school for at-risk students may measure retention rates while another measures attendance rates and a third measures graduation rates. Finally, what makes for student success is never purely quantifiable. Statistical measures cannot capture the ability of students to form healthy relationships, to become integrated into a community, or to value their own self-worth. All of these factors make the evaluation and comparison of student results extremely complicated. But they don't justify the absence of data about student outcomes.
Care must be taken to create realistic and meaningful measures of student achievement. In some cases, outcomes should be assessed not only while the student is enrolled in the program but also after the student has exited, in order to determine whether the program had a lasting influence on student success. Longitudinal evaluations are particularly important in the area of corrections education for determining recidivism rates. Further research is needed to design a useful assessment system so that future students will have the benefit of proven interventions as the basis of any decision made about their placement. Additionally, public and private providers (particularly those serving juvenile offenders) might improve their programs if made to compete with one another on the basis of performance.
To the extent that placements are publicly funded, the cost of various placements should be balanced against the results they are likely to bring to a particular student. Says Boys Town's research director Daniel Daly, "Most providers for special-needs students do not focus on outcomes. But public pressure is mounting for them to do so. People want to see results for the dollars they spend."101
Information about student results can also be used to gauge the performance of school personnel. Although public schools generally avoid linking teacher pay to student performance, a number of the private and nonpublic schools in this study based teacher compensation partly on the success of their students.102 Among those schools incorporating student performance into the staff evaluation criteria, rewards took the form of salary bonuses, promotions, and "recertification" for continued employment.