This guide is intended to help elementary school educators as well as school and district administrators develop and implement effective prevention and intervention strategies that promote positive student behavior. The guide includes concrete recommendations and indicates the quality of the evidence that supports them. Additionally, we have described some, though not all, ways in which each recommendation could be carried out. For each recommendation, we also acknowledge roadblocks to implementation that may be encountered and suggest solutions that have the potential to circumvent the roadblocks. Finally, technical details about the studies that support the recommendations are provided in Appendix D.
We, the authors, are a small group with expertise in various dimensions of this topic and in the research methods commonly used in behavior research. The evidence we considered in developing this document ranges from experimental evaluations to single-subject research studies(1) to expert analyses of behavioral intervention strategies and programs. For questions about what works best, high-quality experimental and quasi-experimental studies,(2) such as those meeting the criteria of the What Works Clearinghouse, have a privileged position. In all cases, we pay particular attention to patterns of findings that are replicated across studies.
The process for deriving the recommendations began by collecting and examining research studies that have evaluated the impacts of individual, classwide, and school-wide behavioral interventions. Research conducted in the United States in the last 20 years was reviewed by the What Works Clearinghouse (WWC) to determine whether studies were consistent with WWC standards.
Behavioral interventions almost always include multiple components. This bundling of components presents challenges when reviewing levels of evidence for each recommendation because evidence of the impact of specific intervention components on students' behavior cannot formally be attributed to one component of an intervention. Identification of key components of each intervention therefore necessarily relied, to a significant degree, on the panel's expert judgment.
After identifying the key components of individual interventions, we placed the interventions and their key components in a working matrix. The matrix helped us identify features that were common to multiple interventions and that were, therefore, logical candidates for generally successful practices.
The panel determined the level of evidence for each recommendation by considering the effects of the intervention as determined by the WWC (see Table 1), the contribution of each component to the impacts found in the evaluation, and the number of evaluations conducted on the behavioral interventions that included the component.(3)
Strong refers to consistent and generalizable evidence that an intervention strategy or program causes an improvement in behavioral outcomes.(4)
Moderate refers either to evidence from studies that allow strong causal conclusions but cannot be generalized with assurance to the population on which a recommendation is focused (perhaps because the findings have not been widely replicated), or to evidence from studies that are generalizable but have more causal ambiguity than experimental designs offer (for example, statistical models of correlational data or group comparison designs in which the equivalence of the groups at pretest is uncertain).
Low refers to expert opinion based on reasonable extrapolations from research and theory on other topics and evidence from studies that do not meet the standards for moderate or strong evidence.
It is important for the reader to remember that the level of evidence is not a judgment by the panel of how effective each of these five recommended practices would be when implemented in a classroom or school, of what prior research says about an intervention's effectiveness, or of whether the costs of implementing a practice are worth the benefits it might bestow. Instead, the levels of evidence reflect the panel's judgment of the quality of the existing research literature supporting a causal claim: that when these recommended practices have been implemented in the past, positive effects on student behavior have been observed. The ratings do not reflect judgments by the authors of the relative strength of these positive effects or the relative importance of the individual recommendations.
For the levels of evidence in Table 1, we rely on WWC evidence standards to rate the quality of evidence supporting behavioral prevention and intervention programs and practices. The WWC addresses evidence for the causal validity of programs and practices according to WWC standards. Information about these standards is available at http://ies.ed.gov/ncee/wwc/references/review_process. Each study is assessed according to these standards and placed into one of three categories: Meets Evidence Standards, Meets Evidence Standards with Reservations, or Does Not Meet Evidence Screens.
Following the recommendations and suggestions for carrying out the recommendations, Appendix D presents more information on the research evidence that supports each recommendation.
Michael Epstein (Chair)
University of Nebraska-Lincoln
University of Illinois-Chicago
North Carolina State University
University of South Florida
Research and Training Center for Children's Mental Health
Principal, Harmony Hills Elementary School
1. Single-subject studies rely on the comparison of intervention effects on a single participant or group of single participants, where outcomes of the participant are compared in nontreatment (baseline) phases and in treatment phases. Some single-subject methods estimate effects through subsequent withdrawal and reapplication of the treatment. Others estimate effects using multiple baselines of varying lengths for different subjects (see Horner et al. 2005).
2. Experimental studies, often called randomized controlled trials, estimate the effects of interventions by comparing outcomes of participants who are randomly assigned to experimental and one or more comparison groups (Schwartz, Flamant, and Lellouch 1980). Random assignment rules out systematic pre-existing differences between groups as a reason for different outcomes, so the intervention becomes the probable cause of those differences. Quasi-experimental studies, such as studies that match intervention participants with individuals who are similar on a range of characteristics, also are used to estimate the effects of interventions. However, because quasi-experimental approaches cannot rule out pre-existing differences between participants and the group created by matching as reasons for different outcomes, they are considered less valid approaches for estimating intervention effects.
3. A number of specific classwide and school-wide interventions are cited in this guide as examples of programs that include both components that align with the panel's recommendations of effective strategies to reduce student behavior problems and rigorous research methods in the study of program effectiveness. Other programs with similar components may be available. The panel recommends that readers consult the WWC website regularly for more information about interventions and corresponding levels of evidence (http://ies.ed.gov/ncee/wwc/reports/).
4. Following the WWC guidelines, we consider a positive, statistically significant effect or an effect size greater than 0.25 as an indicator of positive effects.
5. At the time this practice guide was developed, the WWC did not have standards for assessing the validity of single-subject studies (although a panel was being convened to develop evidence standards for single-subject studies). To ensure that the single-subject studies cited in this report met basic criteria for supporting causal statements, a special review process was established for these studies. A review protocol was prepared to assess the design of each study. This protocol was reviewed by the chair of the panel developing evidence standards for single-subject studies. Five WWC reviewers with backgrounds in single-subject research methodology received training on this protocol and then applied the protocol to the relevant single-subject studies. Reviewers were directed to identify issues that could compromise the validity of the study, and these issues were examined by a second reviewer. Only studies that reviewers deemed valid are referenced in this practice guide.
6. Studies that were eliminated included those with major design flaws that seriously undermined the technical adequacy of the research, such as comparison studies that did not establish equivalent groups at baseline. In addition, only studies conducted in the United States in the last 20 years that examined effects on student behavioral outcomes were included in the review.
Publication posted to Education World 07/06/2009
Source: U.S. Department of Education; last accessed on 07/06/2009 at