Research Summary

Enhancing fidelity in adventure education and adventure therapy

Researchers Argue for a Greater Focus on Fidelity

Journal of Experiential Education
2011

The authors of this paper—including a professor of social work—argue that the legitimacy of the fields of adventure education and adventure therapy is lagging behind that of other fields because of a lack of evidence-based practice. They argue that theory, not research, guides programs and that more methodologically sound quantitative research is needed not only to improve programs, but also to demonstrate their success, replicate results, and gain funding. (For another take on evidence-based practice in outdoor education, see the summary titled “Questions Raised About Evidence-Based Practice in Outdoor Education” in this section of the Research Bulletin.)

Program fidelity is a key component of evidence-based practice that the authors think is often overlooked in adventure education and therapy. According to the authors, “The term fidelity refers to the consistency and quality in which interventions and programs are being implemented.” In other words, fidelity means that program evaluators can clearly establish that programs are being conducted as planned. The authors ask, “How can adventure professionals know if they are doing something well if they do not know what it is that they are doing?”

There are two components of fidelity: adherence and competence. Adherence refers to the precision and consistency with which programs are delivered; measuring it involves evaluating the extent to which the planned sequence of activities is followed during a program. The second component is the leader’s competence. A group of leaders with varying levels of competence, for example, might all adhere to the program schedule, but the programs they deliver could still vary widely.

For researchers, measuring fidelity is necessary for increasing internal validity. The more clearly evaluators can establish that programs are being delivered as planned, with appropriate levels of competence, the authors argue, “the more confident one can be that the outcomes are the result of the adventure therapy and education program as described and not due to certain characteristics of staff or particular activities favored by specific staff that were not part of the protocols.” This kind of evidence can be critical for establishing best practices and securing funding. The authors also cite research in other fields, including psychotherapy and education, showing that program fidelity is a critical component of program success.

Enhancing fidelity in a program involves clearly defining and describing the specific aspects of a program or intervention; properly training staff not only in how to implement the specific program elements, but also in the role and importance of program fidelity; and closely supervising staff to ensure that programs are implemented as planned. The authors note that, especially in adventure education, “there consistently seems to be resistance to the notion of manualization and/or standardization, which is often due to the fear of losing flexibility to respond to unique situations as they arise.” But the researchers note that fields such as psychotherapy have balanced fidelity and flexibility with “flexible manualized treatment protocols” that allow facilitators to choose among a set of protocols as they see fit. Such an approach lets groups shift course as the weather, group dynamics, or student needs demand.
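As a loose illustration (not drawn from the paper; the condition names and activity sequences below are invented), a flexible manualized protocol can be thought of as a fixed menu of fully specified variants from which a facilitator selects. Here is a minimal Python sketch of that idea:

```python
# Hypothetical "flexible manualized" protocol set: every variant is
# fully specified in advance (preserving fidelity), while the
# facilitator chooses among variants as conditions demand (flexibility).
PROTOCOL_VARIANTS = {
    "fair_weather": ["ropes_course", "group_debrief"],
    "rain": ["indoor_initiative", "group_debrief"],
    "low_energy_group": ["trust_activity", "journaling", "group_debrief"],
}

def select_protocol(condition: str) -> list[str]:
    """Return the pre-approved activity sequence for a field condition.

    Falling back to a default keeps the choice inside the manualized
    set rather than leaving it to on-the-spot improvisation.
    """
    return PROTOCOL_VARIANTS.get(condition, PROTOCOL_VARIANTS["fair_weather"])

print(select_protocol("rain"))  # ['indoor_initiative', 'group_debrief']
```

The point of the sketch is that flexibility lives in the choice among variants, not in departures from them, so every delivered session remains something an evaluator can check against a written protocol.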

Measuring program fidelity can be as simple as using a checklist to monitor programs as they are delivered. To do this, evaluators create a list of the specific program components, clearly define each component, and then record whether each component is present as they observe the program. Competence can be evaluated through direct or taped observation and measured with a rubric that defines the leader’s specific tasks or skills and describes different levels of competence for each. (The paper includes a sample rubric.) Other, more indirect methods of measuring fidelity include self-reporting from leaders, interviews, and analyzing participant products, such as written work, presentations, or assessments.
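To make the checklist-and-rubric idea concrete, here is a minimal Python sketch (not from the paper; the component names, rubric skills, and scoring scheme are all invented for illustration) of how an evaluator might tally an adherence score from a checklist and a competence score from rubric ratings:

```python
# Hypothetical checklist of planned program components; a real list
# would come from the program's own clearly defined protocol.
PLANNED_COMPONENTS = [
    "opening_briefing",
    "safety_review",
    "group_initiative_activity",
    "debrief_circle",
]

# Hypothetical rubric: each leader skill is rated 1-4
# (1 = beginning, 4 = exemplary), echoing rubric-style levels of competence.
RUBRIC_SKILLS = ["facilitation", "safety_management", "debriefing"]

def adherence_score(observed: set[str]) -> float:
    """Proportion of planned components actually observed (0.0 to 1.0)."""
    present = sum(1 for c in PLANNED_COMPONENTS if c in observed)
    return present / len(PLANNED_COMPONENTS)

def competence_score(ratings: dict[str, int]) -> float:
    """Mean rubric rating across the defined leader skills."""
    return sum(ratings[skill] for skill in RUBRIC_SKILLS) / len(RUBRIC_SKILLS)

# Example observation of one session:
observed = {"opening_briefing", "safety_review", "debrief_circle"}
ratings = {"facilitation": 3, "safety_management": 4, "debriefing": 2}
print(f"Adherence:  {adherence_score(observed):.0%}")    # Adherence:  75%
print(f"Competence: {competence_score(ratings):.1f}/4")  # Competence: 3.0/4
```

Keeping the two scores separate mirrors the authors’ distinction: a leader can follow the planned sequence closely yet score poorly on competence, or the reverse.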

The authors conclude that “in the pursuit of documenting evidence-based best practices in order to gain credibility as a legitimate field of practice, adventure facilitators, clinicians, and evaluators need to be more intentional in their use of competence and adherence fidelity measures.” They see attention to fidelity as a key component in knowing if programs are truly effective.

The Bottom Line

When evaluating programs, it is critical for researchers to know exactly what they are evaluating. Fidelity—a measure of the consistency and quality with which programs are delivered—is not often measured in adventure education and therapy. Fidelity refers both to the degree to which leaders adhere to the program’s planned sequence of activities and to their competence in leading the program. Evaluators can measure fidelity with simple field checks to ensure that programs are delivered with consistency and quality. Establishing program fidelity can increase evaluators’ confidence that a program’s outcomes are the result of the program itself, and not of other factors, such as the ways individual leaders may stray from the program plan. This kind of rigor is essential to evidence-based practice.