Research Summary

Using evaluability assessment to improve program evaluation for the Blue-throated Macaw Environmental Education Project in Bolivia

Using the Evaluability Assessment Method to Determine Programming’s Intended Outcomes

Applied Environmental Education & Communication
2016

Program evaluation for Environmental Education (EE) programming is essential to determine whether a program's intended goals are being reached. Selecting a relevant and useful program evaluation approach can be difficult. To mitigate this issue, an initial Evaluability Assessment (EA) can help organizations undertake evaluation more successfully. EA is particularly valuable for new programs in areas with limited educational programming resources. Partnering with local stakeholders while conducting an EA is essential to identifying the most useful evaluation criteria. An EA is also designed to highlight potential programming improvements to ensure programs are well implemented and ready for evaluation. The Blue-throated Macaw's Environmental Education Program (BTMEEP) was recently developed for the Blue-throated Macaw's Conservation Program (BTMCP) in Bolivia, but due to limited resources, staff, and expertise, the program has been unable to evaluate the success of its programming. This study outlined the EA process used to increase the BTMEEP's readiness for future program evaluation, define programming goals, and identify performance indicators.

BTMCP was implemented in 2003 through the Bolivian nonprofit organization Asociacion Civil Armonia (ACA). The need for a systematic EE program within the BTMCP led to the creation of the BTMEEP in 2013. The BTMEEP hosted educational and outreach opportunities for community members in two municipalities: Trinidad and Santa Ana del Yacuma. The Blue-throated Macaw (BTM) is native to both areas, but habitat loss, the illegal pet trade, and the use of the bird's feathers in cultural activities have caused the population to decline more than 70% in the past 50 years. The goal of the BTMEEP was to educate children, youth, and adults about the declining BTM population, major threats to the species, and how community members can make a difference.

The EA method uses a six-step framework to determine agreement on the program objectives, program description, evaluation options, key evaluation questions, and an evaluation plan.
1. Step 1 determines the intended users and other key stakeholders for the program.
2. Step 2 helps clarify the overall program design.
3. Step 3 explores the program reality and determines what outcomes have been achieved thus far.
4. Step 4 assesses the plausibility of the program, determining whether the intended outputs were delivered to the expected audience and whether the intended outcomes were likely to occur.
5. Step 5 identifies changes or improvements needed for the program.
6. Step 6 creates a set of evaluation options for managers and staff, based on all of the information gathered in steps 1-5.

In step 1 of this study, a work group was formed, which included the executive director of ACA, the BTMEEP project coordinator, and an external EA facilitator. This step allowed the EA facilitator to determine the key stakeholders and project personnel for the BTMEEP. Step 2 involved interviews with main stakeholders, such as managers, staff, and interest groups; the EA facilitator conducted a total of 14 interviews. This step resulted in the creation of the project's logic model and performance indicators for the BTMEEP. Step 3 consisted of four interviews with teachers involved in the BTMEEP, whose responses were then compared to the original goals of the project. Steps 4 through 6 followed the EA method: they determined the project's current effectiveness, made any necessary changes or improvements to the project, and compiled all of the information from the prior steps to develop a set of evaluation options for BTMEEP staff.

After the EA six-step framework was completed, the BTMEEP had concrete project objectives and goals, a project description, a logic model, key performance indicators, and evaluation options. The BTMEEP logic model (a tool that helps determine the conditions under which a program can work and helps communicate program design and expectations) created a clear understanding of the program's objectives among stakeholders and project staff. The EA demonstrated that the project goals and expected outputs were realistic. The EA found slight discrepancies between the logic model and the actual project; those issues were highlighted and discussed among project managers and staff, and the logic model was then adjusted to better fit the project's intended outputs. Additionally, the EA identified 91 project performance indicators, which were intended to be used to accurately evaluate the BTMEEP's effectiveness.

The EA method may not be useful for well-established EE programs that already have clear, obtainable objectives and the resources to create a usable evaluation tool. Additionally, communication issues between project managers, staff, and the EA facilitator may have made this study more challenging, and the complex schedules of the participants made the collection of information taxing and time consuming.

The authors recommend using the EA method for new EE projects that have limited resources and are finding it challenging to reach desired goals and objectives. The EA method helped the BTMEEP identify needed modifications and beneficial changes to its programming structure and effectiveness.

The Bottom Line

The Evaluability Assessment (EA) method is a low-cost assessment tool for determining project objectives and intended outcomes. The Blue-throated Macaw's Environmental Education Program (BTMEEP) was an EE program in Bolivia focused on educating local community members about the endangered and native Blue-throated Macaw. The program used the EA method to engage stakeholders, identify needed changes, agree upon outputs and outcomes, and develop a suite of evaluation options. The EA method can help EE programs achieve clarity around outputs and outcomes and facilitate future program evaluation.