Having evaluation in mind when designing a program can help ensure the success of future evaluations.
No matter the source of funding for their program — government or private foundation — managers everywhere are feeling greater pressure to demonstrate their programs' effectiveness. And they must do so using scientific methods, not anecdotes about individuals who benefited. Funders want to see data and other hard evidence to justify continuing or expanding a program.
One of the first tasks in gathering evidence about a program's successes and limitations (or failures) is to initiate an evaluation, a systematic assessment of the program's design, activities or outcomes. Evaluations can help funders and program managers make better judgments, improve effectiveness or make programming decisions.[1] Evaluations can describe how a program is operating, show whether it is working as intended, determine whether it has achieved its objectives and identify areas for improvement.
Having a plan for the evaluation is critical, and having it ready when the program launches is best.
CrimeSolutions uses the results of research evaluations to categorize programs and practices as "effective," "promising," or having "no effects." To date, more than 300 programs and practices have been reviewed for their efficacy. Visit CrimeSolutions.ojp.gov to learn more.
An evaluation plan outlines the evaluation's goals and purpose, the research questions, and information to be gathered. Ideally, program staff and an evaluator should develop the plan before the program starts, using a process that involves all relevant program stakeholders.
Having a plan helps ensure that future evaluations are feasible and instructive. Putting the plan in writing helps ensure that the process is transparent and that all stakeholders agree on the goals of both the program and the evaluation. It serves as a reference when questions arise about priorities, supports requests for program and evaluation funding, and informs new staff. An evaluation plan also can help stakeholders develop a realistic timeline for when the program will (or should) be ready for evaluation.
Partners and stakeholders use evaluation plans to clarify a program's purpose, goals and objectives and to describe how program activities are linked to their intended effects. To this end, stakeholders should consider developing a logic model. A logic model visually depicts how a program is expected to work and achieve its goals, specifying the program's inputs, activities, outputs and outcomes.
Figure 1. Sample Logic Model (figure forthcoming)
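To make the four components concrete, here is a minimal sketch, in Python, of the kind of information a logic model organizes. The mentoring program and every entry in it are invented for illustration; a real logic model would be built from the program's own design documents.

```python
# A logic model for a hypothetical youth mentoring program, laid out as
# plain data. Each key corresponds to one column of a logic model
# diagram, read left to right: inputs -> activities -> outputs -> outcomes.
logic_model = {
    "inputs": ["grant funding", "trained mentors", "program staff"],
    "activities": ["weekly one-on-one mentoring sessions",
                   "monthly group workshops"],
    "outputs": ["number of sessions held", "number of youth served"],
    "outcomes": ["improved school attendance", "reduced delinquency"],
}

# Print the model as the left-to-right chain a diagram would show.
for component, examples in logic_model.items():
    print(f"{component}: {'; '.join(examples)}")
```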
The evaluation plan should set goals for future evaluations and specify the questions those evaluations should answer. This information will drive decisions on what data will be needed and how to collect them. For example, stakeholders may be interested in the extent to which the program was implemented as planned. Determining that requires documentation on program design, program implementation, problems encountered, the targeted audience and actual participation. Or, stakeholders might want to know the program's impact on participants and whether it achieved its objectives. In this case, program staff should plan to collect data before implementing the program so an evaluator later can assess any changes attributable to the program.
A program can benefit from multiple evaluations over the course of its design and implementation.
The type and timing of evaluations are important. Evaluation is more difficult and less meaningful after the program ends, because stakeholders cannot use information gathered from the evaluation to alter the program's implementation or to justify continued funding. Conducting certain evaluations, like outcome evaluations, is difficult when a program is too new because program elements, strategies or procedures often still are being adjusted and finalized.
Each type of evaluation answers different questions and is suited to a different point in the program's life:

Formative evaluation: use during the planning stages or the beginning of the program's implementation, so revisions can be made before the program starts.

Process evaluation: use in the early stages of the program's implementation, to provide initial feedback.

Outcome evaluation: use at the end of the program's development, when the program is stable and unlikely to change in fundamental ways.
When designing a program, it is easy to focus only on the immediate decisions that must be made to implement the program and make it operational. But evaluating a program can be challenging or impossible if stakeholders do not plan for evaluation during initial program development. Having evaluation in mind when designing a program can help ensure the success of future evaluations.
Stakeholders need to know the questions they want an evaluation to answer and build the capacity to collect data to answer those questions. For example, if stakeholders want to know what changes resulted from the program, baseline data should be collected before the program begins. This is especially important if the evaluation will use surveys or interviews to assess baseline opinions or behaviors, because asking respondents later to recall prior opinions or behavior may produce biased results. By thinking this through in advance, stakeholders can ensure they conduct any necessary pre-tests before the program begins and establish a method to collect data over the course of the program. Furthermore, planning for a future outcome evaluation — even if the immediate goal is a process evaluation — can be beneficial because at some point many stakeholders will want or need to answer the question "Does it work?" Partnering with an experienced evaluator can help stakeholders identify potential evaluation designs and decide how to collect the required data.
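As a rough illustration of why baseline data matter, the following Python sketch computes the simplest pre/post contrast an evaluator might start from. The participants, scores and scale are invented for the example, and a real outcome evaluation would rest on a far more rigorous design.

```python
import statistics

# Hypothetical risk-assessment scores (1-10 scale, lower is better) for
# the same eight participants, measured before the program began
# (baseline) and again after it ended (follow-up). All values invented.
baseline = [6.0, 5.5, 7.0, 6.5, 5.0, 6.0, 7.5, 5.5]
followup = [4.5, 5.0, 5.5, 6.0, 4.0, 5.0, 6.5, 4.5]

# The simplest pre/post contrast: the mean change per participant.
changes = [post - pre for pre, post in zip(baseline, followup)]
print(f"Mean change: {statistics.mean(changes):+.2f}")
print(f"Std. dev. of change: {statistics.stdev(changes):.2f}")
```

Even this bare-bones comparison is possible only because the baseline scores were collected before the program began; no amount of data gathered afterward can substitute for them.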
Stakeholders should consider the time and cost of an evaluation effort and build them into the evaluation plan. A general rule of thumb is to budget 10 percent of the total program cost for evaluation (for example, $50,000 for a program costing $500,000). Although completing a process evaluation may require only a few months, a large-scale outcome evaluation may require years and a substantial financial outlay. If stakeholders want the evaluation's results to help improve the program or justify continued funding, they need to make sure the evaluation is completed before the program is slated to end. This is particularly critical for grant-funded programs, which usually are active only for a set period of time.
To help ensure the evaluation is instructive and meaningful, program staff should document the program's design, purpose and objectives so an evaluator can compare them to the program's actual implementation. Without that documentation, an evaluation is unlikely to produce enough meaningful information to justify its cost and level of effort. Having an evaluation plan in place from the beginning with clear requirements for documentation can help ensure that the needed information is actually collected.
Despite the best planning, stakeholders cannot anticipate all aspects of a program's operation before implementation, so an evaluation plan should be responsive to program changes and shifting priorities. As they get new information, stakeholders may find some goals unrealistic or some data impossible to collect, access or track. They should revise the evaluation plan as necessary and document each change, justification and decision point.
In turn, stakeholders should be aware that some evaluations, particularly outcome evaluations, might require staff to operate a program differently than usual to rigorously assess the program's effect. For example, evaluators might ask staff to refrain from altering the program's operation during the evaluation period or to select participants in a different manner, perhaps through a randomized process. Partnering with an evaluator in the early stages of program development and implementation can help program staff understand what may be required of them to successfully evaluate the program later.
Implementing and evaluating a multisite program can be challenging, especially when sites are given latitude to implement the program in ways that suit their specific needs, because goals and designs will vary by site.
When writing an evaluation plan, stakeholders must consider whether sites will be implementing the program uniformly or will have flexibility in their design. If each site has a different strategy, stakeholders need to take that diversity into consideration and note it in the evaluation plan. Each site should create its own documentation, including a timeline and list of goals and objectives, and sites may require different evaluation strategies. Addressing differences across sites in the evaluation plan and monitoring their progress over time helps ensure each site is fully operational and has the necessary data and functionality for future evaluations.
Programs without evaluation plans in place can experience significant challenges during evaluations. If a program does not have an evaluation plan, an evaluability assessment can help determine whether the program can be evaluated and whether an evaluation will produce useful results. A program with an evaluation plan also can benefit from an evaluability assessment, which can gauge how well the evaluation plan was put into action and its effectiveness in preparing the program for an evaluation.
An evaluability assessment analyzes a program's goals, state of implementation, data capacity and measurable outcomes. Because an evaluability assessment costs significantly less than a full evaluation, it can save valuable time and money if it shows the program cannot be evaluated. The evaluability assessment also can provide stakeholders with valuable information on how to alter the program structure to support future evaluations.
The key to developing a program that can be evaluated is to have the goal of future evaluation in mind when designing the program's documentation, goals and implementation. Stakeholders also must continually monitor the program's progress and verify that relevant data are being captured, particularly if the goal is to conduct an outcome evaluation. Although evaluation is not always easy and can sometimes be an imposition on program operations, having an evaluation plan is invaluable to making such efforts as feasible and successful as possible. Program staff should, whenever possible, partner with a university, an experienced researcher or a sister science agency to help construct the plan. Having an evaluation plan in place will help ensure that future program evaluation is feasible and financially viable and that its results are instructive to program staff and stakeholders.
For More Information
Read a chapter by Finn-Aage Esbensen and Kristy N. Matsuda in Changing Course: Preventing Gang Membership (pdf, 12 pages) to learn more about program evaluations and why having a well-designed evaluation is critical to determining a program's effectiveness.
This article appeared in NIJ Journal No. 275, posted May 2015.
[note 1] Patton, Michael Quinn, Qualitative Research Evaluation Methods, Thousand Oaks, Calif.: Sage, 1987; Rossi, Peter H., & Howard E. Freeman, Evaluation: A Systematic Approach (5th ed.), Newbury Park, Calif.: Sage, 1993.
Alison Brooks Martin was a postdoctoral research associate in NIJ's Office of Research and Evaluation from November 2013 until January 2015.