QUICK SUMMARY
Program evaluations help answer questions about the effectiveness and efficiency of programs or policies. They include process, outcome, and impact evaluations. We briefly describe each and then focus on the value of impact evaluations for determining the unique contribution of a program above and beyond what would exist without it.
STRATEGY DETAILS
Q1. What are different types of program evaluations?
Three common types of evaluations are:
- Process Evaluations: Process evaluations, also called implementation evaluations, examine how a program is implemented and how it actually operates. In the context of a logic model, which we've discussed separately on this website, process questions address inputs, activities, and outputs. One frequent purpose of a process evaluation is to monitor program implementation to ensure compliance with statutory and regulatory requirements, program design requirements, professional standards, and customer expectations.
- Outcome Evaluations: This type of evaluation focuses on the output-outcome portion of the logic model. Outcomes can be immediate effects of a program or more distant ones, although more distant outcomes are less likely to have a clear link to program outputs and more likely to be affected by outside factors. A careful look at unintended outcomes is also an important aspect of this type of evaluation.
- Impact Evaluations: Impact evaluations are designed to measure what a program achieved above and beyond what would have happened without it, which evaluators call the "counterfactual." The most straightforward way to isolate program impact is to randomly assign subjects (individuals, offices, etc.) to treatment and control groups. A well-designed experiment requires a valid statistical sample, including a sufficiently large sample size and assurance that the treatment and control groups remain distinct. An alternative to random assignment is to construct the treatment and control groups so that they are similar on characteristics considered important. (A simple sketch of random assignment appears below.)
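To make the random assignment step concrete, here is a minimal Python sketch. It is purely illustrative, not drawn from any particular evaluation: the subject labels and pool size are hypothetical.

```python
import random

random.seed(42)  # fix the seed so the split is reproducible

# Hypothetical pool of eligible subjects (could be individuals, offices, sites, etc.)
eligible = [f"subject_{i:03d}" for i in range(200)]

# Randomly shuffle, then split the pool in half -- the equivalent of
# flipping a fair coin for each subject
random.shuffle(eligible)
midpoint = len(eligible) // 2
treatment = eligible[:midpoint]  # offered the program
control = eligible[midpoint:]    # not offered the program

print(f"Treatment group: {len(treatment)} subjects")
print(f"Control group:   {len(control)} subjects")
```

Because every subject has the same chance of landing in either group, a sufficiently large sample will, on average, balance both observable and unobservable characteristics across the two groups.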
Q2. What's the value of an impact evaluation over just tracking outcomes?
To help answer that important question, let's take an example. This is a real-life example (the data are real) of a job training program that helped low-income individuals obtain employment. Services were offered to adult men, adult women and youth. Figure 1 shows the key outcomes: the percentage of individuals in each of those categories who became employed at any point within two years of the program. As you can see, youth did the best, with 63% becoming employed. Adult men did second best, at 57%, and adult women had the lowest employment rate, at just under half (49%).
At this point, take a minute to think about what you might do with these data if you were running the program and wanted to do the most good with your program budget. One plausible answer is to "do what works" and shift dollars from women and men to youth, since youth are doing the best. What answer would you give?
Now, we'd like you to shift your mindset to that of a program evaluator. That mindset asks, "How did the people in the program do compared to similar people who weren't in the program?" In this case, we can answer that question, since the program was evaluated using a randomized controlled trial. Figure 2 shows how an RCT works: eligible individuals are split (essentially by flipping a coin) between program and control groups, so that any difference in their outcomes is due to the program rather than to observable or unobservable differences between the groups.
Figure 3 adds the results for the control group. You can see that youth who were not in the program did just as well as those in the program. Men in the program did do better than the control group, but the difference is not statistically significant. Women in the program, however, did do better than the control group: the impact is 8 percentage points, the difference between 49% and 41%. In other words, these results suggest that the program only works for women, the exact opposite of the conclusion we'd draw from looking only at participants' outcomes.

Examples like this demonstrate why, if you really want to know whether a program works, you need a rigorous impact evaluation, not just performance metrics. Performance metrics are very useful for helping manage a program, including setting goals, tracking progress and identifying bottlenecks or problems. But they aren't very helpful for determining impact. That's the role of program evaluation and, in particular, impact evaluation.
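For readers who want to see the arithmetic, here is a minimal Python sketch of the impact estimate and a standard two-proportion z-test, using the women's employment rates from the example (49% in the program group versus 41% in the control group). The group sizes are hypothetical, since they aren't reported here, and the significance result depends on them.

```python
import math

# Employment rates for adult women from the example (Figure 3)
p_treatment = 0.49   # employed within two years, program group
p_control = 0.41     # employed within two years, control group

# Hypothetical group sizes -- assumed for illustration only
n_treatment = 500
n_control = 500

# Estimated impact: the difference in employment rates
impact = p_treatment - p_control
print(f"Estimated impact: {impact * 100:.0f} percentage points")

# Two-proportion z-test: could a difference this large be due to chance?
pooled = (p_treatment * n_treatment + p_control * n_control) / (n_treatment + n_control)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_treatment + 1 / n_control))
z = impact / se

# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")
# With these assumed group sizes, the 8-point difference is statistically
# significant at the 5% level (p is about 0.01)
```

In a real evaluation this calculation would be run on individual-level data with the actual group sizes. The same logic explains the men's result above: a difference in rates can look meaningful yet still be too small, relative to the sample, to rule out chance.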
ADDITIONAL RESOURCES
- Web resources: The Jameel Poverty Action Lab (J-PAL) at MIT provides excellent free resources including:
- A library of practical resources for undertaking randomized evaluations
- An Evaluating Social Programs Webinar Series
- Briefs: Overviews of the different types of program evaluations are provided by federal agencies, including the CDC and NOAA.
- Evidence to Insights (e2i) Coach: This free online platform, created by Mathematica, takes you through five steps to generate your own evidence.
- Gov Innovator podcast interviews:
- Determining if your program is having a positive impact (i.e., impact evaluation 101): David Evans, The World Bank
- Using randomized evaluations to address global poverty and other social policy challenges: Dean Karlan, Yale University and Innovations for Poverty Action
- Three strategies to promote relevance in program evaluations so that findings are useful to policymakers and practitioners: Evan Weissman, MDRC
- Reducing fear of program evaluation: Paul Decker, Mathematica Policy Research
- Using impact evaluation to improve program performance: Rachel Glennerster, J-PAL
- Building an evidence base for agency programs: Chris Spera, Corporation for National and Community Service
- A provider’s perspective on being part of a rigorous evaluation: Sarah Hurley, Youth Villages
- Using rigorous program evaluation to learn what works: Rob Hollister, Swarthmore College
CUSTOMIZED ASSISTANCE
Please contact us if your organization needs help doing more with program evaluation, including developing a chief evaluation office or a strategy to do more evaluations through internal resources as well as external research partners. If you need help conducting rigorous program evaluations, we can connect you with providers who specialize in them.