Sunday, March 18, 2012

Seven Deadly Sins of Impact Evaluation

SSIR Opinion & Analysis

Impact evaluations—typically, third-party studies that seek to prove a program model's effectiveness—seem to be all the rage in social sector circles these days. Maybe in part that's because the process seems so straightforward: Just commission one when the time is right, and, when all goes well, proudly show off your "stamp of approval." You'll soon receive the resources you need to grow your organization and to influence all the other nonprofits in your field.

The problem is that it's rarely that simple in practice. Consider one youth-serving organization we know, which undertook an impact evaluation—at great expense and with high visibility to its funders—only to have the process cut short when the evaluators discovered that the organization's numerous sites were implementing its program model in wildly different ways. Did that nonprofit have growth potential? Yes. But had its leaders been conducting regular internal measurement, they probably would have realized that their organization was not yet mature enough for the rigors of an impact evaluation.

Pitfalls like this one crop up again and again in our conversations with organizations. In an effort to equip nonprofit leaders with the knowledge they need to make good decisions about impact evaluations, here is our list of the "seven deadly sins" we see nonprofits commit most often:

1. Immaturity. Per the anecdote above, don't pursue impact evaluation until you are crystal clear about your organization's target population, approach, and outcomes, and have internal data that shows you are consistently reaching that population, delivering intended services, and achieving intended outcomes. If you're not sure that's happening and want some help, you're a good candidate for a formative evaluation where, for much less time and money, third-party evaluators will take a "peek under the hood" and suggest how you can improve your model to get it ready for impact evaluation.

2. Deference. Some nonprofit leaders assume that the evaluator should dictate what the evaluation entails—either because the evaluator is the expert, or out of concern that they not be seen as influencing the study. But the truth is, unless you articulate up front what decisions you hope to make coming out of the evaluation (or, put another way, what questions you would like it to answer), the evaluation will probably not be very useful. No work of any kind should begin until there is clear agreement on what the study will and will not address.

3. Narrowness. Impact evaluations are often designed to answer one question: Do beneficiaries achieve better outcomes than similar individuals not receiving services? But far too few studies are adequately designed to answer the critical follow-on question: Why or why not? So the nonprofit is left with little to no guidance about what to replicate (if the evaluation is positive) or what to improve (if it isn't). If you pursue an impact evaluation, make sure evaluators gather data on the inputs (context, staff, beneficiaries, etc.) and outputs (services accessed) of your program, and that they explore qualitative methods (focus groups, in-depth interviews, etc.) that can help interpret the quantitative data they collect.

4. Isolation. Most nonprofits assume an impact evaluation has only two parties: themselves and the evaluators. But it's a good idea to create an evaluation advisory committee before the evaluation begins. Often these are volunteer committees composed of prestigious experts in the nonprofit's field (other evaluators, academics, practitioners, policymakers, etc.) who can advise on the types of thorny issues this post describes. They might meet just three times—to review the evaluation's design, interim results, and conclusions—but their advice can be critical to ensuring a useful evaluation.

5. Myopia. Those new to impact evaluation often assume they will receive a "pass" or "fail" mark at the end. In truth, nearly all evaluations result in something in between. If your organization doesn't get an A+, don't assume that you've failed. Instead, before getting started, ensure you develop a shared understanding—among staff and with funders—of why you are undertaking the evaluation and what the possible outcomes might be. Funders in particular need to recognize the bravery it takes to submit one's organization to outside scrutiny, and not automatically walk away from organizations that receive a B or C, so long as they have a serious plan in place to improve.

6. Finality. Many nonprofit leaders seem to think that an impact evaluation is a one-time exercise. In truth, the most successful nonprofits see measurement—including impact evaluations—as an ongoing exercise in trying to get better, not a "one and done" deal. They constantly measure because they are constantly testing their models in new sites, new contexts, and with adaptations to improve quality or lower cost.

7. Self-exclusion. Some nonprofit leaders equate impact evaluations with randomized controlled trials and assume that if a comparison group doesn't naturally exist for their work, then impact evaluation is not for them. In truth, there has been a significant amount of innovation in measuring the impact of complex interventions such as advocacy, neighborhood revitalization, and capacity building. While impact often cannot be "proven" in the specific, statistical way it can be with randomized controlled trials, evaluations in such environments can nonetheless result in significant insights about how well an organization's programs are working and how they can be improved. If your organization is ready for an impact evaluation on all other fronts, it's worth exploring the possibility.

Which of these "seven sins" have you personally experienced or seen? How have you gotten around these obstacles?
