Wednesday, October 13, 2010

Impact evaluations, good and bad

Chris Blattman

In an interesting new working paper, Michael Clemens and Gabriel Demombynes discuss different levels of rigor in evaluating the impact of a specific intervention. Roughly speaking, they compare the project's current approach (before-and-after comparison) to a better approach (ex post matched controls) to the 'best' approach (ex ante randomization). For instance: one impact currently claimed is increased cell phone penetration, but cell phone ownership has been rising everywhere as time passes, so the claimed effect mostly goes away (as a direct impact) once controls are used.
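To see why a before-and-after comparison overstates impact when the outcome is trending upward everywhere, here is a minimal simulation sketch (my own, not from the paper; every parameter value is made up). It contrasts a naive before/after estimate with a simple difference-in-differences estimate using a control group, a cruder cousin of the matched-controls approach the authors describe:

```python
# Illustrative simulation (not from the paper): a secular trend in cell phone
# penetration makes a naive before/after comparison look like a large impact.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                      # hypothetical villages per group

true_effect = 0.05            # genuine impact of the intervention (assumed)
secular_trend = 0.20          # penetration rising everywhere over the period

# Baseline penetration rates for treated and control villages.
baseline = rng.normal(0.10, 0.03, size=(2, n)).clip(0, 1)
noise = rng.normal(0, 0.02, size=(2, n))

treated_after = baseline[0] + secular_trend + true_effect + noise[0]
control_after = baseline[1] + secular_trend + noise[1]

# Before/after (the project's current approach): trend and effect conflated.
before_after = treated_after.mean() - baseline[0].mean()

# With a control group (difference-in-differences): the shared trend cancels.
did = (treated_after.mean() - baseline[0].mean()) - \
      (control_after.mean() - baseline[1].mean())

print(f"before/after estimate: {before_after:.3f}")  # ~0.25, mostly trend
print(f"with controls (DiD):   {did:.3f}")           # ~0.05, the true effect
```

The shared trend cancels out of the second estimate. Matched controls try to build a comparable comparison group ex post; randomization guarantees comparability in expectation ex ante, which is what makes it the gold standard.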

Part of what makes the paper interesting is that their chosen example is a large (potentially huge) media-friendly intervention: the Millennium Villages Project. Doing evaluation right is important, especially so for exciting but untested ideas. But part of what I found interesting is that there is no randomization, although the authors show how it could easily be incorporated. It's worth keeping in mind that one can fall somewhat short of the gold standard, or very short, and this makes a big difference; the world is not binary.

So when, if ever, should we not do a rigorous evaluation? Probably the best answer I've heard to that is: when ambiguity is useful as political cover. For instance, one could argue that conditional-cash-transfer programs (e.g. Oportunidades) are mostly redistributive in nature, and the conditional dimension exists to keep conservatives happy. Doing a rigorous evaluation might suggest that these are not cost-effective ways to, say, increase school attendance rates. Not doing the evaluation allows the primary goal to continue without having to argue for it solely on the cost-effectiveness merits. Or so one could argue.
