Friday, February 04, 2011

Show me the Impact!

"The Millennium Villages Project (MVP) is an experimental anti-poverty intervention in villages across Africa. In October, we released evidence that the Project’s official publications were overstating its real effects, and we offered suggestions on improving its impact evaluation. On Tuesday the MVP, whose leadership and staff are aware of our work, continued to greatly overstate its impact."
That is from a post by Michael Clemens and Gabriel Demombynes over at Africa Can End Poverty, a blog edited by Shanta Devarajan, Chief Economist for Africa at the World Bank.  Michael and Gabriel note in particular the lack of evidence that MVP caused the claimed increase in "ownership of mobile phones...from 4% to 30%" in MVP villages.  It turns out that the rate of increase in cell phone ownership was about the same in surrounding villages that did not have an MVP program!

Michael, Gabriel and others are far more eloquent than I on the topic of evaluation, including randomized controlled trials (RCTs).  Though RCTs are the gold standard, they are neither possible nor affordable in all circumstances, which means that in practice we need a toolkit of different approaches.

In any case, even when interventions do have an impact, the question should be "compared to what, done by whom, and at whose request?"  That question is only rarely asked.

Donors and implementing agencies tend to evaluate their own programs, and as long as there is some positive result, they declare victory (albeit often with the obligatory "improvements can be made" and "challenges remain" language).

What if a donor (or group of donors) did something like the following?

  1. Take a total of $20 million and divide it into units of $2 million (the approximate amount of money spent in each MVP village).
  2. Give $2 million to each of five different implementors (e.g., MVP, Oxfam, Care, BRAC, and GlobalGiving) and assign each of them randomly to a village.
  3. After X years, evaluate each of them according to two measures: a) measured impact based on predefined objective criteria; and b) satisfaction shown by the villagers.
  4. Since learning is what drives increases in quality and productivity, repeat steps 1-3 to see which implementing organizations improved the most.  
  5. Expand the program by a) opening access to more implementors; b) having villagers themselves choose the objective criteria to be used; and c) allowing villages to choose from among the top five or ten implementors.
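The random assignment in step 2 could be sketched in a few lines of code. This is only a toy illustration of the pairing, not an evaluation design; the village names are placeholders, and the implementor list simply echoes the examples above:

```python
import random

# Implementors from the example above; village names are illustrative placeholders.
implementors = ["MVP", "Oxfam", "Care", "BRAC", "GlobalGiving"]
villages = ["Village A", "Village B", "Village C", "Village D", "Village E"]

def assign_randomly(implementors, villages, seed=None):
    """Randomly pair each implementor with exactly one village."""
    rng = random.Random(seed)  # seedable for a reproducible draw
    shuffled = villages[:]     # copy so the original list is untouched
    rng.shuffle(shuffled)
    return dict(zip(implementors, shuffled))

assignment = assign_randomly(implementors, villages)
for org, village in assignment.items():
    print(f"{org} -> {village} ($2 million)")
```

Each implementor ends up with exactly one village and each village with exactly one implementor, which is the one-to-one randomization the step calls for.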
If someone can round up the money, I am game to help design and oversee an experiment like this.  And would anyone like to suggest a name (and acronym) for it?