Learning from the Experiments That Never Happened: Lessons from Trying to Conduct Randomized Evaluations of Matching Grant Programs in Africa
Main Authors: , , , ,
Format: Policy Research Working Paper
Language: English
Published: World Bank, Washington, DC, 2013
Online Access: http://documents.worldbank.org/curated/en/2012/12/17103546/learning-experiments-never-happened-lessons-trying-conduct-randomized-evaluations-matching-grant-programs-africa
http://hdl.handle.net/10986/12200
Summary: Matching grants are one of the most common policy instruments used by developing country governments to try to foster technological upgrading, innovation, exports, use of business development services, and other activities leading to firm growth. However, because they involve subsidizing firms, the risk is that they could crowd out private investment, subsidizing activities that firms were planning to undertake anyway, or lead to pure private gains rather than generating the public gains that justify government intervention. As a result, rigorous evaluation of the effects of such programs is important. The authors attempted to implement randomized experiments to evaluate the impact of seven matching grant programs offered in six African countries, but in each case were unable to complete an experimental evaluation. One critique of randomized experiments is publication bias, whereby only those experiments with "interesting" results get published. The hope is to mitigate this bias by learning from the experiments that never happened. This paper describes the three main proximate reasons for the lack of implementation: continued project delays, politicians unwilling to allow random assignment, and low program take-up. It then delves into the underlying causes of these outcomes. Political economy, overly stringent eligibility criteria that do not take account of where value added may be highest, a lack of attention to detail in "last mile" issues, the incentives facing project implementation staff, and the way impact evaluations are funded all help explain the failure of randomization. Lessons are drawn from these experiences for both the implementation and the possible evaluation of future projects.