Learning from the Experiments that Never Happened: Lessons from Trying to Conduct Randomized Evaluations of Matching Grant Programs in Africa

Bibliographic Details
Main Authors: Campos, Francisco, Coville, Aidan, Fernandes, Ana M., Goldstein, Markus, McKenzie, David
Format: Journal Article
Language: en_US
Published: Elsevier 2014
Online Access: http://hdl.handle.net/10986/18191
Description
Summary: Matching grants are one of the most common policy instruments used by developing country governments to try to foster technological upgrading, innovation, exports, use of business development services, and other activities leading to firm growth. However, since they involve subsidizing firms, the risk is that they crowd out private investment, subsidizing activities that firms were planning to undertake anyway, or lead to pure private gains rather than generating the public gains that justify government intervention. As a result, rigorous evaluation of the effects of such programs is important. We attempted to implement randomized experiments to evaluate the impact of seven matching grant programs offered in six African countries, but in each case we were unable to complete an experimental evaluation. One critique of development research is publication bias, whereby only “interesting” results get published. Our hope is to mitigate this bias by learning from the experiments that never happened. We describe the three main proximate reasons for the lack of implementation: continued project delays, politicians unwilling to allow random assignment, and low program take-up; we then delve into the underlying causes of each. Political economy, overly stringent eligibility criteria that do not take account of where value added may be highest, a lack of attention to detail in “last mile” issues, the incentives facing project implementation staff, and the way impact evaluations are funded all help explain the failure of randomization. We draw lessons from these experiences for both the implementation and the possible evaluation of future projects.