
Not everything that counts can be counted: mixed methods impact evaluations in global health

Authors
S. Alba, L. Blok, P. Baatsen, J. Toonen
Publication year
2020


Not everything that counts can be counted. And not everything that can be counted counts – William Bruce Cameron

Do vaccination campaigns increase immunization rates in young children?  Do home-visiting programs for new mothers increase exclusive breastfeeding? Studies designed to answer these questions are known as health impact evaluations and are key for global health policy-making.

Counterfactual statistical approaches have long been hailed as the gold standard for health impact evaluations. These approaches are modelled on the statistical paradigm of randomized controlled trials. In public health this usually means that certain clusters of people (schools, health facilities or villages) are randomly allocated to receive the intervention while others are not. Primary health outcomes (e.g. immunization or breastfeeding rates) are then compared, often in before-after comparisons, to quantify the intervention's impact.
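
To make the counterfactual logic concrete, here is a minimal, purely illustrative sketch of a before-after comparison against a control group (a difference-in-differences estimate). The coverage figures and variable names are invented for the example and do not come from the article.

```python
# Hypothetical illustration of a before-after comparison with a control group
# (a difference-in-differences estimate); all numbers are made up for this example.

# Immunization coverage (%) before and after a vaccination campaign
treated_before, treated_after = 62.0, 78.0   # clusters that received the campaign
control_before, control_after = 60.0, 65.0   # clusters that did not

# Change observed in each group
change_treated = treated_after - treated_before    # 16 percentage points
change_control = control_after - control_before    # 5 percentage points

# Counterfactual logic: the control group's change approximates what would have
# happened without the intervention, so the difference between the two changes
# is the estimated impact.
estimated_impact = change_treated - change_control
print(f"Estimated impact: {estimated_impact:.1f} percentage points")  # 11.0
```

Note that an estimate like this says how much coverage changed relative to the counterfactual, but nothing about why.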

However, it is now amply recognized in public health that these counterfactual approaches have many limitations. First of all, they are difficult to implement: it is often not feasible to allocate the intervention to only some people (or groups of people). Perhaps more importantly, counterfactual approaches do not provide the most useful answers for policy-planning. They provide very context-specific information about an intervention: did it work then and there? But why did these changes occur (or not)? With counterfactual approaches we cannot answer this question – we don't know what's in the 'black box', what mechanisms were set in motion. This is especially important when no changes can be seen – why is that the case? What should be done differently next time?

Mixed-methods evaluations, which integrate statistical methods with qualitative research methods, are a very powerful way to overcome this problem: they allow us to peek into the black box. Quantitative methods for evaluations include surveys and analysis of routine data. The most commonly used qualitative methods are focus group discussions, in-depth interviews and structured observations. Quantitative methods determine whether there was an effect and the size of the effect, while qualitative methods help to explore how, why and for whom the intervention worked. Thus mixed-methods evaluations can provide a comprehensive understanding of both the extent of change (or lack thereof) brought about by a health intervention and the reasons behind it.

To ensure the success of a mixed-methods impact evaluation, it is important to mix methods from the start of the evaluation and throughout, so that the methods optimally complement each other. In Figure 1 and below we describe the approach developed by evaluators from KIT Royal Tropical Institute, building on many years' experience evaluating health interventions for a range of bilateral and multilateral donors. An important premise is that a team of experienced evaluators with method-specific expertise should be assembled. We believe it is imperative to include local researchers, to ensure that the evaluation questions, methods, results and dissemination are adequately contextualized.

Read the full article in BMJ Global Health.