Counterfactual Impact Evaluation Method


A Counterfactual Impact Evaluation Method is an impact evaluation method that estimates an intervention's impact by comparing the outcomes observed under the intervention with a counterfactual: the outcomes that would have occurred for the same beneficiaries in the intervention's absence, approximated with reference to a comparison group.



References

2021

  • (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Impact_evaluation#Counterfactual_evaluation_designs Retrieved:2021-11-30.
    • Counterfactual analysis enables evaluators to attribute cause and effect between interventions and outcomes. The 'counterfactual' measures what would have happened to beneficiaries in the absence of the intervention, and impact is estimated by comparing counterfactual outcomes to those observed under the intervention. The key challenge in impact evaluation is that the counterfactual cannot be directly observed and must be approximated with reference to a comparison group. There are a range of accepted approaches to determining an appropriate comparison group for counterfactual analysis, using either prospective (ex ante) or retrospective (ex post) evaluation design. Prospective evaluations begin during the design phase of the intervention, involving collection of baseline and end-line data from intervention beneficiaries (the 'treatment group') and non-beneficiaries (the 'comparison group'); they may involve selection of individuals or communities into treatment and comparison groups. Retrospective evaluations are usually conducted after the implementation phase and may exploit existing survey data, although the best evaluations will collect data as close to baseline as possible, to ensure comparability of intervention and comparison groups.
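
As a hedged illustration of the prospective design described above, the following Python sketch estimates impact by a difference-in-differences comparison on simulated baseline and end-line data; the group sizes, the common trend, and the `true_impact` value are illustrative assumptions, not figures from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical households per group

# Simulated baseline outcomes for the treatment and comparison groups
# (all numbers are illustrative assumptions).
treat_base = rng.normal(100.0, 10.0, n)
comp_base = rng.normal(100.0, 10.0, n)

common_trend = 5.0  # change both groups would experience anyway
true_impact = 8.0   # assumed effect of the intervention

# End-line outcomes: the comparison group's change over time stands in
# for the unobservable counterfactual change of the treatment group.
treat_end = treat_base + common_trend + true_impact + rng.normal(0.0, 5.0, n)
comp_end = comp_base + common_trend + rng.normal(0.0, 5.0, n)

# Difference-in-differences estimate of the intervention's impact.
did = (treat_end.mean() - treat_base.mean()) - (comp_end.mean() - comp_base.mean())
print(f"estimated impact: {did:.2f} (assumed true impact: {true_impact})")
```

The comparison group's baseline-to-end-line change approximates the counterfactual change, which is why collecting data as close to baseline as possible matters for the comparability of the two groups.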

      There are five key principles relating to internal validity (study design) and external validity (generalizability) which rigorous impact evaluations should address: confounding factors, selection bias, spillover effects, contamination, and impact heterogeneity.
      • Confounding occurs where certain factors, typically relating to socioeconomic status, are correlated with exposure to the intervention and, independently of exposure, are causally related to the outcome of interest. Confounding factors are therefore alternative explanations for an observed (possibly spurious) relationship between intervention and outcome.
      • Selection bias, a special case of confounding, occurs where intervention participants are non-randomly drawn from the beneficiary population and the criteria determining selection are correlated with outcomes. Unobserved factors that are associated with access to or participation in the intervention, and that are causally related to the outcome of interest, may produce a spurious relationship between intervention and outcome if unaccounted for. Self-selection occurs where, for example, more able or organized individuals or communities, who are more likely to have better outcomes of interest, are also more likely to participate in the intervention. Endogenous program selection occurs where individuals or communities are chosen to participate because they are seen as more likely to benefit from the intervention. Ignoring confounding factors can lead to omitted variable bias; in the special case of selection bias, the endogeneity of the selection variables can cause simultaneity bias (a small simulation after this list makes the resulting bias concrete).
      • Spillover (referred to as contagion in the case of experimental evaluations) occurs when members of the comparison (control) group are affected by the intervention.
      • Contamination occurs when members of the treatment and/or comparison groups have access to another intervention which also affects the outcome of interest.

      • Impact heterogeneity refers to differences in impact by beneficiary type and context. High-quality impact evaluations will assess the extent to which different groups (e.g., the disadvantaged) benefit from an intervention, as well as the potential effect of context on impact. The degree to which results are generalizable determines the applicability of lessons learned for interventions in other contexts.
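
As promised above, here is a minimal Python simulation of confounding through self-selection; the socioeconomic variable, effect sizes, and data-generating process are all illustrative assumptions. The naive participant/non-participant comparison overstates the assumed true effect of 2.0, while adjusting for the confounder in an ordinary least-squares fit recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical confounder (e.g., socioeconomic status): it raises both
# the chance of participating in the intervention and the outcome.
ses = rng.normal(0.0, 1.0, n)
participate = (ses + rng.normal(0.0, 1.0, n)) > 0.0  # self-selection on SES
outcome = 2.0 * participate + 3.0 * ses + rng.normal(0.0, 1.0, n)  # true effect: 2.0

# Naive comparison of participants vs. non-participants is confounded.
naive = outcome[participate].mean() - outcome[~participate].mean()

# Adjusting for the observed confounder (OLS with SES as a covariate)
# removes the omitted variable bias in this simulation.
X = np.column_stack([np.ones(n), participate, ses])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive estimate:    {naive:.2f}")    # biased upward by selection on SES
print(f"adjusted estimate: {beta[1]:.2f}")  # close to the assumed true 2.0
```

In practice the confounder is often unobserved, which is why the design principles above matter: no amount of statistical adjustment can correct for a factor that was never measured.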
    • Impact evaluation designs are identified by the type of methods used to generate the counterfactual and can be broadly classified into three categories – experimental, quasi-experimental, and non-experimental designs – that vary in feasibility, cost, timing of involvement (during the design phase or after implementation of the intervention), and degree of selection bias. White (2006)[1] and Ravallion (2008)[2] discuss alternative impact evaluation approaches.
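
To contrast the design categories, here is a companion sketch under the same assumed data-generating process as the confounding simulation above: randomizing assignment (an experimental design) balances the hypothetical confounder across groups, so a simple difference in means identifies the impact without any covariate adjustment.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Same illustrative setting as before, but assignment is now randomized,
# so socioeconomic status is balanced across treatment and control.
ses = rng.normal(0.0, 1.0, n)
treated = rng.random(n) < 0.5  # random assignment
outcome = 2.0 * treated + 3.0 * ses + rng.normal(0.0, 1.0, n)

# Difference in means is an unbiased impact estimate under randomization.
ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"experimental estimate: {ate:.2f}")  # close to the assumed true 2.0
```

Quasi-experimental designs try to approximate this balance statistically (for example, through matching or difference-in-differences), while non-experimental designs rely on modeling assumptions alone, which is one reason the three categories differ in their exposure to selection bias.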