In the last few years, a “movement” to explore, develop and test a range of rigorous alternatives to counterfactual methods in impact evaluation has taken an increasingly defined and consistent shape (White & Phillips, 2012; Stern, 2015; Stern et al., 2012; Befani, Ramalingam, & Stern, 2015; Befani, Barnett, & Stern, 2014). In principle, it is now widely accepted that a broad range of methodological options is appropriate, under different circumstances, for evaluating the impact of development programmes. However, while apparently solving the problem of scarcity of options, this expansion has created a selection problem. Although unsuitable and infeasible in many real-world circumstances, the rigid “gold standard” hierarchy, which placed experimental and quasi-experimental evidence at the top and qualitative evidence at the bottom, had the (illusory, some might say) benefit of being simple and of leading to clear, almost inevitable choices. Now that some policy fields and institutions have expanded their horizons, recognising that the “best” method or combination of methods depends on the evaluation questions, the intended uses, and the attributes of the intervention and the evaluation process, we struggle to make and justify choices. The tool presented in this paper is an attempt to improve this situation and the process of methodological choice by helping users make an informed and reasoned choice of one or more methods for a specific evaluation.