Original language | English |
---|---|
Title of host publication | Research Methods Foundations |
Editors | Paul Atkinson, Sara Delamont, Alexandru Cernat, Joseph W. Sakshaug, Richard A. Williams |
Publisher | SAGE Publications Ltd |
Number of pages | 25 |
ISBN (Electronic) | 9781529748932 |
ISBN (Print) | 9781473965003, 1473965004 |
Publication status | Published - Jan 2020 |
Abstract
Evaluation research takes many forms and is undertaken for many purposes. All evaluations are, however, orientated to informing improvements to decision-making. Evaluations may be undertaken before, during, or after an intervention, whether in relation to a programme, policy, practice, or product. Decision-makers need evaluation evidence relating to the effects produced by an intervention, the mechanisms activated by the intervention, the moderators or contexts needed for those mechanisms to be activated, the implementation challenges for putting the intervention in place, and the economic costs and benefits of the intervention. The acronym EMMIE summarises these needs. Effects are often measured through randomised controlled trials (RCTs) or their close equivalents, which emphasise internal validity: these studies provide strong evidence on the effect an intervention had where and when it was used. RCTs are relatively weak with regard to external validity: whether findings can be applied at other places and times. They are also weak in understanding what it was about an intervention that produced its effects, variations in effects by subgroup, and the conditions for the production of those effects. For these purposes, other theory-driven methods, such as those used in realist evaluation, are needed. In some cases, evaluations are undertaken to fine-tune the way an intervention is working, in order to gauge how it might best be scaled up; here, there will be little or no interest in its overall effects. Evaluations need to be designed to answer the specific questions relevant to the decisions that need to be taken.