Ensuring evaluation is useful – how to apply the criteria
Key messages
- Start with what the intended users of the evaluation want and need to know, and fit the criteria to the evaluation questions, not vice versa.
- Use the criteria flexibly and selectively to ensure they fit the purpose of the evaluation. The intention is not to apply a rigid, pre-defined approach to using the criteria, nor to use all the criteria in every evaluation. Reflect on which of the criteria provide an appropriate framework for the evaluation you are designing, and how, within the available budget.
3.1: How the purpose of the evaluation informs the choice of criteria
An evaluation must be useful to, and used by, its intended primary users. These users may range from programme managers and frontline humanitarian responders to senior management, board members and funders.[1] The intended purpose is usually to improve the policy and performance of humanitarian action. EHA can have a learning purpose, for example when oriented towards the practitioners and managers who design and implement the humanitarian response. It may also have an accountability purpose, for example when commissioned by governance bodies and funders to inform future resource allocation.
Evaluation is an important means to understand and analyse whether humanitarian action meets the needs and priorities of people affected by crisis, thus fulfilling some element of accountability to affected communities. Yet it is the ongoing engagement and relationship between humanitarian actors and affected communities that sits at the heart of this accountability (see Chapter 4: Relevance, and section 11.1 Putting people affected by crisis at the centre). Affected people are stakeholders in the evaluation, but they are unlikely to be its users.
The purpose of the evaluation and the needs of evaluation users should drive the selection and use of the evaluation criteria (see Table 4 below). For example, if the purpose of the evaluation is to inform decision-making to improve the results of humanitarian action for people affected by crisis, the criteria of effectiveness, coverage and inclusion, and relevance are most pertinent. Another evaluation may aim to encourage reflection and learning, for example on the nature of relationships between humanitarian actors (e.g. international and local). In this case, the criterion of interconnection is most pertinent, and a more facilitative approach to evaluation, to support reflection, may be appropriate.[2] The evaluation can also contribute to transformational change, particularly by incorporating the priority themes.
3.2: Select and apply the criteria thoughtfully and flexibly
Follow three key steps to apply the criteria to EHA thoughtfully, and to plan the evaluation with a user focus.[3]
Step 1 – Identify the purpose and users of the evaluation
What is the overall purpose of the evaluation? Who are the intended primary users of the evaluation, and what do they need to know to make better decisions about what to do, and how, in humanitarian action? (Note: there may be many intended users of the evaluation. Identifying the intended primary users helps avoid an unmanageable list of evaluation questions and promotes selective use of the criteria.)
Step 2 – Select the evaluation questions
To meet the needs of the intended users, what key high-level questions should the evaluation seek to answer?[4]
(If possible, consider how to promote genuine participation and leadership of people and communities affected by crisis throughout the evaluative process, starting from design and criteria setting.[5] They are unlikely to use the evaluation, but they are key stakeholders. See section 11.1 Putting people affected by crisis at the centre.)
Step 3 – Apply the criteria
To which criteria do your evaluation questions relate? Apply only these criteria to the evaluation.
The full list of criteria is not obligatory for all evaluations of humanitarian action. Identify the criteria that are most relevant and useful to meet the information needs of the evaluation users. What do they need to know to make a difference? Where a criterion has two dimensions (e.g. interconnection and coherence), clarify whether one or both dimensions are relevant. Time spent consulting the intended users at the outset is key to ensuring that the evaluation reflects their perspectives and priorities, no matter where they are located, geographically and culturally. This helps ensure that inherent power dynamics within the humanitarian system are not automatically replicated in the planning and design of the evaluation. Also, be prepared to adapt the terminology of the criteria to suit the users of the evaluation. Where funding is a constraint, consider how to focus the evaluation on a few key issues that emerge from consultation with users. This, in turn, will inform your selective use of some rather than all of the criteria.
Table 4: Selecting criteria according to the information needs of evaluation users – some examples

Using the criteria to structure your evaluation
The criteria provide a framework to organise evaluation questions and to structure the evaluation process. For some evaluations, the criteria also provide a framework to structure findings in the final evaluation report. But this may not be the most useful structure for evaluation users. For example, if users are interested in evaluation findings for different sectors – such as protection, health and food security – consider structuring the evaluation report by sector. This could be supplemented with a concluding chapter that summarises findings by criteria.
Footnotes
1. See section 3.3 of ALNAP’s EHA guide (2016) for ways to identify the stakeholders of an evaluation and, among these stakeholders, the intended primary users.
2. See Darcy and Dillon (2020) for the distinction between ‘technical’ evaluation, which provides evidence to inform decision-making, and ‘facilitative’ evaluation, which supports reflection and learning.
3. See also OECD (2021) on applying the OECD DAC evaluation criteria thoughtfully.
4. See section 6.3 of ALNAP’s EHA guide (2016) for the rationale for selecting a small number of high-level evaluation questions: three to four.
5. Despite strong recognition of the humanitarian imperative and ethical responsibility to ensure that communities access, and benefit from, monitoring and evaluation knowledge, it remains difficult to make evaluation findings accessible to communities. Several barriers make this practice less common in the humanitarian sector, including resourcing constraints, lack of prioritisation and logistics (see HAG et al., 2024).