Commentary

Revisiting the criteria: Why the consultation on the OECD DAC should be your priority

Last week's European Evaluation Society 2018 conference generated a rich debate on the OECD DAC criteria. It came in the context of the OECD's consultation on the use and potential revision of the criteria, which you can access here. We strongly encourage all ALNAP Members to contribute to the consultation.

The deadline is 31 October, and the views of our Membership will be essential to ensure the consultation takes account of the specificities of humanitarian evaluation.

Over the years, we have seen more and more ALNAP Members adopt the OECD DAC criteria within their evaluation frameworks. This is a good thing, because it allows a greater level of comparability between evaluations. But we also believe it is important to encourage critical reflection when using the criteria to structure an evaluation. In 2006, ALNAP published guidance on the definition and application of the criteria to humanitarian action.

Ten years later, our EHA Guide went further in advocating for serious critical reflection when designing evaluation frameworks and questions: start with the needs of key stakeholder groups, then consider the specificities of the thing being evaluated, and then build evaluation questions that reflect the eight humanitarian-oriented DAC criteria in a meaningful way. The ALNAP EHA Guide and the OECD and UNEG norms and standards are really useful tools in this regard.

As we look to the future, it’s always worth considering and reconsidering the way we judge the worth and value of humanitarian action. Should we be adding criteria to take account of important issues such as gender, protection, resilience or the triple nexus? The ALNAP State of the Humanitarian System report, for example, now uses an expanded set of 10 criteria, whilst the ALNAP Global Forum papers also looked at and refined the standard list. But if we’re talking about going beyond individual reports and amending the DAC criteria themselves, isn’t there a danger of going too far? Should we fear building “shopping lists” of topics – important as they each may be – into the standard set of criteria? Or should we worry about asking the impossible of a project by testing it against incompatible standards? To take an oft-cited example with the current criteria, coverage can sometimes entail high costs per beneficiary, thereby lowering the efficiency “score”. Is this a problem?

Most importantly, I wonder if the challenge in revising the DAC criteria will be to maintain a sort of “reasonable pluralism” about what humanitarian actions should aim to achieve. As they currently stand, the DAC criteria allow evaluations to assess the extent to which a project has achieved its stated objectives whilst also making space to question the appropriateness of those objectives. Thus, the evaluation of a short-term food distribution to a besieged location in an urban conflict might not have to “report” against the distribution’s impact on community resilience; but it does have space to explore this issue where appropriate. This balance might well be harder to strike if all evaluations are required to assess humanitarian actions against a long list of substantive goals. Wouldn’t adding even more boxes to tick further reduce evaluation to a box-ticking exercise, after all?

Whatever your view, please be sure to provide your thoughts on the criteria via the OECD consultation process. Your views are important!