Commentary

Reasons for hope - a seasoned evaluator’s take on ALNAP’s work to revise EHA criteria

Professional evaluators use evaluation criteria almost every day. We have intimate knowledge of how well they work in practice. We may not have an institution to represent us in the process of updating guidance on using DAC criteria to evaluate humanitarian action (EHA), but we have a major interest in helping to make the criteria as good as they can be. After all, we will be the ones selecting and adapting them, using them to generate evidence, and having them guide our analyses.

In 2023, when ALNAP announced it was updating its 2006 guidance, my initial concern was that the voice of long-serving independent evaluators, particularly those who had applied the guidance over its 18-year lifetime, would not be sufficiently heard. The advisory group appeared to be composed of relatively new evaluation managers from humanitarian agencies, and the initial research highlighted concerns that were more academic than applied. I worried that the rich experience of evaluators might not be sufficiently harvested.

So, I wrote a note to ALNAP proposing to help engage long-term professional evaluators in a series of focus group discussions. Working with ALNAP, I got as far as doing some detailed background work, designing the tools, and identifying informants to consult. Unfortunately, due to work obligations, I was unable to facilitate the discussions in my limited free time.

When ALNAP presented its proposals in June 2024, I was reassured to see it had consulted more than 500 people in multiple events and languages, even if it remained unclear to me exactly who these stakeholders were, and which voices were most prominent. Still, I was glad to see senior evaluators involved in developing the guidance.

Although the ALNAP proposals remained very high-level, presented in the form of a PowerPoint deck, I believe they offered some reasons for evaluators to be encouraged.

Speaking subjectively as an evaluator who has used the ALNAP criteria in around 55 evaluations and evaluative studies since 2006, here’s my personal take on the overall guidance, without getting into the detail of each criterion:

1. Balancing flexibility with standardisation.

I was hoping the revised framework would continue to offer a common reference point for EHA, an updated generic framework with overarching criteria that enable transparency, accountability and comparability across all EHA.

I also hoped it would continue to enable thoughtful adaptation to specific activities, tailored to test specific logics and support the crucial learning purpose, instead of being applied in a ‘mechanistic’ one-size-fits-all manner that would undermine EHA.

So, it’s reassuring that ALNAP proposes to emphasise flexibility as a governing principle while also aiming for sufficient standardisation. I believe flexibility is especially needed as EHA increasingly assesses very different types of humanitarian action, activity, and modality. These range across assistance, protection, and advocacy, and operate at different levels, from the single project to the larger programme, country response, or global strategy.

2. Adapting the criteria to humanitarian specificities.

I was hoping the guide would reflect sector-wide expectations and practices in humanitarian action. It’s fine that ALNAP and stakeholders emphasise alignment with the OECD DAC criteria. But I believe it’s more important for the guide to define the specificities of humanitarian action, especially in a context that has evolved considerably since 2006. Over that period we have witnessed: a greater prevalence of protracted crises and escalating needs linked to conflict; the growing need for strategic prioritisation amid deeply insufficient resources; and system-wide reforms and related critiques. We have also seen significant evolutions in coordination mechanisms, cash-based transfers, accountability to affected populations, gender, and resilience.

3. Maximising the guidance’s practical utility.

I hoped the guide would help evaluators make transparent judgments by offering descriptions of ‘what good looks like’ in humanitarian action, taking into account the main different types of humanitarian action mentioned above.

ALNAP intends to offer good practice examples and sample evaluation questions. However, the examples in the 2006 guide were too specific, and much less useful than the explanations given beneath each criterion.

ALNAP should recognise that evaluations translate the high-level criteria into more specific judgment criteria, and support this process by describing what is typically considered good enough in humanitarian action. In doing so, ALNAP need look no further than its own evolving record of humanitarian action, captured in the State of the Humanitarian System.

ALNAP could also add great value to the guide by including a glossary of key EHA-related terms, since many of these lack authoritative definitions. Even before defining evaluation criteria, the sector lacks working definitions of many basic terms in wide use. If you doubt this, please point me to a definition of humanitarian action.

4. The risk of framework overstretch.

I hoped the guide would avoid expanding the scope of humanitarian evaluations by adding a long list of policy expectations that remain to be operationalised and are not necessarily essential to saving lives, reducing suffering, and upholding dignity. Not all policy expectations carry the same weight in making humanitarian action work, and doing too much with shrinking resources is not a recipe for success.

Thankfully, ALNAP does not propose to introduce new criteria, but it will introduce a range of ‘priority themes’ and ‘cross-cutting issues’, including putting people affected by crisis at the centre and locally-led humanitarian action. Environment and climate change, gender equality, decolonisation and positionality, and adaptiveness/adaptive management are also included.

I believe ALNAP should guard against scope inflation that could undermine evaluability. Evaluation can help to improve humanitarian action, but ‘transforming’ evaluation criteria is not a route to changing the very real-world problems that humanitarian action seeks to address.