Evaluators are optimistic about ALNAP’s work to update the guidance on using the OECD DAC criteria to evaluate humanitarian action (EHA). We use these criteria all the time to assess humanitarian activities of all kinds: from a single aid project to an innovative approach, an inter-agency crisis response, a multi-country programme, or an institutional strategy. This is not the place for a detailed discussion of each criterion, so here are a few general reflections.
1. Assessing overall humanitarian value
Beyond applying each criterion, EHA must be able to assess the whole of a humanitarian activity. It’s helpful that ALNAP does not propose to include new criteria or a ‘shopping list’ of new elements, as focussing on too many parts can easily obscure an activity’s overall worth.
Ideally, ALNAP should promote a holistic approach to assessing ‘humanitarian value’, defined in terms of saving lives, alleviating suffering, and upholding human dignity for people requiring assistance.
Any activity that delivers humanitarian value should be explicitly rooted in international humanitarian law, humanitarian principles, and the humanitarian imperative. An activity may also contribute economic, political, development, human rights, or social justice value, but that is not necessarily humanitarian value. Partly for this reason, EHA always requires a ‘thoughtful’ approach that considers context, never a simplistic or mechanistic one.
To this end, ALNAP should consider introducing the updated criteria with a frank discussion of their applicability. Since the criteria were initially developed by the OECD for development projects, what are the pros and cons of using them to assess humanitarian activities? What are their inherent biases, and what could this mean for EHA in practice?
2. Assessing the design of humanitarian activities
It’s good to hear that ALNAP will maintain the three criteria focused on an activity’s design and objectives (relevance, coherence, and coverage), only ‘tweaking’ them where necessary and giving more attention to coverage.
In my view, relevance means the activity responds to people’s needs and considers their stated priorities. ‘Responsiveness’ is a useful and dynamic concept. An activity that does not define clear objectives, or explicitly refer to needs assessments, or consult intended beneficiaries would normally do poorly on relevance.
Coherence should mean an activity is developed through inter-agency planning to fill gaps, aligns with relevant policy goals, and offers clear ‘added value’ compared to other humanitarian activities—perhaps development and peacebuilding activities too. This ‘external coherence’ should be accompanied by ‘internal coherence’, a useful distinction made in the 2019 revision of the DAC criteria. An activity that has no explicit humanitarian purpose, does not address gaps left by other actors, and does not engage in coordination would be less than coherent.
Coverage should be closely linked with the targeting of defined populations in need. It implies that the activity should be ambitious enough, manage risks so as to ‘stay and deliver’, and systematically use advocacy to address needs that cannot be met by assistance. Coverage should consider both ‘breadth of coverage’ (how much of the population in need is reached) and ‘depth of coverage’ (the targeting of specific vulnerable subgroups within a population, based on data about gender, age, disability, migration/displacement, and even ‘intersectionality’). An activity that does not target defined populations and specific needs within them would fall short.
3. Assessing the implementation of humanitarian activities
It’s good news that ALNAP proposes to keep the two criteria of effectiveness and efficiency to assess implementation, again with some tweaking of definitions, and further guidance on their application.
Effectiveness means achieving objectives and results, including any differential results across groups. In EHA, effectiveness often gets stretched to include cascading micro results, quality standards, successes and failures, internal and external factors, and a discussion of trade-offs. Yet it’s most important to consider ‘net’ effectiveness, and to capture an activity’s main achievements, even when these are not exactly those intended. Sometimes the learning itself is an important achievement. An activity that did not deliver largely as intended, did not identify key factors of success, and did not learn lessons from failures would be ineffective.
Efficiency should focus on assessing timely delivery and appropriate resourcing (i.e. financial, human, supplies), while describing costs, efficiency factors, and prioritisation rationales. Cost, however, is a problematic measure of efficiency, as unit costs are highly contextual, and it is a questionable indicator of humanitarian value on its own. Cost-effectiveness is usually a better measure. For an activity to be considered efficient, key efficiency factors will have been identified and monitored.
4. Assessing the consequences of humanitarian activities
Importantly, the criteria should offer clearer definitions of humanitarian expectations for impact and sustainability. It’s welcome that ALNAP proposes to refine the definition of impact and replace the concept of connectedness with a new term, while offering specific guidance.
In my view, impact should be defined in terms of how an activity purposefully contributes to reducing the needs, risks, and vulnerabilities of people affected—this is the language used in Nexus policy commitments. Impact could also be used to better assess an activity’s net effects or unintended consequences, such as negative market distortions and aid dependencies.
Instead of connectedness, which relies on an outdated linear crisis model that does not easily fit most protracted crises and their complex dynamics, ALNAP should repurpose the concept of ‘sustainability’, with a focus on assessing whether and how an activity’s life-saving and life-sustaining benefits are likely to continue. Key concepts here are exit strategy, community resilience, humanitarian preparedness, and reinforcing essential services and ‘systems’.
In sum…
These are just my unfiltered personal views, offered in the hope of contributing, but it’s important that we all care about these criteria and ‘own’ them in common. Since the 1990s, they have supported a growing culture of humanitarian evaluation, evidence and learning, which has helped to shape an ever-improving humanitarian system. Today’s humanitarian challenges are daunting, and they require this culture to flourish further.