Commentary

Five reflections on evidence-informed decision-making from ALNAP’s new study

For me, one of the most exciting things that can happen in a research programme is uncovering results that challenge my expectations and assumptions. Exciting – and often unsettling. The research on humanitarian decision-making that Leah Campbell and I have conducted over the past two years has been both. Broadly, the findings challenge the assumption that the ‘best’ operational decisions are made on the basis of structured consideration, supported by evidence. And one finding in particular is – to say the least – surprising: “more information does not appear…to improve the quality of decisions, nor does it make decisions worse”. Or to put it another way – there is no correlation between the amount of information collected during the decision process and the perceived quality of the decision.

It is possible, of course, that these results are just plain wrong: measuring the quality of decisions in studies of this type is notoriously hard to do. However, while this would be comforting for those of us who believe in evidence-informed humanitarian action, rejecting the results out of hand would also be peculiar and slightly hypocritical, suggesting that we strongly support the use of evidence, just so long as it supports our prejudices and preconceptions. Instead, as this is the best evidence that we currently have relating to operational humanitarian decision-making, and as there is at least some corroboration of the findings in other research on decision-making in analogous situations, it seems more useful to try to investigate the findings further.

The study does not allow us to say with certainty why the relationship that we would expect to see is not there, but it does provide some possible pointers, and in doing so casts light and shadow on the nature and use of information in humanitarian action.

The first thing that stands out is the nature of the decisions made by participants in the study. The three main categories were, in order of frequency: staffing decisions (hiring, allocation of staff); decisions about working with other actors and participating in meetings; and decisions about ways of working (procurement, payments, travel arrangements).

Photo credit: Flickr/Stephen Little

Between them, these types of decisions made up 57% of the total. On the other hand, decisions about whether a response was required, and what sort of response would be most effective, made up only 26%.

The distinction is important, because it reminds us to be more careful and more precise when we talk about the value of evidence. Growing support for evidence-informed humanitarianism has been accompanied by a blurring and muddling of what ‘evidence’ means, and of how it can be useful. Just as the term ‘accountability’ has come to cover participatory programming, PSEA and feedback on project performance, and has hence become a general shorthand for ‘positive or acceptable relationships with the people affected by crisis’, so the term ‘evidence’ risks becoming a generic ‘good thing’ to put in project proposals, related to any use of any form of information. But evidence, as we have argued elsewhere, is a very specific form of information, and is of central importance in answering certain types of question – particularly ‘is there a need for intervention?’ and ‘what type of intervention will be most effective in these circumstances?’ Evidence is much less valuable when deciding whether or not to go to an inter-agency meeting, or where to take donors on a field trip. When we look at the decisions that people were actually taking, those which would really benefit from evidential information formed around one quarter of the total – which may explain why there was no correlation with quality across the whole set of decisions.

This point also illustrates a second interesting reflection – that we really don’t understand the nature or content of operational decision-making. Our decision-making study was titled ‘Beyond Assumptions’ to emphasise that many of the things we may think about decision-making are belief, not fact. Leaders in-country make decisions about much more than where to respond and how. This reality also raises questions about where these decisions are made and about the degree to which programme/country leads still hold power over response design. While outside the scope of this research, it would be interesting to ask to what degree HQ-level decision-making is evidence-based, particularly as recent work from ALNAP and Groupe URD suggests the evidence for this is quite thin.

The third idea that I draw from the research is that information is not, by itself, a good thing. Information can be central to making certain types of decisions well, but only where it is managed correctly, and used as part of an effective decision-making process. ALNAP has considered issues of – and problems with – humanitarian information management elsewhere. What the decision-making research showed was that there is often room for improvement in how information is integrated into the decision-making process. The ‘ideal’ analytical decision aims to use information to identify the single best option from among a set of possible approaches. In order for this to work, three conditions need to be met:

  • Firstly, there should be a diverse set of options covering a range of actions (not just an either/or choice between two well-known, ‘standard’ options).
  • Secondly, those involved in the decision need to have a clear and agreed value by which they decide which option is ‘best’: Cheapest? Most people reached? Most vulnerable people reached?
  • Finally, the decision-maker needs information that allows a systematic and comparative evaluation of the various options against this value, which often means that available secondary information, or word of mouth opinion, is not sufficient; a specific information set needs to be designed, collected and analysed.

The research suggested that, in many cases, decision-makers were failing to create a diverse range of options, to clarify the criteria of value on which the decision was based, or to ensure that the available information allowed a full and useful comparison. Under any of these circumstances, where the decision process is flawed, the use of information cannot by itself be expected to lead to good decisions.
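For readers who find it easier to see this structure than to read it, here is a minimal sketch of the ‘ideal’ analytical decision in Python. Everything in it – the option names, the cost and reach figures, and the choice of ‘people reached per dollar’ as the value criterion – is invented purely for illustration, not drawn from the study; the point is only that the process needs a diverse option set, one agreed criterion, and comparable information on every option.

```python
# Illustrative sketch only: all option names, figures and the value
# criterion below are invented, not taken from the ALNAP study.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    cost_usd: float       # information collected for this option
    people_reached: int   # information collected for this option

# Condition 1: a diverse set of options, not just an either/or choice.
options = [
    Option("cash transfers", cost_usd=100_000, people_reached=4_000),
    Option("in-kind distribution", cost_usd=150_000, people_reached=5_000),
    Option("voucher programme", cost_usd=120_000, people_reached=4_500),
]

# Condition 2: a single, agreed value by which 'best' is judged.
def value(option: Option) -> float:
    """Hypothetical criterion: people reached per dollar spent."""
    return option.people_reached / option.cost_usd

# Condition 3: information that lets every option be compared
# systematically on the same criterion.
best = max(options, key=value)
print(f"Best option on the agreed criterion: {best.name}")
```

The final comparison only means something if all three conditions hold: drop an option, contest the criterion, or lose the figures for one row, and the ‘best’ answer it produces is no longer a decision, just arithmetic.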

The fourth reflection inspired by the research is on the social nature of humanitarian information. When decision-makers looked for information to use in decisions, they tended to look to their colleagues or peers, rather than seeking out research, evaluations or lessons papers. This has important implications for anyone hoping to improve the evidence base of humanitarian action.

Photo credit: DG ECHO

If we want busy colleagues to access research results, we must first ‘socialise’ this research – make it part of humanitarian professionals’ everyday consciousness: the sort of thing that people will talk about, or volunteer if asked for advice. This means going beyond beautiful reports and websites to create opportunities to discuss findings with humanitarians on the ground (as ALNAP is doing with the results of this research). It might also mean a greater focus on integrating research evidence into standards (as Sphere did in its last revision) and SOPs: in over half of the 1,035 decisions that formed the basis of this study, the decision-maker referred to organisational procedures in the decision-making process.

And finally – a problem. One area of weakness that the study identified in humanitarian decision-making as a whole relates to new and unprecedented events. Both personal experience and evidence from past responses are effective in supporting decisions to the degree that they are relevant to the decision in hand. But in a genuinely new situation, approaches based on long experience or on evaluative and research evidence break down, because the past ceases to be a guide to the present. This matters because economic and political instability, as well as climate change and environmental degradation, threaten the humanitarian community with new types of crisis, in which we will not be able simply to replicate past responses. In these circumstances, new forms of decision-making and information management will be required: approaches which are rapid, flexible and iterative. We didn’t find examples of this in the research, and we don’t know what these approaches will look like, or how to make them work – so if you do, please let us know!

Key findings from ALNAP’s ‘Beyond Assumptions’ study have been packaged for different audiences: an interactive quiz and flow-chart for in-country decision-makers, and a policy brief for HQ leadership. Several of the resources have been translated into Arabic, Bahasa, Bangla, French, Spanish and Ukrainian.