Commentary

Do humanitarians have a moral duty to use AI to reduce human suffering? Four key tensions to untangle

Considering the rise in humanitarian need and the massive deficit in funding, the idea that Artificial Intelligence (AI) could help humanitarian organisations to reach more people with fewer resources is attractive. Some consider that humanitarians have a moral duty to use AI to reduce human suffering. Others have deep concerns about the use of AI in humanitarian contexts, including how it might work against participation and localisation.

On May 15-17 2024, I attended the Foreign, Commonwealth and Development Office’s Wilton Park event on AI in humanitarian action where a group of experts debated critical aspects of AI in the humanitarian space. I came away thinking about some key tensions that need further untangling:

  1. What are we talking about when we talk about AI? AI is many things and has as many uses as there are people using it. Some kinds of AI and machine learning have been around for years, whereas Generative AI (applications such as ChatGPT, which generate new content from existing data sets) is rapidly evolving. When discussing the risks and benefits of AI, we need to be clear about what kind of AI is being used and for what purpose, because the benefits and risks of more traditional AI are not the same as those of Generative AI.
  2. Is AI fundamentally anti-localisation? While some of the biggest gains from AI are in back-office efficiencies, AI systems tend towards centralisation and flattening rather than decentralisation and nuance. Big Tech’s extractive business models draw benefits and profits upwards to themselves. Does this mean that AI is anti-localisation? How can humanitarians better incorporate local data, local talent and local infrastructure? How can we engage more with groups at the national and regional level who are developing Large Language Models in local languages? How can we do more to hear community perspectives on AI?
  3. How will we take advantage of AI if we don’t invest in the basics? Overall, the sector will need investment in several areas to use any kind of AI safely, responsibly and effectively. Some building blocks for effective use of AI include:
    1. AI literacy, tailored to various roles and parts of the humanitarian system. Community members, frontline staff, technical experts, procurement staff, country directors, CEOs, headquarters offices and board members all have different needs in terms of understanding AI and how it is affecting their work.
    2. Getting our own data sets and tech systems in order so that we reduce the ‘garbage in, garbage out’ problems that can plague AI.
    3. Governance policies, risk-management approaches, and sector and agency-specific redlines for various types and uses of AI.
    4. Humanitarian AI Standards, with appropriate operating practices, for different parts of the humanitarian system.
    5. Digital public goods, safe open models and better data sets in more languages. Transparency around deals between humanitarian agencies and Big Tech companies, especially in terms of intellectual property (IP), data privacy, consent, data ownership and benefits.
    6. Assessment and evaluation standards for the various parts of AI systems that will be used in humanitarian assistance, agreed-upon audit processes, seals of approval for AI systems that are safe, and procurement standards.
    7. The Humanitarian AI Revisited report gives a great overview of these issues.
  4. How can we better collaborate and learn together? We don’t yet know the real potential of different kinds of AI in the humanitarian context or the actual risks that are playing out. Better collaboration and learning could help the sector identify emerging good practices and harms. The CDAC Network, MERL Tech, ALNAP and NetHope are all facilitating communities of practice and sharing learning with each other.

Here are some questions we could work on together:

  • Does AI create even more crises? Is AI contributing to tension within local communities and between communities and humanitarian organisations? Do the environmental costs of AI computing create more disasters and more climate refugees?
  • What are the real costs and efficiencies of AI? If AI frees up time and budget, is the surplus spent improving human aspects of humanitarian work? Do the costs of ensuring AI safety eat into the efficiency gains?
  • How are we conducting assurance? What are the known risks, unintended consequences and unknown outcomes of different uses of AI? What levels of investment are required to create the tools and infrastructure needed to track and assess the impact of AI as well as to test and maintain a model’s performance?
  • What are we learning from all these pilots? And how can we avoid ‘pilotitis’? A humanitarian AI observatory was suggested as a way to track the state of the space and share related learning, reducing duplication.
  • What do we already know? We have plenty of lessons from earlier ‘tech for development’ cycles: mobile phones and ICT4D, use of technology in constrained environments, the data revolution, open data and open government partnerships, biometrics and blockchain. I hope we don’t have to learn all these lessons all over again.

While I believe humanitarians are called to reduce human suffering, I am not convinced we have a moral obligation to use AI. We don’t yet know how the benefits and harms of AI will play out; there are still far too many unknown unknowns.

A responsible, collaborative, participatory, learning-centred approach to innovation might help us to find an answer to the question of whether AI can truly help us reach more people with quality humanitarian assistance.