
Today, I am excited to share with you a recent exchange I've had with Till Trojer, PhD, AI Officer at EPER - Entraide Protestante Suisse. He shares insights on building AI literacy in NGOs, navigating data security challenges with vulnerable populations, and why "slow and steady wins the race" when it comes to responsible AI adoption.
Till is a strategist and anthropologist who brings over a decade of international experience to the intersection of AI, ethics, and humanitarian action. Trained as a social anthropologist at SOAS (London), with extensive fieldwork across sub-Saharan Africa, Till applies an ethnographic mindset to AI strategy, emphasizing participatory, contextual approaches that remain deeply attuned to local realities. In his current role, he develops organization-wide AI strategy, builds partnerships with universities and NGOs, and leads the ethical integration of AI into humanitarian processes, from governance frameworks to practical tools like multilingual chatbots for asylum seekers. His work is grounded in a fundamental belief that technology must serve people, not the other way around; he approaches digital transformation with humility, ethics, and collaboration at its core.
Hi Till, thank you so much for joining this exchange on AI in your NGO work. First, tell me about yourself: how did you come into this field, and what exactly is your role? I'm curious about your trajectory, how you landed in AI for good, and what you do now for HEKS specifically.
My current position is AI Officer for HEKS, one of the larger Swiss NPOs working both in the global South and in Switzerland. This new position sits within the ICT department, but I'm responsible for developing an organization-wide AI strategy, building learning structures for AI, fostering partnerships, and testing AI solutions with our implementation partners.
HEKS created this role because staff immersed in day-to-day operations lack the bandwidth to explore AI developments thoroughly. With everyone experimenting with LLMs at varying levels of expertise, having a dedicated coordinator became essential. We're seeing similar AI Officer or Chief AI Officer roles emerging across larger organizations, focused on strategic responsibilities for partnerships and innovation.
Exciting, and great to see such a development of AI roles. What are the requirements to become an AI officer in an organization like HEKS? Given that it’s a new role, I am curious about what you think is important or what you bring to the role, without turning this into a job interview.
I was already working at HEKS within our global cooperation as part of the MEAL (monitoring and evaluation) team for our projects across 30+ countries. Initially, I served as AI advisor for the MEAL team, exploring how AI could improve our monitoring processes. After six months, we recognized AI needed to be integrated into our broader digital transformation strategy.
In an organization like HEKS, technical knowledge isn't as important as other skills. I'm an anthropologist by training with over 10 years of experience across the African continent. My work involved participatory research, community engagement, and critical examination of power dynamics, collaboration methods, and research ethics.
This anthropological foundation aligns perfectly with the humanitarian and development sector's needs. HEKS emphasizes a localization approach. This means working with partners on the ground and ensuring projects address people's actual needs. My field experience in Ethiopia, Liberia, and South Africa provided valuable insights into the sector's realities and the importance of understanding our partners' day-to-day work.
These same considerations apply directly to AI solutions: How do we collect and process data? Who controls the data management lifecycle? Who has access? Who should we partner with?
I've seen promising localized AI initiatives emerge from countries we work with. For instance, Ethiopia has a research center for artificial intelligence, and innovative solutions are coming from Bangladesh and Ukraine. Forging these partnerships aligns with our localization philosophy.
To complement my anthropological background, I completed certified courses in AI strategy and ethics. When LLMs became publicly available, I recognized how they would transform our relationship with technology and impact international development. This coincided with my son's birth and challenges in my academic career, creating an opportunity for a professional pivot. I've been focused on AI in development for about two and a half years now.
Thanks for sharing how your anthropological background connects to this work. It's fascinating to hear your social science approach to technology.
When working with vulnerable populations, many organizations worry about sensitive data in the AI context. Even traditional data gathering involves power dynamics, but with data potentially feeding into language models, risks intensify. How do you address this in your strategy development, and how do you involve your partners on AI ethics?
Data security and ethics are foundational to everything we do. Many don't realize that AI is already embedded in tools we've used for years, like KoboToolbox for field data collection.
We first need to assess our existing toolset. Industry-standard tools like Kobo, Power BI, and Tableau already incorporate AI and meet our needs 98% of the time, so there's no need to reinvent the wheel. When we do explore new tools for specific needs, that's when deeper questions of data security and ethics become critical.
"Ethics" itself requires clarification. We must ask: whose ethics are we applying? Is it just our organizational perspective, or are we considering the ethical frameworks of the communities we serve?
Additional questions emerge about resource consumption: many AI tools have significant energy footprints compared to alternatives. We need concrete security and ethical considerations at every project stage and for every tool deployment.
Do you have specific AI use guidelines or policies with your partners? What are the red flags, what's permissible when following guidelines, and what's recommended? Have you developed formal guidelines?
We adhere to humanitarian principles and the digital do-no-harm principle. We must follow all these existing guidelines.
And they now apply to AI as well?
AI is integrated into many existing tools we already use. We're reviewing our guidelines to determine necessary updates and developing new governance frameworks specific to AI tools.
A concrete example: Six months ago, we engaged with numerous AI solution providers targeting the humanitarian sector, part of the big "AI for social good" and "AI for impact" wave. Our organization initially feared missing opportunities if we didn't adopt these new tools, such as avatar interview platforms.
I admit I was initially swayed by compelling marketing and CEOs promising all-in-one solutions. However, we implemented a rigorous evaluation process: inviting presentations, examining backends, and conducting internal pilots in risk-free environments before any field deployment of those fancy AI tools. This thorough assessment proved crucial.
We discovered significant gaps between promises and reality: many tools failed to meet our standards. Red flags emerged around data storage locations, server security, and encryption methods. Our primary concern remains data security: where information is stored, who has access, and which third parties are involved.
Our organization cannot risk data breaches, which would be catastrophic. Recent headlines demonstrate the consequences of such failures. The key is balancing innovation with robust security; these aren't mutually exclusive. Strong ethical guardrails can coexist with innovation adoption. Our approach is deliberate: "slow and steady wins the race." We must participate in AI advancement without panic, exercising appropriate caution.
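To make that evaluation process more tangible, the kind of due-diligence screen Till describes could be captured as a simple checklist. The sketch below draws its criteria from the red flags he mentions (data residency, encryption, third-party access, backend review, sandbox piloting); the field names and structure are illustrative assumptions, not HEKS's actual framework.

```python
# Illustrative vendor due-diligence checklist (field names are assumptions for
# this sketch, not HEKS's actual evaluation framework).
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    name: str
    data_in_approved_region: bool      # e.g. data stored on EU/Swiss servers
    strong_encryption: bool            # at rest and in transit
    third_parties_disclosed: bool      # all subprocessors named in the contract
    backend_reviewed: bool             # the actual backend was examined, not just slides
    piloted_in_sandbox: bool           # tested internally with non-sensitive data first

    def red_flags(self) -> list[str]:
        checks = {
            "data residency unclear or outside approved regions": self.data_in_approved_region,
            "weak or undocumented encryption": self.strong_encryption,
            "undisclosed third-party subprocessors": self.third_parties_disclosed,
            "backend never reviewed": self.backend_reviewed,
            "no risk-free internal pilot": self.piloted_in_sandbox,
        }
        return [flag for flag, passed in checks.items() if not passed]

# Example: a polished demo that fails basic checks should stay out of the field.
tool = VendorAssessment("avatar-interview-platform", False, True, False, False, False)
print(tool.red_flags())  # a non-empty list means: do not deploy with participants
```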
Thanks for sharing that experience, testing and evaluating tools against clear criteria is an approach others in the field should follow. With all the hype and new tools emerging, where do you see the most promising applications for organizations like yours?
Chatbots offer the greatest immediate potential. They're already mainstream across businesses and organizations. We've all grown accustomed to chatbot interactions on websites and service lines. They've been standard for at least five years.
We should be providing 24/7 chatbot access for our project participants. We're currently testing Turn.io, a WhatsApp-based chatbot that's been on the market since 2017. UN organizations deployed it following the Ukraine conflict to deliver critical information about the situation, conflict zones, border-crossing rights, and other essential details.
Consider asylum seekers arriving in Switzerland. They face complex asylum laws and a system with limited resources for answering even basic questions: where to find forms, where to take a sick child without proper documentation, and so on. A 24/7 chatbot requires no data collection from participants; it simply shares controlled information in multiple languages while reducing pressure on asylum centers and their employees.
The multilingual capability is crucial beyond just constant availability. Implementation isn't complicated, and successful precedents exist. We're in a month-long pilot phase before attempting rollout to country offices. This technology could potentially reach hundreds of thousands of people.
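As an illustration of the pattern described here, a controlled-information chatbot can be as simple as a lookup over vetted, pre-translated answers, with nothing about the user stored. The webhook sketch below is a generic Python/Flask illustration; the route, payload fields, and FAQ entries are assumptions made for this example and do not reflect Turn.io's actual API or the content of HEKS's pilot.

```python
# Minimal sketch of a controlled-information, multilingual FAQ bot.
# Payload shape and FAQ content are illustrative assumptions; a real WhatsApp
# deployment (e.g. via Turn.io) would use the provider's own message format.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Vetted answers only: the bot never generates free text and never stores messages.
FAQ = {
    "forms": {
        "en": "Asylum application forms are available at your registration centre's front desk.",
        "fr": "Les formulaires de demande d'asile sont disponibles à l'accueil de votre centre.",
    },
    "sick_child": {
        "en": "For a sick child, go to the nearest emergency service; urgent care does not require documents.",
        "fr": "Pour un enfant malade, rendez-vous aux urgences les plus proches; les soins urgents ne nécessitent aucun document.",
    },
}

@app.post("/webhook")
def answer():
    msg = request.get_json(silent=True) or {}   # e.g. {"topic": "forms", "lang": "fr"}
    entry = FAQ.get(msg.get("topic", ""), {})
    reply = entry.get(msg.get("lang", "en"),
                      entry.get("en", "Sorry, there is no information on this topic yet."))
    return jsonify({"reply": reply})            # nothing about the user is logged or stored
```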
I agree completely. The ideal AI tool should be user-friendly, proven, and functional in low-connectivity and multilingual environments. WhatsApp is particularly effective because it's already on most mobile phones alongside Facebook.
You mentioned earlier that funding opportunities for AI impact projects are increasing, with various funders supporting AI initiatives for grantees. Could you elaborate on the evolution you've observed, from virtually nothing to AI becoming much more common in funding calls?
Our AI journey began with exploring ready-made LLMs as potential solutions, but we pivoted toward partnerships with universities and researchers.
We've identified specific funding calls for humanitarian and development action that focus on technological innovation, including AI applications. Success often depends on effective framing within grant proposals.
University and research collaborations typically yield smaller initial grants, ideal for organizations like ours to build a foundation. The strategy is progressive: start small, develop a diverse AI innovation portfolio over 1-2 years, then pursue larger funding opportunities. Alternatively, when securing a project-specific grant, look for ways to scale that solution across the organization. This approach is particularly relevant in the international humanitarian sector.
In Switzerland specifically, philanthropic foundations are increasingly prioritizing AI involvement, partly driven by FOMO (fear of missing out), adding it to their upcoming funding agendas. Universities are hosting events like "hack for social good" hackathons. The "AI for good" discourse is generating momentum, creating funding pockets that, while not massive, signal positive change. We're witnessing greater recognition from funding organizations through our connections with other NGOs in AI-focused webinar series.
Regarding your colleagues: as one person, you can't manage every AI initiative. What approach is HEKS taking to develop basic AI literacy, training, and requirements? How do you ensure staff using AI tools understand the risks and best practices?
Just recently, during a workshop at one of our regional offices, I observed the stark variation in our team's AI familiarity. Some colleagues have used ChatGPT since its launch and are already exploring prompt engineering techniques. Others have never engaged with AI tools, citing time constraints, knowledge gaps, and general reluctance.
The skill disparity within our Swiss offices is significant, but globally, the digital divide is even more pronounced. Interestingly, some country offices, particularly in Ukraine and Ethiopia, are actually pioneering innovations that surpass our headquarters' capabilities.
We're collaborating with our Digital Board to develop organization-wide e-learning modules establishing baseline AI competency. In my workshops, I emphasize that "AI" encompasses diverse technologies; we're primarily experiencing large language models (LLMs) at the moment, but the field is much broader. Building awareness of AI's existing integration in everyday tools, from facial recognition to recommendation algorithms, provides a practical foundation.
External networking with peer organizations, researchers, and specialists has proven invaluable. Since April, we've launched a webinar series focusing on tools relevant to our Microsoft-based infrastructure, like Copilot, but also covering the broader implications of AI for humans and society. Recognizing time constraints, we're aiming to create short instructional videos rather than extensive workshops, as staff need efficient ways to learn these tools amid their daily responsibilities.
Our approach is tiered: establish fundamental literacy across the organization, then develop specialized training for specific teams. Legal staff working with asylum seekers, translators, and field teams in places like Ukraine all have distinct needs. The essential element is providing that foundational understanding as an organizational responsibility.
So far we've focused on HEKS specifically. What would you recommend to others beginning their AI journey? For organizations without a dedicated AI officer, what key advice would you offer, considering different organizational sizes?
For larger organizations, establishing a dedicated focal point for AI coordination is valuable. This approach addresses two critical areas: internal capacity building to enhance efficiency, and external application in projects to benefit participants. Ideally, create an AI officer position or form an AI board with diverse perspectives (representatives from ICT, compliance, data security, and field operations) to address governance, regulations, and innovation holistically.
Smaller NGOs without resources for dedicated positions should leverage networks and external expertise. I've personally provided AI strategy consultations for smaller Swiss organizations. Joining industry webinars and ensuring at least one team member has allocated time for AI exploration is crucial.
After establishing your foundation, focus on testing and piloting. Evaluate tools, build relationships with providers, and create risk-free testing environments such as sandboxes where no sensitive data is used. Stay engaged with developments, assess which solutions fit your needs, and consider whether to adopt existing tools or develop custom solutions through research partnerships. The key is experimentation in controlled environments and embracing the learning process, including failures, as valuable experience.
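As one concrete way to keep such a sandbox free of sensitive data, records can be stripped of identifying fields before any experiment with an external tool. The short Python sketch below is purely illustrative; the field names and the single regex rule are assumptions, not a complete anonymisation pipeline, and any real redaction step needs a careful review of what counts as identifying in context.

```python
# Minimal sketch: strip identifying fields before records enter an AI sandbox.
# Field names and patterns are illustrative; real anonymisation requires a
# context-specific review of what counts as identifying information.
import re

DROP_FIELDS = {"name", "phone", "case_id", "address"}
PHONE_PATTERN = re.compile(r"\+?\d[\d\s\-]{7,}\d")  # rough match for phone-like strings

def sanitise(record: dict) -> dict:
    clean = {k: v for k, v in record.items() if k not in DROP_FIELDS}
    if "notes" in clean:  # also mask phone numbers buried in free-text notes
        clean["notes"] = PHONE_PATTERN.sub("[REDACTED]", clean["notes"])
    return clean

record = {
    "name": "A. Example",
    "phone": "+41 79 000 00 00",
    "notes": "Needs an interpreter; callback at +41 79 000 00 00.",
    "language": "Tigrinya",
}
print(sanitise(record))  # only non-identifying fields reach the sandbox
```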