This is the last in a series of four posts reflecting on a conversation about donor-centred fundraising and growing donor love with Professor Jen Shang PhD, Philanthropic Psychologist, and Co-Founder and Co-Director at the Institute for Sustainable Philanthropy in the UK.
For each blog, I have edited the relevant excerpt from the interview as a starting point, then expanded on it with my own thoughts before offering some questions for review and application within your own organisations. Here is the outline of the four posts:
Part 1 – Moving beyond donor-centred vs community-centric fundraising
Part 2 – Instead of “donor as hero”, why not “donor as fellow human being”?
Part 3 – Isn’t loving your donors just fancy personalisation and segmentation?
Part 4 – Is AI capable of growing love?
While I’ve been studying philanthropic psychology, ways to apply it to fundraising, and the saviourism issue, the use of AI suddenly became a hot talking point. This started in late 2022 and hasn’t let up since. It seemed every second person, whether fundraiser, family or friend, was asking me whether I thought AI was going to replace me as a fundraising copywriter. But although AI might be able to do the mechanics of my work, I seriously questioned whether it could build authentic relationships. I decided to add AI to the list of things to ask Jen about.
INTERVIEW EXCERPT 4
June: Do you think AI is ever going to be capable of applying philanthropic psychology principles to copywriting and fundraising? Or as you put it, is it going to be capable of growing love?
Prof Jen: I think if humans are capable of growing love, then AI would be, but we have to solve the human growing love problem first because AI is following the statistical patterns of human behaviour.
If individuals ever felt that they were making genuine connections with other individuals, the planet and the animals, and it is through an AI agent that they feel they’re making those genuine connections, I think that would be great. The question is, will AI ever get there? I don’t think so.
My understanding of large language modelling – a subset of Generative AI – is that the algorithm itself can reproduce patterns, but it has no understanding.
So it can spit out the most likely sequence from the question sequence you give it, but it has no understanding about either your language sequence or the language sequence that it generates. So from that perspective, strictly speaking, currently there’s no artificial intelligence because there’s no understanding or comprehension by the machine. It’s pure pattern recognition.
So if the only pattern the machine can recognise is love patterns amongst humans, then they can multiply that really quickly. But if all that they get from humans is hatred, depression, negativity, then that’s what they’re going to spit out.
I also think trust is a really, really expensive commodity. To think about an artificial intelligence agent performing at a level that never violates human trust? I mean, if anybody has ever had that experience, I would like to know.
Of course, humans can let us down as well, but there’s genuine love between broken humans that can take us all forward.
As I mentioned earlier, a lot of people ask me if AI is coming for my job. It’s a question that remains on my radar.
After all, many non-profits are already using AI in their fundraising in different ways:
- Data analysis
- Donor profiling and segmentation
- Donor research and prospect generation
- Automation of processing donations
- Admin support and customer service.
From my perspective, it takes a long time to craft a direct mail letter using proven direct response techniques and philanthropic psychology principles. If ChatGPT can knock it up in a matter of seconds, it’s better I know sooner rather than later – and I’ll adjust my business model accordingly (or retire early).
So I decided I’d better test it out. I asked ChatGPT to write a fundraising direct mail letter for a generic overseas development charity. After a few prompts to make it longer and include a story, the bot and I ended up here:
It’s not great – full of jargon and generic language – but to be fair, I hadn’t given it much specific information to work with. However, when I asked for philanthropic psychology principles to be applied, this is what I got:
ChatGPT removed the story and added the yellow highlighted sections.
As you can see, it’s not quite how using a philanthropic psychology approach to fundraising works!
The bot hadn’t yet learned what philanthropic psychology actually is, most likely because there weren’t enough examples for it to use as a starting point.
I’m not suggesting that my having a bit of fun with ChatGPT should be taken as proof that AI won’t be taking over my job anytime soon. AI capabilities are changing rapidly, and the information it can learn from is growing exponentially every day. It’s precisely because of this that I’m interested in exploring ethical questions and different scenarios around the use of AI in fundraising.
In theory, we might say that in the future we’d expect AI to be able to fundraise according to philanthropic psychology principles. The fact that you won’t meet most of your donors in person helps this theory along – because if the majority of your communications with donors are in print or online, then there’s the potential for AI to become better at recognising the patterns which both make donors feel good and get them to give more.
It’s important to bear in mind that applying philanthropic principles well means you need to know your donors intimately. And you need to know the identities which are activated for them when they deal with your organisation.
For example, let’s consider two fictional organisations in the international aid and development sector. When a donor gives to Methodist Worldwide Aid, they may be giving because they identify as part of the Methodist community, whereas when the same donor gives to Save Children Everywhere, they’re doing so because of their identity as someone committed to social justice. So even for two organisations doing similar work, the AI required would need very specific personalised data.
However, philanthropic psychology is more than just learning and applying knowledge. At its core, it’s about growing love.
So for me, this raises a number of issues:
1. The fear is AI will replace us and take our jobs. For this to happen, AI would need to be capable of growing love AND to be better at it than us. And for it to be even a possibility that AI learns to grow love, we would need to get better at growing love first, because AI is just following us. Are humans good enough at growing love for AI to learn from us how to do it?
2. Can AI replace us in live donor events? You might be able to use AI to write your speech, but can a bot replace the authenticity of someone speaking in person? I would argue that even if you aren’t a great speaker, simply by being a real person you strengthen the relationship with your donors.
3. For AI to get better at writing direct mail fundraising, it will need a lot of examples of good direct mail fundraising using philanthropic psychology principles. If I started using AI to generate DM appeals for my clients, how would they feel? Would they be OK with their appeals being used by AI to learn and contribute to future appeals for other organisations?
4. Could you ever send a bot of yourself to speak to a donor one-on-one (like this influencer did with her fans)? What if the donor felt they had a more authentic relationship with the bot than with you? If AI can love humans better than humans can, then we don’t need humans anymore. If AI is able to replace humans’ capacity to have relationships with each other, then as humanity we are pointless.
AI is already being used in fundraising, but so far it’s confined to behind-the-scenes tasks which help grow donor love. AI is not yet building authentic relationships or creating genuine connections with donors. Those heart and soul connections are still driven by humans. Whether AI can grow love is a very different question to whether it can assist with some fundraising tasks.
AI has a big role to play in our fundraising future, but relationships are about people. And I seriously question whether any bot can ever replace that. What do you think?
What now?
Here are some questions to kickstart your thinking in this area:
1. What do you want your donors’ experience of relationship with you to be like? Are you OK if your donors deal with a bot for customer service and donor admin kinds of tasks? How would your donors feel about dealing with a bot instead of a person?
2. How would you feel if an agency (like mine) used ChatGPT to help write your appeals?
3. How would your donors feel if you disclosed that you used ChatGPT to write your communications? How would you feel if other charities used ChatGPT, knowing it may be learning from your appeals?
4. Do you feel you need to disclose to your donors if you’re using AI? Does it depend on what you’re using AI for?