Why I won't use generative AI and LLMs in fundraising

I do not and I will not use generative AI in my writing or research for charities. It’s as simple as that.

If you’re interested or shocked, please read on. I could talk about this all day.

My principles

I consume and produce thoughtfully in my personal life. I am all too aware of my ecological footprint, living in the Global North. I buy second-hand wherever I can (as my kids will attest – many a birthday gift is from eBay), I make and create, and I support small businesses wherever possible. I repair, reuse, and repurpose. As far as is possible, I avoid shopping with companies that exploit their staff or our planet. I think about consumption, and about where my ‘stuff’ comes from. I have social, environmental and ethical principles that I try not to compromise.

So why would I behave differently in my professional life?

While acknowledging a gut-feeling mistrust of autonomous machines (which may be related to having seen both The Outer Limits S2E5 and The Matrix in the 90s), I have read widely in order to shape the following principles, which guide my decision-making:

I do not steal

Plagiarism is not an option. I will not use a machine that has scraped published writing without payment or credit to the writers, whose work is now being turned into statistical probabilities to churn out the average of all possible writing. I will not pass off work I did not write as my own.

I don’t do cookie-cutter writing

My clients deserve better: bids which sound like them, and which contain life, meaning, and understanding of their work. Funders don’t want to read another formulaic bid that sounds like all the others. They want to know why they should invest in you and your work, not what a machine has calculated is the average next word. I have no interest in mass-producing pages of writing. Most people can tell the difference when they are reading lots of documents, and funders are telling us they are frustrated by reading multiple copies of the same paragraph. Funders respond more favourably when applicants show they have taken the time to understand, and to write personally, as different and individual as they are. Quality should always win over quantity.

I don’t spam people

We know that generic round-robin letters do not get good results from Trusts, but the increasing use of generative AI is flooding funders with more of these impersonal, irrelevant applications. In response, we are already seeing funders narrowing their scope, funding only previously-funded organisations, or closing to unsolicited applications altogether. We should be building personal relationships and showing funders that we value them and their assessors. The alternative is a funding landscape closed to those without contacts.

I don’t waste time

Researching Trusts and Foundations that are relevant to my clients is a specialist job. It requires me to dig down into the small print, to find several sources of information and unpick anything that can help me understand whether applying to this funder is worth the time investment – and, crucially, how to connect with their decision-maker and help them think “yes, this organisation gets what we’re trying to achieve”. Over time, I have learned which funders do what they say they do, and which have an ‘open’ funding portal but apparently no available funds. Spending time sifting out poor matches and prioritising funders that are most likely to invest in my clients makes me more efficient and yields better results.

I don’t do blind acceptance

I acknowledge my privilege in having been to university to think about how to think; to understand, challenge, and synthesise my own thoughts on issues, and to do so in two languages; to research, compare, and create something, by first understanding a question and its context. I think it’s risky to place our faith in a model built on endorsement of all the bias already baked into society. Working in the social impact sector, we should always be pushing to challenge our own, and society’s, learned biases, to make change possible. How could we hope to make society better if we stop evolving our own thinking, and allow models regurgitating outdated language to choose our words for us?

Why it matters

I don’t often struggle to write, and I understand some people find it far harder than I do to write in English. However, I don’t think that generative AI is the force for good that its promoters want us to believe. It isn’t democratising writing. It is encouraging people to doubt their abilities, and to believe that a machine can do it ‘better’. It is another Industrial Revolution, reducing humans to mere operatives of machines, and increasing the relentless pressure for more and faster outputs. But you can only ever have two of Quick, Good, or Cheap, never all three, and having untangled a few clients’ attempts at getting an LLM to create a funding application, I don’t think generative AI can give you Good.

I’m talking here about the objections I have to the use of generative AI specifically in Trusts and Foundations research and bidwriting, where I spend most of my time these days. I have many other concerns, not least the environmental implications of the immense amount of energy required to power AI tools, the security and privacy risks, and concerns about how much more easily information can be controlled when it is highly centralised.

I have strong feelings about the intentional destruction of critical thinking at a time when critique and challenge feel more necessary than ever. Sometimes it’s almost enough to make me go and live in the woods whittling spoons.

Given all the above, I hope any human still reading this will understand why my writing will always be 100% human, too.

Ally Rea