‘Charities, step up! AI will change how humans behave’
CAF’s Head of Policy delves into how AI will affect human behaviour itself, and what this will mean for charities.
This article was written by Rhodri Davies, who leads Giving Thought, CAF’s in-house think tank focussing on current and future issues affecting philanthropy and civil society. Rhodri has spent nearly a decade specialising in public policy around philanthropy and the work of charities, and has researched, written and presented on a wide range of topics. He has a first-class degree in Mathematics and Philosophy from the University of Oxford.
Artificial intelligence – or AI for short – is one of those buzzwords that, unless you’ve been living under a rock for some time, you’ll almost certainly have heard a lot about recently.
The growing influence of AI on our lives can’t be ignored, and it’s being seen almost everywhere.
This has even filtered into the world of charity. Parkinson’s UK, for example, has recently launched a project using AI to develop better early diagnoses for the disease, while the Lindbergh Foundation in the US is applying AI to video from drone surveillance in game reserves, to develop algorithms that can predict poacher behaviour.
But as AI adoption grows, we’re becoming increasingly aware that as well as opening up amazing new opportunities, the technology also brings significant risks and may create new challenges for society. Many of these are things that charities will need to deal with in the future, but the sector as a whole is currently notably absent from the debate.
Personally, I find artificial intelligence one of the most promising, fascinating and (at times) terrifying developments in the world today.
Will AI create a post-work utopia or a dystopian nightmare? Can we expect social media filter bubbles and fake news to intensify and further erode democracy? These are questions that are analysed at length in the latest discussion paper from CAF’s Giving Thought think tank, Machine-Made Goods: Charities, Philanthropy and Artificial Intelligence.
But it is not just at a societal level that AI is going to have an impact. We are already seeing signs of the ways in which AI might change the very way we interact with each other. I’ve focused below on three equally important examples: gender attitudes, the development of children and desensitisation.
Gender attitudes
It’s been noted by some that where chatbots and conversational AI assistants have been developed with human characteristics, they are very often, well… female. Given that the relationship between AI assistants and their human users is likely to be one of servitude, there are real dangers if this relationship also carries wider connotations in terms of gender dynamics. Why, one might justifiably ask, would we choose to hard-code old biases into new technology? In this sense, the conventional choice of female voices for AI interfaces makes an existing human problem even worse.
The development of children
As conversational AI interfaces become ubiquitous in our homes, children are increasingly interacting with them – including during formative stages of speech and social development. Could the ways in which we converse with these AI interfaces fundamentally alter how we learn to speak and behave? For instance, will interacting from a very young age with a voice-operated assistant that is required to do our bidding lead children to expect the same in human interactions, and to speak only in commands?
How we interact with AI systems could affect more than just the ways in which we speak. There’s already evidence that prolonged interaction with robots can lead children to develop anti-social or abusive tendencies. This could place an immense strain on children’s charities, with significant sums needing to be invested in therapy programmes and similar initiatives.
Desensitisation through distance
Some researchers have raised concerns that if we’re able to automate our interactions or to outsource responsibility for decision-making to algorithmic processes, this will result in desensitisation and a lack of moral responsibility for our actions.
Even if we set the rules for these processes, the fact that we don’t make the decisions first-hand may lead humans to be more flippant. This could lead to existing societal problems becoming more acute – for instance, if an algorithm decides that changes to a family’s financial arrangements mean they can no longer pay their rent, and there is no human to mediate the decision, then will they simply be evicted onto the street? In this example, as in many others, charities would have to pick up the pieces and may face a disproportionate burden.
Despite these worrying examples, there is a real opportunity to harness AI to deliver social and environmental good. But to ensure this positive outcome, civil society around the world must engage with policymakers and the tech industry in shaping the direction of AI development.
This involvement has to be thorough and meaningful and it must start now.
The future of AI will be what we choose to make it: whilst it’s a long road ahead, we still have plenty of time to avoid The Terminator’s Skynet-style dystopia.