How Cancer Research UK is exploring AI and voice tech
Rob Leyland, Innovation Manager at Cancer Research UK, gave us a behind-the-scenes insight into the charity’s work on chatbots and voice-enabled tech to inform and educate.
The technology innovation team at Cancer Research UK (CRUK) is dedicated to looking at how future technology trends could help drive engagement with the public and further the charity’s goal of saving lives through research.
We sat down with Rob Leyland, Innovation Manager, to talk about some of the charity’s exciting trials and experiments with emerging technology.
Charity Digital News: What technology have you been focusing on over the last 12 months?
Rob Leyland: In the past year there’s been a focus on conversational artificial intelligence (AI) – things like chatbots, voice-enabled devices, and natural language processing – looking at how those could be useful and where the impact lies.
We trialled a chatbot on our fundraising pages to help answer the most common questions people had. We’ve also been testing chatbots internally – a lot of our internal teams spend quite a bit of time answering the same questions around certain processes again and again.
Chatbots could free up that time: people get an instant answer, and those working in the teams can focus on more complex work that really needs a human being, instead of having their flow of work interrupted. So it could have a significant effect on productivity.
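The kind of internal FAQ bot Leyland describes can be surprisingly simple at its core. As an illustration (not CRUK’s actual implementation), here is a minimal sketch that matches an incoming question against a small knowledge base by word overlap; the questions and answers are invented:

```python
# Minimal sketch of an internal FAQ chatbot: match an incoming
# question against a small knowledge base by word overlap.
# The questions and answers below are invented for illustration.

FAQ = {
    "how do I claim expenses": "Submit the expenses form to the finance team.",
    "where is the fundraising pack": "Download it from the intranet resources page.",
    "how do I book annual leave": "Request leave through the HR portal.",
}

def answer(question: str) -> str:
    """Return the stored answer whose question shares the most words."""
    words = set(question.lower().split())
    best, overlap = None, 0
    for q, a in FAQ.items():
        score = len(words & set(q.lower().split()))
        if score > overlap:
            best, overlap = a, score
    return best or "Sorry, I don't know - try asking the team directly."

print(answer("How do I claim my expenses?"))
# -> Submit the expenses form to the finance team.
```

A production chatbot would use proper natural language understanding rather than word overlap, but the pattern – route the common questions automatically, hand the rest to a person – is the same.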
CDN: What other tech trends are on your radar?
RL: As well as the prevalence of voice assistants on mobile phones, we’ve seen the rapid rise of voice-enabled smart speakers such as Google Home and Amazon Echo (Alexa). We’re really keen to understand how this technology will help us interact with people in new ways and to see if voice can provide services that we couldn’t elsewhere.
CDN: How far along are you with experimenting with voice-enabled tech?
RL: So far we’ve launched two new skills on Amazon Alexa to test out if voice can provide services that we couldn’t elsewhere, and understand more about the tech.
One of those is a ‘flash briefing’ skill – a news update people can ask Alexa for, built on content from our science blog, which covers our research breakthroughs and updates as well as general cancer news, and already has a really engaged readership.
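Flash briefing skills like this one are driven by a content feed that Alexa reads out. As a sketch, the field names below follow Amazon’s documented flash briefing feed format; the item content and URL are invented placeholders, not CRUK’s actual feed:

```python
import json
from datetime import datetime, timezone

# Sketch of the JSON feed an Alexa flash briefing skill reads from.
# Field names follow Amazon's flash briefing feed format; the item
# content and URL here are invented placeholders.

def feed_item(uid: str, title: str, text: str, url: str) -> dict:
    return {
        "uid": uid,
        "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": title,
        "mainText": text,  # Alexa reads this text aloud
        "redirectionUrl": url,
    }

item = feed_item(
    "urn:example:science-blog:1",
    "Latest research update",
    "A short summary of a recent research story from the science blog.",
    "https://example.org/science-blog",
)
print(json.dumps(item, indent=2))
```

The appeal of the format is that a charity can reuse existing editorial content: each blog post becomes one feed item, with no custom voice interaction design needed.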
The other skill we’ve released so far is to let people track the amount of alcohol they consume, which is very much around the prevention strand of our research strategy. We want to help people understand what increases their risk of cancer and provide them with tools to reduce that risk and give them health information.
A lot of people have smart-enabled speakers in their kitchen so it felt like it could be a useful place to have this skill.
After testing that out with a small group of people we found that it answered a lot of the questions we had, so we developed a releasable skill for the store with more fleshed-out features. People could set a goal for the maximum they wanted to drink in the week, add drinks as they went along, and get an update on how they were doing against that goal.
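The goal-and-progress loop Leyland describes can be sketched in a few lines. This is a hypothetical illustration of that logic, not the skill’s actual code; the default of 14 units reflects the UK weekly guideline mentioned later, and the per-drink unit values in the comments are rough examples:

```python
# Sketch of the core tracking logic behind a voice drink tracker:
# set a weekly goal, log drinks in units, report progress.
# Illustrative only - a real skill would use vetted health data.

class DrinkTracker:
    def __init__(self, weekly_goal_units: float = 14.0):
        # 14 units is the UK weekly guideline, used as the default goal
        self.goal = weekly_goal_units
        self.logged = 0.0

    def add_drink(self, units: float) -> None:
        self.logged += units

    def status(self) -> str:
        remaining = self.goal - self.logged
        if remaining >= 0:
            return (f"You've had {self.logged:g} of your {self.goal:g} units; "
                    f"{remaining:g} left this week.")
        return f"You're {-remaining:g} units over your weekly goal of {self.goal:g}."

tracker = DrinkTracker()
tracker.add_drink(2.0)   # roughly a pint of regular-strength beer
tracker.add_drink(3.0)   # roughly a large glass of wine
print(tracker.status())
# -> You've had 5 of your 14 units; 9 left this week.
```

In a real Alexa skill, each of these methods would sit behind an intent handler (“add a drink”, “how am I doing”), with the tracker state persisted between sessions.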
CDN: What was the process of creating the Alexa alcohol tracker skill?
RL: It started out as a design sprint, where we spent a week looking at how voice technology could reduce people’s risk of cancer by influencing and aiding behaviour change. In a week we went from that challenge to developing a fairly basic skill – it couldn’t do everything, but it could present a realistic experience for that particular test script. What we wanted to check was that it was useful and usable, and that people were happy to conduct that sort of activity on a voice device.
It was vital to success that we paired our tech team with subject matter experts in health information, to get the advice we were giving right and to introduce features such as offering the UK guideline weekly limit as a default goal.
CDN: Have you met your goals so far with the project?
RL: In terms of coverage, the skill was pretty successful – over 30,000 people watched a video of it being used in our Stevenage store, where we took the Echo and tested it out with users but also filmed it to show people how to use it. That was great for spreading awareness of the link between consuming alcohol and cancer risk, which still isn’t widely known, so that was definitely one of the goals met.
As yet we’ve not seen that interest and coverage translate to the same sort of scale in terms of people actually using the skill on their own Echo. And so with voice we are taking a cautious approach, using what we’ve launched as a benchmark to understand how many people are choosing this as a preferred channel.
From the data we’ve got, we are taking a harder look at how people are going to start using these devices, as it feels a lot like people are still in an accustomisation and normalisation phase with them.
CDN: What else is in the pipeline with voice-enabled tech?
RL: Amazon launched charity donations through Alexa in the US earlier this year with a significant number of charities, with plans to roll it out in the UK soon. We want to see how that’s going in the US first as it’s a more mature market, but it’s a potentially very exciting channel to use.
We’ve also been reviewing our search and data strategies to see how we can make the information we provide electronically more easily understandable by voice assistants. These assistants often use snippets from search engine results and their own knowledge bases to provide answers to questions that people ask directly.
We’ve found that asking general questions is the top use people have for voice devices, so it’s an important area to get right if we want to have the same sort of presence for voice queries as we do on the internet.
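One common way to make Q&A content easier for search engines and voice assistants to surface as direct answers is schema.org structured data. As a hedged sketch of that approach (the question and answer text below are invented, not CRUK’s content), here is FAQPage markup generated as JSON-LD:

```python
import json

# Sketch: schema.org FAQPage markup is one way to make Q&A content
# easier for search engines and voice assistants to pick up as
# direct answers. The question and answer text are invented examples.

def faq_jsonld(pairs: list) -> str:
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(doc, indent=2)

print(faq_jsonld([
    ("What raises the risk of cancer?",
     "Factors include smoking, alcohol consumption and obesity."),
]))
```

Embedded in a page’s HTML inside a `<script type="application/ld+json">` tag, markup like this gives assistants a machine-readable version of the answer alongside the human-readable one.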