This Chatbot Aims to Steer People Away From Child Abuse Material


There are huge volumes of child sexual abuse photos and videos online—millions of pieces are removed from the web every year. These illegal images are often found on social media websites, image hosting services, dark web forums, and legal pornography websites. Now a new tool on one of the biggest pornography websites is trying to interrupt people as they search for child sexual abuse material and redirect them to a service where they can get help.

Since March this year, each time someone has searched for a word or phrase that could be related to child sexual abuse material (also known as CSAM) on Pornhub’s UK website, a chatbot has appeared and interrupted their attempted search, asking them whether they want to get help with the behavior they’re showing. During the first 30 days of the system’s trial, users triggered the chatbot 173,904 times.

“The scale of the problem is so huge that we really need to try and prevent it happening in the first place,” says Susie Hargreaves, the chief executive of the Internet Watch Foundation (IWF), a UK-based nonprofit that removes child sexual abuse content from the web. The IWF is one of two organizations that developed the chatbot being used on Pornhub. “We want the results to be that people don’t look for child sexual abuse. They stop and check their own behavior,” Hargreaves says.

The chatbot appears when someone searches Pornhub for any of 28,000 terms that have been identified as potentially linked to people looking for CSAM, and searches can combine those terms into millions of potential keyword variations. The popup, which has been designed by anti-child abuse charity the Lucy Faithfull Foundation alongside the IWF, then asks people a series of questions and explains that what they are searching for may be illegal. The chatbot tells people it is run by the Lucy Faithfull Foundation and says it offers “confidential, nonjudgmental” support. People who click a prompt saying they would like help are offered details of the organization’s website, telephone help line, and email service.
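To illustrate how a fixed list of 28,000 terms can cover millions of keyword combinations, here is a minimal sketch of one plausible trigger check. The actual implementation used by Pornhub, the IWF, and the Lucy Faithfull Foundation has not been published; the term list, normalization step, and function names below are illustrative assumptions only.

```python
# Illustrative sketch only: the real system's matching logic is not public.
# FLAGGED_TERMS stands in for the ~28,000 terms the article describes.
FLAGGED_TERMS = {"example term one", "example term two"}

def normalize(query: str) -> str:
    """Lowercase and collapse whitespace so trivial variations still match."""
    return " ".join(query.lower().split())

def should_show_chatbot(query: str) -> bool:
    """Return True if this search should be interrupted by the chatbot."""
    words = normalize(query).split()
    # Check every contiguous run of words in the query. This is one way a
    # fixed term list can match millions of longer keyword combinations:
    # any query *containing* a flagged phrase triggers the popup.
    for i in range(len(words)):
        for j in range(i + 1, len(words) + 1):
            if " ".join(words[i:j]) in FLAGGED_TERMS:
                return True
    return False
```

Under this assumption, a query only needs to contain a flagged phrase anywhere within it to trigger the popup, which is why the number of distinct triggering searches can vastly exceed the size of the term list itself.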

“We realized this needs to be as simple a user journey as possible,” says Dan Sexton, the chief technology officer at the IWF. Sexton explains that the chatbot has been in development for more than 18 months and that multiple groups were involved in its design. The aim is to “divert” or “disrupt” someone who may be looking for child sexual abuse material, and to do so within just a few clicks.

Whether the system succeeds hinges on the question at the heart of its premise: Does this kind of behavioral nudge stop people from looking for CSAM? The results can be difficult to measure, say those involved with the chatbot project. If someone closes their browser after seeing the chatbot, that could be counted as a success, for example, but it is impossible to know what they did next.
