Webinar Recap: How Predators Online Hide in Plain Sight

Over 200 participants joined our latest webinar to learn from Trust and Safety expert David Hunter (VP of Platform Trust & Safety at Resolver, a Kroll business) how to recognise and uncover threat-actor networks online.

"Maintaining a safe user experience requires monitoring the darkest corner of the internet," stated David at the start of his presentation. During the webinar, he took attendees on a journey through one of these dark corners containing content predators curate for consumption. Or as Resolver describe it, "Content of Interest to Predators" (COITP).

What is COITP?

COITP refers to collections of openly accessible content of children. In most cases, the content is not created or uploaded by perpetrators but is found on social media platforms and curated into sharable collections. These collections can include imagery like back-to-school pictures, children at pools, gym practice, etc. What makes this content difficult to detect is that, in isolation, the pictures usually do not pose any risk and look normal to most users, explained David. Despite the obvious privacy violations, platforms often lack guidelines for viewing, engaging with, or curating such collections, making them difficult to moderate.

COITP as Gateway to Abusive Content

Resolver's experience shows that this material can serve as a gateway for individuals with a sexual interest in children to connect and access increasingly abusive content. David shared that these collections often represent only a fraction of larger offender networks operating beneath the surface. Perpetrators within these networks assist each other in discovering new exploitative content, sometimes even curating collections of specific victims.

Nowadays, these networks are also used to exchange tips on creating realistic AI-generated abuse content. By curating collections of a particular child online, perpetrators can train AI models to 'nudify' the images and depict the children in abusive situations. These developments are particularly concerning for children with an established online presence, like the kids of family vloggers or celebrities. Usually, hundreds of their pictures and videos are accessible to the public, which creates ample opportunities for offenders to generate AI child sexual abuse material (CSAM).

Combatting COITP through Network Intelligence

Disrupting these networks requires a holistic approach. "This is not just about looking at individual pieces of content," David elaborated, "it's looking at the whole network so that we can spot problems." By using graph data science, Resolver visualises connections between actors to observe how networks develop and grow. The central actors curating and sharing content are the ones holding the network together, explained David. Once the central actors have been identified through network intelligence, Resolver conducts targeted investigations to take down the entire network.
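The graph-based idea David described can be sketched in miniature: model accounts as nodes and sharing or curation interactions as edges, then rank accounts by how connected they are to surface the central actors. The sketch below is purely illustrative — the account names, edges, and the use of degree counts as a centrality proxy are assumptions, not Resolver's actual method.

```python
from collections import Counter

# Toy interaction graph: each pair means one account engaged with a
# collection curated or shared by another. All names are invented.
edges = [
    ("curator_A", "viewer_1"), ("curator_A", "viewer_2"),
    ("curator_A", "viewer_3"), ("curator_A", "curator_B"),
    ("curator_B", "viewer_4"), ("curator_B", "viewer_5"),
    ("viewer_1", "viewer_2"),
]

# Degree as a simple centrality proxy: count each account's connections.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The highest-degree accounts are the candidate hubs holding the
# network together, and thus priorities for targeted investigation.
hubs = [account for account, _ in degree.most_common(2)]
print(hubs)  # → ['curator_A', 'curator_B']
```

In practice, richer measures (betweenness, community detection) would replace raw degree counts, but the principle is the same: removing the hubs fragments the network rather than deleting content piece by piece.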

Policy Recommendations for Platforms

Regardless of whether or not this content is considered illegal, there are measures platforms can and should take to combat the distribution of these collections, David emphasised:

  • Don't review content in isolation; consider the uploader's intent and overall behaviour when moderating content.
  • Instead of simply analysing the 'severity' of content depicting minors, focus on the user collecting the images. Just because the collected images are not illegal does not mean they are not harmful.
  • Forbid the creation of collections in platform community guidelines. Guidelines do not have to be informed by whether or not content is illegal, but should reflect the environment the platform wants to foster for its users.
  • If creating collections is an essential aspect of the user experience, platforms can consider establishing protections that prevent people from sharing collections with other users.



Resolver, a Kroll business, is a real-time risk intelligence company driven to protect people, brands, and assets from reputational damage, security threats, and online harm. The company specialises in providing risk intelligence by training AI technology to detect and mitigate harmful content online.

Learn more and get in touch with Resolver here.

17.04.2024