
Understanding the work of Digital First Responders Part 1

This year we focused on the first line of defence in combatting the crisis of illegal content online. Digital first responders are the Trust and Safety operations, content moderation teams and analysts who put out the fire that is child sexual abuse material (CSAM).

This is our fourth annual summit and by far the largest. Four years ago we started with 34 attendees; this year, over 300 people attended. President Jean-Christophe Le Toquin emphasised that INHOPE has made people and technology its key focus areas to drive effectiveness in our efforts to achieve our mission.

On Day One our speakers discussed best practices for fighting CSAM in the tech industry. They left attendees with a question: what can you do for your digital first responders with the information you learned at the summit?

Unaccountability at scale

We started Day One by discussing unaccountability at scale on the internet. Dr. Paul Vixie gave a moving keynote on Tor, WHOIS privacy and why the classical rule of law cannot exist in the information age. Having spent the second half of his career making the internet safer, Dr. Vixie knows how to secure effective recourse from key internet players. “Cooperation against online threats can happen at scale, but we need to find better ways to cooperate.”

People first, understanding the role of analysts & content moderators

This leads us to the work being done at Microsoft by Damien Vaught, who focuses on the wellness and resilience of Trust and Safety teams. He stated, “achieving and maintaining a people-first culture is vital.” This means creating a culture based on wellness, transparency and a foundation of trust. Building culture starts with having critical conversations and asking questions like:

  • Do I know how to support my colleagues?
  • How do we ensure a diverse team?
  • How do you want to be supported?
  • What efforts will you make to be more resilient?

How the technology industry detects harms on platforms in 2021

With the evolving set of risk areas and challenges that Trust and Safety teams face today, we must be cognizant of how the technology industry detects new and unknown harms on online platforms. We discussed this topic with David Hunter from Crisp. He said, “we infiltrate bad actor groups online to predict what they could potentially do next, so platforms have actionable intelligence to prevent harm.” Crisp looks for harmful behaviour across the surface, deep and dark web, gathering intelligence on key entities, key risks and new tradecraft. David explained that this intelligence helps Crisp understand how predators behave.

Crisp blends AI with a human intelligence layer, which consists of content training for 100 threat analysts who come from very different backgrounds. David said, “Ultimately, we want to let mainstream platforms – gaming, file hosting, kids’ networks – know about existing harms.”

Automation solutions for content moderation

David went on to chair our first panel discussion with Rolando Vega from Pinterest and Almudena Lara from Google. Automation has had a huge impact on the fight against CSAM, with 38 million reports made to NCMEC over the past two years. Technology has allowed perpetrators to create and share more content, but it also allows us to create new solutions for fighting back. Almudena said that Google’s key techniques are hash-matching (detects previously known CSAM) and machine learning classifiers (detect new content). With the support of the classifiers, 2 billion images a month can be assessed.
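To illustrate the hash-matching technique Almudena described, here is a minimal sketch in Python. It is an assumption-laden simplification: the hash list and function names are hypothetical, and production systems use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas the cryptographic digest used here only matches byte-identical files.

```python
import hashlib

# Hypothetical set of digests of previously known CSAM, as would be
# supplied by a hotline or NCMEC-style hash list. (This value is the
# SHA-256 digest of the placeholder bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known(content: bytes) -> bool:
    """Return True if the content's digest appears in the known-hash set."""
    return hashlib.sha256(content).hexdigest() in KNOWN_HASHES

print(is_known(b"test"))        # matches the digest above -> True
print(is_known(b"new upload"))  # unknown content -> False
```

The design point is that matching against known material is a cheap set lookup, which is why it scales to billions of images; detecting *new* content is the harder problem that the machine learning classifiers address.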

Rolando remarked that Pinterest leans heavily on machine learning to protect users and find unsafe content. Companies should commit time to automated solutions through safety by design; cross-collaboration and information sharing are key to tackling this issue as an industry.

The human part of content moderation

Improved automation could ease human involvement in assessing content. This leads us into our second panel discussion, on the human aspect of content moderation, with Margaux Liquard from Yubo and Jessica Marasa from Twitch. We asked the question: where does human intelligence add value? Jessica stated that humans have the skills to recognise grooming, for example in chat dialogue. Margaux stressed the difference between Moderation and Trust & Safety: in moderation, content is assessed because a guideline has been breached; Trust & Safety anticipates harm and prevents it from happening. Margaux stated, “When it comes to endangerment, detection is focused on outputs – suspicious threat rather than confirmed behaviour. This is where human intelligence adds value.” Assessment is not black and white.

We then asked: how do you see the landscape of human Trust and Safety changing over the next five years? Margaux mentioned, “User safety regulations will change. We see this with the DSA in Europe and the Online Harms Bill in the UK. There is a clear need to democratise T&S processes, especially around detection.” Jessica said, “There should be more education for parents, and education for minors on how to detect harm. There is also a need for interaction between governments and regulators.”

What is a government’s role in removing CSAM?

That leads us to our final presentation of Day One, delivered by Julie Inman-Grant, Australia's eSafety Commissioner, who provided a government perspective on the removal of CSAM. Advanced technologies and tools have enhanced the capabilities of law enforcement in combatting CSAM, and governments around the world are increasingly stepping up in the fight.

Julie stated that “The new Online Safety Act reforms will see expanded powers across all platforms where harm is now occurring.” The Safety by Design framework is a set of tools that helps industry evaluate the safety of its systems and recommends solutions. Ireland has announced the establishment of its online safety commission, and other countries are also in talks with eSafety. Governments need to support companies in doing the right thing in tackling CSAM on their platforms; cross-sectorial consensus is critical.

Read about Understanding the work of Digital First Responders Part 2.

This Summit marks an opportunity to have an open forum discussion. You don't have to wait until the next INHOPE Summit to join this discussion. If you'd like to get more involved, leave us your details here to receive information on upcoming events and activities.

30.09.2021 - by INHOPE