The Verge: Tip line that guards against child sexual abuse images may struggle against AI

A new report from Stanford’s Internet Observatory says the National Center for Missing and Exploited Children’s CyberTipline needs to improve its technology if it wants to meet the influx of AI-generated child sexual abuse material (CSAM).
Apr 22

The report said the CyberTipline, the internet’s first line of defense against CSAM, already struggles to handle the high volume of reports it receives. According to the center, the tip line received 36.2 million CSAM reports in 2023, a 12 percent increase from 2022.

The rise of AI-generated CSAM would only make things worse. The tip line receives reports from online platforms like Facebook and Google through a manual reporting API, but this API doesn’t ensure that all important fields are filled out. Stanford also notes that content moderation staff turn over quickly at tech companies, so there’s usually no consistency in reporting.

The nonprofit center, which gets a large chunk of its budget from Congress, often offers lower salaries than trust and safety teams elsewhere in the industry, so it has a hard time retaining staff. It also has trouble integrating with external data sources, uses different platforms to read reports from companies, and cannot keep up with the latest technological innovations. For example, it can’t store images and other sensitive data on cloud services because of legal restrictions, and reported data can stay on its servers for only 90 days, significantly less time than it takes to pursue any sort of legal action.

Stanford recommends that online platforms improve their CSAM reporting API to ensure all necessary information is included, which would allow tip line agents to follow up on cases correctly. It also suggested the center seek a higher budget from Congress so it can prioritize hiring technical experts and improving its technology stack.

The New York Times reports that there has been an increase in deepfaked AI-generated nudes on the internet and in schools, adding to an already overwhelmed system. AI-made nudes of celebrities like Taylor Swift made the rounds on X earlier this year. The attorneys general of all 50 states have called on Congress to establish a commission to fight AI-created CSAM. In December, it was found that some datasets used by companies to train their AI models included CSAM images.