Microsoft Joins Thorn and All Tech Is Human to enact strong child safety commitments for generative AI

May 3
This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to collectively tackling tech and society's complex problems, aims to mitigate the risks generative AI poses to children. The principles also align with and build upon Microsoft's approach to addressing abusive AI-generated content. That includes the need for a strong safety architecture grounded in safety by design, to safeguard our services from abusive content and conduct, and for robust collaboration across industry and with governments and civil society. We have a longstanding commitment to combating child sexual exploitation and abuse, including through critical partnerships with organizations such as the National Center for Missing and Exploited Children, the Internet Watch Foundation, the Tech Coalition, and the WeProtect Global Alliance. We also provide support to INHOPE, recognizing the need for international efforts to support reporting. These principles will support us as we take forward our comprehensive approach.
As a part of this Safety by Design effort, Microsoft commits to take action on these principles and transparently share progress regularly. Full details on the commitments can be found on Thorn’s website here and below, but in summary, we will:

  • DEVELOP: Develop, build, and train generative AI models to proactively address child safety risks.
  • DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
  • MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.


Today’s commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. This collective action underscores the tech industry’s approach to child safety, demonstrating a shared commitment to ethical innovation and the well-being of the most vulnerable members of society.

We will also continue to engage with policymakers on the legal and policy conditions needed to support safety and innovation. This includes building a shared understanding of the AI tech stack and the application of existing laws, as well as ways to modernize law so that companies have the appropriate legal frameworks to support red-teaming efforts and the development of tools to help detect potential CSAM.

We look forward to partnering across industry, civil society, and governments to take forward these commitments and advance safety across different elements of the AI tech stack. Information-sharing on emerging best practices will be critical, including through work led by the new AI Safety Institute and elsewhere.