Technology companies such as Microsoft, Meta, Google, and OpenAI are taking a stand against AI-generated child sexual abuse material (CSAM). They have pledged to combat the problem by building safety measures into their design principles. In 2023, more than 104 million files suspected of containing CSAM were reported in the US alone.
AI-generated abuse imagery poses significant risks to child safety, which is why organizations like Thorn and All Tech Is Human are working with tech giants including Amazon, Google, Meta, and Microsoft to protect minors from AI misuse. The Safety by Design approach these companies have adopted aims to prevent generative AI from being easily turned to creating abusive content.
Generative AI makes it easy for criminals to create harmful content that exploits children. To address these risks proactively, companies are training AI models to avoid reproducing abusive content, watermarking generated images with information indicating they were created by AI, and evaluating and testing models for child safety before releasing them to the public.
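As a rough illustration of the watermarking idea, the Python sketch below tags a generated image with provenance metadata using Pillow. The key names and the helper functions are hypothetical, not any company's actual scheme; production systems favor more robust approaches, such as C2PA content credentials or invisible pixel-level watermarks, which survive cropping and re-encoding where plain metadata does not.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(image: Image.Image, path: str) -> None:
    """Save an image with PNG text chunks marking it as AI-generated.

    The key/value pairs here are assumptions for illustration; real
    deployments use standards like C2PA content credentials or robust
    invisible watermarks rather than strippable metadata.
    """
    meta = PngInfo()
    meta.add_text("ai-generated", "true")           # provenance flag (assumed name)
    meta.add_text("generator", "example-model-v1")  # model identifier (assumed)
    image.save(path, pnginfo=meta)

def is_tagged_ai_generated(path: str) -> bool:
    """Check the provenance flag on a previously tagged PNG."""
    with Image.open(path) as img:
        return img.info.get("ai-generated") == "true"

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color="gray")  # stand-in for generated output
    tag_as_ai_generated(img, "output.png")
    print(is_tagged_ai_generated("output.png"))     # True
```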
Google has deployed tools that stop the spread of CSAM using a combination of hash-matching technology and AI classifiers. The company also reviews content manually and works with organizations such as the US National Center for Missing & Exploited Children (NCMEC) to report incidents. By investing in research, deploying detection measures, and actively monitoring their platforms, technology companies are taking concrete steps to safeguard children online, with the goal of ensuring that AI is used responsibly and does not contribute to the exploitation or harm of minors.
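A minimal sketch of the hash-matching step, under stated assumptions: uploaded files are hashed and compared against a database of digests of known abusive material, supplied as hashes (never as files) by a clearinghouse such as NCMEC, and matches are flagged for human review. The exact-match SHA-256 version below is only the simplest form of the idea; systems like Microsoft's PhotoDNA use perceptual hashes so that re-encoded or slightly altered copies still match, and classifiers catch previously unseen material.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list for illustration only; the entry below is the
# SHA-256 of the empty string, used here as a placeholder.
KNOWN_BAD_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of_file(path: Path) -> str:
    """Stream the file through SHA-256 so large uploads fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_flag_for_review(path: Path) -> bool:
    """Return True when a file's digest matches the known-bad set.

    A match would trigger human review and a report to NCMEC; real
    pipelines pair this with perceptual hashing and ML classifiers
    to catch altered or novel content that exact hashing misses.
    """
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

The design choice worth noting is that platforms never need to store the abusive files themselves: membership in a hash set is enough to detect known material, which is what makes industry-wide hash sharing practical.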