TikTok is introducing a new policy under which content generated with artificial intelligence will be labeled when uploaded from certain platforms. The move is aimed at curbing the spread of misinformation on the social media platform. The announcement was made by Adam Presser, TikTok’s Head of Operations & Trust and Safety, during a recent interview on ABC’s “Good Morning America.”
Presser said that TikTok’s users and creators are excited about the possibilities AI offers for enhancing their creativity and connecting with their audiences. At the same time, the platform wants to ensure that users can distinguish fact from fiction. Previously, TikTok had encouraged users to label content that was generated or edited with AI, particularly when it included realistic images, audio, or video.
The new labeling policy is consistent with TikTok’s stated commitment to transparency and to protecting users from misinformation. By identifying AI-generated content, TikTok aims to help users make informed decisions about what they consume on the platform, part of its ongoing effort to foster a safe and trustworthy online community.