TikTok Begins Labeling AI-Generated Content, but with Limitations

TikTok has started to label AI-generated content that comes from certain other platforms and tools, such as OpenAI's DALL-E 3. This move follows similar actions by Meta, another social media giant. The new system relies on metadata attached to the content to recognize when an image or video was created with AI.

Metadata is like a digital footprint that can tell where and how a piece of content was created.

The platform aims to make users aware of AI-generated images and videos by applying labels to them. This effort follows guidelines set by the Coalition for Content Provenance and Authenticity (C2PA), which many big tech companies support.

These guidelines call for AI-generated content to carry metadata indicating that it was made by AI. Companies like Google, Microsoft, Adobe, and others are working on tools that can detect this metadata.
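As a rough illustration of how metadata-based detection works, the sketch below (Python, using the Pillow imaging library) scans an image's PNG text chunks and EXIF fields for the name of a known AI generator. The marker list and file path are made up for the example, and real C2PA credentials are cryptographically signed manifests rather than plain-text tags, so this is only a simplified sketch of the idea.

```python
from PIL import Image

# Illustrative marker strings; a real detector would verify signed C2PA
# manifests rather than match plain-text tags.
AI_MARKERS = {"dall-e", "openai", "adobe firefly", "midjourney"}

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's metadata mentions a known AI generator."""
    img = Image.open(path)
    fields = []
    # PNG text chunks (e.g. a "Software" or "Description" entry).
    fields.extend(str(v) for v in getattr(img, "text", {}).values())
    # EXIF entries such as Software or ImageDescription.
    fields.extend(str(v) for v in img.getexif().values())
    blob = " ".join(fields).lower()
    return any(marker in blob for marker in AI_MARKERS)

print(looks_ai_generated("generated.png"))  # hypothetical file path
```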

Despite these measures, TikTok acknowledges that not all AI-generated content will be caught. This is because it's relatively easy to erase metadata from digital content. Simple actions like taking a screenshot, recording a screen, or using another app to re-export a video can remove this digital footprint.
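To see why stripping is so easy, the sketch below re-exports only the pixels of an image: the new file carries none of the original's metadata, which is roughly what happens when someone screenshots or re-saves a piece of content. File names are placeholders.

```python
from PIL import Image

# Open an image that may carry provenance metadata in its PNG text chunks.
original = Image.open("generated.png")          # placeholder file name
print("metadata before:", dict(getattr(original, "text", {})))

# Copy only the pixel data into a fresh image and save it; text chunks and
# EXIF are not carried over, so the re-exported file has no AI marker left.
reexported = Image.new(original.mode, original.size)
reexported.putdata(list(original.getdata()))
reexported.save("re-exported.png")

print("metadata after:", dict(getattr(Image.open("re-exported.png"), "text", {})))
```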

Even OpenAI, the creator of DALL-E 3 and ChatGPT, admits that metadata is not foolproof and can be removed either on purpose or by accident.

This means that while TikTok can label some AI content, deepfake videos, which can be more deceptive and harmful, might still slip through without labels. These deepfakes could harm people by using their likenesses without permission.

The new steps by TikTok are a start toward addressing these issues, but they won't completely solve the problem of unlabeled AI content circulating online. The company's effort is part of a broader push by tech firms to increase media literacy and inform the public about the origins of the content they come across online.

