“We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI,” Meta said in a blogpost.
This new policy will go into effect next year and will be required globally.
Advertisers will have to disclose whenever an ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered.
This includes depicting a real person saying or doing something they did not say or do; depicting a realistic-looking person who does not exist or a realistic-looking event that did not happen; altering footage of a real event; or depicting a realistic event that allegedly occurred but that is not a true image, video, or audio recording of the event, the company said.
Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad.
Meta said it will add information to an ad when the advertiser discloses in the advertising flow that its content is digitally created or altered. Meanwhile, in a sign of how AI can be misused to spread misinformation, a deepfake video of actress Rashmika Mandanna has gone viral on the internet.
After the AI-generated video spread online, the actress expressed disappointment and pointed to the possible misuse of the technology to malign a person's image.