
Nearly 4,000 celebrities found to be victims of deepfake pornography


More than 250 British celebrities are among the thousands of famous people who are victims of deepfake pornography, an investigation has found.

A Channel 4 News analysis of the five most visited deepfake websites found almost 4,000 famous individuals were listed, of whom 255 were British.

They include female actors, TV stars, musicians and YouTubers, who have not been named, and whose faces were superimposed on to pornographic material using artificial intelligence.

The investigation found that the five sites received 100m views in the space of three months.

The Channel 4 News presenter Cathy Newman, who was found to be among the victims, said: “It feels like a violation. It just feels really sinister that someone out there who’s put this together, I can’t see them, and they can see this kind of imaginary version of me, this fake version of me.”

Under the Online Safety Act, sharing such imagery without consent has been illegal in the UK since 31 January, but creating the content is not. The legislation was passed in response to the proliferation of deepfake pornography created with AI and apps.


In 2016, researchers identified one deepfake pornography video online. In the first three-quarters of 2023, 143,733 new deepfake porn videos were uploaded to the 40 most used deepfake pornography sites – more than in all the previous years combined.

Sophie Parrish, 31, from Merseyside, discovered that fabricated nude images of her had been posted online before the legislation was introduced.

She told Channel 4 News: “It’s just very violent, very degrading. It’s like women don’t mean anything, we’re just worthless, we’re just a piece of meat. Men can do what they like. I trusted everybody before this.”

The broadcasting watchdog Ofcom is consulting on how the Online Safety Act, which has faced numerous delays, will be enforced and applied.

An Ofcom spokesperson said: “Illegal deepfake material is deeply disturbing and damaging. Under the Online Safety Act, firms will have to assess the risk of content like this circulating on their services, take steps to stop it appearing and act quickly to remove it when they become aware.

“Although the rules aren’t yet in force, we are encouraging companies to implement these measures and protect their users now.”

A Google spokesperson said: “We understand how distressing this content can be, and we’re committed to building on our existing protections to help people who are affected.

“Under our policies, people can have pages that feature this content and include their likeness removed from search. And while this is a technical challenge for search engines, we’re actively developing additional safeguards on Google search – including tools to help people protect themselves at scale, along with ranking improvements to address this content broadly.”

Ryan Daniels of Meta, which owns Facebook and Instagram, said: “Meta strictly prohibits child nudity, content that sexualises children, and services offering AI-generated non-consensual nude images. While this app [that creates deepfakes] remains widely available on various app stores, we’ve removed these ads and the accounts behind them.”


