AI’s new low: Taylor Swift targeted with explicit deepfake images

Pop star Taylor Swift has become the victim of AI-generated pornographic images that flooded social media. The viral posts underscore the troubling implications of increasingly accessible artificial intelligence technology.

Sexual harassment

These fabricated images, depicting the singer in explicit and suggestive poses, were seen by millions of social media users on X, formerly known as Twitter, before being removed.


According to reports, Reddit banned non-consensual nude deepfakes after they started appearing on the site in 2018.

Concerns have been raised that AI and deepfake platforms will allow ordinary users to sexually harass people, particularly women and children. In the US, there have already been reports of teenage girls being bullied by male classmates who created fake nude images of them.

In 2020, Vox reported that the biggest threat posed by deepfakes is pornography. Actress Kristen Bell learned from her husband that deepfake pornographic images of her were circulating on the internet.

Brand implications

Last year, the SABC cautioned the public about fraudulent AI-generated videos that depicted its journalists and presenters endorsing an investment product.

The emergence of deepfake technology poses multifaceted challenges for brands. First and foremost, the risk of reputation damage looms large, as convincing deepfake videos can be crafted to disseminate false narratives, potentially harming a brand’s standing.

Moreover, the erosion of consumer trust becomes a pressing concern, as the authenticity of video content, especially involving brand representatives, may come into question, impacting how marketing messages are received and perceived by audiences.