Taylor Swift fans were quick to come to the singer’s defense following a recent wave of explicit A.I.-generated images of her.
As you may have seen, "Protect Taylor Swift" and related phrases went viral this week in response to the slew of fabricated images. Per an ensuing report from TMZ, the images in question circulated, namely on Twitter, for roughly a day before any apparent action was taken.
Screenshots shared by fans, as seen below, showed that some accounts, seemingly some responsible for spreading the images, had been removed from the platform.
Of course, there’s no point in reaching out to anyone at Twitter, a site I still refuse to refer to as X, as the company was previously reported to have gutted its communications department and literally replaced it with a poop emoji.
As for the possibility of a lawsuit, a Daily Mail tabloid report cites a source as suggesting that legal action is under consideration:
"Whether or not legal action will be taken is being decided but there is one thing that is clear: these fake AI generated images are abusive, offensive, exploitative, and done without Taylor’s consent and/or knowledge."
The capacity of modern A.I. tech to spread misinformation and imitate public figures has been a concern for some time now. Several artists have threatened legal action in response to such fabrications, including Sexyy Red. Last November, the Hood Hottest Princess artist publicly called out a scammy-looking company for using an imitation of her voice.
"My lawyer bout to have a field day," she told fans when the advertisements were brought to her attention.
Last year also saw a number of artists condemning several viral A.I.-created tracks. Bad Bunny, for example, minced no words when speaking out about one such track, calling it "that shit of a song that’s viral on TikTok."