Nude pictures on the internet
Taylor Swift having sex in a football stadium? Here is how not to fall victim to AI images
Alleged nude pictures of US pop star Taylor Swift, some of which were viewed millions of times, were online for hours before the platform X deleted them. The images were not real, but AI-generated. How can such content be exposed?
US pop star Taylor Swift recently experienced this firsthand: several images of her in sexually explicit, sometimes pornographic poses were circulated on X on Wednesday and viewed millions of times before the platform had the fakes deleted.
Swift fakes probably originated in a Telegram group
According to a report by the portal “404 Media”, which specializes in internet topics, the fakes originated in a Telegram group and only later ended up on X. X’s security team made clear on Friday morning, almost a day later, that distributing “non-consensual nudity” is strictly prohibited and that the platform has a “zero-tolerance policy” in this regard.
In fact, the account of the main disseminator of the images has since been deleted, but the damage had long since been done.
Debunking AI images: a guide
While some of the Swift fakes could be quickly exposed with common sense, others are much harder to detect. So what should you pay attention to if you find an image or video suspicious?
When assessing possible AI content, check both the internal and the external logic of the recording. What does that mean?
“Internal logic” includes the following aspects:
- Do the hands, fingers or teeth look realistic? Does the person pictured have too many or too few fingers, or perhaps even several rows of teeth? (Note: AI is getting better and better in this area, so such anomalies may soon disappear.)
- Is the person’s hair sticking up strangely or does it seem strangely “fuzzy”?
- Are the transitions at the collar, the ears or the finger jewelry consistent?
- Are there any image errors, such as unexplained spots in the subject’s hair or beard?
- How are buildings, trees or other faces depicted in the background? If they are distorted, this could indicate AI use. If an alleged location is mentioned, check Google Maps, for example, to see whether it actually looks like that.
- AI can (still) have problems representing three-dimensionality.
- Blank stare: Do people who are not looking directly at the “camera” have unfocused eyes? Then something is probably wrong.
- If the lighting, reflections or shadows don’t match, your alarm bells should ring. This also applies if the aspects mentioned are too perfect – this is usually not the case in everyday situations and is therefore suspicious.
- When watching suspicious videos, pay particular attention to facial and jaw movements. If they appear doll-like, you are probably dealing with a fake.
- If the upper body and face do not move in sync, or the head appears attached or mask-like, the video was probably generated using AI.
In what context was the recording supposedly taken?
While the “internal logic” of a recording primarily revolves around what is shown in the image or video, checking the “external logic” is intended to establish context.
This includes:
- When and where was the recording supposedly taken? Is the information plausible?
- Did the disseminator claim to have made the recording themselves, and could they actually have been there?
- Is what is shown supposed to depict a real event, or does the disseminator state that it is fictional content? (Tip: Try to find suspicious content using a reverse image search – images are often stolen by AI artists and uploaded in a different context.)
- Can you find further recordings of the scene online? In today’s smartphone age, it is unlikely that an attention-grabbing situation was not captured by other people. Caution is also advised if such content is not picked up by traditional media.
- Today’s cameras produce very sharp images and videos. However, if the content is very unclear, this could be a concealment tactic.
More tips:
- Images generated with the AI provider “Midjourney” are publicly visible and can be found using keyword searches.
- There are providers (for example “Hive Moderation” or “Hugging Face”) that aim to detect AI-generated content, but they work better in some cases than in others. Therefore, treat the results of such verification tools as an indication, not as proof, that a piece of content is AI-generated.
Sources: “404 Media” / “X”