Scroll through social media for five minutes and you'll encounter AI-generated content. Sometimes it's obvious. Sometimes it's terrifyingly convincing. And increasingly, it's being used to manipulate, deceive, and flood the internet with what I call "AI slop": low-quality, AI-generated content designed to game algorithms, spread misinformation, or simply waste your time.
As someone who's spent years in tech leadership and now works with some of the most talented players in the PR and communications space at Lookatmedia, I've become obsessed with this problem. Let me share what I've learned about identifying AI slop and the tools that actually work.
The Three Types of AI Slop You'll Encounter
Before we dive into detection, let's categorize what we're dealing with:
1. AI-Generated Text
This is the most common form. Blog posts, social media content, news articles, product descriptions. All can be generated by LLMs like ChatGPT, Claude, or open-source models. Sometimes it's benign (someone automating their content calendar). Sometimes it's malicious (fake news, spam, or manipulation).
2. AI-Generated Images
From Midjourney to DALL-E to Stable Diffusion, AI image generators have become shockingly good. Fake photos of celebrities, fabricated "news" images, and synthetic profile pictures are everywhere.
3. AI-Generated Video & Audio
The newest frontier. Deepfakes, synthetic voices, and entirely fabricated video content. This is where the stakes are highest and detection is hardest.
How to Spot AI-Generated Text
AI text has gotten remarkably good, but there are still tells:
The "Perfect" Problem
AI text is often too polished. Real humans make small mistakes, use colloquialisms inconsistently, and have genuine voice quirks. AI tends toward a sanitized, corporate-speak middle ground.
Red flags to watch for:
- Overuse of phrases like "It's important to note," "In today's fast-paced world," or "Let's dive in"
- Perfect parallel structure in every list
- Lack of genuine opinions or controversial takes
- Smooth transitions that feel mechanical rather than conversational
- Consistent sentence length and rhythm (humans vary more)
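That last tell is easy to eyeball at scale with a few lines of code. Here's a rough Python sketch that measures sentence-length variation (sometimes called "burstiness"); the naive sentence splitting and any threshold you'd apply are my own assumptions, and this is a heuristic for triage, not a detector:

```python
import re
import statistics

def sentence_length_stats(text: str):
    """Rough 'burstiness' check: humans tend to vary sentence length
    more than raw LLM output. Returns (mean, stdev) of words per sentence."""
    # Naive split on ., !, ? followed by whitespace; good enough for triage.
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return (float(lengths[0]) if lengths else 0.0, 0.0)
    return (statistics.mean(lengths), statistics.stdev(lengths))
```

A standard deviation that's tiny relative to the mean suggests the metronomic rhythm described above; human prose usually bounces between short punches and long, winding sentences.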
The Hallucination Check
AI confidently makes things up. Check any specific claims: statistics, quotes, studies referenced. If an article cites a "2024 Harvard study" but you can't find it, that's a red flag.
"The most dangerous AI text isn't the obviously fake stuff. It's the 95% accurate content with confidently stated false details mixed in."
Strange Special Characters
One giveaway of suspect text is the presence of unusual special characters: look-alike Unicode characters such as a Cyrillic "а" standing in for a Latin "a", or invisible formatting characters like zero-width spaces. In practice these are often a sign that someone is laundering generated text past detectors rather than a quirk of the models themselves, but either way they rarely appear in honestly typed prose.
The em-dash tell: AI loves em-dashes (—). If you see text littered with em-dashes instead of regular hyphens or commas, that's a red flag. Most humans don't naturally reach for the em-dash character when typing. AI models, trained on polished editorial content, use them constantly.
Watch for text that looks normal but behaves strangely when you try to search for specific words, or copy-paste that produces unexpected results. AI outputs also frequently include inconsistent quote styles ("curly" vs "straight" quotes) that flip back and forth mid-document.
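These character-level tells are straightforward to check programmatically. Here's a minimal Python sketch; the character sets it flags are illustrative rather than exhaustive, and legitimate non-English text or word-processor output will trigger false positives, so treat hits as items for review, not verdicts:

```python
import unicodedata

# Characters worth flagging in ostensibly plain English text.
# This set is illustrative, not exhaustive.
INVISIBLES = {"\u200b", "\u200c", "\u200d", "\ufeff", "\u00ad"}  # zero-widths, BOM, soft hyphen

def flag_suspicious_chars(text: str):
    """Report characters that often betray pasted machine output:
    non-Latin letters hiding in Latin words, invisible formatting
    characters, em-dashes, and curly quotes worth a second look."""
    flags = []
    for i, ch in enumerate(text):
        if ch in INVISIBLES:
            flags.append((i, ch, "invisible character"))
        elif ch == "\u2014":
            flags.append((i, ch, "em-dash"))
        elif ch in "\u201c\u201d\u2018\u2019":
            flags.append((i, ch, "curly quote"))
        elif ch.isalpha() and ord(ch) > 127:
            name = unicodedata.name(ch, "UNKNOWN")
            if "LATIN" not in name:
                flags.append((i, ch, f"non-Latin letter ({name})"))
    return flags
```

Run it over a paragraph and a clean human draft typically comes back empty, while laundered text lights up with homoglyphs and zero-width characters.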
Best Tools for Text Detection
- Most accurate for articles and essays. Struggles with heavily edited AI text but excellent for raw output detection.
- Designed for content publishers. Good at detecting paraphrased AI content and includes plagiarism checking.
- Claims highest accuracy rates. Good for academic and professional content verification.
How to Spot AI-Generated Images
Despite massive improvements, AI images still have consistent problems:
The Hand Problem
Check the hands. AI still struggles with fingers. You'll often see six fingers, fused digits, or anatomically impossible positions. Hands in pockets or behind backs are AI's way of hiding its weakness.
Background Inconsistencies
AI focuses on the main subject. Look at the background for:
- Text that's gibberish or partially formed
- Objects that fade into nothing or blend unnaturally
- Architectural elements that don't make geometric sense
- People in the background with distorted features
The Uncanny Symmetry
AI loves symmetry. Real photos have natural asymmetry in faces, clothing, and environments. If something looks too perfect, it probably is.
Texture Tells
Zoom in on textures. Hair, fabric, and skin often have a waxy, overly smooth quality. Teeth and eyes may look slightly off, too uniform or unnaturally bright.
Best Tools for Image Detection
- Detects content from DALL-E, Midjourney, Stable Diffusion, and more. API available for integration.
- Free tier available. Good accuracy on most major AI image generators.
- Not AI detection per se, but great for verifying whether an "original" image exists elsewhere or has been manipulated.
How to Spot AI-Generated Video & Audio
This is where things get scary. Deepfakes have improved dramatically, but they're not perfect:
Lip Sync Issues
Watch the mouth carefully. Deepfakes often have subtle timing issues where audio and lip movement don't quite match, especially on complex sounds like "M," "B," and "P."
Eye Problems
Blinking patterns in deepfakes are often unnatural, either too regular or not enough. The eyes may also have an unnatural glossiness or not track properly.
Edge Artifacts
Look at the edges of the face, especially around the hairline and jawline. You'll often see subtle flickering, blurring, or color inconsistencies where the fake face meets the real background.
Audio Tells
Cloned voices often have:
- Unnatural breathing patterns
- Consistent pacing (real speech varies in speed)
- Lack of verbal fillers like "um" and "uh"
- Emotional flatness or inappropriate emphasis
Watermarks: The Easiest Tell
Here's the low-hanging fruit: many AI video generators embed visible or semi-visible watermarks. This is actually good news for detection, if you know what to look for.
Sora (OpenAI's video generator) includes watermarks in its outputs. Meta AI's video tools similarly embed identifiable markers. These watermarks might appear as small logos in corners, subtle patterns overlaid on the video, or metadata embedded in the file itself.
Bad actors often try to crop, blur, or re-encode videos to remove these watermarks, but that process itself can leave artifacts. Look for unusual cropping, inconsistent compression around edges, or suspiciously tight framing that might be hiding a removed watermark.
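You can also peek at embedded file metadata yourself, and images are the simplest case to illustrate. Here's a stdlib-only Python sketch that lists the `tEXt` metadata chunks in a PNG, where some generators record a "Software" or similar field. The field names are assumptions (they vary by tool), and dedicated utilities like exiftool or C2PA provenance verifiers go much deeper, especially for video:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes):
    """Return (keyword, value) pairs from tEXt chunks in a PNG byte string.
    PNG chunks are: 4-byte big-endian length, 4-byte type, data, 4-byte CRC."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    pos, found = 8, []
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = chunk.partition(b"\x00")
            found.append((key.decode("latin-1"), val.decode("latin-1")))
        pos += 12 + length  # skip length + type + data + CRC
        if ctype == b"IEND":
            break
    return found
```

An empty result proves nothing (metadata is trivially stripped), but a generator's name sitting in a "Software" field is a quick, free confirmation.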
Best Tools for Video/Audio Detection
- Enterprise-grade deepfake detection. Real-time analysis with confidence scoring.
- Specifically designed for detecting AI-generated speech and cloned voices.
- Mobile-friendly deepfake detection. Good for quick verification of social media videos.
Why This Matters for PR & Media Professionals
If you work in communications, PR, or journalism, AI detection isn't optional. It's essential. One fake quote, one manipulated image, one deepfake video can destroy credibility and careers.
This is exactly why we built Lookatmedia with AI verification at its core. Our Copy Desk AI doesn't just help you write. It helps you verify. When you're crafting press releases, fact-checking claims, or evaluating source material, you need tools that catch AI hallucinations and synthetic content before they become embarrassing corrections.
"In PR, your reputation is your product. One piece of AI slop that slips through can undo years of trust-building."
We've built our tools from the ground up with one goal: help PR professionals and freelancers combat AI hallucinations and fakes. This isn't bolted-on functionality. It's the foundation of everything we do.
Want tools built for PR verification?
Lookatmedia's AI-powered newsroom and Copy Desk AI are designed specifically for communications professionals who can't afford to get burned by synthetic content.
Explore Lookatmedia →
A Framework for Verification
Here's the mental checklist I use when evaluating any content:
- Source check: Is this from a verified, reputable source? Can you trace it back to the original?
- Cross-reference: Are other credible sources reporting the same thing?
- Tool verification: Run it through relevant detection tools (text, image, or video)
- Expert consultation: When stakes are high, get a human expert to review
- Gut check: Does this seem too good/bad/sensational to be true? That's often a signal
The Arms Race Continues
I won't sugarcoat it: this is an arms race, and the generators are often ahead of the detectors. As AI improves, so must our detection methods and our media literacy.
The good news? The patterns I've shared above remain relatively consistent across AI generations. The technology changes, but the fundamental tells (the uncanny perfection, the hallucinations, the edge artifacts) tend to persist in some form.
Stay skeptical. Stay informed. And when the stakes are high, verify everything.
Working in PR or communications? Don't fight AI slop alone. Lookatmedia gives your team the AI-powered tools to verify content, generate authentic pitches, and protect your reputation in an age of synthetic media.