Can we really tell what's made by Sora 2? 11 tips to help spot AI-generated video
Date:
Wed, 15 Oct 2025 11:00:00 +0000
Description:
Sora 2 makes AI video look more real than ever. Learning how to question and verify what we see will become an essential everyday skill.
FULL STORY ======================================================================
Sora 2 is here. OpenAI's upgraded text-to-video model can generate short, super realistic video clips from a prompt. While we already knew AI video was improving fast, Sora 2 crosses a new threshold: a lot of its output looks genuinely real.
Whether that excites or terrifies you probably depends on how you feel about AI in general. But this isn't a debate about whether it's good or bad. It's about something more practical: can we actually tell when a video was made by Sora 2? The answer is complicated.
Because no matter how good you think you are at spotting AI, you can still be fooled. Even people who work in this space get tricked. That doesn't make you stupid; it just means the technology has evolved.
You might wonder why it matters so much. Isn't AI just a bit of fun? Well, sure. Today, it's a fake AI cat, and it's mostly harmless. But if we don't build the skill of spotting AI now, tomorrow it could be a fake politician, a fake arrest, or a fake friend.
The problem is that Sora 2 has quietly fixed most of the old AI giveaways: the blurs, weird hands, and impossible physics that used to expose AI instantly. Sora 2 isn't perfect, but it's dramatically better at all of them. Still, there are cracks if you know where (and how) to look. Spotting Sora 2 videos isn't just about what you see; it's about how you think.
1. Look behind the subject
Sora 2 is great at making the main subject look convincing. However, the background can still give it away. Think buildings with impossible proportions, walls that shift, lines that don't quite meet, and background characters doing bizarre things. We're naturally drawn to the person or animal in the foreground, but with AI, the truth might be hiding just behind them.
2. Pay attention to physics
Real life has rules that AI doesn't always play by. Watch for objects that suddenly appear or vanish, lighting that doesn't match the environment, shadows that fall the wrong way, reflections that show nothing, or motion that feels a little too smooth. Even when the overall aesthetic looks right, physics glitches are still one of the clearest tells.
3. Notice movement that feels off
Some people get an uncanny valley feeling when they look at fake humans in AI images and videos, which often comes down to creepy movement: humans that blink too much, smile too smoothly or move like jerky puppets. But even non-human things can glitch, like static objects that gently wobble, hair that blows in non-existent wind, or fabric that moves for no reason. AI loves adding tiny animations everywhere. It makes the world feel alive, but in the wrong way.
4. Look for blurs, noise and smudges
Sora 2 is impressive, but compression still gets weird. You'll sometimes see grainy patches, warped or melted textures, smudged areas where something was edited out, or overly clean spots that look airbrushed. This is exactly why bodycam-style or low-res footage is already so popular on Sora 2, and so dangerous: it naturally looks messy, which makes all of these flaws harder to spot, and Sora 2 can blend into that aesthetic almost perfectly.
6. Tap into your emotions
AI content is often engineered to provoke a strong emotion: shock, awe, sadness or anger. It doesn't matter which, as long as you react and share. The problem is that when you're emotional, you're far less likely to stop and question what you're seeing. If a video makes you instantly furious or deeply moved, that's your cue to pause. Manipulation is easier when you're overwhelmed.
7. Be wary of watermarks
Some Sora 2 videos include a subtle Sora watermark that moves through the frame. Perfect, right? Problem solved? Not so fast. Relying on watermarks is risky. People can crop them out, blur them or even add fake ones to make AI content look more authentic. And when a watermark has been removed, there are usually clues, like odd aspect ratios, black bars or awkward cropping.
8. Scrutinize the account, not just the video
As content becomes harder to verify, the source becomes even more important. Always check the account sharing it for obvious red flags. For example, is it a random viral page built on shocking or sensational clips? That makes AI much more likely. Do they ever include sources or context in the caption? If not, that's another clue. The less transparent the account is, the more cautious you should be.
9. Check the comments
Comments are often the first place someone screams "AI!", so they're worth checking. But be careful. Creators can delete comments, filter out words like "fake" or "AI", or turn comments off entirely. So just because no one is questioning the video doesn't mean everyone believes it. Sometimes it just means no one is allowed to question it.
10. Cross-check with reality
If it's genuinely news, then other reputable outlets are going to be covering it, so check them. Most newsrooms spend a lot of time authenticating video footage, checking where it's come from, contacting sources, tracing the original upload and digging into the metadata. Whole teams are trained to verify video, so if it only exists in a single viral TikTok, be skeptical.
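For the curious, the "digging into the metadata" step above can be peeked at in code. This is a minimal illustrative sketch, not a verification tool: it lists the top-level boxes of an MP4 container using only Python's standard library. Provenance metadata (such as C2PA content credentials), when present, typically sits in its own box alongside the standard ftyp/moov/mdat structure; the function name `list_mp4_boxes` and the file name `video.mp4` are our own inventions for the example.

```python
import struct

def list_mp4_boxes(data: bytes):
    """Return (box_type, size) pairs for the top-level boxes in an MP4 file.

    Each MP4 box starts with a 4-byte big-endian size followed by a
    4-character type code. Provenance metadata, when present, usually
    lives in a dedicated top-level box (e.g. 'uuid'), so a stripped-down
    file showing only ftyp/moov/mdat carries no such credentials.
    """
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:  # malformed or extended 64-bit size; stop rather than guess
            break
        boxes.append((box_type.decode("ascii", "replace"), size))
        offset += size
    return boxes

# Usage sketch (file name hypothetical):
# with open("video.mp4", "rb") as f:
#     print(list_mp4_boxes(f.read()))
```

Real verification teams go much further, of course; this only shows that container metadata is ordinary, inspectable data rather than magic.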
11. Slow down
This is probably the most important skill. We see so much content, scrolling and sharing at speed, and that's exactly when we get caught out, especially by emotionally charged videos. Slowing down gives your brain time to spot the cracks.
And go easy on yourself. You won't catch every AI video. No one will. But learning to question what we see, regularly and with curiosity, is the new media literacy. It's not just about avoiding embarrassment over a fake video. As AI and reality blur more and more, this skill won't just be useful; it'll be essential.
======================================================================
Link to news story:
https://www.techradar.com/ai-platforms-assistants/openai/can-we-really-tell-whats-made-by-sora-2-11-tips-to-help-spot-ai-generated-video
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)