Recent advances in video generation have produced vivid content that is often indistinguishable from real videos, making the detection of AI-generated video an emerging societal challenge. Prior AIGC detection benchmarks mostly evaluate video without audio, target broad narrative domains, and focus solely on classification. Yet it remains unclear whether state-of-the-art video generation models can produce immersive, audio-paired videos that reliably deceive humans and VLMs. To this end, we introduce Video Reality Test, an ASMR-sourced video benchmark suite for testing perceptual realism under tight audio–visual coupling, featuring the following dimensions: (i) Immersive ASMR video–audio sources. Built on carefully curated real ASMR videos, the benchmark targets fine-grained action–object interactions with diversity across objects, actions, and backgrounds. (ii) Peer-review evaluation. An adversarial creator–reviewer protocol in which video generation models act as creators aiming to fool reviewers, while VLMs serve as reviewers seeking to identify fakeness. Our experiments show that the best creator, Veo3.1-Fast, fools most VLMs: the strongest reviewer (Gemini-2.5-Pro) achieves only 56% accuracy (random chance is 50%), far below human experts (81.25%). Adding audio improves real–fake discrimination, yet superficial cues such as watermarks can still significantly mislead models. These findings delineate the current boundary of video generation realism and expose limitations of VLMs in perceptual fidelity and audio–visual consistency.
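The reviewer side of this creator–reviewer protocol is conceptually simple. The sketch below is only an illustration of that loop, assuming a hypothetical `query_vlm` wrapper around whichever VLM API serves as the reviewer; it is not the benchmark's official evaluation harness.

```python
# Illustrative sketch of the reviewer loop: show each clip to a VLM,
# ask "real or fake?", and score accuracy per creator.
from pathlib import Path
from collections import defaultdict

PROMPT = "Is this video real footage or AI-generated? Answer 'real' or 'fake'."

def query_vlm(video_path: Path, prompt: str) -> str:
    """Hypothetical stand-in: replace with an actual VLM API call that
    returns the string 'real' or 'fake' for the given clip."""
    raise NotImplementedError

def evaluate(clips: list[tuple[Path, str, str]]) -> dict[str, float]:
    """clips: (path, ground-truth label 'real'/'fake', creator name).
    Returns per-creator reviewer accuracy in percent."""
    hits, totals = defaultdict(int), defaultdict(int)
    for path, label, creator in clips:
        pred = query_vlm(path, PROMPT).strip().lower()
        totals[creator] += 1
        hits[creator] += int(pred == label)
    return {creator: 100.0 * hits[creator] / totals[creator] for creator in totals}
```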
We release the complete ASMR-based Video Reality Test corpus: real videos, extracted images, prompts, and outputs from 13 video-generation settings (OpenSoraV2 variants, Wan2.2 variants, Sora2, Veo3.1-Fast, Hunyuan, StepFun, etc.). For each of the 149 scenes we therefore provide 1 + k clips, where k = 13 is the number of fakery families. The ModelScope and Hugging Face mirrors host identical content.
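As a quick-start illustration, the corpus can be pulled from the Hugging Face mirror roughly as follows. The `repo_id` below is a placeholder, not the actual dataset identifier; substitute the id listed on the Hugging Face or ModelScope page.

```python
# Minimal sketch for fetching the corpus from the Hugging Face mirror.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="your-org/video-reality-test",  # placeholder, not the real dataset id
    repo_type="dataset",
)
print("Corpus downloaded to:", local_dir)
```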
Please contact us via email (wjqkoko@foxmail.com) to submit or update your model results.
| Model | Veo3.1 Fast | Sora2 | Wan2.2 A14B | Wan2.2 5B | Opensora V2 | Hunyuan Video | Step Video | Avg. (↑) | Rank |
|---|---|---|---|---|---|---|---|---|---|
| Random | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 13 |
| Human | 81.25 | 91.25 | 86.25 | 91.25 | 91.25 | 91.25 | 91.25 | 89.11 | 1 |
| Open-source Models | |||||||||
| Qwen3-VL-8B* | 57.79 | 87.69 | 55.56 | 56.28 | 51.50 | 54.50 | 83.84 | 63.88 | 5 |
| Qwen3-VL-30B-A3B | 51.08 | 51.35 | 49.44 | 54.74 | 47.09 | 49.74 | 81.68 | 54.87 | 12 |
| Qwen2.5-VL-72B | 49.50 | 71.07 | 51.05 | 51.00 | 54.50 | 53.50 | 81.50 | 58.87 | 10 |
| Qwen3-VL-235B-A22B | 56.53 | 80.75 | 53.89 | 52.66 | 50.79 | 48.19 | 90.53 | 61.91 | 8 |
| GLM-4.5V | 54.64 | 63.75 | 54.90 | 57.59 | 66.24 | 61.01 | 87.13 | 63.61 | 6 |
| Proprietary Models | |||||||||
| GPT-4o-mini | 52.50 | 51.78 | 53.68 | 50.50 | 53.00 | 50.50 | 89.00 | 57.28 | 11 |
| GPT-4o | 51.50 | 51.27 | 55.26 | 55.50 | 56.50 | 56.50 | 95.00 | 60.22 | 9 |
| GPT-5 (Preview) | 54.55 | 95.43 | 55.26 | 57.50 | 56.78 | 56.50 | 93.97 | 67.14 | 4 |
| Gemini-2.5-Flash | 47.72 | 87.56 | 53.55 | 55.44 | 55.15 | 53.06 | 78.63 | 61.59 | 7 |
| + Audio | 52.55 | 93.65 | 53.55 | 55.44 | 55.15 | 53.06 | 78.63 | 63.15 | 8 |
| Gemini-2.5-Pro | 51.56 | 84.49 | 59.09 | 60.21 | 62.30 | 65.76 | 87.98 | 67.34 | 3 |
| + Audio | 56.00 | 87.72 | 59.09 | 60.21 | 62.30 | 65.76 | 87.98 | 68.44 | 3 |
| Gemini-3-Pro-Preview | 77.89 | 89.90 | 57.67 | 73.87 | 65.83 | 80.90 | 87.94 | 76.27 | 2 |
* Note: the Qwen3 links point to the Qwen organization page, as specific Qwen3-VL repositories may currently be internal or listed under the Qwen2.5 umbrella.
| Setting | Image Input | Text Input | GPT-4o-mini | GPT-4o | Gemini-2.5-Flash | Gemini-2.5-Pro | Avg. (↓) | Rank |
|---|---|---|---|---|---|---|---|---|
| Opensora-V2 | ||||||||
| Text2Vid | ✘ | ✔ | 14.00 | 10.00 | 28.72 | 39.18 | 22.98 | 8 |
| ImgText2Vid | ✘ | ✔ | 12.00 | 15.00 | 30.21 | 35.16 | 23.59 | 9 |
| Text2Img2Vid | ✔ | ✔ | 14.00 | 18.00 | 45.36 | 43.75 | 30.28 | 10 |
| Wan2.2 | ||||||||
| Text2Vid-A14B | ✘ | ✔ | 12.00 | 13.00 | 24.47 | 21.74 | 17.80 | 5 |
| ImgText2Vid-A14B | ✔ | ✔ | 8.89 | 7.78 | 23.53 | 26.19 | 16.10 | 3 |
| ImgText2Vid-5B | ✔ | ✔ | 7.00 | 13.00 | 30.53 | 33.33 | 20.97 | 6 |
| HunyuanVideo | ||||||||
| Text2Vid | ✘ | ✔ | 8.00 | 7.00 | 25.51 | 18.56 | 14.77 | 2 |
| ImgText2Vid | ✔ | ✔ | 7.00 | 15.00 | 26.53 | 42.39 | 22.73 | 7 |
| Sora2 | ||||||||
| Text2Vid | ✘ | ✔ | 16.00 | 9.00 | 100.00 | 97.89 | 55.72 | 12 |
| ImgText2Vid | ✔ | ✔ | 8.25 | 3.09 | 95.79 | 79.17 | 46.58 | 11 |
| + Audio | ✔ | ✔ | 8.25 | 3.09 | 97.89 | 88.66 | 49.47 (+2.89) | 11 |
| - Watermark | ✔ | ✔ | 8.25 | 6.19 | 25.00 | 24.74 | 16.55 | 4 |
| StepVideo | ||||||||
| Text2Vid | ✘ | ✔ | 84.00 | 92.00 | 73.68 | 86.81 | 83.62 | 13 |
| Veo3.1-Fast | ||||||||
| ImgText2Vid | ✔ | ✔ | 11.00 | 5.00 | 16.16 | 17.00 | 12.54 | 1 |
| + Audio | ✔ | ✔ | 11.00 | 5.00 | 19.20 | 25.00 | 15.05 (+2.51) | 1 |
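For reference, the Avg. columns in both tables appear to be plain means over the per-column scores (an observation from the numbers, not a rule stated in the tables). The snippet below reproduces two rows from the tables above.

```python
# Sanity check: Avg. columns look like simple means over the listed scores.
human = [81.25, 91.25, 86.25, 91.25, 91.25, 91.25, 91.25]  # reviewer table, Human row
wan_t2v_a14b = [12.00, 13.00, 24.47, 21.74]                # creator table, Wan2.2 Text2Vid-A14B row

print(round(sum(human) / len(human), 2))                    # -> 89.11, matches Avg. (↑)
print(round(sum(wan_t2v_a14b) / len(wan_t2v_a14b), 2))      # -> 17.8, matches Avg. (↓) of 17.80
```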
- **Q1-1.** Are current VLMs reliable at distinguishing real and generated videos?
- **Q1-2.** Does adding audio improve VLM detection performance?
- **Q2-1.** Can current video generation models successfully fool VLMs?
- **Q2-2.** What factors affect the realism of generated videos?
(Qualitative examples: real ASMR clips shown alongside generated counterparts from Sora-2, Veo-3.1-fast, Open Sora, Kling, and Wan-A14B.)
@misc{wang2025videorealitytest,
title={Video Reality Test: Can AI-Generated ASMR Videos fool VLMs and Humans?},
author={Jiaqi Wang and Weijia Wu and Yi Zhan and Rui Zhao and Ming Hu and James Cheng and Wei Liu and Philip Torr and Kevin Qinghong Lin},
year={2025},
eprint={2512.13281},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.13281},
}