Content generated by artificial intelligence has become so lifelike that it’s often impossible to tell whether a video or an image floating through social media is real or fake.
More than a dozen online tools claim they can tell the difference between what’s real and what’s AI by looking for hidden watermarks, composition errors and other digital clues.
The reality is more mixed, according to a battery of tests conducted by The New York Times. While many tools did a good job detecting some AI content, they were not accurate enough to offer users complete confidence.
The findings suggest that these detectors can help confirm suspicions about AI-generated media, but it is hard to rely on any of them to make definitive rulings.
That presents fresh challenges for internet users and fact-checkers trying to manage the AI fakery that has flooded social media in recent months.
Overall, we found that any conclusions drawn by the tools should be supported by other research, like details in official photographs or news reports.
Many people nevertheless see the detection tools, which now analyse not just imagery but also video and audio, as powerful arbiters of truth at a crucial moment when AI-generated content is coursing through social media and deceiving users during breaking news moments.
The tools are being adopted by banks and insurance companies trying to spot AI-powered fraud, by teachers looking for plagiarism, and by internet sleuths trying to verify images and videos circulating through social media.
“You’re never going to have a detection tool that is able to 100 per cent detect whether AI has been used in text, images, video, whatever form it is,” said Mike Perkins, a professor at British University Vietnam who studied AI detectors and found that text detectors were unreliable. As the AI generators improve, he said, AI detectors will struggle to catch up, creating an “arms race.”
Our tests reviewed more than a dozen AI detectors and chatbots capable of identifying fake video, audio, music and images, conducting more than 1,000 scans in all.
Many could detect basic fakes.
Most AI fakes circulating online today do not take a lot of work to create: Users can type in simple prompts and receive a lifelike image or video of real people. That kind of content flooded the internet soon after Nicolás Maduro, Venezuela’s ousted president, was arrested in January.
To test this, we asked ChatGPT, the AI chatbot made by OpenAI, to create a photograph of two people laughing. It produced a lifelike image that nevertheless contained several markers of AI imagery: the lighting, composition and features were all a bit too perfect, not to mention that a hand seemed to ripple unnaturally.
Many of the AI detectors quickly spotted that the image was AI-generated, with a few exceptions. ChatGPT, for example, couldn’t detect the fake image that it had created just moments earlier.
(The Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)
AI detectors are generally trained on huge collections of AI-generated works, learning to spot the digital signals left behind by the AI tools.
The Times shared the results of the tests with the AI detector companies. Many of them responded that no detector would be completely accurate all of the time. In a sign of how fast-moving the AI detection business is, several companies said they were on the brink of releasing major updates to their models that would perform better.
“This will be an ongoing battle of determining ‘Is this AI or not?’ for the foreseeable future,” wrote Anatoly Kvitnitsky, the CEO of AI or Not, in an email. The company ran additional tests on the images that its public model failed to identify, finding that its newest model could correctly identify them as AI.
They struggled with more complex images.
The detectors struggled more with complex images, like a fictional seaside port scene that carried few telltale markers of being AI-made.
This may be because some of the detectors are mostly trained to identify faces so they can be used for security and anti-fraud purposes.
Few detectors can analyse videos.
AI videos are rapidly becoming the next threat to social media. The release of Sora, an AI video generator app created by OpenAI, led to a surge of fake videos on social media — with few labels from social media companies indicating they were fake.
Only a few AI detectors are capable of analysing video and audio, and those that could produced mixed results.
Video and audio have emerged as key security threats for businesses: Imagine a call from a CEO that was actually an AI replica of that person’s voice, or a video conference with an AI character that looks real.
Detection companies have invested heavily in spotting such fakery, offering tools that can identify whether video, audio or music is AI-made, even by analysing live video conference feeds. Some analyses highlighted which parts of a video were fake and which were deemed real.
They are better at detecting fake audio.
AI-generated audio quickly leaped ahead of images and video to become especially lifelike.
Tools like those from ElevenLabs create remarkably lifelike voices, complete with breaths, pauses and dynamic intonation. Such voices are used for viral videos and memes, but also for telephone scams and impersonations.
Seven of the AI detectors and chatbots we tested could check for fake audio, and Sensity and Resemble.ai did the best with it. Even when the audio was heavily altered, the tools managed to conclude with high confidence that the voices or music were AI-generated. They also spotted the real voices in our tests.
They did a better job identifying real images.
One risk of AI detectors is that they could call something fake when it is real, throwing chaos into developing news events or raising doubts about genuine images.
When a gruesome image of a charred body circulated on social media at the start of the conflict between Israel and Hamas, some observers dismissed it as an AI-generated fake. Several experts said it was probably real, but by then the doubts had become widespread.
Overall, the detectors did a better job of spotting real images than fake ones.
They also performed well at analysing real videos — like recordings from an iPhone or news clips downloaded from the internet. Although AI-generated audio fooled some of the detectors, they all correctly labelled a clip of a reporter reading AI-generated text as the real deal.
Real images that were edited with AI presented unique challenges.
Some AI fakes blend real content with AI-generated media to create lifelike fakes that are even harder for the naked eye to detect. The White House, for instance, posted an altered image of a woman who was arrested in Minneapolis last month. Most of the AI detectors thought the altered image was real.
We found that most detectors missed alterations in our tests, too.
© 2026 The New York Times Company