Technology Outpaces Human Perception
Technology is advancing at a pace that often leaves our ability to comprehend it far behind.
Artificial intelligence can now produce images and videos that look remarkably lifelike. Even so, most people remain confident they can tell the real from the artificial, so why not put that confidence to the test?
Below, you’ll find six pairs of images. In each pair, one is genuine, while the other is an AI creation. See if you can identify which is which. The answers are provided at the end of this article.
When examining the photos, consider these tips from photo editors: Do the individuals seem overly refined for their environment? Is their facial symmetry unnaturally perfect? Are their clothing and appearance showing realistic wear and tear? Are there stray hairs noticeable around their head?
John Villasenor, a professor at UCLA specializing in law and engineering, recommends watching for “inconsistencies in lighting and details that seem off.”
A study published in the journal Royal Society Open Science found that the average person could correctly distinguish AI-generated images from real ones only 31 percent of the time.
Interestingly, the study also showed that participants caught fakes more often than they gave themselves credit for.
Anatoly Kvitnitsky, the CEO of AI or Not, collaborates with businesses to identify computer-made images. He points out that it’s not always just about what’s in the foreground.
“The human eye should pay attention to the background details. AI excels in crafting a plausible main subject, but sometimes a person’s face might lack clarity when placed behind other elements. In videos, you might notice that people appear oddly still,” he stated.
“If there’s a car in the background, check the license plate; it may look off. Headlines on signs can sometimes be nonsensical. AI might handle the foreground well, but the background still has its flaws.”
However, this advantage may not last long. Earlier AI models often featured noticeable glitches, like misaligned facial features or unnatural accessories, but those issues have largely been resolved. Kvitnitsky notes that current technology can even create realistic skin textures and imperfections.
“There’s a sort of arms race happening between creators and detectors,” Villasenor observed. “As methods for creating images improve, so too will the tools for detecting them.”
Kvitnitsky’s company assists clients—like insurance firms—in verifying images of vehicle damages, ID cards, and checks through detailed pixel-level analysis. This helps determine whether an image originated from a real camera.
Images produced by widely used programs such as Google Gemini, Adobe Firefly, and ChatGPT can often be traced back to their source, because these tools embed metadata or invisible watermarks indicating how the image was made.
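For the curious, the idea of checking a file for embedded origin codes can be sketched in a few lines of Python. This is a minimal illustration, not a real verifier: the marker strings below are assumptions drawn from the C2PA "Content Credentials" format, and genuine verification would require a dedicated, C2PA-aware tool.

```python
# Sketch: scan an image file's raw bytes for provenance markers.
# Marker strings are illustrative assumptions, not an authoritative list.
AI_PROVENANCE_MARKERS = [
    b"c2pa",   # label used inside C2PA manifest data
    b"jumb",   # JUMBF box type that typically wraps C2PA manifests
]

def find_provenance_markers(image_bytes: bytes) -> list[str]:
    """Return any known provenance markers found in the raw file bytes."""
    haystack = image_bytes.lower()  # case-insensitive match
    return [m.decode() for m in AI_PROVENANCE_MARKERS if m in haystack]

# Usage with a fabricated byte string standing in for a real image file:
fake_file = b"\x89PNG...jumb...c2pa.manifest..."
print(find_provenance_markers(fake_file))  # ['c2pa', 'jumb']
```

A plain byte scan like this can only hint that provenance data is present; it cannot confirm the metadata is intact or authentic, and metadata is easily stripped by re-saving or screenshotting an image.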
For those without advanced detection technology, however, the odds seem stacked against us. A UK study published in November 2025 found that even people with exceptional face-recognition skills could correctly tell real human faces from AI-generated ones only 54 percent of the time.
The overwhelming presence of computer-generated images in media and advertising may be conditioning people to accept AI faces as normal.
Misuse of this technology can have significant real-world consequences. In 2024, a finance employee in Hong Kong was tricked into transferring $25 million after a video call with what appeared to be his company's CFO; the footage, it later emerged, was AI-generated. The employee followed standard procedures throughout, and the deception was discovered only after the money was gone.
Kvitnitsky believes this kind of issue poses a long-term risk to society.
“My biggest worry about AI is that it will breed skepticism towards what people see and hear,” he remarked. “We might sometimes encounter something authentic and yet dismiss it as fake. That could reinforce our biases.”
Another example surfaced recently after Mexican authorities reported the death of drug lord Nemesio "El Mencho" Oseguera Cervantes. The next day, a photo allegedly showing model Maria Julissa sitting with him circulated online, along with claims that the two were romantically involved.
Julissa denied any association with El Mencho, but the potential fallout from being linked to cartel activities is clear.
As the line between reality and AI-generated content continues to blur, Kvitnitsky admits that, under certain conditions, even he could be misled by convincing AI imagery.
“I’m a father of three and run a company focused on AI detection, but if I received a picture suggesting something had happened to one of my kids, my emotional response could easily override my critical thinking. I’d just react based on what I saw,” he confessed.
Answers: 1) B, 2) B, 3) A, 4) B, 5) A, 6) A

