How AI Deepfakes Are Breaking Our Ability to Trust What We See

Sebastian Hills
15 Min Read
Image Credit: Jake Nebov on Unsplash

For most of human history, seeing was believing. That basic assumption is collapsing. Within the first week of 2026, AI-generated images and videos around major news events created so much confusion that experts say we’re facing a fundamental breakdown in trust online.

The problem isn’t just that fake content looks real. It’s that real content now looks fake to many people. UC Berkeley computer science professor Hany Farid’s recent research found that people are as likely to call real content fake as they are to call fake content real. We’ve lost our ability to tell the difference.

President Donald Trump’s Venezuela operation in early January 2026 immediately triggered a flood of AI-generated images, old videos, and altered photos across social media. After an Immigration and Customs Enforcement officer fatally shot a woman in Minneapolis on Wednesday, many people circulated a fake, likely AI-edited image of the scene.

Late in 2025, AI-generated videos showed Ukrainian soldiers apologizing to Russia and surrendering en masse. The videos fooled officials and spread widely before anyone could debunk them.

Instagram head Adam Mosseri wrote on Threads that for most of his life, he could assume photographs and videos were accurate captures of real moments. “This is clearly no longer the case and it’s going to take us, as people, years to adapt,” he said.

Mosseri predicted internet users will shift from assuming content is real by default to starting with doubt, paying more attention to who shares something and why. “This is going to be incredibly uncomfortable for all of us because we’re genetically predisposed to believing our eyes,” he wrote.

In January 2024, fraudsters using deepfake technology impersonated a company’s CFO on a video call, tricking an employee in Hong Kong into transferring $25 million. Deepfake fraud cases surged 1,740% in North America between 2022 and 2023, with financial losses exceeding $200 million in Q1 2025 alone.

Deloitte predicts generative AI could drive U.S. fraud losses from $12.3 billion in 2023 to $40 billion by 2027—a 32% annual growth rate.
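As a rough back-of-the-envelope check (not Deloitte’s own model), the compound annual growth rate implied by those two endpoints can be computed directly; it lands in the low-to-mid thirties, close to the roughly 32% figure cited, with the small difference likely down to rounding in the underlying estimates.

```python
# Back-of-the-envelope compound annual growth rate (CAGR) implied by the
# Deloitte projection: fraud losses growing from $12.3B (2023) to $40B (2027).
start, end, years = 12.3, 40.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 34%, in the ballpark of the cited ~32%
```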

A 2024 McAfee study found 1 in 4 adults have experienced an AI voice scam, with 1 in 10 having been personally targeted. Scammers need as little as three seconds of audio to create a voice clone with an 85% voice match, easily scraped from social media, podcasts, or YouTube videos.

Voice clone fraudsters have impersonated U.S. Secretary of State Marco Rubio to communicate with foreign ministers.

Jeff Hancock, founding director of the Stanford Social Media Lab, explained that AI is undermining our default trust in communication. “That’s going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces,” he said.

The old tricks for spotting fakes, such as checking for the wrong number of fingers or unnatural movements, are disappearing as the technology improves. “In terms of just looking at an image or a video, it will essentially become impossible to detect if it’s fake. I think that we’re getting close to that point, if we’re not already there,” Hancock said.

Research shows automated detection systems experience 45-50% accuracy drops when confronted with real-world deepfakes compared to laboratory conditions. Human ability to identify deepfakes hovers at just 55-60%, barely better than random chance.

Cybersecurity firm DeepStrike estimates deepfakes increased from roughly 500,000 online in 2023 to about 8 million in 2025, with annual growth nearing 900%.

Farid noted that accuracy gets significantly worse when people view content with political themes because confirmation bias takes over. “When I send you something that conforms to your worldview, you want to believe it. You’re incentivized to believe it,” he said. “And if it’s something that contradicts your worldview, you’re highly incentivized to say, ‘Oh, that’s fake.’ And so when you add that partisanship onto it, it blows everything out of the water.”

Research confirms humans cannot consistently identify AI-generated voices, often perceiving them as identical to real people. Survey data across eight countries shows prior exposure to deepfakes increases belief in misinformation. Social media news consumers are more vulnerable to deepfakes, and this effect persists regardless of intelligence.

The existence of deepfakes creates what experts call the “liar’s dividend”: the ability to dismiss authentic recordings as probable fakes. The result is a double bind in which neither belief nor disbelief in evidence can be confidently justified.

Political and geopolitical risks are harder to measure but potentially more harmful: these attacks target processes rather than individuals. Many corporate safeguards, meanwhile, assume that voice confirmation adds security. Deepfakes turn that assumption into a weakness.

In the lead-up to the November 2024 U.S. presidential election, threat actors dressed fabricated content in the trappings of verified news sources, mimicking CNN’s on-screen format alongside AI-generated images that depicted then-President Joe Biden in critical condition in a hospital.

During Germany’s February 2025 election, a deepfake video imitated communications from a Russian intelligence agency and featured AI-generated narration spreading falsehoods about bomb threats, poisoned ballots, and imminent attacks on polling stations.

In the Netherlands’ October 2025 general election, political candidates from the PVV party shared deepfake images of rival politicians being led away by police in handcuffs. Most such content from candidates did not include labels disclosing it had been generated using AI tools.

Renee Hobbs, a professor of communication studies at the University of Rhode Island, said the main struggle for researchers is that people face mental exhaustion trying to verify everything they see. Asking individuals to constantly question every piece of content isn’t realistic.

Current security mechanisms are failing badly against this threat. Rob Greig, Arup’s Chief Information Officer, reflected on the $25 million fraud: “Audio and visual cues are very important to us as humans, and these technologies are playing on that”.

UNESCO researchers note we’re approaching a “synthetic reality threshold”: a point beyond which humans can no longer distinguish authentic from fabricated media without technological assistance.

However, not all AI-generated content is deceptive, nor is all deceptive content produced by AI. Fact-checkers and studies emphasize that most deception still relies on taking real content out of context. A video claiming to show Ukrainian troops surrendering in the Kursk region on March 11, 2025 re-purposed footage from 2022.

Reframed footage is designed to mislead people and can be convincing because it’s “real”, not AI-generated or synthetic. But over time, it damages trust as viewers become concerned they cannot take anything at face value.

Experts increasingly point to provenance rather than perception, focusing on the source and chain of custody of media rather than trying to judge whether content looks real.

Practical tools include cryptographic signing of videos or audio at the moment they are created, tamper-evident metadata that records when and how content was produced, and verification through trusted platforms.

The meaningful line of defense will shift away from human judgment. Instead, it will depend on infrastructure-level protections: secure provenance, such as cryptographically signed media, and AI content tools that follow the Coalition for Content Provenance and Authenticity (C2PA) specifications.
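To make the idea concrete, here is a minimal sketch of signing-at-capture in Python, using the widely available `cryptography` package. The manifest fields and device ID are illustrative assumptions, and this is not the C2PA format itself, only the underlying trust pattern: verify the source and chain of custody rather than the pixels.

```python
# Minimal illustration of provenance-by-signature (NOT the C2PA format):
# a capture device signs a manifest describing the media at creation time,
# and anyone holding the device's public key can later verify that the
# file and its claimed origin have not been altered.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(media_bytes: bytes, device_id: str, private_key: Ed25519PrivateKey):
    """Build and sign a provenance manifest at the moment of capture."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "device_id": device_id,
        "captured_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, private_key.sign(payload)


def verify_capture(
    media_bytes: bytes, manifest: dict, signature: bytes, public_key: Ed25519PublicKey
) -> bool:
    """Check that the file matches the manifest and the manifest is authentic."""
    if hashlib.sha256(media_bytes).hexdigest() != manifest["sha256"]:
        return False  # file was modified after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False  # manifest was forged or tampered with


# Usage sketch with a hypothetical device key
key = Ed25519PrivateKey.generate()
video = b"...raw video bytes..."
manifest, sig = sign_capture(video, device_id="camera-0001", private_key=key)
print(verify_capture(video, manifest, sig, key.public_key()))            # True
print(verify_capture(video + b"edit", manifest, sig, key.public_key()))  # False
```

Real provenance systems such as C2PA embed this kind of signed manifest inside the media file itself and chain it across edits, but the trust model is the same: confidence comes from the signature and the key holder, not from how the content looks.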

One simple protocol could stop many attacks: “All fund transfers over $10,000 requested via video or email must have voice confirmation via a trusted, pre-registered phone number”. This would have stopped the $25 million Arup fraud regardless of how perfect the deepfake was.

The key is shifting focus from awareness to following rules. Stop trying to make every employee a detection expert. Instead, design strong verification procedures and train employees to follow them without exception.
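As a sketch of how such a rule becomes infrastructure rather than judgment, a payment workflow can simply refuse to release high-value transfers requested over video or email until an out-of-band confirmation is recorded. The threshold, channel names, and fields below are illustrative assumptions, not any particular vendor’s system.

```python
# Hypothetical policy check: high-value transfers requested over video or
# email are held until confirmed through a pre-registered out-of-band channel.
from dataclasses import dataclass

HIGH_RISK_CHANNELS = {"video_call", "email"}
THRESHOLD_USD = 10_000


@dataclass
class TransferRequest:
    amount_usd: float
    requested_via: str            # e.g. "video_call", "email", "in_person"
    out_of_band_confirmed: bool   # callback to a trusted, pre-registered number


def may_release(req: TransferRequest) -> bool:
    """Enforce the rule regardless of how convincing the request looked."""
    needs_confirmation = (
        req.amount_usd > THRESHOLD_USD and req.requested_via in HIGH_RISK_CHANNELS
    )
    return req.out_of_band_confirmed if needs_confirmation else True


# The Arup-style scenario: a flawless deepfake on a video call still fails
# the check until someone completes the out-of-band callback.
print(may_release(TransferRequest(25_000_000, "video_call", False)))  # False
print(may_release(TransferRequest(25_000_000, "video_call", True)))   # True
```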

Siwei Lyu, a professor of computer science who helps maintain an open-source AI detection platform called DeepFake-o-meter, said everyday internet users can boost their detection skills by paying attention. People should ask themselves why they trust or distrust what they see.

“In many cases, it may not be the media itself that has anything wrong, but it’s put up in the wrong context or by somebody we cannot totally trust,” Lyu said. “So I think, all in all, common awareness and common sense are the most important protection measures we have”.

Organizations across sectors depend on trusted information flows. Deepfakes threaten this trust at its foundation—whether through falsified medical records, CEO impersonations that could crash stock prices, or fabricated evidence submitted to insurers.

Most organizations respond with technical fixes: deploy new detectors, update verification systems, train staff. Yet this approach misses the deeper shift underway. It treats deepfakes as isolated threats rather than symptoms of a deeper weakness in how institutions produce, validate, and share knowledge.

AI agents can now solve complex security tests meant to verify humans. In one recent study, 126 participants put several large language models through a Turing test and judged OpenAI’s GPT-4.5 to be the human 73% of the time.

According to the FBI, undercover North Korean IT workers have successfully landed jobs at more than 100 U.S. companies by posing as remote workers. These scams, which often rely on AI tools, have funneled hundreds of millions of dollars a year to North Korea.

Kelly Jones, Cisco Systems Inc.’s chief people officer, says the company has added special identity checks to its application process. Google and other major corporations now require meeting new hires in person partly for this reason.

The confusion around AI content is compounded by social media platforms that pay creators for engagement, giving users an incentive to recycle old photos and videos to heighten emotion around viral news moments. The resulting blend of false and authentic material deepens the breakdown of trust online.

According to Full Fact’s 2025 report, Meta should not abandon third-party fact-checking globally and can still reverse its decision to end the program in the U.S. As platforms develop community-based fact-checking models, they must work with high-quality, independent fact-checkers who are well funded and able to act quickly.

Very large platforms using AI-powered tools need to label potentially harmful content more clearly and stop amplifying it through their automatic recommendation systems. They should stop acting as megaphones for false information in the endless pursuit of engagement and advertising revenue.

Deepfakes are moving toward real-time generation that reproduces the fine details of a person’s appearance closely enough to evade detection systems. The frontier is shifting from static visual realism in pre-made clips to live or near-live synthesis.

Farid explained the trajectory simply: “This is the worst it will ever be”. Whatever quality deepfakes have achieved now, they will only get better.

The World Economic Forum ranks misinformation and disinformation, now massively amplified by synthetic media, among the world’s top risks, noting the increasingly blurred line between AI- and human-generated content.

The generative AI market is projected to grow 560% between 2025 and 2031, reaching $442 billion. Among fraud experts, 46% have encountered synthetic identity fraud, 37% voice deepfakes, and 29% video deepfakes.

Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented.

The deepfake challenge is ultimately less about artificial intelligence itself and more about whether human systems can adapt to a new reality. Trust will not be preserved through perfect detection or rules alone. It will depend on shared effort.

Technologists need to build tracking systems into how content is created. Institutions need to update how they verify information and make decisions. And citizens need to learn how to question content carefully without assuming everything is fake.

The foundations of believability will not rebuild themselves, but they can be rebuilt if responsibility is shared across those who create, govern, and consume digital media.

We are not merely facing a crisis of disinformation, that is, lies spread with deliberate intent to deceive. We’re facing a crisis of knowing itself. Deepfakes don’t just introduce lies into our information system; they damage the very mechanisms by which societies build shared understanding.

The challenge ahead isn’t technical. It’s social, institutional, and deeply human. We’re learning to navigate a world where our oldest and most reliable sense, vision, can no longer be fully trusted without help.
