
We’ve Lost. Reality is a Deepfake Now.

February 08, 2026 • By Azzar Budiyanto

Gila! (Crazy!) Just three days ago, the internet — and my already fragile sanity — collectively screamed, “Reality is losing the deepfake war.” The Verge said it, Matt Gross on LinkedIn echoed it, Facebook users threw up their hands, and even Nilay Patel’s Decoder podcast dedicated an episode to this existential dread. And you know what? Sudah kuduga (I knew it). We, the humble inhabitants of this digital circus, are not just losing; we’ve already surrendered the very concept of verifiable truth to the relentless, insidious march of AI-generated content. Welcome to the new normal, where your eyes lie to your brain, and the internet is just one giant hallucination.

The Great Unraveling: What Even IS “Reality” Anymore?

Let’s be brutally honest here. The term “deepfake” used to conjure images of cheesy celebrity face-swaps, a niche corner of the internet for… well, let’s just say specific interests. Cute, right? Like a toddler playing with Photoshop. Now? Now it’s an existential threat, a digital pathogen infecting our collective consciousness. We’re not talking about simple video edits anymore, folks. We’re talking about sophisticated AI models churning out content so convincing, so nuanced, so utterly indistinguishable from reality, that even seasoned professionals are fooled. And the speed at which this technology evolves? It’s enough to make a “Wong Edan” (Javanese for “madman”) like me want to smash my modem and move to a cave.

From Prank to Pandemic: A Brief (and Depressing) History

Remember when deepfakes were built on Generative Adversarial Networks (GANs)? Ah, the good old days. Two neural networks locked in an eternal struggle: one (the generator) creating fakes, the other (the discriminator) trying to spot them. It was an arms race, but at least the fakes often had that tell-tale shimmer, a slightly off-kilter blink, or some dodgy artifact. You could feel something was wrong, even if you couldn’t pinpoint it. It was like a bad knock-off branded shirt – you knew it wasn’t quite right.

Then came the OpenAI Sora announcement. Suddenly, the game changed from “bad knock-off” to “perfect counterfeit that even the brand owner can’t verify.” Sora, with its ability to generate photorealistic videos up to a minute long from a text prompt, complete with complex scene dynamics, consistent character appearances, and subtle emotions, ripped the rug out from under any remaining optimism. It’s not just deepfakes; it’s deep realities. And it’s not just video. Text, audio, images – everything is now fair game for AI’s creative, and often deceptive, prowess.

The problem isn’t just “deepfakes” anymore; it’s the broader category of “AI-generated content,” which quickly devolves into “slop,” and then morphs into “disinformation.” The lines are so blurred, you need a microscope and a team of AI ethicists to even begin to understand what you’re looking at. And who has time for that when you’re endlessly scrolling through your feed, half-asleep, looking for cat videos?

“Not long ago, I posted here about how I never want to see AI images or videos anywhere ever again. Well, The Verge’s Jessica Weatherbed just… confirmed my deepest fears.”

— Matt Gross, LinkedIn

Matt, my friend, you and me both. Except my fears extend to everything. My morning coffee, my bank statement, that viral video of a cat playing the piano – is any of it real? Or is it all just code-generated hallucinations tailored to my specific data profile? The thought is enough to make one consider a career as a hermit.

The Losing Battlegrounds: Where We’re Getting Our Asses Kicked

So, why are we losing so badly? There’s no single smoking gun; it’s a Gatling gun of systemic failures, technological imbalances, and plain old human incompetence.

1. The Asymmetrical Arms Race: Creators vs. Detectors

This is the fundamental problem. Creating convincing deepfakes is getting easier, faster, and cheaper. Detecting them, however, is an uphill sprint against a digital tsunami. AI models for generation are designed to produce outputs that are statistically indistinguishable from real data. Detection models, on the other hand, are constantly playing catch-up, trying to identify subtle artifacts that the next generation of generative AI will inevitably iron out. It’s like trying to fight a ghost with a magnifying glass. The moment you find a new tell, the ghost learns to hide it better. The generator learns to bypass the discriminator. It’s baked into the very architecture of GANs, and diffusion models aren’t far behind in this race to absolute realism.

Consider this snippet:


# Simplified GAN training loop (WGAN-GP-style losses).
# Assumes generator, discriminator, both optimizers, get_real_data,
# and a gradient_penalty helper are defined elsewhere.
import torch

for epoch in range(num_epochs):
    # Train Generator: push the critic's score on fakes upward
    optimizer_g.zero_grad()
    z = torch.randn(batch_size, latent_dim)
    fake_images = generator(z)
    g_loss = -discriminator(fake_images).mean()
    g_loss.backward()
    optimizer_g.step()

    # Train Discriminator (critic): widen the real-vs-fake score gap
    optimizer_d.zero_grad()
    real_images = get_real_data()
    d_loss_real = discriminator(real_images).mean()
    d_loss_fake = discriminator(fake_images.detach()).mean()
    gp = gradient_penalty(discriminator, real_images, fake_images.detach())
    d_loss = d_loss_fake - d_loss_real + gp
    d_loss.backward()
    optimizer_d.step()

This simple loop highlights the core dynamic: the generator gets better at fooling the discriminator, which then forces the discriminator to get better at detecting. But the generator always has the offensive advantage; it’s dictating the terms of engagement. It only needs to find one way to fool, while the discriminator needs to defend against all ways.

2. The Human Factor: Our Brains Are The Original Deepfake Detectors, and They’re Bad At It

We are, fundamentally, pattern-matching machines. Our brains are hardwired to interpret visual and auditory stimuli as truth. For millennia, “seeing is believing” was a reliable heuristic for survival. Now, that heuristic is a vulnerability. When faced with something that looks and sounds utterly real, our default setting is to believe it. Skepticism requires conscious effort, critical thinking, and often, prior knowledge that we simply don’t have time or energy to apply to every single piece of content we consume.

And let’s not forget confirmation bias. If a deepfake confirms what we already want to believe, our critical faculties switch off faster than a dodgy Wi-Fi connection during a crucial download. This is gold for disinformation campaigns, as The Atlantic pointed out back in October 2025, specifically referencing how “a deepfake president molds perception to serve his own interests.” It’s terrifying because it leverages our own cognitive biases against us. Dasar manusia! (Typical humans!)

3. Metadata Standards: A Bureaucratic Dream, a Practical Nightmare

Efforts like the Coalition for Content Provenance and Authenticity (C2PA) and the Adobe-led Content Authenticity Initiative are noble. The idea is brilliant: embed cryptographic metadata into content at the point of creation, detailing its origin, modifications, and whether AI was involved. Think of it as a digital birth certificate and medical record for every image, video, and audio file.

But noble intentions often pave the road to digital hell. Why is this falling flat?

  • Messy Standards and Fragmentation: Everyone wants to do their own thing. Adobe, Microsoft, Google, camera manufacturers – they all have stakes, and getting them to agree on a universally enforced standard is like herding digital cats.
  • Lack of Adoption: Not all creators use tools that embed C2PA. Many don’t even know what it is. And for those who do, it’s often an opt-in feature. The incentives aren’t there for mass adoption.
  • Easy Stripping: A significant amount of content passes through multiple platforms, gets re-compressed, re-uploaded, screenshotted, and re-shared. Metadata, especially in informal sharing channels (WhatsApp, Telegram, DMs), gets stripped faster than paint on a cheap car; the sketch at the end of this section shows just how little that takes.
  • Open-Source Dilemma: Many powerful AI generation tools are open source. Who’s going to enforce metadata embedding on a random GitHub repo that’s used to generate some truly unhinged content? Nobody, that’s who.
  • Malicious Intent: Even if metadata is present, a malicious actor can simply strip it, forge it, or generate new content without it. The system relies on trust, and trust is the first casualty in any war.

It’s like building a secure vault door, but leaving the back entrance wide open, or worse, making the vault door out of paper. The intention is good, but the execution and widespread enforcement are utterly lacking.
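Don’t just take my word on the stripping. Here’s a minimal Pillow sketch (the filenames are hypothetical placeholders): a plain re-save, the kind every re-upload pipeline performs incidentally, discards embedded metadata by default.

# Minimal sketch: a plain re-save silently discards embedded metadata.
from PIL import Image

im = Image.open("photo_with_credentials.jpg")
print(im.info.keys())  # may include 'exif', 'xmp', and similar blocks

# Save without explicitly passing that metadata along, and it's gone.
im.save("reshared.jpg", quality=90)
print(Image.open("reshared.jpg").info.keys())  # provenance: vanished

No adversarial wizardry required. This is the default behavior of half the image-handling code on Earth.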

4. The Slop Problem: Drowning in AI Gunk

This is perhaps the most insidious aspect. It’s not just the expertly crafted deepfakes that are eroding reality; it’s the sheer, mind-numbing volume of AI-generated “slop.” Low-effort, mass-produced images, bland articles, uncanny valley videos that serve no real purpose other than to fill feeds and generate clicks. This “slop” desensitizes us. It normalizes the uncanny, blurs the line between real and fake, and makes us less equipped to spot the truly dangerous deepfakes when they appear.

Social media platforms are overwhelmed. Their moderation teams, already understaffed and overworked, simply cannot keep up with the deluge. Imagine trying to filter a firehose of raw sewage with a tea strainer. That’s their daily reality. And as the volume of AI content grows exponentially (which it will, because it’s cheap to produce), the task becomes not just difficult, but mathematically impossible.

5. Legal and Ethical Quagmires: When Laws Can’t Keep Up with Code

Our legal systems are ancient, designed for a pre-digital, pre-AI world. They move at the speed of snails, while AI moves at the speed of light. Consider the complexities:

  • Jurisdictional Nightmares: A deepfake generated in one country, consumed in another, and causing harm in a third. Which laws apply? Who has jurisdiction? International cooperation on digital law is notoriously difficult.
  • Freedom of Speech vs. Harm: Where do you draw the line? Is a satirical deepfake protected speech, even if it uses a public figure’s likeness? What about a deepfake that falsely implicates someone in a crime? The nuance is lost in the current legal framework.
  • Proof and Attribution: Proving who created a deepfake, especially with anonymous networks and open-source tools, is incredibly challenging. And even if you prove it, how do you hold them accountable?
  • Lack of Proactive Legislation: Governments are reactive, not proactive. They wait for the disaster to happen before even beginning to consider legislation, by which point the technology has evolved three generations past their understanding.

This isn’t just about politicians; it’s about average people. Imagine a deepfake porn video of you, or a deepfake audio recording of you making racist remarks. The damage is done instantly, the viral spread is unstoppable, and seeking legal recourse is a protracted, expensive, and often futile battle. Your life is ruined before the lawyers even finish their first coffee. It’s a terrifying thought, no?

6. The “Wong Edan” Conundrum: Do We Even Care Anymore?

Here’s where my “Wong Edan” personality really kicks in. The cynical, perhaps fatalistic, truth: a significant portion of the public either doesn’t care, or actively embraces the blurred reality. Why? Because it’s entertaining. Because it confirms their biases. Because it allows for plausible deniability (“Oh, that’s just an AI-generated image!”).

We’ve grown so accustomed to highly edited, airbrushed, filter-heavy content that the leap to AI-generated “perfection” is less a shock and more a natural progression. We live in an age of manufactured authenticity, where influencers peddle carefully curated (and often fake) lives. The public has been desensitized to the artificial for years. Deepfakes are just the logical next step in our collective journey into blissful ignorance. Perhaps this is humanity’s true destiny – to live in a self-constructed digital dream, oblivious to the code that stitches it together. Gila!

Why AI Labeling Efforts Are Falling Flat (The Deep Dive, for those who still have hope)

Let’s double-click on the failure of labeling, because this was supposed to be our shield, our beacon of truth in the digital storm. Why is it more like a leaky umbrella in a typhoon?

Technical Challenges Are Not Insignificant

  • Watermarking: Digital watermarks can be embedded (like visible logos or invisible patterns). But advanced image/video processing techniques (resizing, cropping, compression, noise addition) can degrade or remove them, and adversarial attacks can specifically target and erase them (see the toy sketch after this list).
  • Cryptographic Signatures: While promising, they only work if the entire chain of custody is secure. If a file is opened, edited by a non-compliant tool, or transcoded, the signature can be invalidated or lost.
  • Computational Overhead: Embedding robust, tamper-proof metadata for every piece of content generated at scale (especially video) adds significant computational cost and time, which creators and platforms often want to avoid.
  • The “Untraceable” Generation: Many cutting-edge AI models are designed for maximal realism, not for embedding forensic markers. Retrofitting them with robust labeling capabilities without degrading performance or being easily bypassed is a continuous challenge.
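
How fragile is fragile? Here’s a toy sketch, assuming the most naive scheme imaginable: a 1-bit watermark hidden in each pixel’s least significant bit. One pass of ordinary JPEG compression, the kind every platform applies on upload, obliterates it. Filenames are hypothetical placeholders.

# Toy sketch: a naive LSB watermark vs. one round of JPEG compression.
import numpy as np
from PIL import Image

img = np.array(Image.open("generated.png").convert("L"))
mark = np.random.default_rng(0).integers(0, 2, img.shape, dtype=np.uint8)

# Embed: overwrite each pixel's least significant bit with the mark.
marked = (img & 0xFE) | mark

# Lossy re-encode, then try to read the mark back out.
Image.fromarray(marked).save("marked.jpg", quality=90)
recovered = np.array(Image.open("marked.jpg").convert("L")) & 1
print((recovered == mark).mean())  # ~0.5, i.e., no better than coin-flipping

Production schemes (Google’s SynthID and friends) are far sturdier than this strawman, but the dynamic is identical: every defense invites a targeted attack. And for embedded provenance metadata, a malicious actor’s pipeline is barely more involved: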

# Conceptual pseudo-code for a bypass. The helpers (load_media,
# apply_adversarial_noise, strip_all_metadata, reencode_media,
# save_media) stand in for widely available off-the-shelf tooling.
def remove_ai_metadata(file_path, new_file_path):
    # Load image/video
    content = load_media(file_path)

    # Apply an adversarial perturbation to degrade or remove watermarks
    content = apply_adversarial_noise(content, epsilon=0.1)

    # Strip standard metadata headers (EXIF, XMP, C2PA)
    content = strip_all_metadata(content)

    # Re-encode to wash out any lingering forensic traces
    content = reencode_media(content, quality=90)

    save_media(content, new_file_path)
    return new_file_path

The above illustrates a malicious actor’s straightforward approach. It doesn’t take a genius to figure out how to scrub a digital file clean.

Economic Disincentives and Platform Politics

This is where it gets truly cynical. Do platforms and some content creators really want clear, undeniable labels on everything? Not always.

  • Engagement Over Authenticity: Misinformation and sensational deepfakes often generate massive engagement. Clear labels might reduce that. Platforms are driven by metrics, and “viral” often trumps “true.”
  • Cost of Enforcement: Implementing robust labeling, detection, and enforcement mechanisms is expensive. It requires significant investment in technology, moderation staff, and legal teams. Many platforms prefer to put out fires reactively rather than prevent them proactively.
  • “Plausible Deniability” for Platforms: If it’s hard to definitively prove something is AI-generated, platforms can claim they’re doing their best and avoid stricter regulations or legal culpability.

User Adoption and Awareness: Apathy is Our Downfall

Let’s be real. How many of us check the EXIF data on every photo, or seek out C2PA content credentials? Very few, even though (as the sketch after this list shows) it takes about five lines of code. Most users consume content passively. Even if a label is present, it’s easily ignored, dismissed, or simply not understood.

  • Label Fatigue: If every piece of content has a label (“AI-generated,” “Human-edited,” “Authentic”), the labels quickly lose meaning. It becomes just another piece of digital clutter.
  • Lack of Education: The general public is largely unaware of what content credentials are, why they matter, or how to interpret them. This isn’t a technical literacy problem; it’s a fundamental digital literacy gap.
  • Trust Erosion: If people already distrust institutions and media, why would they suddenly trust a “Content Authenticity” label provided by those same entities? The well of trust has been poisoned.
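
For the rare curious reader, here’s what “checking the EXIF” even means in practice: a minimal Pillow sketch (the filename is a hypothetical placeholder).

# Dump a photo's EXIF tags; empty output means stripped or never present.
from PIL import Image, ExifTags

exif = Image.open("downloaded.jpg").getexif()
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, tag_id), value)

Five lines, and almost nobody will ever run them. That’s the adoption problem in miniature.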

The Human-in-the-Loop Problem: Slow and Error-Prone

Even with the best AI detection, there will always be a need for human review, especially for nuanced or borderline cases. But humans are slow, expensive, and prone to error and bias. Scaling human moderation to match the scale of AI-generated content is impossible. The ratio of content to human moderators is already astronomically skewed, and AI generators are only widening that gap.

The Stakes: What Happens When Reality is a Deepfake?

This isn’t just about fun and games. This is about the foundational pillars of our society. The consequences of losing the deepfake war are catastrophic:

  • Erosion of Democracy: We’ve already seen how “fake news” impacts elections. Deepfakes amplify this by making it impossible to verify critical information. A deepfake of a political candidate confessing to a crime, or a fabricated video of a world leader declaring war, could have devastating, real-world consequences before anyone can even debunk it. Donald Trump’s “War on Reality,” as The Atlantic called it, will be fought with AI.
  • Destruction of Reputation and Privacy: Imagine your face, your voice, used to create content that utterly destroys your personal or professional life. The damage is irreversible, even if proven fake. This is particularly terrifying for women, who are disproportionately targeted by non-consensual deepfake pornography.
  • Financial Fraud and Scams: Deepfake audio and video can be used to impersonate CEOs, family members, or bank officials, leading to sophisticated scams and massive financial losses. “I never want to trust AI to read and write my contracts,” said Docusign’s CEO. Good luck with that, mate.
  • Weaponization of Information: State actors and malicious groups can leverage deepfakes for propaganda, psychological warfare, and to sow discord and distrust within populations. Imagine entire historical narratives being rewritten with AI-generated “evidence.”
  • The End of Shared Reality: If we can’t agree on what’s real, how do we have productive discourse? How do we make collective decisions? Society itself relies on a shared understanding of truth. When that’s gone, we descend into chaos, tribalism, and a battle of competing, AI-generated realities.

The “Wong Edan” Prescription: Can We Win? (Hint: No, but We Can Try Not to Die Horribly)

So, as a certified “Wong Edan,” do I think we can win this war? Hahaha! Oh, you’re funny. No, we cannot “win” in the traditional sense. The technology is too powerful, too decentralized, too rapidly evolving, and our human vulnerabilities are too profound. But we can, perhaps, mitigate the damage and try to build some semblance of resilience.

Here’s my cynical, yet pragmatic, “Wong Edan” action plan for survival:

1. Education, Education, Education (and then some more)

  • Critical Thinking as a Core Skill: This needs to be taught from primary school onwards. How to evaluate sources, identify biases, question visual evidence, and understand the capabilities of AI. It’s not just tech literacy; it’s existential literacy.
  • Public Awareness Campaigns: Governments, NGOs, and tech companies need to fund massive, ongoing campaigns to educate the public about deepfakes and AI-generated content. Not just “beware,” but “here’s how to spot it” and “here’s what to do if you see it.”
  • Journalistic Standards Reinvention: News organizations must adopt hyper-vigilant verification processes and be transparent about their own usage of AI. Credibility will be their most valuable currency.

2. Stronger, Enforced Platform Policies (with a Kick)

  • Mandatory Disclosure: Platforms MUST make it mandatory for users to disclose if content is AI-generated. And there must be real, impactful penalties for non-compliance (not just a slap on the wrist).
  • Proactive Detection and Removal: Platforms need to invest significantly more in AI-powered detection systems, not just for deepfakes, but for all forms of AI-generated misinformation and harmful “slop.” This isn’t just a cost; it’s a societal responsibility.
  • Transparency Reports: Regular, detailed reports on AI-generated content detected, labeled, and removed, along with trends and challenges.

3. Open-Source Detection Tools (Fighting Fire with Fire)

If malicious actors use open-source generative AI, we need open-source detection and forensic tools. This levels the playing field somewhat, allowing independent researchers, journalists, and individuals to verify content without relying solely on corporate or government gatekeepers.
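
What would such a tool look like at its crudest? Below is a toy sketch of one classic forensic heuristic: many generative models leave telltale energy in the highest spatial frequencies. This is an illustration of the idea, not a real detector; the 0.75 radial cutoff is arbitrary, and any threshold you compare the ratio against would have to be calibrated on known-real images.

# Toy forensic heuristic: fraction of spectral energy at high frequencies.
# Illustrative only; real detectors combine many such signals.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path):
    img = np.array(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - cy, x - cx)
    # Sum the energy outside 75% of the maximum inscribed radius.
    outer = spectrum[radius > 0.75 * min(cy, cx)].sum()
    return outer / spectrum.sum()

print(high_freq_energy_ratio("suspect.jpg"))  # compare against real-photo baselines

Real open-source efforts stack dozens of signals like this, and still get lapped by the next model generation; which is, of course, rather the point of this whole rant.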

4. Legislative Action (Slow, but Necessary)

  • Global Coordination: This is a global problem requiring global solutions. International treaties and agreements on deepfake regulation, attribution, and prosecution are essential, however difficult to achieve.
  • Specific Deepfake Legislation: Laws that explicitly address the creation, distribution, and malicious use of deepfakes, with clear definitions, penalties, and mechanisms for victims’ recourse.
  • Liability for Platforms and Creators: Holding platforms accountable for the spread of harmful deepfakes, and creators accountable for their malicious creations, could provide necessary incentives.

5. Embracing a New “Media Literacy”

We need to accept that “seeing is believing” is dead. Long live “questioning everything.” This means a fundamental shift in how we consume information. Every image, every video, every audio clip should be approached with a healthy dose of skepticism, especially if it’s sensational or confirms our existing biases. It’s exhausting, I know, but the alternative is intellectual slavery to algorithms.

Conclusion: The Deepfake Abyss and Our Grim Future

So, reality is losing the deepfake war. The Verge got it right. We’re standing on the precipice of an information apocalypse, where truth is relative, and every pixel is a potential lie. The technology is advancing at an exponential rate, our institutions are struggling to keep up, and human nature, with its susceptibility to bias and its hunger for sensationalism, is proving to be our Achilles’ heel.

As a “Wong Edan,” I look at this landscape with a mix of despair and perverse fascination. It’s a mess of our own making, a Frankenstein’s monster born of technological prowess and human weakness. We might not “win” this war, but we have a choice: either we surrender completely to the digital illusion, or we fight tooth and nail for every scrap of verifiable truth, armed with skepticism, education, and the faint hope that critical thinking isn’t entirely obsolete. Prepare yourselves, my friends. The future is going to be incredibly, terrifyingly… interesting. And probably fake. Sudah kuduga.