Fake media is getting real. Deceptively real.
AI-generated synthetic media, deepfakes, and cheapfakes – even AI voice cloning – are getting so eerily accurate at imitating real life that it’s becoming nearly impossible to distinguish fact from fiction. The rise of generative AI is so dizzying that policymakers and regulators can’t keep pace (though some are trying).
Celebrity deepfakes and the “Bold Glamour” TikTok beauty filter are just the beginning. Before the decade’s out, experts estimate that up to 90% of all the content you see on the internet will be synthetically generated. Which is insane! So, those Instagram profiles you’re scoping? AI played a hand in making everyone look like a Kardashian. The campaign video you just watched? Totally fake.
The dangers of the proliferation of fake media are real and urgent. Left unchecked, it could lead to the erosion of trust and transparency in society. Put another way: Getting duped by fake media is bad. But not believing anything to be real? That’s worse.
On the latest episode of The Next Great Thing podcast, I spoke to Jeff McGregor, CEO of Truepic, about the very real dangers of this technoethical crisis. Named one of TIME's Best Inventions of 2022, Truepic’s secure camera technology aims to restore trust and transparency in digital media across the internet by verifying and authenticating digital media at the point of creation. By capturing, signing, and sealing the metadata inside any photo or video, Truepic’s tech creates a tamper-evident digital fingerprint that can be tracked across the web.
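To make the “capture, sign, and seal” idea concrete, here’s a toy sketch in Python. To be clear, this is not Truepic’s actual system – the C2PA approach relies on public-key certificates and standardized manifests – and the key, field names, and HMAC scheme below are illustrative assumptions. The core idea is the same, though: bind a cryptographic signature to the image bytes plus capture metadata at creation time, so that any later alteration breaks the seal.

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret. A real provenance system (like C2PA)
# would use a public/private key pair and a certificate chain instead.
SECRET_KEY = b"demo-device-key"

def seal(image_bytes: bytes, metadata: dict) -> dict:
    """Bind a signature to the pixels plus capture metadata at creation time."""
    digest = hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "digest": digest, "signature": signature}

def verify(image_bytes: bytes, record: dict) -> bool:
    """Recompute the digest; any change to pixels or metadata breaks the seal."""
    expected = hashlib.sha256(
        image_bytes + json.dumps(record["metadata"], sort_keys=True).encode()
    ).hexdigest()
    sig = hmac.new(SECRET_KEY, expected.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, record["signature"])

photo = b"\x89PNG...raw image bytes..."
record = seal(photo, {"device": "cam-01", "time": "2023-03-15T12:00:00Z"})
print(verify(photo, record))            # True: the untouched image passes
print(verify(photo + b"edit", record))  # False: any alteration is detected
```

Notice the asymmetry this buys you: instead of trying to spot fakery after the fact, the seal lets anyone downstream check, cheaply and deterministically, that a file is exactly what the camera produced.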
Check out all podcast episodes.
One great thing I learned: We’ll never detect fakes faster than AI can create them. Instead, we need to shift our focus to authenticating what’s real. We’re all still buzzing about ChatGPT (and now, GPT-4). Maybe deep down, simmering beneath the excitement, is fear. Fear that the world is now moving at the speed of generative AI…and we just can’t keep up. As it keeps getting better and better at duping us into thinking that the digital content we see is real, it’s also getting better and better at “covering up” the very elements inside digital content that would ordinarily clue us in to what’s been altered or tampered with.
AI really is that good at deception. And it keeps getting craftier.
That’s why Jeff strongly believes that ensuring the authenticity of visual media is the most viable and scalable path forward. It’s a logical and smart approach, and perhaps our best shot at containing the wildfire of dangerous misinformation and nefarious propaganda online. By proactively establishing a framework that ensures a photo or video’s authenticity – and that also tracks and records the history of a piece of visual content as it moves across the internet ecosystem – maybe, just maybe, we can stay one step ahead of the manipulative media landscape and preserve more than a modicum of trust in our institutions, our relationships, and in society.
He’s not alone. Companies like Microsoft, Sony, and Adobe have joined Truepic in the Coalition for Content Provenance and Authenticity, or the C2PA. It’s a multi-stakeholder organization that’s designing shared file standards for visual media across the internet.
Jeff sums up the gravity of what we’re up against: “One of the fundamentally scary things about what's happening on the internet today is that, as more disinformation and misinformation spreads and as technologies that allow for the creation of synthetic content continue to proliferate, we start to end up in a world where we can't trust anything. We can't even trust the real content because…we’ve been victims of fraud or we've seen disinformation in real time on our social media platforms. And that creates a dynamic where we start to call everything into question. So, if anything can be faked, nothing is real anymore. And it starts to sow a societal level of distrust in the information that we're consuming.”
Truepic’s solution is a great start to tackling the problem. Image authentication is a powerful tool that can help us discern what's real and what's not. But the responsibility falls on everyone who builds software – or any digital experience where manipulation can run rampant – to consider adding image authentication to the user experience. Over time, it can train users to check what’s verified, keep trusting what’s real, and avoid falling prey to deceptive and nefarious digital media.
How are you thinking about this problem? What types of apps or web experiences need this technology now (and you can’t say TikTok or Instagram!)? Share your comments below – I’d love to hear your thoughts.