“I saw it with my own eyes!” Fake, Real, and the Challenge of Reality in the Era of AI

By Dr. Julie M. Albright

"But - I saw it with my own eyes!"

The phrases “I’ll believe it when I see it” and “I saw it with my own eyes” are deeply ingrained in our language and culture, reflecting a common attitude toward truth and skepticism. They underscore our reliance on our senses, on personal experience and visual evidence, as the ultimate tests of a claim’s veracity. This perspective, however, opens up a fascinating dialogue about the nature of belief, perception, and truth as we enter an Alice in Wonderland era of advanced technology and AI-generated information.

The Crucial Role of Provenance Markers in AI-Created Data

In the era of digital creation, distinguishing between what’s real and what’s fabricated has become increasingly challenging. The proliferation of AI-generated content, from DALL-E and Midjourney to OpenAI’s newly announced Sora, which generates realistic video from text prompts, opens new avenues for creativity and innovation, but it also presents a significant challenge in the form of disinformation. Provenance markers offer consumers a reliable way to discern the origins of digital content, acting as a crucial defense against the spread of fake information.

What Are Provenance Markers?

Provenance markers are digital signatures embedded within digital content such as images, videos, and text. These markers provide a verifiable history of the content’s creation and distribution, detailing the tools or platforms used to generate it. For AI-created data, such markers are indispensable for verifying the authenticity of the content and the integrity of its source.
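To make this concrete, here is a minimal Python sketch of the kind of creation history a provenance marker might record. The ProvenanceRecord class, its field names, and the hashing step are illustrative assumptions for this article, not the actual C2PA schema:

```python
# Illustrative sketch only: the class, field names, and hashing step are
# assumptions for this article, not the actual C2PA schema.
from dataclasses import dataclass, field
from typing import List
import hashlib

@dataclass
class ProvenanceRecord:
    """A simplified stand-in for the history a provenance marker carries."""
    generator: str                                    # tool that produced the content, e.g. "DALL-E 3"
    created_at: str                                   # ISO 8601 creation timestamp
    actions: List[str] = field(default_factory=list)  # edits applied after creation
    content_hash: str = ""                            # binds the record to these exact bytes
    signature: str = ""                               # placeholder for a cryptographic signature

def fingerprint(content: bytes) -> str:
    """Hash the content so the record is tied to exactly these bytes."""
    return hashlib.sha256(content).hexdigest()

# Example: record that an image was generated, then later resized.
image_bytes = b"...raw image bytes..."
record = ProvenanceRecord(
    generator="DALL-E 3",
    created_at="2024-02-15T12:00:00Z",
    content_hash=fingerprint(image_bytes),
)
record.actions.append("resized")
print(record)
```

The point of binding the record to a hash of the exact bytes is that any later alteration of the content no longer matches the recorded provenance, which is what makes the history verifiable rather than merely asserted.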

OpenAI’s Implementation of C2PA Watermarks

OpenAI’s decision to embed C2PA (Coalition for Content Provenance and Authenticity) watermarks into the images generated by DALL-E 3 represents a forward-thinking approach to content verification. These watermarks consist of two components: an invisible metadata element and a visible CR symbol located in the top left corner of each image. The invisible metadata serves as a digital fingerprint, detailing the AI tools involved in the content’s creation, while the visible CR symbol acts as a straightforward identifier that lets consumers recognize the content’s AI origins. These watermarks are among the first tools consumers can use to discern real from fake, despite what “their own eyes” tell them.
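For readers who want to inspect these markers themselves, the sketch below shells out to c2patool, the open-source C2PA command-line utility from the Content Authenticity Initiative. It assumes c2patool is installed and on your PATH, and that it prints any embedded manifest as JSON; the exact invocation and output format may differ between versions:

```python
# A hedged sketch: shells out to c2patool, the open-source C2PA command-line
# tool from the Content Authenticity Initiative. Assumes c2patool is installed
# and on PATH; its exact flags and output format may vary between versions.
import json
import subprocess
import sys
from typing import Optional

def read_c2pa_manifest(image_path: str) -> Optional[dict]:
    """Ask c2patool for the C2PA manifest embedded in an image, if any."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or the tool reported an error
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None  # output was not the JSON we assumed

if __name__ == "__main__":
    manifest = read_c2pa_manifest(sys.argv[1])
    if manifest is None:
        print("No C2PA provenance data found (it may never have existed, or was stripped).")
    else:
        print("C2PA manifest found:")
        print(json.dumps(manifest, indent=2))
```

One important caveat: this metadata travels with the file, so screenshots, re-encoding, or platforms that strip metadata can remove it. The absence of a manifest therefore does not prove an image is authentic.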

Conclusion and Next Steps

While “I’ll believe it when I see it” and “I saw it with my own eyes” express our fundamental human inclination toward empirical evidence as a basis for belief, the reliability of our senses and the integrity of visual evidence are becoming harder to trust. Navigating questions of truth and veracity, especially in the digital realm, demands a more sophisticated approach that goes beyond mere seeing to understanding and verifying. OpenAI’s use of watermarks is a first step. “Faking it” is the next stage of cyber warfare and manipulation through false information. Building digital tools that help you tell real from fake, and that protect you from the growing onslaught of fake news and disinformation, is one of the most important projects of our time. Hats off to OpenAI for taking a step in the right direction.