Provenance Authentication of AI-Generated Content
Plus, new intro to Python materials!
The Lab Report
I’m Tyler Elliot Bettilyon (Teb) and this is the Lab Report: Our goal is to deepen your understanding of software and technology by explaining the concepts behind the news.
If you’re new to the Lab Report you can subscribe here.
If you like what you’re reading you’ll love one of our classes. Schedule a training from our catalog or request a custom class consultation.
From The Lab
This month, we released a revised and expanded version of our Intro to Python materials. The materials include new sections on Classes, Regular Expressions, and simple data analysis. They’re also explicitly designed as a series of eight 2-hour class sessions.
As always, these materials are open source with an incredibly permissive Public Domain license. Use them however you see fit, and if you’d like to schedule a training led by yours truly, just click here.
If you’re looking for a free sample of my teaching, I’m running a free session of the section on Python Classes this Wednesday, March 6th, at 6:00pm MST; simply join this Zoom meeting to attend.
Today’s Lesson
The raw images and manifest JSON we generated as part of today’s lesson can be viewed on Github.
Provenance Authentication of AI-Generated Content
Generative AI’s ability to deceive continues to break boundaries. Image and audio generators keep improving, and frontier models such as Sora demonstrate that completely fabricated videos are also a near-term concern.
Whether it’s porn, phone calls from Joe Biden, news stories, or pictures of food on DoorDash — AI is faking everything.
Without catastrophizing too much, I think it’s reasonable to be concerned about the democratization and automation of this kind of fakery. Powers once reserved for a few well-resourced groups and individuals are now in the hands of every internet creep, would-be propagandist, and click-farming shyster.
Today’s lesson concerns one of the tools being rolled out to combat this emerging media misinformation ecosystem: provenance authentication.
Provenance authentication is any mechanism that allows someone to verify the origin and history of a piece of media. One such method, spearheaded by Adobe’s Content Authenticity Initiative (CAI), is the Coalition for Content Provenance and Authenticity’s (C2PA) provenance model.
Membership in and support of C2PA has been growing fast. In February, OpenAI added C2PA signing to its flagship image generator (DALL-E 3) and Meta announced that Facebook, Instagram, and Threads will all add an interface to display C2PA information for supported media. Reports also suggest that Nikon, Sony, and Canon will add C2PA support directly to some camera models.
Right now, the list of C2PA members includes Adobe, Google, Microsoft, Intel, AWS, the BBC, and many more.
So, What Is C2PA?
C2PA is a system for creating cryptographically verifiable metadata, which can then be attached to various types of data. This metadata system uses a chain of cryptographic signatures to establish the provenance, authorship, edit history, (and more) of a particular piece of media.
In the simplest possible terms, C2PA allows content creators to do the following in a cryptographically verifiable and tamper-evident manner:
Sign their original media.
Attach various metadata to the media, such as a list of edits performed on a photo.
Specify any “parent” media, such as a previous version of a photo.
These abilities are powered by a “manifest” that is tied to a piece of media via cryptographic hashing and signing. The manifest can be attached directly to metadata-supporting media formats (such as PNG and JPEG images) and/or stored separately.
The manifest contains:
A list of assertions, which can be any statements of fact about the data, such as what camera captured the image or a list of edits applied.
A claim, which is a bundling of all the assertions to the media at a given moment in time.
A claim signature that ties the claim to a cryptographic key associated with a publisher, specific device, AI tool, or individual who is “signing” the whole manifest.
All of this is typically encoded using Concise Binary Object Representation (CBOR) and tacked onto the image as metadata. C2PA also supports storing the manifest and image data separately.
An official diagram of the C2PA Manifest, from https://c2pa.org/specifications/specifications/2.0/specs/C2PA_Specification.html
How It Works
Old standbys — cryptographic hashing and public key encryption — are at the heart of the C2PA specification.
A key aspect of cryptographic systems is that all parties can independently verify certain things are true. With C2PA, those things are about establishing a “chain of trust” starting with the Certificate Authorities and ending with certainty about the provenance of some data. C2PA’s protocol specifications leverage existing technologies and Public Key Infrastructure to establish chains of trust about how a piece of media came to be.
In an ideal use case — where everyone involved makes an effort to be C2PA compliant — this chain allows users to verify everyone/everything that made changes to the data, in which order, all the way back to its source. The manifest provides us with a record of everyone who signed it, and if any changes were made without a signature, C2PA will detect that a change has occurred.
A major weakness of the protocol is that the metadata is trivially easy to remove. Simple, minor changes to the original data can also easily break the cryptographic binding to its manifest. This means that C2PA only gives us confidence about data that have a matching manifest — it tells us nothing about data without a manifest.
I will use this adorable image of a bear that ChatGPT generated as an example to motivate and explore critical aspects of C2PA.
Note that this is actually not the original image. In tests, my publishing platform converted the PNG to a JPEG and stripped the metadata. Run it through the C2PA verify tool to see for yourself and find the original here, which will verify properly (as shown below).
Hashing is a generic and widespread technique for taking some input data and deterministically producing an output (called a “hash code” or a “digest”) of a fixed length (such as 32 or 256 bits). Hashing is used in database indexing; in key-value data structures such as JavaScript’s Object and Python’s dict (generically called hash tables); in checksums; and more.
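The properties above are easy to see with Python's standard library. The manifest we inspect later names "sha256" as its algorithm, so this sketch uses SHA-256; the input bytes here are made-up stand-ins, not real image data:

```python
import hashlib

# Hashing is deterministic: the same input always yields the same digest.
# (These bytes are a toy stand-in for real image data.)
digest_a = hashlib.sha256(b"cape-bear image bytes").hexdigest()
digest_b = hashlib.sha256(b"cape-bear image bytes").hexdigest()
assert digest_a == digest_b

# The output length is fixed regardless of input size: SHA-256 always
# produces 256 bits, i.e. 64 hexadecimal characters.
print(len(digest_a))  # 64

# Changing even one byte of the input produces a completely different digest.
tampered = hashlib.sha256(b"cape-bear image byteS").hexdigest()
print(digest_a != tampered)  # True
```

That last property — any change to the input scrambles the output — is what makes hashes useful for tamper detection.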
To ensure security, cryptographic hashing places further constraints on the hash function. Without getting too lost in the details, a “cryptographically secure” hash function makes it computationally infeasible to find two different inputs that produce the same hash code, or to construct an input that matches a given hash code. In practice, this means a hash code uniquely identifies the data used to create it. The C2PA manifests are made “tamper evident” by cryptographically hashing various portions of the manifest and raw image data individually and embedding those hashes in the manifest.
Different aspects of the data are hashed separately. People who receive the image and the manifest can recompute the hashcode and verify that they match.
Once all data and the assertions have been hashed individually, they are “bound” to each other using a cryptographic hash function once again. This time, the hash function takes the data and the assertions simultaneously and produces a hashcode for the entire “claim.” This binding allows us to verify that the entire bundle hasn’t been tampered with. It also allows us to physically separate the manifest from the original data and reattach it later by verifying the hash code.
If any alterations are made to the media itself or the metadata, then the computed hash codes won’t match, and C2PA-aware systems can flag that the image has been altered.
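Here is a rough sketch of that binding step in Python. The real specification encodes the manifest in CBOR inside JUMBF boxes; this toy version substitutes JSON, and all of the field names and data are illustrative stand-ins rather than actual C2PA structures:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of `data` as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Toy stand-ins for the real image bytes and assertion list.
image_bytes = b"fake image data, not a real PNG"
assertions = [
    {"label": "c2pa.actions", "data": {"actions": [{"action": "c2pa.created"}]}},
]

# Step 1: hash each assertion and the media data individually.
assertion_hashes = [
    sha256_hex(json.dumps(a, sort_keys=True).encode()) for a in assertions
]
image_hash = sha256_hex(image_bytes)

# Step 2: bind everything together by hashing the bundle of hashes — this
# is the simplified analog of the "claim."
claim = {
    "alg": "sha256",
    "assertion_hashes": assertion_hashes,
    "data_hash": image_hash,
}
claim_hash = sha256_hex(json.dumps(claim, sort_keys=True).encode())

# Verification is just recomputation: any change to the image or the
# assertions produces a different hash, making tampering evident.
print(sha256_hex(image_bytes + b"!") != image_hash)  # True
```

Because the claim hash depends only on the image and assertion contents, a verifier who receives the manifest separately from the image can recompute everything and confirm the two still belong together.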
Finally, the publisher, creator, camera, and/or other entities associated with the media sign the claim using a public key encryption scheme. This scheme uses much of the same infrastructure that powers HTTPS/TLS. Any signatories use their private key to encrypt the claim hash. The raw and encrypted hash are both embedded in the manifest.
The media and assertions are bound and signed.
Finally, the public “certificate” is also embedded into the manifest. Existing Public Key Infrastructure allows systems and people to verify certificates’ authenticity via the Certificate Authorities who issue them. The certificate also contains the public key that end users need to decrypt the signature. If the decrypted value matches the raw value, we can prove that the certificate owner signed this entire manifest (provided someone hasn’t stolen their private key).
If the decrypted signature and claim hash match, users can go on to verify the rest of the hashes for the original data and assertions to prove the data hasn’t been tampered with.
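The shape of that verification flow can be sketched in a few lines. One caveat: the real protocol uses asymmetric signatures (e.g., RSA or ECDSA keys tied to a certificate), which Python's standard library doesn't provide, so this sketch substitutes a symmetric HMAC purely to illustrate the sign-then-verify structure — it is not how C2PA actually signs claims:

```python
import hashlib
import hmac

# Stand-in for the signer's private key. In real C2PA this would be an
# asymmetric private key whose public half ships in the certificate.
signing_key = b"stand-in for the signer's private key"

# Signing: hash the serialized claim, then sign the hash.
claim_bytes = b"serialized claim bytes"
claim_hash = hashlib.sha256(claim_bytes).digest()
signature = hmac.new(signing_key, claim_hash, hashlib.sha256).digest()

def verify(claim_bytes: bytes, signature: bytes, key: bytes) -> bool:
    """Recompute the claim hash and check it against the signature."""
    recomputed = hashlib.sha256(claim_bytes).digest()
    expected = hmac.new(key, recomputed, hashlib.sha256).digest()
    # compare_digest performs a constant-time comparison.
    return hmac.compare_digest(expected, signature)

print(verify(b"serialized claim bytes", signature, signing_key))  # True
print(verify(b"tampered claim bytes!!", signature, signing_key))  # False
```

The asymmetric version works the same way structurally, with one crucial difference: anyone holding the public certificate can verify the signature, but only the holder of the private key could have produced it.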
The complete specification is complicated. We have glossed over some details for the sake of brevity and approachability. At the risk of losing some readers in the weeds, let’s look a little closer at the protocol details and some of the open-source tools C2PA has published.
Dissecting a C2PA Compliant Image
If you want to repeat any of these steps, or examine the outputs generated, check out our Github repository for the raw images and manifest outputs.
First, I used the C2PA verify app to validate the original image. Notice that this image already has two links in its chain: one for the original image produced by DALL-E and another for its publication via ChatGPT.
The C2PA command line tool allows us to view a JSON representation of the manifest. Recall that the manifest attached to our image is stored in a binary format called CBOR.
$ c2patool -d cape-bear.webp
{
"active_manifest": "urn:uuid:bcc56165-0bf4-47e0-be9c-cd25be17b335",
"manifests": {
"urn:uuid:bcc56165-0bf4-47e0-be9c-cd25be17b335": {
"claim": {
"alg": "sha256",
"assertions": [
{
"hash": "uH2AWcGg9rc+ksEeappGi35hDHvwDZq6MSghk8Nt4gI=",
"url": "self#jumbf=c2pa.assertions/c2pa.thumbnail.ingredient.jpeg"
...
The raw data confirms what the verify app showed: this image has two manifests — one for the original image created by DALL-E and another for when ChatGPT published the image. The most recent manifest is the “active manifest” and we can see it identified by a UUID in the JSON above.
The JSON representation of the manifest is 159 lines and includes nearly all the information we’d need to verify the image’s provenance, including the hash codes used in the images above.
C2PA’s tool does not include the certificate in the JSON. Instead, we use a different command to extract the certificates in a standard PEM format:
$ c2patool --certs cape-bear.webp
-----BEGIN CERTIFICATE-----
MIIDKTCCAhGgAwIBAgIUTkkWa/Nuvvyy5UHYHXXP6uhNoQ4wDQYJKoZIhvcNAQEM
BQAwSjEaMBgGA1UEAwwRV2ViQ2xhaW1TaWduaW5nQ0ExDTALBgNVBAsM
...
Let’s Make Some Changes
First, I converted this image to a .png using ImageMagick.
magick cape-bear.webp cape-bear.png
Unfortunately, that process destroyed the metadata. This is to be expected. Datatype conversion changes the image. The hash code of the png won’t match the one in the manifest generated from the webp file. This demonstrates a fundamental limitation of C2PA: it’s trivially easy to strip the metadata from the image. In my experimentation, keeping the metadata is much more work; I accidentally stripped it several times.
I could proliferate this credentialless image to create confusion about its origin. Or I could sign it myself and claim that OpenAI is stealing my original work. In fact, I could make all kinds of bogus, falsified, and fraudulent material and sign it using C2PA.
This is by design: C2PA’s trust model lets you verify who handled a piece of media, not what media is accurate or valuable. My signature on the image proves that I handled it, but it’s up to users to decide if I am trustworthy.
Here’s what the verify tool shows for the converted png:
Oops, what happened to my manifest?
To simulate good stewardship of a C2PA image, I used the C2PA tool to label my image as a derivative of the original .webp file. This required me to create my own manifest, in which I included two assertions: that I was the author and that I converted the image.
{
"claim_generator": "Teb's Lab Demo",
"assertions": [
{
"label": "stds.schema-org.CreativeWork",
"data": {
"@context": "https://schema.org",
"@type": "CreativeWork",
"author": [
{
"@type": "Person",
"name": "Tyler Bettilyon"
}
],
"actions": [
{
"action": "c2pa.converted"
}
]
}
}
]
}
Then, I bound the new manifest and old image to the new png using the open-source C2PA Tool:
c2patool cape-bear.png -p cape-bear.webp -o signed-cape-bear.png -m added-manifest.json
Because I do not have an actual certificate from a Certificate Authority, the tool used its default certificate and gave me this warning:
Note: Using default private key and signing certificate. This is only valid for development. A permanent key and cert should be provided in the manifest definition or in the environment variables.
This demonstrates another weakness of the protocol: many people involved in creating and editing media are regular people who have neither public keys and certificates nor the technical know-how to obtain and publish them. For C2PA to achieve broad adoption and use, PKI must be made more accessible to those people.
Anyway, the verify tool now tells us this image has three manifests: the two you saw above and the one I just added. Apparently, I accidentally destroyed one of the thumbnails. The tool also warns us that “This Content Credential was issued by an unknown source.” It does display my name, and the raw manifest maintained my “converted” action, although it wasn’t shown in the verify app.
Finally, I spent hours trying to edit the image in a way that wouldn’t destroy the content credentials wholesale… and failed. I hoped to produce a nice warning in the verify app that, “this image has been tampered with.” Unfortunately, every editor I tried discarded the metadata, so I just ended up with this:
The cute bear is so grumpy that no one will validate its provenance.
Limitations and The Future
It’s wonderful to see support for improving trust and security on the internet. Some big players have already signed on to the C2PA standard. If integrations on major search platforms, social media sites, and Adobe’s media editing empire go well, more publishers and creators will invest in establishing the provenance of their work.
In addition to some of the weaknesses we highlighted above, there are serious detractors. For example, cryptographer Dr. Neal Krawetz points to the limited capacity of C2PA’s trust model to verify certain claims and argues that C2PA has significant flaws.
Ultimately, C2PA will only help users identify that honest people are honest. And to be clear, that is useful. Knowing the New York Times signed a particular photograph can help you decide how to view that photo.
But C2PA won’t do much with respect to the vast majority of images that do not have any C2PA metadata. There is also a risk of granting a false sense of security if “signed” data are considered credible without much consideration regarding who did the signing.
Such is life on the internet.
Teb’s Tidbits
Overcorrecting from previous allegations of racial bias, Google’s Gemini generated racially diverse images of Nazis.
A privacy-focused class action lawsuit in California implicates sex toy seller Adam and Eve and Google Analytics in leaking IP addresses and search history.
Amazon will require warrants for law enforcement to access footage from Ring doorbells.
The US government continues to spend big on bolstering the computer chip manufacturing industry.
The source of the fake Joe Biden robocalls has confessed and is not apologetic.
Remember…
The Lab Report is free and doesn’t even advertise. Our curriculum is open source and published under a public domain license for anyone to use for any purpose. We’re also a very small team with no investors.
Help us keep providing these free services by scheduling one of our world class trainings or requesting a custom class for your team.