The Power of Digital Provenance in the Age of AI

@fria
I don't like #C2PA. It relies on X.509 certificate authorities to serve as verifiers.
We need less of that, not more IMO.

For my own learning: why is this bad?

@overdrawn98901
Huh. I thought federation between Discourse and the rest of the fedi was only one-way.
That's cool that you were able to reply to me.

Anyway,

I think it's bad because, well, look at how TLS certs work: if you add any extra certificate authority you are basically allowing it to MITM your traffic, which means basically nobody ever modifies their trust store.
Will C2PA work similarly?
Will it be clear which CA verified the image?

Either way, I think non-technical people won't know how to add extra CAs they trust, or won't bother, which means that society will basically be held hostage to a bunch of preselected arbiters of truth.
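
To make the worry concrete, here's a rough Python sketch of how a fixed trust store ends up deciding what counts as "valid"; the names are made up for illustration and aren't from the C2PA spec or any real verifier:

```python
# Made-up names for illustration only; this is not the C2PA or any browser API.
from dataclasses import dataclass

# Preselected by whoever ships the verifier (OS vendor, browser, platform).
TRUSTED_ISSUERS = {"Example Root CA", "Another Root CA"}

@dataclass
class Credential:
    issuer: str          # which CA vouched for the signer
    subject: str         # e.g. a camera maker or editing app
    signature_ok: bool   # assume the cryptographic check already ran

def verdict(cred: Credential) -> str:
    if not cred.signature_ok:
        return "invalid: signature does not verify"
    if cred.issuer not in TRUSTED_ISSUERS:
        # Cryptographically fine, but signed by an issuer the verifier
        # was not shipped with -- the "untrusted entity" case.
        return f"untrusted: issuer '{cred.issuer}' is not in the trust store"
    return f"trusted: '{cred.subject}' vouched for by '{cred.issuer}'"

print(verdict(Credential("Example Root CA", "ExampleCam DSLR", True)))
print(verdict(Credential("Self-Signed Indie Outlet", "Independent journalist", True)))
```

Unless users curate that set themselves (which, as I said, basically nobody does for TLS), whoever ships the trust list decides whose credentials show up as trusted.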

Relevant: https://social.coop/@cwebber/114381964560395005
h/t @cwebber

Of course you can receive the replies and stuff :slight_smile:

It was mentioned in the downsides, so now I’m wondering why exactly you’re repeating it?
Quote:

Limitations

Lack of Support

Content Credentials will need widespread support at every level, from hardware OEMs to photo editing software vendors and AI generators to sites that host and display images. The rollout of Content Credentials will be slow, although more and more companies are starting to support them.

Major players like Apple and Android are still missing support, which is a big problem considering how many images are taken, edited, and shared on smartphones. Once photos taken on your phone can be imbued with Content Credentials in the default camera app, I think we’ll see much wider adoption.

Easy to Remove

In my testing, any edit made in a program that doesn’t support Content Credentials renders them unreadable from that point on. This problem won’t be as bad if and when support for Content Credentials becomes widespread, since you can just decide not to trust images without them, sort of like not trusting a website without HTTPS. Platforms could even display a warning.

But for now, removing Content Credentials will simply go unnoticed.
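
To make that concrete, here’s a rough sketch of the kind of policy a platform could adopt; the function and names are hypothetical, not any platform’s or SDK’s actual API:

```python
# Hypothetical policy sketch; not a real platform or C2PA SDK API.
def display_policy(has_readable_credentials: bool, credentials_trusted: bool) -> str:
    """Decide how to present an image, roughly like a browser treats HTTP vs. HTTPS."""
    if has_readable_credentials and credentials_trusted:
        return "show image with its provenance panel"
    if has_readable_credentials:
        return "show image with an 'untrusted Content Credentials' warning"
    # Credentials missing, or stripped by an unsupporting editor:
    # today this goes unnoticed, but a platform could surface it.
    return "show image with a 'no provenance information' warning"

print(display_policy(True, True))
print(display_policy(True, False))
print(display_policy(False, False))
```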

Reliant on Certificate Authorities

The system shares a flaw with HTTPS in that you need to rely on trusted Certificate Authorities to verify the validity of the information, except that Content Credentials are trying to verify a lot more information than just who originally made the image.

Since anyone can add their own Content Credentials to an image, a warning is displayed when the Content Credentials come from an untrusted entity, similar to a certificate warning in your browser.

Complexity

One of the issues I ran into while researching was just how complex the standard is, since it needs to cover so many use cases and situations. This is pure speculation, but I can imagine the sheer complexity makes it unattractive for platforms to implement and maintain, which could be contributing to the very slow and partial rollout we’re seeing on the platforms of even founding members of the project like the BBC.

However, I think this will become less of an issue as the rollout continues, as platforms will likely be able to use each other’s implementations, or at least reference them when implementing it on their own platform.

The standard is still in its early stages and there’s plenty of room to shape and improve it in the future, so make your voice heard about how you want to see it implemented. I think that with more awareness about Content Credentials, platforms will feel more pressure to support them, so if you want to see this feature on your favorite platform, speak up and gather support.

When this was first rolled out by the BBC, I heard concerns about potential censorship. The idea is that since you now rely on centralized entities to say that a video/photo is legitimate/authentic, those entities could pick and choose the “honest journalists” and not grant access to others.

Well, it’s not really saying that a video is authentic per se; it’s just trying to show where the media came from. It’s a similar concept to a TLS certificate, which shows which website you’re on, not whether you should trust that website.

I guess it’s possible they would try to silence specific people, but I think that would be pretty difficult, since the focus is on implementing Content Credentials in hardware, software, etc. Take your camera, for example: it’ll record that the photo was taken with a certain camera, and then you can add your own name to the photo if you want. Similarly, if you edit it in Photoshop, it’ll record what you did to it in Photoshop. At least, that’s the idea anyway.
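
To illustrate what I mean by that chain of edits, here’s a rough Python sketch; the structure and names are made up for illustration and aren’t the actual C2PA manifest format:

```python
# Concept illustration only; made-up structure, not the real C2PA manifest format.
import hashlib
import json

def add_step(chain: list[dict], actor: str, action: str) -> list[dict]:
    # Commit to everything that came before by hashing the existing chain.
    prev_hash = hashlib.sha256(json.dumps(chain, sort_keys=True).encode()).hexdigest()
    # In the real standard, each step would also be signed with the tool's certificate.
    return chain + [{"actor": actor, "action": action, "prev": prev_hash}]

history: list[dict] = []
history = add_step(history, "ExampleCam DSLR", "captured image")            # camera firmware
history = add_step(history, "Jane Doe", "asserted authorship")              # optional, user-added
history = add_step(history, "Example photo editor", "cropped and resized")  # editing software

for step in history:
    print(f"{step['actor']}: {step['action']} (previous state {step['prev'][:12]}...)")
```

The point being that each step only records what that particular tool did; nothing in the chain claims the image is “true”, just where it’s been.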

Indeed, I suspect this will happen. I don’t know of a way for Privacy Guides to authenticate its own media, for example.

It’s a similar problem to Bluesky’s new verification scheme, where they will let certain orgs like The NY Times verify their journalists but will undoubtedly never let us verify our own team members (you can imagine this will be the case for any number of small organizations).

There’s a series of articles about C2PA on this blog which I found insightful:
