The Sigstore Trust Model

Dan Lorenc
Dec 9, 2021 · 6 min read


I hope this post can help reduce confusion around exactly how Sigstore’s trust model works, and how trust flows from the community root down to each short-lived certificate. For more background, read A Deep Dive on Fulcio, It’s Ten O’Clock, Do You Know Where Your Private Keys Are?, and A New Kind of Trust Root first.

Sigstore’s support for OIDC, and more specifically for Google, Microsoft, and GitHub as Identity Providers, rightly causes concerns about centralization and reliance on “hyperscalers” for identity.

This post explains exactly how that trust works, and how you can use Sigstore without relying on any centralized identity infrastructure at all!

Trust root

In Sigstore, the trust flows from the community root down to the hyperscaler identity providers, not the other way around! Clients like cosign start with the TUF root established during our root key signing ceremony, and then use it to verify that all other signing material is still valid and authorized, including the material for Google, GitHub, and Microsoft. The root keys are managed by a set of trusted community key holders around the world, with no single organization in control of the entire root.
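
To make that concrete, here is a minimal sketch (in Go, using only the standard library; this is not cosign’s actual code) of the last link in that chain: checking that a short-lived signing certificate chains up to the Fulcio CA certificate, where the CA certificate is assumed to have already been fetched and verified through the TUF root.

```go
package keyless

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// VerifyLeaf checks that a short-lived signing certificate (leafPEM) chains
// up to the CA material in rootPEM. rootPEM is assumed to have already been
// fetched and verified via the TUF trust root. signingTime is the time the
// signature was produced (for example, the timestamp recorded in the
// transparency log), because the certificate itself expires within minutes.
func VerifyLeaf(leafPEM, rootPEM []byte, signingTime time.Time) (*x509.Certificate, error) {
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(rootPEM) {
		return nil, errors.New("no valid root certificates found")
	}

	block, _ := pem.Decode(leafPEM)
	if block == nil {
		return nil, errors.New("failed to decode leaf certificate PEM")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return nil, err
	}

	// Verify the chain at signing time, not "now": in the keyless model the
	// certificate has usually already expired by the time anyone verifies.
	// Intermediate certificates are omitted here for brevity.
	if _, err := leaf.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: signingTime,
		KeyUsages:   []x509.ExtKeyUsage{x509.ExtKeyUsageCodeSigning},
	}); err != nil {
		return nil, err
	}
	return leaf, nil
}
```

The identity inside that certificate (the email or SPIFFE ID in its subject alternative name) is then what gets matched against whatever policy the verifier actually cares about.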

In addition, Google/Microsoft/GitHub are only one set of valid identity providers! Anyone can run their own and get it trusted by the Sigstore root (and therefore all Sigstore clients). This can be done using OIDC or any other valid key material accepted by TUF (including PGP).

The only real difference between federated trust roots and the hyperscalers is that we currently trust the hyperscalers to issue identity tokens globally (for any email address), while federated providers are only accepted for specific namespaces. This means that you can add your own OIDC endpoint for your own domain, but not for someone else’s. This relies on a simple convention today (we only issue certificates matching the domain of the OIDC endpoint URL), but this could easily be extended to support more complex workflows using the ACME protocol for proof of domain ownership.
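
As a rough illustration of that convention (a simplification for this post, not Fulcio’s actual policy code), the check boils down to comparing the host of the OIDC endpoint against the domain of the identity being asserted:

```go
package federation

import (
	"errors"
	"net/url"
	"strings"
)

// AllowedIdentity reports whether a federated issuer at issuerURL may assert
// an identity for the given email under the "same domain" convention: the
// issuer https://accounts.example.com may vouch for alice@example.com, but
// not for bob@other.org.
func AllowedIdentity(issuerURL, email string) (bool, error) {
	u, err := url.Parse(issuerURL)
	if err != nil {
		return false, err
	}
	at := strings.LastIndex(email, "@")
	if at < 0 {
		return false, errors.New("not an email address")
	}
	emailDomain := email[at+1:]
	issuerHost := u.Hostname()

	// Accept an exact match or a subdomain of the email's domain.
	return issuerHost == emailDomain ||
		strings.HasSuffix(issuerHost, "."+emailDomain), nil
}
```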

Here’s what this looks like overall (with the original graphviz):

Sigstore Without Big Tech!

In the diagram above we can see a few trust flows that clearly do rely on the big identity providers: roughly anything going through the email-based system or the custom identity systems like GitHub Actions and EKS. But there are also a few flows that do not go through these paths at all! If you want to add your own, here’s how!

If you already have a SPIFFE endpoint set up, you can skip the first two steps. You now have a fully keyless trust root that can issue identity tokens using any scheme you want, as long as they’re for your domain. All of those tokens can then be used to retrieve short-lived code signing certificates from Sigstore, and those certificates will be automatically trusted by all Sigstore clients.
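
The same idea applies on the SPIFFE path, just keyed on trust domains instead of email domains. As a rough sketch (a hypothetical helper, not part of any SPIFFE or Sigstore library), the constraint looks like this:

```go
package federation

import (
	"errors"
	"net/url"
)

// InTrustDomain reports whether a SPIFFE ID such as
// "spiffe://example.com/ns/prod/sa/builder" belongs to the given trust
// domain, which is the namespace a federated issuer is allowed to cover.
func InTrustDomain(spiffeID, trustDomain string) (bool, error) {
	u, err := url.Parse(spiffeID)
	if err != nil {
		return false, err
	}
	if u.Scheme != "spiffe" {
		return false, errors.New("not a SPIFFE ID")
	}
	return u.Host == trustDomain, nil
}
```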

Don’t want to use SPIFFE/SPIRE, or the keyless system? That’s fine too! We’re also happy to accept other existing trust roots using TUF sub-delegations. These can be done using a similar flow, except via a pull request to the root-signing repository.

Sigstore vs PGP

When comparing Sigstore to PGP — the first thing to note is that Sigstore actually works with PGP! You can store signatures and keys in the Rekor transparency log, which can help detect key compromise and make recovery/revocation easier. Our TUF delegations also support PGP keys, so if you’re really set on using PGP for your community you can still get a delegation and reuse our trust root.

What Sigstore provides is an alternative to the Web of Trust, which is only one component of PGP. PGP can be used to generate keys and to sign and verify blobs with no PKI at all, but it can also be used with the Web of Trust ecosystem. This is probably going to be the most controversial part of this post, but in my opinion the Web of Trust is fundamentally broken and has not been able to scale to the needs of the broader open source ecosystem.

But wait — Debian, Ubuntu, RHEL, Node.js, and others all use PGP — how can it be broken? Simple! They’ve all built their own PKI. The Debian package repo signing keys are stored in a package and distributed in each built image. They ARE NOT fetched from key servers. The Node.js project uses a long table/script in their README.md file. Other projects are similar.

I really love the idea of the Web of Trust, but it just doesn’t work in practice. My basic theory is that key management is too hard for the average human over the long term. I lose my keys constantly and have to make new ones. But the Web of Trust relies on people generating, publishing, and verifying/signing the keys of others. Every time someone loses a key and has to start over, an entire portion of the Web disappears and has to be rebuilt. With a strong, healthy trust network this wouldn’t be a big deal. But unfortunately we never get to that point (because too many people lose their keys too frequently), so key distribution has always required out-of-band mechanisms.

I’d love to be proven wrong here. I really do like the idea of a fully decentralized identity system, but in my opinion, web of trust just isn’t that. Some of the new decentralized identity work going on in the W3C looks promising, but very early. I bet there are some new startups working on this in the blockchain space too :)

Sigstore Tradeoffs

I’ll be the first to say that Sigstore isn’t perfect for everyone! It was designed with a few key tradeoffs and constraints in mind that I think represent the best option available today for most people, but it’s not a magic solution. The big design decisions in Sigstore were based around these principles:

  • Centralized infrastructure is not ideal, but Transparency can mitigate many of the problems.
  • Over long periods of time, humans are bad at key management but great at protecting email accounts.

Transparency and Centralized Infrastructure

Transparency logs are a relatively new technology, and we’re still learning how they can best be used. The original Certificate Transparency RFC was written in 2013, while PGP dates back to 1991! Transparency logs have been proven at scale in the Web PKI, where they’ve been protecting browsers and internet users against misbehaving CAs for the last several years. They’ve also started to see usage in supply chains, with Go Module Transparency from Filippo Valsorda and Russ Cox, the rget project from Brandon Philips, and the original Binary Transparency design from Mozilla. Transparency logs are now even being used for firmware transparency in Google Pixel phones!

With transparency at the core, Sigstore’s centralized infrastructure is Trusted but Verifiable — users should not need to trust us to do anything other than keep the log running. And even that part can be fixed over time! Certificate Transparency logs operate in a federated manner today, with many organizations running their own logs that gossip data between them. Certificate Authorities are even required to log certs to multiple independent instances! If we can get to that point, we can completely remove the single point of failure of the Sigstore infrastructure!
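
To make “verifiable” concrete, here is a minimal sketch of RFC 6962-style inclusion-proof verification, the primitive that lets a client confirm an entry really is in a log of a given size without trusting the log operator. Rekor’s actual entry and proof formats differ in detail; this only shows the core Merkle-tree idea.

```go
package transparency

import (
	"bytes"
	"crypto/sha256"
)

// leafHash and nodeHash use the RFC 6962 domain-separation prefixes so that
// leaves can never be confused with interior nodes.
func leafHash(data []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, data...))
	return h[:]
}

func nodeHash(left, right []byte) []byte {
	h := sha256.Sum256(append(append([]byte{0x01}, left...), right...))
	return h[:]
}

// VerifyInclusion recomputes the tree's root hash from a leaf, its index,
// the tree size, and the audit path supplied by the log, then compares the
// result against the signed root hash the client already holds.
func VerifyInclusion(leaf []byte, index, treeSize uint64, proof [][]byte, root []byte) bool {
	if index >= treeSize {
		return false
	}
	fn, sn := index, treeSize-1
	hash := leafHash(leaf)
	for _, sibling := range proof {
		if sn == 0 {
			return false
		}
		if fn%2 == 1 || fn == sn {
			hash = nodeHash(sibling, hash)
			// Skip levels where our subtree has no right sibling.
			for fn%2 == 0 && fn != 0 {
				fn >>= 1
				sn >>= 1
			}
		} else {
			hash = nodeHash(hash, sibling)
		}
		fn >>= 1
		sn >>= 1
	}
	return sn == 0 && bytes.Equal(hash, root)
}
```

Consistency proofs between two tree sizes work the same way, and they are what let independent monitors check that the log is append-only.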

Summary

I hope this post helps show how the trust model in Sigstore actually works, including the role large identity providers and big tech companies actually play here. There are valid concerns with Sigstore’s architecture, but I believe many of them can be mitigated over time. Nothing is perfect, but I strongly believe that the Sigstore design has the best chance of widespread adoption and usage by anyone!
