Policy and Attestations

Best Practices for Supply Chain Security

Dan Lorenc
7 min read · Jul 24, 2021

This blog post covers some best practices to keep in mind when generating metadata for supply chain security and policy systems. The advice here is generic, but vulnerability scans serve as a running example to explain some of the concepts. This post will cover:

  • Signatures vs. Attestations
  • Supply Chain Provenance
  • How to design safe policy systems
  • How to think about vulnerability scans and deployment policy

Background

Policy engines like OPA/Gatekeeper, Kyverno, and Kubewarden play a critical role in software supply chain security by giving teams control over what can run in a production environment. They can be used to restrict user accounts and permissions, or even to disable access to entire feature sets of the Kubernetes API, all based on rich metadata and policy languages. Spurred by the recent rise in supply chain attacks, there’s been a push to verify properties of the software artifacts themselves in addition to their runtime configuration.

Software artifacts are generally opaque blobs that can’t easily be inspected for safety, so it’s more common to reason about how they came to be rather than what is in them. We can’t apply policy to individual function definitions; instead, we apply policy to who built the software, how they built it, and where the code came from. This trail of breadcrumbs is typically referred to as the provenance of a piece of software.

Provenance

Software provenance is usually generated automatically as part of the build process, and is cryptographically signed to prevent tampering and establish a non-repudiable chain of custody. Projects like cosign make this easy to integrate into your build system and policy engine, but other approaches are available. The flow generally looks like this: the build system produces an artifact, emits provenance describing how and from what that artifact was built, and signs the provenance so it can travel alongside the artifact.
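To make that concrete, here’s a minimal sketch of the kind of claims provenance can carry, loosely following the SLSA v0.2 provenance predicate; the URIs and values below are placeholders, and real provenance carries more fields (invocation, metadata, and so on):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A trimmed, illustrative subset of the SLSA v0.2 provenance predicate.
type Provenance struct {
	Builder   Builder    `json:"builder"`   // who performed the build
	BuildType string     `json:"buildType"` // how the build was performed
	Materials []Material `json:"materials"` // where the inputs came from
}

type Builder struct {
	ID string `json:"id"` // a URI identifying the build service
}

type Material struct {
	URI    string            `json:"uri"`    // e.g. a git repository URL
	Digest map[string]string `json:"digest"` // e.g. {"sha1": "<commit>"}
}

func main() {
	p := Provenance{
		Builder:   Builder{ID: "https://example.com/builders/ci"}, // placeholder
		BuildType: "https://example.com/build-types/container",    // placeholder
		Materials: []Material{{
			URI:    "git+https://github.com/example/app",
			Digest: map[string]string{"sha1": "0123abcd"},
		}},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}
```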

While provenance is a giant topic that deserves several posts of its own, it’s only half of the supply chain problem. Even a tamper-free artifact can be full of known (and unknown!) vulnerabilities that are just as dangerous. This is where efforts like the NTIA’s SBOM initiative, vulnerability scanners, and SCA tooling come in. These tools let you scan a binary artifact, container, or SCM repository and generate reports about dependencies that can be joined against databases of known vulnerabilities.

These tools make it easy to monitor the artifacts running in your production environment for known vulnerabilities, and some systems allow for configuring alerts or even automated remediation policies. But while they help you take action after a vulnerability is found in code you already have running, it would be a lot nicer to prevent vulnerable artifacts from hitting production in the first place.

This has created a natural push to try to integrate these systems into deployment policy engines that can block deployments before they’re created, by signing vulnerability scan metadata and presenting it to these same policy systems. I’ll argue that this is the wrong approach in general, but it can be fixed with a few tweaks!

Signatures vs. Attestations

To understand how to correctly model vulnerability scans (and provenance in general), we first have to cover exactly what a digital signature is and what it proves. At a high level, a signature is created using a key pair and an artifact. The key pair consists of a public key and a private key: the private key must be kept secret, while the public key is distributed widely. The user signs an artifact with the private key, and others verify the signature with the public key. For example, Alice signs an artifact using her private key, and Bob verifies it using Alice’s public key.

If the signature is cryptographically sound, it can be used to prove that the holder of the private key signed the artifact. That’s all it proves, though! It doesn’t prove the user intended to sign the artifact (they might have been tricked), or that they intended to make any specific claim about the artifact (signatures are commonly used to indicate approval as well as revocation!).
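As a toy sketch of that exchange in Go, here is raw signing and verification with an ECDSA key pair; key generation, storage, and distribution are deliberately glossed over, so treat this as an illustration rather than guidance on key management:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Alice generates a key pair; the private key stays secret,
	// the public key is distributed widely.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Alice signs the digest of an artifact with her private key.
	artifact := []byte("the artifact bytes")
	digest := sha256.Sum256(artifact)
	sig, err := ecdsa.SignASN1(rand.Reader, priv, digest[:])
	if err != nil {
		panic(err)
	}

	// Bob verifies the signature with Alice's public key. This proves
	// only that the private-key holder signed these exact bytes, and
	// nothing about why they signed them.
	fmt.Println(ecdsa.VerifyASN1(&priv.PublicKey, digest[:], sig))
}
```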

This is where higher-level systems come into play. Rather than signing an artifact directly, users create some kind of document that captures their intent behind signing the artifact, and any specific claims being made as part of the signature. Terminology varies, but I like the layering model defined in the in-toto project; see the in-toto attestation documentation for more.

A document that contains a cryptographically secure reference to the artifact (usually via a hash), and a set of specific claims about that artifact is referred to as a Statement. These claims can be used to express (and later prove) anything you can think of! They can represent manual approval, artifact provenance, automated test results, an audit trail, or more! When this Statement is cryptographically signed, it is then referred to as an Attestation.

In this example, Alice creates a Statement about an artifact and signs it using her private key, creating an Attestation. Bob can then verify the signature on that Attestation, allowing him to trust the claims inside, and use those claims to decide whether or not to allow the artifact, via a policy system.
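Concretely, a Statement can be modeled as a small envelope around the artifact digest and its claims. The sketch below follows the in-toto v0.1 Statement layout; signing the serialized Statement (in-toto uses the DSSE envelope format for this) is what produces the Attestation:

```go
// Package attestation sketches the in-toto v0.1 Statement layout.
package attestation

import "encoding/json"

// Subject is a cryptographically secure reference to an artifact,
// expressed as a name plus one or more digests.
type Subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"` // e.g. {"sha256": "abc123..."}
}

// Statement couples that reference with a set of typed claims.
type Statement struct {
	Type          string          `json:"_type"` // "https://in-toto.io/Statement/v0.1"
	Subject       []Subject       `json:"subject"`
	PredicateType string          `json:"predicateType"` // what kind of claims these are
	Predicate     json.RawMessage `json:"predicate"`     // the specific claims themselves
}
```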

Best Practices

While attestations are very flexible, a few best practices make them useful in practice and eliminate some common foot-guns. For example, attestations should be simple and easily machine readable, so they can be parsed safely even before their signatures are validated and their contents trusted.

A more important consideration (and the one most relevant to our vulnerability discussion) is that systems that verify attestations must be carefully designed to keep working correctly even if an attacker can delete or hide any specific attestation or set of attestations. Signatures can guarantee a file has not been tampered with, but they can’t guarantee the file arrives at all. To be safe, your systems should be designed to fail closed rather than open.

One way to do this is to use the Principle of Monotonicity when authoring your attestations and data model. If we model a policy engine as a simple for loop that processes a set of attestations and returns a single boolean to indicate allow or deny, there are two ways this could be structured:

[Figure: a “Fail Open” policy system vs. a “Fail Closed” policy system, sketched in code below.]
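Here’s a minimal sketch of the two shapes, assuming a hypothetical Attestation type whose signature has already been verified; this is an illustration, not a real policy engine:

```go
package policy

// Attestation is a hypothetical, already-verified claim: Name says
// which check it represents, Allow is its verdict.
type Attestation struct {
	Name  string
	Allow bool
}

// failOpen only denies when it sees a negative attestation, so an
// attacker who can hide attestations can flip the result to allow.
func failOpen(atts []Attestation) bool {
	for _, a := range atts {
		if !a.Allow {
			return false
		}
	}
	return true // an empty input is allowed: dangerous!
}

// failClosed requires every named attestation to be present and
// positive; hiding one can only move the result toward deny.
func failClosed(required []string, atts []Attestation) bool {
	seen := map[string]bool{}
	for _, a := range atts {
		if a.Allow {
			seen[a.Name] = true
		}
	}
	for _, name := range required {
		if !seen[name] {
			return false
		}
	}
	return true
}
```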

We can see the basic difference between failing closed and failing open. A fail-closed system must contain a list of required attestations that is compared to the list actually present. Each attestation processed can only move the system from deny toward allow, so the omission of any single attestation can never move the system to allow.

Modeling vulnerabilities

We can imagine a few ways to model vulnerabilities. The simplest (and incorrect) approach is to scan a container, producing a list of known vulnerabilities, then sign this report to produce an attestation. The policy system could then verify this attestation and examine the contents of the scan report before making a decision. The problem is that we now have to decide what the system should do if there is no report at all, or if the report is empty.

This is dangerous: it’s too easy to accidentally write a system that allows a deployment if an attacker is able to hide the report. It’s safer to flip the script! The vulnerability scan can still be generated and signed, but the attestation should also contain the actual policy decision itself. It should say “here are the results of my scan, and I have reviewed and approved this report.” Then the system can be authored to require this approval, rather than the absence of any severe vulnerabilities.
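One possible shape for such a review claim, as a hypothetical predicate; these field names are illustrative, not a standard schema:

```go
package policy

import "time"

// VulnReview captures a positive decision: "here are my scan results,
// and I have reviewed and approved them." The decision is stated
// explicitly, never inferred from the absence of findings.
type VulnReview struct {
	Scanner      string    `json:"scanner"`      // which tool produced the report
	ReportDigest string    `json:"reportDigest"` // hash of the full scan report
	Decision     string    `json:"decision"`     // e.g. "approved"
	Reviewer     string    `json:"reviewer"`     // who made the call
	ReviewedAt   time.Time `json:"reviewedAt"`   // enables a TTL check
}
```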

This subtle difference also has a lot of auxiliary benefits! Anyone who has worked with vulnerability scanners knows they are very, very noisy and full of false positives. This means some amount of manual review is always required, and these “review attestations” provide a natural place to capture those decisions.

The second benefit is timeliness. In addition to false positives, vulnerability data is temporal: a lack of known vulnerabilities today says nothing about tomorrow. Regulations typically require users to scan containers every X days, so these attestations can be designed with TTLs in mind by including a timestamp in the review decision. The attestation can then express “I reviewed this on X and approved it,” and the system can enforce a configurable expiration period, rejecting old attestations to fail closed.
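Enforcing that expiry in the policy engine then becomes a simple freshness check; maxAge here stands in for whatever window your policy configures:

```go
package policy

import "time"

// fresh rejects review attestations older than maxAge (a hypothetical,
// policy-configured window, e.g. 30 days), so stale approvals fail
// closed instead of being honored forever.
func fresh(reviewedAt time.Time, maxAge time.Duration) bool {
	return time.Since(reviewedAt) <= maxAge
}
```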

Summary

To wrap this up, you can build a great internal supply chain story by keeping a few things in mind:

  • Signatures are a great start, but don’t tell the entire story.
  • Think about expressing specific claims, and use signatures as an implementation detail to prevent those claims from being tampered with.
  • Write these attestations as positive statements. Think: “This is good” vs. “This is bad”.
  • Attestations can contain large amounts of information, but they should include a summary that can be evaluated easily by a policy engine.
  • Design your policy system to fail closed instead of open. If closed is too scary or dangerous for your usage patterns, consider alerting on violations.

In action

We’re working to make all of this easy to set up and customize in the Sigstore and TektonCD projects. If you’re interested in trying it out, give Tekton Chains a try today! You can configure your build system to automatically generate detailed provenance attestations that can be fed into a variety of policy engines or queried later for post-hoc auditing. Reach out on Twitter if you’re interested in helping out or have any questions!
