In the last post of the series about Sigstore, I will look at the most exciting part of the implementation – ephemeral keys, or what the Sigstore team calls keyless signing. The post will go over the second and third scenarios I outlined in Implementing Containers’ Secure Supply Chain with Sigstore Part 1 – Signing with Existing Keys and go deeper into the experience of validating artifacts and moving artifacts between registries.

Using Sigstore to Sign with Ephemeral Keys

Using Cosign to sign with ephemeral keys is still an experimental feature and will be released in v1.14.0 (see the following PR). Signing with ephemeral keys is relatively easy.

$ COSIGN_EXPERIMENTAL=1 cosign sign 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Generating ephemeral keys...
Retrieving signed certificate...
 Note that there may be personally identifiable information associated with this signed artifact.
 This may include the email address associated with the account with which you authenticate.
 This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.
 By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
Are you sure you want to continue? (y/[N]): y
Your browser will now be opened to:
https://oauth2.sigstore.dev/auth/auth?access_type=online&client_id=sigstore&code_challenge=e16i62r65TuJiklImxYFIr32yEsA74fSlCXYv550DAg&code_challenge_method=S256&nonce=2G9cB5h89SqGwYQG2ey5ODeaxO8&redirect_uri=http%3A%2F%2Flocalhost%3A33791%2Fauth%2Fcallback&response_type=code&scope=openid+email&state=2G9cB7iQ7BSXYQdKKe6xGOY2Rk8
Successfully verified SCT...
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
"562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample" appears to be a private repository, please confirm uploading to the transparency log at "https://rekor.sigstore.dev" [Y/N]: y
tlog entry created with index: 5133131
Pushing signature to: 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample

You are sent to authenticate using OpenID Connect (OIDC) via the browser. I used my GitHub account to authenticate.

Once authenticated, you are redirected back to localhost, where Cosign reads the code query string parameter from the URL and verifies the authentication.

Here is what the redirect URL looks like.

http://localhost:43219/auth/callback?code=z6dghpnzujzxn6xmfltyl6esa&state=2G9dbwwf9zCutX3mNevKWVd87wS
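The interesting parts of that redirect are the code and state query parameters. As an illustration only (this is not Cosign's actual code), they can be extracted with Python's standard library:

```python
from urllib.parse import urlparse, parse_qs

# The example redirect URL from the OIDC flow above
redirect_url = ("http://localhost:43219/auth/callback"
                "?code=z6dghpnzujzxn6xmfltyl6esa"
                "&state=2G9dbwwf9zCutX3mNevKWVd87wS")

# parse_qs returns a dict mapping each parameter to a list of values
params = parse_qs(urlparse(redirect_url).query)
code = params["code"][0]    # one-time authorization code the CLI exchanges
state = params["state"][0]  # anti-CSRF value the CLI checks against its own

print(code, state)
```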

I have also pushed v2 and v3 of the image to the registry and signed them using the approach above. Here is the new state in my registry.

Artifact Tag | Artifact Type | Artifact Digest
v1 | Image | sha256:9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4
sha256-9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4.sig | Signature | sha256:483f2a30b765c3f7c48fcc93a7a6eb86051b590b78029a59b5c2d00e97281241
v2 | Image | sha256:d4d59b7e1eb7c55b0811c3dfd3571ab386afbe6d46dfcf83e06343e04ae888cb
sha256-d4d59b7e1eb7c55b0811c3dfd3571ab386afbe6d46dfcf83e06343e04ae888cb.sig | Signature | sha256:8c43d1944b4d0c3f0d7d6505ff4d8c93971ebf38fc60157264f957e3532d8fd7
v3 | Image | sha256:2e19bd9d9fb13c356c64c02c574241c978199bfa75fd0f46b62748f59fb84f0a
sha256-2e19bd9d9fb13c356c64c02c574241c978199bfa75fd0f46b62748f59fb84f0a.sig | Signature | sha256:cc2a674776dfe5f3e55f497080e7284a5bd14485cbdcf956ba3cf2b2eebc915f

If you look at the console output, you will also see that one of the lines mentions tlog. This is the index in the Rekor transparency log where the signature’s receipt is stored. For the three signatures that I created, the indexes are:

  • 5133131 for v1
  • 5133528 for v2
  • 5133614 for v3
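Those indexes can also be fetched over Rekor's public REST API. The helper below only builds the lookup URL and does not perform the request; the logIndex query parameter is the one the API exposes for retrieving entries, though double-check the current API documentation before relying on it.

```python
REKOR_URL = "https://rekor.sigstore.dev"

def rekor_entry_url(log_index: int) -> str:
    """Build the Rekor API URL that returns the entry at a given log index."""
    return f"{REKOR_URL}/api/v1/log/entries?logIndex={log_index}"

# The three signatures created above
for index in (5133131, 5133528, 5133614):
    print(rekor_entry_url(index))
```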

That is it! I have signed my images with ephemeral keys, and I have the tlog entries that correspond to the signatures. It is a fast and easy experience.

Verifying Images Signed With Ephemeral Keys

Verifying the images signed with ephemeral keys is built into the Cosign CLI.

$ COSIGN_EXPERIMENTAL=1 cosign verify 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 | jq . > flasksample-v1-ephemeral-verification.json
Verification for 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- Existence of the claims in the transparency log was verified offline
- Any certificates were verified against the Fulcio roots.

The outputs from the verification of flasksample:v1, flasksample:v2, and flasksample:v3 are available on GitHub. A few things to note about the output from the verification:

  • The output JSON contains the logIndex as well as the logID, which I assumed I could use to search for the receipts in Rekor. I was confused about the purpose of the logID, but I will get into that a little later!
  • There is a body field that I assume is the actual signature. This field is not documented, and with such a generic name, it is hard to know what it contains.
  • The type field seems to be free text. I would expect it to be more structured, with the values coming from a list of possible and, most importantly, standardized types.
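A few of those fields can be pulled out programmatically. The sketch below uses a trimmed, hand-built stand-in for one element of the verification JSON; the field paths (optional, Bundle, Payload) reflect the output I received rather than any documented schema, so treat them as assumptions.

```python
import json

# Trimmed stand-in for the cosign verify output; only the fields discussed
# above are shown, and the placeholder values are not real.
verification = json.loads("""
[{"critical": {"type": "cosign container image signature"},
  "optional": {"Bundle": {"Payload": {
      "logIndex": 5133131,
      "logID": "<logID placeholder>",
      "body": "<base64 entry body placeholder>"}}}}]
""")

# Walk each signature entry and pull out the Rekor pointers
for entry in verification:
    payload = entry["optional"]["Bundle"]["Payload"]
    print("tlog index:", payload["logIndex"])
```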

Search and Explore Rekor

The goal of my second scenario – Sign Container Images with Ephemeral Keys from Fulcio is not only to sign images with ephemeral keys but also to invalidate one of the signed artifacts. Unfortunately, documentation and the help output from the commands are scarce. Also, searching on Google how to invalidate a signature in Rekor yields no results. I decided to start exploring the Rekor logs to see if that may help.

There aren’t many commands available in the Rekor CLI. The four things you can do are: get records; search by email, SHA, or artifact; upload an entry or artifact; and verify an entry or artifact. Using the information from the outputs in the previous section, I can get the entries for the three images I signed using the log indexes.

$ rekor-cli get --log-index 5133131 > flasksample-v1-ephemeral-logentry.json
$ rekor-cli get --log-index 5133528 > flasksample-v2-ephemeral-logentry.json
$ rekor-cli get --log-index 5133614 > flasksample-v3-ephemeral-logentry.json

The outputs from the above commands for flasksample:v1, flasksample:v2, and flasksample:v3 are available on GitHub.

The first thing I noted is that the Rekor CLI does not return the log entries in JSON format. This is different from what Cosign returns and a bit inconsistent. Second, the log entries output by the Rekor CLI are not the same as the verification output returned by Cosign; each provides different information. This begs the question: “How does Cosign get this information?” First, though, let’s see what else Rekor can give me.

I can use Rekor search to find all the log entries that I created. This will include the ones for the three images above and, theoretically, everything else I signed.

$ rekor-cli search --email toddysm_dev1@outlook.com
Found matching entries (listed by UUID):
24296fb24b8ad77aaf485c1d70f4ab76518483d5c7b822cf7b0c59e5aef0e032fb5ff4148d936353
24296fb24b8ad77a3f43ac62c8c7bab7c95951d898f2909855d949ca728ffd3426db12ff55390847
24296fb24b8ad77ac2334dfe2759c88459eb450a739f08f6a16f5fd275431fb42c693974af3d5576
24296fb24b8ad77a8f14877c718e228e315c14f3416dfffa8d5d6ef87ecc4f02f6e7ce5b1d5b4e95
24296fb24b8ad77a6828c6f9141b8ad38a3dca4787ab096dca59d0ba68ff881d6019f10cc346b660
24296fb24b8ad77ad54d6e9bb140780477d8beaf9d0134a45cf2ded6d64e4f0d687e5f30e0bb8c65
24296fb24b8ad77a888dc5890ac4f99fc863d3b39d067db651bf3324674b85a62e3be85066776310
24296fb24b8ad77a47fae5af8718673a2ef951aaf8042277a69e808f8f59b598d804757edab6a294
24296fb24b8ad77a7155046f33fdc71ce4e291388ef621d3b945e563cb29c2e3cd6f14b9ba1b3227
24296fb24b8ad77a5fc1952295b69ca8d6f59a0a7cbfbd30163c3a3c3a294c218f9e00c79652d476

Note that the result lists UUIDs that are different from the logID properties in the verification output JSON. You can get log entries using the UUID or the logIndex but not using the logID. The UUIDs are not present in the Cosign output mentioned in the previous section, while the logID is. However, it is unclear what the logID can be used for and why the UUID is not included in the Cosign output.

The Rekor search command supposedly allows you to search by artifact and SHA. However, the form those arguments need to take is not documented. Using the image name or the image SHA yields no results.

$ rekor-cli search --artifact 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample
Error: invalid argument "562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample" for "--artifact" flag: Key: '' Error:Field validation for '' failed on the 'url' tag
$ rekor-cli search --sha sha256:9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4
no matching entries found
$ rekor-cli search --sha 9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4
no matching entries found

I think the above are the core search scenarios for container images (and other artifacts), but it seems they are either not implemented or not documented. Neither the Rekor GitHub repository, the Rekor public documentation, nor the Rekor Swagger have any more details on the search. I filed an issue for Rekor to ask how the artifacts search works.

Coming back to the main goal of invalidating a signed artifact, I couldn’t find any documentation on how to do that. The only apparent options to invalidate the artifacts are either uploading something to Rekor or removing the signature from Rekor. I looked at all options to upload entries or artifacts to Rekor, but the documentation mainly describes how to sign and upload entries using other types like SSH, X509, etc. It does seem to me that there is no capability in Rekor to say: “This artifact is not valid anymore”.

I thought that looking at how Rekor verifies signatures may help me understand the approach.

Verifying Signatures Using Rekor CLI

I decided to explore how the signatures are verified and reverse engineer the process to understand if an artifact signature can be invalidated. Rekor CLI has a verify command. My assumption was that Rekor’s verify command worked the same as the Cosign verify command. Unfortunately, that is not the case.

$ rekor-cli verify --artifact 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Error: invalid argument "562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1" for "--artifact" flag: Key: '' Error:Field validation for '' failed on the 'url' tag
$ rekor-cli verify --entry 24296fb24b8ad77a8f14877c718e228e315c14f3416dfffa8d5d6ef87ecc4f02f6e7ce5b1d5b4e95
Error: invalid argument "24296fb24b8ad77a8f14877c718e228e315c14f3416dfffa8d5d6ef87ecc4f02f6e7ce5b1d5b4e95" for "--entry" flag: Key: '' Error:Field validation for '' failed on the 'url' tag

Unfortunately, due to a lack of documentation and examples, I wasn’t able to figure out how this worked without browsing the code. While that kind of digging is always an option, I would expect an easier experience as an end user.

I was made aware of the following blog post, though. It describes how to handle account compromise. To put it in context, if my GitHub account is compromised, this blog post describes the steps I need to take to invalidate the artifacts. I do have two problems with this proposal:

  1. As you remember, in my scenario, I wanted to invalidate only the flasksample:v2 artifact, and not all artifacts signed with my account. If I follow the proposal in the blog post, I will invalidate everything signed with my GitHub account, which may result in outages.
  2. The proposal relies on the consumer of artifacts to constantly monitor the news for what is valid and what is not, and which GitHub account is compromised and which one is not. This is unrealistic and puts too much manual burden on the consumer. In an ideal scenario, I would expect the technology to proactively notify users when something is wrong rather than expecting them to find out reactively.

At this point, I will call this scenario incomplete. Yes, I am able to sign with ephemeral keys, but the signing itself is not what is unique here. The ease of key generation is what the team seems to be calling attention to, and it does make signing much less intimidating to new users, but I could still generate a new SSH or GPG key every time I need to sign something. Trusting Fulcio’s root does not automatically increase my security; I would even argue the opposite. Making it easier for everybody to sign does not increase security, either. Let’s Encrypt already proved that. While Let’s Encrypt made an enormous contribution to our privacy and helped secure every small business site, the ease and accessibility with which it works mean that every malicious site now also has a certificate. The lock in the address bar is no longer a sign of security. We are all excited about the benefits, but I bet very few of us are excited that it also helps the bad guys. We need to think beyond the simple signing and ensure that the whole end-to-end experience is secure.

I will move to the last scenario now.

Promoting Sigstore Signed Images Between Registries

In the last scenario, I wanted to test the promotion of images between registries. Let’s create a v4 of the image and sign it using an ephemeral key. Here are the commands, with the output omitted.

$ docker build -t 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v4 .
$ docker push 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v4
$ COSIGN_EXPERIMENTAL=1 cosign sign 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v4

The Rekor log index for the signature is 5253114. I can use Crane to copy the image and the signature from AWS ECR into Azure ACR.

$ crane copy 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v4 tsmacrtestcssc.azurecr.io/flasksample:v4
$ crane copy 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:sha256-aa2690ed4a407ac8152d24017eb6955b01cbb0fc44afe170dadedc30da80640a.sig tsmacrtestcssc.azurecr.io/flasksample:sha256-aa2690ed4a407ac8152d24017eb6955b01cbb0fc44afe170dadedc30da80640a.sig

Also, let’s validate the ephemeral key signature using the image in Azure ACR.

$ COSIGN_EXPERIMENTAL=1 cosign verify tsmacrtestcssc.azurecr.io/flasksample:v4 | jq .
Verification for tsmacrtestcssc.azurecr.io/flasksample:v4 --
The following checks were performed on each of these signatures:
 - The cosign claims were validated
 - Existence of the claims in the transparency log was verified offline
 - Any certificates were verified against the Fulcio roots.

Next, I will sign the image with a key stored in Azure Key Vault and verify the signature.

$ cosign sign --key azurekms://tsm-kv-usw3-tst-cssc.vault.azure.net/sigstore-azure-test-key-ec tsmacrtestcssc.azurecr.io/flasksample:v4
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
Pushing signature to: tsmacrtestcssc.azurecr.io/flasksample
$ cosign verify --key azurekms://tsm-kv-usw3-tst-cssc.vault.azure.net/sigstore-azure-test-key-ec tsmacrtestcssc.azurecr.io/flasksample:v4
Verification for tsmacrtestcssc.azurecr.io/flasksample:v4 --
The following checks were performed on each of these signatures:
 - The cosign claims were validated
 - The signatures were verified against the specified public key
[{"critical":{"identity":{"docker-reference":"tsmacrtestcssc.azurecr.io/flasksample"},"image":{"docker-manifest-digest":"sha256:aa2690ed4a407ac8152d24017eb6955b01cbb0fc44afe170dadedc30da80640a"},"type":"cosign container image signature"},"optional":null}]

Everything worked as expected. This scenario was very smooth, and I was able to complete it in less than a minute.

Summary

So far, I have just scratched the surface of what the Sigstore project could accomplish. While going through the scenarios in these posts, I had a bunch of other thoughts, so I wanted to highlight a few below:

  • Sigstore is built on a good idea to leverage ephemeral keys for signing container images (and other software). However, just the ephemeral keys alone do not provide higher security if there is no better process to invalidate the signed artifacts. With traditional X509 certificates, one can use CRL (Certificate Revocation Lists) or OCSP (Online Certificate Status Protocol) to revoke certificates. Although they are critiqued a lot, the process of invalidating artifacts using ephemeral keys and Sigstore does not seem like an improvement at the moment. I look forward to the improvements in this area as further discussions happen.
  • Sigstore, like nearly all open-source projects, would benefit greatly from better documentation and consistency in the implementation. Inconsistent messages, undocumented features, myriad JSON schemas, multiple identifiers used for different purposes, variable naming conventions in JSONs, and unpredictable output from the command line tools are just a few things that can be improved. I understand that some of the implementation was driven by requirements to work with legacy registries but going forward, that can be simplified by using OCI references. The bigger the project grows, the harder it will become to fix those.
  • The experience that Cosign offers is what makes the project successful. Signing and verifying images using the legacy X.509 and the ephemeral keys is easy. Hiding the complexity behind a simple CLI is a great strategy to get adoption.

I tested Sigstore a year ago and asked myself the question: “How do I solve the SolarWinds exploit with Sigstore?” Unfortunately, Sigstore doesn’t make it easier to solve that problem yet. Having in mind my experience above, I would expect a lot of changes in the future as Sigstore matures.

Unfortunately, there is no viable alternative to Sigstore on the market today. Notary v1 (or Docker Content Trust) proved not flexible enough. Notary v2 is still in the works and has yet to show what it can do. However, the lack of alternatives does not automatically mean that we avoid the due diligence required for a security product of such importance.  Sigstore has had a great start, and this series proves to me that we’ve got a lot of work ahead of us as an industry to solve our software supply chain problems.

In my previous post, Implementing Containers’ Secure Supply Chain with Sigstore Part 1 – Signing with Existing Keys, I went over the Cosign experience of signing images with existing keys. As I concluded there, the signing was easy to achieve, with just a few hiccups here and there, and Cosign seems to do a lot behind the scenes to make it easy. However, after looking at the artifacts stored in the registry, I got curious about how the signatures and attestations are saved. Unfortunately, the Cosign specifications are a bit light on details, and it seems they were created after, or evolved together with, the implementation. Hence, I decided to take a reverse-engineering approach to understand what is saved in the registry.

At this post’s end, I will validate the signatures using Cosign CLI and complete my first scenario.

The Mystery Behind Cosign Artifacts

First, to be able to store Cosign artifacts in a registry, you need to use an OCI-compliant registry. When Cosign signs a container image, an OCI artifact is created and pushed to the registry. Every OCI artifact has a manifest and layers. The manifest is standardized, but the layers can be anything that can be packed in a tarball. So, Cosign’s signature should be in the layer pushed to the registry, and the manifest should describe the signature artifact. For image signatures, Cosign tags the manifest using the naming convention sha256-<image-digest>.sig.

From the examples in my previous post, when Cosign signed the 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 image, it created a new artifact and tagged it sha256-9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4.sig. Here are the details of the image that was signed.
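As a quick sanity check, the tag convention can be reproduced from the digest. A minimal sketch (my own derivation of the convention, not Cosign's code):

```python
def signature_tag(digest: str) -> str:
    """Derive the cosign signature tag for an image digest.

    The colon is not a legal character in a registry tag, so it is
    replaced with a hyphen and the .sig suffix is appended.
    """
    return digest.replace(":", "-") + ".sig"

digest = "sha256:9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4"
print(signature_tag(digest))
# sha256-9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4.sig
```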

And here are the details of the signature artifact.

What is Inside Cosign Signature Artifact?

I was curious about what the signature artifact looks like. Using Crane, I can pull the signature artifact. All files are available in my GitHub test repository.

# Sign into the registry
$ aws ecr get-login-password --region us-west-2 | crane auth login --username AWS --password-stdin 562077019569.dkr.ecr.us-west-2.amazonaws.com

# Pull the signature manifest
$ crane manifest 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:sha256-9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4.sig | jq . > flasksample-v1-signature-manifest.json

# Pull the signature artifact as a tarball and unpack it into ./sigstore-signature
$ crane pull 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:sha256-9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4.sig flasksample-v1-signature.tar.gz
$ mkdir sigstore-signature
$ tar -xvf flasksample-v1-signature.tar.gz -C ./sigstore-signature
$ cd sigstore-signature/
$ ls -al
total 20
drwxrwxr-x 2 toddysm toddysm 4096 Oct 14 10:08 .
drwxrwxr-x 3 toddysm toddysm 4096 Oct 14 10:07 ..
-rw-r--r-- 1 toddysm toddysm  272 Dec 31  1969 09b3e371137191b52fdd07bdf115824b2b297a2003882e68d68d66d0d35fe1fc.tar.gz
-rw-r--r-- 1 toddysm toddysm  319 Dec 31  1969 manifest.json
-rw-r--r-- 1 toddysm toddysm  248 Dec 31  1969 sha256:00ce5fed483997c24aa0834081ab1960283ee9b2c9d46912bbccc3f9d18e335d
$ tar -xvf 09b3e371137191b52fdd07bdf115824b2b297a2003882e68d68d66d0d35fe1fc.tar.gz 
tar: This does not look like a tar archive

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now

Surprisingly to me, the inner tarball (09b3e371137191b52fdd07bdf115824b2b297a2003882e68d68d66d0d35fe1fc.tar.gz) does not seem to be a tarball at all, although it has the proper extension. I assumed this was the actual signature blob, but I couldn’t confirm without knowing how to manipulate that archive. Interestingly, opening the file in a simple text editor reveals that it is a plain JSON file with a .tar.gz extension. Looking into the other two files, manifest.json and sha256:00ce5fed483997c24aa0834081ab1960283ee9b2c9d46912bbccc3f9d18e335d, it seems that all the files are some kind of manifests, but it is not clear what for. I couldn’t find any specification explaining the content of the layer and the meaning of the files inside it.

Interestingly, Cosign offers a tool to download the signature.

$ cosign download signature 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 | jq . > flasksample-v1-cosign-signature.json

The resulting signature is available in the GitHub repository. The page linked above claims that you can verify the signature in another tool, but I couldn’t immediately find details on how to do that. I decided to leave this for some other time.

A few things stand out from this experience.

  • First (and typically a red flag when I am evaluating software), why does the JSON file have a tarball extension? Disguising a file’s real format behind a misleading extension is a practice usually associated with malware, which makes it especially concerning in this context. I am sure it will get fixed, and an explanation will be provided now that I have filed an issue for it.
  • Why are there so many JSON files? Trying to look at all the available Cosign documentation, I couldn’t find any architectural or design papers that explain those decisions. It seems to me that those things got hacked on top of each other when a need arose. There may be GitHub issues discussing those design decisions, but I didn’t find any in a quick search.
  • The signature is not in any of the files that I downloaded. The signature is stored as an OCI annotation in the manifest called dev.cosignproject.cosign/signature. So, do I even need the rest of the artifact?
  • Last but not least, it seems that the Cosign tool is the only one that understands how to work with the stored artifacts, which may result in tool lock-in. Although there is a claim that I can verify the signature with a different tool, without a specification it will be hard to implement such a tool.
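To illustrate the third point, reading the signature straight out of the manifest is a short exercise once you have the manifest JSON. The manifest below is a trimmed, hand-built stand-in with a dummy signature value; only the annotation name comes from my actual artifacts.

```python
import json

# Trimmed stand-in for a cosign signature artifact manifest; the signature
# value is a placeholder, not a real one.
manifest = json.loads("""
{"layers": [{
    "mediaType": "application/vnd.dev.cosign.simplesigning.v1+json",
    "annotations": {
        "dev.cosignproject.cosign/signature": "MEUCIQ-placeholder-signature"
    }}]}
""")

# The signature lives in a layer annotation, not in the layer blob itself
for layer in manifest["layers"]:
    sig = layer.get("annotations", {}).get("dev.cosignproject.cosign/signature")
    if sig:
        print(sig)
```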

What is Inside Cosign Attestation Artifact?

Knowing how the Cosign signatures work, I would expect something similar for the attestations. The mental hierarchy I have built in my mind is the following:

+ Image
  - Image signature
  + Attestation
    - Attestation signature

Unfortunately, this is not the case. There is no separate artifact for the attestation signature. Here are the steps to download the attestation artifact.

# Pull the attestation manifest
$ crane manifest 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:sha256-9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4.att | jq . > flasksample-v1-attestation-manifest.json

# Pull the attestation artifact as a tarball and unpack it into ./sigstore-attestation
$ crane pull 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:sha256-9bd049b6b470118cc6a02d58595b86107407c9e288c0d556ce342ea8acbafdf4.att flasksample-v1-attestation.tar.gz
$ mkdir sigstore-attestation
$ tar -xvf flasksample-v1-attestation.tar.gz -C ./sigstore-attestation/
sha256:728e26b36817753d90d7de8420dacf4fa1bcf746da2b54bb8c59cd047a682198
c9880779c90158a29f7a69f29c492551261e7a3936247fc75f225171064d6d32.tar.gz
ff626be9ff3158e9d2118072cd24481d990a5145d10109affec6064423d74cc4.tar.gz
manifest.json

All attestation files are available in my GitHub test repository. Knowing what was done for the signature, the results are roughly what I expected. Also, I think I am slowly getting a sense of the design by reverse-engineering it.

The manifest.json file describes the archive. It points to the config sha256:728e26b36817753d90d7de8420dacf4fa1bcf746da2b54bb8c59cd047a682198 file and the two layers c9880779c90158a29f7a69f29c492551261e7a3936247fc75f225171064d6d32.tar.gz and ff626be9ff3158e9d2118072cd24481d990a5145d10109affec6064423d74cc4.tar.gz. I was not sure what the config file sha256:728e26b36817753d90d7de8420dacf4fa1bcf746da2b54bb8c59cd047a682198 is used for, so I decided to ignore it. The two layer JSONs (which were both JSON files, despite the tar.gz extensions) were more interesting, so I decided to dig more into them.

The first thing to note is that the layer JSONs (here and here) for the attestations have a different format from the layer JSON for the signature. While the signature format seems to be proprietary to Cosign, the attestations have a payload type of application/vnd.in-toto+json, which hints at something more widely accepted. While this is not an official IANA media type, there is at least an in-toto specification published. The payload looks a lot like a Base64-encoded string, so I gave it a try.

# This decodes the SLSA provenance attestation
$ cat ff626be9ff3158e9d2118072cd24481d990a5145d10109affec6064423d74cc4.tar.gz | jq -r .payload | base64 -d | jq . > ff626be9ff3158e9d2118072cd24481d990a5145d10109affec6064423d74cc4.tar.gz.payload.json

# And this decodes the SPDX SBOM
$ cat c9880779c90158a29f7a69f29c492551261e7a3936247fc75f225171064d6d32.tar.gz | jq -r .payload | base64 -d | jq . > c9880779c90158a29f7a69f29c492551261e7a3936247fc75f225171064d6d32.tar.gz.payload.json

Both files are available here and here. If I want to get the SBOM or the SLSA provenance, I need to get the predicate value from the above JSONs, JSON-decode it, and then use it. I didn’t go into that because it was not part of my goals for this experiment.

Note one thing! As you remember from the beginning of this section, I expected to have signatures for the attestations, and I do! They are just not where I expected. The signatures are part of the layer JSONs (the ones with the strange extensions). If you want to extract them, you need to get the signatures value from those JSONs.

# Extract the SBOM attestation signature
$ cat c9880779c90158a29f7a69f29c492551261e7a3936247fc75f225171064d6d32.tar.gz | jq -r .signatures > flasksample-v1-sbom-signatures.json

# Extract the SLSA provenance attestation signature
$ cat ff626be9ff3158e9d2118072cd24481d990a5145d10109affec6064423d74cc4.tar.gz | jq -r .signatures > flasksample-v1-slsa-signature.json

The SBOM signature and the SLSA provenance signature are available on GitHub. For whatever reason, the key ID is left blank.
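Putting those two extraction steps together, the envelope handling can be sketched in a few lines of Python. The envelope below is constructed inline with a dummy payload and signature rather than taken from a real attestation, so all the concrete values are illustrative.

```python
import base64
import json

# Build a dummy envelope shaped like the layer JSONs above: a payloadType,
# a Base64-encoded in-toto statement, and a signatures list.
statement = {"predicateType": "https://slsa.dev/provenance/v0.2",
             "predicate": {"builder": {"id": "example-builder"}}}
envelope = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(json.dumps(statement).encode()).decode(),
    "signatures": [{"keyid": "", "sig": "MEQCIB-placeholder"}],
}

# Decode the payload (what the `base64 -d` step in the shell does) ...
decoded = json.loads(base64.b64decode(envelope["payload"]))
print(decoded["predicateType"])

# ... and pull out the signatures (what `jq -r .signatures` does).
# Note the blank key ID, matching what I observed in my artifacts.
print(envelope["signatures"][0]["keyid"] == "")
```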

Here are my takeaways from this experience.

  • Cosign uses a myriad of nonstandard JSON file formats to store signatures and attestations. These still need to be documented and standardized (except the in-toto one).
  • To get to the data, I need to do several conversions from JSON to Base64 to JSON again, which increases not only the compute needed but also the probability of errors and bugs, so I would recommend making that simpler.
  • All attestations are stored in a single OCI artifact, and there is no way to retrieve a single attestation based on its type. In my example, if I need to get only the SLSA provenance (785 bytes), I still need to download the SBOM, which is 1.5 MB. This is 1500 times more data than I need. The impact will be performance and bandwidth cost, and at scale, would make this solution the wrong one for me.

Verifying Signatures and Attestations with Cosign

Cosign CLI has commands for signature and attestation verification.

# This one verifies the image signature
$ cosign verify --key awskms:///61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 > sigstore-verify-signature-output.json

Verification for 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key

# This one verifies the image attestations
$ cosign verify-attestation --key awskms:///61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 > sigstore-verify-attestation-output.json

Verification for 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key

The output of the signature verification and the output of the attestation verification are available on GitHub. The signature verification is as expected. The attestation verification is more interesting, though. It is another file of non-standard JSON. It looks as though the two JSONs from the blobs were simply concatenated, perhaps unintentionally. Below is a screenshot of the output loaded in Visual Studio Code. Notice that there is no comma between lines 10 and 11. Also, the JSON objects are not wrapped in a JSON array. I’ve opened another issue for the non-standard JSON output from the verification command.

With this, I was able to complete my first scenario – Sign Container Images With Existing Keys Stored in a KMS.

Summary

Here is the summary for the second part of my experience:

  • It seems that the Sigstore implementation grew organically, and some specs were written after the implementation was done. Many pieces are missing specifications and documentation, and until those are written, it will be hard to develop third-party tooling or even to maintain the code easily. The more the project grows, the harder and slower it will be to add new capabilities, and the risk of unintended side effects and even security bugs will grow.
  • There are certain architectural choices that I would question. I have already mentioned the issue with saving all attestations in a single artifact and the numerous proprietary manifests and JSON files. I would also like to dissect the lack of separation between Cosign CLI and Cosign libraries. If they were separate, it would be easier to use them in third-party tooling to verify or sign artifacts.
  • Finally, the above two tell me that there will be a lot of incompatible changes in the product going forward. I would expect this from an MVP but not from a V1 product that I will use in production. If the team wants to move to a cleaner design and more flexible architecture, a lot of the current data formats will change. This, of course, can be hidden behind the Cosign CLI, but that means that I need to take a hard dependency on it. It will be interesting to understand the plan for verification scenarios and how Cosign can be integrated with various policies and admission controllers. Incompatible changes in the verification scenario can, unfortunately, result in production outages.

My biggest concern so far is the inconsistent approach to the implementation and the lax architectural principles and documentation of the design decisions. On the bright side, verifying the signatures and the attestations using the Cosign CLI was very easy and smooth.

In my next post, I will look at the ephemeral key signing scenario and the capabilities to revoke signed artifacts. I will also look at the last scenario that involves the promotion and re-signing of artifacts.


Today, the secure supply chain for software is top of mind for every CISO and enterprise leader. After the President’s Executive Order (EO), many efforts were spun off to secure the supply chain. One of the most prominent is, of course, Sigstore. I looked at Sigstore more than a year ago and was excited about the idea of ephemeral keys. I thought it might solve some common problems with signing – for example, reducing the blast radius if a signing key is compromised or a signing identity is stolen.

Over the past twelve months, I’ve spent a lot of time working on a secure supply chain for containers at Microsoft and gained a deep knowledge of the use cases and myriad of scenarios. At the same time, Sigstore gained popularity, and more and more companies started using it to secure their container supply chains. I’ve followed the project development and the growth in popularity. In recent weeks, I decided to take another deep look at the technology and evaluate how it will perform against a few core scenarios to secure container images against supply chain attacks.

This will be a three-part series going over the Sigstore experience for signing containers. In the first part, I will look at the experience of signing with existing long-lived keys as well as adding attestations like SBOMs and SLSA provenance documents. In the second part, I will go deeper into the artifacts created during the signing and reverse-engineer their purpose. In the third part, I will look at the signing experience with short-lived keys as well as promoting signatures between registries.

Before that, though, let’s look at some scenarios that I will use to guide my experiment.

Containers’ Supply Chain Scenarios

Every technology implementation (I believe) should start with user scenarios. Signing container images is not a complete scenario but a part of a larger experience. Below are the experiences that I would like to test as part of my experiment. Also, I will do this using the top two cloud vendors – AWS and Azure.

Sign Container Images With Existing Keys Stored in a KMS

In this scenario, I will sign the images with keys that are already stored in my cloud key management systems (AWS KMS or Azure Key Vault). The goal here is to enable enterprises to use existing keys and key management infrastructure for signing. Many enterprises already use this legacy signing scenario for their software, so there is nothing revolutionary here except the additional artifacts.

  1. Build a v1 of a test image
  2. Push the v1 of the test image to a registry
  3. Sign the image with a key stored in a cloud KMS
  4. Generate an SBOM for the container image
  5. Sign the SBOM and push it to the registry
  6. Generate an SLSA provenance attestation
  7. Sign the SLSA provenance attestation and push it to the registry
  8. Pull and validate the SBOM
  9. Pull and validate the SLSA provenance attestation

A note: I will cheat with the SLSA provenance attestations because the SLSA tooling works better in CI/CD pipelines than with the manual Docker build commands that I will use for my experiment.

Sign Container Images with Ephemeral Keys from Fulcio

In this scenario, I will test how the signing with ephemeral keys (what Sigstore calls keyless signing) improves the security of the containers’ supply chain. Keyless signing is a bit misleading because keys are still involved in generating the signature. The difference is that the keys are generated on-demand by Fulcio and have a short lifespan (I believe 10 min or so). I will not generate SBOMs and SLSA provenance attestations for this second scenario, but you can assume that this may also be part of it in a real-life application. Here is what I will do:

  1. Build a v1 of a test image
  2. Push the v1 of the test image to a registry
  3. Sign the image with an ephemeral key
  4. Build a v2 of the test image and repeat steps 2 and 3 for it
  5. Build a v3 of the test image and repeat steps 2 and 3 for it
  6. Invalidate the signature for v2 of the test image

The premise of this scenario is to test a temporary exploit of the pipeline. This is what happened with SolarWinds Supply Chain Compromise, and I would like to understand how we might be able to use Sigstore to prevent such an attack in the future or how it could reduce the blast radius. I don’t want to invalidate the signatures for v1 and v3 because this will be similar to the traditional signing approach with long-lived keys.

Acquire OSS Container Image and Re-Sign for Internal Use

This is a common scenario that I’ve heard from many customers. They import images from public registries, verify them, scan them, and then want to re-sign them with internal keys before allowing them for use. So, here is what I will do:

  1. Build an image
  2. Push it to the registry
  3. Sign it with an ephemeral key
  4. Import the image and the signature from one registry (ECR) into another (ACR)
    Those steps will simulate importing an image signed with an ephemeral key from an OSS registry like Docker Hub or GitHub Container Registry.
  5. Sign the image with a key from the cloud KMS
  6. Validate the signature with the cloud KMS certificate

Let’s get started with the experience.

Environment Set Up

To run the commands below, you will need to have AWS and Azure accounts. I have already created container registries and set up asymmetric keys for signing in both cloud vendors. I will not go over the steps for setting those up – you can follow the vendor’s documentation for that. I have also set up AWS and Azure CLIs so I can sign into the registries, run other commands against the registries and retrieve the keys from the command line. Once again, you can follow the vendor’s documentation to do that. Now, let’s go over the steps to set up Sigstore tooling.

Installing Sigstore Tooling

To go over the scenarios above, I will need to install the Cosign and Rekor CLIs. Cosign is used to sign the images and also interacts with Fulcio to obtain the ephemeral keys for signing. Rekor is the transparency log that keeps a record of the signatures done by Cosign using ephemeral keys.

When setting up automation for either signing or signature verification, you will only need to install Cosign. If you need to add or retrieve Rekor records that are not related to signing or attestation, you will also need the Rekor CLI.

You have several options to install Cosign CLI; however, the only documented option to install Rekor CLI is using Golang or building from source (for which you need Golang). One note: the installation instructions for all Sigstore tools are geared toward Golang developers.

The next thing is that, on the Sigstore documentation site, I couldn’t find information on how to verify that the Cosign binaries I installed were the ones the Sigstore team produced. The last thing I noticed after installing the CLIs is the level of detail they report about the binaries. Running cosign version and rekor-cli version gives the following output.

$ cosign version
  ______   ______        _______. __    _______ .__   __.
 /      | /  __  \      /       ||  |  /  _____||  \ |  |
|  ,----'|  |  |  |    |   (----`|  | |  |  __  |   \|  |
|  |     |  |  |  |     \   \    |  | |  | |_ | |  . `  |
|  `----.|  `--'  | .----)   |   |  | |  |__| | |  |\   |
 \______| \______/  |_______/    |__|  \______| |__| \__|
cosign: A tool for Container Signing, Verification and Storage in an OCI registry.

GitVersion:    1.13.0
GitCommit:     6b9820a68e861c91d07b1d0414d150411b60111f
GitTreeState:  "clean"
BuildDate:     2022-10-07T04:37:47Z
GoVersion:     go1.19.2
Compiler:      gc
Platform:      linux/amd64
$ rekor-cli version
  ____    _____   _  __   ___    ____             ____   _       ___
 |  _ \  | ____| | |/ /  / _ \  |  _ \           / ___| | |     |_ _|
 | |_) | |  _|   | ' /  | | | | | |_) |  _____  | |     | |      | |
 |  _ <  | |___  | . \  | |_| | |  _ <  |_____| | |___  | |___   | |
 |_| \_\ |_____| |_|\_\  \___/  |_| \_\          \____| |_____| |___|
rekor-cli: Rekor CLI

GitVersion:    v0.12.2
GitCommit:     unknown
GitTreeState:  unknown
BuildDate:     unknown
GoVersion:     go1.18.2
Compiler:      gc
Platform:      linux/amd64

The Cosign CLI provides details about the build of the binary; the Rekor CLI does not. Using the above process to install the binaries may seem insecure, but this seems to be by design, as explained in Sigstore Issue #2300: Verify the binary downloads when installing from .deb (or any other binary release).

Here is the catch, though! I looked at the above experience as a novice user going through the Sigstore documentation. Of course, as with any other technical documentation, this one is incomplete and not in sync with the implementation. There is no documentation on how to verify the Cosign binary, but there is one describing how to verify Rekor binaries. If you go to the Sigstore GitHub organization, and specifically to the Cosign and Rekor release pages, you will see that they’ve published the signatures and the SBOMs for both tools. You will also find binaries for Rekor that you can download. So, you can verify the signature of the release binaries before installing them. Here is what I did for the Rekor CLI version that I had downloaded:

$ COSIGN_EXPERIMENTAL=1 cosign verify-blob \
    --cert https://github.com/sigstore/rekor/releases/download/v0.12.2/rekor-cli-linux-amd64-keyless.pem \
    --signature https://github.com/sigstore/rekor/releases/download/v0.12.2/rekor-cli-linux-amd64-keyless.sig \
    https://github.com/sigstore/rekor/releases/download/v0.12.2/rekor-cli-linux-amd64

tlog entry verified with uuid: 38665ab8dc42600de87ed9374e86c83ac0d7d11f1a3d1eaf709a8ba0d9a7e781 index: 4228293
Verified OK

Verifying the Cosign binary is trickier, though, because you need to have Cosign already installed to verify it. Here is the output if you already have Cosign installed and you want to move to a newer version:

$ COSIGN_EXPERIMENTAL=1 cosign verify-blob \
    --cert https://github.com/sigstore/cosign/releases/download/v1.13.0/cosign-linux-amd64-keyless.pem \
    --signature https://github.com/sigstore/cosign/releases/download/v1.13.0/cosign-linux-amd64-keyless.sig \
    https://github.com/sigstore/cosign/releases/download/v1.13.0/cosign-linux-amd64

tlog entry verified with uuid: 6f1153edcc399b22b016709a218127fc7d5e9fb7071cd4812a9847bf13f65190 index: 4639787
Verified OK

If you are installing Cosign for the first time and downloading the binaries from the release page, you can follow a process similar to the one for verifying Rekor releases. I have submitted an issue to update the Cosign documentation with release verification instructions.

I would rate the installation experience no worse than any other tool geared toward hardcore engineers.

Let’s get into the scenarios.

Using Cosign to Sign Container Images with a KMS Key

Here are the two images that I will use for the first scenario:

$ docker images
REPOSITORY                                                 TAG       IMAGE ID       CREATED         SIZE
562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample   v1        b40ba874cb57   2 minutes ago   138MB
tsmacrtestcssc.azurecr.io/flasksample                      v1        b40ba874cb57   2 minutes ago   138MB

Using Cosign With a Key Stored in AWS KMS

Let’s go over the AWS experience first.

# Sign into the registry
$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 562077019569.dkr.ecr.us-west-2.amazonaws.com
Login Succeeded

# And push the image after that
$ docker push 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1

Signing the container image with the AWS key was relatively easy. Though, be careful when you omit the endpoint: make sure you add that third slash; otherwise, you will get errors. Here is what I got on the first attempt, which puzzled me a little.

$ cosign sign --key awskms://61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Error: signing [562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1]: getting signer: reading key: kms get: kms specification should be in the format awskms://[ENDPOINT]/[ID/ALIAS/ARN] (endpoint optional)
main.go:62: error during command execution: signing [562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1]: getting signer: reading key: kms get: kms specification should be in the format awskms://[ENDPOINT]/[ID/ALIAS/ARN] (endpoint optional)

$ cosign sign --key awskms://arn:aws:kms:us-west-2:562077019569:key/61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
Error: signing [562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1]: recursively signing: signing digest: getting fetching default hash function: getting public key: operation error KMS: GetPublicKey, failed to parse endpoint URL: parse "https://arn:aws:kms:us-west-2:562077019569:key": invalid port ":key" after host
main.go:62: error during command execution: signing [562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1]: recursively signing: signing digest: getting fetching default hash function: getting public key: operation error KMS: GetPublicKey, failed to parse endpoint URL: parse "https://arn:aws:kms:us-west-2:562077019569:key": invalid port ":key" after host

Of course, when I typed the URIs correctly, the image was signed, and the signature got pushed to the registry.

$ cosign sign --key awskms:///61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
Pushing signature to: 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample

Interestingly, I didn’t get the tag warning when I used the Key ID incorrectly. I got it when I used the ARN incorrectly, as well as when I used the Key ID correctly. Also, I struggled to interpret the error messages, which made me wonder about the consistency of the implementation, but more about that in the conclusions.

One nice thing was that I was able to copy the Key ID and the Key ARN and directly paste them into the URI without modification. Unfortunately, this was not the case with Azure Key Vault 🙁 .
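To avoid the trial and error above, here is a small Python helper (my own sketch, not part of any Sigstore tooling) that builds the URI in the format Cosign’s error message describes – awskms://[ENDPOINT]/[ID/ALIAS/ARN] – and makes the reason for the third slash obvious: when no custom endpoint is used, the host portion is simply empty.

```python
def awskms_uri(key_ref: str, endpoint: str = "") -> str:
    """Build a Cosign-style AWS KMS key reference.

    Cosign expects awskms://[ENDPOINT]/[ID/ALIAS/ARN] (endpoint optional).
    With no endpoint, the host part is empty, which produces the three
    consecutive slashes that are easy to miss.
    """
    return f"awskms://{endpoint}/{key_ref}"
```

For example, awskms_uri("61c124fb-bf47-4f95-a805-65dda7cd08ae") yields awskms:///61c124fb-bf47-4f95-a805-65dda7cd08ae – the exact form that finally worked above.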

Using Cosign to Sign Container Images With Azure Key Vault Key

According to the Cosign documentation, I had to set three environment variables to use keys stored in Azure Key Vault. It looks as if a service principal is the only authentication option that Cosign implements. So, I created one and gave it all the necessary permissions to Key Vault. I also set the required environment variables with the service principal credentials.

As I hinted above, my first attempt to sign with a key stored in Azure Key Vault failed. Unlike the AWS experience, copying the key identifier from the Azure Portal and pasting it into the URI (without the https:// part) won’t do the job.

$ cosign sign --key azurekms://tsm-kv-usw3-tst-cssc.vault.azure.net/keys/sigstore-azure-test-key/91ca3fb133614790a51fc9c04bd96890 tsmacrtestcssc.azurecr.io/flasksample:v1
Error: signing [tsmacrtestcssc.azurecr.io/flasksample:v1]: getting signer: reading key: kms get: kms specification should be in the format azurekms://[VAULT_NAME][VAULT_URL]/[KEY_NAME]
main.go:62: error during command execution: signing [tsmacrtestcssc.azurecr.io/flasksample:v1]: getting signer: reading key: kms get: kms specification should be in the format azurekms://[VAULT_NAME][VAULT_URL]/[KEY_NAME]

If you decipher the help text that you get from the error message: kms specification should be in the format azurekms://[VAULT_NAME][VAULT_URL]/[KEY_NAME], you would assume that there are two ways to construct the URI:

  1. Using the key vault name and the key name like this
    azurekms://tsm-kv-usw3-tst-cssc/sigstore-azure-test-key
    The assumption is that Cosign automatically appends .vault.azure.net at the end.
  2. Using the key vault hostname (not URL or identifier) and the key name like this
    azurekms://tsm-kv-usw3-tst-cssc.vault.azure.net/sigstore-azure-test-key

The first one just hung for minutes and did not complete. I tried it several times, and the behavior was consistent.

$ cosign sign --key azurekms://tsm-kv-usw3-tst-cssc/sigstore-azure-test-key tsmacrtestcssc.azurecr.io/flasksample:v1
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
^C
$

I assume the problem is that it tries to connect to a host named tsm-kv-usw3-tst-cssc and never times out. The hostname form brought me a step further. It seems that the call to Azure Key Vault was made, and I got the following error:

$ cosign sign --key azurekms://tsm-kv-usw3-tst-cssc.vault.azure.net/sigstore-azure-test-key tsmacrtestcssc.azurecr.io/flasksample:v1
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
Error: signing [tsmacrtestcssc.azurecr.io/flasksample:v1]: recursively signing: signing digest: signing the payload: keyvault.BaseClient#Sign: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=04b07795-xxxx-xxxx-xxxx-02f9e1bf7b46;oid=f4650a81-f57d-4fb3-870c-e84fe859f68a;numgroups=1;iss=https://sts.windows.net/08c1c649-bfdd-439e-8e5b-5ff31c72ce4e/' does not have keys sign permission on key vault 'tsm-kv-usw3-tst-cssc;location=westus3'. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"ForbiddenByPolicy"}
main.go:62: error during command execution: signing [tsmacrtestcssc.azurecr.io/flasksample:v1]: recursively signing: signing digest: signing the payload: keyvault.BaseClient#Sign: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=04b07795-8ddb-461a-bbee-02f9e1bf7b46;oid=f4650a81-f57d-4fb3-870c-e84fe859f68a;numgroups=1;iss=https://sts.windows.net/08c1c649-bfdd-439e-8e5b-5ff31c72ce4e/' does not have keys sign permission on key vault 'tsm-kv-usw3-tst-cssc;location=westus3'. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"ForbiddenByPolicy"}

Now, this was a very surprising error. And mainly because the AppId from the error message (04b07795-xxxx-xxxx-xxxx-02f9e1bf7b46) didn’t match the AppId (or Client ID) of the environment variable that I have set as per the Cosign documentation.

$ echo $AZURE_CLIENT_ID
a59eaa16-xxxx-xxxx-xxxx-dca100533b89

Note that I masked parts of the IDs for privacy reasons.

My first assumption was that the AppId from the error message was for my user account, with which I signed in using the Azure CLI. This assumption turned out to be true. Not knowing the intended behavior, I filed an issue for the Sigstore team to clarify and document the Azure Key Vault authentication behavior. After restarting the terminal (it seems restarting is the norm in today’s software products 😉 ), I was able to move another step forward. Now, having signed in only with the service principal credentials, I got the following error:

$ cosign sign --key azurekms://tsm-kv-usw3-tst-cssc.vault.azure.net/sigstore-azure-test-key tsmacrtestcssc.azurecr.io/flasksample:v1
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
Error: signing [tsmacrtestcssc.azurecr.io/flasksample:v1]: recursively signing: signing digest: signing the payload: keyvault.BaseClient#Sign: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadParameter" Message="Key and signing algorithm are incompatible. Key https://tsm-kv-usw3-tst-cssc.vault.azure.net/keys/sigstore-azure-test-key/91ca3fb133614790a51fc9c04bd96890 is of type 'RSA', and algorithm 'ES256' can only be used with a key of type 'EC' or 'EC-HSM'."
main.go:62: error during command execution: signing [tsmacrtestcssc.azurecr.io/flasksample:v1]: recursively signing: signing digest: signing the payload: keyvault.BaseClient#Sign: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadParameter" Message="Key and signing algorithm are incompatible. Key https://tsm-kv-usw3-tst-cssc.vault.azure.net/keys/sigstore-azure-test-key/91ca3fb133614790a51fc9c04bd96890 is of type 'RSA', and algorithm 'ES256' can only be used with a key of type 'EC' or 'EC-HSM'."

Apparently, I had generated an incompatible key! Note that RSA keys are not supported by Cosign, as I documented in the following Sigstore documentation issue. After generating a new key, the signing finally succeeded.

$ cosign sign --key azurekms://tsm-kv-usw3-tst-cssc.vault.azure.net/sigstore-azure-test-key-ec tsmacrtestcssc.azurecr.io/flasksample:v1
Warning: Tag used in reference to identify the image. Consider supplying the digest for immutability.
Pushing signature to: tsmacrtestcssc.azurecr.io/flasksample
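To capture the lesson from the Azure experiments, here is a similar sketch (again my own helper, not Cosign code) that normalizes a vault name to the fully qualified hostname form – the only form that worked for me:

```python
def azurekms_uri(vault: str, key_name: str) -> str:
    """Build a Cosign-style Azure Key Vault key reference.

    In my tests, only the fully qualified vault hostname form worked:
        azurekms://<vault-name>.vault.azure.net/<key-name>
    The bare vault-name form hung indefinitely.
    """
    host = vault if "." in vault else f"{vault}.vault.azure.net"
    return f"azurekms://{host}/{key_name}"
```

For example, azurekms_uri("tsm-kv-usw3-tst-cssc", "sigstore-azure-test-key-ec") yields the URI used in the successful command above.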

OK! I was able to get through the first three steps of Scenario 1: Sign Container Images With Existing Keys Stored in a KMS. Next, I will add some other artifacts to the image – aka attestations. I will use only one of the cloud vendors for that because I don’t expect differences in the experience.

Adding SBOM Attestation With Cosign

Using Syft, I can generate an SBOM for the container image that I have built. Then I can use Cosign to sign the SBOM and push it to the registry. Keep in mind that you need to be signed into the registry to generate the SBOM. Below are the steps to generate the SBOM (nothing to do with Cosign). The generated SBOM is also available in my GitHub test repo.

# Sign into AWS ECR
$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 562077019569.dkr.ecr.us-west-2.amazonaws.com

# Generate the SBOM
$ syft packages 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1 -o spdx-json > flasksample-v1.spdx
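As a quick sanity check of the generated SBOM, a few lines of Python can list the packages it contains. This sketch assumes only the standard SPDX JSON layout (a top-level packages array with name and optional versionInfo fields):

```python
import json

def spdx_packages(sbom_text: str):
    """Return (name, version) pairs from an SPDX JSON SBOM,
    such as the one produced by `syft ... -o spdx-json` above."""
    doc = json.loads(sbom_text)
    return [(p["name"], p.get("versionInfo", ""))
            for p in doc.get("packages", [])]

# e.g. spdx_packages(open("flasksample-v1.spdx").read())
```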

The Cosign CLI’s help shows the following message describing how to add an attestation to an image using an AWS KMS key.

cosign attest --predicate <FILE> --type <TYPE> --key awskms://[ENDPOINT]/[ID/ALIAS/ARN] <IMAGE>

When I was running this test, there was no explanation of what the --type <TYPE> parameter was. I decided just to give it a try.

$ cosign attest --predicate flasksample-v1.spdx --type sbom --key awskms:///arn:aws:kms:us-west-2:562077019569:key/61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Error: signing 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1: invalid predicate type: sbom
main.go:62: error during command execution: signing 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1: invalid predicate type: sbom

Trying spdx-json as a type also didn’t work. There were a couple of places, here and here, where the Cosign documentation spoke about custom predicate types, but none of the examples showed how to use the parameter. I decided to give it one last try.

$ cosign attest --predicate flasksample-v1.spdx --type "cosign.sigstore.dev/attestation/v1" --key awskms:///arn:aws:kms:us-west-2:562077019569:key/61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Error: signing 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1: invalid predicate type: cosign.sigstore.dev/attestation/v1
main.go:62: error during command execution: signing 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1: invalid predicate type: cosign.sigstore.dev/attestation/v1

Obviously, this was not yet documented, and it was not clear what values could be provided for it. Here is the issue asking to clarify the purpose of the --type <TYPE> parameter. From the documentation examples, it seemed that this parameter could be safely omitted. So, I gave it a shot! Running the command without the parameter worked fine and pushed the attestation to the registry.

$ cosign attest --predicate flasksample-v1.spdx --key awskms:///arn:aws:kms:us-west-2:562077019569:key/61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Using payload from: flasksample-v1.spdx

One thing that I noticed with the attestation experience is that it pushed a single artifact with .att at the end of the tag. I will come back to this in the next post. Now, let’s push the SLSA attestation for this image.

Adding SLSA Attestation With Cosign

As I mentioned above, I will cheat with the SLSA attestation because I do all those steps manually and docker build doesn’t generate SLSA provenance. I will use this sample for the SLSA provenance attestation.
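Since I am hand-crafting the provenance, here is roughly what such a predicate looks like. This is a minimal sketch assuming the SLSA v0.2 provenance field names (builder.id, buildType, invocation, materials); the builder id, build type, and material below are made-up placeholders, and the actual sample I used may differ:

```python
import json

# Hypothetical, hand-written SLSA-style provenance predicate.
# All URIs and digests below are placeholders for illustration.
predicate = {
    "builder": {"id": "https://example.com/manual-docker-build"},
    "buildType": "https://example.com/docker-build@v1",
    "invocation": {"parameters": {"tag": "v1"}},
    "materials": [
        {"uri": "git+https://example.com/flasksample",
         "digest": {"sha1": "0" * 40}},
    ],
}

# Serialize to a file that can be passed to `cosign attest --predicate`.
print(json.dumps(predicate, indent=2))
```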

$ cosign attest --predicate flasksample-v1.slsa --key awskms:///arn:aws:kms:us-west-2:562077019569:key/61c124fb-bf47-4f95-a805-65dda7cd08ae 562077019569.dkr.ecr.us-west-2.amazonaws.com/flasksample:v1
Using payload from: flasksample-v1.slsa

Cosign did something, as we can see on the console as well as in the registry – the digest of the .att artifact changed.

The question, though, is what exactly happened?

In the next post of the series, I will go into detail about what is happening behind the scenes, where I will look deeper at the artifacts created by Cosign.

Summary

To summarize my experience so far, here is what I think.

  • As I mentioned above, the installation experience for the tools is no worse than any other tool targeted to engineers. Improvements in the documentation would be beneficial for the first-use experience, and I filed a few issues to help with that.
  • Signing with a key stored in AWS was easy and smooth. Unfortunately, the implementation followed the same pattern for Azure Key Vault. I think it would be better to follow the patterns for the specific cloud vendor. There is no expectation that each cloud vendor will follow the same naming, URI, etc. patterns; changing those may result in more errors than benefits for the user.
  • While Cosign hides a lot of the complexity behind the scenes, providing some visibility into what is happening would be good. For example, if you create the key with the Cosign CLI, it will automatically create a key type that it supports. That would avoid the issue I encountered with the RSA keys, but it may not be the main scenario used in the enterprise.

Next time, I will spend some time looking at the artifacts created by Cosign and understanding their purpose, as well as how to verify those using Cosign and the keys stored in the KMS.

In Part 1 of the series, Signatures, Key Management, and Trust in Software Supply Chains, I wrote about the basic concepts of identities, signatures, and attestation. In this one, I will expand on the house-buying scenario that I hinted at in Part 1 and describe a few ways to exploit it in the physical world. Then, I will map this scenario to the digital world and delve into a few possible exploits. Throughout, I will also suggest a few possible mitigations in both the physical and the digital world. The whole process, as you may already know, is called threat modeling.

Exploiting Signatures Without Attestation in the Offline World

For the purpose of this scenario, we will assume that the parties involved are me and the title company. The document that needs to be signed is the deed (we can also call it the artifact). Here is a visual representation of the scenario:

Here is how the trust is established:

  • The title company has an inherent trust in the government.
  • This means that the title company will trust any government-issued identification like a driving license.
  • In my meeting with the title company, I present my driving license.
  • The title company verifies the driving license is legit and establishes trust in me.
  • Last, the title company trusts the signature that I use to sign the deed in front of them.
  • From here on, the title company trusts the deed to proceed with the transaction.

As we can see, establishing trust between the parties involves two important conditions – implicit trust in a central authority and verification of identity. This process, though, is easily exploitable with fake IDs (like a fake driving license), as shown in the picture below.

In this case, an imposter can obtain a fake driving license and impersonate me in the transaction. If the title company can be fooled that the driving license is issued by the government, they can falsely establish trust in the imposter and allow him to sign the deed. From there on, the title company considers the deed trusted and continues with the transaction.

The problem here is with the verification step – the title company does not do a real-time verification if the driving license is legitimate. The verification step is done manually and offline by an employee of the title company and relies on her or his experience to recognize forged driving licenses. If this “gate” is passed, the signature on the deed becomes official and will not be verified anymore in the process.

There is one important step in this process that we didn’t mention yet. When the title company employee verifies the driving license, she or he also takes a photocopy of it and attaches it to the documentation. This photocopy becomes part of the audit trail for the transaction if it is later discovered that the transaction needs to be reverted.

Exploiting Signatures Without Attestation in the Digital World

The above process is easily transferable to the digital world. In the following GitHub project I have an example of signing a simple text file artifact.txt. The example uses self-signed certificates for verifying the identity and the signature.

There are two folders in the repository. The real folder contains the files used to generate a key and X.509 certificate that is tied to my real identity and verified using my real domain name toddysm.com. The fake folder contains the files used to generate a key and X.509 certificate that is tied to an imposter identity that can be verified with a look-alike (or fake) domain. The look-alike domain uses homographs to replace certain characters in my domain name. If the imposter has ownership of the imposter domain, obtaining a trusted certificate with that domain name is easily achievable.
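The homograph trick is easy to demonstrate in a few lines of Python. The look-alike below replaces the Latin “o” in toddysm.com with the Cyrillic “о” (U+043E) – a hypothetical example of the technique; any confusable character pair works:

```python
import unicodedata

real = "toddysm.com"
fake = "t\u043eddysm.com"  # Latin "o" swapped for Cyrillic "о"

# The strings render identically in most fonts...
assert len(real) == len(fake)
# ...but are different code points, hence different domain names.
assert real != fake

print(unicodedata.name(real[1]))  # LATIN SMALL LETTER O
print(unicodedata.name(fake[1]))  # CYRILLIC SMALL LETTER O
```

If the imposter registers the look-alike domain, a CA will happily validate ownership of it and issue a certificate that is visually indistinguishable from the real one.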

The dilemma you are presented with is which certificate to trust – the one here or the one here. When you verify both certificates using the following commands:

openssl x509 -nameopt lname,utf8 -in [cert-file].crt -text -noout | grep Subject:
openssl x509 -nameopt lname,utf8 -in [cert-file].crt -text -noout | grep Issuer:

they both return visually indistinguishable information:

Subject: countryName=US, stateOrProvinceName=WA, localityName=Seattle, organizationName=Toddy Mladenov, commonName=toddysm.com, emailAddress=me@toddysm.com
Issuer: countryName=US, stateOrProvinceName=WA, localityName=Seattle, organizationName=Toddy Mladenov, commonName=toddysm.com, emailAddress=me@toddysm.com

It is the same as looking at two identical driving licenses, a legitimate one and a forged one, that have no visible differences.
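To make the homograph trick concrete, here is a small Python sketch. The Cyrillic substitution shown is an assumed example, not necessarily the exact character used in the repository. The two domain strings render identically in most fonts, yet they are different byte sequences, and the punycode (IDNA) encoding exposes the look-alike label:

```python
import unicodedata

real = "toddysm.com"
fake = "t\u043Eddysm.com"  # Cyrillic 'о' (U+043E) swapped in for Latin 'o'

print(real == fake)                # False: the strings only look the same
print(unicodedata.name(real[1]))   # LATIN SMALL LETTER O
print(unicodedata.name(fake[1]))   # CYRILLIC SMALL LETTER O
print(fake.encode("idna"))         # punycode form reveals the fake label
```

Browsers apply the same IDNA conversion internally, which is why some of them display suspicious mixed-script domains in their xn-- form instead of the look-alike glyphs.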

The barrier for this exploit using PGP keys and SSH keys is even lower. While X.509 certificates need to be issued by a trusted certificate authority (CA), PGP and SSH keys can be issued by anybody. Here is a corresponding example of a valid PGP key and an imposter PGP key. Once again, which one would you trust?

Compromising CAs, though, is not something we can ignore. There are numerous examples where forged certificates issued by legitimate CAs have been used in attacks.

Let’s also not forget that the Stuxnet malware was signed with compromised JMicron and Realtek private keys. In the case of a compromised CA, malicious actors don’t even need to use homographs to deceive the public – they can issue the certificate with the real name and domain.

Unlike the physical world, though, the digital one misses the very important step of collecting audit information when the signature is verified. I will come back to that in the next post of the series, where I plan to explore the various controls that can be put in place to increase security.

Based on the above, though, it is obvious that trust, whether in a single entity or a central certificate authority (CA), has greatly diminished in recent years.

Oh, and don’t trust the keys that I published on GitHub! 🙂 Anybody can copy them or generate new ones with my information – unfortunately obtaining that information is quite easy nowadays.

Exploiting Signatures With Attestation in the Offline World

Let’s look at the example I introduced in the previous post where more parties are involved in the process of selling my house. Here is the whole scenario!

Because I am unable to attend the signing of the documents, I need to issue a power of attorney for somebody to represent me. This person will be able to sign the documents on my behalf. First and foremost, I need to trust that person. But my trust in this person doesn’t automatically transfer to the title company that will handle the transaction. For the title company to trust my representative, the power of attorney needs to be attested by a certified notary. Only then will the title company trust the power of attorney document and accept the signature of my representative.

Here is the question: “How does the introduction of the notary increase security?” Note that I used the term “increase security” deliberately – there is no 100% guarantee that this process will not fail.

By adding one more step to the process, we introduce an additional obstacle that reduces the probability for malicious activity to happen, which increases the security.

What the notary prevents is my “representative” forcing me to sign the power of attorney against my will. Without the attestation, my security is compromised, and my evil representative can use the power of attorney to sell my house to himself for just a dollar. The purpose of the notary is to attest that I willfully signed the document and was present (and in good health) during the signing. Of course, this can easily be exploited if both the representative and the notary are evil, as shown in the diagram below.

As you can see in this scenario, all parties have valid government-issued IDs that the title company trusts. However, the process is compromised if there is collusion between the malicious actor (evil representative) and the notary.

Other ways to exploit this process exist if the notary, my representative, or both are impersonated. Impersonation is described in the section above – Exploiting Signatures Without Attestation in the Offline World.

Exploiting Signatures With Attestation in the Digital World

There has been a lot of talk recently about implementing attestation systems that save signature receipts in an immutable ledger. This is presented as the silver-bullet solution for signing software artifacts (check out the Sigstore project). Similar to the notary example in the previous section, this approach may increase security, but it may also have a negative impact. Because they compare themselves to Let’s Encrypt, let me take a stab at how Let’s Encrypt impacted security on the Web.

Before Let’s Encrypt, only owners willing to pay for valid certificates had HTTPS enabled on their websites. More importantly, though, browsers showed a clear indicator when a site was using the plain HTTP protocol and not the secure one. From a user’s point of view, the decision was easy: if the browser address bar was red, you should not enter your username, password, or credit card. Recognizing malicious sites was relatively easy because malicious actors didn’t want to spend the money and time to get a valid certificate.

Let’s Encrypt (and the browser vendors) changed that paradigm. Being free, Let’s Encrypt allows anybody to obtain a valid (and “trusted”??? 🤔) certificate and enable HTTPS for their site. Not only that, but Let’s Encrypt made it so easy that you can get a certificate issued and deployed to your web server within seconds using automation. The only proof you need to provide is ownership of your server’s domain name. At the same time, Google led the campaign to change the browser indicators to a mediocre lock icon in the address bar that hardly anybody pays attention to anymore. As a result, every malicious website now has HTTPS enabled, and there is no indication in the browser to tell you that it is malicious. In essence, the lock gives you a false sense of security.

I would argue that Let’s Encrypt (and the browser vendors) in fact decreased security on the web instead of increasing it. Let me be clear! While I think Let’s Encrypt (and the browser vendors) decreased security, what they provide has had a tremendous impact on privacy, and privacy should not be discounted! Though, in marketing messages those two terms are used interchangeably, and that is not to the benefit of users.

In the digital world, the CA can play the role of the notary in the physical world. The CA verifies the identity of the entity that wants to sign artifacts and issues a “trusted” certificate. Similar to a physical-world notary, the CA will issue a certificate for both legitimate and malicious actors, and unlike the physical world, the CA has very basic means to verify identities. In the case of Let’s Encrypt, this is domain ownership. In the case of Sigstore, it will be a GitHub account. Anyone can easily buy a domain or register a GitHub account and get a valid certificate. This doesn’t mean, though, that you should trust it.

Summary

The takeaway from this post should be that every system can be exploited. We learn and create systems that reduce the opportunities for exploitation, but that doesn’t make them bulletproof. Also, when evaluating technologies, we should not only look at the shortcomings of the previous technology but also at the shortcomings of the new shiny one. Just adding attestation to signatures will not be enough to make them more secure.

In the next post, I will look at some techniques that we can employ to make signatures and attestations more secure.

Photo by Erik Mclean on Unsplash


For the past few months, I’ve been working on a project for a secure software supply chain, and one topic that always seems to start passionate discussions is software signatures. The President’s Executive Order on Improving the Nation’s Cybersecurity (EO) is a pivotal point for the industry. One of its requirements is for vendors to document the supply chain for software artifacts. Proving the provenance of a piece of software is a crucial part of the software supply chain, and signatures play a major role in the process. Though, there are conflicting views on how signatures should work. There is the traditional PKI (Public Key Infrastructure) approach that is well established in enterprises, but other traditional and emerging technologies are brought up in discussions. These include PGP key signatures, SSH key signatures, and the emerging ephemeral key (or keyless) signatures (here, here, and lately here).

While PKI is well established, its shortcomings were outlined by Bruce Schneier and Carl Ellison more than 20 years ago in their paper. The new approaches are trying to overcome those shortcomings and democratize signatures the same way Let’s Encrypt democratized HTTPS for websites. Though, the question is whether those new technologies improve security over PKI, and if so, how. In a series of posts, I will lay out my view of the problem and the pros and cons of using one or another signing approach, how trust is established, and how to manage the signing keys. I will start with the basics, using simple examples that relate to everyday life, and map those to the world of digital signatures.

In this post, I will go over the identity, signature, and attestation concepts and explain why those matter when establishing trust.

What is Identity?

Think about your own experience. Your identity is you! You are identified by your gender, skin color, facial and body characteristics, thumbprint, iris print, hair color, DNA, etc. Unless you have an identical twin, you are unique in the world. Even between identical twins, there are differences, like thumbprints and iris prints, that make each person unique. The same is true for other entities like enterprises, organizations, etc. Organizations have names, tax numbers, government registrations, addresses, etc. As a general rule, changing your identity is hard, if not impossible. You can have plastic surgery, but you cannot change your DNA. The story may be a bit different for organizations, which can rename themselves, get bought or sold, change headquarters, etc., but it is still pretty easy to uniquely identify organizations.

All the above points that identities are:

  • unique
  • and impossible (or very hard) to change

In the digital world, identities are an abstract concept. In my opinion, it is wrong to think that identities can be changed in both the physical and the digital world. Although we tend to think that they can be changed, this is not true – what can be changed is the way we prove our identity. We will cover that shortly but before that, let’s talk about trust.

If you are a good friend of mine, you may be willing to trust me, but if you just met me, your level of trust will be pretty low. Trust is established based on historical evidence. The longer you know me, and the longer I behave honestly, the more you will be willing to trust me. Sometimes I may not be completely honest, or I may borrow some money from you and not return it. But I may buy you a beer every time we go out to offset that cost, and you may be willing to forgive me. It is important to note that trust is very subjective, and while you may be very forgiving, another friend of mine may not be. He may decide that I am not worth his trust and never lend me money again.

How do We Prove Our Identity?

In the physical world, we prove our identity using papers like a driving license, a passport, an ID card, etc. Each one of those documents is issued for a purpose:

  • The driving license is mainly used to prove you can drive a motorized vehicle on US streets. Unless it is an enhanced driving license, you (soon) will not be able to use it to board a domestic flight. You cannot cross borders with your driving license, and you cannot even use it to rent a car in Europe (unless you have an international driving license).
  • To cross borders you need a passport. The passport is the only document that is recognized by border authorities in other countries that you visit. You cannot use your US driving license to cross the borders in Europe. The interesting part is that you do not need a driving license to get a passport or vice versa.
  • You also have your work badge. Your work badge identifies you as an employee of a particular organization. Despite the fact that you have a driving license and a passport, you cannot enter the buildings without your badge. However, to prove to your employer that you are who you are for them to issue you the badge, you must have a driving license or a passport.

In the digital world, there are similar concepts to prove our identity.

  • You can use a username, password and another factor (2FA/MFA token) to prove your identity to a particular system.
  • App secret that you can generate in a system can also be used to prove your identity.
  • OAuth or SSO (single sign-on) token issued by a third party is another way to prove your identity to a particular system. That system though needs to trust the third party.
  • An SSH key can be an alternative way to prove your identity. You can use it in conjunction with a username/password combination or separately.
  • You can use a PGP key to prove your identity to an email recipient.
  • Or use a TLS certificate to prove the identity of your website.
  • And finally, you can use an X.509 certificate to prove your identity.

As you can see, similar to the physical world, in the digital world you have multiple ways to prove your identity to a system. You can use more than one way for a single system. The example that comes to mind is GitHub – you can use app secret or SSH key to push your changes to your repository.

How Does Trust Tie to the Concepts Above?

Let’s say that I am a good developer. My code published on GitHub has a low level of bugs; it is well structured, well documented, easy to use, and updated regularly. You decide that you can trust my GitHub account. However, I also have a DockerHub account that I am negligent with – I don’t update the containers regularly, they have a lot of vulnerabilities, and they are sloppily built. Although you are my friend and you trust my GitHub account, you are not willing to trust my DockerHub account. This example shows that trust is not only subjective but also based on context.

OK, What Are Signatures?

Here is where things become interesting! In the physical world, a signature is a person’s name written in that person’s handwriting. Just the signature does not prove my identity. Wikipedia’s entry for signature defines the traditional function of a signature as follows:

…to permanently affix to a document a person’s uniquely personal, undeniable self-identification as physical evidence of that person’s personal witness and certification of the content of all, or a specified part, of the document.

The keyword above is self-identification. This word in the definition has a lot of implications:

  • First, as a signer, I can have multiple signatures that I would like to use for different purposes. I.e. my identity may use different signatures for different purposes.
  • Second, nobody attests to my signature. This means that the trust is put in a single entity – the signer.
  • Third, a malicious person can impersonate me and use my signature for nefarious purposes.

Interestingly, though, we are willing to accept a signature as proof of identity depending on the level of trust we have in the signer. For example, if I borrow $50 from you and give you a signed receipt saying I will pay you back in 30 days, you may be willing to accept it even if you don’t know me very well (i.e., your level of trust is relatively low). This is understandable because we decide to lower the required level of trust to just self-identification. I can increase your level of trust if I show you my driving license, which has my signature printed on it, so you can compare both signatures. However, showing you my driving license is actually an attestation, which is covered in detail below.

In the digital world, to create a signature, you need a private key and to verify a signature, you need a public key (check the Digital Signature article on Wikipedia). The private and the public key are related and work in tandem – the private key signs the content and the public key verifies the signature. You own both but keep the private secret and publish the public to everybody to use. From the examples I have above, you can use PGP, SSH, and X.509 to sign content. However, they have differences:

  • PGP is a self-generated key-pair with additional details like name and email address included in the public certificate, that can be used for (pseudo)identification of the entity that signs the content. You can think of it as similar to a physical signature, where, in addition to the signature you verbally provide your name and home address as part of the signing process.
  • SSH is also a self-generated key pair but has no additional information attached. Think of it as the plain physical signature.
  • With X.509 you have a few options:
    • Self-generated key-pair similar to the PGP approach but you can provide more self-identifying information. When signing with such a private key you can assume that it is similar to the physical signature, where you verbally provide your name, address, and date of birth.
    • Domain Validated (DV) certificate that validates your ownership of a particular domain (this is exactly what Let’s Encrypt does). Think of this as similar to a physical signature where you verbally provide your name, address, and date of birth as well as show a utility bill with your name and address as part of the signing process.
    • Extended Validation (EV) certificate that validates your identity using legal documents. For example, this can be your passport as an individual or your state and tax registrations as an organization.
      Both, DV and EV X.509 certificates are issued by Certificate Authorities (CA), which are trusted authorities on the Internet or within the organization.

Note: X.509 is actually an ITU standard defining the format of public-key certificates and is the basis of PKI. The key pair can be generated using different algorithms. Though, the term X.509 is often used (maybe incorrectly) as a synonym for the key pair as well.
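To illustrate how the private key signs and the public key verifies, here is a textbook RSA sketch in Python. The tiny primes and integer “messages” are purely illustrative assumptions; real implementations sign a cryptographic hash of the content with keys of 2048 bits or more:

```python
# Textbook RSA with toy primes -- for illustration only, never use in practice.
p, q = 61, 53
n = p * q                            # modulus, part of both keys
e = 17                               # public exponent:  public key is (n, e)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: private key is (n, d)
                                     # (pow(e, -1, m) needs Python 3.8+)

def sign(message: int) -> int:
    # Only the holder of the private exponent d can produce this value.
    return pow(message % n, d, n)

def verify(message: int, signature: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature.
    return pow(signature, e, n) == message % n

sig = sign(42)
print(verify(42, sig))   # True: message and signature match
print(verify(43, sig))   # False: the content was tampered with
```

The asymmetry is the whole point: publishing (n, e) lets anyone verify the signature, while forging one would require recovering the secret d.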

Without any other variables in the mix, the level of trust you might put in the above digital approaches would most probably be the following: (1-Lowest) SSH, (2) PGP and self-signed X.509, (3) DV X.509, and (4-Highest) EV X.509. Keep in mind that DV and EV X.509 are actually based on attestation, which is described next.

So, What is Attestation?

We finally came to it! Attestation, according to the Merriam-Webster dictionary, is an official verification of something as true or authentic. In the physical world, one can increase the level of trust in a signature by having a notary attest to it (lower level of trust) or by adding a government apostille (higher level of trust, used internationally). In many states, notaries are required (or highly encouraged) to keep a log for tracking purposes. While you may be OK with having only my signature on a paper for a $50 loan, you certainly would want a notary attesting to a contract selling your house to me for $500K. The level of trust in a signature increases when you add additional parties who attest to the signing process.

In the digital world, attestation is also present. As mentioned above, CAs act as digital notaries that verify the identity of the signer and issue digital certificates. This is done only for DV and EV X.509 certificates, though; there is no attestation for PGP, SSH, and self-signed X.509 certificates. For digital signatures, there is one more traditional method of attestation – the Timestamp Authority (TSA). The TSA’s role is to provide an accurate timestamp of the signing to prevent tampering with the time by changing the clock on the computer where the signing occurs. Note that the TSA attests only to the accuracy of the signing timestamp and not to the identity of the signer. One important thing to remember is that without attestation, you cannot fully trust the signature.

Here is a summary of the signing approaches and the level of trust we discussed so far.

Signing Keys and Trust

Signing Approach      Level of Trust
SSH Key               1 - Lowest
PGP Key               2 - Low
X.509 Self-Signed     2 - Low
X.509 DV              3 - Medium
X.509 EV              4 - High

Now that we’ve established the basics, let’s talk about the validity period and why it matters.

Validity Period and Why it Matters?

Every identification document that you own in the physical world has an expiration date. OK, I lied! I have a German driving license that doesn’t have an expiration date. But this is an exception, and I can claim that I am one of the last who had that privilege – newer driving licenses in Germany have an expiration date. US driving licenses have an expiration date and an issue date. In the US, you need to renew your adult passport every ten years. Different factors determine why an identification document may expire. For a driving license, the reason may be that you lost some of your vision and are no longer capable of driving. For a passport, it may be because you moved to another country, became a citizen there, and forfeited your US citizenship.

Now, let’s look at physical signatures. Let’s say that I want to issue a power of attorney to you to represent me in the sale of my house while I am on a business trip for four weeks in Europe. I have two options:

  • Write you a power of attorney without an expiration date and have a notary attest to it (else nobody will believe you that you can represent me).
  • Write you a power of attorney that expires four weeks from today and have a notary attest to it.

Which one do you think is more “secure” for me? The second one, of course! The second power of attorney gives you only a limited period to sell my house. While this does not prevent you from selling it in a completely different transaction than the one I intended, you are still under time constraints. The counterparts in the transaction will check the power of attorney and note the expiration date. If the final meeting at which you must sign the papers takes place four weeks and a day from now, they should not allow you to sign, because the power of attorney is no longer valid.

Now, here is an interesting situation that often gets overlooked. Let’s say that I sign the power of attorney on Jan 1st, 2022. The power of attorney is valid till the end of day Jan 28th, 2022. I use my driving license to identify myself to the notary. My driving license has an expiration date of Jan 21st, 2022. Also, the notary’s license expires on Jan 24th, 2022. What is the last date that the power of attorney is valid? I will leave this exploration for one of the subsequent posts.

Time constraints are a basic measure to increase my security and prevent you from selling my house and pocketing the money later in the year. I will expand on this example in my next post, where I will look at different ways to exploit signatures. But the basic lesson here is: the more time you have to exploit something, the higher the probability you will succeed. Another lesson is: put an expiration date on all of your powers of attorney!

How does this look in the digital world?

  • SSH keys do not have expiration dates. Unless you provide the expiration date in the signature itself, the signature will be valid forever.
  • PGP keys have expiration dates a few years in the future. I just created a new key and it is set to expire on Jan 8th, 2026. If I sign an artifact with it and don’t provide an expiration date for the signature, it will be considered valid until Jan 8th, 2026.
  • X.509 certificates also have long validity periods – 3, 12, or 24 months. Let’s Encrypt certificates expire after 3 months. Root CA certificates have even longer validity periods, which can be dangerous, as we will explore in the future. Let’s Encrypt was the first to shorten the validity of its certificates to limit the impact of certificate compromise, because domains change hands quite often. Enterprises followed suit because the number of stolen enterprise certificates is growing.
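The interplay between the certificate’s validity window and the signing time can be sketched as follows (the dates are assumptions for illustration). The check is only meaningful if the signing time comes from a trusted source such as a TSA, since otherwise the signer can simply back-date the clock:

```python
from datetime import datetime, timezone

# Assumed validity window of the signing certificate (~90 days, Let's Encrypt style)
not_before = datetime(2022, 1, 1, tzinfo=timezone.utc)
not_after = datetime(2022, 4, 1, tzinfo=timezone.utc)

def signature_trusted(signing_time: datetime) -> bool:
    # Accept the signature only if it was produced while the certificate
    # was valid; a signature made after expiry should be rejected even
    # though the key material still "works" mathematically.
    return not_before <= signing_time <= not_after

print(signature_trusted(datetime(2022, 2, 15, tzinfo=timezone.utc)))  # True
print(signature_trusted(datetime(2022, 5, 1, tzinfo=timezone.utc)))   # False: cert expired
```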

Note: In the next post, I will expand a little bit more into the relationships between keys and signatures but for now, you can use them as the example above where I mention the various validity periods for documents used for the power of attorney.

Summary

If nothing else, here are the main takeaways that you should remember from this post:

  • Identity cannot be inferred from a signature alone. Signatures can be forged, even in the digital world.
  • One identity can have many signatures. Those signatures can be used for different purposes.
  • For a period of time, a signature can imply identity if it is attested to. However, the more time passes, the less the signature should be trusted. Also, that period of time is subjective and depends on the risk tolerance of the signature consumer.
  • To increase security, signatures must expire. The shorter the expiration period, the higher the security (but also other constraints should be put in place).
  • Before trusting a signature, you should verify if the signed asset is still trustable. This is in line with the zero-trust principle for security: “Never trust, always verify!”.

Note that in the last bullet point, I intentionally use the term “asset is trustable” and not “signature is valid.” In the next post, I will go into more detail about what that means, how signatures can be exploited, and how context can provide value.

Featured image by StockSnap.