Secrets engines make up the fifth objective in the Vault certification journey. This objective covers the following sub-objectives:
- Define secrets engines
- Choose a secret method based on use case
- Contrast dynamic secrets vs. static secrets and their use cases
- Define transit engine
Define secrets engines#
Note: In the official list of sub-objectives for this certification objective the Define secrets engines sub-objective is listed as the last of four sub-objectives. I think it makes more sense to go through it first because all of the other sub-objectives build upon what secrets engines are.
Secrets engines allow us to work with secrets in a third-party system. I say third-party system because that will most likely be the case; there are secrets engines that do not involve a third-party system too, but they are fewer in number. Generally, a secrets engine stores, generates, or encrypts data.
There are a number of secrets engines that are enabled by default when you start up a Vault server. Exactly which secrets engines are available depends a bit on your server, but there are two that are always mounted by default and that are important to be aware of:
- The cubbyhole secrets engine is a special secrets engine where each Vault token has its own secrets storage. A secret stored in a cubbyhole for one token can only be retrieved by that same token.
- The identity secrets engine is another special secrets engine used for managing identities (or entities) in Vault. It is the internal identity management solution for Vault. If you have enabled multiple auth methods on your server, your users might use any of them to sign in. An entity is tied to a specific user no matter what auth method that user used to sign in to Vault. The identity secrets engine is used to administer these entities.
Note that neither of these two special secrets engines can be disabled.
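To get a feel for the cubbyhole secrets engine, here is a minimal sketch (the path my-app and the value are just illustrative):
$ vault write cubbyhole/my-app api-key=abc123
$ vault read cubbyhole/my-app
The read returns the api-key value for the token that wrote it; any other token reading cubbyhole/my-app would only see its own, separate cubbyhole.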
To enable a new secrets engine using the CLI:
$ vault secrets enable <name>
Replace <name> with the secrets engine you want to enable. Currently the following official secrets engines are supported:
- Active Directory
- AliCloud
- AWS
- Azure
- Consul
- Cubbyhole
- Databases (with plugins for e.g. Cassandra, Couchbase, Elasticsearch, MongoDB, MSSQL, Oracle, PostgreSQL, Redis, Snowflake, and more)
- Google Cloud
- Google Cloud KMS
- Identity
- Key Management (Azure Key Vault, AWS KMS, GCP Cloud KMS)
- Key/Value (version 1 and 2)
- KMIP
- Kubernetes
- MongoDB Atlas
- Nomad
- LDAP
- PKI (certificates)
- RabbitMQ
- SSH
- Terraform Cloud
- TOTP
- Transform
- Transit
- Venafi (certificates)
It is clear that this is a long list. To see which secrets engines you have currently enabled in your Vault server run the following command:
$ vault secrets list
Path         Type        Accessor             Description
----         ----        --------             -----------
azure/       azure       azure_2f066eac       n/a
cubbyhole/   cubbyhole   cubbyhole_255a0b95   per-token private secret storage
identity/    identity    identity_1b99e3f9    identity store
kv/          kv          kv_35836191          n/a
kvv2/        kv          kv_5beecabf          n/a
secret/      kv          kv_cdd3836c          key/value secret storage
sys/         system      system_2d023c0b      system endpoints used for control, policy and debugging
I have enabled a few secrets engines as you can tell from the output.
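If you want more details about each enabled secrets engine, such as the default and maximum lease TTLs for the mount, you can add the -detailed flag:
$ vault secrets list -detailed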
Apart from the secrets engines listed above it is also possible to create your own plugins to extend the support in Vault to other systems. Of course I will not go through how to use each and every secrets engine in this post, but we’ll see a few of them in use.
The lifecycle of a secrets engine generally follows these steps:
- Enable the secrets engine in Vault.
- Tune the global settings of the secrets engine. This step is not required if you accept the default settings.
- Use the secrets engine to generate secrets, store secrets, and more, for users and applications.
- Disable the secrets engine when it is no longer needed. This step might not happen very often; usually you keep a secrets engine around for the foreseeable future once you have started using it.
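As a rough sketch of this lifecycle expressed in CLI commands (the aws engine, the aws-dev path, and my-role are just illustrative; a role would need to be configured before the read works):
$ vault secrets enable -path=aws-dev -description="AWS credentials for development" aws
$ vault secrets tune -default-lease-ttl=30m -max-lease-ttl=1h aws-dev/
$ vault read aws-dev/creds/my-role
$ vault secrets disable aws-dev/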
To get our hands dirty we will take a look at a common secrets engine that will most likely appear in the certification exam in one way or another: the key/value secrets engine. This secrets engine exists in two versions, and the difference between them is that version 2 supports versioned secrets. Let’s begin by enabling version 1:
$ vault secrets enable kv
Success! Enabled the kv secrets engine at: kv/
The key/value secrets engine allows me to store static secrets as key/value pairs at paths below the kv/ path. Let’s store some data at kv/database/password:
$ vault kv put kv/database/password password=p@ssw0rd
Success! Data written to: kv/database/password
Now let’s read this data back:
$ vault kv get kv/database/password
====== Data ======
Key Value
--- -----
password p@ssw0rd
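A few related commands are worth knowing about. I can read a single field from the secret, and I can list the keys stored beneath a path (shown here for the secret we just wrote):
$ vault kv get -field=password kv/database/password
p@ssw0rd
$ vault kv list kv/database
Keys
----
password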
That was easy enough. Let’s now perform the same steps using version 2 of the key/value secrets engine. I begin by enabling the secrets engine at the kvv2/ path:
$ vault secrets enable -version=2 -path=kvv2 kv
Success! Enabled the kv secrets engine at: kvv2/
Next let’s store my database password:
$ vault kv put kvv2/database/password password=p@ssw0rd
======= Secret Path =======
kvv2/data/database/password
======= Metadata =======
Key Value
--- -----
created_time 2023-09-08T20:19:25.450796Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
The output looks different than before. Notably we see that the output includes metadata about the secret, and among this metadata is the version. Remember that version 2 of the key/value secrets engine supports versioned secrets. What this means in practice is that I can store new versions of the same secret (i.e. rotating the secret) at the same path, but still retrieve old versions if needed. This is not possible with version 1 of the key/value secrets engine: if I write to a path that already exists, version 1 simply replaces the value stored at that path.
Another thing to note is that although I wrote the data to kvv2/database/password, the actual data is available at kvv2/data/database/password and the metadata is available at kvv2/metadata/database/password. This is a bit confusing, but it is how this secrets engine works. It is important to remember this when writing policies for applications that must use this secrets engine: we could write policies that allow access to the metadata but not the data, or vice versa.
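As an illustration of this, a policy that lets an application read the secret data below kvv2/data/database/ but not touch the metadata endpoints could look like the following sketch (the policy name is just an example):
$ vault policy write kvv2-app-read -<<EOF
path "kvv2/data/database/*" {
  capabilities = ["read"]
}
EOF
Success! Uploaded policy: kvv2-app-read
Swapping kvv2/data/ for kvv2/metadata/ in the path would instead grant access to the metadata without exposing the secret values.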
Now let’s read this data back:
$ vault kv get kvv2/database/password
======= Secret Path =======
kvv2/data/database/password
======= Metadata =======
Key Value
--- -----
created_time 2023-09-08T20:19:25.450796Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
====== Data ======
Key Value
--- -----
password p@ssw0rd
We see that I get both the metadata and data back when I read the secret path.
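To see the versioning in action I can write a new password to the same path and then read back a specific version using the -version flag:
$ vault kv put kvv2/database/password password=n3w-p@ssw0rd
$ vault kv get -version=1 kvv2/database/password
The put creates version 2 of the secret, while the -version=1 read still returns the original p@ssw0rd value.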
HashiCorp has great tutorials on the key/value secrets engine that you should go through in order to get some hands-on practice using it; this will be beneficial for the certification. See the following tutorials:
- Static secrets: Key/value secrets engine
- Versioned Key/value secrets engine
- Compare key/value secrets engine v1 and v2
We have seen how to work with static secrets in Vault. In a later section we will see examples of dynamic secrets and how the concept of a lease exists for those kinds of secrets, similar to what we saw for tokens in the previous post.
Choose a secret method based on use case#
Similar to how we choose auth methods based on our use-case, or on the available third-party systems that integrate with Vault, we also choose the secrets engines we want to use based on what our use-case is and which third-party systems we might already be using.
There are obvious use-cases where the choice of secrets engine is easy to make:
- Let’s say we want our application to be able to access objects stored in Amazon S3. The AWS secrets engine seems apt in this case.
- If I run my applications in Azure and I need Azure credentials in order to access databases, key vaults, storage, or anything else in Azure then the Azure secrets engine is the obvious choice.
- If we are working with a given database system we will probably want to use the database secrets engine with the correct plugin.
- If I need to be able to generate certificates for my applications and clients I will probably use the PKI secrets engine.
We see that choosing a secrets engine is not that difficult. It depends on what types of secrets you want to work with and what third-party systems (e.g. AWS, Azure, GCP) you are already using.
Contrast dynamic secrets vs. static secrets and their use cases#
Static secrets are secrets that don’t change, or at least do not change very often. The big issue with static secrets is that the longer a secret stays the same, the greater the risk that it leaks or ends up in the wrong hands.
Dynamic secrets are generated on-the-fly when we ask for them, and they expire once the secret’s lease expires. If regular secrets were known as Secrets 1.0, then dynamic secrets would be Secrets 2.0. To illustrate the power of dynamic secrets we’ll go through an example.
I will use the Azure secrets engine to generate credentials for a service principal on-the-fly. I will not spend time explaining the Azure-specific configuration in detail; we’ll instead focus on what we need to configure in Vault. Let’s begin by enabling the Azure secrets engine at the azure/ path:
$ vault secrets enable azure
Success! Enabled the azure secrets engine at: azure/
We must allow Vault to administer Azure service principals for us, so we must configure the secrets engine with credentials to Azure. In Azure I configure a service principal that Vault can use, and I add this service principal as an owner of my subscription as well as give it permissions to create new service principals in my Azure AD tenant. I store some required values for this in the environment variables AZURE_SUBSCRIPTION_ID, AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET. Now I am ready to configure the secrets engine:
$ vault write azure/config \
subscription_id=$AZURE_SUBSCRIPTION_ID \
tenant_id=$AZURE_TENANT_ID \
client_id=$AZURE_CLIENT_ID \
client_secret=$AZURE_CLIENT_SECRET
Success! Data written to: azure/config
At this point we technically know the password of this service principal that Vault will be using. That is not ideal. Most secrets engines allow us to rotate the credential that is used for administrative purposes, so that only Vault knows the secret value. To do this run the following command:
$ vault write -f azure/rotate-root
Success! Data written to: azure/rotate-root
Note that the old secret is actually not deleted, and you must go do that manually. The new secret is being used, and only Vault knows its value.
Next I must configure a role in the secrets engine. A role describes what permissions will be granted to the dynamic secrets that are generated. I will call my role demo-role, and it will get the Contributor role (here the role refers to the Azure RBAC role) on a specific resource group named rg-vault-demo in my Azure subscription:
$ vault write azure/roles/demo-role ttl=5m azure_roles=-<<EOF
[
{
"role_name": "Contributor",
"scope": "/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/rg-vault-demo"
}
]
EOF
Success! Data written to: azure/roles/demo-role
I could have provided multiple Azure RBAC roles for various scopes if I wanted to. Note that I also configured that the time-to-live (TTL) of the dynamic secret should be five minutes (ttl=5m). To have Vault generate dynamic secrets I run the following command:
$ vault read azure/creds/demo-role
Key Value
--- -----
lease_id azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH
lease_duration 5m
lease_renewable true
client_id d4af1d42-d4c7-4c49-b8ec-9fb8920350e1
client_secret wZP8Q<masked>NOadO
We are provided with a client_id and a client_secret that can be used to interact with Azure. Here we also see that I get a lease_id, a lease_duration, and a lease_renewable property. The lease ID is the handle for this secret that allows me to perform actions similar to what we saw for tokens in the previous post. I can look up a given lease ID to get information about it:
$ vault lease lookup azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH
Key Value
--- -----
expire_time 2023-09-08T21:20:38.971253+02:00
id azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH
issue_time 2023-09-08T21:15:38.971252+02:00
last_renewal <nil>
renewable true
ttl 2m8s
I can renew the lease as long as it is renewable (i.e. it has not expired and renewable is set to true):
$ vault lease renew azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH
Key Value
--- -----
lease_id azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH
lease_duration 5m
lease_renewable true
When I am done using the secret and want to revoke it early I can do so:
$ vault lease revoke azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH
All revocation operations queued successfully!
If I try to lookup my lease ID now I get an error:
$ vault lease lookup azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH
error looking up lease id azure/creds/demo-role/HsUWYY9Ue37CXdI84bZ7e2UH: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/sys/leases/lookup
Code: 400. Errors:
* invalid lease
It is clear that working with leases for dynamic secrets is a bit more convenient than working with leases for tokens, because there is a separate lease_id for dynamic secrets. This means we could delegate the administration of the lease to a separate user or application without giving away the actual secret value.
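As a sketch of what such a delegation could look like (the policy name is just an example), a policy that only allows looking up, renewing, and revoking leases, without granting any access to the secrets themselves, might be:
$ vault policy write lease-manager -<<EOF
path "sys/leases/lookup" {
  capabilities = ["update"]
}
path "sys/leases/renew" {
  capabilities = ["update"]
}
path "sys/leases/revoke" {
  capabilities = ["update"]
}
EOF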
If the TTL expires by itself the secret is revoked, and the generated service principal in Azure is deleted. This is the power of dynamic secrets compared to static secrets with a possibly infinite lifetime.
So when should we use dynamic secrets? My answer is: use them whenever you can. Sometimes that might not be possible, for example if the secrets engine you use does not support them (e.g. the key/value secrets engine) or if you have a legacy system where it is difficult to implement the handling of dynamic secrets. Before you decide that it is too difficult to implement dynamic secrets in your legacy app, read up on the sidecar pattern and the Vault agent to see if they might help.
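As a very rough sketch of the Vault agent approach (all file paths and the choice of the AppRole auth method are assumptions for illustration): the agent authenticates to Vault on behalf of the application and renders secrets to a local file through a template, so the legacy application only ever reads that file:
$ cat > vault-agent.hcl <<EOF
vault {
  address = "http://127.0.0.1:8200"
}

auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
    }
  }
  sink "file" {
    config = {
      path = "/tmp/vault-agent-token"
    }
  }
}

template {
  source      = "/etc/vault/db-creds.ctmpl"
  destination = "/app/config/db-creds.json"
}
EOF
$ vault agent -config=vault-agent.hcl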
Define transit engine#
The transit engine is used for cryptography-as-a-service (CaaS) or encryption-as-a-service (EaaS). It is used for various encryption services on data in transit (thus the name). This secrets engine has a few different use-cases, but the primary one is offloading cryptographic operations from applications and allowing applications to encrypt data before it is stored in the application’s primary data store. The transit secrets engine allows you to standardize how you handle encryption in all of your applications.
An important point to note is that the transit secrets engine does not itself store any data in Vault! It is your application’s responsibility to persist the encrypted values to secure storage.
Enabling the transit secrets engine is done using the following command:
$ vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/
I could also have enabled the transit secrets engine at a non-default path by adding the -path=<path> flag like so:
$ vault secrets enable -path=mytransit transit
Success! Enabled the transit secrets engine at: mytransit/
With the secrets engine enabled I can create an encryption key named my-encryption-key with default settings:
$ vault write -f transit/keys/my-encryption-key
Key Value
--- -----
allow_plaintext_backup false
auto_rotate_period 0s
deletion_allowed false
derived false
exportable false
imported_key false
keys map[1:1694077952]
latest_version 1
min_available_version 0
min_decryption_version 1
min_encryption_version 0
name my-encryption-key
supports_decryption true
supports_derivation true
supports_encryption true
supports_signing false
type aes256-gcm96
The default type of the encryption key is aes256-gcm96. If I wanted a different key type I could specify one by adding type=<type> to the write command.
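For example, to create a key meant for signing rather than encryption (the key name is just an example) I could run:
$ vault write transit/keys/my-signing-key type=ed25519
Other supported key types include chacha20-poly1305, rsa-2048, rsa-4096, and ecdsa-p256.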
Finally I can use my new encryption key to encrypt some data using the transit/encrypt/ path for my key. Note that the data we want to encrypt must be base64-encoded, because Vault accepts any binary data and not just text:
$ vault write transit/encrypt/my-encryption-key \
plaintext=$(echo "s3cr3t p4ssw0rd" | base64)
Key Value
--- -----
ciphertext vault:v1:mMzuz5ZvqPmfupPUYplgKcwSWzu2XSpOoybXrgSqfYW9T3f8qBBH6XZbAW0=
key_version 1
We got back ciphertext, which is the encrypted value. The ciphertext value starts with vault, which indicates that Vault has handled the encryption of this value. Following this is v1, which indicates that the first version of my key (I only have one version at this time) was used to encrypt the data.
Once again, note that Vault has not stored this ciphertext value. It is our application’s responsibility to store this value wherever it needs to be stored.
To get my plaintext data back I first decrypt the data using the transit/decrypt/ path for my key:
$ vault write transit/decrypt/my-encryption-key \
ciphertext=vault:v1:mMzuz5ZvqPmfupPUYplgKcwSWzu2XSpOoybXrgSqfYW9T3f8qBBH6XZbAW0=
Key Value
--- -----
plaintext czNjcjN0IHA0c3N3MHJkCg==
And I decode the value from base64:
$ echo czNjcjN0IHA0c3N3MHJkCg== | base64 -d
s3cr3t p4ssw0rd
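The version prefix in the ciphertext becomes useful when the encryption key is rotated. As a short sketch, I can rotate my key to create a second key version and then rewrap existing ciphertext under the newest version, all without the plaintext ever leaving Vault:
$ vault write -f transit/keys/my-encryption-key/rotate
$ vault write transit/rewrap/my-encryption-key \
    ciphertext=vault:v1:mMzuz5ZvqPmfupPUYplgKcwSWzu2XSpOoybXrgSqfYW9T3f8qBBH6XZbAW0=
The rewrap call returns a new ciphertext value prefixed with vault:v2:.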
This was a short look at the transit secrets engine and what it is mainly used for. We will revisit this topic in part 10 of this course: Explain encryption as a service.