Let's take a look at how we can configure TLS for our microservice using Vault.
The first thing we need to do is create a mount point in Vault for our TLS certificates.
core/vault_namespace/vault.tf - 01_pki_mount
resource "vault_mount" "pki" {
  path                      = "pki"
  type                      = "pki"
  description               = "PKI mount for application"
  default_lease_ttl_seconds = 86400    // 1 day
  max_lease_ttl_seconds     = 31536000 // 1 year
  namespace                 = vault_namespace.namespace.path
}
Then we need to configure a root certificate, since we are going to use self-signed certificates.
core/vault_namespace/vault.tf - 02_pki_ca
resource "vault_pki_secret_backend_root_cert" "ca" {
  backend              = vault_mount.pki.path
  type                 = "internal"
  common_name          = "${var.environment}.minecraft.internal"
  ttl                  = "31536000" // 1 year
  format               = "pem"
  private_key_format   = "der"
  key_type             = "rsa"
  key_bits             = 4096
  exclude_cn_from_sans = true
  ou                   = "Development"
  organization         = "HashiCraft"
  namespace            = vault_namespace.namespace.path
}
Once this has been configured, we can apply it and move on to the application configuration.
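After running terraform apply, we can sanity-check the CA from the Vault CLI. This is just an illustrative check; it assumes VAULT_ADDR and VAULT_TOKEN are set, and the namespace name dev is a placeholder for your own:

```shell
# Fetch the root CA certificate back from the PKI mount
VAULT_NAMESPACE=dev vault read pki/cert/ca
```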
First we need to create a role; the role defines the properties of our application certificate.
app/vault.tf - 03_pki_role
resource "vault_pki_secret_backend_role" "app" {
  backend          = var.vault_pki_path
  name             = "app_role"
  ttl              = 2592000 // 30 days
  allow_ip_sans    = true
  key_type         = "rsa"
  key_bits         = 4096
  allow_subdomains = true
  allowed_domains  = ["${var.environment}.minecraft.internal"]
}
Next, let's add a Terraform resource that requests a new certificate from Vault.
app/vault.tf - 04_pki_cert
resource "vault_pki_secret_backend_cert" "app" {
  backend     = var.vault_pki_path
  name        = vault_pki_secret_backend_role.app.name
  common_name = "app.${var.environment}.minecraft.internal"
  ttl         = "168h" // 7 days
}
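For reference, the same request can be made from the Vault CLI against the role we just defined. The namespace and environment name dev are assumptions; substitute your own values:

```shell
# Issue a 7-day certificate from the app_role role
VAULT_NAMESPACE=dev vault write pki/issue/app_role \
  common_name=app.dev.minecraft.internal \
  ttl=168h
```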
Now let's create a secret in Kubernetes where we can store the certificate.
app/vault.tf - 05_pki_secret
resource "kubernetes_secret" "pki_certs" {
  metadata {
    name = "minecraft-pki-${var.environment}"
  }

  type = "kubernetes.io/tls"

  data = {
    "tls.key" = vault_pki_secret_backend_cert.app.private_key
    "tls.crt" = vault_pki_secret_backend_cert.app.certificate
  }
}
Then we can modify our deployment to add the volume.
app/k8s_deployment.tf - 06_pki_volume
volume {
  name = kubernetes_secret.pki_certs.metadata.0.name

  secret {
    secret_name = kubernetes_secret.pki_certs.metadata.0.name
  }
}
And also add the volume mount to the container.
app/k8s_deployment.tf - 07_pki_volume_mount
volume_mount {
  name       = kubernetes_secret.pki_certs.metadata.0.name
  mount_path = "/etc/tls"
  read_only  = true
}
Now that we have this in place, let's apply the configuration and redeploy the application.
If we take a look at the files that have been mounted from the secret:
kubectl exec -it <pod-name> -- /bin/bash
cd /etc/tls
cat tls.key
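To check more than just the raw PEM, we can inspect the certificate with openssl from inside the pod; this assumes openssl is available in the container image:

```shell
# Show the subject, issuer, and validity window of the mounted certificate
openssl x509 -in /etc/tls/tls.crt -noout -subject -issuer -dates
```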
Now that TLS is configured for the application, let's see how we can configure database secrets using a similar method.
We have already created the database, but let's create a Vault database mount and configure it so that we can generate dynamic database credentials.
app/vault.tf - 08_db_mount
resource "vault_database_secrets_mount" "minecraft" {
  depends_on = [azurerm_postgresql_firewall_rule.minecraft]

  path = "database/minecraft_${var.environment}"

  postgresql {
    name              = "minecraft"
    username          = "${azurerm_postgresql_server.minecraft.administrator_login}@${azurerm_postgresql_server.minecraft.name}"
    password          = random_password.root_password.result
    connection_url    = "postgresql://{{username}}:{{password}}@${azurerm_postgresql_server.minecraft.fqdn}:5432/${azurerm_postgresql_database.minecraft.name}"
    verify_connection = true

    allowed_roles = [
      "reader",
      "writer",
      "importer",
    ]
  }
}
Once the mount has been created, let's create a role. This first role is very permissive, allowing full access for the database migrations, so we give it a very short TTL.
app/vault.tf - 09_db_role_import
resource "vault_database_secret_backend_role" "importer" {
  name    = "importer"
  backend = vault_database_secrets_mount.minecraft.path
  db_name = vault_database_secrets_mount.minecraft.postgresql[0].name

  creation_statements = [
    "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT ${azurerm_postgresql_server.minecraft.administrator_login} TO \"{{name}}\";",
  ]

  default_ttl = 100 // seconds
  max_ttl     = 100
}
Next, let's create a reader role.
app/vault.tf - 10_db_role_reader
resource "vault_database_secret_backend_role" "reader" {
  //depends_on = [kubernetes_job.sql_import]
  name    = "reader"
  backend = vault_database_secrets_mount.minecraft.path
  db_name = vault_database_secrets_mount.minecraft.postgresql[0].name

  creation_statements = [
    "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT SELECT ON counter TO \"{{name}}\";",
  ]
}
And finally, let's create a writer role.
app/vault.tf - 11_db_role_writer
resource "vault_database_secret_backend_role" "writer" {
  //depends_on = [kubernetes_job.sql_import]
  name    = "writer"
  backend = vault_database_secrets_mount.minecraft.path
  db_name = vault_database_secrets_mount.minecraft.postgresql[0].name

  creation_statements = [
    "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT SELECT ON counter TO \"{{name}}\";",
    "GRANT INSERT ON counter TO \"{{name}}\";",
    "GRANT UPDATE ON counter TO \"{{name}}\";",
    "GRANT DELETE ON counter TO \"{{name}}\";",
  ]
}
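With the roles in place, we can test dynamic credential generation from the Vault CLI before wiring it into Kubernetes. This sketch assumes an environment named dev; the mount path follows the pattern configured above:

```shell
# Each read returns a freshly generated username and password for the writer role
VAULT_NAMESPACE=dev vault read database/minecraft_dev/creds/writer
```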
Once we have all of this in place, let's create a Kubernetes secret to store the credentials in.
app/vault.tf - 12_db_secret
data "vault_generic_secret" "db_creds" {
  path = "${vault_database_secrets_mount.minecraft.path}/creds/writer"
}

resource "kubernetes_secret" "db_writer" {
  metadata {
    name = "minecraft-db-${var.environment}"
  }

  data = {
    username = data.vault_generic_secret.db_creds.data.username
    password = data.vault_generic_secret.db_creds.data.password
  }
}
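If you want to verify what Terraform stored, the secret values can be read back with kubectl; the secret name below assumes an environment named dev:

```shell
# Kubernetes secret data is base64 encoded, so decode it to inspect
kubectl get secret minecraft-db-dev -o jsonpath='{.data.username}' | base64 -d
```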
And with all this done, let's add the volume to our deployment.
app/k8s_deployment.tf - 13_db_volume
volume {
  name = kubernetes_secret.db_writer.metadata.0.name

  secret {
    secret_name = kubernetes_secret.db_writer.metadata.0.name
  }
}
And then add the volume mount to the container.
app/k8s_deployment.tf - 14_db_volume_mount
volume_mount {
  name       = kubernetes_secret.db_writer.metadata.0.name
  mount_path = "/etc/db_secrets"
  read_only  = true
}
We can now take a look at the secret that has been created.
kubectl exec -it <pod-name> -- /bin/bash
cd /etc/db_secrets
cat username
cat password
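As a quick end-to-end check, we can use the mounted files to open a database connection with psql from inside the pod. The server FQDN is a placeholder, and note that Azure Database for PostgreSQL (single server) expects logins in the user@servername form:

```shell
# Connect with the dynamic credentials and print the connection info
PGPASSWORD=$(cat /etc/db_secrets/password) psql \
  -h <server-fqdn> -p 5432 \
  -U "$(cat /etc/db_secrets/username)" \
  -d minecraft -c '\conninfo'
```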
Now that we have the database secrets created for the application, let's see how we can configure Boundary to allow database access for humans.
app/boundary.tf - 15_boundary_library
resource "boundary_credential_library_vault" "db" {
  name                = "${var.environment}-db-credentials"
  description         = "Database credentials for ${var.environment} environment"
  credential_store_id = var.boundary_credential_store_id
  path                = "${vault_database_secrets_mount.minecraft.path}/creds/reader"
  http_method         = "GET"
  credential_type     = "username_password"
}
Next, we need to create a target.
app/boundary.tf - 16_boundary_target
resource "boundary_target" "db" {
  name                = "${var.environment}-db"
  description         = "Database for ${var.environment} environment"
  scope_id            = var.boundary_scope_id
  type                = "tcp"
  address             = azurerm_postgresql_server.minecraft.fqdn
  default_port        = 5432
  default_client_port = 5432

  brokered_credential_source_ids = [
    boundary_credential_library_vault.db.id,
  ]
}
Finally we need to create a role that can use this target and the credentials.
app/boundary.tf - 17_boundary_role
locals {
  boundary_user_accounts = jsondecode(var.boundary_user_accounts)
}

resource "boundary_role" "db_users" {
  name          = "DB Access"
  description   = "Access to the database"
  scope_id      = var.boundary_scope_id
  principal_ids = [for user, details in local.boundary_user_accounts : details.id]
  grant_strings = ["id=*;type=*;actions=*"]
}
Let's log into the Boundary CLI and test this out.
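A rough sketch of that session; the auth method ID, login name, and target ID are placeholders you would look up in your own Boundary installation:

```shell
# Authenticate to Boundary, then connect to the database target;
# Boundary brokers the reader credentials from Vault for the session
boundary authenticate password -auth-method-id=<auth-method-id> -login-name=<user>
boundary connect postgres -target-id=<target-id> -dbname=minecraft
```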