
Terraform Stacks with Amazon Web Services


Are you curious about Terraform stacks and want to get started with stacks for Amazon Web Services (AWS)? Then this guide is for you.

During the public beta period you can manage up to 500 stack resources in your organization. Due to the nature of how stacks work, you can reach this limit quickly, so keep it in mind.

An Introduction to Terraform Stacks

A stack consists of one or more components. Each component is created in one or more instances called deployments. A component is similar to a Terraform module, and a deployment is similar to an instance of a Terraform module with a given set of input values.

Examples of possible components in a Terraform stack on AWS are:

  • An S3 bucket component
  • A VPC component
  • An AWS Lambda component
  • An EKS cluster component

You build your Terraform stack from a number of components. Next you create deployments of these components. One deployment creates one instance of each component that is part of the stack.

Reasons for adding different deployments on AWS include:

  • Create multiple different environments (development, staging, production)
  • Create copies of your infrastructure in multiple different regions (eu-west-1, us-east-1, us-west-2)

I think a logical way of visualizing a stack with its deployments and components is this:

graph LR;
  A1[Stack]:::stack --> B1["Deployment (development)"]:::deployment;
  B1 --> C1["Component (VPC)"]:::component;
  B1 --> D1["Component (EC2 instance)"]:::component;
  A1 --> B2["Deployment (production)"]:::deployment;
  B2 --> C2["Component (VPC)"]:::component;
  B2 --> D2["Component (EC2 instance)"]:::component;
  classDef stack fill:#fff,color:#000,stroke:#000
  classDef deployment fill:#02A8EF,color:#fff,stroke:#000
  classDef component fill:#EC585D,color:#fff,stroke:#000

In this post we will consider two components:

  • A VPC component
  • An EC2 instance component

These components will be created in three deployments:

  • development
  • staging
  • production

Apart from components and deployments, there is one more concept to introduce: orchestration rules. An orchestration rule allows us to specify conditions for when a plan operation for a deployment should be automatically approved. The orchestration rule has access to a context variable with results from the plan phase.

Dynamic Credentials with AWS

A prerequisite for working with stacks is being able to authenticate to the target provider.

Stacks use workload identity for provider authentication. This is a more secure way of interacting with AWS from HCP Terraform. However, there is a setup step where you configure a trust relationship between HCP Terraform and AWS.

Once this trust relationship is set up the interaction between HCP Terraform and the target platform (AWS in this case) follows this pattern:

sequenceDiagram
  HCP Terraform -->> HCP Terraform: Generate workload identity token
  HCP Terraform ->> AWS: Send workload identity token
  AWS ->> HCP Terraform: Get public signing key
  HCP Terraform ->> AWS: Return key
  AWS -->> AWS: Verify token
  AWS ->> HCP Terraform: Return temporary IAM credentials
  HCP Terraform -->> HCP Terraform: Set credentials in environment
  HCP Terraform ->> AWS: Use credentials to create resources

The trust relationship is configured for each HCP Terraform organization, project, stack, deployment, and operation (either plan or apply).
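
To make that concrete, the subject (sub) claim that HCP Terraform presents for, say, a plan operation in the development deployment of the stack we are about to build would look something like this (with your own organization and project names filled in):

organization:<your organization name>:project:<your project name>:stack:aws-stack:deployment:development:operation:plan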

Create a new Terraform configuration (i.e. an empty directory).

Create a file named variables.tf with variables for your stack's deployment names, your HCP Terraform organization name, your HCP Terraform project name, and finally the stack name:

variable "deployment_names" {
  type        = list(string)
  description = "List of Terraform stack deployment names"
}

variable "organization_name" {
  type        = string
  description = "HCP Terraform organization name"
}

variable "project_name" {
  type        = string
  description = "HCP Terraform project name"
}

variable "stack_name" {
  type        = string
  description = "Terraform stack name"
}

Create a variables file named terraform.tfvars with the following content (change the placeholders for your values):

deployment_names  = ["development", "staging", "production"]
organization_name = "<Your HCP Terraform organization name>"
project_name      = "<Your HCP Terraform project name>"
stack_name        = "aws-stack"

Create a file named providers.tf and configure the AWS provider:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.72.1"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

I am using the implicit provider configuration coming from my AWS CLI (except for the region argument, which I set explicitly). If you don't have the AWS CLI installed, I recommend installing it; instructions are available in the documentation.
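
If you want to verify that your AWS CLI credentials are picked up correctly before applying anything, a quick sanity check is to ask AWS who you are. The output will look something like the following:

$ aws sts get-caller-identity
{
    "UserId": "<your user id>",
    "Account": "<your aws account id>",
    "Arn": "arn:aws:iam::<your aws account id>:user/<your iam principal>"
}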

Create a file named main.tf and add the following configuration¹:

data "tls_certificate" "provider" {
  url = "https://app.terraform.io"
}

resource "aws_iam_openid_connect_provider" "hcp_terraform" {
  url = "https://app.terraform.io"

  client_id_list = [
    "aws.workload.identity",
  ]

  thumbprint_list = [
    data.tls_certificate.provider.certificates[0].sha1_fingerprint,
  ]
}

locals {
  sub = [for deployment in var.deployment_names : join(":", [
    "organization",
    var.organization_name,
    "project",
    var.project_name,
    "stack",
    var.stack_name,
    "deployment",
    deployment,
    "operation",
    "*"
  ])]
}

data "aws_iam_policy_document" "oidc_assume_role_policy" {
  statement {
    effect = "Allow"

    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.hcp_terraform.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "app.terraform.io:aud"
      values   = ["aws.workload.identity"]
    }

    condition {
      test     = "StringLike"
      variable = "app.terraform.io:sub"
      values   = local.sub
    }
  }
}

data "aws_iam_policy" "administrator_access" {
  arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

resource "aws_iam_role_policy_attachment" "administrator_access" {
  policy_arn = data.aws_iam_policy.administrator_access.arn
  role       = aws_iam_role.hcp_terraform_stacks.name
}

resource "aws_iam_role" "hcp_terraform_stacks" {
  name               = "hcp-terraform-stacks"
  assume_role_policy = data.aws_iam_policy_document.oidc_assume_role_policy.json
}

I have added one subject (sub) for each deployment in the stack.

The role is granted the AdministratorAccess policy. This is only to simplify our Terraform stack testing (but it is not left there for production, right?).

Finally, create a file named outputs.tf with the following content:

output "role_arn" {
  value = aws_iam_role.hcp_terraform_stacks.arn
}

You will need the role ARN output value later when we configure our stack in HCP Terraform.

Initialize the Terraform configuration:

$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "5.72.1"...
- Finding latest version of hashicorp/tls...
- Installing hashicorp/aws v5.72.1...
- Installed hashicorp/aws v5.72.1 (signed by HashiCorp)
- Installing hashicorp/tls v4.0.6...
- Installed hashicorp/tls v4.0.6 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

Next, apply the configuration:

$ terraform apply -auto-approve
...
Plan: 2 to add, 0 to change, 0 to destroy.
...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

role_arn = "arn:aws:iam::<your aws account id>:role/hcp-terraform-stacks"

If we inspect the IAM role in our AWS account we can see that the correct subjects have been configured:

Federated credentials for HCP Terraform

We are now ready to work with Terraform stacks.

Terraform Stacks CLI

Terraform stacks come with a dedicated CLI tool named tfstacks.

There are a few different options for how to install it. You can find the binary for different architectures on the HashiCorp releases website.

I am using a MacBook, so I prefer to use Homebrew:

$ brew tap hashicorp/tap
$ brew install hashicorp/tap/tfstacks
==> Auto-updating Homebrew...
==> Fetching hashicorp/tap/tfstacks
==> Downloading https://releases.hashicorp.com/tfstacks/0.5.0/tfstacks_0.5.0_darwin_arm64.zip
==> Installing tfstacks from hashicorp/tap
🍺  /opt/homebrew/Cellar/tfstacks/0.5.0: 5 files, 20.1MB, built in 3 seconds
==> Running `brew cleanup tfstacks`...
$ tfstacks -v
0.5.0

You need an alpha release version of Terraform to work with stacks:

$ terraform version
Terraform v1.10.0-alpha20240926
on darwin_arm64

I downloaded an alpha version of Terraform and placed it in a temporary directory. Then I added that directory to the beginning of my PATH to make sure it is picked up first:

$ PATH="/Users/mattias/my/temp/dir/:$PATH"

I did not persist this change outside of the current terminal session.

The Terraform stacks CLI has four different commands:

  • tfstacks init: similar to the normal terraform init command, it downloads configuration dependencies and creates a .terraform.lock.hcl file.
  • tfstacks providers lock: creates or updates the .terraform.lock.hcl file.
  • tfstacks validate: validates the stack configuration, similar to terraform validate.
  • tfstacks plan: plans the configuration through HCP Terraform.

Create a New Stack

Terraform stack files use the file extension .tfstack.hcl (and .tfdeploy.hcl specifically for deployments). Stacks are written using HCL, but the blocks in these files are not part of the normal Terraform language.

In a new directory, create a file named variables.tfstack.hcl and add the following variable declarations:

variable "region" {
  type        = string
  description = "AWS region"
}

variable "name_suffix" {
  type = string
}

variable "identity_token" { 
  type      = string 
  ephemeral = true
}

variable "role_arn" {
  type = string
}

These variable declarations should look familiar to most Terraform users. However, there is one new feature in the identity_token variable: the variable block takes an optional ephemeral argument. Setting this argument to true makes sure that Terraform does not persist the value to the state file.

Create a new file named providers.tfstack.hcl with the following content:

required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "5.72.1"
  }
}

provider "aws" "this" {
  config {
    region = var.region
    
    assume_role_with_web_identity {
      role_arn           = var.role_arn
      web_identity_token = var.identity_token
    }
  }
}

Two things to note:

  1. We specify required providers using a required_providers block in the root of the document. This is the same block that we usually specify as a nested block in the terraform block for a normal Terraform configuration.
  2. There is a new kind of provider block with two labels (one for the name of the provider, and one for a logical handle to refer to this specific provider). In a regular Terraform configuration the provider block only has one label (for the name of the provider). The provider block has one nested config block where we pass the specific configuration for this provider. The configuration is using variables defined in variables.tfstack.hcl.

The new type of provider block allows us to configure multiple provider instances of the same provider. We could also use the for_each meta argument inside of the provider block. This would come in handy if we want to create one provider instance for a list of multiple AWS regions.
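
As a sketch of what that could look like (the regions variable below is hypothetical and not used elsewhere in this post):

variable "regions" {
  # Hypothetical variable for this sketch: the AWS regions to configure providers for.
  type        = set(string)
  description = "AWS regions"
}

provider "aws" "configurations" {
  # One provider instance per region.
  for_each = var.regions

  config {
    region = each.value

    assume_role_with_web_identity {
      role_arn           = var.role_arn
      web_identity_token = var.identity_token
    }
  }
}

A component could then pick one of these provider instances, for example provider.aws.configurations["eu-west-1"], in its providers map.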

Create a file named components.tfstack.hcl with the following content:

component "vpc" {
  source = "./modules/vpc"

  inputs = {
    name_suffix = var.name_suffix
  }

  providers = {
    aws = provider.aws.this
  }
}

component "instance" {
  source = "./modules/instance"

  inputs = {
    name_suffix = var.name_suffix
    subnet      = component.vpc.subnet
  }

  providers = {
    aws = provider.aws.this
  }
}

Two component blocks are configured. Each component block has the following three arguments:

  • A source argument. The value points at a local Terraform module.
  • An inputs map that passes values to the variables defined in the Terraform module. We do not specify literal values as input; instead we reference the variables we defined in variables.tfstack.hcl (or, as for the subnet input, an output from another component).
  • A providers map that passes in provider configurations to the module. We reference the providers we configured in providers.tfstack.hcl using the provider.<name>.<handle> syntax (e.g. provider.aws.this).

Next we configure deployments. Create a new file named deployments.tfdeploy.hcl. We will create three deployments (development, staging, production). The deployments look almost identical, so only the development deployment is shown next:

identity_token "aws" {
  audience = [ "aws.workload.identity" ]
}

locals {
  aws_region = "eu-west-1"
  role_arn   = "arn:aws:iam::<your aws account id>:role/hcp-terraform-stacks"
}

deployment "development" {
  inputs = {
    region         = local.aws_region
    name_suffix    = "development"
    identity_token = identity_token.aws.jwt
    role_arn       = local.role_arn
  }
}

The identity_token block creates an identity token that will be used to authorize the AWS provider. This will work since we have already set up the trust relationships between HCP Terraform and AWS (see dynamic credentials with AWS above).

The deployment block has one label for the name of the deployment. Inside of the deployment block there is an inputs map that takes literal values for the input variables we defined in variables.tfstack.hcl. These values are passed to the components.

The only thing that differs for the staging and production deployments is the name of the deployment block (i.e. staging and production, respectively), and the values passed to the name_suffix variable (again, staging and production, respectively).
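
For reference, the staging deployment block looks like this (production follows the same pattern):

deployment "staging" {
  inputs = {
    region         = local.aws_region
    name_suffix    = "staging"
    identity_token = identity_token.aws.jwt
    role_arn       = local.role_arn
  }
}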

Finally, add an orchestration rule to the deployments.tfdeploy.hcl file:

orchestrate "auto_approve" "successful_plan" {
  check {
    condition = context.plan.applyable
    reason    = "A plan operation failed"
  }
}

The orchestrate block takes two labels. The first label is the type of orchestration rule, currently only auto_approve can be used. The second label is the name of the orchestration rule.

An orchestration rule allows you to automate the approval of a plan based on the content of a context variable. This variable contains information about the changes that the plan contains. In my orchestrate block I have configured a condition that requires the plan to be applyable. This essentially means that the plan was successful.

Create the Modules

We have to create the two modules that our components are referencing. These modules are simple Terraform configurations creating a VPC and an EC2 instance, respectively. We will not spend time understanding them in any depth since they should be familiar to AWS users.

In the same directory as the stack configuration files, create a new directory named modules with two subdirectories named vpc and instance:

$ mkdir -p modules/vpc
$ mkdir -p modules/instance

Create the VPC module in modules/vpc/main.tf with the following content:

terraform {
  required_version = "~> 1.6"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.72.1"
    }
  }
}

variable "name_suffix" {
  type = string
}

resource "aws_vpc" "this" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "terraform-stacks-${var.name_suffix}"
  }
}

resource "aws_subnet" "instance" {
  cidr_block = "10.0.10.0/24"
  vpc_id     = aws_vpc.this.id

  tags = {
    Name = "terraform-stacks-instance"
  }
}

output "subnet" {
  value = {
    id = aws_subnet.instance.id
  }
}

Likewise, create the EC2 instance module in modules/instance/main.tf with the following content:

terraform {
  required_version = "~> 1.6"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.72.1"
    }
  }
}

variable "name_suffix" {
  type        = string
  description = "Name suffix for resources"
}

variable "subnet" {
  type = object({
    id = string
  })
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "this" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
  subnet_id     = var.subnet.id

  tags = {
    Name = "instance-${var.name_suffix}"
  }
}

Prepare the Stack For HCP Terraform

The current Terraform stack framework requires a file named .terraform-version that contains the version of Terraform that you use to create the stack.

Create this file with the following command:

$ terraform version -json | jq -r .terraform_version > .terraform-version
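
In my case the file simply contains the alpha version of Terraform installed earlier:

$ cat .terraform-version
1.10.0-alpha20240926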

Next run the tfstacks init command to download providers and create the dependency lock file:

$ tfstacks init

Success! Configuration has been initialized and more commands can now be executed.

You can also validate that everything is working with the validate command:

$ tfstacks validate
Success! Terraform Stacks configuration is valid and ready for use within Terraform
Cloud.

The validate command reports that the stacks configuration is valid and that we can use it with Terraform Cloud … it seems the old name Terraform Cloud lives on here; it should of course say HCP Terraform.

Publish Your Stack as a GitHub Repository

You need to publish your stack configuration to a git repository. This is currently the only supported way to work with stacks on HCP Terraform.

I am using GitHub, so I will create a GitHub repository.

First initialize a git repository in the working directory of your Terraform stack:

$ git init
$ git add . && git commit -m "Initial commit"

If you have not configured the GitHub CLI, I recommend following the documentation on how to do so; it simplifies your interaction with GitHub.

Create a new repository on GitHub for your stack using the GitHub CLI:

$ gh repo create \
    --description "Terraform Stacks 101" \
    --private \
    --remote origin \
    --source . \
    --push

With all the prerequisites finally in place, we can move over to HCP Terraform.

Enable Terraform Stacks in HCP Terraform

The Terraform Stacks beta is not enabled by default on HCP Terraform. To enable it, go to your organization settings and check the Stacks checkbox:

Enable Stacks in HCP Terraform

Creating a Stack In HCP Terraform

Create a new project on HCP Terraform. Note that there is a limit of 500 stack resources during the beta period. We will not come anywhere near that in this example.

The name of the project should be the same name you configured for the federated credentials on AWS (in my case it is terraform-stacks-101). You could also provide an optional project description.

Create a new stack in your new project. Notice how workspaces and stacks are separate concepts.

A stack must be connected to a GitHub repository where the stack source code is located. Pick the version control provider that you have configured (see the documentation for details on how to configure a version control provider in HCP Terraform).

Select the repository where your stack source code is located.

Give the stack the same name that you configured when you set up the federated credentials on AWS (in my case it is aws-stack) and provide an optional description of the stack. There are a number of advanced options you can configure, but we will ignore them for now. If you want HCP Terraform to fetch the configuration from the repository when you have created the stack, then select the Fetch configuration after HCP Terraform creates stack checkbox.

HCP Terraform starts the process of preparing the configuration for the stack.

After a while the status changes and you see that the deployments have started rolling out.

Scrolling further down on the page you can see the Deployments rollout section. Here you see your three deployments: development, staging, and production.

Click on the development deployment to enter the deployment view.

You can dive further into the details by clicking on Plan 1 to see the status of the deployment.

Two things of interest to note:

  1. HCP Terraform has gone through a plan operation, followed by an apply operation, followed by a replan operation. What is this replan? This is a new feature concerning partial plans. I will cover that in a different blog post. In this particular case the replan operation makes no difference.
  2. We see that our orchestration rule has been applied to automatically approve the change since the plan was successful. If we did not have an orchestration rule, we would have had to manually approve the plan before it would be applied.

Go back to the stack overview page and view the Deployment rollout section. After a few minutes we see that all of the deployments have been rolled out successfully.

You could try to make a change to your stack components (i.e. the underlying modules themselves), push the change to GitHub, and watch as a new deployment rollout kicks in.
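
As an illustration (the new instance type is just an example), you could change instance_type in modules/instance/main.tf from t3.micro to t3.small and push the change:

$ git add modules/instance/main.tf
$ git commit -m "Use t3.small for the EC2 instance"
$ git push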

Destroy the stack

When you are done experimenting with your stack it is time to delete it.

You currently need to destroy each deployment separately before you can delete the stack itself. You could force delete the stack, but then all stack resources would be left untouched in your AWS environment.

Open one of your deployments. Click on Destruction and deletion in the menu on the left hand side, and select Create destroy plan.

Create a destroy plan for a deployment

Let the destroy plan run until it reports back that the process was successful.

Repeat this process for each deployment in the stack.

When all deployments have been destroyed it is time to go back to the stack overview page and select Destruction and deletion in the menu on the left hand side, then select Force delete stack aws-stack.

Delete the stack from HCP Terraform

The stack is deleted!

Key takeaways

This has been an introduction to Terraform stacks in the context of Amazon Web Services. Key takeaways from this post are:

  • Terraform stacks are a new way to scale your Terraform deployments.
  • A Terraform stack is a different concept than an HCP Terraform workspace. You can have both stacks and workspaces concurrently.
  • A Terraform stack consists of one or more components. Each component is created in a number of instances called deployments.
  • HCP Terraform offers good visibility into all the components and deployments that are part of your stack.
  • Orchestration rules allow you to automatically approve a deployment based on conditions that you configure.
  • Stacks are still in preview (as of October 2024) and there is a current limit of 500 resources that can be managed through stacks.

  1. Most of this configuration comes from the excellent blog post by Bruno Schaatsbergen at https://www.hashicorp.com/blog/access-aws-from-hcp-terraform-with-oidc-federation ↩︎

Mattias Fjellström · Cloud architect · Author · HashiCorp Ambassador