The last time I wrote about Terraform stacks was over one year ago.
A lot has happened since then, and the two articles linked above are no longer accurate. Terraform stacks went GA at HashiConf 2025 in San Francisco. When features go GA you want to start using them for your production workloads, but what if your production workloads are currently using HCP Terraform workspaces?
In this blog post I will go through the steps required to migrate HCP Terraform workspaces to Terraform stacks. This will be achieved using a new feature in the tf-migrate command-line tool. Note that as of this writing this feature is in beta, and some things discussed below might change before it leaves the beta stage.
There is an official tutorial on how to migrate from workspaces to stacks in the HashiCorp documentation. That tutorial takes a few shortcuts. First, it does not use OIDC authentication; second, it does not connect the resulting stack to a VCS repository. There are a few other quirks in that tutorial that I do not like.
Why migrate from workspaces to stacks?
Before you set out to move your workspaces to Terraform stacks you should ask yourself if this is the right move for your environment.
Terraform stacks is a declarative way to manage infrastructure environments. You configure how environments look in .tfcomponent.hcl files (formerly .tfstack.hcl) together with normal Terraform modules. Then you configure the instances of the environment in .tfdeploy.hcl files.
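To make this concrete, a component in a .tfcomponent.hcl file wraps an ordinary Terraform module, roughly like this (a minimal sketch; the file name, module path, and input name are hypothetical):

```hcl
# environments.tfcomponent.hcl -- a component wraps a normal Terraform module
component "resource_group" {
  source = "./modules/resource-group"

  inputs = {
    location = var.location
  }

  providers = {
    azurerm = provider.azurerm.this
  }
}
```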
The main benefit of stacks over workspaces is that stacks allow you to simplify provisioning multiple environments that should look the same.
Examples of environments that would benefit from Terraform stacks:
- Infrastructure that is provisioned in multiple copies across many cloud regions for redundancy, disaster recovery, or performance reasons.
- An infrastructure baseline that is provisioned to each new cloud account.
- Infrastructure for an application that is provisioned in multiple different environments (development, testing, staging, production).
Managing these types of environments using workspaces on HCP Terraform means you will have a lot of workspaces. Infrastructure that is provisioned across 10 different AWS accounts and in 10 different regions could give you up to 100 different workspaces[1], which in itself requires a lot of management of the HCP Terraform resources. With stacks you declaratively define these combinations of AWS accounts and regions in a single stack and have HCP Terraform manage them.
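As an illustration, each account/region combination becomes its own deployment block in the stack's deployment file, all driving the same component definitions (a sketch; the account IDs and input names are made up):

```hcl
# deployments.tfdeploy.hcl -- one deployment block per account/region combination
deployment "prod_us_east_1" {
  inputs = {
    account_id = "111111111111"
    region     = "us-east-1"
  }
}

deployment "prod_eu_west_1" {
  inputs = {
    account_id = "111111111111"
    region     = "eu-west-1"
  }
}
```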
Prerequisites
If you would like to follow along with this demo you need to set up the following prerequisites:
- A Terraform configuration that contains a single resource: an Azure resource group. This code should exist in a VCS repository (I use GitHub).
- A project named demo-source-project on HCP Terraform.
- A workspace named demo-source-workspace in the demo-source-project on HCP Terraform. This workspace is connected to the VCS repository, and it should have an existing state file (so run at least one terraform apply).
- A global variable set on HCP Terraform named AZURE-GLOBAL-OIDC containing the following environment variables:
  - TFC_AZURE_RUN_CLIENT_ID with the client ID of the app registration you use for authentication to your Azure environment.
  - TFC_AZURE_PROVIDER_AUTH with the value true.
  - ARM_TENANT_ID with the ID of your Azure tenant.
  - ARM_SUBSCRIPTION_ID with the ID of your Azure subscription.
To be fair, the variable set mentioned above does not have to be global, but it simplifies things a bit. If we don't make it global we have to perform an additional step later where we apply the variable set to the new project that is created in the migration.
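If you want to codify this part yourself, the variable set can be managed with the tfe provider along these lines (a sketch; the resource names and variable wiring are illustrative):

```hcl
# Create the global variable set and one of its environment variables
resource "tfe_variable_set" "azure_oidc" {
  name         = "AZURE-GLOBAL-OIDC"
  organization = var.organization_name
  global       = true
}

resource "tfe_variable" "client_id" {
  key             = "TFC_AZURE_RUN_CLIENT_ID"
  value           = var.client_id
  category        = "env"
  variable_set_id = tfe_variable_set.azure_oidc.id
}

# ... plus similar tfe_variable resources for TFC_AZURE_PROVIDER_AUTH,
# ARM_TENANT_ID, and ARM_SUBSCRIPTION_ID
```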
To configure all the prerequisites you can use the Terraform configuration available at my GitHub repository below:
Accompanying repository for my blog post
Look in the prerequisites/oidc directory.
The root directory of the repository contains the root Terraform configuration for this demo.
The initial Terraform configuration consists of three files and has the following structure at the beginning of the migration:
$ tree .
.
├── main.tf
├── outputs.tf
└── variables.tf
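For reference, a minimal main.tf for this setup could look like the following (a sketch; the resource group name is arbitrary, and the location and stacks variables are declared in variables.tf). The stacks variable is used as a tag on the resource group and will come up again during the migration:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "4.52.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "this" {
  name     = "rg-demo"
  location = var.location

  tags = {
    stacks = var.stacks
  }
}
```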
Migrate workspaces to stacks
With the prerequisites in place, we can start the migration.
Install tf-migrate
The tool we will use to perform the migration is called tf-migrate. This is an official migration tool provided by HashiCorp.
The version of tf-migrate used in this blog post is v2.0.0-beta1:
$ tf-migrate -v
tf-migrate v2.0.0-beta1
on darwin_arm64
See releases for available binary versions, or see the docs for instructions on how to install tf-migrate.
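On macOS, one way to install it is through the HashiCorp Homebrew tap (assuming you use Homebrew and that the tap provides the beta version you want; otherwise download a binary from the releases page):

```shell
$ brew tap hashicorp/tap
$ brew install hashicorp/tap/tf-migrate
```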
Disconnect workspace from your VCS
The first step is to disconnect your existing workspace from your VCS repository. The reason for this is to avoid any changes being applied accidentally during the migration. The instructions below explain how to do this in the graphical interface. If you manage your workspace using Terraform you can perform the same change from there.
Open your workspace on HCP Terraform, then click on Settings in the menu on the left:

Click on Version Control in the new menu that appears on the left:

Click Change source in the VCS details section:

Select CLI-Driven Workflow:

Finally, click on Update VCS settings:

Add the following cloud block to the terraform block in your Terraform configuration to make sure the configuration uses the correct state file stored in your workspace:
terraform {
  # ... other code omitted

  cloud {
    # use your own organization name below
    organization = "mattias-fjellstrom"

    workspaces {
      name = "demo-source-workspace"
    }
  }
}
Modularize your Terraform configuration
Stacks do not work with root modules; each component in a Terraform stack sources a Terraform module. To make sure our configuration is ready to migrate to stacks we must modularize it. The tf-migrate tool has a command to help you perform this step.
Before modularizing the Terraform configuration make sure the following is true:
- The root module defines all required providers in a required_providers block.
- Providers must be configured using variable references only, i.e. no hard-coded values (see the sketch after this list).
- Run terraform init to initialize the working directory.
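To illustrate the second requirement: a provider block like the first one below would block the modularization, while the second one is fine (the subscription ID is a placeholder):

```hcl
# Not allowed: a hard-coded provider configuration value
provider "azurerm" {
  features {}
  subscription_id = "00000000-0000-0000-0000-000000000000"
}

# Allowed: the same configuration through a variable reference
provider "azurerm" {
  features {}
  subscription_id = var.subscription_id
}
```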
Next, run the tf-migrate modules create command in the root of the repository (the output is truncated):
$ tf-migrate modules create
✓ Found 3 terraform files in the root directory
✓ Extracted HCP Terraform data to identify the workspaces controlled
by the configuration.
You're about to begin the modularization process.
Please read the following important notes carefully: ...
Confirmation required ... ?
Only 'yes' or 'no' will be accepted as input.
Type 'yes' to approve proceed with the modularization process.
Type 'no' to cancel and abort.
Enter a value: yes
✓ Found 1 HCP Terraform workspaces associated with the configuration.
✓ Deleted backend block cloud from terraform block during modularization
✓ Successfully generated modularized configuration in modularized_config
directory
✓ Modularization process completed successfully
Your modularized configuration files are available in the
"modularized_config" directory.
The working directory now has the following contents:
$ tree .
.
├── main.tf
├── modularized_config
│ ├── backend.tf
│ ├── main.tf
│ ├── outputs.tf
│ ├── providers.tf
│ ├── terraform_modules
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── terraform.tf
│ └── variables.tf
├── outputs.tf
└── variables.tf
All the contents of the modularized_config directory were created by the tf-migrate modules create command. We could now delete the root-level .tf files because we will no longer be using them.
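To give an idea of what was generated: the new root configuration in modularized_config/main.tf essentially replaces your resources with a single call to the extracted module, roughly like this (a sketch; the exact generated code may differ):

```hcl
# modularized_config/main.tf -- the root configuration now just calls the module
module "terraform_module" {
  source = "./terraform_modules"

  location = var.location
  stacks   = var.stacks
}
```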
Prepare the stack configuration
Go into the new modularized_config directory and run a terraform init:
Initializing HCP Terraform...
Initializing modules...
- terraform_module in terraform_modules
Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "4.52.0"...
- Installing hashicorp/azurerm v4.52.0...
- Installed hashicorp/azurerm v4.52.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
HCP Terraform has been successfully initialized!
The next step is to run tf-migrate stacks prepare to generate the default Terraform stack configuration files required to migrate the workspace to a non-VCS-driven stack. We will change this to a VCS-driven stack later on.
In the modularized_config directory, run the tf-migrate stacks prepare command. When asked to name the new stack, specify demo-destination-stack (or any other name you want to use), and when asked to name the new project, specify demo-destination-project (again, or any other name). If you are following along with the demo in this blog post, use the suggested names to make sure everything works with the OIDC authentication.
The output below is truncated slightly:
$ tf-migrate stacks prepare
✓ Environment readiness checks completed
✓ Extracted terraform configuration data from current directory
Enter the name of the stack to be created: demo-destination-stack
Enter the name of a new project under which the stack will be
created (project must not already exist): demo-destination-project
✓ Fetched latest state file for workspace: demo-source-workspace
✓ Parsed state file for workspace: demo-source-workspace
✓ Extracted variables from terraform configuration
✓ Extracted providers from terraform configuration
✓ Extracted outputs blocks from terraform configuration
✓ Created components from module blocks from terraform configuration
✓ Created deployments for workspaces provided
✓ Stack configuration files generated successfully
✓ Completed sanity check: terraform stacks init
✓ Completed sanity check: terraform stacks fmt
✓ Completed sanity check: terraform stacks validate
─────────────────────────────────────────────────────────────────────────────
🎉 The `tf-migrate stacks prepare` command completed successfully.
─────────────────────────────────────────────────────────────────────────────
Once again we get a log of what events are taking place, and details about the next steps we should take.
At this point the working directory has the following contents:
$ tree .
.
├── main.tf
├── modularized_config
│ ├── _stacks_generated
│ │ ├── components.tfcomponent.hcl
│ │ ├── deployment.tfdeploy.hcl
│ │ ├── outputs.tfcomponent.hcl
│ │ ├── providers.tfcomponent.hcl
│ │ ├── terraform_modules
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
│ │ └── variables.tfcomponent.hcl
│ ├── backend.tf
│ ├── main.tf
│ ├── outputs.tf
│ ├── providers.tf
│ ├── stacks_migration_infra
│ │ ├── main.tf
│ │ ├── output.tf
│ │ └── variables.tf
│ ├── terraform_modules
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ ├── terraform.tf
│ └── variables.tf
├── outputs.tf
└── variables.tf
Note that we have two new directories inside of the modularized_config directory:
- _stacks_generated contains the Terraform stacks configuration files. These files are the ones we will use to manage the stack going forward (but we will move them to a new repository, see more on that later in this blog post).
- stacks_migration_infra contains a Terraform configuration for performing the migration from the workspace to the stack.
Configure Azure authentication for the stack
There are a few things we need to fix in the generated stack configuration files before we run the migration.
First, in the deployment.tfdeploy.hcl file, we got the following deployment block:
deployment "demo-source-workspace" {
inputs = {
location = null
stacks = null
}
import = true
}
We might want to provide a new name for the deployment, and we need to fix the null inputs.
Change the deployment block to the following:
deployment "swedencentral" {
inputs = {
location = "swedencentral"
stacks = "true"
}
import = true
}
Note that I changed the stacks input to true (it was false in the root module). This value is used as a tag for the resource group that this configuration deploys. Since we will now manage the resource group using stacks, I set this input to true.
I changed the name of the deployment to swedencentral. I need to edit the migration Terraform configuration in the stacks_migration_infra directory to take this name change into account. Specifically, I need to change the default value of the workspace_deployment_mapping variable in stacks_migration_infra/variables.tf to the following:
variable "workspace_deployment_mapping" {
default = {
demo-source-workspace = "swedencentral"
}
}
The value of this variable is a mapping from a workspace name (the key) to a stack deployment name (the value).
Next we need to configure authentication to Azure. In variables.tfcomponent.hcl add the following variables:
# ... previous code omitted

variable "identity_token" {
  type        = string
  ephemeral   = true
  description = "Identity token for provider authentication"
}

variable "client_id" {
  type        = string
  ephemeral   = true
  description = "Azure app registration client ID"
}

variable "subscription_id" {
  type        = string
  ephemeral   = true
  description = "Azure subscription ID"
}

variable "tenant_id" {
  type        = string
  ephemeral   = true
  description = "Azure tenant ID"
}
Update the provider block in providers.tfcomponent.hcl to use the new variables:
provider "azurerm" "this" {
config {
features {}
use_cli = false
use_oidc = true
oidc_token = var.identity_token
subscription_id = var.subscription_id
client_id = var.client_id
tenant_id = var.tenant_id
}
}
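The generated component block in components.tfcomponent.hcl is what ties this provider configuration to the module; it should look roughly like this (a sketch; the exact generated names may differ):

```hcl
component "terraform_module" {
  source = "./terraform_modules"

  inputs = {
    location = var.location
    stacks   = var.stacks
  }

  providers = {
    azurerm = provider.azurerm.this
  }
}
```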
Open deployment.tfdeploy.hcl again. Configure the authentication inputs by reading the AZURE-GLOBAL-OIDC variable set in a store block and referencing its environment variables, and add an identity_token block where the identity token for OIDC authentication is generated:
identity_token "azurerm" {
audience = ["api://AzureADTokenExchange"]
}
store "varset" "azure" {
name = "AZURE-GLOBAL-OIDC"
category = "env"
}
deployment "swedencentral" {
inputs = {
# authentication input
identity_token = identity_token.azurerm.jwt
subscription_id = store.varset.azure.ARM_SUBSCRIPTION_ID
tenant_id = store.varset.azure.ARM_TENANT_ID
client_id = store.varset.azure.TFC_AZURE_RUN_CLIENT_ID
# module inputs
location = "swedencentral"
stacks = "true"
}
import = true
}
At this point you should also look through the rest of the files to see if there are any other changes you need to make. For this simple demo we can move on to the next step.
Perform the migration
We are ready to perform the actual migration.
Run the following command from the modularized_config directory (the output is truncated):
$ tf-migrate stacks execute
✓ Stack configuration path found: <truncated>/modularized_config/_stacks_generated
✓ Successfully validated stack configuration found in dir: <truncated>/modularized_config/_stacks_generated
✓ Using dir: <truncated>/modularized_config/stacks_migration_infra for terraform operations
✓ Init command ran successfully
✓ Plan command ran successfully and changes are detected
✓ Apply command ran successfully in 1m7.335355625s
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Terraform Migrate has successfully executed the migration plan for your stack.
Please verify the migration in the HCP Terraform UI.
The migration completed successfully!
Connect the stack to VCS
The last step to work effectively with our Terraform stack is to connect it to our VCS provider.
We need to decide if we want to reuse the same repository we used for our workspace, or if we want to move to a different repository. For the sake of this demo I will create a new repository and add the stack configuration files to it.
Start by creating a new directory for the repository and initialize a new git repository inside of it:
$ # cd to where you want to create the local git repo
$ mkdir hcp-terraform-migrated-stack
$ cd hcp-terraform-migrated-stack
$ git init
Copy all the contents of the _stacks_generated directory, including .terraform-version and .terraform.lock.hcl (note, your path might differ):
$ cp -a _stacks_generated/. ../../hcp-terraform-migrated-stack
Edit the deployment.tfdeploy.hcl file by removing the import = true statement from the swedencentral deployment:
deployment "swedencentral" {
inputs = {
# authentication input
identity_token = identity_token.azurerm.jwt
subscription_id = store.varset.azure.ARM_SUBSCRIPTION_ID
tenant_id = store.varset.azure.ARM_TENANT_ID
client_id = store.varset.azure.TFC_AZURE_RUN_CLIENT_ID
# module inputs
location = "swedencentral"
stacks = "true"
}
# import = true <--- remove this
}
Now stage all files and commit (excluding the .terraform directory if you happen to have it):
$ git add -- . ':!.terraform'
$ git commit -m "Initial commit"
I like to use the GitHub CLI to create repositories, so that is what I will do next:
$ gh repo create --private --push --source .
Go to HCP Terraform and find your stack. Click on Settings in the menu on the left:

Click on Version Control in the new menu that appears on the left:

Go through the steps of selecting your VCS provider and the repository.
Once you have connected a VCS repository, add a new deployment to the deployment.tfdeploy.hcl file:
deployment "westeurope" {
inputs = {
identity_token = identity_token.azurerm.jwt
subscription_id = store.varset.azure.ARM_SUBSCRIPTION_ID
tenant_id = store.varset.azure.ARM_TENANT_ID
client_id = store.varset.azure.TFC_AZURE_RUN_CLIENT_ID
location = "westeurope"
stacks = "true"
}
}
Commit and push the change to the main branch and see how a new deployment is added for the stack:

How to migrate multiple workspaces
The key to understanding how to migrate multiple workspaces to the same stack is in understanding what the stacks_migration_infra directory contains.
In essence it contains the following resources:
resource "tfe_project" "stack_project" {
name = var.project_name
organization = var.organization_name
}
resource "tfe_stack" "stack" {
name = var.stack_name
project_id = tfe_project.stack_project.id
}
resource "tfmigrate_stack_migration" "stack_migration" {
config_file_dir = var.stacks_config_file_dir
organization = var.organization_name
name = tfe_stack.stack.name
project = tfe_project.stack_project.name
terraform_config_dir = var.terraform_config_dir
workspace_deployment_mapping = var.workspace_deployment_mapping
}
First of all, here you have the option to use an existing project instead of creating a new one. If so, replace the tfe_project resource with a data source.
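A sketch of that change (any references to the tfe_project resource elsewhere in the configuration must be updated to point at the data source):

```hcl
# Look up an existing project instead of creating a new one
data "tfe_project" "stack_project" {
  name         = var.project_name
  organization = var.organization_name
}

resource "tfe_stack" "stack" {
  name       = var.stack_name
  project_id = data.tfe_project.stack_project.id
}
```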
Next, the tfmigrate_stack_migration resource is where the migration takes place. It uses the generated stack configuration (the config_file_dir argument), the modularized Terraform configuration (the terraform_config_dir argument), and the mapping from workspace to stack deployment (the workspace_deployment_mapping argument).
To add additional deployments you need to extend the value of the workspace_deployment_mapping argument with additional mappings, and add corresponding deployment blocks to the stack configuration.
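For example, migrating a second workspace into a new westeurope deployment would mean extending the mapping like this (the second workspace name is made up), and adding a matching deployment "westeurope" block with import = true to the stack's deployment file:

```hcl
variable "workspace_deployment_mapping" {
  default = {
    demo-source-workspace   = "swedencentral"
    demo-source-workspace-2 = "westeurope"
  }
}
```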
Key takeaways
What just happened?
We started out with a VCS-connected workspace on HCP Terraform and ended up with a VCS-connected stack. In the process we used the tf-migrate tool to help us modularize our Terraform configuration and to generate stack configuration files for us. All of these steps could have been done manually.
The actual migration happens using an intermediate Terraform configuration that is created for us, where the tf-migrate provider for Terraform is used under the covers. You can study this Terraform configuration (see the generated stacks_migration_infra directory) to learn how it works, which might give you an idea of how to do this at scale.
Terraform stacks is a great way to manage similar infrastructure environments at scale.
Since the inception of Terraform stacks there has been no clear approach for how to migrate from HCP Terraform workspaces to stacks. There is now a recommended approach to take: migrate using the tf-migrate tool.
In this blog post we went through the whole process from installing tf-migrate to performing a migration of a workspace to a stack.
[1] To be fair, you can configure this in one and the same workspace using multiple provider aliases, but you will have a large blast radius in this single workspace.
