This is a sample chapter of my upcoming book Terraform Authoring and Operations Professional Study Guide (AWS edition). This is a warm-up chapter where I go through a complete journey of using Terraform and cover the basics of the HashiCorp Configuration Language.
Get the book here: leanpub.com/terraform-professional-certification
You are on a journey to achieve the Terraform Authoring and Operations Professional Certification. You have already been through the basics of Terraform more than a few times. Perhaps you have achieved the Terraform Associate Certification? Perhaps you are planning on taking both certifications in the coming weeks or months?
No matter your situation, there could be gaps in your understanding of authoring and operating Terraform. Perhaps it has been some time since you worked actively with Terraform.
To bring you up to speed, this chapter goes through a complete journey, from installing Terraform locally all the way to configuring workspaces and deploying resources via HCP Terraform.
## A Journey With Terraform
You want to write your first Terraform configuration and create cloud infrastructure on Amazon Web Services (AWS).
### Terraform Configuration
I use the definition of Terraform configuration from the official documentation:
> A Terraform configuration is a complete document in the Terraform language that tells Terraform how to manage a given collection of infrastructure. A configuration can consist of multiple files and directories.
You have read most of the documentation on Terraform and AWS, you have watched more Terraform Tuesdays[^1] than you can remember, and now you feel that you are ready to get your hands dirty with Terraform.
The first step you need to take is to install Terraform on your local system. You go to the Terraform documentation to find the instructions relevant for your system.
In this imaginary scenario you are on a MacBook, so you make your life easy by using Homebrew:
$ brew tap hashicorp/tap
$ brew install hashicorp/tap/terraform
Once Terraform is installed, you verify that it is available in your terminal[^2]:
$ terraform version
Terraform v1.9.4
on darwin_arm64
You want to create infrastructure on AWS, so you go to the AWS console and sign in using your credentials. Once you are signed in, you arrive at the AWS console home:
Terraform requires AWS credentials to be able to create infrastructure on AWS. You could let Terraform use your own AWS credentials, but it is a better idea to create dedicated credentials for Terraform.
You go to the Identity and Access Management (IAM) service by typing IAM into the search bar at the top and selecting the IAM service:
You want to create a new IAM user[^3] for Terraform, so you click on Users in the left-hand menu, then you click on Create user:
You enter terraform as the user name, then you click on Next:
On the permissions page you select Attach policies directly. You search for AdministratorAccess in the search bar. You select the policy named AdministratorAccess and then click on Next:
You understand that the AdministratorAccess permission is more than Terraform needs, but you are in an experimental mood so you let it slide for now.
You have arrived at the review page, and since everything looks good you click on Create user:
Creating the user takes a few seconds. Once the green banner appears to inform you that the user has been created, you click on View user:
On the user details page, you select the Security credentials tab and scroll down to the Access keys section and click on Create access key:
You select Application running outside AWS in the list of use cases, and then you click on Next:
You decide to skip adding a description for the access key, instead you click on Create access key:
You copy the values of both the Access key and the Secret access key and store them somewhere safe. Finally, you click on Done:
You set up two environment variables with the AWS access key and secret access key in your terminal:
$ export AWS_ACCESS_KEY_ID=<value you copied>
$ export AWS_SECRET_ACCESS_KEY=<value you copied>
Terraform will automatically use these credentials in this terminal session.
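As a side note, environment variables are only one of several ways to hand credentials to the AWS provider. A minimal sketch of one alternative, assuming you have stored the keys under a named profile in `~/.aws/credentials` (the profile name `terraform` is hypothetical):

```shell
# Point Terraform (and anything else using the AWS SDK) at a named profile
# instead of exporting raw keys in every session:
$ export AWS_PROFILE=terraform
```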
Terraform is not able to create cloud infrastructure on AWS by itself. To do this, it uses the AWS provider. A provider is a bridge between Terraform and an external system, like AWS.
You go to the Terraform registry to read the documentation for the AWS provider. You click on USE PROVIDER in the upper right corner and copy the code snippet that is shown:
You open your text editor to begin writing your Terraform configuration. You create a new file named `terraform.tf`, and you paste the code that you copied from the documentation:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.62.0"
    }
  }
}

provider "aws" {
  # Configuration options
}
You tell Terraform which providers you are planning to use in the `required_providers` block. This block is nested inside the `terraform` block. You can configure other settings for Terraform in the `terraform` block, but nothing that you need to worry about now.
You decide to move the `provider` block from `terraform.tf` to its own file named `providers.tf`:
provider "aws" {
  # Configuration options
}
It is a good organizational practice to split your Terraform configuration into logical pieces. Terraform will automatically stitch together all the files of the Terraform configuration into one piece.
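By the end of this chapter the working directory will contain the following files; this layout is one common way to slice a configuration, not a requirement:

```
.
├── main.tf         # resources and data sources
├── outputs.tf      # output values
├── providers.tf    # provider configuration
├── terraform.tf    # terraform block: required providers, backend/cloud settings
└── variables.tf    # input variables
```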
You live somewhere in Europe[^4], and you know that the AWS region in Ireland is popular among AWS users, so you want to configure the AWS provider for Terraform to use this region.
### AWS Regions and Availability Zones
An AWS region represents a collection of data centers in a specific geographical location.
A region is divided into availability zones. An availability zone is a smaller collection of one or more data centers. Availability zones within a region are far enough away from each other so as not to be affected by the same natural disasters or other infrastructure issues in the region.
An AWS region has a code name that you specify in Terraform. A few common region code names are:
- `eu-central-1` for Frankfurt
- `eu-west-1` for Ireland
- `us-east-1` for North Virginia
- `us-west-1` for North California
You want to be able to easily use a different AWS region, so you turn the region name into a variable.
In your text editor, you create a new file named `variables.tf` and add a variable for the AWS region:
variable "aws_region" {
  type        = string
  description = "AWS region name"
  default     = "eu-west-1"
}
You give the variable a default value of `eu-west-1`, the code name for the AWS region in Ireland. If no other value is provided for this variable, the default value of `eu-west-1` will be used. You realize that providing sensible defaults for all your variables is a good practice.
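Default values can be overridden without editing the code. Two common mechanisms, sketched here with a hypothetical region value:

```shell
# Pass a value on the command line:
$ terraform plan -var='aws_region=eu-central-1'

# Or put the assignment in a terraform.tfvars file,
# which Terraform loads automatically:
# aws_region = "eu-central-1"
```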
To configure the AWS provider to use the selected region, you edit the `provider` block in `providers.tf` to use the variable you just created:
provider "aws" {
  region = var.aws_region
}
Variables are referenced in other parts of your Terraform configuration using the syntax `var.<variable name>`.
The infrastructure you want to create consists of a virtual network with a number of subnets. A virtual network is the digital equivalent of the computer network you have in your office or at home. In the AWS world a virtual network is called a Virtual Private Cloud, or VPC. A VPC can be split into smaller networks called subnets.
In Terraform, the relationships or dependencies between resources are important. Thus, you have created a diagram of how the resources of your infrastructure are related. This is not an architectural diagram with fancy icons, but it will be useful as a guide when you write your HCL code.
You go to the AWS provider documentation and learn about the VPC resource. You create a new file named `main.tf` and add a VPC resource to it:
resource "aws_vpc" "this" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "vpc-${var.aws_region}"
  }
}
Resources are declared using the `resource` block. This block takes two labels, one for the resource type and one for the resource name.
The resource type of the VPC is `aws_vpc`. All resource types from the AWS provider have the prefix `aws_`. The resource name of the VPC is `this`. You can pick any resource name that makes sense to you.
### Resource names
A resource name can contain letters, digits, underscores and dashes. It must start with a letter or an underscore.
There are a few recommended guidelines you should follow:
- Primarily use lowercase letters and underscores. Start the name with a lowercase letter.
- Avoid using dashes, even if they are allowed.
- Avoid using the name of the provider or the resource type in the resource name (e.g. avoid `my_aws_vpc`).
No two resources of the same type can have the same resource name. If they did, it would be a conflict and Terraform would report an error.
The `aws_vpc` resource takes a number of arguments and nested blocks you can use to configure it according to your specification. Most of the arguments are optional. In this case you configure the following arguments:
- You set the `cidr_block` argument to the value `10.0.0.0/16`. A CIDR block is a block of IP addresses that this virtual network consists of.
- You set the `tags` argument to a map with key-value pairs. You add one key called `Name` with the value `vpc-${var.aws_region}`. Tags are arbitrary key-value pairs you can add to most resources in AWS. The `Name` tag is special in AWS: it is used to display a friendly name of the resource in the AWS console. In the value of the tag you have used a technique called string interpolation to build a string from a hard-coded part (`vpc-`) and a variable part (`${var.aws_region}`). Terraform will replace the reference to the variable with the value of the variable. If the value of the `aws_region` variable is `eu-west-1`, then the interpolated string will be `vpc-eu-west-1`.
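A small illustration of the difference between interpolation and a plain reference (the argument names here are just placeholders):

```hcl
# Interpolation builds a string from fixed and dynamic parts:
name = "vpc-${var.aws_region}" # "vpc-eu-west-1" when aws_region is "eu-west-1"

# When the whole value is a single reference, skip the interpolation syntax:
region = var.aws_region # preferred over "${var.aws_region}"
```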
You realize that you would like to use a variable for the `cidr_block` argument as well, to make this configuration more dynamic. In `variables.tf` you add a second variable:
# ... previous code is omitted ...

variable "vpc_cidr_block" {
  type        = string
  description = "CIDR block for the VPC network"
  default     = "10.0.0.0/16"
}
Next, you update the `aws_vpc` resource in `main.tf` to use the new variable:
resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr_block

  tags = {
    Name = "vpc-${var.aws_region}"
  }
}
A virtual network must be split up into smaller networks called sub-networks, or subnets. Each subnet takes a smaller piece of the block of available IP addresses.
You go to the AWS provider documentation to read about the subnet resource. Next, you add it to `main.tf` and configure it as required:
# ... previous code is omitted ...

resource "aws_subnet" "first" {
  vpc_id     = aws_vpc.this.id
  cidr_block = "10.0.1.0/24"

  tags = {
    Name = "subnet-${var.aws_region}"
  }
}
You realize that this is not good enough. You would like to use multiple subnets, and place one subnet in each availability zone of the AWS region.
How do you know how many availability zones are available in the AWS region you are using?
You can use a Terraform data source to query the provider for this information. A data source allows you to ask for information (or data) about existing infrastructure, or metadata related to resources or even the provider itself.
In the documentation you find a data source for availability zones. You add the following block to `main.tf`:
# ... previous code is omitted ...

data "aws_availability_zones" "available" {
  state = "available"
}
From the documentation you know that this data source has an attribute called `names`. This attribute contains the names of the availability zones within the region. In the AWS region named `eu-west-1` (i.e. Ireland) the availability zones have names such as `eu-west-1a`, `eu-west-1b`, etc.
You update the `aws_subnet` resource in `main.tf` to place it in the first availability zone:
# ... other code is omitted ...

resource "aws_subnet" "first" {
  vpc_id            = aws_vpc.this.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Name = "subnet-${data.aws_availability_zones.available.names[0]}"
  }
}
The `names` attribute is a list of strings. You select the first element in the list by adding the `[0]` index. Remember that list indices start at 0, not at 1.
Another issue you have with your current Terraform configuration is that the `cidr_block` for the subnet is hard-coded. You would like to make sure that the subnet CIDR block is part of the VPC CIDR block. Remember that you made the VPC CIDR block value into a variable, so it does not necessarily have the default value of `10.0.0.0/16`.
In the Terraform documentation you find a function named `cidrsubnet` that can calculate CIDR blocks for you. You update the `aws_subnet` resource to use the `cidrsubnet` function:
# ... other code is omitted ...

resource "aws_subnet" "first" {
  vpc_id            = aws_vpc.this.id
  cidr_block        = cidrsubnet(var.vpc_cidr_block, 8, 1)
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Name = "subnet-${data.aws_availability_zones.available.names[0]}"
  }
}
The value of `cidrsubnet(var.vpc_cidr_block, 8, 1)` will evaluate to `10.0.1.0/24` if the VPC CIDR block has its default value of `10.0.0.0/16`. This is one example of a useful function that you should know about; the documentation lists a large number of other functions.
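To build intuition for the three arguments, here are a few evaluations you could try in `terraform console` (the results assume the inputs shown):

```hcl
# cidrsubnet(prefix, newbits, netnum) extends prefix by newbits bits and
# selects subnet number netnum within the enlarged address space:
cidrsubnet("10.0.0.0/16", 8, 0)  # => "10.0.0.0/24"
cidrsubnet("10.0.0.0/16", 8, 1)  # => "10.0.1.0/24"
cidrsubnet("10.0.0.0/16", 8, 2)  # => "10.0.2.0/24"
cidrsubnet("10.0.0.0/16", 4, 1)  # => "10.0.16.0/20"
```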
You want to create two additional subnets, so you copy and paste the `aws_subnet` resource two times and edit the relevant resource names and arguments so that they don't conflict with each other (remember that the combination of resource type and resource name must be unique):
# ... other code is omitted ...

resource "aws_subnet" "first" {
  vpc_id            = aws_vpc.this.id
  cidr_block        = cidrsubnet(var.vpc_cidr_block, 8, 1)
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Name = "subnet-${data.aws_availability_zones.available.names[0]}"
  }
}

resource "aws_subnet" "second" {
  vpc_id            = aws_vpc.this.id
  cidr_block        = cidrsubnet(var.vpc_cidr_block, 8, 2)
  availability_zone = data.aws_availability_zones.available.names[1]

  tags = {
    Name = "subnet-${data.aws_availability_zones.available.names[1]}"
  }
}

resource "aws_subnet" "third" {
  vpc_id            = aws_vpc.this.id
  cidr_block        = cidrsubnet(var.vpc_cidr_block, 8, 3)
  availability_zone = data.aws_availability_zones.available.names[2]

  tags = {
    Name = "subnet-${data.aws_availability_zones.available.names[2]}"
  }
}
The subnet configurations are explicit and easy to understand. However, what if the AWS region has five availability zones and you need one subnet in each? Or what if the AWS region only has two availability zones? Then you would have an error in your Terraform configuration.
Either way, updating this Terraform configuration to match a given AWS region means there will be a lot of tedious and repetitive work.
You realize that you can use a loop meta-argument for your `aws_subnet` resource definition instead of copying and pasting resources.
There are two loop meta-arguments: `count` and `for_each`. In this particular scenario it is easier to use the `count` meta-argument, because you just want to create one subnet for each availability zone.
You remove two of the `aws_subnet` resources you copied before, and edit the remaining `aws_subnet` resource to the following:
# ... other code is omitted ...

resource "aws_subnet" "all" {
  count = length(data.aws_availability_zones.available.names)

  vpc_id            = aws_vpc.this.id
  cidr_block        = cidrsubnet(var.vpc_cidr_block, 8, count.index + 1)
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "subnet-${data.aws_availability_zones.available.names[count.index]}"
  }
}
The above resource definition will create one subnet for each availability zone in the list of availability zones for the selected region. Wonderful!
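For comparison, a hypothetical `for_each` variant of the same resource could look like the sketch below; it is not used further in this chapter, and it keys the instances by zone name instead of by numeric index:

```hcl
resource "aws_subnet" "all" {
  # Iterate over the zone names as a set; each.key is the zone name.
  for_each = toset(data.aws_availability_zones.available.names)

  vpc_id            = aws_vpc.this.id
  availability_zone = each.key
  # Derive a distinct netnum from the zone's position in the list.
  cidr_block = cidrsubnet(var.vpc_cidr_block, 8,
    index(data.aws_availability_zones.available.names, each.key) + 1)

  tags = {
    Name = "subnet-${each.key}"
  }
}
```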
You are almost done, but you want your Terraform configuration to output the ID of the AWS VPC. You need the VPC ID for something that is not a part of this Terraform configuration. You can output values from Terraform using an `output` block.
You create a new file named `outputs.tf` and add the VPC ID as an output:
output "aws_vpc_id" {
  description = "ID of the AWS VPC"
  value       = aws_vpc.this.id
}
You realize that you would also like to output all the subnet IDs. You add another output in `outputs.tf` for this, remembering that you created the subnets using the `count` meta-argument:
# ... other code is omitted ...

output "aws_subnet_ids" {
  description = "IDs of all subnets in the VPC"
  value       = aws_subnet.all[*].id
}
You have used a splat expression (`[*]`) to reference all the subnets, and then selected the `id` attribute from each subnet.
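The splat expression is shorthand for a `for` expression over the resource instances; these two values are equivalent:

```hcl
value = aws_subnet.all[*].id             # splat expression
value = [for s in aws_subnet.all : s.id] # equivalent for expression
```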
You believe your Terraform configuration is done (for now), so you go to your terminal in the Terraform working directory and initialize Terraform using the `terraform init` command:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "5.62.0"...
- Installing hashicorp/aws v5.62.0...
- Installed hashicorp/aws v5.62.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
This command downloads the providers you have specified in the `required_providers` block in `terraform.tf`, downloads any external modules that you are referencing (none in this case), and connects to the backend where you will store the state file (the local working directory in this case).
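The generated `.terraform.lock.hcl` file records exactly which provider version (and which package checksums) were selected; an abbreviated sketch of its contents, with the hashes elided:

```hcl
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.62.0"
  constraints = "5.62.0"
  hashes = [
    # ... checksums of the provider packages ...
  ]
}
```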
Since the `terraform init` command did not encounter any errors, you move on to see what Terraform thinks will happen if it were to apply these changes. You do this using the `terraform plan` command, writing the plan to a file named `actions.tfplan`:
$ terraform plan -out=actions.tfplan
data.aws_availability_zones.available: Reading...
data.aws_availability_zones.available: Read complete after 1s [id=eu-west-1]
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_subnet.all[0] will be created
  + resource "aws_subnet" "all" {
      # details left out for brevity
    }

  # aws_subnet.all[1] will be created
  + resource "aws_subnet" "all" {
      # details left out for brevity
    }

  # aws_subnet.all[2] will be created
  + resource "aws_subnet" "all" {
      # details left out for brevity
    }

  # aws_vpc.this will be created
  + resource "aws_vpc" "this" {
      # details left out for brevity
    }

Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + aws_subnet_ids = [
      + (known after apply),
      + (known after apply),
      + (known after apply),
    ]
  + aws_vpc_id     = (known after apply)
Saved the plan to: actions.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "actions.tfplan"
Since this is a new Terraform configuration there is no pre-existing infrastructure. Another way to put this is that there is no pre-existing state.
The output informs you that four resources will be created, and nothing will be changed or destroyed. This is good news, and exactly what you expected. You move on to the apply phase; this is where Terraform orchestrates the creation of resources through the AWS provider.
You do this with the `terraform apply` command, passing it the plan file you created previously:
$ terraform apply actions.tfplan
aws_vpc.this: Creating...
aws_vpc.this: Creation complete after 2s [id=vpc-057fbe0c63d183892]
aws_subnet.all[0]: Creating...
aws_subnet.all[1]: Creating...
aws_subnet.all[2]: Creating...
aws_subnet.all[2]: Creation complete after 1s [id=subnet-0a13b06126b8fc421]
aws_subnet.all[1]: Creation complete after 1s [id=subnet-076bd4eb98e692205]
aws_subnet.all[0]: Creation complete after 1s [id=subnet-0b6ebb3ce77572b63]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
aws_subnet_ids = [
"subnet-0b6ebb3ce77572b63",
"subnet-076bd4eb98e692205",
"subnet-0a13b06126b8fc421",
]
aws_vpc_id = "vpc-057fbe0c63d183892"
The output tells you that the operation completed successfully and the correct number of resources have been created.
You now have your network infrastructure in AWS. To verify this, you go to the AWS console. This time you type VPC into the search box at the top and select the VPC service in the list:
Next, you select Your VPCs in the menu on the left to see your VPC resources:
You find the VPC named `vpc-eu-west-1` and select it to see its details:
You click on the Resource map tab and see that this VPC contains three subnets: `subnet-eu-west-1a`, `subnet-eu-west-1b`, and `subnet-eu-west-1c`:
You are satisfied with what you have accomplished so far!
After some consideration you realize that you would like to collaborate on this Terraform configuration with your colleagues. You remember that your organization uses HCP Terraform, and you set out to migrate your current Terraform configuration to use HCP Terraform as a backend for state storage. This will allow your colleagues to work with the same configuration.
You go to HCP Terraform and sign in to your account using your HCP organization account. You arrive at the HCP Terraform landing page and you click on Create a workspace:
A workspace in HCP Terraform is one instance of a Terraform configuration with its own state file.
### Workspaces
There is a concept of a workspace in the Terraform CLI as well as in HCP Terraform, but it’s not technically the same thing.
Using CLI workspaces creates a new state file for the current configuration. This means you can use the same configuration but with multiple state files.
In HCP Terraform a workspace has a single state file. In Chapter 8 you will learn more about workspaces in HCP Terraform.
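For reference, CLI workspaces are managed with the `terraform workspace` subcommands; a short sketch (the workspace name `staging` is hypothetical):

```shell
$ terraform workspace list           # list workspaces for this configuration
$ terraform workspace new staging    # create a new workspace (and state file)
$ terraform workspace select default # switch back to the default workspace
```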
You specify that you want a CLI-Driven Workflow for the new workspace; this means HCP Terraform will be used for state storage and for running plan and apply operations[^5]:
There are three different types of workflows to choose from:
- The version control workflow requires that the Terraform configuration is placed in a Git repository and that you connect this repository to HCP Terraform.
- The CLI-driven workflow allows you to run Terraform from anywhere where you have the CLI (e.g. your laptop or a CI/CD pipeline), but you can utilize HCP Terraform for state storage and other features.
- The API-driven workflow is similar to the CLI-driven workflow but does not require the use of the CLI; you use the API directly instead. The API-driven workflow is the most advanced type of workflow but allows for some interesting use cases.
On the next page you configure your new workspace with a name of aws-networking, you place it in the Default Project and you give it a short description, then you click on Create:
Workspaces in HCP Terraform are part of a project, and projects are part of an organization.
Once the workspace is created, you copy the example code from the workspace overview page. You will need this code to configure your Terraform configuration to use the HCP Terraform workspace you created:
You edit the `terraform` block in `terraform.tf` using the code you copied:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.62.0"
    }
  }

  cloud {
    organization = "your-organization"

    workspaces {
      name = "aws-networking"
    }
  }
}
Currently, you have a state file in your local working directory. The state file is a record of the infrastructure Terraform has created, with all of its attributes. This is the single most important file you have in your Terraform working directory. The state file is named `terraform.tfstate` by default.
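To give a feel for the format, here is a heavily abbreviated sketch of what the state JSON looks like; real state files contain much more detail:

```json
{
  "version": 4,
  "terraform_version": "1.9.4",
  "resources": [
    {
      "mode": "managed",
      "type": "aws_vpc",
      "name": "this",
      "instances": [
        { "attributes": { "id": "vpc-057fbe0c63d183892", "cidr_block": "10.0.0.0/16" } }
      ]
    }
  ]
}
```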
Now that you are migrating to HCP Terraform as the backing system for your Terraform configuration, you must migrate the state file from your local directory to the HCP Terraform workspace.
A state migration can take place between any state storage backends, for example AWS S3 object storage. In Chapter 5 you will learn more about configuring remote state.
Normally, you would initiate a state migration between two different backends with the command `terraform init -migrate-state`; however, HCP Terraform is a bit of a special case. You just run a regular `terraform init` and follow the prompts:
$ terraform init
Initializing HCP Terraform...
Do you wish to proceed?
As part of migrating to HCP Terraform, Terraform can optionally copy
your current workspace state to the configured HCP Terraform workspace.
Answer "yes" to copy the latest state snapshot to the configured
HCP Terraform workspace.
Answer "no" to ignore the existing state and just activate the configured
HCP Terraform workspace with its existing state, if any.
Should Terraform migrate your existing state?
Enter a value: yes
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.62.0
HCP Terraform has been successfully initialized!
To make sure the state file is migrated to HCP Terraform, you go to your HCP Terraform workspace and click on the States menu option on the left:
You see one available state in the list and you click on it:
You see the state file displayed as JSON, and you scroll through it to get an understanding of the format:
HCP Terraform keeps a record of all the state changes.
Now you would like to test your HCP Terraform workspace as the driver of your Terraform workflow by issuing `terraform plan` and `terraform apply` commands. However, you are certain that there are no changes to your infrastructure, so you take a shortcut by issuing `terraform apply -auto-approve`:
$ terraform apply -auto-approve
Preparing the remote apply...
Waiting for the plan to start...
Terraform v1.9.4
on linux_amd64
Initializing plugins and modules...
When the run starts, you go to your HCP Terraform workspace and see that the run has been registered. You click on See details to get a better view:
In the run overview you are met with a large red error message:
You realize that HCP Terraform does not have access to the AWS credentials you configured for Terraform in your own terminal; this is why the apply operation failed!
To rectify the situation you go to your HCP Terraform workspaces overview page and select Settings in the left-hand menu:
On the settings page, select the Variable sets option in the left-hand menu:
A variable set is a collection of one or more variables, either Terraform variables or environment variables. You can apply the variable set to one or more workspaces as needed. You click on Create variable set:
You give your variable set the name aws-credentials, you provide a short description, and you say that this variable set should apply globally to all your workspaces[^6]:
You scroll down and add a new variable to the variable set. You specify that it should be an environment variable, that the key should be AWS_ACCESS_KEY_ID, and you provide the value of the access key you stored earlier. You also mark it as a sensitive value so that HCP Terraform does not output it in logs.
Finally, you click on Add variable to add it to the variable set:
You repeat the process for the environment variable named AWS_SECRET_ACCESS_KEY with the value of the secret access key you have stored locally.
Once you are done adding the two environment variables you click on Create variable set:
You initiate a new `terraform apply` from your terminal to see if HCP Terraform has everything it needs to work with your Terraform configuration:
$ terraform apply -auto-approve
Preparing the remote apply...
Waiting for the plan to start...
Terraform v1.9.4
on linux_amd64
Initializing plugins and modules...
data.aws_availability_zones.available: Refreshing...
aws_vpc.this: Refreshing state... [id=vpc-057fbe0c63d183892]
data.aws_availability_zones.available: Refresh complete after 0s [id=eu-west-1]
aws_subnet.all[2]: Refreshing state... [id=subnet-0a13b06126b8fc421]
aws_subnet.all[0]: Refreshing state... [id=subnet-0b6ebb3ce77572b63]
aws_subnet.all[1]: Refreshing state... [id=subnet-076bd4eb98e692205]
No changes. Your infrastructure matches the configuration.
In HCP Terraform you can also verify that the run has completed successfully:
The workday is over and it is time to go home. You are glad that you have accomplished a lot today!
Fast-forward a few months. You no longer need the Terraform configuration you created. It is time to end the lifecycle of this infrastructure.
You do this using the `terraform destroy` command:
$ terraform destroy
Preparing the remote apply...
Waiting for the plan to start...
Initializing plugins and modules...
data.aws_availability_zones.available: Refreshing...
aws_vpc.this: Refreshing state... [id=vpc-057fbe0c63d183892]
data.aws_availability_zones.available: Refresh complete after 0s [id=eu-west-1]
aws_subnet.all[2]: Refreshing state... [id=subnet-0a13b06126b8fc421]
aws_subnet.all[1]: Refreshing state... [id=subnet-076bd4eb98e692205]
aws_subnet.all[0]: Refreshing state... [id=subnet-0b6ebb3ce77572b63]
# ... output hidden for brevity
Plan: 0 to add, 0 to change, 4 to destroy.
Do you really want to destroy all resources in workspace "aws-networking"?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_subnet.all[0]: Destroying... [id=subnet-0b6ebb3ce77572b63]
aws_subnet.all[1]: Destroying... [id=subnet-076bd4eb98e692205]
aws_subnet.all[2]: Destroying... [id=subnet-0a13b06126b8fc421]
aws_subnet.all[1]: Destruction complete after 0s
aws_subnet.all[0]: Destruction complete after 0s
aws_subnet.all[2]: Destruction complete after 0s
aws_vpc.this: Destroying... [id=vpc-057fbe0c63d183892]
aws_vpc.this: Destruction complete after 1s
Apply complete! Resources: 0 added, 0 changed, 4 destroyed.
The `terraform destroy` command is in fact an alias for `terraform apply -destroy`, but you appreciate the clarity of the `terraform destroy` command.
Your journey with Terraform has come to an end, for now. In reality it is only the beginning.
We covered a lot about Terraform, and a little bit about AWS and HCP Terraform, in this section. Knowing this material by heart is a good start on your professional certification journey, but the exam will test you on edge cases outside of the linear path presented in this chapter.
That is what the rest of this book is about!
## Basics of the HashiCorp Configuration Language
In the previous section we saw a few concepts of the HashiCorp Configuration Language (HCL).
HCL is easy to learn and you might already know most of what you need to know about HCL, but I will still go through a few details of the language in this section.
HCL consists of two primary constructs: arguments and blocks.
An argument is the assignment of a value to a named entity:
name = "value"
Argument values can be one of three basic types: `string`, `number`, or `bool`. Values can also be of complex types, such as lists of strings or objects with many sub-arguments:
arr = ["one", "two", "three"]

obj = {
  first  = "one"
  second = "two"
  third  = "three"
}
Values such as `"one"`, `2`, and `["three", "four", "five"]` are called literal values.
Values can also be expressions. An expression is something that in the end evaluates to a literal value. Expressions can contain literal values, functions, and references. A few examples of expressions:
exp1 = min(1,2,3,6,3,0,9)
exp2 = var.do_it ? "I did it" : "I did not do it"
exp3 = [for name in var.names : lower(name)]
exp4 = aws_vpc.this.id
A block is a piece of code with a type and zero or more labels. A block can contain arguments or other blocks (nested blocks).
Examples of blocks with zero up to two labels look like this:
# a block with zero labels
terraform {
  required_version = "1.6.0"
}

# a block with one label
provider "aws" {
  region = "eu-west-1"
}

# a block with two labels
resource "aws_security_group" "web" {
  name = "web"
  # other arguments ...
}
A block containing a nested block looks like this:
terraform {
  # a nested block
  required_providers {
    # an argument with a complex value (an object)
    aws = {
      source = "hashicorp/aws"
    }
  }
}
There are many functions in the HCL language for Terraform[^7]. Here is an example of what the use of a function looks like:
# name will evaluate to foo-bar-baz
name = join("-", ["foo", "bar", "baz"])
These are the main pieces to remember. By using the constructs described above you can write all the Terraform code you can imagine. The difficulty of Terraform is not the HCL language; it is rather the complexities of the resources we create using Terraform providers.
Below follow examples of the main root-level blocks available in Terraform; these are the blocks you will use in most Terraform configurations you write:
- The `resource` block represents infrastructure resources we want to create. It has two labels, one for the resource type and one for the resource name:

  ```hcl
  resource "aws_security_group" "web" {
    # attributes
  }
  ```

- The `data` block represents data sources; these are existing resources you want to read attributes from. It has two labels, one for the data source type and one for the data source name:

  ```hcl
  data "aws_security_group" "web" {
    # attributes
  }
  ```

- The `variable` block represents input to our Terraform configuration. It has one label for the name of the variable:

  ```hcl
  variable "aws_region" {
    type        = string
    description = "AWS region"
  }
  ```

- The `output` block represents outputs from our Terraform configuration. It has one label for the name of the output:

  ```hcl
  output "vpc_id" {
    value = aws_vpc.this.id
  }
  ```

- The `terraform` block allows us to configure the required Terraform binary version, the required provider versions, our state backend location, and possible HCP Terraform integration. This block has no labels:

  ```hcl
  terraform {
    required_version = "..."

    required_providers {
      # ...
    }

    backend "s3" {
      # attributes
    }
  }
  ```

- The `provider` block allows us to configure a given provider that we are using. It has one label representing the name of the provider:

  ```hcl
  provider "aws" {
    region = "eu-west-1"
    # other attributes ...
  }
  ```

- The `module` block declares a module. A module is a reusable piece of Terraform configuration. It has one label representing the module name:

  ```hcl
  module "vpc" {
    source  = "terraform-aws-modules/vpc/aws"
    version = "5.13.0"
  }
  ```

- The `locals` block can be used to create local values that can be referenced elsewhere in your Terraform configuration. This block has no labels:

  ```hcl
  locals {
    local1 = "value1"
    # other locals
  }
  ```
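Note that values from a `locals` block are referenced with the singular `local.` prefix. A small sketch (the `common_tags` name is made up for illustration):

```hcl
locals {
  common_tags = {
    Project = "aws-networking"
  }
}

resource "aws_vpc" "this" {
  cidr_block = "10.0.0.0/16"
  tags       = local.common_tags # referenced as local.<name>, not locals.<name>
}
```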
We will see these blocks in use throughout the book.
## Summary
This chapter explored a journey with Terraform. We started by installing Terraform on our local system, and we went through the design of a network architecture with Terraform. We applied the configuration, decided to migrate our state to HCP Terraform, and finally destroyed our infrastructure.
We also explored the basic concepts of the HashiCorp Configuration Language (HCL). We learned that HCL has two main concepts: arguments and blocks. We saw examples of root-level blocks that are available in Terraform.
Refer back to this chapter to remind yourself of the big picture of working with Terraform.
[^1]: Check out the Terraform Tuesdays playlist on YouTube by my fellow HashiCorp Ambassador Ned Bellavance, also known as Ned in the Cloud. It covers bits and pieces of Terraform and the surrounding ecosystem.

[^2]: Throughout this book I use version 1.9.x of Terraform. The exam tests you on version 1.6.x of Terraform, so make sure you are not relying on features introduced after 1.6.x.

[^3]: In Chapter 7 and Chapter 8 there will be a discussion of other ways to provide credentials to Terraform.

[^4]: In this imaginary scenario you do!

[^5]: To collaborate in a better way you should use a VCS-driven (Version Control System) workflow. However, setting this up from scratch would make this chapter a lot longer without contributing anything to the story, so I decided to skip it in this example.

[^6]: Be careful about applying credentials globally to all workspaces like this. If your organization has a large number of projects and workspaces, apply credentials at a lower scope to fit your needs.

[^7]: Visit the documentation to see a list of all the built-in functions.