AWS S3 ABAC for Terraform Backends

Mattias Fjellström
Cloud architect · Author · HashiCorp Ambassador · Microsoft MVP

Role-Based Access Control (RBAC) is a powerful way to manage the permissions you assign to entities in your environment. In AWS you group permissions into policies, and you attach one or more policies to roles that can be assumed by different principals (e.g. a service on AWS or a GitHub Actions workflow).

A related concept is Attribute-Based Access Control (ABAC). AWS recently added support for ABAC for S3 general-purpose buckets, allowing you to authorize operations on an S3 bucket based on the tags set on the bucket. The idea is to allow or deny operations based on the attributes of the bucket instead of specific bucket names.

In this blog post I go through the steps to create a new AWS S3 Terraform backend with ABAC enabled, and show how you can use this backend from a GitHub Actions workflow.

Initial configuration

The example I will go through uses two providers:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.23.0"
    }

    github = {
      source  = "integrations/github"
      version = "6.9.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

provider "github" {
  owner = var.github_handle
}

The ABAC feature for S3 buckets was introduced in provider version 6.23.0, which is why I have configured the version constraint version = ">= 6.23.0" for the AWS provider.

My configuration has the following variables configured:

variable "aws_region" {
  description = "AWS region name (e.g. eu-north-1)"
  type        = string
}

variable "github_handle" {
  description = "GitHub username or organization name"
  type        = string
}

variable "github_repository" {
  description = "Existing GitHub repository where Terraform will run"
  type        = string
}

Set up an AWS S3 backend with ABAC enabled

A Terraform backend on AWS consists of an S3 bucket. You can optionally use a DynamoDB table for managing state file locks, but the modern approach is to instead use a lock file in S3 (more on that later).

Configure an S3 bucket using the aws_s3_bucket resource type, and enable ABAC for the bucket using the aws_s3_bucket_abac resource:

resource "aws_s3_bucket" "default" {
  bucket_prefix = "terraform-state-"

  tags = {
    "managed-by" = "terraform"
    "purpose"    = "state"
  }
}

resource "aws_s3_bucket_abac" "default" {
  bucket = aws_s3_bucket.default.bucket

  abac_status {
    status = "Enabled"
  }
}

In this example I have added two tags for my S3 bucket:

  • managed-by = terraform
  • purpose = state

These tags are important because the ABAC configuration will depend on the values of these tags.

Note that these tags are not special by themselves; you can use any tags that you see fit for your environment.

Create a role with an ABAC policy

Next I want to create a role with a policy that allows it to use the S3 state backend.

Before I create the role I want to configure how the role will be assumed. The role is intended to be used from a GitHub Actions workflow, so I can use workload identity federation based on OIDC.

The following code configures an AWS IAM OIDC provider for GitHub, an assume-role policy that allows my repository on GitHub to assume the role using this provider, and the role itself:

resource "aws_iam_openid_connect_provider" "github" {
  url            = "https://token.actions.githubusercontent.com"
  client_id_list = ["sts.amazonaws.com"]
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }
    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:${var.github_handle}/${var.github_repository}:ref:*"]
    }
  }
}

resource "aws_iam_role" "terraform" {
  name_prefix        = "terraform-"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

Now I want to create a policy that allows the role to use the S3 bucket as a Terraform state backend. The secret to achieving this is to add conditions to the policy that check for the existence of the tags I defined on the S3 bucket earlier. The policy data source is configured as follows:

data "aws_iam_policy_document" "terraform_state" {
  statement {
    effect = "Allow"
    actions = [
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
      "s3:ListBucket"
    ]
    resources = ["*"]
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/managed-by"
      values   = ["terraform"]
    }
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/purpose"
      values   = ["state"]
    }
  }
}

An important detail here is that I allow access to all S3 buckets that have these tags set (i.e. resources = ["*"]). This allows me to base access on tag values instead of bucket names. If I add a different S3 bucket that I intend to use for Terraform state storage, I just add the same tags to this bucket and each role that has this policy can start using it immediately.
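
As a concrete sketch of that scenario, a second state bucket only needs the same tags and ABAC setting to become usable right away (the resource names here are illustrative):

resource "aws_s3_bucket" "another" {
  bucket_prefix = "terraform-state-"

  # The same tags as before: every role carrying the ABAC policy
  # can start using this bucket immediately.
  tags = {
    "managed-by" = "terraform"
    "purpose"    = "state"
  }
}

resource "aws_s3_bucket_abac" "another" {
  bucket = aws_s3_bucket.another.bucket

  abac_status {
    status = "Enabled"
  }
}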

I added the s3:DeleteObject permission because I want to use the lock-file feature of the S3 backend for Terraform. With this feature a lock file is created in the S3 bucket when Terraform uses the state file, and it is deleted when the Terraform run is complete.

You can write a more granular policy that only allows s3:DeleteObject for lock files and nothing else if you want to make sure no state files are deleted by accident.
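
A rough sketch of such a policy could look as follows. The S3 backend names its lock file <key>.tflock, so the second statement scopes deletes to that suffix (treat the resource pattern as an assumption to verify in your environment):

data "aws_iam_policy_document" "terraform_state_granular" {
  # Read and write state files on tagged buckets, but no deletes.
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
    resources = ["*"]
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/managed-by"
      values   = ["terraform"]
    }
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/purpose"
      values   = ["state"]
    }
  }

  # Allow deletes only for lock files.
  statement {
    effect    = "Allow"
    actions   = ["s3:DeleteObject"]
    resources = ["arn:aws:s3:::*/*.tflock"]
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/managed-by"
      values   = ["terraform"]
    }
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/purpose"
      values   = ["state"]
    }
  }
}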

From this data source I create a policy resource (in case I want to reuse this policy for other roles), and I attach it to the Terraform role:

resource "aws_iam_policy" "terraform" {
  name_prefix = "terraform-state-"
  policy      = data.aws_iam_policy_document.terraform_state.json
}

resource "aws_iam_role_policy_attachment" "terraform" {
  role       = aws_iam_role.terraform.name
  policy_arn = aws_iam_policy.terraform.arn
}

Finally, I also give my role a policy that will allow it to perform its intended job (which I arbitrarily defined as managing EC2 instances):

resource "aws_iam_role_policy_attachment" "ec2" {
  role       = aws_iam_role.terraform.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}

Configure a GitHub Actions workflow

The GitHub Actions workflow will assume this role, and once it runs terraform init and other Terraform commands, the ABAC magic happens behind the scenes.

The workflow I want to run is this:

name: Use AWS ABAC

on:
  workflow_dispatch:
    inputs:
      operation:
        type: choice
        options:
          - apply
          - destroy

permissions:
  id-token: write
  contents: read

env:
  AWS_REGION: ${{ vars.AWS_REGION }}
  AWS_ROLE_ARN: ${{ vars.AWS_ROLE_ARN }}

jobs:
  ec2:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: terraform/aws-s3-abac/use
    steps:
      - uses: aws-actions/configure-aws-credentials@v5.1.1
        with:
          role-to-assume: ${{ env.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
      - uses: actions/checkout@v5
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -no-color
        env:
          TF_VAR_aws_region: ${{ env.AWS_REGION }}
      - if: ${{ inputs.operation == 'apply' }}
        run: terraform apply -auto-approve
        env:
          TF_VAR_aws_region: ${{ env.AWS_REGION }}
      - if: ${{ inputs.operation == 'destroy' }}
        run: terraform destroy -auto-approve
        env:
          TF_VAR_aws_region: ${{ env.AWS_REGION }}

The important step in the workflow is the first step, where I assume the role defined in the AWS_ROLE_ARN environment variable. I have given the workflow the id-token: write permission to allow it to request an OIDC ID token from GitHub, which is then exchanged for temporary AWS credentials. The rest of the workflow consists of simple Terraform commands.

I have configured the environment variables to come from Actions variables in the repository:

env:
  AWS_REGION: ${{ vars.AWS_REGION }}
  AWS_ROLE_ARN: ${{ vars.AWS_ROLE_ARN }}

I create these variables using Terraform:

data "github_repository" "default" {
  name = var.github_repository
}

resource "github_actions_variable" "aws_region" {
  repository    = data.github_repository.default.name
  variable_name = "AWS_REGION"
  value         = var.aws_region
}

resource "github_actions_variable" "aws_role_arn" {
  repository    = data.github_repository.default.name
  variable_name = "AWS_ROLE_ARN"
  value         = aws_iam_role.terraform.arn
}

Using the AWS S3 Terraform backend with ABAC

The Terraform configuration that my GitHub Actions workflow runs is straightforward. The provider does not require any specific configuration for authentication:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.23.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

The backend configuration must include the bucket name and region along with the key (name) of the state file:

terraform {
  backend "s3" {
    region       = "eu-north-1"
    bucket       = "terraform-state-20251217093701061200000001"
    key          = "state/team/terraform.tfstate"
    use_lockfile = true
  }
}

There is no specific configuration required to use ABAC for authorization; all of that is handled on the AWS side.

As I mentioned earlier, I use the lock-file feature of the S3 backend by setting use_lockfile = true. The alternative would have been to use a DynamoDB table, but that is no longer necessary.
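
For reference, the legacy setup would have looked something like this, with a DynamoDB table (hypothetical name) instead of use_lockfile:

terraform {
  backend "s3" {
    region = "eu-north-1"
    bucket = "terraform-state-20251217093701061200000001"
    key    = "state/team/terraform.tfstate"

    # Legacy state locking through a DynamoDB table.
    dynamodb_table = "terraform-locks"
  }
}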

The demo infrastructure I want to create is a simple Ubuntu EC2 instance:

data "aws_ami" "ubuntu" {
  owners = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*"]
  }

  most_recent = true
}

resource "aws_instance" "server" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "github-instance"
  }
}

Summary and key takeaways

ABAC and RBAC are two sides of the authorization coin. Use both where it makes sense!

ABAC allows you to easily scale management of access to S3 buckets based on tags. In this blog post I showed you how to set up an S3 backend for Terraform with ABAC enabled and how to configure a policy for a role that will use the bucket.

You can build more sophisticated policies. One example would be to include a team tag and allow different teams to use only a specific path prefix inside the S3 backend. You could have one general ABAC policy for Terraform state usage (i.e. one that allows the use of specific S3 buckets dedicated to Terraform state) and then other ABAC policies that only allow managing state files under a specific path inside these buckets.
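
As a sketch of that idea, the statement below scopes object access to a path derived from a team tag on the calling principal. The team tag, path layout, and principal-tag matching are illustrative assumptions, not something taken from a tested setup:

data "aws_iam_policy_document" "team_state" {
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]

    # Match the object key against the caller's own team tag.
    # The $${...} escape keeps the IAM policy variable out of
    # Terraform's own string interpolation.
    resources = ["arn:aws:s3:::*/state/$${aws:PrincipalTag/team}/*"]

    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/purpose"
      values   = ["state"]
    }
  }

  # A complete policy would also need s3:ListBucket, ideally
  # restricted with an s3:prefix condition on the same path.
}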

The possibilities are endless!
