Configure Terraform Remote Backend on AWS S3 Bucket

Part IV — Set up Terraform to store the state file in an AWS S3 bucket with a DynamoDB lock.

Sagar
4 min read · Sep 17, 2023

In the previous post we understood what the state file is and why it matters. When Terraform is used in an organisation where multiple members of a project may work on and modify the same resources, it's essential that everyone works against the same .tfstate file.

However, keeping it in a git repo is not recommended, as it stores sensitive data unencrypted. So how do we share it with all members of the project? The solution is to use a remote backend.

You can find the repository with all of the project configurations on GitHub.

What is a backend in Terraform:

The Terraform backend is where Terraform stores its state file. By default the backend is “local”, which is why, when we run our configuration, the .tfstate file is generated on our workstation.
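For reference, the default can also be written out explicitly. This is just a minimal sketch of the implicit default, not something the project needs; the path shown is simply the default state location:

# Explicitly declaring the default "local" backend (normally omitted).
terraform {
  backend "local" {
    path = "terraform.tfstate" # default location of the state file
  }
}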

S3 Backend Config:

We’ll use an S3 bucket to store the .tfstate file and a DynamoDB table to lock the state while someone is executing a run; this protects the state from being corrupted by simultaneous runs from multiple members. The bucket also gets a prevent_destroy lifecycle rule so it can’t be destroyed accidentally.

First let’s create a small project that provisions an IAM user and apply it. This stores the state locally; we’ll configure the S3 backend afterwards (you can configure the backend directly and skip this step).

[root@labputer Remote_backend]# pwd
/root/TF/Remote_backend
[root@labputer Remote_backend]# tree
.
├── 01_IAM_USER
│   ├── main.tf
│   └── terraform.tfstate
└── 02_S3_backend
    ├── main.tf
    └── terraform.tfstate

2 directories, 4 files
[root@labputer Remote_backend]#

The main.tf in the 01 folder contains only the IAM user configuration, and the main.tf in the 02 folder contains the S3 bucket and DynamoDB config.

IAM user config:

provider "aws" {
region = "ap-south-1"
}

resource "aws_iam_user" "my_iam_users" {
name = "test_iam_18_sept_23"
}

Applying this, we have the state stored locally as usual.

Now we’ll create the S3 bucket and a DynamoDB table.

S3 with DynamoDB config:

provider "aws" {
region = "ap-south-1"
}


resource "aws_s3_bucket" "office_backend_state" {
bucket = "xyz-applications-backend-state-sagar-18thsept"

lifecycle {
prevent_destroy = true # bucket can't be destoroyed
}

}

resource "aws_s3_bucket_versioning" "S3_with_versioning" {
bucket = aws_s3_bucket.office_backend_state.id
versioning_configuration {
status = "Enabled"
}
}

resource "aws_s3_bucket_server_side_encryption_configuration" "Bucket_encryption" {
bucket = aws_s3_bucket.office_backend_state.bucket

rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}


resource "aws_dynamodb_table" "office_backend_lock" {
name = "xyz_application_locks"
billing_mode = "PAY_PER_REQUEST"

hash_key = "LockID"

attribute {
name = "LockID"
type = "S"
}

}
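Optionally, you could add a couple of output blocks (these are my addition, not part of the original config) so the bucket and table names are easy to copy into the backend blocks of other projects:

# Convenience outputs: print the names needed by the backend block.
output "state_bucket_name" {
  value = aws_s3_bucket.office_backend_state.bucket
}

output "lock_table_name" {
  value = aws_dynamodb_table.office_backend_lock.name
}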

Let’s configure the backend of the IAM configuration to be this S3 bucket.

Change the configuration to add the backend block as below:

terraform {
  backend "s3" {
    bucket         = "xyz-applications-backend-state-sagar-18thsept"
    key            = "xyz/backend-state/IAM/backend-state"
    region         = "ap-south-1"
    dynamodb_table = "xyz_application_locks"
    encrypt        = true
  }
}

provider "aws" {
  region = "ap-south-1"
}

resource "aws_iam_user" "my_iam_users" {
  name = "test_iam_18_sept_23"
}
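The same bucket and lock table can back several projects; each project just needs its own key so the state objects don’t collide. As a sketch, a hypothetical second project (the key path here is illustrative, not from the original repo) could use:

terraform {
  backend "s3" {
    bucket         = "xyz-applications-backend-state-sagar-18thsept"
    key            = "xyz/backend-state/VPC/backend-state" # a different key per project
    region         = "ap-south-1"
    dynamodb_table = "xyz_application_locks"
    encrypt        = true
  }
}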

Because we changed the backend configuration, we need to run “tf init” again (if Terraform complains about the changed backend, “tf init -migrate-state” moves the existing state to the new backend, while “tf init -reconfigure” ignores the old backend and starts fresh). Terraform then asks whether we want to copy our existing tfstate to the remote backend.

[root@labputer 01_IAM_USER]# tf init

Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.

Enter a value: yes


Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

We can verify the state file in the S3 bucket and open the object to view its contents. Since we have the state in S3 now, we can delete the local state file for the IAM project.

When we destroy the IAM resource, the state is updated on the bucket.

aws_iam_user.my_iam_users: Destroying... [id=test_iam_18_sept_23]
aws_iam_user.my_iam_users: Destruction complete after 1s

The state file content after applying destroy on the IAM resource:

{
  "version": 4,
  "terraform_version": "1.5.6",
  "serial": 3,
  "lineage": "random things",
  "outputs": {},
  "resources": [],
  "check_results": null
}

After you are done with your experiment, destroy the S3 bucket and DynamoDB table.

Note: first set prevent_destroy to false in the lifecycle block and delete all versions of the state object; only then can the S3 bucket be deleted.
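The lifecycle change is just a flip of the flag in the existing bucket resource, roughly as below; re-apply it before running destroy:

resource "aws_s3_bucket" "office_backend_state" {
  bucket = "xyz-applications-backend-state-sagar-18thsept"

  lifecycle {
    prevent_destroy = false # allow the bucket to be destroyed for cleanup
  }
}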

Alrighty. Hope you found it helpful. Thanks for reading.

References: HashiCorp Docs

