Here at Rover, the engineering team uses Terraform extensively to define and manage our infrastructure as code. We even use Terraform to manage our Datadog monitors.
We often want to pull configuration information managed by outside build processes into Terraform. An example would be referencing the current release's container image tag in an EC2 Container Service (ECS) task definition in order to deploy a new version of an application in ECS. There are many ways to accomplish this. One that we've found useful is reading the body of a `text/plain` S3 object using the `aws_s3_bucket_object` data source.
Referencing S3 Object Content in Terraform
Using the `aws_s3_bucket_object` data source, Terraform will make the content of the object available to you as the `body` attribute if the content type is `text/*` or `application/json`. We've found this to be a convenient way of reading simple strings from external sources into Terraform, since it's easy to update keys in S3 from a variety of programming languages and shells.
Below are some examples of writing a string to an S3 key and reading that value back out in Terraform. It's important either to not write a trailing newline to the S3 object or to use the `trimspace` interpolation function to strip leading and trailing whitespace.
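For example, if your build process does write a trailing newline, a defensive read might look like this (a minimal sketch, assuming the `release_id` data source defined in the example below):

```hcl
output "release_id" {
  # trimspace strips leading/trailing whitespace, including a trailing newline
  value = "${trimspace(data.aws_s3_bucket_object.release_id.body)}"
}
```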
Write "abc123" to s3://my-s3-bucket/myapp/staging/current through any means convenient to you as part of your build pipeline
echo -n "abc123" | aws s3 cp - \
s3://my-s3-bucket/myapp/staging/current \
--acl private \
--content-type "text/plain"
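Since `body` is only exposed when the content type is text-like, it can be worth verifying the object's metadata after uploading. One optional way to check (not part of the pipeline itself) is with the AWS CLI:

```bash
# The ContentType field in the output should read "text/plain"
aws s3api head-object \
  --bucket my-s3-bucket \
  --key myapp/staging/current
```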
Read the S3 key's data in Terraform:
variable "environment" {
default = "staging"
}
data "aws_s3_bucket_object" "release_id" {
bucket = "my-s3-bucket"
key = "myapp/{var.environment}/current"
}
output "release_id" {
value = "${data.aws_s3_bucket_object.release_id.body}"
}
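After a `terraform apply` (or `terraform refresh`), you can confirm the value came through:

```bash
terraform output release_id  # should print: abc123
```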
Putting it Together to Deploy an Application in ECS
You can use this method to pass the container image tag for your latest release to your ECS task definition in order to deploy it. The general steps look like this:
- Your build process builds your container image, tags it with a release id (perhaps a prefixed git sha1), and pushes the tag to your container image repository of choice.
- Your build process updates the S3 object with the release id it just built and pushed (a sketch of these two steps follows the list).
- Once this is done, you're free to run `terraform plan` and `apply`, which will read the currently desired release id from the S3 object you wrote in the previous step.
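A rough sketch of those first two steps, reusing the image name, bucket, and release id scheme from elsewhere in this post (your pipeline will differ):

```bash
# Tag the image with a release id derived from the current commit
RELEASE_ID="release-$(git rev-parse --short HEAD)"
docker build -t my-container-image:${RELEASE_ID} .
docker push my-container-image:${RELEASE_ID}

# Record the release id where Terraform will read it
echo -n "${RELEASE_ID}" | aws s3 cp - \
  s3://my-application-releases/myapp/current \
  --acl private \
  --content-type "text/plain"
```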
Example Terraform
provider "aws" {
region = "us-west-2"
}
data "aws_s3_bucket_object" "release_id" {
bucket = "my-application-releases"
key = "myapp/current"
}
# We would typically use a data template_file here
# passing the release_id in as a var and reading
# the JSON from a file
resource "aws_ecs_task_definition" "nginx" {
family = "my-app"
container_definitions = <<EOF
[
{
"name": "my-app",
"image": "my-container-image:${data.aws_s3_bucket_object.release_id.body}",
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 0
}
],
"memory": 100,
"cpu": 10,
"docker_tags": {
"RELEASE_ID": "${data.aws_s3_bucket_object.release_id.body}"
}
}
]
EOF
}
output "release_id" {
value = "${data.aws_s3_bucket_object.release_id.body}"
}
You can use interpolations in the `key` attribute of the `aws_s3_bucket_object` data source to modularize this easily using some type of variable denoting environment. You could also use Terraform workspaces.
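For instance, keying the S3 path off the current workspace might look like this (a sketch assuming one key per workspace):

```hcl
data "aws_s3_bucket_object" "release_id" {
  bucket = "my-application-releases"

  # terraform.workspace interpolates the active workspace name,
  # yielding e.g. myapp/staging/current or myapp/production/current
  key = "myapp/${terraform.workspace}/current"
}
```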
Summary
There are other methods of reading this type of configuration data into Terraform, including:
- Wrapping Terraform in a way where you read in configuration via any means and supply it as variables using the CLI or by setting `TF_VAR_` environment variables. We interact with Terraform through a standard Makefile, so we have a nice entry point through which to do things like this (a sketch follows this list).
- Using the External Data Source provider (https://www.terraform.io/docs/providers/external/data_source.html).
- Something I haven't thought of that is better than everything mentioned in this post.
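Here's a sketch of the `TF_VAR_` approach, assuming the configuration declares a `release_id` variable:

```bash
# Read the current release id out of S3 and hand it to Terraform as a variable
export TF_VAR_release_id="$(aws s3 cp s3://my-s3-bucket/myapp/staging/current -)"
terraform plan
```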
What I like about reading the body of an S3 object directly in the Terraform configuration is that you can read through the Terraform and see what's going on all in one place. You're free to use the standard CLI commands, allowing Terraform to natively pull in the correct values. To me, this method is a similar workflow improvement to when Terraform moved backend configuration from something you configured on the command line to something declared right in your Terraform files. One notable downside is the inability to store the result of interpolations in intermediate variables, meaning every time you access the data you'll need to re-run interpolations such as `trimspace`.