
Terraform in an AWS Multi Account Environment



Terraform is a great tool to spin up environments on AWS or in other clouds. But when it comes to a multi-account environment, there is a gap. This article offers several ways to bridge it with some Makefile magic.

Requirements

Let’s define the requirements we’ve met in customer projects before we try to conquer them:

1. Terraform as the infrastructure provisioning tool. Since multiple team members work on the code, a remote state in an S3 bucket is needed.
2. A different AWS account for each stage.
3. Three stages (dev, test, prod) with the same infrastructure setup. Everything should be identical except for sizing: different instance types and volume sizes.
4. The S3 bucket for the remote state has to be managed within these AWS accounts. This means: total isolation of the separate stages.
5. Terraform will be used within CI/CD pipelines to automate service delivery.

Solutions

With these requirements in mind, there are several solutions.

Keep separate subdirectories

The most obvious solution is to keep each stage in its own subdirectory and handle each subdirectory as an individual Terraform project with its own state file. Works like a charm, problem solved; dear subconscious, please ignore the necessary code duplication. Until you have to modify your infrastructure in each separate subdirectory, over and over again, to keep everything in sync.
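For illustration, such a layout might look like this (directory names are just an example):

```
terraform/
├── dev/    # full copy of the code, own state file
├── test/   # full copy of the code, own state file
└── prod/   # full copy of the code, own state file
```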

Terraform Workspaces

So let’s look at the built-in feature mentioned in the Terraform docs:

“Where possible, it’s recommended to use a single backend configuration for all environments and use the terraform workspace command to switch between workspaces.”

Terraform workspaces offer the possibility to deploy multiple instances of one code base. Each workspace in a project is linked to its own state file. Reflecting on the requirements above, this means breaking the isolation of the stages (requirement 4), as all stages would have to share one S3 bucket.
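For reference, this is how workspaces are created and selected:

```sh
terraform workspace new dev      # create a workspace per stage
terraform workspace select dev   # switch to it
terraform workspace list         # show all workspaces in this backend
```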

Account isolation

Take a look at the provider configuration in its most basic form. This configuration causes Terraform to look into your ~/.aws directory for a default AWS configuration and to write everything into the referenced S3 bucket.
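A minimal sketch of what that basic configuration might look like (bucket name, lock table, and region are placeholders):

```hcl
# No explicit credentials: the provider falls back to the default
# profile in ~/.aws. The backend writes the state to a fixed S3 bucket.
provider "aws" {
  region = "eu-central-1" # placeholder region
}

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"  # placeholder bucket
    key            = "terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "terraform-lock"      # placeholder lock table
  }
}
```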
Let’s make this configuration a little more dynamic by injecting variables at runtime; this is needed to work with separate AWS accounts. There are some config values we can inject during Terraform execution:
  • most obvious: the AWS account to use, which can be addressed with the provider’s profile configuration parameter
  • the backend configuration while initializing the Terraform project: the -backend-config flag supports config parameters like bucket and dynamodb_table
To sum up these two settings, you’ll want something like this, assuming you’ve got an aws-dev profile:
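```sh
# Sketch of both injection points; bucket and table names are made up.
# Assumes the backend block leaves bucket, dynamodb_table and profile
# unset (partial configuration), so they can be supplied at init time:
terraform init \
  -backend-config="bucket=my-terraform-state-dev" \
  -backend-config="dynamodb_table=terraform-lock-dev" \
  -backend-config="profile=aws-dev"

# Inject the AWS profile for the provider at plan/apply time:
terraform plan -var 'profile=aws-dev'
```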
Hard to remember, so let’s wrap it up in a Makefile to get a portable solution.
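A minimal sketch of such a Makefile, assuming one AWS profile and one variable file per stage (all names are placeholders, not the original from the post; recall that Make recipes must be indented with tabs):

```makefile
# STAGE selects the AWS profile and the matching backend; the bucket
# name is derived from a stage-specific prefix (placeholder scheme).
STAGE   ?= dev
PROFILE := aws-$(STAGE)
BUCKET  := my-terraform-state-$(STAGE)

init:
	terraform init \
		-backend-config="bucket=$(BUCKET)" \
		-backend-config="dynamodb_table=terraform-lock-$(STAGE)" \
		-backend-config="profile=$(PROFILE)" \
		-reconfigure

plan: init
	terraform plan -var-file=$(STAGE).tfvars -var 'profile=$(PROFILE)'

apply: init
	terraform apply -var-file=$(STAGE).tfvars -var 'profile=$(PROFILE)'
```

Calling make apply STAGE=test would then run against the test account with test.tfvars.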
In addition, you might want different variables for each stage, e.g. different EC2 instance types. Luckily, Terraform provides the -var-file option to load per-stage variable files, so our Makefile supports this as well.
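Such a stage file is just a plain variable file, for example (the values are made up):

```hcl
# dev.tfvars -- sizing for the dev stage
instance_type = "t3.micro"
volume_size   = 20
```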
The complete Makefile also creates a unique prefix to identify the S3 bucket, and it includes some input handling for integration into CI/CD pipelines (tested with GitLab CI). With this solution, your Terraform code only needs to be modified in the few places where stage-specific values are inserted as variables, sketched below (a hypothetical example, not the original snippet):
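```hcl
# Everything that differs per stage comes in as a variable.
variable "profile" {}
variable "instance_type" {}
variable "volume_size" {}

provider "aws" {
  region  = "eu-central-1" # placeholder region
  profile = var.profile
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = var.instance_type

  root_block_device {
    volume_size = var.volume_size
  }
}
```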
These are only snippets. A complete example can be found on GitHub.

Alternative

Take a look at what Terragrunt does. It might be worth a try.
