SHIFT

--- Sjoerd Hooft's InFormation Technology ---

Terraform

In this article I'll take you on a ride with Terraform. What started as a simple “How to get started with… ” page turned into an extensive article covering Terraform basics, some more advanced concepts, and using Terraform on AWS, Azure and Azure DevOps. The order might not make sense at first, but it's the order in which I figured everything out: from the basics to more advanced concepts, and from AWS to Azure and then Azure DevOps.

Terraform on Windows Setup

First prepare your Windows desktop. Install VSCode and add the Terraform plugin from HashiCorp. Then install Terraform locally, which works best as a package from Chocolatey. After installing Chocolatey you can run choco install terraform.

After the installation you should be able to run terraform version. In VS Code you might get an error that terraform is not recognized as a command; in that case restart VS Code, or run refreshenv to refresh the environment.
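The setup above boils down to a few commands; this is a sketch assuming Chocolatey is already installed and you're in an elevated PowerShell prompt:

```shell
# Install Terraform via Chocolatey
choco install terraform -y

# Refresh this shell's environment so the new PATH entry is picked up
refreshenv

# Verify the installation
terraform version
```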

Credentials for AWS

You can add the credentials into the main.tf file like this:

provider "aws" {
  region = "eu-west-1"
  access_key = "AKIAXXXXXXXXXX"
  secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

Or you can add the credentials to environment variables by running these commands in PowerShell:

$Env:AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXX"
$Env:AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
$Env:AWS_DEFAULT_REGION="eu-west-1"
If you set an environment variable at the PowerShell prompt as shown in the previous examples, it saves the value for only the duration of the current session. To make the environment variable setting persistent across all PowerShell and Command Prompt sessions, store it by using the System application in Control Panel. Alternatively, you can set the variable for all future PowerShell sessions by adding it to your PowerShell profile.
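For example, one way to persist the variables for future PowerShell sessions is appending them to your PowerShell profile (the key values here are placeholders, as above; Add-Content creates the profile file if it doesn't exist yet, provided its directory exists):

```shell
# Append the variable assignments to the PowerShell profile
# so every new session sets them automatically
Add-Content $PROFILE '$Env:AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXX"'
Add-Content $PROFILE '$Env:AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"'
Add-Content $PROFILE '$Env:AWS_DEFAULT_REGION="eu-west-1"'
```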

Or in the command shell:

C:\> setx AWS_ACCESS_KEY_ID AKIAXXXXXXXXXX
C:\> setx AWS_SECRET_ACCESS_KEY xxxxxxxxxxxxxxxxxxxxx
C:\> setx AWS_DEFAULT_REGION eu-west-1
Using set to set an environment variable changes the value used until the end of the current command prompt session, or until you set the variable to a different value. Using setx to set an environment variable changes the value used in both the current command prompt session and all command prompt sessions that you create after running the command. It does not affect other command shells that are already running at the time you run the command.

If you are working with multiple accounts you could also install the AWS CLI and create profiles. Download the AWS CLI installer, then run the aws configure command, which allows you to set up your default profile:

PS C:\Users\sjoer> aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: eu-west-1
Default output format [None]: json

This creates a credentials file in the .aws directory in your home directory. You can edit the file and rename the profile so you can reference it in your main.tf file. Credentials file:

# Amazon Web Services Credentials File used by AWS CLI, SDKs, and tools
# This file was created by the AWS Toolkit for Visual Studio Code extension.
#
# Your AWS credentials are represented by access keys associated with IAM users.
# For information about how to create and manage AWS access keys for a user, see:
# https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
#
# This credential file can store multiple access keys by placing each one in a
# named "profile". For information about how to change the access keys in a
# profile or to add a new profile with a different access key, see:
# https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html
#
[terraform]
region=eu-west-1
aws_access_key_id = AKIAXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxx
[prod]
region=eu-west-1
aws_access_key_id = AKIAXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxx

main.tf file:

provider "aws" {
    profile = "terraform"
}

Terraform Basics

The main commands are:

  • terraform init: Checks the config file and downloads the required providers
  • terraform plan: Creates the plan: what to do based on the config file and the state file
  • terraform apply: Applies the plan
    • Use the option -auto-approve to suppress the summary and confirmation
  • terraform destroy: Destroys the managed infrastructure
    • Use the option -auto-approve to suppress the summary and confirmation
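Chained together, a typical run looks like this; writing the plan to a file and applying that file (as done later in this article on Azure) makes the apply step reproducible:

```shell
terraform init                 # download providers, set up the backend
terraform plan -out=my.tfplan  # write the execution plan to a file
terraform apply "my.tfplan"    # apply exactly that saved plan
terraform destroy              # tear everything down again (asks for confirmation)
```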

Terraform State

After you have applied a Terraform config you get a tfstate file, which is the heart of your Terraform operation. It records all resources created and information about those resources, such as the AWS ARN. Don't delete this file!
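You normally don't edit the tfstate file by hand; Terraform's state and show subcommands let you inspect it (aws_vpc.myvpc here is just an example resource address):

```shell
terraform state list                 # list all resources tracked in the state
terraform state show aws_vpc.myvpc   # show the recorded attributes of one resource
terraform show                       # render the entire state in readable form
```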

Terraform Variables and How to Use them

variable "vpcname" {
    type = string
    default = "myvpc"
}

variable "sshport" {
    type = number
    default = 22
}

variable "enabled" {
    default = true
}

variable "mylist" {
    type = list(string)
    default = ["myvalue0","myvalue1"]
}

variable "mymap" {
    type = map(string)
    default = {
        Key1 = "Value1"
        Key2 = "Value2"
    }
}

# A tuple is like a list but with support for multiple datatypes
variable "mytuple" {
    type = tuple([string, number, string])
    default = ["cat", 2, "dog"]
}

# An object is like a JSON object, and like a map but with support for multiple datatypes
variable "myobject" {
    type = object({ name = string, port = list(number) })
    default = {
        name = "portlist"
        port = [22, 25, 80]
    }
}

## Use strings/numbers/boolean
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
      Name = var.vpcname
  }
}

## Use lists
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
      Name = var.mylist[0]
  }
}

## Use maps
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
      Name = var.mymap["Key1"]
  }
}

## Input variables - Terraform will prompt for the value during terraform plan or apply
variable "inputname" {
    type = string
    description = "Provide the name of the VPC: "
}

# Use input variable
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
      Name = var.inputname
  }
}

# Output
output "vpcid" {
    value = aws_vpc.myvpc.id
}

Terraform on AWS

Config file for 2 EC2 Instances

Assignment:

  • Create an EC2 instance and output the private IP
  • Create an EC2 web server and output the public IP
  • Create a security group for the web server opening ports 80 and 443
  • Run a script on the web server

First the script to run, this needs to be in the same directory as the config file:

#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
echo "<h1>Hello from Terraform</h1>" | sudo tee /var/www/html/index.html

And now the config file:

provider "aws" {
    profile = "terraform"
}
 
variable "ingressrules" {
    type = list(number)
    default = [80,443]
}
 
variable "egressrules" {
    type = list(number)
    default = [80,443]
}
 
resource "aws_instance" "db" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
    tags = {
        Name = "DBServer"
        Terraform = "True"
    }
}
 
resource "aws_instance" "web" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
    security_groups = [aws_security_group.webtraffic.name]
    user_data = file("server-script.sh")
    tags = {
        Name = "WebServer"
        Terraform = "True"
    }
}
 
resource "aws_eip" "elasticeip" {
    instance = aws_instance.web.id
}
 
resource "aws_security_group" "webtraffic" {
    name = "Allow Web Traffic"
 
    dynamic "ingress" {
        iterator = port
        for_each = var.ingressrules
        content {
            from_port = port.value
            to_port = port.value
            protocol = "TCP"
            cidr_blocks = ["0.0.0.0/0"]
        }
    }
 
    dynamic "egress" {
        iterator = port
        for_each = var.egressrules
        content {
            from_port = port.value
            to_port = port.value
            protocol = "TCP"
            cidr_blocks = ["0.0.0.0/0"]
        }
    }
}
 
output "webip" {
    value = aws_eip.elasticeip.public_ip
}
 
output "dbip" {
    value = aws_instance.db.private_ip
}

Terraform Modules

Modules within Terraform enable you to reuse code. The hard part is input and output. In the very small example below, the module ec2 gets its EC2 name from the module block in main.tf, while the output is passed through from the module's ec2.tf to the main.tf file:

main.tf:

provider "aws" {
  profile = "terraform"
}
module "ec2module" {
  source = "./ec2"
  ec2name = "ec2WithModuleName"
}
output "module_output" {
    value = module.ec2module.instance_id
}

ec2.tf

variable "ec2name" {
  type = string
}
resource "aws_instance" "ec2" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
    tags = {
      "Name" = var.ec2name
    }
}
output "instance_id" {
    value = aws_instance.ec2.id
}
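For reference, this example assumes the following directory layout; the module source "./ec2" points at a directory holding the module's own files:

```
.
├── main.tf
└── ec2/
    └── ec2.tf
```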

Terraform and IAM

When working with policies it's best to create the JSON policy file using the console. Go to IAM, go to Policies, create a policy, configure it, and when you've added all the permissions you need, go to the JSON tab. There you find all the permissions you just configured, and you can easily copy them to use in Terraform.

We'll use this one as an example:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*"
        }
    ]
}

main.tf:

provider "aws" {
    profile = "terraform"
}
resource "aws_iam_user" "myUser" {
    name = "Sjoerd"
}
resource "aws_iam_policy" "customPolicy" {
    name = "EC2AllOfIt"
    policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*"
        }
    ]
}
EOF
}
resource "aws_iam_policy_attachment" "policyBind" {
    name = "attachment"
    users = [aws_iam_user.myUser.name]
    policy_arn = aws_iam_policy.customPolicy.arn
}

Terraform Advanced Options

Terraform Dependencies

You can make one resource dependent on another resource to make sure resources are created and started in the right order:

provider "aws" {
    profile = "terraform"
}
resource "aws_instance" "db" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
}
resource "aws_instance" "web" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
    depends_on = [aws_instance.db]
}

Terraform Count

Use count to create multiple resources with the same configuration:

provider "aws" {
    profile = "terraform"
}
 
resource "aws_instance" "db" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
 
    count = 3
 
    tags = {
        Name = "Server ${count.index}"
    }
}

Note that you could use the length function to take the number of instances and their names from a variable:

provider "aws" {
    profile = "terraform"
}
 
variable "server_names" {
    type = list(string)
    default = ["mariadb","mysql","mssql"]
}
 
resource "aws_instance" "db" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
 
    count = length(var.server_names)
 
    tags = {
        Name = var.server_names[count.index]
    }
}

Terraform Variable Files

You can use variable files, for example when using Terraform to deploy production and test environments and you don't want to create a separate Terraform file per environment:

prod.tfvars

number_of_servers = 4

test.tfvars

number_of_servers = 2

main.tf

provider "aws" {
    profile = "terraform"
}
 
variable "number_of_servers" {
    type = number
}
resource "aws_instance" "db" {
    ami = "ami-0d1bf5b68307103c2"
    instance_type = "t2.micro"
 
    count = var.number_of_servers
}

Now, when you run terraform plan or apply, use the following option to point at the vars file:

terraform plan -var-file=test.tfvars
terraform apply -var-file=test.tfvars

Terraform Import

You can bring existing resources in AWS under the control of Terraform.

Let's say you have an existing VPC with the id of vpc-xxx123ccc123cccc which has a cidr block of 192.168.0.0/24. You can import this using the following main.tf:

resource "aws_vpc" "myvpc" {
  cidr_block = "192.168.0.0/24"
}

And run the following command:

terraform import aws_vpc.myvpc vpc-xxx123ccc123cccc
Note that this is quite limited because all the settings need to match, so if you've configured lots of options for your resources this might prove too much of a hassle.
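After an import, a good sanity check is running terraform plan again; if the configuration really matches the imported resource, the plan should report no changes:

```shell
terraform import aws_vpc.myvpc vpc-xxx123ccc123cccc
terraform plan   # expect "No changes." when config and resource match
```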

Terraform Data

You can use a data source to query the state file or, if the information is not in there, to query (in this case) AWS for information about the resources you have. Adding this snippet to your main.tf file, for example, queries for the EC2 instance that has a specific tag:

data "aws_instance" "dbsearch" {
    filter {
        name = "tag:Name"
        values = ["DB Server"]
    }
}
 
output "dbservers" {
    value = data.aws_instance.dbsearch.id
}
 
output "dbserveraz" {
    value = data.aws_instance.dbsearch.availability_zone
}
 
output "dbserversandaz" {
    value = [data.aws_instance.dbsearch.id, data.aws_instance.dbsearch.availability_zone]
}
Note that values is a list, so you can search for multiple values at once.

Note that you can reuse the search results for multiple properties of the AWS instance. The AWS provider documentation lists all the attributes available on aws_instance.

Terraform on Azure

Azure Cloud Shell

To work with Terraform on Azure it's very convenient to use the Cloud Shell. Terraform is already installed, so you can simply upload your config file and run terraform commands:

Config file for Resource Group and Storage Account:

variable "storage_account_name" {
    type=string
    default="storageaz400terraform"
}
 
variable "resource_group_name" {
    type=string
    default="rg_az400_terraform"
}
 
provider "azurerm"{
    subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    tenant_id       = "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"
    features {}
}
 
resource "azurerm_resource_group" "grp" {
  name     = var.resource_group_name
  location = "West Europe"
}
 
resource "azurerm_storage_account" "store" {
  name                     = var.storage_account_name
  resource_group_name      = azurerm_resource_group.grp.name
  location                 = azurerm_resource_group.grp.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

Now in the Azure Cloud Shell, check the version, init, plan, apply and destroy:

# Check the terraform version:
PS /home/sjoerd> terraform --version
Terraform v1.0.3
on linux_amd64
 
PS /home/sjoerd> terraform init
 
Initializing the backend...
 
Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "2.0.0"...
- Installing hashicorp/azurerm v2.0.0...
- Installed hashicorp/azurerm v2.0.0 (signed by HashiCorp)
 
...<cut>...
 
Terraform has been successfully initialized!
 
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
 
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
 
PS /home/sjoerd> terraform plan -out storage.tfplan
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # azurerm_resource_group.grp will be created
  + resource "azurerm_resource_group" "grp" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "rg_az400_terraform"
    }
 
  # azurerm_storage_account.store will be created
 ...<cut>...
 
Plan: 2 to add, 0 to change, 0 to destroy.
 
 
Saved the plan to: storage.tfplan
 
To perform exactly these actions, run the following command to apply:
    terraform apply "storage.tfplan"
 
 
PS /home/sjoerd> terraform apply "storage.tfplan"
azurerm_resource_group.grp: Creating...
azurerm_resource_group.grp: Creation complete after 1s [id=/subscriptions/3e4edfc4-22e1-4c8f-8b6b-9b30045a9d48/resourceGroups/rg_az400_terraform]
azurerm_storage_account.store: Creating...
azurerm_storage_account.store: Still creating... [10s elapsed]
azurerm_storage_account.store: Still creating... [20s elapsed]
azurerm_storage_account.store: Creation complete after 21s [id=/subscriptions/3e4edfc4-22e1-4c8f-8b6b-9b30045a9d48/resourceGroups/rg_az400_terraform/providers/Microsoft.Storage/storageAccounts/storageaz400terraform]
 
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
 
PS /home/sjoerd> terraform plan -destroy -out vm.tfplan 
 
PS /home/sjoerd> terraform apply "vm.tfplan"

Terraform in Azure DevOps

This part is about running terraform commands using the hosted agents in Azure DevOps. Although this sounds quite similar, there is a big difference: everything running on the agents gets removed afterwards, including the tfstate file, and as you'll remember, the tfstate file is very important as it keeps track of the infrastructure deployed through Terraform. This means you need to configure a backend for your tfstate, so that the tfstate file is not hosted locally on the hosted agents but on a remote backend.

The best way to configure this is in the Terraform configuration file itself, but some extensions choose a different approach. The extension provided by Microsoft DevLabs configures the backend in the extension, but as Terraform evolved this no longer works, and you need to add a remote backend configuration to your Terraform file anyway. I'll be using the exact same config file as shown above, but now with a remote backend block added:

terraform {
  backend "azurerm" {
    resource_group_name  = "rg_terradevops"
    storage_account_name = "shiftterrastatefile"
    container_name       = "tfstate"
    key                  = "storage.tfstate"
  }
}
 
variable "storage_account_name" {
    type=string
    default="storageaz400terraform"
}
 
variable "resource_group_name" {
    type=string
    default="rg_az400_terraform"
}
 
provider "azurerm"{
    subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    tenant_id       = "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"
    features {}
}
 
resource "azurerm_resource_group" "grp" {
  name     = var.resource_group_name
  location = "West Europe"
}
 
resource "azurerm_storage_account" "store" {
  name                     = var.storage_account_name
  resource_group_name      = azurerm_resource_group.grp.name
  location                 = azurerm_resource_group.grp.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

Now follow these steps to deploy the resource group and storage account using Terraform in Azure DevOps:

  • Prepare your Azure subscription first. For this I'm assuming you've already connected Azure DevOps and your Azure subscription with each other, but you'll also need a container to host your tfstate file as a remote backend. So create a resource group, a storage account and a blob container.
    • Resource group: rg_terradevops
    • Storage account: shiftterrastatefile
    • Container: tfstate
  • In Azure DevOps, go to the marketplace icon in your terraform project and select “Browse Marketplace”
  • Search for and install the terraform extension created by Microsoft Devlabs
    • Note that the extension does not get very good reviews and there might be better options in the market for you, but for simplicity's sake we'll keep to the Microsoft-provided extension. It would probably be unwise to use this extension in a production environment
  • Create an (empty) classic pipeline in the same project where you've stored the Terraform files
  • Add the task “Install Terraform” to the pipeline. Note that by default version 0.12.3 is installed. If you run the pipeline now you can check the output from the task and edit the version field of the task to the latest version
  • Add the task “Terraform” to the pipeline. The first task needs to be configured as init
    • Display name: terraform init
    • Provider: azurerm
    • Command: init
    • Configuration directory: browse to the folder in your repository that holds the .tf file
    • Azure subscription, resource group, storage account, container and key: Set up as configured in the Azure portal. Note that “key” is a bit confusing: it is the name of the tfstate file, optionally in a folder.
  • Add the task “Terraform” to the pipeline. The second task needs to be configured as plan
    • Display name: terraform plan
    • Provider: azurerm
    • Command: plan
    • Configuration directory: browse to the folder in your repository that holds the .tf file
    • Azure subscription: Set up as configured in the Azure portal.
  • Add the task “Terraform” to the pipeline. The third task needs to be configured as apply and validate
    • Display name: terraform apply and validate
    • Provider: azurerm
    • Command: validate and apply
    • Configuration directory: browse to the folder in your repository that holds the .tf file
    • Azure subscription: Set up as configured in the Azure portal.

If you run the pipeline and check the blob container in the azure portal afterwards you'll notice a storage.tfstate file in the container. You can even check the contents using “edit”.

YAML Pipeline

This would be the pipeline if you'd set it up in yaml:

pool:
  name: Azure Pipelines
steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
  displayName: 'Install Terraform 1.0.8'
  inputs:
    terraformVersion: 1.0.8

- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform Init'
  inputs:
    workingDirectory: terratest
    backendServiceArm: 'GetShifting Azure subscription (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'
    backendAzureRmResourceGroupName: 'rg_terradevops'
    backendAzureRmStorageAccountName: shiftterrastatefile
    backendAzureRmContainerName: tfstate
    backendAzureRmKey: storage.tfstate

- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform Plan'
  inputs:
    command: plan
    workingDirectory: terratest
    environmentServiceNameAzureRM: 'GetShifting Azure subscription (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'

- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform Validate and Apply'
  inputs:
    command: apply
    workingDirectory: terratest
    environmentServiceNameAzureRM: 'GetShifting Azure subscription (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'

terraform.txt · Last modified: 2021/10/12 12:42 by sjoerd