Chapter 5: Creating a basic Azure infrastructure with Terraform
In the previous chapters, we learned how to interact with Azure using Terraform. Now, it's time to put that knowledge into practice with a hands-on example. We will create an Azure infrastructure with the following components:
A resource group
An Azure App Service Plan
Azure Linux Web Apps
Creating Infrastructure with a Single main.tf File
In Chapter 1, we discussed the Terraform ground rules to follow when working with Terraform. Let's create a team agreement that reflects these practices. The agreement below is intentionally simplified and does not yet follow all of the best practices; it is the simplest agreement for us to start with, and we will enhance and apply the ground rules as we progress through this chapter.
Directory structure
single main.tf file
Terraform Resource Naming Convention
Resource Group: basic-infra
Service Plan: basic-infra
Linux Web App: basic-app
State file properties
Use local state file with default name
Terraform output naming convention
Resource Group Name: rg_name
Service Plan Name: sp_name
Linux Web App Name: webapp_name
Linux Web App URL: webapp_url
Lifecycle of resources
Do not use lifecycle
The directory structure is intentionally designed to build the infrastructure in only one environment (development). It will be enhanced later in this chapter.
Let's follow the core workflow we worked on in Chapter 4 and create our infrastructure. The first step is to handle the authentication between Azure and Terraform. We will use a Service Principal that has enough privileges to create infrastructure under our subscription. For now, we will intentionally provide the secret values in the main.tf file and we will refactor our code to make it more enterprise-grade as we go along in this chapter.
To begin, let's create a directory called basic-infra-single-file-local-state and navigate to that directory.
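For example, from a shell:

```shell
mkdir basic-infra-single-file-local-state
cd basic-infra-single-file-local-state
```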
First, we need to let Terraform know that we will be working with Azure by using the terraform and provider blocks. We need to provide the name of the provider, the source to download it from, and the version we want to use.
Under the terraform block, we tell Terraform that we want to interact with an Azure infrastructure by using the azurerm provider and also specify where to download it and what version to use. Then, in the provider block, we let Terraform use the subscription authentication details.
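A minimal sketch of these blocks might look like the following; the provider version and the subscription ID are placeholder assumptions, and the Service Principal credentials can be set here or supplied through environment variables, as we do later in this chapter:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # Assumption: pin to whichever provider version you have tested.
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}

  # Placeholder value; replace it with your own subscription ID.
  subscription_id = "00000000-0000-0000-0000-000000000000"
}
```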
It's time to start writing the actual Terraform code to create the infrastructure. Your starting point should be the official documentation to learn how to consume Terraform resources and data types. We will refer to the following documentation:
Azure Service Plan
Azure Linux Web App
Below is how we can create the infrastructure with only the required parameters:
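A sketch with only the required arguments could look like this; the resource names follow our team agreement, while the region and SKU values are assumptions, and the web app name must be globally unique in Azure:

```hcl
resource "azurerm_resource_group" "basic-infra" {
  name     = "basic-infra"
  location = "westeurope" # assumption: pick the region closest to you
}

resource "azurerm_service_plan" "basic-infra" {
  name                = "basic-infra"
  resource_group_name = azurerm_resource_group.basic-infra.name
  location            = azurerm_resource_group.basic-infra.location
  os_type             = "Linux"
  sku_name            = "B1" # assumption: a small SKU is enough for this example
}

resource "azurerm_linux_web_app" "basic-app" {
  name                = "basic-app" # must be globally unique across Azure
  resource_group_name = azurerm_resource_group.basic-infra.name
  location            = azurerm_service_plan.basic-infra.location
  service_plan_id     = azurerm_service_plan.basic-infra.id

  site_config {}
}
```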
Now, let's go over the details. In Terraform, we create resources by defining a resource block in the code. Each resource has its own arguments, which can be found in the azurerm resources documentation.
After the resources have been created, we can display output for a specific resource by defining an output block.
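Following the output naming convention from our team agreement, the output blocks could be written like this:

```hcl
output "rg_name" {
  value = azurerm_resource_group.basic-infra.name
}

output "sp_name" {
  value = azurerm_service_plan.basic-infra.name
}

output "webapp_name" {
  value = azurerm_linux_web_app.basic-app.name
}

output "webapp_url" {
  value = "https://${azurerm_linux_web_app.basic-app.default_hostname}"
}
```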
A question we need to consider is how Terraform understands the dependencies between these resources and ensures that it does not try to create the Azure Service Plan before the resource group, which would cause an error. Because the service plan's resource_group_name and location arguments reference attributes of azurerm_resource_group.basic-infra, Terraform derives the dependency implicitly, without us stating it. We could also use the depends_on meta-argument and provide a list of resources to declare a dependency explicitly, as sketched below.
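For illustration only, an explicit dependency on the resource group could be declared like this; it is redundant here because the attribute references already imply it:

```hcl
resource "azurerm_service_plan" "basic-infra" {
  # ... same arguments as before ...

  # Explicit dependency; redundant when attributes of the
  # resource group are already referenced in the arguments.
  depends_on = [azurerm_resource_group.basic-infra]
}
```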
Please note that we have set the location of the azurerm_service_plan resource to azurerm_resource_group.basic-infra.location. If the referenced resource is created in the same Terraform configuration, you can reference it using the syntax <azurerm_resource_type>.<resource_name>.<resource_attribute>. But how do we know which attributes are exported per resource? The azurerm provider documentation is organized into Resources and Data Sources sections, and each resource page lists the complete set of exported attributes together with example usage.
If we would like to refer to an existing resource in our code, we can create a data source and export its attributes. You can create data sources by defining a data block using the construct data "<azure_resource_data_type>" "name" {}, and then in your Terraform code, you can consume it as data.<azure_resource_data_type>.<name>.<resource_data_attribute>.
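As a hypothetical example, an existing resource group (created outside this configuration) could be looked up and consumed like this:

```hcl
data "azurerm_resource_group" "existing" {
  # Assumption: a resource group with this name already exists in the subscription.
  name = "some-existing-rg"
}

resource "azurerm_service_plan" "example" {
  name                = "example-plan"
  resource_group_name = data.azurerm_resource_group.existing.name
  location            = data.azurerm_resource_group.existing.location
  os_type             = "Linux"
  sku_name            = "B1"
}
```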
Now it is time to continue with the next steps of the workflow: initializing, validating, planning, and applying our code.
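The first step is to initialize the working directory:

```shell
terraform init
```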
Terraform has downloaded the required plugin(s) and stored them in the ".terraform" directory within the current working directory.
Now let's validate our code with the terraform validate command.
Since the validation passed, it is time to run the plan:
It looks like we forgot something here. We have inputted everything for Terraform to create the resources, except the authentication. We have not provided the environment variables for Terraform to use to authenticate with Azure. In Chapter 1, we mentioned how to do that and we covered having a shell alias for the task:
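A snippet along these lines exports the Service Principal credentials that the azurerm provider reads from the environment; the values shown are placeholders:

```shell
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<service-principal-secret>"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
```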
Now, we have exported the required environment variables for Terraform to authenticate with our Azure subscription.
We can review the plan and apply it if we are satisfied with the result:
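The commands themselves are straightforward:

```shell
terraform plan
terraform apply
```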
Once we apply the code, Terraform will create a local state file in the current working directory and compare the desired state described in the code with the state recorded in that file. If we run the same code again now, Terraform will not detect any changes and will report that there is nothing to do.
If you decide to change the name of a resource in Terraform (not the Azure resource name), Terraform will compare your code with the state file and decide to remove the Azure resource with the old Terraform resource name and create a new Azure resource with the new Terraform resource name. This is why it is important to follow a Terraform resource naming convention in your team.
For example, let's change the resource name of the azurerm_linux_web_app resource from basic-app to basic-app2:
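Only the Terraform resource address changes; the Azure resource name stays the same (sketch):

```hcl
resource "azurerm_linux_web_app" "basic-app2" {
  name                = "basic-app"
  resource_group_name = azurerm_resource_group.basic-infra.name
  location            = azurerm_service_plan.basic-infra.location
  service_plan_id     = azurerm_service_plan.basic-infra.id

  site_config {}
}
```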
When we run the plan again, Terraform wants to destroy the resource and recreate it with the same Azure resource name, because the Terraform resource address has changed.
The above example demonstrates how to start using Terraform with a single file, but it does not allow us to utilize many of Terraform's useful features. Now, we will refactor the code to make it more suitable for use in an enterprise setting. Here is a list of improvements we can make:
State file: The state file should always be stored remotely in a secure location.
Divide and conquer: Divide the main.tf file into multiple logical sections to make it easier to read, especially for more extensive code.
Directory structure: The directory structure should allow you to use the same Terraform skeleton for different environments.
Integrating Remote State Files into Terraform Workflows
If you are working with a team, the remote state file can be used to share the state of your infrastructure and configuration with your team members. This ensures that everyone is working with the same set of resources and helps to coordinate changes properly. The state file can contain passwords and sensitive infrastructure details about your environment, so keeping it in a remote location helps to protect this information, provided access to that remote endpoint is properly secured.
Let's now integrate our code with an existing Azure storage account.
In order to use a remote state file, we need to define a backend block under the terraform block in our main.tf file, as shown below:
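A sketch of the backend configuration, assuming the resource group, storage account, and container below already exist (all names are placeholders):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorageaccount"
    container_name       = "tfstate"
    key                  = "basic-infra.tfstate"
  }
}
```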
Now, we need to reinitialize our Terraform code so that it uses the remote backend for state storage instead of the local state file. If there is already a local state file, Terraform will detect it and offer to copy it to the remote backend.
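The -migrate-state option makes this copy explicit:

```shell
terraform init -migrate-state
```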
From now on, Terraform will use the remote backend for the state file, as it is the configured backend in our main.tf file. To avoid confusion, you may want to delete the local state file and its backup.
Dividing the Infrastructure into Multiple Terraform Files
Dividing your code into multiple files can make it easier to read and understand, particularly for larger configurations. By organizing your code into logical sections and giving each file a descriptive name, you can more effectively convey the purpose of the code and improve its readability.
Let's create a structure for our code and divide the main.tf into multiple Terraform files:
main.tf: Define infrastructure resources
variables.tf: Define variables
outputs.tf: Define output values
backend.tf: Define backend configuration
versions.tf: Specify providers and plugins with versions
terraform.tfvars: Specify values for the variables
The names of the files are self-explanatory. Let's move all output blocks to outputs.tf, and backend details to backend.tf, as shown below:
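A sketch of the two files, reusing the blocks from earlier in this chapter:

```hcl
# outputs.tf
output "rg_name" {
  value = azurerm_resource_group.basic-infra.name
}

output "sp_name" {
  value = azurerm_service_plan.basic-infra.name
}

output "webapp_name" {
  value = azurerm_linux_web_app.basic-app.name
}

output "webapp_url" {
  value = "https://${azurerm_linux_web_app.basic-app.default_hostname}"
}
```

```hcl
# backend.tf
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorageaccount"
    container_name       = "tfstate"
    key                  = "basic-infra.tfstate"
  }
}
```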
We will also move the required provider and its version details into the versions.tf file:
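For example (the minimum required Terraform version is an assumption):

```hcl
# versions.tf
terraform {
  required_version = ">= 1.3" # assumption: any recent 1.x release works for this example

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}
```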
If you review our initial main.tf file, you will notice that we have not used any variables. We have hardcoded every value in the code. To make our code more modular and easier to maintain, let's define all possible variables in the variables.tf file and reference them in main.tf, as shown below:
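A possible variables.tf; the variable names, types, and defaults below are assumptions for this sketch:

```hcl
# variables.tf
variable "location" {
  type        = string
  description = "Azure region for all resources"
  default     = "westeurope"
}

variable "rg_name" {
  type        = string
  description = "Name of the resource group"
  default     = "basic-infra"
}

variable "sp_name" {
  type        = string
  description = "Name of the App Service Plan"
  default     = "basic-infra"
}

variable "sp_sku_name" {
  type        = string
  description = "SKU of the App Service Plan"
  default     = "B1"
}

variable "webapp_name" {
  type        = string
  description = "Name of the Linux Web App"
  default     = "basic-app"
}
```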
In the variables.tf file, we define variables using the variable block, which can accept multiple arguments. We can set a default value for a variable in the variable block in the variables.tf file. If we do not explicitly set the value of the variable in a .tfvars file, it will use the default value defined in the variables.tf file.
Now we can input those variables in our main.tf file as below:
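A sketch of the refactored main.tf, using the hypothetical variable names defined above; the lifecycle precondition is included purely to illustrate the feature mentioned below:

```hcl
# main.tf
resource "azurerm_resource_group" "basic-infra" {
  name     = var.rg_name
  location = var.location
}

resource "azurerm_service_plan" "basic-infra" {
  name                = var.sp_name
  resource_group_name = azurerm_resource_group.basic-infra.name
  location            = azurerm_resource_group.basic-infra.location
  os_type             = "Linux"
  sku_name            = var.sp_sku_name
}

resource "azurerm_linux_web_app" "basic-app" {
  name                = var.webapp_name
  resource_group_name = azurerm_resource_group.basic-infra.name
  location            = azurerm_service_plan.basic-infra.location
  service_plan_id     = azurerm_service_plan.basic-infra.id

  site_config {}

  lifecycle {
    # Illustrative precondition: refuse to create the web app with an empty name.
    precondition {
      condition     = var.webapp_name != ""
      error_message = "webapp_name must not be empty."
    }
  }
}
```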
As you can see above, we can set the value of an argument by referencing variables using the var.<variable_name> format. We can also use the lifecycle block with a precondition to specify conditions that must be met before a resource is created; lifecycle is an important feature in Terraform.
The only remaining step is to specify the non-default values for the variables. By default, Terraform will look for a terraform.tfvars file in the working directory to determine the variable inputs, unless the file is explicitly specified when running the terraform binary.
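A terraform.tfvars along these lines supplies the variable values (placeholders):

```hcl
# terraform.tfvars
location    = "westeurope"
rg_name     = "basic-infra"
sp_name     = "basic-infra"
sp_sku_name = "B1"
webapp_name = "basic-app"
```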
Introducing a Terraform Directory Structure
We have already divided our Terraform code into multiple files, but we are using the default terraform.tfvars file and provisioning only a single environment (development). In a real-world scenario, we may have multiple environments such as DEV, PREPROD, or PROD. My practice is to keep the code fixed and use different variable files to ensure consistent infrastructure across environments. There are various approaches to organizing a Terraform directory structure, but here is the option I personally prefer: I keep all my code in a single directory and use a directory layout like the following in the root directory for variables:
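One possible layout, assuming only DEV and PROD environments for now:

```
.
├── main.tf
├── variables.tf
├── outputs.tf
├── backend.tf
├── versions.tf
└── environment
    ├── DEV
    │   ├── backend.hcl
    │   └── terraform.tfvars
    └── PROD
        ├── backend.hcl
        └── terraform.tfvars
```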
This structure allows me to use the same infrastructure code for all of my environments by using dedicated Terraform variables files and a dedicated remote Terraform backend. As an example, I (or my pipeline) can initialize the remote Terraform backend for the DEV environment using this structure:
terraform init -backend-config=environment/DEV/backend.hcl
and then run the plan command as:
terraform plan -var-file=environment/DEV/terraform.tfvars
We will cover the details of this setup in upcoming chapters, but for now, let's create a structure with only DEV and PROD environments. Let's start with the DEV environment and move the terraform.tfvars file to the environment/DEV path. We also need to create a backend.hcl file and modify the backend.tf file as follows:
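A sketch of the two files; the storage account details in backend.hcl are placeholders:

```hcl
# environment/DEV/backend.hcl
resource_group_name  = "tfstate-rg"
storage_account_name = "tfstatestoragedev"
container_name       = "tfstate"
key                  = "basic-infra-dev.tfstate"
```

```hcl
# backend.tf - now a partial configuration, completed at init time via -backend-config
terraform {
  backend "azurerm" {}
}
```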
Similarly, you can create new backend.hcl and terraform.tfvars files under the PROD directory with their dedicated values and remote backend configuration.
Remember to run terraform init -backend-config=environment/<environment>/backend.hcl [-reconfigure] each time you switch environments. If you forget to do this, your Terraform code may try to plan changes against the wrong environment, as it may still be pointing to the remote state file of a previous environment.
Let's apply the same code against the PROD environment, as shown below:
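Assuming the PROD files are in place, the sequence could look like this:

```shell
terraform init -backend-config=environment/PROD/backend.hcl -reconfigure
terraform plan -var-file=environment/PROD/terraform.tfvars
terraform apply -var-file=environment/PROD/terraform.tfvars
```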
The example provided above demonstrates how to create a Resource Group, a Service Plan, and a Linux Web App using only the mandatory arguments of the respective Terraform resources. This has been done intentionally to keep the example simple and focused on creating infrastructure with Terraform without getting into too much detail. In the upcoming chapters, we will cover more advanced examples.