Infrastructure as Code: Getting Started with Terraform in Azure DevOps
Tue Feb 10 2026
Clicking around the Azure Portal is fine for learning, but it is a disaster for production. Manual changes are untraceable, error-prone, and unrepeatable.
Infrastructure as Code (IaC) solves this, and Terraform is the de facto standard tool for it.
In this guide, we will build a robust Azure DevOps pipeline that automates your infrastructure deployments with Terraform, covering best practices for security and state management.
The Prerequisites
Before writing YAML, we need three things in Azure:
- Service Principal: An identity for your pipeline to talk to Azure.
- Storage Account: To store the terraform.tfstate file. (Never check your state file into Git!)
- Azure Service Connection: The link between Azure DevOps and your Azure subscription.
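These prerequisites can be bootstrapped from the Azure CLI. A minimal sketch, assuming the resource names used later in this article (tf-state-rg, tffiles, tfstate) and a placeholder subscription ID you must fill in yourself:

```shell
# Resource group and storage account to hold the Terraform state
az group create --name tf-state-rg --location eastus
az storage account create \
  --name tffiles \
  --resource-group tf-state-rg \
  --sku Standard_LRS
az storage container create \
  --name tfstate \
  --account-name tffiles

# Service principal the pipeline will authenticate as
# (the name tf-pipeline-sp is a placeholder)
az ad sp create-for-rbac \
  --name tf-pipeline-sp \
  --role Contributor \
  --scopes /subscriptions/<SUBSCRIPTION_ID>
```

Use the appId/password printed by the last command when you create the Azure service connection in Azure DevOps.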
The Terraform Workflow: Plan vs. Apply
A naive pipeline simply runs terraform apply -auto-approve. Do not do this.
A production pipeline must separate the Planning phase (seeing what will happen) from the Applying phase (making it happen).
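Locally, that separation looks like this; the pipeline below mirrors the same three-step CLI workflow:

```shell
terraform plan -out=tfplan   # record exactly what would change
terraform show tfplan        # human review of the saved plan
terraform apply tfplan       # apply only what was reviewed, nothing else
```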
Step 1: The Terraform Configuration
Imagine a simple main.tf that creates a Resource Group:
provider "azurerm" {
  features {}
}

terraform {
  backend "azurerm" {
    resource_group_name  = "tf-state-rg"
    storage_account_name = "tffiles"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

resource "azurerm_resource_group" "rg" {
  name     = "my-app-rg"
  location = "East US"
}
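Before wiring this into a pipeline, it is worth sanity-checking the configuration locally (or as an early pipeline step). Both commands below are standard Terraform subcommands:

```shell
terraform fmt -check -recursive  # fail if files deviate from canonical formatting
terraform validate               # catch syntax and type errors before planning
```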
Step 2: The Pipeline Structure
We will use a Multi-Stage Pipeline to enforce an approval gate.
# azure-pipelines.yml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  serviceConnection: 'Azure-Prod-Connection'
  resourceGroup: 'tf-state-rg'
  storageAccount: 'tffiles'
  container: 'tfstate'
  key: 'prod.terraform.tfstate'

stages:
  - stage: TerraformPlan
    displayName: 'Plan Infrastructure'
    jobs:
      - job: Plan
        steps:
          - task: TerraformInstaller@0
            inputs:
              terraformVersion: 'latest'
          - task: TerraformTaskV4@4
            displayName: 'Terraform Init'
            inputs:
              provider: 'azurerm'
              command: 'init'
              backendServiceArm: '$(serviceConnection)'
              backendAzureRmResourceGroupName: '$(resourceGroup)'
              backendAzureRmStorageAccountName: '$(storageAccount)'
              backendAzureRmContainerName: '$(container)'
              backendAzureRmKey: '$(key)'
          - task: TerraformTaskV4@4
            displayName: 'Terraform Plan'
            inputs:
              provider: 'azurerm'
              command: 'plan'
              commandOptions: '-out=tfplan'
              environmentServiceNameAzureRM: '$(serviceConnection)'
          - task: PublishPipelineArtifact@1
            displayName: 'Publish Plan Artifact'
            inputs:
              targetPath: '$(System.DefaultWorkingDirectory)/tfplan'
              artifact: 'tfplan'

  - stage: TerraformApply
    displayName: 'Apply Infrastructure'
    dependsOn: TerraformPlan
    condition: succeeded()
    jobs:
      - deployment: Apply
        environment: 'production-infra'  # Links to an ADO Environment for approvals
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - task: DownloadPipelineArtifact@2
                  inputs:
                    artifact: 'tfplan'
                    path: '$(System.DefaultWorkingDirectory)'
                - task: TerraformInstaller@0
                  inputs:
                    terraformVersion: 'latest'
                - task: TerraformTaskV4@4
                  displayName: 'Terraform Init'
                  inputs:
                    provider: 'azurerm'
                    command: 'init'
                    backendServiceArm: '$(serviceConnection)'
                    # ... (Same backend config as above)
                - task: TerraformTaskV4@4
                  displayName: 'Terraform Apply'
                  inputs:
                    provider: 'azurerm'
                    command: 'apply'
                    commandOptions: 'tfplan'  # Apply the specific plan we generated
                    environmentServiceNameAzureRM: '$(serviceConnection)'
Critical Security: Handling Secrets
Your infrastructure often needs secrets (e.g., a database password). Never hardcode these in .tf files.
- Use Azure Key Vault: Store the secret in Key Vault.
- Variable Group: Link an Azure DevOps Variable Group to that Key Vault.
- Pipeline Mapping: Pass the variable into Terraform as an environment variable or a -var argument.
# In the pipeline
variables:
  - group: 'prod-secrets'  # Contains 'dbPassword' from Key Vault

steps:
  - task: TerraformTaskV4@4
    inputs:
      command: 'plan'
      commandOptions: '-var="db_password=$(dbPassword)"'
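On the Terraform side, the matching variable should be declared as sensitive so its value is redacted from plan and apply output. A minimal sketch, assuming the variable name db_password used in the -var flag above:

```hcl
variable "db_password" {
  type      = string
  sensitive = true  # redact the value in CLI output
}
```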
Handling State Locking
One of the biggest risks in a team environment is two pipelines running at the same time and corrupting the state file.
By using Azure Storage as a backend (as shown above), Terraform automatically supports State Locking.
- If Pipeline A is running terraform apply, it acquires a "lease" on the state file blob.
- If Pipeline B tries to run, it will fail immediately with a "state locked" error.
- This prevents race conditions and corruption. Always check your pipeline logs to ensure locks are being acquired and released.
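If a pipeline run is cancelled mid-apply, the lease can occasionally be left behind. Terraform's built-in force-unlock command releases it; the lock ID appears in the error message. Run this deliberately and by hand, never as an automated pipeline step:

```shell
# <LOCK_ID> comes from the "Error acquiring the state lock" message
terraform force-unlock <LOCK_ID>
```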
Drift Detection: The Self-Healing Infrastructure
What happens if someone manually changes a firewall rule in the Azure Portal? Your code is no longer the source of truth. This is called Drift.
- Best Practice: Run your Terraform Plan stage on a schedule (cron trigger) every night.
- If the plan shows changes ("Plan: 1 to add, 1 to change"), you know Drift has occurred.
- You can then investigate who made the manual changes and revert them.
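In Azure DevOps, the nightly run is a scheduled trigger added to the same pipeline YAML. A minimal sketch (the cron expression is in UTC; the schedule and branch names are examples):

```yaml
schedules:
  - cron: '0 2 * * *'        # every night at 02:00 UTC
    displayName: 'Nightly drift detection'
    branches:
      include:
        - main
    always: true             # run even if there are no new commits
```

With always: true, the plan runs nightly regardless of repository activity, which is exactly what drift detection requires.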
Summary
By structuring your Terraform pipeline with Plan and Apply stages, managing state remotely in Azure Storage, and protecting deployments with Approval Gates, you create a safe, auditable environment for infrastructure changes.
This setup prevents the dreaded "it worked on my machine" issues and keeps your cloud footprint aligned with your code.
Go Beyond “Hello World”
Infrastructure as Code is complex. Handling state files, secret rotation, and drift detection requires hands-on practice.
In our Azure DevOps Masterclass, Module 4 is dedicated entirely to Terraform & IaC. You will:
- Write modular Terraform code to deploy complete Azure environments (network, compute, storage).
- Implement security scanning (Checkov/Trivy) directly in your Terraform pipeline.
- Learn to use Gen AI to generate Terraform configurations from plain English requirements.
Stop guessing and start automating. Join the next weekend batch.