Questions
After upgrading providers, terraform plan shows that resources will be destroyed and recreated. How do you prevent data loss?
The Scenario
You upgraded the AWS provider from 4.x to 5.x. Now terraform plan shows:
$ terraform plan

  # aws_db_instance.production must be replaced
-/+ resource "aws_db_instance" "production" {
      ~ id             = "prod-db" -> (known after apply)
      ~ endpoint       = "prod-db.xxx.us-east-1.rds.amazonaws.com" -> (known after apply)
      ~ engine_version = "14.7" -> "14.7" # No actual change!
        # (forces replacement)
      ...
    }

  # aws_s3_bucket.data must be replaced
-/+ resource "aws_s3_bucket" "data" {
      ~ id     = "company-data-bucket" -> (known after apply)
      ~ bucket = "company-data-bucket" -> "company-data-bucket"
        # (forces replacement due to acl argument removal)
    }

Plan: 2 to add, 0 to change, 2 to destroy.
Your production database and S3 bucket will be destroyed! The data loss would be catastrophic.
The Challenge
Understand why provider upgrades cause force-replacement, prevent data loss, and safely complete the upgrade.
A junior engineer might panic and revert the provider version, run apply and hope for the best, or manually delete the resources from state. Reverting only delays a necessary upgrade, applying would destroy production data, and manipulating state without understanding the root cause creates new problems.
A senior engineer reads the provider changelog for breaking changes, uses lifecycle prevent_destroy as a safety net, migrates deprecated arguments, uses moved blocks for renamed resources, and tests upgrades in non-prod first.
Step 1: Understand What Changed
# Check provider changelog
# AWS Provider 5.0 Upgrade Guide:
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-5-upgrade
# Common breaking changes in AWS 5.0:
# 1. S3 bucket ACL argument removed (use aws_s3_bucket_acl)
# 2. Default tags behavior changed
# 3. Some argument names changed
# 4. Resource attribute type changes
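It also helps to confirm exactly what you are running and which attribute is forcing each replacement. A quick inspection, assuming a recent Terraform version (the jq filter is illustrative and relies on the replace_paths field of the JSON plan format):

# Which provider versions are currently selected?
terraform version

# Which attributes force the replacement? Save the plan and inspect it as JSON
terraform plan -out=plan.tfplan
terraform show -json plan.tfplan \
  | jq '.resource_changes[]
        | select(.change.replace_paths != null)
        | {address, forced_by: .change.replace_paths}'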
Step 2: Add Safety Net First

# Add prevent_destroy BEFORE upgrading provider
resource "aws_db_instance" "production" {
identifier = "prod-db"
# ...
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket" "data" {
bucket = "company-data-bucket"
lifecycle {
prevent_destroy = true
}
}
# Now any plan or apply that would destroy these resources fails safely:
# Error: Instance cannot be destroyed
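prevent_destroy only guards the Terraform workflow. For the database, it is worth adding an AWS-side guard as well: aws_db_instance supports deletion_protection, which makes RDS itself reject delete calls. A sketch, with the other arguments unchanged:

resource "aws_db_instance" "production" {
  identifier = "prod-db"
  # ...

  # AWS-side guard: RDS rejects the delete API call while this is true
  deletion_protection = true

  lifecycle {
    # Terraform-side guard: plan/apply error instead of destroying
    prevent_destroy = true
  }
}

For the bucket, leaving force_destroy at its default of false means Terraform cannot delete it while it still contains objects.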
Step 3: Fix S3 Bucket Breaking Changes (AWS 5.0)

# BEFORE (AWS Provider 4.x)
resource "aws_s3_bucket" "data" {
bucket = "company-data-bucket"
acl = "private" # Removed in 5.0!
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
# AFTER (AWS Provider 5.x)
resource "aws_s3_bucket" "data" {
bucket = "company-data-bucket"
}
# ACL is now separate resource
resource "aws_s3_bucket_acl" "data" {
bucket = aws_s3_bucket.data.id
acl = "private"
}
# Versioning is now separate
resource "aws_s3_bucket_versioning" "data" {
bucket = aws_s3_bucket.data.id
versioning_configuration {
status = "Enabled"
}
}
# Encryption is now separate
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
bucket = aws_s3_bucket.data.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
Step 4: Import New Resources
# The new separate resources need to be imported
terraform import aws_s3_bucket_acl.data company-data-bucket,private
terraform import aws_s3_bucket_versioning.data company-data-bucket
terraform import aws_s3_bucket_server_side_encryption_configuration.data company-data-bucket
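On Terraform 1.5+, the same imports can instead be written as import blocks in configuration, so they appear in plan output and go through code review; a sketch for the versioning resource:

# Declarative alternative to the CLI import (Terraform 1.5+)
import {
  to = aws_s3_bucket_versioning.data
  id = "company-data-bucket"
}

terraform plan then reports the pending import, terraform apply performs it, and the block can be removed afterwards.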
# Verify no changes
terraform plan
# Should show: No changes.

Step 5: Handle Attribute Type Changes
# Sometimes attribute types change, causing false replacements
# Example: tags changed from map to object
# Check state for current value
terraform state show aws_db_instance.production
# If state has old format, refresh to update
terraform apply -refresh-only -target=aws_db_instance.production
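If a refresh does not clear a cosmetic diff, for example when the provider now normalizes a value differently, ignore_changes can suppress it. This is a blunt instrument, so scope it narrowly and remove it once the state settles; a sketch assuming the spurious diff is on tags:

resource "aws_db_instance" "production" {
  # ...

  lifecycle {
    prevent_destroy = true
    # Suppress the spurious diff on an attribute the provider now
    # normalizes differently; remove once the state has settled
    ignore_changes = [tags]
  }
}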
Step 6: Use Moved Blocks for Renames

# If a resource type or name changed
moved {
  from = aws_s3_bucket_object.config
  to   = aws_s3_object.config # Renamed in AWS provider 4.0
}
# Terraform will update state without recreating the object
# (a cross-type move like this requires Terraform 1.8+ with provider support)
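On older Terraform versions, moved blocks cannot change the resource type (and terraform state mv refuses type changes as well). The fallback is to re-home the object in state manually; the bucket/key in the import ID below is hypothetical:

# Remove the old address from state (does not touch the real object)
terraform state rm aws_s3_bucket_object.config
# Re-import it under the new resource type (ID format: bucket-name/key)
terraform import aws_s3_object.config company-data-bucket/config.yaml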
Upgrade Testing Strategy

# versions.tf - Pin provider versions
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # Allows 5.x but not 6.0
    }
  }
}
# Step-by-step upgrade process:
# 1. Test in dev with new provider
# 2. Review plan output carefully
# 3. Fix breaking changes
# 4. Apply to dev
# 5. Repeat for staging
# 6. Finally upgrade prod
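One way to stage that rollout, assuming each environment has its own root module (the envs/ layout here is hypothetical), is to loosen the constraint in dev first and leave prod pinned until dev and staging pass:

# envs/dev/versions.tf - trial the new major version here first
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# envs/prod/versions.tf - stays on the old major until the upgrade is proven
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.67"
    }
  }
}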
Lock File Management

# .terraform.lock.hcl pins exact versions
# After testing, commit the lock file
# Upgrade all providers to the newest versions allowed by the constraints
terraform init -upgrade
# View current locked versions
cat .terraform.lock.hcl
# Add checksums for additional platforms to the lock file
terraform providers lock -platform=linux_amd64 -platform=darwin_amd64

Common Provider Upgrade Issues
| Provider | Version | Breaking Change | Fix |
|---|---|---|---|
| AWS | 4.0 | aws_s3_bucket_object → aws_s3_object | Use moved block |
| AWS | 5.0 | S3 bucket arguments split | Separate resources + import |
| AWS | 5.0 | Default tags handling | Update provider config |
| GCP | 4.0 | Project field required | Add project to resources |
| Azure | 3.0 | Resource renames | Use moved blocks |
Automated Upgrade Checking
# .github/workflows/provider-check.yml
name: Check Provider Updates
on:
  schedule:
    - cron: '0 9 * * 1' # Weekly on Monday
jobs:
  check-updates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
      - name: Check for Updates
        run: |
          # -backend=false: resolve providers only, no backend credentials needed
          # -upgrade: re-select the newest versions allowed by the constraints
          terraform init -backend=false -upgrade
          if git diff --exit-code .terraform.lock.hcl; then
            echo "No provider updates available"
          else
            echo "Provider updates available!"
            # Create PR with lock file changes
          fi

Recovery from Bad Upgrade
# If you already applied and things broke:
# 1. Check state for backup versions
aws s3api list-object-versions \
--bucket terraform-state \
--prefix prod/terraform.tfstate
# 2. Restore previous state version
aws s3api get-object \
--bucket terraform-state \
--key prod/terraform.tfstate \
--version-id "VERSION_ID" \
previous-state.json
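Before pushing the old state over the current one, snapshot what is there now so this recovery step is itself reversible. Also note that terraform state push refuses a state file whose serial is older than the current one; if it does, rerun it with -force, which is exactly why the snapshot matters:

# 2a. Snapshot the current (broken) state before overwriting it
terraform state pull > broken-state-backup.json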
terraform state push previous-state.json
# 3. Revert provider version in versions.tf
# 4. Run terraform init -upgrade (the lock file still pins the new version, so init must re-select the old one)
# 5. Verify with terraform plan

Provider Upgrade Checklist
| Step | Action | Verification |
|---|---|---|
| 1 | Read upgrade guide | Understand breaking changes |
| 2 | Add prevent_destroy | Safety net in place |
| 3 | Upgrade in dev first | Plan shows expected changes |
| 4 | Fix deprecated arguments | No warnings in plan |
| 5 | Import new resources | State matches reality |
| 6 | Run plan | No unexpected destroys |
| 7 | Apply to dev | Successful, no data loss |
| 8 | Repeat for staging/prod | All environments updated |
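As a final guardrail, some teams fail CI whenever a plan contains any delete action, forcing a human review before anything destructive ships. A sketch (the plan file name is arbitrary):

#!/usr/bin/env bash
set -euo pipefail

terraform plan -out=plan.tfplan

# Fail if any planned resource change includes a delete action
if terraform show -json plan.tfplan \
   | jq -e '[ (.resource_changes // [])[].change.actions[] ] | index("delete")' \
   > /dev/null; then
  echo "Plan contains destroy actions - manual review required" >&2
  exit 1
fi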
Practice Question
What should you do FIRST when terraform plan shows a database will be destroyed after a provider upgrade?