After upgrading providers, terraform plan shows resources will be destroyed and recreated. Prevent data loss.

The Scenario

You upgraded the AWS provider from 4.x to 5.x. Now terraform plan shows:

$ terraform plan

# aws_db_instance.production must be replaced
-/+ resource "aws_db_instance" "production" {
      ~ id                    = "prod-db" -> (known after apply)
      ~ endpoint              = "prod-db.xxx.us-east-1.rds.amazonaws.com" -> (known after apply)
      ~ engine_version        = "14.7" -> "14.7"  # No actual change!
      # (forces replacement)
      ...
    }

# aws_s3_bucket.data must be replaced
-/+ resource "aws_s3_bucket" "data" {
      ~ id     = "company-data-bucket" -> (known after apply)
      ~ bucket = "company-data-bucket" -> "company-data-bucket"
      # (forces replacement due to acl argument removal)
    }

Plan: 2 to add, 0 to change, 2 to destroy.

Your production database and S3 bucket will be destroyed! The data loss would be catastrophic.

The Challenge

Understand why provider upgrades cause force-replacement, prevent data loss, and safely complete the upgrade.

Wrong Approach

A junior engineer might panic and revert the provider version, run apply and hope for the best, or manually delete the resources from state. Each of these is a mistake: reverting only delays a necessary upgrade, applying would destroy production data, and manipulating state without understanding the root cause creates new problems.

Right Approach

A senior engineer reads the provider changelog for breaking changes, uses lifecycle prevent_destroy as a safety net, migrates deprecated arguments, uses moved blocks for renamed resources, and tests upgrades in non-prod first.

Step 1: Understand What Changed

# Check provider changelog
# AWS Provider 5.0 Upgrade Guide:
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-5-upgrade

# Common breaking changes in AWS 5.0:
# 1. S3 bucket ACL argument removed (use aws_s3_bucket_acl)
# 2. Default tags behavior changed (see the provider sketch below)
# 3. Some argument names changed
# 4. Resource attribute type changes
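
Of these, the default tags change is the easiest to miss in review. A minimal sketch of a provider block using default_tags (the tag keys here are illustrative, not from the original configuration):

provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider manages
  default_tags {
    tags = {
      Environment = "production"
      ManagedBy   = "terraform"
    }
  }
}

# In 5.x, tags duplicated between default_tags and a resource's own
# tags no longer cause perpetual diffs; confirm with terraform plan.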

Step 2: Add Safety Net First

# Add prevent_destroy BEFORE upgrading provider
resource "aws_db_instance" "production" {
  identifier = "prod-db"
  # ...

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket" "data" {
  bucket = "company-data-bucket"

  lifecycle {
    prevent_destroy = true
  }
}

# Now if you accidentally run apply, it will fail safely:
# Error: Instance cannot be destroyed
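
For reference, the blocked apply fails with an error roughly like this (exact wording varies by Terraform version):

│ Error: Instance cannot be destroyed
│
│ Resource aws_db_instance.production has lifecycle.prevent_destroy
│ set, but the plan calls for this resource to be destroyed. To avoid
│ this error, either disable lifecycle.prevent_destroy or reduce the
│ scope of the plan using the -target option.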

Step 3: Fix S3 Bucket Breaking Changes (AWS 5.0)

# BEFORE (AWS Provider 4.x)
resource "aws_s3_bucket" "data" {
  bucket = "company-data-bucket"
  acl    = "private"  # Removed in 5.0!

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

# AFTER (AWS Provider 5.x)
resource "aws_s3_bucket" "data" {
  bucket = "company-data-bucket"
}

# ACL is now a separate resource
resource "aws_s3_bucket_acl" "data" {
  bucket = aws_s3_bucket.data.id
  acl    = "private"
}

# Versioning is now separate
resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Encryption is now separate
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

Step 4: Import New Resources

# The new separate resources need to be imported
terraform import aws_s3_bucket_acl.data company-data-bucket,private
terraform import aws_s3_bucket_versioning.data company-data-bucket
terraform import aws_s3_bucket_server_side_encryption_configuration.data company-data-bucket

# Verify no changes
terraform plan
# Should show: No changes.
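
On Terraform 1.5 or newer, config-driven import blocks are an alternative to the CLI commands above; the import shows up in the plan for review before it touches state. A minimal sketch for one of the resources:

import {
  to = aws_s3_bucket_versioning.data
  id = "company-data-bucket"
}

# terraform plan then reports the pending import in its summary
# (e.g. "1 to import"); remove the block after a successful apply.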

Step 5: Handle Attribute Type Changes

# Sometimes attribute types change, causing false replacements
# Example: tags changed from map to object

# Check state for current value
terraform state show aws_db_instance.production

# If state has old format, refresh to update
terraform apply -refresh-only -target=aws_db_instance.production
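
If it is not obvious which attribute forces the replacement, render the plan as JSON and inspect it. A short sketch, assuming jq is installed:

terraform plan -out=tfplan
terraform show -json tfplan \
  | jq '.resource_changes[] | select(.change.actions | index("delete")) | .address'

# The same JSON also exposes .change.replace_paths, which lists the
# exact attributes forcing each replacement.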

Step 6: Use Moved Blocks for Renames

# If a resource type or name changed
moved {
  from = aws_s3_bucket_object.config
  to   = aws_s3_object.config  # Renamed in AWS 4.0
}

# Terraform updates the state address without recreating the resource.
# Note: same-type renames work on Terraform 1.1+; a cross-type move
# like this one requires Terraform 1.8+ and provider support.
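
After adding the moved block, the plan should report the move instead of a destroy/create pair; the output looks roughly like this:

terraform plan
# aws_s3_bucket_object.config has moved to aws_s3_object.config
# Plan: 0 to add, 0 to change, 0 to destroy.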

Upgrade Testing Strategy

# versions.tf - Pin provider versions
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"  # Allows 5.x but not 6.0
    }
  }
}

# Step-by-step upgrade process:
# 1. Test in dev with new provider
# 2. Review plan output carefully
# 3. Fix breaking changes
# 4. Apply to dev
# 5. Repeat for staging
# 6. Finally upgrade prod
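
A sketch of what steps 1 and 2 look like in practice, assuming one working directory per environment (the directory layout is illustrative):

cd environments/dev
terraform init -upgrade            # pull the new provider version
terraform plan -out=upgrade.tfplan
terraform show upgrade.tfplan      # review every forced replacement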

Lock File Management

# .terraform.lock.hcl pins exact versions
# After testing, commit the lock file

# Upgrade all providers to the newest versions allowed by the
# version constraints (init has no per-provider upgrade flag)
terraform init -upgrade

# View current locked versions
cat .terraform.lock.hcl

# Record checksums for additional platforms (e.g., Linux CI runners
# and macOS laptops)
terraform providers lock -platform=linux_amd64 -platform=darwin_amd64
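
For orientation, a lock file entry looks roughly like this (the version and hashes below are placeholders, not real values):

provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:<base64 checksum>",
    "zh:<sha256 checksum>",
  ]
}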

Common Provider Upgrade Issues

Provider | Version | Breaking Change                               | Fix
AWS      | 4.0     | aws_s3_bucket_object renamed to aws_s3_object | Use moved block
AWS      | 5.0     | S3 bucket arguments split                     | Separate resources + import
AWS      | 5.0     | Default tags handling                         | Update provider config
Google   | 4.0     | Project field required                        | Add project to resources
Azure    | 3.0     | Resource renames                              | Use moved blocks

Automated Upgrade Checking

# .github/workflows/provider-check.yml
name: Check Provider Updates

on:
  schedule:
    - cron: '0 9 * * 1'  # Weekly on Monday

jobs:
  check-updates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Check for Updates
        run: |
          # -upgrade rewrites .terraform.lock.hcl when newer provider
          # versions satisfy the version constraints
          terraform init -upgrade

          if git diff --exit-code .terraform.lock.hcl; then
            echo "No provider updates available"
          else
            echo "Provider updates available!"
            # Create PR with lock file changes
          fi
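
      # One way to automate the PR step is a community action such as
      # peter-evans/create-pull-request (inputs shown are illustrative;
      # the action is a no-op when the lock file is unchanged):
      - name: Create Pull Request for Lock File
        uses: peter-evans/create-pull-request@v6
        with:
          commit-message: "chore: update .terraform.lock.hcl"
          title: "chore: update provider lock file"
          branch: provider-updates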

Recovery from Bad Upgrade

# If you already applied and things broke:

# 1. List previous versions of the state file (assumes an S3 backend
#    with bucket versioning enabled)
aws s3api list-object-versions \
  --bucket terraform-state \
  --prefix prod/terraform.tfstate

# 2. Restore previous state version
aws s3api get-object \
  --bucket terraform-state \
  --key prod/terraform.tfstate \
  --version-id "VERSION_ID" \
  previous-state.json

terraform state push previous-state.json
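
# Note: terraform state push refuses a state whose serial is lower
# than the current one; terraform state push -force overrides the
# lineage and serial checks (use with care).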

# 3. Revert the provider version constraint in versions.tf
# 4. Run terraform init -upgrade so the lock file re-selects the
#    older version (plain init fails while the lock file pins 5.x)
# 5. Verify with terraform plan

Provider Upgrade Checklist

Step | Action                   | Verification
1    | Read upgrade guide       | Understand breaking changes
2    | Add prevent_destroy      | Safety net in place
3    | Upgrade in dev first     | Plan shows expected changes
4    | Fix deprecated arguments | No warnings in plan
5    | Import new resources     | State matches reality
6    | Run plan                 | No unexpected destroys
7    | Apply to dev             | Successful, no data loss
8    | Repeat for staging/prod  | All environments updated

Practice Question

What should you do FIRST when terraform plan shows a database will be destroyed after a provider upgrade?