
Terraform S3 Lifecycle Rules: A Step-by-Step Guide

By Ondřej Dolanský on 01/07/2025

Learn how to automate the management of your Amazon S3 data lifecycle with Terraform by implementing lifecycle rules for efficient data storage and cost optimization.

Introduction

Managing the lifecycle of your data in Amazon S3 is crucial for cost optimization and data retention policies. Terraform provides flexible ways to define lifecycle rules for your S3 buckets. This article explores two primary methods for adding lifecycle rules using Terraform, outlining their strengths and potential pitfalls.

Step-by-Step Guide

You can add lifecycle rules to your S3 bucket using Terraform in a couple of ways:

1. Directly within the aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"

  lifecycle_rule {
    enabled = true

    prefix = "logs/"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 365
    }
  }
}

This inline approach is simpler for basic lifecycle rules, but note that the lifecycle_rule block inside aws_s3_bucket is deprecated as of version 4 of the AWS provider and was removed in version 5. For new configurations, prefer the separate resource shown next.

2. Using the aws_s3_bucket_lifecycle_configuration resource:

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-cleanup"
    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 365
    }
  }
}

This method offers more flexibility, especially when dealing with multiple complex rules.

Important Considerations:

  • Avoid creating multiple aws_s3_bucket_lifecycle_configuration resources for the same bucket (for example, via for_each): S3 stores a single lifecycle configuration per bucket, and each resource overwrites the entire configuration, so multiple instances will repeatedly clobber each other.
  • Plan updates carefully: changes to lifecycle rules apply to existing objects in your bucket, not just to newly created ones.
  • Use ignore_changes cautiously: it can help when lifecycle rules are partly managed outside Terraform, but it also allows Terraform's state to drift out of sync with what AWS actually enforces.
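As a minimal sketch of the ignore_changes caveat (the bucket reference and rule contents here are illustrative, not prescriptive), the meta-argument goes in a lifecycle block on the resource:

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-cleanup"
    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    expiration {
      days = 365
    }
  }

  lifecycle {
    # Don't reconcile rules modified outside Terraform. Use sparingly:
    # Terraform will no longer detect or correct drift in these rules.
    ignore_changes = [rule]
  }
}
```

With this in place, `terraform plan` will ignore any rule changes made in the AWS console, so reserve it for buckets whose rules are deliberately managed elsewhere.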

Code Example

The provided Terraform code demonstrates how to manage S3 lifecycle rules to automate data management in AWS S3 buckets. It showcases two approaches: defining rules directly within the aws_s3_bucket resource and using a separate aws_s3_bucket_lifecycle_configuration resource. Examples illustrate configuring transitions to different storage classes (e.g., STANDARD_IA, GLACIER) based on object age and applying rules based on prefixes and tags. The code emphasizes best practices like using separate resources for complex scenarios and avoiding for_each with aws_s3_bucket_lifecycle_configuration.

1. Directly within the aws_s3_bucket resource:

This example creates an S3 bucket named "my-bucket" and applies a lifecycle rule that moves objects with the prefix "logs/" to the STANDARD_IA storage class after 30 days and permanently deletes them after 365 days. As written, this inline form only works with pre-v4 versions of the AWS provider.

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"

  lifecycle_rule {
    enabled = true

    prefix = "logs/"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 365
    }
  }
}

2. Using the aws_s3_bucket_lifecycle_configuration resource:

This example creates an S3 bucket named "my-bucket" and applies a lifecycle rule named "log-cleanup" using a separate resource. This rule has the same functionality as the previous example.

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-cleanup"
    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 365
    }
  }
}

Example with multiple rules and different filters:

This example demonstrates using the aws_s3_bucket_lifecycle_configuration resource to define multiple lifecycle rules with different filters.

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "log-cleanup"
    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 365
    }
  }

  rule {
    id     = "image-optimization"
    status = "Enabled"

    filter {
      and {
        prefix = "images/"
        tags = {
          "auto-optimize" = "true"
        }
      }
    }

    noncurrent_version_transition {
      noncurrent_days = 15
      storage_class   = "GLACIER"
    }
  }
}

This example defines two rules:

  • log-cleanup: Same as the previous example.
  • image-optimization: Applies to objects with the prefix "images/" and a tag "auto-optimize" set to "true". It moves non-current versions to GLACIER storage class after 15 days.
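Since noncurrent-version actions only take effect on versioned buckets, here is a hedged sketch of enabling versioning alongside such a rule (the resource names and day counts are illustrative):

```hcl
resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  # Make sure versioning is configured before rules that
  # reference noncurrent versions are applied.
  depends_on = [aws_s3_bucket_versioning.example]
  bucket     = aws_s3_bucket.example.id

  rule {
    id     = "old-versions"
    status = "Enabled"

    filter {
      prefix = "images/"
    }

    # Move superseded versions to GLACIER after 15 days...
    noncurrent_version_transition {
      noncurrent_days = 15
      storage_class   = "GLACIER"
    }

    # ...and delete them entirely after 90 days.
    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}
```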

Remember to plan and apply your changes carefully and avoid using for_each with aws_s3_bucket_lifecycle_configuration. Use ignore_changes cautiously and only when necessary.

Additional Notes

  • Rule precedence: when multiple rules apply to the same object, Amazon S3 resolves conflicts deterministically. If an object qualifies for both an expiration and a transition on the same day, the expiration takes precedence; if it qualifies for multiple transitions, S3 generally moves it to the lowest-cost storage class.
  • Versioning: Lifecycle rules can be applied to both versioned and non-versioned buckets. For versioned buckets, you can specify rules for both current and noncurrent object versions.
  • Tags: You can use tags to filter objects for lifecycle rules, providing more granular control over which objects are affected.
  • Monitoring: It's essential to monitor the effects of your lifecycle rules to ensure they behave as expected and optimize costs effectively. Consider using CloudWatch metrics and logs for monitoring.
  • Testing: Before deploying lifecycle rules to production, thoroughly test them in a development environment to avoid unintended data deletion or unexpected costs.
  • Alternatives to for_each: While for_each is not recommended with aws_s3_bucket_lifecycle_configuration, you can explore alternative approaches like using modules or dynamic nested blocks within the resource to manage multiple lifecycle rules effectively.
  • Security: Ensure that your lifecycle rules align with your data security and compliance requirements. For example, consider using lifecycle rules to automatically expire or transition sensitive data to more secure storage classes.
  • Performance: S3 allows up to 1,000 rules per lifecycle configuration, and a large set of overlapping rules quickly becomes hard to reason about. Consolidate rules where possible and keep rule definitions clear and concise.
  • Documentation: Maintain clear documentation of your lifecycle rules, including their purpose, scope, and any dependencies on other resources. This documentation will be helpful for troubleshooting and future modifications.
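The dynamic-block alternative to for_each mentioned above can be sketched as a single aws_s3_bucket_lifecycle_configuration resource that generates its rules from a variable. The variable shape here is an assumption for illustration, not part of any standard module:

```hcl
variable "lifecycle_rules" {
  type = list(object({
    id              = string
    prefix          = string
    transition_days = number
    expire_days     = number
  }))
  default = [
    { id = "logs", prefix = "logs/", transition_days = 30, expire_days = 365 },
    { id = "temp", prefix = "tmp/", transition_days = 7, expire_days = 30 },
  ]
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  # One "rule" block is generated per entry in var.lifecycle_rules,
  # keeping all rules in a single resource so nothing is overwritten.
  dynamic "rule" {
    for_each = var.lifecycle_rules
    content {
      id     = rule.value.id
      status = "Enabled"

      filter {
        prefix = rule.value.prefix
      }

      transition {
        days          = rule.value.transition_days
        storage_class = "STANDARD_IA"
      }

      expiration {
        days = rule.value.expire_days
      }
    }
  }
}
```

Because all rules live in one resource, the full lifecycle configuration is written to the bucket in a single API call, avoiding the overwrite problem described earlier.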

Summary

This article outlines two methods for managing S3 lifecycle rules using Terraform:

  • Directly within the aws_s3_bucket resource: lifecycle rules are defined inline in the bucket definition. Simpler for basic rules, but less flexible for complex scenarios.
  • Using the aws_s3_bucket_lifecycle_configuration resource: lifecycle rules live in a separate resource linked to the bucket. More flexible, especially for multiple complex rules, at the cost of an additional resource.

Key Points:

  • Avoid using for_each with aws_s3_bucket_lifecycle_configuration.
  • Carefully plan updates to lifecycle rules as they can impact existing objects.
  • Use ignore_changes cautiously to avoid Terraform and AWS becoming out of sync.

Conclusion

By effectively leveraging Terraform for S3 lifecycle management, you can automate data archival, optimize storage costs, and ensure your data retention policies are consistently enforced. Choosing the appropriate method, understanding the nuances of lifecycle rule behavior, and adhering to best practices will enable you to manage your S3 data efficiently and securely. Remember to thoroughly test your configurations and monitor their impact to maintain optimal performance and prevent unintended data loss. As your infrastructure evolves, revisit and adapt your lifecycle rules to align with your changing data management needs.
