Custom Provisioners & null_resource
Build complex logic through null_resource + provisioner combinations
What is null_resource?
null_resource = a "fake" resource that creates no real infrastructure; it exists purely as a container for provisioners
resource "null_resource" "do_something" {
provisioner "local-exec" {
command = "echo 'I run on apply'"
}
}
The null provider must be declared:
terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = "~> 3.2"
    }
  }
}
triggers
triggers map = re-run the provisioner whenever any of its values change
resource "null_resource" "deploy" {
triggers = {
cluster_id = aws_eks_cluster.main.id
deployed_at = timestamp()
}
provisioner "local-exec" {
command = "kubectl apply -f manifests/"
}
}
→ if cluster_id changes → the null_resource is replaced → the provisioner runs again
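One gotcha worth a sketch: triggers on null_resource must be a map of strings, so list or object values need to be encoded first (var.subnet_ids here is a hypothetical list variable):
resource "null_resource" "network_hook" {
  triggers = {
    # a raw list is not a valid map(string) value; jsonencode it first
    subnet_ids = jsonencode(var.subnet_ids)
  }
  provisioner "local-exec" {
    command = "echo 'subnets changed'"
  }
}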
Trigger on File Change
resource "null_resource" "build" {
triggers = {
script_hash = filemd5("build.sh") # ← change file = re-run
}
provisioner "local-exec" {
command = "./build.sh"
}
}
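To re-run when any file in a directory changes, not just one known file, a common trick is to combine fileset() with per-file hashes. A sketch, assuming the sources live under src/ (the directory name is illustrative):
resource "null_resource" "build_all" {
  triggers = {
    # concatenate a hash of every file under src/ into one trigger value
    src_hash = sha1(join("", [for f in fileset("${path.module}/src", "**") : filesha1("${path.module}/src/${f}")]))
  }
  provisioner "local-exec" {
    command = "./build.sh"
  }
}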
Trigger on Every Apply
resource "null_resource" "always" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "./run-everytime.sh"
}
}
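Note: a one-off re-run can also be forced without timestamp() by running terraform apply -replace=null_resource.always, which replaces just that one resource.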
Patterns
Pattern 1: Bootstrap K8s Cluster
resource "aws_eks_cluster" "main" {
# ...
}
# 1. Update kubeconfig
resource "null_resource" "kubeconfig" {
triggers = {
cluster_name = aws_eks_cluster.main.name
}
provisioner "local-exec" {
command = "aws eks update-kubeconfig --name ${aws_eks_cluster.main.name}"
}
depends_on = [aws_eks_cluster.main]
}
# 2. Install AWS Load Balancer Controller
resource "null_resource" "alb_controller" {
  triggers = {
    kubeconfig = null_resource.kubeconfig.id
  }
  provisioner "local-exec" {
    command = <<-EOF
      helm repo add eks https://aws.github.io/eks-charts
      helm repo update
      # upgrade --install is idempotent, so trigger-driven re-runs don't fail
      helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
        --namespace kube-system \
        --set clusterName=${aws_eks_cluster.main.name}
    EOF
  }
}
# 3. Apply manifests
resource "null_resource" "apps" {
  triggers = {
    alb_controller = null_resource.alb_controller.id
    # hash every manifest, not just one file, so any change triggers a re-run
    manifests_hash = sha1(join("", [for f in fileset("${path.module}/manifests", "*.yaml") : filesha1("${path.module}/manifests/${f}")]))
  }
  provisioner "local-exec" {
    command = "kubectl apply -f manifests/"
  }
}
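Chaining each step's id into the next step's triggers gives both ordering and propagation: replacing kubeconfig replaces alb_controller, which in turn replaces apps.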
Pattern 2: Database Migration
resource "aws_db_instance" "main" {
# ...
}
resource "null_resource" "migrate" {
triggers = {
db_endpoint = aws_db_instance.main.endpoint
migrations_hash = sha1(join(",", [for f in fileset(path.module, "migrations/*.sql") : filesha1(f)]))
}
provisioner "local-exec" {
command = "psql -h ${aws_db_instance.main.address} -U admin -d mydb -f migrations.sql"
environment = {
PGPASSWORD = random_password.db.result
}
}
depends_on = [aws_db_instance.main]
}
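For anything beyond a toy schema, consider calling a dedicated migration tool (e.g. Flyway or golang-migrate) from the provisioner instead of raw psql; such tools track which migrations have already been applied.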
Pattern 3: Wait for Resource Ready
resource "aws_db_instance" "main" {
# ...
}
resource "null_resource" "wait_for_db" {
triggers = {
db_id = aws_db_instance.main.id
}
provisioner "local-exec" {
command = <<-EOF
while ! pg_isready -h ${aws_db_instance.main.address} -p 5432; do
sleep 5
done
echo "Database is ready"
EOF
}
}
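Remember that local-exec runs on the machine executing terraform apply, so pg_isready (part of the PostgreSQL client tools) must be installed there and the database must be reachable from that machine.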
Pattern 4: Conditional Provisioning
resource "null_resource" "seed_data" {
count = var.environment == "dev" ? 1 : 0
triggers = {
db_id = aws_db_instance.main.id
}
provisioner "local-exec" {
command = "psql -f seed.sql"
}
}
→ Runs only in the dev environment
Pattern 5: Tear-down Cleanup
resource "null_resource" "cleanup_dns" {
triggers = {
instance_id = aws_instance.web.id
record_name = "web.${var.domain}"
}
provisioner "local-exec" {
when = destroy
command = "aws route53 change-resource-record-sets --hosted-zone-id ${self.triggers.zone_id} --change-batch ..."
}
}
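Destroy-time provisioners may only reference self, so every value the command needs (like zone_id above) has to be copied into triggers while the resource still exists.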
Time-based Pattern (with time_sleep)
terraform {
  required_providers {
    time = {
      source  = "hashicorp/time"
      version = "~> 0.11"
    }
  }
}

resource "aws_db_instance" "main" {
  # ...
}

# Wait 30 seconds for the DB to become fully ready
resource "time_sleep" "wait_for_db" {
  depends_on      = [aws_db_instance.main]
  create_duration = "30s"
}

resource "null_resource" "migrate" {
  triggers = {
    after_wait = time_sleep.wait_for_db.id
  }
  provisioner "local-exec" {
    command = "./migrate.sh"
  }
}
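Keep in mind that time_sleep is a fixed delay, not a readiness check; if 30 seconds might not be enough, combine it with an actual poll like Pattern 3.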
Better Alternatives
Use Case: Helm Charts
❌ null_resource:
resource "null_resource" "helm" {
provisioner "local-exec" {
command = "helm install myapp ./chart"
}
}
✅ Helm Provider:
provider "helm" {
kubernetes {
host = aws_eks_cluster.main.endpoint
cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.main.token
}
}
resource "helm_release" "myapp" {
name = "myapp"
repository = "https://charts.example.com"
chart = "myapp"
version = "1.0.0"
set {
name = "replicas"
value = 3
}
}
Use Case: K8s Manifests
❌ null_resource:
resource "null_resource" "k8s" {
provisioner "local-exec" {
command = "kubectl apply -f manifest.yaml"
}
}
✅ Kubernetes Provider:
resource "kubernetes_namespace" "app" {
metadata { name = "myapp" }
}
resource "kubernetes_deployment" "app" {
metadata {
name = "app"
namespace = kubernetes_namespace.app.metadata[0].name
}
spec { ... }
}
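For YAML that already exists on disk there is also kubernetes_manifest, which takes decoded YAML directly (one resource per YAML document). A minimal sketch, assuming manifest.yaml holds a single document:
resource "kubernetes_manifest" "app" {
  manifest = yamldecode(file("${path.module}/manifest.yaml"))
}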
Use Case: HTTP Health Check
❌ null_resource:
resource "null_resource" "health_check" {
provisioner "local-exec" {
command = "curl -f https://${aws_alb.main.dns_name}/health"
}
}
✅ http data source:
data "http" "health" {
url = "https://${aws_alb.main.dns_name}/health"
retry {
attempts = 5
min_delay_ms = 1000
}
lifecycle {
postcondition {
condition = self.status_code == 200
error_message = "Health check failed: ${self.status_code}"
}
}
}
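Note that the retry block requires a reasonably recent hashicorp/http provider (3.3+, to the best of my knowledge), declared like the other providers in this section:
terraform {
  required_providers {
    http = {
      source  = "hashicorp/http"
      version = "~> 3.4"
    }
  }
}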
terraform_data (Terraform 1.4+)
terraform_data = a built-in replacement for null_resource
# null_resource (legacy)
resource "null_resource" "x" {
  triggers = {
    foo = var.foo
  }
  provisioner "local-exec" {
    command = "echo hello"
  }
}

# terraform_data (modern)
resource "terraform_data" "x" {
  triggers_replace = [var.foo]
  provisioner "local-exec" {
    command = "echo hello"
  }
}
Advantages:
- No need to declare the null provider
- Supports the new triggers_replace syntax (which accepts any value, not just a map of strings)
- Built into Terraform core
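terraform_data can also carry a value through its input argument and output attribute, something null_resource never offered. A minimal sketch (var.app_revision is a hypothetical variable):
resource "terraform_data" "revision" {
  input = var.app_revision
}

output "deployed_revision" {
  # output simply echoes input back after apply
  value = terraform_data.revision.output
}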
Example: Putting It All Together
terraform {
  required_providers {
    aws        = { source = "hashicorp/aws", version = "~> 5.0" }
    kubernetes = { source = "hashicorp/kubernetes", version = "~> 2.30" }
    helm       = { source = "hashicorp/helm", version = "~> 2.15" }
  }
}
# 1. EKS cluster (managed by Terraform)
resource "aws_eks_cluster" "main" { ... }

# 2. Update kubeconfig (provisioner: limited use case)
resource "terraform_data" "kubeconfig" {
  triggers_replace = [aws_eks_cluster.main.id]
  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --name ${aws_eks_cluster.main.name}"
  }
}

# 3. K8s resources (managed by Terraform: preferred!)
resource "kubernetes_namespace" "app" {
  metadata { name = "myapp" }
}

resource "helm_release" "monitoring" {
  name             = "kube-prometheus-stack"
  repository       = "https://prometheus-community.github.io/helm-charts"
  chart            = "kube-prometheus-stack"
  namespace        = "monitoring"
  create_namespace = true
}
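Only the kubeconfig step still needs a provisioner here; everything else is declarative state that Terraform can plan, diff, and destroy properly.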
Best Practices
✅ DO:
- Prefer the dedicated providers (kubernetes, helm, http) over null_resource whenever possible
- Keep triggers specific (not timestamp() everywhere)
- Use terraform_data instead of null_resource on Terraform 1.4+
- Document why a provisioner was unavoidable
❌ DON'T:
- Don't use null_resource to sidestep normal Terraform patterns
- Don't put timestamp() in triggers unless you really want a re-run on every apply
- Don't reimplement logic Terraform already handles (providers, data sources)
Summary
- null_resource / terraform_data = a container for provisioners
- triggers = control when the provisioner re-runs
- Patterns: bootstrap, migration, wait-for-ready, cleanup
- Better alternatives: the kubernetes / helm / http providers
- On Terraform 1.4+, use terraform_data instead of null_resource
Next → Section 14: Data Sources