The reading for today covered something I glossed over yesterday: not just how to isolate state, but when to use each approach. I spent most of the day actually practicing both, and by the end I had a pretty clear opinion on them.
Short version: File layouts are more work upfront but give you real isolation. Workspaces are faster to set up but leave you with a false sense of separation that breaks down under pressure.
Here's how I got there.
Round 1 — Workspaces
Setting it up
Starting from the Day 5/6 web app config, creating dev/staging/prod workspaces took maybe 10 minutes:
```shell
$ terraform workspace new dev
$ terraform workspace new staging
$ terraform workspace new prod

$ terraform workspace list
  default
  dev
* prod
  staging
```
The * shows the active workspace. Switching is fast:
```shell
terraform workspace select dev
```
Inside the config, I referenced the workspace name to differentiate resources:
```hcl
locals {
  environment = terraform.workspace
  name_prefix = "web-app-${local.environment}"
}
```
Then terraform apply in each workspace deploys a separate copy of the infrastructure, with its own state file. In S3 it looks like this:
```
my-terraform-state-bucket/
└── env:/
    ├── dev/terraform.tfstate
    ├── staging/terraform.tfstate
    └── prod/terraform.tfstate
```
Clean. Simple. Done in minutes.
Handling config differences between workspaces
Dev and prod don't want identical infrastructure — dev runs smaller instances, prod has a different domain, different scaling numbers. The pattern that works well here is a locals map keyed by workspace name:
```hcl
locals {
  environment = terraform.workspace

  config = {
    dev = {
      instance_type = "t2.micro"
      min_size      = 1
      max_size      = 2
      domain        = "dev.myapp.com"
    }
    staging = {
      instance_type = "t2.small"
      min_size      = 1
      max_size      = 3
      domain        = "staging.myapp.com"
    }
    prod = {
      instance_type = "t3.small"
      min_size      = 2
      max_size      = 6
      domain        = "myapp.com"
    }
  }

  current = local.config[local.environment]
}
```
Then every resource pulls from local.current instead of hardcoded values:
```hcl
resource "aws_autoscaling_group" "web" {
  min_size         = local.current.min_size
  max_size         = local.current.max_size
  desired_capacity = local.current.min_size
  # ...
}

resource "aws_launch_template" "web" {
  instance_type = local.current.instance_type
  # ...
}
```
Switch workspace, run terraform apply, and the ASG and instance type are automatically right for that environment. Same pattern handles domain names, replica counts, or anything else that varies per environment — it all lives in that one config map, which makes it easy to review what's different between environments at a glance.
Where it started feeling wrong
The problem became clear when I thought about access control. All three workspaces use the exact same backend config — same S3 bucket, same IAM credentials. There's nothing stopping someone from switching to prod and running terraform apply from a dev machine.
```shell
terraform workspace select prod
terraform apply  # this works. nothing stops it.
```
That's not isolation. That's just organization. For a personal project or a small team where everyone is trusted with full access, workspaces are fine. But in a real company where prod deployments should require approval and different credentials? Workspaces don't give you that.
The other thing: if you forget which workspace is active, you're applying to the wrong environment. It's a one-command mistake with real consequences.
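One partial mitigation — a sketch, not something from today's reading — is to make the plan fail fast if the active workspace isn't one you've defined, instead of erroring on the config-map lookup. Assuming Terraform 1.4+ (for `terraform_data`), the guard resource and its error message are my own:

```hcl
# Hypothetical guard: abort the plan early if the active workspace
# has no entry in local.config, with a readable error message.
resource "terraform_data" "workspace_guard" {
  lifecycle {
    precondition {
      condition     = contains(keys(local.config), terraform.workspace)
      error_message = "Workspace '${terraform.workspace}' has no entry in local.config."
    }
  }
}
```

It doesn't add real access control — it just turns a silent wrong-environment apply into a loud failure.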
Round 2 — File Layouts
Setting it up
File layout takes longer. I created a proper folder structure:
```
infrastructure/
├── modules/
│   └── web-app/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── dev/
│   ├── main.tf
│   ├── variables.tf
│   └── backend.tf
├── staging/
│   ├── main.tf
│   ├── variables.tf
│   └── backend.tf
└── prod/
    ├── main.tf
    ├── variables.tf
    └── backend.tf
```
Each environment's backend.tf points to a different S3 key:
```hcl
# dev/backend.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "dev/web-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
  }
}
```

```hcl
# prod/backend.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket-prod" # separate bucket for prod
    key            = "prod/web-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks-prod"
    encrypt        = true
  }
}
```
To deploy to dev, I cd dev/ and run terraform apply. To deploy to prod, I need to cd prod/ — and in a real setup, prod would require a different IAM role to assume. You can't accidentally apply to prod from the wrong directory.
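That "different IAM role" part would live in the prod provider block. A hedged sketch — the account ID and role name here are made up:

```hcl
# prod/main.tf — hypothetical: prod applies must assume a dedicated
# deployer role, so plain dev credentials can't touch prod at all.
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/terraform-prod-deployer"
    session_name = "terraform-prod"
  }
}
```

Combined with the separate state bucket, this is what actual per-environment access control looks like — something workspaces can't express.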
What it felt like
More typing. More files. But every operation felt deliberate. There's no mental state to track — no "wait, which workspace am I in?" The working directory is the environment. It's obvious.
The module setup also means changes to shared infrastructure logic happen once, in modules/web-app/, and all environments pull them in. No copy-paste drift.
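For reference, each environment's main.tf ends up as little more than a module call. A sketch of what mine looks like (variable names are my own, mirroring the earlier config map):

```hcl
# dev/main.tf — thin wrapper around the shared module
module "web_app" {
  source = "../modules/web-app"

  environment   = "dev"
  instance_type = "t2.micro"
  min_size      = 1
  max_size      = 2
  domain        = "dev.myapp.com"
}
```

The prod version is the same call with prod values — the duplication is a handful of assignments, not resource definitions.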
State Migration
One practical thing that came up today: moving state between backends. I had local state from earlier in the week and needed to push it to S3.
terraform init -migrate-state handles this:
```shell
# Update backend config, then:
terraform init -migrate-state
```
Terraform detects the backend changed and asks if you want to copy the existing state over. Say yes.
I also practiced pulling state down locally for inspection:
```shell
terraform state pull > local-backup.tfstate
```
This gives a local snapshot of remote state without modifying anything. Useful when something in a plan looks wrong and you want to dig into the raw state.
If you need to push a corrected state back:
```shell
terraform state push local-backup.tfstate
```
Use that one carefully — pushing incorrect state will confuse Terraform about what's actually deployed.
Testing State Locking
I actually tested what happens when two applies run simultaneously. Two terminals, same directory, same remote backend.
Terminal 1 acquired the lock and ran normally. Terminal 2 was blocked immediately:
```
╷
│ Error: Error acquiring the state lock
│
│ Lock Info:
│   ID:        f3a21b...
│   Who:       mnourdine@NourMac
│   Created:   2026-04-13 14:22:05
│   Operation: OperationTypeApply
╵
```
Once Terminal 1 finished, Terminal 2 ran without any issues. Exactly what you'd want.
One edge case: if Terraform crashes mid-apply, the lock sometimes doesn't release automatically. You'll get the error above even with no apply running. Fix it with:
```shell
terraform force-unlock <LOCK_ID>
```
Only run this when you're absolutely sure nothing is running — force-unlocking during an active apply is how you corrupt state, and recovery is painful if your state bucket isn't versioned.
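Versioning on the state bucket is the safety net for exactly these situations. If the bucket itself is managed with Terraform, it's one extra resource — a sketch, reusing the bucket name from earlier:

```hcl
# Keep old copies of terraform.tfstate so a bad push or a botched
# force-unlock can be rolled back to a known-good version.
resource "aws_s3_bucket_versioning" "state" {
  bucket = "my-terraform-state-bucket"

  versioning_configuration {
    status = "Enabled"
  }
}
```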
Which One to Use
| | Workspaces | File Layouts |
|---|---|---|
| Setup time | Fast | Slower |
| Accidental env mistakes | Easy to make | Hard to make |
| Access control per env | Not possible | Fully possible |
| Code duplication | None | Solved with modules |
| Best for | Feature branches, short-lived envs | Long-lived prod/staging/dev |
If I'm spinning up a temporary environment to test something, workspaces. For anything long-lived — anything with "prod" in the name — file layouts with separate backend configs.
The tradeoff that made this obvious: the cost of a workspace mistake is high (wrong env, full access, real infrastructure), while the cost of file layout setup is just time upfront. And modules solve the duplication. Once I framed it that way, the choice felt clear.
This post is part of a 30-day Terraform learning journey.