A practical guide to moving workloads from Karpenter to Amazon EKS Auto Mode—why you might do it, how to run both side-by-side, and what we learned along the way.
If you’ve been running Karpenter for node autoscaling (as in Scaling with Karpenter in Kubernetes), EKS Auto Mode can look attractive: AWS manages compute, storage, and networking in one place, with built-in security and cost optimization. Migrating doesn’t have to be a big-bang cutover. Here’s a concise path and some lessons learned.
| Aspect | Karpenter | EKS Auto Mode |
|---|---|---|
| Who runs it | You (Helm, CRDs, upgrades) | AWS (managed control plane + node lifecycle) |
| Node lifecycle | You configure NodePools, EC2NodeClass | AWS-managed; immutable nodes, auto-rotation of nodes |
| Security | You harden AMIs, IAM, networking | Built-in: immutable AMIs, SELinux, read-only root, encryption |
| Networking / storage | You wire VPC CNI, EBS CSI | Pre-integrated VPC CNI, EBS CSI |
| Cost optimization | Spot, consolidation, right-sizing in your NodePools | AWS applies Spot, right-sizing, instance selection |
| Operational load | Higher (you own scaling logic and upgrades) | Lower (fewer components to operate) |
Auto Mode fits teams that want less to run themselves and are okay with fewer knobs than Karpenter’s fine-grained NodePools and instance constraints.
Enable Auto Mode on the existing cluster but do not turn on the built-in general-purpose node pool yet. That keeps existing workloads on Karpenter until you explicitly move them.
Terraform example:
resource "aws_eks_cluster" "main" {
name = "my-cluster"
role_arn = aws_iam_role.eks_cluster.arn
version = "1.29"
vpc_config {
subnet_ids = var.subnet_ids
}
# Enable EKS Auto Mode; do not enable default node pool during migration
auto_mode_config {
enabled = true
}
}
AWS CLI alternative:
```bash
aws eks update-cluster-config \
  --name my-cluster \
  --compute-config enabled=true \
  --kubernetes-network-config '{"elasticLoadBalancing":{"enabled":true}}' \
  --storage-config '{"blockStorage":{"enabled":true}}'
```
During migration, leave the built-in general-purpose node pool disabled so only your new, tainted Auto Mode Node Pool receives migrated workloads.
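Before moving anything, it can help to confirm what the cluster actually reports. A quick check, assuming `describe-cluster` exposes the Auto Mode fields (`computeConfig`, `storageConfig`, `kubernetesNetworkConfig.elasticLoadBalancing`):

```bash
# Confirm Auto Mode is on and no built-in node pools are active yet.
aws eks describe-cluster --name my-cluster \
  --query "cluster.{compute:computeConfig,storage:storageConfig.blockStorage,elb:kubernetesNetworkConfig.elasticLoadBalancing}"
```

You should see `enabled: true` in each block and an empty (or absent) `nodePools` list under compute.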
Create an EKS Auto Mode Node Pool with a taint so existing pods do not schedule on it until you add tolerations. This lets you migrate workload-by-workload.
```yaml
# eks-auto-mode-nodepool.yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: eks-auto-mode
spec:
  template:
    spec:
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m", "r"]
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      taints:
        - key: "eks-auto-mode"
          effect: "NoSchedule"
```
Adjust requirements (e.g. instance categories, size) to align with what you had in Karpenter. You need at least one requirement. Apply:
```bash
kubectl apply -f eks-auto-mode-nodepool.yaml
```
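A quick sanity check, if you want one, is that the NodePool was accepted. No nodes will appear yet, since nothing tolerates the taint at this point:

```bash
# The NodePool should show up immediately; Auto Mode nodes only appear once a
# tolerating pod is scheduled, so an empty node list here is expected.
kubectl get nodepool eks-auto-mode
kubectl get nodes -l eks.amazonaws.com/compute-type=auto
```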
For each workload you want on Auto Mode, add a toleration for the taint and a node selector so it only schedules on Auto Mode nodes during the transition.
```yaml
# Deployment patch example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      tolerations:
        - key: "eks-auto-mode"
          operator: "Exists"
          effect: "NoSchedule"
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
      # ... rest of pod spec
```
EKS Auto Mode uses labels under eks.amazonaws.com (e.g. eks.amazonaws.com/compute-type: auto). This is different from Karpenter’s labels—expect to touch node selectors/affinity when migrating.
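After patching a Deployment, a short check that its pods actually landed on Auto Mode nodes catches missing tolerations or selectors early. The `app=my-app` label is just an example selector for the Deployment above; adjust it to your workload:

```bash
# Wait for the rollout, then confirm the pods are on Auto Mode nodes.
kubectl rollout status deployment/my-app
kubectl get pods -l app=my-app -o wide   # NODE column should show the new nodes
kubectl get nodes -l eks.amazonaws.com/compute-type=auto
```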
After all target workloads run on EKS Auto Mode nodes:
```bash
kubectl delete nodepool <your-original-karpenter-nodepool-name>
```
Karpenter will drain and remove its nodes. Confirm no critical workloads are still on those nodes before deleting.
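One way to do that check, assuming Karpenter v1's `karpenter.sh/nodepool` node label (`<node-name>` is a placeholder for one of the listed nodes):

```bash
# Nodes still owned by the old Karpenter NodePool
kubectl get nodes -l karpenter.sh/nodepool=<your-original-karpenter-nodepool-name>

# What is still scheduled on a given node before you delete the NodePool
kubectl get pods -A -o wide --field-selector spec.nodeName=<node-name>
```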
If you want new workloads to use Auto Mode by default:

- Remove the taint from the Auto Mode Node Pool (delete the `taints` section from the NodePool spec).
- Remove the `nodeSelector` from workloads so they don't need to explicitly target Auto Mode.

Once everything runs on Auto Mode and you've removed Karpenter Node Pools and EC2 Node Classes, uninstall Karpenter the same way you installed it (e.g. Helm):
```bash
helm uninstall karpenter -n karpenter
```
Clean up CRDs, IAM roles, and any Karpenter-specific resources (queues, instance profiles, etc.) as per Karpenter docs.
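A few sanity checks after the uninstall can confirm nothing Karpenter-specific is left behind; the namespace and label here assume a typical Helm install:

```bash
# Nothing should come back from any of these once Karpenter is fully removed.
kubectl get pods -n karpenter
kubectl get ec2nodeclasses.karpenter.k8s.aws
kubectl get nodes -l karpenter.sh/nodepool
```

Be careful before deleting Karpenter's CRDs: the Auto Mode Node Pool uses the same `karpenter.sh` API group, so make sure removing a CRD won't take your `eks-auto-mode` NodePool with it.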
Label and taint differences
Auto Mode uses eks.amazonaws.com/* labels and its own NodeClass. Plan for a one-time update of node selectors, affinity, and tolerations; a small inventory of “what runs where” helps.
Run both in parallel
Using a tainted Auto Mode Node Pool and migrating in batches reduces risk. You can validate behavior and roll back by reverting tolerations/selectors.
PDBs and graceful shutdowns
Auto Mode still does node replacement and scaling. Strong PDBs and preStop/grace periods (as in the Karpenter article) remain important.
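If a workload doesn't have a PDB yet, even a minimal one limits how many replicas a node rotation can take out at once. A sketch using `kubectl create` (the name, namespace, and `app=my-app` selector are placeholders):

```bash
# Keep at least one replica of my-app available during voluntary disruptions
# such as Auto Mode node replacement.
kubectl create poddisruptionbudget my-app-pdb \
  --selector=app=my-app \
  --min-available=1 \
  -n default
```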
Less control, less ops
You give up Karpenter’s granular NodePools and instance-type constraints. If you need very specific instance families or consolidation policies, evaluate whether Auto Mode’s defaults are enough before committing.
Cost and observability
Auto Mode optimizes for cost; keep using Cost Explorer and your existing billing alerts. Revisit rightsizing and Spot usage after migration—behavior may differ from your previous Karpenter tuning.
Terraform and GitOps
Model Auto Mode enablement and, where possible, Node Pools in Terraform (or your IaC). Keep workload changes (tolerations, node selectors) in Git for a clear audit trail and rollback.
Migrating from Karpenter to EKS Auto Mode is doable without downtime: enable Auto Mode, add a tainted Node Pool, move workloads with tolerations and node selectors, then remove Karpenter Node Pools and uninstall Karpenter. The main trade-off is less control for less operational burden. Document label/taint changes, migrate gradually, and keep PDBs and graceful shutdowns—then you can run EKS Auto Mode with confidence.