Recently, while deploying Longhorn, I ran into a frustrating issue: my pods kept showing an “Evicted” status even though it was a freshly deployed Kubernetes cluster. Since I am new to Kubernetes, I had no clue which commands would help me understand and then solve the problem. Luckily, ChatGPT made that process much easier, but it still took me a while to figure it out.
FYI, Longhorn is a lightweight, reliable, and powerful distributed block storage system for Kubernetes.
Understanding the “Evicted” Status
In Kubernetes, eviction is a mechanism used to maintain node stability under resource pressure. When the kubelet detects that a node is running low on resources (memory, disk space, or filesystem inodes), it may evict pods to reclaim them. (CPU is not an eviction signal, since CPU can be throttled rather than reclaimed.)
When you run:
kubectl get pods -A
You see something like:
longhorn-system   instance-manager-db95e4280b535d82cde25cce5b44f97b   0/1   Evicted   0   1s
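A detail I learned along the way: evicted pods are not cleaned up automatically — they linger as objects in the Failed phase. Here is a sketch for listing and removing them, wrapped in functions so nothing touches your cluster until you call them (assumes a recent kubectl that supports --field-selector on delete):

```shell
# Evicted pods stay behind as Failed objects; these helpers list and remove them.
list_failed_pods() {
  kubectl get pods -A --field-selector=status.phase=Failed
}

delete_failed_pods() {
  # Only do this after fixing the underlying disk pressure,
  # or new pods will simply be evicted again.
  kubectl delete pods -A --field-selector=status.phase=Failed
}
```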
If you describe the node, you will see something like:
kubectl describe node k8s-worker1 | grep -A10 "Conditions"
Type: DiskPressure
Status: True
Message: kubelet has disk pressure
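Instead of describing nodes one by one, you can also pull the DiskPressure condition for every node in a single jsonpath query — a sketch assuming a working kubeconfig, wrapped in a function so it is copy-paste safe:

```shell
# Print "<node name>  <DiskPressure status>" for every node in the cluster.
disk_pressure_report() {
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'
}
# disk_pressure_report    # e.g.: k8s-worker1    True
```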
If you describe the pod, you will see something like:
kubectl describe pod instance-manager-db95e4280b535d82cde25cce5b44f97b -n longhorn-system
Status: Failed
Reason: Evicted
Message: The node was low on resource: ephemeral-storage.
On my node “k8s-worker1”, I then executed this command:
df -h /var/lib/kubelet
which showed:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 11G 8.5G 1.8G 84% /var/lib/kubelet
Conclusion: only 1.8 GB was free, and the filesystem was already at 84% usage. Kubernetes starts evicting pods when disk usage crosses the kubelet's eviction thresholds — typically around 85–90% usage (the default hard threshold is nodefs.available<10%), depending on your kubelet settings.
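To put rough numbers on that, assuming the default hard threshold of nodefs.available<10%: on my 11 GB filesystem, eviction would begin once free space dropped below about 1.1 GB — so with 1.8 GB free, only about 0.7 GB of headroom remained. A quick back-of-the-envelope check:

```shell
# Back-of-the-envelope eviction headroom, assuming the default hard
# threshold nodefs.available<10% (free space below 10% of capacity).
size_gb=11
avail_gb=1.8
awk -v size="$size_gb" -v avail="$avail_gb" 'BEGIN {
  floor = size * 0.10   # free space below this triggers eviction
  printf "eviction floor: %.1f GB, headroom left: %.1f GB\n", floor, avail - floor
}'
# → eviction floor: 1.1 GB, headroom left: 0.7 GB
```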
That seems to have happened in my case. I don’t know how production Kubernetes clusters are set up and which “best practices” the experts use, but you would probably at least monitor /var/lib/kubelet closely. 🙂
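Even without proper monitoring tooling, a small cron-able check goes a long way. Here is a minimal sketch — the check_disk name and the 80% default limit are my own choices, not a Kubernetes convention, and it relies on GNU df:

```shell
# Warn when disk usage on a path crosses a limit, so you notice
# before the kubelet does. On a node: check_disk /var/lib/kubelet 80
check_disk() {
  local path="$1" limit="${2:-80}"
  local used
  used=$(df --output=pcent "$path" | tail -1 | tr -d ' %')
  if [ "$used" -ge "$limit" ]; then
    echo "WARN: $path at ${used}% (limit ${limit}%)"
  else
    echo "OK: $path at ${used}%"
  fi
}

check_disk /    # demo on the root filesystem
```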
The Solution
I increased the virtual disk size in ESXi and then resized the partition, physical volume, and logical volume inside the Ubuntu VM. After resizing the volumes, Longhorn could deploy properly without eviction errors. My nodes now looked like this:
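For reference, the resize sequence looked roughly like this. The device and volume-group names are from my Ubuntu VM — yours will differ — and it is wrapped in a function so nothing runs until you call it as root:

```shell
# Grow the LVM stack backing /var/lib/kubelet after enlarging the
# virtual disk in ESXi. Device names match my VM; adjust for yours.
resize_kubelet_disk() {
  growpart /dev/sda 3                                  # grow partition 3 to fill the disk
  pvresize /dev/sda3                                   # let LVM see the larger physical volume
  lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv    # extend the LV; -r also resizes the filesystem
}
```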
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 71G 8.0G 60G 12% /var/lib/kubelet
Kubernetes stopped evicting pods, and the Longhorn UI was finally available to me and ready to provision volumes! 😀
Kubernetes Documentation – Node-pressure Eviction
Here are some snippets from the official Kubernetes documentation on node-pressure eviction.
The kubelet monitors resources like memory, disk space, and filesystem inodes on your cluster’s nodes. When one or more of these resources reach specific consumption levels, the kubelet can proactively fail one or more pods on the node to reclaim resources and prevent starvation.
You can specify custom eviction thresholds for the kubelet to use when it makes eviction decisions. You can configure soft and hard eviction thresholds.
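Those thresholds live in the kubelet's KubeletConfiguration. A sketch of what that could look like — the hard values below are the documented defaults, while the soft threshold and grace period are illustrative, not recommendations:

```yaml
# Fragment of a KubeletConfiguration (e.g. /var/lib/kubelet/config.yaml)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
evictionSoft:
  nodefs.available: "15%"
evictionSoftGracePeriod:
  nodefs.available: "1m30s"
```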
The kubelet reports node conditions to reflect that the node is under pressure because a hard or soft eviction threshold has been met, independent of configured grace periods.