When using Kubernetes, sooner or later you’ll encounter a failed deployment where your pods are in CrashLoopBackOff. CrashLoopBackOff means that a pod crashes right after it starts. Kubernetes restarts the pod, it crashes again, and the cycle repeats, with Kubernetes waiting a little longer before each new attempt (hence the “back-off”).
It’s hard to know what is going on as the pod is restarting before you even have a chance to take a look at its logs.
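A quick way to confirm this is to list your pods: a crashing pod shows CrashLoopBackOff in the STATUS column and a steadily climbing RESTARTS count.

kubectl get pods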
So how do you figure out why your pod is failing?
The trick is to call the logs command with the ‘-p’ (short for ‘--previous’) flag, which shows the logs from the previous, crashed instance of the container:
kubectl logs [podname] -p
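For example, assuming the failing pod is called my-app-pod (a hypothetical name), you would run:

kubectl logs my-app-pod -p

If the pod runs more than one container, add -c [containername] to pick the one that is crashing.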