What to do if your pod isn't working in Kubernetes

Some simple troubleshooting steps to help you get your pod running

We are assuming you have your Kubernetes cluster running, that you can access it with kubectl, and that you have a pod being created either from a pod definition file or by some other method (a replication controller, deployment, etc.). We tend to follow best practice and create our pods in a namespace, so assuming your namespace is called pod-test you should be able to find the name of the pod with:

kubectl --namespace pod-test get pods

Let's say you are trying to start up a wiki pod in your pod-test namespace; you issue the above command and get this back:

NAME              READY     STATUS         RESTARTS   AGE
wiki-test-x7o8z   0/1       ErrImagePull   0          22s

You now know the name of the pod, but you can see that its status is ErrImagePull (or sometimes ImagePullBackOff). The most likely reason for this is that you are pulling from a private container image registry: you need to add an imagePullSecrets entry to your pod definition and then create the matching secret in the pod-test namespace.
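As a sketch of what that looks like (the secret name regcred, the registry address and the credentials here are just placeholders), you could create the secret with:

kubectl --namespace pod-test create secret docker-registry regcred \
    --docker-server=registry.example.com \
    --docker-username=<your-username> \
    --docker-password=<your-password>

and then reference it in the pod spec:

spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: wiki-test
    image: registry.example.com/wiki-test:latest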

So you add those things to your Kubernetes cluster and try again. Note that you can use:

kubectl --namespace pod-test delete pod wiki-test-x7o8z

to get rid of the failing pod, but if you created it with a replication controller or deployment, deleting it will just cause it to be re-created automatically. Assuming you created it from a file called wiki-test.yaml, you can delete the pod and its controller with:

kubectl --namespace pod-test delete -f ./wiki-test.yaml
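Once you have fixed the definition (in our example, adding the imagePullSecrets entry and creating the secret), you can recreate everything from the same file with:

kubectl --namespace pod-test create -f ./wiki-test.yaml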

Once you have corrected the image pull, your problems might not be over; sometimes you will see output similar to this after creating a pod:

NAME              READY     STATUS             RESTARTS   AGE
wiki-test-p8s6t   0/1       CrashLoopBackOff   3          27s

Note the status is CrashLoopBackOff and the restart count is 3. This means the container keeps crashing and Kubernetes has already restarted it three times. Before you kill this pod as you did before, run the following:

kubectl --namespace pod-test logs wiki-test-p8s6t

This will show you any logs the container wrote as it tried to start, and they can often give you a clue as to why the pod is failing. Assuming you have Docker installed locally, it is usually a good idea to run the image locally and try to work out why it is not starting there. Often the container is expecting some further configuration, such as environment variables, which you can set in the pod YAML definition file before retrying.
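If the container has already been restarted, you can also add --previous to see the logs from the last failed run:

kubectl --namespace pod-test logs --previous wiki-test-p8s6t

As a sketch of the kind of missing configuration we mean (the image name and the variable names here are made up), if the wiki image needed a database host and password you could first test it locally with Docker:

docker run --rm -e WIKI_DB_HOST=mydb -e WIKI_DB_PASSWORD=secret registry.example.com/wiki-test:latest

and, once it starts cleanly, add the same variables to the container spec in wiki-test.yaml:

containers:
- name: wiki-test
  image: registry.example.com/wiki-test:latest
  env:
  - name: WIKI_DB_HOST
    value: "mydb"
  - name: WIKI_DB_PASSWORD
    value: "secret"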

If you find that the pod is starting but not behaving as you would like, you can usually shell into it like this:

kubectl --namespace pod-test exec wiki-test-u5w7p -i -t -- bash -il

Note that if you have multiple containers in a pod, you have to specify which container you want to shell into using the -c flag.
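For example, if the wiki pod also ran a (hypothetical) sidecar container called nginx, you could shell into that container with:

kubectl --namespace pod-test exec wiki-test-u5w7p -c nginx -i -t -- bash -il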

The kubectl describe pod command is also useful for seeing whether there is anything odd in how your pod is behaving or how it is set up:

kubectl --namespace pod-test describe pod wiki-test-u5w7p

Sometimes a pod won't get scheduled at all. This is often because you have specified a node selector that uses a label that is not present (or is misspelled), or a resource request (CPU or memory) that Kubernetes cannot fulfill. Then:

kubectl describe nodes

will give you more information about node labels and resource allocation on your nodes.
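As an example of what might be wrong (the label and resource values here are only illustrative), a pod spec like this will stay in Pending if no node carries the disktype=ssd label, or if no node has 4Gi of memory and 2 CPUs free to request:

spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: wiki-test
    image: registry.example.com/wiki-test:latest
    resources:
      requests:
        memory: "4Gi"
        cpu: "2"

In that situation, kubectl describe pod will usually show a FailedScheduling event explaining which condition could not be met.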

Contact us

If you've found this article useful, perhaps ngineered can help you. Please feel free to contact us using the details below if you want to discuss what services ngineered can provide to your company.


Ian MacDougall

Ian is responsible for Ngineered’s customer builds. An expert in the requirements of the enterprise, he is known for his rock-solid builds, and his sensitivity to legacy systems.
