Azure Kubernetes Service–Volume node affinity conflict

When I tried to deploy a pod on our AKS cluster, it hung in the Pending state. I looked at the pod events and noticed the following warning:

FailedScheduling – 1 node(s) had volume node affinity conflict
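This warning shows up in the events of the pending pod. To surface it yourself, describe the pod (the namespace and pod name below are placeholders, not the names from my deployment):

$ kubectl get pods -n <namespace>
$ kubectl describe pod <pod-name> -n <namespace>

The FailedScheduling event at the bottom of the describe output contains the message quoted above.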

The pod I tried to deploy had a persistent volume claim, and I was certain that the persistent volume was successfully deployed and available.

What was going wrong?

It turned out that my AKS cluster was deployed in 3 availability zones but I had only 2 nodes running:

$ kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"
Name:               aks-agentpool-38609413-vmss000003
                    failure-domain.beta.kubernetes.io/zone=westeurope-1
Name:               aks-agentpool-38609413-vmss000004
                    failure-domain.beta.kubernetes.io/zone=westeurope-2

Here we can see that our nodes live in zones westeurope-1 and westeurope-2.

If we now take a look at our persistent volume, we can see that it is deployed in zone westeurope-3:

$ kubectl describe pv pvc-ecc
                   failure-domain.beta.kubernetes.io/zone=westeurope-3
Annotations:       pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/azure-disk
                   volumehelper.VolumeDynamicallyCreatedByKey: azure-disk-dynamic-provisioner
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      default
Status:            Bound
Claim:             appservice-ns/appservice-ext-k8se-build-service
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          100Gi
Node Affinity:
  Required Terms:
    Term 0:        failure-domain.beta.kubernetes.io/region in [westeurope]
                   failure-domain.beta.kubernetes.io/zone in [westeurope-3]
Message:

That explains why the deployment failed: the Azure disk is a zonal resource living in westeurope-3, so the pod can only be scheduled on a node in that zone, and no such node existed.
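If you have more than one persistent volume, a quicker way to check all their zones at once is to print the zone label as an extra column (on newer clusters the same information lives under the topology.kubernetes.io/zone label):

$ kubectl get pv -L failure-domain.beta.kubernetes.io/zone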

As a quick solution I introduced a third node in the cluster so that at least one node runs in the same zone as the persistent volume.
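Scaling the node pool can be done through the Azure CLI; the resource group, cluster and node pool names below are placeholders for your own values:

$ az aks nodepool scale --resource-group <resource-group> --cluster-name <cluster-name> --name <nodepool-name> --node-count 3

A more structural fix for newly provisioned volumes is to use a StorageClass with volumeBindingMode set to WaitForFirstConsumer. The disk is then only created once the pod has been scheduled, so it always ends up in the zone of a node that actually exists. A minimal sketch, using the same azure-disk provisioner as in the output above (the class name is just an example, not part of my setup):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-wffc   # example name, not from the original cluster
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer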
