
failed to provision volume with StorageClass "local-storage": configuration error, no node was specified #332

Open
FeiYing9 opened this issue Apr 25, 2023 · 11 comments


@FeiYing9

I have deployed local-path-provisioner v0.0.24. When I apply a busybox workload as a test, the PVC stays Pending and the provisioner reports configuration error, no node was specified.

Here's my cluster info:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

# kubectl get nodes -owide
NAME     STATUS   ROLES         AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                CONTAINER-RUNTIME
master   Ready    master,node   15m   v1.23.1   172.30.0.72   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-8-amd64                 docker://20.10.9
node1    Ready    node          14m   v1.23.1   172.30.0.77   <none>        Ubuntu 16.04 LTS               4.4.0-21-generic              docker://20.10.9
node2    Ready    node          14m   v1.23.1   172.30.0.78   <none>        CentOS Linux 7 (Core)          3.10.0-1160.88.1.el7.x86_64   docker://20.10.9

Some details about the local-path-provisioner:

  • configmap
# k get cm local-path-provisioner-files -o yaml
apiVersion: v1
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/var/log/local"]
        }
      ]
    }
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: "debian:10.12"
        imagePullPolicy: IfNotPresent
  setup: |-
    #!/bin/sh
    set -eu
    #mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    #rm -rf "$VOL_DIR"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: local-path-provisioner
    meta.helm.sh/release-namespace: default
  labels:
    app: local-path-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: local-path-provisioner
    heritage: Helm
  name: local-path-provisioner-files
  namespace: default
  • deployment
# k get deploy local-path-provisioner -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: local-path-provisioner
    meta.helm.sh/release-namespace: default
  labels:
    app: local-path-provisioner
    app.kubernetes.io/managed-by: Helm
  name: local-path-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      containers:
      - command:
        - local-path-provisioner
        - --debug
        - start
        - --provisioner-name
        - rancher.io/local-path
        - --configmap-name
        - local-path-provisioner-files
        image: my-registry-addr/rancher/local-path-provisioner:v0.0.24
        imagePullPolicy: IfNotPresent
        name: rancher-local-path-provisioner
        securityContext:
          runAsNonRoot: true
          runAsUser: 65534
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 65534
      serviceAccount: default-admin
  • storageclass and pvc
# k get pvc busybox-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    meta.helm.sh/release-name: busybox
    meta.helm.sh/release-namespace: default
    volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
    volume.kubernetes.io/storage-provisioner: rancher.io/local-path
    volumeType: local
  creationTimestamp: "2023-04-25T06:38:17Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: busybox
    app.kubernetes.io/managed-by: Helm
    appGroup: test
    chart: busybox
    controller: "false"
    heritage: Helm
    nimOwner: yunxin
  name: busybox-0
  namespace: default
  resourceVersion: "1831"
  uid: 0af80c24-c596-4fe8-a1bd-10ec933eab85
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: local-storage
  volumeMode: Filesystem

# k get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
local-storage   rancher.io/local-path   Retain          Immediate           false                  15m

# k get sc local-storage -o yaml
allowVolumeExpansion: false
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: local-path-provisioner
    meta.helm.sh/release-namespace: default
    storageclass.kubernetes.io/is-default-class: "false"
  labels:
    app: local-path-provisioner
    app.kubernetes.io/managed-by: Helm
  name: local-storage
parameters:
  nodeSelector: kubernetes.io/hostname=node1
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: Immediate
  • logs
# k logs -f deploy/local-path-provisioner
time="2023-04-25T14:37:30+08:00" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/var/log/local\"]}]}"
time="2023-04-25T14:37:30+08:00" level=debug msg="Provisioner started"
I0425 14:37:30.158974       1 controller.go:811] Starting provisioner controller rancher.io/local-path_local-path-provisioner-5bbd874ff9-lvs86_ecc8ff42-bcce-44de-885f-ec2d71f3ba75!
I0425 14:37:30.259124       1 controller.go:860] Started provisioner controller rancher.io/local-path_local-path-provisioner-5bbd874ff9-lvs86_ecc8ff42-bcce-44de-885f-ec2d71f3ba75!

I0425 14:38:17.644571       1 controller.go:1337] provision "default/busybox-0" class "local-storage": started
W0425 14:38:17.644670       1 controller.go:937] Retrying syncing claim "0af80c24-c596-4fe8-a1bd-10ec933eab85" because failures 0 < threshold 15
E0425 14:38:17.644692       1 controller.go:957] error syncing claim "0af80c24-c596-4fe8-a1bd-10ec933eab85": failed to provision volume with StorageClass "local-storage": configuration error, no node was specified
I0425 14:38:17.645092       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-0", UID:"0af80c24-c596-4fe8-a1bd-10ec933eab85", APIVersion:"v1", ResourceVersion:"1831", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/busybox-0"
I0425 14:38:17.645115       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-0", UID:"0af80c24-c596-4fe8-a1bd-10ec933eab85", APIVersion:"v1", ResourceVersion:"1831", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "local-storage": configuration error, no node was specified
I0425 14:38:32.645501       1 controller.go:1337] provision "default/busybox-0" class "local-storage": started
W0425 14:38:32.645599       1 controller.go:937] Retrying syncing claim "0af80c24-c596-4fe8-a1bd-10ec933eab85" because failures 1 < threshold 15
E0425 14:38:32.645625       1 controller.go:957] error syncing claim "0af80c24-c596-4fe8-a1bd-10ec933eab85": failed to provision volume with StorageClass "local-storage": configuration error, no node was specified
I0425 14:38:32.645761       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-0", UID:"0af80c24-c596-4fe8-a1bd-10ec933eab85", APIVersion:"v1", ResourceVersion:"1831", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/busybox-0"
I0425 14:38:32.645805       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"busybox-0", UID:"0af80c24-c596-4fe8-a1bd-10ec933eab85", APIVersion:"v1", ResourceVersion:"1831", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "local-storage": configuration error, no node was specified
@FeiYing9
Author

I have read through the code behind this error: it seems the PVC has no volume.kubernetes.io/selected-node annotation, so the provisioner reports the error.

I have no idea why, or how to resolve it.
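
A quick way to check whether the scheduler has set that annotation on a claim (using the PVC from above; an empty result means it is missing):

# kubectl get pvc busybox-0 -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'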

@FeiYing9
Author

There are no PSP resources:

# k get psp -A
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found

@captaingabi

captaingabi commented Apr 25, 2023

I have the same issue. The funny thing is that yesterday it was working fine, and I did not change anything on my local k3s setup that uses this local-path provisioner.

Edit:
Of course, if I add

annotations:
  volume.kubernetes.io/selected-node: desktop-ad24a9m

to the metadata section of my PVC or volumeClaimTemplates, it works. (desktop-ad24a9m is my node.)
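
For reference, a minimal sketch of the volumeClaimTemplates variant; the template name and storage size are illustrative, and desktop-ad24a9m is the node from above:

volumeClaimTemplates:
- metadata:
    name: data  # illustrative name
    annotations:
      # workaround: pin the claim to a specific node so the provisioner
      # can resolve a path without waiting on the scheduler
      volume.kubernetes.io/selected-node: desktop-ad24a9m
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi  # illustrative size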

@denisok

denisok commented Jul 19, 2023

I guess it is because #296 updated sigs.k8s.io/sig-storage-lib-external-provisioner/v8 to v8.0.0, so it now respects volume.kubernetes.io/selected-node, which is only set for WaitForFirstConsumer binding.
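
If that is the cause, the usual fix (untested here, but matching the upstream example StorageClass) is delayed binding, so the scheduler picks a node and records it in the annotation before provisioning starts; a minimal sketch:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: rancher.io/local-path
reclaimPolicy: Retain
# WaitForFirstConsumer delays binding until a pod is scheduled, which is when
# the volume.kubernetes.io/selected-node annotation gets set on the PVC.
volumeBindingMode: WaitForFirstConsumer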


github-actions bot commented Jun 9, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions bot added the stale label Jun 9, 2024

This issue was closed because it has been stalled for 5 days with no activity.

@kuseo

kuseo commented Nov 21, 2024

Still happens in v0.0.30 when the volumeBindingMode of the StorageClass is Immediate.

@brokedba

Still happens in v0.0.30 when the volumeBindingMode of the StorageClass is Immediate.

I confirm it still does to this day.

derekbit removed the stale label Dec 21, 2024
derekbit reopened this Dec 21, 2024
@derekbit
Member

Will investigate the issue soon

@derekbit
Member

Currently, I don't have many thoughts on the issue. Any feedback or contribution is welcome. Thanks.

@polomani

polomani commented Jan 28, 2025

@derekbit I faced the same issue; below are the details and the workaround I applied (the one mentioned by captaingabi).
Maybe there is something I could help debug here? Not sure where to poke at this point.

Original manifests (these produce configuration error, no node was specified):

# Config map
apiVersion: v1
data:
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "${VOL_DIR}"
    chmod 700 "${VOL_DIR}/.."
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "${VOL_DIR}"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: "rancher/mirrored-library-busybox:1.36.1"
        imagePullPolicy: IfNotPresent
  config.json: |-
    {
      "storageClassConfigs": {
        "local-path": {
          "nodePathMap": [
            {
              "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
              "paths": ["/var/lib/rancher/k3s/storage"]
            }
          ]
        },
        "local-path-ssd": {
          "nodePathMap": [
            {
              "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
              "paths": ["/mnt/ssd-storage"]
            }
          ]
        }
      }
    }
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: kube-system
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-ssd
provisioner: rancher.io/local-path
parameters:
  nodePath: /mnt/ssd-storage
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  storageClassName: local-path-ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}

Once I hardcode the node name, it begins to provision the PV properly:

# PersistentVolumeClaim (metadata fragment)
metadata:
  annotations:
    volume.kubernetes.io/selected-node: nodename
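
For an already-created claim, the same workaround can presumably be applied in place with a patch (PVC and node names taken from the example above):

# kubectl patch pvc pg-data -p '{"metadata":{"annotations":{"volume.kubernetes.io/selected-node":"nodename"}}}'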
