What happened:
Applying the kustomize deployment prints a deprecation warning suggesting the deployment manifest should be updated:
$ kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.16.6
namespace/node-feature-discovery created
customresourcedefinition.apiextensions.k8s.io/nodefeaturegroups.nfd.k8s-sigs.io created
customresourcedefinition.apiextensions.k8s.io/nodefeaturerules.nfd.k8s-sigs.io created
customresourcedefinition.apiextensions.k8s.io/nodefeatures.nfd.k8s-sigs.io created
serviceaccount/nfd-gc created
serviceaccount/nfd-master created
serviceaccount/nfd-worker created
role.rbac.authorization.k8s.io/nfd-worker created
clusterrole.rbac.authorization.k8s.io/nfd-gc created
clusterrole.rbac.authorization.k8s.io/nfd-master created
rolebinding.rbac.authorization.k8s.io/nfd-worker created
clusterrolebinding.rbac.authorization.k8s.io/nfd-gc created
clusterrolebinding.rbac.authorization.k8s.io/nfd-master created
configmap/nfd-master-conf created
configmap/nfd-worker-conf created
deployment.apps/nfd-gc created
Warning: spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key: node-role.kubernetes.io/master is use "node-role.kubernetes.io/control-plane" instead
deployment.apps/nfd-master created
daemonset.apps/nfd-worker created
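For context, the field path in the warning points at a preferred node affinity term in the nfd-master deployment. A minimal sketch of the kind of stanza that triggers it is below; the exact weights, operators, and values in the overlay may differ:

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: node-role.kubernetes.io/master        # deprecated label key flagged by the warning
            operator: In
            values: [""]
      - weight: 1
        preference:
          matchExpressions:
          - key: node-role.kubernetes.io/control-plane # replacement label key
            operator: In
            values: [""]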
What you expected to happen:
no warning
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version):
Client Version: v1.31.4
Kustomize Version: v5.4.2
Server Version: v1.31.2
We tolerate both taints. The node-role.kubernetes.io/master taint was changed around k8s v1.25, so I'd say we shouldn't remove it yet, to avoid deliberately breaking existing users running old versions of Kubernetes.
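For illustration, "tolerating both" would look roughly like the sketch below in the nfd-master pod spec; this is an approximation of the shipped manifest, not a verbatim copy. Dropping the deprecated node-role.kubernetes.io/master entries would silence the warning on newer clusters but would stop the master-node preference/toleration from applying on older clusters that only carry the old label and taint.

  tolerations:
  - key: node-role.kubernetes.io/master         # old taint, kept for older Kubernetes versions
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane  # current taint
    operator: Exists
    effect: NoSchedule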