Kubernetes nodes cannot be provisioned any more in subnets tagged with sigs.k8s.io/cluster-api-provider-aws/association: secondary #5227
Labels: kind/bug, needs-priority, needs-triage
/kind bug
What steps did you take and what happened:
Upgraded the CAPA provider to v2.7.1 and then tried to upgrade one of my AWS clusters to a newer Kubernetes version.
During the rolling update of the MachineDeployments, CAPA v2.7.1 rejected creation of new EC2 instances with the error:
subnet XXXX belongs to a secondary CIDR block which won't be used to create instances.
What did you expect to happen:
The new EC2 instances are provisioned, as they were before the upgrade to v2.7.1.
Anything else you would like to add:
Downgrading CAPA provider to v2.6.1 resolved the issue.
The problem might be around this code block in pkg/cloud/services/ec2/instance.go.

Environment:
- Kubernetes version (from kubectl version): v1.29.10-eks-7f9249a
- OS (from /etc/os-release): Ubuntu 20.04.6 LTS

We use four private subnets in AWS which are pre-provisioned by our IT team:
We followed the docs at https://cluster-api-aws.sigs.k8s.io/topics/eks/pod-networking#unmanaged-static-vpc:
sigs.k8s.io/cluster-api-provider-aws/association=secondary
We do not want to use the first two subnets for Kubernetes nodes as those are pretty small and could be easily exhausted when we scale out.
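For reference, the error message suggests a filter that drops any subnet whose CIDR lies outside the VPC's primary CIDR block before instance placement. A minimal sketch of what such a check might look like (the types and function names below are hypothetical, not the actual CAPA implementation):

```go
// Hypothetical sketch of a secondary-CIDR subnet filter like the one
// suspected in pkg/cloud/services/ec2/instance.go. Names are illustrative,
// not the real CAPA code.
package main

import (
	"fmt"
	"net"
)

// Subnet is a minimal stand-in for CAPA's subnet model.
type Subnet struct {
	ID   string
	CIDR string
}

// inPrimaryCIDR reports whether the subnet's CIDR falls inside the
// VPC's primary CIDR block.
func inPrimaryCIDR(primaryCIDR string, s Subnet) (bool, error) {
	_, primaryNet, err := net.ParseCIDR(primaryCIDR)
	if err != nil {
		return false, err
	}
	ip, _, err := net.ParseCIDR(s.CIDR)
	if err != nil {
		return false, err
	}
	return primaryNet.Contains(ip), nil
}

func main() {
	primary := "10.0.0.0/16"
	subnets := []Subnet{
		{ID: "subnet-a", CIDR: "10.0.1.0/24"},   // inside the primary CIDR
		{ID: "subnet-b", CIDR: "100.64.0.0/18"}, // secondary CIDR block
	}
	for _, s := range subnets {
		ok, err := inPrimaryCIDR(primary, s)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		if !ok {
			// This mirrors the rejection seen during the rolling update.
			fmt.Printf("subnet %s belongs to a secondary CIDR block which won't be used to create instances\n", s.ID)
			continue
		}
		fmt.Printf("subnet %s is eligible for instance creation\n", s.ID)
	}
}
```

If a check along these lines was introduced or tightened between v2.6.1 and v2.7.1, it would explain why subnets that are explicitly tagged for secondary association (and worked before the upgrade) are now rejected.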