
when multus is used cluster playbook run fails on v2.26 #11794

Open
bvujnovac opened this issue Dec 12, 2024 · 0 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

bvujnovac commented Dec 12, 2024

What happened?

When multus is enabled (kube_network_plugin_multus: true), the cluster playbook run fails with the following message:

TASK [kubernetes-apps/network_plugin/multus : Multus | Start resources] ********
fatal: [ipv46-deploy1]: FAILED! => {"msg": "'ansible.vars.hostvars.HostVarsVars object' has no attribute 'multus_manifest_2.results'. 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'multus_manifest_2.results'"}

The issue seems to be related to the change in #10934.
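A plain-Python sketch of why the lookup fails (hypothetical data; `extract` below is a simplified model of Ansible's filter, not Kubespray code): the dotted string is treated as one literal key, so `hostvars[host]` is asked for an entry named `multus_manifest_2.results`, which does not exist.

```python
# Hypothetical hostvars shape mimicking a registered task result
# (host and manifest names are illustrative only).
hostvars = {
    "ipv46-deploy2": {
        "multus_manifest_2": {"results": [{"item": {"name": "multus-cm"}}]},
    },
}


def extract(item, container, morekeys=None):
    # Simplified model of Ansible's extract filter: every key is looked
    # up literally; a dotted string is one key, not a traversal path.
    value = container[item]
    if morekeys is not None:
        keys = [morekeys] if isinstance(morekeys, str) else morekeys
        for key in keys:
            value = value[key]
    return value


# Broken form: asks for the literal key 'multus_manifest_2.results'.
try:
    extract("ipv46-deploy2", hostvars, "multus_manifest_2")["results"]
    extract("ipv46-deploy2", hostvars, "multus_manifest_2.results")
    dotted_lookup_failed = False
except KeyError:
    dotted_lookup_failed = True

# Working form: extract the registered variable, then take its results.
manifests = [extract(h, hostvars, "multus_manifest_2")["results"]
             for h in ["ipv46-deploy2"]]
print(dotted_lookup_failed, manifests)
```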

What did you expect to happen?

The expectation was for the task to succeed.

How can we reproduce it (as minimally and precisely as possible)?

copy sample inventory
cp -rfp inventory/sample inventory/mycluster

add hosts_ipv46-deploy.yaml to inventory/mycluster

all:
  vars:
    ansible_connection: "ssh"
    ansible_user: "so"
    ansible_ssh_pass: "changeme"
  hosts:
    ipv46-deploy1:
      ansible_host: 172.17.180.50
      ip: 172.17.180.50
      ip6: 2001:2001:362::50
      access_ip: 172.17.180.50
    ipv46-deploy2:
      ansible_host: 172.17.180.51
      ip: 172.17.180.51
      ip6: 2001:2001:362::51
      access_ip: 172.17.180.51
    ipv46-deploy3:
      ansible_host: 172.17.180.52
      ip: 172.17.180.52
      ip6: 2001:2001:362::52
      access_ip: 172.17.180.52
  children:
    kube_control_plane:
      hosts:
        ipv46-deploy1:
    kube_node:
      hosts:
        ipv46-deploy2:
        ipv46-deploy3:
    etcd:
      hosts:
        ipv46-deploy1:
    calico_rr:
      hosts: {}
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
        calico_rr:

add cluster variables cluster-variables_dual_stack.yaml to inventory/mycluster

kube_version: v1.30.6
kube_network_plugin: calico
kube_proxy_mode: ipvs
kube_network_plugin_multus: true
macvlan_interface: "net1"
multus_cni_version: "0.3.1"
enable_dual_stack_networks: true

container_manager: crio
etcd_deployment_type: host
download_container: false
skip_downloads: false
crio_insecure_registries:
  - registry-int.com:5000
  - registry-int.com:443

metrics_server_enabled: true
metrics_server_kubelet_insecure_tls: true
metrics_server_metric_resolution: 15s
metrics_server_replicas: 2

enable_nodelocaldns: false
auto_renew_certificates: true

helm_enabled: true

ntp_enabled: true
ntp_manage_config: true

kube_service_addresses: 10.96.0.0/16
kube_pods_subnet: 10.240.0.0/16
kube_service_addresses_ipv6: 2001:db8:42:1::/112
kube_pods_subnet_ipv6: 2001:db8:42:0::/56
kube_network_node_prefix_ipv6: 64

deploy_netchecker: false

calico_vxlan_mode: Always
calico_vxlan_mode_ipv6: Always

nat_outgoing: true
nat_outgoing_ipv6: true

kubeconfig_localhost: false

upstream_dns_servers:
  - 10.14.252.251
  - 10.14.252.248

run playbook
ansible-playbook -i inventory/mycluster/hosts_ipv46-deploy.yaml -e @inventory/mycluster/cluster-variables_dual_stack.yaml --become --become-user=root cluster.yml

OS

Linux 5.14.0-503.15.1.el9_5.x86_64 x86_64
NAME="Rocky Linux"
VERSION="9.5 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.5 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
VENDOR_NAME="RESF"
VENDOR_URL="https://resf.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2032-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.5"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"

Version of Ansible

ansible [core 2.16.14]
config file = /root/k8s-1.30_install/kubespray/ansible.cfg
configured module search path = ['/root/k8s-1.30_install/kubespray/library']
ansible python module location = /root/k8s-1.30_install/kubespray-venv/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /root/k8s-1.30_install/kubespray-venv/bin/ansible
python version = 3.11.9 (main, Sep 11 2024, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] (/root/k8s-1.30_install/kubespray-venv/bin/python3.11)
jinja version = 3.1.4
libyaml = True

Version of Python

Python 3.11.9

Version of Kubespray (commit)

75e12e8

Network plugin used

calico

Full inventory with variables

inventory with variables:
https://gist.github.com/bvujnovac/a17012e5c3c36a9d0d84e89d1206e217#file-gistfile1-txt

Command used to invoke ansible

ansible-playbook -i inventory/mycluster/hosts_ipv46-deploy.yaml -e @inventory/mycluster/cluster-variables_dual_stack.yaml --become --become-user=root cluster.yml

Output of ansible run

playbook run output:

https://gist.github.com/bvujnovac/ec4400289b25b31c02744c032d8552dd#file-gistfile1-txt

ansible -i inventory/mycluster/hosts_ipv46-deploy.yaml all -m debug -a "var=hostvars[inventory_hostname]"

Anything else we need to know

I've tried changing the offending line, since I don't think multus_manifest_2.results can be accessed directly like that.

The following patch seems to work, but I have only tested it for my limited use case.

diff --git a/roles/kubernetes-apps/network_plugin/multus/tasks/main.yml b/roles/kubernetes-apps/network_plugin/multus/tasks/main.yml
index 54dd1ed61..82ff57efb 100644
--- a/roles/kubernetes-apps/network_plugin/multus/tasks/main.yml
+++ b/roles/kubernetes-apps/network_plugin/multus/tasks/main.yml
@@ -9,7 +9,7 @@
     state: "latest"
   delegate_to: "{{ groups['kube_control_plane'][0] }}"
   run_once: true
-  with_items: "{{ (multus_manifest_1.results | default([])) + (multus_nodes_list | map('extract', hostvars, 'multus_manifest_2.results') | default([]) | list) }}"
+  with_items: "{{ (multus_manifest_1.results | default([])) + (multus_nodes_list | map('extract', hostvars, 'multus_manifest_2') | map(attribute='results') | default([]) | list) }}"
   loop_control:
     label: "{{ item.item.name if item != None else 'skipped' }}"
   vars:
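The patched expression can be sketched in plain Python (hypothetical host and manifest names; itemgetter stands in for Jinja2's map(attribute='results')): each node's registered variable is extracted first, and only then is .results read from it, which is the two-step access the original dotted string skipped.

```python
from operator import itemgetter

# Illustrative stand-in for hostvars with per-node registered results.
hostvars = {
    "node2": {"multus_manifest_2": {"results": [{"item": {"name": "m2"}}]}},
    "node3": {"multus_manifest_2": {"results": [{"item": {"name": "m3"}}]}},
}
multus_nodes_list = ["node2", "node3"]

# Step 1: extract the whole registered variable for each node
# (the map('extract', hostvars, 'multus_manifest_2') part of the patch).
registered = [hostvars[node]["multus_manifest_2"] for node in multus_nodes_list]

# Step 2: take .results from each (the map(attribute='results') part).
per_node_results = list(map(itemgetter("results"), registered))
print(per_node_results)
```

Note this yields a list of per-node results lists; as I understand it, with_items flattens one level, so each manifest result still becomes an individual loop item.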
bvujnovac added the kind/bug label on Dec 12, 2024