helm used http proxy to access kube-apiserver when installing kubelet-csr-approver (with temporary solution) #11095
Comments
Could you show the kube-vip settings?
Thanks for your reply. For your information, after I added back the `no_proxy` env the installation worked. Here are my kube-vip settings:

## `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`

```yaml
kube_proxy_strict_arp: true
```

## `inventory/mycluster/group_vars/k8s_cluster/addons.yml`

```yaml
# Kube VIP
kube_vip_enabled: true
kube_vip_arp_enabled: true
kube_vip_controlplane_enabled: true
kube_vip_address: 10.XX.XX.XX
loadbalancer_apiserver:
  address: "{{ kube_vip_address }}"
  port: 6443
kube_vip_interface: enp6s19
kube_vip_lb_enable: true
kube_vip_services_enabled: false
kube_vip_enableServicesElection: true
```
@KubeKyrie I think it just needs a simple fix. FYI, to enable kubelet-csr-approver we need `kubelet_csr_approver_enabled: true`.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
What happened?
Hi Kubespray Team. I am new to k8s. Glad to use kubespray!
I found that when installing kubelet-csr-approver, Helm uses the HTTP proxy to reach kube-apiserver because the task environment lacks the `no_proxy` variable. Our corporate proxy logged the requests like:

```
CONNECT lb-apiserver.kubernetes.local:6443
```

For security and privacy, I have hidden the username, IPs, and proxy URL.
Source shortcut: https://github.com/kubernetes-sigs/kubespray/blob/v2.24.1/roles/kubernetes-apps/kubelet-csr-approver/meta/main.yml
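To illustrate why the missing variable matters, here is a small Python sketch, using only the standard library's env-based proxy rules (the proxy URL is a placeholder), of how `no_proxy` matching decides whether a host is proxied:

```python
import os
import urllib.request

# Start from a clean slate so ambient no_proxy settings don't interfere.
for var in ("no_proxy", "NO_PROXY"):
    os.environ.pop(var, None)
os.environ["https_proxy"] = "http://proxy.corp.example:3128"  # placeholder proxy URL

host = "lb-apiserver.kubernetes.local"

# Without no_proxy, proxy-aware clients route the apiserver name via the proxy.
print(bool(urllib.request.proxy_bypass_environment(host)))  # False: goes through the proxy

# With no_proxy covering the apiserver name, the proxy is bypassed.
os.environ["no_proxy"] = "lb-apiserver.kubernetes.local,.svc,.cluster.local"
print(bool(urllib.request.proxy_bypass_environment(host)))  # True: direct connection
```

Helm honors the same `http_proxy`/`https_proxy`/`no_proxy` convention, so when Ansible passes the proxy variables without `no_proxy`, the apiserver name has no exemption and the CONNECT goes to the corporate proxy.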
playbook log:
What did you expect to happen?
The task should pass normally, with Helm reaching kube-apiserver directly instead of through the proxy.
How can we reproduce it (as minimally and precisely as possible)?
Enable `kubelet_csr_approver_enabled` and configure an HTTP proxy, then run the cluster playbook.
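A minimal inventory sketch for reproduction (the proxy URL is a placeholder; the variable names follow Kubespray's defaults):

```yaml
# inventory/mycluster/group_vars/all/all.yml -- placeholder proxy settings
http_proxy: "http://proxy.corp.example:3128"
https_proxy: "http://proxy.corp.example:3128"

# inventory/mycluster/group_vars/k8s_cluster/addons.yml
kubelet_csr_approver_enabled: true
```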
(See the hardening guide: https://github.com/kubernetes-sigs/kubespray/blob/v2.24.1/docs/hardening.md.)

OS
```
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
Version of Ansible
```
ansible [core 2.15.9]
  config file = /home/user/test/kubespray/ansible.cfg
  configured module search path = ['/home/user/test/kubespray/library']
  ansible python module location = /home/user/miniconda3/envs/kubespray/lib/python3.11/site-packages/ansible
  ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/user/miniconda3/envs/kubespray/bin/ansible
  python version = 3.11.8 (main, Feb 26 2024, 21:39:34) [GCC 11.2.0] (/home/user/miniconda3/envs/kubespray/bin/python)
  jinja version = 3.1.2
  libyaml = True
```
Version of Python
Python 3.11.8
Version of Kubespray (commit)
2cb8c85 (v2.24.1)
Network plugin used
calico
Full inventory with variables
Provided in the reproduction steps above.
Command used to invoke ansible
```
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root --ask-become-pass -vvvvvv cluster.yml
```
Output of ansible run
Playbook output: the verbose output shows the proxy env vars being applied to the task.
Anything else we need to know
My solution is adding back the `no_proxy` env in the role's meta (https://github.com/kubernetes-sigs/kubespray/blob/v2.24.1/roles/kubernetes-apps/kubelet-csr-approver/meta/main.yml). I also found a similar structure in custom_cni (https://github.com/kubernetes-sigs/kubespray/blob/v2.24.1/roles/network_plugin/custom_cni/meta/main.yml).
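A hedged sketch of the fix, mirroring the `environment` block used elsewhere in Kubespray roles (the exact task layout in meta/main.yml may differ; `no_proxy` is the part that was missing):

```yaml
# roles/kubernetes-apps/kubelet-csr-approver/meta/main.yml -- sketch of the env block
environment:
  http_proxy: "{{ http_proxy | default('') }}"
  https_proxy: "{{ https_proxy | default('') }}"
  no_proxy: "{{ no_proxy | default('') }}"  # restoring this keeps helm off the corporate proxy
```

With `no_proxy` including `lb-apiserver.kubernetes.local`, Helm's connection to the apiserver is exempted from the proxy and goes direct.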