GKE: can't get things to work when ingress: enabled: true #62
I've tried adding

But that didn't help. I tried changing to:

```yaml
paths:
  - "/*"
```

But that didn't help either.
---
The above was attempted using an Autopilot cluster. I'm going to try again with a standard cluster. Update: I saw the same behavior with Standard clusters.
---
An ingress is useless without an ingress controller, and typically you'll want to add a host at the very least. I'm guessing this is technically a bug in that the chart shouldn't pass the wrong values to the template and generate that service/ingress combo (I may have time to test this on Monday), but even if it passed compatible values you would still not get a working ingress. Are you using an ingress controller? If so, which one, and what annotations will allow it to map things to the path? Can you provide the ingress and service descriptions?
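For reference, a minimal sketch of what a working `ingress:` section of this chart's values might look like with an nginx ingress controller. The hostname and TLS details are placeholders for illustration, not values from this thread; the keys mirror the commented-out examples in the chart's own values.yaml:

```yaml
# Sketch only: assumes an nginx ingress controller is already installed.
ingress:
  enabled: true
  # nginx matches plain prefixes, so "/" rather than the GCE-style "/*"
  paths:
    - /
  hosts:
    - npm.example.com            # placeholder hostname
  annotations:
    kubernetes.io/ingress.class: nginx
```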
---
```
$ kubectl describe service npm-verdaccio
Name:              npm-verdaccio
Namespace:         default
Labels:            app=npm-verdaccio
                   app.kubernetes.io/instance=npm
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=verdaccio
                   app.kubernetes.io/version=5.0.1
                   helm.sh/chart=verdaccio-4.0.0
Annotations:       cloud.google.com/neg: {"ingress": true}
                   cloud.google.com/neg-status:
                     {"network_endpoint_groups":{"4873":"k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4"},"zones":["us-central1-a","us-central1-b"]}
                   meta.helm.sh/release-name: npm
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/instance=npm,app.kubernetes.io/name=verdaccio
Type:              ClusterIP
IP:                10.106.128.180
Port:              <unset>  4873/TCP
TargetPort:        http/TCP
Endpoints:         10.106.0.130:4873
Session Affinity:  None
Events:
  Type    Reason  Age    From            Message
  ----    ------  ----   ----            -------
  Normal  Create  2m38s  neg-controller  Created NEG "k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4" for default/npm-verdaccio-k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4--/4873-http-GCE_VM_IP_PORT-L7 in "us-central1-a".
  Normal  Create  2m34s  neg-controller  Created NEG "k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4" for default/npm-verdaccio-k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4--/4873-http-GCE_VM_IP_PORT-L7 in "us-central1-b".
  Normal  Attach  40s    neg-controller  Attach 1 network endpoint(s) (NEG "k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4" in zone "us-central1-a")
```

```
$ kubectl describe ingress npm-verdaccio
Name:             npm-verdaccio
Namespace:        default
Address:          107.xxx.255.xxx
Default backend:  default-http-backend:80 (10.106.0.2:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /   npm-verdaccio:4873 (10.106.0.130:4873)
Annotations:      ingress.kubernetes.io/backends: {"k8s-be-31319--328e1eabe967e100":"HEALTHY","k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4":"UNHEALTHY"}
                  ingress.kubernetes.io/forwarding-rule: k8s2-fr-x537ki4w-default-npm-verdaccio-0ou1tpqw
                  ingress.kubernetes.io/target-proxy: k8s2-tp-x537ki4w-default-npm-verdaccio-0ou1tpqw
                  ingress.kubernetes.io/url-map: k8s2-um-x537ki4w-default-npm-verdaccio-0ou1tpqw
                  meta.helm.sh/release-name: npm
                  meta.helm.sh/release-namespace: default
Events:
  Type    Reason     Age               From                     Message
  ----    ------     ----              ----                     -------
  Normal  Sync       7m41s             loadbalancer-controller  UrlMap "k8s2-um-x537ki4w-default-npm-verdaccio-0ou1tpqw" created
  Normal  Sync       7m39s             loadbalancer-controller  TargetProxy "k8s2-tp-x537ki4w-default-npm-verdaccio-0ou1tpqw" created
  Normal  Sync       7m35s             loadbalancer-controller  ForwardingRule "k8s2-fr-x537ki4w-default-npm-verdaccio-0ou1tpqw" created
  Normal  IPChanged  7m35s             loadbalancer-controller  IP is now 107.xxx.255.xxx
  Normal  Sync       50s (x6 over 9m)  loadbalancer-controller  Scheduled for sync
```
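Note the `ingress.kubernetes.io/backends` annotation above: the NEG backend is reported `UNHEALTHY`, which on GKE usually means the load balancer's health check isn't getting a 200 back. One way to point the GCE health check at Verdaccio's ping endpoint is a BackendConfig, sketched below. This is a GKE-specific CRD; the resource name is hypothetical and the thread never confirms this was the fix:

```yaml
# Sketch: custom GCE health check via a BackendConfig (GKE-specific CRD).
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: verdaccio-backendconfig    # hypothetical name
spec:
  healthCheck:
    type: HTTP
    requestPath: /-/ping           # Verdaccio's ping endpoint, also used by the chart's probes
    port: 4873
# The Service then opts in via an annotation, e.g.:
#   metadata:
#     annotations:
#       cloud.google.com/backend-config: '{"default": "verdaccio-backendconfig"}'
```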
---
I believe that I'm using container-native load balancing. I'm not using a Shared VPC. My cluster is running

My access logs on the ingress have
---
OK, I got something working finally! Similar to #48 (comment), I had to change the following in the Service:

```yaml
ports:
  - port: 4873
    protocol: TCP
    targetPort: http
```

to

```yaml
ports:
  - port: 80
    protocol: TCP
    targetPort: 4873
```

Then in the Ingress, I had to change

```yaml
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: npm-verdaccio
              servicePort: 4873
            path: /*
            pathType: ImplementationSpecific
```

to

```yaml
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: npm-verdaccio
              servicePort: 80
            path: /*
            pathType: ImplementationSpecific
```

Now I'm trying to figure out how to change
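Assembled, the manually patched pair looks roughly like the following. The apiVersion, selector, and annotation are inferred from the `kubectl describe` output earlier in the thread rather than quoted from it, so treat this as a sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: npm-verdaccio
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/instance: npm
    app.kubernetes.io/name: verdaccio
  ports:
    - port: 80           # the port the Ingress backend targets
      protocol: TCP
      targetPort: 4873   # the port the Verdaccio container listens on
---
apiVersion: networking.k8s.io/v1beta1   # matches the serviceName/servicePort style above
kind: Ingress
metadata:
  name: npm-verdaccio
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: npm-verdaccio
              servicePort: 80
            path: /*
            pathType: ImplementationSpecific
```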
---
And related to #48 (comment), I tried updating the service to

```yaml
ports:
  - port: 80
    protocol: TCP
    targetPort: http
```

But that didn't work.
---
It seems like this should be charts/charts/verdaccio/values.yaml, line 22 (at 2e563d8).
---
I did another full deployment with the following values:

```yaml
image:
  repository: verdaccio/verdaccio
  tag: 5.0.4
  pullPolicy: IfNotPresent
  pullSecrets: []
  # - dockerhub-secret

nameOverride: ""
fullnameOverride: ""

service:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  clusterIP: ""

  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []

  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  port: 80
  type: ClusterIP
  # nodePort: 31873

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
affinity: {}

## Tolerations for nodes
tolerations: []

## Additional pod labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}

## Additional pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}

replicaCount: 1

resources: {}
# requests:
#   cpu: 100m
#   memory: 512Mi
# limits:
#   cpu: 100m
#   memory: 512Mi

ingress:
  enabled: true
  # Set to true if you are on an old cluster where apiVersion extensions/v1beta1 is required
  useExtensionsApi: false
  # This (/*) is due to gce ingress needing a glob where nginx ingress doesn't (/).
  paths:
    - /*
  # Use this to define ALB ingress's actions annotation based routing. Ex: for ssl-redirect
  # Ref: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/tasks/ssl_redirect/
  extraPaths: []
  # hosts:
  #   - npm.blah.com
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  # tls:
  #   - secretName: secret
  #     hosts:
  #       - npm.blah.com

## Service account
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the Chart's fullname template
  name: ""

# Extra Environment Values - allows yaml definitions
extraEnvVars:
# - name: VALUE_FROM_SECRET
#   valueFrom:
#     secretKeyRef:
#       name: secret_name
#       key: secret_key
# - name: REGULAR_VAR
#   value: ABC

# Extra Init Containers - allows yaml definitions
extraInitContainers: []

configMap: |
  # This is the config file used for the docker images.
  # It allows all users to do anything, so don't use it on production systems.
  #
  # Do not configure host and port under `listen` in this file
  # as it will be ignored when using docker.
  # see https://github.com/verdaccio/verdaccio/blob/master/docs/docker.md#docker-and-custom-port-configuration
  #
  # Look here for more config file examples:
  # https://github.com/verdaccio/verdaccio/tree/master/conf
  #
  # path to a directory with all packages
  storage: /verdaccio/storage/data

  web:
    # WebUI is enabled as default, if you want disable it, just uncomment this line
    #enable: false
    title: Verdaccio

  auth:
    htpasswd:
      file: /verdaccio/storage/htpasswd
      # Maximum amount of users allowed to register, defaults to "+infinity".
      # You can set this to -1 to disable registration.
      #max_users: 1000

  # a list of other known repositories we can talk to
  uplinks:
    npmjs:
      url: https://registry.npmjs.org/
      agent_options:
        keepAlive: true
        maxSockets: 40
        maxFreeSockets: 10

  packages:
    '@*/*':
      # scoped packages
      access: $all
      publish: $authenticated
      proxy: npmjs
    '**':
      # allow all users (including non-authenticated users) to read and
      # publish all packages
      #
      # you can specify usernames/groupnames (depending on your auth plugin)
      # and three keywords: "$all", "$anonymous", "$authenticated"
      access: $all
      # allow all known users to publish packages
      # (anyone can register by default, remember?)
      publish: $authenticated
      # if package is not available locally, proxy requests to 'npmjs' registry
      proxy: npmjs

  # To use `npm audit` uncomment the following section
  middlewares:
    audit:
      enabled: true

  # log settings
  logs: {type: stdout, format: pretty, level: http}
  # logs: {type: file, path: verdaccio.log, level: info}

persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires Persistence.Enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  ## Verdaccio data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "standard-rwo"
  accessMode: ReadWriteOnce
  size: 8Gi
  ## selector can be used to match an existing PersistentVolume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  selector: {}

volumes:
# - name: nothing
#   emptyDir: {}

mounts:
# - mountPath: /var/nothing
#   name: nothing
#   readOnly: true

podSecurityContext:
  fsGroup: 101
securityContext:
  runAsUser: 10001

priorityClass:
  enabled: false
  # name: ""

existingConfigMap: false
```
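Deployed the same way as the original report; `helm upgrade --install` is shown here (my substitution) so the same command also applies over an existing release:

```sh
helm upgrade --install npm -f values.yaml verdaccio/verdaccio
```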
---
I haven't tried any of this yet, but as an aside (and for anyone finding this issue historically) I can say that the "http" port name is a misnomer. Kubernetes does not assign any meaning to the string in the targetPort. If it's an integer, k8s will use that value. If it's a string, it will use the correspondingly named port on the pod. If it can't find a corresponding name, the service fails to work. So the string "http" is just for human readability, but you can see in the template that it's named:

That's why the name will work. I'll comment further on what I find in experimenting Monday, of course, and if I find anything I'll PR it.
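To illustrate that name resolution with a minimal sketch (object names here are illustrative, not from the chart): the Service's `targetPort: http` is matched against the `name:` of a container port, so these two objects wire up even though no port number appears in the Service:

```yaml
# Sketch of Kubernetes named-port resolution.
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
spec:
  containers:
    - name: web
      image: verdaccio/verdaccio:5
      ports:
        - containerPort: 4873
          name: http            # the name the Service refers to
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - port: 80
      targetPort: http          # resolved to 4873 via the container port name
```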
---
I tried a few more deployments and I ended up having to manually override the service from `targetPort: http` to `targetPort: 4873` to get things to work consistently.
---
I'm finding that I have to update
---
Just hit this outside GKE. Long story short: hunt in your env where

Long story long: when I named my release

I think it can essentially be treated as a bug, and that variable should be renamed to avoid clashing with the k8s pod env, since I'd assume

EDIT: edited as I just realised your setup is different, but the root cause is essentially the same.
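For context on the kind of clash being described (the exact variable name is cut off above, so this is an illustration, not the reporter's case): Kubernetes injects Docker-link-style environment variables into pods for every Service visible to them, derived from the Service name. A Service named `verdaccio` would produce variables like these, and an application reading a same-named variable and expecting a plain port number would instead see a `tcp://` URL:

```sh
# Illustrative only: variables Kubernetes injects for a Service named
# "verdaccio" with (hypothetical) cluster IP 10.0.0.11 and port 4873.
VERDACCIO_SERVICE_HOST=10.0.0.11
VERDACCIO_SERVICE_PORT=4873
VERDACCIO_PORT=tcp://10.0.0.11:4873          # note: a URL, not a port number
VERDACCIO_PORT_4873_TCP=tcp://10.0.0.11:4873
VERDACCIO_PORT_4873_TCP_PROTO=tcp
VERDACCIO_PORT_4873_TCP_PORT=4873
VERDACCIO_PORT_4873_TCP_ADDR=10.0.0.11
```

Setting `enableServiceLinks: false` on the pod spec (available since Kubernetes 1.13) suppresses this injection, which is another way around such clashes.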
---
@Splaktar, I thought at first it might be because of naming or templating in the chart, but I don't see this particular chart release generating that variable. I'd rather think it must be defined somewhere else.
---
@noroutine thank you for looking into this. Indeed I do have the following

```yaml
spec:
  containers:
    - image: verdaccio/verdaccio:5.0.4
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 3
        httpGet:
          path: /-/ping
          port: http
          scheme: HTTP
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      name: verdaccio
      ports:
        - containerPort: 4873
          name: http
          protocol: TCP
```

defined in my deployment. It looks like I can't use
So I'll be trying to stand up another cluster to see if this resolves it. I needed to start fresh to try enabling JWT anyhow...
---
I'm seeing that specifying
---
As an update from my troubleshooting, I used the values.yaml that was simply enabling the ingress, and the command:

I did so against both Docker Desktop's Kubernetes and a minikube install (I don't normally use minikube). Both worked as expected, and I get the following logs:

Kubernetes version 1.19. I would suggest adding a
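Judging from the reply that follows, the suggestion was about testing in a namespace. A minimal way to isolate the release like that, assuming Helm 3 (commands are mine, not quoted from the thread):

```sh
kubectl create namespace verdaccio-test
helm install npm -f values.yaml verdaccio/verdaccio --namespace verdaccio-test
```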
---
OK, thanks, I'll take a look at using a namespace; it's a new concept to me. I'm waiting for it to come up now in a namespace and with both
---
It still failed in a namespace and I had to set

In the following docs, it only shows
---
@Splaktar @todpunk @juanpicado @noroutine I am also facing the same issue. I am using Kubernetes server version

Error log:

I tried with the following configuration:

```yaml
targetPort: 4873
port: 80
```

It updated the error message with:

Still looking for a solution. No idea what is going on.
---
@himanshumehta1114 the chart doesn't support specifying

I ended up forking the chart in order to add the needed GKE support. I tried to list the changes in the releases here:

However, it's only been tested with Verdaccio v5.
---
@Splaktar Thanks for sharing the chart. Using this chart, Verdaccio deployed smoothly, but I still wasn't able to access it. I had to update the ingress config as follows to make it work:

```yaml
ingress:
  enabled: true
  paths:
    - /
  annotations:
    kubernetes.io/ingress.class: contour
```

The reason being, the chart adds

```yaml
paths:
  - /*
```

I'm using
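The path syntax really is controller-specific, which is worth spelling out. A sketch of the two styles side by side (the `Prefix` variant assumes the newer `networking.k8s.io/v1` Ingress API):

```yaml
# GCE ingress controller: glob-style path, ImplementationSpecific matching
paths:
  - path: /*
    pathType: ImplementationSpecific

# nginx or contour: plain prefix path
paths:
  - path: /
    pathType: Prefix
```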
---
@himanshumehta1114 good to know! Thankfully those are both things that you can define in your own copy of the
---
@himanshumehta1114 I ran into this same issue. Did you use

Using a different release name should avoid that. Another workaround is to manually alter

I used this:

```yaml
name: {{ template "verdaccio.fullname" . }}-svc
```

This is probably unrelated to the original ingress issue though.
---
I hadn't tried out the

I just changed this line as below and the issue was resolved.

Not sure if this is GKE / K8s version specific. Ref: #48 (comment)
---
@Splaktar Since you have https://github.com/xlts-dev/verdaccio-gke-charts pretty well maintained, should we link it from the README so users are aware of it? I guess you understand the differences deeply.
---
I think that should be okay. We just added support for configuring the horizontal pod autoscaler today. We also added the ability to customize the liveness and readiness probes. We forked the google-cloud-storage plugin, but that is not yet published. Hopefully in November we will be able to get that published and make it the default for our GKE chart, to enable a good autoscaling configuration out of the box. We'll also look at some ways to contribute upstream once we get past some deadlines.
---
Similar to #48, I'm having a lot of problems getting this to work with GKE (1.18.16-gke.2100). However, I'm starting with a brand new cluster with no existing ingresses.

If I take https://github.com/verdaccio/charts/blob/master/charts/verdaccio/values.yaml, copy it locally, and only change the following

Then run

```
helm install npm -f values.yaml verdaccio/verdaccio
```

I get

Then in my pod, I start to see the following:

In the service UI, I see this

If I leave ingress enabled set to `false`, everything spins up fine, but I can't get to it using an external IP.