Workload active field doesn't work for StatefulSet Integration #3851
Comments
It needs testing, but I think adding the StatefulSet to the owners of the Workload will prevent workload deletion, so it should work well.
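For illustration, a minimal sketch of what this proposal would produce on the Workload object (only the relevant fields are shown; the names and UID are placeholders, not taken from this issue):

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: Workload
metadata:
  name: statefulset-sample-workload   # placeholder name
  ownerReferences:
    - apiVersion: apps/v1
      kind: StatefulSet
      name: sample                    # placeholder: the owning StatefulSet
      uid: 00000000-0000-0000-0000-000000000000  # placeholder UID
      controller: true
  # With this ownerReference, garbage collection removes the Workload
  # only when the StatefulSet itself is deleted.
spec:
  active: true
```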
/assign
It won't help because the workload could still finish. I think we need to add a "serving" field to the workload to prevent it from finishing.
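To make the suggestion concrete, a hypothetical shape for such a field (the name `serving` and its placement are this comment's proposal, not an existing API):

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: Workload
metadata:
  name: statefulset-sample-workload  # placeholder
spec:
  active: true
  serving: true  # hypothetical field: marks a workload that should never be considered finished
```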
Could you please try that approach first? It is possible that I'm missing something, but I think we need to keep the workload around. Also, it is not clear to me why we need the "serving" field on the workload, since it is already on the pods created by the StatefulSet. In any case, the argument is not convincing to me yet.
The problem is that the workload finishes once all the pods are finished. Yes, with the ownerReference the workload won't disappear, but we can't admit it after reactivation.
Preventing the workload from disappearing is the first step - if it is gone, then for sure "active" is gone too. Yes, we may eventually need the "serving" field on the workload, but I would prefer to reuse the pod annotation so that we can cherry-pick the fix for 0.10.1. So, I would prefer to first understand well where the issues are, rather than jumping to conclusions.
Moreover, starting with ownerReferences would already fix the issue for workloads which return a non-zero exit code on SIGTERM, which would unblock the use case for some users.
/unassign
Sorry, I don't have capacity to work on it.
@mbobrovskyi I can work on it after the holidays.
/assign
What happened:
Changing the `active` field of the Workload object from `true` to `false` doesn't scale down the StatefulSet permanently in v0.10.0-rc.4. It only temporarily changes the status: both the workload object and the pods are deleted, but they are immediately recreated automatically.

What you expected to happen:
Changing the `active` field from `true` to `false` should scale the StatefulSet down to 0. Changing it back to `true` should scale the StatefulSet back up.

How to reproduce it (as minimally and precisely as possible):
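A minimal reproduction sketch, assuming a cluster running Kueue v0.10.0-rc.4 with the StatefulSet integration enabled and a LocalQueue named `user-queue` (all names below are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sample
  labels:
    kueue.x-k8s.io/queue-name: user-queue  # hands the StatefulSet to Kueue for admission
spec:
  serviceName: sample
  replicas: 2
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: main
          image: registry.k8s.io/pause:3.9  # long-running placeholder container
```

Once the Workload is admitted and the pods are running, deactivate it with `kubectl patch workload <workload-name> --type=merge -p '{"spec":{"active":false}}'` and watch the pods: they are deleted but recreated shortly after, instead of staying scaled down.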
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): v1.31.0
- Kueue version (use `git describe --tags --dirty --always`): v0.10.0-rc.4
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):