audit followup: ensure secretmanager secrets and bindings are managed by scripts #1731
/kind cleanup

Labels and areas for relevant sigs:
/sig testing
/sig release
/sig contributor-experience
/sig scalability
/sig node
Another piece of followup: I set up a custom
Happy to help! Just ping. :)
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle rotten
I'd like to add to the definition of done that secrets are provisioned the same way for all of our clusters. As of the completion of #2220, all prow build cluster secrets are provisioned via terraform, but the secrets for aaa are still handled by bash.
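For context, a terraform-managed secret plus its accessor binding generally looks like the sketch below, assuming the google provider. The resource names, project, and service account here are hypothetical placeholders, not the actual k8s.io configuration:

```terraform
# Hypothetical sketch: a Secret Manager secret and an IAM binding granting a
# workload's service account read access. All identifiers are illustrative.
resource "google_secret_manager_secret" "example" {
  project   = "example-project"
  secret_id = "example-app-token"

  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_iam_member" "example_accessor" {
  project   = google_secret_manager_secret.example.project
  secret_id = google_secret_manager_secret.example.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:example-app@example-project.iam.gserviceaccount.com"
}
```

Keeping the secret and its binding in the same module means an audit only has to check the terraform state, not hand-run bash.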
Once #3028 merges, all of our secrets will be managed by terraform. What then remains is docs on how to provision a new secret. The age-old question of "where is the best place for these?" and my best guesses for answers:
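For comparison, the manual bash flow being replaced is roughly the following. Secret name, project, data file, and member are placeholders, not the real k8s.io values:

```shell
# Illustrative manual provisioning flow (placeholders, not real k8s.io names).

# Create the secret, then add a version from a local file:
gcloud secrets create example-app-token \
  --project=example-project \
  --replication-policy=automatic

gcloud secrets versions add example-app-token \
  --project=example-project \
  --data-file=./token.txt

# Grant a workload's service account read access:
gcloud secrets add-iam-policy-binding example-app-token \
  --project=example-project \
  --member="serviceAccount:example-app@example-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```

Each of these steps is exactly the kind of unaudited one-off this issue wants moved into scripts or terraform.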
I'll drop the respective sig labels for each app since they've been migrated, and this is more on sig-k8s-infra to document now.
/remove-sig node
/milestone v1.24
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Caught via #1718 (comment)
I've done the following pattern a few times now (most recently #1696 (comment)):

- `app: foo` if it's for an app in the `foo` dir running on `aaa`
- `group: sig-foo` if the app is owned by `sig-foo`

Put it all together and this allows for relatively simple / safe deployment: https://github.com/kubernetes/k8s.io/tree/main/slack-infra#how-to-deploy
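As a sketch, that labeling convention on a workload's metadata could look like the following. The app name `foo` and SIG `sig-foo` are placeholders standing in for a real app and its owning SIG:

```yaml
# Hypothetical example of the label convention described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: foo
  labels:
    app: foo        # app in the foo/ dir, running on the aaa cluster
    group: sig-foo  # SIG that owns the app
```

With consistent labels like these, scripts can select everything an app or a SIG owns with a single label selector.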
Problems with the above: