
Q: Does the operator perform state management? #18

Open
mkulke opened this issue May 6, 2024 · 7 comments

Comments

@mkulke

mkulke commented May 6, 2024

The upstream kbs deployment recipes lack explicit state management and store resources like secrets and policies on a local container instance. There are APIs that allow altering that pod-local state, which leads to unpredictable results. If the operator provided facilities and defaults for explicit state (StatefulSet with Volume?), that would be good additional value, and we could point users to the operator as a default means for deploying KBS.
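
For illustration, a minimal sketch of what explicit state could look like; this is a hypothetical manifest, not something the operator emits today, and the image reference and mount path are assumptions:

```yaml
# Hypothetical sketch: KBS as a StatefulSet with a persistent volume for its
# pod-local resource store, instead of an ephemeral container filesystem.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kbs
spec:
  serviceName: kbs
  replicas: 1
  selector:
    matchLabels:
      app: kbs
  template:
    metadata:
      labels:
        app: kbs
    spec:
      containers:
        - name: kbs
          image: ghcr.io/confidential-containers/key-broker-service:latest  # image reference assumed
          volumeMounts:
            - name: kbs-state
              mountPath: /opt/confidential-containers/kbs/repository  # resource path assumed
  volumeClaimTemplates:
    - metadata:
        name: kbs-state
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```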

@bpradipt
Member

Currently the operator takes most of the config via configmaps. Would that suffice?

@mkulke
Author

mkulke commented May 14, 2024

I was thinking more about making the kbs a StatefulSet deployment with volume provisioning. But if the operator provided options to mount secrets and policies via k8s read-only secrets/configmaps, that would also be a good-enough solution IMO.
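
To illustrate that second option, a sketch of read-only mounts in the pod spec (volume names, ConfigMap/Secret names and mount paths are assumptions; kubelet mounts configMap and secret volumes read-only regardless):

```yaml
# Hypothetical sketch: policies from a ConfigMap and resources from a Secret,
# both mounted read-only into the kbs container.
spec:
  containers:
    - name: kbs
      volumeMounts:
        - name: policies
          mountPath: /opt/confidential-containers/kbs/policies  # path assumed
          readOnly: true
        - name: resources
          mountPath: /opt/confidential-containers/kbs/repository  # path assumed
          readOnly: true
  volumes:
    - name: policies
      configMap:
        name: kbs-policies          # name assumed
    - name: resources
      secret:
        secretName: kbs-resources   # name assumed
```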

@bpradipt
Member

> I was thinking more about making the kbs a StatefulSet deployment with volume provisioning. But if the operator provided options to mount secrets and policies via k8s read-only secrets/configmaps, that would also be a good-enough solution IMO.

We can aim to keep the core KBS stateless and provide all config and secrets via K8s ConfigMaps and Secrets respectively. I think this will be easier to maintain, help with scaling of KBS/AS as needed, and integrate with 3rd party vault storage.

@mkulke
Author

mkulke commented May 18, 2024

> We can aim to keep the core KBS stateless and provide all config and secrets via K8s ConfigMaps and Secrets respectively. I think this will be easier to maintain, help with scaling of KBS/AS as needed, and integrate with 3rd party vault storage.

in general that makes sense, I think @kartikjoshi21 is trialing configmaps for policies, maybe we can implement it in the scope of this repo.

it would make sense to have a config option "policies/resources.readonly" on kbs so that we can disable the respective endpoints and give sensible error messages.
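
delivered through the config ConfigMap it could look roughly like this (the readonly key is purely hypothetical, no such option exists in kbs today):

```yaml
# Hypothetical sketch: a kbs config carrying an imagined read-only switch.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kbs-config
data:
  kbs-config.toml: |
    # hypothetical option: reject writes on the policy/resource endpoints
    # with a sensible error instead of mutating pod-local state
    readonly = true
```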

with regard to full statelessness: the kbs protocol is stateful, so a kbs pod has state in memory

solutions that come to mind:

  • sticky sessions (potentially little/no changes to kbs required, but maybe elaborate k8s machinery, not sure)
  • implement a redis (or similar) backend for sessions (should be trivial, but we defer the problem of state to either a managed redis or a vanilla k8s stateful redis; the latter is probably pretty mature, so that's still a win over everything we build ourselves). A sketch of such a redis follows below.
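
A minimal sketch of the "vanilla k8s stateful redis" variant (names and sizes are arbitrary; the kbs-side session backend that would talk to it does not exist yet):

```yaml
# Hypothetical sketch: a single-node redis StatefulSet that a future kbs
# session backend could defer its in-memory session state to.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kbs-sessions
spec:
  serviceName: kbs-sessions
  replicas: 1
  selector:
    matchLabels:
      app: kbs-sessions
  template:
    metadata:
      labels:
        app: kbs-sessions
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data  # redis default persistence dir
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```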

@bpradipt
Member

> We can aim to keep the core KBS stateless and provide all config and secrets via K8s ConfigMaps and Secrets respectively. I think this will be easier to maintain, help with scaling of KBS/AS as needed, and integrate with 3rd party vault storage.

> in general that makes sense, I think @kartikjoshi21 is trialing configmaps for policies, maybe we can implement it in the scope of this repo.

Sounds good

> it would make sense to have a config option "policies/resources.readonly" on kbs so that we can disable the respective endpoints and give sensible error messages.

Can you please point me to the kbs endpoints that will require disabling?

> with regard to full statelessness: the kbs protocol is stateful, so a kbs pod has state in memory
>
> solutions that come to mind:
>
>   • sticky sessions (potentially little/no changes to kbs required, but maybe elaborate k8s machinery, not sure)

I was thinking about sticky sessions/session affinity when running multiple kbs pod replicas. Example - https://kubernetes.io/docs/reference/networking/virtual-ips/#session-affinity
https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
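
For reference, the Service-level variant from the first link would look roughly like this (service name and port are assumptions):

```yaml
# Sketch of kube-proxy ClientIP session affinity on the kbs Service.
apiVersion: v1
kind: Service
metadata:
  name: kbs
spec:
  selector:
    app: kbs
  ports:
    - port: 8080        # port assumed
      targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600  # default is 10800 (3h)
```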

@mkulke
Author

mkulke commented May 21, 2024

> Can you please point me to the kbs endpoints that will require disabling?

I think it's those endpoints: https://github.com/confidential-containers/trustee/blob/main/kbs/src/api/src/http/config.rs

> I was thinking about sticky sessions/session affinity when running multiple kbs pod replicas. Example - https://kubernetes.io/docs/reference/networking/virtual-ips/#session-affinity https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/

those would be the stateful RCAR protocol's endpoints: https://github.com/confidential-containers/trustee/blob/main/kbs/src/api/src/http/attest.rs

we might have to set some headers to allow sticky sessions, not sure.

@bpradipt
Member

@mkulke @kartikjoshi21 opa policy via configmaps is now supported with the operator - #33

I'm experimenting with session stickiness using ingress capabilities and will update soon. Here is what I have so far:

  • Expose an ingress route (enable session stickiness as part of the config; see the sketch after this list)
  • Increase the replica count of the kbs deployment to 2
  • Open two consoles to look at the logs of the two kbs deployments
  • Use curl from two different pods and send requests to the ingress
  • Verify from the logs how the requests are handled by kbs - typically they'll be handled in a round-robin fashion (by default)
  • Add a cookie to the curl (header) request
  • Verify from the logs how the requests are handled - the requests should only be served by a specific kbs deployment
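
For the first step, roughly this kind of ingress config; host, service name and port are assumptions, the annotations come from the ingress-nginx cookie affinity example linked earlier:

```yaml
# Sketch: cookie-based session stickiness via ingress-nginx annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kbs
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "kbs-route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: kbs.example.com  # host assumed
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kbs    # service name assumed
                port:
                  number: 8080
```

Replaying the cookie value from the Set-Cookie response header on subsequent curl requests (e.g. curl --cookie "kbs-route=<value>") should then pin them to one replica.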

I'll repeat this with some different configs just to be sure and based on that we can decide how to implement it on the client side.
