Right now a node just tells whether or not it can provide the label, but for the cluster to work properly it would be great to have more data about how long it will take to run the resource, so that the right executor can be elected.

The metrics:
* Time to get the images: a quick HEAD request for the URLs gives some idea of the archive sizes, and previous downloads provide an average download speed to calculate the required time (see the first sketch after this list).
* Storage required for the images: this can be calculated from the image metadata (stored in the head of the tar.xz archive), which contains the size of the unpacked files. It may not be a good idea to clean up old images just to fulfill the resource request if other nodes in the cluster would be faster (a second sketch follows below).
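A minimal sketch of the download-time estimate, assuming a helper that combines the Content-Length from a HEAD request with a node-local average download speed; `estimateFetchTime` and the `avgBytesPerSec` parameter are hypothetical names, not part of Fish:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// estimateFetchTime guesses how long fetching an image archive will take:
// a HEAD request gives the archive size, and the node's previously observed
// average download speed turns that into a duration.
// Hypothetical helper, not actual Fish code.
func estimateFetchTime(url string, avgBytesPerSec float64) (time.Duration, error) {
	resp, err := http.Head(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	// ContentLength is -1 when the server didn't report a size.
	if resp.ContentLength < 0 {
		return 0, fmt.Errorf("no Content-Length for %s", url)
	}
	secs := float64(resp.ContentLength) / avgBytesPerSec
	return time.Duration(secs * float64(time.Second)), nil
}

func main() {
	// Assume ~10 MiB/s was measured on previous downloads.
	d, err := estimateFetchTime("https://example.com/image.tar.xz", 10*1024*1024)
	if err != nil {
		fmt.Println("estimate failed:", err)
		return
	}
	fmt.Println("estimated download time:", d.Round(time.Second))
}
```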
So something like that - quite sure there is a lot more, and the logic to calculate all this input data will be quite complex, but it will optimize cluster utilization and help with node specialization.
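For the storage metric, a hedged sketch of reading the unpacked size from metadata at the head of a tar.xz archive. It assumes the metadata is a JSON file stored as the first tar entry and uses the third-party github.com/ulikunitz/xz package for streaming decompression; both the metadata layout and the `ImageMeta` struct are assumptions, not Fish's actual format:

```go
package main

import (
	"archive/tar"
	"encoding/json"
	"fmt"
	"os"

	"github.com/ulikunitz/xz"
)

// ImageMeta is an assumed shape for the metadata file; the real Fish
// format may differ.
type ImageMeta struct {
	UnpackedSize uint64 `json:"unpacked_size"`
}

// readUnpackedSize decompresses just enough of the archive to reach the
// first tar entry, which is assumed to hold the metadata JSON.
func readUnpackedSize(path string) (uint64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	xzr, err := xz.NewReader(f) // streaming: no need to unpack the whole archive
	if err != nil {
		return 0, err
	}
	tr := tar.NewReader(xzr)
	if _, err := tr.Next(); err != nil { // first entry: the metadata file
		return 0, err
	}
	var meta ImageMeta
	if err := json.NewDecoder(tr).Decode(&meta); err != nil {
		return 0, err
	}
	return meta.UnpackedSize, nil
}

func main() {
	size, err := readUnpackedSize("image.tar.xz")
	if err != nil {
		fmt.Println("failed to read metadata:", err)
		return
	}
	fmt.Printf("needs %d bytes of storage when unpacked\n", size)
}
```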
With a lifetime timeout we can be sure the application resource will not run forever if something bad happens, and based on this timeout the fish node will be able to apply some heuristics on how much time is left until it can provision the next resource (#38).
The duration is set as a string in standard golang format ("1h2m3s"). If it's empty or 0, the default from the fish config will be used; if it's negative (like "-1s"), the resource should exist until the user says so by deallocating it.
This change really helps with clouds, where we certainly don't need to leave resources running for a long time.
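A minimal sketch of how a node could interpret the lifetime string under these rules; `resolveLifetime` and the config-default parameter are hypothetical names, not the actual Fish API:

```go
package main

import (
	"fmt"
	"time"
)

// resolveLifetime interprets the resource lifetime string: empty or zero
// falls back to the config default, negative disables the timeout so the
// resource lives until the user deallocates it.
// Hypothetical helper, not the actual Fish implementation.
func resolveLifetime(lifetime string, configDefault time.Duration) (timeout time.Duration, enforced bool, err error) {
	if lifetime == "" {
		return configDefault, configDefault > 0, nil
	}
	d, err := time.ParseDuration(lifetime) // standard golang format, e.g. "1h2m3s"
	if err != nil {
		return 0, false, err
	}
	switch {
	case d == 0:
		return configDefault, configDefault > 0, nil
	case d < 0:
		return 0, false, nil // negative like "-1s": no timeout at all
	default:
		return d, true, nil
	}
}

func main() {
	for _, s := range []string{"1h2m3s", "", "-1s"} {
		timeout, enforced, err := resolveLifetime(s, 30*time.Minute)
		fmt.Printf("%q -> timeout=%v enforced=%v err=%v\n", s, timeout, enforced, err)
	}
}
```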