feat: unify and propagate CMAKE_ARGS to GGML-based backends #4367
Description
This pull request centralizes the composition of CMAKE_ARGS so that it can be shared across the ggml-based backends. The CMake arguments used for llama.cpp also apply, for instance, to bark.cpp and stable-diffusion.cpp (both ggml-based). The aim is to enable CUDA and hipblas support for bark.cpp and stablediffusion.cpp (ggml variant).
For now this does not attempt a more elaborate shared mechanism (for example a common CMake module, or a Makefile included by both backends to generate the CMake args). The goal of this PR is to surface any changes that might be required once the flags are enabled for the respective backends. I'm not sure the current linking process for bark and stablediffusion is correct in terms of GPU support.
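As a rough illustration of the idea (not the exact diff in this PR), the shared flags could be composed once in the top-level Makefile and reused by each ggml-based backend. The target paths, variable names, and exact ggml CMake flags below are hypothetical and depend on the ggml version vendored by each backend:

```makefile
# Hypothetical sketch: compose CMAKE_ARGS once, based on the selected BUILD_TYPE,
# and pass the same set to every ggml-based backend build.
CMAKE_ARGS ?=

ifeq ($(BUILD_TYPE),cublas)
    # Flag name may differ per vendored ggml version (e.g. older GGML_CUBLAS).
    CMAKE_ARGS += -DGGML_CUDA=ON
endif
ifeq ($(BUILD_TYPE),hipblas)
    # Flag name may differ per vendored ggml version (e.g. older GGML_HIPBLAS).
    CMAKE_ARGS += -DGGML_HIP=ON
endif

# Each backend consumes the same CMAKE_ARGS instead of composing its own.
sources/bark.cpp/build:
	cd sources/bark.cpp && mkdir -p build && cd build && \
	cmake $(CMAKE_ARGS) .. && cmake --build . --config Release

sources/stablediffusion-ggml.cpp/build:
	cd sources/stablediffusion-ggml.cpp && mkdir -p build && cd build && \
	cmake $(CMAKE_ARGS) .. && cmake --build . --config Release
```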
Notes for Reviewers
Signed commits