
feat: unify and propagate CMAKE_ARGS to GGML-based backends #4367

Open
wants to merge 1 commit into master

Conversation

mudler (Owner) commented Dec 11, 2024

Description

This pull request centralizes CMAKE_ARGS composition, since it is shared between GGML-based backends. The llama.cpp cmake args can, for instance, also be used with bark.cpp and stable-diffusion.cpp (both GGML-based). This aims to enable CUDA and hipBLAS support on bark.cpp and stablediffusion.cpp (GGML variant).

For now this doesn't aim to be smart or to share the logic in a common way (perhaps via cmake, or a makefile called by both backends to generate the cmake args). The goal of this PR is to surface any changes that might be required when enabling these flags for the respective backends. I'm not sure the current linking processes of bark and stablediffusion are correct in terms of GPU support.
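The centralized composition described above can be sketched as a small shell helper that each GGML-based backend build could reuse. This is a minimal illustration, not the actual LocalAI Makefile logic: the helper name `compose_cmake_args`, the `BUILD_TYPE` variable, and the exact `-DGGML_*` flag values are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: compose CMAKE_ARGS once, based on the build type,
# so every GGML-based backend (llama.cpp, bark.cpp, stable-diffusion.cpp)
# receives the same acceleration flags. Flag names are assumptions.
compose_cmake_args() {
  case "$1" in
    cublas)  echo "-DGGML_CUDA=ON" ;;     # NVIDIA CUDA acceleration
    hipblas) echo "-DGGML_HIPBLAS=ON" ;;  # AMD ROCm/hipBLAS acceleration
    *)       echo "" ;;                   # CPU-only default
  esac
}

# Each backend's build would then invoke cmake with the shared flags:
CMAKE_ARGS="$(compose_cmake_args "${BUILD_TYPE:-}")"
echo "CMAKE_ARGS=$CMAKE_ARGS"
```

With this shape, adding a new accelerated backend only requires passing the already-composed `CMAKE_ARGS` to its cmake invocation, rather than duplicating the per-build-type logic in each backend's rules.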

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.


netlify bot commented Dec 11, 2024

Deploy Preview for localai ready!

Latest commit: 894a302
Latest deploy log: https://app.netlify.com/sites/localai/deploys/6759fe760a6baa000818c441
Deploy Preview: https://deploy-preview-4367--localai.netlify.app
