Receiving ROCm execution provider not supported in this build #1130
Comments
Hi! The ROCm support is currently very experimental. You can try passing `--use_rocm`.
No. When I build with `--use_rocm`, the build script only builds the CPU code.
Unlike the onnxruntime ROCm build, the onnxruntime-genai ROCm build doesn't even run hipify to convert the CUDA sources (.cu) into HIP sources (.hip), and it doesn't try to locate ROCm at all. Please fix this.
Also, you need to change the version of Microsoft.ML.OnnxRuntime.Rocm from 1.19.2 to 1.21.0-dev-20241211-0046-64d8e25b in cmake/ortlib.cmake at line 39, because the current one is wrong.
True! I updated it to 1.20.0 and it was able to pull the ROCm library. Maybe I need to update to the one you mentioned, @Looong01! Do you have any suggestions on how we can make the HIP compilation succeed so we can get ROCm working?
If you compare this with the build procedure of onnxruntime and its CMake files, you will see that it still takes a big effort to fill in the CMake build logic for ROCm. A lot of code is missing.
Describe the bug
Hi team,
I am currently building OnnxRuntime-GenAI from source for AMD GPUs. With some tweaks to the main branch, I was able to successfully build the wheel and install the Python package. I was able to use the model builder to convert LLMs to ROCm-supported models. But when I try to run inference, I see the following error: `onnxruntime_genai.onnxruntime_genai.OrtException: ROCm execution provider is not enabled in this build.`
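For what it's worth, the error can be reproduced with a minimal model load, assuming the model folder was produced by the model builder with a ROCm provider in its genai_config.json (the path below is a placeholder):

```python
import onnxruntime_genai as og

# The provider ("rocm") is read from the model's genai_config.json when the
# model is loaded, so a wheel built without ROCm support raises OrtException
# at this point. "path/to/rocm_model" stands in for the model-builder output.
model = og.Model("path/to/rocm_model")
```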
I am sure there is no issue with the onnxruntime dependency, because it reported the expected list of supported EPs.
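As a sanity check, the EPs compiled into the installed onnxruntime wheel can be listed directly; a ROCm-enabled build should include `ROCMExecutionProvider`:

```python
import onnxruntime as ort

# Prints the execution providers available in this onnxruntime build,
# e.g. ['ROCMExecutionProvider', 'CPUExecutionProvider'] for a ROCm build.
print(ort.get_available_providers())
```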
Please suggest how I can solve this problem.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I expect no errors; the LLMs should produce the same output as they do on Nvidia GPUs.
Desktop (please complete the following information):