Python bindings to cuFileDriverOpen() and cuFileDriverClose() #514
Conversation
Thank you @madsbk for your efforts!
We want to trigger a CI error if a package was built without cuFile support.
    return cufile_driver.driver_close()


def initialize() -> None:
question: Whose job is it to call initialize?
The user for now. If we find a Python example that segfaults because of cuFile's termination issues, we should consider calling it in __init__.py.
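For illustration, a minimal sketch of that user-side call, using the kvikio.cufile_driver.initialize() name from the change summary at the end of this conversation:

import kvikio.cufile_driver

# Open the cuFile driver explicitly, once, early in the program; initialize()
# also arranges for the driver to be closed again at interpreter exit.
kvikio.cufile_driver.initialize()

# ... perform GPUDirect Storage I/O with kvikio as usual ...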
Tiny suggestions, none are blocking, I think
Mostly cosmetic suggestions. I had one thought about an atexit handler but it seems like that doesn't order the way we need w.r.t. CUDA teardown?
// not allowed to call CUDA after main[1]. This is because cuFile will segfault if the
// driver isn't closed on program exit, i.e. we are doomed if we do, doomed if we don't, but
// this seems to be the lesser of two evils.
// [1] <https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#initialization>
Could we explicitly register a std::atexit handler? Assuming that the CUDA runtime is doing the same, atexit has ordering guarantees so we'll be pushing our exit handler onto the stack to run before CUDA does its cleanup.
No. This is explicitly UB (undefined behavior).
Do we know how CUDA does its teardown, if not via atexit handlers that have predictable ordering?
I don't think std::atexit has ordering guarantees between multiple shared libraries :/
        If cuFile isn't available.
    """
    driver_open()
    atexit.register(driver_close)
Hmm well if this can error it seems to invalidate my suggestion to use an atexit handler in the C++...
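For context, a self-contained sketch of the Python-level pattern visible in the diff excerpt above. The driver_open/driver_close bodies below are stand-ins; only the shape of initialize() (open first, register the close call with atexit only if the open succeeds) reflects the actual change:

import atexit


def driver_open() -> None:
    # Stand-in for the real binding to cuFileDriverOpen(); assumed to raise
    # if the cuFile library isn't available.
    ...


def driver_close() -> None:
    # Stand-in for the real binding to cuFileDriverClose().
    ...


def initialize() -> None:
    """Open the cuFile driver and close it again at interpreter exit.

    Raises if cuFile isn't available; because driver_open() runs first, that
    failure surfaces here and no exit handler is registered at all.
    """
    driver_open()
    atexit.register(driver_close)

Because the error is raised eagerly from initialize(), callers can decide how to handle an unavailable cuFile rather than discovering it during interpreter shutdown.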
/merge
Changes:
- Python bindings to cuFileDriverOpen() and cuFileDriverClose().
- kvikio.cufile_driver.initialize(), which opens the cuFile driver and closes it again at module exit.
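For completeness, a hedged usage sketch of calling the new bindings directly instead of initialize(); whether driver_open and driver_close are exported from kvikio.cufile_driver under exactly these names is an assumption based on the diff excerpts above:

from kvikio.cufile_driver import driver_close, driver_open

driver_open()        # cuFileDriverOpen(): may raise if cuFile isn't available
try:
    pass             # ... GPUDirect Storage I/O with kvikio ...
finally:
    driver_close()   # cuFileDriverClose(), run before interpreter/CUDA teardown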