Releases: Dao-AILab/flash-attention

v2.7.2.post1

v2.7.2

07 Dec 16:34
Bump to v2.7.2

v2.7.1.post4

07 Dec 06:15
[CI] Don't include <ATen/cuda/CUDAGraphsUtils.cuh>

v2.7.1.post3

07 Dec 05:41
[CI] Change torch #include to make it work with torch 2.1 Philox

v2.7.1.post2

07 Dec 01:13
[CI] Use torch 2.6.0.dev20241001, reduce torch #include

v2.7.1.post1

06 Dec 01:53
[CI] Fix CUDA version for torch 2.6

v2.7.1

06 Dec 01:43
Bump to v2.7.1

v2.7.0.post2

13 Nov 04:02
[CI] PyTorch 2.5.1 does not support Python 3.8

v2.7.0.post1

12 Nov 22:29
[CI] Switch back to CUDA 12.4

v2.7.0

12 Nov 22:12
Bump to v2.7.0