Releases · Dao-AILab/flash-attention
v2.7.2.post1: [CI] Use MAX_JOBS=1 with nvcc 12.3, don't need OLD_GENERATOR_PATH (see the MAX_JOBS build sketch after this list)
v2.7.2: Bump to v2.7.2
v2.7.1.post4: [CI] Don't include <ATen/cuda/CUDAGraphsUtils.cuh>
v2.7.1.post3: [CI] Change torch #include to make it work with torch 2.1 Philox
v2.7.1.post2: [CI] Use torch 2.6.0.dev20241001, reduce torch #include
v2.7.1.post1: [CI] Fix CUDA version for torch 2.6
v2.7.1: Bump to v2.7.1
v2.7.0.post2: [CI] Pytorch 2.5.1 does not support python 3.8
v2.7.0.post1: [CI] Switch back to CUDA 12.4
v2.7.0: Bump to v2.7.0
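
The v2.7.2.post1 entry above concerns the project's CI wheel builds: MAX_JOBS caps how many parallel nvcc compilation jobs ninja launches, which keeps memory use manageable with nvcc 12.3. The snippet below is a minimal sketch of applying the same knob to a local from-source install; the pip command and the --no-build-isolation flag are assumptions about a typical source build, not something stated in the release note.

```python
# Minimal sketch (assumption: building flash-attn from source with pip).
# MAX_JOBS limits how many parallel nvcc compilation jobs ninja runs;
# the v2.7.2.post1 CI note above sets it to 1 for its nvcc 12.3 wheel builds.
import os
import subprocess

env = dict(os.environ, MAX_JOBS="1")  # one nvcc job at a time to cap memory use
subprocess.run(
    ["pip", "install", "flash-attn", "--no-build-isolation"],
    env=env,
    check=True,  # raise if the build or install fails
)
```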