Releases · FluxML/Flux.jl
Flux v0.13.17
Closed issues:
- Metal GPU acceleration on Apple Silicon (#1304)
- Docs Revamp (#2174)
- Add Tutorial Image Segmentation using Metalhead's UNet (#2192)
- Flux.state binding does not exist (#2256)
- "failed to start primary task" with Julia 1.9 and nthreads(:interactive) > 0 (#2257)
- Error message (#2259)
- GPU/CUDA memory leak (#2261)
Merged pull requests:
- [CI] Separate julia-nightly from common matrix generator (#2254) (@skyleaworlder)
- Bump thollander/actions-comment-pull-request from 2.3.1 to 2.4.0 (#2264) (@dependabot[bot])
- add outdated version banner to the docs (#2267) (@CarloLucibello)
- initial Metal support (#2270) (@CarloLucibello)
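A minimal sketch of the new Metal path (#2270), assuming an Apple Silicon machine with Metal.jl installed; treating `Flux.gpu_backend!("Metal")` as the way to select the backend is an assumption about local setup, not something stated in these notes:

```julia
using Flux, Metal            # loading Metal.jl activates Flux's Metal support

Flux.gpu_backend!("Metal")   # assumed backend switch; takes effect after a restart
model = Dense(4 => 2) |> gpu # parameters become MtlArrays
x = rand(Float32, 4, 8) |> gpu
y = model(x)                 # forward pass runs on the Apple GPU
```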
Flux v0.13.16
Closed issues:
- Failed to precompile Flux (#2231)
- need help from an expert (#2233)
- Conv layer throws TaskFailedException when internal parameters are BigFloat (#2243)
- Code block in documentation not rendering properly (#2248)
- Regularisation looks to slow down gradient function by factor 500 (#2253)
Merged pull requests:
- Add `EmbeddingBag` (#2031) (@mcognetta)
- Remove greek-letter keyword arguments (#2139) (@mcabbott)
- Speed-up normalization layers (#2220) (@pxl-th)
- Print the state of Dropout etc. (#2222) (@mcabbott)
- Float32 warning bug fix (#2226) (@ashwani-rathee)
- Remove type restrictions for recurrent cells (#2234) (@darsnack)
- Fix Conv transfer to AMDGPU (#2235) (@pxl-th)
- add `Flux.state(x)` (#2239) (@CarloLucibello); see the sketch after this list
- fix perf issue loadmodel! (#2241) (@CarloLucibello)
- `gpu(::DataLoader)`, take III (#2245) (@mcabbott)
- Update NEWS (#2247) (@mcabbott)
- fix indent in recurrent.jl (#2249) (@mcabbott)
- rm StatsBase (#2251) (@mcabbott)
- Remove greek-letter keyword from `normalise` (#2252) (@mcabbott)
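The new `Flux.state(x)` (#2239) returns a model's arrays as a plain nested structure, convenient for checkpointing. A minimal sketch; the JLD2 usage and file name are illustrative, not part of this release:

```julia
using Flux, JLD2

model = Chain(Dense(2 => 3, relu), Dense(3 => 1))
s = Flux.state(model)                  # nested NamedTuple of plain arrays, no functions

jldsave("checkpoint.jld2"; state = s)  # save the state, not the model object
Flux.loadmodel!(model, JLD2.load("checkpoint.jld2", "state"))
```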
Flux v0.13.15
Closed issues:
- MethodError from show on custom layer, using `@functor` with no fields (#2208)
- Simple L2 regularisation with Flux.params excruciatingly slow (#2211)
- Quick start tutorial doesn't work; Flux.setup not defined (#2217)
- Migrating to new explicit-style Flux (#2221)
- should f16,f32, ... convert integer or boolean arrays? (#2225)
- Creation of Adversarial Examples (#2229)
Merged pull requests:
- Add a logistic regression example to the Getting Started section (#2021) (@Saransh-cpp)
- MultiHeadAttention implementation (#2146) (@CarloLucibello); see the sketch after this list
- Use consistent spelling for optimise (#2203) (@jeremiahpslewis)
- Fix a bug in show (#2210) (@mcabbott)
- Create dependabot.yml (#2212) (@CarloLucibello)
- Bump thollander/actions-comment-pull-request from 1.0.1 to 2.3.1 (#2213) (@dependabot[bot])
- manual gradient checks for RNN - implicit and explicit gradients (#2215) (@jeremiedb)
- doc entry for MultiHeadAttention (#2218) (@CarloLucibello)
- fix some documenter warnings (#2219) (@CarloLucibello)
- example code has a little mistake (#2223) (@jaco267)
- Update 2020-09-15-deep-learning-flux.md (#2224) (@natema)
- Fix link to ParameterSchedulers.jl docs (#2227) (@white-alistair)
- f16,f32,.. don't convert int arrays + handle complex (#2228) (@CarloLucibello)
- doc: Syntax in GAN tutorial (#2230) (@musoke)
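A minimal self-attention sketch for #2146; shapes follow Flux's `(features, sequence, batch)` convention, and the sizes here are illustrative:

```julia
using Flux

mha = MultiHeadAttention(64; nheads = 8)  # 64 features split across 8 heads
x = rand(Float32, 64, 10, 32)             # (features, seq_len, batch)
y, α = mha(x, x, x)                       # self-attention; y is 64×10×32
size(α)                                   # attention scores, one map per head
```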
Flux v0.13.14
Closed issues:
- loadmodel! example in the docs doesn't work (#2191)
- ConvTranspose can cause Julia crash on GPU (#2193)
- Flatten removal (#2195)
- Flux v0.13.13 gpu crashes (#2199)
- Flux and Lux upon using: ERROR: LoadError: ArgumentError: invalid version string: local (#2202)
- `UndefVarError: #flatten not defined` with existing model (#2204)
Merged pull requests:
- Add AMDGPU extension (#2189) (@pxl-th); see the sketch after this list
- fixed BSON loadmodel! documentation and added a test case (#2194) (@jonathanBieler)
- reintroduce flatten (#2196) (@CarloLucibello)
- fix various deprecation warnings (#2197) (@ExpandingMan)
- CompatHelper: add new compat entry for Preferences at version 1, (keep existing compat) (#2198) (@github-actions[bot])
- reintegrate has_cudnn check (#2200) (@CarloLucibello)
- remove comment on not importing losses in Flux's namespace (#2201) (@CarloLucibello)
- dummy implementation of flatten (#2205) (@CarloLucibello)
- tiny change to docstring of gpu function (#2206) (@GSmithApps)
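With the AMDGPU extension (#2189), transfer mirrors the CUDA path. A minimal sketch, assuming a supported ROCm GPU; selecting the backend via `Flux.gpu_backend!` is an assumption about local setup:

```julia
using Flux, AMDGPU    # loading AMDGPU.jl activates Flux's AMDGPU extension

Flux.gpu_backend!("AMD")               # assumed one-time preference; applies after restart
model = Conv((3, 3), 3 => 16) |> gpu   # weights become ROCArrays
x = rand(Float32, 28, 28, 3, 1) |> gpu
y = model(x)
```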
Flux v0.13.13
Closed issues:
- Normalization layers promote eltype (#1562)
- Recurrent cell `eltype` restriction breaks `outputsize` (#1565)
- Performance regression with graph neural networks (#1577)
- Opaque error caused by Float64 input to RNN (#1972)
- Binding Flux.setup does not exist (#2169)
- Un-intended behaviour? Should Flux be able to reduce StaticArrays? (#2180)
- Custom model can not be trained (#2187)
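Several of the issues closed here (#1562, #1565, #1972) concern layers whose Float32 parameters meet Float64 input; they appear to have been resolved by having layers convert mismatched inputs with a one-time warning rather than promoting or failing opaquely. A sketch of the resulting behaviour:

```julia
using Flux

m = Dense(2 => 2)   # Float32 weights by default
x = rand(2)         # Float64 input
y = m(x)            # warns once, then computes in Float32
eltype(y)           # Float32
```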
Flux v0.13.12
Closed issues:
- Delta neural networks inference (#2129)
- [Bug] Embedding forward pass breaks for onehotbatch with multiple batch dimensions (#2160)
- MethodError: no method matching when training LSTMs even when loss function is working correctly (#2168)
- Type instability with Flux.update! when loss function involves extra arguments (#2175)
Merged pull requests:
- Un-deprecate `track_stats` for InstanceNorm (#2149) (@ToucheSir)
- Move `dropout` to NNlib (#2150) (@mcabbott)
- Use NNlib's `within_gradient` (#2152) (@mcabbott)
- Export `rand32` and friends (#2157) (@mcabbott); see the sketch after this list
- Remove piratical array conversion rule (#2167) (@ToucheSir)
- update: actions node 12 => node 16 (#2173) (@skyleaworlder)
- cuda 4.0 compat (#2177) (@CarloLucibello)
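The newly exported `rand32` and friends (#2157) are shorthands for Float32 array constructors:

```julia
using Flux

w = rand32(3, 4)    # same as rand(Float32, 3, 4)
n = randn32(3, 4)   # Float32 standard normals
b = zeros32(3)      # Float32 zeros; ones32 is analogous
```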
Flux v0.13.11
Closed issues:
- Deprecate `track_stats=true` for `GroupNorm` and `InstanceNorm` (#2006)
- `cpu(x)` errors for `x isa CuArray{<:CartesianIndex}` (#2116)
- Constructing a Chain from a dictionary (#2142)
- Method error when using `Flux.setup` with `Embedding` layer (#2144)
- Method Error when using Flux.withgradient (#2148)
Merged pull requests:
- fix cpu(x) for immutable arrays (#2117) (@CarloLucibello)
- Fix two bugs re `setup` (#2145) (@mcabbott); see the sketch after this list
- CompatHelper: bump compat for MLUtils to 0.4, (keep existing compat) (#2147) (@github-actions[bot])
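Issue #2144 and PR #2145 concern `Flux.setup`, which builds a tree of optimiser state matching the model for explicit-mode training. A minimal sketch with an `Embedding` layer (the sizes and learning rate are illustrative):

```julia
using Flux

model = Embedding(10 => 4)                 # 10 vocabulary entries, 4 features each
opt_state = Flux.setup(Adam(0.01), model)  # per-parameter optimiser state

x = [1, 3, 5]                              # integer token indices
grads = Flux.gradient(m -> sum(m(x)), model)
Flux.update!(opt_state, model, grads[1])
```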
Flux v0.13.10
Closed issues:
- remove Bors (#1843)
- Only generate and upload coverage for one matrix entry (#1939)
- [Discussion]: Revamped Getting Started guide (#2012)
- Using users provided weight matrix to build LSTM layers (#2130)
Merged pull requests:
- Re-write training docs (#2114) (@mcabbott)
- Move doc sections to "guide" + "reference" (#2115) (@mcabbott)
- Allow ForwardDiff in BatchNorm's track_stats (#2127) (@mcabbott)
- Fix last block in quickstart.md (#2131) (@simonschnake)
- Delete bors.toml (#2133) (@CarloLucibello)
- Docs for `onecold` (#2134) (@nathanielvirgo); see the sketch after this list
- [ISSUE 1939] Update workflow, to only generate coverage for a specific entry (#2136) (@skyleaworlder)
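`onecold` (#2134) is the inverse of one-hot encoding: it returns the index, or the label, of the largest entry:

```julia
using Flux

Flux.onecold([0.1, 0.7, 0.2])                  # 2, the index of the largest entry
Flux.onecold([0.1, 0.7, 0.2], [:a, :b, :c])    # :b, the same position mapped through labels
```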
Flux v0.13.9
Closed issues:
- Iteration over `params(m)` in explicit mode gives no gradient (#2091)
- `Flux.Optimise.update!` updating grads instead of params? (#2121)
- Flux.reset! triggers a BoundsError (#2124)
Merged pull requests:
- Remove `train!` from quickstart example (#2110) (@mcabbott); see the sketch after this list
- Re-organise "built-in layers" section (#2112) (@mcabbott)
- Narrower version of `@non_differentiable params` (#2118) (@mcabbott)
- allow non-tuple data in the new train! (#2119) (@CarloLucibello)
- fix train! test (#2123) (@CarloLucibello)
- Move 5 tutorials from fluxml.github.io (#2125) (@mcabbott)
- Remove Flux.Data module (#2126) (@mcabbott)
- CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (#2128) (@github-actions[bot])
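The quickstart change (#2110) and the reworked `train!` (#2119) both reflect the move to explicit-mode training. A minimal loop written without `train!`; the model, data, and learning rate here are illustrative:

```julia
using Flux

model = Dense(1 => 1)
data = [(randn(Float32, 1, 8), randn(Float32, 1, 8)) for _ in 1:10]
opt_state = Flux.setup(Descent(0.1), model)   # explicit optimiser state

for (x, y) in data
    grads = Flux.gradient(m -> Flux.mse(m(x), y), model)
    Flux.update!(opt_state, model, grads[1])  # mutates model and opt_state
end
```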