
Releases: FluxML/Flux.jl

v0.12.8

28 Oct 10:16
69afb67

Flux v0.12.8

Diff since v0.12.7

Closed issues:

  • Coverage (#89)
  • Flux.train! stops working after the first iteration without an error (#1692)
  • Update Zygote (#1728)
  • additional arguments to loss function? (#1730)
  • The Purpose and Goals of Flux.jl (#1734)
  • FluxML's NumFOCUS Affiliate project application (#1740)
  • ConvTranspose does not support groups (#1743)
  • deepcopy(nn::Chain) does not deep copy with CuArray weights! (#1747)
  • InvalidIRError when putting a model on the GPU (#1754)

Merged pull requests:

v0.12.7

29 Sep 11:31
2804618

Flux v0.12.7

Diff since v0.12.6

Closed issues:

  • Poor performance relative to PyTorch (#886)
  • Recur struct's fields are not type annotated, causing run-time dispatch and significant slowdowns (#1092)
  • Bug: lower degree polynomial substitute in gradient chain! (#1188)
  • Very slow precompile (>50min) on julia 1.6.0 on Windows (#1554)
  • Do not initialize CUDA during precompilation (#1597)
  • GRU implementation details (#1671)
  • Parallel layer doesn't need to be tied to array input (#1673)
  • update! a scalar parameter (#1677)
  • Support NamedTuples for Container Layers (#1680)
  • Freezing layer parameters still computes all gradients (#1688)
  • A demo is 1.5x faster in Flux than TensorFlow on CPU, but 3.0x slower when using CUDA (#1694)
  • Problems with a mixed CPU/GPU model (#1695)
  • Flux tests with master fail with signal 11 (#1697)
  • [Q] How does Flux.jl work on Apple Silicon (M1)? (#1701)
  • Typos in documents (#1706)
  • Fresh install of Flux giving errors in precompile (#1710)
  • Flux.gradient returns dict of params and nothing (#1713)
  • Weight matrix not updating with a user defined initial weight matrix (#1717)
  • [Documentation] No logsumexp in NNlib page (#1718)
  • Flattened data vs Flux.flatten layer in MNIST MLP in the model zoo (#1722)

Merged pull requests:

v0.12.6

23 Jul 10:31
0a21546

Flux v0.12.6

Diff since v0.12.5

Merged pull requests:

v0.12.5

19 Jul 07:02
9931730

Flux v0.12.5

Diff since v0.12.4

Closed issues:

  • Hessian vector products (#129)
  • Stopping criteria (#227)
  • Flux + Julia ecosystem docs (#251)
  • RNN unbroadcast on GPU not working (#421)
  • Shouldn't gradcheck compare Jacobians? (#462)
  • Transition examples in docs to doctests (#561)
  • Batch-axis thread parallelism (#568)
  • Add tests of ExpDecay (#684)
  • Sudden memory leak when training on GPU over many epochs (#736)
  • Performance variance between macOS / Linux? (#749)
  • onehot ambiguous method (#777)
  • Killed while training the model (#779)
  • type Method has no field sparam_syms when using @save model (#783)
  • Flux#zygote Error in phenomes... Mutating arrays is not supported (#819)
  • Custom serialization pass for intermediate states (#845)
  • OneHotMatrix does not support map (#958)
  • CuArrays + huber_loss iterate(::nothing) error (#1128)
  • Can't get Flux (v0.10.3) working for Custom Loss function (#1153)
  • Custom loss function on subset of parameters fails (#1371)
  • Minimizing sum fails (#1510)
  • gpu behaves differently from cu on a Char array (#1517)
  • Warn different size inputs in loss functions (#1522)
  • Recurrent docs need to be updated for v0.12 (#1564)
  • Computation of higher order derivatives for recurrent models results in strange errors (#1593)
  • Why does DataLoader not throw an error when fed with a 1D vector for the target? (#1599)
  • a small error in the documentation... (#1609)
  • Slow unnecessary GPU copy of output of gpu(::OffsetArray) (#1610)
  • "using Flux" makes type inference fail when there is a Ref{} (#1611)
  • @epochs is missing a bracket (#1615)
  • Flux Overview Documentation Out of Date (#1621)
  • missing kernel for Base.unique (#1622)
  • Compilation error on PPC (#1623)
  • _restructure as part of the public API? (#1624)
  • ERROR: setindex! not defined for Zygote.OneElement{...} (#1626)
  • MethodError: Cannot convert an object of type Params to an object of type Float64 (#1629)
  • MethodError: no method matching flatten(::Array{Float32,4}) (#1630)
  • Where are the cpu() and gpu() functions? (#1631)
  • bug in RNN docs (#1638)
  • Bug in the current overview documentation (#1642)
  • How to tell Flux.jl not to use the GPU? (#1644)
  • Missing docs for @functor (#1653)
  • typo in the docs/overview section right at the beginning (#1663)

Merged pull requests:

v0.12.4

01 Jun 22:15
b78a27b

Flux v0.12.4

Diff since v0.12.3

Closed issues:

  • Unable to get gradients of "Dense" models when sparse arrays are involved (#965)
  • Pullback within pullback throws error when using swish activation function (#1500)
  • Stable docs are stuck on v0.11.2 (#1580)
  • LSTM gradient calculation fails on GPU, works on CPU (#1586)
  • BSON.@save model_path * ".bson" model ERROR: type Method has no field ambig (#1591)
  • Too slow hcat of OneHotMatrix (#1594)
  • Fallback convolution implementation when using Duals (#1598)
  • Bad printing for OneHot* (#1603)
  • SamePad() with even kernel dimensions does not work (only in CUDA) (#1605)

Merged pull requests:

v0.12.3

28 Apr 23:17
37daa61

Flux v0.12.3

Diff since v0.12.2

Closed issues:

  • Flux overrides cat behaviour and causes stack overflow (#1583)

Merged pull requests:

v0.12.2

23 Apr 14:42
509b21a

Flux v0.12.2

Diff since v0.12.1

Closed issues:

  • Cosine_embedding_loss could be added to Flux.jl (#1094)
  • Char RNN errors (#1215)
  • Colab - MethodError: no method matching (::Flux.LSTMCell{... (#1563)
  • Issue with Flux.jl installation (#1567)
  • Issue with Flux.jl installation (#1568)
  • Model no longer type stable when using destructure and restructure (#1569)

Merged pull requests:

v0.12.1

01 Apr 02:14
8a5b977

Flux v0.12.1

Diff since v0.12.0

Closed issues:

  • Helper functions for choosing data types for bias and weight in Flux chains? (#1548)
  • LSTM failed to return gradient (#1551)
  • Flux.destructure gives MethodError when used with non-trainable parameters (#1553)
  • Restructure on Dense no longer plays nicely with alternative types (#1556)

Merged pull requests:

v0.12.0

28 Mar 05:32
e251327

Flux v0.12.0

Diff since v0.11.6

Closed issues:

  • RNN state dimension with batches (#121)
  • Support for additional dimensions in Dense layer (#282)
  • Error messages when CUDNN is not loaded. (#287)
  • Easier way of switching models from cpu to gpu? (#298)
  • How would I implement an echo state network in Flux? (#336)
  • Pkg.update() in Julia 0.6.x gets you an incompatible version of Flux (#341)
  • Indices not defined (#368)
  • Regression with Flux (#386)
  • LSTM sequence processing (#393)
  • Checkpointing (#402)
  • Allowing users to specify their default data folder (#436)
  • elu not working with GPU (#477)
  • Tied Weights (#488)
  • rethinking Conv, and layer granularity in general (#502)
  • σ.() on GPU not using CUDAnative (#519)
  • Using tensorflow and pytorch layers (#521)
  • Abstract layers (#525)
  • Max norm regularisation (#541)
  • Typical accuracy function using onecold with a OneHotMatrix fails to compile on GPU (#582)
  • Export apply!, etc (#588)
  • Better initialization support (#670)
  • Deprecate initialiser keyword arguments (#671)
  • backprop fails on min.(x1,x2) (#673)
  • Adaptive pooling layers in Flux. (#677)
  • CUDAnative (#682)
  • accumulate gradient with the new gradient API? (#707)
  • sigmoid: multiplicative identity only defined for non-square matrices (#730)
  • 1D Conv Broken (#740)
  • Layers and Params should support equality (#1012)
  • InstanceNorm throws a scalar getindex disallowed error on GPU (#1195)
  • Error with GroupNorm on GPU (#1247)
  • Error with BatchNorm/InstanceNorm after Conv1D on GPU (#1280)
  • How to apply L2 regularization to a subset of parameters? (#1284)
  • define modules function (#1294)
  • Misleading InstanceNorm documentation? (#1308)
  • ConvTranspose on GPU fails with certain activation functions (#1350)
  • Conv with non-homogeneous array eltypes gives confusing error message (#1421)
  • Layers' docstrings and constructors inconsistencies (#1422)
  • BatchNorm alters its sliding mean/standard deviation parameters even in testmode if Zygote is called (#1429)
  • BatchNorm on CUDA accepts improper channel size argument and "works" in a possibly ill-defined way. Proper errors on CPU (#1430)
  • Better handling for layers with multiple inputs w/ outputsize (#1466)
  • Dense function does not support tensor? (#1480)
  • Cannot load model saved with JLD (#1482)
  • RNN and GRU give mutation error; LSTM gives ArgumentError about number of fields (#1483)
  • Moving OneHotMatrix to GPU triggers the slow scalar operations (#1494)
  • Does gain do anything in kaiming_uniform? (#1498)
  • Zeros has old behaviour on releases up to 0.11.6 (#1507)
  • getting this -> ERROR: Mutating arrays is not supported (solved) (#1512)
  • Moving multihead attention from transformers.jl into Flux.jl (#1514)
  • Gradient cannot be obtained in testmode on a GPU net with BatchNorm (#1520)
  • Development version document example on Dense layer's bias not working (#1523)
  • How to use the flatten layer? (it does not flatten arrays) (#1525)
  • Ambiguity in recurrent neural network training (#1528)
  • scalar indexing when showing OneHot gpu (#1532)
  • Activation function relu: terrible performance (#1537)
  • Error on precompile (#1539)
  • Flux.normalise vs standardise (#1541)
  • Cudnn batchnorm causes errors when I disable BatchNorm when training (#1542)
  • DimensionMismatch("All data should contain same number of observations") (#1543)
  • Softmax stucks the network (#1546)

Merged pull requests:

v0.11.6

26 Jan 08:30

Flux v0.11.6

Diff since v0.11.5

Merged pull requests: