
v0.10.0

@nathanielsimard nathanielsimard released this 24 Oct 22:45
· 490 commits to main since this release

Burn v0.10.0 adds the burn-compute crate, which simplifies the creation of custom backends, a new training dashboard, and the ability to use the GPU in the browser, along with a web demo. Additionally, numerous new features, bug fixes, and CI improvements have been made.

Warning: there are breaking changes, see below.

Changes

Burn Compute

Burn Import

  • Add more ONNX record types @antimora

  • Support no-std for ONNX imported models @antimora

  • Add custom file location for loading record with ONNX models @antimora

  • Support importing erf operation to ONNX @AuruTus

Burn Tensor

Burn Dataset

  • Improved speed of the SQLite dataset @antimora

  • Use gix-tempfile only when sqlite is enabled @AlexErrant

Burn Common

  • Add benchmark abstraction @louisfd

  • Use thread-local RNG to generate IDs @dae
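The thread-local RNG change avoids contention on a shared, locked generator when many threads request IDs. A minimal sketch of the general pattern, not Burn's actual implementation (the xorshift generator and `next_id` name here are illustrative assumptions):

```rust
use std::cell::Cell;
use std::time::{SystemTime, UNIX_EPOCH};

thread_local! {
    // Per-thread generator state, seeded once per thread; no lock is
    // needed because each thread owns its own state.
    static RNG_STATE: Cell<u64> = Cell::new(
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_nanos() as u64
            | 1, // keep the seed non-zero for xorshift
    );
}

/// Generate a pseudo-random 64-bit ID from the thread-local generator.
pub fn next_id() -> u64 {
    RNG_STATE.with(|state| {
        // xorshift64: fast and lock-free; a non-zero state never maps to zero.
        let mut x = state.get();
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        state.set(x);
        x
    })
}
```

Because each thread seeds and advances its own state, concurrent callers never synchronize, which is the performance point of the change.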

Burn Autodiff

  • Use AtomicU64 for node ids improving performance @dae
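Allocating node ids from an atomic counter is cheaper than generating and comparing heavier identifiers per graph node. A hedged sketch of the technique; `NodeId` and the counter below are illustrative, not Burn's actual types:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Global monotonically increasing counter for node ids.
static NODE_ID_COUNTER: AtomicU64 = AtomicU64::new(0);

/// Unique identifier for a node in an autodiff graph.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct NodeId(u64);

impl NodeId {
    /// Allocate the next id with a single atomic fetch-and-add.
    /// `Relaxed` ordering suffices: uniqueness only requires the
    /// increment to be atomic, not ordered with other memory.
    pub fn new() -> Self {
        NodeId(NODE_ID_COUNTER.fetch_add(1, Ordering::Relaxed))
    }
}
```

Comparisons and hashing on a `u64` are also faster than on string-based ids, which matters when the graph is traversed during backpropagation.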

Burn WGPU

Burn Candle

  • Candle backend is now available as a crate and updated with Candle advances @louisfd @agelas

Burn Train

  • New training CLI dashboard using ratatui @nathanielsimard

  • [Breaking] Heavy refactor of burn-train making it more extensible and easier to work with @nathanielsimard

  • Checkpoints can be customized with criteria based on collected metrics @nathanielsimard

  • Add the possibility to do early stopping based on collected metrics @nathanielsimard
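Early stopping based on collected metrics typically means halting once a validation metric has stopped improving for a given number of epochs. A rough sketch of that logic; the `EarlyStopping` type and its API are hypothetical, not burn-train's actual interface:

```rust
/// Tracks a validation metric (lower is better, e.g. loss) and signals
/// when it has not improved for `patience` consecutive epochs.
pub struct EarlyStopping {
    patience: usize,
    best: f64,
    epochs_without_improvement: usize,
}

impl EarlyStopping {
    pub fn new(patience: usize) -> Self {
        Self {
            patience,
            best: f64::INFINITY,
            epochs_without_improvement: 0,
        }
    }

    /// Feed the metric collected this epoch.
    /// Returns `true` when training should stop.
    pub fn should_stop(&mut self, metric: f64) -> bool {
        if metric < self.best {
            // New best value: reset the patience window.
            self.best = metric;
            self.epochs_without_improvement = 0;
        } else {
            self.epochs_without_improvement += 1;
        }
        self.epochs_without_improvement >= self.patience
    }
}
```

The same improvement-tracking idea underlies metric-based checkpoint criteria: instead of stopping, a new best value triggers saving a checkpoint.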

Examples

  • Add image classifier web demo using different backends, including WebGPU @antimora

Bugfixes

  • Epoch and iteration were swapped (#838) @daniel-vainsencher

  • RNNs (GRU & LSTM) were not generic over the batch size @agelas, @EddieMataEwy

  • Other device adapters in WGPU were ignored when the best available device was used @chistophebiocca

Documentation

Chores

Thanks

Thanks to all aforementioned contributors and to our sponsors @smallstepman, @0x0177b11f and @premAI-io.