
Releases: tracel-ai/burn

v0.13.2

03 May 14:23

Bugfix

Fix autodiff graph memory management strategy to improve performance (#1702 #1710) @louisfd
Fix matmul double broadcasting for ndarray (#1646 #1679) @lancelet

v0.13.1

26 Apr 20:01

Bugfix

Fix autodiff memory leak and improve performance with a new graph memory management strategy (#1698) @nathanielsimard @louisfd
Fix inplace fused operations (#1682) @nathanielsimard

Improvements

Linear 1D support, helpful for ONNX support (#1682) @nathanielsimard
Upgrade wgpu to 0.19.4 (#1692) @nathanielsimard

v0.13.0

12 Apr 20:12

The Burn 0.13 release is a significant update introducing numerous new features and performance enhancements. One major change is the removal of the Sync trait implementation from most Burn types; see Core User APIs. Additionally, the release introduces several new tensor operations, module features, and optimizers, as well as improvements to the autodiff backend. Notably, a new bridge mechanism facilitates runtime switching between backends, and significant work has been done on the Just-in-Time and Wgpu backends. The release also includes numerous bug fixes, documentation improvements, infrastructure updates, CI enhancements, and miscellaneous changes to improve code quality and usability.

Core User APIs

A major change in this release is that most Burn types no longer implement the Sync trait; this includes modules, optimizers, and tensors. The change should not impact users of the Learner struct for model training, but it may affect those who implemented their own training loop or inference server. While modules, optimizers, and tensors can be sent to other threads, they cannot be accessed concurrently by multiple threads. This aligns with Burn's workflow, where each tensor operation requires an owned version of the tensor. The change was made to safely reduce the number of locks needed when modifying the state of the autodiff graph, fusion state, allocation cache, and various other use cases. While not all locks have been removed, the type signature no longer poses a problem for follow-up optimizations. Note that the same tensor can still be sent to multiple threads without copying the underlying data; however, it must be cloned before being sent to another thread. (#1575) @nathanielsimard
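
A minimal sketch of the resulting pattern, assuming the NdArray backend (enabled by the ndarray feature): the tensor is cloned before being moved into the spawned thread, and the clone does not copy the underlying data.

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    let device = Default::default();
    let tensor = Tensor::<NdArray, 1>::from_floats([1.0, 2.0, 3.0], &device);

    // Tensors are Send but no longer Sync: clone the tensor (cheap, the
    // underlying data is not copied) and move the clone into the thread.
    let sent = tensor.clone();
    let handle = std::thread::spawn(move || sent.add_scalar(1.0));

    // The original tensor remains usable on this thread.
    let result = handle.join().unwrap();
    println!("{:?}", result.into_data());
}
```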

Tensor

Module

Optimizer

Train

Backend

This release also introduces the backend bridge, a new mechanism for runtime switching between backends.
While an improvement, the bridge remains compatible with the previous approach to supporting mixed precision. (#1529) @nathanielsimard
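
The sketch below is illustrative only; the type and trait names are hypothetical stand-ins, not Burn's actual bridge API. It shows the general shape of the idea: a bridge converts a tensor representation from one backend to another so execution can move between them at runtime.

```rust
// Hypothetical names for illustration; Burn's real bridge lives in burn-tensor.
#[derive(Debug, Clone)]
struct CpuTensor(Vec<f32>);

#[derive(Debug, Clone)]
struct GpuTensor(Vec<f32>); // stand-in for device memory

/// A bridge moves data into a target backend's representation.
trait Bridge<Target> {
    fn into_target(self) -> Target;
}

impl Bridge<GpuTensor> for CpuTensor {
    fn into_target(self) -> GpuTensor {
        // A real bridge would transfer (and possibly convert) the data,
        // e.g. to run part of a model in full precision elsewhere.
        GpuTensor(self.0)
    }
}

fn main() {
    let cpu = CpuTensor(vec![1.0, 2.0, 3.0]);
    let gpu: GpuTensor = cpu.into_target();
    println!("{gpu:?}");
}
```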

JIT

Significant effort has been devoted over the past few months to refactoring the previous Wgpu backend into a shader-agnostic Just-in-Time backend.
All lower-level dependencies have been abstracted into the Just-in-Time Runtime trait, requiring a compiler, compute server, and storage.
The bulk of this work was carried out by @nathanielsimard and @louisfd.
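
Conceptually, the Runtime trait ties those pieces together. The definition sketch below uses hypothetical names (not the actual burn-jit API) to show the shape of the abstraction.

```rust
// Hypothetical names, not the actual burn-jit API: a runtime bundles the
// compiler, compute server, and storage the backend needs to turn
// shader-agnostic kernels into executable work.
trait Compiler {
    /// Lower a shader-agnostic kernel representation to target code.
    fn compile(&self, kernel: &str) -> String;
}

trait Storage {
    /// Allocate a device buffer and return a handle to it.
    fn alloc(&mut self, size: usize) -> usize;
}

trait ComputeServer {
    /// Execute compiled code against buffers held in storage.
    fn execute(&mut self, compiled: &str, buffers: &[usize]);
}

/// A runtime is defined by its choice of the three components.
trait Runtime {
    type Compiler: Compiler;
    type Server: ComputeServer;
    type Storage: Storage;
}
```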

Commits: #1274 #1280 #1313 #1340 #1356 #1359 #1378 #1391 #1396 #1398 #1417 #1429 #1423 #1424 #1433 #1456 #1474 #1457 #1480 #1472 #1493 #1509 #1530 #1528 #1541 #1550 #1569

Wgpu

Autodiff

Extensive work has also been undertaken on Burn's autodiff backend.
The backend now supports gradient checkpointing to reduce memory usage and has been refactored into a client/server architecture.
These updates result in significantly less blocking when tracking gradients, enhancing performance particularly on smaller models.
Furthermore, various bugs have been fixed where some graph nodes weren't used, potentially truncating the autodiff graph.
Overall, these changes make the autodiff process more reliable and efficient. (#1575) (#1358) @louisfd @nathanielsimard
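
As a rough illustration of the checkpointing idea in plain Rust (not Burn's autodiff API): the forward pass keeps only the checkpointed input, and the backward pass recomputes the discarded intermediate from it, trading compute for memory.

```rust
// Toy gradient checkpointing for z = f(g(x)) with g(x) = x^2, f(y) = 3y.
fn g(x: f64) -> f64 { x * x }
fn f(y: f64) -> f64 { 3.0 * y }

fn main() {
    let x = 2.0;

    // Forward pass: only the checkpoint `x` is stored; the intermediate
    // activation g(x) is discarded instead of being kept for backward.
    let z = f(g(x));

    // Backward pass: recompute the intermediate from the checkpoint when
    // the chain rule needs it. dz/dx = dz/dy * dy/dx = 3 * 2x.
    let _y_recomputed = g(x); // recomputation replaces a stored value
    let dz_dy = 3.0;
    let dy_dx = 2.0 * x;
    println!("z = {z}, dz/dx = {}", dz_dy * dy_dx);
}
```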

Candle

Data

Import

Benchmarks

We have implemented a system that enables the comparison of backends across a variety of tasks.
Currently, most of these tasks consist of micro-benchmarks, but we plan to expand the range of benchmarks in the future.
To ensure Burn's portability and performance across different devices, the community can run and upload benchmarks! 🔥

Bug Fix

Infrastructure

The minimum Rust version has been updated to 1.75. (#1297) @syl20bnr

Docs

v0.12.1

01 Feb 21:11

Bugfix

Fix wgpu performance issue: revert to wgpu 0.18.0 #1221 @nathanielsimard
Fix problem with batch norm on LibTorch backend #1226 @nathanielsimard
Fix docs build #1212 #1229 @syl20bnr @nathanielsimard
Fix training dashboard metrics switch #1228 @nathanielsimard

Chores

Put all dependencies versions in workspace #1210 @nathanielsimard

v0.12.0

31 Jan 20:05

This release highlights an optimized Wgpu Backend, clearer examples and documentation, and numerous bug fixes.
Notably, breaking changes in device management mandate explicit device specification to prevent potential bugs.
Additionally, the new PyTorch recorder simplifies model porting by enabling automatic import of PyTorch's weights.
We also put a lot of effort into improving our CI infrastructure for enhanced reliability, efficiency, and scalability.

Changes

Tensor & Module API

  • Added support for generic modules #1147 @nathanielsimard
  • Added support for tuple modules #1186 @varonroy
  • Enabled loading PyTorch .pt (weights/states) files directly into a module's record, currently available on Linux & macOS #1085 @antimora
  • Added mish and softplus activation functions #1071 @pacowong
  • Improved chunk performance in backends #1032 @Kelvinyu1117
  • [Breaking] Added the device as an argument for tensor operations that require it, replacing the previous optional device usage #1081 #518 #1110 @kpot
    • To update existing code, either use Default::default for the same behavior or specify the desired device; see the migration sketch after this list.
  • Allowed raw tensors to be serialized/deserialized directly with serde #1041 @jmacglashan
  • [Breaking] Forced the choice of the device for deserialization #1160 #1165 @nathanielsimard
  • Added element-wise pow operation #1133 @skewballfox
  • Refactored the tensor backend API names #1174 @skewballfox
  • [Breaking] Changed the default recorder to NamedMpkFileRecorder #1161 #1151 @laggui
    • After a bit of exploration, we removed all compression because it added too much overhead
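
A migration sketch for the device change, assuming the NdArray backend; any backend's Device works the same way.

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

fn main() {
    // Same behavior as before the change: the backend's default device.
    let device = Default::default();
    let a = Tensor::<NdArray, 2>::zeros([2, 3], &device);

    // Creation ops now always take the device explicitly.
    let b = Tensor::<NdArray, 2>::ones([2, 3], &device);

    println!("{:?}", (a + b).into_data());
}
```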

Examples & Documentation

Wgpu Backend

Fusion

Infra

Chore

Bug Fixes

v0.11.1

04 Dec 15:46

Burn v0.11.1 fixes a few bugs in the recent v0.11.0 release.

Bugfixes

Fix concurrency issue in burn-fusion, related to freeing tensors that are never read @nathanielsimard

Fix typos in the book @shanmo

Fix Readme @nathanielsimard

Fix docs build @dcvz

Thanks

Thanks to all aforementioned contributors.

v0.11.0

01 Dec 21:10

The main feature of Burn v0.11.0 is automatic kernel fusion, which is still in active development but already usable. Many enhancements and new features have been added throughout the framework for better efficiency and reliability.

Warnings:

  • There are some breaking changes, see below.
  • The organization has been renamed from burn-rs to tracel-ai.

Changes

Overall changes

Burn Fusion

Burn Core

Burn Tensor

  • New operators in tensor API: unsqueeze_dim, narrow, stack, chunk, tril, triu @dcvz (see the usage sketch after this list)

  • Recip operation support on all backends @gzsombor

  • Implement DoubleEndedIterator for DimIter @wcshds
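
A usage sketch for some of the new operators, generic over any backend and written against the 0.11-era API (before the explicit device argument introduced in 0.12):

```rust
use burn::tensor::{backend::Backend, Tensor};

fn demo<B: Backend>(a: Tensor<B, 2>, b: Tensor<B, 2>) {
    // unsqueeze_dim inserts a size-1 dimension: [r, c] -> [r, 1, c].
    let unsqueezed: Tensor<B, 3> = a.clone().unsqueeze_dim(1);

    // stack concatenates tensors along a brand-new dimension.
    let stacked: Tensor<B, 3> = Tensor::stack(vec![a.clone(), b], 0);

    // tril keeps the lower triangle and zeroes everything above it.
    let lower = a.clone().tril(0);

    // narrow takes `length` slices starting at `start` along `dim`.
    let first_row = a.narrow(0, 0, 1);

    // chunk splits a dimension into (up to) `chunks` pieces.
    let halves: Vec<Tensor<B, 3>> = stacked.clone().chunk(2, 0);

    let _ = (unsqueezed, stacked, lower, first_row, halves);
}
```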

Burn Compute

Burn Import

Burn Train

Burn WGPU

Burn Candle

  • Support conv_transpose_1d @louisfd

  • Enable accelerate for macOS CPU @dcvz

Backend Comparison

Bugfixes

  • Allow arbitrary precision threshold for float equality assertion @meteor-lsw

  • Update serde_rusqlite to the new version with MIT/Apache2 license @antimora

  • Fix SQLite database tests on Windows @syl20bnr

  • Fix max_dim and min_dim tensor operations @gzsombor

  • Fix inplace double binary broadcasting in the LibTorch backend @nathanielsimard

Documentation

Continuous Integration

Thanks

Thanks to all aforementioned contributors.

v0.10.0

24 Oct 22:45

Burn v0.10.0 sees the addition of the burn-compute crate to simplify the process of creating new custom backends, a new training dashboard, and the possibility of using the GPU in the browser along with a web demo. Additionally, numerous new features, bug fixes, and CI improvements have been made.

Warning: there are breaking changes, see below.

Changes

Burn Compute

Burn Import

  • Add more ONNX record types @antimora

  • Support no-std for ONNX imported models @antimora

  • Add custom file location for loading record with ONNX models @antimora

  • Support importing erf operation to ONNX @AuruTus

Burn Tensor

Burn Dataset

  • Improved speed of SQLite Dataset @antimora

  • Use gix-tempfile only when sqlite is enabled @AlexErrant

Burn Common

  • Add benchmark abstraction @louisfd

  • Use thread-local RNG to generate IDs @dae

Burn Autodiff

  • Use AtomicU64 for node ids improving performance @dae

Burn WGPU

Burn Candle

  • Candle backend is now available as a crate and updated with Candle advances @louisfd @agelas

Burn Train

  • New training CLI dashboard using ratatui @nathanielsimard

  • [Breaking] Heavy refactor of burn-train making it more extensible and easier to work with @nathanielsimard

  • Checkpoints can be customized with criteria based on collected metrics @nathanielsimard

  • Add the possibility of early stopping based on collected metrics @nathanielsimard (see the sketch after this list)
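
A toy illustration of metric-based early stopping in plain Rust (not burn-train's actual types): training stops once the tracked validation metric has not improved for a set number of epochs.

```rust
// Stop when the validation loss has not improved for `patience` epochs.
struct EarlyStopper {
    best: f64,
    epochs_since_best: usize,
    patience: usize,
}

impl EarlyStopper {
    fn new(patience: usize) -> Self {
        Self { best: f64::INFINITY, epochs_since_best: 0, patience }
    }

    fn should_stop(&mut self, valid_loss: f64) -> bool {
        if valid_loss < self.best {
            self.best = valid_loss;
            self.epochs_since_best = 0;
        } else {
            self.epochs_since_best += 1;
        }
        self.epochs_since_best >= self.patience
    }
}

fn main() {
    let mut stopper = EarlyStopper::new(2);
    for (epoch, loss) in [0.9, 0.7, 0.72, 0.71, 0.74].into_iter().enumerate() {
        if stopper.should_stop(loss) {
            println!("early stop at epoch {epoch}");
            break;
        }
    }
}
```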

Examples

  • Add image classifier web demo using different backends, including WebGPU @antimora

Bugfixes

  • Epoch and iteration were swapped. (#838) @daniel-vainsencher

  • RNN modules (GRU & LSTM) were not generic over the batch size @agelas, @EddieMataEwy

  • Other device adaptors in WGPU were ignored when the best available device was used @chistophebiocca

Documentation

Chores

Thanks

Thanks to all aforementioned contributors and to our sponsors @smallstepman, @0x0177b11f and @premAI-io.

v0.9.0

06 Sep 14:48

Burn v0.9.0 sees the addition of the Burn Book, a new model repository, and many new operations and optimizations.

Burn Book

The Burn Book is available at https://burn-rs.github.io/book/

Model repository

The Model repository is available at https://github.com/burn-rs/models

Changes to Burn

Neural networks

Tensors

Training

  • New training metrics @Elazrod56
    • CPU temperature and usage
    • GPU temperature
    • Memory usage
  • Custom training and validation metric loggers @nathanielsimard
  • Migration from log4rs to tracing, better integration in a GUI app @dae
  • Training interruption @dae
  • New custom optimize method @nathanielsimard

Backends

Dataset

Import & ONNX

Fix

  • Hugging Face downloader Windows support @Macil
  • Fix grad replace and autodiff backward broadcast @nathanielsimard
  • Fix processed count at learning completion @dae
  • Adjust some flaky tests @dae
  • Ability to disable experiment logging @dae

Configuration

Documentation

Thanks

Thanks to all aforementioned contributors and to our sponsors @smallstepman and @premAI-io.

v0.8.0

25 Jul 15:38

In this release, our main focus was on creating a new backend using wgpu.
We greatly appreciate the meaningful contributions made by the community across the project.
As usual, we have expanded the number of supported operations.

Changes

Tensor

Dataset

  • Added a dataset using SQLite for storage, now used to store Hugging Face datasets. @antimora
  • New speech command audio dataset. @antimora
  • Create a Python virtual environment for Hugging Face dependencies. @dengelt

Burn-Import

Backend

Neural Networks

  • Added LSTM module. @agelas
  • Added GRU module. @agelas
  • Better weight initialization with added support for Xavier (Glorot); see the sketch after this list. @louisfd
  • Added MSE loss. @bioinformatist
  • Cleanup padding for convolution and pooling modules. @Luni-4
  • Added sinusoidal positional embedding module. @antimora
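
As a sketch of the math behind Xavier (Glorot) uniform initialization (not Burn's Initializer API): weights are drawn from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)). A tiny deterministic generator stands in for a real RNG.

```rust
// Xavier (Glorot) uniform: w ~ U(-a, a), a = sqrt(6 / (fan_in + fan_out)).
fn xavier_uniform(fan_in: usize, fan_out: usize, n: usize, seed: &mut u64) -> Vec<f32> {
    let a = (6.0 / (fan_in + fan_out) as f64).sqrt();
    (0..n)
        .map(|_| {
            // Linear congruential generator in place of a real RNG.
            *seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
            let u = (*seed >> 11) as f64 / (1u64 << 53) as f64; // in [0, 1)
            ((2.0 * u - 1.0) * a) as f32 // scaled into [-a, a)
        })
        .collect()
}

fn main() {
    let mut seed = 42;
    let weights = xavier_uniform(128, 64, 8, &mut seed);
    println!("{weights:?}");
}
```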

Fix

Documentation

  • Improve documentation across the whole project ♥! @antimora

Thanks

Thanks to all contributors and to the sponsor @smallstepman.