Mammoth - An Extendible (General) Continual Learning Framework for PyTorch

Official repository of Class-Incremental Continual Learning into the eXtended DER-verse and Dark Experience for General Continual Learning: a Strong, Simple Baseline

Mammoth is a framework for continual learning research. It is designed to be modular, easy to extend, and - most importantly - easy to debug. Ideally, all the code necessary to run the experiments is included in the repository, without needing to check out other repositories or install additional packages.

With Mammoth, nothing is set in stone. You can easily add new models, datasets, training strategies, or functionalities.

Join our Discord Server for all your Mammoth-related questions!

NEW: WIKI

We have created a WIKI! Check it out for more information on how to use Mammoth.

Supported benchmarks: Sequential MNIST, Sequential CIFAR-10, Sequential Tiny ImageNet, Permuted MNIST, Rotated MNIST, MNIST-360.

Setup

  • Use ./utils/main.py to run experiments (see the example below).
  • Use argument --load_best_args to use the best hyperparameters from the paper.
  • New models can be added to the models/ folder (a sketch follows the Models list below).
  • New datasets can be added to the datasets/ folder.
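
For example, the following trains DER++ on Sequential CIFAR-10 with a 500-example buffer, using the tuned hyperparameters via --load_best_args. This is a minimal sketch: the model, dataset, and buffer size are illustrative, and the available names are listed in the Models and Datasets sections below.

python ./utils/main.py --dataset seq-cifar10 --model derpp --buffer_size 500 --load_best_args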

Models

  • Efficient Lifelong Learning with A-GEM (A-GEM, A-GEM-R - A-GEM with reservoir buffer): agem, agem_r.
  • Bias Correction (BiC): bic.
  • Continual Contrastive Interpolation Consistency (CCIC) - Requires pip install kornia: ccic.
  • CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning (CODA-Prompt) - Requires pip install timm==0.9.8: coda-prompt.
  • Dark Experience Replay (DER): der.
  • Dark Experience Replay++ (DER++): derpp.
  • DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning (DualPrompt) - Requires pip install timm==0.9.8: dualprompt.
  • Experience Replay (ER): er.
  • online Elastic Weight Consolidation (oEWC): ewc_on.
  • Function Distance Regularization (FDR): fdr.
  • Greedy Sampler and Dumb Learner (GDumb): gdumb.
  • Gradient Episodic Memory (GEM) - Unavailable on Windows: gem.
  • Greedy gradient-based Sample Selection (GSS): gss.
  • Hindsight Anchor Learning (HAL): hal.
  • Incremental Classifier and Representation Learning (iCaRL): icarl.
  • JointGCL: joint_gcl (General Continual Learning only).
  • Learning to Prompt (L2P) - Requires pip install timm==0.9.8: l2p.
  • LiDER (on DER++, iCaRL, GDumb, and ER-ACE): derpp_lider, icarl_lider, gdumb_lider, er_ace_lider.
  • Learning a Unified Classifier Incrementally via Rebalancing (LUCIR): lucir.
  • Learning without Forgetting (LwF): lwf.
  • Meta-Experience Replay (MER): mer.
  • Progressive Neural Networks (PNN): pnn.
  • Regular Polytope Classifier (RPC): rpc.
  • Synaptic Intelligence (SI): si.
  • SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model (SLCA) - Requires pip install timm==0.9.8: slca.
  • Transfer without Forgetting (TwF): twf.
  • eXtended-DER (X-DER): xder (full version), xder_ce (X-DER with CE), xder_rpc (X-DER with RPC).
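
As noted in Setup, new models go in the models/ folder. Below is a minimal sketch of a custom model, assuming Mammoth's ContinualModel base class with its NAME/COMPATIBILITY attributes, the self.net/self.loss/self.opt members, and the observe() per-batch training hook; the class name and training logic here are illustrative, so check models/utils/continual_model.py for the authoritative interface.

# A minimal sketch, assuming the ContinualModel interface described above;
# MyModel is a hypothetical name, not a model shipped with Mammoth.
from models.utils.continual_model import ContinualModel

class MyModel(ContinualModel):
    NAME = 'my_model'
    COMPATIBILITY = ['class-il', 'domain-il', 'task-il']

    def observe(self, inputs, labels, not_aug_inputs):
        # One optimization step on the current mini-batch; Mammoth calls
        # observe() for every batch and expects the scalar loss back.
        self.opt.zero_grad()
        outputs = self.net(inputs)
        loss = self.loss(outputs, labels)
        loss.backward()
        self.opt.step()
        return loss.item()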

Datasets

NOTE: Datasets are automatically downloaded into the data/ folder.

  • This can be changed by modifying the base_path function in utils/conf.py (see the sketch below).

  • The data/ folder is not tracked by git and is created automatically if missing.
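
A minimal sketch of that change, assuming base_path simply returns the data directory as a string; the path below is illustrative, and the actual function in the repository may carry extra logic.

# utils/conf.py (sketch) - point Mammoth at a custom data directory.
def base_path() -> str:
    return '/my/storage/mammoth-data/'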

  • Sequential MNIST (Class-IL / Task-IL): seq-mnist.

  • Sequential CIFAR-10 (Class-IL / Task-IL): seq-cifar10.

  • Sequential Tiny ImageNet (Class-IL / Task-IL): seq-tinyimg.

  • Sequential Tiny ImageNet resized 32x32 (Class-IL / Task-IL): seq-tinyimg-r.

  • Sequential CIFAR-100 (Class-IL / Task-IL): seq-cifar100.

  • Sequential CIFAR-100 resized 224x224 (ViT version) (Class-IL / Task-IL): seq-cifar100-224.

  • Sequential CIFAR-100 resized 224x224 (ResNet50 version) (Class-IL / Task-IL): seq-cifar100-224-rs.

  • Permuted MNIST (Domain-IL): perm-mnist.

  • Rotated MNIST (Domain-IL): rot-mnist.

  • MNIST-360 (General Continual Learning): mnist-360.

  • Sequential CUB-200 (Class-IL / Task-IL): seq-cub200.

  • Sequential ImageNet-R (Class-IL / Task-IL): seq-imagenet-r.

Pretrained backbones

Citing these works

@article{boschini2022class,
  title={Class-Incremental Continual Learning into the eXtended DER-verse},
  author={Boschini, Matteo and Bonicelli, Lorenzo and Buzzega, Pietro and Porrello, Angelo and Calderara, Simone},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}

@inproceedings{buzzega2020dark,
  author = {Buzzega, Pietro and Boschini, Matteo and Porrello, Angelo and Abati, Davide and Calderara, Simone},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
  pages = {15920--15930},
  publisher = {Curran Associates, Inc.},
  title = {Dark Experience for General Continual Learning: a Strong, Simple Baseline},
  volume = {33},
  year = {2020}
}

Awesome Papers using Mammoth

Our Papers

  • Dark Experience for General Continual Learning: a Strong, Simple Baseline (NeurIPS 2020) [paper]
  • Rethinking Experience Replay: a Bag of Tricks for Continual Learning (ICPR 2020) [paper] [code]
  • Class-Incremental Continual Learning into the eXtended DER-verse (TPAMI 2022) [paper]
  • Effects of Auxiliary Knowledge on Continual Learning (ICPR 2022) [paper]
  • Transfer without Forgetting (ECCV 2022) [paper] [code]
  • Continual semi-supervised learning through contrastive interpolation consistency (PRL 2022) [paper] [code]
  • On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning (NeurIPS 2022) [paper] [code]

Other Awesome CL works using Mammoth

Get in touch if we missed your awesome work!

  • Prediction Error-based Classification for Class-Incremental Learning (ICLR 2024) [paper] [code]
  • TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion (NeurIPS 2023) [paper] [code]
  • Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation (NeurIPS 2023) [paper] [code]
  • A Unified and General Framework for Continual Learning (ICLR 2024) [paper] [code]
  • Decoupling Learning and Remembering: a Bilevel Memory Framework with Knowledge Projection for Task-Incremental Learning (CVPR 2023) [paper] [code]
  • Regularizing Second-Order Influences for Continual Learning (CVPR 2023) [paper] [code]
  • Sparse Coding in a Dual Memory System for Lifelong Learning (CVPR 2023) [paper] [code]
  • A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm (CVPR 2023) [paper] [code]
  • A Multi-Head Model for Continual Learning via Out-of-Distribution Replay (CVPR 2023) [paper] [code]
  • Preserving Linear Separability in Continual Learning by Backward Feature Projection (CVPR 2023) [paper] [code]
  • Complementary Calibration: Boosting General Continual Learning With Collaborative Distillation and Self-Supervision (TIP 2023) [paper] [code]
  • Continual Learning by Modeling Intra-Class Variation (TMLR 2023) [paper] [code]
  • ConSlide: Asynchronous Hierarchical Interaction Transformer with Breakup-Reorganize Rehearsal for Continual Whole Slide Image Analysis (ICCV 2023) [paper] [code]
  • CBA: Improving Online Continual Learning via Continual Bias Adaptor (ICCV 2023) [paper] [code]
  • Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal (ICML 2023) [paper] [code]
  • Learnability and Algorithm for Continual Learning (ICML 2023) [paper] [code]
  • Pretrained Language Model in Continual Learning: a Comparative Study (ICLR 2022) [paper] [code]
  • Representational continuity for unsupervised continual learning (ICLR 2022) [paper] [code]
  • Continual Normalization: Rethinking Batch Normalization for Online Continual Learning (ICLR 2022) [paper] [code]
  • Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System (ICLR 2022) [paper] [code]
  • New Insights on Reducing Abrupt Representation Change in Online Continual Learning (ICLR 2022) [paper] [code]
  • Looking Back on Learned Experiences for Class/Task Incremental Learning (ICLR 2022) [paper] [code]
  • Task Agnostic Representation Consolidation: a Self-supervised based Continual Learning Approach (CoLLAs 2022) [paper] [code]
  • Consistency is the key to further Mitigating Catastrophic Forgetting in Continual Learning (CoLLAs 2022) [paper] [code]
  • Self-supervised models are continual learners (CVPR 2022) [paper] [code]
  • Learning from Students: Online Contrastive Distillation Network for General Continual Learning (IJCAI 2022) [paper] [code]

Contributing

Pull requests welcome!

Please use autopep8 with the following parameters (an example invocation follows the list):

  • --aggressive
  • --max-line-length=200
  • --ignore=E402
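
Put together, a formatting pass before committing might look like this; --in-place is an extra autopep8 flag used here for convenience, and the file path is illustrative:

autopep8 --in-place --aggressive --max-line-length=200 --ignore=E402 models/my_model.py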

Previous versions

If you're interested in a version of this repo that only includes the original code for Dark Experience for General Continual Learning: a Strong, Simple Baseline or Class-Incremental Continual Learning into the eXtended DER-verse, please use the following tags:
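
Once you have picked a tag, checking out that version is a standard git operation; <tag> below is a placeholder for the chosen tag name:

git checkout <tag>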
