This website applies a recommendation system and continuous learning.
Updated May 25, 2024 - EJS
As part of my daily development and improvement challenge, I present "One Commit Per Day". This project goes beyond a simple GitHub repository; it embodies my daily commitment to continuously improving my programming skills. Each day, a new commit marks my ongoing progress and my passion for coding.
Awesome Machine Unlearning (A Survey of Machine Unlearning)
A collection of diffusion model papers categorized by their subareas.
An Extensible (General) Continual Learning Framework based on PyTorch - official codebase of Dark Experience for General Continual Learning
Official PyTorch implementation of paper "Continual Action Assessment via Task-Consistent Score-Discriminative Feature Distribution Modeling" (TCSVT 2024)
A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning. arXiv:2307.09218.
A library for benchmarking the long-term memory and continual learning capabilities of LLM-based agents, with all the tests and code you need to evaluate your own agents. See more in the blog post:
Code for CPAL-2024 paper "Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates"
Lu, et al. (2023). Reconciling Shared versus Context-Specific Information in a Neural Network Model of Latent Causes.
Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
Variational continual learning of a conditional diffusion model to generate MNIST. Based on 'Conditional Diffusion MNIST'.
This repository contains a collection of papers and codes for continual reinforcement learning.
Code for the paper "FOCIL: Finetune-and-Freeze for Online Class-Incremental Learning by Training Randomly Pruned Sparse Experts"
[ACL2024] A Codebase for Incremental Learning with Large Language Models; Official released code for "Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models (ACL 2024)", "Incremental Sequence Labeling: A Tale of Two Shifts (ACL 2024 Findings)", and "Concept-1K: A Novel Benchmark for Instance Incremental Learning (arxiv)"
A Biologically-Inspired Approach to Continual Learning through Adjustment Suppression and Sparsity Promotion
Continual Learning code for SRe2L paper (NeurIPS 2023 spotlight)
Awesome Continual Multi-view Clustering is a collection of state-of-the-art and novel continual multi-view clustering methods (papers and code).