Gathers research papers, corresponding code (where available), reading notes, and any other related materials about hot🔥🔥🔥 fields in Computer Vision based on Deep Learning.
Updated May 23, 2024
A curated list of awesome papers on NLP, Computer Vision, Model Compression, XAI, Reinforcement Learning, Security, etc.
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
Deep Multimodal Guidance for Medical Image Classification: https://arxiv.org/pdf/2203.05683.pdf
Official PyTorch Code for "Dynamic Temperature Knowledge Distillation"
A personal knowledge 🧠 base used to distill knowledge into atomic documents 📄 using logseq
Full Wiki enables seamless access to Wikipedia content in multiple languages. It translates English Wikipedia, the most comprehensive knowledge base, into other languages, so the user does not need to know the translated search term. This project is a proof of concept of how LLMs will tear down language barriers.
Distill knowledge from in-context learning into efficient LoRA adapters, enabling expert LLM performance with smaller context windows.
[AAAI 2023] Official PyTorch Code for "Curriculum Temperature for Knowledge Distillation"
AI book for everyone
A curated list for Efficient Large Language Models
Code for CVPR'24 Paper: Segment Any Event Streams via Weighted Adaptation of Pivotal Tokens
[CVPR 2024 Highlight] Logit Standardization in Knowledge Distillation (a rough sketch of the core idea follows this list)
A treasure chest for visual classification and recognition powered by PaddlePaddle
[CVPR 2024] Source code for "Diffusion-Based Adaptation for Classification of Unknown Degraded Images".
A beginner's tutorial on model compression
Knowledge Distillation from VGG16 (teacher model) to MobileNet (student model); a minimal distillation-loss sketch follows this list
An implementation of the KAN architecture using learnable activation functions for knowledge distillation on the MNIST handwritten digits dataset. The project distills a three-layer teacher KAN model into a more compact two-layer student model and compares the distilled student against a non-distilled baseline.
Attention-guided Feature Distillation for Semantic Segmentation
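Several entries above (e.g. the VGG16 → MobileNet repository) build on the classic teacher-student recipe: a temperature-softened KL term between teacher and student logits plus the usual cross-entropy on hard labels. Below is a minimal PyTorch sketch of that loss; the temperature and weighting values are illustrative assumptions, not taken from any specific repository listed here.

```python
# A minimal sketch of vanilla knowledge distillation (Hinton-style soft targets);
# temperature and alpha below are illustrative defaults, not values from any repo above.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.7):
    """Combine a soft-target KL term (teacher -> student) with hard-label cross-entropy."""
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # The T^2 factor keeps gradients from the soft term comparable across temperatures.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

In practice the teacher runs in eval mode with gradients disabled, and only the student's parameters are updated with this loss.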
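The "Logit Standardization in Knowledge Distillation" entry above centers on z-score normalizing logits before the softened softmax, so the student is not forced to match the teacher's logit scale. The sketch below shows that normalization step under that reading of the paper; the epsilon value and how the normalized logits feed the KD loss are assumptions, and the official repository may differ.

```python
# A rough sketch of per-sample z-score logit standardization, as suggested by the
# paper title above; epsilon and downstream usage are assumptions, not the official code.
import torch

def standardize_logits(logits: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Z-score each sample's logits so teacher and student share a common scale."""
    mean = logits.mean(dim=-1, keepdim=True)
    std = logits.std(dim=-1, keepdim=True)
    return (logits - mean) / (std + eps)

# The standardized logits would then replace the raw logits inside a standard
# temperature-softened KD loss, such as the distillation_loss sketch above.
```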