Unify Efficient Fine-Tuning of 100+ LLMs
Low Tensor Rank adaptation of large language models
PEFT is a wonderful tool that enables training very large models in low-resource environments. Together, quantization and PEFT will enable widespread adoption of LLMs.
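To illustrate that point, here is a minimal sketch of combining 4-bit quantization with a LoRA adapter using the Hugging Face transformers, peft, and bitsandbytes libraries; the model ID, rank, and target modules are illustrative assumptions, not taken from any repository listed here.

```python
# Minimal QLoRA-style sketch: frozen 4-bit base model + small trainable LoRA adapters.
# The model ID, rank r, and target_modules below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # placeholder model ID
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)  # prep quantized model for training

lora_config = LoraConfig(
    r=16,                                   # low-rank adapter dimension
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapter matrices are trainable
```

Only the small rank-r adapter matrices receive gradients, so the optimizer state stays tiny while the frozen base model sits in 4-bit memory; this is what makes single-GPU fine-tuning of multi-billion-parameter models feasible.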
Speech, Language, Audio, Music Processing with Large Language Model
An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Firefly: a training tool for large models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
Fine-tuning the coding LLM OpenCodeInterpreter-DS-6.7B for text-to-SQL code generation on a single A100 GPU in PyTorch.
This repository is dedicated to small projects and some theoretical material that I used to get into NLP and LLMs in a practical and efficient way.
MindSpore online courses: Step into LLM
This is the official repository for the ICML 2024 paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors".
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
High-quality image generation model, part of the NGC Models collection. @prithivmlmods
a bro who codes with you
Implementation for the different ML tasks on Kaggle platform with GPUs.
Fine-tuning the Pegasus and FLAN-T5 pre-trained language models on the DialogSum dataset for conversation summarization, to optimize the context window in RAG LLMs.
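A minimal sketch of that kind of fine-tuning setup, assuming the google/flan-t5-base checkpoint and the knkarthick/dialogsum dataset on the Hugging Face Hub (both placeholder IDs, not confirmed by the repository above):

```python
# Sketch: fine-tune FLAN-T5 on DialogSum for dialogue summarization.
# Dataset/model IDs and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_id = "google/flan-t5-base"            # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

ds = load_dataset("knkarthick/dialogsum")   # assumed Hub mirror of DialogSum

def preprocess(batch):
    # Prefix each dialogue with an instruction; tokenize inputs and targets.
    inputs = tokenizer(
        ["summarize: " + d for d in batch["dialogue"]],
        max_length=512, truncation=True,
    )
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = ds.map(preprocess, batched=True, remove_columns=ds["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-dialogsum",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=3e-4,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```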
🚂 Fine-tuning large language models
[SIGIR'24] The official implementation code of MOELoRA.
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT