Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca
Implementation of the models from the Universal-NER paper (2024) as a Streamlit-based web application for Named Entity Recognition on PDF documents. Users upload PDF files, from which the application extracts text, images, and tables to identify entities of a user-specified entity type.
M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. Furthermore, M3DBench provides a new benchmark to assess large models across 3D vision-centric tasks.
EasyRLHF aims to provide an easy and minimal interface to train aligned language models, using off-the-shelf solutions and datasets
This is the official code repository for the ACL Findings Paper "Multi-Task Transfer Matters During Instruction-Tuning"
Evaluating Large Language Models with Instructions and Prompts
KoTox is an automatically generated instruction dataset in Korean. The instruction set is used to mitigate the toxicity of the LLMs.
An instruction-tuning dataset generation script
Chinese Grammar Error and Spelling Error Correction System
Implementation of Bitune: Bidirectional Instruction-Tuning
Kosy🍵llama: a simple way to apply the Random Noisy Embeddings with fine-tuning method to Korean LLMs
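The noisy-embeddings idea behind several entries here (NEFTune-style training) amounts to perturbing token embeddings with scaled uniform noise during fine-tuning. This is a minimal sketch of that mechanism only, not code from the Kosy🍵llama repository; the function name and the `alpha` default are illustrative assumptions.

```python
import math
import random

def neft_noise(embeddings, alpha=5.0):
    """Add uniform noise in [-eps, eps] to each embedding component,
    with eps = alpha / sqrt(L * d), where L is the sequence length and
    d the embedding dimension (sketch of the NEFTune-style scaling).
    `embeddings` is a list of L rows of d floats; `alpha` is illustrative."""
    L = len(embeddings)
    d = len(embeddings[0])
    eps = alpha / math.sqrt(L * d)
    return [[x + random.uniform(-eps, eps) for x in row] for row in embeddings]

# Example: a 3-token sequence with 4-dimensional embeddings.
noisy = neft_noise([[0.0] * 4 for _ in range(3)], alpha=1.0)
print(len(noisy), len(noisy[0]))
```

In actual training the noise is applied only during fine-tuning, not at inference; the scaling keeps the perturbation magnitude roughly constant across sequence lengths and model widths.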
Instruction and training dataset generation using Mistral 7B with context from document chunks
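A dataset-generation pipeline of this kind typically wraps each document chunk in an instruction template and emits JSONL records for a teacher model to complete. The sketch below shows only that record-building step; the function name, template, and schema are hypothetical, not taken from the repository.

```python
import json

def make_records(chunks, template="Summarize the following passage:\n{chunk}"):
    """Turn document chunks into instruction-tuning records.
    The schema (instruction/output keys) is a common convention, assumed here;
    `output` is left empty for a teacher model (e.g. Mistral 7B) to fill in."""
    return [
        {"instruction": template.format(chunk=chunk), "output": ""}
        for chunk in chunks
    ]

# Example: one chunk becomes one JSONL line.
records = make_records(["LoRA adapts a frozen model with low-rank updates."])
print(json.dumps(records[0]))
```

Each line of the resulting file is one training example, which is the usual input format for instruction-tuning toolchains.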
A multimodal model for language-guided socially compliant robot navigation.
The official implementation of "Instruction-Guided Visual Masking"
Vision Large Language Models trained on M3IT instruction tuning dataset
This repository contains the implementation of a fine-tuned Llama2 chatbot using QLoRA, tailored to provide detailed information and recommendations about movies. The model is fine-tuned on the IMDB dataset, enabling it to generate informed and contextually relevant responses.
End-to-end MLOps LLM instruction finetuning based on PEFT & QLoRA to solve math problems.
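Several of the QLoRA/PEFT projects above rest on the same low-rank update: the frozen weight W is augmented as W' = W + (alpha/r)·B·A, where A and B are small trainable matrices of rank r. This plain-list sketch shows that arithmetic only; it is not any repository's implementation, and all names are illustrative.

```python
def lora_apply(W, A, B, alpha, r):
    """Compute the effective weight W' = W + (alpha/r) * B @ A (LoRA).
    W is m x n, B is m x r, A is r x n, all as plain lists of lists;
    alpha/r is the standard LoRA scaling factor."""
    scale = alpha / r
    rows, cols = len(W), len(W[0])
    delta = [
        [scale * sum(B[i][k] * A[k][j] for k in range(r)) for j in range(cols)]
        for i in range(rows)
    ]
    return [[W[i][j] + delta[i][j] for j in range(cols)] for i in range(rows)]

# Example: rank-1 update of a 2x2 identity weight.
W_eff = lora_apply([[1.0, 0.0], [0.0, 1.0]],
                   A=[[1.0, 1.0]], B=[[1.0], [1.0]],
                   alpha=2.0, r=1)
print(W_eff)  # → [[3.0, 2.0], [2.0, 3.0]]
```

QLoRA keeps W quantized (e.g. 4-bit) and frozen while training only A and B, which is why it fits large models on modest hardware.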
The official implementation of paper "Demystifying Instruction Mixing for Fine-tuning Large Language Models"