🏗️ Fine-tune, build, and deploy open-source LLMs easily!
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Multi-node production AI stack. Run the best of open source AI easily on your own servers. Create your own AI by fine-tuning open source models. Integrate LLMs with APIs. Run gptscript securely on the server
Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL, Text2API, Text2Vis, and more.
Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, and a number of candidate inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Includes demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.
Finetuning an LLM for structured data extraction from press releases
Interact with your SQL database, Natural Language to SQL using LLMs
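Entries like the one above typically work by rendering the database schema and the user's question into a single prompt for the model. A minimal sketch of such a prompt builder is below; the function name, schema format, and prompt wording are illustrative assumptions, not the API of any repo listed here.

```python
# Hypothetical helper: turn a table schema plus a natural-language question
# into one prompt an LLM can answer with a SQL query.

def build_text2sql_prompt(question: str, schema: dict[str, list[str]]) -> str:
    """Render each table as a CREATE TABLE statement, then append the question."""
    schema_lines = [
        f"CREATE TABLE {table} ({', '.join(columns)});"
        for table, columns in schema.items()
    ]
    return (
        "Given the following SQLite schema:\n"
        + "\n".join(schema_lines)
        + f"\n\nWrite a SQL query that answers: {question}\n"
        + "Return only the SQL."
    )

prompt = build_text2sql_prompt(
    "How many orders were placed in 2024?",
    {"orders": ["id INTEGER", "placed_at DATE", "total REAL"]},
)
```

The resulting string would then be sent to whatever chat-completion endpoint the tool is configured with; constraining the model to "return only the SQL" simplifies parsing the response before executing it against the database.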
Provides best practices for LMOps, as well as elegant and convenient access to the features of the Qianfan MaaS platform.
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Open source project for data preparation of LLM application builders
microsoft/Phi-3-vision-128k-instruct for Apple MLX
"Improving Mathematical Reasoning with Process Supervision" by OpenAI
A large language model (LLM) efficiently fine-tuned for sentiment analysis on the IMDB dataset.
A stable, high-quality OpenAI API proxy for enterprises and developers. Supports ChatGPT API calls and the OpenAI API, including gpt-4 and gpt-3.5. No OpenAI key, OpenAI account, or US-dollar bank card needed; just call it directly. Stable and easy to use! (Zhizengzeng)
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
Web UI for using and fine-tuning XTTS.
Fine-tuning is a cost-efficient way of preparing a model for specialized tasks: it reduces both the required training time and the size of the training dataset. Because open-source pre-trained models are available, we do not need to perform full training every time we create a model.
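The cost argument above can be made concrete with a low-rank adapter (LoRA-style) sketch: the pre-trained weight stays frozen and only a small adapter is trained. The shapes, rank, and variable names below are illustrative assumptions, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4

W = rng.standard_normal((d_out, d_in))        # frozen pre-trained weight
A = rng.standard_normal((d_out, rank)) * 0.01  # trainable down-projection
B = np.zeros((rank, d_in))                     # trainable up-projection (zero-init)

def forward(x):
    # Effective weight is W + A @ B; during fine-tuning only A and B
    # receive gradient updates, so the base model is untouched.
    return (W + A @ B) @ x

full_params = W.size               # 64 * 64 = 4096
adapter_params = A.size + B.size   # 64*4 + 4*64 = 512
print(f"trainable fraction: {adapter_params / full_params:.3f}")
```

Because B is zero-initialized, the adapted layer starts out identical to the pre-trained one, and with rank 4 on a 64x64 layer only 12.5% of the parameters train; for realistic layer sizes the fraction is far smaller, which is where the time and data savings come from.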
Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric or reference answer, in absolute or relative mode, and much more. It contains a list of all the available tools, methods, repos, and code for detecting hallucination, LLM evaluation, grading, and much more.