A cloud-native vector database, storage for next generation AI applications (Go; updated May 30, 2024)
A web UI project for learning about large language models. Includes features such as chat, quantization, fine-tuning, prompt-engineering templates, and multimodality.
Langchain-Chatchat (formerly Langchain-ChatGLM): a local knowledge-base question-answering app built on Langchain and LLMs such as ChatGLM.
FAISS and Annoy indexing + search evaluation workflow
Lightweight RAG framework: a simple, scalable framework with efficient embeddings, built on FAISS, ChromaDB, and Ollama.
Fast open-source search and clustering engine for vectors (and soon strings), in C++, C, Python, JavaScript, Rust, Java, Objective-C, Swift, C#, GoLang, and Wolfram 🔍
Text-To-Speech, RAG, and LLMs. All local!
Scalable, low-latency, hybrid-enabled vector search in Postgres. Revolutionize vector search, not the database.
A Dockerized Streamlit app that uses a RAG LLM with FAISS to answer questions from uploaded Markdown files, deployed on Google Cloud Platform.
Performs semantic search over a dataset of text documents, using FAISS for indexing and the Universal Sentence Encoder for generating embeddings.
Quality and speed comparison of word2vec, Sentence-BERT, BM25, and an IVFFlat index.
An extension for oobabooga/text-generation-webui that enables the LLM to search the web using DuckDuckGo
The Llama-2-GGML-CSV-Chatbot is a conversational tool built on the Llama-2 7B language model. It supports seamless multi-turn conversations grounded in uploaded CSV data.
Similarities: a toolkit for similarity calculation and semantic search. Supports text-to-text, text-to-image, and image-to-image search over hundreds of millions of records; written in Python 3 and ready to use out of the box.
An end-to-end LangChain project using open-source Hugging Face LLMs such as Mistral, together with open-source embedding models.
A fast, customizable framework for automatic causal inference in Python.
This repo provides a simple integration of Hugging Face language models to answer questions over a given set of documents.
Talk_with_PDF is an AI-driven solution for automating information extraction and answer generation from PDF documents. By integrating OpenAI's language models and embeddings, it provides accurate, contextually relevant responses for education, business, and research.