
Chat GBit

A simple Chat GPT clone



📔 Table of Contents

🌟 About the Project

📷 Screenshots

  • Text input demo
  • Text input demo (dark mode)
  • Voice input demo
  • Upload training chat
  • Code highlighting

👾 Tech Stack

Client
Server

🎯 Features

  • Chat history. πŸ†•
  • Simple authentication. πŸ†•
  • Self-hosted AI server with LocalAI. πŸ†•
  • New chat completion.
  • Voice input.
  • Allow setting model for the chat.
  • Upload training chat.
  • Regenerate chat completion.
  • MDX support.
  • Code highlighting.

🔑 Environment Variables

To run this project, you will need to add the following environment variables to your .env file:

  • NextAuth configs:

    • NEXTAUTH_SECRET: Used to encrypt the NextAuth.js JWT, and to hash email verification tokens.

    • NEXTAUTH_URL: When deploying to production, set the NEXTAUTH_URL environment variable to the canonical URL of your site.

Note

You don't have to set NEXTAUTH_URL when deploying to Vercel.

  • App configs:

    • OPENAI_API_KEY: OpenAI API key.

    • LOCAL_AI_BASE_URL: Base URL of the self-hosted LocalAI server, used by the app to connect to it.

    • DATABASE_URL: PostgreSQL database URL.

For example:

# .env
NEXTAUTH_SECRET="my-secret-key"
NEXTAUTH_URL="http://localhost:3000/"
OPENAI_API_KEY="sk-***"
LOCAL_AI_BASE_URL="http://localhost:8080"
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/chatgbit"

You can also check out the file .env.example to see all required environment variables.
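
For illustration, here is a minimal sketch of how these variables might be read on the server with plain process.env; the variable names match .env.example, but the helper itself is hypothetical and the app may centralize or validate them differently:

// Hypothetical config helper; adjust to how the app actually loads configuration.
export const config = {
  nextAuthSecret: process.env.NEXTAUTH_SECRET,
  openAiApiKey: process.env.OPENAI_API_KEY,
  localAiBaseUrl: process.env.LOCAL_AI_BASE_URL ?? "http://localhost:8080",
  databaseUrl: process.env.DATABASE_URL,
};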

🧰 Getting Started

‼️ Prerequisites

πŸƒ Run Locally

Clone the project:

git clone https://github.com/DuckyMomo20012/chat-gbit.git

Go to the project directory:

cd chat-gbit

Install dependencies:

pnpm install

Start the database:

docker-compose up -d

Or, with the LocalAI server (it might take a while to download the model):

docker-compose --profile local-ai up -d

Run the migrations:

pnpm prisma:migrate

Start the server:

pnpm dev

👀 Usage

Basic usage

This app uses the new Chat Completion API to generate chat completions.
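
For reference, here is a minimal sketch of the kind of request involved, using the official openai Node package. It mirrors the Chat Completion API shape but is not the app's actual server code:

import OpenAI from "openai";

// Standalone example; assumes OPENAI_API_KEY is set in the environment.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" },
  ],
});

console.log(completion.choices[0].message.content);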

You can communicate with the chatbot by:

  • Text input: Typing in the input box and pressing the Enter key or clicking the Send button.

  • Voice input: Click the Start Recording button to start recording your voice and click the Stop Recording button to stop recording. The server will try to transcribe your voice and send the text to the chatbot as the prompt.

    • This feature requires microphone permission. If you haven't granted the permission, the browser will ask you to grant permission.

    • You can ONLY record up to 30 seconds of audio; this limit is intended to reduce cost.

    • The audio data is persisted until the user submits the prompt, reloads the page, or starts a new recording.

    • The recording is saved as a .webm audio file and then sent to the server.

    • This input uses the new Whisper API to generate the transcription. You can read more about the Whisper API here. Currently, the API only supports the whisper-1 model (see the sketch after this list).

    • When the user revokes the microphone permission during recording, the recording stops immediately, but the voice input only switches to inactive mode about 10 seconds later (not immediately). The user can still submit the audio that was captured before recording stopped.
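
For reference, here is a minimal sketch of a Whisper transcription request with the official openai Node package, assuming the recorded audio has already been written to a .webm file; the app's own upload route may differ:

import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// "recording.webm" is a hypothetical file name standing in for the uploaded audio.
const transcription = await openai.audio.transcriptions.create({
  model: "whisper-1",
  file: fs.createReadStream("recording.webm"),
});

console.log(transcription.text);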

Flow description:

  • When there is no message in the chat, you can submit the prompt with the role user, or with the role system by checking the Set as system instruction checkbox.

  • When you submit the prompt with the role user, the client sends the request to the server. The server calls the OpenAI API to generate the next completion and then sends the completion back to the client.

    • While the client is fetching the completion, the user can't submit the next prompt or stop the generation.
  • When the client is typing the completion, you can stop the generation by clicking the Stop generating button.

Note

The completion is already generated and sent to the client before the typing animation starts, so you are already billed for that completion (if you use OpenAI models). A sketch illustrating this follows at the end of this flow description.

  • When no completion is being typed or fetched, you can regenerate the completion by clicking the Regenerate response button.

  • When something goes wrong and the completion isn't added to the chat, you can:

    • Regenerate the completion.

    • Submit the prompt again. The new prompt will replace the old prompt in the chat.
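
As flagged in the note above, the full completion text is already on the client and "typing" is a presentation effect. Purely as an illustration (a hypothetical helper, not the app's code), stopping only halts the reveal:

// Illustrative sketch only; the server has already returned fullText.
async function typeOut(
  fullText: string,
  onUpdate: (visible: string) => void,
  isStopped: () => boolean,
) {
  for (let i = 1; i <= fullText.length; i++) {
    if (isStopped()) return; // "Stop generating" only halts the animation
    onUpdate(fullText.slice(0, i));
    await new Promise((resolve) => setTimeout(resolve, 20));
  }
}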

Set model for chat

You can set the model for the chat by clicking the Set model button.

Available models:

  • gpt-3.5-turbo
  • gpt-3.5-turbo-0301: supported through at least June 1st.
  • ggml-gpt4all-j-v1.3-groovy.bin (Self-hosted): Local AI chat model.
  • ggml-whisper-base.bin (Self-hosted): Local AI transcription model.
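
For illustration, here is one way the self-hosted models above could be routed, assuming the OpenAI-compatible API that LocalAI exposes; this is an assumption about the wiring, not the app's actual code:

import OpenAI from "openai";

// Hypothetical helper: LocalAI speaks the OpenAI API, so only the base URL changes.
const SELF_HOSTED_MODELS = [
  "ggml-gpt4all-j-v1.3-groovy.bin",
  "ggml-whisper-base.bin",
];

export function clientForModel(model: string) {
  const selfHosted = SELF_HOSTED_MODELS.includes(model);
  return new OpenAI({
    apiKey: selfHosted ? "local-ai" : process.env.OPENAI_API_KEY,
    baseURL: selfHosted ? `${process.env.LOCAL_AI_BASE_URL}/v1` : undefined,
  });
}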

Note

You should change the model before submitting the prompt. Changing it after the prompt has been submitted will have no effect.

Upload training chat

Since the Chat Completion API supports "hard-coded" chat messages, you can upload a training chat to the chatbot by clicking the Settings button.

Caution

Updating the training chat will delete all the chat history.

  • The chat MUST follow the chat format (see the sketch after this list).

  • The user can also hide the training chat by clicking the Hide training messages button.

  • The form is validated as you type, so error messages appear immediately.
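
For reference, the uploaded training chat is expected to use the same role/content message shape as the Chat Completion API. A sketch (with hypothetical content) follows; the app's validation may be stricter about the exact fields:

// Hypothetical training chat: an ordered list of role/content messages.
type TrainingMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

const trainingChat: TrainingMessage[] = [
  { role: "system", content: "You are a polite support agent for Chat GBit." },
  { role: "user", content: "What can you do?" },
  { role: "assistant", content: "I can answer questions about this app." },
];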

MDX support

This app supports MDX v2. You can now create .mdx files for new routes. For example, you can create a new route /about by creating a new file pages/about.mdx.

This feature was added primarily for parsing markdown content in the chat. When the markdown content is parsed, it is sanitized by rehype-sanitize.

  • Code blocks and inline code are allowed to keep the className attribute to support code highlighting, as sketched below.
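
Here is a sketch of the kind of schema tweak this implies, following rehype-sanitize's documented pattern for keeping className on code elements; the app's actual schema may differ:

import rehypeSanitize, { defaultSchema } from "rehype-sanitize";

// Extend the default schema so className survives on code elements,
// which is what the highlighter keys off (e.g. className="language-python").
const schema = {
  ...defaultSchema,
  attributes: {
    ...defaultSchema.attributes,
    code: [...(defaultSchema.attributes?.code ?? []), "className"],
  },
};

// The plugin is then registered as [rehypeSanitize, schema] in the rehype plugin list.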

Warning

The content in .mdx files is not sanitized. You should be aware of the security risk.

Code highlighting

The app now supports code highlighting in the chat (both prompt and completion).

Note

Writing markdown content for the prompt is not recommended. The prompt should be plain text.

  • Code highlighting is not applied while typing; you can still see the backticks (`) in the chat.

For example, when the message contains a code block or inline code:

  • Code block (```): The code block will be highlighted with the CodeHighlight component from @mantine/code-highlight, which replaced @mantine/prism after the Mantine v7 upgrade (see the sketch after this list).

    • If the code block has a language specified, e.g., python, the code block will be highlighted for that language if the highlighter supports it.
  • Inline code (`): The inline code will be highlighted with Mantine's Code component.
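
For illustration, here is roughly how the two cases could be rendered with Mantine components; these renderer names are hypothetical and the app's markdown-to-component mapping may differ:

import { CodeHighlight } from "@mantine/code-highlight";
import { Code } from "@mantine/core";

// Fenced blocks get full syntax highlighting; inline code uses Mantine's Code component.
export const CodeBlock = ({ code, language }: { code: string; language?: string }) => (
  <CodeHighlight code={code} language={language} />
);

export const InlineCode = ({ children }: { children: string }) => <Code>{children}</Code>;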

Note

The code highlighting won't affect the real prompt or completion, even if the user stops the generation while the completion is typing. The prompt or completion is sent to the server as plain text.

🧭 Roadmap

  • Load training chat from a local file or text input.
  • Hide message.
  • Display code block in completion.
  • Edit user prompt.
  • Support chat toolbar for the small screen.
  • Open the usage panel in a new tab.
  • Regenerate mid chat completion.

👋 Contributing

Contributions are always welcome!

📜 Code of Conduct

Please read the Code of Conduct.

❔ FAQ

  • Why did the app migrate from Local Storage to a PostgreSQL database?

    • We wanted to add the multi-chat feature, but it was hard to maintain the chat history in Local Storage with Redux. We tried to implement this feature in PRs #38, #39.

⚠️ License

Distributed under the MIT license. See LICENSE for more information.

🀝 Contact

Duong Vinh - @duckymomo20012 - tienvinh.duong4@gmail.com

Project Link: https://github.com/DuckyMomo20012/chat-gbit.

💎 Acknowledgements

Here are useful resources and libraries that we have used in our projects:
