A Rust crate for serving OpenAI's GPT-3 API, with rate limiting and per-user token usage tracking out of the box.
- Rust crate for API access
- Base API server
- Rate limiting keyed by user ID
- Per-user token usage tracking
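Per-user rate limiting can be pictured as a counter keyed by user ID that resets each time window. The sketch below is purely illustrative (a simple fixed-window limiter), not the crate's actual implementation:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative fixed-window rate limiter keyed by user ID.
// This is an assumption for explanation, not the crate's real code.
struct RateLimiter {
    window: Duration,
    max_requests: u32,
    counts: HashMap<String, (Instant, u32)>, // user ID -> (window start, count)
}

impl RateLimiter {
    fn new(window: Duration, max_requests: u32) -> Self {
        Self { window, max_requests, counts: HashMap::new() }
    }

    // Returns true if this user's request is allowed in the current window.
    fn check(&mut self, user_id: &str) -> bool {
        let now = Instant::now();
        let entry = self.counts.entry(user_id.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) > self.window {
            *entry = (now, 0); // window expired: start a fresh one
        }
        if entry.1 < self.max_requests {
            entry.1 += 1;
            true
        } else {
            false
        }
    }
}
```

A real server would hold this state behind a mutex (or use a crate such as a governor-style limiter) so concurrent request handlers can share it safely.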
Create a `.env` file at the root of this project and fill in your API key:

    GPT_KEY=...
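At startup the server needs that key from the environment. A minimal sketch of the lookup, assuming a dotenv-style loader has already populated the process environment from `.env` (the function name here is hypothetical):

```rust
use std::env;

// Hypothetical helper: fetch the GPT_KEY variable that the .env file
// provides. A dotenv-style loader would normally populate the process
// environment from .env before this is called.
fn gpt_key() -> Result<String, env::VarError> {
    env::var("GPT_KEY")
}

fn main() {
    match gpt_key() {
        Ok(key) => println!("loaded API key ({} chars)", key.len()),
        Err(_) => eprintln!("GPT_KEY is not set; create a .env file first"),
    }
}
```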
Then run `cargo run` in this directory to start the server on port 8000.
Alternatively, build and run with Docker:

    $ docker build . -t rs-openai:latest
    $ docker run -p 8000:8000 rs-openai:latest