💭 Describe the feature
In addition to OpenAI and its associated cost, it would be great if we were able to use Ollama.
💡 Proposed Solution
Extend the AI integration to support Ollama, which allows running various LLM models locally without a fee.
Recently, Ollama added OpenAI API compatibility. One feasible way to use Ollama with czg is the `ollama cp` command, which copies an existing model under a temporary name. Run a command like the following in the terminal:
ollama cp gemma gpt-3.5-turbo
Then modify the contents of the .czrc file as follows:
{
  "openAIToken": " ",
  "apiEndpoint": "http://localhost:11434/v1"
}
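The renaming trick works because the client keeps requesting the model name gpt-3.5-turbo; with the apiEndpoint override, that request is sent to Ollama's OpenAI-compatible /chat/completions route, where the copied name now resolves to the local model. A minimal sketch of the request that would be sent (the helper names and the prompt wording are illustrative assumptions, not czg's actual code):

```python
import json

def chat_completions_url(api_endpoint: str) -> str:
    # OpenAI-compatible chat route that Ollama serves under /v1.
    return api_endpoint.rstrip("/") + "/chat/completions"

def build_payload(diff: str) -> dict:
    # The client asks for "gpt-3.5-turbo"; after `ollama cp gemma gpt-3.5-turbo`
    # Ollama resolves that name to the local gemma model.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": "Write a commit message for:\n" + diff}
        ],
    }

config = {"openAIToken": " ", "apiEndpoint": "http://localhost:11434/v1"}
print(chat_completions_url(config["apiEndpoint"]))
print(json.dumps(build_payload("fix: example diff")["messages"][0]["role"]))
```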
Thanks! I will start working on this soon 🫠
# ollama cp
ollama pull gemma
ollama ls
ollama cp <target-module> gpt-3.5-turbo
ollama ls  # check cp success
It is recommended to use the command line for this configuration, because the AI configuration is loaded from a different path than other configuration:
npx czg --api-key=" " --api-endpoint="http://localhost:11434/v1"
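Before running czg against the override, it can help to confirm the local OpenAI-compatible endpoint is actually up. A small sketch, assuming Ollama's /v1/models listing route (the helper names are mine, not part of czg):

```python
import urllib.request

def models_url(api_endpoint: str) -> str:
    # OpenAI-compatible model-listing route served by Ollama under /v1.
    return api_endpoint.rstrip("/") + "/models"

def endpoint_reachable(api_endpoint: str, timeout: float = 2.0) -> bool:
    # True if the local server answers; False if Ollama is not running.
    try:
        with urllib.request.urlopen(models_url(api_endpoint), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, start the server with `ollama serve` before invoking czg.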