Issues: ollama/ollama
#4254 [feature request] How does the Ollama model reside on the GPU? (opened May 8, 2024 by lonngxiang)
#4249 [bug] The model does not output correctly in Ollama, but it works fine in LM Studio (opened May 8, 2024 by vawterdada)
#4248 [bug] error loading model architecture: unknown model architecture: 'qwen2moe' (opened May 8, 2024 by li904775857)
#4239 [bug] "amdgpu is not supported" for AMD Vega64 on Windows (opened May 7, 2024 by bryndin)
#4228 [feature request, ollama.com] Mixtral 8x22b template update (opened May 7, 2024 by slavonnet)
#4227 [bug, performance] Degraded response quality on v0.1.33 (opened May 7, 2024 by dezoito)
#4226 [bug, gpu, nvidia] Error when running llama3-70B-q8_0 (opened May 7, 2024 by leoHostProject)
#4224 [feature request] Do embeddings support batching? (opened May 7, 2024 by yuanjie-ai)
#4221 [model request] DeepSeek releases the world's strongest open-source second-generation MoE model: DeepSeek-V2! (opened May 7, 2024 by tqangxl)
#4220 [feature request] Modify template, system, or params on webpage (opened May 7, 2024 by taozhiyuai)
#4212 [bug, gpu, nvidia] Long-context models don't split memory correctly, leading to OOM errors (opened May 6, 2024 by kungfu-eric)
#4211 [model request] Support for https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual (opened May 6, 2024 by plitc)