
simple-chat: error during loading params onto WebGPU, GPUPipelineError: A valid external Instance reference no longer exists #332

Open
lebron8dong opened this issue Mar 13, 2024 · 3 comments

@lebron8dong

[screenshot of the reported error]

@CharlieFRuan
Contributor

Thanks for reporting the error. Could you share the console log? There may be more information there. Also, does this issue occur with all models? Could you perhaps try a smaller model like Gemma 2B?

@CharlieFRuan changed the title from "simple-chat: After system initialize, loading onto webgpu Generate error" to "simple-chat: error during loading params onto WebGPU, GPUPipelineError: A valid external Instance reference no longer exists" on Mar 13, 2024
@lebron8dong
Author

Llama console log:
[screenshot of the console log]

With WizardMath, no error occurred.

@CharlieFRuan
Contributor

This is a bit strange. WizardMath uses q4f16_1; could you also try Llama-2-7B-q4f16_1? Also, how much RAM do you have? My guess is that this is an OOM issue. Looking at the vram_required_MB field in https://github.com/mlc-ai/web-llm/blob/main/examples/simple-chat/src/gh-config.js, Llama-2-7B-q4f32_1 requires roughly 2GB more VRAM than Llama-2-7B-q4f16_1 (and hence than WizardMath).
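As a side note, one way to sidestep this kind of failure is to check the vram_required_MB field before selecting a model. Below is a minimal sketch assuming a model list shaped like the one in gh-config.js; the model IDs and VRAM numbers are placeholders for illustration, not the real config values.

```javascript
// Illustrative sketch: pick the largest model whose vram_required_MB fits an
// estimated VRAM budget, to avoid OOM-style failures while loading params.
// NOTE: entries below are placeholders, not the actual gh-config.js values.
const modelList = [
  { model_id: "Llama-2-7B-chat-hf-q4f32_1", vram_required_MB: 9100 },
  { model_id: "Llama-2-7B-chat-hf-q4f16_1", vram_required_MB: 6800 },
  { model_id: "gemma-2b-it-q4f16_1", vram_required_MB: 1500 },
];

function pickModel(availableVramMB) {
  // Keep only models that fit the budget, then take the largest one.
  const fitting = modelList
    .filter((m) => m.vram_required_MB <= availableVramMB)
    .sort((a, b) => b.vram_required_MB - a.vram_required_MB);
  return fitting.length > 0 ? fitting[0].model_id : null;
}
```

For example, a machine with roughly 7GB of usable VRAM would get the q4f16_1 variant rather than q4f32_1, which is exactly the swap suggested above.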
