
How slow is slow? #49

Open
flatsiedatsie opened this issue Mar 9, 2024 · 0 comments
Comments

@flatsiedatsie

I downloaded the GitHub repo and placed it on a localhost server.

I opened the page and clicked the "Load GPT2 117Mb" button.

I've been waiting for a few minutes now, with the output stuck on `Loading token embeddings...`. Is that normal behaviour?

```
Loading model from folder: gpt2
Loading params...
Warning: Buffer size calc result exceeds GPU limit, are you using this value for a tensor size? 50257 768 1 154389504
bufferSize @ model.js:510
loadParameters @ model.js:298
await in loadParameters (async)
loadModel @ model.js:276
initialize @ model.js:32
await in initialize (async)
loadModel @ gpt/:105
onclick @ gpt/:23
Params: {n_layer: 12, n_head: 12, n_embd: 768, vocab_size: 50257, n_ctx: 1024, …}
Loading token embeddings...
```
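For what it's worth, the numbers in the warning look consistent with the byte size of the GPT-2 token-embedding tensor (vocab_size × n_embd as f32), which is larger than WebGPU's default `maxStorageBufferBindingSize` of 128 MiB. A minimal sketch of that arithmetic (my own illustration, not code from this repo):

```javascript
// The params logged above: vocab_size = 50257, n_embd = 768.
const vocabSize = 50257;
const nEmbd = 768;
const bytesPerFloat = 4; // f32 weights

// Byte size of the token-embedding tensor.
const embeddingBytes = vocabSize * nEmbd * bytesPerFloat;
console.log(embeddingBytes); // 154389504 — the last number in the warning

// WebGPU's default maxStorageBufferBindingSize is 128 MiB (134217728 bytes),
// so this single tensor exceeds the default binding limit.
const defaultMaxStorageBinding = 128 * 1024 * 1024;
console.log(embeddingBytes > defaultMaxStorageBinding); // true

// In the browser, a larger limit can be requested at device creation
// (hypothetical snippet; I haven't checked how this repo sets up its device):
// const device = await adapter.requestDevice({
//   requiredLimits: { maxStorageBufferBindingSize: embeddingBytes },
// });
```

If model.js checks the tensor size against the adapter's reported limits, requesting a higher `maxStorageBufferBindingSize` (where the hardware supports it) might be what's needed here, but that's a guess on my part.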
  • Apple M1 Pro
  • Brave 1.61 - "WebGPU is supported in your browser!"