
Question about data shape difference between quantization and forward #674

Open
sleepwalker2017 opened this issue May 17, 2024 · 0 comments


@sleepwalker2017

I ran AutoGPTQ on llama-7b. During model quantization, I see the following shapes for one layer:

```
scale: [4096, 32]
zero:  [4096, 32]
g_idx: [4096]
```

From this I assume GPTQ quantizes in groups, with 128 columns per group. Is that right? To make my mental model concrete, here is the shape relationship I expected (see the sketch below).
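This is my own minimal sketch, not AutoGPTQ's actual code, assuming a [4096, 4096] llama-7b projection weight and `group_size = 128`; it only illustrates why per-group scales and zeros would come out as [4096, 32] and `g_idx` as [4096]:

```python
import torch

# Toy illustration (not AutoGPTQ code): assumed [4096, 4096] weight, group_size = 128.
out_features, in_features, group_size = 4096, 4096, 128
num_groups = in_features // group_size            # 4096 // 128 = 32

W = torch.randn(out_features, in_features)

# One scale / zero-point per (output row, input-column group) -> [4096, 32]
W_grouped = W.view(out_features, num_groups, group_size)
scales = W_grouped.abs().amax(dim=-1) / 7.0       # toy 4-bit-style scale
zeros = torch.zeros_like(scales)

# g_idx maps each input column to its group index -> [4096]
g_idx = torch.arange(in_features) // group_size

print(scales.shape, zeros.shape, g_idx.shape)
# torch.Size([4096, 32]) torch.Size([4096, 32]) torch.Size([4096])
```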

But at inference time, I find the shapes have changed:

```
weight: [32, 128, 4096]  int8
zeros:  [32, 1, 4096]    int8
scales: [32, 1, 4096]    fp16
```

Why is that? Why are the zeros and scales transposed?

I'm very confused about this.

How is the weight partitioned into groups: multiple rows per group, or multiple columns? My own guess at the layout change is sketched below.
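This is just my guess at the rearrangement (not the actual inference kernel code): transposing the per-group tensors to a group-major layout so that each group's scale/zero can broadcast over its 128 columns. The shapes and dtypes below are the ones I observed; the code itself is hypothetical:

```python
import torch

# Assumed sizes from the quantization step above.
out_features, in_features, group_size = 4096, 4096, 128
num_groups = in_features // group_size            # 32

scales = torch.randn(out_features, num_groups)    # [4096, 32] as saved
zeros = torch.zeros(out_features, num_groups)     # [4096, 32] as saved
q_weight = torch.randint(0, 16, (in_features, out_features), dtype=torch.int8)

# Group-major layout so each group's parameters broadcast over its columns.
scales_inf = scales.t().unsqueeze(1).half()                           # [32, 1, 4096] fp16
zeros_inf = zeros.t().unsqueeze(1).to(torch.int8)                     # [32, 1, 4096] int8
weight_inf = q_weight.reshape(num_groups, group_size, out_features)   # [32, 128, 4096] int8

print(weight_inf.shape, zeros_inf.shape, scales_inf.shape)
```

If that guess is right, the "transpose" is just a reorganization from (row, group) indexing at quantization time to group-major indexing for the inference kernel. Is that what actually happens?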
