Describe the bug:
As far as I know, most PTQ (post-training quantization) methods don't require retraining, but it seems that the current PTQ in NNI must run inside a training process. Retraining would cost too much time. Is there any way to run NNI's PTQ without training?
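For context on what the question is asking for: classic PTQ replaces the training loop with a single calibration pass that only observes activation ranges, then quantizes in place. Below is a minimal, framework-free sketch of that idea (min/max calibration followed by affine quantization); it is illustrative only and does not use NNI's API.

```python
def calibrate_minmax(samples):
    """Calibration pass: observe min/max over calibration batches.

    This is just a forward pass over data -- no gradients, no training.
    """
    lo = min(min(batch) for batch in samples)
    hi = max(max(batch) for batch in samples)
    return lo, hi


def quantize_dequantize(values, lo, hi, bits=8):
    """Affine-quantize to a signed int range, then dequantize back.

    Returns the values as they would appear after quantization, so the
    rounding error introduced by PTQ is directly visible.
    """
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid div-by-zero when hi == lo
    zero_point = round(qmin - lo / scale)
    quantized = [
        max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values
    ]
    return [(q - zero_point) * scale for q in quantized]


# Calibration data stands in for a few batches of real activations.
lo, hi = calibrate_minmax([[-1.0, 0.5], [0.25, 1.0]])
out = quantize_dequantize([0.5, -0.75, 1.0], lo, hi)
```

The point of the sketch is that the expensive part (calibration) is only inference over a small dataset, which is why a mandatory training loop around PTQ is surprising.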
Environment:
NNI version:
Training service (local|remote|pai|aml|etc):
Python version:
PyTorch version:
Cpu or cuda version:
Reproduce the problem
Code|Example:
How to reproduce: