From the Accelerate documentation:
"By using register_for_checkpointing(), you can register custom objects to be automatically stored or loaded from the two prior functions, so long as the object has a state_dict and a load_state_dict functionality. This could include objects such as a learning rate scheduler."
Is it possible to include scalar values (such as the epoch and step counters) in a checkpoint, just as in plain PyTorch? If so, the redundant lines currently needed to recover the epoch could be avoided.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Sure, we absolutely can. If you'd like to expand our checkpointing example here in Accelerate to implement that, we can look at upstreaming it further 🤗
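A minimal sketch of the idea being discussed: since `register_for_checkpointing()` accepts any object exposing `state_dict()` and `load_state_dict()`, scalars like the epoch and step can be wrapped in a small container class (the `TrainingState` name here is hypothetical, not part of Accelerate) and checkpointed alongside the model.

```python
class TrainingState:
    """Holds scalar training progress so Accelerate can checkpoint it.

    Any object with state_dict/load_state_dict can be passed to
    accelerator.register_for_checkpointing() and will then be saved and
    restored by accelerator.save_state() / accelerator.load_state().
    """

    def __init__(self, epoch=0, step=0):
        self.epoch = epoch
        self.step = step

    def state_dict(self):
        # Return the scalars to persist in the checkpoint.
        return {"epoch": self.epoch, "step": self.step}

    def load_state_dict(self, state):
        # Restore the scalars from a loaded checkpoint.
        self.epoch = state["epoch"]
        self.step = state["step"]
```

In a training script this would be used roughly as `accelerator.register_for_checkpointing(training_state)` before the loop, after which `accelerator.save_state(...)` and `accelerator.load_state(...)` handle the scalars automatically, removing the manual epoch-recovery lines the question mentions.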