
paddle fleet distributed framework: error when setting different learning rates for different parameters #5738

Open
lwgkzl opened this issue Sep 14, 2023 · 0 comments
lwgkzl commented Sep 14, 2023

Paddle version: 2.2.1
Background: in a multi-task learning setup, a single backbone model has several heads, each learning a different task. I want the fc layer of each head to use its own learning rate, but configuring this raises an error (a minimal sketch of the setup follows the traceback below).
The error message:
Traceback (most recent call last):
  File "run_batch_fine_grained.py", line 466, in <module>
    train(args)
  File "run_batch_fine_grained.py", line 283, in train
    optimizer.minimize(loss)
  File "/py37/lib/python3.7/site-packages/paddle/distributed/fleet/base/fleet_base.py", line 1501, in minimize
    loss, startup_program, parameter_list, no_grad_set=no_grad_set)
  File "/py37/lib/python3.7/site-packages/paddle/distributed/fleet/meta_optimizers/meta_optimizer_base.py", line 95, in minimize
    loss, startup_program, parameter_list, no_grad_set)
  File "/py37/lib/python3.7/site-packages/paddle/distributed/fleet/meta_optimizers/sharding_optimizer.py", line 516, in minimize_impl
    self._apply_sharding_pass(params_grads)
  File "/py37/lib/python3.7/site-packages/paddle/distributed/fleet/meta_optimizers/sharding_optimizer.py", line 295, in _apply_sharding_pass
    self._split_program(main_block)
  File "/py37/lib/python3.7/site-packages/paddle/distributed/fleet/meta_optimizers/sharding_optimizer.py", line 746, in _split_program
    assert (int(op.attr('op_role')) != int(OpRole.Optimize))
AssertionError
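
For context, here is a minimal sketch of the kind of program that reaches this code path. The issue does not include a repro, so all names, shapes, and strategy settings below are hypothetical; the per-head learning rate is expressed through `paddle.ParamAttr(learning_rate=...)`, which multiplies the optimizer's base learning rate for that parameter, and sharding is enabled through `fleet.DistributedStrategy` so that `minimize()` goes through `sharding_optimizer.py`:

```python
# Hypothetical repro sketch for Paddle 2.2 static graph + fleet sharding.
# Layer names and sizes are made up; the original issue did not include code.
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()
fleet.init(is_collective=True)

main_prog, startup_prog = paddle.static.Program(), paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    x = paddle.static.data(name="x", shape=[None, 128], dtype="float32")
    label = paddle.static.data(name="label", shape=[None, 1], dtype="int64")

    # Shared backbone.
    feat = paddle.static.nn.fc(x, size=256, activation="relu")

    # Task head with its own LR multiplier: effective LR = base_lr * 10.
    logits = paddle.static.nn.fc(
        feat,
        size=10,
        weight_attr=paddle.ParamAttr(learning_rate=10.0),
        bias_attr=paddle.ParamAttr(learning_rate=10.0),
    )
    loss = paddle.nn.functional.cross_entropy(logits, label)

    strategy = fleet.DistributedStrategy()
    strategy.sharding = True  # routes minimize() through sharding_optimizer.py
    strategy.sharding_configs = {"segment_broadcast_MB": 32}

    opt = paddle.optimizer.Momentum(learning_rate=0.01, momentum=0.9)
    opt = fleet.distributed_optimizer(opt, strategy=strategy)
    opt.minimize(loss)  # AssertionError raised here in _split_program
```

Run under `python -m paddle.distributed.launch` on multiple GPUs. A possible mechanism, though unconfirmed: a per-parameter `learning_rate` multiplier makes the optimizer insert an extra `scale` op for that parameter's learning rate, and the assertion in `_split_program` suggests the sharding pass found such an `OpRole.Optimize` op inside a segment where it expected only forward/backward ops.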
