Conflict-Averse Gradient Descent for Multitask Learning (CAGrad)

This repo contains the source code for CAGrad, which has been accepted to NeurIPS 2021.

Update 2023/11/09: We have released FAMO, an improved multitask/multiobjective optimizer that balances different objectives without computing all task gradients at every step. The main ideas are: 1) keep all task objectives decreasing at roughly equal rates, and 2) amortize the computation over time so that, in the multi-objective setting, not all task gradients need to be computed at each iteration.

Update 2021/11/22: Thanks to @lushleaf for finding that replacing the last backward(retain_graph=True) with backward() saves a lot of GPU memory.

Toy Optimization Problem

[Animation: CAGrad on the toy optimization problem]

To run the toy example:

python toy.py

Image-to-Image Prediction

The supervised multitask learning experiments are conducted on the NYU-v2 and CityScapes datasets. We follow the setup from MTAN. The datasets can be downloaded from NYU-v2 and CityScapes. After downloading, follow the respective run.sh script in each folder; in particular, set the dataroot variable to the path of the downloaded dataset.

Multitask Reinforcement Learning (MTRL)

The MTRL experiments are conducted on Metaworld benchmarks. In particular, we follow the mtrl codebase and the experiment setup in this paper.

  1. Install mtrl according to the instructions.

  2. Git clone Metaworld and check out commit d9a75c451a15b0ba39d8b7a8b6d18d883b8655d8 (Feb 26, 2021). Install Metaworld accordingly.

  3. Copy the mtrl_files folder under mtrl of this repo into the cloned mtrl repo. Then run

cd PATH_TO_MTRL/mtrl_files/ && chmod +x mv.sh && ./mv.sh

Then follow the run.sh script to run experiments (we are still verifying the results, but the code should be runnable).

Solving the Dual Optimization in Practice

[Image: the cagrad dual-optimization code]

Citations

If you find our work interesting or the repo useful, please consider citing the following papers:

@article{liu2021conflict,
  title={Conflict-Averse Gradient Descent for Multi-task Learning},
  author={Liu, Bo and Liu, Xingchao and Jin, Xiaojie and Stone, Peter and Liu, Qiang},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}

@misc{liu2023famo,
      title={FAMO: Fast Adaptive Multitask Optimization}, 
      author={Bo Liu and Yihao Feng and Peter Stone and Qiang Liu},
      year={2023},
      eprint={2306.03792},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
