
accelerate test does not work when using 3 GPUs #2769

Open

Trojaner opened this issue May 13, 2024 · 0 comments

System Info

accelerate 0.30.1

Kernel: linux-5.19.0-1010-nvidia-lowlatency-x86_64-with-glibc2.35
OS: Ubuntu 22.04

Hardware
AMD Ryzen 7 3700x
1x NVIDIA RTX 3060 12GB
1x NVIDIA RTX 3090 24GB
1x NVIDIA RTX 4080 16GB
48GB DDR4 RAM

Software
Python 3.10.1
PyTorch 2.2.0+cu121
Numpy 1.26.4

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • One of the scripts in the examples/ folder of Accelerate or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
  • My own task or dataset (give details below)

Reproduction
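
Presumably, on a machine with 3 GPUs:

    accelerate config   # answer "3" when asked for the number of processes
    accelerate test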

Expected behavior

There is no way to run accelerate test without setting num_processes to 1, 2, 4, etc., since split_batches is enabled and batch_size is hardcoded to 8: a batch of 8 cannot be split evenly across 3 processes.
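
A rough sketch of the failing constraint (not Accelerate's exact code): in split_batches mode the fixed batch of 8 must be a round multiple of the process count, which 3 is not.

    batch_size = 8  # hardcoded in the test script
    for num_processes in (1, 2, 3, 4):
        if batch_size % num_processes == 0:
            print(f"{num_processes} processes: ok, {batch_size // num_processes} samples per process")
        else:
            print(f"{num_processes} processes: fails, {batch_size} not divisible by {num_processes}")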

It is expected that accelerate test works regardless of the GPU / process count.
