
Adversarial Fairness and Pytorch crashing #1227

Open
nyangnicole opened this issue Mar 17, 2023 · 1 comment
Labels
Waiting for OP's Response (waiting for the original poster's response; will close if that doesn't happen for a while)

Comments

@nyangnicole

Describe the bug

When I run the AdversarialFairness mitigator with a PyTorch backend, the kernel dies whenever I use 7 or more covariates.
It does not crash when I use a TensorFlow backend.

Steps/Code to Reproduce

Sample code to reproduce the problem
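No reproduction script was attached; below is a minimal sketch of the kind of call the description suggests, assuming fairlearn 0.8.0's `AdversarialFairnessClassifier` with `backend="torch"`. The synthetic data (8 covariates), layer sizes, and training parameters are illustrative placeholders, not the reporter's actual code.

```python
import numpy as np
from fairlearn.adversarial import AdversarialFairnessClassifier

# Illustrative synthetic data: 8 covariates, i.e. in the 7-or-more range
# where the crash is reported. Not the reporter's actual dataset.
rng = np.random.default_rng(0)
n_samples, n_features = 500, 8
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)          # binary target
sensitive = rng.integers(0, 2, size=n_samples)  # binary sensitive feature

mitigator = AdversarialFairnessClassifier(
    backend="torch",                     # reportedly crashes; "tensorflow" works
    predictor_model=[50, "leaky_relu"],  # placeholder architecture
    adversary_model=[3, "leaky_relu"],
    batch_size=32,
    epochs=5,
    random_state=123,
)
mitigator.fit(X, y, sensitive_features=sensitive)
```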

Expected Results

No error is thrown.

Actual Results

Kernel dies.

Versions

OS: Mac
Browser: Safari
Python version: 3.9.13
Fairlearn version: 0.8.0
PyTorch version: 2.0.0
Torchvision version: 0.15.1

System:
python: 3.9.13 (main, Aug 25 2022, 18:29:29) [Clang 12.0.0 ]
executable: /Users/nicoleyang/opt/anaconda3/bin/python
machine: macOS-10.16-x86_64-i386-64bit

Python dependencies:
Cython: 0.29.32
matplotlib: 3.5.2
numpy: 1.21.5
pandas: 1.4.4
pip: 22.2.2
scipy: 1.9.1
setuptools: 63.4.1
sklearn: 1.0.2
tempeh: None

@romanlutz
Member

Can you provide a minimal example that we can use to reproduce the issue?

@adrinjalali adrinjalali added the Waiting for OP's Response Waiting for original poster's response, and will close if that doesn't happen for a while. label Mar 21, 2023