Tracking issue for Apple M-series GPU support #792

danieldk opened this issue Oct 18, 2022

This issue documents the current state of Apple M-series (M1/M2) GPU support in Thinc and spacy-transformers. Subscribe to this issue if you'd like to receive status updates on support for M-series GPUs.

Thinc layers

Apple M-series (M1/M2) GPUs are currently not used by Thinc models. The matrix multiplication (AMX) units of the M-series CPUs will be used when thinc-apple-ops is installed.
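
As a quick check that the AMX-backed ops are active, here is a minimal sketch. It assumes Thinc and thinc-apple-ops are installed (e.g. `pip install thinc-apple-ops`); the exact backend class name reported may differ between versions.

```python
# Minimal sketch: inspect the active Thinc ops backend and run a matrix
# multiplication through it. With thinc-apple-ops installed, the backend
# is expected to be Apple's Accelerate-backed ops, which use the AMX units.
import numpy
from thinc.api import get_current_ops

ops = get_current_ops()
print("Active ops backend:", type(ops).__name__)

# Matrix multiplication dispatched through the current ops backend.
a = ops.asarray2f(numpy.random.rand(256, 256).astype("float32"))
b = ops.asarray2f(numpy.random.rand(256, 256).astype("float32"))
c = ops.gemm(a, b)
print("Result shape:", c.shape)
```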

PyTorch layers

M-series GPUs are supported by the PyTorch wrapper in Thinc. Calling require_gpu or prefer_gpu changes the default PyTorch device to mps. As a result, Metal Performance Shaders will be used by PyTorch to run compute kernels on the GPU.
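
A minimal sketch of enabling the GPU from Thinc follows; it assumes a PyTorch build with MPS support (1.12 or later) and a Thinc version whose PyTorch wrapper is MPS-aware.

```python
# Minimal sketch: check that this PyTorch build can use Metal Performance
# Shaders, then ask Thinc to use the GPU if one is available.
import torch
from thinc.api import prefer_gpu

print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

# On Apple silicon this switches the default PyTorch device to "mps",
# so PyTorch-wrapped Thinc layers run their kernels on the GPU.
# Use require_gpu() instead to fail loudly when no GPU can be used.
activated = prefer_gpu()
print("GPU activated:", activated)
```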

Keep in mind that Metal Performance Shaders support is still fairly limited, especially in PyTorch 1.12.x; as a consequence, your models may not work correctly.

spacy-transformers

spacy-transformers 1.1.8 adds experimental support for M1 GPUs through Metal Performance Shaders. However, this support comes with some limitations:

  • Only inference is supported.
  • PyTorch 1.13.x is required. Even though PyTorch 1.12.x adds support for MPS, it is too incomplete to use with spacy-transformers.
  • We have not tested MPS support on Intel Macs with Metal-capable GPUs.

We have tested the following models with MPS, PyTorch 1.13.0 and spacy-transformers 1.1.8. The models for which "Requires fallback" is ticked only work with the PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable set.

| Model            | Supported | Requires fallback |
| ---------------- | --------- | ----------------- |
| ca_core_news_trf |           |                   |
| da_core_news_trf |           |                   |
| de_dep_news_trf  |           |                   |
| en_core_web_trf  |           |                   |
| es_dep_news_trf  |           |                   |
| fr_dep_news_trf  |           |                   |
| ja_core_news_trf |           |                   |
| uk_core_news_trf |           |                   |
| zh_core_web_trf  |           |                   |
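
To illustrate the fallback variable, here is a minimal inference sketch. It assumes spaCy, spacy-transformers 1.1.8, PyTorch 1.13.x, and the en_core_web_trf pipeline are installed; the example text is arbitrary.

```python
# Minimal sketch: transformer inference on the MPS device.
# The fallback variable must be set before torch is imported; it is only
# needed for the models ticked as "Requires fallback" above.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import spacy

spacy.require_gpu()  # selects the MPS device on Apple silicon
nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple designs the M1 and M2 chips in Cupertino.")
print([(ent.text, ent.label_) for ent in doc.ents])
```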

You can find more information about using spacy-transformers with M-series GPUs in the GPU Support Troubleshooting FAQ.

Benchmarks

More background about the MPS support, including benchmarks, can be found in our blog post Fast transformer inference with Metal Performance Shaders.
