Provide ARM64v8 wheel #21283

Closed
SebastianSchildt opened this issue Nov 25, 2019 · 39 comments

Comments

@SebastianSchildt

Is your feature request related to a problem? Please describe.

Cross-compiling the Python package for arm64 using the qemu/docker approach (installing with pip) takes forever, and compiling natively on an arm64 platform is not that fast either.
I guess the reason is that the official package https://pypi.org/project/grpcio/1.25.0/#files does not contain an arm64v8 wheel. (PiWheels does not help here, since its wheels are not 64-bit.)

Describe the solution you'd like

An arm64 wheel that gets used when I run pip install grpcio on an arm64 machine.

Describe alternatives you've considered

I searched for a python wheel repository containing prebuilt binaries.

Am I missing something? What would be needed to improve the user experience on arm64?

@lidizheng
Contributor

@SebastianSchildt You are right on both points: 1) QEMU compilation takes forever; 2) grpcio does not yet release armv8 binary wheels.

@gnossen
Contributor

gnossen commented Nov 25, 2019

@SebastianSchildt I recommend you check out https://www.piwheels.hostedpi.com/project/grpcio/. They have prebuilt wheels up to armv7, which should work for you given the backwards compatibility between ARM versions.

@SebastianSchildt
Author

Hi, thanks for the information. The system and Python are completely aarch64, and according to my research it is not possible to just "mix in" a single arm32v7 pip package (I don't think pip even has an option to specify the architecture?).

Is there any technical reason for not providing a v8 wheel? There is a lot of aarch64 hardware out there nowadays. I get that an aarch64 wheel would not only be CPU-specific but probably also distribution-specific, depending on what is linked, but that is the same for other architectures too. I saw that the piwheels author has published his sources, so I could probably run my own instance and provide a complete set of aarch64 wheels to the world (or has somebody already done that? I didn't find anything), but that seems a rather large workaround. I might just buy a faster build machine...

@gnossen
Contributor

gnossen commented Nov 27, 2019

@SebastianSchildt There's definitely a pip option to specify the platform. What would the blocker be for linking an arm32v7 .so into your application binary? Do you have any links from your research, or results from experiments you've run yourself?

Our build system for all of our ARM artifacts currently uses QEMU and Docker on x86 CI servers because we simply don't have native ARM infrastructure set up. As a result, it's by far our longest artifact build. Introducing an armv8 build isn't out of the question, but if there's a workaround that fulfills the use case just as well, I'd prefer it.
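For reference, a rough sketch of that pip option, in case anyone wants to experiment with pulling the armv7 piwheels build explicitly; the platform tag, destination directory, and piwheels index URL below are assumptions, not a tested recipe:

# Ask pip for a wheel targeting a specific platform tag instead of the host's.
# --platform requires --only-binary=:all:; adjust the tag and index as needed.
pip download grpcio \
    --only-binary=:all: \
    --platform linux_armv7l \
    --dest ./wheels \
    --extra-index-url https://www.piwheels.org/simple

Whether the resulting 32-bit extension actually loads on a given aarch64 system is a separate question, as later comments in this thread point out.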

@hrw

hrw commented Dec 4, 2019

There is an official manylinux2014 image for aarch64 nowadays. And Travis CI supports aarch64.

@gnossen AArch64 systems may not be able to run arm32 binaries: 32-bit support is optional in the architecture, and none of the current server-class CPUs provide it.
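For anyone who wants to build their own aarch64 wheel with that image in the meantime, a minimal sketch of what the build could look like; the image name is the official quay.io/pypa one, while the Python version path, grpcio version, and output directory are assumptions:

# Build and repair an aarch64 grpcio wheel inside the manylinux2014 image
# (run on an aarch64 host, or on x86 via QEMU/binfmt as discussed above).
docker run --rm -v "$PWD/wheelhouse:/out" quay.io/pypa/manylinux2014_aarch64 \
    bash -c '/opt/python/cp38-cp38/bin/pip wheel grpcio==1.25.0 -w /tmp/wheels &&
             auditwheel repair /tmp/wheels/grpcio-*.whl -w /out'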

@vielmetti

In particular, the Marvell ThunderX and ThunderX2 systems are 64-bit only, with no 32-bit support.

@jmcoreymv

jmcoreymv commented Mar 25, 2020

I have the same desire for an aarch64 wheel. I'm cross-compiling with QEMU for a Cortex-A53 target and it takes over an hour just for the grpcio package.

@stale

stale bot commented May 6, 2020

This issue/PR has been automatically marked as stale because it has not had any update (including commits, comments, labels, milestones, etc.) for 30 days. It will be closed automatically if no further update occurs within 7 days. Thank you for your contributions!

@vielmetti

Registering the continuing need for this; as noted, the cross-compile step is unduly burdensome.

@stale stale bot removed the disposition/stale label May 6, 2020
@SebastianSchildt
Author

Agree, still valid. In this day and age, an actively developed project should not ignore ARM. In the meantime we found some pain relief: if you have sufficiently big iron around, you can set the environment variable

GRPC_PYTHON_BUILD_EXT_COMPILER_JOBS

For example, if you are building in Docker, just add

ENV GRPC_PYTHON_BUILD_EXT_COMPILER_JOBS 8

to the Dockerfile. (This even seems to be effective when the build is done via pip; see the sketch below.)

However, I would not consider this a "solution". It would still be much easier if the effort were done only once, and not by everybody...
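A minimal sketch of the same workaround outside Docker, assuming a source build is triggered because no matching wheel exists; the value 8 is just an example and should roughly match the build machine's core count:

# Parallelize the C/C++ extension build that pip triggers for grpcio.
export GRPC_PYTHON_BUILD_EXT_COMPILER_JOBS=8
pip install grpcio   # falls back to building from the sdist when no wheel matches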

@stale

stale bot commented Aug 4, 2020

This issue/PR has been automatically marked as stale because it has not had any update (including commits, comments, labels, milestones, etc.) for 30 days. It will be closed automatically if no further update occurs within 7 days. Thank you for your contributions!

@hrw

hrw commented Aug 4, 2020

According to pyca/cryptography#5292 (comment), you may try the new aarch64 nodes on Travis CI (if you are on .com, not .org). Those are AWS Graviton2 based, so they should be faster.

@stale stale bot removed the disposition/stale label Aug 4, 2020
@geoffreyblake

In my anecdotal testing of building pyca/cryptography, the AWS Graviton2 machines are 2x faster than the old arm64-based machines on Travis. They should be roughly equivalent in build times to the x86 nodes.

@ashtacore

Is there any chance this will get moved up the priority list? I'm sure there are a lot of people using Raspberry Pi 4s who would like a simple solution to this problem.

@thediveo

Any chance of progress on this? At the moment I need to build arm64 Docker images requiring grpcio and protobuf on an RPi 4B/8G, and more often than not the image caching breaks, so this feels like my ZX81 with 16 KB in slow mode emulating a 4004 using drum storage...

@jtattermusch
Contributor

Python manylinux2014 aarch64 wheels are now provided by #25418.

@iemejia

iemejia commented Apr 7, 2021

Are there plans to release the wheels into PyPI soon? Any ETA?

@jtattermusch
Contributor

Are there plans to release the wheels into PyPI soon? Any ETA?

The release 1.37.0-rc1 now has aarch64 manylinux2014 wheels:
https://pypi.org/project/grpcio/1.37.0rc1/#files

If you could try them out and report back your findings, I would greatly appreciate it.
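A quick way to sanity-check the RC wheel on an aarch64 box could look like the following; the version pin is the RC mentioned above, and a fast install with no compiler output is a good sign the prebuilt wheel was used:

# Install the release candidate and confirm the aarch64 wheel works.
pip install --pre grpcio==1.37.0rc1
python -c "import grpc; print(grpc.__version__)"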

@iemejia

iemejia commented Apr 7, 2021

Will do; we want to upstream this into apache/beam, a heavy user of gRPC :)
Any plans for Mac arm64 wheels?

@jtattermusch
Contributor

Will do; we want to upstream this into apache/beam, a heavy user of gRPC :)

Thanks! I look forward to seeing the results.

Any plans for Mac arm64 wheels?

I think we tentatively want to do this, but we have no concrete plans for it right now. The main challenge is definitely the lack of hardware to test on (and note that OSX cannot be emulated :-( )

@jiridanek

jiridanek commented Apr 7, 2021

I'd like to note that Python 3.6 does not have a wheel there; that version is still supported (for the next 8 months) and it is the default Python version on RHEL 8.

The release 1.37.0-rc1 now has aarch64 manylinux2014 wheels:
https://pypi.org/project/grpcio/1.37.0rc1/#files

I put it into Travis CI for apache/qpid-dispatch. The gRPC test in the project runs some sort of "FriendshipService" demo, so it is not exactly comprehensive, but it does cover a bit of ground beyond just doing pip install.

Here's the pip install passing: https://travis-ci.com/github/apache/qpid-dispatch/jobs/496714833#L847

And here's the FriendShip service being tested, also passing: https://travis-ci.com/github/apache/qpid-dispatch/jobs/496714833#L5961

Looks good to me.

The main challenge is definitely the lack of hardware to test on (and note that OSX cannot be emulated :-( )

Legally no, but technically yes, and some publicly available projects have made the UX around it pretty good. Not that that's much help to gRPC.io or any above-board group.

@jtattermusch
Contributor

I'd like to note that Python 3.6 does not have a wheel there; that version is still supported (for the next 8 months) and it is the default Python version on RHEL 8.

Given that the aarch64 support is unofficial at this point, Python 3.6 will probably go out of support before people have time to start using it in any real Python app on aarch64.
That said, building a Python 3.6 wheel for aarch64 is quite possible; I'm just not convinced it's worth doing.

@hrw

hrw commented Apr 8, 2021

grpcio is used in OpenStack, and we have been running OpenStack on AArch64 for several years now.

Also, Python 3.6 is the default Python in RHEL/CentOS 8, which is used on AArch64 systems as well.

So please add a py3.6 wheel as well, especially since building grpcio with manylinux2014 allows building for 3.6-3.9 in one run.

@jtattermusch
Contributor

Attempting to add the 3.6 wheel here: #25928 (hopefully it will be easy).

@iemejia

iemejia commented Apr 9, 2021

Just a tiny validation: I ran the quickstart on AWS Graviton (aarch64) with the just-released 1.37 and it worked well.

@thediveo

thediveo commented Apr 16, 2021

I successfully tried `pip3 install grpcio` on a python:3.7-slim-buster image and it downloaded and installed the aarch64 wheel. Is there any chance of Alpine/musl support? I tried, but it always downloads grpcio.tar.gz and then slowly burns through building the wheel.
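Until a musl wheel exists, a rough sketch of a from-source install on Alpine, combining the compiler-jobs workaround mentioned earlier in this thread; the apk package list is an assumption and may need adjusting for your base image:

# Install assumed build prerequisites, then build grpcio from the sdist.
apk add --no-cache build-base linux-headers python3-dev py3-pip
GRPC_PYTHON_BUILD_EXT_COMPILER_JOBS=8 pip3 install grpcio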

@hrw

hrw commented Apr 16, 2021

@thediveo pure curiosity: how many projects provide wheels for musl based systems?

@jiridanek

jiridanek commented Apr 16, 2021

@thediveo pure curiosity: how many projects provide wheels for musl based systems?

@hrw gRPC is different, I'd say. There is native code in the Python package, building it takes ages, and the project is a core infrastructure component. I can quite readily imagine wanting to run Alpine containers on my AArch64 machines. I'd hope for a distro package, though, as my best bet.

@thediveo

thediveo commented Apr 16, 2021

I don't know, but most of the other pip packages I need are architecture-"ignorant". It's just grpcio that is literally the big bump in the road (in the python?) for me. We're using Alpine because it is otherwise much, much slimmer than the self-styled Debian slim-buster; there's a difference of about 100 MB IIRC. And for the containerd package I need on top of grpcio, I made sure it is architecture-neutral and a slim wheel.

So I would hope that any optimization spent on grpcio for Alpine would be well spent for many ARM64 users beyond just the RPi, including Apple and lots of "larger" embedded devices with 64-bit ARM.

@gnossen
Contributor

gnossen commented Apr 16, 2021

@thediveo Previous response to that question here. Long story short, unless there's been movement in the ecosystem since we last evaluated it, there just isn't first-class support for musl libc in the Python packaging ecosystem the way there is for glibc. We're blocked on PyPA creating a good story for that.

@vinnnyr

vinnnyr commented Sep 9, 2021

Has there been some sort of pause on building these wheels? It seems like the last arm wheels were published with version 1.38.0

@jtattermusch
Contributor

Has there been some sort of pause on building these wheels? It seems like the last arm wheels were published with version 1.38.0

Not really, we still provide aarch64 wheels. The only thing that changed is that for the 1.39.0 and 1.40.0 releases, the wheels are now tagged "manylinux_2_24"; this change was necessary to fix a bug, see #26430.

https://pypi.org/project/grpcio/1.39.0/#files

In the future we will likely provide manylinux_2_17 / manylinux2014 wheels for aarch64 again, since it seems we've been able to fix the problem without requiring a workaround.
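If pip still falls back to the sdist on some system, it may simply be too old to accept the newer manylinux_2_24 tag; a quick way to check which tags a given pip accepts, assuming a reasonably recent pip:

# List the platform tags this pip/interpreter accepts; look for
# manylinux_2_24_aarch64 (or manylinux2014_aarch64) in the output.
pip debug --verbose | grep -i manylinux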
