[match] add caching layer to significantly improve performance by up to 100x #21694
Conversation
Thanks @nekrich. Nice PR here. I don't mind the large PR. Will look into this during the week.
First pass. One major comment regarding the design, and several nitpicks.
The code is really clean, appreciated 👍.
This PR is awesome! I only did a partial review so far though 🙏 but the improvements you've observed are game-changing, thanks for this!!
I didn't have the time to go through the entire PR atm, but I have a few questions:
- Is the cache persistent (e.g. on disk)? If so, what's the persistence scope (e.g. is it valid throughout different fastlane action calls of a single lane, or across different fastlane lane calls)?
- What is the cache busting (invalidation) logic?
Again thanks for all the effort you put in this PR, I'm glad it solved the performance issues you had and excited for other people to be able to access this! 🙌
Co-authored-by: Roger Oba <rogerluan.oba@gmail.com>
🚀
🚀! Nice PR @nekrich. Thanks @milch @rogerluan :)
@nekrich I'm experiencing an issue with the error message "cached_profiles parameter must be a non-empty array of Spaceship::ConnectAPI::Profile" after upgrading to fastlane version 2.218.0. I had to revert to version 2.217.0 to resolve the issue.
@thanhcuong1990, I'll look into it in a couple of hours. Can you please tell me which auth type you use: keys, or login and password?
I used
The same issue. I'm using a *.p8 key.
@viktor-savchik-idf, @thanhcuong1990, can you please check whether the fix on nekrich:fix/sigh-cache-parameters-validation works? Gemfile:
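(The original Gemfile snippet is not preserved in this thread. A hypothetical sketch of what it likely pointed to, with the git URL assumed from the nekrich:fix/sigh-cache-parameters-validation fork notation:)

```ruby
# Hypothetical Gemfile sketch: point Bundler at the fork branch carrying the fix.
# The repository URL is an assumption based on the fork:branch notation above.
gem "fastlane", git: "https://github.com/nekrich/fastlane.git", branch: "fix/sigh-cache-parameters-validation"
```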
@nekrich yes, it works
@nekrich but when repeating the same command, there is another issue.
@viktor-savchik-idf, can you please share the options you use to run the match action? I'm interested in all force_* parameters, the readonly flag, and the profile type. Thanks.
@nekrich Here is the complete action, the lane:

lane :certificates_dev do
  api_key = app_store_connect_api_key_creator
  register_devices(devices_file: "./devices.txt", api_key: api_key)
  match(
    app_identifier: ["id.one", "id.two", "id.three"],
    api_key: api_key,
    type: "development",
    force_for_new_devices: true,
    template_name: "ApplePay In-App Provisioning Distribution",
  )
end
Hi,
Triggered by the following action:
EDIT:
@nekrich Confirmed, it works for me. |
Wow! I tried that right now with version 2.219.0, and for our 60+ apps this is a pure gamechanger 🎉 Thank you so much for this update 👍 Here are our benchmarks: matrix-codesigning.sh is the script that runs fastlane match for development/appstore for each of our 60+ apps.
…to 100x (fastlane#21694)

* [match] increase match speed with caching
* chore: skip new profile validation
* chore: don’t install a nonexistent profile
* chore: code readability improvements
* chore: improve cert & device difference
* chore: fix variable naming
* chore: remove redundant var init
* chore: check for unique profiles
* fix: typo
* chore: match portal bundle id fetcher expect only arrays
* fix: only uuids with one hyphen supported for silicon macs
* chore: add comment about cert type downcasing

Co-authored-by: Roger Oba <rogerluan.oba@gmail.com>

* chore: use standard syntax for multiline blocks
* chore: remove empty it
* chore: update cache returns
* chore: extra check for cached certs after reset

---------

Co-authored-by: Roger Oba <rogerluan.oba@gmail.com>
Checklist
- I've run bundle exec rspec from the root directory to see all new and existing tests pass
- I've run bundle exec rubocop -a to ensure the code style is valid
- I see the ci/circleci builds pass in the "All checks have passed" section of my PR (connect CircleCI to GitHub if not)

Hi.
Problem
We have 112 provisioning profiles, and running match not in readonly mode takes a lot of time for us.

I've made a Faraday middleware to count requests, fetch time, and the amount of transferred data. I saw many redundant fetches: profiles, certificates, bundle IDs, and devices were fetched for each app_identifier, and some of them several times.

tl;dr: The cached version of match installs all 112 profiles, with cert and device checks, as fast as the current version manages to check and install just one development profile with the same parameters.
Description
Introduce a cache and a fetcher for match. We need to save the state of profiles, certificates, bundle IDs, and devices. I've looked at what information changes while match executes and what stays static.

With this, I see that caching will not bring a lot of headaches, and there will be no cache inconsistency. Introducing the cache at the instance level will likely bring more good than bad.

Another good thing is that match is executed with strict parameters: type and platform. So we can write better filters for fetching certificates and profiles.
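As a rough illustration of the instance-level caching idea, here is a toy sketch; CachedProfileFetcher and fetch_profiles are hypothetical names for this example, not fastlane's real classes:

```ruby
# Hypothetical sketch of an instance-level cache keyed by the strict match
# parameters (type, platform). The cache lives only as long as the fetcher
# instance, i.e. a single match run, so there is no stale on-disk state.
class CachedProfileFetcher
  def initialize(client)
    @client = client # an object that knows how to hit the App Store Connect API
    @cache = {}      # (type, platform) => fetched profiles
  end

  # Profiles are fetched once per (type, platform) pair; repeated calls for
  # other app_identifiers within the same run return the cached result.
  def profiles(type:, platform:)
    @cache[[type, platform]] ||= @client.fetch_profiles(type: type, platform: platform)
  end
end
```

The same pattern applies to certificates, bundle IDs, and devices: one network fetch per run instead of one per app_identifier.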
Results (x100 😅)
Stats per url
Before:
After:
Solution
This PR.
To ease the review, I can split it into 5 smaller PRs, for example:
- Move device_count_different? and certificate_count_different? to a new file.

As for me, making separate PRs into the main branch makes almost no sense. I'm a fan of a feature branch, but I'm not a maintainer 😅. If you are ready for one biggie, please let me know.
Please tell me if my solution can be merged into fastlane, and which PR strategy you would like to use.
P.S. I want to thank @PaulTaykalo for his #21016 PR, which I included in this one. Paul, I hope you don't mind 😉.