Motivation

I have a service that integrates localized resource files with repositories on GHE (GitHub Enterprise). I have a monorepo case where the `octokit-write` limit becomes onerous, since I need to write about ~200 files per locale. At the current limits this will take roughly a second per file, which is problematic since I'll be committing back 5 locales × ~200 files.

Ask

I'd like to expose some way to configure the bottlenecks created in `createGroups`. For example:
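A hypothetical sketch of what such a configuration surface might look like — the per-group timing knobs below are invented for illustration and are not part of the throttling plugin's current API; `onRateLimit`/`onSecondaryRateLimit` are its real hooks, and `MyOctokit` stands in for `Octokit.plugin(throttling)`:

```javascript
// HYPOTHETICAL: the `write`/`search` timing fields are not part of the
// @octokit/plugin-throttling API today; they sketch the requested feature.
const octokit = new MyOctokit({
  auth: process.env.GITHUB_TOKEN,
  throttle: {
    // These two hooks are the plugin's real extension points:
    onRateLimit: (retryAfter, options) => true, // retry when rate limited
    onSecondaryRateLimit: (retryAfter, options) => false,
    // Proposed knobs (invented): per-group Bottleneck timings in ms
    write: { minTime: 250 }, // allow ~4 mutating requests per second
    search: { minTime: 2000 },
  },
});
```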
@octokit/maintainers I wouldn't mind exposing the detailed throttle settings. I'd probably make them independent of Bottleneck, as we might replace this implementation detail in the future.
However, I think doing more than 1 mutating request per second will trigger secondary rate limits very quickly. I'm not sure whether the thresholds for secondary rate limits can be configured, or whether an actor can be exempted.
As an alternative, and to try it out before we implement any changes, you can create multiple Octokit instances and rotate through them, or disable throttling altogether and implement your own throttling for this particular use case.
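The instance-rotation idea can be sketched with a small round-robin helper. The helper and the placeholder client names here are illustrative; real code would hand it independently authenticated `new Octokit(...)` instances, each with its own throttle budget:

```javascript
// Minimal round-robin pool: each call hands back the next client in turn,
// spreading writes across clients so no single one exceeds its throttle.
function makeRoundRobin(clients) {
  let i = 0;
  return () => clients[i++ % clients.length];
}

// Illustrative usage with placeholder strings instead of Octokit instances:
const next = makeRoundRobin(["octokit-a", "octokit-b", "octokit-c"]);
next(); // "octokit-a" — subsequent calls cycle b, c, a, b, ...
```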
Thanks for the quick response. Yeah, happy to help make this more generic.
In the interim, I've disabled this plugin. I'm working on GitHub Enterprise, where my service account technically has generous rate limits. I've tried tweaking the innate limits here while running my service locally.