
Make disk size trim algorithm more efficient #216

Open
Quartme opened this issue Mar 19, 2018 · 0 comments

Comments

Quartme commented Mar 19, 2018

I'm using a PINDiskCache to store roughly 1000 images, which is enough to reach the specified disk cache limit.

After the cache reaches its limit, nearly every call to -setObjectAsync:forKey:completion: that adds an item to the cache triggers a -trimDiskToSizeByDate: call. Each such call sorts the _metadata array, so the code ends up sorting the same array repeatedly.

We could mitigate the cost of this repeated sorting either by throttling the trim to at most once every 60s (this could be configurable), or by using a data structure such as a priority queue so eviction order is maintained incrementally. I could submit a pull request for configurable throttling of the trim if needed. Thanks.
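To illustrate, here is a minimal sketch (in Python, not PINDiskCache's actual Objective-C implementation) that combines both ideas: entries live in a min-heap keyed by date so trimming never needs a full sort, and trims are throttled to a configurable interval. All class and method names here are hypothetical, chosen only to mirror the behavior described above.

```python
import heapq
import time

class ThrottledTrimCache:
    """Toy model of a size-limited disk cache that trims by oldest date,
    at most once per `trim_interval` seconds (names are illustrative only)."""

    def __init__(self, byte_limit, trim_interval=60.0, clock=time.monotonic):
        self.byte_limit = byte_limit
        self.trim_interval = trim_interval
        self.clock = clock            # injectable clock for testing
        self.total_bytes = 0
        self.entries = {}             # key -> (date, size)
        self.heap = []                # (date, key) min-heap, oldest first
        self.last_trim = float("-inf")
        self.trim_count = 0           # how many real trims actually ran

    def set_object(self, key, size, date=None):
        date = self.clock() if date is None else date
        old = self.entries.get(key)
        if old:
            self.total_bytes -= old[1]
        self.entries[key] = (date, size)
        self.total_bytes += size
        # O(log n) push instead of re-sorting the whole metadata on trim.
        heapq.heappush(self.heap, (date, key))
        self._maybe_trim()

    def _maybe_trim(self):
        if self.total_bytes <= self.byte_limit:
            return
        now = self.clock()
        if now - self.last_trim < self.trim_interval:
            return                    # throttled: defer to a later trim
        self.last_trim = now
        self.trim_count += 1
        # Evict oldest entries until we're back under the limit.
        while self.total_bytes > self.byte_limit and self.heap:
            date, key = heapq.heappop(self.heap)
            current = self.entries.get(key)
            if current is None or current[0] != date:
                continue              # stale heap record for a re-set key
            del self.entries[key]
            self.total_bytes -= current[1]
```

Note the trade-off the throttle introduces: between trims the cache can overshoot its byte limit, so a real implementation would want the interval configurable (or zero to preserve today's behavior).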
