
Another LRU crate #93

Open
marmeladema opened this issue Oct 29, 2020 · 4 comments

@marmeladema

Hello @jeromefroe and sorry to bother you!

I have been looking for an LRU cache with O(1) operations and found your crate but decided as an exercise to implement my own: https://github.com/marmeladema/clru-rs

I wonder if you could take a quick look and tell me what you think of it?

As outlined in the README, the main differences are:

  • Smaller amount of unsafe code.
  • API closer to the standard HashMap collection, which allows lookups with a Borrow-ed version of the key.
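The Borrow-based lookup mentioned above is the pattern the standard HashMap uses: because String implements Borrow&lt;str&gt;, an owned-key map can be queried with a plain &amp;str. A minimal illustration using std (not clru's own API):

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<String, u32> = HashMap::new();
    map.insert("alpha".to_string(), 1);

    // Because String: Borrow<str>, lookups can take a &str directly,
    // with no need to allocate a temporary String just to query.
    assert_eq!(map.get("alpha"), Some(&1));
    assert!(!map.contains_key("beta"));
    println!("ok");
}
```

A cache exposing the same bound (`K: Borrow<Q>`) lets callers avoid cloning or re-allocating keys on every lookup.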

Technically, it might address some issues opened in this crate, while certainly introducing a lot of new ones :-)

On the implementation side, it relies on a regular HashMap and a linked-list stored in a vector.
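The design described above (a HashMap for O(1) lookup plus a linked list whose nodes live in a vector, linked by indices rather than pointers, so no unsafe is needed) can be sketched roughly like this. This is a simplified illustration, not clru's actual code; all names here (VecLru, Node, etc.) are made up for the example, and it assumes a capacity of at least one:

```rust
use std::collections::HashMap;

struct Node<K, V> {
    key: K,
    value: V,
    prev: Option<usize>, // index of the next-more-recent node
    next: Option<usize>, // index of the next-less-recent node
}

struct VecLru<K, V> {
    map: HashMap<K, usize>, // key -> index into `nodes`
    nodes: Vec<Node<K, V>>, // linked list storage; indices replace pointers
    head: Option<usize>,    // most recently used
    tail: Option<usize>,    // least recently used
    cap: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> VecLru<K, V> {
    fn new(cap: usize) -> Self {
        VecLru { map: HashMap::new(), nodes: Vec::new(), head: None, tail: None, cap }
    }

    // Detach node `i` from the list, patching its neighbors' links.
    fn unlink(&mut self, i: usize) {
        let (prev, next) = (self.nodes[i].prev, self.nodes[i].next);
        match prev { Some(p) => self.nodes[p].next = next, None => self.head = next }
        match next { Some(n) => self.nodes[n].prev = prev, None => self.tail = prev }
    }

    // Re-attach node `i` at the front (most recently used position).
    fn push_front(&mut self, i: usize) {
        self.nodes[i].prev = None;
        self.nodes[i].next = self.head;
        if let Some(h) = self.head { self.nodes[h].prev = Some(i); }
        self.head = Some(i);
        if self.tail.is_none() { self.tail = Some(i); }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        let i = *self.map.get(key)?;
        self.unlink(i);       // touching an entry moves it to the front
        self.push_front(i);
        Some(&self.nodes[i].value)
    }

    fn put(&mut self, key: K, value: V) {
        if let Some(&i) = self.map.get(&key) {
            self.nodes[i].value = value;
            self.unlink(i);
            self.push_front(i);
            return;
        }
        let i = if self.nodes.len() < self.cap {
            self.nodes.push(Node { key: key.clone(), value, prev: None, next: None });
            self.nodes.len() - 1
        } else {
            // Evict the least recently used entry and reuse its slot.
            let t = self.tail.expect("cache is non-empty when full");
            self.unlink(t);
            self.map.remove(&self.nodes[t].key);
            self.nodes[t].key = key.clone();
            self.nodes[t].value = value;
            t
        };
        self.map.insert(key, i);
        self.push_front(i);
    }
}

fn main() {
    let mut lru = VecLru::new(2);
    lru.put("a", 1);
    lru.put("b", 2);
    assert_eq!(lru.get(&"a"), Some(&1)); // touches "a"
    lru.put("c", 3);                     // evicts "b", the LRU entry
    assert_eq!(lru.get(&"b"), None);
    assert_eq!(lru.get(&"a"), Some(&1));
    assert_eq!(lru.get(&"c"), Some(&3));
    println!("ok");
}
```

Because nodes are addressed by Vec index instead of raw pointers, the whole structure stays in safe Rust; the trade-off is that slots must be managed explicitly (here, eviction reuses the freed slot).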

I understand it's quite an unusual way to contact people; please forgive me.

Thank you in advance!

@jeromefroe
Owner

Hi @marmeladema! It's no bother at all!

I just took a look at your crate, and I think it's a great approach! I'd be happy to add a comment to the README linking to it and explaining the differences between the two. The way I see it, the two crates occupy slightly different positions on the spectrum of performance and usability. clru-rs offers an API closer to HashMap and less unsafe code at the cost of slightly more memory usage because the keys are stored in a vector instead of a linked list, and this crate makes the opposite trade-off. Does that sound about right?

@marmeladema
Author

Thank you for taking the time to look at the code! Really appreciate it!

> I'd be happy to add a comment to the README linking to it and explaining the differences between the two.

Wow that would be super nice 👍 Maybe it's a bit too soon though, I'd like some more people to look at it / use it.

> at the cost of slightly more memory usage because the keys are stored in a vector instead of a linked list

The keys are moved into an Rc and thus are only shallow-copied into the linked list. The overhead of this is similar to what you'd pay if the keys were boxed once. I am not sure there is such a difference in memory usage between the two crates? It would be nice to measure it, though! The only case I can think of where clru might be more memory hungry is a non-full cache, because it pre-allocates all nodes ahead of time, but then there is not much point in using an LRU cache that is never full.
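The Rc-shared key scheme described above can be demonstrated in a few lines: cloning an Rc only bumps a reference count, so one key allocation can back both the map entry and the recency list. A minimal sketch (a Vec stands in for the linked list here):

```rust
use std::collections::HashMap;
use std::rc::Rc;

fn main() {
    // One allocation of the key, shared between the map and the list:
    // Rc::clone only increments a reference count; the String itself
    // is never copied.
    let key: Rc<String> = Rc::new("user:42".to_string());
    let mut map: HashMap<Rc<String>, u32> = HashMap::new();
    let mut order: Vec<Rc<String>> = Vec::new(); // stand-in for the linked list

    map.insert(Rc::clone(&key), 1);
    order.push(Rc::clone(&key));

    assert_eq!(Rc::strong_count(&key), 3); // original + map + list
    // Both handles point at the very same heap allocation.
    assert!(Rc::ptr_eq(&order[0], map.keys().next().unwrap()));
    println!("ok");
}
```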

@jeromefroe
Owner

> Maybe it's a bit too soon though, I'd like some more people to look at it / use it.

Sounds good. We can circle back whenever you feel it's ready!

> The only thing that I can think of where clru might be more memory hungry would be for non-full cache, because it pre-allocates all nodes ahead of time but then there is not much point in using an LRU cache that is never full.

That was all I could think of too. And I agree, it certainly seems like the difference would be negligible.

@marmeladema
Author

By the way, I haven't tagged a new release yet, but clru no longer unconditionally pre-allocates all the memory.
There is still a constructor available to do so, though, for cases (like the ones I have) where it is known in advance that the cache is going to be full.
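The two allocation strategies being contrasted here map directly onto how a Vec of nodes can be sized. A small sketch of the trade-off (illustrative only, not clru's code):

```rust
fn main() {
    // Grow-on-demand: memory stays proportional to the entries
    // actually inserted, at the cost of occasional reallocation.
    let mut lazy: Vec<u64> = Vec::new();
    lazy.push(1);
    assert!(lazy.capacity() >= 1);

    // Reserve-ahead: pay for the full capacity up front. Useful when
    // the cache is known to fill, since no reallocation ever happens.
    let eager: Vec<u64> = Vec::with_capacity(1_000);
    assert_eq!(eager.len(), 0);
    assert!(eager.capacity() >= 1_000);
    println!("ok");
}
```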
