Optimizing for concurrency #21
This is a great question! Admittedly, when I first started working on this project, I intended it to be a learning experience, so I didn't focus on performance very much. It would definitely be interesting to go back and revisit different parts of the cache now, though. Of course, as the saying goes, "If you can't measure it, you can't improve it." So I'll probably start by writing some benchmarks which can provide a baseline against which any changes can be compared.
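A baseline could start from something as simple as a timing loop. This is a minimal sketch using only the standard library, with a plain HashMap standing in for the cache workload; a real baseline would target the lru crate's get/put and use a benchmarking harness such as criterion to control for variance.

```rust
use std::collections::HashMap;
use std::time::Instant;

// Toy stand-in workload: time n inserts into a HashMap and return
// the elapsed nanoseconds. A real benchmark would exercise the lru
// crate's get/put paths and repeat runs to reduce noise.
fn bench_inserts(n: u64) -> u128 {
    let mut map = HashMap::new();
    let start = Instant::now();
    for i in 0..n {
        map.insert(i, i);
    }
    start.elapsed().as_nanos()
}

fn main() {
    let nanos = bench_inserts(100_000);
    println!("100k inserts took {} ns", nanos);
}
```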
👍. I used xxhash, which gives slightly better performance.
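Swapping the hash function is possible because Rust's HashMap is generic over a BuildHasher. The sketch below plugs in a minimal FNV-1a hasher purely as a stand-in for illustration; with the twox-hash crate you would plug in its XxHash64 type the same way via BuildHasherDefault.

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Minimal FNV-1a hasher as a stand-in for xxhash. In practice you
// would substitute twox_hash::XxHash64 here in exactly the same way.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf2_9ce4_8422_2325) // FNV-1a offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= u64::from(b);
            self.0 = self.0.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
        }
    }
}

fn main() {
    // The map's API is unchanged; only the hash function differs.
    let mut map: HashMap<&str, u32, BuildHasherDefault<Fnv1a>> = HashMap::default();
    map.insert("answer", 42);
    assert_eq!(map.get("answer"), Some(&42));
    println!("lookup ok");
}
```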
RocksDB's shared cache, which is sharded into multiple LRU caches, is faster.
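The sharding idea can be sketched with the standard library alone: hash the key to pick a shard, so threads touching different shards never contend on the same lock. The names ShardedCache, put, and get below are illustrative; each shard here is a plain HashMap under a Mutex, whereas the RocksDB design puts an LRU cache in each shard.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

// Sketch of a sharded cache: one Mutex per shard instead of one
// global lock, so lookups on different shards proceed in parallel.
struct ShardedCache<K, V> {
    shards: Vec<Mutex<HashMap<K, V>>>,
}

impl<K: Hash + Eq, V: Clone> ShardedCache<K, V> {
    fn new(num_shards: usize) -> Self {
        let shards = (0..num_shards).map(|_| Mutex::new(HashMap::new())).collect();
        ShardedCache { shards }
    }

    // Hash the key to choose which shard (and which lock) to use.
    fn shard_for(&self, key: &K) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.shards.len()
    }

    fn put(&self, key: K, value: V) {
        let idx = self.shard_for(&key);
        self.shards[idx].lock().unwrap().insert(key, value);
    }

    fn get(&self, key: &K) -> Option<V> {
        let idx = self.shard_for(key);
        self.shards[idx].lock().unwrap().get(key).cloned()
    }
}

fn main() {
    let cache = ShardedCache::new(8);
    cache.put("a", 1);
    assert_eq!(cache.get(&"a"), Some(1));
    assert_eq!(cache.get(&"b"), None);
    println!("sharded cache ok");
}
```

Note that put and get take &self rather than &mut self, since interior mutability lives inside each shard's Mutex; that is what lets multiple threads share the cache behind an Arc.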
Great, thanks for the references! Hopefully benchmarking and profiling will reveal some quick wins; anything more extensive might be better left for a new crate which implements the same interface.
@rohitjoshi You may also find this useful: https://crates.io/crates/concread, which has a concurrently readable / transactional cache implementation.
@Firstyear Thanks for sharing. Earlier I saw
@rohitjoshi If your keys are never updated, and you are mostly reading rather than writing, then you have even more reason to look at the arcache. This design has "no" locking, allows fully parallel lookups between all readers, and when you have a cache miss, any reader can add content to the cache without blocking existing readers. As a bonus, it also supports SIMD for parallel key lookups via a feature flag on nightly Rust. Additionally, ARC as a cache replacement strategy is far more effective than LRU :) https://github.com/kanidm/concread/blob/master/CACHE.md Feel free to email me directly (rather than us annoying @jeromefroe) — my address can be found on my GitHub profile.
lru-rs is quite fast compared to many other LRU cache implementations. Is there any way to optimize multi-threaded access? Maybe a read-write lock, or reducing the locking scope, or something like CHashMap: https://docs.rs/chashmap/2.2.0/chashmap/. get and put take a mutable self, so the compiler forces the use of a mutex lock in a multi-threaded environment even though Send and Sync are implemented.
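The &mut self constraint is inherent to LRU semantics: even a read must move the entry to the front of the recency list, so a read-write lock does not help and the usual pattern is Arc<Mutex<...>>. This sketch uses a HashMap as a stand-in for the cache; with the lru crate the shape would be the same, Arc<Mutex<LruCache<K, V>>>.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn n threads that each insert one entry through a shared
// Arc<Mutex<...>>, then return the final entry count. Every access,
// read or write, must take the exclusive lock, which is exactly the
// contention the thread above is asking about.
fn concurrent_fill(n: i32) -> usize {
    let cache = Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..n)
        .map(|i| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                cache.lock().unwrap().insert(i, i * 10);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let len = cache.lock().unwrap().len();
    len
}

fn main() {
    assert_eq!(concurrent_fill(4), 4);
    println!("4 threads inserted 4 entries");
}
```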