
Shared redis cache with larger LRU cache size #6

Open
Nevon opened this issue Oct 24, 2018 · 0 comments

Nevon commented Oct 24, 2018

I'm using nodecache-as-promised to cache some values that I would otherwise need to fetch from an expensive remote. In order to share the cache between my instances, I'm using the redis persistence middleware.

My understanding at the time was that I would keep a fixed-size cache in memory, and that the shared cache in Redis would be evicted based on TTL. This appears not to be true: values get deleted from Redis when the in-memory cache evicts a value because the LRU cache is full.

The reason I'm thinking about this is that I have less available memory on my machines than I have space in Redis. In Redis I could store maybe 500,000 items, whereas on my application servers I would only want to fit maybe 100,000. If a value is requested that is not in the local in-memory cache, I would be fine taking the hit of checking whether it is in the Redis cache and then updating my local in-memory cache accordingly, as that's still much cheaper than getting the value from the expensive remote.

Basically the idea is that the in-memory cache would contain a slice of the most recent part of the persistent cache.
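To make the proposal concrete, here is a minimal sketch of the read path described above: check the local LRU first, fall back to the shared store, and only go to the expensive remote on a miss in both tiers. Note that a local eviction never touches the shared store. This is illustrative only — `tieredGet`, the `LRU` class, and the `shared`/`fetchFromRemote` interfaces are hypothetical stand-ins, not the actual nodecache-as-promised or Redis middleware API.

```javascript
// Tiny LRU cache; a Map preserves insertion order, so the first key
// is always the least recently used one.
class LRU {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry LOCALLY only;
      // the shared (Redis) store is deliberately left untouched.
      this.map.delete(this.map.keys().next().value);
    }
  }
}

// `shared` models the Redis-backed cache (async get/set);
// `fetchFromRemote` models the expensive origin lookup.
async function tieredGet(key, local, shared, fetchFromRemote) {
  const hit = local.get(key);
  if (hit !== undefined) return hit;            // 1. local in-memory LRU
  const persisted = await shared.get(key);      // 2. shared cache (e.g. Redis GET)
  if (persisted !== undefined && persisted !== null) {
    local.set(key, persisted);                  //    warm the local slice
    return persisted;
  }
  const value = await fetchFromRemote(key);     // 3. expensive remote
  local.set(key, value);
  await shared.set(key, value);                 //    populate both tiers
  return value;
}
```

With this shape, a value evicted from the local LRU is still served from the shared store on the next request, so the remote is only hit when both tiers miss.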

Does this make sense? I would consider developing this myself, but you probably have more experience in this area, so maybe there are things I'm not taking into account.
