
Congratulations and a few thoughts #15

Open
the-nicolas opened this issue Feb 4, 2020 · 1 comment

Comments

@the-nicolas

First of all I want to congratulate you on this fantastic piece of code! Finally a conceptual approach that makes sense.

My thoughts:

I think today's systems need more than classic caches, and your approach already goes a lot further. I don't need a cache; I need a distributed memory manager. The problem at the moment is that the cache is always seen as an intermediate layer in front of data sources (DB, API), so a "cost / benefit" decision always has to be made.

I think it should be easier! The cache should basically be the default place to store data - and then, depending on the configuration, reduce latency and compute cost in a distributed environment.

What is missing?

1. Multi-tier

  • The cache should basically be multi-tier, though I would not overdo it (creating X adapters, etc.)

  • However, it must be possible to use an in-memory cache.

  • Local memory is just FAST, and today's servers really have enough of it.

  • If the cache is to be the basis, then you should also be able to use it for development or for small projects, where there may be no Redis (yet).

    An in-memory cache with a size limit and hit tracking for optimal garbage collection
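The in-memory tier with a size limit and hit tracking described above could look something like this. This is a hypothetical sketch, not the library's actual API; the class name, the `maxEntries` limit, and the LFU-style eviction are all my assumptions:

```typescript
// Sketch of an in-memory tier: bounded size, per-entry hit tracking,
// and eviction of the least-used entry when the limit is reached.
class InMemoryTier<V> {
  private store = new Map<string, { value: V; hits: number }>();

  constructor(private maxEntries: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    entry.hits += 1; // hit tracking drives the eviction decision
    return entry.value;
  }

  set(key: string, value: V): void {
    if (!this.store.has(key) && this.store.size >= this.maxEntries) {
      this.evictLeastUsed();
    }
    this.store.set(key, { value, hits: 0 });
  }

  // "Garbage collect": drop the entry with the fewest recorded hits.
  private evictLeastUsed(): void {
    let coldest: string | undefined;
    let minHits = Infinity;
    for (const [key, entry] of this.store) {
      if (entry.hits < minHits) {
        minHits = entry.hits;
        coldest = key;
      }
    }
    if (coldest !== undefined) this.store.delete(coldest);
  }
}
```

A real implementation would likely bound memory in bytes rather than entry count, which is exactly the sizing problem discussed further down in this thread.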

2. Invalidation - instead of TTL

  • Our microservices receive events via a message broker, and based on that information outdated data can be deleted.
  • Right now I see no direct delete methods - and they should also work based on tags, not be limited to a single key
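Event-driven invalidation by tag can be sketched with a reverse index from tag to keys, so one broker event wipes every related record. The names here are illustrative, not the library's real API:

```typescript
// Sketch: values keyed normally, plus a tag -> keys reverse index
// so a single event can invalidate a whole group of records.
class TaggedCache<V> {
  private values = new Map<string, V>();
  private tagIndex = new Map<string, Set<string>>();

  set(key: string, value: V, tags: string[] = []): void {
    this.values.set(key, value);
    for (const tag of tags) {
      if (!this.tagIndex.has(tag)) this.tagIndex.set(tag, new Set());
      this.tagIndex.get(tag)!.add(key);
    }
  }

  get(key: string): V | undefined {
    return this.values.get(key);
  }

  // Direct delete by key, as requested above.
  delete(key: string): void {
    this.values.delete(key);
  }

  // Delete everything carrying the tag, e.g. on a "user.updated" event.
  invalidateTag(tag: string): void {
    for (const key of this.tagIndex.get(tag) ?? []) {
      this.values.delete(key);
    }
    this.tagIndex.delete(tag);
  }
}
```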

3. Versioning - short round trip instead of TTL

  • Internally you already work with versions
  • It should be possible to ask Redis for a newer version (based on the version held in the local in-memory cache)
  • I have already built such a thing with a small Lua script injected into Redis, which only returns the data if a newer version exists.
  • With that you need just one request and a ~5 byte response if your local data is still up to date
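The "short round trip" idea could look roughly like this. The Lua script assumes records are stored as Redis hashes with `version` and `payload` fields (my assumption, not the author's actual script), and the TypeScript function below is a pure simulation of the same decision logic for illustration:

```typescript
// Server-side check (run via EVAL): return the payload only when a
// newer version exists, otherwise just the tiny version number.
const CHECK_VERSION_LUA = `
  local version = tonumber(redis.call('HGET', KEYS[1], 'version') or '0')
  if version > tonumber(ARGV[1]) then
    return {version, redis.call('HGET', KEYS[1], 'payload')}
  end
  return {version}
`;

type CacheRecord = { version: number; payload: string };

// Pure simulation of the script's decision for a local round trip.
function checkVersion(
  remote: CacheRecord | undefined,
  localVersion: number
): { version: number; payload?: string } {
  const version = remote?.version ?? 0;
  if (remote && version > localVersion) {
    return { version, payload: remote.payload }; // newer data: full reply
  }
  return { version }; // local copy still current: tiny reply, no payload
}
```

The win is that serialization and transfer of the payload are skipped entirely whenever the local copy is current, which matches the strategy drwatsno describes in the reply below.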

4. Pre-warm in memory cache

  • In combination with a message broker (or even Redis pub/sub) it is also possible to cache data in memory before it gets accessed

  • This logic is application-side, but a good interface would help.

    Refresh-ahead is nice, but I want control over my data, not just statistical optimizations
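The pre-warming flow above can be sketched with an in-process EventEmitter standing in for Redis pub/sub or a message broker; channel name and payload shape are made up for illustration:

```typescript
import { EventEmitter } from "node:events";

// An in-process emitter stands in for the broker / Redis pub/sub.
const broker = new EventEmitter();
const localCache = new Map<string, unknown>();

// Application-side hook: populate the in-memory tier the moment the
// broker announces fresh data, before any request asks for it.
broker.on("cache.warm", (key: string, value: unknown) => {
  localCache.set(key, value);
});

// A producer (another service, a CDC pipeline, ...) publishes data.
broker.emit("cache.warm", "user:42:profile", { name: "Nic" });
```

With a real broker the subscription callback is asynchronous, but the shape of the interface - subscribe once, write into the local tier - stays the same.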

Right now we have many backends with some kind of "session stickiness", just to have a little information already in place to improve latency. That sucks, and with the possibilities described above it would be easy to solve.

Maybe you have similar ideas? I would appreciate feedback, and if you have a roadmap for the direction the project should take, I would be very interested!

Thanks a lot!

Nic

@drwatsno
Collaborator

drwatsno commented Feb 4, 2020

Hello! First of all, thanks for the kind words. A lot of what you wrote makes sense. Therefore, I will share our plans for the near future.

1, 3. As for the multi-tier cache: past implementations made it possible to additionally use in-memory caching, but we faced the problem of how exactly to calculate the amount of memory a record uses and how to evict records based on that.

The strategy we used was to serve the data from local memory, without the additional expense of serialization and deserialization, whenever Redis did not have a more recent version of it.

In the future, we will definitely implement an in-memory layer (an in-memory adapter, plus memory storage holding local versions of the data) with exactly the behavior you described.

2. Delete methods will be implemented.

If we talk about the roadmap, then in the near future it is like this:

  1. We are currently doing optimizations to address the problem of excessive tag storage, as well as the complexity of get methods, which grows with the number of tags on a record. We have already done some optimizations, but they hurt the universality of the adapter methods; in the near future we will introduce a composition to remove this drawback.
  2. We are planning improvements to the Redis adapter, since much is currently implemented too simplistically (hashes are not used where they could be)
  3. We will implement an adapter for Memcached, since in some cases it does better than Redis
  4. We also plan to introduce benchmarking and load testing in our CI.
