
Add ability to explicitly request prefetch_related from cache #317

Open

bharling wants to merge 7 commits into master
Conversation

bharling

This PR adds a cache_prefetch_related(*lookups, ops, timeout, lock) method that rewrites a regular prefetch_related call and attempts to pull the related models from the cacheops cache instead of the database.

In our application we have a particular model with a large number of many-to-many relations, and we were able to achieve a significant performance increase by fetching the related models from the cacheops cache where possible. Obviously this only works if the querysets in question have already been cached at some point in the past.
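For illustration, the cache-first idea behind the proposed method can be sketched in plain Python. All names here are hypothetical stand-ins, not the actual cacheops internals; a dict plays the role of the Redis-backed cache:

```python
# Hypothetical sketch: try the cache first, fall back to the database,
# and populate the cache on a miss. A dict stands in for Redis.

cache = {}

def fetch_from_db(lookup):
    # placeholder for evaluating the real related queryset
    return ["row-for-%s" % lookup]

def cached_prefetch(lookup):
    key = "prefetch:%s" % lookup
    if key in cache:
        return cache[key]           # cache hit: no database query
    rows = fetch_from_db(lookup)    # cache miss: hit the database
    cache[key] = rows               # store for subsequent calls
    return rows

first = cached_prefetch("tags")   # miss, goes to the "database"
second = cached_prefetch("tags")  # hit, served from the cache
```

The second call returns the cached rows without touching the database, which is where the performance win comes from when the related querysets are already warm in the cache.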

@Suor
Owner

Suor commented Feb 15, 2019

Not sure this is a good approach. This obviously won't work well if the base queryset changes slightly, e.g. you fetch the latest posts, which constantly change but have a huge overlap each time. This will potentially populate the cache with duplicate data.
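The duplication concern can be shown with a small sketch, assuming (as cacheops does) that whole result lists are cached per queryset key rather than per object:

```python
# Two snapshots of "latest posts" taken at different moments: the
# querysets differ slightly, so they get different cache keys, and
# the overlapping rows end up stored twice.

cache = {}

def cache_queryset(key, rows):
    cache[key] = list(rows)   # each queryset caches its full result list

cache_queryset("posts:latest@t1", [1, 2, 3, 4, 5])
cache_queryset("posts:latest@t2", [2, 3, 4, 5, 6])

# rows 2..5 are now duplicated across the two cache entries
duplicated = set(cache["posts:latest@t1"]) & set(cache["posts:latest@t2"])
```

With a constantly shifting base queryset, each variant writes its own near-identical copy of the shared rows into the cache.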

@bharling
Author

It seems to work well for our particular use case, but I agree it could certainly do with more thorough test cases for the issue you mention. Can you suggest an approach I could take to test that?

Until recently we had caching enabled for all models in our app via the regular CACHEOPS settings in Django, but we're now taking a more explicit approach to try to avoid some inconsistency errors we occasionally got between our 'dashboard' application and the user-facing app (both systems were aggressively caching everything, whereas the 'dashboard' app actually needs to talk to the database more).

As far as I can tell from our unit tests, that approach was basically doing the same thing as this PR: the models referenced in our prefetch_related queries were fetched from the cache more often than not, since they were all included in the automatic caching settings.

My aim here was to replicate that behavior explicitly rather than via the automatic caching.

In the particular case we had, the queryset does not change frequently, but it is often user-dependent. We also need to fetch a lot of many-to-many relations in the same query.

@Suor
Owner

Suor commented Feb 15, 2019

Good point that it already works this way when automatic caching is enabled. I need to think about it.

@Suor
Owner

Suor commented Feb 15, 2019

BTW, I was thinking of caching objects individually by pk and then fetching them with MGET. But that would require hooking into the fetching process and is overall much more complex to implement.
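The per-pk alternative mentioned here can be sketched as follows. This is only an illustration of the idea, not a proposed implementation; a dict simulates Redis, with `mset`/`mget` mirroring the semantics of the Redis MSET and MGET commands (MGET returns None for missing keys):

```python
# Cache each object under its own pk-based key, then fetch a whole
# batch with one MGET and query the database only for the misses.

store = {}

def mset(pairs):
    store.update(pairs)

def mget(keys):
    # like Redis MGET: one round trip, None for keys that are missing
    return [store.get(k) for k in keys]

mset({"post:1": "a", "post:2": "b"})

keys = ["post:1", "post:2", "post:3"]
values = mget(keys)
missing = [k for k, v in zip(keys, values) if v is None]
# only the missing pks (here "post:3") would need a database query
```

Because overlapping querysets share the same per-pk keys, this avoids the duplicate-data problem above, at the cost of hooking into how objects are fetched and assembled.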

@Suor
Owner

Suor commented Feb 18, 2019

Here is what I think about it:

  • this should be incorporated as a practical stopgap, since there is no roadmap for a better solution.
  • if you don't use params like ops, timeout and lock, then remove that handling. First, ops doesn't make much sense here since only the fetch op would be used, and second, fewer features means less to potentially support or drop later.
  • stop modifying Prefetch objects; make shallow copies instead.

Would you make these changes to your pull request?
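The last point, making shallow copies instead of mutating the caller's Prefetch objects, can be sketched like this. `FakePrefetch` is a hypothetical stand-in for `django.db.models.Prefetch` so the sketch runs without Django:

```python
import copy

# Sketch: adjust a shallow copy of each Prefetch-like object so the
# caller's instance is left untouched. FakePrefetch is a stand-in
# for django.db.models.Prefetch.

class FakePrefetch:
    def __init__(self, lookup, queryset=None):
        self.lookup = lookup
        self.queryset = queryset

def with_cached_queryset(prefetch, cached_qs):
    clone = copy.copy(prefetch)   # shallow copy; original is untouched
    clone.queryset = cached_qs    # swap in the cache-backed queryset
    return clone

original = FakePrefetch("tags", queryset="db-qs")
patched = with_cached_queryset(original, "cached-qs")
```

The caller's `original` keeps its database-backed queryset, so passing the same Prefetch object into other code later behaves as expected.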

@bharling
Author

Sure, that all sounds sensible. I'll make those amendments.

@Suor
Owner

Suor commented Apr 5, 2020

It's been here for quite a while. Is this still needed?

@bharling
Author

I'm still using this on our production app and it makes a big difference. I'd like to see it merged, if you're happy with it.

@Suor
Owner

Suor commented May 13, 2020

I'll review it.

@Natgho

Natgho commented Mar 18, 2024

Hello hello,

It's a good idea and it's been three years. Is there an update?
