Mechanism for requesting shared Objects #196

Open
maboesanman opened this issue Apr 25, 2022 · 7 comments
Labels
A-core Area: Core / deadpool enhancement New feature or request

Comments

@maboesanman

tokio_postgres allows for pipelining, but it's not really possible to take advantage of it with deadpool, since the client you receive is always exclusively borrowed.

It would be pretty powerful if there were a get_shared method on Pool that gives you a shared reference to an object instead of the object itself (or maybe an ObjectRef or something), which could be handed out again to other callers of get_shared.

The reason you still need mutable references to clients is transactions, but if you don't need a transaction then a shared reference suffices, and it might not make sense for a non-transaction client to fully tie up the resource.

If deadpool is currently a "first available mutex", then I'm suggesting a "first available rwlock".
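A rough sketch of the usage I have in mind (get_shared is hypothetical and doesn't exist in deadpool today; the queries themselves are plain tokio_postgres API):

```rust
use deadpool_postgres::Pool;

// Hypothetical: `get_shared` does not exist yet. Both handles could point
// at the same underlying client, letting tokio_postgres pipeline the two
// queries over a single connection instead of tying up two pool objects.
async fn pipelined(pool: &Pool) -> Result<(), Box<dyn std::error::Error>> {
    let a = pool.get_shared().await?; // hypothetical shared handle
    let b = pool.get_shared().await?; // may share the client with `a`
    let (_row_a, _row_b) = tokio::try_join!(
        a.query_one("SELECT 1", &[]),
        b.query_one("SELECT 2", &[]),
    )?;
    Ok(())
}
```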

@bikeshedder bikeshedder added A-core Area: Core / deadpool enhancement New feature or request labels Apr 25, 2022
@bikeshedder
Owner

This idea has been brought up before and I have been thinking about it for a long time now. Until now I didn't really have an idea how to solve this properly without introducing a lot of complicated (and error-prone) code. Yesterday I started writing some implementation notes and ditched them shortly after, as I had an "Aha!" moment. I found a better and far simpler way to implement this without affecting the core pool implementation at all...

How about adding a SharedPool<T> that hands out SharedObject<T> but is backed by a regular Pool<T>? The difference between SharedObject<T> and Object<T> is the lack of a DerefMut implementation. SharedPool<T> keeps a list of those objects and sorts them by the number of current users. When a SharedObject is returned to the SharedPool and its user_count reaches 0, it can be returned to the backing Pool.

That would be the most trivial and straightforward implementation of this feature.
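A minimal sketch of that shape (all names hypothetical, bookkeeping reduced to the bare minimum; real code would also have to hand the Object back to the backing Pool once the count hits 0):

```rust
use std::ops::Deref;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

use deadpool::managed::{Manager, Object, Pool};

struct Shared<M: Manager> {
    obj: Object<M>,
    users: AtomicUsize,
}

pub struct SharedPool<M: Manager> {
    pool: Pool<M>,
    // Objects currently shared out; hand out the least-loaded one first.
    shared: Mutex<Vec<Arc<Shared<M>>>>,
}

pub struct SharedObject<M: Manager> {
    inner: Arc<Shared<M>>,
}

// Deref only -- no DerefMut. That is the entire difference to Object<M>.
impl<M: Manager> Deref for SharedObject<M> {
    type Target = M::Type;
    fn deref(&self) -> &M::Type {
        &self.inner.obj
    }
}

impl<M: Manager> Drop for SharedObject<M> {
    fn drop(&mut self) {
        // Once this reaches 0 the SharedPool can return the Object
        // (and with it exclusive access) to the backing Pool.
        self.inner.users.fetch_sub(1, Ordering::Release);
    }
}
```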

There are lots of open questions of course:

  • Should there be a max_age for objects?
  • Is it actually safe to upgrade a SharedObject to an Object once it has no users?
  • What configuration parameters should it support? I'm pretty sure it needs a max_users_per_object, but what about max_objects?
  • Should the SharedPool prefer separate objects over sharing objects, or is there maybe some kind of heuristic? e.g. aim for a count of X users per object and, depending on the current situation, create new objects or hand out additional shares of already claimed objects (see the sketch below).
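For that last bullet, one possible shape of such a heuristic (purely illustrative):

```rust
// Prefer a fresh object while below `max_objects`, otherwise share the
// least-loaded object that is still below `max_users_per_object`.
fn pick_object(
    user_counts: &[usize],
    max_objects: usize,
    max_users_per_object: usize,
) -> Option<usize> {
    if user_counts.len() < max_objects {
        return None; // tell the caller to create a new object instead
    }
    user_counts
        .iter()
        .enumerate()
        .filter(|&(_, &count)| count < max_users_per_object)
        .min_by_key(|&(_, &count)| count)
        .map(|(index, _)| index)
}
```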

I'm pretty sure there are lots of gotchas along the way, but a prototype of this feature could be developed in a few hours to show its potential.

@maboesanman
Author

maboesanman commented May 1, 2022

I think this can be done purely as an async concurrency primitive on top of Deadpool.

Here's a sketch of what this primitive might be:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=168e4a41f8167eb43bc3c7867713b851

In the case of Deadpool, the T param would be Object.
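Roughly, the interface would look like this (bodies elided, guard types abbreviated; the playground link has the full sketch):

```rust
// Interface sketch only -- see the playground for the working version.
pub struct MultiRwLock<T> {
    resources: Vec<T>, // plus per-resource reader counts and wait queues
}

pub struct ReadGuard<'a, T>(&'a T);
pub struct WriteGuard<'a, T>(&'a mut T);

impl<T> MultiRwLock<T> {
    /// Shared access to the currently least-contended resource.
    pub async fn read(&self) -> ReadGuard<'_, T> {
        unimplemented!()
    }
    /// Exclusive access to a single resource once it has no readers.
    pub async fn write(&self) -> WriteGuard<'_, T> {
        unimplemented!()
    }
}
```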

I'd be interested in experimenting with building this out on a fork; it seems like an interesting problem.

edit: updated playground with more detail

@maboesanman
Author

maboesanman commented May 2, 2022

I've thrown together an implementation of the concurrency primitive which I think will solve the problem here:

https://github.com/maboesanman/MultiRwLock

Using this with Deadpool should be as simple as adding all the Deadpool Objects to the resources, then calling read for a reader and write for a writer.
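Something like this (the new/insert constructors are hypothetical, but Pool::get and the Client object type are regular deadpool_postgres API):

```rust
use deadpool_postgres::{Client, Pool};

// Hypothetical glue: park `n` pool objects inside the MultiRwLock. While
// the lock owns them, they are never returned to the pool.
async fn build_shared(pool: &Pool, n: usize) -> MultiRwLock<Client> {
    let lock = MultiRwLock::new(); // hypothetical constructor
    for _ in 0..n {
        let client = pool.get().await.expect("failed to get client");
        lock.insert(client); // hypothetical
    }
    lock
}

// let c = shared.read().await;  // shared client, pipelining-friendly
// let t = shared.write().await; // exclusive client, e.g. for transactions
```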

I haven't implemented it yet, but it shouldn't be hard to add/remove resources while it's running.

Note: I have not tested this at all. If you think this looks promising, I'm happy to add some features/tests and assist with integrating it into Deadpool.

@bikeshedder
Owner

I've thrown together an implementation of the concurrency primitive which I think will solve the problem here: (...)

I've skimmed over the implementation and the thing I wonder most about is why you even bother tracking writer handles. This is already done by the backing pool and the underlying Object<T>.

I'm currently working on some internals of the deadpool implementation and once that's finished I'll write a PoC for the SharedPool I talked about earlier. Implementation-wise I imagine it being in the region of ~100 LoC without compromising on functionality.

@maboesanman
Author

You need to track writer handles because you need to know which resource is currently being written to, and you need to move it back into the freed resources once it becomes available again.

It's certainly possible this is over-engineered for this use case, but I think it's a useful primitive anyway, so I'm going to keep messing with it. I'll let you know if it yields nice behavior when used with Deadpool.

@bikeshedder
Owner

You need to track writer handles because you need to know which resource is currently being written to, and you need to move it to the freed resources when available.

That's the thing I don't get. Couldn't you just return all freed resources to the backing pool instead? 🤔

If you're keeping freed resources in a queue, you have created a pool yourself and don't need deadpool at all. Just let your implementation use the Manager directly in order to create and recycle objects.
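For reference, that's roughly this shape (deadpool's managed Manager trait provides create and recycle; exact signatures differ between deadpool versions):

```rust
use deadpool::managed::Manager;

// Sketch: fill a self-managed store directly from the Manager, bypassing
// Pool entirely. `create` takes the place of `Pool::get`; objects handed
// back by users would go through `recycle` before being reused.
async fn refill<M: Manager>(
    manager: &M,
    store: &mut Vec<M::Type>,
    target: usize,
) -> Result<(), M::Error> {
    while store.len() < target {
        store.push(manager.create().await?);
    }
    Ok(())
}
```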

@maboesanman
Author

That's a good point. I may opt to use the manager trait directly for my own uses.
