
deadpool-lapin - recycling connections and channels #47

Open
ppiotr3k opened this issue Mar 28, 2020 · 6 comments
Labels
A-lapin Area: Lapin / deadpool-lapin enhancement New feature or request

Comments


ppiotr3k commented Mar 28, 2020

Per my understanding, RabbitMQ requires using a channel (lapin::Channel) within a connection (lapin::Connection) in order for a Publisher to send messages.

Unfortunately, I'm struggling to recycle deadpool_lapin::Connection objects provided by deadpool_lapin::Pool Manager, as the Manager does not handle channels, and lapin::Connection.channels is a private field.

My use case is a hyper based microservice publishing events to RabbitMQ.
Currently, the least-bad performance is achieved by reusing a single channel directly:

let client: deadpool_lapin::Connection = pool.get().await.expect("RabbitMQ connection failed");
let channel = client.create_channel().await.unwrap();

let service = make_service_fn(|_| {
    let channel = channel.clone();
    async {
        Ok::<_, hyper::Error>(
            service_fn(move |req: Request<Body>| microservice_handler(req, channel.clone()))
        )
    }
});

While this gives the best performance so far, the overall impact on req/s is still significant, as the single connection and channel to RabbitMQ become the bottleneck.

I would prefer using code similar to deadpool-redis or deadpool-postgres, allowing several connections to RabbitMQ, each with its channel:

let service = make_service_fn(|_| {
    let pool = pool.clone();
    async {
        Ok::<_, hyper::Error>(
            service_fn(move |req: Request<Body>| microservice_handler(req, pool.clone()))
        )
    }
});

However, for the time being, since deadpool_lapin::Pool does not handle channels and lapin::Connection.channels is a private field, channels must be created and dropped with each request, resulting in an even greater performance impact than the single connection/channel bottleneck.

While I have limited experience on the matter, my intuition is that, to match the Pool pattern, it would be great if the Manager implementation managed channels on top of the connections.

@bikeshedder bikeshedder added A-lapin Area: Lapin / deadpool-lapin enhancement New feature or request labels Mar 29, 2020

jbg commented Mar 31, 2020

This would benefit us greatly as well (similar use-case - hyper service which sometimes needs to send something to RabbitMQ, and currently has to get a connection from the pool, open a channel, publish, and then close the channel).
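The per-request workaround described above can be sketched as follows, assuming deadpool-lapin with lapin 1.x; the exchange and routing key here are placeholders, and the channel is opened and closed on every call, which is exactly the overhead being discussed:

```rust
use deadpool_lapin::Pool;
use lapin::{options::BasicPublishOptions, BasicProperties};

async fn publish_once(pool: &Pool, payload: Vec<u8>) -> Result<(), Box<dyn std::error::Error>> {
    // One checkout per request.
    let conn = pool.get().await?;
    // A fresh channel is opened for this single publish...
    let channel = conn.create_channel().await?;
    channel
        .basic_publish(
            "",                 // default exchange (placeholder)
            "my-queue",         // routing key (placeholder)
            BasicPublishOptions::default(),
            payload,
            BasicProperties::default(),
        )
        .await?;
    // ...and is dropped (closed) here, so every request pays the
    // channel open/close round trip to the broker.
    Ok(())
}
```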

I think there may be some complexity around making sure the channel is safe to re-use, though. Channels have some state - they can be placed in confirm mode (by sending the confirm.select method) or transactional mode (tx.select), for example. I don't think those operations can be undone.

We use confirms on all channels, but in an application that didn't want them on some channels, you would not want to get given a re-used channel which was previously put in confirm mode.
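A minimal sketch of the channel state in question (lapin 1.x API; the exchange and queue names are placeholders): once confirm.select has been sent, the channel stays in confirm mode, so a channel pool would have to track this before handing a recycled channel to a consumer that doesn't expect it.

```rust
use lapin::options::{BasicPublishOptions, ConfirmSelectOptions};
use lapin::BasicProperties;

async fn publish_with_confirm(channel: &lapin::Channel, payload: Vec<u8>) -> lapin::Result<()> {
    // Irreversible: AMQP 0-9-1 has no way to leave confirm mode again.
    channel.confirm_select(ConfirmSelectOptions::default()).await?;
    let confirmation = channel
        .basic_publish(
            "",         // placeholder exchange
            "my-queue", // placeholder routing key
            BasicPublishOptions::default(),
            payload,
            BasicProperties::default(),
        )
        .await?  // message handed to lapin
        .await?; // broker ack (only meaningful in confirm mode)
    let _ = confirmation;
    Ok(())
}
```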

bikeshedder (Owner) commented

This is very similar to something I'm thinking about for deadpool-postgres. Instead of creating multiple connections to the DB it is possible to use pipelining. That is quite similar to this problem, as it is only possible when not using transactions, so the underlying connection is never used for something else that modifies its state.

When first working on deadpool-lapin I experimented with returning channels instead of connections but found them to be not very recycling friendly. So I ended up just recycling connections. This is not very efficient as you already noticed.

I think it's pretty obvious that connection pooling only makes sense when sending a message to the broker. For anything that receives messages from the broker the pool is mostly useless as the service very likely never returns the connection to the pool.

Right now deadpool_lapin::Pool is just a type alias for deadpool::managed::Pool. If I were to change that into its own type that wraps the underlying connection pool, I could add new methods to it.

e.g. Pool could implement a basic_publish method which uses a single connection internally. It could even cache the declared queues and exchanges and reduce the roundtrips similar to the statement cache of deadpool-postgres.

impl Pool {
    pub async fn basic_publish(
        &self,
        exchange: &str,
        routing_key: &str,
        options: BasicPublishOptions,
        payload: Vec<u8>,
        properties: BasicProperties
    ) -> Result<()> {
        ...
    }
}
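One way the sketched convenience method could be filled in, as a hedged illustration rather than the actual implementation: the wrapper struct and error handling here are hypothetical, while lapin's `basic_publish` signature (lapin 1.x) is real. A real version would cache channels instead of opening one per call.

```rust
use lapin::{options::BasicPublishOptions, BasicProperties};

// Hypothetical wrapper around the existing connection pool.
pub struct Pool {
    inner: deadpool_lapin::Pool,
}

impl Pool {
    pub async fn basic_publish(
        &self,
        exchange: &str,
        routing_key: &str,
        options: BasicPublishOptions,
        payload: Vec<u8>,
        properties: BasicProperties,
    ) -> Result<(), Box<dyn std::error::Error>> {
        let conn = self.inner.get().await?;
        // Opened per call in this sketch; the proposal is to cache/recycle
        // this channel (and possibly declared queues/exchanges) internally.
        let channel = conn.create_channel().await?;
        channel
            .basic_publish(exchange, routing_key, options, payload, properties)
            .await?;
        Ok(())
    }
}
```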

Personally I would also love to add some kind of declaration system similar to kombu where you declare the exchanges and queue in a separate structure and just reference it from there. e.g.

struct Publisher<'a> {
    exchange: &'a str,
    routing_key: &'a str,
    options: BasicPublishOptions,
}
impl Publisher<'_> {
    pub async fn publish(&self, pool: &Pool, payload: Vec<u8>) -> Result<()> {
        ...
    }
}

Just sketching some ideas here...


Dowwie commented Apr 7, 2020

It is my understanding that a server should have very few RabbitMQ connections and instead many channels: https://www.cloudamqp.com/blog/2018-01-19-part4-rabbitmq-13-common-errors.html

I don't know how authoritative a source this is, but the company specializes in this domain.

ppiotr3k (Author) commented

> It is my understanding that a server should have very few RabbitMQ connections and instead many channels: https://www.cloudamqp.com/blog/2018-01-19-part4-rabbitmq-13-common-errors.html
>
> I don't know how authoritative a source this is, but the company specializes in this domain.

While I don't have much experience, I share both the understanding and CloudAMQP's best practice 🙂

I was thinking about @bikeshedder's rough ideas, wondering how far we can go without compromising the spirit of dead simple, both in usage and in this crate's implementation.

And it made me wonder again about this abstraction at the channel level, keeping channels in a safely recyclable state as mentioned by @jbg. I'm even considering asking in https://github.com/sozu-proxy/lapin for Keruspe's thoughts, and about eventual plans to have a pool directly in the lapin library 🤔

bikeshedder (Owner) commented

I'm currently tinkering with #196, which should make it possible to implement this feature. The most difficult part is detecting and handling broken connections. That could be solved by a new recycle method, called by the SharedPool, which detects broken connections and detaches the object from the backing pool.
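A hedged sketch of what such broken-connection detection could look like in a recycle hook. The function shape loosely follows deadpool 0.x's Manager::recycle and deadpool's RecycleError; `lapin::Connection::status()` is a real API, but the exact integration with SharedPool is assumed:

```rust
use deadpool::managed::{RecycleError, RecycleResult};

async fn recycle(conn: &mut lapin::Connection) -> RecycleResult<lapin::Error> {
    if conn.status().connected() {
        // Connection looks healthy: return it to the pool.
        Ok(())
    } else {
        // Rejecting the object here lets the pool drop it and
        // create a fresh connection on the next checkout.
        Err(RecycleError::Message("connection closed".into()))
    }
}
```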


stennisrl commented Aug 18, 2022

Has there been any recent progress made on this potential feature? I hate to bump a thread with nothing to contribute, but I ran into the same issue in my own application and my workaround is sub-par at best.
