Outbound queue improvements #4084
Conversation
Signed-off-by: Neil Twigg <neil@nats.io>
Signed-off-by: Neil Twigg <neil@nats.io>
Have we tested against the setup from yesterday?
func nbPoolPut(b []byte) {
	switch cap(b) {
	case nbPoolSizeSmall:
		b := (*[nbPoolSizeSmall]byte)(b[0:nbPoolSizeSmall])
		nbPoolSmall.Put(b)
	case nbPoolSizeMedium:
		b := (*[nbPoolSizeMedium]byte)(b[0:nbPoolSizeMedium])
Why do we have these?
Without the coalescing, we have to allocate more buffers while traffic is in flight, so I added an extra pool size to provide a middle ground between small and large messages.
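For illustration, the Get side of a three-tier pool would look roughly like the sketch below. Only the names are taken from the diff above; the size values and the nbPoolGet helper are assumptions, not the PR's actual code.

```go
package server

import "sync"

// Illustrative sizes only; the real constants in the PR may differ.
const (
	nbPoolSizeSmall  = 512
	nbPoolSizeMedium = 4096
	nbPoolSizeLarge  = 65536
)

var (
	nbPoolSmall  = sync.Pool{New: func() any { return new([nbPoolSizeSmall]byte) }}
	nbPoolMedium = sync.Pool{New: func() any { return new([nbPoolSizeMedium]byte) }}
	nbPoolLarge  = sync.Pool{New: func() any { return new([nbPoolSizeLarge]byte) }}
)

// nbPoolGet (hypothetical counterpart to nbPoolPut) returns a zero-length
// slice backed by the smallest pooled array that can hold sz bytes.
func nbPoolGet(sz int) []byte {
	switch {
	case sz <= nbPoolSizeSmall:
		b := nbPoolSmall.Get().(*[nbPoolSizeSmall]byte)
		return b[:0]
	case sz <= nbPoolSizeMedium:
		b := nbPoolMedium.Get().(*[nbPoolSizeMedium]byte)
		return b[:0]
	default:
		b := nbPoolLarge.Get().(*[nbPoolSizeLarge]byte)
		return b[:0]
	}
}
```

The medium tier avoids the worst case where every payload slightly larger than a small buffer has to take a large one.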
I meant the casting before placing back into the pool. We already cast when we pull it out, so it seems we might not need it here?
Ah, the code was doing this before too. The reason is that sync.Pool is most efficient when its contents are pointerised; otherwise it creates lots of copies of the slice header on the heap as they get re-boxed passing through the interface{} boundary.
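To make that concrete, here is a minimal single-size sketch (the 4096 size and the getBuf/putBuf names are just for the example, not the server's API): keeping a *[N]byte in the pool means only a pointer crosses the interface{} boundary, whereas Put-ting a plain []byte would heap-allocate a copy of the three-word slice header on every round trip.

```go
package bufpool

import "sync"

const bufSize = 4096 // illustrative size only

// The pool holds *[bufSize]byte values. A pointer boxes into the pool's
// interface{} without an extra allocation; a []byte would not, because its
// slice header has to be copied to the heap each time it is stored.
var bufPool = sync.Pool{
	New: func() any { return new([bufSize]byte) },
}

func getBuf() []byte {
	// Unbox the pointer-to-array and re-slice it: zero length, full capacity.
	b := bufPool.Get().(*[bufSize]byte)
	return b[:0]
}

func putBuf(b []byte) {
	// Only buffers with the original backing array go back. The slice is
	// converted back to a pointer-to-array (Go 1.17+) so the pool stores a
	// single pointer rather than a slice header.
	if cap(b) != bufSize {
		return
	}
	bufPool.Put((*[bufSize]byte)(b[0:bufSize]))
}
```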
Signed-off-by: Neil Twigg <neil@nats.io>
Testing now
The previous race issue in Go and the WebSocket parse/connection issues were not found when testing the live app! PS: the browser app is now MUCH faster on the full 2.9.16.
LGTM!
This PR effectively reverts part of #4084, which removed the coalescing from the outbound queues because I initially thought it was the source of a race condition. Further investigation has proven that not only was that untrue (the race actually came from the WebSocket code; all coalescing operations happen under the client lock), but removing the coalescing also worsens performance.

Signed-off-by: Neil Twigg <neil@nats.io>
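For context, the coalescing being discussed is the usual trick of appending small writes to the tail buffer of the outbound queue rather than queueing a new buffer per write. A minimal sketch under the client lock follows; the client/out/nb names and queueOutbound are hypothetical stand-ins, not the server's real types.

```go
package outbound

import "sync"

// Sketch only: illustrates coalescing into the last pending buffer while
// holding the client lock, which is why it cannot race with the flusher.
type client struct {
	mu  sync.Mutex
	out struct {
		nb [][]byte // pending buffers, later handed to a single writev call
	}
}

func (c *client) queueOutbound(data []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()

	// Coalesce: if the last pending buffer still has spare capacity, copy
	// into it instead of queueing a new buffer. Fewer, fuller buffers mean
	// fewer iovecs per writev and fewer pooled buffers in flight.
	if n := len(c.out.nb); n > 0 {
		last := c.out.nb[n-1]
		if free := cap(last) - len(last); free > 0 {
			take := free
			if take > len(data) {
				take = len(data)
			}
			c.out.nb[n-1] = append(last, data[:take]...)
			data = data[take:]
		}
	}
	// Whatever did not fit starts a new buffer (the real code would take
	// this from the buffer pool rather than allocating directly).
	if len(data) > 0 {
		c.out.nb = append(c.out.nb, append([]byte(nil), data...))
	}
}
```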
This extends the previous work in #3733 with the following:
- `writev` syscall in rare circumstances

Fixes nats-io/nats.ws#194.
Signed-off-by: Neil Twigg <neil@nats.io>