
[server] proper handling of batch errors and mixed calls #917

Merged: niklasad1 merged 5 commits into master from na-batch-errors on Oct 26, 2022

Conversation

niklasad1 (Member) commented on Oct 23, 2022

Builds on top of #896, kudos to @polachok for starting this. Since he hasn't responded to my feedback, I fixed it up myself.

Before, we just checked whether the batch was Vec<Request> or Vec<Notification>, which doesn't work for errors or for batches that mix calls and notifications; this PR fixes that.

In addition, I removed the empty message that we sent for notification-only batches over WS. For HTTP we still send it, to ACK the HTTP request itself.
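To make the fix concrete, here is a minimal sketch of the per-element approach, using hypothetical simplified types (jsonrpsee's real request and notification types also carry the jsonrpc version, params, and richer id types): each raw element is tried as a call first and as a notification second, so one bad or mixed entry no longer rejects the whole batch.

```rust
use serde::Deserialize;
use serde_json::value::RawValue;

// Hypothetical simplified shapes: a call has an `id`, a notification does not.
#[derive(Deserialize)]
struct Call<'a> {
    id: u64,
    method: &'a str,
}

#[derive(Deserialize)]
struct Notif<'a> {
    method: &'a str,
}

enum BatchEntry<'a> {
    Call(Call<'a>),
    Notif(Notif<'a>),
    // Invalid entries get an error response of their own instead of
    // failing the entire batch.
    Invalid,
}

fn parse_batch(data: &[u8]) -> Result<Vec<BatchEntry<'_>>, serde_json::Error> {
    // First pass: split the batch into raw JSON values.
    let raw: Vec<&RawValue> = serde_json::from_slice(data)?;
    // Second pass: classify each element individually. Try `Call` first,
    // because serde ignores unknown fields, so a call would otherwise
    // also deserialize successfully as a `Notif`.
    Ok(raw
        .into_iter()
        .map(|v| {
            if let Ok(c) = serde_json::from_str::<Call>(v.get()) {
                BatchEntry::Call(c)
            } else if let Ok(n) = serde_json::from_str::<Notif>(v.get()) {
                BatchEntry::Notif(n)
            } else {
                BatchEntry::Invalid
            }
        })
        .collect())
}
```

The real implementation additionally builds proper JSON-RPC error objects for invalid entries and enforces the configured max response size; that bookkeeping is omitted here.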

niklasad1 requested a review from a team as a code owner on October 23, 2022 20:05
Err(batch_err) => batch_err,
};
}
if let Ok(batch) = serde_json::from_slice::<Vec<&JsonRawValue>>(&data) {
niklasad1 (Member, Author) commented on Oct 24, 2022

Parsing the request like this is slower than parsing directly into Vec<Request>, which is the most common use case.

Overall the numbers look similar to master's, which I'm happy with considering that I changed the responses to be returned in order now.
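As a hedged illustration of the trade-off (simplified types again, not the PR's exact code): deserializing straight into Vec<Request> touches the input once, while going through Vec<&RawValue> splits the batch first and then parses every element a second time.

```rust
use serde::Deserialize;
use serde_json::value::RawValue;

#[derive(Deserialize)]
struct Request<'a> {
    id: u64,
    method: &'a str,
}

// One pass: the whole batch must be plain calls. Fastest for the common
// case, but one notification or invalid entry fails the entire parse.
fn parse_direct(data: &[u8]) -> Result<Vec<Request<'_>>, serde_json::Error> {
    serde_json::from_slice(data)
}

// Two passes: split into raw values, then parse each element on its own.
// Handles mixed batches at the cost of scanning every element twice.
fn parse_per_element(
    data: &[u8],
) -> Result<Vec<Result<Request<'_>, serde_json::Error>>, serde_json::Error> {
    let raw: Vec<&RawValue> = serde_json::from_slice(data)?;
    Ok(raw.into_iter().map(|v| serde_json::from_str(v.get())).collect())
}
```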

niklasad1 (Member, Author) commented:

Bear in mind that my machine is a bit noisy, so take these benches with a grain of salt:

group                                      batch-ordered                           master
-----                                      -------------                           ------
async/http_batch_requests/fast_call/10     1.26     57.9±8.16µs 168.6 KElem/sec    1.00     46.0±3.16µs 212.4 KElem/sec
async/http_batch_requests/fast_call/100    1.00   281.0±62.15µs 347.6 KElem/sec    1.20   337.5±31.42µs 289.4 KElem/sec
async/http_batch_requests/fast_call/2      1.32     43.6±9.62µs 44.8 KElem/sec     1.00     33.1±2.01µs 59.0 KElem/sec
async/http_batch_requests/fast_call/5      1.56    62.7±16.28µs 77.8 KElem/sec     1.00     40.2±2.96µs 121.5 KElem/sec
async/http_batch_requests/fast_call/50     1.04   182.8±44.29µs 267.2 KElem/sec    1.00   175.3±48.17µs 278.5 KElem/sec
async/ws_batch_requests/fast_call/10       1.00     82.2±9.43µs 118.8 KElem/sec    1.03     84.6±3.72µs 115.4 KElem/sec
async/ws_batch_requests/fast_call/100      1.00   305.9±37.64µs 319.3 KElem/sec    1.18   359.7±19.63µs 271.5 KElem/sec
async/ws_batch_requests/fast_call/2        1.00     42.1±3.17µs 46.4 KElem/sec     1.23     51.8±3.90µs 37.7 KElem/sec
async/ws_batch_requests/fast_call/5        1.00     30.9±1.19µs 158.0 KElem/sec    1.75    54.2±11.31µs 90.2 KElem/sec
async/ws_batch_requests/fast_call/50       1.00   185.8±22.98µs 262.9 KElem/sec    1.23   228.3±14.44µs 213.9 KElem/sec
sync/http_batch_requests/fast_call/10      1.00    72.9±12.58µs 134.0 KElem/sec    1.15     83.9±3.51µs 116.3 KElem/sec
sync/http_batch_requests/fast_call/100     1.14   335.2±42.17µs 291.4 KElem/sec    1.00   292.8±32.85µs 333.5 KElem/sec
sync/http_batch_requests/fast_call/2       1.28     45.0±9.56µs 43.4 KElem/sec     1.00     35.0±4.86µs 55.8 KElem/sec
sync/http_batch_requests/fast_call/5       1.15     49.3±9.73µs 99.1 KElem/sec     1.00     42.7±7.20µs 114.4 KElem/sec
sync/http_batch_requests/fast_call/50      1.00   150.9±41.80µs 323.6 KElem/sec    1.05    159.0±4.82µs 307.2 KElem/sec
sync/ws_batch_requests/fast_call/10        1.00    48.3±14.22µs 202.2 KElem/sec    1.27    61.4±22.71µs 159.1 KElem/sec
sync/ws_batch_requests/fast_call/100       1.00   272.4±46.48µs 358.5 KElem/sec    1.26   343.6±22.20µs 284.2 KElem/sec
sync/ws_batch_requests/fast_call/2         1.05     25.8±1.33µs 75.7 KElem/sec     1.00     24.5±0.72µs 79.6 KElem/sec
sync/ws_batch_requests/fast_call/5         1.04     30.9±1.28µs 158.3 KElem/sec    1.00     29.8±0.85µs 163.8 KElem/sec
sync/ws_batch_requests/fast_call/50        1.03   210.7±18.53µs 231.7 KElem/sec    1.00   205.2±10.41µs 237.9 KElem/sec

let mut got_notif = false;
let mut batch_response = BatchResponseBuilder::new_with_limit(call.max_response_body_size as usize);

let mut pending_calls: FuturesOrdered<_> = batch
jsdw (Collaborator) commented on Oct 26, 2022

Is there any way to avoid duplicating this logic, I wonder? Like a single function whose job is to take a batch request, run some provided function on each valid item, and aggregate the results in the right way.

Maybe it's more complicated than it's worth, but I wonder how much of the logic that doesn't care specifically about WS/HTTP, and only cares about the formatting of messages and such, could be separated out.

(Doesn't need to be related to this PR; just a general thought!)
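For what it's worth, a rough sketch of the kind of helper being suggested, entirely hypothetical (the names and signatures are invented here): a transport-agnostic function that owns ordering and aggregation, with the per-call execution supplied as a closure.

```rust
use futures::stream::{FuturesOrdered, StreamExt};
use std::future::Future;

// Hypothetical: one place that runs a batch and aggregates responses in
// order. WS and HTTP would differ only in the closure they pass in and in
// what they do with the final `Option<String>` (WS sends nothing for
// `None`; HTTP still has to ACK the request itself).
async fn process_batch<F, Fut>(entries: Vec<String>, execute: F) -> Option<String>
where
    F: Fn(String) -> Fut,
    Fut: Future<Output = Option<String>>, // `None` means the entry was a notification
{
    // FuturesOrdered keeps responses in the same order as the requests.
    let mut pending: FuturesOrdered<Fut> = entries.into_iter().map(execute).collect();

    let mut out = String::from("[");
    let mut wrote_any = false;
    while let Some(maybe_rp) = pending.next().await {
        if let Some(rp) = maybe_rp {
            if wrote_any {
                out.push(',');
            }
            out.push_str(&rp);
            wrote_any = true;
        }
    }

    if wrote_any {
        out.push(']');
        Some(out)
    } else {
        // All-notification batch: no response at all.
        None
    }
}
```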

niklasad1 (Member, Author) replied:

Yeah, the only thing that differs is the call type, which needs some refactoring to make this work.

Let's do it in another PR eventually :)

tx_log_from_str(&response.result, max_log_length);
logger.on_response(&response.result, request_start, TransportProtocol::WebSocket);
let _ = sink.send_raw(response.result);
if let Some(response) = response {
jsdw (Collaborator) commented:

Ok, so this basically catches the case when every request is a notification; gotcha!
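Put in terms of the snippet above (the ordering of the surrounding lines is a guess from the diff context, reusing the identifiers it shows): when every entry was a notification, response is None and nothing is written back over the WebSocket.

```rust
// `response` is `None` if the whole batch consisted of notifications.
if let Some(response) = response {
    tx_log_from_str(&response.result, max_log_length);
    logger.on_response(&response.result, request_start, TransportProtocol::WebSocket);
    let _ = sink.send_raw(response.result);
}
```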

jsdw (Collaborator) left a review:

Nice, looks great! Good to get notifications properly supported as well :)

niklasad1 merged commit e649f38 into master on Oct 26, 2022
niklasad1 deleted the na-batch-errors branch on October 26, 2022 16:28