
[Experiment] Reuse buffer on read [API-1602] #652

Open
puzpuzpuz opened this issue Oct 27, 2020 · 1 comment
puzpuzpuz commented Oct 27, 2020

Node.js v12.10.0 introduced a new onread option for TCP clients, which allows reusing the same Buffer for all socket reads. This feature should help improve the Node.js client's throughput for reads (the exact improvement depends on the scenario and has to be benchmarked). The only problem is the absence of this feature in the tls module, which we use for encrypted TCP connections, but we contributed it to the core (see nodejs/node#35753).

Corresponding PRD: https://hazelcast.atlassian.net/wiki/spaces/PM/pages/2691956737/Node.js+Client+Performance+Improvement+for+Reads

This issue describes the results of a PoC implementation for unencrypted sockets. The PoC may be found in this branch.

Benchmark results

The built-in benchmark with the get scenario was used for testing. The latest master (cd1f7d0) was used as the baseline.

Each measurement series was run against a fresh member with seq 1 15 | xargs -I{} sh -c "node benchmark/Benchmark.js get <params> >> compare-onread-poc.csv && sleep 1".

Environment: Node.js v12.19.0, OpenJDK 11.0.8, IMDG 4.0.3, uname -a: Linux apechkurov-laptop 5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux.

cat compare-onread-poc.csv | Rscript benchmark/compare.R
                                              confidence improvement accuracy (*)    (**)   (***)
 benchmark/Benchmark.js get t=1M s=8KB c=256         ***      9.97 %        ±3.40%  ±4.63%  ±6.25%
 benchmark/Benchmark.js get t=1M s=1KB c=256                 -0.97 %        ±6.88%  ±9.31% ±12.44%
 benchmark/Benchmark.js get t=1M s=1KB c=64                  -0.96 %       ±10.11% ±13.65% ±18.18%
 benchmark/Benchmark.js get t=1M s=128B c=128                -2.92 %        ±5.80%  ±7.83% ±10.41%

Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case there are 1 comparisons, you can thus
expect the following amount of false-positive results:
  0.05 false positives, when considering a   5% risk acceptance (*, **, ***),
  0.01 false positives, when considering a   1% risk acceptance (**, ***),
  0.00 false positives, when considering a 0.1% risk acceptance (***)

Plot for t=1M s=8KB c=256: (image: compare-onread-poc-8kb)

Summary

As expected, the buffer allocation rate starts playing an important role when dealing with larger payloads (see the result for 8KB values). In such scenarios, the onread optimization seems valuable and users may notice improved throughput. On the other hand, for smaller payloads the optimization makes no measurable difference, and a proper implementation still requires some extra steps (see TODOs in the PoC branch).

Another thing to mention is backwards compatibility. If we start relying on the onread option, we still need to support pre-12.10.0 versions of Node.js (I'm leaving the tls module aside for simplicity here, but it matters as well). As a result, the code base beyond the initial socket configuration would be unified and based on the assumption that the read buffer is reused, which may add a bit of overhead on Node.js versions that don't support onread (because of extra buffer copying done when handling edge cases).

Probably, the best approach would be to revisit this optimization in the future, once most active LTS versions of Node.js support onread in both the net and tls modules.

@github-actions

Internal Jira issue: API-1602

@github-actions github-actions bot changed the title [Experiment] Reuse buffer on read [Experiment] Reuse buffer on read [API-1602] Sep 29, 2022