
Show current benchmarking results #123

Open
jcready opened this issue Jun 24, 2021 · 9 comments
Labels
documentation Improvements or additions to documentation help wanted Extra attention is needed

Comments

@jcready
Contributor

jcready commented Jun 24, 2021

The manual currently provides a comparison of code size vs speed, but it only shows the resulting size of the generated code, so we don't know the runtime difference. There is already a benchmarks package which provides code to perform benchmarks, but it leaves something to be desired. Namely: the current results. :)

It would be helpful to show this information (especially the perf.ts results) somewhere. Ideally it could be included in the code size vs speed section of the manual.

P.S. A comparison against protobuf.js would be very nice, since that library tends to be the fastest out there at the moment.

P.P.S. I've already taken a crack at adding protobuf.js to the perf.ts benchmarks locally and it seems like protobuf.js can decode/encode the binary about twice as fast as protobuf-ts@2.0.0-alpha.27. Any ideas how that gap could be closed? Is protobuf.js taking shortcuts that aren't conformant to the proto spec? Are there any techniques that could be copied from protobuf.js?

Thank you again for this wonderful project!

@timostamm
Owner

Coincidentally, I just updated the code size benchmarks this morning, cleaning up the code as well.

It's easy to measure code size, not so much the performance. For example, google-protobuf deserializes into an intermediate state. So a simple roundtrip might show different results than a test where all fields are accessed through the getter / setter methods.

That being said, having performance benchmarks for protobuf.js would be great.
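To illustrate the measurement approach, here is a minimal sketch of an ops/s micro-benchmark loop. It uses a plain wall-clock loop with a warm-up phase; the actual perf.ts harness may be structured differently, and the JSON roundtrip is only a stand-in workload, not a protobuf payload:

```typescript
// Minimal ops/s micro-benchmark sketch (assumption: a simple timed loop,
// not the actual perf.ts harness).
function bench(name: string, fn: () => void, durationMs = 200): number {
  // Warm up so the JIT has a chance to optimize the hot path first.
  for (let i = 0; i < 10; i++) fn();
  const start = Date.now();
  let ops = 0;
  while (Date.now() - start < durationMs) {
    fn();
    ops++;
  }
  const opsPerSec = (ops / (Date.now() - start)) * 1000;
  console.log(`${name.padEnd(28)}: ${opsPerSec.toFixed(3)} ops/s`);
  return opsPerSec;
}

// Stand-in payload. Note that a full roundtrip (serialize + parse) can hide
// per-field access costs for libraries that defer work until getters run,
// which is exactly the google-protobuf caveat mentioned above.
const payload = {
  many: Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `item ${i}` })),
};
const result = bench("json roundtrip", () => {
  JSON.parse(JSON.stringify(payload));
});
```

A fairer comparison would additionally read every field of the decoded message inside the benchmarked function, so that lazily-deserializing libraries pay their full cost.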

jcready added a commit to jcready/protobuf-ts that referenced this issue Jun 24, 2021
Partially address timostamm#123

Results of the perf benchmark on my machine:

### read binary
google-protobuf             :     413.662 ops/s
ts-proto                    :   1,324.736 ops/s
protobuf-ts (speed)         :   1,461.452 ops/s
protobuf-ts (speed, bigint) :   1,475.889 ops/s
protobuf-ts (size)          :   1,250.677 ops/s
protobuf-ts (size, bigint)  :   1,255.167 ops/s
protobufjs                  :   1,732.049 ops/s
### write binary
google-protobuf             :     906.883 ops/s
ts-proto                    :   3,805.993 ops/s
protobuf-ts (speed)         :     430.632 ops/s
protobuf-ts (speed, bigint) :     448.063 ops/s
protobuf-ts (size)          :     378.682 ops/s
protobuf-ts (size, bigint)  :     392.511 ops/s
protobufjs                  :   1,539.768 ops/s
### from partial
ts-proto                    :   4,503.332 ops/s
protobuf-ts (speed)         :   1,568.577 ops/s
protobuf-ts (size)          :   1,555.881 ops/s
### read json
ts-proto                    :   3,719.366 ops/s
protobuf-ts (speed)         :     889.613 ops/s
protobuf-ts (size)          :     890.034 ops/s
protobufjs                  :   4,120.232 ops/s
### write json
ts-proto                    :  13,200.842 ops/s
protobuf-ts (speed)         :   1,865.668 ops/s
protobuf-ts (size)          :   1,862.537 ops/s
protobufjs                  :   4,199.372 ops/s
### read json string
ts-proto                    :     957.057 ops/s
protobuf-ts (speed)         :     416.284 ops/s
protobuf-ts (size)          :     421.48  ops/s
protobufjs                  :     910.256 ops/s
### write json string
ts-proto                    :   1,000.572 ops/s
protobuf-ts (speed)         :     943.338 ops/s
protobuf-ts (size)          :     949.784 ops/s
protobufjs                  :   1,446.891 ops/s
@timostamm
Owner

Thanks for the PR! The makefile bug from this comment is fixed in commit 326299f, making sure the performance benchmarks run with the same payload every time. I'm not saying benchmarks should only run with the large payload; this was just fixing the obvious makefile bug.

All testees are in the same ballpark with the large payload size:

### read binary
google-protobuf             : 11.26 ops/s
ts-proto                    : 25.34 ops/s
protobuf-ts (speed)         : 27.98 ops/s
protobuf-ts (speed, bigint) : 25.9 ops/s
protobuf-ts (size)          : 24.43 ops/s
protobuf-ts (size, bigint)  : 23.38 ops/s
### write binary
google-protobuf             : 16.15 ops/s
ts-proto                    : 13.11 ops/s
protobuf-ts (speed)         : 12.12 ops/s
protobuf-ts (speed, bigint) : 11.8 ops/s
protobuf-ts (size)          : 10.1 ops/s
protobuf-ts (size, bigint)  : 9.76 ops/s
### from partial
ts-proto                    : 25.39 ops/s
protobuf-ts (speed)         : 22.25 ops/s
protobuf-ts (size)          : 21.27 ops/s
### read json
ts-proto                    : 41.19 ops/s
protobuf-ts (speed)         : 16.16 ops/s
protobuf-ts (size)          : 16.55 ops/s
### write json
ts-proto                    : 138.19 ops/s
protobuf-ts (speed)         : 23.38 ops/s
protobuf-ts (size)          : 23.19 ops/s
### read json string
ts-proto                    : 11.74 ops/s
protobuf-ts (speed)         : 7.78 ops/s
protobuf-ts (size)          : 7.77 ops/s
### write json string
ts-proto                    : 16.13 ops/s
protobuf-ts (speed)         : 16.15 ops/s
protobuf-ts (size)          : 16.82 ops/s

I think the benchmarks should run on several payload sizes. There are some factor-10 gaps with the smaller payload you were measuring that are worth closer investigation.

@timostamm
Owner

For reference:

Large payload: 1.2MiB - FileDescriptorSet for packages/test-fixtures/**/*.proto
Small payload: 49KiB - FileDescriptorSet just for google/protobuf/descriptor.proto

@fenos
Contributor

fenos commented Jul 12, 2021

Great results!

One thing that caught my eye is the massive difference in JSON writing between ts-proto and protobuf-ts. Do you know why protobuf-ts is so much slower?

@timostamm
Owner

ts-proto uses protobuf.js for JSON, which doesn't fully implement the official JSON format, see protobufjs/protobuf.js#1304. I am sure they are skipping a few things.

But look closely at the numbers. They fall into perspective once you realize that they measure turning the internal representation into a JSON object. What you need in practice is a JSON string:

### read json string
ts-proto                    : 11.74 ops/s
protobuf-ts (size)          : 7.77 ops/s
### write json string
ts-proto                    : 16.13 ops/s
protobuf-ts (size)          : 16.82 ops/s

@timostamm timostamm added the documentation Improvements or additions to documentation label Jul 17, 2021
@timostamm
Owner

I think the manual deserves a performance comparison table at the end of the section Code size vs speed. It should just show numbers for binary I/O and JSON (string) I/O, along with the generator version and parameters, preferably in one simple table. It should be mentioned how and where this is measured, and with what payload size. The table should be generated by a script, similar to the code size report.
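A script like that could be sketched as follows. The `BenchResult` shape and the sample numbers are assumptions for illustration; the real script would consume whatever perf.ts actually emits:

```typescript
// Hypothetical sketch: turn raw ops/s results into a markdown table for
// the manual, similar in spirit to the code size report script.
interface BenchResult {
  section: string;  // e.g. "read binary"
  testee: string;   // e.g. "protobuf-ts (speed)"
  opsPerSec: number;
}

function toMarkdownTable(results: BenchResult[]): string {
  // One column per benchmark section, one row per testee.
  const sections = Array.from(new Set(results.map((r) => r.section)));
  const testees = Array.from(new Set(results.map((r) => r.testee)));
  const header = `| testee | ${sections.join(" | ")} |`;
  const divider = `|---|${sections.map(() => "---|").join("")}`;
  const rows = testees.map((t) => {
    const cells = sections.map((s) => {
      const r = results.find((x) => x.section === s && x.testee === t);
      return r ? r.opsPerSec.toFixed(3) : "-";
    });
    return `| ${t} | ${cells.join(" | ")} |`;
  });
  return [header, divider, ...rows].join("\n");
}

// Sample input only; real numbers would come from the benchmark run.
const table = toMarkdownTable([
  { section: "read binary", testee: "protobuf-ts (speed)", opsPerSec: 31.584 },
  { section: "read binary", testee: "protobufjs", opsPerSec: 32.129 },
  { section: "write binary", testee: "protobuf-ts (speed)", opsPerSec: 12.636 },
]);
console.log(table);
```

Missing section/testee combinations render as "-", so the table stays well-formed even when a testee is skipped for some benchmarks.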

These are the results including protobuf.js:

### read binary
google-protobuf             :       9.938 ops/s
ts-proto                    :      23.604 ops/s
protobuf-ts (speed)         :      23.742 ops/s
protobuf-ts (speed, bigint) :      23.066 ops/s
protobuf-ts (size)          :      24.891 ops/s
protobuf-ts (size, bigint)  :      23.829 ops/s
protobufjs                  :      28.464 ops/s
### write binary
google-protobuf             :      15.118 ops/s
ts-proto                    :      13.626 ops/s
protobuf-ts (speed)         :      12.078 ops/s
protobuf-ts (speed, bigint) :      12.036 ops/s
protobuf-ts (size)          :      10.554 ops/s
protobuf-ts (size, bigint)  :      10.672 ops/s
protobufjs                  :      12.305 ops/s
### from partial
ts-proto                    :      40.744 ops/s
protobuf-ts (speed)         :      26.53  ops/s
protobuf-ts (size)          :      27.213 ops/s
### read json string
ts-proto                    :      14.237 ops/s
protobuf-ts (speed)         :       8.307 ops/s
protobuf-ts (size)          :       8.469 ops/s
protobufjs                  :      15.367 ops/s
### write json string
ts-proto                    :      18.328 ops/s
protobuf-ts (speed)         :      18.403 ops/s
protobuf-ts (size)          :      18.34  ops/s
protobufjs                  :      23.837 ops/s
### read json object
ts-proto                    :      34.747 ops/s
protobuf-ts (speed)         :      17.509 ops/s
protobuf-ts (size)          :      17.0   ops/s
protobufjs                  :      46.793 ops/s
### write json object
ts-proto                    :     182.47  ops/s
protobuf-ts (speed)         :      30.375 ops/s
protobuf-ts (size)          :      30.049 ops/s
protobufjs                  :      47.009 ops/s

@timostamm timostamm added the help wanted Extra attention is needed label Jul 18, 2021
@timostamm
Owner

Looks like there has been a regression in v2.0.0-alpha.9: we stopped generating create for speed-optimized code. Thanks to @odashevskii-plaid, this is fixed. It bumps up the performance of the read and create methods a bit:

### read binary
google-protobuf             :      11.525 ops/s
ts-proto                    :      26.28  ops/s
protobuf-ts (speed)         :      31.584 ops/s
protobuf-ts (speed, bigint) :      33.79  ops/s
protobuf-ts (size)          :      24.935 ops/s
protobuf-ts (size, bigint)  :      25.073 ops/s
protobufjs                  :      32.129 ops/s
### write binary
google-protobuf             :      16.832 ops/s
ts-proto                    :      14.168 ops/s
protobuf-ts (speed)         :      12.636 ops/s
protobuf-ts (speed, bigint) :      12.769 ops/s
protobuf-ts (size)          :      10.969 ops/s
protobuf-ts (size, bigint)  :      11.045 ops/s
protobufjs                  :      12.902 ops/s
### from partial
ts-proto                    :      40.707 ops/s
protobuf-ts (speed)         :      29.767 ops/s
protobuf-ts (size)          :      27.98  ops/s
### read json string
ts-proto                    :      14.963 ops/s
protobuf-ts (speed)         :       8.485 ops/s
protobuf-ts (size)          :       8.272 ops/s
protobufjs                  :      15.59  ops/s
### write json string
ts-proto                    :      18.633 ops/s
protobuf-ts (speed)         :      19.347 ops/s
protobuf-ts (size)          :      18.997 ops/s
protobufjs                  :      27.291 ops/s
### read json object
ts-proto                    :      37.691 ops/s
protobuf-ts (speed)         :      18.774 ops/s
protobuf-ts (size)          :      16.267 ops/s
protobufjs                  :      44.944 ops/s
### write json object
ts-proto                    :     200.92  ops/s
protobuf-ts (speed)         :      31.027 ops/s
protobuf-ts (size)          :      32.502 ops/s
protobufjs                  :      45.461 ops/s

@jcready
Contributor Author

jcready commented Aug 31, 2021

Was this discovery made in some off-github discussion? I'm curious more than anything, since I can't find any issue or PR mentioning this. Glad it was spotted though!

Completely unrelated, but what on Earth is going on with ts-proto's "write json object" benchmark? Being nearly an order of magnitude faster than the underlying library it uses (protobufjs) seems odd.

@timostamm
Owner

Was this discovery made in some off-github discussion?

See #147 (comment) and #147 (comment)

Completely unrelated, but what on Earth is going on with ts-proto's "write json object" benchmark?

It's impressive, right? I don't think ts-proto is sharing any code with protobufjs for JSON.
