
feat: Support for benchmarks #67

Open
joestringer opened this issue Jun 3, 2022 · 6 comments

@joestringer
Contributor

joestringer commented Jun 3, 2022

When running a benchmark through tparse (v0.10.3), it appears that the benchmark information is omitted from the table. It would be neat to be able to also display benchmark results in a table format.

Today:

goos: linux
goarch: amd64
pkg: github.com/cilium/cilium/pkg/labels
cpu: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
BenchmarkParseLabel
BenchmarkParseLabel-8                    4532698               260.2 ns/op
BenchmarkLabels_SortedList
BenchmarkLabels_SortedList-8             1354594               876.3 ns/op           664 B/op         13 allocs/op
BenchmarkLabel_FormatForKVStore
BenchmarkLabel_FormatForKVStore-8       24836546                42.30 ns/op           48 B/op          1 allocs/op
BenchmarkLabel_String
BenchmarkLabel_String-8                 25113582                49.72 ns/op           48 B/op          1 allocs/op
PASS
ok      github.com/cilium/cilium/pkg/labels     5.935s
┌───────────────────────────────────────────────────────────────────────────────────────┐
│  STATUS │ ELAPSED │               PACKAGE               │ COVER │ PASS │ FAIL │ SKIP  │
│─────────┼─────────┼─────────────────────────────────────┼───────┼──────┼──────┼───────│
│  PASS   │ 5.94s   │ github.com/cilium/cilium/pkg/labels │ 0.0%  │    0 │    0 │    0  │
└───────────────────────────────────────────────────────────────────────────────────────┘

Desired:

┌────────────────────────────────────────────────────────────────────────────────────────────────────┐
│  STATUS │ ELAPSED │ TEST                         │ CPU         │ MEMORY   │ ALLOCATIONS  │ PACKAGE │
│─────────┼─────────┼──────────────────────────────┼─────────────┼──────────┼──────────────┼─────────│
│  PASS   │    0.00 │ BenchmarkParseLabel-8        │ 260.2 ns/op │          │              │ labels  │
│  PASS   │    0.00 │ BenchmarkLabels_SortedList-8 │ 876.3 ns/op │ 664 B/op │ 13 allocs/op │ labels  │
│  ...                                                                                               │
└────────────────────────────────────────────────────────────────────────────────────────────────────┘
@mfridman
Owner

mfridman commented Jun 3, 2022

Sounds reasonable. Thinking out loud, if we detect that at least 1 package has benchmark results then we expand the table to include (as you suggested):

  • cpu
  • memory
  • allocations

Am I missing any other fields?

This does mean a test from another package that doesn't have benchmarks would be blanked out, correct?

@mfridman
Owner

mfridman commented Jun 3, 2022

One concern I have is the length of each row. Example

│  PASS   │    0.00 │ TestFoo/TestBar/TestBaz/AndTestFinalOutcome/test_02.json    │ tests    │

There is a -smallscreen flag to attempt to mitigate narrow outputs, especially in CI.

But with 3 added columns this may not look so pretty, especially if there is wrapping for longer lines. Not sure what the right solution here is, maybe a different format. Orrr just let it run long.

@joestringer
Contributor Author

> Am I missing any other fields?

The only other one I see on the raw text output is the actual number of benchmarked operations, but I don't think that's necessarily a useful number by itself.

> This does mean a test from another package that doesn't have benchmarks would be blanked out, correct?

I'm not sure if you can run benchmarks and regular unit tests at the same time. When I do go test -bench . ./pkg/foo, it seems to only pick up the benchmarks and run those.

> But with 3 added columns this may not look so pretty, especially if there is wrapping for longer lines. Not sure what the right solution here is, maybe a different format. Orrr just let it run long.

Well, a benchmark doesn't pass or fail as far as I understand, so that one can be skipped. The package may also be redundant if the specific test name is listed. So that would leave elapsed, then probably the cpu/memory/allocations options.

It seems like sometimes, the benchmark will only measure CPU (probably in cases where the code under test doesn't perform any memory allocations at all). One option for -smallscreen would be to only print the ns/op rate. I do think that all three benchmark metrics are useful though. 🤔
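(On that point: B/op and allocs/op only show up when the run passes `-benchmem` or the benchmark itself calls `b.ReportAllocs()`, so CPU-only lines are expected otherwise. A quick standalone sketch using `testing.Benchmark`:)

```go
package main

import (
	"fmt"
	"testing"
)

var sink []byte // package-level sink so the allocation is not optimized away

func main() {
	// testing.Benchmark runs a benchmark function outside `go test`.
	// Allocation stats are collected only because ReportAllocs is called;
	// without it (or -benchmem on the command line), the result would
	// report ns/op alone.
	r := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			sink = make([]byte, 48)
		}
	})
	fmt.Printf("%d allocs/op, %d B/op\n", r.AllocsPerOp(), r.AllocedBytesPerOp())
}
```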

@mfridman
Owner

mfridman commented Jun 4, 2022

Took a first pass at this, just need to figure out how to display it nicely so it makes sense. I'm thinking this might be a net new table.

Sample output from running benchmarks in github.com/cilium/cilium

go test ./pkg/labels -bench=. -run= -benchtime=1s -v -cover
┌───────────────────────────────────────────────────────────────────────────────────────────┐
│                TEST                │     CPU      │   MEM    │    ALLOC     │  PACKAGE    │
│────────────────────────────────────┼──────────────┼──────────┼──────────────┼─────────────│
│  BenchmarkParseLabel-8             │ 424.70 ns/op │ 0 B/op   │ 0 allocs/op  │ pkg/labels  │
│  BenchmarkLabels_SortedList-8      │ 613.10 ns/op │ 664 B/op │ 13 allocs/op │ pkg/labels  │
│  BenchmarkLabel_FormatForKVStore-8 │ 32.22 ns/op  │ 48 B/op  │ 1 allocs/op  │ pkg/labels  │
│  BenchmarkLabel_String-8           │ 37.51 ns/op  │ 48 B/op  │ 1 allocs/op  │ pkg/labels  │
└───────────────────────────────────────────────────────────────────────────────────────────┘
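A rough sketch of how the CPU / MEM / ALLOC columns above could be split out of a raw benchmark line (hypothetical helper, not the actual tparse implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// benchResult is a hypothetical container for one benchmark output line.
type benchResult struct {
	Name                             string
	NsPerOp, BytesPerOp, AllocsPerOp string
}

func parseBenchLine(line string) (benchResult, bool) {
	fields := strings.Fields(line)
	// Expected shape: Name-N  iterations  X ns/op  [Y B/op  Z allocs/op]
	if len(fields) < 4 || !strings.HasPrefix(fields[0], "Benchmark") {
		return benchResult{}, false
	}
	r := benchResult{Name: fields[0]}
	// After the name and iteration count, values arrive in (number, unit) pairs.
	for i := 2; i+1 < len(fields); i += 2 {
		switch fields[i+1] {
		case "ns/op":
			r.NsPerOp = fields[i] + " ns/op"
		case "B/op":
			r.BytesPerOp = fields[i] + " B/op"
		case "allocs/op":
			r.AllocsPerOp = fields[i] + " allocs/op"
		}
	}
	return r, true
}

func main() {
	r, _ := parseBenchLine("BenchmarkLabels_SortedList-8  1354594  876.3 ns/op  664 B/op  13 allocs/op")
	fmt.Println(r.Name, "|", r.NsPerOp, "|", r.BytesPerOp, "|", r.AllocsPerOp)
}
```

The memory fields are left empty rather than zeroed when absent, matching the blank MEM/ALLOC cells mocked up earlier in the thread.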

@embano1

embano1 commented Nov 3, 2022

Adding myself to the list here as we would love to have this too :)
timbray/quamina#139

@mfridman mfridman self-assigned this Nov 30, 2022
@mfridman
Owner

I should have some cycles in December, so I'll try to bang this out.

@mfridman mfridman pinned this issue Nov 30, 2022
@mfridman mfridman added this to the Next up development milestone Feb 3, 2023