
Computing statistics on generated test values #30

Open
alfert opened this issue Nov 7, 2021 · 5 comments · May be fixed by #31
Labels: enhancement (New feature or request)

Comments

@alfert commented Nov 7, 2021

In Hypothesis and other QuickCheck-like implementations, it is possible to compute statistics on generated values, usually to validate that test data generation works as expected or to detect whether it is skewed, as described here: https://hypothesis.readthedocs.io/en/latest/details.html#test-statistics

While it is easy to implement something like Hypothesis' event() function for rapid, generating reports is not. There is no means of decorating a property (that I am aware of), and a PrintStats() call at the end of the property body would run on every iteration (i.e. 100 times for a single property). What seems to work is the following:

func TestStackRapid(t *testing.T) {
	defer stats.PrintStats(t) // runs once, after Check has finished
	rapid.Check(t, func(t *rapid.T) {
		// ... generate values and exercise the system under test ...
		stats.Event("some event")
		// ...
	})
}

Is that a hack, or an intended way of decorating a property? And are you interested in an implementation of such statistics?

@flyingmutant (Owner) commented

To be honest, I've never actually used such reporting functionality, so I do not fully understand this use case. Your code seems fine to me -- although the defer might not be necessary:

func TestStackRapid(t *testing.T) {
	rapid.Check(t, func(t *rapid.T) {
		// ... generate values and exercise the system under test ...
		stats.Event("some event")
		// ...
	})
	stats.PrintStats(t)
}

As for implementing reporting inside rapid itself, right now I am not a fan: I think this is quite a narrow use case, because we should not show such things by default (I believe tests should be silent by default), and when it is hidden behind an option, not many people will benefit from it.
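(For illustration only -- an editor's sketch of the opt-in idea, not anything rapid implements: the report could be gated on the standard library's testing.Verbose(), so tests stay silent unless go test -v is given.)

// Hypothetical sketch of an opt-in report: nothing is printed unless
// `go test -v` is given. `collected` stands in for whatever storage the
// stats helper uses; it is an assumption, not rapid's API.
var collected = map[string]int{}

func PrintStats(t *testing.T) {
	if !testing.Verbose() {
		return // silent by default
	}
	for event, count := range collected {
		t.Logf("%s: %d", event, count)
	}
}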

@alfert (Author) commented Nov 8, 2021

Thanks, printing the stats after Check returns is indeed a good idea.

The use case for such reporting becomes important when you define your own generators. How do you ensure that you generate good test data? You might measure test coverage to find out whether your relevant code paths are exercised. But what do you do if they are not? In that case you need to understand what your generator is producing -- not just a few samples, but in your actual tests. In such situations a reporting tool comes in quite handy; it has been a standard feature of QuickCheck since its inception in Haskell, as well as of its commercial Erlang version. Other implementations provide the same functionality too, e.g. Hypothesis (Python), PropEr (Erlang, Elixir), and ScalaCheck (Scala).

So, reporting is a kind of debugging tool for test developers.
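(As a concrete illustration of this debugging use -- a sketch, not code from the thread: stats.Event and stats.PrintStats are the hypothetical helpers discussed above, and Draw uses rapid's current generic API.)

import (
	"testing"

	"pgregory.net/rapid"
)

// Classify each generated slice by length, so the printed report reveals
// whether the generator ever produces empty or large inputs.
func TestSliceGen(t *testing.T) {
	defer stats.PrintStats(t)
	rapid.Check(t, func(t *rapid.T) {
		xs := rapid.SliceOf(rapid.Int()).Draw(t, "xs")
		switch {
		case len(xs) == 0:
			stats.Event("empty slice")
		case len(xs) < 10:
			stats.Event("short slice")
		default:
			stats.Event("long slice")
		}
		// ... the actual property being checked on xs goes here ...
	})
}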

@flyingmutant added the enhancement label on Nov 9, 2021
@flyingmutant (Owner) commented

Thanks for the explanation! Do you have in mind how it should look in rapid? Something close to Hypothesis?

@alfert (Author) commented Nov 9, 2021

Since we are close to Hypothesis and do not have the FP restrictions (and abilities), I would suggest modeling it generally after Hypothesis. In my simple implementation, I added an Event(t *rapid.T, event string) function to collect events for a given test; these can then be printed by the PrintStats function shown above. I use t.Logf for output, so go test -v will show the stats.

I can provide a PR for this.
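(For illustration, a minimal sketch of what such a package could look like. The package name, global map, and locking are assumptions, and the per-test keying via *rapid.T described above is omitted for brevity.)

package stats

import (
	"sort"
	"sync"
	"testing"
)

var (
	mu     sync.Mutex
	events = map[string]int{}
)

// Event records one occurrence of the named event.
func Event(event string) {
	mu.Lock()
	defer mu.Unlock()
	events[event]++
}

// PrintStats logs each event with its count and share of the total;
// t.Logf output is shown under `go test -v`. The counters are reset so
// the next test starts fresh.
func PrintStats(t *testing.T) {
	mu.Lock()
	defer mu.Unlock()
	total := 0
	for _, n := range events {
		total += n
	}
	names := make([]string, 0, len(events))
	for name := range events {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		n := events[name]
		t.Logf("%-30s %6d (%5.1f%%)", name, n, 100*float64(n)/float64(total))
	}
	events = map[string]int{}
}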

@flyingmutant (Owner) commented

Sounds interesting, let's see the PR (can't promise fast review right now, unfortunately).
