
More plugins - what do you want/need? #6

Open
1 task
PragTob opened this issue Jun 18, 2016 · 8 comments
Comments

@PragTob
Member

PragTob commented Jun 18, 2016

Ideas for further plugins: open a new issue or share your ideas here :)

  • something that generates a graph right away without other manual steps
@hauleth

hauleth commented Jan 21, 2017

A diff plugin that would allow us to create a series of CSV/JSON files and display performance changes over time.
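The core of such a diff could start very small. Here is a minimal sketch; the `RunDiff` module and the input shape (scenario name mapped to average run time in microseconds) are made up for illustration and are not Benchee's actual serialization format:

```elixir
defmodule RunDiff do
  # Hypothetical sketch: percent change from `old` to `new` per scenario,
  # where each run is a map of scenario name => average run time (μs).
  # Positive result means the new run was slower.
  def diff(old, new) do
    for {name, new_avg} <- new, old_avg = old[name], into: %{} do
      {name, Float.round((new_avg - old_avg) / old_avg * 100, 2)}
    end
  end
end
```

Scenarios that only exist in one of the two runs are skipped, since `old[name]` returns `nil` and the comprehension filters it out. For example, `RunDiff.diff(%{"flat_map" => 120.0}, %{"flat_map" => 100.0})` reports a 16.67% speedup as `%{"flat_map" => -16.67}`.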

@PragTob
Member Author

PragTob commented Jan 21, 2017

Thanks for your input @hauleth!

What do you mean by "display change over time"? Just some console output ranking the runs from fastest to slowest, or do you want a graph?

@hauleth

hauleth commented Jan 24, 2017

Let's imagine you are creating a library and want to compare the speed of different versions. It would be great to have some way to graph changes in the same functionality over time.

I am trying to write something like rails/rails-perftest for Phoenix/Plug, and it would be great to have the possibility to graph performance changes over time.

@PragTob
Member Author

PragTob commented Jan 24, 2017

It's a nice idea and I know other people wanna work on similar things. It's definitely possible. My only caveat is that, to be accurate, they'd need to be done on the same system, with the same load, the same dependencies, and approximately the same data in the DB etc.

Might be more accurate to check out the different revisions and then benchmark them in one go on the same system... that'd also need some serialization and graphing from there, so the work for a benchee plugin is the same 🎉

Thanks for telling me the use case, it makes developing something to aid your cause much easier. Although, no promises as to when :)

@cgiffard

Unless I'm missing an obvious plugin which already exists — it'd be really cool to have an ExUnit-style set of macros that would allow writing multiple benchmark scenarios in a style similar to our tests.
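No plugin like that appears in this thread; here is a rough sketch of what such macros might look like. The `BencheeDSL` module, the `bench` macro, and the `scenarios/0` function are all hypothetical names, not an existing API:

```elixir
defmodule BencheeDSL do
  # Hypothetical ExUnit-style DSL sketch: `bench "name" do ... end` blocks
  # are collected at compile time and exposed via `scenarios/0` as a map of
  # name => zero-arity function, the shape Benchee.run/2 accepts.
  defmacro __using__(_opts) do
    quote do
      import BencheeDSL
      Module.register_attribute(__MODULE__, :benches, accumulate: true)
      @before_compile BencheeDSL
    end
  end

  defmacro bench(name, do: body) do
    # Generate a named function per bench block; anonymous functions
    # cannot be stored directly in module attributes.
    fun = String.to_atom("__bench_" <> name)

    quote do
      @benches {unquote(name), unquote(fun)}
      def unquote(fun)(), do: unquote(body)
    end
  end

  defmacro __before_compile__(_env) do
    quote do
      def scenarios do
        for {name, fun} <- @benches, into: %{} do
          {name, fn -> apply(__MODULE__, fun, []) end}
        end
      end
    end
  end
end

defmodule MyBench do
  use BencheeDSL

  bench "sum", do: Enum.sum(1..100)
  bench "doubles", do: Enum.map(1..3, &(&1 * 2))
end
```

With this sketch, `MyBench.scenarios()` could then be handed straight to something like `Benchee.run(MyBench.scenarios())`.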

@joladev

joladev commented Mar 19, 2019

Rust's criterion-rs will automatically show you the diff between the current and the last run, making it easy to check whether code changes provide performance improvements.

Maybe I'm missing something, but with benchee if I wanted to compare two runs I'd either have to jot the previous result down, or write some custom code to store and compare. Or is there a better way already?

@NickNeck
Contributor

NickNeck commented Mar 19, 2019

I have written a little bit of code to compare benchmarks between branches. For now, it is just a dirty alpha version.

```elixir
defmodule GitLoadSave do
  @file_name "benchee.run"

  # Returns Benchee options: always save the current run under the current
  # git branch's name, and load the baseline branch's saved run when we are
  # not on the baseline branch itself.
  def config(branch \\ "master") do
    case load(branch) do
      nil -> [save: save()]
      load -> [load: load, save: save()]
    end
  end

  defp load(branch) do
    # Nothing to load while we are on the baseline branch itself.
    if branch == branch() do
      nil
    else
      System.tmp_dir!()
      |> Path.join(app())
      |> Path.join(branch)
      |> Path.join(@file_name)
    end
  end

  defp save do
    tag = branch()

    path =
      System.tmp_dir!()
      |> Path.join(app())
      |> Path.join(tag)
      |> Path.join(@file_name)

    [path: path, tag: tag]
  end

  defp app, do: to_string(Mix.Project.config()[:app])

  # Current git branch name, e.g. "master" or "my-feature".
  defp branch,
    do:
      "git"
      |> System.cmd(["rev-parse", "--abbrev-ref", "HEAD"])
      |> trim()

  defp trim({str, 0}), do: String.trim(str)
end
```

This can be added to the Benchee config via `[formatters: ...] ++ GitLoadSave.config()`. After that, you can run the benchmarks on master, then on a different branch, and get a comparison against master.
@yuhama with this approach you can compare branches without changing your benchee code.

@PragTob
Member Author

PragTob commented Mar 19, 2019

@yuhama there is Saving & Loading - you can save the previous results, tag them, and load them back into your benchmarking suite, where they will be part of the comparison. We don't show explicit differences beyond that - I'd be happy to get input on what to do there and how. Criterion is on my radar, though, and getting all their statistical stuff in is something I want to do, but I might need to brush up my statistics for it :) Does this help? :)
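That Saving & Loading workflow, sketched with Benchee's documented `save:` and `load:` options (the file path, tag, and benchmarked function here are made up for illustration):

```elixir
# First run, e.g. on master: save the results to a file under a tag.
Benchee.run(
  %{"flat_map" => fn -> Enum.flat_map(1..100, &[&1, &1]) end},
  save: [path: "runs/master.benchee", tag: "master"]
)

# Later run, e.g. on a feature branch: load the saved results so they
# appear in the comparison next to the fresh measurements.
Benchee.run(
  %{"flat_map" => fn -> Enum.flat_map(1..100, &[&1, &1]) end},
  load: "runs/master.benchee"
)
```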

@NickNeck thanks for sharing!
