Add a benchmark test for MachineComplexityEvaluator to detect performance regressions in machine complexity evaluation.
@[baldawar]
I don't think it's worth blocking on, but it's probably worth adding a benchmark test around this class to ensure the evaluator doesn't become grossly inefficient after refactoring. For now, this could be a GitHub issue that we address later.
@[jonessha]
This is a good idea. Note that we partially have this through testEvaluateBeyondMaxComplexity in MachineComplexityEvaluatorTest, which makes sure that we cap evaluation at max complexity; if we don't, the test will more or less spin forever. So that provides some protection. Still, a dedicated benchmark test is a good idea: some consumers of this library might run with a very high max complexity and be sensitive to latency regressions.
Discussion from #18
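A dedicated benchmark could follow the usual warmup-then-measure pattern (JMH would be the more robust choice for a real test). The sketch below is a minimal hand-rolled harness; the workload is a hypothetical placeholder, since the exact MachineComplexityEvaluator API isn't shown in this discussion. In the real benchmark, the workload would call the evaluator on a machine built near the configured max-complexity cap.

```java
import java.util.Arrays;

public class EvaluatorBenchmark {

    // Time one invocation of the workload in nanoseconds.
    static long timeOnce(Runnable workload) {
        long start = System.nanoTime();
        workload.run();
        return System.nanoTime() - start;
    }

    // Median of a sample of timings; more robust against GC/JIT outliers
    // than the mean.
    static long median(long[] samples) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        return sorted[sorted.length / 2];
    }

    // Run warmup iterations (to let the JIT settle), then measure.
    static long benchmark(Runnable workload, int warmup, int measured) {
        for (int i = 0; i < warmup; i++) {
            workload.run();
        }
        long[] samples = new long[measured];
        for (int i = 0; i < measured; i++) {
            samples[i] = timeOnce(workload);
        }
        return median(samples);
    }

    public static void main(String[] args) {
        // Placeholder workload: in the real test this would evaluate a
        // pathological machine with a high max-complexity setting.
        Runnable workload = () -> {
            long acc = 0;
            for (int i = 0; i < 100_000; i++) {
                acc += (long) i * i;
            }
            if (acc == 42) {
                System.out.println(); // prevent the JIT from eliding the loop
            }
        };
        long medianNanos = benchmark(workload, 50, 100);
        System.out.println("median latency (ns): " + medianNanos);
        // A CI regression gate could then assert the median stays under a
        // budget, e.g. assertTrue(medianNanos < SOME_BUDGET_NANOS).
    }
}
```

A simple threshold assertion like this is noisy on shared CI hosts; comparing against a baseline run on the same machine, or tracking the trend over time, tends to be more reliable than a hard-coded budget.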