
Visible benchmark regression between 08/05-09/05 #9919

Closed
hubertp opened this issue May 10, 2024 · 3 comments
Labels
--regression Important: regression -libs Libraries: New libraries to be implemented


@hubertp
Contributor

hubertp commented May 10, 2024

Most visible in the following chart:
(screenshot: Screenshot from 2024-05-10 17-22-11)
A similar bump is seen in other stdlib benchmarks.

@hubertp hubertp added -libs Libraries: New libraries to be implemented --regression Important: regression labels May 10, 2024
@GregoryTravis
Contributor

I reproduced a large benchmark regression in this test (org_enso_benchmarks_generated_Collections_vector_float_fold). This unusual test uses List and does all its operations through Meta:

sum_list_meta list =
    nil_cons = Meta.meta List.Nil . constructor
    folder acc list =
        meta_list = Meta.meta list
        if meta_list.constructor == nil_cons then acc else
            fs = meta_list.fields
            @Tail_Call folder (acc + fs.at 0) (fs.at 1)
    res = folder 0 list
    res
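For readers less familiar with Enso's Meta reflection, the loop's logic can be sketched in Python: the cons list is modeled as nested pairs, and the fold repeatedly checks the cell's constructor and destructures its fields, just as the Enso code does. This is an illustrative analogue only, not the benchmark itself:

```python
# Illustrative Python analogue of sum_list_meta. A cons list is either
# None (List.Nil) or a (head, tail) pair (List.Cons); the loop mirrors
# the Meta-based Enso fold.
def sum_list_meta(lst):
    acc = 0
    while lst is not None:   # Enso: meta_list.constructor == nil_cons
        head, tail = lst     # Enso: fs = meta_list.fields
        acc += head          # Enso: acc + fs.at 0
        lst = tail           # Enso: @Tail_Call folder ... (fs.at 1)
    return acc

# Build the cons list (1, (2, (3, None))) and fold it.
xs = None
for x in reversed([1, 2, 3]):
    xs = (x, xs)
assert sum_list_meta(xs) == 6
```

The Enso version pays for a `Meta.meta` reflection call and a field lookup on every iteration, which is why this benchmark is unusually sensitive to how well those operations are optimized.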

Between f1ddf1b and 720d32c, this test got 25x slower, and the slowdown is consistently reproducible. The smallest change that reproduces it is shown below: switching from Vector.new_builder to the new Vector.build, in both Bench.build and Bench_Builder.group.

This is strange because it seems like it should only affect the construction of the benchmark groups, not the benchmarks themselves. Also strange is the fact that reverting both of the changes below restores the old speed, while reverting only one of these does not change the speed at all.

diff --git a/distribution/lib/Standard/Test/0.0.0-dev/src/Bench.enso b/distribution/lib/Standard/Test/0.0.0-dev/src/Bench.enso
index 42f24a8701..08994596f6 100644
--- a/distribution/lib/Standard/Test/0.0.0-dev/src/Bench.enso
+++ b/distribution/lib/Standard/Test/0.0.0-dev/src/Bench.enso
@@ -55,9 +55,9 @@ type Bench_Builder
     group : Text -> Bench_Options -> (Group_Builder -> Any) -> Any
     group self (name:Text) (configuration:Bench_Options) fn =
         validate_name name
-        b = Vector.new_builder
-        fn (Group_Builder.Impl b)
-        self.builder.append <| Bench.Group name configuration b.to_vector
+        group = Vector.build b->
+            fn (Group_Builder.Impl b)
+        self.builder.append <| Bench.Group name configuration group

 ## Builder to create a group of benchmarks.
 type Group_Builder
@@ -87,9 +87,8 @@ type Bench
     ## Construct a Bench object.
     build : (Bench_Builder -> Any) -> Bench
     build fn =
-        b = Vector.new_builder
-        fn (Bench_Builder.Impl b)
-        groups_vec = b.to_vector
+        groups_vec = Vector.build b->
+            fn (Bench_Builder.Impl b)
         Bench.All groups_vec . validate

@GregoryTravis
Contributor

I examined another, more traditional test (org_enso_benchmarks_generated_Column_Arithmetic_1000000_Multiply_Overflowing) and could not reproduce any difference between the same two revisions (for May 7 and 10).

At this point I also noticed that while many benchmarks show a regression around May 9, most of these regressions have reverted in the most recent sample, like this:

(screenshot: Screen Shot 2024-05-13 at 1 36 58 PM)

(But possibly this is an artifact of incomplete data collection.)

@GregoryTravis
Contributor

It does seem to have returned to normal.

(screenshot: Screen Shot 2024-05-15 at 12 44 38 PM)

Projects
Status: 🟢 Accepted