
Feat/performance improvement #173

Open · wants to merge 7 commits into master
Conversation

Maksimka101

Status

READY

Breaking Changes

NO

Description

I've noticed that the performance of comparing objects that contain collections can be significantly improved. In my application these optimizations led to a considerable FPS increase, roughly 3x. I've also written benchmarks comparing the old and new approaches.
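For context, the kind of change involved can be sketched as follows. This is a hypothetical illustration, not the actual diff: it assumes the optimization replaces a general-purpose deep-equality helper with a specialized element-by-element comparison, which avoids dynamic dispatch and intermediate allocations.

```dart
// Hypothetical sketch (not the actual PR code): compare two iterables
// element by element instead of delegating to a generic deep-equality
// helper. A full implementation would also recurse into nested Lists,
// Maps, and Sets; that is omitted here for brevity.
bool iterableEquals(Iterable<Object?> a, Iterable<Object?> b) {
  if (identical(a, b)) return true;
  if (a.length != b.length) return false;
  final ia = a.iterator;
  final ib = b.iterator;
  while (ia.moveNext() && ib.moveNext()) {
    if (ia.current != ib.current) return false;
  }
  return true;
}
```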

Todos

  • Tests
  • Documentation
  • Examples

Impact to Remaining Code Base

This PR will affect:

  • comparison logic

@felangel
Owner

Thanks for the PR! Are you able to share the benchmarks you used? I would love to run them locally and validate the results.

@Maksimka101
Author

Maksimka101 commented Dec 11, 2023

> Thanks for the PR! Are you able to share the benchmarks you used? I would love to run them locally and validate the results.

Sure, here is the link: https://github.com/Maksimka101/equatable_benchmark

@Maksimka101
Author

Hello

Any news?🙂

Do I need to fix something or provide more information?

@felangel
Owner

felangel commented Jan 2, 2024

> Hello
>
> Any news?🙂
>
> Do I need to fix something or provide more information?

Apologies for the delay! I was slow to respond due to the holidays but I should have time to review this in the next day or two. Thanks again!

@Maksimka101
Author

Thanks, no need to hurry. I forgot that normal people relax on holidays 🙂. Happy New Year!

@Maksimka101
Author

Hello

Can we merge this?

@escamoteur

Just saw this. Besides the fact that it would definitely be great to get this merged, why not borrow the optimization from fast_equatable and cache the hash code, if it really improves performance that much? :-)

@felangel felangel closed this May 24, 2024
@felangel felangel reopened this May 24, 2024
@felangel
Owner

felangel commented May 24, 2024

> Just saw this. Besides the fact that it would definitely be great to get this merged, why not borrow the optimization from fast_equatable and cache the hash code, if it really improves performance that much? :-)

Will review this later today; apologies that it fell through the cracks. It looks like CI was failing, but I've just re-triggered it to see the logs.

@felangel
Owner

Looks like some tests are missing. I’ll take a look at the benchmarks and will see if I can get this merged later today. Apologies for the delay!

@felangel felangel added the enhancement New feature or request label May 24, 2024
@felangel
Owner

felangel commented May 26, 2024

@Maksimka101 I've added benchmarks and run them against the implementation on master and am seeing the following results:

branch: master

EmptyEquatable
          total runs:  2 076 295   
          total time:     2.0000  s
         average run:          0 μs
         runs/second:   Infinity
               units:        100   
        units/second:   Infinity
       time per unit:     0.0000 μs

PrimitiveEquatable
          total runs:    810 588   
          total time:     2.0000  s
         average run:          2 μs
         runs/second:    500 000   
               units:        100   
        units/second: 50 000 000   
       time per unit:     0.0200 μs

CollectionEquatable (small)
          total runs:    443 978   
          total time:     2.0000  s
         average run:          4 μs
         runs/second:    250 000   
               units:        100   
        units/second: 25 000 000   
       time per unit:     0.0400 μs

CollectionEquatable (medium)
          total runs:    442 368   
          total time:     2.0000  s
         average run:          4 μs
         runs/second:    250 000   
               units:        100   
        units/second: 25 000 000   
       time per unit:     0.0400 μs

CollectionEquatable (large)
          total runs:    450 915   
          total time:     2.0000  s
         average run:          4 μs
         runs/second:    250 000   
               units:        100   
        units/second: 25 000 000   
       time per unit:     0.0400 μs

branch: feat/performance-improvement

EmptyEquatable
          total runs:  2 069 828   
          total time:     2.0000  s
         average run:          0 μs
         runs/second:   Infinity
               units:        100   
        units/second:   Infinity
       time per unit:     0.0000 μs

PrimitiveEquatable
          total runs:    823 014   
          total time:     2.0000  s
         average run:          2 μs
         runs/second:    500 000   
               units:        100   
        units/second: 50 000 000   
       time per unit:     0.0200 μs

CollectionEquatable (small)
          total runs:    490 253   
          total time:     2.0000  s
         average run:          4 μs
         runs/second:    250 000   
               units:        100   
        units/second: 25 000 000   
       time per unit:     0.0400 μs

CollectionEquatable (medium)
          total runs:    494 469   
          total time:     2.0000  s
         average run:          4 μs
         runs/second:    250 000   
               units:        100   
        units/second: 25 000 000   
       time per unit:     0.0400 μs

CollectionEquatable (large)
          total runs:    494 548   
          total time:     2.0000  s
         average run:          4 μs
         runs/second:    250 000   
               units:        100   
        units/second: 25 000 000   
       time per unit:     0.0400 μs

MacBook Pro (M1 Pro, 16GB RAM)
Dart SDK version: 3.3.4 (stable) (Tue Apr 16 19:56:12 2024 +0000) on "macos_arm64"
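The totals above could be produced by a simple timed loop like the following. This is an illustrative sketch only, not the actual harness from the linked benchmark repository; the function name `timedRuns` and its parameters are made up for this example.

```dart
// Illustrative sketch: run a comparison repeatedly for a fixed time
// budget and count completed runs, where one "run" performs `units`
// comparisons (matching the `units: 100` figure in the output above).
int timedRuns(bool Function() compare,
    {Duration budget = const Duration(seconds: 2), int units = 100}) {
  final sw = Stopwatch()..start();
  var runs = 0;
  while (sw.elapsed < budget) {
    for (var i = 0; i < units; i++) {
      compare();
    }
    runs++;
  }
  return runs;
}
```

Note that `average run: 0 μs` and `runs/second: Infinity` for EmptyEquatable are rounding artifacts: a per-run time below one microsecond truncates to zero at integer-microsecond precision.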

I'm not able to reproduce the significant performance increase you describe. Maybe DeepCollectionEquality has been optimized since you last tested?

Let me know if you're still able to reproduce a significant improvement; the runs above show only a ~10% gain on the collection benchmarks. Thanks!

@felangel
Owner

> Just saw this. Besides the fact that it would definitely be great to get this merged, why not borrow the optimization from fast_equatable and cache the hash code, if it really improves performance that much? :-)

Because Equatable classes should be immutable (i.e. all fields final, ideally with a const constructor). Classes that use package:fast_equatable are not immutable and cannot have a const constructor.
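The trade-off can be illustrated with a hypothetical sketch (this class is made up for the example and is not from either package): memoizing the hash requires a `late` field, and a Dart class with a `late` instance field cannot declare a `const` constructor, so compile-time canonicalization of instances is lost.

```dart
// Hypothetical illustration of hash caching: `late final` memoizes the
// hash on first access, but a class with a `late` field cannot have a
// `const` constructor.
class CachedHash {
  CachedHash(this.props); // cannot be `const` because of `_hash` below
  final List<Object?> props;

  late final int _hash = Object.hashAll(props); // computed once, then cached

  @override
  int get hashCode => _hash;

  @override
  bool operator ==(Object other) {
    if (other is! CachedHash) return false;
    if (other.props.length != props.length) return false;
    for (var i = 0; i < props.length; i++) {
      if (other.props[i] != props[i]) return false;
    }
    return true;
  }
}
```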


3 participants