
fix: add in-memory cache with TTL to reduce hash computation time #4693

Open · wants to merge 1 commit into base: master

Conversation

guillaumelamirand
Contributor

@guillaumelamirand guillaumelamirand commented Jul 13, 2023

Description

When using a keyless plan, the remote address is used as the client identifier. To avoid divulging sensitive data, we hash it first. However, hashing is CPU-intensive, so the idea here is to add an in-memory cache with a short TTL to deal with bursts of requests.
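The approach can be sketched as follows. This is a minimal, hypothetical illustration (the class, the map-based cache, and the method names are assumptions, not Gravitee's actual Cache API): repeated requests from the same remote address within the TTL reuse the already-computed hash instead of re-running the digest.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the technique: cache the hash of a remote address
// for a short TTL so bursts of requests from the same client only pay the
// SHA-256 cost once per TTL window.
public class RemoteAddressHasher {

    private static final long TTL_MS = 60_000; // mirrors the 1-minute TTL discussed in this PR

    private record Entry(String hash, long expiresAtMs) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public String hash(String remoteAddress) {
        long now = System.currentTimeMillis();
        Entry cached = cache.get(remoteAddress);
        if (cached != null && cached.expiresAtMs() > now) {
            return cached.hash(); // cache hit: skip the CPU-heavy digest
        }
        String hash = sha256(remoteAddress);
        cache.put(remoteAddress, new Entry(hash, now + TTL_MS));
        return hash;
    }

    private static String sha256(String value) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] bytes = digest.digest(value.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(bytes);
        } catch (Exception e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```

In the actual change, the cache is obtained from Gravitee's `cacheManager` with `timeToLiveInMs`, `timeToIdleInMs`, and `maxSize` bounds; the map above only stands in for that to show the hit/miss flow.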

Additional context


📚 View the storybook of this branch here

@guillaumelamirand guillaumelamirand force-pushed the optimize-client-identifier-computation branch from a1b44bf to 364e11e Compare September 27, 2023 09:41
@guillaumelamirand guillaumelamirand marked this pull request as ready for review September 27, 2023 09:41
@guillaumelamirand guillaumelamirand requested a review from a team as a code owner September 27, 2023 09:41
Contributor

@remibaptistegio remibaptistegio left a comment


lgtm

Member

@ytvnr ytvnr left a comment


Were you able to run a performance test scenario to check the results?

@guillaumelamirand guillaumelamirand force-pushed the optimize-client-identifier-computation branch from 364e11e to 8de0470 Compare October 5, 2023 15:05
@guillaumelamirand guillaumelamirand force-pushed the optimize-client-identifier-computation branch 4 times, most recently from d6110df to a99441d Compare November 21, 2023 15:26
        return fallback;
    }
}

private Cache<String, String> getOrCreateCache() {
    return cacheManager.getOrCreateCache(
        REMOTE_ADDRESS_HASHES_CACHE,
Contributor


Maybe use the class name, so it is guaranteed to be unique.

private Cache<String, String> getOrCreateCache() {
    return cacheManager.getOrCreateCache(
        REMOTE_ADDRESS_HASHES_CACHE,
        CacheConfiguration.builder().timeToLiveInMs(60_000).timeToIdleInMs(60_000).maxSize(1000).distributed(false).build()
Contributor


1 min is quite short. I understand there might be a lot of requests, but how did you arrive at the conclusion that 1 minute is the best value? I'm not asking for a new parameter.

Contributor Author


The idea is only to limit the computation during bursts of requests.

@@ -289,6 +307,7 @@ void should_suffix_client_identifier_header_with_hash_when_subscription_equals_r
verify(mockRequest).clientIdentifier(AdditionalMatchers.not(eq(TRANSACTION_ID)));
verify(mockRequest).clientIdentifier(AdditionalMatchers.not(eq(REMOTE_ADDRESS)));
assertThat(spyCtx.metrics().getClientIdentifier()).isEqualTo(clientIdentifier);
assertThat(cacheManager.getOrCreateCache(REMOTE_ADDRESS_HASHES_CACHE).get(REMOTE_ADDRESS)).isNotNull();
Contributor


You don't test the TTL. I'm not saying test the cache, I'm saying test that the TTL works (maybe you need to tweak your class so you can change TTL values).

Contributor Author


The TTL of the cache is tested in the cache module itself. Do you want to wait 1 minute to make sure the value is evicted?
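One way to assert the TTL without waiting a full minute, as suggested in this thread, is to make the TTL (and the clock) injectable. This is a hypothetical sketch, not Gravitee's actual Cache implementation; the class and clock-supplier parameter are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Hypothetical TTL cache with an injected clock, so tests can "advance time"
// instead of sleeping until the entry is evicted.
public class TtlCache {

    private record Entry(String value, long expiresAtMs) {}

    private final long ttlMs;
    private final LongSupplier clock; // injectable for tests
    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    public TtlCache(long ttlMs, LongSupplier clock) {
        this.ttlMs = ttlMs;
        this.clock = clock;
    }

    public void put(String key, String value) {
        store.put(key, new Entry(value, clock.getAsLong() + ttlMs));
    }

    public String get(String key) {
        Entry e = store.get(key);
        if (e == null || e.expiresAtMs() <= clock.getAsLong()) {
            store.remove(key); // lazily evict expired entries
            return null;
        }
        return e.value();
    }
}
```

A test can then construct the cache with a fake clock, insert a value, move the clock past the TTL, and assert the value is gone, all in milliseconds of wall time.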

@guillaumelamirand guillaumelamirand force-pushed the optimize-client-identifier-computation branch from a99441d to f7b8d41 Compare November 22, 2023 10:16
@guillaumelamirand guillaumelamirand force-pushed the optimize-client-identifier-computation branch from f7b8d41 to c80f87c Compare November 22, 2023 14:54
@guillaumelamirand
Contributor Author

On hold waiting for 4.2 release.

@phiz71
Member

phiz71 commented Apr 23, 2024

@guillaumelamirand 4.2 (and 4.3) has been released. Should we consider updating this PR? Or can we close it?

@guillaumelamirand
Contributor Author

> @guillaumelamirand 4.2 (and 4.3) has been released. Should we consider updating this PR? Or can we close it?

Yes, the idea was to wait for the 4.2 release to have time to properly test the fix.
