🩹 Fix limiter middleware db connection #1813

Merged
merged 6 commits into gofiber:master from qracer:fix/limiterDBConn on Mar 15, 2022

Conversation

@qracer commented Mar 7, 2022

Closes #1812

This PR simply removes the release method from manager.go so that SkipSuccessfulRequests and SkipFailedRequests from the limiter config work properly.

Close my PR if you find this approach inappropriate to merge. I just took the simplest way to fix the bug, but it could also be fixed by refactoring the code in limited_config.go.
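
For context, a minimal limiter setup that depends on these two options; the handler, port, and storage hint are illustrative, and any fiber.Storage backend (Redis, Memcached, etc.) exercises the same code path as the in-memory default:

```go
package main

import (
	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/limiter"
)

func main() {
	app := fiber.New()

	app.Use(limiter.New(limiter.Config{
		Max:                    10,   // allow 10 requests per expiration window
		SkipSuccessfulRequests: true, // only failed requests should count against the limit
		SkipFailedRequests:     false,
		// Storage: myRedisStorage, // with an external storage the skip options misbehave (#1812)
	}))

	app.Get("/", func(c *fiber.Ctx) error {
		return c.SendString("ok")
	})

	_ = app.Listen(":3000")
}
```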

@ReneWerner87 (Member)

I will check this tomorrow.

@ReneWerner87 (Member)

@qracer I think releasing the memory and putting it back into the pool is important for performance.

You are right that if another storage is used, the following construct makes no sense.

I think we should execute the get on the key in the body of the block and set it again after the change; then the release can also stay where it is.

That should work, right?
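
To make the suggestion concrete, here is a self-contained sketch of the get → mutate → set → release order being described; the entry type, the map-backed store, and the helpers are simplified stand-ins for the limiter's internal manager and a fiber.Storage backend, not the real code:

```go
package main

import (
	"fmt"
	"sync"
)

// entry is a simplified stand-in for the limiter's per-key counter.
type entry struct {
	currHits int
}

// pool shows why release exists at all: reusing structs avoids allocations.
var pool = sync.Pool{New: func() interface{} { return new(entry) }}

// store stands in for whatever storage backend is configured.
var store = map[string]entry{}

// get copies the stored value into a pooled struct.
func get(key string) *entry {
	e := pool.Get().(*entry)
	*e = store[key]
	return e
}

// set writes the (possibly mutated) value back to the storage.
func set(key string, e *entry) {
	store[key] = *e
}

// release zeroes the pooled struct and returns it; the stored copy is unaffected.
func release(e *entry) {
	*e = entry{}
	pool.Put(e)
}

func main() {
	key := "203.0.113.7"

	// get where the data is needed, mutate, set it back, then release.
	e := get(key)
	e.currHits++
	set(key, e)
	release(e)

	fmt.Println(store[key].currHits) // prints 1: the write-back survives the release
}
```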

@qracer (Author) commented Mar 12, 2022

@ReneWerner87 I reckon we can do without calling get twice. A more convenient solution would be to rearrange the mutex locking and postpone the set execution with defer, roughly as sketched below. I will commit a new bugfix this evening.
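
A rough paraphrase of that proposal (my own naming, not the committed change; the review comments below explain why it was revised):

```go
// Lock once, and push both the write-back and the unlock into a single
// deferred block so they run only after c.Next() has returned.
mux.Lock()
defer func() {
	manager.set(key, e, cfg.Expiration) // persist the adjusted hit count
	mux.Unlock()
}()

// ... hit counting and limit check, then:
return c.Next()
```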

@ReneWerner87 (Member)

Ok.

I thought the position was fixed, because the results are only available after the next handler has run.
I'm curious what your fix looks like.
It would also be best to add more advanced tests that cover exactly this.

@qracer (Author) commented Mar 12, 2022

@ReneWerner87 I've updated the pull request and changed the bugfix approach. Sorry for the missing tests; I don't know how to mock every available database efficiently for testing.

@ReneWerner87 (Member)

Ok, no problem

@@ -69,12 +72,6 @@ func (FixedWindow) New(cfg Config) fiber.Handler {
// Set how many hits we have left
remaining := cfg.Max - e.currHits

// Update storage
manager.set(key, e, cfg.Expiration)

@ReneWerner87 (Member)

I think moving the set call must also be done in the branch where the limit is reached.
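
For illustration, what that means for the branch that fires when the limit is hit (a paraphrase using the names from the hunk above, not the actual change):

```go
// If the write-back now happens after c.Next(), this early-return path
// never reaches it, so the hit count has to be persisted here as well.
if remaining < 0 { // limit exceeded for this window (exact guard may differ in the real code)
	manager.set(key, e, cfg.Expiration) // keep the storage in sync
	mux.Unlock()
	return cfg.LimitReached(c) // e.g. respond with 429 Too Many Requests
}
```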

@@ -44,6 +44,9 @@ func (FixedWindow) New(cfg Config) fiber.Handler {

// Lock entry
mux.Lock()
defer func() {
mux.Unlock()

@ReneWerner87 (Member)

This is not a good idea; unlocking the mutex only in the defer would block reads for too long.

With c.Next() all of the downstream handlers are executed, so the lock would be held for the entire request.
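
A sketch of the alternative this points toward (assumptions on my side, not the merged commit): keep the critical section short, release the lock before c.Next(), and take it again only for the post-handler bookkeeping:

```go
// Count the hit and write it back while holding the lock only briefly.
mux.Lock()
e := manager.get(key)
e.currHits++
manager.set(key, e, cfg.Expiration)
mux.Unlock()

// Run the rest of the handler chain without holding the limiter lock.
err := c.Next()

// Optionally undo the hit, depending on the response status.
if cfg.SkipSuccessfulRequests && c.Response().StatusCode() < fiber.StatusBadRequest {
	mux.Lock()
	e = manager.get(key)
	e.currHits--
	manager.set(key, e, cfg.Expiration)
	mux.Unlock()
}

return err
```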

@ReneWerner87 (Member)

@qracer I added some comments, can you please check again?

@qracer (Author) commented Mar 14, 2022

@ReneWerner87 I've read your comments and I agree with your point: the mutex handling in my solution is inefficient. I'll push changes right now.

@ReneWerner87 merged commit c92a505 into gofiber:master on Mar 15, 2022
@qracer deleted the fix/limiterDBConn branch on March 15, 2022 at 17:46

Successfully merging this pull request may close these issues.

🐛 Limiter middleware works improperly with any database
3 participants