Add keep-alive connection tracking and reaping (resolve #3617) #3619
Conversation
The predominant thing you are testing when load testing with a new connection for every request is Node.js's ability to accept them, the TCP settings of your machine, and similar things. Those are not useful for evaluating the performance of a framework. Note that for this specific case I expect the actual performance to be worse for short-lived connections that do 2-3 requests and are then closed (we can benchmark this with autocannon), as they will enter/exit the […] A solution for this is to put it behind a flag.

Yes. The full implementation would be behind a flag. Doing this automatically would be a breaking change, and it is not something everyone wants without explicitly opting in.

Then I'm good with this approach.
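To make the flag gating concrete, here is a minimal sketch; the `trackConnections` option name and the `buildConnectionStore` helper are hypothetical illustrations, not the PR's actual API:

```javascript
// Hypothetical sketch: `trackConnections` and `buildConnectionStore` are
// illustrative names, not the PR's actual option or internals.
function buildConnectionStore (opts = {}) {
  if (opts.trackConnections === true) {
    // Opt-in: a real Set, so sockets can be tracked and reaped.
    return new Set()
  }
  // Default: a noop "Set" so the hot path does no tracking at all.
  return {
    [Symbol.iterator]: function * () {},
    add () {},
    delete () {},
    has () { return true }
  }
}

const tracked = buildConnectionStore({ trackConnections: true })
tracked.add('socket-a')
console.log(tracked.size) // 1

const untracked = buildConnectionStore()
untracked.add('socket-a')
console.log([...untracked].length) // 0
```

Because the default path is opt-out, existing users see no behavior change unless they enable the flag.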
I have tested two methods of skipping connection tracking by default: first, by providing a "Set" that does nothing; second, by passing through a boolean that skips the whole "add to set" path in the route handler. I am seeing no difference between the two methods, so I am going to opt for the easier-to-maintain "noop Set" implementation.

Benchmark for noop Set
Benchmark for boolean toggle

The noop set would be OK for me.
(Branch updated from a1c2b23 to d6bf012.)
I'm getting the exact same missing coverage on the […]
```js
// for the client to have seen the socket get destroyed. The mere fact
// that we have reached this callback is enough indication that the
// feature being tested works as designed.
t.equal(client.closed, false)
```
Should the client receive an `error` event due to the socket being closed?
Maybe?
```diff
@@ -16,7 +16,7 @@
     "lint:typescript": "eslint -c types/.eslintrc.json types/**/*.d.ts test/types/**/*.test-d.ts",
     "prepublishOnly": "tap --no-check-coverage test/internals/version.test.js",
     "test": "npm run lint && npm run unit && npm run test:typescript",
-    "test:ci": "npm run unit -- --cov --coverage-report=lcovonly && npm run test:typescript",
+    "test:ci": "npm run unit -- -R terse --cov --coverage-report=lcovonly && npm run test:typescript",
```
I'll call this out before anyone else: I added this because one of the macOS runs flaked. We have so many tests that scrolling through them to figure out why that run flaked was effectively impossible. By using `-R terse`, only the failures will be published to the test run logs. This will make it far easier to find the problems.
THANKS!!! I didn't know this was possible.
```js
[Symbol.iterator]: function * () {},
add () {},
delete () {},
has () { return true }
```
Wouldn't the noop set be empty, and return `false` for all `.has()` calls?
Returning `true` prevents new connections from being added to the set and thus avoids a branch and an extra function invocation (`.add()`).
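To illustrate why, here is a hypothetical sketch of the tracking branch; `trackConnection` is an illustrative name, not the PR's actual internals. When `has()` always returns `true`, the `add()` call and the close-listener registration are never reached:

```javascript
// Illustrative sketch of the tracking branch in a connection handler;
// not the PR's actual code.
function trackConnection (connections, socket) {
  if (!connections.has(socket)) {
    connections.add(socket)
    socket.on('close', () => connections.delete(socket))
  }
}

// Noop store: has() is always true, so the branch body never runs.
const noop = { has () { return true }, add () {}, delete () {} }
let listenerRegistered = false
const fakeSocket = { on () { listenerRegistered = true } }

trackConnection(noop, fakeSocket)
console.log(listenerRegistered) // false: tracking was skipped entirely

// Real store: the socket is retained and a close listener is attached.
const real = new Set()
trackConnection(real, fakeSocket)
console.log(real.size, listenerRegistered) // 1 true
```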
```
persistent connections to timeout first. Important: connections are not
inspected to determine if requests have been completed.
```
Do you think it could be possible to order the shutdown cycle in this way:
- disable acceptance of new requests;
- allow in-progress request handlers to complete;
- once all handlers have completed, close the sockets with keep-alive connections?
As far as I know, it is impossible to determine if a request is pending via the socket. We only retain the sockets so that we do not keep the full request(s) in memory for the duration of the persistent connection.
Node.js knows, by the way; however, that info is private.
> As far as I know, it is impossible to determine if a request is pending via the socket.

Wouldn't we know via Fastify tracking `handler()`s that are pending, rather than the socket?

> We only retain the sockets so that we do not keep the full request(s) in memory for the duration of the persistent connection.

Yeah, I'm talking about while the request is still active and the connection is active.
@scmorse I believe this PR as written does the best we can do in Fastify without significant impact to performance. Any other changes will rely on nodejs/node#41578 going through.
lgtm
We might want to add something similar for HTTP/2, but I would push that to a separate issue.
Folks, […]

That is not a requirement for Fastify PRs. Not everyone knows TypeScript. A subsequent PR has already been created: #3646.
2022-01-14:

This is a proof of concept. The code is the bare minimum to add the feature set; it needs to be cleaned up to follow proper internals conventions if the feature set is accepted. Automated tests will also be needed. The PR is created to get feedback on the implementation and to discuss whether to move forward.

Checklist
- Run `npm run test` and `npm run benchmark`
- Follow the Code of Conduct