
Curious behavior with idle timeouts (Postgres) #73

Closed
olivierlacan opened this issue Feb 17, 2022 · 3 comments

Comments

@olivierlacan

We're using Knex 0.95.15 with tarn 3.0.2 and noticed some of our CI runs are hanging for many seconds past mocha's exit. After adding wtfnode to investigate (triggered roughly as sketched after the dump below), we see the following open handles and timers still running past the end of our test suite:

- Timers:
1909  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1910  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1911  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1912  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1913  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1914  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1915  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1916  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1917  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1918  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1919  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1920  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1921  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1922  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1923  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1924  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1925  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1926  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1927  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
1928  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/pg-pool@3.4.1_pg@8.6.0/node_modules/pg-pool/index.js:324
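For reference, the dump above comes from wtfnode; we trigger it roughly like this at the end of the suite (a simplified sketch, not our exact hook):

```js
// Simplified sketch: dump wtfnode's view of active handles and timers once the
// test suite has finished, to see what is still keeping the event loop alive.
const wtf = require('wtfnode');

after(() => {
  wtf.dump();
});
```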

There's very likely something I missed here, but this doesn't feel right, since Tarn's idleTimeoutMillis defaults to 30000ms. Even when I set it lower (through Knex), the issue still occurs: the timers stay at 10 seconds and clearly seem to originate from pg-pool rather than Tarn.

There's a section in Knex's PoolConfig types that explicitly references Tarn configs in a way that appears separate from the rest of the PoolConfig interface, which makes me curious.

My expectation was that changing idleTimeoutMillis in our test environment would affect these timeouts. Interestingly, node-postgres recently added a config option, allowExitOnIdle (defaulting to false), which feels like it could be useful here, but I can't set it through Knex/Tarn since the PoolConfig interface doesn't allow it.
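For context, this is roughly the shape of our pool config in the test environment (a simplified sketch, not our actual file); the `pool` block is the part Knex forwards to Tarn, and there's no place in it for a pg-pool option like allowExitOnIdle:

```js
// Simplified sketch of the Knex setup (connection details omitted).
// The `pool` block is forwarded to tarn, so only tarn's options are typed here.
const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL,
  pool: {
    min: 0,
    max: 10,
    idleTimeoutMillis: 1000, // lowered for tests; tarn's default is 30000
  },
});
```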

Despite looking into this for quite a while now, I can't quite pin down where Knex/Tarn interact with node-postgres and where these configs would be clashing. I'd be grateful for any pointers, and hopefully someone encountering similar issues can benefit from this research. 🙃

@sharadpattanshetti

This comment was marked as off-topic.

@mautematico

@elhigu
Collaborator

elhigu commented May 16, 2022

I think that since tarn does not depend on knex or postgresql, this should be a knex issue. Knex creates the tarn pool internally and adds the necessary handlers to create/destroy resources, etc.

Usually, if CI hangs after mocha exits, there are some timers etc. still running. Maybe you did not call knex.destroy() when CI is tearing down?
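Something along these lines in the test setup (a rough sketch, assuming mocha root hooks and a hypothetical module exporting the knex instance) is usually enough to let the process exit once the suite finishes:

```js
// Rough sketch: destroy the knex/tarn pool after the suite so idle-connection
// timers do not keep the Node event loop (and CI) alive.
const knex = require('./db'); // hypothetical module exporting the knex instance

after(async () => {
  await knex.destroy();
});
```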

Support for that allowExitOnIdle parameter would also need to be implemented in knex.
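For comparison, when node-postgres is used directly rather than through knex, that option is just passed to the Pool constructor; a rough sketch:

```js
// Rough sketch with node-postgres used directly, outside knex/tarn:
// allowExitOnIdle lets the process exit even while idle clients sit in the pool.
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  idleTimeoutMillis: 10000,
  allowExitOnIdle: true,
});
```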

Closing, since this does not seem to be a tarn issue.

elhigu closed this as completed May 16, 2022