
Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 1 #4395

Closed
himharsh1997 opened this issue Mar 25, 2021 · 16 comments

Comments

@himharsh1997

himharsh1997 commented Mar 25, 2021

Hi guys! In my application I was intermittently hitting:
TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
My knex config is:

import knexLib from 'knex';

const config = {
  client: process.env.CLIENT,
  connection: {
    host: process.env.DBHOST,
    user: process.env.DBUSER,
    password: process.env.DBPASS,
    database: process.env.DATABASE
  },
  pool: { min: 0, max: 30, acquireTimeoutMillis: 60 * 1000 },
  seeds: {
    directory: './db/sds'
  },
  migrations: {
    directory: './db/mg'
  }
};

export const con = knexLib(config);

and I'm using it something like:

import { con } from './con';
import { FormatError } from '../err';

const handler = async (req) => {
  const trx = await con.transaction();
  try {
    const result = await con('insert-table').transacting(trx).insert(req.list).returning('*');
    const resultOfLog = await Promise.all(
      result.map((o) => {
        return con('log-table').transacting(trx).insert({ event_id: 1, resource: o.id });
      })
    );
    await trx.commit();
    return result;
  } catch (error) {
    await trx.rollback();
    return new FormatError(error);
  }
};

@vintagexav

vintagexav commented Mar 25, 2021

  • Did you try adding the following?
idleTimeoutMillis: 600000

The connection is automatically closed once it has been idle for idleTimeoutMillis.

see https://cloud.google.com/sql/docs/postgres/samples/cloud-sql-postgres-knex-timeout

  • Did you try the following?
npm install pg@latest --save

and check the pg/node versions in #2820 (comment) and #3912 (comment)

  • Otherwise you can also set propagateCreateError to false in the pool (not advised, and it might not work out for you anyway):
propagateCreateError: false

see #2820 (comment)
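To see where the options above fit, here is a sketch of a pool block combining the suggestions; the values are illustrative only, not recommendations:

```javascript
// Sketch of a knex pool block combining the options suggested above.
// All values are illustrative, not recommendations.
const pool = {
  min: 0,
  max: 30,
  acquireTimeoutMillis: 60 * 1000, // how long acquiring a connection may take before the timeout error fires
  idleTimeoutMillis: 600000,       // close connections that have been idle for 10 minutes
  // propagateCreateError: false,  // last resort, as noted above
};
```

This object would be passed as the `pool` property of the knex config shown earlier in the thread.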

@himharsh1997
Author

himharsh1997 commented Mar 26, 2021

idleTimeoutMillis: 600000

Do we really need this? There must be a default value lower than this.

Also, since we are using Lambda, do we need idleTimeoutMillis at all?

@elhigu
Member

elhigu commented Mar 26, 2021

You are creating transactions and never committing/rolling them back. That fills the pool until there are no connections left.

A better way to use transactions, with automatic commit/rollback, is like this:

const resultsAfterTransactionIsComplete = await knex.transaction(async trx => {
  const result = await trx('insert-table').insert(req.list).returning('*');

  // insert logs in the same transaction
  const logEntries = result.map(o => ({ event_id: 1, resource: o.id }));
  await trx('log-table').insert(logEntries);

  // returning from the transaction handler automatically commits and frees the connection back to the pool
  return result;
});
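To make the commit/rollback contract above concrete, here is a toy model of it; this is not knex's real implementation, and `withTransaction`/`fakeTrx` are made up for illustration — the point is only that a resolved handler commits and a thrown error rolls back:

```javascript
// Toy model of the transaction-handler contract described above:
// resolving commits, throwing rolls back. NOT knex internals;
// withTransaction and fakeTrx are illustrative stand-ins.
async function withTransaction(handler) {
  const fakeTrx = { committed: false, rolledBack: false };
  try {
    const result = await handler(fakeTrx);
    fakeTrx.committed = true;   // handler returned: commit
    return { result, trx: fakeTrx };
  } catch (err) {
    fakeTrx.rolledBack = true;  // handler threw: roll back
    throw err;
  }
}
```

With the real API, the same contract means you never call trx.commit()/trx.rollback() yourself when you use the callback form of knex.transaction.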

@elhigu elhigu closed this as completed Mar 26, 2021
@himharsh1997
Author

Sorry, when I wrote the sample code I mistakenly forgot to include the commit/rollback, but I do have them in the actual code. I'm still getting the same error intermittently.

@himharsh1997 himharsh1997 changed the title Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 1 Mar 26, 2021
@elhigu
Member

elhigu commented Mar 26, 2021

Another error there is that you are creating a new query, which is outside of transaction for every result line:

  const resultOfLog = await Promise.all(
    result.map((o) => {
      return con('log-table').insert({ event_id: 1, resource: o.id });
    })
  );

Depending on the result set size, those queries can overwhelm the DB and may not run fast enough to meet the acquireTimeout. Another possible problem: if that code is run concurrently many times, the first 100 concurrent transactions can take all the connections; the log-write queries then cannot run inside a transaction, and with no connections left in the pool the transactions can never finish.
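One way to avoid firing one query per result row, as described above, is to batch the log rows and write each batch with a single bulk insert. A minimal sketch (the `chunk` helper is made up for illustration; with knex, each batch could then be written as `await trx('log-table').insert(batch)`):

```javascript
// chunk() splits rows into fixed-size batches so logs can be written
// with a few bulk inserts instead of one query per row.
function chunk(rows, size) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// e.g. 5 log rows in batches of 2 produce 3 bulk inserts:
// chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4], [5]]
```

Fewer, larger queries keep the work inside one pooled connection instead of competing for many.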

@himharsh1997
Author

Another error there is that you are creating a new query, which is outside of transaction for every result line:

I didn't get the context of "Another error there is that you are creating a new query, which is outside of transaction for every result line:"

@himharsh1997
Author

You are creating transactions and never committing/rolling them back. That fills the pool until there are no connections left.

A better way to use transactions, with automatic commit/rollback, is like this:

const resultsAfterTransactionIsComplete = await knex.transaction(async trx => {
  const result = await trx('insert-table').insert(req.list).returning('*');

  // insert logs in the same transaction
  const logEntries = result.map(o => ({ event_id: 1, resource: o.id }));
  await trx('log-table').insert(logEntries);

  // returning from the transaction handler automatically commits and frees the connection back to the pool
  return result;
});

So if we use it this way, will we never fall into this error, and will the connections be managed properly?

@elhigu
Member

elhigu commented Mar 26, 2021

It can still happen, but in that case you probably need to give the database more resources, since the error would then be caused by the DB not being able to handle the queries fast enough.

@elhigu
Member

elhigu commented Mar 26, 2021

Another error there is that you are creating a new query, which is outside of transaction for every result line:

I didn't get the context of "Another error there is that you are creating a new query, which is outside of transaction for every result line:"

con('log-table').insert({event_id: 1, resource: o.id});

That query is not executed through the same transaction as the first query.

@himharsh1997
Author

con('log-table').insert({event_id: 1, resource: o.id});

That query is not executed through the same transaction as the first query.

But I am passing the same transaction instance, created on the first line outside the try/catch block, to all of the queries.

@himharsh1997
Author

himharsh1997 commented Mar 26, 2021

That query is not executed through the same transaction as the first query.

We are using db.m6g.large, and we are using a Lambda function.

@himharsh1997
Author

Absolutely agree with you. When I tried to debug the queries in knex using DEBUG=knex:*, I found that all the queries execute in the same transaction, but the connection is never released back to the pool at the end. I have switched to your method, which automatically commits and releases the connection back to the pool on return, and rolls back on a thrown error.

@azebuji

azebuji commented May 13, 2021

I'm having the same problem. I'm using the transaction like this:

return await knex.transaction(async (trx) => { return 'teste' })

(There are more rows inside this code, but I removed them for easier comprehension.)

I tested many times, and the pool keeps filling up without committing the transactions. If I save the project while it is running, the commit occurs. I really don't know what is happening.

I'm using:
node version 16.0.0
"pg": "^8.6.0"
"ts-node": "^9.1.1",
"ts-node-dev": "^1.1.6",
"knex": "0.21.17",

I tried other versions of node, pg, knex and ts-node; none of them solved my problem.

Does anyone have the same problem?

@OneTuskedMario

Adding idleTimeoutMillis: 600000 and propagateCreateError: false fixed it for me.

@hassanrahimi

Hi,
I had this problem and solved it by adding the correct port to the connection settings.
You can find the connection port in PostgreSQL by right-clicking on PostgreSQL 15 and then opening the Connection tab.

@hassanrahimi

Or, if you use a remote connection to create or update the database from your computer, you must use the port of that remote address.

host : remote......com
port : 31715
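In knex config terms, the port from the comment above would sit next to the host in the connection block. A sketch; the user, password, and database values here are hypothetical placeholders:

```javascript
// Illustrative connection block with an explicit port.
// Everything except host/port is a made-up placeholder.
const connection = {
  host: 'remote......com', // remote address from the comment above
  port: 31715,
  user: 'dbuser',          // placeholder
  password: 'secret',      // placeholder
  database: 'mydb',        // placeholder
};
```

If the port is omitted, the driver falls back to its default (5432 for PostgreSQL), which is what causes the connection to hang when the server listens elsewhere.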
