
BulkInsert #102

Open
byte47 opened this issue Jun 20, 2020 · 4 comments
Labels
status: accepted Change is accepted and is open to community PRs. type: feature Request for a new feature or enhancement.

Comments

@byte47

byte47 commented Jun 20, 2020

context

Option to insert bulk data in batches

proposed solution

In Knex, the insert function (ref) takes either a hash of properties to be inserted into the row, or an array of inserts. Can that be implemented in trilogy as well?

alternatives

It's not possible to insert bulk data in one call using trilogy, but as a workaround, knex can be used directly, eg:

```ts
await db.knex<UserDocType>("users").insert(userArry);
```
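Large arrays can still hit sqlite's bound-variable limit, so the workaround may need to be chunked. A minimal sketch of manual chunking, assuming a plain `chunk` helper (not part of trilogy or knex):

```typescript
// Split an array of rows into fixed-size chunks so each insert stays
// under sqlite's bound-variable limit. `chunk` is a hypothetical helper.
function chunk<T>(rows: T[], size: number): T[][] {
  const parts: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    parts.push(rows.slice(i, i + size));
  }
  return parts;
}

// usage sketch (db.knex call not runnable here):
// for (const part of chunk(userArry, 100)) {
//   await db.knex<UserDocType>("users").insert(part);
// }
```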
@byte47 byte47 added the type: feature Request for a new feature or enhancement. label Jun 20, 2020
@haltcase
Owner

Sounds good. This would be nice to add as createMany, with a few considerations:

  • Batching bulk inserts to configurable chunks (default = 100?)
  • sqlite doesn't support returning or output clauses. I worked around that in the design of create but I'm not sure that would translate to createMany, so the return value would be the number of rows, a boolean, or nothing.
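A hypothetical shape for such a `createMany`, sketched in TypeScript (the name, options, and return type here are assumptions from this thread, not trilogy's actual API; `insertChunk` stands in for the real per-chunk knex insert):

```typescript
// Sketch: batch rows into configurable chunks and return only a row
// count, since sqlite has no returning/output clause to give back rows.
type InsertChunk<T> = (rows: T[]) => Promise<void>;

async function createMany<T>(
  rows: T[],
  insertChunk: InsertChunk<T>,
  chunkSize = 100 // proposed default from this thread
): Promise<number> {
  for (let i = 0; i < rows.length; i += chunkSize) {
    await insertChunk(rows.slice(i, i + chunkSize));
  }
  return rows.length; // number of rows inserted, not the rows themselves
}
```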

@haltcase haltcase added the status: accepted Change is accepted and is open to community PRs. label Jun 20, 2020
@vazra

vazra commented Jun 20, 2020

  • Batching bulk inserts to configurable chunks (default = 100?)

I think there are some limitations on sqlite bulk inserts (eg: SQLITE_ERROR: too many SQL variables) when you insert many rows at a time. Is there any basis for deciding on the default chunk value of 100?

  • sqlite doesn't support returning or output clauses. I worked around that in the design of create but I'm not sure that would translate to createMany.

Yeah, but I don't think that is required for bulk inserts.

@haltcase
Owner

Is there any basis for deciding the default chunk value to 100?

None at all :) Probably 500 or even 1,000 is doable, I just don't know what the typical limit is and will have to look into it.

yeah, but I don't think that is required in bulk inserts.

Agreed, I just wanted to point out the return value would be different between create and createMany, but that'll be documented anyway.

@vazra

vazra commented Jun 20, 2020

I just tried with a table with 4 columns, and it fails at a chunk size of 240 using await db.knex<UserDocType>("users").insert(userArry) (Error: SQLITE_ERROR: too many SQL variables).

When the chunks are smaller, the net time taken for the query is higher compared with larger chunks. Is it possible to optimize the chunk size dynamically? 🤔
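One plausible heuristic: each inserted row consumes one bound variable per column, and sqlite caps bound variables at SQLITE_MAX_VARIABLE_NUMBER (999 by default in builds before 3.32.0, 32766 since). A sketch of deriving the chunk size from the column count, under that assumption (it may not account for extra variables knex binds, so it's a starting point rather than an exact fix for the failure at 240):

```typescript
// Derive the largest chunk size that keeps a multi-row insert under
// sqlite's bound-variable limit, assuming one variable per column per row.
const SQLITE_MAX_VARIABLES = 999; // conservative pre-3.32.0 default

function maxChunkSize(columnCount: number, limit = SQLITE_MAX_VARIABLES): number {
  // floor(limit / columns) rows fit; always allow at least one row
  return Math.max(1, Math.floor(limit / columnCount));
}

// e.g. a 4-column table fits floor(999 / 4) = 249 rows per chunk
```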
