In all of our supported databases, a large enough `IN` query will panic and crash because the database limits the number of bound variables allowed in a single statement. It is especially easy to reproduce with a big enough database, because relation queries join the data with a huge `IN` statement.
Example:

```graphql
queryA {
  findManyUser {
    address { street }
  }
}
```
With thousands of users (roughly 5000–10000 or more), the address query will include a filter like `user_id IN (<thousands of ids>)`, and the database will return an error, causing a panic.
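The failure mode can be reproduced outside the engine. The sketch below uses SQLite as a stand-in (the exact variable limit differs per database and build; the table name and helper are hypothetical):

```python
# Reproduction sketch: a large enough IN list exceeds the database's
# bound-variable limit. SQLite is used as a stand-in here; limits and
# error messages differ across the supported databases.
import sqlite3

def query_addresses(conn, user_ids):
    # Build "SELECT ... WHERE user_id IN (?, ?, ...)" with one
    # placeholder per id, as a relation query would.
    placeholders = ",".join("?" for _ in user_ids)
    sql = f"SELECT street FROM address WHERE user_id IN ({placeholders})"
    return conn.execute(sql, user_ids).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE address (user_id INTEGER, street TEXT)")

# A small IN list works fine...
query_addresses(conn, [1, 2, 3])

# ...but hundreds of thousands of bound variables exceed the engine's
# limit and the statement fails.
try:
    query_addresses(conn, list(range(500_000)))
except sqlite3.OperationalError as e:
    print("error:", e)  # e.g. "too many SQL variables"
```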
We could instead load the addresses in batches of at most 5000 ids and concatenate the results in the connector.
Things to be aware of:
Databases will dedup ids within an IN statement. For batched statements, the deduplication has to be done in the connector instead, since duplicates can span batches.
Ordering will cause trouble. For batched queries, the ordering needs to happen in the connector, since each batch is only ordered within itself.
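The batching proposal, including both caveats above, can be sketched as follows (a minimal illustration, not the actual connector code; the batch size, `query_batch` callback, and `order_key` parameter are assumptions):

```python
# Sketch of batched IN loading with connector-side dedup and ordering.
BATCH_SIZE = 5000  # assumed maximum ids per IN statement

def load_in_batches(ids, query_batch, order_key=None):
    # Dedup in the connector: each batched statement would only dedup
    # within its own batch. Preserves first-seen order of ids.
    seen = set()
    unique_ids = [i for i in ids if not (i in seen or seen.add(i))]

    # Issue one query per batch of at most BATCH_SIZE ids and
    # concatenate the results.
    rows = []
    for start in range(0, len(unique_ids), BATCH_SIZE):
        batch = unique_ids[start:start + BATCH_SIZE]
        # query_batch would run e.g. SELECT ... WHERE user_id IN (batch)
        rows.extend(query_batch(batch))

    # Order in the connector: a per-batch ORDER BY only orders each
    # batch internally, not the concatenated result.
    if order_key is not None:
        rows.sort(key=order_key)
    return rows
```

A toy caller would pass a `query_batch` that hits the database; here a fake one suffices: `load_in_batches([3, 1, 2, 1], lambda b: [{"user_id": i} for i in b], order_key=lambda r: r["user_id"])` yields rows ordered by `user_id` with the duplicate id removed.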