Query batching & transactions support #667
Comments
@timsuchanek and I discussed this in a call and came to the following conclusion:

1. Don't introduce …
2. Introduce …
@mavilein While I don't want to dismiss the array style, I personally think this kind of API has proven to be very valuable and is the basis of libraries such as slonik and the more popular typeorm. One of the major reasons I'm very much in favour of handling transactions like this is that they allow for middleware-style approaches without having to explicitly think about the transaction itself. A great example of this usage is in one of Prisma's related libraries:

```ts
const MutationTransactionPlugin = plugin({
  name: "MutationTransactionPlugin",
  onCreateFieldResolver(config) {
    if (config.parentTypeConfig.name !== "Mutation") {
      return;
    }
    return async (root, args, ctx, info, next) => {
      return ctx.connection.transaction(async (entityManager) => {
        ctx.entityManager = entityManager;
        // Omitting the try-catch and restoring original entityManager here...
        const result = await next(root, args, ctx, info);
        return result;
      });
    };
  },
});
```

I've used similar approaches in many other applications; since many web frameworks employ a middleware approach, this technique is very powerful.
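The same middleware idea can be sketched without any framework. The following is a self-contained illustration, assuming a hypothetical scoped `transaction(fn)` API — every name below (`FakeClient`, `Tx`, `withTransaction`) is invented for the example and is not Prisma's actual client:

```typescript
// Hypothetical scoped-transaction client: only the shape matters here.
type Tx = { queries: string[] };

class FakeClient {
  log: string[] = [];

  // Runs `fn` inside a "transaction": queries buffered on `tx` are
  // committed together on success, or dropped entirely on failure.
  async transaction<T>(fn: (tx: Tx) => Promise<T>): Promise<T> {
    const tx: Tx = { queries: [] };
    const result = await fn(tx); // if this throws, nothing is committed
    this.log.push(...tx.queries); // "commit"
    return result;
  }
}

// Middleware in the style of the plugin above: every resolver runs
// inside a transaction without knowing about it.
type Resolver = (ctx: { tx: Tx }) => Promise<unknown>;

function withTransaction(client: FakeClient, resolver: Resolver): () => Promise<unknown> {
  return () => client.transaction((tx) => resolver({ tx }));
}

async function demo(): Promise<string[]> {
  const client = new FakeClient();
  const createUser: Resolver = async ({ tx }) => {
    tx.queries.push("INSERT user");
    tx.queries.push("INSERT profile");
    return "ok";
  };
  await withTransaction(client, createUser)();
  return client.log; // both writes committed together
}
```

The point of the sketch: because the transaction is opened by the wrapper, application code only ever sees `tx`, which is what makes the middleware composition possible.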
I agree with @Nayni. Not taking anything away from the array style: being able to intermix database activity with application logic within the scope of a transaction is really important. If you want Prisma to be treated and used in 'serious' ways, we really need good transaction support. If we can arrive at something along the lines of what is mentioned here as 'long-running transactions', then we'll have the best outcome. @Nayni points to a way to do that, perhaps.
@mavilein's approach seems reasonable in my opinion. I don't see any major limitations as to what can be done to form the array passed to `.transaction()`. On the other hand, I also feel like a scoped API would be easier to work with, and would probably be more appealing to anyone who has worked with older ORMs.
I want to be clear: I am in favor of Prisma providing both variations of `.transaction()`.

The major limitation I see with only supporting the array style is that it forces users of Prisma to come up with database designs or interaction patterns that achieve consistency by means other than what the database itself provides for such cases. E.g. if the requirement is to delete a certain set of records, all or nothing, you are now required to ensure that the state of each record being deleted matches the state it was found in when the logic made the determination to delete those records. I can think of a couple of ways of doing this with the `.transaction()` array, but it's a lot of effort, and ideally you have the ability to affect the design of the database. You don't always have that flexibility.

One method relies on each record having a 'state' value that, by convention, is updated to a new value on each record update. You then have to pass both the unique record id and the state id, and check that these still match before doing the update or delete.

If you are unable to alter the database design and application logic, but are just a user of it, an option is to pass in all of each record's column values as proof that the record is in the state you expect before updating or deleting it. Note: this will not guard against the record having been changed multiple times and returned to the same set of values you saw when computing which records need to be affected. It only tells you that the values in the columns are the same.

I would really prefer to avoid both of the above scenarios, and they can be avoided by providing the ability to execute arbitrary logic within the transaction scope.
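To make the first workaround concrete, here is a self-contained sketch of the 'state value' technique described above, using an in-memory map as a stand-in for a table (all names are illustrative; this is what the array style pushes you toward, not an API from any library):

```typescript
// Stand-in for a table whose rows carry a `state` counter that is, by
// convention, bumped on every update.
type Row = { id: number; state: number; value: string };

const table = new Map<number, Row>([
  [1, { id: 1, state: 0, value: "a" }],
  [2, { id: 2, state: 0, value: "b" }],
]);

// All-or-nothing delete in the array style: each delete must carry the
// state it observed, and the whole batch aborts if any record is stale.
function deleteAllIfUnchanged(observed: { id: number; state: number }[]): boolean {
  const stale = observed.some((o) => table.get(o.id)?.state !== o.state);
  if (stale) return false; // abort: something changed under us
  for (const o of observed) table.delete(o.id);
  return true;
}

// Decide what to delete, recording each record's observed state...
const toDelete = [...table.values()].map((r) => ({ id: r.id, state: r.state }));

// ...but before we act, a concurrent update bumps row 2's state (0 -> 1):
table.set(2, { id: 2, state: 1, value: "b2" });

// The batch now refuses to run, and nothing is deleted.
const deleted = deleteAllIfUnchanged(toDelete);
```

With a scoped transaction, none of this bookkeeping is needed: the re-check and the delete run under the database's own isolation guarantees instead of a hand-rolled state column.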
Certainly, the approaches above are probably something you'd have to do to some degree if you are working with something that has a front end to which data is provided, and where a request to make a change can come at some later point in time. You're not going to be able to use a database transaction for that kind of situation, as far as the end-to-end exchange with the client goes; the client might never come back, after all. Even then, you'd often still benefit from using a transaction so that you can re-validate that the data you expect is still ready and available to be deleted, and then proceed. In cases where there is no front end involved, e.g. back-end processing, having the ability to put logic within the transaction scope is highly advantageous versus the alternatives.
I suppose that I should also look at #242 if I'm interested in tracking dataloader / optimized batching support?
@jhanschoo We already batch related …
Problem
It's a common use case to send multiple queries at once. However, this is currently not possible with Prisma Client.
Note: Theoretically this should already be possible by leveraging the built-in dataloader functionality; however, the currently implemented optimizations only work for queries with the same pattern/selection set.
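To illustrate why the dataloader optimization only helps for identical selection sets, here is a minimal batching sketch (my own illustration, not Prisma's internals): calls made in the same tick are grouped by a selection-set key, and only calls sharing a key collapse into one round trip.

```typescript
// Minimal dataloader-style batcher (illustrative only): loads issued in the
// same tick with the SAME selection key are coalesced into one batch.
class Batcher {
  batches = 0; // number of simulated database round trips
  private pending = new Map<
    string,
    { ids: number[]; resolvers: ((v: string) => void)[] }
  >();

  load(selectionKey: string, id: number): Promise<string> {
    return new Promise((resolve) => {
      let group = this.pending.get(selectionKey);
      if (!group) {
        group = { ids: [], resolvers: [] };
        this.pending.set(selectionKey, group);
        // Flush this group once the current tick's loads have queued up.
        queueMicrotask(() => this.flush(selectionKey));
      }
      group.ids.push(id);
      group.resolvers.push(resolve);
    });
  }

  private flush(selectionKey: string) {
    const group = this.pending.get(selectionKey)!;
    this.pending.delete(selectionKey);
    this.batches++; // one round trip per distinct selection set
    group.ids.forEach((id, i) => group.resolvers[i](`${selectionKey}:${id}`));
  }
}

const b = new Batcher();
const all = Promise.all([
  b.load("user{id,name}", 1), // these two share a selection set...
  b.load("user{id,name}", 2), // ...so they collapse into one batch
  b.load("user{id}", 3), // different selection set: separate round trip
]);
```

Two of the three loads share a key, so the example ends up with two round trips instead of one — which is exactly the limitation described above: queries with different shapes cannot be batched this way.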
Solution
I suggest introducing an explicit batching API:
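The original API snippet is missing from this copy of the issue. Purely as an illustration of the array-style shape being discussed in this thread — the names `Query` and `batch` are hypothetical and not the actual proposal — an explicit batching call might look like:

```typescript
// Hypothetical shape only: an array of not-yet-executed query thunks is
// handed to one batch call, which resolves all results together.
type Query<T> = () => Promise<T>;

function batch<T>(queries: Query<T>[]): Promise<T[]> {
  // A real client would serialize all queries into a single wire request
  // (and could wrap them in one transaction); here they just run together.
  return Promise.all(queries.map((q) => q()));
}

// Usage sketch with stand-in queries:
const results = batch([
  async () => "created user",
  async () => "created profile",
]);
```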
Related: prisma/specs#356 and https://github.com/prisma/prisma-client-js/issues/349 and #667