Does / should revertData have old data as well? #416

Open
chandru89new opened this issue Dec 18, 2020 · 22 comments

@chandru89new

In the examples shown for revertData in the optimistic updates docs, I see that it takes only one parameter, and that parameter is the latest local data (after updateDataOptimistic has run).

For a use case where I am trying to edit some resource, I run into the issue that I am not able to revert the data unless I also pass the older data down.

Example:

const editBook = (editedBook) => ({
  type: EDIT_BOOK_OPTIMISTIC,
  request: {
    url: `/book/${editedBook.id}`,
    method: 'PUT',
  },
  meta: {
    mutations: {
      FETCH_BOOKS: {
        updateDataOptimistic: data => data.map(d => d.id === editedBook.id ? editedBook : d),
        revertData: data => ???
      }
    }
  }
})

In revertData here, I need access to the old/previous data for the book being edited.

Is this already available (and I am missing it)? Or, if it's not available and the only solution is to pass the older data down to the dispatch action (like editBook = (editedBook, oldBook)), is it possible to add the previous data as the second arg to revertData?
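
A minimal sketch of that workaround, assuming the caller already has oldBook at hand when dispatching:

// Sketch only: oldBook is passed in by the caller and captured by the closure,
// so revertData needs no 2nd argument.
const editBook = (editedBook, oldBook) => ({
  type: EDIT_BOOK_OPTIMISTIC,
  request: {
    url: `/book/${editedBook.id}`,
    method: 'PUT',
  },
  meta: {
    mutations: {
      FETCH_BOOKS: {
        updateDataOptimistic: data =>
          data.map(d => (d.id === editedBook.id ? editedBook : d)),
        revertData: data =>
          data.map(d => (d.id === editedBook.id ? oldBook : d)),
      },
    },
  },
})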

@klis87
Owner

klis87 commented Dec 18, 2020

Very good question! Actually you did it in the optimal way, passing the old data (current from the perspective of the time before the request is fired). Indeed, the old data is not passed to revertData. Some of the reasons:

  1. unlike many libraries for optimistic updates, we don't store any copy to revert back to in case of failure; what happens is that on request updateDataOptimistic is called, the data is optimistically updated, and the old data in the reducer is gone. Why? See the next point.
  2. we need to take concurrent requests of the same type into account: imagine a mutation of type X fired while another X is pending; the order of responses might differ from the order of requests, and any of them could fail
  3. many things could happen to the data; for example, during the X request another mutation could update this data, so reverting it back to the state before X could be wrong
  4. because of 2 and 3, this library doesn't make any assumptions; you can pass anything to the request action, like oldBook in your case, and do whatever you want in revertData
  5. instead of passing oldBook, alternatively you could use a thunk, for example https://redux-requests.klisiczynski.com/docs/guides/actions#thunks
  6. because revertData is called on error, reading data from the state cannot give you the old data (it doesn't exist in the reducer anymore). However, I really think that in theory we could achieve what you need; for example, we could do something like this in the middleware which handles request actions:
const oldData = getOldData(store.getState());
action = {
  ...action,
  meta: {
    ...action.meta,
    mutations: Object.entries(action.meta.mutations).reduce((prev, [queryType, mutation]) => {
      // keep a reference to the original revertData before overwriting it,
      // so the wrapper doesn't end up calling itself
      const originalRevertData = mutation.revertData;
      prev[queryType] = {
        ...mutation,
        revertData: originalRevertData && (data => originalRevertData(data, oldData)),
      };
      return prev;
    }, {}),
  },
}

So basically, in theory we could overwrite revertData by passing oldData in the middleware, which would then be correct due to the way JS closures work. I am not sure though whether we really should do it, due to the potential problems this could cause with parallel requests. Maybe it could be fine though, together with a proper warning somewhere. If you are interested, I wrote more on this topic here: https://dev.to/klis87/taming-network-with-redux-requests-part-6-optimistic-updates-4ig5. Wondering what's your opinion on all this!

@chandru89new
Author

Tbh, I never thought about race conditions in concurrent requests. That definitely seems to make things super complicated. Your post makes sense. I think it's safest to pass the old data to the config and use that in the revertData function. I will look into thunks as well.

Thanks much for replying in detail :)

p.s: I'll keep this open in case anyone else wants to comment their thoughts on it but please feel free to close this issue.

@klis87
Owner

klis87 commented Dec 19, 2020

Thx for your answer! And I will keep this open; it is kind of a question/potential feature request. Concurrency is a risk, but there are many cases in apps where it won't happen: for example, a button could be disabled while pending and so on, so no parallel mutations would even be possible. Then oldData passed by the lib might be handy; it would always save some work at the app level in such scenarios.

@klis87
Owner

klis87 commented Dec 29, 2020

@chandru89new I have another idea, building on your suggestion: in case of not passing revertData at all, we could provide default revert logic like (data, oldData) => oldData - that's it, the default would just take the data back to the state from before the mutation was fired. Without concurrent requests it would be correct; with concurrent ones, people could always take that into consideration and provide a custom revertData. We could also write some warnings about this automatic behaviour in the docs.
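
A tiny sketch of what that default could look like (withDefaultRevert is a hypothetical helper, purely illustrative of the idea, not the library's implementation):

// Hypothetical helper: inject (data, oldData) => oldData when no revertData is given.
const withDefaultRevert = mutation => ({
  ...mutation,
  revertData: mutation.revertData || ((data, oldData) => oldData),
});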

@klis87
Owner

klis87 commented Dec 29, 2020

I also plan adding things like a new effect: right now we have takeLatest and takeEvery, and we could also add takeLeading - in case of firing request X while another is pending, the second one wouldn't be fired; it would just take the response from the pending one. Then concurrency could be avoided in the first place, even without disabling buttons.
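
A rough sketch of those takeLeading semantics (takeLeadingRequest is a hypothetical helper, not library code; fireRequest is assumed to return a promise):

// While a request of a given type is pending, later dispatches of the same type
// are not fired; they share the pending request's promise.
const pendingByType = new Map();

const takeLeadingRequest = (type, fireRequest) => {
  if (!pendingByType.has(type)) {
    const promise = fireRequest().finally(() => pendingByType.delete(type));
    pendingByType.set(type, promise);
  }
  return pendingByType.get(type); // later callers resolve with the same response
};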

@chandru89new
Author

provide default revert logic like (data, oldData) => oldData

This was my first thought when I started playing with updateDataOptimistic, but I felt that it would become too opinionated. Also, explicitly asking the developer to handle the error scenario (in optimistic updates) seems the wiser thing to do instead of implicitly forcing a default behavior. (Again, this comes from the possibility that developers might do something elsewhere in the code that changes the actual oldData, which would mean the oldData in the revertData function is stale, and the developer might never know.)

@chandru89new
Author

I also plan adding things like a new effect: right now we have takeLatest and takeEvery, and we could also add takeLeading - in case of firing request X while another is pending, the second one wouldn't be fired; it would just take the response from the pending one. Then concurrency could be avoided in the first place, even without disabling buttons.

I'm thinking about this and running through some (possibly edge-case) scenarios. Overall, it seems like a nice idea.

I don't understand some bits though.

takeLeading ... in case of firing request X while another is pending, the second one wouldn't be fired; it would just take the response from the pending one.

  1. Let's say the UI fired a request (R#1) and optimistically updated.
  2. Now, the UI fires one more request (R#2) before there is any response for R#1. Given takeLeading: true, does the UI update optimistically again?
  3. Assuming R#1 failed, the revertData of R#1 would kick in. Now, the UI has gone two steps ahead (if it updated in step 2) and it will jump two steps back, assuming oldData exists and it is the state from when R#1 was fired.
  4. Will R#2 now actually make the request?

(Also, apologies if some of these are already answered by understanding the codebase; I haven't had the time/chance to go through the codebase and understand how things are designed yet.) Feel free to let me know if that's the case and I'll happily dive in :)

@chandru89new
Author

Sounds naive, but why not this rule?

If you use updateDataOptimistic and revertData in your request X, the library will automatically queue the requests of type X, so at any given point in time, revertData's oldData is always the last non-stale copy of your store from just before request X was made.
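
A minimal illustration of that queueing idea (enqueueRequest is a hypothetical helper, not part of the library): each request of a given type only fires after the previous one of the same type has settled, so the store already reflects its outcome.

const queueByType = new Map();

const enqueueRequest = (type, fireRequest) => {
  const tail = queueByType.get(type) || Promise.resolve();
  // Fire only after the previous request of the same type settles, whatever its outcome.
  const next = tail.then(fireRequest, fireRequest);
  queueByType.set(type, next);
  return next;
};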

@klis87
Owner

klis87 commented Dec 29, 2020

This was my first thought when I started playing with updateDataOptimistic, but I felt that it would become too opinionated.

I am wondering, maybe an option like meta.autoRevert: true could achieve this behaviour? The point is, the most common use case indeed is to revert just to... the old data. Sometimes it is possible to calculate the previous data by decrementing and so on (to provide a mathematically reverse operation compared to updateDataOptimistic), but I don't think all cases could handle race conditions. Maybe sometimes it is just better to remove any possibility of such issues by removing parallelism.

Now, UI fires one more request (R#2) before there is any response for R#1. Given takeLeading: true, does the UI update optimistically again?

No, because R#2 would just join R#1; no 2nd request would be fired at all, and the promise of the dispatched R#2 would resolve with the same response as R#1.

If you use updateDataOptimistic and revertData in your request X, the library will automatically queue the requests of type X, so at any given point in time, revertData's oldData is always the last non-stale copy of your store from just before request X was made.

This is a very good idea; I think this is yet another effect, like takeQueued. This would indeed solve the parallel request issues; reverting would work fine as everything would happen in a sequence.

Regarding takeLeading, this would work for mutations with the very same variables. For example, imagine a silly case like a likePost mutation, and imagine it is fired like this:

  1. data.liked is false initially
  2. R1 is fired, data.liked is optimistically set to true, oldData is false
  3. R2 is fired, duplicating 2), but fired anyway; for R2, both the new data and the old data have liked set to true
  4. R2 succeeded, data.liked is still true
  5. R1 fails, data.liked is incorrectly reverted to false! R2 was a success, so the post was liked!

takeLeading would solve this, as R2 wouldn't be fired at all. However, if likePost could have variables like id, then takeLeading would cause issues, as with likePost(1), likePost(2) the liking of post 2 would be incorrectly ignored. But then, together with meta.requestKey: id, this would again work!

Queues would also help in this scenario, even without requestKey, but performance would be worse as requests couldn't be fired at once.

To sum up, in my opinion this library should provide sensible defaults and have enough declarative configuration so that any real-life case like the above could be handled with a set of properties, for example a mixture of an effect and requestKey.
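
To make the likePost example concrete, a hedged sketch of such a config (LIKE_POST and FETCH_POSTS are assumed types, and takeLeading is the proposed effect discussed here, not an existing option; meta.requestKey is the existing one):

// Sketch only: takeLeading is the proposed effect, not a current meta option.
const likePost = id => ({
  type: LIKE_POST,
  request: { url: `/posts/${id}/like`, method: 'POST' },
  meta: {
    takeLeading: true, // proposed: dedupe parallel likes of the same post
    requestKey: String(id), // keep likes of different posts independent
    mutations: {
      FETCH_POSTS: {
        updateDataOptimistic: data =>
          data.map(p => (p.id === id ? { ...p, liked: true } : p)),
        revertData: data =>
          data.map(p => (p.id === id ? { ...p, liked: false } : p)),
      },
    },
  },
});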

@klis87
Owner

klis87 commented Dec 30, 2020

Btw, it seems that I will need to implement the takeLeading effect now, as I need it for SSR in suspense mode (a new upcoming feature)

@klis87
Owner

klis87 commented Dec 30, 2020

What's interesting, by adding the queue effect and takeLeading, we will match the RxJS operators for the most common AJAX use cases - https://dev.to/rxjs/about-switchmap-and-friends-2jmm

@klis87
Owner

klis87 commented Dec 30, 2020

We also match redux-saga, apart from queue/concatMap; I believe that one would need to be implemented in user land in sagas. Anyway, what I think is this:

  • takeEvery/mergeMap is perfect when order doesn't matter; in my opinion this is usually good for mutations, but bad for queries
  • takeLatest/switchMap is usually the best for queries, perfect for things like typeahead or pagination, so you prevent race conditions and don't waste resources; but it is very bad for mutations, unless maybe we don't care about mutation responses in some cases
  • concatMap is perfect for mutations with optimistic updates; queueing actually doesn't hurt perceived performance thanks to optimistic updates!
  • takeLeading/exhaustMap - this is really needed for SSR and queries, and on the client it would be useful for queries without variables, and perhaps for mutations which are rather singular at a given moment, to prevent redundant liking of an already liked entity and so on

For now we support takeLatest and takeEvery. takeLatest is the default for queries and takeEvery for mutations, but I am pretty sure that defaults could be set dynamically depending on circumstances, as described above. Then defaults would be sensibly chosen, but always possible to override.
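
A hedged sketch of how such dynamic defaults might be chosen (pure illustration, not the library's logic; treating actions with meta.mutations as mutations is a simplification, and 'takeQueued' is the proposed effect from this thread, not an existing option):

const defaultEffect = action => {
  // simplification: treat actions declaring meta.mutations as mutations
  const isMutation = Boolean(action.meta && action.meta.mutations);
  const isOptimistic =
    isMutation &&
    Object.values(action.meta.mutations).some(m => m.updateDataOptimistic);

  if (!isMutation) return 'takeLatest'; // queries: avoid race conditions
  return isOptimistic ? 'takeQueued' : 'takeEvery'; // queue optimistic mutations
};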

@klis87
Owner

klis87 commented Dec 31, 2020

For reference, this might actually be more problematic; let's try something complicated!

  1. Let's use queues; assume we have 3 mutations, R1, R2, R3, the initial data is 0, R1 sets it to 1, R2 to 2, R3 to 3, and we assume auto reverts to oldData
  2. We fire 3 mutations; data is optimistically set to 3, the R1 request is pending, R2 and R3 are queued
  3. Some random scenarios:
    a) R1 success, R2 success, R3 error: revert to oldData, which is 2
    b) R1 success, R2 error, R3 pending: data is still 3. This point is tricky: it seems that we don't need to do anything about the data, but we must. For the auto revert case this will work, but for a custom revertData like revertData: data => data - 1 and updateDataOptimistic: data => data + 1 (to revert incrementing by decrementing, for instance) we need to revert R3, revert R2, and then optimistically apply R3 again! So it is like reverting all queued requests in reverse order, reverting the errored one, and applying the optimistic updates again in the sending order (see the worked sketch below)
    c) R1 error, R2 pending, R3 queued: again we need to revert R3, revert R2, revert R1, then update R2, update R3

Thanks to queues I think this would cover all the cases, but it would require many unit tests not to make any mistake. The reward would be high though: we could automate optimistic updates without any risk of race conditions with reverts. This is the direction this lib should take, declarative and reliable by default, but still configurable.
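
A small worked sketch of scenario b) with the increment/decrement kind of custom updates (plain arithmetic for illustration, not library code):

const updateOptimistic = data => data + 1;
const revert = data => data - 1;

let data = 0;
data = updateOptimistic(data); // R1 fired, optimistic update -> 1
data = updateOptimistic(data); // R2 queued, optimistic update -> 2
data = updateOptimistic(data); // R3 queued, optimistic update -> 3

// R1 succeeds: nothing to do. Then R2 errors while R3 is still queued:
data = revert(data);           // undo R3's optimistic update (reverse order) -> 2
data = revert(data);           // undo the failed R2 -> 1
data = updateOptimistic(data); // re-apply R3's optimistic update -> 2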

@klis87
Owner

klis87 commented Dec 31, 2020

What is also interesting: the above pattern should take into consideration that the queue would be per type, so instead of like and unlike actions we should have a single setLike action; the point is to move all mutations which could suffer from a race condition under one action, with a dynamic config.

What we also need to take into account is the possibility of optimistically updating multiple queries at the same time; queues should be per query, or per mutation but iterated over each query to properly revert the data.

Another one: automatic normalisation must also be solved here.

@khalilbenachir

@klis87
currently I'm facing the same problem, so my question is how to add a custom middleware to redux-requests to achieve this behavior: if I don't add a custom revertData, it should return the old data. If you can help me with this I'll be thankful.

@klis87
Owner

klis87 commented May 7, 2021

@khalilbenachir please provide an example of what you want to achieve - an action example, how you use it, and what you would like to abstract into a middleware

@klis87
Owner

klis87 commented May 7, 2021

If you mean that you don't want to define the old data at all, but rather to have it return the old data automatically, I would need to think about it, but please confirm that this is what you need. Otherwise, please give me an example.

@khalilbenachir

khalilbenachir commented May 7, 2021

@klis87
What I want is: every time I don't specify revertData inside the mutations object, it should return oldData by default.
Example:

const editBook = (editedBook) => ({
  type: EDIT_BOOK_OPTIMISTIC,
  request: {
    url: `/book/${editedBook.id}`,
    method: 'PUT',
  },
  meta: {
    mutations: {
      FETCH_BOOKS: {
        updateDataOptimistic: data => data.map(d => d.id === editedBook.id ? editedBook : d),
        revertData: (data, oldData) => oldData // (I don't want to specify this every time; it should return oldData by default)
      }
    }
  }
})

@klis87
Owner

klis87 commented May 7, 2021

@khalilbenachir for now revertData doesn't even have a 2nd arg passed; this is a TODO for version 2.0.

For now, you could do sth like this:

import { getQuery } from '@redux-requests/core';

const mapMutations = (mutations, store) =>
  // map each [queryType, mutation] pair into a new mutations object
  Object.entries(mutations).reduce((prev, [queryType, mutation]) => {
    if (mutation.updateDataOptimistic && !mutation.revertData) {
      // read the query data as it is right now, before the optimistic update runs
      const currentState = getQuery(store.getState(), { type: queryType });
      // ofc remember not to mutate it, but do it in the redux way
      mutation = { ...mutation, revertData: () => currentState.data };
    }
    prev[queryType] = mutation;
    return prev;
  }, {});

const autoRevertMiddleware = store => next => action => {
  if (action.meta && action.meta.mutations) {
    return next({
      ...action,
      meta: {
        ...action.meta,
        mutations: mapMutations(action.meta.mutations, store),
      }   
    })
  } 
  
  return next(action);
}

This is not working code, and mapMutations is just a direction for how to do it, but this way you should achieve what you need. I hope that helps you!
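
For completeness, a hedged sketch of how such a middleware could be wired up (store setup follows the library docs with the axios driver; autoRevertMiddleware is the middleware sketched above, and placing it before requestsMiddleware, so actions are mapped before the library handles them, is an assumption):

import { createStore, applyMiddleware, combineReducers } from 'redux';
import { handleRequests } from '@redux-requests/core';
import { createDriver } from '@redux-requests/axios';
import axios from 'axios';

const { requestsReducer, requestsMiddleware } = handleRequests({
  driver: createDriver(axios),
});

const store = createStore(
  combineReducers({ requests: requestsReducer }),
  // autoRevertMiddleware first, so mutations are mapped before the library sees the action
  applyMiddleware(autoRevertMiddleware, ...requestsMiddleware),
);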

@khalilbenachir

khalilbenachir commented May 8, 2021

@klis87 I added a middleware to do the autoRevert, but I can't solve my problem. So my question is: why, when I dispatch a mutation action [POST_BOOK] without adding any mutations to the action, does the triggered mutation also update the queries [FETCH_BOOK]? How can that happen, if you have any idea?

@klis87
Owner

klis87 commented May 8, 2021

@khalilbenachir I need all actions and the middleware code to help you.

@klis87
Owner

klis87 commented May 11, 2021

@khalilbenachir I don't see any FETCH action in your example. Anyway, I believe I could only help you if you provided an example repo with a reproduction, especially since putConnaissanceClient doesn't even have the meta.autoRevert prop.
