Current sync logic stores every event in memory during sync. This means that large initial syncs often run out of memory (OOM). It is also difficult to debug exactly what is going on. Given our learnings, we can greatly simplify this process.
Two goals would be to (1) not rely so heavily on memory during sync, and (2) break each individual piece up so it is more logical, manageable, and debuggable.
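As a rough illustration of goal (1), here is a minimal sketch of a batched sync loop that flushes each block range to storage before fetching the next, so memory stays bounded by the batch size rather than the full history. The `fetchEvents`, `EventStore`, and `syncRange` names are hypothetical placeholders, not existing APIs in this repo.

```typescript
interface SyncEvent {
  blockNumber: number;
  data: string;
}

interface EventStore {
  saveBatch(events: SyncEvent[]): Promise<void>;
  getLastSyncedBlock(): Promise<number>;
}

// Stand-in for whatever RPC/log query the syncer actually uses.
type FetchEvents = (fromBlock: number, toBlock: number) => Promise<SyncEvent[]>;

async function syncRange(
  fetchEvents: FetchEvents,
  store: EventStore,
  latestBlock: number,
  batchSize = 2_000,
): Promise<void> {
  let from = (await store.getLastSyncedBlock()) + 1;
  while (from <= latestBlock) {
    const to = Math.min(from + batchSize - 1, latestBlock);
    // Each batch is fetched, persisted, then dropped before the next one,
    // which also makes it easy to see exactly which range failed.
    const events = await fetchEvents(from, to);
    await store.saveBatch(events);
    console.log(`synced blocks ${from}-${to} (${events.length} events)`);
    from = to + 1;
  }
}
```

Because each batch is logged and persisted independently, a failed sync can resume from the last committed range instead of restarting from scratch, which also addresses goal (2).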
Figure out how to avoid syncing all the way back. In practice, we only need a few days of items except in rare cases. Accounting for this solves almost all of the resource issues we've ever seen with V1.
Possibly have an "archive" sync like geth's, but honestly this will probably never be used and is thus not worth adding.
It might be best to handle the "rare cases" mentioned above in a custom way. For example, if a G→P route isn't widely used, it may be an arbitrary number of months before the root is created. If we regularly prune, we won't see all of these transfers anymore. We can have logic that says "if pruned, then recollect the data for use now" (see the sketch below). This logic will be resource intensive, but it only happens roughly once per year, so it is worth it.
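A minimal sketch of that "if pruned, then recollect" path, reusing the hypothetical `SyncEvent` and `FetchEvents` types from the sketch above: when a requested range falls before the retained window, backfill just that range on demand rather than keeping full history. `PrunableStore` and `getEventsWithBackfill` are illustrative names, not existing code.

```typescript
interface PrunableStore {
  oldestRetainedBlock(): Promise<number>;
  getEvents(fromBlock: number, toBlock: number): Promise<SyncEvent[]>;
  saveBatch(events: SyncEvent[]): Promise<void>;
}

async function getEventsWithBackfill(
  store: PrunableStore,
  fetchEvents: FetchEvents,
  fromBlock: number,
  toBlock: number,
): Promise<SyncEvent[]> {
  const oldest = await store.oldestRetainedBlock();
  if (fromBlock < oldest) {
    // Expensive one-off recollection of the pruned range; acceptable
    // because it should only be hit for rarely used routes.
    const recollected = await fetchEvents(fromBlock, Math.min(oldest - 1, toBlock));
    await store.saveBatch(recollected);
  }
  return store.getEvents(fromBlock, toBlock);
}
```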