Separate parsing and normalization #2841
the BufferedSource version
__typename needs to happen first
They were relying on the JSON containing "id", but if we use the models as the source of truth, that doesn't work anymore.
It's not needed anymore since we no longer normalize while parsing.
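A quick illustration of why "__typename needs to happen first" when streaming: the concrete adapter for a polymorphic field can only be chosen once `__typename` has been read, so it must be the first field encountered. This is a hypothetical, self-contained sketch, not the actual apollo-android adapter API; `parseShape` and the field iterator are invented for illustration.

```kotlin
// Sketch only: in a streaming parser we see fields one at a time and cannot
// look ahead, so the type discriminator must arrive before the other fields.
fun parseShape(fields: Iterator<Pair<String, String>>): String {
    val (key, value) = fields.next()
    require(key == "__typename") { "__typename must come first when streaming" }
    return when (value) {
        "Human" -> "parsing with HumanAdapter"
        "Droid" -> "parsing with DroidAdapter"
        else -> error("unknown type $value")
    }
}
```

With a buffered `Map`, the parser could look `__typename` up in any order; streaming removes that freedom, hence the reordering.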
```kotlin
val data = operation.adapter().fromResponse(responseReader, null)
val records = operation.normalize(data, customScalarAdapters, networkResponseNormalizer() as ResponseNormalizer<Map<String, Any>?>)
```
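To make the "normalize from the models" idea concrete, here is a minimal, self-contained sketch of what normalization produces: the parsed object graph is flattened into records keyed by cache key, with nested entities replaced by references. `Hero`, `Record`, and the `ref:` convention are invented for illustration and are not the actual apollo-android types.

```kotlin
// Illustrative model type; real models are generated from the GraphQL schema.
data class Hero(val id: String, val name: String, val friends: List<Hero> = emptyList())

typealias Record = Map<String, Any>

// Flatten the object graph: each entity becomes one flat record, and
// nested entities are stored as references to their own cache key.
fun normalize(hero: Hero, records: MutableMap<String, Record> = mutableMapOf()): Map<String, Record> {
    records[hero.id] = mapOf(
        "id" to hero.id,
        "name" to hero.name,
        "friends" to hero.friends.map { "ref:${it.id}" },
    )
    hero.friends.forEach { normalize(it, records) }
    return records
}
```

The key point of the PR is that this step now takes the typed `data` as input, rather than being interleaved with JSON parsing, so the same code path serves both network responses and the cache write APIs.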
Since we always use `networkResponseNormalizer` here, we can remove the parameter and do other cleanups. I didn't do them yet in order to not clash with #2839.
```diff
 builder<D>(operation)
     .data(data)
     .fromCache(true)
-    .dependentKeys(responseNormalizer.dependentKeys())
+    .dependentKeys(records.dependentKeys()) // Do we need the dependentKeys here?
```
Is there a specific reason why we would need the dependent keys when reading from the cache?
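For context on the question: one common reason to keep dependent keys on cache reads is watchers, which need to know which record keys a result depended on so they can re-run when a later write touches any of them. This is a hypothetical sketch of that mechanism; `Watcher`, `Store`, and `publish` are invented names, not the actual apollo-android API.

```kotlin
// A watcher remembers the record keys its last result depended on
// and is re-notified when a write changes any of them.
class Watcher(val dependentKeys: Set<String>, val onChange: () -> Unit)

class Store {
    private val watchers = mutableListOf<Watcher>()

    fun watch(w: Watcher) { watchers += w }

    // Publish the keys changed by a write; notify overlapping watchers.
    fun publish(changedKeys: Set<String>) {
        watchers.filter { it.dependentKeys.intersect(changedKeys).isNotEmpty() }
            .forEach { it.onChange() }
    }
}
```

If nothing downstream consumes the keys from a plain cache read, they could indeed be dropped there; the sketch only shows why they exist at all.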
Remove normalization from the parsing step. Normalization is now done from the models, like it is done when using the cache write APIs.
Advantages:

- Fixes `ApolloStore#writeAndPublish` overwrites cache entries when using fragments (#2818).
- This also enables the streaming parser in `ApolloParseInterceptor`, which is the main use case, and is something like ~25% faster on the whole parsing + normalization path, potentially even faster without normalization 🎉 (see the README.md in the benchmarks for actual numbers).