A problem I have: I want different `int8` columns to deserialize differently: `number` vs `BigInt` vs `Buffer`.
I have schema-aware JSON deserializers. I'd like to use those to deserialize different `jsonb` columns differently.
With "pg", you can only have a single serializer/deserializer per type.
Mammoth has richer type information, e.g. `jsonb<Article>`. If Mammoth allowed transforming values, I could get exactly the serialization/deserialization I want.
One downside is performance. But maybe there won't be a performance hit if Mammoth hooks into "pg" at a lower level, parsing the text/binary protocol directly. (Hopefully "pg-protocol" can do most of the work.)
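To illustrate why a single global `int8` parser doesn't fit every column, here is a small sketch (the variable names are mine) showing the three target representations for one raw value. The point is that the right choice is per-column: `number` loses precision past 2^53, `BigInt` is exact, and `Buffer` keeps raw bytes.

```typescript
// The same raw int8 text value, interpreted three ways.
const raw = '9007199254740993'; // 2^53 + 1, beyond Number.MAX_SAFE_INTEGER

const asNumber = Number(raw);      // lossy: rounds to 9007199254740992
const asBigInt = BigInt(raw);      // exact 64-bit integer
const asBuffer = Buffer.from(raw); // raw bytes of the text representation
```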
The statement about "parsing the text/binary protocol directly" was a bit extreme. I think we might get acceptable performance just by passing in a custom value parser, e.g.:
This would just relay the raw string/binary value, so Mammoth can parse the values depending on the Mammoth schema. (It can use "pg-types" to do the low-level work.)
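A minimal sketch of what that custom parser might look like (the `rawTypes` name is mine; `getTypeParser` is the hook node-postgres calls to look up a parser for each column's type OID):

```typescript
// A pass-through "types" object for node-postgres: every column's
// value parser is the identity function, so the driver hands back
// the raw text value unparsed.
const rawTypes = {
  getTypeParser: (_oid: number) => (value: string) => value,
};

// Hypothetical usage with a connected pg Client:
// const res = await client.query({
//   text: 'SELECT id, payload FROM articles',
//   types: rawTypes,
// });
// Each field in res.rows is then the raw string, which a schema-aware
// layer could parse per-column (delegating to "pg-types" as needed).
```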