Hello,
Currently I only know how to read and write e57 files while storing the data for an entire file in a buffer. Since I have some large .e57's I would like to work with, I was wondering if it was possible to:
Read points 0-5 million, write 5 million points, then read points 5-10 million, etc., so that not so much memory is used at once.
Looking at the docs I saw CompressedVectorReader.seek(), but I was not able to get this working for me. I have also not been able to find any example in the tests. If anyone could outline a way for this to be done I would greatly appreciate it.
This is related to #79 - though you are also asking for a batch/streaming interface.
(I've mentioned in other places that I started a new implementation from scratch a while ago. I'd implemented batched reading the way you describe because I think it makes a lot of sense!)
Sorry to revive this, but is the fact that libe57 uses Xerces preventing file streaming? I'm just curious as I've been using this library a lot and have started to poke around the code to gain a better understanding.
Great project btw.
is the fact that libe57 uses Xerces preventing file streaming?
Nope - that's a separate issue. The issue with Xerces is that it is like using a sledgehammer to push a tack into cork - and it's been a constant source of problems to include & build. A small simple implementation like pugixml would be better. The structure of libE57Format's code, however, makes replacing the XML a fair bit of work.
For streaming, I think it would be possible to implement CompressedVectorReaderImpl::seek and use it somehow (which I believe was the original intent), but not efficiently because the library doesn't implement certain features from the standard (e.g. indexing).
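For purely sequential batched reading, seek() isn't strictly necessary: each call to CompressedVectorReader::read() fills the destination buffers with at most one buffer's worth of points and then advances, so looping over read() walks the whole file in bounded memory. A minimal sketch of that pattern, assuming the scan's prototype contains cartesianX/Y/Z elements (the file name and batch size here are placeholders):

```cpp
#include "E57Format.h"
#include <cstdint>
#include <vector>

int main()
{
    e57::ImageFile imf( "scan.e57", "r" ); // hypothetical input file

    e57::VectorNode data3D( imf.root().get( "/data3D" ) );
    e57::StructureNode scan( data3D.get( 0 ) ); // first scan in the file
    e57::CompressedVectorNode points( scan.get( "points" ) );

    // Fixed-size destination buffers: read() fills at most BATCH points
    // per call, so memory use stays bounded regardless of file size.
    const size_t BATCH = 5000000; // e.g. 5 million points per batch

    std::vector<double> x( BATCH ), y( BATCH ), z( BATCH );

    std::vector<e57::SourceDestBuffer> buffers;
    buffers.emplace_back( imf, "cartesianX", x.data(), BATCH, true, true );
    buffers.emplace_back( imf, "cartesianY", y.data(), BATCH, true, true );
    buffers.emplace_back( imf, "cartesianZ", z.data(), BATCH, true, true );

    e57::CompressedVectorReader reader = points.reader( buffers );

    uint64_t total = 0;
    unsigned count = 0;
    while ( ( count = reader.read() ) > 0 )
    {
        // Process (or write out) x[0..count), y[0..count), z[0..count)
        // here, before the next read() overwrites the buffers.
        total += count;
    }

    reader.close();
    imf.close();
}
```

What this doesn't give you is random access (e.g. jumping straight to point 50 million); that's where seek() and the indexing features mentioned above would come in.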