Which OpenObserve functionalities are the source of the bug?
ingestion
Is this a regression?
No
Description
Hi Team,
We are using OpenObserve for our production system, storing logs in an S3 bucket. Normally, by the end of the day the individual Parquet files (as I understand it, a Parquet file is created every 10 minutes, or when it reaches 5 MB / 10 MB, whichever comes first) are bundled into a single Parquet file. But recently this compaction stopped happening.
Do we need to set that argument explicitly in order to make it work? From the documentation, we understood that ZO_COMPACT_MAX_FILE_SIZE is enabled by default. Please correct me if I am wrong.
| Environment Variable | Default | Mandatory | Description |
|---|---|---|---|
| ZO_COMPACT_ENABLED | true | No | Enable compaction for small files. |
| ZO_COMPACT_INTERVAL | 60 | No | Interval at which the job compacts small files into larger files. Default is 60s. Unit: seconds. |
| ZO_COMPACT_MAX_FILE_SIZE | 256 | No | Max file size for a single compacted file; after compaction all files will be below this value. Default is 256 MB. Unit: MB. |
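In case compaction has been switched off somewhere in the deployment, a minimal sketch of setting these variables explicitly before starting OpenObserve (the values shown simply mirror the documented defaults from the table above):

```shell
# Explicitly set the compaction-related environment variables.
# These are the documented defaults, repeated here only for illustration.
export ZO_COMPACT_ENABLED=true        # enable compaction of small files
export ZO_COMPACT_INTERVAL=60         # run the compaction job every 60 seconds
export ZO_COMPACT_MAX_FILE_SIZE=256   # cap each compacted file at 256 MB
```

In a Docker or Kubernetes deployment, the same keys would go into the container's environment (e.g. the `environment:` section of a compose file or the pod spec's `env:` list).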
Please help us resolve this issue.
The OpenObserve version we are using is:
Version
v0.8.0
Commit Hash
921b397
Build Date
2024-01-29T17:18:37Z
Please provide a link to a minimal reproduction of the bug
No response
Please provide the exception or error you saw
No response
Please provide the version you discovered this bug in (check about page for version information)
No response
Anything else?
No response