Historical .dat files are not truly deleted, and the disk is full #488
Comments
Hi @lisabiya, I am one of the main maintainers of this project. I did not fully understand which files should be deleted in your use case. Could you explain more about those files?
Hello @lisabiya, Merge only removes invalid data and rewrites the valid data back to disk; it does not regularly delete all files. We also recommend updating nutsdb to the latest version, as we have fixed some bugs and made some optimizations since your release.
Thank you for the reminder. I have now updated to version 0.14.1 and will continue to observe the data. However, it may take several days for enough data to accumulate before the problem shows up again. I will follow up with feedback then. Thank you again for your help.
If you want key-value pairs that are no longer needed to be removed automatically, setting an expiration (TTL) on them may be more appropriate. 😁 @lisabiya
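For reference, a minimal sketch of storing cache entries with a TTL in nutsdb, so that expired pairs become invalid data that Merge can later reclaim. The bucket name, key, and directory below are illustrative; the options-style Open call follows the nutsdb README, and details may differ slightly between versions.

```go
package main

import (
	"log"

	"github.com/nutsdb/nutsdb"
)

func main() {
	// Open a database in a local directory (path is illustrative).
	db, err := nutsdb.Open(
		nutsdb.DefaultOptions,
		nutsdb.WithDir("/tmp/nutsdb-cache"),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Put with a TTL (in seconds). After 3600s the entry expires,
	// becomes invalid data, and a later Merge can reclaim its space.
	err = db.Update(func(tx *nutsdb.Tx) error {
		return tx.Put("cacheBucket", []byte("user:1"), []byte("value"), 3600)
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

A TTL of 0 means the entry never expires, which is why purely overwritten keys still depend on Merge to free disk space.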
Unfortunately, after I upgraded to 0.14.1 the problem still exists. What I have identified is that the most frequently used bucket is the one configured for caching. I have a question about this: if the same key is repeatedly overwritten, will the old data be released? Here is an example:
I think we may have hit this issue: And it is saying that:
Thanks for the detailed description. When a new value is written with the same key, the old data on disk is not released immediately; it is only released when Merge is triggered. I re-read your description, and the cause may be that after a file is deleted, its file descriptor is still held by the nutsdb instance, so the disk space is not released until the process restarts.
A fix will be made after we verify this. @lisabiya
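To illustrate the mechanism described above, here is a self-contained sketch in plain Go (not nutsdb's actual code) of why overwriting the same key in an append-only store grows the file until a merge pass rewrites only the live entries:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// simulateAppendLog overwrites one key n times in an append-only log,
// then "merges" by rewriting only the latest version. It returns the
// file sizes before and after the merge.
func simulateAppendLog(n int) (before, after int64, err error) {
	dir, err := os.MkdirTemp("", "appendlog")
	if err != nil {
		return 0, 0, err
	}
	defer os.RemoveAll(dir)

	logPath := filepath.Join(dir, "data.log")
	f, err := os.OpenFile(logPath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return 0, 0, err
	}
	// Every overwrite is appended; all n-1 stale versions stay on disk.
	for i := 0; i < n; i++ {
		fmt.Fprintf(f, "set mykey value-%d\n", i)
	}
	f.Close()

	st, err := os.Stat(logPath)
	if err != nil {
		return 0, 0, err
	}
	before = st.Size()

	// "Merge": keep only the live (latest) version of each key.
	mergedPath := filepath.Join(dir, "data.merged")
	live := fmt.Sprintf("set mykey value-%d\n", n-1)
	if err := os.WriteFile(mergedPath, []byte(live), 0o644); err != nil {
		return 0, 0, err
	}
	st, err = os.Stat(mergedPath)
	if err != nil {
		return 0, 0, err
	}
	after = st.Size()
	return before, after, nil
}

func main() {
	before, after, err := simulateAppendLog(1000)
	if err != nil {
		panic(err)
	}
	fmt.Printf("before merge: %d bytes, after merge: %d bytes\n", before, after)
}
```

nutsdb exposes the real compaction via `db.Merge()`; until it runs (and until any deleted-but-open file descriptors are closed), stale versions keep occupying disk space.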
Describe the bug
I investigated BadgerDB, BoltDB, and nutsDB, and I think nutsDB's bucket design, with its Redis-like data structures and operators, best meets my product's requirements.
I am currently trying nutsdb in a production environment, and it had been running very well. However, a few days later I received a server alarm saying the hard drive was full.
After querying with lsof | grep deleted, I found that many files had not actually been deleted and were still occupying a large amount of space.
Give sample code if you can.
init
Expected behavior
I expected stale files to be deleted on schedule and their disk space reclaimed.
What actually happens
Disk usage is extremely high.
Screenshots
Please complete the following information:
Additional context
Looking forward to your reply