Performance with persistent tables (Sled) #1457
Did you mean that selecting 360,000 (100 * 60 * 60) rows from sled storage took 2.7 seconds, or selecting 1 row out of 360,000 rows? And I wonder what the full query text was.
https://gist.github.com/JakkuSakura/4bb9678501dbabf56c1b6d95269740aa
It is 3,600 rows out of 360,000, i.e. getting one symbol out of 100 symbols for 1 hr of data.
Jakku has set up a test with a slight modification to the above, changing the symbol string to a symbol ID. It helped quite a lot, so my next optimization guess is to change from Decimal to f64. Persistent database insertion for 1s data: 2ms. But again, a query selecting 86,400 rows out of 8,640,000 in a persistent table takes 800+ms, which is quite slow; we are aiming for somewhere below 50ms.
I'm wondering what the limiting factors to the performance are here. I get that changing the symbol string to an ID improves performance, because it reduces the time spent on symbol comparison. But does, say, the size of each row make a big difference to GlueSQL selection performance? Roughly speaking, Sled is structured as a BTreeMap, so the best possible query should be O(log N). However, 1hr vs 24hr of data currently takes 36ms vs 838ms, roughly a 24x ratio that looks like O(N), instead of the roughly 4.5x ratio O(log N) would give. The same applies to SharedMemoryStorage: 17ms vs 385ms is also around 24x. The per-row work is taking so much time that the O(log N) problem effectively becomes an O(N) one.
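A sketch of why key design matters for the O(log N) vs O(N) question above: if rows are keyed by a composite (symbol_id, timestamp) key, an ordered map can answer the one-symbol query with a range scan, O(log N + k), instead of filtering every row, O(N). This uses std's `BTreeMap` as a stand-in for sled's ordered keyspace; the key layout is an assumption for illustration, not GlueSQL's actual row encoding.

```rust
use std::collections::BTreeMap;

// Stand-in for sled's ordered keyspace: composite (symbol_id, timestamp) keys.
// This key layout is illustrative, not GlueSQL's actual encoding.
type Key = (u32, u64); // (symbol_id, unix_seconds)

/// O(N): visit every row and filter by symbol, like a full-table scan.
fn full_scan(rows: &BTreeMap<Key, f64>, symbol: u32) -> Vec<f64> {
    rows.iter()
        .filter(|((s, _), _)| *s == symbol)
        .map(|(_, v)| *v)
        .collect()
}

/// O(log N + k): jump straight to the symbol's key range and read only its rows.
fn range_scan(rows: &BTreeMap<Key, f64>, symbol: u32) -> Vec<f64> {
    rows.range((symbol, 0)..(symbol + 1, 0))
        .map(|(_, v)| *v)
        .collect()
}

fn main() {
    let mut rows = BTreeMap::new();
    for symbol in 0..100u32 {
        for t in 0..3600u64 {
            rows.insert((symbol, t), symbol as f64);
        }
    }
    // Both strategies return the same 3,600 rows for one symbol...
    assert_eq!(full_scan(&rows, 42), range_scan(&rows, 42));
    // ...but the range scan only touches that symbol's keys.
    assert_eq!(range_scan(&rows, 42).len(), 3600);
}
```

The 24x scaling in the numbers above is consistent with the `full_scan` shape: every row is visited regardless of how few rows the query actually returns.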
@kanekoshoyu I'm interested in knowing if you've figured this out. Since
Hi @jeromegn
This is because I ran it under flamegraph and saw about a quarter of its runtime spent on file IO. One other thing I found is that the stock sled storage does not handle concurrency well: between the start and end of a transaction it locks the file, so each transaction has to be "atomic". I have a modded version wrapped in Arc<RwLock> that you can run concurrently without the database being locked. (You need to gracefully terminate the program, though.)
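The wrapper idea can be sketched with plain std types: readers share the storage through `Arc<RwLock<_>>` and take the shared lock, while writers take the exclusive lock, so concurrent queries no longer serialize behind a whole-file transaction lock. This is a minimal illustration of the pattern, not the actual `gluesql_shared_sled_storage` code, and the `BTreeMap` here is a stand-in for the real storage engine.

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Minimal stand-in for a storage engine; the real crate wraps SledStorage.
type Store = Arc<RwLock<BTreeMap<u64, f64>>>;

fn insert(store: &Store, key: u64, value: f64) {
    // Writers take the exclusive lock only for the duration of the write.
    store.write().unwrap().insert(key, value);
}

fn read(store: &Store, key: u64) -> Option<f64> {
    // Readers take the shared lock, so concurrent queries don't block each other.
    store.read().unwrap().get(&key).copied()
}

fn main() {
    let store: Store = Arc::new(RwLock::new(BTreeMap::new()));
    insert(&store, 1, 100.5);

    // Spawn several concurrent readers; none of them blocks the others.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let s = Arc::clone(&store);
            thread::spawn(move || read(&s, 1))
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), Some(100.5));
    }
}
```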
Although the table storage structure might be a hashmap, filtering and sorting still make the query O(n) or worse by nature, so a query whose input is reduced by a factor of 100 is at least 100 times faster.
This is the link to the modded sled storage: https://github.com/kanekoshoyu/gluesql_shared_sled_storage
Also, a few micro-optimisations here and there: do not do string sorting/filtering, it is too slow; try using an index instead. And try using the AST directly instead of the query string, since each query string is converted to an AST at run-time.
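The "use the AST directly" advice amounts to parsing each query text once and reusing the parsed statement on repeated executions. The sketch below shows the cache-the-parse pattern with a placeholder `parse` function and `Statement` type, since the exact GlueSQL API calls are not quoted in this thread; in real code these would be GlueSQL's own parse step and AST type.

```rust
use std::collections::HashMap;

// Placeholder for a parsed statement; in GlueSQL this would be the AST type
// produced by its parse step, not this struct.
#[derive(Clone, Debug, PartialEq)]
struct Statement(String);

// Placeholder parser standing in for the expensive SQL -> AST conversion.
fn parse(sql: &str) -> Statement {
    Statement(sql.trim().to_uppercase())
}

/// Cache of parsed statements keyed by query text: repeated queries skip parsing.
struct StatementCache {
    cache: HashMap<String, Statement>,
    misses: usize, // counts how many times we actually had to parse
}

impl StatementCache {
    fn new() -> Self {
        Self { cache: HashMap::new(), misses: 0 }
    }

    fn get(&mut self, sql: &str) -> &Statement {
        if !self.cache.contains_key(sql) {
            self.misses += 1;
            self.cache.insert(sql.to_string(), parse(sql));
        }
        self.cache.get(sql).unwrap()
    }
}

fn main() {
    let mut cache = StatementCache::new();
    let q = "select price from candles where symbol_id = 42";
    cache.get(q);
    cache.get(q); // second call hits the cache: no re-parse
    assert_eq!(cache.misses, 1);
}
```

For hot queries that run many times per second, this moves the parse cost out of the query path entirely.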
By the way, the "HighThroughput" and "LowSpace" modes don't actually do anything. I'm unsure why it's even possible to set them; it's a no-op.
Hi,
lately I've been doing quite a lot of work with GlueSQL, and one thing I use it for is storing financial data. However, I find that GlueSQL's performance is not sufficient for my requirements. Namely, the persistent tables are really slow, so I was wondering if there is any optimization you would recommend for either SharedMemoryStorage or SledStorage.
Currently I have about 100 (symbols) * 24 * 60 * 60 seconds of data that I have to query many times. Each data point is a row in the table. I was expecting to run multiple queries per second, but apparently a single selection already takes a lot of time, as below.
I changed the Sled Config to use a 2GB data cache, which is the exact same amount as the data stored, and set the Mode to HighThroughput. It did improve performance a bit.
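For reference, the tuning described above looks roughly like this with sled's `Config` builder (`cache_capacity` is in bytes). The path is a made-up example, and whether GlueSQL's `SledStorage` accepts a `sled::Config` via `try_from` may depend on the GlueSQL version, so treat the last line as an assumption to check against the docs:

```rust
use gluesql::prelude::SledStorage;

// 2 GiB cache, matching the on-disk data size, plus HighThroughput mode
// (which, per the comment above, appears to be a no-op).
let config = sled::Config::new()
    .path("data/candles.db") // hypothetical path
    .cache_capacity(2 * 1024 * 1024 * 1024)
    .mode(sled::Mode::HighThroughput);

// Assumed constructor; verify against your GlueSQL version's SledStorage API.
let storage = SledStorage::try_from(config)?;
```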
I wonder if proper use of a primary key within GlueSQL would help with performance. Any optimization options would be appreciated.