Try fst to support fuzzy search #118
Comments
Not sure I know how to answer that without more details on how you're building the index. "Fuzzy search" can refer to many different things... I'd be happy to say more if you shared more of your plan. :-)
Sure, sorry. Here's a writeup. It's basically using a bloom filter now, but only full words can be searched. What I'd like to have is an optional, alternative engine that could provide prefix matches and "typo correction" (e.g. configurable Levenshtein distance). From what I can tell, all of this is supported by fst, and it was recommended to me by two independent people, so I thought about giving it a try. My only concern is that the output size of the final (wasm) package, which contains the index, will be too big to serve directly to clients.
I guess what I meant was: what specifically would be the keys in your FST? I see your JSON file, but there are many different things there. Is it every word in the file?

Either way, it's hard for me to say up front. Just think of an FST as a finite state machine. If there are a lot of common prefixes or suffixes, then an FST should be able to exploit that. I would, however, expect a bloom filter to give you more control over the size of your index. E.g., you can make trade-offs like "give me a smaller filter, perhaps at the expense of a higher false positive rate" more easily than you can with an FST.

But yes, if your FST keys are words, then you could use Levenshtein to find all matching words within an edit distance of 1, for example. And that's a pretty easy and well-supported thing you can do via the fst crate.
Okay, I see. From my prior tests, I found that there are indeed a lot of common prefixes (at least for the sample dataset from my blog). I didn't even consider suffixes, but potentially exploiting that would be really nice. Seeing that it's not an entirely weird idea, I'll go ahead and give it a shot (or somebody else who finds the time). Will report back here.
Great! Would love to hear how it goes and bounce more ideas around.
Thanks for the quick response and the encouragement. Stay tuned for updates.
https://github.com/BurntSushi/fst
https://blog.burntsushi.net/transducers/#common-crawl
I expect this to blow up the index size significantly, but it might be an alternative backend that could be enabled with a feature flag, e.g.
tinysearch --engine=fst
or so.

@BurntSushi, do you think that a corpus like that would compress nicely with fst? https://github.com/tinysearch/tinysearch/blob/master/fixtures/index.json
Think I'll play around with your sample code on the main repository when I find the time.