Plugin and/or passed-callable mechanisms for customizing `Inventory.suggest()` scoring (#207)
Comments
To some extent, the plug-in mechanism and the plug-in selection may be decoupled. How would a command-line user select an alternative scoring mechanism? Unless either all detected scorers would be used, or some heuristic would select a scoring mechanism based on the query, then presumably a command-line option would name an alternate scoring function, right? Once the alternative is named by the user, you could scan the available plugins for it. Using special plugin names is easy and common; Sphinx seems to encourage this pattern.
Yeah, I've never written a plugin system before, so I will definitely consult existing best practices. My current plan for the cascade of how a pluggable scorer would be selected is in the numbered list in the original issue comment. (1) and (2) would just involve passing/storing callable Python objects, simple enough. For (3) and (4), I figure I would use the same entry-point object-reference syntax used in the packaging ecosystem. For (5), I would want to use the stricter approach of declared package entry points.
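For illustration, the packaging-style `"module:attr"` object-reference syntax mentioned here could be resolved with a small helper; the following is a sketch, and the `load_scorer` name is hypothetical, not part of the sphobjinv API:

```python
import importlib


def load_scorer(spec):
    """Resolve a scorer callable from a 'module:attr' reference string.

    Hypothetical helper illustrating the entry-point-style syntax
    discussed above, e.g. "mypkg.scoring:my_scorer".
    """
    modname, sep, attrname = spec.partition(":")
    if not sep or not attrname:
        raise ValueError(f"Expected 'module:attr' spec, got {spec!r}")
    obj = importlib.import_module(modname)
    for part in attrname.split("."):  # support dotted attribute paths
        obj = getattr(obj, part)
    if not callable(obj):
        raise TypeError(f"{spec!r} does not resolve to a callable")
    return obj
```

For example, `load_scorer("os.path:join")` would return the `os.path.join` function.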
I guess that's one of the big questions to decide, though.
Hehe, exactly what you said.
The CLI arg for (3) could line up with the entry-point spec syntax. Then, the env variable for (4) would just be its environment-variable counterpart. Ahh, and (5) wouldn't be a mechanism for specifying the scorer to use, but just for provisioning a scorer to be available...!
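The cascade for (3) and (4) amounts to a simple precedence check; here is a minimal sketch, where the environment-variable name and default spec are illustrative assumptions, not settled choices:

```python
import os

# Hypothetical built-in default; fuzzywuzzy is the current scorer.
DEFAULT_SCORER_SPEC = "fuzzywuzzy"


def resolve_scorer_spec(cli_arg=None, env_var="SPHOBJINV_SUGGEST_SCORER"):
    """Pick the scorer spec via the cascade discussed above.

    Precedence: (3) explicit CLI option, then (4) environment
    variable, then the packaged default. The env var name here is a
    placeholder, not a decided interface.
    """
    if cli_arg:
        return cli_arg
    env_val = os.environ.get(env_var)
    if env_val:
        return env_val
    return DEFAULT_SCORER_SPEC
```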
For plugin scorers, we should also expose a mechanism to let the plugin define the default suggest threshold for CLI display, because different metrics may tend to lie in different ranges of 0-100.
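One lightweight way to let a plugin advertise its own display threshold is an optional attribute on the scorer callable; this is a sketch under that assumption, and the `suggest_threshold` attribute name and fallback value are hypothetical:

```python
def default_threshold(scorer, fallback=75):
    """Return the scorer's preferred CLI display threshold.

    A plugin may set a `suggest_threshold` attribute on its callable
    (attribute name is a hypothetical convention); otherwise a global
    fallback applies.
    """
    return getattr(scorer, "suggest_threshold", fallback)


def char_overlap_scorer(query, candidate):
    """Toy metric: percent of shared characters (illustrative only)."""
    shared = len(set(query) & set(candidate))
    total = len(set(query) | set(candidate)) or 1
    return 100 * shared / total


# This metric tends to score low, so it declares a lower threshold.
char_overlap_scorer.suggest_threshold = 40
```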
Will want an API and CLI for listing available scorers. Will also want an API and CLI for picking a scorer based on an entry-point spec, as opposed to only exposing scorers via name/ID. This should just be a matter of exposing functionality created as part of the plugin machinery anyway. This should hopefully make it easier for anyone developing a scorer, especially in the early stages, because they can just pass the entry point, which (I think) doesn't even require them to have set up packaging for their code... it could just be in a free module.
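Listing installed scorers could lean directly on `importlib.metadata`; a sketch, where the `sphobjinv.scorers` entry-point group name is an assumption for illustration:

```python
from importlib.metadata import entry_points


def available_scorers(group="sphobjinv.scorers"):
    """Map registered scorer plugin names to their entry points.

    The group name is hypothetical; whatever group sphobjinv settles
    on, enumeration works the same way.
    """
    try:
        eps = entry_points(group=group)  # Python 3.10+ selection API
    except TypeError:
        eps = entry_points().get(group, [])  # Python 3.8/3.9 dict API
    return {ep.name: ep for ep in eps}
```

A CLI "list scorers" subcommand could then just print the keys of this mapping, and a "pick by spec" path could call `EntryPoint.load()` on the chosen value.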
Sprinkling this in various issues:
Related discussion here.
Need to research plugin system best practices. Given the narrow scope of the plugin surface here, a fully featured plugin system tool like `pluggy` may be overkill. OTOH, `pluggy` is depended on by lots of things, so it might already be in most installed environments anyway.

Usual hierarchical sourcing of callables:
1. Passed to `Inventory.suggest` (select on a per-call basis; API usage)
2. Set on the `Inventory` instance (programmatic default definition, avoiding need for per-call passing; API usage)
   a. Might be too unwieldy to be practical, but could be useful/convenient for initial stages of trialing an alternative scorer.
3. CLI argument
4. Environment variable
5. `entry_points` (operating environment configuration; API and CLI usage, key for CLI)

Any other plugin mechanisms?
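Mechanisms (1) and (2) reduce to a simple callable cascade inside the inventory object. The following is a minimal sketch only; the attribute and parameter names, the scorer signature, and the stand-in default scorer are all hypothetical rather than the actual sphobjinv API:

```python
class Inventory:
    """Minimal sketch of the per-call / instance-level scorer cascade."""

    def __init__(self, objects=(), scorer=None):
        self.objects = list(objects)
        self.scorer = scorer  # mechanism (2): instance-level default

    @staticmethod
    def _default_scorer(query, candidate):
        # Stand-in for the packaged fuzzywuzzy-based scoring
        return 100 if query == candidate else 0

    def suggest(self, query, scorer=None, thresh=75):
        # Mechanism (1) beats mechanism (2), which beats the default
        score = scorer or self.scorer or self._default_scorer
        return [obj for obj in self.objects if score(query, obj) >= thresh]
```

Mechanisms (3)-(5) would then only need to populate `self.scorer` (or the per-call argument) from a resolved spec.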
Once this is implemented, the docs will have to be updated, since `fuzzywuzzy` is specifically named as the scoring function in many places.

Other note:
Probably best to start with the scoring machinery internal to the `suggest` module, and then over time expose public abstract classes/interfaces/protocols for plugins to implement.
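The eventual public interface could be expressed as a structural protocol rather than an abstract base class; here is one possible shape, where the protocol name, the `suggest_threshold` member, and the call signature are all illustrative assumptions:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SuggestScorer(Protocol):
    """Hypothetical public protocol a scorer plugin might implement."""

    # Default CLI display threshold suited to this metric's range
    suggest_threshold: int

    def __call__(self, query: str, candidate: str) -> float:
        """Return a similarity score, nominally in 0-100."""
        ...
```

A plain function with a `suggest_threshold` attribute would satisfy this protocol, which keeps the barrier to writing a scorer low.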