| Name | Type | Description | Notes |
|------|------|-------------|-------|
| **filters** | **ChunkFilter** |  | [optional] |
| **get_total_pages** | **bool** | Get the total page count for the query, accounting for the applied filters. Defaults to false, but can be set to true when the latency penalty is acceptable (typically 50-200ms). | [optional] |
| **group_id** | **str** | Group specifies the group to search within. Results will only consist of chunks which are bookmarks within the specified group. | [optional] |
| **group_tracking_id** | **str** | Group_tracking_id specifies the group to search within by tracking id. Results will only consist of chunks which are bookmarks within the specified group. If both group_id and group_tracking_id are provided, group_id will be used. | [optional] |
| **highlight_delimiters** | **List[str]** | Set highlight_delimiters to a list of strings to use as delimiters for highlighting. If not specified, this defaults to ["?", ",", ".", "!"]. | [optional] |
| **highlight_results** | **bool** | Set highlight_results to false for a slight latency improvement (1-10ms). If not specified, this defaults to true. This will add `<b><mark>` tags to the chunk_html of the chunks to highlight matching sub-sentences. | [optional] |
| **highlight_threshold** | **float** | Set highlight_threshold to a lower or higher value to adjust the sensitivity of the highlights applied to the chunk html. If not specified, this defaults to 0.8. The range is 0.0 to 1.0. | [optional] |
| **page** | **int** | The page of chunks to fetch. Page is 1-indexed. | [optional] |
| **page_size** | **int** | The page size is the number of chunks to fetch. This can be used to fetch more than 10 chunks at a time. | [optional] |
| **query** | **str** | The query is the search query. This can be any string. The query will be used to create an embedding vector and/or SPLADE vector which will be used to find the result set. |  |
| **recency_bias** | **float** | Recency_bias lets you determine how much of an effect the recency of chunks will have on the search results. If not specified, this defaults to 0.0. | [optional] |
| **score_threshold** | **float** | Set score_threshold to a float to filter out chunks with a score below the threshold. | [optional] |
| **search_type** | **str** | Search_type can be either "semantic", "fulltext", or "hybrid". "hybrid" will pull in one page (10 chunks) of both semantic and full-text results, then re-rank them using BAAI/bge-reranker-large. "semantic" will pull in one page (10 chunks) of the nearest cosine-distance vectors. "fulltext" will pull in one page (10 chunks) of full-text results based on SPLADE. |  |
| **slim_chunks** | **bool** | Set slim_chunks to true to avoid returning the content and chunk_html of the chunks. This is useful when you want to reduce the amount of data sent over the wire for a latency improvement (typically 10-50ms). Default is false. | [optional] |
| **tag_weights** | **Dict[str, float]** | Tag_weights is a JSON object which can be used to boost the ranking of chunks with certain tags. This is useful when you want to bias towards chunks with a certain tag on the fly. The keys are the tag names and the values are the weights. | [optional] |
| **use_weights** | **bool** | Set use_weights to true to use the weights of the chunks in the result set in order to sort them. If not specified, this defaults to true. | [optional] |
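As a rough sketch of how these fields fit together, the payload below builds a plain dict mirroring the table above. The field values (query text, tag names, weights) are purely illustrative, not taken from the Trieve API; only `query` and `search_type` lack the [optional] note, so everything else may be omitted.

```python
# Illustrative request payload; field names mirror the table above,
# values are hypothetical.
payload = {
    "query": "how to configure recency bias",   # required: the search string
    "search_type": "hybrid",                    # "semantic", "fulltext", or "hybrid"
    "page": 1,                                  # pages are 1-indexed
    "page_size": 10,
    "highlight_results": True,
    "highlight_delimiters": ["?", ",", ".", "!"],  # the documented defaults
    "tag_weights": {"docs": 2.0, "blog": 0.5},  # boost chunks tagged "docs"
    "use_weights": True,
}

# Optional fields can simply be left out; only these two are required.
required = {"query", "search_type"}
assert required.issubset(payload)
```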
```python
from trieve_py_client.models.search_within_group_data import SearchWithinGroupData

# TODO update the JSON string below
json = "{}"
# create an instance of SearchWithinGroupData from a JSON string
search_within_group_data_instance = SearchWithinGroupData.from_json(json)
# print the JSON string representation of the object
print(search_within_group_data_instance.to_json())

# convert the object into a dict
search_within_group_data_dict = search_within_group_data_instance.to_dict()
# create an instance of SearchWithinGroupData from a dict
search_within_group_data_from_dict = SearchWithinGroupData.from_dict(search_within_group_data_dict)
```
[Back to Model list] [Back to API list] [Back to README]