Evaluation with retriever #383

Open
preritt opened this issue Apr 23, 2024 · 0 comments

preritt commented Apr 23, 2024

Is there a reason to use questions_torchhub_0_shot.jsonl instead of questions_torchhub_bm25.jsonl when evaluating the retriever with torchhub?

This is the command given in the documentation for the evaluation script. Its arguments appear to be the same as in the zero-shot case, except that a retriever is also passed:

python get_llm_responses_retriever.py --retriever bm25 --model gpt-3.5-turbo --api_key $API_KEY --output_file gpt-3.5-turbo_torchhub_0_shot.jsonl --question_data eval-data/questions/torchhub/questions_torchhub_0_shot.jsonl --api_name torchhub --api_dataset ../data/api/torchhub_api.jsonl
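
For reference, this is the invocation I would have expected for the retriever evaluation (a sketch, assuming questions_torchhub_bm25.jsonl is the intended question file and renaming the output file to match; the documentation does not confirm this pairing):

# Hypothetical: swaps in the BM25-retrieved question file; not confirmed by the docs
python get_llm_responses_retriever.py --retriever bm25 --model gpt-3.5-turbo --api_key $API_KEY --output_file gpt-3.5-turbo_torchhub_bm25.jsonl --question_data eval-data/questions/torchhub/questions_torchhub_bm25.jsonl --api_name torchhub --api_dataset ../data/api/torchhub_api.jsonl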
