[Reproducibility] OpenFunctions-v2: <Issue> Unable to reproduce the AST scores reported in leaderboard with OS checkpoint #352
Comments
Hi Jason, thanks for your interest in the Berkeley Function-Calling Leaderboard! In response to your question, there are two things we want to raise.

First, we noticed that you added special tokens in places, namely `<s>[INST]` at the start of the system prompt and `[/INST]` at the end of the prompt:

```python
system = "<s>[INST] You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: [/INST]"
return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: [/INST]"
```

We provide:

```python
import json

def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
    if len(functions) == 0:
        return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
    functions_string = json.dumps(functions)
    return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
```

Secondly, we will support evaluating vLLM-hosted models.
@CharlieJCJ @ShishirPatil Hey, thanks for the detailed response. I started PR #360 to add an OpenFunctions-v2 handler and address this issue; I will hopefully finish testing by end of day so we can close this issue.
Describe the bug
Great work on gorilla!
I have used the open-source model checkpoint https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2 with vLLM to try to reproduce the leaderboard score using local GPU inference (4× A100). However, I obtained lower summary AST scores than the leaderboard reports. I am wondering whether I am using the wrong prompt template or missed something else. Your help would be much appreciated.
Evaluation accuracy I got locally:
summary_ast["accuracy"]: 0.38625954198473283
simple_ast["accuracy"]: 0.21875
multiple_ast["accuracy"]: 0.41
parallel_ast["accuracy"]: 0.005
parallel_multiple_ast["accuracy"]: 0.005
To Reproduce
Steps to reproduce the behavior:
I am running the following commands with the OSS handler:

```bash
python model_handler/oss_handler.py --data-path /home/jobuser/gorilla/berkeley-function-call-leaderboard/data/BFCL/questions_for_oss.json --model-name /path_to_model/gorilla-openfunctions-v2
python /home/jobuser/gorilla/berkeley-function-call-leaderboard/eval_checker/eval_runner.py --model /path_to_model/gorilla-openfunctions-v2 --skip-api-sanity-check --test-category simple sql rest relevance parallel_multiple_function parallel_function multiple_function
```
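For extra context, this is roughly how I exercise the checkpoint with vLLM directly (a minimal sketch: the sampling settings are my assumptions, not the handler's actual configuration, and `get_prompt` is my local prompt-formatting helper):

```python
from vllm import LLM, SamplingParams

# Sketch only: tensor_parallel_size matches my 4x A100 setup; temperature and
# max_tokens are assumptions, not the values used by model_handler/oss_handler.py.
llm = LLM(model="/path_to_model/gorilla-openfunctions-v2", tensor_parallel_size=4)
sampling_params = SamplingParams(temperature=0.0, max_tokens=512)

prompt = get_prompt("What is the weather in Berkeley today?", functions)  # local prompt helper
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```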
Proposed Solution
Maybe my prompt template is inconsistent with the one used during training, or perhaps I failed to apply the model's chat template?
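If the checkpoint ships a chat template, applying it would look something like the sketch below (this assumes the Hugging Face tokenizer for this checkpoint defines a `chat_template`; I have not verified that it does):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gorilla-llm/gorilla-openfunctions-v2")

# Render the conversation with the model's built-in template, if one is defined.
messages = [{"role": "user", "content": "What is the weather in Berkeley today?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```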