[Reproducibility] OpenFunctions-v2: <Issue> Unable to reproduce the AST scores reported in leaderboard with OS checkpoint #352

Open
JasonZhu1313 opened this issue Apr 14, 2024 · 2 comments
Labels
hosted-openfunctions-v2 Issues with OpenFunctions-v2

Comments

@JasonZhu1313
Contributor

Describe the bug

Great work on gorilla!

I have used the open-source model checkpoint https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2 with vLLM to try to reproduce the leaderboard score using local GPU inference (4x A100). However, I obtained lower summary AST scores than the leaderboard reports. I am wondering whether I am using the wrong prompt template or missed something. Your help would be much appreciated.

Evaluation accuracy I got locally:

2024-04-13 23:48:06,886 INFO worker.py:1454 -- Calling ray.init() again after it has already been called.
🔍 Running test: parallel_multiple_function
✅ Test completed: parallel_multiple_function. 🎯 Accuracy: 0.005
2024-04-13 23:48:06,964 INFO worker.py:1454 -- Calling ray.init() again after it has already been called.
🔍 Running test: rest
100%|███████████████████████████████████████████████████████████████████████████████████████████| 70/70 [00:00<00:00, 628696.53it/s]
✅ Test completed: rest. 🎯 Accuracy: 0.0
2024-04-13 23:48:07,034 INFO worker.py:1454 -- Calling ray.init() again after it has already been called.
2024-04-13 23:48:07,100 INFO worker.py:1454 -- Calling ray.init() again after it has already been called.
🔍 Running test: parallel_function
✅ Test completed: parallel_function. 🎯 Accuracy: 0.005
2024-04-13 23:48:07,178 INFO worker.py:1454 -- Calling ray.init() again after it has already been called.
🔍 Running test: simple
✅ Test completed: simple. 🎯 Accuracy: 0.455
2024-04-13 23:48:07,263 INFO worker.py:1454 -- Calling ray.init() again after it has already been called.
🔍 Running test: relevance
✅ Test completed: relevance. 🎯 Accuracy: 1.0
2024-04-13 23:48:07,330 INFO worker.py:1454 -- Calling ray.init() again after it has already been called.
🔍 Running test: multiple_function
✅ Test completed: multiple_function. 🎯 Accuracy: 0.41

summary_ast["accuracy"]: 0.38625954198473283
simple_ast["accuracy"]: 0.21875
multiple_ast["accuracy"]: 0.41
parallel_ast["accuracy"]: 0.005
parallel_multiple_ast["accuracy"]: 0.005

To Reproduce
Steps to reproduce the behavior:
I am using the following OSS handler code:

from model_handler.handler import BaseHandler
from model_handler.model_style import ModelStyle
from model_handler.utils import (
    ast_parse,
    augment_prompt_by_languge,
    language_specific_pre_processing,
)
# from eval_checker.eval_runner_helper import FILENAME_INDEX_MAPPING
import shortuuid, ray, os, json, torch
import argparse

FILENAME_INDEX_MAPPING = {
    "executable_parallel_function": (0, 49),
    "parallel_multiple_function": (50, 249),
    "executable_simple": (250, 349),
    "rest": (350, 419),
    "sql": (420, 519),
    "parallel_function": (520, 719),
    "chatable": (720, 919),
    "java": (920, 1019),
    "javascript": (1020, 1069),
    "executable_multiple_function": (1070, 1119),
    "simple": (1120, 1519),
    "relevance": (1520, 1759),
    "executable_parallel_multiple_function": (1760, 1799),
    "multiple_function": (1800, 1999),
}

GPU_NUMBER = 2
TP_SIZE = 2

class OSSHandler(BaseHandler):
    def __init__(self, model_name, temperature=0.7, top_p=1, max_tokens=1000) -> None:
        super().__init__(model_name, temperature, top_p, max_tokens)
        self.model_style = ModelStyle.OSSMODEL
        self._init_model()

    def _init_model(self):
        ray.init(ignore_reinit_error=True, num_cpus=8)

    # def _format_prompt(prompt, function):
    #     SYSTEM_PROMPT = """
    #         You are an helpful assistant who has access to the following functions to help the user, you can use the functions if needed-
    #     """
    #     functions = ""
    #     if isinstance(function, list):
    #         for idx, func in enumerate(function):
    #             functions += "\n" + str(func)
    #     else:
    #         functions += "\n" + str(function)
    #     return f"SYSTEM: {SYSTEM_PROMPT}\n{functions}\nUSER: {prompt}\nASSISTANT: "

    def _format_prompt(user_query, function) -> str:
        """
        Generates a conversation prompt based on the user's query and a list of functions.

        Parameters:
        - user_query (str): The user's query.
        - functions (list): A list of functions to include in the prompt.

        Returns:
        - str: The formatted conversation prompt.
        """
        system = "<s>[INST] You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
        functions_string = ""
        if isinstance(function, list):
            functions_string = json.dumps(function)
        else:
            functions_string += "\n" + str(function)
        
        if not function:
            return f"{system}\n### Instruction: <<question>> {user_query}\n### Response:  [/INST]"
        
        return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response:  [/INST]"


    @ray.remote(num_gpus=GPU_NUMBER)
    @torch.inference_mode()
    def _batch_generate(
        question_jsons,
        test_category,
        model_path,
        temperature,
        max_tokens,
        top_p,
        format_prompt_func,
        index,
    ):
        from vllm import LLM, SamplingParams

        prompts = []
        ans_jsons = []
        for line in question_jsons:
            for key, value in FILENAME_INDEX_MAPPING.items():
                start, end = value
                if index >= start and index < end:
                    test_category = key
                    break
            ques_json = line
            prompt = augment_prompt_by_languge(ques_json["question"], test_category)
            functions = language_specific_pre_processing(
                ques_json["function"], test_category, False
            )
            prompts.append(format_prompt_func(prompt, functions))
            ans_id = shortuuid.uuid()
            ans_jsons.append(
                {
                    "answer_id": ans_id,
                    "question": ques_json["question"],
                }
            )

        print("start generating: ", len(prompts))
        sampling_params = SamplingParams(
            temperature=temperature, max_tokens=max_tokens, top_p=top_p
        )
        llm = LLM(model=model_path, dtype="float16", trust_remote_code=True, tensor_parallel_size=TP_SIZE)
        outputs = llm.generate(prompts, sampling_params)
        final_ans_jsons = []
        for output, ans_json in zip(outputs, ans_jsons):
            text = output.outputs[0].text
            ans_json["text"] = text
            final_ans_jsons.append(ans_json)
        return final_ans_jsons

    def inference(
        self, question_file, test_category, num_gpus, format_prompt_func=_format_prompt
    ):

        ques_jsons = []
        with open(question_file, "r") as ques_file:
            for line in ques_file:
                ques_jsons.append(json.loads(line))

        chunk_size = len(ques_jsons) * GPU_NUMBER // num_gpus 
        ans_handles = []
        for i in range(0, len(ques_jsons), chunk_size):
            ans_handles.append(
                self._batch_generate.remote(
                    ques_jsons[i : i + chunk_size],
                    test_category,
                    self.model_name,
                    self.temperature,
                    self.max_tokens,
                    self.top_p,
                    format_prompt_func,
                    i,
                )
            )
        ans_jsons = []
        for ans_handle in ans_handles:
            ans_jsons.extend(ray.get(ans_handle))

        return ans_jsons, {"input_tokens": 0, "output_tokens": 0, "latency": 0}

    def decode_ast(self, result, language="Python"):
        func = result
        if " " == func[0]:
            func = func[1:]
        if not func.startswith("["):
            func = "[" + func
        if not func.endswith("]"):
            func = func + "]"
        decode_output = ast_parse(func, language)
        return decode_output

    def decode_execute(self, result):
        return result

    def write(self, result, file_to_open):
        if not os.path.exists("./result"):
            os.mkdir("./result")
        if not os.path.exists("./result/" + self.model_name.replace("/", "_")):
            os.mkdir("./result/" + self.model_name.replace("/", "_"))
        with open(
            "./result/" + self.model_name.replace("/", "_") + "/" + file_to_open, "a+"
        ) as f:
            f.write(json.dumps(result) + "\n")

    def load_result(self, test_category):
        eval_data = []
        with open("./eval_data_total.json") as f:
            for line in f:
                eval_data.append(json.loads(line))
        result_list = []
        idx = 0
        with open(f"./result/{self.model_name}/result.json") as f:
            for line in f:
                if eval_data[idx]["test_category"] == test_category:
                    result_list.append(json.loads(line))
                idx += 1
        return result_list


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Run model inference with OS candidate.")

    # Add arguments for two lists of strings
    parser.add_argument(
        "--data-path", type=str, help="Test category"
    )
    parser.add_argument(
        "--model-name", type=str, help="Model path"
    )
    parser.add_argument(
        "--test-category", type=str, help="Test category"
    )
    parser.add_argument(
        "--num-gpus", default=4, type=int, help="Number of GPUs for inference"
    )
    
    args = parser.parse_args()
    oss_handler = OSSHandler(args.model_name)
    result_json, _ = oss_handler.inference(args.data_path, args.test_category, args.num_gpus)
    for index, raw_line in enumerate(result_json):
        print(f"processing index {index}")
        oss_handler.write(raw_line, "result.json")

  1. python model_handler/oss_handler.py --data-path /home/jobuser/gorilla/berkeley-function-call-leaderboard/data/BFCL/questions_for_oss.json --model-name /path_to_model/gorilla-openfunctions-v2

  2. python /home/jobuser/gorilla/berkeley-function-call-leaderboard/eval_checker/eval_runner.py --model /path_to_model/gorilla-openfunctions-v2 --skip-api-sanity-check --test-category simple sql rest relevance parallel_multiple_function parallel_function multiple_function

Proposed Solution
Maybe my prompt template is inconsistent with the one used in training, or I forgot to apply the chat template?
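
One quick way to check is to compare my hand-written prompt against whatever chat template (if any) ships with the tokenizer. The sketch below uses the standard Hugging Face apply_chat_template API with a made-up user message; it only shows the comparison and is not the template the model was actually trained with.

# Sketch: compare the hand-written prompt with the tokenizer's own chat template,
# if the checkpoint ships one. Not the official OpenFunctions-v2 template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gorilla-llm/gorilla-openfunctions-v2")
messages = [{"role": "user", "content": "What's the weather like in Berkeley?"}]
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)  # compare against the string built by _format_prompt above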


JasonZhu1313 added the hosted-openfunctions-v2 label on Apr 14, 2024
@CharlieJCJ
Contributor

CharlieJCJ commented Apr 14, 2024

Hi Jason, thanks for your interest in Berkeley Function-Calling Leaderboard!

Responding to your question, there are two things we want to raise. First, we noticed that you added special tokens in places, namely <s>[INST] at the start of the system prompt and [/INST] at the end of the prompt:

        system = "<s>[INST] You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
            return f"{system}\n### Instruction: <<question>> {user_query}\n### Response:  [/INST]"
        
        return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response:  [/INST]"

We provide a get_prompt function in the repo and on Hugging Face, consistent with our hosted endpoint's inference, defined as follows. Please use our official get_prompt method in your OSS evaluation so the results are consistent and reproducible. Thank you!

def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
    if len(functions) == 0:
        return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
    functions_string = json.dumps(functions)
    return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
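
For example, calling get_prompt with a toy function schema (the schema below is made up purely for illustration) yields a prompt with no extra special tokens:

# Toy schema, for illustration only.
weather_fn = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}
# No <s>[INST] / [/INST] tokens are added by get_prompt itself.
print(get_prompt("What's the weather like in Berkeley?", functions=[weather_fn]))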

Secondly, our oss_handler is not intended to evaluate gorilla-openfunctions-v2 (vLLM-hosted) at the moment, since we collect OSS models’ responses in a format that is not consistent with that of gorilla-openfunctions-v2. We support evaluation of our hosted endpoint through gorilla_handler (via API calls to the hosted endpoint), so you should use the decode_ast and decode_execute defined in gorilla_handler to parse Gorilla responses, rather than those defined in oss_handler.
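
A simplified sketch of what such a decoder might look like, assuming the raw completion can carry a <<function>> prefix (this is not the exact gorilla_handler code):

from model_handler.utils import ast_parse

def decode_gorilla_ast(result: str, language: str = "Python"):
    # Simplified sketch; assumes the raw completion may be prefixed with "<<function>>".
    func = result.strip()
    if func.startswith("<<function>>"):
        func = func[len("<<function>>"):]
    # Wrap in brackets so a single call parses the same way as a list of calls.
    if not func.startswith("["):
        func = "[" + func
    if not func.endswith("]"):
        func = func + "]"
    return ast_parse(func, language)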

We will add support for evaluating the vLLM-hosted gorilla-openfunctions-v2 for reproducibility and close this issue shortly via a new PR. Thanks again for flagging this to our attention!

@JasonZhu1313
Contributor Author

JasonZhu1313 commented Apr 15, 2024

@CharlieJCJ @ShishirPatil Hey, thanks for the detailed response. I started PR #360 to add an OpenFunctions-v2 handler that addresses this issue; I will hopefully finish testing by the end of the day so we can close this issue.
