Can we show the detailed scores for the Java/JavaScript splits on the leaderboard (instead of the averaged Simple category)? One observation from running the benchmark myself is that prompt models seem to have an advantage over FC endpoints.
For Java types (Array/ArrayList/HashMap), from the parsing code here:
are we expecting the parameter values to be in the form of "new ArrayList", "new HashMap", etc.? I think users usually expect FC endpoint outputs (not Prompt) to be JSON objects.
Thanks!
Regarding your first question:
Good point. We will display the detailed score breakdown of the AST Simple category in the next website release (likely next week on May 16/17).
Regarding your second question:
Yes, we are expecting values in the format of new ArrayList; the value needs to be valid Java syntax, and a JSON list ([]) would not be correct.
For Java (and JavaScript as well), before querying the model, we do some pre-processing on the prompt and the function document. Specifically, at the end of the prompt, we explicitly state that the provided function is in Java 8/JavaScript/Python syntax. For parameter types that are not native to JSON, we change their type to String (since String is JSON-compatible) and add to the parameter description: "This is Java/JavaScript " + {original_type} + " in string representation."
So in the example you provided above, when the expected type is ArrayList, the model will receive the instruction that this is a String-type parameter, with a description containing "This is Java ArrayList in string representation." The model should therefore output the value as a String (e.g., "new ArrayList<>(Arrays.asList(10, 20, 30))"), which is JSON-compatible.
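To make the type rewriting concrete, here is a minimal sketch of the pre-processing step described above. The function name, the set of non-JSON-native types, and the document schema are illustrative assumptions, not the leaderboard's actual code:

```python
# Hypothetical sketch of the described pre-processing; names and schema
# are assumptions for illustration, not the leaderboard's real implementation.
JAVA_NON_JSON_TYPES = {"Array", "ArrayList", "HashMap", "Stack", "Queue"}

def preprocess_java_params(function_doc: dict) -> dict:
    """Rewrite non-JSON-native Java parameter types to JSON-compatible String,
    and append the string-representation note to each rewritten description."""
    for param in function_doc["parameters"]["properties"].values():
        original_type = param["type"]
        if original_type in JAVA_NON_JSON_TYPES:
            param["type"] = "String"
            param["description"] = (
                param.get("description", "")
                + f" This is Java {original_type} in string representation."
            )
    return function_doc
```

With this rewrite, the model only ever sees JSON-native types, and the original Java type survives in the description so the model knows what string to produce.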
The relevant code that handles the pre-processing is here:
gorilla/berkeley-function-call-leaderboard/eval_checker/java_type_converter.py, line 37 in ae5f0a2
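For intuition on what the converter has to do on the evaluation side, here is a hedged sketch that parses a numeric `new ArrayList<>(Arrays.asList(...))` literal back into a Python list for comparison. The function name and regex are illustrative assumptions; the real java_type_converter handles many more types and element kinds:

```python
import re

def parse_java_arraylist(value: str):
    """Illustrative parser for numeric 'new ArrayList<>(Arrays.asList(...))'
    literals; an assumption-based sketch, not the actual converter code."""
    match = re.fullmatch(
        r"new ArrayList<[^>]*>\(Arrays\.asList\((.*)\)\)", value.strip()
    )
    if not match:
        raise ValueError(f"Not a recognized ArrayList literal: {value}")
    inner = match.group(1).strip()
    if not inner:
        return []
    # Integers stay ints; anything else numeric falls back to float.
    return [
        int(x) if x.strip().lstrip("-").isdigit() else float(x)
        for x in inner.split(",")
    ]
```

So a model output like `"new ArrayList<>(Arrays.asList(10, 20, 30))"` would be checked against the expected `[10, 20, 30]` after conversion.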
Thanks!