Thanks for reporting the issue - I have also observed it and am looking into debugging it. It seems that the tokenizer treats `_` as a space for CodeQwen, regardless of the context.
Yeah, the extra whitespace shows up when using vLLM as well, so it's a model issue rather than a serving-framework issue. The only workaround I can think of is to shift the cursor left a few characters and then reconcile the completion with the original text using overlapping substrings.
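A minimal sketch of that overlap idea (names and the `max_shift` parameter are hypothetical, not from any existing implementation): build the prompt with the cursor shifted left a few characters, then strip the longest tail of the original prefix that the completion re-emits.

```python
def merge_completion(prefix: str, completion: str, max_shift: int = 8) -> str:
    """Reconcile a completion with the text left of the cursor.

    Assumes the prompt was built by shifting the cursor left by up to
    `max_shift` characters, so the completion may re-emit those
    characters (possibly with a spurious leading space). We drop the
    longest prefix of the completion that overlaps the tail of `prefix`.
    """
    # Try the longest possible overlap first, then shrink.
    for k in range(min(max_shift, len(prefix), len(completion)), 0, -1):
        if completion.startswith(prefix[-k:]):
            return completion[k:]
    return completion

# Example: the model re-emits "foo" from the shifted prefix.
print(merge_completion("def foo", "foo_bar(): pass"))  # -> "_bar(): pass"
```

This sidesteps the `_`-as-space ambiguity because we never trim whitespace blindly; we only remove characters the editor already has.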
I've observed that CodeQwen does not always return the extra whitespace, and sometimes the leading whitespace is meaningful. Therefore, trimming the leading whitespace may not be a viable approach.
Other models like DeepSeek work without this problem.