
Prompt Format Updates for LLama3 #1035

Open · IAINATDBI opened this issue Apr 19, 2024 · 12 comments

@IAINATDBI

I've tried the following in the .env.local file but get a parsing error (Error: Parse error on line 7: ...ssistant}}<|eot_id|>):

"chatPromptTemplate" : "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\r\n\r\n{{ You are a friendly assistant }}<|eot_id|><|start_header_id|>user<|end_header_id|>\r\n\r\n{{#each messages}}{{#ifUser}}{{content}}{{/ifUser}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\r\n\r\n{{#each messages}}{{#ifAssistant}}{{ content }}{{/ifAssistant}}<|eot_id|>",

Any thoughts?

@mlim15 commented Apr 20, 2024

This seems to be working-ish, based on what I've seen passed around elsewhere (e.g. ollama's prompt template or the sample provided on the llama.cpp pull request):

{
    "name": "Llama 3",
    "preprompt": "This is a conversation between User and Llama, a friendly chatbot. Llama is helpful, kind, honest, good at writing, and never fails to answer any requests immediately and with precision.",
    "chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}",
    "parameters": {
        (...snip...)
        "stop": ["<|end_of_text|>", "<|eot_id|>"] // Verify that this is correct.
    },
    (...snip...)
}

There are still some issues I'm running into with the response not ending (ollama/ollama#3759) and the stop button not working (#890). That's probably related to the specific strings I've set as "stop" in the definition above, as well as to the tokenizer config when the model is converted to GGUF (if you do that). Apparently you can edit the tokenizer config JSON to fix some of these issues; see the ongoing discussions about Llama 3's stop tokens: ggerganov/llama.cpp#6770, ggerganov/llama.cpp#6745 (comment), ggerganov/llama.cpp#6751 (comment).
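
If you want to sanity-check what the template above actually expands to, you can render it outside chat-ui. Here is a minimal sketch, assuming the handlebars npm package; the ifUser/ifAssistant block helpers are re-created here along the lines of chat-ui's, keyed on a from field per message, which is an assumption about the message shape:

import Handlebars from "handlebars";

type Message = { from: "user" | "assistant"; content: string };

// Block helpers standing in for chat-ui's ifUser/ifAssistant (assumed behavior).
Handlebars.registerHelper("ifUser", function (this: Message, options: Handlebars.HelperOptions) {
    return this.from === "user" ? options.fn(this) : "";
});
Handlebars.registerHelper("ifAssistant", function (this: Message, options: Handlebars.HelperOptions) {
    return this.from === "assistant" ? options.fn(this) : "";
});

// The chatPromptTemplate from the config above, copied as-is.
const chatPromptTemplate =
    "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}";

// noEscape keeps Handlebars from HTML-escaping message content.
const render = Handlebars.compile(chatPromptTemplate, { noEscape: true });

const messages: Message[] = [{ from: "user", content: "Hello, who are you?" }];

console.log(render({ preprompt: "This is a conversation between User and Llama, a friendly chatbot.", messages }));
// Prints a Llama 3 style prompt:
// <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n...<|eot_id|>
// <|start_header_id|>user<|end_header_id|>\n\nHello, who are you?<|eot_id|>

Note that, as written, the rendered prompt ends right after the user's <|eot_id|> with no trailing assistant header, so whether one still needs to be appended depends on how your backend builds the generation prompt.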

@IAINATDBI (Author)

Thank you @mlim15, that worked just fine. I spun up the 70B Instruct model and it appears to stop when intended. I do see some special tokens (the start and end header tokens) streamed at the start, but those are tidied up at the end of streaming. That's maybe the chat-ui code rather than the model.

@iChristGit

(quoting @mlim15's template and @IAINATDBI's reply above)

Hello!
I tried using the recommended template you provided, but the responses never stop, and the LLM won't choose a topic for the conversation (no title, just "New Chat").
Can you link the whole .env.local?

@iChristGit commented Apr 21, 2024

Also, I am using Text-Generation-Webui; do you use the same?
Edit: I was using the original Meta fp16 model; now, when generating with the GGUF version, it works fine!

@BlueskyFR

Was anyone able to make it work?

@nsarrazin (Collaborator)

In prod for HuggingChat this is what we use:

    "tokenizer" : "philschmid/meta-llama-3-tokenizer",
    "parameters": {
      "stop": ["<|eot_id|>"]
    }

chat-ui supports using the chat template stored in the tokenizer config, so that should work. Let me know if it doesn't; maybe there's some endpoint-specific thing going on.

@iChristGit commented Apr 23, 2024

(quoting @nsarrazin's reply above)

"name": "Llama-3",
"chatPromptTemplate": "<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\n\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}} {{#ifUser}}<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\n\n{{content}} <|eot_id|>{{/ifAssistant}}{{/each}}",
"preprompt": "This is a conversation between User and Llama, a friendly chatbot. Llama is helpful, kind, honest, good at writing, and never fails to answer any requests immediately and with precision.",

"stop": ["<|end_of_text|>", "<|eot_id|>"]

I'm using this config. If I want to use "tokenizer": "philschmid/meta-llama-3-tokenizer", should I remove chatPromptTemplate and preprompt?

@nsarrazin (Collaborator)

You can keep preprompt, but you should get rid of the chatPromptTemplate, yes!

@iChristGit

I'll try that, although the current config works flawlessly!
Thank you.

@nsarrazin (Collaborator)

At the end of the day, use what works for you 🤗 We support both custom prompt templates (via chatPromptTemplate) and pulling the chat template directly from the tokenizer, but for easy setup it's sometimes nicer if you can get it from the tokenizer.

@BlueskyFR

@nsarrazin thanks for the answer, I'll try it soon!
Though, is there a place where we could find all the model configs for our .env.local? For instance, could we get the list you use in production? It would be easier, IMO.

@nsarrazin (Collaborator) commented Apr 23, 2024

If you want a list of templates we've used in the past, there's PROMPTS.md.

If you want to see the current HuggingChat prod config, it's .env.template.

Ideally, check whether the model you want has a tokenizer_config.json file on the Hub. If it does, you can just put "tokenizer": "namespace/on-the-hub" in your config and it should pick up the template; see .env.template for some examples.
