
Feature request, local assistants #1142

Open
Zibri opened this issue May 15, 2024 · 2 comments
Labels: support (A request for help setting things up)

Comments


Zibri commented May 15, 2024

I experimented with a few assistants on HF.
The problem I am facing is that I can't get the same behaviour from the local model (the same model) that I get on HF.
I tried everything I could think of.
I think HF does some filtering or rephrasing, or has an additional prompt before the assistant description.
Please help.
I am available for chat on Discord: https://discordapp.com/users/Zibri/


Zibri commented May 15, 2024

Note: it would be great to have a feature to export the full assistant definition as a llama.cpp "main" command (or a gpt4all prompt).
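For illustration, a minimal sketch of what such an export might produce. Only the flags are real llama.cpp "main" options; the model path, sampling values, and prompt text are placeholders, not taken from any actual assistant:

```bash
# Hypothetical output of an "export assistant as llama.cpp command" feature.
# All values are placeholders:
#   -m            model file the assistant runs on
#   -c            context size
#   --temp/--top-p  sampling parameters copied from the assistant settings
#   -p            the assistant description, used as the initial prompt
#   -i -r "User:" interactive chat, stopping generation at the reverse prompt
./main -m ./models/assistant-model.Q4_K_M.gguf -c 4096 \
  --temp 0.7 --top-p 0.95 \
  -p "You are <assistant name>. <assistant description here>" \
  -i -r "User:" --in-prefix " "
```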

nsarrazin (Collaborator) commented

> I think HF does some filtering or rephrasing, or has an additional prompt before the assistant description.

None at all! Make sure your prompt format is correct; that's usually the main culprit. Could you share your model config?
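For reference, a minimal sketch of what a local model config looks like in chat-ui's `.env.local`, following the conventions in the chat-ui README. The model name, template tokens, and endpoint URL are placeholders; the `chatPromptTemplate` must match the format the model was trained on, which is the usual source of the mismatch described above:

```env
MODELS=`[
  {
    "name": "local-model",
    "preprompt": "",
    "chatPromptTemplate": "<s>{{preprompt}}{{#each messages}}{{#ifUser}}[INST] {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
    "parameters": {
      "temperature": 0.7,
      "max_new_tokens": 1024,
      "stop": ["</s>"]
    },
    "endpoints": [{
      "type": "llamacpp",
      "baseURL": "http://localhost:8080"
    }]
  }
]`
```

The template shown is the Mistral-instruct style; a different model family would need its own tokens (e.g. ChatML or Phi-3 markers) for local output to match what HF serves.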

nsarrazin added the support label on May 27, 2024