
Implementing LangChain CustomLLM Class for use with other Models #80

Open · zfreeman32 opened this issue Sep 26, 2023 · 0 comments
zfreeman32 commented Sep 26, 2023

The interface currently seems compatible only with GPT4All or LlamaCpp models. I have fine-tuned a Vicuna-7b base model and want to use it in the interface. How do I integrate a custom LLM into privategpt.py?
LangChain claims to support custom models via the class below, but how do I implement its CustomLLM class in this particular interface?

from typing import Any, List, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

class CustomLLM(LLM):
    n: int  # number of characters of the prompt to echo back (toy example)

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        # Toy behavior: return the first n characters of the prompt
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}

llm = CustomLLM(n=10)
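
For reference, a minimal sketch of one possible approach, assuming the fine-tuned Vicuna-7b checkpoint is loadable with Hugging Face transformers. VicunaLLM, from_model_path, and max_new_tokens are hypothetical names introduced here for illustration; they are not part of privateGPT or LangChain:

from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from transformers import AutoModelForCausalLM, AutoTokenizer


class VicunaLLM(LLM):
    """Hypothetical LangChain wrapper around a fine-tuned Vicuna-7b checkpoint."""

    model: Any       # a loaded AutoModelForCausalLM
    tokenizer: Any   # its matching tokenizer
    max_new_tokens: int = 256

    @classmethod
    def from_model_path(cls, model_path: str, **kwargs: Any) -> "VicunaLLM":
        # Load the checkpoint once, up front, rather than on every call
        tokenizer = AutoTokenizer.from_pretrained(model_path)
        model = AutoModelForCausalLM.from_pretrained(model_path)
        return cls(model=model, tokenizer=tokenizer, **kwargs)

    @property
    def _llm_type(self) -> str:
        return "vicuna-custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        inputs = self.tokenizer(prompt, return_tensors="pt")
        output_ids = self.model.generate(**inputs, max_new_tokens=self.max_new_tokens)
        # Drop the prompt tokens so only the generated completion is returned
        completion = self.tokenizer.decode(
            output_ids[0][inputs["input_ids"].shape[1]:],
            skip_special_tokens=True,
        )
        return completion

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"max_new_tokens": self.max_new_tokens}

In the 2023 privateGPT codebase, the backend is chosen in the main script with a match on the MODEL_TYPE environment variable; adding a branch there is one plausible wiring. The exact variable names and keyword arguments differ between privateGPT versions, so treat this as a sketch rather than a drop-in patch:

match model_type:
    case "LlamaCpp":
        llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx,
                       callbacks=callbacks, verbose=False)
    case "GPT4All":
        llm = GPT4All(model=model_path, n_ctx=model_n_ctx,
                      backend='gptj', callbacks=callbacks, verbose=False)
    case "Vicuna":  # hypothetical new branch for the custom wrapper above
        llm = VicunaLLM.from_model_path(model_path)
    case _:
        raise ValueError(f"Model type {model_type} is not supported.")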
zfreeman32 changed the title from "Using Fine-Tuned CustomLLM Models" to "Implementing LangChain CustomLLM Class for use with other Models" on Sep 26, 2023