
DRAFT : PromptStrategy can use Individual ChatModelProviders & set own configuration (llm model, temperature, top_k, top_p...) #6898

Draft
wants to merge 7 commits into master
Conversation

ph-ausseil
Contributor

@ph-ausseil ph-ausseil commented Feb 23, 2024

PromptStrategy can use individual ChatModelProviders & set own configuration (llm model, temperature, top_k, top_p,...).

Overview

  • This PR introduces a ChatModelWrapper that can wrap different ChatModelProviders (including OpenAI); new providers can be created for Gemini, Mixtral, ...
  • This PR modifies PromptManager to create the ChatModelWrapper object; PromptManager is now an intermediary between a PromptStrategy and a ChatModelWrapper.
  • PromptStrategy.build_prompt() still returns a ChatPrompt; its data is channeled via a ChatCompletionKwargs object to structure the data.
  • Dependencies are injected between PromptStrategy ↔️ ChatModelWrapper ↔️ ChatModelProviders.
  • Introduces AbstractChatMessage so each provider can have its own roles & messages.
  • LanguageModelFunction (via dependency injection), ChatModelResponse (via extension of an interface) & arguments passed to the LLM are formatted as the ChatModelProvider specifies, thus enabling different providers & APIs.

Remaining work

  • DONE: The "lib" works well; it comes from a fork that diverges from master by 40k+ changed lines.
  • TODO: Integration (and I would like to team up on it, as I have no time)
    • PromptStrategy & PromptManager had an AgentMixin that adds methods such as set_agent() and _agent; these will need to be added back.
    • Imports need to be fixed.
    • Poetry dependencies might need an update.
    • The LangChain code can be removed with no issues 😃

More info:

  • Tested under Python 3.12.
  • Might need to be downgraded to Pydantic < 2.0.0 (mainly reverting model_dump() to dict() and model_config to BaseModel.Config).
  • ChatMessages can be generated via OpenAIChatMessage (an evolution of the AGPT ChatMessage for OpenAI; should not work for Gemini) or via LangChain. The choice is left to the developer for now; however, the LangChain dependency has not been added to the Poetry config.
  • All adapters have to extend AbstractChatModelAdapter & implement the chat() method. In the chat method, any client can be used, starting with the OpenAI client (commented out in the file) or LangChain (if a LangChain dependency is added to poetry.lock).
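A provider adapter along those lines might look like this. It is a hedged sketch: AbstractChatModelAdapter and the chat() contract come from the PR description, but OpenAIAdapter, its signature details, and the canned response shape are illustrative assumptions.

```python
from abc import ABC, abstractmethod
from typing import Any


class AbstractChatModelAdapter(ABC):
    """Every provider adapter must implement chat()."""

    @abstractmethod
    def chat(
        self, messages: list[dict[str, Any]], model_name: str, **kwargs: Any
    ) -> dict[str, Any]:
        ...


class OpenAIAdapter(AbstractChatModelAdapter):
    """Illustrative adapter; names and response shape are assumptions."""

    def chat(
        self, messages: list[dict[str, Any]], model_name: str, **kwargs: Any
    ) -> dict[str, Any]:
        # A real implementation would call an LLM client here, e.g. the
        # OpenAI client or a LangChain chat model. For this sketch we
        # return a canned reply in an OpenAI-like message shape.
        return {"role": "assistant", "content": f"[{model_name}] ok"}


adapter = OpenAIAdapter()
reply = adapter.chat([{"role": "user", "content": "hi"}], model_name="gpt-4")
print(reply["content"])  # [gpt-4] ok
```

Swapping providers then means implementing one chat() method per backend, with the rest of the pipeline untouched.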

New functionalities:

  • The lib allows tool_choice to select a specific tool (for better guidance of the LLM and new use cases).
  • The lib introduces a mechanism where the absence of a tool_call can trigger a new attempt (useful, as GPT-3.5 tends not to call functions). This mechanism offers the possibility of forcing a specific tool (via tool_choice) after X failed attempts (default: 3).
  • The lib offers Jinja2 integration to build prompts (requires a Poetry update).
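The retry-then-force mechanism could be sketched as below. All function names here are hypothetical; only the tool_choice parameter, the retry-on-missing-tool_call idea, and the default of 3 attempts come from the description, and the tool_choice payload follows OpenAI's convention as an assumption.

```python
from typing import Any, Callable


def complete_with_tool_retry(
    call_llm: Callable[..., dict[str, Any]],
    tools: list[str],
    max_attempts: int = 3,
) -> dict[str, Any]:
    """Retry while the model returns no tool_call; after max_attempts,
    force a specific tool via tool_choice (hypothetical helper)."""
    for _ in range(max_attempts):
        response = call_llm(tool_choice="auto")
        if response.get("tool_calls"):
            return response
    # Final attempt: force the expected tool.
    return call_llm(
        tool_choice={"type": "function", "function": {"name": tools[0]}}
    )


# Simulated model that ignores tools unless forced to use one:
def fake_llm(tool_choice: Any) -> dict[str, Any]:
    if tool_choice == "auto":
        return {"content": "plain text", "tool_calls": None}
    return {"tool_calls": [{"name": tool_choice["function"]["name"]}]}


result = complete_with_tool_retry(fake_llm, tools=["create_agent"])
print(result["tool_calls"][0]["name"])  # create_agent
```

This guarantees an eventual tool invocation even with models like GPT-3.5 that tend to answer in prose instead of calling functions.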

⚠️ Regressions:

  • The lib removes has_oa_tool_calls_api / has_function_call_api and deems that providers must offer a function_call API in 2024 (Ollama, Gemini, Mixtral do...).
  • The lib erases last week's tool_id change made by Pwuts :(
  • The lib erases the new retry system made by Pwuts :(
  • 🛑 The lib doesn't support Embeddings providers anymore, given the context above. I made a similar wrapper to handle various LLM providers & made use of LangChain, which I understand can be a philosophical issue for some.

⚠️ Behaviour changes:

  • If only one tool is provided, that tool is automatically called as-is.
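That behaviour change might reduce to something like this illustrative sketch; effective_tool_choice is a hypothetical helper, and the tool_choice payload shape follows OpenAI's convention rather than the PR's actual code.

```python
from typing import Any


def effective_tool_choice(tools: list[str]) -> Any:
    """If exactly one tool is provided, select it automatically;
    otherwise let the model decide."""
    if len(tools) == 1:
        return {"type": "function", "function": {"name": tools[0]}}
    return "auto"


single = effective_tool_choice(["query_language_model"])
several = effective_tool_choice(["read_file", "write_file"])
print(single)   # forced: the only tool is always called
print(several)  # auto: the model picks among the tools
```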

Considerations:

  • ChatPrompt might integrate ChatCompletionKwargs.
  • LLM models are hardcoded in the adapter => we might want to configure them another way, such as in the .env.
  • LLM model versions are not pinned.
  • Removing **model_configuration_dict => I need to write robust unit tests before I touch it!
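Moving the hardcoded model names into the .env could look like the sketch below; the OPENAI_CHAT_MODEL naming scheme and the configured_model helper are assumptions, not part of the PR.

```python
import os


def configured_model(adapter_name: str, default: str) -> str:
    """Look up e.g. OPENAI_CHAT_MODEL in the environment, falling back
    to the adapter's built-in default (variable name is an assumption)."""
    return os.environ.get(f"{adapter_name.upper()}_CHAT_MODEL", default)


# Simulate a value that would normally come from the .env file:
os.environ["OPENAI_CHAT_MODEL"] = "gpt-4-0125-preview"

print(configured_model("openai", default="gpt-3.5-turbo"))  # env override wins
print(configured_model("gemini", default="gemini-pro"))     # falls back to default
```

This would also make it easy to pin model versions per deployment, addressing the "versions are not pinned" point above.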

Disclaimer:

I can't share unit tests at the moment, but I have shared the runs from my branch on Discord.

Not ported to AutoGPT yet, but very close to the finish line.

@ph-ausseil ph-ausseil requested a review from a team as a code owner February 23, 2024 17:47

This PR exceeds the recommended size of 500 lines. Please make sure you are NOT addressing multiple issues with one PR.


netlify bot commented Feb 23, 2024

Deploy Preview for auto-gpt-docs ready!

🔨 Latest commit: 02b9d74
🔍 Latest deploy log: https://app.netlify.com/sites/auto-gpt-docs/deploys/65d9cbba51602c000802b3ea
😎 Deploy Preview: https://deploy-preview-6898--auto-gpt-docs.netlify.app

Signed-off-by: ph-ausseil <ph.ausseil@gmail.com>
@ph-ausseil ph-ausseil changed the title PromptStrategy can use Individual ChatModelProviders & set own configuration (llm model, temperature, top_k, top_p...) DRAFT : PromptStrategy can use Individual ChatModelProviders & set own configuration (llm model, temperature, top_k, top_p...) Feb 23, 2024
@ph-ausseil ph-ausseil marked this pull request as draft February 23, 2024 17:52
* Remove has_oa_function_call_api
* Create AbstractChatModelProvider.llm_api_client for dependency injection of the client
* Improve defaulting mechanism when looking for a model name
@Pwuts
Member

Pwuts commented Mar 15, 2024

Related:

  • 🚀 AutoGPT Roadmap - Vendor Liberty 🗝️ #6969

    Actionables

    [...]

    • Amend PromptStrategy class to allow specifying compatible models in order of preference
    • Amend PromptStrategy class (or subclasses) to allow customizing the prompt based on the available model(s)

@Wladastic
Contributor

I am loving it so far, it sort of works like version 0.4.1 before function calling was a thing.
If you want to we can pair program on this.

@ph-ausseil
Contributor Author

I am loving it so far, it sort of works like version 0.4.1 before function calling was a thing.

If you want to we can pair program on this.

Hey @Wladastic! You are welcome to help me on the final stretch!

If the imports are fixed, the PR will be fully functional. It's code that has been running for over a month on a fork.

However, the fork has 30,000+ line changes, so I have not tried to integrate it. I have just done a bunch of Ctrl+F replacements to fix the imports, but did not bother to run the agent (as there is a 30,000+ line difference).

I hope most imports are right and I have not forgotten any important file. If any are missing, hit me up and I will add the missing file.

I would say that to close the PR, we need to:

  • Fix the imports
  • Maybe move a file here and there
  • Maybe add a missing file here and there
  • Write tests

It's spring and I will be gardening in my spare time, not coding 🙁


This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

@github-actions github-actions bot added the conflicts Automatically applied to PRs with merge conflicts label Mar 22, 2024
Labels: AutoGPT Agent · conflicts (automatically applied to PRs with merge conflicts) · function: prompt generation · size/xl