
LLM-based inference of Workflow node parameters when they are not explicitly defined. #4193

Open · 4 tasks done
bekhruz-ti opened this issue May 8, 2024 · 1 comment
Labels: 💪 enhancement (New feature or request)

Comments

@bekhruz-ti

Self Checks

  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing?

This is an issue with Dify's ChatFlow feature. I configured some nodes that required certain parameters, and ChatFlow let me start a run from a message in the chatbox. The message contained all the information needed for the intermediate nodes to execute successfully, but those parameter values were not inferred from it, so the chatflow run failed with an error.

2. Describe the feature you'd like to see

I'd like a mechanism similar to Agent Behavior, where when a tool parameter is specified as form: llm in the schema, the system automatically fills in the parameter value based on the user's message. This would eliminate the need for the user to prefill the parameter value in the chatflow tool node. Without this feature, the capabilities offered by ChatFlow are very similar to those of WorkFlow, and there's no significant differentiation.

For improved inference, it would be beneficial to provide the entire chat history at the time of parameter inference.

3. How will this feature improve your workflow or experience?

I can interact with my workflow in the same way I interact with agents.
I won't have to fill out forms.
I'll be able to publish more reusable ChatFlows. Since I can specify the tool parameters at execution time instead of at chatflow creation time, each published ChatFlow can serve more generic use cases.

4. Additional context or comments

This inference capability should be added to every built-in tool, likely within the ToolNode class after the tool configurations describing the schema have been fetched. If a parameter value is empty, the user input (and preferably the chat history) should be passed to an LLM call along with the input schema fetched from the tool.yml file. The LLM should then fill in the parameters passed into the tool, just as Agents currently do.
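To make the idea concrete, here is a minimal sketch of the proposed flow. This is not Dify's actual ToolNode code; the function and parameter names (infer_missing_parameters, llm_call, etc.) are hypothetical, and it assumes the LLM can be prompted to return plain JSON for the missing fields:

```python
import json

def infer_missing_parameters(parameters, schema, user_message, chat_history, llm_call):
    """Hypothetical sketch: fill empty tool parameters by asking an LLM
    to infer them from the user's message and the chat history.
    `llm_call` is any callable mapping a prompt string to a response string."""
    # Only parameters the user left empty are candidates for inference.
    missing = [name for name, spec in schema.items() if not parameters.get(name)]
    if not missing:
        return parameters

    prompt = (
        "Given the conversation below, return a JSON object with values "
        f"for these tool parameters: {missing}.\n"
        f"Parameter schema: {json.dumps({k: schema[k] for k in missing})}\n"
        f"Chat history: {json.dumps(chat_history)}\n"
        f"User message: {user_message}\n"
        "Respond with JSON only."
    )
    inferred = json.loads(llm_call(prompt))
    # Merge: inferred values fill the gaps, but explicitly set values win.
    filled = {k: v for k, v in inferred.items() if k in missing}
    filled.update({k: v for k, v in parameters.items() if v})
    return filled
```

In a real implementation this would run inside ToolNode just before tool invocation, with the schema loaded from the tool's YAML definition, and would need validation/retry handling for malformed LLM output.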

5. Can you help us with this feature?

  • I am interested in contributing to this feature.
@dosubot dosubot bot added the 💪 enhancement New feature or request label May 8, 2024
@takatost
Collaborator

takatost commented May 9, 2024

This might be similar to the parameter inference node principle we are currently developing. We will release it next week at the earliest. You can try it out then to see if it meets your requirements. 😉
