
Images passed to Vision models with low quality parameter, even though high quality was chosen #4268

Open

sigvardt opened this issue May 10, 2024 · 1 comment

Labels
🐞 bug Something isn't working
Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.7

Cloud or Self Hosted

Cloud

Steps to reproduce

  1. Make a workflow with image capabilities and an LLM node with vision capabilities
  2. Ensure the LLM node is set to High Quality vision
  3. Publish and run the app
  4. Provide an image file to the workflow
  5. Observe that the request sent to the LLM has the quality parameter set to low, even though it is defined to be high (a sketch of the observed payload follows below)
     Tested across models and providers; the issue appears to persist regardless of model or provider
(Two screenshots attached.)
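
For reference, a minimal sketch of the observed payload shape, assuming the OpenAI-compatible chat/completions vision format; the model name, image URL, and prompt text here are placeholders, not values from the actual run:

```python
# Sketch of the observed request body in the OpenAI-compatible vision format.
# Model name, URL, and prompt are placeholders; only the `detail` field and
# its observed value ("low") come from this report.
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/input.png",
                        # Configured in the Dify LLM node: High Quality,
                        # which should serialize as "high". Observed: "low".
                        "detail": "low",
                    },
                },
            ],
        }
    ],
}
```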

✔️ Expected Behavior

  • Defined vision quality settings are reflected in the call to the LLM endpoint

❌ Actual Behavior

  • The quality parameter in the executed call ("low") does not match the configured setting ("High")

dosubot bot commented May 10, 2024

It appears there's a bug affecting the way the Dify platform handles the vision quality setting for LLM nodes, leading to a discrepancy between the selected 'High' quality in the UI and the 'low' quality parameter being sent in the request. This issue might stem from either the UI component responsible for setting the vision quality or the backend logic that constructs the request to the LLM endpoint.

The relevant code for handling vision quality settings is located in the VisionConfig class within the entities.py file, which includes a detail attribute for setting the quality parameter. The UI component for selecting the vision resolution, found in param-config-content.tsx, uses a RadioGroup component to update the visionConfig.detail state. The transition from UI selection to the request sent to the LLM endpoint is managed by the _generate method in the NVIDIALargeLanguageModel class, which constructs the request payload including the quality parameter.
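
As a starting point, here is a minimal sketch of what such a config entity might look like, assuming a Pydantic model as commonly used in Dify's backend; only the class name VisionConfig and the detail attribute come from the description above, everything else is illustrative:

```python
from enum import Enum
from pydantic import BaseModel


class VisionDetail(str, Enum):
    LOW = "low"
    HIGH = "high"


# Sketch only: the real VisionConfig in entities.py may differ. If a default
# like this were never overridden by the value chosen in the UI, every request
# would go out with detail="low", matching the reported behavior.
class VisionConfig(BaseModel):
    enabled: bool = False
    detail: VisionDetail = VisionDetail.LOW
```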

To address this issue, it's essential to review the synchronization between the UI component's state and the backend request construction logic, particularly how the model_parameters are handled in the _generate method. Checking for any recent changes in these areas or known bugs could also provide insights.
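
To make the suspected failure mode concrete: if the payload-construction step applies a hard-coded fallback whenever the configured value is not threaded through, the UI setting is silently ignored. The helper below is purely hypothetical and not Dify's actual code:

```python
def to_image_part(image_url: str, configured_detail: str | None) -> dict:
    # Hypothetical: if `configured_detail` is never passed down from the
    # workflow's vision config, this fallback forces every request to "low".
    detail = configured_detail or "low"
    return {
        "type": "image_url",
        "image_url": {"url": image_url, "detail": detail},
    }
```

The thing to verify, then, is whether the value from the vision config actually reaches this serialization point (e.g. something like to_image_part(url, vision_config.detail)), rather than a default being applied along the way.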

For a more detailed investigation, sharing specific models, providers, and any error messages or logs you're encountering would be beneficial. This information could help pinpoint the exact cause of the discrepancy and facilitate a quicker resolution.

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

dosubot added the 🐞 bug (Something isn't working) label on May 10, 2024