
Adding Gemini API Integration #150

Open · unaisshemim wants to merge 9 commits into main

Conversation

unaisshemim

Completed Gemini API integration on Skyvern.

@unaisshemim changed the title from "Adding Gemini support #132" to "Adding Gemini Api Integration" on Apr 3, 2024
if SettingsManager.get_settings().ENABLE_GEMINI:
    LLMConfigRegistry.register_config(
        "GEMINI_GPT4V",
        litellm.completion("gpt-4-vision-preview",
Contributor

@suchintan suchintan Apr 4, 2024


Suggested change:
- litellm.completion("gpt-4-vision-preview",
+ LLMConfig("gemini/gemini-pro",

According to this https://litellm.vercel.app/docs/providers/gemini
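For illustration, here is a sketch of how that registration could look inside config_registry.py (where these names are already in scope) once the suggestion is applied. The model string follows the linked litellm docs; the required environment variable and the trailing flag are assumptions mirroring the other configs in the file, not the exact Skyvern signature:

# Sketch only: the second and third LLMConfig arguments are assumed,
# mirroring the "True," visible in the surrounding diff context.
if SettingsManager.get_settings().ENABLE_GEMINI:
    LLMConfigRegistry.register_config(
        "GEMINI_GPT4V",
        LLMConfig(
            "gemini/gemini-pro",   # litellm model string per the linked docs
            ["GEMINI_API_KEY"],    # assumed required environment variable
            True,                  # assumed flag, matching the other registrations
        ),
    )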

@@ -72,3 +72,11 @@ def get_config(cls, llm_key: str) -> LLMRouterConfig | LLMConfig:
            True,
        ),
    )
if SettingsManager.get_settings().ENABLE_GEMINI:
    LLMConfigRegistry.register_config(
        "GEMINI_GPT4V",
Contributor


Suggested change:
- "GEMINI_GPT4V",
+ "GOOGLE_GEMINI_PRO",

Author


[Screenshot from 2024-04-04 09-52-16]

Failing when I run:
pre-commit run --all-files

Contributor


Are you running it within an active poetry shell?

First run poetry shell, then run the pre-commit command. Let me know how it goes.

Author


[image: error screenshot]
Got this error when running ./run_skyvern.sh.
Are there any code changes needed other than the above-mentioned files?

Contributor


Yes, we have a check that makes sure at least one provider is enabled. You need to update here as well:

https://github.com/Skyvern-AI/skyvern/blob/c538523c88ded6457b6a6f4c7b1e4a31756b2aef/skyvern/forge/sdk/api/llm/config_registry.py#L46C1-L54C35
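For context, the guard being referenced is roughly of the following shape; the flag names and the exception used here are assumptions (the linked lines have the real code). The point is that ENABLE_GEMINI has to be added to the condition so that enabling only Gemini still passes the check:

# Rough sketch of the "at least one provider enabled" check; names are assumptions.
if (
    not SettingsManager.get_settings().ENABLE_OPENAI
    and not SettingsManager.get_settings().ENABLE_ANTHROPIC
    and not SettingsManager.get_settings().ENABLE_AZURE
    and not SettingsManager.get_settings().ENABLE_GEMINI  # the addition this PR needs
):
    raise InvalidLLMConfigError("At least one LLM provider must be enabled")  # hypothetical exception name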

@suchintan
Contributor

suchintan commented Apr 4, 2024

I think you also need to do poetry add google-generativeai

https://litellm.vercel.app/docs/providers/gemini
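Once the dependency is installed, a quick way to sanity-check the provider outside of Skyvern is a minimal litellm call like the one below. It assumes GEMINI_API_KEY is the environment variable litellm's Gemini provider reads, per the linked docs:

import os

import litellm

# Assumption: litellm's Google AI Studio (Gemini) provider reads GEMINI_API_KEY.
os.environ["GEMINI_API_KEY"] = "<your-api-key>"

response = litellm.completion(
    model="gemini/gemini-pro",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)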

@unaisshemim
Author

I have made the changes, please review.

Contributor

@ykeremy ykeremy left a comment


Thanks for going over some of the review comments.

I added some more. Please let me know if anything's not clear.

setup.sh (outdated, resolved)
skyvern/forge/sdk/api/llm/config_registry.py (outdated, resolved)
skyvern/forge/sdk/api/llm/config_registry.py (outdated, resolved)
@@ -72,3 +74,12 @@ def get_config(cls, llm_key: str) -> LLMRouterConfig | LLMConfig:
            True,
        ),
    )
if SettingsManager.get_settings().ENABLE_GEMINI:
    LLMConfigRegistry.register_config(
        "GEMINI_GPT4V",
Contributor


Suggested change:
- "GEMINI_GPT4V",
+ "GEMINI_PRO_VISION",

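Combining this naming suggestion with the earlier LLMConfig one, the registration would end up roughly as below; the gemini/gemini-pro-vision model string and the LLMConfig arguments are assumptions based on the litellm docs and the surrounding diff, not confirmed values:

if SettingsManager.get_settings().ENABLE_GEMINI:
    LLMConfigRegistry.register_config(
        "GEMINI_PRO_VISION",
        LLMConfig(
            "gemini/gemini-pro-vision",  # assumed litellm model string for the vision model
            ["GEMINI_API_KEY"],          # assumed required environment variable
            True,                        # assumed flag, matching the other registrations
        ),
    )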
skyvern-frontend/package-lock.json (resolved)
skyvern/config.py (outdated, resolved)
setup.sh (outdated, resolved)
setup.sh (outdated, resolved)
@unaisshemim unaisshemim requested a review from ykeremy April 8, 2024 03:52
@unaisshemim
Author

merged


This pull request is stale because it has been open for 14 days with no activity.

@github-actions github-actions bot added the Stale label Apr 28, 2024
@@ -61,14 +61,15 @@ class Settings(BaseSettings):
    # LLM Configuration #
    #####################
    # ACTIVE LLM PROVIDER
    LLM_KEY: str = "OPENAI_GPT4V"
Contributor


Can we keep the default LLM_KEY as OPENAI_GPT4V, since this is the most reliable model and has been well tested by the Skyvern team?

    # COMMON
    LLM_CONFIG_MAX_TOKENS: int = 4096
    LLM_CONFIG_TEMPERATURE: float = 0
    # LLM PROVIDER SPECIFIC
    ENABLE_OPENAI: bool = True
Contributor


Same. Let's keep this True.
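Taken together, the two comments on this hunk amount to keeping the OpenAI defaults and making Gemini opt-in. A sketch of the relevant slice of the Settings class in skyvern/config.py, with the caveat that GEMINI_API_KEY is an assumed field name (only ENABLE_GEMINI appears in the diff):

    # ACTIVE LLM PROVIDER
    LLM_KEY: str = "OPENAI_GPT4V"      # keep the well-tested OpenAI model as the default
    # COMMON
    LLM_CONFIG_MAX_TOKENS: int = 4096
    LLM_CONFIG_TEMPERATURE: float = 0
    # LLM PROVIDER SPECIFIC
    ENABLE_OPENAI: bool = True         # stays enabled by default
    ENABLE_GEMINI: bool = False        # Gemini is opt-in
    GEMINI_API_KEY: str | None = None  # assumed field name for the Gemini credential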

@github-actions github-actions bot removed the Stale label Apr 29, 2024

This pull request is stale because it has been open for 14 days with no activity.

@github-actions github-actions bot added the Stale label May 13, 2024