
Model Garden Gemma Deployment on Vertex - incomplete documentation about prediction response format #2799

Open
afirstenberg opened this issue Mar 25, 2024 · 1 comment


@afirstenberg
Environment

  • Deployed a gemma-7b-it model on Vertex AI Model Garden using the "Deploy" button from the Gemma card. No additional tuning was done.
  • I have an instance running on a g2-standard-12 machine with an L4 GPU. It is visible in the Online Prediction section of my Cloud Console.
  • I am able to reach the endpoint without any issues.

Since I was unable to find any good documentation on what needs to be sent to the model and what comes back, I used the "Model Garden Gemma Deployment on Vertex" notebook to get an idea. It did provide an example of what to provide as the prompt:
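(The notebook's request example is a screenshot and is not reproduced here. As a hedged sketch only: a request along these lines is typical for a raw Vertex AI prediction endpoint, but the field names `instances`, `prompt`, `max_tokens`, and `temperature` are assumptions, not confirmed by the notebook.)

```python
import json

# Hypothetical request body for the deployed Gemma endpoint.
# All field names here are assumed from common Vertex AI online
# prediction payloads; check the notebook / endpoint docs for the
# exact schema your deployment expects.
request_body = {
    "instances": [
        {
            "prompt": "What is a car?",
            "max_tokens": 256,     # assumed parameter name
            "temperature": 1.0,    # assumed parameter name
        }
    ]
}

# Serialized form, as it would be POSTed to the endpoint's :predict URL.
print(json.dumps(request_body))
```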

However, it does not indicate what to expect in the reply. In particular, it was not clear that the reply string echoes the original prompt along with the generated output, and that the prompt would need to be parsed out:

{
  "predictions": [
    "Prompt:\nWhat is a car?\nOutput:\nA car is a motor vehicle that is propelled by gasoline. It has four wheels, a steering wheel, and a seat."
  ],
  "deployedModelId": "xxx",
  "model": "projects/111/locations/us-central1/models/gemma-7b-it-google",
  "modelDisplayName": "gemma-7b-it-google",
  "modelVersionId": "1"
}
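Given that response shape, a caller has to strip the echoed prompt themselves. A minimal sketch, assuming the `"Prompt:\n...\nOutput:\n..."` layout shown above is stable (the marker string is taken from this one observed response, not from any documented contract):

```python
def extract_output(prediction: str) -> str:
    """Strip the echoed prompt and keep only the text after 'Output:\\n'.

    Assumes the prediction string follows the observed
    'Prompt:\\n<prompt>\\nOutput:\\n<answer>' layout; if the marker
    is absent, the string is returned unchanged.
    """
    marker = "Output:\n"
    idx = prediction.find(marker)
    return prediction[idx + len(marker):] if idx != -1 else prediction


# Example using the response body shown above (prediction text abridged).
response = {
    "predictions": [
        "Prompt:\nWhat is a car?\nOutput:\nA car is a motor vehicle that is propelled by gasoline."
    ]
}
answer = extract_output(response["predictions"][0])
print(answer)  # the generated text, without the echoed prompt
```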

The documentation should make clear what the output will be.

@gericdong (Contributor)

@kathyyu-google: could you please assist? Thanks.
