Java idiomatic SDK for Vertex AI.
If you are using Maven with the BOM, add this to your `pom.xml` file:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
      <version>26.29.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-vertexai</artifactId>
  </dependency>
</dependencies>
```
If you are using Maven without the BOM, add this to your dependencies:

```xml
<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-vertexai</artifactId>
  <version>0.2.0</version>
</dependency>
```
If you are using Gradle without the BOM, add this to your dependencies:

```groovy
implementation 'com.google.cloud:google-cloud-vertexai:0.2.0'
```
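If you are using Gradle with the BOM, a sketch of the equivalent configuration is the following (this assumes Gradle 5.0 or later for `platform()` support, and reuses the BOM version from the Maven example above, which you should check against the latest release):

```groovy
// Import the BOM so managed artifacts need no explicit version
implementation platform('com.google.cloud:libraries-bom:26.29.0')
implementation 'com.google.cloud:google-cloud-vertexai'
```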
If you are using SBT, add this to your dependencies:

```scala
libraryDependencies += "com.google.cloud" % "google-cloud-vertexai" % "0.2.0"
```
See the Authentication section in the base directory's README.
The client application making API calls must be granted the authorization scopes required for the desired Vertex AI APIs, and the authenticated principal must have the IAM role(s) required to access GCP resources through the Vertex AI API calls.
You will need a Google Cloud Platform Console project with the Vertex AI API enabled.
You will need to enable billing to use Google Vertex AI.
Follow these instructions to get your project set up. You will also need to set up the local development environment by installing the Google Cloud Command Line Interface and running the following commands:

```shell
gcloud auth login
gcloud config set project [YOUR PROJECT ID]
```

To acquire user credentials to use for Application Default Credentials, run:

```shell
gcloud auth application-default login
```
You'll need to obtain the `google-cloud-vertexai` library. See the Add Dependency section to add `google-cloud-vertexai` as a dependency in your code.
Vertex AI is an integrated suite of machine learning tools and services for building and using ML models with AutoML or custom code. It was previously known as AI Platform. Vertex AI offers both novices and experts the best workbench for the entire machine learning development lifecycle. This SDK currently supports only the Generative AI service on the Vertex AI platform. To access the full set of Vertex AI services, consider using the `google-cloud-aiplatform` client libraries.

Vertex AI provides Generative AI Studio, which supports text generation from multimodal input via a set of Google's most advanced models. This enables a wide range of applications.

The Vertex AI SDK lets you access the service programmatically. The following snippet shows the most basic usage of the SDK:
```java
package <your package name>;

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.generativeai.preview.GenerativeModel;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import java.io.IOException;

public class Main {
  private static final String PROJECT_ID = "<your project id>";
  private static final String LOCATION = "<location>";

  public static void main(String[] args) throws IOException {
    try (VertexAI vertexAi = new VertexAI(PROJECT_ID, LOCATION)) {
      GenerativeModel model = new GenerativeModel("gemini-pro", vertexAi);
      GenerateContentResponse response = model.generateContent("How are you?");
      // Do something with the response
    }
  }
}
```
To get streamed output, use the `generateContentStream` method:
```java
package <your package name>;

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.generativeai.preview.GenerativeModel;
import com.google.cloud.vertexai.generativeai.preview.ResponseStream;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import java.io.IOException;

public class Main {
  private static final String PROJECT_ID = "<your project id>";
  private static final String LOCATION = "<location>";

  public static void main(String[] args) throws IOException {
    try (VertexAI vertexAi = new VertexAI(PROJECT_ID, LOCATION)) {
      GenerativeModel model = new GenerativeModel("gemini-pro", vertexAi);
      ResponseStream<GenerateContentResponse> responseStream =
          model.generateContentStream("How are you?");
      // Do something with the ResponseStream, which is an iterable
    }
  }
}
```
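A `ResponseStream` is an `Iterable`, so consuming it amounts to a for-each loop over the streamed chunks. The self-contained sketch below illustrates that aggregation pattern with plain strings standing in for response chunks (no API call is made, so no credentials are needed); with the real SDK you would pull each chunk's text out with `ResponseHandler.getText`:

```java
import java.util.Arrays;
import java.util.List;

public class StreamAggregation {
  // Aggregates streamed chunks into the full response text.
  // With the real SDK, the Iterable would be a
  // ResponseStream<GenerateContentResponse>, and each chunk's text
  // would come from ResponseHandler.getText(chunk).
  static String aggregate(Iterable<String> chunks) {
    StringBuilder fullText = new StringBuilder();
    for (String chunk : chunks) {
      // Each chunk arrives as soon as the service emits it, so you can
      // display partial output here before the stream completes.
      fullText.append(chunk);
    }
    return fullText.toString();
  }

  public static void main(String[] args) {
    List<String> fakeChunks = Arrays.asList("I'm ", "doing ", "well, ", "thanks!");
    System.out.println(aggregate(fakeChunks)); // prints "I'm doing well, thanks!"
  }
}
```

Printing or forwarding each chunk inside the loop is what gives the incremental, token-by-token display that streaming exists for; collecting into a `StringBuilder` gives you the full text afterwards.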
To generate text based on data of multiple modalities, one needs to make a `Content`, which is made easier by `ContentMaker`:
```java
package <your package name>;

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.generativeai.preview.GenerativeModel;
import com.google.cloud.vertexai.generativeai.preview.ResponseStream;
import com.google.cloud.vertexai.generativeai.preview.ContentMaker;
import com.google.cloud.vertexai.generativeai.preview.PartMaker;
import com.google.cloud.vertexai.api.GenerateContentResponse;

public class Main {
  private static final String PROJECT_ID = "<your project id>";
  private static final String LOCATION = "<location>";
  private static final String IMAGE_URI = "<gcs uri to your image>";

  public static void main(String[] args) throws Exception {
    try (VertexAI vertexAi = new VertexAI(PROJECT_ID, LOCATION)) {
      // A vision model must be used for multimodal input
      GenerativeModel model = new GenerativeModel("gemini-pro-vision", vertexAi);
      ResponseStream<GenerateContentResponse> stream =
          model.generateContentStream(ContentMaker.fromMultiModalData(
              "Please describe this image",
              PartMaker.fromMimeTypeAndData("image/jpeg", IMAGE_URI)));
      // Do something with the ResponseStream, which is an iterable
    }
  }
}
```
For a multi-turn conversation, one needs to make a `Content` list representing the whole conversation between the two roles "user" and "model":
```java
package <your package name>;

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Content;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.preview.ContentMaker;
import com.google.cloud.vertexai.generativeai.preview.GenerativeModel;
import com.google.cloud.vertexai.generativeai.preview.ResponseHandler;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class Main {
  private static final String PROJECT_ID = "<your project id>";
  private static final String LOCATION = "<location>";
  private static final String MODEL_NAME = "gemini-pro";

  public static void main(String[] args) throws IOException {
    try (VertexAI vertexAi = new VertexAI(PROJECT_ID, LOCATION)) {
      GenerativeModel model = new GenerativeModel(MODEL_NAME, vertexAi);
      // Put all the contents in a Content list
      List<Content> contents =
          Arrays.asList(
              ContentMaker.fromString("Hi!"),
              ContentMaker.forRole("model").fromString("Hello! How may I assist you?"),
              ContentMaker.fromString(
                  "Can you explain quantum mechanics as well, in only a few sentences?"));
      // Generate the result
      GenerateContentResponse response = model.generateContent(contents);
      // ResponseHandler.getText is a helper function to retrieve the text part of the answer
      System.out.println("\nPrint response: ");
      System.out.println(ResponseHandler.getText(response));
      System.out.println("\n");
    }
  }
}
```
Yeah, we know, that isn't the most intuitive way to chat with a model. Therefore we provide a `ChatSession` class:
```java
package <your package name>;

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.generativeai.preview.GenerativeModel;
import com.google.cloud.vertexai.generativeai.preview.ChatSession;
import com.google.cloud.vertexai.generativeai.preview.ResponseStream;
import com.google.cloud.vertexai.api.Content;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import java.io.IOException;
import java.util.List;

public class Main {
  private static final String PROJECT_ID = "<your project id>";
  private static final String LOCATION = "<location>";

  public static void main(String[] args) throws IOException {
    try (VertexAI vertexAi = new VertexAI(PROJECT_ID, LOCATION)) {
      GenerativeModel model = new GenerativeModel("gemini-pro", vertexAi);
      ChatSession chat = model.startChat();
      // Send the first message.
      // ChatSession has both streaming and non-streaming versions of sendMessage.
      ResponseStream<GenerateContentResponse> response = chat.sendMessageStream("Hi!");
      // Do something with the output stream, possibly with ResponseHandler.

      // Now send another message. The history is remembered by the ChatSession.
      // Note: the stream needs to be consumed before you send another message
      // or fetch the history.
      ResponseStream<GenerateContentResponse> anotherResponse =
          chat.sendMessageStream("Can you explain quantum mechanics in a few sentences?");
      // Do something with the second response.

      // See the whole history. Make sure you have consumed the stream.
      List<Content> history = chat.getHistory();
    }
  }
}
```
See the Vertex AI SDK docs to learn more about how to use the SDK in more advanced ways.

To get help, follow the instructions in the shared Troubleshooting document.

Vertex AI uses gRPC and REST for the transport layer. By default, gRPC transport is used. To use REST, pass `Transport.REST` to the `VertexAI` constructor, as in the example below:
```java
package <your package name>;

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.generativeai.preview.GenerativeModel;
import com.google.cloud.vertexai.generativeai.preview.Transport;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import java.io.IOException;

public class Main {
  private static final String PROJECT_ID = "<your project id>";
  private static final String LOCATION = "<location>";

  public static void main(String[] args) throws IOException {
    try (VertexAI vertexAi = new VertexAI(PROJECT_ID, LOCATION, Transport.REST)) {
      GenerativeModel model = new GenerativeModel("gemini-pro", vertexAi);
      GenerateContentResponse response = model.generateContent("How are you?");
      // Do something with the response
    }
  }
}
```
Java 8 or above is required for using this client.
Google's Java client libraries, Google Cloud Client Libraries and Google Cloud API Libraries, follow the Oracle Java SE support roadmap (see the Oracle Java SE Product Releases section).
In general, new feature development occurs with support for the lowest Java LTS version covered by Oracle's Premier Support (which typically lasts 5 years from initial General Availability). If the minimum required JVM for a given library is changed, it is accompanied by a semver major release.
Java 11 and (in September 2021) Java 17 are the best choices for new development.
Google tests its client libraries with all current LTS versions covered by Oracle's Extended Support (which typically lasts 8 years from initial General Availability).
Google's client libraries support legacy versions of Java runtimes with long-term stable libraries that don't receive feature updates, on a best-efforts basis, as it may not be possible to backport all patches.

Google provides updates on a best-efforts basis to apps that continue to use Java 7, though apps might need to upgrade to current versions of the library that support their JVM.
The latest versions and the supported Java versions are identified on the individual GitHub repository `github.com/googleapis/java-SERVICENAME` and on google-cloud-java.
This library follows Semantic Versioning.
Contributions to this library are always welcome and highly encouraged.
See CONTRIBUTING for more information how to get started.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.
Apache 2.0 - See LICENSE for more information.
| Java Version | Status |
| --- | --- |
| Java 8 | (CI build badge) |
| Java 8 OSX | (CI build badge) |
| Java 8 Windows | (CI build badge) |
| Java 11 | (CI build badge) |
Java is a registered trademark of Oracle and/or its affiliates.