## feat(discoveryengine): update the api

#### discoveryengine:v1alpha

The following keys were added:
- resources.projects.resources.locations.resources.rankingConfigs.methods.rank (Total Keys: 12)
- schemas.GoogleCloudDiscoveryengineV1alphaBigtableOptions (Total Keys: 19)
- schemas.GoogleCloudDiscoveryengineV1alphaBigtableSource (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaChunk.properties.chunkMetadata (Total Keys: 2)
- schemas.GoogleCloudDiscoveryengineV1alphaChunk.properties.pageSpan.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaChunkChunkMetadata (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaChunkPageSpan (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaCloudSqlSource (Total Keys: 8)
- schemas.GoogleCloudDiscoveryengineV1alphaFhirStoreSource (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1alphaFirestoreSource (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaGroundingConfig (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1alphaImportCompletionSuggestionsMetadata (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaImportCompletionSuggestionsResponse (Total Keys: 8)
- schemas.GoogleCloudDiscoveryengineV1alphaImportDocumentsRequest.properties.bigtableSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaImportDocumentsRequest.properties.cloudSqlSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaImportDocumentsRequest.properties.fhirStoreSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaImportDocumentsRequest.properties.firestoreSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaImportDocumentsRequest.properties.spannerSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaRankRequest (Total Keys: 9)
- schemas.GoogleCloudDiscoveryengineV1alphaRankResponse (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1alphaRankingRecord (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaSearchRequestContentSearchSpec.properties.chunkSpec.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaSearchRequestContentSearchSpecChunkSpec (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaSearchResponseSummaryReference.properties.chunkContents (Total Keys: 2)
- schemas.GoogleCloudDiscoveryengineV1alphaSearchResponseSummaryReferenceChunkContent (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1alphaSpannerSource (Total Keys: 7)
- schemas.GoogleCloudDiscoveryengineV1alphaTrainCustomModelResponse.properties.metrics (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1alphaWidgetConfig.properties.enableSearchAsYouType.type (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1betaTrainCustomModelResponse.properties.metrics (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1betaTuneEngineMetadata (Total Keys: 3)
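
The `rankingConfigs.rank` method added in both surfaces can be exercised through the generated Python client. Below is a minimal, hedged sketch: the project, location, and ranking-config names are placeholders, and the request-body fields (`query`, `topN`, `records`) are read off the `RankRequest`/`RankingRecord` schemas listed above, so verify them against the generated docs before relying on this.

```python
# Hedged sketch: invoking the new rankingConfigs.rank method with the
# generated google-api-python-client surface. Assumes Application Default
# Credentials; all resource names below are placeholders.
from googleapiclient.discovery import build

client = build("discoveryengine", "v1alpha")

ranking_config = (
    "projects/my-project/locations/global"
    "/rankingConfigs/default_ranking_config"  # hypothetical config name
)

response = (
    client.projects()
    .locations()
    .rankingConfigs()
    .rank(
        rankingConfig=ranking_config,
        body={
            "query": "how do I import documents from Spanner?",
            "topN": 2,  # assumed field per the RankRequest schema
            "records": [
                {"id": "1", "title": "Spanner import", "content": "..."},
                {"id": "2", "title": "Bigtable import", "content": "..."},
            ],
        },
    )
    .execute()
)

# RankResponse is assumed to echo the records back with relevance scores.
for record in response.get("records", []):
    print(record.get("id"), record.get("score"))
```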

#### discoveryengine:v1beta

The following keys were added:
- resources.projects.resources.locations.resources.collections.resources.engines.methods.pause (Total Keys: 12)
- resources.projects.resources.locations.resources.collections.resources.engines.methods.resume (Total Keys: 12)
- resources.projects.resources.locations.resources.collections.resources.engines.methods.tune (Total Keys: 12)
- resources.projects.resources.locations.resources.rankingConfigs.methods.rank (Total Keys: 12)
- schemas.GoogleCloudDiscoveryengineV1alphaGroundingConfig (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1alphaImportCompletionSuggestionsMetadata (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaImportCompletionSuggestionsResponse (Total Keys: 8)
- schemas.GoogleCloudDiscoveryengineV1alphaTrainCustomModelResponse.properties.metrics (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1betaBigtableOptions (Total Keys: 19)
- schemas.GoogleCloudDiscoveryengineV1betaBigtableSource (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1betaCloudSqlSource (Total Keys: 8)
- schemas.GoogleCloudDiscoveryengineV1betaFhirStoreSource (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1betaFirestoreSource (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1betaImportDocumentsRequest.properties.bigtableSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1betaImportDocumentsRequest.properties.cloudSqlSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1betaImportDocumentsRequest.properties.fhirStoreSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1betaImportDocumentsRequest.properties.firestoreSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1betaImportDocumentsRequest.properties.spannerSource.$ref (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1betaPauseEngineRequest (Total Keys: 2)
- schemas.GoogleCloudDiscoveryengineV1betaRankRequest (Total Keys: 9)
- schemas.GoogleCloudDiscoveryengineV1betaRankResponse (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1betaRankingRecord (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1betaResumeEngineRequest (Total Keys: 2)
- schemas.GoogleCloudDiscoveryengineV1betaSearchResponseSummaryReference.properties.chunkContents (Total Keys: 2)
- schemas.GoogleCloudDiscoveryengineV1betaSearchResponseSummaryReferenceChunkContent (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1betaSpannerSource (Total Keys: 7)
- schemas.GoogleCloudDiscoveryengineV1betaTrainCustomModelResponse.properties.metrics (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1betaTuneEngineMetadata (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1betaTuneEngineRequest (Total Keys: 2)
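
The v1beta surface also gains engine lifecycle controls. Here is a hedged sketch of the new `pause`, `resume`, and `tune` engine methods; the engine resource name is a placeholder, the empty request bodies mirror the near-empty `PauseEngineRequest`/`ResumeEngineRequest`/`TuneEngineRequest` schemas above, and `tune` is assumed to return a long-running operation (see `TuneEngineMetadata`).

```python
# Hedged sketch: the new engine pause/resume/tune methods in v1beta.
# Resource names are placeholders; tune() is assumed to return a
# long-running operation to poll.
from googleapiclient.discovery import build

client = build("discoveryengine", "v1beta")

engine = (
    "projects/my-project/locations/global"
    "/collections/default_collection/engines/my-engine"
)

engines = client.projects().locations().collections().engines()

# Start a tuning run; poll the returned operation for completion.
operation = engines.tune(name=engine, body={}).execute()
print(operation.get("name"))

# Pause and later resume the engine's tuning/training.
engines.pause(name=engine, body={}).execute()
engines.resume(name=engine, body={}).execute()
```
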
yoshi-automation committed Apr 2, 2024
1 parent a212ce6 commit 0693eae
Showing 26 changed files with 2,485 additions and 76 deletions.
63 changes: 60 additions & 3 deletions docs/dyn/discoveryengine_v1alpha.locations.html

Large diffs are not rendered by default.

@@ -107,6 +107,14 @@ <h3>Method Details</h3>
An object of the form:

{ # Chunk captures all raw metadata information of items to be recommended or searched in the chunk mode.
"chunkMetadata": { # Metadata of the current chunk. This field is only populated on SearchService.Search API. # Output only. Metadata of the current chunk.
"nextChunks": [ # The next chunks of the current chunk. The number is controlled by SearchRequest.ContentSearchSpec.ChunkSpec.num_next_chunks. This field is only populated on SearchService.Search API.
# Object with schema name: GoogleCloudDiscoveryengineV1alphaChunk
],
"previousChunks": [ # The previous chunks of the current chunk. The number is controlled by SearchRequest.ContentSearchSpec.ChunkSpec.num_previous_chunks. This field is only populated on SearchService.Search API.
# Object with schema name: GoogleCloudDiscoveryengineV1alphaChunk
],
},
"content": "A String", # Content is a string from a document (parsed content).
"derivedStructData": { # Output only. This field is OUTPUT_ONLY. It contains derived data that are not in the original input document.
"a_key": "", # Properties of the object.
@@ -117,6 +125,10 @@ <h3>Method Details</h3>
},
"id": "A String", # Unique chunk id of the current chunk.
"name": "A String", # The full resource name of the chunk. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}/branches/{branch}/documents/{document_id}/chunks/{chunk_id}`. This field must be a UTF-8 encoded string with a length limit of 1024 characters.
"pageSpan": { # Page span of the chunk. # Page span of the chunk.
"pageEnd": 42, # The end page of the chunk.
"pageStart": 42, # The start page of the chunk.
},
}</pre>
</div>
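
A hedged sketch of fetching a single chunk and reading the new `pageSpan` and `chunkMetadata` fields documented above; the resource name is a placeholder, and per the docs `chunkMetadata` is only populated on `SearchService.Search`, so expect it to be absent on a plain `get`.

```python
# Hedged sketch: reading the new pageSpan and chunkMetadata fields from a
# chunk returned by chunks.get. The resource name is a placeholder.
from googleapiclient.discovery import build

client = build("discoveryengine", "v1alpha")

name = (
    "projects/my-project/locations/global/collections/default_collection"
    "/dataStores/my-store/branches/0/documents/doc-1/chunks/c1"
)

chunk = (
    client.projects()
    .locations()
    .collections()
    .dataStores()
    .branches()
    .documents()
    .chunks()
    .get(name=name)
    .execute()
)

span = chunk.get("pageSpan", {})
print("pages:", span.get("pageStart"), "-", span.get("pageEnd"))

# chunkMetadata (neighboring chunks) is only populated on SearchService.Search.
for neighbor in chunk.get("chunkMetadata", {}).get("nextChunks", []):
    print("next chunk:", neighbor.get("id"))
```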

Expand All @@ -139,6 +151,14 @@ <h3>Method Details</h3>
{ # Response message for ChunkService.ListChunks method.
"chunks": [ # The Chunks.
{ # Chunk captures all raw metadata information of items to be recommended or searched in the chunk mode.
"chunkMetadata": { # Metadata of the current chunk. This field is only populated on SearchService.Search API. # Output only. Metadata of the current chunk.
"nextChunks": [ # The next chunks of the current chunk. The number is controlled by SearchRequest.ContentSearchSpec.ChunkSpec.num_next_chunks. This field is only populated on SearchService.Search API.
# Object with schema name: GoogleCloudDiscoveryengineV1alphaChunk
],
"previousChunks": [ # The previous chunks of the current chunk. The number is controlled by SearchRequest.ContentSearchSpec.ChunkSpec.num_previous_chunks. This field is only populated on SearchService.Search API.
# Object with schema name: GoogleCloudDiscoveryengineV1alphaChunk
],
},
"content": "A String", # Content is a string from a document (parsed content).
"derivedStructData": { # Output only. This field is OUTPUT_ONLY. It contains derived data that are not in the original input document.
"a_key": "", # Properties of the object.
@@ -149,6 +169,10 @@ <h3>Method Details</h3>
},
"id": "A String", # Unique chunk id of the current chunk.
"name": "A String", # The full resource name of the chunk. Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}/branches/{branch}/documents/{document_id}/chunks/{chunk_id}`. This field must be a UTF-8 encoded string with a length limit of 1024 characters.
"pageSpan": { # Page span of the chunk. # Page span of the chunk.
"pageEnd": 42, # The end page of the chunk.
"pageStart": 42, # The start page of the chunk.
},
},
],
"nextPageToken": "A String", # A token that can be sent as ListChunksRequest.page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.
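
The `ListChunks` response above carries a `nextPageToken`. A hedged sketch of paging through it with the generated client's `list`/`list_next` helpers follows; the parent resource name is a placeholder.

```python
# Hedged sketch: paging through ListChunks via nextPageToken using the
# generated list/list_next helpers. The parent resource is a placeholder.
from googleapiclient.discovery import build

client = build("discoveryengine", "v1alpha")

chunks = (
    client.projects()
    .locations()
    .collections()
    .dataStores()
    .branches()
    .documents()
    .chunks()
)

parent = (
    "projects/my-project/locations/global/collections/default_collection"
    "/dataStores/my-store/branches/0/documents/doc-1"
)

request = chunks.list(parent=parent)
while request is not None:
    response = request.execute()
    for chunk in response.get("chunks", []):
        print(chunk.get("id"))
    # list_next returns None once the response has no nextPageToken.
    request = chunks.list_next(request, response)
```
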
@@ -302,7 +302,7 @@ <h3>Method Details</h3>
The object takes the form of:

{ # Request message for Import methods.
- "autoGenerateIds": True or False, # Whether to automatically generate IDs for the documents if absent. If set to `true`, Document.ids are automatically generated based on the hash of the payload, where IDs may not be consistent during multiple imports. In which case ReconciliationMode.FULL is highly recommended to avoid duplicate contents. If unset or set to `false`, Document.ids have to be specified using id_field, otherwise, documents without IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, and when GcsSource.data_schema or BigQuerySource.data_schema is `custom` or `csv`. Otherwise, an INVALID_ARGUMENT error is thrown.
+ "autoGenerateIds": True or False, # Whether to automatically generate IDs for the documents if absent. If set to `true`, Document.ids are automatically generated based on the hash of the payload, where IDs may not be consistent during multiple imports. In which case ReconciliationMode.FULL is highly recommended to avoid duplicate contents. If unset or set to `false`, Document.ids have to be specified using id_field, otherwise, documents without IDs fail to be imported. Supported data sources: * GcsSource. GcsSource.data_schema must be `custom` or `csv`. Otherwise, an INVALID_ARGUMENT error is thrown. * BigQuerySource. BigQuerySource.data_schema must be `custom` or `csv`. Otherwise, an INVALID_ARGUMENT error is thrown. * SpannerSource * CloudSqlSource * FirestoreSource * BigtableSource
"bigquerySource": { # BigQuery source import data from. # BigQuery input source.
"dataSchema": "A String", # The schema to use when parsing the data from the source. Supported values for user event imports: * `user_event` (default): One UserEvent per row. Supported values for document imports: * `document` (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data. * `custom`: One custom data per row in arbitrary format that conforms to the defined Schema of the data store. This can only be used by Gen App Builder.
"datasetId": "A String", # Required. The BigQuery data set to copy the data from with a length limit of 1,024 characters.
@@ -315,16 +315,57 @@ <h3>Method Details</h3>
"projectId": "A String", # The project ID (can be project # or ID) that the BigQuery source is in with a length limit of 128 characters. If not specified, inherits the project ID from the parent request.
"tableId": "A String", # Required. The BigQuery table to copy the data from with a length limit of 1,024 characters.
},
"bigtableSource": { # The Cloud Bigtable source for importing data # Cloud Bigtable input source.
"bigtableOptions": { # The Bigtable Options object that contains information to support the import. # Required. Bigtable options that contains information needed when parsing data into typed structures. For example, column type annotations.
"families": { # The mapping from family names to an object that contains column families level information for the given column family. If a family is not present in this map it will be ignored.
"a_key": {
"columns": [ # The list of objects that contains column level information for each column. If a column is not present in this list it will be ignored.
{
"encoding": "A String", # Optional. The encoding mode of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
"fieldName": "A String", # The field name to use for this column in the UCS document. The name has to match a-zA-Z0-9* If not set, we will parse it from the qualifier bytes with best effort. However, field name collisions could happen, where parsing behavior is undefined.
"qualifier": "A String", # Required. Qualifier of the column. If cannot decode with utf-8, store a base-64 encoded string.
"type": "A String", # Optional. The type of values in this column family. The values are expected to be encoded using HBase Bytes.toBytes function when the encoding value is set to BINARY.
},
],
"encoding": "A String", # Optional. The encoding mode of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
"fieldName": "A String", # The field name to use for this column family in the UCS document. The name has to match a-zA-Z0-9* If not set, we will parse it from the family name with best effort. However, due to differing naming patterns, there could be field name collisions, where parsing behavior is undefined.
"type": "A String", # Optional. The type of values in this column family. The values are expected to be encoded using HBase Bytes.toBytes function when the encoding value is set to BINARY.
},
},
"keyFieldName": "A String", # The field name used for saving row key value in the UCS document. The name has to match a-zA-Z0-9*
},
"instanceId": "A String", # Required. The instance ID of the Cloud Bigtable that needs to be exported.
"projectId": "A String", # The project ID (can be project # or ID) that the Bigtable source is in with a length limit of 128 characters. If not specified, inherits the project ID from the parent request.
"tableId": "A String", # Required. The table ID of the Cloud Bigtable that needs to be exported.
},
"cloudSqlSource": { # Cloud SQL source import data from. # Cloud SQL input source.
"databaseId": "A String", # Required. The Cloud SQL database to copy the data from with a length limit of 256 characters.
"gcsStagingDir": "A String", # Optional. Intermediate Cloud Storage directory used for the import with a length limit of 2,000 characters. Can be specified if one wants to have the Cloud SQL export to a specific Cloud Storage directory. Please ensure that the Cloud SQL service account has the necessary GCS Storage Admin permissions to access the specified GCS directory.
"instanceId": "A String", # Required. The Cloud SQL instance to copy the data from with a length limit of 256 characters.
"offload": True or False, # Optional. Option for serverless export. Enabling this option will incur additional cost. More info: https://cloud.google.com/sql/pricing#serverless
"projectId": "A String", # Optional. The project ID (can be project # or ID) that the Cloud SQL source is in with a length limit of 128 characters. If not specified, inherits the project ID from the parent request.
"tableId": "A String", # Required. The Cloud SQL table to copy the data from with a length limit of 256 characters.
},
"errorConfig": { # Configuration of destination for Import related errors. # The desired location of errors incurred during the Import.
"gcsPrefix": "A String", # Cloud Storage prefix for import errors. This must be an empty, existing Cloud Storage directory. Import errors are written to sharded files in this directory, one per line, as a JSON-encoded `google.rpc.Status` message.
},
"fhirStoreSource": { # Cloud FhirStore source import data from. # FhirStore input source.
"fhirStore": "A String", # Required. The full resource name of the FHIR store to import data from, in the format of `projects/{project}/locations/{location}/datasets/{dataset}/fhirStores/{fhir_store}`.
"gcsStagingDir": "A String", # Intermediate Cloud Storage directory used for the import with a length limit of 2,000 characters. Can be specified if one wants to have the FhirStore export to a specific Cloud Storage directory.
},
"firestoreSource": { # Firestore source import data from. # Firestore input source.
"collectionId": "A String", # Required. The Firestore collection to copy the data from with a length limit of 1500 characters.
"databaseId": "A String", # Required. The Firestore database to copy the data from with a length limit of 256 characters.
"gcsStagingDir": "A String", # Optional. Intermediate Cloud Storage directory used for the import with a length limit of 2,000 characters. Can be specified if one wants to have the Firestore export to a specific Cloud Storage directory. Please ensure that the Firestore service account has the necessary GCS Storage Admin permissions to access the specified GCS directory.
"projectId": "A String", # Optional. The project ID (can be project # or ID) that the Firestore source is in with a length limit of 128 characters. If not specified, inherits the project ID from the parent request.
},
"gcsSource": { # Cloud Storage location for input content. # Cloud Storage location for the input content.
"dataSchema": "A String", # The schema to use when parsing the data from the source. Supported values for document imports: * `document` (default): One JSON Document per line. Each document must have a valid Document.id. * `content`: Unstructured data (e.g. PDF, HTML). Each file matched by `input_uris` becomes a document, with the ID set to the first 128 bits of SHA256(URI) encoded as a hex string. * `custom`: One custom data JSON per row in arbitrary format that conforms to the defined Schema of the data store. This can only be used by Gen App Builder. * `csv`: A CSV file with header conforming to the defined Schema of the data store. Each entry after the header is imported as a Document. This can only be used by Gen App Builder. Supported values for user event imports: * `user_event` (default): One JSON UserEvent per line.
"inputUris": [ # Required. Cloud Storage URIs to input files. URI can be up to 2000 characters long. URIs can match the full object path (for example, `gs://bucket/directory/object.json`) or a pattern matching one or more files, such as `gs://bucket/directory/*.json`. A request can contain at most 100 files (or 100,000 files if `data_schema` is `content`). Each file can be up to 2 GB (or 100 MB if `data_schema` is `content`).
"A String",
],
},
- "idField": "A String", # The field in the Cloud Storage and BigQuery sources that indicates the unique IDs of the documents. For GcsSource it is the key of the JSON field. For instance, `my_id` for JSON `{"my_id": "some_uuid"}`. For BigQuerySource it is the column name of the BigQuery table where the unique ids are stored. The values of the JSON field or the BigQuery column are used as the Document.ids. The JSON field or the BigQuery column must be of string type, and the values must be set as valid strings conform to [RFC-1034](https://tools.ietf.org/html/rfc1034) with 1-63 characters. Otherwise, documents without valid IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, and when GcsSource.data_schema or BigQuerySource.data_schema is `custom`. And only set this field when auto_generate_ids is unset or set as `false`. Otherwise, an INVALID_ARGUMENT error is thrown. If it is unset, a default value `_id` is used when importing from the allowed data sources.
+ "idField": "A String", # The field indicates the ID field or column to be used as unique IDs of the documents. For GcsSource it is the key of the JSON field. For instance, `my_id` for JSON `{"my_id": "some_uuid"}`. For others, it may be the column name of the table where the unique ids are stored. The values of the JSON field or the table column are used as the Document.ids. The JSON field or the table column must be of string type, and the values must be set as valid strings conform to [RFC-1034](https://tools.ietf.org/html/rfc1034) with 1-63 characters. Otherwise, documents without valid IDs fail to be imported. Only set this field when auto_generate_ids is unset or set as `false`. Otherwise, an INVALID_ARGUMENT error is thrown. If it is unset, a default value `_id` is used when importing from the allowed data sources. Supported data sources: * GcsSource. GcsSource.data_schema must be `custom` or `csv`. Otherwise, an INVALID_ARGUMENT error is thrown. * BigQuerySource. BigQuerySource.data_schema must be `custom` or `csv`. Otherwise, an INVALID_ARGUMENT error is thrown. * SpannerSource * CloudSqlSource * FirestoreSource * BigtableSource
"inlineSource": { # The inline source for the input config for ImportDocuments method. # The Inline source for the input content for documents.
"documents": [ # Required. A list of documents to update/create. Each document must have a valid Document.id. Recommended max of 100 items.
{ # Document captures all raw metadata information of items to be recommended or searched.
@@ -361,6 +402,13 @@ <h3>Method Details</h3>
],
},
"reconciliationMode": "A String", # The mode of reconciliation between existing documents and the documents to be imported. Defaults to ReconciliationMode.INCREMENTAL.
"spannerSource": { # The Spanner source for importing data # Spanner input source.
"databaseId": "A String", # Required. The database ID of the source Spanner table.
"enableDataBoost": True or False, # Optional. Whether to apply data boost on Spanner export. Enabling this option will incur additional cost. More info: https://cloud.google.com/spanner/docs/databoost/databoost-overview#billing_and_quotas
"instanceId": "A String", # Required. The instance ID of the source Spanner table.
"projectId": "A String", # The project ID that the Spanner source is in with a length limit of 128 characters. If not specified, inherits the project ID from the parent request.
"tableId": "A String", # Required. The table name of the Spanner database that needs to be imported.
},
}

x__xgafv: string, V1 error format.
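
The `ImportDocumentsRequest` above now accepts Spanner, Cloud SQL, Firestore, Bigtable, and FHIR-store sources. A hedged sketch using the new `cloudSqlSource` follows; all IDs are placeholders, and the generated method is assumed to be `import_` (trailing underscore because `import` is a Python keyword).

```python
# Hedged sketch: an ImportDocumentsRequest using the new cloudSqlSource.
# All resource IDs are placeholders.
from googleapiclient.discovery import build

client = build("discoveryengine", "v1alpha")

parent = (
    "projects/my-project/locations/global/collections/default_collection"
    "/dataStores/my-store/branches/0"
)

operation = (
    client.projects()
    .locations()
    .collections()
    .dataStores()
    .branches()
    .documents()
    .import_(  # assumed name; `import` is reserved in Python
        parent=parent,
        body={
            "cloudSqlSource": {
                "projectId": "my-project",
                "instanceId": "my-sql-instance",
                "databaseId": "my-database",
                "tableId": "documents",
            },
            "autoGenerateIds": True,  # allowed for CloudSqlSource per the docs
            "reconciliationMode": "INCREMENTAL",
        },
    )
    .execute()
)

print(operation.get("name"))  # long-running operation to poll
```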
