feat(discoveryengine): update the api
#### discoveryengine:v1alpha

The following keys were added:
- schemas.GoogleCloudDiscoveryengineV1ImportDocumentsMetadata (Total Keys: 10)
- schemas.GoogleCloudDiscoveryengineV1ImportDocumentsResponse (Total Keys: 5)
- schemas.GoogleCloudDiscoveryengineV1ImportErrorConfig (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1ImportUserEventsMetadata (Total Keys: 10)
- schemas.GoogleCloudDiscoveryengineV1ImportUserEventsResponse (Total Keys: 9)
- schemas.GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata (Total Keys: 10)
- schemas.GoogleCloudDiscoveryengineV1PurgeDocumentsResponse (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1Schema (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSitesResponse (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1alphaImportDocumentsRequest.properties.autoGenerateIds.type (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1alphaImportDocumentsRequest.properties.idField.type (Total Keys: 1); see the usage sketch after this list
- schemas.GoogleCloudDiscoveryengineV1alphaTargetSite (Total Keys: 12)
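The v1alpha additions above expose two new ImportDocumentsRequest fields, `autoGenerateIds` and `idField`. Below is a minimal sketch of a Cloud Storage import that relies on `autoGenerateIds`, using the generated Python client (google-api-python-client). The project, location, data store, and bucket names are placeholders, and `default_collection`/`default_branch` are assumed conventional IDs rather than values confirmed by this commit.

```python
from googleapiclient.discovery import build

# Build the v1alpha client; Application Default Credentials are assumed.
service = build("discoveryengine", "v1alpha")

# Assumed resource names -- substitute your own project, location, and data store.
parent = (
    "projects/my-project/locations/global/collections/default_collection/"
    "dataStores/my-data-store/branches/default_branch"
)

body = {
    # New in this revision: let the service derive Document IDs from a hash of
    # each payload. Only valid for GCS/BigQuery sources with a `custom` schema.
    "autoGenerateIds": True,
    "gcsSource": {
        "inputUris": ["gs://my-bucket/documents/*.json"],  # assumed bucket/path
        "dataSchema": "custom",
    },
    # Hashed IDs can differ between runs, so FULL reconciliation is recommended
    # by the field description to avoid duplicate contents.
    "reconciliationMode": "FULL",
}

operation = (
    service.projects()
    .locations()
    .collections()
    .dataStores()
    .branches()
    .documents()
    .import_(parent=parent, body=body)  # `import` is renamed `import_` in the Python client
    .execute()
)
print(operation["name"])  # long-running operation name
```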

#### discoveryengine:v1beta

The following keys were added:
- schemas.GoogleCloudDiscoveryengineV1ImportDocumentsMetadata (Total Keys: 10)
- schemas.GoogleCloudDiscoveryengineV1ImportDocumentsResponse (Total Keys: 5)
- schemas.GoogleCloudDiscoveryengineV1ImportErrorConfig (Total Keys: 3)
- schemas.GoogleCloudDiscoveryengineV1ImportUserEventsMetadata (Total Keys: 10)
- schemas.GoogleCloudDiscoveryengineV1ImportUserEventsResponse (Total Keys: 9)
- schemas.GoogleCloudDiscoveryengineV1PurgeDocumentsMetadata (Total Keys: 10)
- schemas.GoogleCloudDiscoveryengineV1PurgeDocumentsResponse (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1Schema (Total Keys: 6)
- schemas.GoogleCloudDiscoveryengineV1alphaBatchCreateTargetSitesResponse (Total Keys: 4)
- schemas.GoogleCloudDiscoveryengineV1alphaTargetSite (Total Keys: 12)
- schemas.GoogleCloudDiscoveryengineV1betaImportDocumentsRequest.properties.autoGenerateIds.type (Total Keys: 1)
- schemas.GoogleCloudDiscoveryengineV1betaImportDocumentsRequest.properties.idField.type (Total Keys: 1); see the BigQuery sketch after this list
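The same two fields land in v1beta. Here is a sketch of the complementary case: importing from BigQuery and taking document IDs from a column via `idField`. The table, column, and resource names are assumptions.

```python
from googleapiclient.discovery import build

service = build("discoveryengine", "v1beta")

# Assumed names; substitute your own project, location, data store, and table.
parent = (
    "projects/my-project/locations/global/collections/default_collection/"
    "dataStores/my-data-store/branches/default_branch"
)

body = {
    "bigquerySource": {
        "projectId": "my-project",
        "datasetId": "my_dataset",
        "tableId": "my_documents",
        "dataSchema": "custom",  # idField only applies to custom schemas
    },
    # New in this revision: take document IDs from the `my_id` column.
    # Only honored when autoGenerateIds is unset or false; values must be
    # RFC-1034 strings of 1-63 characters.
    "idField": "my_id",
    "reconciliationMode": "INCREMENTAL",
}

operation = (
    service.projects()
    .locations()
    .collections()
    .dataStores()
    .branches()
    .documents()
    .import_(parent=parent, body=body)
    .execute()
)
```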
yoshi-automation committed May 30, 2023
1 parent 306c214 commit 1e5b551
Showing 6 changed files with 504 additions and 2 deletions.
@@ -202,6 +202,7 @@ <h3>Method Details</h3>
The object takes the form of:

{ # Request message for Import methods.
"autoGenerateIds": True or False, # Whether to automatically generate IDs for the documents if they are absent. If set to `true`, Document.ids are automatically generated based on a hash of the payload, and the IDs may not be consistent across multiple imports; in that case, ReconciliationMode.FULL is highly recommended to avoid duplicate contents. If unset or set to `false`, Document.ids have to be specified using id_field; otherwise, documents without IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, and when GcsSource.data_schema or BigQuerySource.data_schema is `custom`. Otherwise, an INVALID_ARGUMENT error is thrown.
"bigquerySource": { # BigQuery source to import data from. # BigQuery input source.
"dataSchema": "A String", # The schema to use when parsing the data from the source. Supported values for user event imports: * `user_event` (default): One UserEvent per row. Supported values for document imports: * `document` (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data. * `custom`: One custom data record per row, in an arbitrary format that conforms to the defined Schema of the data store. This can only be used by the GENERIC Data Store vertical.
"datasetId": "A String", # Required. The BigQuery data set to copy the data from, with a length limit of 1,024 characters.
@@ -223,6 +224,7 @@ <h3>Method Details</h3>
      "A String",
],
},
"idField": "A String", # The field in the Cloud Storage and BigQuery sources that indicates the unique IDs of the documents. For GcsSource it is the key of the JSON field. For instance, `my_id` for JSON `{"my_id": "some_uuid"}`. For BigQuerySource it is the column name of the BigQuery table where the unique IDs are stored. The values of the JSON field or the BigQuery column are used as the Document.ids. The JSON field or the BigQuery column must be of string type, and the values must be valid strings conforming to [RFC-1034](https://tools.ietf.org/html/rfc1034) with 1-63 characters; otherwise, documents without valid IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, when GcsSource.data_schema or BigQuerySource.data_schema is `custom`, and when auto_generate_ids is unset or set to `false`. Otherwise, an INVALID_ARGUMENT error is thrown. If it is unset, a default value `_id` is used when importing from the allowed data sources.
"inlineSource": { # The inline source for the input config for ImportDocuments method. # The inline source for the input content for documents.
"documents": [ # Required. A list of documents to update/create. Each document must have a valid Document.id. Recommended max of 100 items.
{ # Document captures all raw metadata information of items to be recommended or searched.
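`importDocuments` returns a long-running operation, and the newly added `GoogleCloudDiscoveryengineV1ImportDocumentsMetadata`/`ImportDocumentsResponse` schemas describe its progress and outcome. The polling sketch below is hedged: the branch-level `operations` resource and the operation name are assumptions, so adjust both to whatever your import call actually returned.

```python
import time

from googleapiclient.discovery import build

service = build("discoveryengine", "v1alpha")

# Name returned by documents().import_(); the trailing ID here is illustrative.
operation_name = (
    "projects/my-project/locations/global/collections/default_collection/"
    "dataStores/my-data-store/branches/default_branch/operations/import-documents-123"
)

# Assumption: the branch-level operations resource exposed by the discovery
# document is used to poll; pick the operations resource that matches your path.
ops = (
    service.projects()
    .locations()
    .collections()
    .dataStores()
    .branches()
    .operations()
)

operation = ops.get(name=operation_name).execute()
while not operation.get("done"):
    time.sleep(10)
    operation = ops.get(name=operation["name"]).execute()

# metadata -> GoogleCloudDiscoveryengineV1ImportDocumentsMetadata
# response -> GoogleCloudDiscoveryengineV1ImportDocumentsResponse
metadata = operation.get("metadata", {})
print("succeeded:", metadata.get("successCount"), "failed:", metadata.get("failureCount"))
```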
@@ -202,6 +202,7 @@ <h3>Method Details</h3>
The object takes the form of:

{ # Request message for Import methods.
"autoGenerateIds": True or False, # Whether to automatically generate IDs for the documents if they are absent. If set to `true`, Document.ids are automatically generated based on a hash of the payload, and the IDs may not be consistent across multiple imports; in that case, ReconciliationMode.FULL is highly recommended to avoid duplicate contents. If unset or set to `false`, Document.ids have to be specified using id_field; otherwise, documents without IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, and when GcsSource.data_schema or BigQuerySource.data_schema is `custom`. Otherwise, an INVALID_ARGUMENT error is thrown.
"bigquerySource": { # BigQuery source to import data from. # BigQuery input source.
"dataSchema": "A String", # The schema to use when parsing the data from the source. Supported values for user event imports: * `user_event` (default): One UserEvent per row. Supported values for document imports: * `document` (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data. * `custom`: One custom data record per row, in an arbitrary format that conforms to the defined Schema of the data store. This can only be used by the GENERIC Data Store vertical.
"datasetId": "A String", # Required. The BigQuery data set to copy the data from, with a length limit of 1,024 characters.
@@ -223,6 +224,7 @@ <h3>Method Details</h3>
      "A String",
],
},
"idField": "A String", # The field in the Cloud Storage and BigQuery sources that indicates the unique IDs of the documents. For GcsSource it is the key of the JSON field. For instance, `my_id` for JSON `{"my_id": "some_uuid"}`. For BigQuerySource it is the column name of the BigQuery table where the unique IDs are stored. The values of the JSON field or the BigQuery column are used as the Document.ids. The JSON field or the BigQuery column must be of string type, and the values must be valid strings conforming to [RFC-1034](https://tools.ietf.org/html/rfc1034) with 1-63 characters; otherwise, documents without valid IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, when GcsSource.data_schema or BigQuerySource.data_schema is `custom`, and when auto_generate_ids is unset or set to `false`. Otherwise, an INVALID_ARGUMENT error is thrown. If it is unset, a default value `_id` is used when importing from the allowed data sources.
"inlineSource": { # The inline source for the input config for ImportDocuments method. # The inline source for the input content for documents.
"documents": [ # Required. A list of documents to update/create. Each document must have a valid Document.id. Recommended max of 100 items.
{ # Document captures all raw metadata information of items to be recommended or searched.
@@ -202,6 +202,7 @@ <h3>Method Details</h3>
The object takes the form of:

{ # Request message for Import methods.
"autoGenerateIds": True or False, # Whether to automatically generate IDs for the documents if they are absent. If set to `true`, Document.ids are automatically generated based on a hash of the payload, and the IDs may not be consistent across multiple imports; in that case, ReconciliationMode.FULL is highly recommended to avoid duplicate contents. If unset or set to `false`, Document.ids have to be specified using id_field; otherwise, documents without IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, and when GcsSource.data_schema or BigQuerySource.data_schema is `custom`. Otherwise, an INVALID_ARGUMENT error is thrown.
"bigquerySource": { # BigQuery source to import data from. # BigQuery input source.
"dataSchema": "A String", # The schema to use when parsing the data from the source. Supported values for user event imports: * `user_event` (default): One UserEvent per row. Supported values for document imports: * `document` (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data. * `custom`: One custom data record per row, in an arbitrary format that conforms to the defined Schema of the data store. This can only be used by the GENERIC Data Store vertical.
"datasetId": "A String", # Required. The BigQuery data set to copy the data from, with a length limit of 1,024 characters.
@@ -223,6 +224,7 @@ <h3>Method Details</h3>
      "A String",
],
},
"idField": "A String", # The field in the Cloud Storage and BigQuery sources that indicates the unique IDs of the documents. For GcsSource it is the key of the JSON field. For instance, `my_id` for JSON `{"my_id": "some_uuid"}`. For BigQuerySource it is the column name of the BigQuery table where the unique IDs are stored. The values of the JSON field or the BigQuery column are used as the Document.ids. The JSON field or the BigQuery column must be of string type, and the values must be valid strings conforming to [RFC-1034](https://tools.ietf.org/html/rfc1034) with 1-63 characters; otherwise, documents without valid IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, when GcsSource.data_schema or BigQuerySource.data_schema is `custom`, and when auto_generate_ids is unset or set to `false`. Otherwise, an INVALID_ARGUMENT error is thrown. If it is unset, a default value `_id` is used when importing from the allowed data sources.
"inlineSource": { # The inline source for the input config for ImportDocuments method. # The inline source for the input content for documents.
"documents": [ # Required. A list of documents to update/create. Each document must have a valid Document.id. Recommended max of 100 items.
{ # Document captures all raw metadata information of items to be recommended or searched.
@@ -202,6 +202,7 @@ <h3>Method Details</h3>
The object takes the form of:

{ # Request message for Import methods.
"autoGenerateIds": True or False, # Whether to automatically generate IDs for the documents if they are absent. If set to `true`, Document.ids are automatically generated based on a hash of the payload, and the IDs may not be consistent across multiple imports; in that case, ReconciliationMode.FULL is highly recommended to avoid duplicate contents. If unset or set to `false`, Document.ids have to be specified using id_field; otherwise, documents without IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, and when GcsSource.data_schema or BigQuerySource.data_schema is `custom`. Otherwise, an INVALID_ARGUMENT error is thrown.
"bigquerySource": { # BigQuery source to import data from. # BigQuery input source.
"dataSchema": "A String", # The schema to use when parsing the data from the source. Supported values for user event imports: * `user_event` (default): One UserEvent per row. Supported values for document imports: * `document` (default): One Document format per row. Each document must have a valid Document.id and one of Document.json_data or Document.struct_data. * `custom`: One custom data record per row, in an arbitrary format that conforms to the defined Schema of the data store. This can only be used by the GENERIC Data Store vertical.
"datasetId": "A String", # Required. The BigQuery data set to copy the data from, with a length limit of 1,024 characters.
@@ -223,6 +224,7 @@ <h3>Method Details</h3>
      "A String",
],
},
"idField": "A String", # The field in the Cloud Storage and BigQuery sources that indicates the unique IDs of the documents. For GcsSource it is the key of the JSON field. For instance, `my_id` for JSON `{"my_id": "some_uuid"}`. For BigQuerySource it is the column name of the BigQuery table where the unique IDs are stored. The values of the JSON field or the BigQuery column are used as the Document.ids. The JSON field or the BigQuery column must be of string type, and the values must be valid strings conforming to [RFC-1034](https://tools.ietf.org/html/rfc1034) with 1-63 characters; otherwise, documents without valid IDs fail to be imported. Only set this field when using GcsSource or BigQuerySource, when GcsSource.data_schema or BigQuerySource.data_schema is `custom`, and when auto_generate_ids is unset or set to `false`. Otherwise, an INVALID_ARGUMENT error is thrown. If it is unset, a default value `_id` is used when importing from the allowed data sources.
"inlineSource": { # The inline source for the input config for ImportDocuments method. # The inline source for the input content for documents.
"documents": [ # Required. A list of documents to update/create. Each document must have a valid Document.id. Recommended max of 100 items.
{ # Document captures all raw metadata information of items to be recommended or searched.
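Taken together, the docstrings above impose three constraints: the new fields only apply to GcsSource or BigQuerySource imports, the source's data schema must be `custom`, and `idField` is honored only when `autoGenerateIds` is unset or `false`. The following helper is purely hypothetical (not part of the client library) and simply makes those documented rules explicit when assembling a request body.

```python
from typing import Optional


def build_import_body(source: dict, *, auto_generate_ids: bool = False,
                      id_field: Optional[str] = None) -> dict:
    """Assemble an ImportDocumentsRequest body, enforcing the documented rules.

    `source` must contain exactly one of "gcsSource" or "bigquerySource"; its
    dataSchema must be "custom" whenever either ID option is used.
    """
    (key, value), = source.items()  # exactly one source is expected
    if auto_generate_ids or id_field:
        if key not in ("gcsSource", "bigquerySource"):
            raise ValueError("ID options require a GCS or BigQuery source")
        if value.get("dataSchema") != "custom":
            raise ValueError("ID options require dataSchema == 'custom'")
    if auto_generate_ids and id_field:
        raise ValueError("idField is only honored when autoGenerateIds is unset or false")

    body = dict(source)
    if auto_generate_ids:
        body["autoGenerateIds"] = True
    elif id_field:
        body["idField"] = id_field
    return body


# Example: GCS import keyed by the `my_id` JSON field, as in the docstring above.
body = build_import_body(
    {"gcsSource": {"inputUris": ["gs://my-bucket/*.json"], "dataSchema": "custom"}},
    id_field="my_id",
)
```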
