diff --git a/docs/dyn/adsense_v2.accounts.adclients.adunits.html b/docs/dyn/adsense_v2.accounts.adclients.adunits.html
index 7cce189dd0..aaeb6503cd 100644
--- a/docs/dyn/adsense_v2.accounts.adclients.adunits.html
+++ b/docs/dyn/adsense_v2.accounts.adclients.adunits.html
@@ -79,7 +79,7 @@

Instance Methods

Close httplib2 connections.

create(parent, body=None, x__xgafv=None)

-

Creates an ad unit. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product. Note that ad units can only be created for ad clients with an "AFC" product code. For more info see the [AdClient resource](/adsense/management/reference/rest/v2/accounts.adclients). For now, this method can only be used to create `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566

+

Creates an ad unit. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method. Note that ad units can only be created for ad clients with an "AFC" product code. For more info see the [AdClient resource](/adsense/management/reference/rest/v2/accounts.adclients). For now, this method can only be used to create `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566

get(name, x__xgafv=None)

Gets an ad unit from a specified account and ad client.

@@ -100,7 +100,7 @@

Instance Methods

Retrieves the next page of results.

patch(name, body=None, updateMask=None, x__xgafv=None)

-

Updates an ad unit. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product. For now, this method can only be used to update `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566

+

Updates an ad unit. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method. For now, this method can only be used to update `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566

Method Details

close()
@@ -109,7 +109,7 @@

Method Details

create(parent, body=None, x__xgafv=None) -
Creates an ad unit. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product. Note that ad units can only be created for ad clients with an "AFC" product code. For more info see the [AdClient resource](/adsense/management/reference/rest/v2/accounts.adclients). For now, this method can only be used to create `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566
+  
Creates an ad unit. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method. Note that ad units can only be created for ad clients with an "AFC" product code. For more info see the [AdClient resource](/adsense/management/reference/rest/v2/accounts.adclients). For now, this method can only be used to create `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566
 
 Args:
   parent: string, Required. Ad client to create an ad unit under. Format: accounts/{account}/adclients/{adclient} (required)
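A minimal sketch of calling this method with the google-api-python-client, assuming Application Default Credentials and a project that has been granted access; the account/ad client IDs and the body fields shown (displayName, contentAdsSettings) are illustrative values, not taken from this page:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')  # credentials picked up from the environment
  # Hypothetical IDs; substitute your own account and AFC ad client.
  parent = 'accounts/pub-1234567890123456/adclients/ca-pub-1234567890123456'
  ad_unit = adsense.accounts().adclients().adunits().create(
      parent=parent,
      body={
          'displayName': 'Homepage sidebar',
          'contentAdsSettings': {'type': 'DISPLAY', 'size': 'SIZE_250_250'},  # example size value
      },
  ).execute()
  print(ad_unit['name'])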
@@ -284,7 +284,7 @@ 

Method Details

patch(name, body=None, updateMask=None, x__xgafv=None) -
Updates an ad unit. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product. For now, this method can only be used to update `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566
+  
Updates an ad unit. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method. For now, this method can only be used to update `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566
 
 Args:
   name: string, Output only. Resource name of the ad unit. Format: accounts/{account}/adclients/{adclient}/adunits/{adunit} (required)
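A hedged sketch of a patch call with the same client, assuming displayName is an updatable AdUnit field; the resource name and mask value are illustrative:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')
  updated = adsense.accounts().adclients().adunits().patch(
      name='accounts/pub-1234567890123456/adclients/ca-pub-1234567890123456/adunits/1234567890',
      updateMask='displayName',  # only the listed fields are updated
      body={'displayName': 'Homepage sidebar (renamed)'},
  ).execute()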
diff --git a/docs/dyn/adsense_v2.accounts.adclients.customchannels.html b/docs/dyn/adsense_v2.accounts.adclients.customchannels.html
index 2413a2091c..36fa291940 100644
--- a/docs/dyn/adsense_v2.accounts.adclients.customchannels.html
+++ b/docs/dyn/adsense_v2.accounts.adclients.customchannels.html
@@ -79,10 +79,10 @@ 

Instance Methods

Close httplib2 connections.

create(parent, body=None, x__xgafv=None)

-

Creates a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.

+

Creates a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.

delete(name, x__xgafv=None)

-

Deletes a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.

+

Deletes a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.

get(name, x__xgafv=None)

Gets information about the selected custom channel.

@@ -100,7 +100,7 @@

Instance Methods

Retrieves the next page of results.

patch(name, body=None, updateMask=None, x__xgafv=None)

-

Updates a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.

+

Updates a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.

Method Details

close()
@@ -109,7 +109,7 @@

Method Details

create(parent, body=None, x__xgafv=None) -
Creates a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.
+  
Creates a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.
 
 Args:
   parent: string, Required. The ad client to create a custom channel under. Format: accounts/{account}/adclients/{adclient} (required)
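A minimal sketch of creating a custom channel, assuming Application Default Credentials and a project with access to this method; the IDs and the displayName body field are placeholders based on the CustomChannel resource:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')
  channel = adsense.accounts().adclients().customchannels().create(
      parent='accounts/pub-1234567890123456/adclients/ca-pub-1234567890123456',
      body={'displayName': 'Sports section'},
  ).execute()
  print(channel['name'])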
@@ -141,7 +141,7 @@ 

Method Details

delete(name, x__xgafv=None) -
Deletes a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.
+  
Deletes a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.
 
 Args:
   name: string, Required. Name of the custom channel to delete. Format: accounts/{account}/adclients/{adclient}/customchannels/{customchannel} (required)
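A short sketch of the corresponding delete call; the custom channel ID is hypothetical:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')
  adsense.accounts().adclients().customchannels().delete(
      name='accounts/pub-1234567890123456/adclients/ca-pub-1234567890123456/customchannels/1234567890'
  ).execute()  # on success the response body is empty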
@@ -271,7 +271,7 @@ 

Method Details

patch(name, body=None, updateMask=None, x__xgafv=None) -
Updates a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.
+  
Updates a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.
 
 Args:
   name: string, Output only. Resource name of the custom channel. Format: accounts/{account}/adclients/{adclient}/customchannels/{customchannel} (required)
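A sketch of renaming a custom channel via patch; the name, mask, and body value are illustrative:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')
  updated = adsense.accounts().adclients().customchannels().patch(
      name='accounts/pub-1234567890123456/adclients/ca-pub-1234567890123456/customchannels/1234567890',
      updateMask='displayName',
      body={'displayName': 'Sports section (renamed)'},
  ).execute()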
diff --git a/docs/dyn/adsense_v2.accounts.html b/docs/dyn/adsense_v2.accounts.html
index f851b5bb7d..0c8108b350 100644
--- a/docs/dyn/adsense_v2.accounts.html
+++ b/docs/dyn/adsense_v2.accounts.html
@@ -89,6 +89,11 @@ 

Instance Methods

Returns the payments Resource.

+

+ policyIssues() +

+

Returns the policyIssues Resource.

+

reports()

diff --git a/docs/dyn/adsense_v2.accounts.policyIssues.html b/docs/dyn/adsense_v2.accounts.policyIssues.html new file mode 100644 index 0000000000..b9b8efd68b --- /dev/null +++ b/docs/dyn/adsense_v2.accounts.policyIssues.html @@ -0,0 +1,214 @@ + + + +

AdSense Management API . accounts . policyIssues

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ get(name, x__xgafv=None)

+

Gets information about the selected policy issue.

+

+ list(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists all the policy issues for the specified account.

+

+ list_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ get(name, x__xgafv=None) +
Gets information about the selected policy issue.
+
+Args:
+  name: string, Required. Name of the policy issue. Format: accounts/{account}/policyIssues/{policy_issue} (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Representation of a policy issue for a single entity (site, site-section, or page). All issues for a single entity are represented by a single PolicyIssue resource, though that PolicyIssue can have multiple causes (or "topics") that can change over time. Policy issues are removed if there are no issues detected recently or if there's a recent successful appeal for the entity.
+  "action": "A String", # Required. The most severe action taken on the entity over the past seven days.
+  "adClients": [ # Optional. List of ad clients associated with the policy issue (either as the primary ad client or an associated host/secondary ad client). In the latter case, this will be an ad client that is not owned by the current account.
+    "A String",
+  ],
+  "adRequestCount": "A String", # Required. Total number of ad requests affected by the policy violations over the past seven days.
+  "entityType": "A String", # Required. Type of the entity indicating if the entity is a site, site-section, or page.
+  "firstDetectedDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Required. The date (in the America/Los_Angeles timezone) when policy violations were first detected on the entity.
+    "day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
+    "month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
+    "year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
+  },
+  "lastDetectedDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Required. The date (in the America/Los_Angeles timezone) when policy violations were last detected on the entity.
+    "day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
+    "month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
+    "year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
+  },
+  "name": "A String", # Required. Resource name of the entity with policy issues. Format: accounts/{account}/policyIssues/{policy_issue}
+  "policyTopics": [ # Required. Unordered list. The policy topics that this entity was found to violate over the past seven days.
+    { # Information about a particular policy topic. A policy topic represents a single class of policy issue that can impact ad serving for your site. For example, sexual content or having ads that obscure your content. A single policy issue can have multiple policy topics for a single entity.
+      "mustFix": True or False, # Required. Indicates if this is a policy violation or not. When the value is true, issues that are instances of this topic must be addressed to remain in compliance with the partner's agreements with Google. A false value indicates that it's not mandatory to fix the issues but advertising demand might be restricted.
+      "topic": "A String", # Required. The policy topic. For example, "sexual-content" or "ads-obscuring-content"."
+    },
+  ],
+  "site": "A String", # Required. Hostname/domain of the entity (for example "foo.com" or "www.foo.com"). This _should_ be a bare domain/host name without any protocol. This will be present for all policy issues.
+  "siteSection": "A String", # Optional. Prefix of the site-section having policy issues (For example "foo.com/bar-section"). This will be present if the `entity_type` is `SITE_SECTION` and will be absent for other entity types.
+  "uri": "A String", # Optional. URI of the page having policy violations (for example "foo.com/bar" or "www.foo.com/bar"). This will be present if the `entity_type` is `PAGE` and will be absent for other entity types.
+  "warningEscalationDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Optional. The date (in the America/Los_Angeles timezone) when the entity will have ad serving demand restricted or ad serving disabled. This is present only for issues with a `WARNED` enforcement action. See https://support.google.com/adsense/answer/11066888.
+    "day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
+    "month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
+    "year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
+  },
+}
+
+ +
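A minimal sketch of fetching a single policy issue with the google-api-python-client, assuming Application Default Credentials; the account and policy issue IDs are hypothetical:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')
  issue = adsense.accounts().policyIssues().get(
      name='accounts/pub-1234567890123456/policyIssues/ft5z1234abcd'
  ).execute()
  print(issue['site'], issue['action'], issue['entityType'])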
+ list(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists all the policy issues for the specified account.
+
+Args:
+  parent: string, Required. The account for which policy issues are being retrieved. Format: accounts/{account} (required)
+  pageSize: integer, The maximum number of policy issues to include in the response, used for paging. If unspecified, at most 10000 policy issues will be returned. The maximum value is 10000; values above 10000 will be coerced to 10000.
+  pageToken: string, A page token, received from a previous `ListPolicyIssues` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListPolicyIssues` must match the call that provided the page token.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response definition for the policy issues list rpc. Policy issues are reported only if the publisher has at least one AFC ad client in READY or GETTING_READY state. If the publisher has no such AFC ad client, the response will be an empty list.
+  "nextPageToken": "A String", # Continuation token used to page through policy issues. To retrieve the next page of the results, set the next request's "page_token" value to this.
+  "policyIssues": [ # The policy issues returned in the list response.
+    { # Representation of a policy issue for a single entity (site, site-section, or page). All issues for a single entity are represented by a single PolicyIssue resource, though that PolicyIssue can have multiple causes (or "topics") that can change over time. Policy issues are removed if there are no issues detected recently or if there's a recent successful appeal for the entity.
+      "action": "A String", # Required. The most severe action taken on the entity over the past seven days.
+      "adClients": [ # Optional. List of ad clients associated with the policy issue (either as the primary ad client or an associated host/secondary ad client). In the latter case, this will be an ad client that is not owned by the current account.
+        "A String",
+      ],
+      "adRequestCount": "A String", # Required. Total number of ad requests affected by the policy violations over the past seven days.
+      "entityType": "A String", # Required. Type of the entity indicating if the entity is a site, site-section, or page.
+      "firstDetectedDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Required. The date (in the America/Los_Angeles timezone) when policy violations were first detected on the entity.
+        "day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
+        "month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
+        "year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
+      },
+      "lastDetectedDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Required. The date (in the America/Los_Angeles timezone) when policy violations were last detected on the entity.
+        "day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
+        "month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
+        "year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
+      },
+      "name": "A String", # Required. Resource name of the entity with policy issues. Format: accounts/{account}/policyIssues/{policy_issue}
+      "policyTopics": [ # Required. Unordered list. The policy topics that this entity was found to violate over the past seven days.
+        { # Information about a particular policy topic. A policy topic represents a single class of policy issue that can impact ad serving for your site. For example, sexual content or having ads that obscure your content. A single policy issue can have multiple policy topics for a single entity.
+          "mustFix": True or False, # Required. Indicates if this is a policy violation or not. When the value is true, issues that are instances of this topic must be addressed to remain in compliance with the partner's agreements with Google. A false value indicates that it's not mandatory to fix the issues but advertising demand might be restricted.
+          "topic": "A String", # Required. The policy topic. For example, "sexual-content" or "ads-obscuring-content"."
+        },
+      ],
+      "site": "A String", # Required. Hostname/domain of the entity (for example "foo.com" or "www.foo.com"). This _should_ be a bare domain/host name without any protocol. This will be present for all policy issues.
+      "siteSection": "A String", # Optional. Prefix of the site-section having policy issues (For example "foo.com/bar-section"). This will be present if the `entity_type` is `SITE_SECTION` and will be absent for other entity types.
+      "uri": "A String", # Optional. URI of the page having policy violations (for example "foo.com/bar" or "www.foo.com/bar"). This will be present if the `entity_type` is `PAGE` and will be absent for other entity types.
+      "warningEscalationDate": { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values. * A month and day, with a zero year (for example, an anniversary). * A year on its own, with a zero month and a zero day. * A year and month, with a zero day (for example, a credit card expiration date). Related types: * google.type.TimeOfDay * google.type.DateTime * google.protobuf.Timestamp # Optional. The date (in the America/Los_Angeles timezone) when the entity will have ad serving demand restricted or ad serving disabled. This is present only for issues with a `WARNED` enforcement action. See https://support.google.com/adsense/answer/11066888.
+        "day": 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
+        "month": 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
+        "year": 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
+      },
+    },
+  ],
+}
+
+ +
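A sketch of listing the first page of policy issues for an account; the account ID is a placeholder:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')
  response = adsense.accounts().policyIssues().list(
      parent='accounts/pub-1234567890123456',
      pageSize=50,
  ).execute()
  for issue in response.get('policyIssues', []):
      print(issue['name'], issue['action'])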
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
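The usual pagination loop for this collection, calling list and then list_next until it returns None; the account ID is a placeholder:

  from googleapiclient.discovery import build

  adsense = build('adsense', 'v2')
  policy_issues = adsense.accounts().policyIssues()
  request = policy_issues.list(parent='accounts/pub-1234567890123456')
  while request is not None:
      response = request.execute()
      for issue in response.get('policyIssues', []):
          print(issue['name'])
      request = policy_issues.list_next(request, response)  # None when no more pages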
+
+
\ No newline at end of file
diff --git a/docs/dyn/aiplatform_v1.projects.locations.customJobs.html b/docs/dyn/aiplatform_v1.projects.locations.customJobs.html
index 4024e0394e..6144b94f67 100644
--- a/docs/dyn/aiplatform_v1.projects.locations.customJobs.html
+++ b/docs/dyn/aiplatform_v1.projects.locations.customJobs.html
@@ -167,6 +167,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -280,6 +281,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -435,6 +437,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -561,6 +564,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", diff --git a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html index a35e488eed..c13c5a5bb3 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.endpoints.html +++ b/docs/dyn/aiplatform_v1.projects.locations.endpoints.html @@ -1108,40 +1108,6 @@

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -1149,20 +1115,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. The type of the data. 
}, }, @@ -1173,7 +1150,7 @@
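A small hedged sketch of a function declaration whose parameters schema exercises the newly documented Schema fields (title, pattern, minimum/maximum, default); the function and its fields are hypothetical, and the dict would be supplied as one element of the tools list in the request body above:

  lookup_tool = {
      'functionDeclarations': [{
          'name': 'lookup_order',
          'description': 'Look up an order by its ID.',
          'parameters': {
              'type': 'OBJECT',
              'title': 'LookupOrderArgs',
              'properties': {
                  'order_id': {
                      'type': 'STRING',
                      'description': 'Order identifier.',
                      'pattern': '^ORD-[0-9]{6}$',  # restrict the string to a regular expression
                  },
                  'max_results': {
                      'type': 'INTEGER',
                      'minimum': 1,
                      'maximum': 50,
                      'default': 10,
                  },
              },
              'required': ['order_id'],
          },
      }],
  }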

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, @@ -1491,7 +1468,7 @@

Method Details

Args: parent: string, Required. The resource name of the Location from which to list the Endpoints. Format: `projects/{project}/locations/{location}` (required) - filter: string, Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports = and !=. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports = and, != * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:* or labels:key - key existence * A key including a space must be quoted. `labels."a key"`. * `base_model_name` only supports = Some examples: * `endpoint=1` * `displayName="myDisplayName"` * `labels.myKey="myValue"` * `baseModelName="text-bison"` + filter: string, Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports `=` and `!=`. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports `=` and `!=`. * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:*` or `labels:key` - key existence * A key including a space must be quoted. `labels."a key"`. * `base_model_name` only supports `=`. Some examples: * `endpoint=1` * `displayName="myDisplayName"` * `labels.myKey="myValue"` * `baseModelName="text-bison"` orderBy: string, A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields: * `display_name` * `create_time` * `update_time` Example: `display_name, create_time desc`. pageSize: integer, Optional. The standard list page size. pageToken: string, Optional. The standard list page token. Typically obtained via ListEndpointsResponse.next_page_token of the previous EndpointService.ListEndpoints call. @@ -2580,40 +2557,6 @@
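A sketch of the list call using the filter syntax described above; the regional endpoint, project, location, and label values are placeholders:

  from googleapiclient.discovery import build

  aiplatform = build(
      'aiplatform', 'v1',
      client_options={'api_endpoint': 'https://us-central1-aiplatform.googleapis.com'},
  )
  response = aiplatform.projects().locations().endpoints().list(
      parent='projects/my-project/locations/us-central1',
      filter='labels.team="ranking"',            # key:value equality on a label
      orderBy='display_name, create_time desc',
  ).execute()
  for endpoint in response.get('endpoints', []):
      print(endpoint['name'], endpoint.get('displayName', ''))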

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -2621,20 +2564,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. The type of the data. 
}, }, @@ -2645,7 +2599,7 @@

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, diff --git a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html index 0b1ff1ec3e..d0d692055e 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html +++ b/docs/dyn/aiplatform_v1.projects.locations.featureOnlineStores.featureViews.html @@ -255,6 +255,14 @@

Method Details

An object of the form: { # Response message for FeatureOnlineStoreService.FetchFeatureValues + "dataKey": { # Lookup key for a feature view. # The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs. + "compositeKey": { # ID that is comprised from several parts (columns). # The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec. + "parts": [ # Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order. + "A String", + ], + }, + "key": "A String", # String key to use for lookup. + }, "keyValues": { # Response structure in the format of key (feature name) and (feature) value pair. # Feature values in KeyValue format. "features": [ # List of feature names and values. { # Feature name & value pair. @@ -533,6 +541,14 @@
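A hedged sketch of a fetchFeatureValues call and of reading the keyValues structure shown above; the feature view name and lookup key are hypothetical, and passing the key via a dataKey field in the request body is an assumption about the request message:

  from googleapiclient.discovery import build

  aiplatform = build(
      'aiplatform', 'v1',
      client_options={'api_endpoint': 'https://us-central1-aiplatform.googleapis.com'},
  )
  feature_views = aiplatform.projects().locations().featureOnlineStores().featureViews()
  response = feature_views.fetchFeatureValues(
      featureView=('projects/my-project/locations/us-central1/'
                   'featureOnlineStores/my-store/featureViews/my-view'),
      body={'dataKey': {'key': 'user_123'}},  # assumed request shape
  ).execute()
  for feature in response['keyValues']['features']:
      print(feature['name'], feature.get('value'))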

Method Details

"distance": 3.14, # The distance between the neighbor and the query vector. "entityId": "A String", # The id of the similar entity. "entityKeyValues": { # Response message for FeatureOnlineStoreService.FetchFeatureValues # The attributes of the neighbor, e.g. filters, crowding and metadata Note that full entities are returned only when "return_full_entity" is set to true. Otherwise, only the "entity_id" and "distance" fields are populated. + "dataKey": { # Lookup key for a feature view. # The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs. + "compositeKey": { # ID that is comprised from several parts (columns). # The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec. + "parts": [ # Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order. + "A String", + ], + }, + "key": "A String", # String key to use for lookup. + }, "keyValues": { # Response structure in the format of key (feature name) and (feature) value pair. # Feature values in KeyValue format. "features": [ # List of feature names and values. { # Feature name & value pair. diff --git a/docs/dyn/aiplatform_v1.projects.locations.hyperparameterTuningJobs.html b/docs/dyn/aiplatform_v1.projects.locations.hyperparameterTuningJobs.html index 0791e580cc..db0bc1330e 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.hyperparameterTuningJobs.html +++ b/docs/dyn/aiplatform_v1.projects.locations.hyperparameterTuningJobs.html @@ -268,6 +268,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -516,6 +517,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -806,6 +808,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -1067,6 +1070,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", diff --git a/docs/dyn/aiplatform_v1.projects.locations.nasJobs.html b/docs/dyn/aiplatform_v1.projects.locations.nasJobs.html index eff4ab06c3..3b6a4c7c46 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.nasJobs.html +++ b/docs/dyn/aiplatform_v1.projects.locations.nasJobs.html @@ -223,6 +223,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -304,6 +305,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -471,6 +473,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -552,6 +555,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -761,6 +765,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -842,6 +847,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -1022,6 +1028,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", @@ -1103,6 +1110,7 @@

Method Details

"A String", ], "network": "A String", # Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network. + "persistentResourceId": "A String", # Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected. "protectedArtifactLocationId": "A String", # The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations "reservedIpRanges": [ # Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range']. "A String", diff --git a/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html b/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html index a04edd8375..154cd5e1cc 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html +++ b/docs/dyn/aiplatform_v1.projects.locations.persistentResources.html @@ -74,6 +74,11 @@

Vertex AI API . projects . locations . persistentResources

Instance Methods

+

+ operations() +

+

Returns the operations Resource.

+

close()

Close httplib2 connections.

@@ -95,6 +100,9 @@

Instance Methods

patch(name, body=None, updateMask=None, x__xgafv=None)

Updates a PersistentResource.

+

+ reboot(name, body=None, x__xgafv=None)

+

Reboots a PersistentResource.

Method Details

close() @@ -549,4 +557,45 @@

Method Details

}
+
+ reboot(name, body=None, x__xgafv=None) +
Reboots a PersistentResource.
+
+Args:
+  name: string, Required. The name of the PersistentResource resource. Format: `projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for PersistentResourceService.RebootPersistentResource.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
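A minimal sketch of calling the reboot method documented above with the Python client. It assumes application default credentials are available; the project, location, and resource IDs are placeholders. The empty request body matches the `RebootPersistentResource` message shown above.

```python
from googleapiclient.discovery import build

aiplatform = build("aiplatform", "v1")  # uses application default credentials

name = (
    "projects/my-project/locations/us-central1/"
    "persistentResources/my-persistent-resource"
)
operation = (
    aiplatform.projects()
    .locations()
    .persistentResources()
    .reboot(name=name, body={})  # RebootPersistentResource takes an empty body
    .execute()
)
# reboot() returns a long-running operation; poll it through the new
# persistentResources.operations resource documented below.
print(operation["name"], operation.get("done", False))
```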
+ \ No newline at end of file diff --git a/docs/dyn/aiplatform_v1.projects.locations.persistentResources.operations.html b/docs/dyn/aiplatform_v1.projects.locations.persistentResources.operations.html new file mode 100644 index 0000000000..39c1167b1d --- /dev/null +++ b/docs/dyn/aiplatform_v1.projects.locations.persistentResources.operations.html @@ -0,0 +1,268 @@ + + + +

Vertex AI API . projects . locations . persistentResources . operations

+

Instance Methods

+

+ cancel(name, x__xgafv=None)

+

Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.

+

+ close()

+

Close httplib2 connections.

+

+ delete(name, x__xgafv=None)

+

Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`.

+

+ get(name, x__xgafv=None)

+

Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.

+

+ list(name, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.

+

+ list_next()

+

Retrieves the next page of results.

+

+ wait(name, timeout=None, x__xgafv=None)

+

Waits until the specified long-running operation is done or reaches at most a specified timeout, returning the latest state. If the operation is already done, the latest state is immediately returned. If the timeout specified is greater than the default HTTP/RPC timeout, the HTTP/RPC timeout is used. If the server does not support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Note that this method is on a best-effort basis. It may return the latest state before the specified timeout (including immediately), meaning even an immediate response is no guarantee that the operation is done.

+

Method Details

+
+ cancel(name, x__xgafv=None) +
Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.
+
+Args:
+  name: string, The name of the operation resource to be cancelled. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
+ +
+ close() +
Close httplib2 connections.
+
+ +
+ delete(name, x__xgafv=None) +
Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`.
+
+Args:
+  name: string, The name of the operation resource to be deleted. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
+}
+
+ +
+ get(name, x__xgafv=None) +
Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
+
+Args:
+  name: string, The name of the operation resource. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+ +
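A hedged sketch of polling a persistent-resource operation with the get method above until it is done. The operation name is a placeholder (for example, the name returned by the reboot call earlier in this change), and the 10-second polling interval is arbitrary.

```python
import time

from googleapiclient.discovery import build

aiplatform = build("aiplatform", "v1")
ops = aiplatform.projects().locations().persistentResources().operations()

operation_name = (
    "projects/my-project/locations/us-central1/"
    "persistentResources/my-persistent-resource/operations/1234567890"
)

while True:
    op = ops.get(name=operation_name).execute()
    if op.get("done"):
        break
    time.sleep(10)  # arbitrary polling interval

if "error" in op:
    # The error follows the google.rpc.Status shape shown above.
    raise RuntimeError(op["error"].get("message", "operation failed"))
print(op.get("response", {}))
```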
+ list(name, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.
+
+Args:
+  name: string, The name of the operation's parent resource. (required)
+  filter: string, The standard list filter.
+  pageSize: integer, The standard list page size.
+  pageToken: string, The standard list page token.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # The response message for Operations.ListOperations.
+  "nextPageToken": "A String", # The standard List next-page token.
+  "operations": [ # A list of operations that matches the specified filter in the request.
+    { # This resource represents a long-running operation that is the result of a network API call.
+      "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+      "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+        "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+        "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+          {
+            "a_key": "", # Properties of the object. Contains field @type with type URL.
+          },
+        ],
+        "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+      },
+      "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+      "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+      "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    },
+  ],
+}
+
+ +
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
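A minimal sketch of paging through operations with list() and list_next() as documented above. The parent resource name is a placeholder; list_next() returns None once the collection is exhausted.

```python
from googleapiclient.discovery import build

aiplatform = build("aiplatform", "v1")
ops = aiplatform.projects().locations().persistentResources().operations()

request = ops.list(
    name=(
        "projects/my-project/locations/us-central1/"
        "persistentResources/my-persistent-resource"
    ),
    pageSize=100,
)
while request is not None:
    response = request.execute()
    for op in response.get("operations", []):
        print(op["name"], op.get("done", False))
    # Build the request for the next page, or None if there are no more items.
    request = ops.list_next(previous_request=request, previous_response=response)
```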
+ +
+ wait(name, timeout=None, x__xgafv=None) +
Waits until the specified long-running operation is done or reaches at most a specified timeout, returning the latest state. If the operation is already done, the latest state is immediately returned. If the timeout specified is greater than the default HTTP/RPC timeout, the HTTP/RPC timeout is used. If the server does not support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Note that this method is on a best-effort basis. It may return the latest state before the specified timeout (including immediately), meaning even an immediate response is no guarantee that the operation is done.
+
+Args:
+  name: string, The name of the operation resource to wait on. (required)
+  timeout: string, The maximum duration to wait before timing out. If left blank, the wait will be at most the time permitted by the underlying HTTP/RPC protocol. If RPC context deadline is also specified, the shorter one will be used.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+ + \ No newline at end of file diff --git a/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html b/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html index 1e2a359dcc..8a9ab3fc73 100644 --- a/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html +++ b/docs/dyn/aiplatform_v1.projects.locations.publishers.models.html @@ -269,40 +269,6 @@

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -310,20 +276,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. The type of the data. 
}, }, @@ -334,7 +311,7 @@
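To illustrate the Schema fields added in this hunk (`title`, `pattern`, `minLength`, `minimum`/`maximum`, `format`, `default`), here is a hedged sketch of a function declaration that could be placed in the `tools` list of a request body like the one above. The function itself and its parameter names are invented for illustration; only the field names and value kinds come from the listing.

```python
# Hedged sketch; "get_forecast" and its parameters are illustrative only.
get_forecast_tool = {
    "functionDeclarations": [
        {
            "name": "get_forecast",
            "description": "Look up a weather forecast for a city.",
            "parameters": {
                "type": "OBJECT",
                "title": "GetForecastParams",
                "properties": {
                    "city": {
                        "type": "STRING",
                        "description": "City name.",
                        "pattern": "^[A-Za-z ,.'-]+$",
                        "minLength": "1",  # string-encoded, as in the listing
                    },
                    "days": {
                        "type": "INTEGER",
                        "format": "int32",
                        "minimum": 1,
                        "maximum": 14,
                        "default": 3,
                    },
                },
                "required": ["city"],
            },
        }
    ]
}
```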

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, @@ -781,40 +758,6 @@

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -822,20 +765,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. The type of the data. 
}, }, @@ -846,7 +800,7 @@

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, diff --git a/docs/dyn/aiplatform_v1.publishers.models.html b/docs/dyn/aiplatform_v1.publishers.models.html index 7727c7007e..092ffda7e9 100644 --- a/docs/dyn/aiplatform_v1.publishers.models.html +++ b/docs/dyn/aiplatform_v1.publishers.models.html @@ -217,6 +217,87 @@

Method Details

"A String", ], }, + "multiDeployVertex": { # Multiple setups to deploy the PublisherModel. # Optional. Multiple setups to deploy the PublisherModel to Vertex Endpoint. + "multiDeployVertex": [ # Optional. One click deployment configurations. + { # Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests. + "artifactUri": "A String", # Optional. The path to the directory containing the Model artifact and any of its supporting files. + "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. + "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number. + "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error. + }, + "containerSpec": { # Specification of a container for serving predictions. Some fields in this message correspond to fields in the [Kubernetes Container v1 core specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). # Optional. The specification of the container that is to be used when deploying this Model in Vertex AI. Not present for Large Models. + "args": [ # Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd). Specify this field as an array of executable and arguments, similar to a Docker `CMD`'s "default parameters" form. If you don't specify this field but do specify the command field, then the command from the `command` field runs without any additional arguments. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). If you don't specify this field and don't specify the `command` field, then the container's [`ENTRYPOINT`](https://docs.docker.com/engine/reference/builder/#cmd) and `CMD` determine what runs based on their default behavior. See the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. 
You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `args` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + "A String", + ], + "command": [ # Immutable. Specifies the command that runs when the container starts. This overrides the container's [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint). Specify this field as an array of executable and arguments, similar to a Docker `ENTRYPOINT`'s "exec" form, not its "shell" form. If you do not specify this field, then the container's `ENTRYPOINT` runs, in conjunction with the args field or the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd), if either exists. If this field is not specified and the container does not have an `ENTRYPOINT`, then refer to the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). If you specify this field, then you can also specify the `args` field to provide additional arguments for this command. However, if you specify this field, then the container's `CMD` is ignored. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `command` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + "A String", + ], + "deploymentTimeout": "A String", # Immutable. Deployment timeout. Limit for deployment timeout is 2 hours. + "env": [ # Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable `VAR_2` to have the value `foo bar`: ```json [ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ] ``` If you switch the order of the variables in the example, then the expansion does not occur. 
This field corresponds to the `env` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + { # Represents an environment variable present in a Container or Python Module. + "name": "A String", # Required. Name of the environment variable. Must be a valid C identifier. + "value": "A String", # Required. Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not. + }, + ], + "grpcPorts": [ # Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, gRPC requests to the container will be disabled. Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers v1 core API. + { # Represents a network port in a container. + "containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive. + }, + ], + "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. + "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + "A String", + ], + }, + "periodSeconds": 42, # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'. + "timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'. + }, + "healthRoute": "A String", # Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about [health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#health). For example, if you set this field to `/bar`, then Vertex AI intermittently sends a GET request to the `/bar` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. 
If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/ DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) + "imageUri": "A String", # Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the [container publishing requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#publishing), including permissions requirements for the Vertex AI Service Agent. The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used. To learn about the requirements for the Docker image itself, see [Custom container requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#). You can use the URI to one of Vertex AI's [pre-built container images for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers) in this field. + "ports": [ # Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends [liveness and health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#liveness) to this port. If you do not specify this field, it defaults to following value: ```json [ { "containerPort": 8080 } ] ``` Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + { # Represents a network port in a container. + "containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive. + }, + ], + "predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. 
(Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) + "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. + "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. + "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + "A String", + ], + }, + "periodSeconds": 42, # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'. + "timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'. + }, + }, + "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration. + "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`. + { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count. + "metricName": "A String", # Required. The resource metric name. 
Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` + "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided. + }, + ], + "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction. + "acceleratorCount": 42, # The number of accelerators to attach to the machine. + "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. + "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. + "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). + }, + "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). + "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. + }, + "largeModelReference": { # Contains information about the Large Model. # Optional. Large model reference. When this is set, model_artifact_spec is not needed. + "name": "A String", # Required. The unique name of the large Foundation or pre-built model. Like "chat-bison", "text-bison". Or model name with version ID, like "chat-bison@001", "text-bison@005", etc. + }, + "modelDisplayName": "A String", # Optional. Default model display name. + "publicArtifactUri": "A String", # Optional. The signed URI for ephemeral Cloud Storage access to model artifact. + "sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "title": "A String", # Required. The title of the regional resource reference. 
+ }, + ], + }, "openEvaluationPipeline": { # The regional resource name or the URI. Key is region, e.g., us-central1, europe-west2, global, etc.. # Optional. Open evaluation pipeline of the PublisherModel. "references": { # Required. "a_key": { # Reference to a resource. diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html b/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html index a209ceabce..484b65d6ef 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.endpoints.html @@ -1254,40 +1254,6 @@
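Pulling together the container and resource messages above, a rough sketch: environment variables may reference earlier entries via the `$(VAR_NAME)` syntax, and an `autoscalingMetricSpecs` entry can raise the default 60% CPU utilization target. The image URI, machine type and replica counts below are illustrative placeholders, not API defaults.

```python
# Illustrative values only; field names follow the reference above.
container_spec = {
    "imageUri": "us-docker.pkg.dev/my-project/my-repo/my-server:latest",
    "env": [
        {"name": "VAR_1", "value": "foo"},
        # Later entries may reference earlier ones; VAR_2 expands to "foo bar".
        {"name": "VAR_2", "value": "$(VAR_1) bar"},
    ],
    "ports": [{"containerPort": 8080}],
}

dedicated_resources = {
    "machineSpec": {"machineType": "n1-standard-4"},
    "minReplicaCount": 1,
    "maxReplicaCount": 3,
    "autoscalingMetricSpecs": [
        {
            # Override the default 60% CPU utilization target with 80%,
            # as described in the autoscaling_metric_specs notes above.
            "metricName": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
            "target": 80,
        },
    ],
}
```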

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -1295,20 +1261,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -1319,7 +1296,7 @@

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, @@ -1678,7 +1655,7 @@

Method Details

Args: parent: string, Required. The resource name of the Location from which to list the Endpoints. Format: `projects/{project}/locations/{location}` (required) - filter: string, Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports = and !=. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports = and, != * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:* or labels:key - key existence * A key including a space must be quoted. `labels."a key"`. * `base_model_name` only supports = Some examples: * `endpoint=1` * `displayName="myDisplayName"` * `labels.myKey="myValue"` * `baseModelName="text-bison"` + filter: string, Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports `=` and `!=`. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports `=` and `!=`. * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:*` or `labels:key` - key existence * A key including a space must be quoted. `labels."a key"`. * `base_model_name` only supports `=`. Some examples: * `endpoint=1` * `displayName="myDisplayName"` * `labels.myKey="myValue"` * `baseModelName="text-bison"` pageSize: integer, Optional. The standard list page size. pageToken: string, Optional. The standard list page token. Typically obtained via ListEndpointsResponse.next_page_token of the previous EndpointService.ListEndpoints call. readMask: string, Optional. Mask specifying which fields to read. @@ -2848,40 +2825,6 @@
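Taking the filter grammar above literally, a small usage sketch; the project and location are placeholders, and the client is assumed to be built with Application Default Credentials configured.

```python
from googleapiclient.discovery import build

# Application Default Credentials are assumed to be available.
service = build("aiplatform", "v1beta1")

# Placeholder parent; the filter string is one of the documented examples.
request = service.projects().locations().endpoints().list(
    parent="projects/my-project/locations/us-central1",
    filter='labels.myKey="myValue"',
)
response = request.execute()

# The response field name "endpoints" is assumed; paging uses
# ListEndpointsResponse.next_page_token as noted above.
for endpoint in response.get("endpoints", []):
    print(endpoint["name"])
```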

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -2889,20 +2832,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -2913,7 +2867,7 @@

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html b/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html index e6c65f92b8..d2b9d90d72 100644 --- a/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html +++ b/docs/dyn/aiplatform_v1beta1.projects.locations.extensions.html @@ -108,6 +108,9 @@

Instance Methods

patch(name, body=None, updateMask=None, x__xgafv=None)

Updates an Extension.

+ query(name, body=None, x__xgafv=None)

+ Queries an extension with a default controller.


Method Details

close() @@ -159,31 +162,32 @@

Method Details

The object takes the form of: { # Request message for ExtensionExecutionService.ExecuteExtension. - "operationId": "A String", # Required. The operation to be executed in this extension as defined in ExtensionOperation.operation_id. + "operationId": "A String", # Required. The desired ID of the operation to be executed in this extension as defined in ExtensionOperation.operation_id. "operationParams": { # Optional. Request parameters that will be used for executing this operation. The struct should be in a form of map with param name as the key and actual param value as the value. E.g. If this operation requires a param "name" to be set to "abc". you can set this to something like {"name": "abc"}. "a_key": "", # Properties of the object. }, "runtimeAuthConfig": { # Auth configuration to run the extension. # Optional. Auth config provided at runtime to override the default value in Extension.manifest.auth_config. The AuthConfig.auth_type should match the value in Extension.manifest.auth_config. "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth. - "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` + "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource. "httpElementLocation": "A String", # Required. The location of the API key. "name": "A String", # Required. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name. }, "authType": "A String", # Type of auth scheme. "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth. - "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If it is not specified, the Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used. - If the service account is provided, the service account should grant Vertex AI Extension Service Agent `iam.serviceAccounts.getAccessToken` permission. + "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension. }, "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth. - "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` + "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. 
Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource. }, "noAuth": { # Empty message, used to indicate no authentication for an endpoint. # Config for no auth. }, "oauthConfig": { # Config for user oauth. # Config for user oauth. - "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time. - "serviceAccount": "A String", # The service account that the extension execution service will use to query extension. Used for generating OAuth token on behalf of provided service account. - If the service account is provided, the service account should grant Vertex AI Service Agent `iam.serviceAccounts.getAccessToken` permission. + "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time. + "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account. }, "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth. - "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time. + "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time. + "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). }, }, } @@ -198,9 +202,6 @@
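A hedged sketch of the request above: only `operationId` and `operationParams` are set, so the extension's default `Extension.manifest.auth_config` applies. The operation ID, parameter, project and extension IDs are illustrative assumptions, and the call pattern assumes the `execute` method this file documents.

```python
from googleapiclient.discovery import build

# Application Default Credentials are assumed to be available.
service = build("aiplatform", "v1beta1")

# Hypothetical operation and parameter; the structure follows the request body above.
body = {
    "operationId": "say_hello",          # must match an ExtensionOperation.operation_id
    "operationParams": {"name": "abc"},  # e.g. a required "name" param set to "abc"
}

request = service.projects().locations().extensions().execute(
    name="projects/my-project/locations/us-central1/extensions/1234567890",
    body=body,
)
result = request.execute()  # issues the HTTP call

# The response surfaces the extension output in "content"; the deprecated
# "output" map is no longer returned.
print(result.get("content"))
```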

Method Details

{ # Response message for ExtensionExecutionService.ExecuteExtension. "content": "A String", # Response content from the extension. The content should be conformant to the response.content schema in the extension's manifest/OpenAPI spec. - "output": { # Output from the extension. The output should be conformant to the extension's manifest/OpenAPI spec. The output can contain values for keys like "content", "headers", etc. This field is deprecated, please use content field below for the extension execution result. - "a_key": "", # Properties of the object. - }, }
@@ -229,20 +230,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -256,25 +268,26 @@

Method Details

}, "authConfig": { # Auth configuration to run the extension. # Required. Immutable. Type of auth supported by this extension. "apiKeyConfig": { # Config for authentication with API key. # Config for API key auth. - "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` + "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource. "httpElementLocation": "A String", # Required. The location of the API key. "name": "A String", # Required. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name. }, "authType": "A String", # Type of auth scheme. "googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth. - "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If it is not specified, the Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used. - If the service account is provided, the service account should grant Vertex AI Extension Service Agent `iam.serviceAccounts.getAccessToken` permission. + "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension. }, "httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth. - "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` + "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource. }, "noAuth": { # Empty message, used to indicate no authentication for an endpoint. # Config for no auth. }, "oauthConfig": { # Config for user oauth. # Config for user oauth. - "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time. - "serviceAccount": "A String", # The service account that the extension execution service will use to query extension. Used for generating OAuth token on behalf of provided service account. - If the service account is provided, the service account should grant Vertex AI Service Agent `iam.serviceAccounts.getAccessToken` permission. 
+ "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time. + "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account. }, "oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth. - "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time. + "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time. + "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents). }, }, "description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning. @@ -323,20 +336,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -350,25 +374,26 @@

Method Details

},
"authConfig": { # Auth configuration to run the extension. # Required. Immutable. Type of auth supported by this extension.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
- "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"httpElementLocation": "A String", # Required. The location of the API key.
"name": "A String", # Required. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
- "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If it is not specified, the Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used. - If the service account is provided, the service account should grant Vertex AI Extension Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
- "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"noAuth": { # Empty message, used to indicate no authentication for an endpoint. # Config for no auth.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
- "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
- "serviceAccount": "A String", # The service account that the extension execution service will use to query extension. Used for generating OAuth token on behalf of provided service account. - If the service account is provided, the service account should grant Vertex AI Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
- "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
+ "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning.
@@ -456,20 +481,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -483,25 +519,26 @@

Method Details

},
"authConfig": { # Auth configuration to run the extension. # Required. Immutable. Type of auth supported by this extension.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
- "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"httpElementLocation": "A String", # Required. The location of the API key.
"name": "A String", # Required. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
- "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If it is not specified, the Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used. - If the service account is provided, the service account should grant Vertex AI Extension Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
- "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"noAuth": { # Empty message, used to indicate no authentication for an endpoint. # Config for no auth.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
- "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
- "serviceAccount": "A String", # The service account that the extension execution service will use to query extension. Used for generating OAuth token on behalf of provided service account. - If the service account is provided, the service account should grant Vertex AI Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
- "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
+ "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning.
@@ -567,20 +604,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -594,25 +642,26 @@

Method Details

},
"authConfig": { # Auth configuration to run the extension. # Required. Immutable. Type of auth supported by this extension.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
- "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"httpElementLocation": "A String", # Required. The location of the API key.
"name": "A String", # Required. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
- "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If it is not specified, the Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used. - If the service account is provided, the service account should grant Vertex AI Extension Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
- "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"noAuth": { # Empty message, used to indicate no authentication for an endpoint. # Config for no auth.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
- "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
- "serviceAccount": "A String", # The service account that the extension execution service will use to query extension. Used for generating OAuth token on behalf of provided service account. - If the service account is provided, the service account should grant Vertex AI Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
- "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
+ "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning.
@@ -640,7 +689,7 @@

Method Details

"updateTime": "A String", # Output only. Timestamp when this Extension was most recently updated. } - updateMask: string, Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description` + updateMask: string, Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description` * `tool_use_examples` x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -660,20 +709,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -687,25 +747,26 @@

Method Details

},
"authConfig": { # Auth configuration to run the extension. # Required. Immutable. Type of auth supported by this extension.
"apiKeyConfig": { # Config for authentication with API key. # Config for API key auth.
- "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "apiKeySecret": "A String", # Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
"httpElementLocation": "A String", # Required. The location of the API key.
"name": "A String", # Required. The parameter name of the API key. E.g. If the API request is "https://example.com/act?api_key=", "api_key" would be the parameter name.
},
"authType": "A String", # Type of auth scheme.
"googleServiceAccountConfig": { # Config for Google Service Account Authentication. # Config for Google Service Account auth.
- "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If it is not specified, the Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used. - If the service account is provided, the service account should grant Vertex AI Extension Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "serviceAccount": "A String", # Optional. The service account that the extension execution service runs as. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.
},
"httpBasicAuthConfig": { # Config for HTTP Basic Authentication. # Config for HTTP Basic auth.
- "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}`
+ "credentialSecret": "A String", # Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.
},
"noAuth": { # Empty message, used to indicate no authentication for an endpoint. # Config for no auth.
},
"oauthConfig": { # Config for user oauth. # Config for user oauth.
- "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
- "serviceAccount": "A String", # The service account that the extension execution service will use to query extension. Used for generating OAuth token on behalf of provided service account. - If the service account is provided, the service account should grant Vertex AI Service Agent `iam.serviceAccounts.getAccessToken` permission.
+ "accessToken": "A String", # Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.
},
"oidcConfig": { # Config for user OIDC auth. # Config for user OIDC auth.
- "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.
+ "idToken": "A String", # OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.
+ "serviceAccount": "A String", # The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. - If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).
},
},
"description": "A String", # Required. The natural language description shown to the LLM. It should describe the usage of the extension, and is essential for the LLM to perform reasoning.
@@ -734,4 +795,135 @@

Method Details

}
+
+ query(name, body=None, x__xgafv=None)
+
Queries an extension with a default controller.
+
+Args:
+  name: string, Required. Name (identifier) of the extension; Format: `projects/{project}/locations/{location}/extensions/{extension}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for ExtensionExecutionService.QueryExtension.
+  "contents": [ # Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.
+    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+          "fileData": { # URI based data. # Optional. URI based data.
+            "fileUri": "A String", # Required. URI.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data.
+            "data": "A String", # Required. Raw bytes for media formats.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "text": "A String", # Optional. Text part (can be code).
+          "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+            "endOffset": "A String", # Optional. The end offset of the video.
+            "startOffset": "A String", # Optional. The start offset of the video.
+          },
+        },
+      ],
+      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+    },
+  ],
+  "query": { # User provided query message. # Required. User provided input query message.
+    "query": "A String", # Required. The query from user.
+  },
+  "useFunctionCall": True or False, # Optional. Experiment control on whether to use function call.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response message for ExtensionExecutionService.QueryExtension.
+  "failureMessage": "A String", # Failure message if any.
+  "metadata": { # Metadata for response # Metadata related to the query execution.
+    "checkpoint": { # Placeholder for all checkpoint related data. Any data needed to restore a request and more go/vertex-extension-query-operation # Optional. Checkpoint to restore a request
+      "content": "A String", # Required. encoded checkpoint
+    },
+    "executionPlan": { # Execution plan for a request. # Optional. Execution plan for the request.
+      "steps": [ # Required. Sequence of steps to execute a request.
+        { # Single step in query execution plan.
+          "extensionExecution": { # Extension execution step. # Extension execution step.
+            "extension": "A String", # Required. extension resource name
+            "operationId": "A String", # Required. the operation id
+          },
+          "respondToUser": { # Respond to user step. # Respond to user step.
+          },
+        },
+      ],
+    },
+    "flowOutputs": { # To surface the v2 flow output.
+      "a_key": "", # Properties of the object.
+    },
+  },
+  "queryResponseMetadata": {
+    "steps": [ # ReAgent execution steps.
+      { # ReAgent execution steps.
+        "error": "A String", # Error messages from the extension or during response parsing.
+        "extensionInstruction": "A String", # Planner's instruction to the extension.
+        "extensionInvoked": "A String", # Planner's choice of extension to invoke.
+        "response": "A String", # Response of the extension.
+        "success": True or False, # When set to False, either the extension fails to execute or the response cannot be summarized.
+        "thought": "A String", # Planner's thought.
+      },
+    ],
+    "useCreativity": True or False, # Whether the reasoning agent used creativity (instead of extensions provided) to build the response.
+  },
+  "response": "A String", # Response to the user's query.
+  "steps": [ # Steps of extension or LLM interaction, can contain function call, function response, or text response. The last step contains the final response to the query.
+    { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.
+      "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types.
+        { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.
+          "fileData": { # URI based data. # Optional. URI based data.
+            "fileUri": "A String", # Required. URI.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values.
+            "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details.
+              "a_key": "", # Properties of the object.
+            },
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name].
+          },
+          "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model.
+            "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name].
+            "response": { # Required. The function response in JSON object format.
+              "a_key": "", # Properties of the object.
+            },
+          },
+          "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data.
+            "data": "A String", # Required. Raw bytes for media formats.
+            "mimeType": "A String", # Required. The IANA standard MIME type of the source data.
+          },
+          "text": "A String", # Optional. Text part (can be code).
+          "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data.
+            "endOffset": "A String", # Optional. The end offset of the video.
+            "startOffset": "A String", # Optional. The start offset of the video.
+          },
+        },
+      ],
+      "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset.
+    },
+  ],
+}
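A minimal sketch of invoking the query method documented above with the discovery-based Python client follows. The extension resource name, the question text, and the regional endpoint are placeholder assumptions made for illustration; the request body simply mirrors the `contents` and `query` fields shown above, and application default credentials are assumed.

# Hypothetical sketch: the extension name and the question are placeholders.
from googleapiclient.discovery import build

service = build(
    "aiplatform",
    "v1beta1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},  # assumed region
)

extension_name = "projects/my-project/locations/us-central1/extensions/my-extension"

request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize the latest sales report."}]},
    ],
    "query": {"query": "Summarize the latest sales report."},
}

result = (
    service.projects()
    .locations()
    .extensions()
    .query(name=extension_name, body=request_body)
    .execute()
)

# The final answer plus any intermediate extension/LLM steps.
print(result.get("response"))
for step in result.get("steps", []):
    print(step.get("role"), [part.get("text") for part in step.get("parts", [])])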
+
+
\ No newline at end of file
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html
index 4ecfdf2fda..9f525353a9 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.featureOnlineStores.featureViews.html
@@ -117,6 +117,9 @@

Instance Methods

setIamPolicy(resource, body=None, x__xgafv=None)

Sets the access control policy on the specified resource. Replaces any existing policy. Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.

+

+ streamingFetchFeatureValues(featureView, body=None, x__xgafv=None)

+

Bidirectional streaming RPC to fetch feature values under a FeatureView. Requests may not have a one-to-one mapping to responses and responses may be returned out-of-order to reduce latency.

sync(featureView, body=None, x__xgafv=None)

Triggers on-demand sync for the FeatureView.

@@ -282,6 +285,14 @@

Method Details

An object of the form:
{ # Response message for FeatureOnlineStoreService.FetchFeatureValues
+ "dataKey": { # Lookup key for a feature view. # The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs.
+ "compositeKey": { # ID that is comprised from several parts (columns). # The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec.
+ "parts": [ # Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order.
+ "A String",
+ ],
+ },
+ "key": "A String", # String key to use for lookup.
+ },
"keyValues": { # Response structure in the format of key (feature name) and (feature) value pair. # Feature values in KeyValue format.
"features": [ # List of feature names and values.
{ # Feature name & value pair.
@@ -643,6 +654,14 @@

Method Details

"distance": 3.14, # The distance between the neighbor and the query vector. "entityId": "A String", # The id of the similar entity. "entityKeyValues": { # Response message for FeatureOnlineStoreService.FetchFeatureValues # The attributes of the neighbor, e.g. filters, crowding and metadata Note that full entities are returned only when "return_full_entity" is set to true. Otherwise, only the "entity_id" and "distance" fields are populated. + "dataKey": { # Lookup key for a feature view. # The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs. + "compositeKey": { # ID that is comprised from several parts (columns). # The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec. + "parts": [ # Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order. + "A String", + ], + }, + "key": "A String", # String key to use for lookup. + }, "keyValues": { # Response structure in the format of key (feature name) and (feature) value pair. # Feature values in KeyValue format. "features": [ # List of feature names and values. { # Feature name & value pair. @@ -748,6 +767,112 @@

Method Details

}
+
+ streamingFetchFeatureValues(featureView, body=None, x__xgafv=None)
+
Bidirectional streaming RPC to fetch feature values under a FeatureView. Requests may not have a one-to-one mapping to responses and responses may be returned out-of-order to reduce latency.
+
+Args:
+  featureView: string, Required. FeatureView resource format `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}/featureViews/{featureView}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for FeatureOnlineStoreService.StreamingFetchFeatureValues. For the entities requested, all features under the requested feature view will be returned.
+  "dataFormat": "A String", # Specify response data format. If not set, KeyValue format will be used.
+  "dataKeys": [
+    { # Lookup key for a feature view.
+      "compositeKey": { # ID that is comprised from several parts (columns). # The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec.
+        "parts": [ # Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order.
+          "A String",
+        ],
+      },
+      "key": "A String", # String key to use for lookup.
+    },
+  ],
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response message for FeatureOnlineStoreService.StreamingFetchFeatureValues.
+  "data": [
+    { # Response message for FeatureOnlineStoreService.FetchFeatureValues
+      "dataKey": { # Lookup key for a feature view. # The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs.
+        "compositeKey": { # ID that is comprised from several parts (columns). # The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec.
+          "parts": [ # Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order.
+            "A String",
+          ],
+        },
+        "key": "A String", # String key to use for lookup.
+      },
+      "keyValues": { # Response structure in the format of key (feature name) and (feature) value pair. # Feature values in KeyValue format.
+        "features": [ # List of feature names and values.
+          { # Feature name & value pair.
+            "name": "A String", # Feature short name.
+            "value": { # Value for a feature. # Feature value.
+              "boolArrayValue": { # A list of boolean values. # A list of bool type feature value.
+                "values": [ # A list of bool values.
+                  True or False,
+                ],
+              },
+              "boolValue": True or False, # Bool type feature value.
+              "bytesValue": "A String", # Bytes feature value.
+              "doubleArrayValue": { # A list of double values. # A list of double type feature value.
+                "values": [ # A list of double values.
+                  3.14,
+                ],
+              },
+              "doubleValue": 3.14, # Double type feature value.
+              "int64ArrayValue": { # A list of int64 values. # A list of int64 type feature value.
+                "values": [ # A list of int64 values.
+                  "A String",
+                ],
+              },
+              "int64Value": "A String", # Int64 feature value.
+              "metadata": { # Metadata of feature value. # Metadata of feature value.
+                "generateTime": "A String", # Feature generation timestamp. Typically, it is provided by user at feature ingestion time. If not, feature store will use the system timestamp when the data is ingested into feature store. For streaming ingestion, the time, aligned by days, must be no older than five years (1825 days) and no later than one year (366 days) in the future.
+              },
+              "stringArrayValue": { # A list of string values. # A list of string type feature value.
+                "values": [ # A list of string values.
+                  "A String",
+                ],
+              },
+              "stringValue": "A String", # String feature value.
+            },
+          },
+        ],
+      },
+      "protoStruct": { # Feature values in proto Struct format.
+        "a_key": "", # Properties of the object.
+      },
+    },
+  ],
+  "dataKeysWithError": [
+    { # Lookup key for a feature view.
+      "compositeKey": { # ID that is comprised from several parts (columns). # The actual Entity ID will be composed from this struct. This should match with the way ID is defined in the FeatureView spec.
+        "parts": [ # Parts to construct Entity ID. Should match with the same ID columns as defined in FeatureView in the same order.
+          "A String",
+        ],
+      },
+      "key": "A String", # String key to use for lookup.
+    },
+  ],
+  "status": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Response status. If OK, then StreamingFetchFeatureValuesResponse.data will be populated. Otherwise StreamingFetchFeatureValuesResponse.data_keys_with_error will be populated with the appropriate data keys. The error only applies to the listed data keys - the stream will remain open for further FeatureOnlineStoreService.StreamingFetchFeatureValuesRequest requests.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+}
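The sketch below shows one plausible way to call streamingFetchFeatureValues from the discovery-based Python client, where the call is issued as a single request/response rather than a managed bidirectional stream. The feature view path and the entity keys are placeholders, the regional endpoint and application default credentials are assumptions, and dataFormat is omitted so the documented KeyValue default applies.

# Hypothetical sketch: the feature view path and entity keys are placeholders.
from googleapiclient.discovery import build

service = build(
    "aiplatform",
    "v1beta1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},  # assumed region
)

feature_view = (
    "projects/my-project/locations/us-central1/"
    "featureOnlineStores/my-store/featureViews/my-view"
)

body = {
    "dataKeys": [
        {"key": "user_1234"},  # simple string key
        {"compositeKey": {"parts": ["user_1234", "2024-01-01"]}},  # multi-column key
    ],
}

response = (
    service.projects()
    .locations()
    .featureOnlineStores()
    .featureViews()
    .streamingFetchFeatureValues(featureView=feature_view, body=body)
    .execute()
)

# Successful lookups arrive in `data`; keys that failed are echoed in `dataKeysWithError`.
for entity in response.get("data", []):
    print(entity["dataKey"], entity.get("keyValues"))
for bad_key in response.get("dataKeysWithError", []):
    print("failed:", bad_key)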
+
+
sync(featureView, body=None, x__xgafv=None)
Triggers on-demand sync for the FeatureView.
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.persistentResources.html b/docs/dyn/aiplatform_v1beta1.projects.locations.persistentResources.html
index 9632e09bb8..3238f16a9d 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.persistentResources.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.persistentResources.html
@@ -100,6 +100,9 @@ 

Instance Methods

patch(name, body=None, updateMask=None, x__xgafv=None)

Updates a PersistentResource.

+

+ reboot(name, body=None, x__xgafv=None)

+

Reboots a PersistentResource.

Method Details

close()
@@ -554,4 +557,45 @@

Method Details

}
+
+ reboot(name, body=None, x__xgafv=None) +
Reboots a PersistentResource.
+
+Args:
+  name: string, Required. The name of the PersistentResource resource. Format: `projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Request message for PersistentResourceService.RebootPersistentResource.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
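A rough usage sketch for the reboot method documented above, polling the returned long-running Operation until it settles. The operations().get() chain used for polling is an assumption about this client, and the resource name is a placeholder.

import time

from googleapiclient.discovery import build

aiplatform = build("aiplatform", "v1beta1")

name = (
    "projects/my-project/locations/us-central1/"
    "persistentResources/my-persistent-resource"  # placeholder resource name
)
op = (
    aiplatform.projects()
    .locations()
    .persistentResources()
    .reboot(name=name, body={})  # RebootPersistentResource takes an empty body
    .execute()
)

# Poll until done; a failed reboot surfaces a google.rpc.Status under "error".
while not op.get("done", False):
    time.sleep(10)
    op = aiplatform.projects().locations().operations().get(name=op["name"]).execute()

if "error" in op:
    raise RuntimeError(op["error"].get("message", "reboot failed"))
print("reboot finished:", op.get("response", {}))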
+
\ No newline at end of file
diff --git a/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html b/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html
index d923bb8e7b..eaf2899f93 100644
--- a/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html
+++ b/docs/dyn/aiplatform_v1beta1.projects.locations.publishers.models.html
@@ -269,40 +269,6 @@

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -310,20 +276,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -334,7 +311,7 @@

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, @@ -816,40 +793,6 @@

Method Details

"threshold": "A String", # Required. The harm block threshold. }, ], - "systemInstructions": [ # Optional. The user provided system instructions for the model. - { # The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn. - "parts": [ # Required. Ordered `Parts` that constitute a single message. Parts may have different IANA MIME types. - { # A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes. - "fileData": { # URI based data. # Optional. URI based data. - "fileUri": "A String", # Required. URI. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "functionCall": { # A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values. # Optional. A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] with the parameters and their values. - "args": { # Optional. Required. The function parameters and values in JSON object format. See [FunctionDeclaration.parameters] for parameter details. - "a_key": "", # Properties of the object. - }, - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name]. - }, - "functionResponse": { # The result output from a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function is used as context to the model. This should contain the result of a [FunctionCall] made based on model prediction. # Optional. The result output of a [FunctionCall] that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function call. It is used as context to the model. - "name": "A String", # Required. The name of the function to call. Matches [FunctionDeclaration.name] and [FunctionCall.name]. - "response": { # Required. The function response in JSON object format. - "a_key": "", # Properties of the object. - }, - }, - "inlineData": { # Raw media bytes. Text should not be sent as raw bytes, use the 'text' field. # Optional. Inlined bytes data. - "data": "A String", # Required. Raw bytes for media formats. - "mimeType": "A String", # Required. The IANA standard MIME type of the source data. - }, - "text": "A String", # Optional. Text part (can be code). - "videoMetadata": { # Metadata describes the input video content. # Optional. Video metadata. The metadata should only be specified while the video data is presented in inline_data or file_data. - "endOffset": "A String", # Optional. The end offset of the video. - "startOffset": "A String", # Optional. The start offset of the video. - }, - }, - ], - "role": "A String", # Optional. The producer of the content. Must be either 'user' or 'model'. Useful to set for multi-turn conversations, otherwise can be left blank or unset. - }, - ], "tools": [ # Optional. A list of `Tools` the model may use to generate the next response. 
A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. { # Tool details that the model may use to generate response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g FunctionDeclaration, Retrieval or GoogleSearchRetrieval). "functionDeclarations": [ # Optional. Function tool type. One or more function declarations to be passed to the model along with the current user query. Model may decide to call a subset of these functions by populating FunctionCall in the response. User should provide a FunctionResponse for each function call in the next turn. Based on the function responses, Model will generate the final response back to the user. Maximum 64 function declarations can be provided. @@ -857,20 +800,31 @@

Method Details

"description": "A String", # Optional. Description and purpose of the function. Model uses it to decide how and whether to call the function. "name": "A String", # Required. The name of the function to call. Must start with a letter or an underscore. Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a maximum length of 64. "parameters": { # Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed. # Optional. Describes the parameters to this function in JSON Schema Object format. Reflects the Open API 3.03 Parameter Object. string Key: the name of the parameter. Parameter names are case sensitive. Schema Value: the Schema defining the type used for the parameter. For function with no parameters, this can be left unset. Parameter names must start with a letter or an underscore and must only contain chars a-z, A-Z, 0-9, or underscores with a maximum length of 64. Example with 1 required and 1 optional parameter: type: OBJECT properties: param1: type: STRING param2: type: INTEGER required: - param1 + "default": "", # Optional. Default value of the data. "description": "A String", # Optional. The description of the data. "enum": [ # Optional. Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} "A String", ], "example": "", # Optional. Example of the object. Will only populated when the object is the root. - "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64 - "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. Schema of the elements of Type.ARRAY. + "format": "A String", # Optional. The format of the data. Supported formats: for NUMBER type: "float", "double" for INTEGER type: "int32", "int64" for STRING type: "email", "byte", etc + "items": # Object with schema name: GoogleCloudAiplatformV1beta1Schema # Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY. + "maxItems": "A String", # Optional. Maximum number of the elements for Type.ARRAY. + "maxLength": "A String", # Optional. Maximum length of the Type.STRING + "maxProperties": "A String", # Optional. Maximum number of the properties for Type.OBJECT. + "maximum": 3.14, # Optional. Maximum value of the Type.INTEGER and Type.NUMBER + "minItems": "A String", # Optional. Minimum number of the elements for Type.ARRAY. + "minLength": "A String", # Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING + "minProperties": "A String", # Optional. Minimum number of the properties for Type.OBJECT. + "minimum": 3.14, # Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER "nullable": True or False, # Optional. Indicates if the value may be null. - "properties": { # Optional. Properties of Type.OBJECT. + "pattern": "A String", # Optional. Pattern of the Type.STRING to restrict a string to a regular expression. + "properties": { # Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT. "a_key": # Object with schema name: GoogleCloudAiplatformV1beta1Schema }, "required": [ # Optional. Required properties of Type.OBJECT. "A String", ], + "title": "A String", # Optional. The title of the Schema. "type": "A String", # Optional. 
The type of the data. }, }, @@ -881,7 +835,7 @@

Method Details

"retrieval": { # Defines a retrieval tool that model can call to access external knowledge. # Optional. Retrieval tool type. System will always execute the provided retrieval tool(s) to get external knowledge to answer the prompt. Retrieval results are presented to the model for generation. "disableAttribution": True or False, # Optional. Disable using the result from this tool in detecting grounding attribution. This does not affect how the result is given to the model for generation. "vertexAiSearch": { # Retrieve from Vertex AI Search datastore for grounding. See https://cloud.google.com/vertex-ai-search-and-conversation # Set to use data source powered by Vertex AI Search. - "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<> + "datastore": "A String", # Required. Fully-qualified Vertex AI Search's datastore resource ID. Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore} }, }, }, diff --git a/docs/dyn/aiplatform_v1beta1.publishers.models.html b/docs/dyn/aiplatform_v1beta1.publishers.models.html index a5233c4bd2..85e0c78bd1 100644 --- a/docs/dyn/aiplatform_v1beta1.publishers.models.html +++ b/docs/dyn/aiplatform_v1beta1.publishers.models.html @@ -232,6 +232,87 @@

Method Details

"A String", ], }, + "multiDeployVertex": { # Multiple setups to deploy the PublisherModel. # Optional. Multiple setups to deploy the PublisherModel to Vertex Endpoint. + "multiDeployVertex": [ # Optional. One click deployment configurations. + { # Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests. + "artifactUri": "A String", # Optional. The path to the directory containing the Model artifact and any of its supporting files. + "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. + "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number. + "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error. + }, + "containerSpec": { # Specification of a container for serving predictions. Some fields in this message correspond to fields in the [Kubernetes Container v1 core specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). # Optional. The specification of the container that is to be used when deploying this Model in Vertex AI. Not present for Large Models. + "args": [ # Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd). Specify this field as an array of executable and arguments, similar to a Docker `CMD`'s "default parameters" form. If you don't specify this field but do specify the command field, then the command from the `command` field runs without any additional arguments. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). If you don't specify this field and don't specify the `command` field, then the container's [`ENTRYPOINT`](https://docs.docker.com/engine/reference/builder/#cmd) and `CMD` determine what runs based on their default behavior. See the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. 
You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `args` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + "A String", + ], + "command": [ # Immutable. Specifies the command that runs when the container starts. This overrides the container's [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint). Specify this field as an array of executable and arguments, similar to a Docker `ENTRYPOINT`'s "exec" form, not its "shell" form. If you do not specify this field, then the container's `ENTRYPOINT` runs, in conjunction with the args field or the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd), if either exists. If this field is not specified and the container does not have an `ENTRYPOINT`, then refer to the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). If you specify this field, then you can also specify the `args` field to provide additional arguments for this command. However, if you specify this field, then the container's `CMD` is ignored. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `command` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + "A String", + ], + "deploymentTimeout": "A String", # Immutable. Deployment timeout. Limit for deployment timeout is 2 hours. + "env": [ # Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable `VAR_2` to have the value `foo bar`: ```json [ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ] ``` If you switch the order of the variables in the example, then the expansion does not occur. 
This field corresponds to the `env` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + { # Represents an environment variable present in a Container or Python Module. + "name": "A String", # Required. Name of the environment variable. Must be a valid C identifier. + "value": "A String", # Required. Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not. + }, + ], + "grpcPorts": [ # Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, gRPC requests to the container will be disabled. Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers v1 core API. + { # Represents a network port in a container. + "containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive. + }, + ], + "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. + "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + "A String", + ], + }, + "periodSeconds": 42, # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'. + "timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'. + }, + "healthRoute": "A String", # Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about [health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#health). For example, if you set this field to `/bar`, then Vertex AI intermittently sends a GET request to the `/bar` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. 
If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/ DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) + "imageUri": "A String", # Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the [container publishing requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#publishing), including permissions requirements for the Vertex AI Service Agent. The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used. To learn about the requirements for the Docker image itself, see [Custom container requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#). You can use the URI to one of Vertex AI's [pre-built container images for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers) in this field. + "ports": [ # Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends [liveness and health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#liveness) to this port. If you do not specify this field, it defaults to following value: ```json [ { "containerPort": 8080 } ] ``` Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + { # Represents a network port in a container. + "containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive. + }, + ], + "predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. 
(Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) + "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. + "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. + "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + "A String", + ], + }, + "periodSeconds": 42, # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'. + "timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'. + }, + }, + "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration. + "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`. + { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count. + "metricName": "A String", # Required. The resource metric name. 
Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` + "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided. + }, + ], + "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction. + "acceleratorCount": 42, # The number of accelerators to attach to the machine. + "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. + "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. + "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). + }, + "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). + "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. + }, + "largeModelReference": { # Contains information about the Large Model. # Optional. Large model reference. When this is set, model_artifact_spec is not needed. + "name": "A String", # Required. The unique name of the large Foundation or pre-built model. Like "chat-bison", "text-bison". Or model name with version ID, like "chat-bison@001", "text-bison@005", etc. + }, + "modelDisplayName": "A String", # Optional. Default model display name. + "publicArtifactUri": "A String", # Optional. The signed URI for ephemeral Cloud Storage access to model artifact. + "sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "title": "A String", # Required. The title of the regional resource reference. 
+ }, + ], + }, "openEvaluationPipeline": { # The regional resource name or the URI. Key is region, e.g., us-central1, europe-west2, global, etc.. # Optional. Open evaluation pipeline of the PublisherModel. "references": { # Required. "a_key": { # Reference to a resource. @@ -528,6 +609,87 @@

Method Details

"A String", ], }, + "multiDeployVertex": { # Multiple setups to deploy the PublisherModel. # Optional. Multiple setups to deploy the PublisherModel to Vertex Endpoint. + "multiDeployVertex": [ # Optional. One click deployment configurations. + { # Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests. + "artifactUri": "A String", # Optional. The path to the directory containing the Model artifact and any of its supporting files. + "automaticResources": { # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. Each Model supporting these resources documents its specific guidelines. # A description of resources that to large degree are decided by Vertex AI, and require only a modest additional configuration. + "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, a no upper bound for scaling under heavy traffic will be assume, though Vertex AI may be unable to scale beyond certain replica number. + "minReplicaCount": 42, # Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error. + }, + "containerSpec": { # Specification of a container for serving predictions. Some fields in this message correspond to fields in the [Kubernetes Container v1 core specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). # Optional. The specification of the container that is to be used when deploying this Model in Vertex AI. Not present for Large Models. + "args": [ # Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd). Specify this field as an array of executable and arguments, similar to a Docker `CMD`'s "default parameters" form. If you don't specify this field but do specify the command field, then the command from the `command` field runs without any additional arguments. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). If you don't specify this field and don't specify the `command` field, then the container's [`ENTRYPOINT`](https://docs.docker.com/engine/reference/builder/#cmd) and `CMD` determine what runs based on their default behavior. See the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. 
You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `args` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + "A String", + ], + "command": [ # Immutable. Specifies the command that runs when the container starts. This overrides the container's [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint). Specify this field as an array of executable and arguments, similar to a Docker `ENTRYPOINT`'s "exec" form, not its "shell" form. If you do not specify this field, then the container's `ENTRYPOINT` runs, in conjunction with the args field or the container's [`CMD`](https://docs.docker.com/engine/reference/builder/#cmd), if either exists. If this field is not specified and the container does not have an `ENTRYPOINT`, then refer to the Docker documentation about [how `CMD` and `ENTRYPOINT` interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). If you specify this field, then you can also specify the `args` field to provide additional arguments for this command. However, if you specify this field, then the container's `CMD` is ignored. See the [Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes). In this field, you can reference [environment variables set by Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables) and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `command` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + "A String", + ], + "deploymentTimeout": "A String", # Immutable. Deployment timeout. Limit for deployment timeout is 2 hours. + "env": [ # Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable `VAR_2` to have the value `foo bar`: ```json [ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ] ``` If you switch the order of the variables in the example, then the expansion does not occur. 
This field corresponds to the `env` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + { # Represents an environment variable present in a Container or Python Module. + "name": "A String", # Required. Name of the environment variable. Must be a valid C identifier. + "value": "A String", # Required. Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not. + }, + ], + "grpcPorts": [ # Immutable. List of ports to expose from the container. Vertex AI sends gRPC prediction requests that it receives to the first port on this list. Vertex AI also sends liveness and health checks to this port. If you do not specify this field, gRPC requests to the container will be disabled. Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers v1 core API. + { # Represents a network port in a container. + "containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive. + }, + ], + "healthProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes readiness probe. + "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + "A String", + ], + }, + "periodSeconds": 42, # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'. + "timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'. + }, + "healthRoute": "A String", # Immutable. HTTP path on the container to send health checks to. Vertex AI intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about [health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#health). For example, if you set this field to `/bar`, then Vertex AI intermittently sends a GET request to the `/bar` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. 
If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/ DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. (Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) + "imageUri": "A String", # Required. Immutable. URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry or Container Registry. Learn more about the [container publishing requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#publishing), including permissions requirements for the Vertex AI Service Agent. The container image is ingested upon ModelService.UploadModel, stored internally, and this original path is afterwards not used. To learn about the requirements for the Docker image itself, see [Custom container requirements](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#). You can use the URI to one of Vertex AI's [pre-built container images for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers) in this field. + "ports": [ # Immutable. List of ports to expose from the container. Vertex AI sends any prediction requests that it receives to the first port on this list. Vertex AI also sends [liveness and health checks](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#liveness) to this port. If you do not specify this field, it defaults to following value: ```json [ { "containerPort": 8080 } ] ``` Vertex AI does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers [v1 core API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core). + { # Represents a network port in a container. + "containerPort": 42, # The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive. + }, + ], + "predictRoute": "A String", # Immutable. HTTP path on the container to send prediction requests to. Vertex AI forwards requests sent using projects.locations.endpoints.predict to this path on the container's IP address and port. Vertex AI then returns the container's response in the API response. For example, if you set this field to `/foo`, then when Vertex AI receives a prediction request, it forwards the request body in a POST request to the `/foo` path on the port of your container specified by the first value of this `ModelContainerSpec`'s ports field. If you don't specify this field, it defaults to the following value when you deploy this Model to an Endpoint: /v1/endpoints/ENDPOINT/deployedModels/DEPLOYED_MODEL:predict The placeholders in this value are replaced as follows: * ENDPOINT: The last segment (following `endpoints/`)of the Endpoint.name][] field of the Endpoint where this Model has been deployed. 
(Vertex AI makes this value available to your container code as the [`AIP_ENDPOINT_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) * DEPLOYED_MODEL: DeployedModel.id of the `DeployedModel`. (Vertex AI makes this value available to your container code as the [`AIP_DEPLOYED_MODEL_ID` environment variable](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#aip-variables).) + "sharedMemorySizeMb": "A String", # Immutable. The amount of the VM memory to reserve as the shared memory for the model in megabytes. + "startupProbe": { # Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. # Immutable. Specification for Kubernetes startup probe. + "exec": { # ExecAction specifies a command to execute. # Exec specifies the action to take. + "command": [ # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + "A String", + ], + }, + "periodSeconds": 42, # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Must be less than timeout_seconds. Maps to Kubernetes probe argument 'periodSeconds'. + "timeoutSeconds": 42, # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Must be greater or equal to period_seconds. Maps to Kubernetes probe argument 'timeoutSeconds'. + }, + }, + "dedicatedResources": { # A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration. # A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration. + "autoscalingMetricSpecs": [ # Immutable. The metric specifications that overrides a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, the autoscaling will be based on both CPU utilization and accelerator's duty cycle metrics and scale up when either metrics exceeds its target value while scale down if both metrics are under their target value. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, the autoscaling will be based on CPU utilization metric only with default target value 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`. + { # The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count. + "metricName": "A String", # Required. The resource metric name. 
Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization` + "target": 42, # The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided. + }, + ], + "machineSpec": { # Specification of a single machine. # Required. Immutable. The specification of a single machine used by the prediction. + "acceleratorCount": 42, # The number of accelerators to attach to the machine. + "acceleratorType": "A String", # Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. + "machineType": "A String", # Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required. + "tpuTopology": "A String", # Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1"). + }, + "maxReplicaCount": 42, # Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use min_replica_count as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type). + "minReplicaCount": 42, # Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. + }, + "largeModelReference": { # Contains information about the Large Model. # Optional. Large model reference. When this is set, model_artifact_spec is not needed. + "name": "A String", # Required. The unique name of the large Foundation or pre-built model. Like "chat-bison", "text-bison". Or model name with version ID, like "chat-bison@001", "text-bison@005", etc. + }, + "modelDisplayName": "A String", # Optional. Default model display name. + "publicArtifactUri": "A String", # Optional. The signed URI for ephemeral Cloud Storage access to model artifact. + "sharedResources": "A String", # The resource name of the shared DeploymentResourcePool to deploy on. Format: `projects/{project}/locations/{location}/deploymentResourcePools/{deployment_resource_pool}` + "title": "A String", # Required. The title of the regional resource reference. 
+ }, + ], + }, "openEvaluationPipeline": { # The regional resource name or the URI. Key is region, e.g., us-central1, europe-west2, global, etc.. # Optional. Open evaluation pipeline of the PublisherModel. "references": { # Required. "a_key": { # Reference to a resource. diff --git a/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html b/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html index d52e4e364c..8b8b8dc7ad 100644 --- a/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html +++ b/docs/dyn/alloydb_v1.projects.locations.clusters.instances.html @@ -154,6 +154,14 @@
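Relating to the ModelContainerSpec fields described above, a minimal illustrative dictionary follows; every value is a placeholder assumption rather than a real image, route, or environment variable, and the sketch only shows the shape of the payload.

```python
# Hypothetical ModelContainerSpec-shaped payload; every value is a placeholder.
container_spec = {
    "imageUri": "us-docker.pkg.dev/my-project/my-repo/my-server:latest",
    "ports": [{"containerPort": 8080}],  # the first port receives prediction traffic
    "predictRoute": "/foo",              # POST prediction requests are forwarded here
    "healthRoute": "/bar",               # GET health checks are sent here
    "env": [{"name": "LOG_LEVEL", "value": "debug"}],
}
```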

Method Details

"cpuCount": 42, # The number of CPU's in the VM instance. }, "name": "A String", # Output only. The name of the instance resource with the format: * projects/{project}/locations/{region}/clusters/{cluster_id}/instances/{instance_id} where the cluster and instance ID segments should satisfy the regex expression `[a-z]([a-z0-9-]{0,61}[a-z0-9])?`, e.g. 1-63 characters of lowercase letters, numbers, and dashes, starting with a letter, and ending with a letter or number. For more details see https://google.aip.dev/122. The prefix of the instance resource name is the name of the parent resource: * projects/{project}/locations/{region}/clusters/{cluster_id} + "networkConfig": { # Metadata related to instance level network configuration. # Optional. Instance level network configuration. + "authorizedExternalNetworks": [ # Optional. A list of external network authorized to access this instance. + { # AuthorizedNetwork contains metadata for an authorized network. + "cidrRange": "A String", # CIDR range for one authorzied network of the instance. + }, + ], + "enablePublicIp": True or False, # Optional. Enabling public ip for the instance. + }, "nodes": [ # Output only. List of available read-only VMs in this instance, including the standby for a PRIMARY instance. { # Details of a single node in the instance. Nodes in an AlloyDB instance are ephemereal, they can change during update, failover, autohealing and resize operations. "id": "A String", # The identifier of the VM e.g. "test-read-0601-407e52be-ms3l". @@ -162,6 +170,7 @@

Method Details

"zoneId": "A String", # The Compute Engine zone of the VM e.g. "us-central1-b". }, ], + "publicIpAddress": "A String", # Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application. "queryInsightsConfig": { # QueryInsights Instance specific configuration. # Configuration for query insights. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. The default value is 5. Any integer between 0 and 20 is considered valid. "queryStringLength": 42, # Query string length. The default value is 1024. Any integer between 256 and 4500 is considered valid. @@ -254,6 +263,14 @@

Method Details

"cpuCount": 42, # The number of CPU's in the VM instance. }, "name": "A String", # Output only. The name of the instance resource with the format: * projects/{project}/locations/{region}/clusters/{cluster_id}/instances/{instance_id} where the cluster and instance ID segments should satisfy the regex expression `[a-z]([a-z0-9-]{0,61}[a-z0-9])?`, e.g. 1-63 characters of lowercase letters, numbers, and dashes, starting with a letter, and ending with a letter or number. For more details see https://google.aip.dev/122. The prefix of the instance resource name is the name of the parent resource: * projects/{project}/locations/{region}/clusters/{cluster_id} + "networkConfig": { # Metadata related to instance level network configuration. # Optional. Instance level network configuration. + "authorizedExternalNetworks": [ # Optional. A list of external network authorized to access this instance. + { # AuthorizedNetwork contains metadata for an authorized network. + "cidrRange": "A String", # CIDR range for one authorzied network of the instance. + }, + ], + "enablePublicIp": True or False, # Optional. Enabling public ip for the instance. + }, "nodes": [ # Output only. List of available read-only VMs in this instance, including the standby for a PRIMARY instance. { # Details of a single node in the instance. Nodes in an AlloyDB instance are ephemereal, they can change during update, failover, autohealing and resize operations. "id": "A String", # The identifier of the VM e.g. "test-read-0601-407e52be-ms3l". @@ -262,6 +279,7 @@

Method Details

"zoneId": "A String", # The Compute Engine zone of the VM e.g. "us-central1-b". }, ], + "publicIpAddress": "A String", # Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application. "queryInsightsConfig": { # QueryInsights Instance specific configuration. # Configuration for query insights. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. The default value is 5. Any integer between 0 and 20 is considered valid. "queryStringLength": 42, # Query string length. The default value is 1024. Any integer between 256 and 4500 is considered valid. @@ -445,6 +463,14 @@

Method Details

"cpuCount": 42, # The number of CPU's in the VM instance. }, "name": "A String", # Output only. The name of the instance resource with the format: * projects/{project}/locations/{region}/clusters/{cluster_id}/instances/{instance_id} where the cluster and instance ID segments should satisfy the regex expression `[a-z]([a-z0-9-]{0,61}[a-z0-9])?`, e.g. 1-63 characters of lowercase letters, numbers, and dashes, starting with a letter, and ending with a letter or number. For more details see https://google.aip.dev/122. The prefix of the instance resource name is the name of the parent resource: * projects/{project}/locations/{region}/clusters/{cluster_id} + "networkConfig": { # Metadata related to instance level network configuration. # Optional. Instance level network configuration. + "authorizedExternalNetworks": [ # Optional. A list of external network authorized to access this instance. + { # AuthorizedNetwork contains metadata for an authorized network. + "cidrRange": "A String", # CIDR range for one authorzied network of the instance. + }, + ], + "enablePublicIp": True or False, # Optional. Enabling public ip for the instance. + }, "nodes": [ # Output only. List of available read-only VMs in this instance, including the standby for a PRIMARY instance. { # Details of a single node in the instance. Nodes in an AlloyDB instance are ephemereal, they can change during update, failover, autohealing and resize operations. "id": "A String", # The identifier of the VM e.g. "test-read-0601-407e52be-ms3l". @@ -453,6 +479,7 @@

Method Details

"zoneId": "A String", # The Compute Engine zone of the VM e.g. "us-central1-b". }, ], + "publicIpAddress": "A String", # Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application. "queryInsightsConfig": { # QueryInsights Instance specific configuration. # Configuration for query insights. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. The default value is 5. Any integer between 0 and 20 is considered valid. "queryStringLength": 42, # Query string length. The default value is 1024. Any integer between 256 and 4500 is considered valid. @@ -495,6 +522,7 @@

Method Details

"instanceUid": "A String", # Output only. The unique ID of the Instance. "ipAddress": "A String", # Output only. The private network IP address for the Instance. This is the default IP for the instance and is always created (even if enable_public_ip is set). This is the connection endpoint for an end-user application. "name": "A String", # The name of the ConnectionInfo singleton resource, e.g.: projects/{project}/locations/{location}/clusters/*/instances/*/connectionInfo This field currently has no semantic meaning. + "publicIpAddress": "A String", # Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application. }
@@ -591,6 +619,14 @@
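Relating to the instance-level networkConfig and publicIpAddress fields above, a hedged sketch of enabling public IP on an existing instance; the update mask value and CIDR range are assumptions, and default application credentials with AlloyDB admin permissions are assumed.

```python
from googleapiclient.discovery import build

service = build("alloydb", "v1")

instance_name = (
    "projects/my-project/locations/us-central1/"
    "clusters/my-cluster/instances/my-instance"
)

operation = (
    service.projects()
    .locations()
    .clusters()
    .instances()
    .patch(
        name=instance_name,
        updateMask="networkConfig",  # assumed field mask for this change
        body={
            "networkConfig": {
                "enablePublicIp": True,
                "authorizedExternalNetworks": [
                    {"cidrRange": "203.0.113.0/24"}  # example range only
                ],
            }
        },
    )
    .execute()
)
```

Once the returned operation completes, the publicIpAddress field should be populated on the instance and in its connectionInfo.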

Method Details

"cpuCount": 42, # The number of CPU's in the VM instance. }, "name": "A String", # Output only. The name of the instance resource with the format: * projects/{project}/locations/{region}/clusters/{cluster_id}/instances/{instance_id} where the cluster and instance ID segments should satisfy the regex expression `[a-z]([a-z0-9-]{0,61}[a-z0-9])?`, e.g. 1-63 characters of lowercase letters, numbers, and dashes, starting with a letter, and ending with a letter or number. For more details see https://google.aip.dev/122. The prefix of the instance resource name is the name of the parent resource: * projects/{project}/locations/{region}/clusters/{cluster_id} + "networkConfig": { # Metadata related to instance level network configuration. # Optional. Instance level network configuration. + "authorizedExternalNetworks": [ # Optional. A list of external network authorized to access this instance. + { # AuthorizedNetwork contains metadata for an authorized network. + "cidrRange": "A String", # CIDR range for one authorzied network of the instance. + }, + ], + "enablePublicIp": True or False, # Optional. Enabling public ip for the instance. + }, "nodes": [ # Output only. List of available read-only VMs in this instance, including the standby for a PRIMARY instance. { # Details of a single node in the instance. Nodes in an AlloyDB instance are ephemereal, they can change during update, failover, autohealing and resize operations. "id": "A String", # The identifier of the VM e.g. "test-read-0601-407e52be-ms3l". @@ -599,6 +635,7 @@

Method Details

"zoneId": "A String", # The Compute Engine zone of the VM e.g. "us-central1-b". }, ], + "publicIpAddress": "A String", # Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application. "queryInsightsConfig": { # QueryInsights Instance specific configuration. # Configuration for query insights. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. The default value is 5. Any integer between 0 and 20 is considered valid. "queryStringLength": 42, # Query string length. The default value is 1024. Any integer between 256 and 4500 is considered valid. @@ -680,6 +717,14 @@

Method Details

"cpuCount": 42, # The number of CPU's in the VM instance. }, "name": "A String", # Output only. The name of the instance resource with the format: * projects/{project}/locations/{region}/clusters/{cluster_id}/instances/{instance_id} where the cluster and instance ID segments should satisfy the regex expression `[a-z]([a-z0-9-]{0,61}[a-z0-9])?`, e.g. 1-63 characters of lowercase letters, numbers, and dashes, starting with a letter, and ending with a letter or number. For more details see https://google.aip.dev/122. The prefix of the instance resource name is the name of the parent resource: * projects/{project}/locations/{region}/clusters/{cluster_id} + "networkConfig": { # Metadata related to instance level network configuration. # Optional. Instance level network configuration. + "authorizedExternalNetworks": [ # Optional. A list of external network authorized to access this instance. + { # AuthorizedNetwork contains metadata for an authorized network. + "cidrRange": "A String", # CIDR range for one authorzied network of the instance. + }, + ], + "enablePublicIp": True or False, # Optional. Enabling public ip for the instance. + }, "nodes": [ # Output only. List of available read-only VMs in this instance, including the standby for a PRIMARY instance. { # Details of a single node in the instance. Nodes in an AlloyDB instance are ephemereal, they can change during update, failover, autohealing and resize operations. "id": "A String", # The identifier of the VM e.g. "test-read-0601-407e52be-ms3l". @@ -688,6 +733,7 @@

Method Details

"zoneId": "A String", # The Compute Engine zone of the VM e.g. "us-central1-b". }, ], + "publicIpAddress": "A String", # Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application. "queryInsightsConfig": { # QueryInsights Instance specific configuration. # Configuration for query insights. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. The default value is 5. Any integer between 0 and 20 is considered valid. "queryStringLength": 42, # Query string length. The default value is 1024. Any integer between 256 and 4500 is considered valid. diff --git a/docs/dyn/androidpublisher_v3.purchases.products.html b/docs/dyn/androidpublisher_v3.purchases.products.html index 53cac8dea0..1d800f7c42 100644 --- a/docs/dyn/androidpublisher_v3.purchases.products.html +++ b/docs/dyn/androidpublisher_v3.purchases.products.html @@ -159,6 +159,7 @@

Method Details

"purchaseToken": "A String", # The purchase token generated to identify this purchase. May not be present. "purchaseType": 42, # The type of purchase of the inapp product. This field is only set if this purchase was not made using the standard in-app billing flow. Possible values are: 0. Test (i.e. purchased from a license testing account) 1. Promo (i.e. purchased using a promo code) 2. Rewarded (i.e. from watching a video ad instead of paying) "quantity": 42, # The quantity associated with the purchase of the inapp product. If not present, the quantity is 1. + "refundableQuantity": 42, # The quantity eligible for refund, i.e. quantity that hasn't been refunded. The value reflects quantity-based partial refunds and full refunds. "regionCode": "A String", # ISO 3166-1 alpha-2 billing region code of the user at the time the product was granted. }
diff --git a/docs/dyn/androidpublisher_v3.purchases.voidedpurchases.html b/docs/dyn/androidpublisher_v3.purchases.voidedpurchases.html index 014a772e54..37bca9d584 100644 --- a/docs/dyn/androidpublisher_v3.purchases.voidedpurchases.html +++ b/docs/dyn/androidpublisher_v3.purchases.voidedpurchases.html @@ -78,7 +78,7 @@

Instance Methods

close()

Close httplib2 connections.

- list(packageName, endTime=None, maxResults=None, startIndex=None, startTime=None, token=None, type=None, x__xgafv=None)

+ list(packageName, endTime=None, includeQuantityBasedPartialRefund=None, maxResults=None, startIndex=None, startTime=None, token=None, type=None, x__xgafv=None)

Lists the purchases that were canceled, refunded or charged-back.

Method Details

@@ -87,12 +87,13 @@

Method Details

- list(packageName, endTime=None, maxResults=None, startIndex=None, startTime=None, token=None, type=None, x__xgafv=None) + list(packageName, endTime=None, includeQuantityBasedPartialRefund=None, maxResults=None, startIndex=None, startTime=None, token=None, type=None, x__xgafv=None)
Lists the purchases that were canceled, refunded or charged-back.
 
 Args:
   packageName: string, The package name of the application for which voided purchases need to be returned (for example, 'com.some.thing'). (required)
   endTime: string, The time, in milliseconds since the Epoch, of the newest voided purchase that you want to see in the response. The value of this parameter cannot be greater than the current time and is ignored if a pagination token is set. Default value is current time. Note: This filter is applied on the time at which the record is seen as voided by our systems and not the actual voided time returned in the response.
+  includeQuantityBasedPartialRefund: boolean, Optional. Whether to include voided purchases of quantity-based partial refunds, which are applicable only to multi-quantity purchases. If true, additional voided purchases may be returned with voidedQuantity that indicates the refund quantity of a quantity-based partial refund. The default value is false.
   maxResults: integer, Defines how many results the list operation should return. The default number depends on the resource collection.
   startIndex: integer, Defines the index of the first element to return. This can only be used if indexed paging is enabled.
   startTime: string, The time, in milliseconds since the Epoch, of the oldest voided purchase that you want to see in the response. The value of this parameter cannot be older than 30 days and is ignored if a pagination token is set. Default value is current time minus 30 days. Note: This filter is applied on the time at which the record is seen as voided by our systems and not the actual voided time returned in the response.
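A minimal sketch of calling list with the new includeQuantityBasedPartialRefund flag, assuming default application credentials and a placeholder package name.

```python
from googleapiclient.discovery import build

service = build("androidpublisher", "v3")

response = (
    service.purchases()
    .voidedpurchases()
    .list(
        packageName="com.example.app",           # placeholder package name
        includeQuantityBasedPartialRefund=True,  # opt in to partial-refund records
    )
    .execute()
)

for voided in response.get("voidedPurchases", []):
    # voidedQuantity is only present for quantity-based partial refunds.
    print(voided.get("purchaseToken"), voided.get("voidedQuantity"))
```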
@@ -122,6 +123,7 @@ 

Method Details

"orderId": "A String", # The order id which uniquely identifies a one-time purchase, subscription purchase, or subscription renewal. "purchaseTimeMillis": "A String", # The time at which the purchase was made, in milliseconds since the epoch (Jan 1, 1970). "purchaseToken": "A String", # The token which uniquely identifies a one-time purchase or subscription. To uniquely identify subscription renewals use order_id (available starting from version 3 of the API). + "voidedQuantity": 42, # The voided quantity as the result of a quantity-based partial refund. Voided purchases of quantity-based partial refunds may only be returned when includeQuantityBasedPartialRefund is set to true. "voidedReason": 42, # The reason why the purchase was voided, possible values are: 0. Other 1. Remorse 2. Not_received 3. Defective 4. Accidental_purchase 5. Fraud 6. Friendly_fraud 7. Chargeback "voidedSource": 42, # The initiator of voided purchase, possible values are: 0. User 1. Developer 2. Google "voidedTimeMillis": "A String", # The time at which the purchase was canceled/refunded/charged-back, in milliseconds since the epoch (Jan 1, 1970). diff --git a/docs/dyn/apigee_v1.organizations.instances.html b/docs/dyn/apigee_v1.organizations.instances.html index 8092a9737f..2379bb3b0d 100644 --- a/docs/dyn/apigee_v1.organizations.instances.html +++ b/docs/dyn/apigee_v1.organizations.instances.html @@ -129,6 +129,10 @@

Method Details

The object takes the form of: { # Apigee runtime instance. + "accessLoggingConfig": { # Access logging configuration enables customers to ship the access logs from the tenant projects to their own project's Cloud Logging. The feature is configured at the instance level and is disabled by default. It can be enabled during CreateInstance or UpdateInstance. # Optional. Access logging configuration enables the access logging feature at the instance. Apigee customers can enable access logging to ship the access logs to their own project's Cloud Logging. + "enabled": True or False, # Optional. Boolean flag that specifies whether the customer access log feature is enabled. + "filter": "A String", # Optional. Ship the access log entries that match the status_code defined in the filter. The status_code is the only expected/supported filter field. (Ex: status_code) The filter is parsed using Common Expression Language semantics to build the filter condition. (Ex: "filter": status_code >= 200 && status_code < 300 ) + }, "consumerAcceptList": [ # Optional. Customer accept list represents the list of projects (id/number) on the customer side that can privately connect to the service attachment. It is an optional field that customers can provide during instance creation. By default, the customer project associated with the Apigee organization will be included in the list. "A String", ], @@ -227,6 +231,10 @@
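A hedged sketch of enabling access logging when creating an instance; the organization, instance name, location, and filter expression are illustrative assumptions, and default application credentials are assumed.

```python
from googleapiclient.discovery import build

service = build("apigee", "v1")

operation = (
    service.organizations()
    .instances()
    .create(
        parent="organizations/my-org",  # placeholder organization
        body={
            "name": "us-central1-instance",   # placeholder instance name
            "location": "us-central1",
            "accessLoggingConfig": {
                "enabled": True,
                # Only status_code is supported in the filter, per the description above.
                "filter": "status_code >= 400 && status_code < 600",
            },
        },
    )
    .execute()
)
```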

Method Details

An object of the form: { # Apigee runtime instance. + "accessLoggingConfig": { # Access logging configuration enables customers to ship the access logs from the tenant projects to their own project's cloud logging. The feature is at the instance level ad disabled by default. It can be enabled during CreateInstance or UpdateInstance. # Optional. Access logging configuration enables the access logging feature at the instance. Apigee customers can enable access logging to ship the access logs to their own project's cloud logging. + "enabled": True or False, # Optional. Boolean flag that specifies whether the customer access log feature is enabled. + "filter": "A String", # Optional. Ship the access log entries that match the status_code defined in the filter. The status_code is the only expected/supported filter field. (Ex: status_code) The filter will parse it to the Common Expression Language semantics for expression evaluation to build the filter condition. (Ex: "filter": status_code >= 200 && status_code < 300 ) + }, "consumerAcceptList": [ # Optional. Customer accept list represents the list of projects (id/number) on customer side that can privately connect to the service attachment. It is an optional field which the customers can provide during the instance creation. By default, the customer project associated with the Apigee organization will be included to the list. "A String", ], @@ -266,6 +274,10 @@

Method Details

{ # Response for ListInstances. "instances": [ # Instances in the specified organization. { # Apigee runtime instance. + "accessLoggingConfig": { # Access logging configuration enables customers to ship the access logs from the tenant projects to their own project's cloud logging. The feature is at the instance level ad disabled by default. It can be enabled during CreateInstance or UpdateInstance. # Optional. Access logging configuration enables the access logging feature at the instance. Apigee customers can enable access logging to ship the access logs to their own project's cloud logging. + "enabled": True or False, # Optional. Boolean flag that specifies whether the customer access log feature is enabled. + "filter": "A String", # Optional. Ship the access log entries that match the status_code defined in the filter. The status_code is the only expected/supported filter field. (Ex: status_code) The filter will parse it to the Common Expression Language semantics for expression evaluation to build the filter condition. (Ex: "filter": status_code >= 200 && status_code < 300 ) + }, "consumerAcceptList": [ # Optional. Customer accept list represents the list of projects (id/number) on customer side that can privately connect to the service attachment. It is an optional field which the customers can provide during the instance creation. By default, the customer project associated with the Apigee organization will be included to the list. "A String", ], @@ -313,6 +325,10 @@

Method Details

The object takes the form of: { # Apigee runtime instance. + "accessLoggingConfig": { # Access logging configuration enables customers to ship the access logs from the tenant projects to their own project's cloud logging. The feature is at the instance level ad disabled by default. It can be enabled during CreateInstance or UpdateInstance. # Optional. Access logging configuration enables the access logging feature at the instance. Apigee customers can enable access logging to ship the access logs to their own project's cloud logging. + "enabled": True or False, # Optional. Boolean flag that specifies whether the customer access log feature is enabled. + "filter": "A String", # Optional. Ship the access log entries that match the status_code defined in the filter. The status_code is the only expected/supported filter field. (Ex: status_code) The filter will parse it to the Common Expression Language semantics for expression evaluation to build the filter condition. (Ex: "filter": status_code >= 200 && status_code < 300 ) + }, "consumerAcceptList": [ # Optional. Customer accept list represents the list of projects (id/number) on customer side that can privately connect to the service attachment. It is an optional field which the customers can provide during the instance creation. By default, the customer project associated with the Apigee organization will be included to the list. "A String", ], diff --git a/docs/dyn/batch_v1.projects.locations.jobs.html b/docs/dyn/batch_v1.projects.locations.jobs.html index d4b122e9cf..1b48c7874f 100644 --- a/docs/dyn/batch_v1.projects.locations.jobs.html +++ b/docs/dyn/batch_v1.projects.locations.jobs.html @@ -300,7 +300,7 @@

Method Details

}, ], "maxRetryCount": 42, # Maximum number of retries on failures. The default is 0, which means never retry. The valid value range is [0, 10]. - "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. + "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. The valid value range for max_run_duration in seconds is [0, 315576000000.999999999]. "runnables": [ # The sequence of scripts or containers to run for this Task. Each Task using this TaskSpec executes its list of runnables in order. The Task succeeds if all of its runnables either exit with a zero status or any that exit with a non-zero status have the ignore_exit_status flag. Background runnables are killed automatically (if they have not already exited) a short time after all foreground runnables have completed. Even though this is likely to result in a non-zero exit status for the background runnable, these automatic kills are not treated as Task failures. { # Runnable describes instructions for executing a specific script or container as part of a Task. "alwaysRun": True or False, # By default, after a Runnable fails, no further Runnables are executed. This flag indicates that this Runnable must be run even if the Task has already failed. This is useful for Runnables that copy output files off of the VM or for debugging. The always_run flag does not override the Task's overall max_run_duration. If the max_run_duration has expired then no further Runnables will execute, not even always_run Runnables. @@ -568,7 +568,7 @@
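To make the maxRunDuration limit above concrete, a minimal sketch of creating a job that caps each task at one hour; the project, region, job ID, and script are placeholders, and this omits allocation and logging settings a real job would usually carry.

```python
from googleapiclient.discovery import build

service = build("batch", "v1")

job = {
    "taskGroups": [
        {
            "taskSpec": {
                "runnables": [{"script": {"text": "echo hello"}}],
                "maxRetryCount": 2,
                "maxRunDuration": "3600s",  # one hour, well inside the documented range
            }
        }
    ],
}

operation = (
    service.projects()
    .locations()
    .jobs()
    .create(
        parent="projects/my-project/locations/us-central1",  # placeholder
        jobId="example-job",
        body=job,
    )
    .execute()
)
```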

Method Details

}, ], "maxRetryCount": 42, # Maximum number of retries on failures. The default, 0, which means never retry. The valid value range is [0, 10]. - "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. + "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. The valid value range for max_run_duration in seconds is [0, 315576000000.999999999], "runnables": [ # The sequence of scripts or containers to run for this Task. Each Task using this TaskSpec executes its list of runnables in order. The Task succeeds if all of its runnables either exit with a zero status or any that exit with a non-zero status have the ignore_exit_status flag. Background runnables are killed automatically (if they have not already exited) a short time after all foreground runnables have completed. Even though this is likely to result in a non-zero exit status for the background runnable, these automatic kills are not treated as Task failures. { # Runnable describes instructions for executing a specific script or container as part of a Task. "alwaysRun": True or False, # By default, after a Runnable fails, no further Runnable are executed. This flag indicates that this Runnable must be run even if the Task has already failed. This is useful for Runnables that copy output files off of the VM or for debugging. The always_run flag does not override the Task's overall max_run_duration. If the max_run_duration has expired then no further Runnables will execute, not even always_run Runnables. @@ -878,7 +878,7 @@

Method Details

}, ], "maxRetryCount": 42, # Maximum number of retries on failures. The default, 0, which means never retry. The valid value range is [0, 10]. - "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. + "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. The valid value range for max_run_duration in seconds is [0, 315576000000.999999999], "runnables": [ # The sequence of scripts or containers to run for this Task. Each Task using this TaskSpec executes its list of runnables in order. The Task succeeds if all of its runnables either exit with a zero status or any that exit with a non-zero status have the ignore_exit_status flag. Background runnables are killed automatically (if they have not already exited) a short time after all foreground runnables have completed. Even though this is likely to result in a non-zero exit status for the background runnable, these automatic kills are not treated as Task failures. { # Runnable describes instructions for executing a specific script or container as part of a Task. "alwaysRun": True or False, # By default, after a Runnable fails, no further Runnable are executed. This flag indicates that this Runnable must be run even if the Task has already failed. This is useful for Runnables that copy output files off of the VM or for debugging. The always_run flag does not override the Task's overall max_run_duration. If the max_run_duration has expired then no further Runnables will execute, not even always_run Runnables. @@ -1157,7 +1157,7 @@

Method Details

}, ], "maxRetryCount": 42, # Maximum number of retries on failures. The default, 0, which means never retry. The valid value range is [0, 10]. - "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. + "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. The valid value range for max_run_duration in seconds is [0, 315576000000.999999999], "runnables": [ # The sequence of scripts or containers to run for this Task. Each Task using this TaskSpec executes its list of runnables in order. The Task succeeds if all of its runnables either exit with a zero status or any that exit with a non-zero status have the ignore_exit_status flag. Background runnables are killed automatically (if they have not already exited) a short time after all foreground runnables have completed. Even though this is likely to result in a non-zero exit status for the background runnable, these automatic kills are not treated as Task failures. { # Runnable describes instructions for executing a specific script or container as part of a Task. "alwaysRun": True or False, # By default, after a Runnable fails, no further Runnable are executed. This flag indicates that this Runnable must be run even if the Task has already failed. This is useful for Runnables that copy output files off of the VM or for debugging. The always_run flag does not override the Task's overall max_run_duration. If the max_run_duration has expired then no further Runnables will execute, not even always_run Runnables. diff --git a/docs/dyn/batch_v1.projects.locations.state.html b/docs/dyn/batch_v1.projects.locations.state.html index 2f9d23e872..9c71d1a496 100644 --- a/docs/dyn/batch_v1.projects.locations.state.html +++ b/docs/dyn/batch_v1.projects.locations.state.html @@ -168,7 +168,7 @@

Method Details

"a_key": "A String", }, }, - "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. + "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. The valid value range for max_run_duration in seconds is [0, 315576000000.999999999], "runnables": [ # AgentTaskRunnable is runanbles that will be executed on the agent. { # AgentTaskRunnable is the Runnable representation between Agent and CLH communication. "alwaysRun": True or False, # By default, after a Runnable fails, no further Runnable are executed. This flag indicates that this Runnable must be run even if the Task has already failed. This is useful for Runnables that copy output files off of the VM or for debugging. The always_run flag does not override the Task's overall max_run_duration. If the max_run_duration has expired then no further Runnables will execute, not even always_run Runnables. @@ -243,7 +243,7 @@

Method Details

}, ], "maxRetryCount": 42, # Maximum number of retries on failures. The default, 0, which means never retry. The valid value range is [0, 10]. - "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. + "maxRunDuration": "A String", # Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. The valid value range for max_run_duration in seconds is [0, 315576000000.999999999], "runnables": [ # The sequence of scripts or containers to run for this Task. Each Task using this TaskSpec executes its list of runnables in order. The Task succeeds if all of its runnables either exit with a zero status or any that exit with a non-zero status have the ignore_exit_status flag. Background runnables are killed automatically (if they have not already exited) a short time after all foreground runnables have completed. Even though this is likely to result in a non-zero exit status for the background runnable, these automatic kills are not treated as Task failures. { # Runnable describes instructions for executing a specific script or container as part of a Task. "alwaysRun": True or False, # By default, after a Runnable fails, no further Runnable are executed. This flag indicates that this Runnable must be run even if the Task has already failed. This is useful for Runnables that copy output files off of the VM or for debugging. The always_run flag does not override the Task's overall max_run_duration. If the max_run_duration has expired then no further Runnables will execute, not even always_run Runnables. diff --git a/docs/dyn/bigquerydatatransfer_v1.projects.html b/docs/dyn/bigquerydatatransfer_v1.projects.html index eae66aa02f..852454309d 100644 --- a/docs/dyn/bigquerydatatransfer_v1.projects.html +++ b/docs/dyn/bigquerydatatransfer_v1.projects.html @@ -106,7 +106,7 @@

Method Details

Enroll data sources in a user project. This allows users to create transfer configurations for these data sources. They will also appear in the ListDataSources RPC and as such, will appear in the [BigQuery UI](https://console.cloud.google.com/bigquery), and the documents can be found in the public guide for [BigQuery Web UI](https://cloud.google.com/bigquery/bigquery-web-ui) and [Data Transfer Service](https://cloud.google.com/bigquery/docs/working-with-transfers).
 
 Args:
-  name: string, The name of the project resource in the form: `projects/{project_id}` (required)
+  name: string, Required. The name of the project resource in the form: `projects/{project_id}` (required)
   body: object, The request body.
     The object takes the form of:
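Because the request body is truncated in this hunk, here is a hedged sketch of the enrollDataSources call; the dataSourceIds field and its value are assumptions, and default application credentials are assumed.

```python
from googleapiclient.discovery import build

service = build("bigquerydatatransfer", "v1")

service.projects().enrollDataSources(
    name="projects/my-project",              # placeholder project
    body={"dataSourceIds": ["google_ads"]},  # assumed request-body field
).execute()
```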
 
diff --git a/docs/dyn/bigquerydatatransfer_v1.projects.locations.html b/docs/dyn/bigquerydatatransfer_v1.projects.locations.html
index c34efa252e..82e5faca79 100644
--- a/docs/dyn/bigquerydatatransfer_v1.projects.locations.html
+++ b/docs/dyn/bigquerydatatransfer_v1.projects.locations.html
@@ -113,7 +113,7 @@ 

Method Details

Enroll data sources in a user project. This allows users to create transfer configurations for these data sources. They will also appear in the ListDataSources RPC and as such, will appear in the [BigQuery UI](https://console.cloud.google.com/bigquery), and the documents can be found in the public guide for [BigQuery Web UI](https://cloud.google.com/bigquery/bigquery-web-ui) and [Data Transfer Service](https://cloud.google.com/bigquery/docs/working-with-transfers).
 
 Args:
-  name: string, The name of the project resource in the form: `projects/{project_id}` (required)
+  name: string, Required. The name of the project resource in the form: `projects/{project_id}` (required)
   body: object, The request body.
     The object takes the form of:
 
@@ -216,7 +216,7 @@ 

Method Details

Unenroll data sources in a user project. This allows users to remove transfer configurations for these data sources. They will no longer appear in the ListDataSources RPC and will also no longer appear in the [BigQuery UI](https://console.cloud.google.com/bigquery). Data transfers configurations of unenrolled data sources will not be scheduled.
 
 Args:
-  name: string, The name of the project resource in the form: `projects/{project_id}` (required)
+  name: string, Required. The name of the project resource in the form: `projects/{project_id}` (required)
   body: object, The request body.
     The object takes the form of:
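A similar hedged sketch for unenrollDataSources, again with an assumed request body and a placeholder location.

```python
from googleapiclient.discovery import build

service = build("bigquerydatatransfer", "v1")

service.projects().locations().unenrollDataSources(
    name="projects/my-project/locations/us",  # placeholder location
    body={"dataSourceIds": ["google_ads"]},   # assumed request-body field
).execute()
```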
 
diff --git a/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.html b/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.html
index 7dc7496dfc..6d998b21d4 100644
--- a/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.html
+++ b/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.html
@@ -134,7 +134,7 @@ 

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -178,7 +178,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -244,7 +244,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -298,7 +298,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -340,7 +340,7 @@

Method Details

Updates a data transfer configuration. All fields must be set, even if they are not updated.
 
 Args:
-  name: string, The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. (required)
+  name: string, Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. (required)
   body: object, The request body.
     The object takes the form of:
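A hedged sketch of the patch call; the config ID, update mask, and display name are placeholders, and per the note above the full configuration may need to be supplied rather than just the changed fields.

```python
from googleapiclient.discovery import build

service = build("bigquerydatatransfer", "v1")

config_name = (
    "projects/my-project/locations/us/"
    "transferConfigs/00000000-0000-0000-0000-000000000000"  # placeholder config_id
)

# Sketch only: shows the shape of the call, not a complete transfer config.
updated = (
    service.projects()
    .locations()
    .transferConfigs()
    .patch(
        name=config_name,
        updateMask="displayName",  # assumed field mask
        body={"displayName": "Nightly GCS load"},
    )
    .execute()
)
```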
 
@@ -357,7 +357,7 @@ 

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -402,7 +402,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -463,7 +463,7 @@

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. @@ -485,7 +485,7 @@

Method Details

Start manual transfer runs to be executed now with schedule_time equal to current time. The transfer runs can be created for a time range where the run_time is between start_time (inclusive) and end_time (exclusive), or for a specific run_time.
 
 Args:
-  parent: string, Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`. (required)
+  parent: string, Required. Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`. (required)
   body: object, The request body.
     The object takes the form of:
 
@@ -523,7 +523,7 @@ 

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. diff --git a/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.runs.html b/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.runs.html index a13b97a1b1..28d5984cbd 100644 --- a/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.runs.html +++ b/docs/dyn/bigquerydatatransfer_v1.projects.locations.transferConfigs.runs.html @@ -148,7 +148,7 @@

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. @@ -210,7 +210,7 @@

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. diff --git a/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.html b/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.html index a3012a81e4..093a03b424 100644 --- a/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.html +++ b/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.html @@ -134,7 +134,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -178,7 +178,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -244,7 +244,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -298,7 +298,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -340,7 +340,7 @@

Method Details

Updates a data transfer configuration. All fields must be set, even if they are not updated.
 
 Args:
-  name: string, The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. (required)
+  name: string, Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. (required)
   body: object, The request body.
     The object takes the form of:
 
@@ -357,7 +357,7 @@ 

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -402,7 +402,7 @@

Method Details

"encryptionConfiguration": { # Represents the encryption configuration for a transfer. # The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent. "kmsKeyName": "A String", # The name of the KMS key used for encrypting BigQuery data. }, - "name": "A String", # The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. + "name": "A String", # Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config. "nextRunTime": "A String", # Output only. Next time when data transfer will run. "notificationPubsubTopic": "A String", # Pub/Sub topic where notifications will be sent after transfer runs associated with this transfer config finish. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "ownerInfo": { # Information about a user. # Output only. Information about the user whose credentials are used to transfer data. Populated only for `transferConfigs.get` requests. In case the user information is not available, this field will not be populated. @@ -463,7 +463,7 @@

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. @@ -485,7 +485,7 @@

Method Details

Start manual transfer runs to be executed now with schedule_time equal to current time. The transfer runs can be created for a time range where the run_time is between start_time (inclusive) and end_time (exclusive), or for a specific run_time.
 
 Args:
-  parent: string, Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`. (required)
+  parent: string, Required. Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`. (required)
   body: object, The request body.
     The object takes the form of:
 
@@ -523,7 +523,7 @@ 

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. diff --git a/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.runs.html b/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.runs.html index 197f4bded9..36610f70c2 100644 --- a/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.runs.html +++ b/docs/dyn/bigquerydatatransfer_v1.projects.transferConfigs.runs.html @@ -148,7 +148,7 @@

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. @@ -210,7 +210,7 @@

Method Details

], "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. }, - "name": "A String", # The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. + "name": "A String", # Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run. "notificationPubsubTopic": "A String", # Output only. Pub/Sub topic where a notification will be sent after this transfer run finishes. The format for specifying a pubsub topic is: `projects/{project_id}/topics/{topic_id}` "params": { # Output only. Parameters specific to each data source. For more information see the bq tab in the 'Setting up a data transfer' section for each data source. For example the parameters for Cloud Storage transfers are listed here: https://cloud.google.com/bigquery-transfer/docs/cloud-storage-transfer#bq "a_key": "", # Properties of the object. diff --git a/docs/dyn/bigqueryreservation_v1.projects.locations.reservations.html b/docs/dyn/bigqueryreservation_v1.projects.locations.reservations.html index 78954a5409..c16e737539 100644 --- a/docs/dyn/bigqueryreservation_v1.projects.locations.reservations.html +++ b/docs/dyn/bigqueryreservation_v1.projects.locations.reservations.html @@ -88,6 +88,9 @@

Instance Methods

delete(name, x__xgafv=None)

Deletes a reservation. Returns `google.rpc.Code.FAILED_PRECONDITION` when reservation has assignments.

+

+ failoverReservation(name, body=None, x__xgafv=None)

+

Failover a reservation to the secondary location. The operation should be done in the current secondary location, which will be promoted to the new primary location for the reservation. Attempting to failover a reservation in the current primary location will fail with the error code `google.rpc.Code.FAILED_PRECONDITION`.

get(name, x__xgafv=None)

Returns information about the reservation.

@@ -126,6 +129,9 @@

Method Details

"ignoreIdleSlots": True or False, # If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, a query or pipeline job using this reservation will execute with the slot capacity specified in the slot_capacity field at most. "multiRegionAuxiliary": True or False, # Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field. "name": "A String", # The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters. + "originalPrimaryLocation": "A String", # Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "primaryLocation": "A String", # Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "secondaryLocation": "A String", # Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation. "slotCapacity": "A String", # Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes. "updateTime": "A String", # Output only. Last update time of the reservation. } @@ -150,6 +156,9 @@

Method Details

"ignoreIdleSlots": True or False, # If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, a query or pipeline job using this reservation will execute with the slot capacity specified in the slot_capacity field at most. "multiRegionAuxiliary": True or False, # Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field. "name": "A String", # The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters. + "originalPrimaryLocation": "A String", # Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "primaryLocation": "A String", # Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "secondaryLocation": "A String", # Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation. "slotCapacity": "A String", # Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes. "updateTime": "A String", # Output only. Last update time of the reservation. }
@@ -173,6 +182,45 @@

Method Details

}
+
+ failoverReservation(name, body=None, x__xgafv=None)
+Failover a reservation to the secondary location. The operation should be done in the current secondary location, which will be promoted to the new primary location for the reservation. Attempting to failover a reservation in the current primary location will fail with the error code `google.rpc.Code.FAILED_PRECONDITION`.
+
+Args:
+  name: string, Required. Resource name of the reservation to failover. E.g., `projects/myproject/locations/US/reservations/team1-prod` (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # The request for ReservationService.FailoverReservation.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A reservation is a mechanism used to guarantee slots to users.
+  "autoscale": { # Auto scaling settings. # The configuration parameters for the auto scaling feature.
+    "currentSlots": "A String", # Output only. The slot capacity added to this reservation when autoscale happens. Will be between [0, max_slots].
+    "maxSlots": "A String", # Number of slots to be scaled when needed.
+  },
+  "concurrency": "A String", # Job concurrency target which sets a soft upper bound on the number of jobs that can run concurrently in this reservation. This is a soft target due to asynchronous nature of the system and various optimizations for small queries. Default value is 0 which means that concurrency target will be automatically computed by the system. NOTE: this field is exposed as target job concurrency in the Information Schema, DDL and BQ CLI.
+  "creationTime": "A String", # Output only. Creation time of the reservation.
+  "edition": "A String", # Edition of the reservation.
+  "ignoreIdleSlots": True or False, # If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, a query or pipeline job using this reservation will execute with the slot capacity specified in the slot_capacity field at most.
+  "multiRegionAuxiliary": True or False, # Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field.
+  "name": "A String", # The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters.
+  "originalPrimaryLocation": "A String", # Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions.
+  "primaryLocation": "A String", # Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions.
+  "secondaryLocation": "A String", # Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation.
+  "slotCapacity": "A String", # Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes.
+  "updateTime": "A String", # Output only. Last update time of the reservation.
+}
+
+
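A short sketch of calling the failoverReservation method added above. Per the method description, the request must be sent in the reservation's current secondary location, which is then promoted to primary; the project and reservation names are placeholders and the request body is empty.

    from googleapiclient import discovery

    service = discovery.build("bigqueryreservation", "v1")
    name = "projects/myproject/locations/US/reservations/team1-prod"  # placeholder

    reservation = (
        service.projects()
        .locations()
        .reservations()
        .failoverReservation(name=name, body={})
        .execute()
    )
    # After a successful failover, the returned reservation reflects the swapped roles.
    print(reservation.get("primaryLocation"), reservation.get("secondaryLocation"))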
get(name, x__xgafv=None)
Returns information about the reservation.
@@ -198,6 +246,9 @@ 

Method Details

"ignoreIdleSlots": True or False, # If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, a query or pipeline job using this reservation will execute with the slot capacity specified in the slot_capacity field at most. "multiRegionAuxiliary": True or False, # Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field. "name": "A String", # The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters. + "originalPrimaryLocation": "A String", # Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "primaryLocation": "A String", # Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "secondaryLocation": "A String", # Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation. "slotCapacity": "A String", # Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes. "updateTime": "A String", # Output only. Last update time of the reservation. }
@@ -233,6 +284,9 @@

Method Details

"ignoreIdleSlots": True or False, # If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, a query or pipeline job using this reservation will execute with the slot capacity specified in the slot_capacity field at most. "multiRegionAuxiliary": True or False, # Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field. "name": "A String", # The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters. + "originalPrimaryLocation": "A String", # Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "primaryLocation": "A String", # Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "secondaryLocation": "A String", # Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation. "slotCapacity": "A String", # Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes. "updateTime": "A String", # Output only. Last update time of the reservation. }, @@ -274,6 +328,9 @@

Method Details

"ignoreIdleSlots": True or False, # If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, a query or pipeline job using this reservation will execute with the slot capacity specified in the slot_capacity field at most. "multiRegionAuxiliary": True or False, # Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field. "name": "A String", # The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters. + "originalPrimaryLocation": "A String", # Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "primaryLocation": "A String", # Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "secondaryLocation": "A String", # Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation. "slotCapacity": "A String", # Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes. "updateTime": "A String", # Output only. Last update time of the reservation. } @@ -298,6 +355,9 @@

Method Details

"ignoreIdleSlots": True or False, # If false, any query or pipeline job using this reservation will use idle slots from other reservations within the same admin project. If true, a query or pipeline job using this reservation will execute with the slot capacity specified in the slot_capacity field at most. "multiRegionAuxiliary": True or False, # Applicable only for reservations located within one of the BigQuery multi-regions (US or EU). If set to true, this reservation is placed in the organization's secondary region which is designated for disaster recovery purposes. If false, this reservation is placed in the organization's default region. NOTE: this is a preview feature. Project must be allow-listed in order to set this field. "name": "A String", # The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters. + "originalPrimaryLocation": "A String", # Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "primaryLocation": "A String", # Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions. + "secondaryLocation": "A String", # Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation. "slotCapacity": "A String", # Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes. "updateTime": "A String", # Output only. Last update time of the reservation. 
} diff --git a/docs/dyn/bigtableadmin_v2.projects.instances.appProfiles.html b/docs/dyn/bigtableadmin_v2.projects.instances.appProfiles.html index 0792fa95a3..2c9e6811e1 100644 --- a/docs/dyn/bigtableadmin_v2.projects.instances.appProfiles.html +++ b/docs/dyn/bigtableadmin_v2.projects.instances.appProfiles.html @@ -111,6 +111,9 @@

Method Details

    The object takes the form of:

{ # A configuration object describing how Cloud Bigtable should treat traffic from a particular end user application.
+  "dataBoostIsolationReadOnly": { # Data Boost is a serverless compute capability that lets you run high-throughput read jobs on your Bigtable data, without impacting the performance of the clusters that handle your application traffic. Currently, Data Boost exclusively supports read-only use-cases with single-cluster routing. Data Boost reads are only guaranteed to see the results of writes that were written at least 30 minutes ago. This means newly written values may not become visible for up to 30m, and also means that old values may remain visible for up to 30m after being deleted or overwritten. To mitigate the staleness of the data, users may either wait 30m, or use CheckConsistency. # Specifies that this app profile is intended for read-only usage via the Data Boost feature.
+    "computeBillingOwner": "A String", # The Compute Billing Owner for this Data Boost App Profile.
+  },
  "description": "A String", # Long form description of the use case for this AppProfile.
  "etag": "A String", # Strongly validated etag for optimistic concurrency control. Preserve the value returned from `GetAppProfile` when calling `UpdateAppProfile` to fail the request if there has been a modification in the mean time. The `update_mask` of the request need not include `etag` for this protection to apply. See [Wikipedia](https://en.wikipedia.org/wiki/HTTP_ETag) and [RFC 7232](https://tools.ietf.org/html/rfc7232#section-2.3) for more details.
  "multiClusterRoutingUseAny": { # Read/write requests are routed to the nearest cluster in the instance, and will fail over to the nearest cluster that is available in the event of transient errors or delays. Clusters in a region are considered equidistant. Choosing this option sacrifices read-your-writes consistency to improve availability. # Use a multi-cluster routing policy.
@@ -140,6 +143,9 @@

Method Details

    An object of the form:

{ # A configuration object describing how Cloud Bigtable should treat traffic from a particular end user application.
+  "dataBoostIsolationReadOnly": { # Data Boost is a serverless compute capability that lets you run high-throughput read jobs on your Bigtable data, without impacting the performance of the clusters that handle your application traffic. Currently, Data Boost exclusively supports read-only use-cases with single-cluster routing. Data Boost reads are only guaranteed to see the results of writes that were written at least 30 minutes ago. This means newly written values may not become visible for up to 30m, and also means that old values may remain visible for up to 30m after being deleted or overwritten. To mitigate the staleness of the data, users may either wait 30m, or use CheckConsistency. # Specifies that this app profile is intended for read-only usage via the Data Boost feature.
+    "computeBillingOwner": "A String", # The Compute Billing Owner for this Data Boost App Profile.
+  },
  "description": "A String", # Long form description of the use case for this AppProfile.
  "etag": "A String", # Strongly validated etag for optimistic concurrency control. Preserve the value returned from `GetAppProfile` when calling `UpdateAppProfile` to fail the request if there has been a modification in the mean time. The `update_mask` of the request need not include `etag` for this protection to apply. See [Wikipedia](https://en.wikipedia.org/wiki/HTTP_ETag) and [RFC 7232](https://tools.ietf.org/html/rfc7232#section-2.3) for more details.
  "multiClusterRoutingUseAny": { # Read/write requests are routed to the nearest cluster in the instance, and will fail over to the nearest cluster that is available in the event of transient errors or delays. Clusters in a region are considered equidistant. Choosing this option sacrifices read-your-writes consistency to improve availability. # Use a multi-cluster routing policy.
@@ -193,6 +199,9 @@

Method Details

    An object of the form:

{ # A configuration object describing how Cloud Bigtable should treat traffic from a particular end user application.
+  "dataBoostIsolationReadOnly": { # Data Boost is a serverless compute capability that lets you run high-throughput read jobs on your Bigtable data, without impacting the performance of the clusters that handle your application traffic. Currently, Data Boost exclusively supports read-only use-cases with single-cluster routing. Data Boost reads are only guaranteed to see the results of writes that were written at least 30 minutes ago. This means newly written values may not become visible for up to 30m, and also means that old values may remain visible for up to 30m after being deleted or overwritten. To mitigate the staleness of the data, users may either wait 30m, or use CheckConsistency. # Specifies that this app profile is intended for read-only usage via the Data Boost feature.
+    "computeBillingOwner": "A String", # The Compute Billing Owner for this Data Boost App Profile.
+  },
  "description": "A String", # Long form description of the use case for this AppProfile.
  "etag": "A String", # Strongly validated etag for optimistic concurrency control. Preserve the value returned from `GetAppProfile` when calling `UpdateAppProfile` to fail the request if there has been a modification in the mean time. The `update_mask` of the request need not include `etag` for this protection to apply. See [Wikipedia](https://en.wikipedia.org/wiki/HTTP_ETag) and [RFC 7232](https://tools.ietf.org/html/rfc7232#section-2.3) for more details.
  "multiClusterRoutingUseAny": { # Read/write requests are routed to the nearest cluster in the instance, and will fail over to the nearest cluster that is available in the event of transient errors or delays. Clusters in a region are considered equidistant. Choosing this option sacrifices read-your-writes consistency to improve availability. # Use a multi-cluster routing policy.
@@ -231,6 +240,9 @@

Method Details

{ # Response message for BigtableInstanceAdmin.ListAppProfiles. "appProfiles": [ # The list of requested app profiles. { # A configuration object describing how Cloud Bigtable should treat traffic from a particular end user application. + "dataBoostIsolationReadOnly": { # Data Boost is a serverless compute capability that lets you run high-throughput read jobs on your Bigtable data, without impacting the performance of the clusters that handle your application traffic. Currently, Data Boost exclusively supports read-only use-cases with single-cluster routing. Data Boost reads are only guaranteed to see the results of writes that were written at least 30 minutes ago. This means newly written values may not become visible for up to 30m, and also means that old values may remain visible for up to 30m after being deleted or overwritten. To mitigate the staleness of the data, users may either wait 30m, or use CheckConsistency. # Specifies that this app profile is intended for read-only usage via the Data Boost feature. + "computeBillingOwner": "A String", # The Compute Billing Owner for this Data Boost App Profile. + }, "description": "A String", # Long form description of the use case for this AppProfile. "etag": "A String", # Strongly validated etag for optimistic concurrency control. Preserve the value returned from `GetAppProfile` when calling `UpdateAppProfile` to fail the request if there has been a modification in the mean time. The `update_mask` of the request need not include `etag` for this protection to apply. See [Wikipedia](https://en.wikipedia.org/wiki/HTTP_ETag) and [RFC 7232](https://tools.ietf.org/html/rfc7232#section-2.3) for more details. "multiClusterRoutingUseAny": { # Read/write requests are routed to the nearest cluster in the instance, and will fail over to the nearest cluster that is available in the event of transient errors or delays. Clusters in a region are considered equidistant. Choosing this option sacrifices read-your-writes consistency to improve availability. # Use a multi-cluster routing policy. @@ -280,6 +292,9 @@

Method Details

The object takes the form of: { # A configuration object describing how Cloud Bigtable should treat traffic from a particular end user application. + "dataBoostIsolationReadOnly": { # Data Boost is a serverless compute capability that lets you run high-throughput read jobs on your Bigtable data, without impacting the performance of the clusters that handle your application traffic. Currently, Data Boost exclusively supports read-only use-cases with single-cluster routing. Data Boost reads are only guaranteed to see the results of writes that were written at least 30 minutes ago. This means newly written values may not become visible for up to 30m, and also means that old values may remain visible for up to 30m after being deleted or overwritten. To mitigate the staleness of the data, users may either wait 30m, or use CheckConsistency. # Specifies that this app profile is intended for read-only usage via the Data Boost feature. + "computeBillingOwner": "A String", # The Compute Billing Owner for this Data Boost App Profile. + }, "description": "A String", # Long form description of the use case for this AppProfile. "etag": "A String", # Strongly validated etag for optimistic concurrency control. Preserve the value returned from `GetAppProfile` when calling `UpdateAppProfile` to fail the request if there has been a modification in the mean time. The `update_mask` of the request need not include `etag` for this protection to apply. See [Wikipedia](https://en.wikipedia.org/wiki/HTTP_ETag) and [RFC 7232](https://tools.ietf.org/html/rfc7232#section-2.3) for more details. "multiClusterRoutingUseAny": { # Read/write requests are routed to the nearest cluster in the instance, and will fail over to the nearest cluster that is available in the event of transient errors or delays. Clusters in a region are considered equidistant. Choosing this option sacrifices read-your-writes consistency to improve availability. # Use a multi-cluster routing policy. diff --git a/docs/dyn/bigtableadmin_v2.projects.instances.tables.html b/docs/dyn/bigtableadmin_v2.projects.instances.tables.html index eefcb10e1d..988fa82072 100644 --- a/docs/dyn/bigtableadmin_v2.projects.instances.tables.html +++ b/docs/dyn/bigtableadmin_v2.projects.instances.tables.html @@ -139,6 +139,10 @@

Method Details

{ # Request message for google.bigtable.admin.v2.BigtableTableAdmin.CheckConsistency "consistencyToken": "A String", # Required. The token created using GenerateConsistencyToken for the Table. + "dataBoostReadLocalWrites": { # Checks that all writes before the consistency token was generated in the same cluster are readable by Data Boost. # Checks that reads using an app profile with `DataBoostIsolationReadOnly` can see all writes committed before the token was created, but only if the read and write target the same cluster. + }, + "standardReadRemoteWrites": { # Checks that all writes before the consistency token was generated are replicated in every cluster and readable. # Checks that reads using an app profile with `StandardIsolation` can see all writes committed before the token was created, even if the read and write target different clusters. + }, } x__xgafv: string, V1 error format. @@ -223,6 +227,32 @@
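A minimal sketch of pairing `GenerateConsistencyToken` with the new consistency-check modes; the table name is a placeholder, and only one of the two mode fields would normally be set on a request.
<pre>
from googleapiclient import discovery

admin = discovery.build("bigtableadmin", "v2")
table = "projects/my-project/instances/my-instance/tables/my-table"  # placeholder

tables = admin.projects().instances().tables()

# Create a token, then ask whether Data Boost reads on the same cluster can
# already see every write committed before the token was generated.
token = tables.generateConsistencyToken(name=table, body={}).execute()[
    "consistencyToken"
]

result = tables.checkConsistency(
    name=table,
    body={
        "consistencyToken": token,
        # Use "standardReadRemoteWrites": {} instead for replicated reads.
        "dataBoostReadLocalWrites": {},
    },
).execute()
print(result.get("consistent", False))
</pre>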

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, }, "deletionProtection": True or False, # Set to true to make the table protected against data loss. i.e. deleting the following resources through Admin APIs are prohibited: * The table. * The column families in the table. * The instance containing the table. Note one can still delete the data stored in the table through Data APIs. @@ -305,6 +335,32 @@
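As a rough illustration of the `Aggregate` value type described above, the following sketch creates a column family that accumulates big-endian `Int64` inputs into a `Sum`. The table name and family ID are placeholders, and the exact nesting of the `valueType` body follows the schema shown here rather than a verified end-to-end example.
<pre>
from googleapiclient import discovery

admin = discovery.build("bigtableadmin", "v2")
table = "projects/my-project/instances/my-instance/tables/my-table"  # placeholder

# Create a "counters" family whose cells hold a server-maintained Sum of
# big-endian Int64 inputs; the Aggregate type is fixed at family creation.
body = {
    "modifications": [
        {
            "id": "counters",
            "create": {
                "valueType": {
                    "aggregateType": {
                        "sum": {},
                        "inputType": {
                            "int64Type": {"encoding": {"bigEndianBytes": {}}}
                        },
                    }
                }
            },
        }
    ]
}

resp = (
    admin.projects().instances().tables()
    .modifyColumnFamilies(name=table, body=body)
    .execute()
)
print(sorted(resp.get("columnFamilies", {})))
</pre>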

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, }, "deletionProtection": True or False, # Set to true to make the table protected against data loss. i.e. deleting the following resources through Admin APIs are prohibited: * The table. * The column families in the table. * The instance containing the table. Note one can still delete the data stored in the table through Data APIs. @@ -470,6 +526,32 @@

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, }, "deletionProtection": True or False, # Set to true to make the table protected against data loss. i.e. deleting the following resources through Admin APIs are prohibited: * The table. * The column families in the table. * The instance containing the table. Note one can still delete the data stored in the table through Data APIs. @@ -627,6 +709,32 @@

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, }, "deletionProtection": True or False, # Set to true to make the table protected against data loss. i.e. deleting the following resources through Admin APIs are prohibited: * The table. * The column families in the table. * The instance containing the table. Note one can still delete the data stored in the table through Data APIs. @@ -700,6 +808,32 @@

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, "drop": True or False, # Drop (delete) the column family with the given ID, or fail if no such family exists. "id": "A String", # The ID of the column family to be modified. @@ -723,6 +857,32 @@

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, "updateMask": "A String", # Optional. A mask specifying which fields (e.g. `gc_rule`) in the `update` mod should be updated, ignored for other modification types. If unset or empty, we treat it as updating `gc_rule` to be backward compatible. }, @@ -786,6 +946,32 @@

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, }, "deletionProtection": True or False, # Set to true to make the table protected against data loss. i.e. deleting the following resources through Admin APIs are prohibited: * The table. * The column families in the table. * The instance containing the table. Note one can still delete the data stored in the table through Data APIs. @@ -868,6 +1054,32 @@

Method Details

"averageColumnsPerRow": 3.14, # How many column qualifiers are present in this column family, averaged over all rows in the table. e.g. For column family "family" in a table with 3 rows: * A row with cells in "family:col" and "other:col" (1 column in "family") * A row with cells in "family:col", "family:other_col", and "other:data" (2 columns in "family") * A row with cells in "other:col" (0 columns in "family", "family" not present) would report (1 + 2 + 0)/3 = 1.5 in this field. "logicalDataBytes": "A String", # How much space the data in the column family occupies. This is roughly how many bytes would be needed to read the contents of the entire column family (e.g. by streaming all contents out). }, + "valueType": { # `Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an "encoding chain," for example to convert from INT64 -> STRING -> raw bytes. In most cases, a "link" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING("-00001") > STRING("00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? - Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java? # The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations + "aggregateType": { # A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` . # Aggregate + "inputType": # Object with schema name: Type # Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs. + "stateType": # Object with schema name: Type # Output only. Type that holds the internal accumulator state for the `Aggregate`. 
This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding. + "sum": { # Computes the sum of the input values. Allowed input: `Int64` State: same as input # Sum aggregator. + }, + }, + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # Bytes + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + "int64Type": { # Int64 Values of type `Int64` are stored in `Value.int_value`. # Int64 + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "bigEndianBytes": { # Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN` # Use `BigEndianBytes` encoding. + "bytesType": { # Bytes Values of type `Bytes` are stored in `Value.bytes_value`. # The underlying `Bytes` type, which may be able to encode further. + "encoding": { # Rules used to convert to/from lower level types. # The encoding to use when converting to/from lower level types. + "raw": { # Leaves the value "as-is" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A # Use `Raw` encoding. + }, + }, + }, + }, + }, + }, + }, }, }, "deletionProtection": True or False, # Set to true to make the table protected against data loss. i.e. deleting the following resources through Admin APIs are prohibited: * The table. * The column families in the table. * The instance containing the table. Note one can still delete the data stored in the table through Data APIs. diff --git a/docs/dyn/certificatemanager_v1.projects.locations.trustConfigs.html b/docs/dyn/certificatemanager_v1.projects.locations.trustConfigs.html index c1c4333fdc..035ad3b80b 100644 --- a/docs/dyn/certificatemanager_v1.projects.locations.trustConfigs.html +++ b/docs/dyn/certificatemanager_v1.projects.locations.trustConfigs.html @@ -111,6 +111,11 @@

Method Details

The object takes the form of: { # Defines a trust config. + "allowlistedCertificates": [ # Optional. A certificate matching an allowlisted certificate is always considered valid as long as the certificate is parseable, proof of private key possession is established, and constraints on the certificate’s SAN field are met. + { # Defines an allowlisted certificate. + "pemCertificate": "A String", # Required. PEM certificate that is allowlisted. The certificate can be up to 5k bytes, and must be a parseable X.509 certificate. + }, + ], "createTime": "A String", # Output only. The creation timestamp of a TrustConfig. "description": "A String", # One or more paragraphs of text description of a TrustConfig. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. @@ -216,6 +221,11 @@
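A hedged sketch of creating a TrustConfig that carries an allowlisted certificate with the Python client; the parent location, trust config ID, and PEM file path are placeholders.
<pre>
from googleapiclient import discovery

certmgr = discovery.build("certificatemanager", "v1")
parent = "projects/my-project/locations/global"  # placeholder location

# Read the certificate to allowlist (PEM, up to 5k bytes per the field docs).
with open("allowlisted-cert.pem") as f:
    allowlisted_pem = f.read()

body = {
    "description": "Trust config with one explicitly allowlisted certificate",
    "allowlistedCertificates": [{"pemCertificate": allowlisted_pem}],
}

operation = (
    certmgr.projects().locations().trustConfigs()
    .create(parent=parent, trustConfigId="partner-trust", body=body)
    .execute()
)
print(operation["name"])  # long-running operation to poll
</pre>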

Method Details

An object of the form: { # Defines a trust config. + "allowlistedCertificates": [ # Optional. A certificate matching an allowlisted certificate is always considered valid as long as the certificate is parseable, proof of private key possession is established, and constraints on the certificate’s SAN field are met. + { # Defines an allowlisted certificate. + "pemCertificate": "A String", # Required. PEM certificate that is allowlisted. The certificate can be up to 5k bytes, and must be a parseable X.509 certificate. + }, + ], "createTime": "A String", # Output only. The creation timestamp of a TrustConfig. "description": "A String", # One or more paragraphs of text description of a TrustConfig. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. @@ -263,6 +273,11 @@

Method Details

"nextPageToken": "A String", # If there might be more results than those appearing in this response, then `next_page_token` is included. To get the next set of results, call this method again using the value of `next_page_token` as `page_token`. "trustConfigs": [ # A list of TrustConfigs for the parent resource. { # Defines a trust config. + "allowlistedCertificates": [ # Optional. A certificate matching an allowlisted certificate is always considered valid as long as the certificate is parseable, proof of private key possession is established, and constraints on the certificate’s SAN field are met. + { # Defines an allowlisted certificate. + "pemCertificate": "A String", # Required. PEM certificate that is allowlisted. The certificate can be up to 5k bytes, and must be a parseable X.509 certificate. + }, + ], "createTime": "A String", # Output only. The creation timestamp of a TrustConfig. "description": "A String", # One or more paragraphs of text description of a TrustConfig. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. @@ -317,6 +332,11 @@

Method Details

The object takes the form of: { # Defines a trust config. + "allowlistedCertificates": [ # Optional. A certificate matching an allowlisted certificate is always considered valid as long as the certificate is parseable, proof of private key possession is established, and constraints on the certificate’s SAN field are met. + { # Defines an allowlisted certificate. + "pemCertificate": "A String", # Required. PEM certificate that is allowlisted. The certificate can be up to 5k bytes, and must be a parseable X.509 certificate. + }, + ], "createTime": "A String", # Output only. The creation timestamp of a TrustConfig. "description": "A String", # One or more paragraphs of text description of a TrustConfig. "etag": "A String", # This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. diff --git a/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html b/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html index 75e4132994..bb5e90d277 100644 --- a/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html +++ b/docs/dyn/chromemanagement_v1.customers.telemetry.devices.html @@ -398,7 +398,7 @@
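Similarly, a sketch of adding an allowlisted certificate to an existing TrustConfig via `patch`; the resource name is a placeholder and the `allowlistedCertificates` field-mask path is an assumption here.
<pre>
from googleapiclient import discovery

certmgr = discovery.build("certificatemanager", "v1")
name = "projects/my-project/locations/global/trustConfigs/partner-trust"  # placeholder

# Replace only the allowlistedCertificates field; etag handling is omitted
# for brevity, and the field-mask path is an assumption.
with open("new-allowlisted-cert.pem") as f:
    pem = f.read()

operation = (
    certmgr.projects().locations().trustConfigs()
    .patch(
        name=name,
        updateMask="allowlistedCertificates",
        body={"allowlistedCertificates": [{"pemCertificate": pem}]},
    )
    .execute()
)
print(operation["name"])
</pre>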

Method Details

Args: parent: string, Required. Customer id or "my_customer" to use the customer associated to the account making the request. (required) - filter: string, Optional. Only include resources that match the filter. Supported filter fields: - org_unit_id - serial_number - device_id - reports_timestamp The "reports_timestamp" filter accepts either the Unix Epoch milliseconds format or the RFC3339 UTC "Zulu" format with nanosecond resolution and up to nine fractional digits. Both formats should be surrounded by simple double quotes. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z", "1679283943823". + filter: string, Optional. Only include resources that match the filter. Requests that don't specify a "reports_timestamp" value will default to returning only recent reports. Specify "reports_timestamp>=0" to get all report data. Supported filter fields: - org_unit_id - serial_number - device_id - reports_timestamp The "reports_timestamp" filter accepts either the Unix Epoch milliseconds format or the RFC3339 UTC "Zulu" format with nanosecond resolution and up to nine fractional digits. Both formats should be surrounded by simple double quotes. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z", "1679283943823". pageSize: integer, Maximum number of results to return. Default value is 100. Maximum value is 1000. pageToken: string, Token to specify next page in the list. readMask: string, Required. Read mask to specify which fields to return. Supported read_mask paths are: - name - org_unit_id - device_id - serial_number - cpu_info - cpu_status_report - memory_info - memory_status_report - network_info - network_diagnostics_report - network_status_report - os_update_status - graphics_info - graphics_status_report - battery_info - battery_status_report - storage_info - storage_status_report - thunderbolt_info - audio_status_report - boot_performance_report - heartbeat_status_report - network_bandwidth_report - peripherals_report - kiosk_app_status_report - app_report - runtime_counters_report diff --git a/docs/dyn/cloudasset_v1p1beta1.iamPolicies.html b/docs/dyn/cloudasset_v1p1beta1.iamPolicies.html index 8d42500eee..9f00012f54 100644 --- a/docs/dyn/cloudasset_v1p1beta1.iamPolicies.html +++ b/docs/dyn/cloudasset_v1p1beta1.iamPolicies.html @@ -79,7 +79,7 @@
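A small sketch of the widened `reports_timestamp` filter described above; `my_customer` and the chosen read-mask paths are placeholders.
<pre>
from googleapiclient import discovery

chrome = discovery.build("chromemanagement", "v1")

# Without a reports_timestamp clause only recent reports come back, so the
# filter below explicitly asks for all report data.
response = (
    chrome.customers().telemetry().devices()
    .list(
        parent="customers/my_customer",
        filter="reports_timestamp>=0",
        readMask="name,serial_number,cpu_status_report",
        pageSize=100,
    )
    .execute()
)
for device in response.get("devices", []):
    print(device["name"])
</pre>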

Instance Methods

Close httplib2 connections.

searchAll(scope, pageSize=None, pageToken=None, query=None, x__xgafv=None)

-

Searches all the IAM policies within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the IAM policies within a scope, even if they don't have `.getIamPolicy` permission of all the IAM policies. Callers should have `cloud.assets.SearchAllIamPolicies` permission on the requested scope, otherwise the request will be rejected.

+

Searches all the IAM policies within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers, especially administrators, the ability to search all the IAM policies within a scope, even if they don't have `.getIamPolicy` permission on all of the IAM policies. Callers should have the `cloudasset.assets.searchAllIamPolicies` permission on the requested scope; otherwise the request will be rejected.

searchAll_next()

Retrieves the next page of results.

@@ -91,7 +91,7 @@

Method Details

searchAll(scope, pageSize=None, pageToken=None, query=None, x__xgafv=None) -
Searches all the IAM policies within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the IAM policies within a scope, even if they don't have `.getIamPolicy` permission of all the IAM policies. Callers should have `cloud.assets.SearchAllIamPolicies` permission on the requested scope, otherwise the request will be rejected.
+  
Searches all the IAM policies within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers, especially administrators, the ability to search all the IAM policies within a scope, even if they don't have `.getIamPolicy` permission for all the IAM policies. Callers should have the `cloudasset.assets.searchAllIamPolicies` permission on the requested scope; otherwise the request will be rejected.
 
 Args:
   scope: string, Required. The relative name of an asset. The search is limited to the resources within the `scope`. The allowed value must be: * Organization number (such as "organizations/123") * Folder number (such as "folders/1234") * Project number (such as "projects/12345") * Project ID (such as "projects/abc") (required)
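The `searchAll` call documented above can be driven from the generated Python client in the usual request/execute loop. A minimal sketch is shown below; it is not part of the generated reference, and the scope, query, and page size are illustrative assumptions.

```python
# Hypothetical sketch: search IAM policies under a project scope.
# Assumes the caller holds cloudasset.assets.searchAllIamPolicies on the scope.
import googleapiclient.discovery

service = googleapiclient.discovery.build("cloudasset", "v1p1beta1")
request = service.iamPolicies().searchAll(
    scope="projects/my-project",    # project, folder, or organization (illustrative)
    query="policy:roles/owner",     # illustrative query expression
    pageSize=50,
)
while request is not None:
    response = request.execute()
    for result in response.get("results", []):
        print(result.get("resource"))
    request = service.iamPolicies().searchAll_next(request, response)
```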
diff --git a/docs/dyn/cloudasset_v1p1beta1.resources.html b/docs/dyn/cloudasset_v1p1beta1.resources.html
index ccb75841b1..671722dc71 100644
--- a/docs/dyn/cloudasset_v1p1beta1.resources.html
+++ b/docs/dyn/cloudasset_v1p1beta1.resources.html
@@ -79,7 +79,7 @@ 

Instance Methods

Close httplib2 connections.

searchAll(scope, assetTypes=None, orderBy=None, pageSize=None, pageToken=None, query=None, x__xgafv=None)

-

Searches all the resources within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the resources within a scope, even if they don't have `.get` permission of all the resources. Callers should have `cloud.assets.SearchAllResources` permission on the requested scope, otherwise the request will be rejected.

+

Searches all the resources within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers, especially administrators, the ability to search all the resources within a scope, even if they don't have `.get` permission for all the resources. Callers should have the `cloudasset.assets.searchAllResources` permission on the requested scope; otherwise the request will be rejected.

searchAll_next()

Retrieves the next page of results.

@@ -91,7 +91,7 @@

Method Details

searchAll(scope, assetTypes=None, orderBy=None, pageSize=None, pageToken=None, query=None, x__xgafv=None) -
Searches all the resources within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the resources within a scope, even if they don't have `.get` permission of all the resources. Callers should have `cloud.assets.SearchAllResources` permission on the requested scope, otherwise the request will be rejected.
+  
Searches all the resources within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers, especially administrators, the ability to search all the resources within a scope, even if they don't have `.get` permission for all the resources. Callers should have the `cloudasset.assets.searchAllResources` permission on the requested scope; otherwise the request will be rejected.
 
 Args:
   scope: string, Required. The relative name of an asset. The search is limited to the resources within the `scope`. The allowed value must be: * Organization number (such as "organizations/123") * Folder number (such as "folders/1234") * Project number (such as "projects/12345") * Project ID (such as "projects/abc") (required)
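A similar sketch applies to the resource search above; it is not part of the generated reference, and the scope, asset type, and query are illustrative assumptions.

```python
# Hypothetical sketch: find Compute Engine instances visible under an organization scope.
# Assumes the caller holds cloudasset.assets.searchAllResources on the scope.
import googleapiclient.discovery

service = googleapiclient.discovery.build("cloudasset", "v1p1beta1")
response = service.resources().searchAll(
    scope="organizations/123456789",                  # illustrative scope
    assetTypes=["compute.googleapis.com/Instance"],   # repeated parameter takes a list
    query="name:instance-1",                          # illustrative query expression
    pageSize=50,
).execute()
for result in response.get("results", []):
    print(result.get("name"))
```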
diff --git a/docs/dyn/cloudbuild_v2.projects.locations.connections.repositories.html b/docs/dyn/cloudbuild_v2.projects.locations.connections.repositories.html
index 7e94673661..a9449dd4d6 100644
--- a/docs/dyn/cloudbuild_v2.projects.locations.connections.repositories.html
+++ b/docs/dyn/cloudbuild_v2.projects.locations.connections.repositories.html
@@ -317,7 +317,7 @@ 

Method Details

Args: repository: string, Required. The resource name of the repository in the format `projects/*/locations/*/connections/*/repositories/*`. (required) - pageSize: integer, Optional. Number of results to return in the list. Default to 100. + pageSize: integer, Optional. Number of results to return in the list. Default to 20. pageToken: string, Optional. Page start. refType: string, Type of refs to fetch Allowed values diff --git a/docs/dyn/clouderrorreporting_v1beta1.projects.events.html b/docs/dyn/clouderrorreporting_v1beta1.projects.events.html index 628472cfbc..ed995f7be4 100644 --- a/docs/dyn/clouderrorreporting_v1beta1.projects.events.html +++ b/docs/dyn/clouderrorreporting_v1beta1.projects.events.html @@ -85,7 +85,7 @@

Instance Methods

Retrieves the next page of results.

report(projectName, body=None, x__xgafv=None)

-

Report an individual error event and record the event to a log. This endpoint accepts **either** an OAuth token, **or** an [API key](https://support.google.com/cloud/answer/6158862) for authentication. To use an API key, append it to the URL as the value of a `key` parameter. For example: `POST https://clouderrorreporting.googleapis.com/v1beta1/{projectName}/events:report?key=123ABC456` **Note:** [Error Reporting] (https://cloud.google.com/error-reporting) is a global service built on Cloud Logging and doesn't analyze logs stored in regional log buckets or logs routed to other Google Cloud projects.

+

Report an individual error event and record the event to a log. This endpoint accepts **either** an OAuth token, **or** an [API key](https://support.google.com/cloud/answer/6158862) for authentication. To use an API key, append it to the URL as the value of a `key` parameter. For example: `POST https://clouderrorreporting.googleapis.com/v1beta1/{projectName}/events:report?key=123ABC456` **Note:** [Error Reporting](https://cloud.google.com/error-reporting) is a global service built on Cloud Logging and doesn't analyze logs stored in regional log buckets.

Method Details

close() @@ -175,7 +175,7 @@

Method Details

report(projectName, body=None, x__xgafv=None) -
Report an individual error event and record the event to a log. This endpoint accepts **either** an OAuth token, **or** an [API key](https://support.google.com/cloud/answer/6158862) for authentication. To use an API key, append it to the URL as the value of a `key` parameter. For example: `POST https://clouderrorreporting.googleapis.com/v1beta1/{projectName}/events:report?key=123ABC456` **Note:** [Error Reporting] (https://cloud.google.com/error-reporting) is a global service built on Cloud Logging and doesn't analyze logs stored in regional log buckets or logs routed to other Google Cloud projects.
+  
Report an individual error event and record the event to a log. This endpoint accepts **either** an OAuth token, **or** an [API key](https://support.google.com/cloud/answer/6158862) for authentication. To use an API key, append it to the URL as the value of a `key` parameter. For example: `POST https://clouderrorreporting.googleapis.com/v1beta1/{projectName}/events:report?key=123ABC456` **Note:** [Error Reporting](https://cloud.google.com/error-reporting) is a global service built on Cloud Logging and doesn't analyze logs stored in regional log buckets.
 
 Args:
   projectName: string, Required. The resource name of the Google Cloud Platform project. Written as `projects/{projectId}`, where `{projectId}` is the [Google Cloud Platform project ID](https://support.google.com/cloud/answer/6158840). Example: // `projects/my-project-123`. (required)
diff --git a/docs/dyn/cloudsupport_v2beta.caseClassifications.html b/docs/dyn/cloudsupport_v2beta.caseClassifications.html
index 60519b533a..bfe52f22b6 100644
--- a/docs/dyn/cloudsupport_v2beta.caseClassifications.html
+++ b/docs/dyn/cloudsupport_v2beta.caseClassifications.html
@@ -78,7 +78,7 @@ 

Instance Methods

close()

Close httplib2 connections.

- search(pageSize=None, pageToken=None, product_productLine=None, product_productSubline=None, query=None, x__xgafv=None)

+ search(pageSize=None, pageToken=None, query=None, x__xgafv=None)

Retrieve valid classifications to use when creating a support case. Classifications are hierarchical. Each classification is a string containing all levels of the hierarchy separated by `" > "`. For example, `"Technical Issue > Compute > Compute Engine"`. Classification IDs returned by this endpoint are valid for at least six months. When a classification is deactivated, this endpoint immediately stops returning it. After six months, `case.create` requests using the classification will fail. EXAMPLES: cURL: ```shell curl \ --header "Authorization: Bearer $(gcloud auth print-access-token)" \ 'https://cloudsupport.googleapis.com/v2/caseClassifications:search?query=display_name:"*Compute%20Engine*"' ``` Python: ```python import googleapiclient.discovery supportApiService = googleapiclient.discovery.build( serviceName="cloudsupport", version="v2", discoveryServiceUrl=f"https://cloudsupport.googleapis.com/$discovery/rest?version=v2", ) request = supportApiService.caseClassifications().search( query='display_name:"*Compute Engine*"' ) print(request.execute()) ```

search_next()

@@ -90,18 +90,12 @@

Method Details

- search(pageSize=None, pageToken=None, product_productLine=None, product_productSubline=None, query=None, x__xgafv=None) + search(pageSize=None, pageToken=None, query=None, x__xgafv=None)
Retrieve valid classifications to use when creating a support case. Classifications are hierarchical. Each classification is a string containing all levels of the hierarchy separated by `" > "`. For example, `"Technical Issue > Compute > Compute Engine"`. Classification IDs returned by this endpoint are valid for at least six months. When a classification is deactivated, this endpoint immediately stops returning it. After six months, `case.create` requests using the classification will fail. EXAMPLES: cURL: ```shell curl \ --header "Authorization: Bearer $(gcloud auth print-access-token)" \ 'https://cloudsupport.googleapis.com/v2/caseClassifications:search?query=display_name:"*Compute%20Engine*"' ``` Python: ```python import googleapiclient.discovery supportApiService = googleapiclient.discovery.build( serviceName="cloudsupport", version="v2", discoveryServiceUrl=f"https://cloudsupport.googleapis.com/$discovery/rest?version=v2", ) request = supportApiService.caseClassifications().search( query='display_name:"*Compute Engine*"' ) print(request.execute()) ```
 
 Args:
   pageSize: integer, The maximum number of classifications fetched with each request.
   pageToken: string, A token identifying the page of results to return. If unspecified, the first page is retrieved.
-  product_productLine: string, The Product Line of the Product.
-    Allowed values
-      PRODUCT_LINE_UNSPECIFIED - Unknown product type.
-      GOOGLE_CLOUD - Google Cloud
-      GOOGLE_MAPS - Google Maps
-  product_productSubline: string, The Product Subline of the Product, such as "Maps Billing".
   query: string, An expression used to filter case classifications. If it's an empty string, then no filtering happens. Otherwise, case classifications will be returned that match the filter.
   x__xgafv: string, V1 error format.
     Allowed values
@@ -116,10 +110,6 @@ 

Method Details

{ # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, ], "nextPageToken": "A String", # A token to retrieve the next page of results. Set this in the `page_token` field of subsequent `caseClassifications.list` requests. If unspecified, there are no more results to retrieve. diff --git a/docs/dyn/cloudsupport_v2beta.cases.html b/docs/dyn/cloudsupport_v2beta.cases.html index 1daca808cf..16499624bf 100644 --- a/docs/dyn/cloudsupport_v2beta.cases.html +++ b/docs/dyn/cloudsupport_v2beta.cases.html @@ -97,7 +97,7 @@

Instance Methods

get(name, x__xgafv=None)

Retrieve a case. EXAMPLES: cURL: ```shell case="projects/some-project/cases/16033687" curl \ --header "Authorization: Bearer $(gcloud auth print-access-token)" \ "https://cloudsupport.googleapis.com/v2/$case" ``` Python: ```python import googleapiclient.discovery api_version = "v2" supportApiService = googleapiclient.discovery.build( serviceName="cloudsupport", version=api_version, discoveryServiceUrl=f"https://cloudsupport.googleapis.com/$discovery/rest?version={api_version}", ) request = supportApiService.cases().get( name="projects/some-project/cases/43595344", ) print(request.execute()) ```

- list(parent, filter=None, pageSize=None, pageToken=None, productLine=None, x__xgafv=None)

+ list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

Retrieve all cases under a parent, but not its children. For example, listing cases under an organization only returns the cases that are directly parented by that organization. To retrieve cases under an organization and its projects, use `cases.search`. EXAMPLES: cURL: ```shell parent="projects/some-project" curl \ --header "Authorization: Bearer $(gcloud auth print-access-token)" \ "https://cloudsupport.googleapis.com/v2/$parent/cases" ``` Python: ```python import googleapiclient.discovery api_version = "v2" supportApiService = googleapiclient.discovery.build( serviceName="cloudsupport", version=api_version, discoveryServiceUrl=f"https://cloudsupport.googleapis.com/$discovery/rest?version={api_version}", ) request = supportApiService.cases().list(parent="projects/some-project") print(request.execute()) ```

list_next()

@@ -136,10 +136,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -179,10 +175,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -220,10 +212,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -278,10 +266,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -326,10 +310,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -357,7 +337,7 @@

Method Details

- list(parent, filter=None, pageSize=None, pageToken=None, productLine=None, x__xgafv=None) + list(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)
Retrieve all cases under a parent, but not its children. For example, listing cases under an organization only returns the cases that are directly parented by that organization. To retrieve cases under an organization and its projects, use `cases.search`. EXAMPLES: cURL: ```shell parent="projects/some-project" curl \ --header "Authorization: Bearer $(gcloud auth print-access-token)" \ "https://cloudsupport.googleapis.com/v2/$parent/cases" ``` Python: ```python import googleapiclient.discovery api_version = "v2" supportApiService = googleapiclient.discovery.build( serviceName="cloudsupport", version=api_version, discoveryServiceUrl=f"https://cloudsupport.googleapis.com/$discovery/rest?version={api_version}", ) request = supportApiService.cases().list(parent="projects/some-project") print(request.execute()) ```
 
 Args:
@@ -365,11 +345,6 @@ 

Method Details

filter: string, An expression used to filter cases. If it's an empty string, then no filtering happens. Otherwise, the endpoint returns the cases that match the filter. Expressions use the following fields separated by `AND` and specified with `=`: - `state`: Can be `OPEN` or `CLOSED`. - `priority`: Can be `P0`, `P1`, `P2`, `P3`, or `P4`. You can specify multiple values for priority using the `OR` operator. For example, `priority=P1 OR priority=P2`. - `creator.email`: The email address of the case creator. EXAMPLES: - `state=CLOSED` - `state=OPEN AND creator.email="tester@example.com"` - `state=OPEN AND (priority=P0 OR priority=P1)` pageSize: integer, The maximum number of cases fetched with each request. Defaults to 10. pageToken: string, A token identifying the page of results to return. If unspecified, the first page is retrieved. - productLine: string, The product line for which to request cases for. If unspecified, only Google Cloud cases will be returned. - Allowed values - PRODUCT_LINE_UNSPECIFIED - Unknown product type. - GOOGLE_CLOUD - Google Cloud - GOOGLE_MAPS - Google Maps x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -384,10 +359,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -444,10 +415,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -486,10 +453,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. @@ -539,10 +502,6 @@

Method Details

"classification": { # A Case Classification represents the topic that a case is about. It's very important to use accurate classifications, because they're used to route your cases to specialists who can help you. A classification always has an ID that is its unique identifier. A valid ID is required when creating a case. # The issue classification applicable to this case. "displayName": "A String", # A display name for the classification. The display name is not static and can change. To uniquely and consistently identify classifications, use the `CaseClassification.id` field. "id": "A String", # The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail. - "product": { # The full product a case may be associated with, including Product Line and Product Subline. # The full product the classification corresponds to. - "productLine": "A String", # The Product Line of the Product. - "productSubline": "A String", # The Product Subline of the Product, such as "Maps Billing". - }, }, "contactEmail": "A String", # A user-supplied email address to send case update notifications for. This should only be used in BYOID flows, where we cannot infer the user's email address directly from their EUCs. "createTime": "A String", # Output only. The time this case was created. diff --git a/docs/dyn/composer_v1.projects.locations.environments.html b/docs/dyn/composer_v1.projects.locations.environments.html index 426cb5ec9d..df0d675bda 100644 --- a/docs/dyn/composer_v1.projects.locations.environments.html +++ b/docs/dyn/composer_v1.projects.locations.environments.html @@ -152,7 +152,7 @@

Method Details

"airflowUri": "A String", # Output only. The URI of the Apache Airflow Web UI hosted within this environment (see [Airflow web interface](/composer/docs/how-to/accessing/airflow-web-interface)). "dagGcsPrefix": "A String", # Output only. The Cloud Storage prefix of the DAGs for this environment. Although Cloud Storage objects reside in a flat namespace, a hierarchical file tree can be simulated using "/"-delimited object name prefixes. DAG objects for this environment reside in a simulated directory with the given prefix. "dataRetentionConfig": { # The configuration setting for Airflow database data retention mechanism. # Optional. The configuration setting for Airflow database data retention mechanism. - "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. Details: go/composer-database-retention-2 + "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. "retentionDays": 42, # Optional. How many days data should be retained for. "retentionMode": "A String", # Optional. Retention can be either enabled or disabled. }, @@ -490,7 +490,7 @@

Method Details

"airflowUri": "A String", # Output only. The URI of the Apache Airflow Web UI hosted within this environment (see [Airflow web interface](/composer/docs/how-to/accessing/airflow-web-interface)). "dagGcsPrefix": "A String", # Output only. The Cloud Storage prefix of the DAGs for this environment. Although Cloud Storage objects reside in a flat namespace, a hierarchical file tree can be simulated using "/"-delimited object name prefixes. DAG objects for this environment reside in a simulated directory with the given prefix. "dataRetentionConfig": { # The configuration setting for Airflow database data retention mechanism. # Optional. The configuration setting for Airflow database data retention mechanism. - "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. Details: go/composer-database-retention-2 + "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. "retentionDays": 42, # Optional. How many days data should be retained for. "retentionMode": "A String", # Optional. Retention can be either enabled or disabled. }, @@ -674,7 +674,7 @@

Method Details

"airflowUri": "A String", # Output only. The URI of the Apache Airflow Web UI hosted within this environment (see [Airflow web interface](/composer/docs/how-to/accessing/airflow-web-interface)). "dagGcsPrefix": "A String", # Output only. The Cloud Storage prefix of the DAGs for this environment. Although Cloud Storage objects reside in a flat namespace, a hierarchical file tree can be simulated using "/"-delimited object name prefixes. DAG objects for this environment reside in a simulated directory with the given prefix. "dataRetentionConfig": { # The configuration setting for Airflow database data retention mechanism. # Optional. The configuration setting for Airflow database data retention mechanism. - "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. Details: go/composer-database-retention-2 + "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. "retentionDays": 42, # Optional. How many days data should be retained for. "retentionMode": "A String", # Optional. Retention can be either enabled or disabled. }, @@ -912,7 +912,7 @@

Method Details

"airflowUri": "A String", # Output only. The URI of the Apache Airflow Web UI hosted within this environment (see [Airflow web interface](/composer/docs/how-to/accessing/airflow-web-interface)). "dagGcsPrefix": "A String", # Output only. The Cloud Storage prefix of the DAGs for this environment. Although Cloud Storage objects reside in a flat namespace, a hierarchical file tree can be simulated using "/"-delimited object name prefixes. DAG objects for this environment reside in a simulated directory with the given prefix. "dataRetentionConfig": { # The configuration setting for Airflow database data retention mechanism. # Optional. The configuration setting for Airflow database data retention mechanism. - "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. Details: go/composer-database-retention-2 + "airflowMetadataRetentionConfig": { # The policy for airflow metadata database retention. # Optional. The retention policy for airflow metadata database. "retentionDays": 42, # Optional. How many days data should be retained for. "retentionMode": "A String", # Optional. Retention can be either enabled or disabled. }, diff --git a/docs/dyn/compute_alpha.html b/docs/dyn/compute_alpha.html index 6edffd557c..dfc6e751b5 100644 --- a/docs/dyn/compute_alpha.html +++ b/docs/dyn/compute_alpha.html @@ -289,6 +289,11 @@

Instance Methods

Returns the networkFirewallPolicies Resource.

+

+ networkPlacements() +

+

Returns the networkPlacements Resource.

+

networks()

diff --git a/docs/dyn/compute_alpha.instanceGroupManagers.html b/docs/dyn/compute_alpha.instanceGroupManagers.html index 10c92be077..0f50c72496 100644 --- a/docs/dyn/compute_alpha.instanceGroupManagers.html +++ b/docs/dyn/compute_alpha.instanceGroupManagers.html @@ -435,7 +435,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1393,7 +1393,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1590,7 +1590,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1917,7 +1917,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -2466,7 +2466,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -4335,7 +4335,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, diff --git a/docs/dyn/compute_alpha.instances.html b/docs/dyn/compute_alpha.instances.html index 45aabb06a9..32f6b75480 100644 --- a/docs/dyn/compute_alpha.instances.html +++ b/docs/dyn/compute_alpha.instances.html @@ -3007,7 +3007,7 @@

Method Details

}, ], "shortName": "A String", # [Output Only] The short name of the firewall policy. - "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL. + "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL. }, ], "firewalls": [ # Effective firewalls on the instance. diff --git a/docs/dyn/compute_alpha.networkPlacements.html b/docs/dyn/compute_alpha.networkPlacements.html new file mode 100644 index 0000000000..5e5af9f558 --- /dev/null +++ b/docs/dyn/compute_alpha.networkPlacements.html @@ -0,0 +1,241 @@ + + + +

Compute Engine API . networkPlacements

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ get(project, networkPlacement, x__xgafv=None)

+

Returns the specified network placement.

+

+ list(project, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)

+

Retrieves a list of network placements available to the specified project.

+

+ list_next()

+

Retrieves the next page of results.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ get(project, networkPlacement, x__xgafv=None) +
Returns the specified network placement.
+
+Args:
+  project: string, Project ID for this request. (required)
+  networkPlacement: string, Name of the network placement to return. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # A NetworkPlacement represents a Google-managed network placement resource.
+  "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format.
+  "description": "A String", # [Output Only] An optional description of this resource.
+  "features": { # [Output Only] Features supported by the network.
+    "allowAutoModeSubnet": "A String", # Specifies whether auto mode subnet creation is allowed.
+    "allowCloudNat": "A String", # Specifies whether cloud NAT creation is allowed.
+    "allowCloudRouter": "A String", # Specifies whether cloud router creation is allowed.
+    "allowInterconnect": "A String", # Specifies whether Cloud Interconnect creation is allowed.
+    "allowLoadBalancing": "A String", # Specifies whether cloud load balancing is allowed.
+    "allowMultiNicInSameNetwork": "A String", # Specifies whether multi-nic in the same network is allowed.
+    "allowPacketMirroring": "A String", # Specifies whether Packet Mirroring 1.0 is supported.
+    "allowPrivateGoogleAccess": "A String", # Specifies whether private Google access is allowed.
+    "allowPsc": "A String", # Specifies whether PSC creation is allowed.
+    "allowSameNetworkUnicast": "A String", # Specifies whether unicast within the same network is allowed.
+    "allowStaticRoutes": "A String", # Specifies whether static route creation is allowed.
+    "allowVpcPeering": "A String", # Specifies whether VPC peering is allowed.
+    "allowVpn": "A String", # Specifies whether VPN creation is allowed.
+    "allowedSubnetPurposes": [ # Specifies which subnetwork purposes are supported.
+      "A String",
+    ],
+    "allowedSubnetStackTypes": [ # Specifies which subnetwork stack types are supported.
+      "A String",
+    ],
+    "interfaceTypes": [ # If set, limits the interface types that the network supports. If empty, all interface types are supported.
+      "A String",
+    ],
+    "multicast": "A String", # Specifies which type of multicast is supported.
+    "unicast": "A String", # Specifies which type of unicast is supported.
+  },
+  "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server.
+  "kind": "compute#networkPlacement", # [Output Only] Type of the resource. Always compute#networkPlacement for network placements.
+  "name": "A String", # [Output Only] Name of the resource.
+  "selfLink": "A String", # [Output Only] Server-defined URL for the resource.
+  "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id.
+  "zone": "A String", # [Output Only] Zone to which the network is restricted.
+}
+
+ +
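A minimal sketch of calling `get` through this client; the project ID and placement name are illustrative assumptions (this is an alpha surface and may change).

```python
# Hypothetical sketch: fetch a single network placement by name.
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "alpha")
placement = compute.networkPlacements().get(
    project="my-project",                  # illustrative project ID
    networkPlacement="example-placement",  # illustrative placement name
).execute()
print(placement["name"], placement.get("features", {}).get("unicast"))
```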
+ list(project, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None) +
Retrieves a list of network placements available to the specified project.
+
+Args:
+  project: string, Project ID for this request. (required)
+  filter: string, A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = "Intel Skylake") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = "Intel Skylake") OR (cpuPlatform = "Intel Broadwell") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq "double quoted literal"` `(fieldname1 eq literal) (fieldname2 ne "literal")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name "instance", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.
+  maxResults: integer, The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. (Default: `500`)
+  orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.
+  pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.
+  returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Contains a list of network placements.
+  "etag": "A String",
+  "id": "A String", # [Output Only] Unique identifier for the resource; defined by the server.
+  "items": [ # A list of NetworkPlacement resources.
+    { # A NetworkPlacement represents a Google-managed network placement resource.
+      "creationTimestamp": "A String", # [Output Only] Creation timestamp in RFC3339 text format.
+      "description": "A String", # [Output Only] An optional description of this resource.
+      "features": { # [Output Only] Features supported by the network.
+        "allowAutoModeSubnet": "A String", # Specifies whether auto mode subnet creation is allowed.
+        "allowCloudNat": "A String", # Specifies whether cloud NAT creation is allowed.
+        "allowCloudRouter": "A String", # Specifies whether cloud router creation is allowed.
+        "allowInterconnect": "A String", # Specifies whether Cloud Interconnect creation is allowed.
+        "allowLoadBalancing": "A String", # Specifies whether cloud load balancing is allowed.
+        "allowMultiNicInSameNetwork": "A String", # Specifies whether multi-nic in the same network is allowed.
+        "allowPacketMirroring": "A String", # Specifies whether Packet Mirroring 1.0 is supported.
+        "allowPrivateGoogleAccess": "A String", # Specifies whether private Google access is allowed.
+        "allowPsc": "A String", # Specifies whether PSC creation is allowed.
+        "allowSameNetworkUnicast": "A String", # Specifies whether unicast within the same network is allowed.
+        "allowStaticRoutes": "A String", # Specifies whether static route creation is allowed.
+        "allowVpcPeering": "A String", # Specifies whether VPC peering is allowed.
+        "allowVpn": "A String", # Specifies whether VPN creation is allowed.
+        "allowedSubnetPurposes": [ # Specifies which subnetwork purposes are supported.
+          "A String",
+        ],
+        "allowedSubnetStackTypes": [ # Specifies which subnetwork stack types are supported.
+          "A String",
+        ],
+        "interfaceTypes": [ # If set, limits the interface types that the network supports. If empty, all interface types are supported.
+          "A String",
+        ],
+        "multicast": "A String", # Specifies which type of multicast is supported.
+        "unicast": "A String", # Specifies which type of unicast is supported.
+      },
+      "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server.
+      "kind": "compute#networkPlacement", # [Output Only] Type of the resource. Always compute#networkPlacement for network placements.
+      "name": "A String", # [Output Only] Name of the resource.
+      "selfLink": "A String", # [Output Only] Server-defined URL for the resource.
+      "selfLinkWithId": "A String", # [Output Only] Server-defined URL for this resource with the resource id.
+      "zone": "A String", # [Output Only] Zone to which the network is restricted.
+    },
+  ],
+  "kind": "compute#networkPlacementList", # [Output Only] Type of resource. Always compute#networkPlacementList for network placements.
+  "nextPageToken": "A String", # [Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. Subsequent list requests will have their own nextPageToken to continue paging through the results.
+  "selfLink": "A String", # [Output Only] Server-defined URL for this resource.
+  "unreachables": [ # [Output Only] Unreachable resources. end_interface: MixerListResponseWithEtagBuilder
+    "A String",
+  ],
+  "warning": { # [Output Only] Informational warning message.
+    "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.
+    "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" }
+      {
+        "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).
+        "value": "A String", # [Output Only] A warning data value corresponding to the key.
+      },
+    ],
+    "message": "A String", # [Output Only] A human-readable description of the warning code.
+  },
+}
+
+ +
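And a matching sketch of paging through `list` with `list_next`; again the project ID is an illustrative assumption.

```python
# Hypothetical sketch: page through all network placements in a project.
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "alpha")
request = compute.networkPlacements().list(project="my-project", maxResults=100)
while request is not None:
    response = request.execute()
    for placement in response.get("items", []):
        print(placement["name"], placement.get("zone", ""))
    request = compute.networkPlacements().list_next(request, response)
```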
+ list_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
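The `nextPageToken` / `list_next()` pairing documented above is the standard paging pattern in the generated Python client. A minimal sketch, assuming the alpha surface exposes this collection as `networkPlacements()` and that `my-project` is a placeholder project ID:

    from googleapiclient import discovery

    compute = discovery.build("compute", "alpha")

    # Page through all network placements by chaining list() and list_next().
    request = compute.networkPlacements().list(project="my-project")
    while request is not None:
        response = request.execute()
        for placement in response.get("items", []):
            print(placement["name"])
        # list_next() returns None once the response carries no nextPageToken.
        request = compute.networkPlacements().list_next(
            previous_request=request, previous_response=response
        )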
+
+
\ No newline at end of file
diff --git a/docs/dyn/compute_alpha.regionInstanceGroupManagers.html b/docs/dyn/compute_alpha.regionInstanceGroupManagers.html
index 1e7524e53f..fab521c0b7 100644
--- a/docs/dyn/compute_alpha.regionInstanceGroupManagers.html
+++ b/docs/dyn/compute_alpha.regionInstanceGroupManagers.html
@@ -759,7 +759,6 @@

Method Details

"instances": [ # The URLs of one or more instances to delete. This can be a full URL or a partial URL, such as zones/[ZONE]/instances/[INSTANCE_NAME]. "A String", ], - "skipInapplicableInstances": True or False, # Skip instances which cannot be deleted (instances not belonging to this managed group, already being deleted or being abandoned). If `false`, fail whole flow, if such instance is passed. DEPRECATED: Use skip_instances_on_validation_error instead. "skipInstancesOnValidationError": True or False, # Specifies whether the request should proceed despite the inclusion of instances that are not members of the group or that are already in the process of being deleted or abandoned. If this field is set to `false` and such an instance is specified in the request, the operation fails. The operation always fails if the request contains a malformed instance URL or a reference to an instance that exists in a zone or region other than the group's zone or region. } @@ -1128,7 +1127,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1325,7 +1324,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1652,7 +1651,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -2201,7 +2200,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -4070,7 +4069,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, diff --git a/docs/dyn/compute_beta.futureReservations.html b/docs/dyn/compute_beta.futureReservations.html index 3b6444d1c3..3eba6eaea6 100644 --- a/docs/dyn/compute_beta.futureReservations.html +++ b/docs/dyn/compute_beta.futureReservations.html @@ -102,7 +102,7 @@

Instance Methods

list_next()

Retrieves the next page of results.

- update(project, zone, futureReservation, body=None, paths=None, requestId=None, updateMask=None, x__xgafv=None)

+ update(project, zone, futureReservation, body=None, requestId=None, updateMask=None, x__xgafv=None)

Updates the specified future reservation.

Method Details

@@ -1126,7 +1126,7 @@

Method Details

- update(project, zone, futureReservation, body=None, paths=None, requestId=None, updateMask=None, x__xgafv=None)
+ update(project, zone, futureReservation, body=None, requestId=None, updateMask=None, x__xgafv=None)
Updates the specified future reservation.
 
 Args:
@@ -1259,7 +1259,6 @@ 

Method Details

"zone": "A String", # [Output Only] URL of the Zone where this future reservation resides. } - paths: string, A parameter (repeated) requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000). updateMask: string, update_mask indicates fields to be updated as part of this request. x__xgafv: string, V1 error format. diff --git a/docs/dyn/compute_beta.instanceGroupManagerResizeRequests.html b/docs/dyn/compute_beta.instanceGroupManagerResizeRequests.html index 534f814e18..bb7b4b0d35 100644 --- a/docs/dyn/compute_beta.instanceGroupManagerResizeRequests.html +++ b/docs/dyn/compute_beta.instanceGroupManagerResizeRequests.html @@ -371,7 +371,7 @@

Method Details

  An object of the form:

    { # InstanceGroupManagerResizeRequest represents a request to create a number of VMs: either immediately or by queuing the request for the specified time. This resize request is nested under InstanceGroupManager and the VMs created by this request are added to the owning InstanceGroupManager.
-  "count": 42, # The count of instances to create as part of this resize request.
+  "count": 42, # This field is deprecated, please use resize_by instead. The count of instances to create as part of this resize request.
  "creationTimestamp": "A String", # [Output Only] The creation timestamp for this resize request in RFC3339 text format.
  "description": "A String", # An optional description of this resource.
  "id": "A String", # [Output Only] A unique identifier for this resource type. The server generates this identifier.
@@ -489,7 +489,7 @@

Method Details

The object takes the form of: { # InstanceGroupManagerResizeRequest represents a request to create a number of VMs: either immediately or by queuing the request for the specified time. This resize request is nested under InstanceGroupManager and the VMs created by this request are added to the owning InstanceGroupManager. - "count": 42, # The count of instances to create as part of this resize request. + "count": 42, # This field is deprecated, please use resize_by instead. The count of instances to create as part of this resize request. "creationTimestamp": "A String", # [Output Only] The creation timestamp for this resize request in RFC3339 text format. "description": "A String", # An optional description of this resource. "id": "A String", # [Output Only] A unique identifier for this resource type. The server generates this identifier. @@ -736,7 +736,7 @@

Method Details

"id": "A String", # [Output Only] Unique identifier for the resource; defined by the server. "items": [ # A list of resize request resources. { # InstanceGroupManagerResizeRequest represents a request to create a number of VMs: either immediately or by queuing the request for the specified time. This resize request is nested under InstanceGroupManager and the VMs created by this request are added to the owning InstanceGroupManager. - "count": 42, # The count of instances to create as part of this resize request. + "count": 42, # This field is deprecated, please use resize_by instead. The count of instances to create as part of this resize request. "creationTimestamp": "A String", # [Output Only] The creation timestamp for this resize request in RFC3339 text format. "description": "A String", # An optional description of this resource. "id": "A String", # [Output Only] A unique identifier for this resource type. The server generates this identifier. diff --git a/docs/dyn/compute_beta.instanceGroupManagers.html b/docs/dyn/compute_beta.instanceGroupManagers.html index 0dfe9f5882..6fadfd773c 100644 --- a/docs/dyn/compute_beta.instanceGroupManagers.html +++ b/docs/dyn/compute_beta.instanceGroupManagers.html @@ -417,7 +417,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1346,7 +1346,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1523,7 +1523,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1829,7 +1829,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -2353,7 +2353,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -4182,7 +4182,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, diff --git a/docs/dyn/compute_beta.instanceTemplates.html b/docs/dyn/compute_beta.instanceTemplates.html index 5023de418e..2075cf402c 100644 --- a/docs/dyn/compute_beta.instanceTemplates.html +++ b/docs/dyn/compute_beta.instanceTemplates.html @@ -87,7 +87,7 @@

Instance Methods

delete(project, instanceTemplate, requestId=None, x__xgafv=None)

Deletes the specified instance template. Deleting an instance template is permanent and cannot be undone. It is not possible to delete templates that are already in use by a managed instance group.

- get(project, instanceTemplate, x__xgafv=None)

+ get(project, instanceTemplate, view=None, x__xgafv=None)

Returns the specified instance template.

getIamPolicy(project, resource, optionsRequestedPolicyVersion=None, x__xgafv=None)

@@ -96,7 +96,7 @@

Instance Methods

insert(project, body=None, requestId=None, x__xgafv=None)

Creates an instance template in the specified project using the data that is included in the request. If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template.

- list(project, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)

+ list(project, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, view=None, x__xgafv=None)

Retrieves a list of instance templates that are contained within the specified project.

list_next()

@@ -342,6 +342,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -602,12 +609,17 @@

Method Details

- get(project, instanceTemplate, x__xgafv=None)
+ get(project, instanceTemplate, view=None, x__xgafv=None)
Returns the specified instance template.
 
 Args:
   project: string, Project ID for this request. (required)
   instanceTemplate: string, The name of the instance template. (required)
+  view: string, View of the instance template.
+    Allowed values
+      BASIC - Include everything except Partner Metadata.
+      FULL - Include everything.
+      INSTANCE_VIEW_UNSPECIFIED - The default / unset value. The API will default to the BASIC view.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
@@ -824,6 +836,13 @@ 

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -1233,6 +1252,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -1439,7 +1465,7 @@

Method Details

- list(project, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
+ list(project, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, view=None, x__xgafv=None)
Retrieves a list of instance templates that are contained within the specified project.
 
 Args:
@@ -1449,6 +1475,11 @@ 

Method Details

orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported. pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results. returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code. + view: string, View of the instance template. + Allowed values + BASIC - Include everything except Partner Metadata. + FULL - Include everything. + INSTANCE_VIEW_UNSPECIFIED - The default / unset value. The API will default to the BASIC view. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -1668,6 +1699,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. diff --git a/docs/dyn/compute_beta.instances.html b/docs/dyn/compute_beta.instances.html index 1da252ba25..c396da1eab 100644 --- a/docs/dyn/compute_beta.instances.html +++ b/docs/dyn/compute_beta.instances.html @@ -105,7 +105,7 @@

Instance Methods

detachDisk(project, zone, instance, deviceName, requestId=None, x__xgafv=None)

Detaches a disk from an instance.

- get(project, zone, instance, x__xgafv=None)

+ get(project, zone, instance, view=None, x__xgafv=None)

Returns the specified Instance resource.

getEffectiveFirewalls(project, zone, instance, networkInterface, x__xgafv=None)

@@ -116,6 +116,9 @@

Instance Methods

getIamPolicy(project, zone, resource, optionsRequestedPolicyVersion=None, x__xgafv=None)

Gets the access control policy for a resource. May be empty if no such policy or resource exists.

+

+ getPartnerMetadata(project, zone, instance, namespaces=None, x__xgafv=None)

+

Gets partner metadata of the specified instance and namespaces.

getScreenshot(project, zone, instance, x__xgafv=None)

Returns the screenshot from the specified instance.

@@ -132,7 +135,7 @@

Instance Methods

insert(project, zone, body=None, requestId=None, sourceInstanceTemplate=None, sourceMachineImage=None, x__xgafv=None)

Creates an instance resource in the specified project using the data included in the request.

- list(project, zone, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)

+ list(project, zone, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, view=None, x__xgafv=None)

Retrieves the list of instances contained within the specified zone.

listReferrers(project, zone, instance, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)

@@ -143,6 +146,9 @@

Instance Methods

list_next()

Retrieves the next page of results.

+

+ patchPartnerMetadata(project, zone, instance, body=None, requestId=None, x__xgafv=None)

+

Patches partner metadata of the specified instance.

performMaintenance(project, zone, instance, requestId=None, x__xgafv=None)

Perform a manual maintenance on the instance.

@@ -769,6 +775,13 @@

Method Details

"a_key": "A String", }, }, + "partnerMetadata": { # Partner Metadata assigned to the instance. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for the VM. If not specified, use INHERIT_FROM_SUBNETWORK as default. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that this instance can consume from. @@ -1372,6 +1385,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -1968,13 +1988,18 @@

Method Details

- get(project, zone, instance, x__xgafv=None)
+ get(project, zone, instance, view=None, x__xgafv=None)
Returns the specified Instance resource.
 
 Args:
   project: string, Project ID for this request. (required)
   zone: string, The name of the zone for this request. (required)
   instance: string, Name of the instance resource to return. (required)
+  view: string, View of the instance.
+    Allowed values
+      BASIC - Include everything except Partner Metadata.
+      FULL - Include everything.
+      INSTANCE_VIEW_UNSPECIFIED - The default / unset value. The API will default to the BASIC view.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
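
A minimal sketch of the new view parameter on instances.get, assuming a beta client and placeholder identifiers; FULL pulls in the partner metadata that the default BASIC view omits:

    from googleapiclient import discovery

    compute = discovery.build("compute", "beta")

    # Request the FULL view so the returned Instance includes partnerMetadata.
    instance = compute.instances().get(
        project="my-project",
        zone="us-central1-a",
        instance="my-instance",
        view="FULL",
    ).execute()
    print(instance.get("partnerMetadata", {}))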
@@ -2210,6 +2235,13 @@ 

Method Details

"a_key": "A String", }, }, + "partnerMetadata": { # Partner Metadata assigned to the instance. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for the VM. If not specified, use INHERIT_FROM_SUBNETWORK as default. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that this instance can consume from. @@ -2413,7 +2445,7 @@

Method Details

}, ], "shortName": "A String", # [Output Only] The short name of the firewall policy. - "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL. + "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL. }, ], "firewalls": [ # Effective firewalls on the instance. @@ -2773,6 +2805,35 @@

Method Details

}
+
+ getPartnerMetadata(project, zone, instance, namespaces=None, x__xgafv=None)
Gets partner metadata of the specified instance and namespaces.
+
+Args:
+  project: string, Project ID for this request. (required)
+  zone: string, The name of the zone for this request. (required)
+  instance: string, Name of the instance scoping this request. (required)
+  namespaces: string, Comma separated partner metadata namespaces.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Model definition of partner_metadata field. To be used in dedicated Partner Metadata methods and to be inlined in the Instance and InstanceTemplate resources.
+  "fingerprint": "A String", # Instance-level hash to be used for optimistic locking.
+  "partnerMetadata": { # Partner Metadata assigned to the instance. A map from a subdomain to entries map. Subdomain name must be compliant with RFC1035 definition. The total size of all keys and values must be less than 2MB. Subdomain 'metadata.compute.googleapis.com' is reserverd for instance's metadata.
+    "a_key": {
+      "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct.
+        "a_key": "",
+      },
+    },
+  },
+}
+
+
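A minimal sketch of calling the new getPartnerMetadata method, assuming a beta client; the project, zone, instance, and namespace names below are placeholders:

    from googleapiclient import discovery

    compute = discovery.build("compute", "beta")

    # namespaces is a comma-separated list of partner metadata namespaces.
    metadata = compute.instances().getPartnerMetadata(
        project="my-project",
        zone="us-central1-a",
        instance="my-instance",
        namespaces="example.partner.com,other.partner.com",
    ).execute()
    for namespace, wrapper in metadata.get("partnerMetadata", {}).items():
        print(namespace, wrapper.get("entries", {}))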
getScreenshot(project, zone, instance, x__xgafv=None)
Returns the screenshot from the specified instance.
@@ -3117,6 +3178,13 @@ 

Method Details

"a_key": "A String", }, }, + "partnerMetadata": { # Partner Metadata assigned to the instance. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for the VM. If not specified, use INHERIT_FROM_SUBNETWORK as default. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that this instance can consume from. @@ -3343,7 +3411,7 @@

Method Details

- list(project, zone, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
+ list(project, zone, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, view=None, x__xgafv=None)
Retrieves the list of instances contained within the specified zone.
 
 Args:
@@ -3354,6 +3422,11 @@ 

Method Details

orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported. pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results. returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code. + view: string, View of the instance. + Allowed values + BASIC - Include everything except Partner Metadata. + FULL - Include everything. + INSTANCE_VIEW_UNSPECIFIED - The default / unset value. The API will default to the BASIC view. x__xgafv: string, V1 error format. Allowed values 1 - v1 error format @@ -3592,6 +3665,13 @@

Method Details

"a_key": "A String", }, }, + "partnerMetadata": { # Partner Metadata assigned to the instance. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for the VM. If not specified, use INHERIT_FROM_SUBNETWORK as default. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that this instance can consume from. @@ -3789,6 +3869,145 @@

Method Details

+
+ patchPartnerMetadata(project, zone, instance, body=None, requestId=None, x__xgafv=None)
Patches partner metadata of the specified instance.
+
+Args:
+  project: string, Project ID for this request. (required)
+  zone: string, The name of the zone for this request. (required)
+  instance: string, Name of the instance scoping this request. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Model definition of partner_metadata field. To be used in dedicated Partner Metadata methods and to be inlined in the Instance and InstanceTemplate resources.
+  "fingerprint": "A String", # Instance-level hash to be used for optimistic locking.
+  "partnerMetadata": { # Partner Metadata assigned to the instance. A map from a subdomain to entries map. Subdomain name must be compliant with RFC1035 definition. The total size of all keys and values must be less than 2MB. Subdomain 'metadata.compute.googleapis.com' is reserverd for instance's metadata.
+    "a_key": {
+      "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct.
+        "a_key": "",
+      },
+    },
+  },
+}
+
+  requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/beta/globalOperations) * [Regional](/compute/docs/reference/rest/beta/regionOperations) * [Zonal](/compute/docs/reference/rest/beta/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zoneOperations` resource. For more information, read Global, Regional, and Zonal Resources. Note that completed Operation resources have a limited retention period.
+  "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise.
+  "creationTimestamp": "A String", # [Deprecated] This field is deprecated.
+  "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created.
+  "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format.
+  "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated.
+    "errors": [ # [Output Only] The array of errors encountered while processing this operation.
+      {
+        "code": "A String", # [Output Only] The error type identifier for this error.
+        "errorDetails": [ # [Output Only] An optional list of messages that contain the error details. There is a set of defined message types to use for providing details.The syntax depends on the error code. For example, QuotaExceededInfo will have details when the error code is QUOTA_EXCEEDED.
+          {
+            "errorInfo": { # Describes the cause of the error with structured details. Example of an error when contacting the "pubsub.googleapis.com" API when it is not enabled: { "reason": "API_DISABLED" "domain": "googleapis.com" "metadata": { "resource": "projects/123", "service": "pubsub.googleapis.com" } } This response indicates that the pubsub.googleapis.com API is not enabled. Example of an error that is returned when attempting to create a Spanner instance in a region that is out of stock: { "reason": "STOCKOUT" "domain": "spanner.googleapis.com", "metadata": { "availableRegions": "us-central1,us-east2" } }
+              "domain": "A String", # The logical grouping to which the "reason" belongs. The error domain is typically the registered service name of the tool or product that generates the error. Example: "pubsub.googleapis.com". If the error is generated by some common infrastructure, the error domain must be a globally unique value that identifies the infrastructure. For Google API infrastructure, the error domain is "googleapis.com".
+              "metadatas": { # Additional structured details about this error. Keys should match /[a-zA-Z0-9-_]/ and be limited to 64 characters in length. When identifying the current value of an exceeded limit, the units should be contained in the key, not the value. For example, rather than {"instanceLimit": "100/request"}, should be returned as, {"instanceLimitPerRequest": "100"}, if the client exceeds the number of instances that can be created in a single (batch) request.
+                "a_key": "A String",
+              },
+              "reason": "A String", # The reason of the error. This is a constant value that identifies the proximate cause of the error. Error reasons are unique within a particular domain of errors. This should be at most 63 characters and match a regular expression of `A-Z+[A-Z0-9]`, which represents UPPER_SNAKE_CASE.
+            },
+            "help": { # Provides links to documentation or for performing an out of band action. For example, if a quota check failed with an error indicating the calling project hasn't enabled the accessed service, this can contain a URL pointing directly to the right place in the developer console to flip the bit.
+              "links": [ # URL(s) pointing to additional information on handling the current error.
+                { # Describes a URL link.
+                  "description": "A String", # Describes what the link offers.
+                  "url": "A String", # The URL of the link.
+                },
+              ],
+            },
+            "localizedMessage": { # Provides a localized error message that is safe to return to the user which can be attached to an RPC error.
+              "locale": "A String", # The locale used following the specification defined at https://www.rfc-editor.org/rfc/bcp/bcp47.txt. Examples are: "en-US", "fr-CH", "es-MX"
+              "message": "A String", # The localized error message in the above locale.
+            },
+            "quotaInfo": { # Additional details for quota exceeded error for resource quota.
+              "dimensions": { # The map holding related quota dimensions.
+                "a_key": "A String",
+              },
+              "futureLimit": 3.14, # Future quota limit being rolled out. The limit's unit depends on the quota type or metric.
+              "limit": 3.14, # Current effective quota limit. The limit's unit depends on the quota type or metric.
+              "limitName": "A String", # The name of the quota limit.
+              "metricName": "A String", # The Compute Engine quota metric name.
+              "rolloutStatus": "A String", # Rollout status of the future quota limit.
+            },
+          },
+        ],
+        "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional.
+        "message": "A String", # [Output Only] An optional, human-readable error message.
+      },
+    ],
+  },
+  "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`.
+  "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found.
+  "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server.
+  "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format.
+  "instancesBulkInsertOperationMetadata": {
+    "perLocationStatus": { # Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "createdVmCount": 42, # [Output Only] Count of VMs successfully created so far.
+        "deletedVmCount": 42, # [Output Only] Count of VMs that got deleted during rollback.
+        "failedToCreateVmCount": 42, # [Output Only] Count of VMs that started creating but encountered an error.
+        "status": "A String", # [Output Only] Creation status of BulkInsert operation - information if the flow is rolling forward or rolling back.
+        "targetVmCount": 42, # [Output Only] Count of VMs originally planned to be created.
+      },
+    },
+  },
+  "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources.
+  "name": "A String", # [Output Only] Name of the operation.
+  "operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request.
+  "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on.
+  "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses.
+  "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations.
+  "selfLink": "A String", # [Output Only] Server-defined URL for the resource.
+  "setCommonInstanceMetadataOperationMetadata": { # [Output Only] If the operation is for projects.setCommonInstanceMetadata, this field will contain information on all underlying zonal actions and their state.
+    "clientOperationId": "A String", # [Output Only] The client operation id.
+    "perLocationOperations": { # [Output Only] Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # [Output Only] If state is `ABANDONED` or `FAILED`, this field is populated.
+          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+          "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+            {
+              "a_key": "", # Properties of the object. Contains field @type with type URL.
+            },
+          ],
+          "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+        },
+        "state": "A String", # [Output Only] Status of the action, which can be one of the following: `PROPAGATING`, `PROPAGATED`, `ABANDONED`, `FAILED`, or `DONE`.
+      },
+    },
+  },
+  "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format.
+  "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`.
+  "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation.
+  "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource.
+  "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from.
+  "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com` or `alice_smith_identifier (global/workforcePools/example-com-us-employees)`.
+  "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated.
+    {
+      "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.
+      "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" }
+        {
+          "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).
+          "value": "A String", # [Output Only] A warning data value corresponding to the key.
+        },
+      ],
+      "message": "A String", # [Output Only] A human-readable description of the warning code.
+    },
+  ],
+  "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations.
+}
+
+
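A minimal sketch of a read-modify-write with the new patchPartnerMetadata method, assuming a beta client and placeholder identifiers; the fingerprint returned by getPartnerMetadata serves as the optimistic-locking token, and the returned zonal Operation is waited on:

    from googleapiclient import discovery

    compute = discovery.build("compute", "beta")
    project, zone, instance = "my-project", "us-central1-a", "my-instance"

    # Read the current partner metadata to obtain the fingerprint.
    current = compute.instances().getPartnerMetadata(
        project=project, zone=zone, instance=instance
    ).execute()

    body = {
        "fingerprint": current.get("fingerprint"),
        "partnerMetadata": {
            # Placeholder namespace and entries.
            "example.partner.com": {"entries": {"tier": "gold"}},
        },
    }
    op = compute.instances().patchPartnerMetadata(
        project=project, zone=zone, instance=instance, body=body
    ).execute()

    # Wait for the returned zonal Operation to reach DONE.
    while op.get("status") != "DONE":
        op = compute.zoneOperations().wait(
            project=project, zone=zone, operation=op["name"]
        ).execute()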
performMaintenance(project, zone, instance, requestId=None, x__xgafv=None)
Perform a manual maintenance on the instance.
@@ -7356,6 +7575,13 @@ 

Method Details

"a_key": "A String", }, }, + "partnerMetadata": { # Partner Metadata assigned to the instance. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for the VM. If not specified, use INHERIT_FROM_SUBNETWORK as default. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that this instance can consume from. diff --git a/docs/dyn/compute_beta.machineImages.html b/docs/dyn/compute_beta.machineImages.html index 3bafaaee87..ee5988cd18 100644 --- a/docs/dyn/compute_beta.machineImages.html +++ b/docs/dyn/compute_beta.machineImages.html @@ -453,6 +453,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -1045,6 +1052,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -1664,6 +1678,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. diff --git a/docs/dyn/compute_beta.regionInstanceGroupManagers.html b/docs/dyn/compute_beta.regionInstanceGroupManagers.html index 6a37360e51..1c46fc84a5 100644 --- a/docs/dyn/compute_beta.regionInstanceGroupManagers.html +++ b/docs/dyn/compute_beta.regionInstanceGroupManagers.html @@ -1103,7 +1103,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1280,7 +1280,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -1586,7 +1586,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -2110,7 +2110,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, @@ -3939,7 +3939,7 @@

Method Details

}, ], "params": { # Input only additional params for instance group manager creation. # Input only. Additional params passed with the request, but not persisted as part of resource payload. - "resourceManagerTags": { # Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only. + "resourceManagerTags": { # Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources. "a_key": "A String", }, }, diff --git a/docs/dyn/compute_beta.regionInstanceTemplates.html b/docs/dyn/compute_beta.regionInstanceTemplates.html index 92142d4f44..bc9381e62e 100644 --- a/docs/dyn/compute_beta.regionInstanceTemplates.html +++ b/docs/dyn/compute_beta.regionInstanceTemplates.html @@ -81,13 +81,13 @@

Instance Methods

delete(project, region, instanceTemplate, requestId=None, x__xgafv=None)

Deletes the specified instance template. Deleting an instance template is permanent and cannot be undone.

- get(project, region, instanceTemplate, x__xgafv=None)

+ get(project, region, instanceTemplate, view=None, x__xgafv=None)

Returns the specified instance template.

insert(project, region, body=None, requestId=None, x__xgafv=None)

Creates an instance template in the specified project and region using the global instance template whose URL is included in the request.

- list(project, region, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)

+ list(project, region, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, view=None, x__xgafv=None)

Retrieves a list of instance templates that are contained within the specified project and region.

list_next()

@@ -224,13 +224,18 @@

Method Details

- get(project, region, instanceTemplate, x__xgafv=None)
+ get(project, region, instanceTemplate, view=None, x__xgafv=None)
Returns the specified instance template.
 
 Args:
   project: string, Project ID for this request. (required)
   region: string, The name of the region for this request. (required)
   instanceTemplate: string, The name of the instance template. (required)
+  view: string, View of the instance template.
+    Allowed values
+      BASIC - Include everything except Partner Metadata.
+      FULL - Include everything.
+      INSTANCE_VIEW_UNSPECIFIED - The default / unset value. The API will default to the BASIC view.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
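For orientation, here is a minimal sketch of requesting the new view from the Python client. It assumes Application Default Credentials and placeholder project, region, and template names, and uses only the parameters documented in this file:

    from googleapiclient import discovery

    compute = discovery.build('compute', 'beta')

    # FULL also returns the partnerMetadata block described below; BASIC (the default) omits it.
    template = compute.regionInstanceTemplates().get(
        project='my-project',
        region='us-central1',
        instanceTemplate='my-template',
        view='FULL',
    ).execute()
    print(template['properties'].get('partnerMetadata', {}))

    # list() accepts the same view parameter; page through results with list_next().
    request = compute.regionInstanceTemplates().list(
        project='my-project', region='us-central1', view='FULL')
    while request is not None:
        response = request.execute()
        for item in response.get('items', []):
            print(item['name'])
        request = compute.regionInstanceTemplates().list_next(
            previous_request=request, previous_response=response)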
@@ -447,6 +452,13 @@ 

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -754,6 +766,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. @@ -960,7 +979,7 @@

Method Details

- list(project, region, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
+ list(project, region, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, view=None, x__xgafv=None)
Retrieves a list of instance templates that are contained within the specified project and region.
 
 Args:
@@ -971,6 +990,11 @@ 

Method Details

  orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.
  pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.
  returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.
+  view: string, View of the instance template.
+    Allowed values
+      BASIC - Include everything except Partner Metadata.
+      FULL - Include everything.
+      INSTANCE_VIEW_UNSPECIFIED - The default / unset value. The API will default to the BASIC view.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
@@ -1190,6 +1214,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. diff --git a/docs/dyn/compute_beta.regionInstances.html b/docs/dyn/compute_beta.regionInstances.html index 9d968737f9..43c16bf364 100644 --- a/docs/dyn/compute_beta.regionInstances.html +++ b/docs/dyn/compute_beta.regionInstances.html @@ -295,6 +295,13 @@

Method Details

"networkPerformanceConfig": { # Note that for MachineImage, this is not supported yet. "totalEgressBandwidthTier": "A String", }, + "partnerMetadata": { # Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map. + "a_key": { + "entries": { # Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct. + "a_key": "", + }, + }, + }, "postKeyRevocationActionType": "A String", # PostKeyRevocationActionType of the instance. "privateIpv6GoogleAccess": "A String", # The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet. "reservationAffinity": { # Specifies the reservations that this instance can consume from. # Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet. diff --git a/docs/dyn/compute_beta.regionNetworkFirewallPolicies.html b/docs/dyn/compute_beta.regionNetworkFirewallPolicies.html index 57caaefab9..9ad51010be 100644 --- a/docs/dyn/compute_beta.regionNetworkFirewallPolicies.html +++ b/docs/dyn/compute_beta.regionNetworkFirewallPolicies.html @@ -962,7 +962,7 @@

Method Details

"tlsInspect": True or False, # Boolean flag indicating if the traffic should be TLS decrypted. Can be set only if action = 'apply_security_profile_group' and cannot be set for other actions. }, ], - "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL. + "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL. }, ], "firewalls": [ # Effective firewalls on the network. diff --git a/docs/dyn/compute_beta.regionUrlMaps.html b/docs/dyn/compute_beta.regionUrlMaps.html index accfa6ffed..c263c24124 100644 --- a/docs/dyn/compute_beta.regionUrlMaps.html +++ b/docs/dyn/compute_beta.regionUrlMaps.html @@ -277,13 +277,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -425,13 +425,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -564,13 +564,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -759,13 +759,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -906,13 +906,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1054,13 +1054,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1193,13 +1193,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1388,13 +1388,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1797,13 +1797,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1945,13 +1945,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2084,13 +2084,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2279,13 +2279,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2456,13 +2456,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2604,13 +2604,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2743,13 +2743,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2938,13 +2938,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3234,13 +3234,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3382,13 +3382,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3521,13 +3521,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3716,13 +3716,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3981,13 +3981,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4129,13 +4129,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4268,13 +4268,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4463,13 +4463,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], diff --git a/docs/dyn/compute_beta.routers.html b/docs/dyn/compute_beta.routers.html index faeb35e8c8..1ac819cf8e 100644 --- a/docs/dyn/compute_beta.routers.html +++ b/docs/dyn/compute_beta.routers.html @@ -86,6 +86,9 @@

Instance Methods

delete(project, region, router, requestId=None, x__xgafv=None)

Deletes the specified Router resource.

+

+ deleteRoutePolicy(project, region, router, policy=None, requestId=None, x__xgafv=None)

+

Deletes Route Policy

get(project, region, router, x__xgafv=None)

Returns the specified Router resource.

@@ -98,6 +101,9 @@

Instance Methods

getNatMappingInfo_next()

Retrieves the next page of results.

+

+ getRoutePolicy(project, region, router, policy=None, x__xgafv=None)

+

Returns specified Route Policy

getRouterStatus(project, region, router, x__xgafv=None)

Retrieves runtime information of the specified router.

@@ -107,6 +113,18 @@

Instance Methods

list(project, region, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)

Retrieves a list of Router resources available to the specified project.

+

+ listBgpRoutes(project, region, router, addressFamily=None, destinationPrefix=None, filter=None, maxResults=None, orderBy=None, pageToken=None, peer=None, policyApplied=None, returnPartialSuccess=None, routeType=None, x__xgafv=None)

+

Retrieves a list of router bgp routes available to the specified project.

+

+ listBgpRoutes_next()

+

Retrieves the next page of results.

+

+ listRoutePolicies(project, region, router, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)

+

Retrieves a list of router route policy subresources available to the specified project.

+

+ listRoutePolicies_next()

+

Retrieves the next page of results.

list_next()

Retrieves the next page of results.

@@ -122,6 +140,9 @@

Instance Methods

update(project, region, router, body=None, requestId=None, x__xgafv=None)

Updates the specified Router resource with the data included in the request. This method conforms to PUT semantics, which requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.

+

+ updateRoutePolicy(project, region, router, body=None, requestId=None, x__xgafv=None)

+

Updates or creates new Route Policy
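As a rough, hedged sketch of how the new list methods are typically driven from the Python client (placeholder names, Application Default Credentials assumed; the field layout of each response page is not shown in this excerpt):

    from googleapiclient import discovery

    compute = discovery.build('compute', 'beta')

    # Page through the route policies defined on a Cloud Router.
    request = compute.routers().listRoutePolicies(
        project='my-project', region='us-central1', router='my-router')
    while request is not None:
        response = request.execute()
        print(response)  # inspect the page; its schema is not shown in this excerpt
        request = compute.routers().listRoutePolicies_next(
            previous_request=request, previous_response=response)

    # listBgpRoutes() pages the same way via listBgpRoutes_next() and accepts the optional
    # filters listed above (addressFamily, destinationPrefix, peer, policyApplied, routeType).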

Method Details

aggregatedList(project, filter=None, includeAllScopes=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, serviceProjectNumber=None, x__xgafv=None)
@@ -475,6 +496,132 @@

Method Details

}
+
+ deleteRoutePolicy(project, region, router, policy=None, requestId=None, x__xgafv=None)
+Deletes Route Policy
+
+Args:
+  project: string, Project ID for this request. (required)
+  region: string, Name of the region for this request. (required)
+  router: string, Name of the Router resource where Route Policy is defined. (required)
+  policy: string, The Policy name for this request. Name must conform to RFC1035
+  requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/beta/globalOperations) * [Regional](/compute/docs/reference/rest/beta/regionOperations) * [Zonal](/compute/docs/reference/rest/beta/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zoneOperations` resource. For more information, read Global, Regional, and Zonal Resources. Note that completed Operation resources have a limited retention period.
+  "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise.
+  "creationTimestamp": "A String", # [Deprecated] This field is deprecated.
+  "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created.
+  "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format.
+  "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated.
+    "errors": [ # [Output Only] The array of errors encountered while processing this operation.
+      {
+        "code": "A String", # [Output Only] The error type identifier for this error.
+        "errorDetails": [ # [Output Only] An optional list of messages that contain the error details. There is a set of defined message types to use for providing details.The syntax depends on the error code. For example, QuotaExceededInfo will have details when the error code is QUOTA_EXCEEDED.
+          {
+            "errorInfo": { # Describes the cause of the error with structured details. Example of an error when contacting the "pubsub.googleapis.com" API when it is not enabled: { "reason": "API_DISABLED" "domain": "googleapis.com" "metadata": { "resource": "projects/123", "service": "pubsub.googleapis.com" } } This response indicates that the pubsub.googleapis.com API is not enabled. Example of an error that is returned when attempting to create a Spanner instance in a region that is out of stock: { "reason": "STOCKOUT" "domain": "spanner.googleapis.com", "metadata": { "availableRegions": "us-central1,us-east2" } }
+              "domain": "A String", # The logical grouping to which the "reason" belongs. The error domain is typically the registered service name of the tool or product that generates the error. Example: "pubsub.googleapis.com". If the error is generated by some common infrastructure, the error domain must be a globally unique value that identifies the infrastructure. For Google API infrastructure, the error domain is "googleapis.com".
+              "metadatas": { # Additional structured details about this error. Keys should match /[a-zA-Z0-9-_]/ and be limited to 64 characters in length. When identifying the current value of an exceeded limit, the units should be contained in the key, not the value. For example, rather than {"instanceLimit": "100/request"}, should be returned as, {"instanceLimitPerRequest": "100"}, if the client exceeds the number of instances that can be created in a single (batch) request.
+                "a_key": "A String",
+              },
+              "reason": "A String", # The reason of the error. This is a constant value that identifies the proximate cause of the error. Error reasons are unique within a particular domain of errors. This should be at most 63 characters and match a regular expression of `A-Z+[A-Z0-9]`, which represents UPPER_SNAKE_CASE.
+            },
+            "help": { # Provides links to documentation or for performing an out of band action. For example, if a quota check failed with an error indicating the calling project hasn't enabled the accessed service, this can contain a URL pointing directly to the right place in the developer console to flip the bit.
+              "links": [ # URL(s) pointing to additional information on handling the current error.
+                { # Describes a URL link.
+                  "description": "A String", # Describes what the link offers.
+                  "url": "A String", # The URL of the link.
+                },
+              ],
+            },
+            "localizedMessage": { # Provides a localized error message that is safe to return to the user which can be attached to an RPC error.
+              "locale": "A String", # The locale used following the specification defined at https://www.rfc-editor.org/rfc/bcp/bcp47.txt. Examples are: "en-US", "fr-CH", "es-MX"
+              "message": "A String", # The localized error message in the above locale.
+            },
+            "quotaInfo": { # Additional details for quota exceeded error for resource quota.
+              "dimensions": { # The map holding related quota dimensions.
+                "a_key": "A String",
+              },
+              "futureLimit": 3.14, # Future quota limit being rolled out. The limit's unit depends on the quota type or metric.
+              "limit": 3.14, # Current effective quota limit. The limit's unit depends on the quota type or metric.
+              "limitName": "A String", # The name of the quota limit.
+              "metricName": "A String", # The Compute Engine quota metric name.
+              "rolloutStatus": "A String", # Rollout status of the future quota limit.
+            },
+          },
+        ],
+        "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional.
+        "message": "A String", # [Output Only] An optional, human-readable error message.
+      },
+    ],
+  },
+  "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`.
+  "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found.
+  "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server.
+  "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format.
+  "instancesBulkInsertOperationMetadata": {
+    "perLocationStatus": { # Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "createdVmCount": 42, # [Output Only] Count of VMs successfully created so far.
+        "deletedVmCount": 42, # [Output Only] Count of VMs that got deleted during rollback.
+        "failedToCreateVmCount": 42, # [Output Only] Count of VMs that started creating but encountered an error.
+        "status": "A String", # [Output Only] Creation status of BulkInsert operation - information if the flow is rolling forward or rolling back.
+        "targetVmCount": 42, # [Output Only] Count of VMs originally planned to be created.
+      },
+    },
+  },
+  "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources.
+  "name": "A String", # [Output Only] Name of the operation.
+  "operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request.
+  "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on.
+  "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses.
+  "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations.
+  "selfLink": "A String", # [Output Only] Server-defined URL for the resource.
+  "setCommonInstanceMetadataOperationMetadata": { # [Output Only] If the operation is for projects.setCommonInstanceMetadata, this field will contain information on all underlying zonal actions and their state.
+    "clientOperationId": "A String", # [Output Only] The client operation id.
+    "perLocationOperations": { # [Output Only] Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # [Output Only] If state is `ABANDONED` or `FAILED`, this field is populated.
+          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+          "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+            {
+              "a_key": "", # Properties of the object. Contains field @type with type URL.
+            },
+          ],
+          "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+        },
+        "state": "A String", # [Output Only] Status of the action, which can be one of the following: `PROPAGATING`, `PROPAGATED`, `ABANDONED`, `FAILED`, or `DONE`.
+      },
+    },
+  },
+  "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format.
+  "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`.
+  "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation.
+  "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource.
+  "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from.
+  "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com` or `alice_smith_identifier (global/workforcePools/example-com-us-employees)`.
+  "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated.
+    {
+      "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.
+      "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" }
+        {
+          "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).
+          "value": "A String", # [Output Only] A warning data value corresponding to the key.
+        },
+      ],
+      "message": "A String", # [Output Only] A human-readable description of the warning code.
+    },
+  ],
+  "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations.
+}
+
+
get(project, region, router, x__xgafv=None)
Returns the specified Router resource.
@@ -762,6 +909,51 @@ 

Method Details

+
+ getRoutePolicy(project, region, router, policy=None, x__xgafv=None)
+Returns the specified Route Policy.
+
+Args:
+  project: string, Project ID for this request. (required)
+  region: string, Name of the region for this request. (required)
+  router: string, Name of the Router resource to query for the route policy. The name should conform to RFC1035. (required)
+  policy: string, The Policy name for this request. The name must conform to RFC1035.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    {
+  "resource": {
+    "fingerprint": "A String", # A fingerprint for the Route Policy being applied to this Router, which is essentially a hash of the Route Policy used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update Route Policy. You must always provide an up-to-date fingerprint hash in order to update or change labels. To see the latest fingerprint, make a getRoutePolicy() request to retrieve a Route Policy.
+    "name": "A String", # Route Policy name, which must be a resource ID segment and unique within all the router's Route Policies. Name should conform to RFC1035.
+    "terms": [ # List of terms (the order in the list is not important, they are evaluated in order of priority). Order of policies is not retained and might change when getting policy later.
+      {
+        "actions": [ # CEL expressions to evaluate to modify a route when this term matches.
+          { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
+            "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+            "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+            "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+            "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+          },
+        ],
+        "match": { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information. # CEL expression evaluated against a route to determine if this term applies. When not set, the term applies to all routes.
+          "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+          "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+          "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+          "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+        },
+        "priority": 42, # The evaluation priority for this term, which must be between 0 (inclusive) and 2^31 (exclusive), and unique within the list.
+      },
+    ],
+    "type": "A String",
+  },
+}
+
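+
+Example: a minimal usage sketch, assuming the google-api-python-client library with
+Application Default Credentials; the project, region, router, and policy names below
+are placeholders.
+
+  from googleapiclient import discovery
+
+  compute = discovery.build('compute', 'beta')
+  response = compute.routers().getRoutePolicy(
+      project='my-project',       # placeholder project ID
+      region='us-central1',       # placeholder region
+      router='my-router',         # placeholder router name
+      policy='my-route-policy',   # placeholder route policy name
+  ).execute()
+  policy = response['resource']
+  print(policy['name'], policy['fingerprint'], len(policy.get('terms', [])))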
+
getRouterStatus(project, region, router, x__xgafv=None)
Retrieves runtime information of the specified router.
@@ -1512,6 +1704,180 @@ 

Method Details

}
+
+ listBgpRoutes(project, region, router, addressFamily=None, destinationPrefix=None, filter=None, maxResults=None, orderBy=None, pageToken=None, peer=None, policyApplied=None, returnPartialSuccess=None, routeType=None, x__xgafv=None)
+Retrieves a list of router BGP routes available to the specified project.
+
+Args:
+  project: string, Project ID for this request. (required)
+  region: string, Name of the region for this request. (required)
+  router: string, Name or id of the resource for this request. Name should conform to RFC1035. (required)
+  addressFamily: string, (Required) Limit results to this address family (either IPv4 or IPv6).
+    Allowed values
+      IPV4 - 
+      IPV6 - 
+      UNSPECIFIED_IP_VERSION - 
+  destinationPrefix: string, Limit results to destinations that are subnets of this CIDR range
+  filter: string, A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = "Intel Skylake") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = "Intel Skylake") OR (cpuPlatform = "Intel Broadwell") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq "double quoted literal"` `(fieldname1 eq literal) (fieldname2 ne "literal")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name "instance", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.
+  maxResults: integer, The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. (Default: `500`)
+  orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.
+  pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.
+  peer: string, (Required) Limit results to the BGP peer with the given name. Name should conform to RFC1035.
+  policyApplied: boolean, When true, the method returns post-policy routes. Otherwise, it returns pre-policy routes.
+  returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.
+  routeType: string, (Required) Limit results to this type of route (either LEARNED or ADVERTISED).
+    Allowed values
+      ADVERTISED - 
+      LEARNED - 
+      UNSPECIFIED_ROUTE_TYPE - 
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    {
+  "etag": "A String",
+  "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server.
+  "kind": "compute#routersListBgpRoutes", # [Output Only] Type of resource. Always compute#routersListBgpRoutes for lists of bgp routes.
+  "nextPageToken": "A String", # [Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. Subsequent list requests will have their own nextPageToken to continue paging through the results.
+  "result": [ # [Output Only] A list of bgp routes.
+    {
+      "asPaths": [ # [Output only] AS-PATH for the route
+        {
+          "asns": [ # [Output only] ASNs in the path segment. When type is SEQUENCE, these are ordered.
+            42,
+          ],
+          "type": "A String", # [Output only] Type of AS-PATH segment (SEQUENCE or SET)
+        },
+      ],
+      "communities": [ # [Output only] BGP communities in human-readable A:B format.
+        "A String",
+      ],
+      "destination": { # Network Layer Reachability Information (NLRI) for a route. # [Output only] Destination IP range for the route, in human-readable CIDR format
+        "pathId": 42, # If the BGP session supports multiple paths (RFC 7911), the path identifier for this route.
+        "prefix": "A String", # Human readable CIDR notation for a prefix. E.g. 10.42.0.0/16.
+      },
+      "med": 42, # [Output only] BGP multi-exit discriminator
+      "origin": "A String", # [Output only] BGP origin (EGP, IGP or INCOMPLETE)
+    },
+  ],
+  "selfLink": "A String", # [Output Only] Server-defined URL for this resource.
+  "unreachables": [ # [Output Only] Unreachable resources.
+    "A String",
+  ],
+  "warning": { # [Output Only] Informational warning message.
+    "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.
+    "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" }
+      {
+        "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).
+        "value": "A String", # [Output Only] A warning data value corresponding to the key.
+      },
+    ],
+    "message": "A String", # [Output Only] A human-readable description of the warning code.
+  },
+}
+
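+
+Example: a minimal single-page sketch, assuming google-api-python-client with
+Application Default Credentials; the project, region, router, and peer names are
+placeholders.
+
+  from googleapiclient import discovery
+
+  compute = discovery.build('compute', 'beta')
+  response = compute.routers().listBgpRoutes(
+      project='my-project',    # placeholder project ID
+      region='us-central1',    # placeholder region
+      router='my-router',      # placeholder router name
+      peer='my-bgp-peer',      # placeholder BGP peer name
+      addressFamily='IPV4',
+      routeType='LEARNED',
+      policyApplied=True,      # post-policy routes
+  ).execute()
+  for route in response.get('result', []):
+      prefix = route.get('destination', {}).get('prefix')
+      print(prefix, route.get('med'), route.get('origin'))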
+ +
+ listBgpRoutes_next()
+Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
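+
+Example: the usual pagination pattern for this collection, as a sketch with the same
+placeholder names as above.
+
+  from googleapiclient import discovery
+
+  compute = discovery.build('compute', 'beta')
+  request = compute.routers().listBgpRoutes(
+      project='my-project', region='us-central1', router='my-router',
+      peer='my-bgp-peer', addressFamily='IPV4', routeType='LEARNED')
+  while request is not None:
+      response = request.execute()
+      for route in response.get('result', []):
+          print(route.get('destination', {}).get('prefix'))
+      request = compute.routers().listBgpRoutes_next(
+          previous_request=request, previous_response=response)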
+ +
+ listRoutePolicies(project, region, router, filter=None, maxResults=None, orderBy=None, pageToken=None, returnPartialSuccess=None, x__xgafv=None)
+Retrieves a list of router route policy subresources available to the specified project.
+
+Args:
+  project: string, Project ID for this request. (required)
+  region: string, Name of the region for this request. (required)
+  router: string, Name or id of the resource for this request. Name should conform to RFC1035. (required)
+  filter: string, A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = "Intel Skylake") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = "Intel Skylake") OR (cpuPlatform = "Intel Broadwell") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq "double quoted literal"` `(fieldname1 eq literal) (fieldname2 ne "literal")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name "instance", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.
+  maxResults: integer, The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. (Default: `500`)
+  orderBy: string, Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy="creationTimestamp desc"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.
+  pageToken: string, Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.
+  returnPartialSuccess: boolean, Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    {
+  "etag": "A String",
+  "id": "A String", # [Output Only] The unique identifier for the resource. This identifier is defined by the server.
+  "kind": "compute#routersListRoutePolicies", # [Output Only] Type of resource. Always compute#routersListRoutePolicies for lists of route policies.
+  "nextPageToken": "A String", # [Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. Subsequent list requests will have their own nextPageToken to continue paging through the results.
+  "result": [ # [Output Only] A list of route policies.
+    {
+      "fingerprint": "A String", # A fingerprint for the Route Policy being applied to this Router, which is essentially a hash of the Route Policy used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update Route Policy. You must always provide an up-to-date fingerprint hash in order to update or change labels. To see the latest fingerprint, make a getRoutePolicy() request to retrieve a Route Policy.
+      "name": "A String", # Route Policy name, which must be a resource ID segment and unique within all the router's Route Policies. Name should conform to RFC1035.
+      "terms": [ # List of terms (the order in the list is not important, they are evaluated in order of priority). Order of policies is not retained and might change when getting policy later.
+        {
+          "actions": [ # CEL expressions to evaluate to modify a route when this term matches.
+            { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
+              "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+              "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+              "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+              "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+            },
+          ],
+          "match": { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information. # CEL expression evaluated against a route to determine if this term applies. When not set, the term applies to all routes.
+            "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+            "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+            "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+            "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+          },
+          "priority": 42, # The evaluation priority for this term, which must be between 0 (inclusive) and 2^31 (exclusive), and unique within the list.
+        },
+      ],
+      "type": "A String",
+    },
+  ],
+  "selfLink": "A String", # [Output Only] Server-defined URL for this resource.
+  "unreachables": [ # [Output Only] Unreachable resources.
+    "A String",
+  ],
+  "warning": { # [Output Only] Informational warning message.
+    "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.
+    "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" }
+      {
+        "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).
+        "value": "A String", # [Output Only] A warning data value corresponding to the key.
+      },
+    ],
+    "message": "A String", # [Output Only] A human-readable description of the warning code.
+  },
+}
+
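+
+Example: a minimal sketch that lists route policies and prints their terms in priority
+order. The names are placeholders, and the filter value is only an illustration of the
+filter syntax described above.
+
+  from googleapiclient import discovery
+
+  compute = discovery.build('compute', 'beta')
+  response = compute.routers().listRoutePolicies(
+      project='my-project',             # placeholder project ID
+      region='us-central1',             # placeholder region
+      router='my-router',               # placeholder router name
+      filter='name != legacy-policy',   # illustrative filter only
+      maxResults=100,
+  ).execute()
+  for policy in response.get('result', []):
+      for term in sorted(policy.get('terms', []), key=lambda t: t['priority']):
+          match = term.get('match', {}).get('expression', '<all routes>')
+          print(policy['name'], term['priority'], match)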
+ +
+ listRoutePolicies_next()
+Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+
list_next()
Retrieves the next page of results.
@@ -2440,4 +2806,157 @@ 

Method Details

}
+
+ updateRoutePolicy(project, region, router, body=None, requestId=None, x__xgafv=None)
+Updates or creates a new Route Policy.
+
+Args:
+  project: string, Project ID for this request. (required)
+  region: string, Name of the region for this request. (required)
+  router: string, Name of the Router resource where Route Policy is defined. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{
+  "fingerprint": "A String", # A fingerprint for the Route Policy being applied to this Router, which is essentially a hash of the Route Policy used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update Route Policy. You must always provide an up-to-date fingerprint hash in order to update or change labels. To see the latest fingerprint, make a getRoutePolicy() request to retrieve a Route Policy.
+  "name": "A String", # Route Policy name, which must be a resource ID segment and unique within all the router's Route Policies. Name should conform to RFC1035.
+  "terms": [ # List of terms (the order in the list is not important, they are evaluated in order of priority). Order of policies is not retained and might change when getting policy later.
+    {
+      "actions": [ # CEL expressions to evaluate to modify a route when this term matches.
+        { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
+          "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+          "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+          "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+          "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+        },
+      ],
+      "match": { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information. # CEL expression evaluated against a route to determine if this term applies. When not set, the term applies to all routes.
+        "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+        "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+        "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+        "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+      },
+      "priority": 42, # The evaluation priority for this term, which must be between 0 (inclusive) and 2^31 (exclusive), and unique within the list.
+    },
+  ],
+  "type": "A String",
+}
+
+  requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/beta/globalOperations) * [Regional](/compute/docs/reference/rest/beta/regionOperations) * [Zonal](/compute/docs/reference/rest/beta/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zoneOperations` resource. For more information, read Global, Regional, and Zonal Resources. Note that completed Operation resources have a limited retention period.
+  "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise.
+  "creationTimestamp": "A String", # [Deprecated] This field is deprecated.
+  "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created.
+  "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format.
+  "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated.
+    "errors": [ # [Output Only] The array of errors encountered while processing this operation.
+      {
+        "code": "A String", # [Output Only] The error type identifier for this error.
+        "errorDetails": [ # [Output Only] An optional list of messages that contain the error details. There is a set of defined message types to use for providing details.The syntax depends on the error code. For example, QuotaExceededInfo will have details when the error code is QUOTA_EXCEEDED.
+          {
+            "errorInfo": { # Describes the cause of the error with structured details. Example of an error when contacting the "pubsub.googleapis.com" API when it is not enabled: { "reason": "API_DISABLED" "domain": "googleapis.com" "metadata": { "resource": "projects/123", "service": "pubsub.googleapis.com" } } This response indicates that the pubsub.googleapis.com API is not enabled. Example of an error that is returned when attempting to create a Spanner instance in a region that is out of stock: { "reason": "STOCKOUT" "domain": "spanner.googleapis.com", "metadata": { "availableRegions": "us-central1,us-east2" } }
+              "domain": "A String", # The logical grouping to which the "reason" belongs. The error domain is typically the registered service name of the tool or product that generates the error. Example: "pubsub.googleapis.com". If the error is generated by some common infrastructure, the error domain must be a globally unique value that identifies the infrastructure. For Google API infrastructure, the error domain is "googleapis.com".
+              "metadatas": { # Additional structured details about this error. Keys should match /[a-zA-Z0-9-_]/ and be limited to 64 characters in length. When identifying the current value of an exceeded limit, the units should be contained in the key, not the value. For example, rather than {"instanceLimit": "100/request"}, should be returned as, {"instanceLimitPerRequest": "100"}, if the client exceeds the number of instances that can be created in a single (batch) request.
+                "a_key": "A String",
+              },
+              "reason": "A String", # The reason of the error. This is a constant value that identifies the proximate cause of the error. Error reasons are unique within a particular domain of errors. This should be at most 63 characters and match a regular expression of `A-Z+[A-Z0-9]`, which represents UPPER_SNAKE_CASE.
+            },
+            "help": { # Provides links to documentation or for performing an out of band action. For example, if a quota check failed with an error indicating the calling project hasn't enabled the accessed service, this can contain a URL pointing directly to the right place in the developer console to flip the bit.
+              "links": [ # URL(s) pointing to additional information on handling the current error.
+                { # Describes a URL link.
+                  "description": "A String", # Describes what the link offers.
+                  "url": "A String", # The URL of the link.
+                },
+              ],
+            },
+            "localizedMessage": { # Provides a localized error message that is safe to return to the user which can be attached to an RPC error.
+              "locale": "A String", # The locale used following the specification defined at https://www.rfc-editor.org/rfc/bcp/bcp47.txt. Examples are: "en-US", "fr-CH", "es-MX"
+              "message": "A String", # The localized error message in the above locale.
+            },
+            "quotaInfo": { # Additional details for quota exceeded error for resource quota.
+              "dimensions": { # The map holding related quota dimensions.
+                "a_key": "A String",
+              },
+              "futureLimit": 3.14, # Future quota limit being rolled out. The limit's unit depends on the quota type or metric.
+              "limit": 3.14, # Current effective quota limit. The limit's unit depends on the quota type or metric.
+              "limitName": "A String", # The name of the quota limit.
+              "metricName": "A String", # The Compute Engine quota metric name.
+              "rolloutStatus": "A String", # Rollout status of the future quota limit.
+            },
+          },
+        ],
+        "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional.
+        "message": "A String", # [Output Only] An optional, human-readable error message.
+      },
+    ],
+  },
+  "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`.
+  "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found.
+  "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server.
+  "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format.
+  "instancesBulkInsertOperationMetadata": {
+    "perLocationStatus": { # Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "createdVmCount": 42, # [Output Only] Count of VMs successfully created so far.
+        "deletedVmCount": 42, # [Output Only] Count of VMs that got deleted during rollback.
+        "failedToCreateVmCount": 42, # [Output Only] Count of VMs that started creating but encountered an error.
+        "status": "A String", # [Output Only] Creation status of BulkInsert operation - information if the flow is rolling forward or rolling back.
+        "targetVmCount": 42, # [Output Only] Count of VMs originally planned to be created.
+      },
+    },
+  },
+  "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources.
+  "name": "A String", # [Output Only] Name of the operation.
+  "operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request.
+  "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on.
+  "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses.
+  "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations.
+  "selfLink": "A String", # [Output Only] Server-defined URL for the resource.
+  "setCommonInstanceMetadataOperationMetadata": { # [Output Only] If the operation is for projects.setCommonInstanceMetadata, this field will contain information on all underlying zonal actions and their state.
+    "clientOperationId": "A String", # [Output Only] The client operation id.
+    "perLocationOperations": { # [Output Only] Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # [Output Only] If state is `ABANDONED` or `FAILED`, this field is populated.
+          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+          "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+            {
+              "a_key": "", # Properties of the object. Contains field @type with type URL.
+            },
+          ],
+          "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+        },
+        "state": "A String", # [Output Only] Status of the action, which can be one of the following: `PROPAGATING`, `PROPAGATED`, `ABANDONED`, `FAILED`, or `DONE`.
+      },
+    },
+  },
+  "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format.
+  "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`.
+  "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation.
+  "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource.
+  "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from.
+  "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com` or `alice_smith_identifier (global/workforcePools/example-com-us-employees)`.
+  "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated.
+    {
+      "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.
+      "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" }
+        {
+          "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).
+          "value": "A String", # [Output Only] A warning data value corresponding to the key.
+        },
+      ],
+      "message": "A String", # [Output Only] A human-readable description of the warning code.
+    },
+  ],
+  "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations.
+}
+
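+
+Example: a minimal sketch that creates or replaces a route policy and then waits for the
+returned regional Operation. All names are placeholders; the `type` value and the CEL
+expressions are illustrative assumptions only and should be checked against the API. When
+updating an existing policy, also pass its current `fingerprint` from getRoutePolicy().
+
+  import uuid
+
+  from googleapiclient import discovery
+
+  compute = discovery.build('compute', 'beta')
+  body = {
+      'name': 'my-route-policy',            # placeholder policy name
+      'type': 'ROUTE_POLICY_TYPE_IMPORT',   # assumed enum value; verify against the API
+      'terms': [
+          {
+              'priority': 1,
+              # Illustrative CEL placeholders, not verified policy expressions.
+              'match': {'expression': "destination.prefix.startsWith('10.')"},
+              'actions': [{'expression': "setMed(100)"}],
+          },
+      ],
+  }
+  operation = compute.routers().updateRoutePolicy(
+      project='my-project',           # placeholder project ID
+      region='us-central1',           # placeholder region
+      router='my-router',             # placeholder router name
+      body=body,
+      requestId=str(uuid.uuid4()),    # idempotency token
+  ).execute()
+  done = compute.regionOperations().wait(
+      project='my-project', region='us-central1',
+      operation=operation['name']).execute()
+  if 'error' in done:
+      raise RuntimeError(done['error'])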
+
\ No newline at end of file
diff --git a/docs/dyn/compute_beta.urlMaps.html b/docs/dyn/compute_beta.urlMaps.html
index 459bc5cecd..9f2a341467 100644
--- a/docs/dyn/compute_beta.urlMaps.html
+++ b/docs/dyn/compute_beta.urlMaps.html
@@ -163,13 +163,13 @@ 

Method Details

      "allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header.
        "A String",
      ],
-      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
+      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
        "A String",
      ],
      "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes.
        "A String",
      ],
-      "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.
+      "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.
      "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header.
        "A String",
      ],
@@ -311,13 +311,13 @@ 

Method Details

      "allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header.
        "A String",
      ],
-      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
+      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
        "A String",
      ],
      "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes.
        "A String",
      ],
-      "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.
+      "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.
      "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header.
        "A String",
      ],
@@ -450,13 +450,13 @@ 

Method Details

      "allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header.
        "A String",
      ],
-      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
+      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
        "A String",
      ],
      "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes.
        "A String",
      ],
-      "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.
+      "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.
      "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header.
        "A String",
      ],
@@ -645,13 +645,13 @@ 

Method Details

      "allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header.
        "A String",
      ],
-      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
+      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
        "A String",
      ],
      "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes.
        "A String",
      ],
-      "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.
+      "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.
      "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header.
        "A String",
      ],
@@ -970,13 +970,13 @@ 

Method Details

      "allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header.
        "A String",
      ],
-      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
+      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
        "A String",
      ],
      "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes.
        "A String",
      ],
-      "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.
+      "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.
      "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header.
        "A String",
      ],
@@ -1118,13 +1118,13 @@ 

Method Details

      "allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header.
        "A String",
      ],
-      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
+      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
        "A String",
      ],
      "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes.
        "A String",
      ],
-      "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.
+      "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.
      "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header.
        "A String",
      ],
@@ -1257,13 +1257,13 @@ 

Method Details

      "allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header.
        "A String",
      ],
-      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
+      "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.
        "A String",
      ],
      "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes.
        "A String",
      ],
-      "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.
+      "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.
      "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header.
        "A String",
      ],
@@ -1452,13 +1452,13 @@ 

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1598,13 +1598,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1746,13 +1746,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -1885,13 +1885,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2080,13 +2080,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2487,13 +2487,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2635,13 +2635,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2774,13 +2774,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -2969,13 +2969,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3145,13 +3145,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3293,13 +3293,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3432,13 +3432,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3627,13 +3627,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -3921,13 +3921,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4069,13 +4069,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4208,13 +4208,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4403,13 +4403,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4670,13 +4670,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4818,13 +4818,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -4957,13 +4957,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], @@ -5152,13 +5152,13 @@

Method Details

"allowMethods": [ # Specifies the content for the Access-Control-Allow-Methods header. "A String", ], - "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. + "allowOriginRegexes": [ # Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED. "A String", ], "allowOrigins": [ # Specifies the list of origins that is allowed to do CORS requests. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. "A String", ], - "disabled": True or False, # If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect. + "disabled": True or False, # If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect. "exposeHeaders": [ # Specifies the content for the Access-Control-Expose-Headers header. "A String", ], diff --git a/docs/dyn/compute_v1.html b/docs/dyn/compute_v1.html index 8dc3dcde7e..8b89e41cef 100644 --- a/docs/dyn/compute_v1.html +++ b/docs/dyn/compute_v1.html @@ -199,6 +199,11 @@

Instance Methods

Returns the instanceGroups Resource.

+

+ instanceSettings() +

+

Returns the instanceSettings Resource.

+

instanceTemplates()

diff --git a/docs/dyn/compute_v1.instanceSettings.html b/docs/dyn/compute_v1.instanceSettings.html new file mode 100644 index 0000000000..2478ea780e --- /dev/null +++ b/docs/dyn/compute_v1.instanceSettings.html @@ -0,0 +1,260 @@ + + + +

Compute Engine API . instanceSettings

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ get(project, zone, x__xgafv=None)

+

Get Instance settings.

+

+ patch(project, zone, body=None, requestId=None, updateMask=None, x__xgafv=None)

+

Patch Instance settings.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ get(project, zone, x__xgafv=None) +
Get Instance settings.
+
+Args:
+  project: string, Project ID for this request. (required)
+  zone: string, Name of the zone for this request. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents an Instance Settings resource. You can use instance settings to configure default settings for Compute Engine VM instances. For example, you can use it to configure the default machine type of Compute Engine VM instances.
+  "fingerprint": "A String", # Specifies a fingerprint for instance settings, which is essentially a hash of the instance settings resource's contents and used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update the instance settings resource. You must always provide an up-to-date fingerprint hash in order to update or change the resource, otherwise the request will fail with error 412 conditionNotMet. To see the latest fingerprint, make a get() request to retrieve the resource.
+  "kind": "compute#instanceSettings", # [Output Only] Type of the resource. Always compute#instance_settings for instance settings.
+  "metadata": { # The metadata key/value pairs assigned to all the instances in the corresponding scope.
+    "items": { # A metadata key/value items map. The total size of all keys and values must be less than 512KB.
+      "a_key": "A String",
+    },
+    "kind": "compute#metadata", # [Output Only] Type of the resource. Always compute#metadata for metadata.
+  },
+  "zone": "A String", # [Output Only] URL of the zone where the resource resides You must specify this field as part of the HTTP request URL. It is not settable as a field in the request body.
+}
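+A minimal usage sketch, not part of the generated reference: assuming the google-api-python-client is installed and default credentials are available, the documented get() call can be exercised as below; the project and zone values are placeholders.
+
+  # Hypothetical example: fetch the zonal instance settings and show the current fingerprint.
+  from googleapiclient import discovery
+
+  compute = discovery.build('compute', 'v1')
+  settings = compute.instanceSettings().get(
+      project='my-project', zone='us-central1-a').execute()
+  print(settings.get('fingerprint'), settings.get('metadata', {}).get('items'))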
+
+ +
+ patch(project, zone, body=None, requestId=None, updateMask=None, x__xgafv=None) +
Patch Instance settings.
+
+Args:
+  project: string, Project ID for this request. (required)
+  zone: string, The zone scoping this request. It should conform to RFC1035. (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # Represents an Instance Settings resource. You can use instance settings to configure default settings for Compute Engine VM instances. For example, you can use it to configure the default machine type of Compute Engine VM instances.
+  "fingerprint": "A String", # Specifies a fingerprint for instance settings, which is essentially a hash of the instance settings resource's contents and used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update the instance settings resource. You must always provide an up-to-date fingerprint hash in order to update or change the resource, otherwise the request will fail with error 412 conditionNotMet. To see the latest fingerprint, make a get() request to retrieve the resource.
+  "kind": "compute#instanceSettings", # [Output Only] Type of the resource. Always compute#instance_settings for instance settings.
+  "metadata": { # The metadata key/value pairs assigned to all the instances in the corresponding scope.
+    "items": { # A metadata key/value items map. The total size of all keys and values must be less than 512KB.
+      "a_key": "A String",
+    },
+    "kind": "compute#metadata", # [Output Only] Type of the resource. Always compute#metadata for metadata.
+  },
+  "zone": "A String", # [Output Only] URL of the zone where the resource resides You must specify this field as part of the HTTP request URL. It is not settable as a field in the request body.
+}
+
+  requestId: string, An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if the original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID, with the exception that the zero UUID (00000000-0000-0000-0000-000000000000) is not supported.
+  updateMask: string, update_mask indicates which fields are to be updated as part of this request.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents an Operation resource. Google Compute Engine has three Operation resources: * [Global](/compute/docs/reference/rest/v1/globalOperations) * [Regional](/compute/docs/reference/rest/v1/regionOperations) * [Zonal](/compute/docs/reference/rest/v1/zoneOperations) You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the `globalOperations` resource. - For regional operations, use the `regionOperations` resource. - For zonal operations, use the `zoneOperations` resource. For more information, read Global, Regional, and Zonal Resources. Note that completed Operation resources have a limited retention period.
+  "clientOperationId": "A String", # [Output Only] The value of `requestId` if you provided it in the request. Not present otherwise.
+  "creationTimestamp": "A String", # [Deprecated] This field is deprecated.
+  "description": "A String", # [Output Only] A textual description of the operation, which is set when the operation is created.
+  "endTime": "A String", # [Output Only] The time that this operation was completed. This value is in RFC3339 text format.
+  "error": { # [Output Only] If errors are generated during processing of the operation, this field will be populated.
+    "errors": [ # [Output Only] The array of errors encountered while processing this operation.
+      {
+        "code": "A String", # [Output Only] The error type identifier for this error.
+        "errorDetails": [ # [Output Only] An optional list of messages that contain the error details. There is a set of defined message types to use for providing details.The syntax depends on the error code. For example, QuotaExceededInfo will have details when the error code is QUOTA_EXCEEDED.
+          {
+            "errorInfo": { # Describes the cause of the error with structured details. Example of an error when contacting the "pubsub.googleapis.com" API when it is not enabled: { "reason": "API_DISABLED" "domain": "googleapis.com" "metadata": { "resource": "projects/123", "service": "pubsub.googleapis.com" } } This response indicates that the pubsub.googleapis.com API is not enabled. Example of an error that is returned when attempting to create a Spanner instance in a region that is out of stock: { "reason": "STOCKOUT" "domain": "spanner.googleapis.com", "metadata": { "availableRegions": "us-central1,us-east2" } }
+              "domain": "A String", # The logical grouping to which the "reason" belongs. The error domain is typically the registered service name of the tool or product that generates the error. Example: "pubsub.googleapis.com". If the error is generated by some common infrastructure, the error domain must be a globally unique value that identifies the infrastructure. For Google API infrastructure, the error domain is "googleapis.com".
+              "metadatas": { # Additional structured details about this error. Keys should match /[a-zA-Z0-9-_]/ and be limited to 64 characters in length. When identifying the current value of an exceeded limit, the units should be contained in the key, not the value. For example, rather than {"instanceLimit": "100/request"}, should be returned as, {"instanceLimitPerRequest": "100"}, if the client exceeds the number of instances that can be created in a single (batch) request.
+                "a_key": "A String",
+              },
+              "reason": "A String", # The reason of the error. This is a constant value that identifies the proximate cause of the error. Error reasons are unique within a particular domain of errors. This should be at most 63 characters and match a regular expression of `A-Z+[A-Z0-9]`, which represents UPPER_SNAKE_CASE.
+            },
+            "help": { # Provides links to documentation or for performing an out of band action. For example, if a quota check failed with an error indicating the calling project hasn't enabled the accessed service, this can contain a URL pointing directly to the right place in the developer console to flip the bit.
+              "links": [ # URL(s) pointing to additional information on handling the current error.
+                { # Describes a URL link.
+                  "description": "A String", # Describes what the link offers.
+                  "url": "A String", # The URL of the link.
+                },
+              ],
+            },
+            "localizedMessage": { # Provides a localized error message that is safe to return to the user which can be attached to an RPC error.
+              "locale": "A String", # The locale used following the specification defined at https://www.rfc-editor.org/rfc/bcp/bcp47.txt. Examples are: "en-US", "fr-CH", "es-MX"
+              "message": "A String", # The localized error message in the above locale.
+            },
+            "quotaInfo": { # Additional details for quota exceeded error for resource quota.
+              "dimensions": { # The map holding related quota dimensions.
+                "a_key": "A String",
+              },
+              "futureLimit": 3.14, # Future quota limit being rolled out. The limit's unit depends on the quota type or metric.
+              "limit": 3.14, # Current effective quota limit. The limit's unit depends on the quota type or metric.
+              "limitName": "A String", # The name of the quota limit.
+              "metricName": "A String", # The Compute Engine quota metric name.
+              "rolloutStatus": "A String", # Rollout status of the future quota limit.
+            },
+          },
+        ],
+        "location": "A String", # [Output Only] Indicates the field in the request that caused the error. This property is optional.
+        "message": "A String", # [Output Only] An optional, human-readable error message.
+      },
+    ],
+  },
+  "httpErrorMessage": "A String", # [Output Only] If the operation fails, this field contains the HTTP error message that was returned, such as `NOT FOUND`.
+  "httpErrorStatusCode": 42, # [Output Only] If the operation fails, this field contains the HTTP error status code that was returned. For example, a `404` means the resource was not found.
+  "id": "A String", # [Output Only] The unique identifier for the operation. This identifier is defined by the server.
+  "insertTime": "A String", # [Output Only] The time that this operation was requested. This value is in RFC3339 text format.
+  "instancesBulkInsertOperationMetadata": {
+    "perLocationStatus": { # Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "createdVmCount": 42, # [Output Only] Count of VMs successfully created so far.
+        "deletedVmCount": 42, # [Output Only] Count of VMs that got deleted during rollback.
+        "failedToCreateVmCount": 42, # [Output Only] Count of VMs that started creating but encountered an error.
+        "status": "A String", # [Output Only] Creation status of BulkInsert operation - information if the flow is rolling forward or rolling back.
+        "targetVmCount": 42, # [Output Only] Count of VMs originally planned to be created.
+      },
+    },
+  },
+  "kind": "compute#operation", # [Output Only] Type of the resource. Always `compute#operation` for Operation resources.
+  "name": "A String", # [Output Only] Name of the operation.
+  "operationGroupId": "A String", # [Output Only] An ID that represents a group of operations, such as when a group of operations results from a `bulkInsert` API request.
+  "operationType": "A String", # [Output Only] The type of operation, such as `insert`, `update`, or `delete`, and so on.
+  "progress": 42, # [Output Only] An optional progress indicator that ranges from 0 to 100. There is no requirement that this be linear or support any granularity of operations. This should not be used to guess when the operation will be complete. This number should monotonically increase as the operation progresses.
+  "region": "A String", # [Output Only] The URL of the region where the operation resides. Only applicable when performing regional operations.
+  "selfLink": "A String", # [Output Only] Server-defined URL for the resource.
+  "setCommonInstanceMetadataOperationMetadata": { # [Output Only] If the operation is for projects.setCommonInstanceMetadata, this field will contain information on all underlying zonal actions and their state.
+    "clientOperationId": "A String", # [Output Only] The client operation id.
+    "perLocationOperations": { # [Output Only] Status information per location (location name is key). Example key: zones/us-central1-a
+      "a_key": {
+        "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # [Output Only] If state is `ABANDONED` or `FAILED`, this field is populated.
+          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+          "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+            {
+              "a_key": "", # Properties of the object. Contains field @type with type URL.
+            },
+          ],
+          "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+        },
+        "state": "A String", # [Output Only] Status of the action, which can be one of the following: `PROPAGATING`, `PROPAGATED`, `ABANDONED`, `FAILED`, or `DONE`.
+      },
+    },
+  },
+  "startTime": "A String", # [Output Only] The time that this operation was started by the server. This value is in RFC3339 text format.
+  "status": "A String", # [Output Only] The status of the operation, which can be one of the following: `PENDING`, `RUNNING`, or `DONE`.
+  "statusMessage": "A String", # [Output Only] An optional textual description of the current status of the operation.
+  "targetId": "A String", # [Output Only] The unique target ID, which identifies a specific incarnation of the target resource.
+  "targetLink": "A String", # [Output Only] The URL of the resource that the operation modifies. For operations related to creating a snapshot, this points to the persistent disk that the snapshot was created from.
+  "user": "A String", # [Output Only] User who requested the operation, for example: `user@example.com` or `alice_smith_identifier (global/workforcePools/example-com-us-employees)`.
+  "warnings": [ # [Output Only] If warning messages are generated during processing of the operation, this field will be populated.
+    {
+      "code": "A String", # [Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.
+      "data": [ # [Output Only] Metadata about this warning in key: value format. For example: "data": [ { "key": "scope", "value": "zones/us-east1-d" }
+        {
+          "key": "A String", # [Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).
+          "value": "A String", # [Output Only] A warning data value corresponding to the key.
+        },
+      ],
+      "message": "A String", # [Output Only] A human-readable description of the warning code.
+    },
+  ],
+  "zone": "A String", # [Output Only] The URL of the zone where the operation resides. Only applicable when performing per-zone operations.
+}
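+A minimal usage sketch, not part of the generated reference: the fingerprint returned by get() is required for optimistic locking, and updateMask restricts the patch to the fields being changed; the project, zone, metadata key, and mask value shown here are placeholders.
+
+  # Hypothetical example: set a default metadata item for all VMs in the zone.
+  from googleapiclient import discovery
+
+  compute = discovery.build('compute', 'v1')
+  current = compute.instanceSettings().get(
+      project='my-project', zone='us-central1-a').execute()
+  op = compute.instanceSettings().patch(
+      project='my-project',
+      zone='us-central1-a',
+      body={
+          'fingerprint': current['fingerprint'],  # a stale fingerprint fails with 412 conditionNotMet
+          'metadata': {'items': {'example-key': 'example-value'}},
+      },
+      updateMask='metadata.items.example-key',  # assumed mask syntax; adjust to the fields you change
+  ).execute()
+  print(op['name'], op['status'])
+
+The returned object is a zonal Operation, so its completion can be tracked with the usual zoneOperations polling pattern.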
+
+ + \ No newline at end of file diff --git a/docs/dyn/compute_v1.instances.html b/docs/dyn/compute_v1.instances.html index 5b4e4ea421..a9fa4882b6 100644 --- a/docs/dyn/compute_v1.instances.html +++ b/docs/dyn/compute_v1.instances.html @@ -2289,7 +2289,7 @@

Method Details

}, ], "shortName": "A String", # [Output Only] The short name of the firewall policy. - "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL. + "type": "A String", # [Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL. }, ], "firewalls": [ # Effective firewalls on the instance. diff --git a/docs/dyn/connectors_v1.projects.locations.customConnectors.customConnectorVersions.html b/docs/dyn/connectors_v1.projects.locations.customConnectors.customConnectorVersions.html new file mode 100644 index 0000000000..11af68446d --- /dev/null +++ b/docs/dyn/connectors_v1.projects.locations.customConnectors.customConnectorVersions.html @@ -0,0 +1,124 @@ + + + +

Connectors API . projects . locations . customConnectors . customConnectorVersions

+

Instance Methods

+

+ close()

+

Close httplib2 connections.

+

+ delete(name, x__xgafv=None)

+

Deletes a single CustomConnectorVersion.

+

Method Details

+
+ close() +
Close httplib2 connections.
+
+ +
+ delete(name, x__xgafv=None) +
Deletes a single CustomConnectorVersion.
+
+Args:
+  name: string, Required. Resource name of the form: `projects/{project}/locations/{location}/customConnectors/{custom_connector}/customConnectorVersions/{custom_connector_version}` (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+  "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
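+A minimal usage sketch, not part of the generated reference: delete() returns a long-running Operation whose name can be polled until done is true; the resource name below is a placeholder.
+
+  # Hypothetical example: delete one custom connector version.
+  from googleapiclient import discovery
+
+  connectors = discovery.build('connectors', 'v1')
+  name = ('projects/my-project/locations/us-central1/'
+          'customConnectors/my-connector/customConnectorVersions/1')
+  op = (connectors.projects().locations().customConnectors()
+        .customConnectorVersions().delete(name=name).execute())
+  print(op['name'], op.get('done', False))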
+
+ + \ No newline at end of file diff --git a/docs/dyn/connectors_v1.projects.locations.customConnectors.html b/docs/dyn/connectors_v1.projects.locations.customConnectors.html index 9d14735abd..857ede4089 100644 --- a/docs/dyn/connectors_v1.projects.locations.customConnectors.html +++ b/docs/dyn/connectors_v1.projects.locations.customConnectors.html @@ -74,6 +74,11 @@

Connectors API . projects . locations . customConnectors

Instance Methods

+

+ customConnectorVersions() +

+

Returns the customConnectorVersions Resource.

+

close()

Close httplib2 connections.

diff --git a/docs/dyn/connectors_v1.projects.locations.providers.connectors.versions.html b/docs/dyn/connectors_v1.projects.locations.providers.connectors.versions.html index 88b6459585..8447d4a803 100644 --- a/docs/dyn/connectors_v1.projects.locations.providers.connectors.versions.html +++ b/docs/dyn/connectors_v1.projects.locations.providers.connectors.versions.html @@ -179,6 +179,7 @@

Method Details

"displayName": "A String", # Display name for authentication template. }, ], + "authOverrideEnabled": True or False, # Output only. Flag to mark the dynamic auth override. "configVariableTemplates": [ # Output only. List of config variables needed to create a connection. { # ConfigVariableTemplate provides metadata about a `ConfigVariable` that is used in a Connection. "authorizationCodeLink": { # This configuration captures the details required to render an authorization link for the OAuth Authorization Code Flow. # Authorization code link options. To be populated if `ValueType` is `AUTHORIZATION_CODE` @@ -633,6 +634,10 @@

Method Details

], }, ], + "schemaRefreshConfig": { # Config for connection schema refresh # Connection Schema Refresh Config + "useActionDisplayNames": True or False, # Whether to use displayName for actions in UI. + "useSynchronousSchemaRefresh": True or False, # Whether to use synchronous schema refresh. + }, "sslConfigTemplate": { # Ssl config details of a connector version # Output only. Ssl configuration supported by the Connector. "additionalVariables": [ # Any additional fields that need to be rendered { # ConfigVariableTemplate provides metadata about a `ConfigVariable` that is used in a Connection. @@ -795,6 +800,7 @@

Method Details

"displayName": "A String", # Display name for authentication template. }, ], + "authOverrideEnabled": True or False, # Output only. Flag to mark the dynamic auth override. "configVariableTemplates": [ # Output only. List of config variables needed to create a connection. { # ConfigVariableTemplate provides metadata about a `ConfigVariable` that is used in a Connection. "authorizationCodeLink": { # This configuration captures the details required to render an authorization link for the OAuth Authorization Code Flow. # Authorization code link options. To be populated if `ValueType` is `AUTHORIZATION_CODE` @@ -1249,6 +1255,10 @@

Method Details

], }, ], + "schemaRefreshConfig": { # Config for connection schema refresh # Connection Schema Refresh Config + "useActionDisplayNames": True or False, # Whether to use displayName for actions in UI. + "useSynchronousSchemaRefresh": True or False, # Whether to use synchronous schema refresh. + }, "sslConfigTemplate": { # Ssl config details of a connector version # Output only. Ssl configuration supported by the Connector. "additionalVariables": [ # Any additional fields that need to be rendered { # ConfigVariableTemplate provides metadata about a `ConfigVariable` that is used in a Connection. diff --git a/docs/dyn/container_v1.projects.locations.clusters.html b/docs/dyn/container_v1.projects.locations.clusters.html index 9d76b2296f..62452882fa 100644 --- a/docs/dyn/container_v1.projects.locations.clusters.html +++ b/docs/dyn/container_v1.projects.locations.clusters.html @@ -392,7 +392,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePools.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -669,6 +680,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -847,6 +860,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1318,7 +1333,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePools.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -1595,6 +1621,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1773,6 +1801,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2147,7 +2177,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePools.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -2424,6 +2465,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2602,6 +2645,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -3796,7 +3841,18 @@

Method Details

"enabled": True or False, # Whether the feature is enabled or not. }, "desiredDatabaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "desiredDatapathProvider": "A String", # The desired datapath provider for the cluster. @@ -3810,6 +3866,7 @@

Method Details

}, "desiredEnableCiliumClusterwideNetworkPolicy": True or False, # Enable/Disable Cilium Clusterwide Network Policy for the cluster. "desiredEnableFqdnNetworkPolicy": True or False, # Enable/Disable FQDN Network Policy for the cluster. + "desiredEnableMultiNetworking": True or False, # Enable/Disable Multi-Networking for the cluster "desiredEnablePrivateEndpoint": True or False, # Enable/Disable private endpoint for the cluster's master. "desiredFleet": { # Fleet is the fleet configuration for the cluster. # The desired fleet configuration for the cluster. "membership": "A String", # [Output only] The full resource name of the registered fleet membership of the cluster, in the format `//gkehub.googleapis.com/projects/*/locations/*/memberships/*`. diff --git a/docs/dyn/container_v1.projects.locations.clusters.nodePools.html b/docs/dyn/container_v1.projects.locations.clusters.nodePools.html index 6724df7958..6b81bdc969 100644 --- a/docs/dyn/container_v1.projects.locations.clusters.nodePools.html +++ b/docs/dyn/container_v1.projects.locations.clusters.nodePools.html @@ -260,6 +260,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -655,6 +657,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -916,6 +920,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. diff --git a/docs/dyn/container_v1.projects.zones.clusters.html b/docs/dyn/container_v1.projects.zones.clusters.html index 544d4f5c44..36f5d9951d 100644 --- a/docs/dyn/container_v1.projects.zones.clusters.html +++ b/docs/dyn/container_v1.projects.zones.clusters.html @@ -471,7 +471,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePools.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -748,6 +759,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -926,6 +939,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1397,7 +1412,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePools.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -1674,6 +1700,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1852,6 +1880,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2270,7 +2300,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePools.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -2547,6 +2588,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2725,6 +2768,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -3823,7 +3868,18 @@

Method Details

"enabled": True or False, # Whether the feature is enabled or not. }, "desiredDatabaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "desiredDatapathProvider": "A String", # The desired datapath provider for the cluster. @@ -3837,6 +3893,7 @@

Method Details

}, "desiredEnableCiliumClusterwideNetworkPolicy": True or False, # Enable/Disable Cilium Clusterwide Network Policy for the cluster. "desiredEnableFqdnNetworkPolicy": True or False, # Enable/Disable FQDN Network Policy for the cluster. + "desiredEnableMultiNetworking": True or False, # Enable/Disable Multi-Networking for the cluster "desiredEnablePrivateEndpoint": True or False, # Enable/Disable private endpoint for the cluster's master. "desiredFleet": { # Fleet is the fleet configuration for the cluster. # The desired fleet configuration for the cluster. "membership": "A String", # [Output only] The full resource name of the registered fleet membership of the cluster, in the format `//gkehub.googleapis.com/projects/*/locations/*/memberships/*`. diff --git a/docs/dyn/container_v1.projects.zones.clusters.nodePools.html b/docs/dyn/container_v1.projects.zones.clusters.nodePools.html index 9c3f24d9f4..be55683aee 100644 --- a/docs/dyn/container_v1.projects.zones.clusters.nodePools.html +++ b/docs/dyn/container_v1.projects.zones.clusters.nodePools.html @@ -325,6 +325,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -720,6 +722,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -981,6 +985,8 @@

Method Details

"sandboxConfig": { # SandboxConfig contains configurations of the sandbox to use for the node. # Sandbox configuration for this node. "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. diff --git a/docs/dyn/container_v1beta1.projects.locations.clusters.html b/docs/dyn/container_v1beta1.projects.locations.clusters.html index 96e8eb2563..0e56b459d2 100644 --- a/docs/dyn/container_v1beta1.projects.locations.clusters.html +++ b/docs/dyn/container_v1beta1.projects.locations.clusters.html @@ -412,7 +412,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePool.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -706,6 +717,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -906,6 +919,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1427,7 +1442,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePool.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -1721,6 +1747,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1921,6 +1949,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2345,7 +2375,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePool.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -2639,6 +2680,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2839,6 +2882,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -4087,7 +4132,18 @@

Method Details

"enabled": True or False, # Whether the feature is enabled or not. }, "desiredDatabaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "desiredDatapathProvider": "A String", # The desired datapath provider for the cluster. @@ -4101,6 +4157,7 @@

Method Details

}, "desiredEnableCiliumClusterwideNetworkPolicy": True or False, # Enable/Disable Cilium Clusterwide Network Policy for the cluster. "desiredEnableFqdnNetworkPolicy": True or False, # Enable/Disable FQDN Network Policy for the cluster. + "desiredEnableMultiNetworking": True or False, # Enable/Disable Multi-Networking for the cluster "desiredEnablePrivateEndpoint": True or False, # Enable/Disable private endpoint for the cluster's master. "desiredFleet": { # Fleet is the fleet configuration for the cluster. # The desired fleet configuration for the cluster. "membership": "A String", # [Output only] The full resource name of the registered fleet membership of the cluster, in the format `//gkehub.googleapis.com/projects/*/locations/*/memberships/*`. diff --git a/docs/dyn/container_v1beta1.projects.locations.clusters.nodePools.html b/docs/dyn/container_v1beta1.projects.locations.clusters.nodePools.html index a5d2b00208..4b1e727ac4 100644 --- a/docs/dyn/container_v1beta1.projects.locations.clusters.nodePools.html +++ b/docs/dyn/container_v1beta1.projects.locations.clusters.nodePools.html @@ -273,6 +273,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -685,6 +687,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -963,6 +967,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. diff --git a/docs/dyn/container_v1beta1.projects.zones.clusters.html b/docs/dyn/container_v1beta1.projects.zones.clusters.html index bba874b93d..36603ba329 100644 --- a/docs/dyn/container_v1beta1.projects.zones.clusters.html +++ b/docs/dyn/container_v1beta1.projects.zones.clusters.html @@ -498,7 +498,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePool.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -792,6 +803,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -992,6 +1005,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1513,7 +1528,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePool.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -1807,6 +1833,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2007,6 +2035,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2475,7 +2505,18 @@

Method Details

"currentNodeCount": 42, # [Output only] The number of nodes currently in the cluster. Deprecated. Call Kubernetes API directly to retrieve node information. "currentNodeVersion": "A String", # [Output only] Deprecated, use [NodePool.version](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters.nodePools) instead. The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes. "databaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "defaultMaxPodsConstraint": { # Constraints applied to pods. # The default constraint on the maximum number of pods that can be run simultaneously on a node in the node pool of this cluster. Only honored if cluster created with IP Alias support. @@ -2769,6 +2810,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -2969,6 +3012,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -4114,7 +4159,18 @@

Method Details

"enabled": True or False, # Whether the feature is enabled or not. }, "desiredDatabaseEncryption": { # Configuration of etcd encryption. # Configuration of etcd encryption. + "currentState": "A String", # Output only. The current state of etcd encryption. + "decryptionKeys": [ # Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource. + "A String", + ], "keyName": "A String", # Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key + "lastOperationErrors": [ # Output only. Records errors seen during DatabaseEncryption update operations. + { # OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration. + "errorMessage": "A String", # Description of the error seen during the operation. + "keyName": "A String", # CloudKMS key resource that had the error. + "timestamp": "A String", # Time when the CloudKMS error was seen. + }, + ], "state": "A String", # The desired state of etcd encryption. }, "desiredDatapathProvider": "A String", # The desired datapath provider for the cluster. @@ -4128,6 +4184,7 @@

Method Details

}, "desiredEnableCiliumClusterwideNetworkPolicy": True or False, # Enable/Disable Cilium Clusterwide Network Policy for the cluster. "desiredEnableFqdnNetworkPolicy": True or False, # Enable/Disable FQDN Network Policy for the cluster. + "desiredEnableMultiNetworking": True or False, # Enable/Disable Multi-Networking for the cluster "desiredEnablePrivateEndpoint": True or False, # Enable/Disable private endpoint for the cluster's master. "desiredFleet": { # Fleet is the fleet configuration for the cluster. # The desired fleet configuration for the cluster. "membership": "A String", # [Output only] The full resource name of the registered fleet membership of the cluster, in the format `//gkehub.googleapis.com/projects/*/locations/*/memberships/*`. diff --git a/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html b/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html index 23ef38ec13..e65ac69619 100644 --- a/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html +++ b/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html @@ -338,6 +338,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -750,6 +752,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. @@ -1028,6 +1032,8 @@

Method Details

"sandboxType": "A String", # Type of the sandbox to use for the node (e.g. 'gvisor') "type": "A String", # Type of the sandbox to use for the node. }, + "secondaryBootDiskUpdateStrategy": { # SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks. # Secondary boot disk update strategy. + }, "secondaryBootDisks": [ # List of secondary boot disks attached to the nodes. { # SecondaryBootDisk represents a persistent disk attached to a node with special configurations based on its mode. "diskImage": "A String", # Fully-qualified resource ID for an existing disk image. diff --git a/docs/dyn/customsearch_v1.cse.html b/docs/dyn/customsearch_v1.cse.html index 7f72543ca3..e63c91b0d8 100644 --- a/docs/dyn/customsearch_v1.cse.html +++ b/docs/dyn/customsearch_v1.cse.html @@ -83,7 +83,7 @@

Instance Methods

close()

Close httplib2 connections.

- list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, sort=None, start=None, x__xgafv=None)

+ list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, snippetLength=None, sort=None, start=None, x__xgafv=None)

Returns metadata about the search performed, metadata about the engine used for the search, and the search results.

Method Details

@@ -92,7 +92,7 @@

Method Details

- list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, sort=None, start=None, x__xgafv=None) + list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, snippetLength=None, sort=None, start=None, x__xgafv=None)
Returns metadata about the search performed, metadata about the engine used for the search, and the search results.
 
 Args:
@@ -175,6 +175,7 @@ 

Method Details

siteSearchFilterUndefined - Filter mode unspecified. e - Exclude results from the listed sites. i - Include only results from the listed sites. + snippetLength: integer, Optional. Maximum length of snippet text, in characters, to be returned with results. * Valid values are integers between 1 and 160, inclusive. sort: string, The sort expression to apply to the results. The sort parameter specifies that the results be sorted according to the specified expression i.e. sort by date. [Example: sort=date](https://developers.google.com/custom-search/docs/structured_search#sort-by-attribute). start: integer, The index of the first result to return. The default number of results per page is 10, so `&start=11` would start at the top of the second page of results. **Note**: The JSON API will never return more than 100 results, even if more than 100 documents match the query, so setting the sum of `start + num` to a number greater than 100 will produce an error. Also note that the maximum value for `num` is 10. x__xgafv: string, V1 error format. diff --git a/docs/dyn/customsearch_v1.cse.siterestrict.html b/docs/dyn/customsearch_v1.cse.siterestrict.html index 45f72c2c84..ef6722e8a4 100644 --- a/docs/dyn/customsearch_v1.cse.siterestrict.html +++ b/docs/dyn/customsearch_v1.cse.siterestrict.html @@ -78,7 +78,7 @@
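The new `snippetLength` parameter caps snippet text at 1 to 160 characters. A minimal sketch with the generated client, assuming an API key and a Programmable Search Engine ID (placeholders below); the same parameter is accepted by `cse().siterestrict().list()` in the file that follows.

from googleapiclient.discovery import build

search = build("customsearch", "v1", developerKey="YOUR_API_KEY")

res = search.cse().list(
    q="flowers",
    cx="YOUR_SEARCH_ENGINE_ID",
    num=5,
    snippetLength=80,  # valid values are integers between 1 and 160, inclusive
).execute()

for item in res.get("items", []):
    print(item["title"], "-", item.get("snippet", ""))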

Instance Methods

close()

Close httplib2 connections.

- list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, sort=None, start=None, x__xgafv=None)

+ list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, snippetLength=None, sort=None, start=None, x__xgafv=None)

Returns metadata about the search performed, metadata about the engine used for the search, and the search results. Uses a small set of url patterns.

Method Details

@@ -87,7 +87,7 @@

Method Details

- list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, sort=None, start=None, x__xgafv=None) + list(c2coff=None, cr=None, cx=None, dateRestrict=None, exactTerms=None, excludeTerms=None, fileType=None, filter=None, gl=None, googlehost=None, highRange=None, hl=None, hq=None, imgColorType=None, imgDominantColor=None, imgSize=None, imgType=None, linkSite=None, lowRange=None, lr=None, num=None, orTerms=None, q=None, relatedSite=None, rights=None, safe=None, searchType=None, siteSearch=None, siteSearchFilter=None, snippetLength=None, sort=None, start=None, x__xgafv=None)
Returns metadata about the search performed, metadata about the engine used for the search, and the search results. Uses a small set of url patterns.
 
 Args:
@@ -170,6 +170,7 @@ 

Method Details

siteSearchFilterUndefined - Filter mode unspecified. e - Exclude results from the listed sites. i - Include only results from the listed sites. + snippetLength: integer, Optional. Maximum length of snippet text, in characters, to be returned with results. * Valid values are integers between 1 and 160, inclusive. sort: string, The sort expression to apply to the results. The sort parameter specifies that the results be sorted according to the specified expression i.e. sort by date. [Example: sort=date](https://developers.google.com/custom-search/docs/structured_search#sort-by-attribute). start: integer, The index of the first result to return. The default number of results per page is 10, so `&start=11` would start at the top of the second page of results. **Note**: The JSON API will never return more than 100 results, even if more than 100 documents match the query, so setting the sum of `start + num` to a number greater than 100 will produce an error. Also note that the maximum value for `num` is 10. x__xgafv: string, V1 error format. diff --git a/docs/dyn/datacatalog_v1.projects.locations.tagTemplates.html b/docs/dyn/datacatalog_v1.projects.locations.tagTemplates.html index 295be52268..3aa880abbb 100644 --- a/docs/dyn/datacatalog_v1.projects.locations.tagTemplates.html +++ b/docs/dyn/datacatalog_v1.projects.locations.tagTemplates.html @@ -119,6 +119,7 @@

Method Details

The object takes the form of: { # A tag template defines a tag that can have one or more typed fields. The template is used to create tags that are attached to Google Cloud resources. [Tag template roles] (https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. For example, see the [TagTemplate User] (https://cloud.google.com/data-catalog/docs/how-to/template-user) role that includes a permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Optional. Transfer status of the TagTemplate "displayName": "A String", # Display name for this template. Defaults to an empty string. The name must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), and can't start or end with spaces. The maximum length is 200 characters. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. The map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. The IDs have the following limitations: * Can contain uppercase and lowercase letters, numbers (0-9) and underscores (_). * Must be at least 1 character and at most 64 characters long. * Must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -153,6 +154,7 @@

Method Details

An object of the form: { # A tag template defines a tag that can have one or more typed fields. The template is used to create tags that are attached to Google Cloud resources. [Tag template roles] (https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. For example, see the [TagTemplate User] (https://cloud.google.com/data-catalog/docs/how-to/template-user) role that includes a permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Optional. Transfer status of the TagTemplate "displayName": "A String", # Display name for this template. Defaults to an empty string. The name must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), and can't start or end with spaces. The maximum length is 200 characters. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. The map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. The IDs have the following limitations: * Can contain uppercase and lowercase letters, numbers (0-9) and underscores (_). * Must be at least 1 character and at most 64 characters long. * Must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -212,6 +214,7 @@

Method Details

An object of the form: { # A tag template defines a tag that can have one or more typed fields. The template is used to create tags that are attached to Google Cloud resources. [Tag template roles] (https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. For example, see the [TagTemplate User] (https://cloud.google.com/data-catalog/docs/how-to/template-user) role that includes a permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Optional. Transfer status of the TagTemplate "displayName": "A String", # Display name for this template. Defaults to an empty string. The name must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), and can't start or end with spaces. The maximum length is 200 characters. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. The map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. The IDs have the following limitations: * Can contain uppercase and lowercase letters, numbers (0-9) and underscores (_). * Must be at least 1 character and at most 64 characters long. * Must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -290,6 +293,7 @@

Method Details

The object takes the form of: { # A tag template defines a tag that can have one or more typed fields. The template is used to create tags that are attached to Google Cloud resources. [Tag template roles] (https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. For example, see the [TagTemplate User] (https://cloud.google.com/data-catalog/docs/how-to/template-user) role that includes a permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Optional. Transfer status of the TagTemplate "displayName": "A String", # Display name for this template. Defaults to an empty string. The name must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), and can't start or end with spaces. The maximum length is 200 characters. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. The map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. The IDs have the following limitations: * Can contain uppercase and lowercase letters, numbers (0-9) and underscores (_). * Must be at least 1 character and at most 64 characters long. * Must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -324,6 +328,7 @@

Method Details

An object of the form: { # A tag template defines a tag that can have one or more typed fields. The template is used to create tags that are attached to Google Cloud resources. [Tag template roles] (https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. For example, see the [TagTemplate User] (https://cloud.google.com/data-catalog/docs/how-to/template-user) role that includes a permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Optional. Transfer status of the TagTemplate "displayName": "A String", # Display name for this template. Defaults to an empty string. The name must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), and can't start or end with spaces. The maximum length is 200 characters. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. The map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. The IDs have the following limitations: * Can contain uppercase and lowercase letters, numbers (0-9) and underscores (_). * Must be at least 1 character and at most 64 characters long. * Must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. diff --git a/docs/dyn/datacatalog_v1beta1.projects.locations.tagTemplates.html b/docs/dyn/datacatalog_v1beta1.projects.locations.tagTemplates.html index b2b1ac41ad..3fc5822d2c 100644 --- a/docs/dyn/datacatalog_v1beta1.projects.locations.tagTemplates.html +++ b/docs/dyn/datacatalog_v1beta1.projects.locations.tagTemplates.html @@ -119,6 +119,7 @@
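The new `dataplexTransferStatus` field is returned on TagTemplate resources (optional in v1 above, output only in the v1beta1 surface below), so existing read paths pick it up with no other change. A minimal sketch, assuming Application Default Credentials and a placeholder template name:

from googleapiclient.discovery import build

datacatalog = build("datacatalog", "v1")  # uses Application Default Credentials

template = datacatalog.projects().locations().tagTemplates().get(
    name="projects/my-project/locations/us-central1/tagTemplates/my_template"
).execute()

# New field; absent when no Dataplex transfer applies to this template.
print(template.get("dataplexTransferStatus"))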

Method Details

The object takes the form of: { # A tag template defines a tag, which can have one or more typed fields. The template is used to create and attach the tag to Google Cloud resources. [Tag template roles](https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. See, for example, the [TagTemplate User](https://cloud.google.com/data-catalog/docs/how-to/template-user) role, which includes permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Output only. Transfer status of the TagTemplate "displayName": "A String", # The display name for this template. Defaults to an empty string. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. This map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. Field IDs can contain letters (both uppercase and lowercase), numbers (0-9) and underscores (_). Field IDs must be at least 1 character long and at most 64 characters long. Field IDs must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -152,6 +153,7 @@

Method Details

An object of the form: { # A tag template defines a tag, which can have one or more typed fields. The template is used to create and attach the tag to Google Cloud resources. [Tag template roles](https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. See, for example, the [TagTemplate User](https://cloud.google.com/data-catalog/docs/how-to/template-user) role, which includes permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Output only. Transfer status of the TagTemplate "displayName": "A String", # The display name for this template. Defaults to an empty string. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. This map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. Field IDs can contain letters (both uppercase and lowercase), numbers (0-9) and underscores (_). Field IDs must be at least 1 character long and at most 64 characters long. Field IDs must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -210,6 +212,7 @@

Method Details

An object of the form: { # A tag template defines a tag, which can have one or more typed fields. The template is used to create and attach the tag to Google Cloud resources. [Tag template roles](https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. See, for example, the [TagTemplate User](https://cloud.google.com/data-catalog/docs/how-to/template-user) role, which includes permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Output only. Transfer status of the TagTemplate "displayName": "A String", # The display name for this template. Defaults to an empty string. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. This map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. Field IDs can contain letters (both uppercase and lowercase), numbers (0-9) and underscores (_). Field IDs must be at least 1 character long and at most 64 characters long. Field IDs must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -287,6 +290,7 @@

Method Details

The object takes the form of: { # A tag template defines a tag, which can have one or more typed fields. The template is used to create and attach the tag to Google Cloud resources. [Tag template roles](https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. See, for example, the [TagTemplate User](https://cloud.google.com/data-catalog/docs/how-to/template-user) role, which includes permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Output only. Transfer status of the TagTemplate "displayName": "A String", # The display name for this template. Defaults to an empty string. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. This map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. Field IDs can contain letters (both uppercase and lowercase), numbers (0-9) and underscores (_). Field IDs must be at least 1 character long and at most 64 characters long. Field IDs must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. @@ -320,6 +324,7 @@

Method Details

An object of the form: { # A tag template defines a tag, which can have one or more typed fields. The template is used to create and attach the tag to Google Cloud resources. [Tag template roles](https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. See, for example, the [TagTemplate User](https://cloud.google.com/data-catalog/docs/how-to/template-user) role, which includes permission to use the tag template to tag resources. + "dataplexTransferStatus": "A String", # Output only. Transfer status of the TagTemplate "displayName": "A String", # The display name for this template. Defaults to an empty string. "fields": { # Required. Map of tag template field IDs to the settings for the field. This map is an exhaustive list of the allowed fields. This map must contain at least one field and at most 500 fields. The keys to this map are tag template field IDs. Field IDs can contain letters (both uppercase and lowercase), numbers (0-9) and underscores (_). Field IDs must be at least 1 character long and at most 64 characters long. Field IDs must start with a letter or underscore. "a_key": { # The template for an individual field within a tag template. diff --git a/docs/dyn/dataflow_v1b3.projects.locations.templates.html b/docs/dyn/dataflow_v1b3.projects.locations.templates.html index c0f98ffde5..c8fe623ab7 100644 --- a/docs/dyn/dataflow_v1b3.projects.locations.templates.html +++ b/docs/dyn/dataflow_v1b3.projects.locations.templates.html @@ -103,7 +103,7 @@

Method Details

The object takes the form of: { # A request to create a Cloud Dataflow job from a template. - "environment": { # The environment values to set at runtime. # The runtime environment for the job. + "environment": { # The environment values to set at runtime. LINT.IfChange # The runtime environment for the job. "additionalExperiments": [ # Optional. Additional experiment flags for the job, specified with the `--experiments` option. "A String", ], @@ -583,7 +583,7 @@

Method Details

The object takes the form of: { # Parameters to provide to the template being launched. Note that the [metadata in the pipeline code] (https://cloud.google.com/dataflow/docs/guides/templates/creating-templates#metadata) determines which runtime parameters are valid. - "environment": { # The environment values to set at runtime. # The runtime environment for the job. + "environment": { # The environment values to set at runtime. LINT.IfChange # The runtime environment for the job. "additionalExperiments": [ # Optional. Additional experiment flags for the job, specified with the `--experiments` option. "A String", ], @@ -617,9 +617,9 @@

Method Details

"update": True or False, # If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state. } - dynamicTemplate_gcsPath: string, Path to dynamic template spec file on Cloud Storage. The file must be a Json serialized DynamicTemplateFieSpec object. + dynamicTemplate_gcsPath: string, Path to the dynamic template specification file on Cloud Storage. The file must be a JSON serialized `DynamicTemplateFileSpec` object. dynamicTemplate_stagingLocation: string, Cloud Storage path for staging dependencies. Must be a valid Cloud Storage URL, beginning with `gs://`. - gcsPath: string, A Cloud Storage path to the template from which to create the job. Must be valid Cloud Storage URL, beginning with 'gs://'. + gcsPath: string, A Cloud Storage path to the template to use to create the job. Must be valid Cloud Storage URL, beginning with `gs://`. validateOnly: boolean, If true, the request is validated but not actually executed. Defaults to false. x__xgafv: string, V1 error format. Allowed values diff --git a/docs/dyn/dataflow_v1b3.projects.templates.html b/docs/dyn/dataflow_v1b3.projects.templates.html index fff7a53f30..425d593c8b 100644 --- a/docs/dyn/dataflow_v1b3.projects.templates.html +++ b/docs/dyn/dataflow_v1b3.projects.templates.html @@ -102,7 +102,7 @@

Method Details

The object takes the form of: { # A request to create a Cloud Dataflow job from a template. - "environment": { # The environment values to set at runtime. # The runtime environment for the job. + "environment": { # The environment values to set at runtime. LINT.IfChange # The runtime environment for the job. "additionalExperiments": [ # Optional. Additional experiment flags for the job, specified with the `--experiments` option. "A String", ], @@ -581,7 +581,7 @@

Method Details

The object takes the form of: { # Parameters to provide to the template being launched. Note that the [metadata in the pipeline code] (https://cloud.google.com/dataflow/docs/guides/templates/creating-templates#metadata) determines which runtime parameters are valid. - "environment": { # The environment values to set at runtime. # The runtime environment for the job. + "environment": { # The environment values to set at runtime. LINT.IfChange # The runtime environment for the job. "additionalExperiments": [ # Optional. Additional experiment flags for the job, specified with the `--experiments` option. "A String", ], @@ -615,9 +615,9 @@

Method Details

"update": True or False, # If set, replace the existing pipeline with the name specified by jobName with this pipeline, preserving state. } - dynamicTemplate_gcsPath: string, Path to dynamic template spec file on Cloud Storage. The file must be a Json serialized DynamicTemplateFieSpec object. + dynamicTemplate_gcsPath: string, Path to the dynamic template specification file on Cloud Storage. The file must be a JSON serialized `DynamicTemplateFileSpec` object. dynamicTemplate_stagingLocation: string, Cloud Storage path for staging dependencies. Must be a valid Cloud Storage URL, beginning with `gs://`. - gcsPath: string, A Cloud Storage path to the template from which to create the job. Must be valid Cloud Storage URL, beginning with 'gs://'. + gcsPath: string, A Cloud Storage path to the template to use to create the job. Must be valid Cloud Storage URL, beginning with `gs://`. location: string, The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) to which to direct the request. validateOnly: boolean, If true, the request is validated but not actually executed. Defaults to false. x__xgafv: string, V1 error format. diff --git a/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html b/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html index 844b48b8a6..fec604e0f8 100644 --- a/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html +++ b/docs/dyn/datamigration_v1.projects.locations.migrationJobs.html @@ -200,7 +200,7 @@

Method Details

"provider": "A String", # The database provider. }, "sqlserverHomogeneousMigrationJobConfig": { # Configuration for homogeneous migration to Cloud SQL for SQL Server. # Optional. Configuration for SQL Server homogeneous migration. - "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_backup_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB_backup_1691448254.trn, use pattern: (?.*)_backup_(?.*).trn + "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB.1691448254.trn, use pattern: (?.*)\.(?\d*).trn or (?.*)\.(?\d*).trn "databaseBackups": [ # Required. Backup details per database in Cloud Storage. { # Specifies the backup details for a single database in Cloud Storage for homogeneous migration to Cloud SQL for SQL Server. "database": "A String", # Required. Name of a SQL Server database for which to define backup configuration. @@ -463,7 +463,7 @@

Method Details

"provider": "A String", # The database provider. }, "sqlserverHomogeneousMigrationJobConfig": { # Configuration for homogeneous migration to Cloud SQL for SQL Server. # Optional. Configuration for SQL Server homogeneous migration. - "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_backup_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB_backup_1691448254.trn, use pattern: (?.*)_backup_(?.*).trn + "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB.1691448254.trn, use pattern: (?.*)\.(?\d*).trn or (?.*)\.(?\d*).trn "databaseBackups": [ # Required. Backup details per database in Cloud Storage. { # Specifies the backup details for a single database in Cloud Storage for homogeneous migration to Cloud SQL for SQL Server. "database": "A String", # Required. Name of a SQL Server database for which to define backup configuration. @@ -608,7 +608,7 @@

Method Details

"provider": "A String", # The database provider. }, "sqlserverHomogeneousMigrationJobConfig": { # Configuration for homogeneous migration to Cloud SQL for SQL Server. # Optional. Configuration for SQL Server homogeneous migration. - "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_backup_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB_backup_1691448254.trn, use pattern: (?.*)_backup_(?.*).trn + "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB.1691448254.trn, use pattern: (?.*)\.(?\d*).trn or (?.*)\.(?\d*).trn "databaseBackups": [ # Required. Backup details per database in Cloud Storage. { # Specifies the backup details for a single database in Cloud Storage for homogeneous migration to Cloud SQL for SQL Server. "database": "A String", # Required. Name of a SQL Server database for which to define backup configuration. @@ -714,7 +714,7 @@

Method Details

"provider": "A String", # The database provider. }, "sqlserverHomogeneousMigrationJobConfig": { # Configuration for homogeneous migration to Cloud SQL for SQL Server. # Optional. Configuration for SQL Server homogeneous migration. - "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_backup_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB_backup_1691448254.trn, use pattern: (?.*)_backup_(?.*).trn + "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB.1691448254.trn, use pattern: (?.*)\.(?\d*).trn or (?.*)\.(?\d*).trn "databaseBackups": [ # Required. Backup details per database in Cloud Storage. { # Specifies the backup details for a single database in Cloud Storage for homogeneous migration to Cloud SQL for SQL Server. "database": "A String", # Required. Name of a SQL Server database for which to define backup configuration. @@ -1153,7 +1153,7 @@

Method Details

"provider": "A String", # The database provider. }, "sqlserverHomogeneousMigrationJobConfig": { # Configuration for homogeneous migration to Cloud SQL for SQL Server. # Optional. Configuration for SQL Server homogeneous migration. - "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_backup_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB_backup_1691448254.trn, use pattern: (?.*)_backup_(?.*).trn + "backupFilePattern": "A String", # Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_20230802_155400.trn, use pattern: (?.*)_backup_(?\d{4})(?\d{2})(?\d{2})_(?\d{2})(?\d{2})(?\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB.1691448254.trn, use pattern: (?.*)\.(?\d*).trn or (?.*)\.(?\d*).trn "databaseBackups": [ # Required. Backup details per database in Cloud Storage. { # Specifies the backup details for a single database in Cloud Storage for homogeneous migration to Cloud SQL for SQL Server. "database": "A String", # Required. Name of a SQL Server database for which to define backup configuration. diff --git a/docs/dyn/dataplex_v1.projects.locations.dataScans.html b/docs/dyn/dataplex_v1.projects.locations.dataScans.html index 0ee5b0057a..ba09700ffe 100644 --- a/docs/dyn/dataplex_v1.projects.locations.dataScans.html +++ b/docs/dyn/dataplex_v1.projects.locations.dataScans.html @@ -136,16 +136,6 @@

Method Details

"entity": "A String", # Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}. "resource": "A String", # Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID }, - "dataDocumentationResult": { # The output of a DataDocumentation scan. # Output only. The result of the data documentation scan. - "queries": [ # Output only. The list of generated queries. - { # A query in data documentation - "description": "A String", # Output only. The description for the query. - "sql": "A String", # Output only. The SQL query string which can be executed. - }, - ], - }, - "dataDocumentationSpec": { # DataDocumentation scan related spec. # DataDocumentationScan related setting. - }, "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan. "postScanActionsResult": { # The result of post scan actions of DataProfileScan job. # Output only. The result of post scan actions. "bigqueryExportResult": { # The result of BigQuery export post scan action. # Output only. The result of BigQuery export post scan action. @@ -552,16 +542,6 @@

Method Details

"entity": "A String", # Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}. "resource": "A String", # Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID }, - "dataDocumentationResult": { # The output of a DataDocumentation scan. # Output only. The result of the data documentation scan. - "queries": [ # Output only. The list of generated queries. - { # A query in data documentation - "description": "A String", # Output only. The description for the query. - "sql": "A String", # Output only. The SQL query string which can be executed. - }, - ], - }, - "dataDocumentationSpec": { # DataDocumentation scan related spec. # DataDocumentationScan related setting. - }, "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan. "postScanActionsResult": { # The result of post scan actions of DataProfileScan job. # Output only. The result of post scan actions. "bigqueryExportResult": { # The result of BigQuery export post scan action. # Output only. The result of BigQuery export post scan action. @@ -887,16 +867,6 @@

Method Details

"entity": "A String", # Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}. "resource": "A String", # Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID }, - "dataDocumentationResult": { # The output of a DataDocumentation scan. # Output only. The result of the data documentation scan. - "queries": [ # Output only. The list of generated queries. - { # A query in data documentation - "description": "A String", # Output only. The description for the query. - "sql": "A String", # Output only. The SQL query string which can be executed. - }, - ], - }, - "dataDocumentationSpec": { # DataDocumentation scan related spec. # DataDocumentationScan related setting. - }, "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan. "postScanActionsResult": { # The result of post scan actions of DataProfileScan job. # Output only. The result of post scan actions. "bigqueryExportResult": { # The result of BigQuery export post scan action. # Output only. The result of BigQuery export post scan action. @@ -1183,16 +1153,6 @@

Method Details

"entity": "A String", # Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}. "resource": "A String", # Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID }, - "dataDocumentationResult": { # The output of a DataDocumentation scan. # Output only. The result of the data documentation scan. - "queries": [ # Output only. The list of generated queries. - { # A query in data documentation - "description": "A String", # Output only. The description for the query. - "sql": "A String", # Output only. The SQL query string which can be executed. - }, - ], - }, - "dataDocumentationSpec": { # DataDocumentation scan related spec. # DataDocumentationScan related setting. - }, "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan. "postScanActionsResult": { # The result of post scan actions of DataProfileScan job. # Output only. The result of post scan actions. "bigqueryExportResult": { # The result of BigQuery export post scan action. # Output only. The result of BigQuery export post scan action. @@ -1496,16 +1456,6 @@

Method Details

{ # Run DataScan Response. "job": { # A DataScanJob represents an instance of DataScan execution. # DataScanJob created by RunDataScan request. - "dataDocumentationResult": { # The output of a DataDocumentation scan. # Output only. The result of the data documentation scan. - "queries": [ # Output only. The list of generated queries. - { # A query in data documentation - "description": "A String", # Output only. The description for the query. - "sql": "A String", # Output only. The SQL query string which can be executed. - }, - ], - }, - "dataDocumentationSpec": { # DataDocumentation scan related spec. # Output only. DataDocumentationScan related setting. - }, "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan. "postScanActionsResult": { # The result of post scan actions of DataProfileScan job. # Output only. The result of post scan actions. "bigqueryExportResult": { # The result of BigQuery export post scan action. # Output only. The result of BigQuery export post scan action. diff --git a/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html b/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html index ee5b0389ae..c3828b61ae 100644 --- a/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html +++ b/docs/dyn/dataplex_v1.projects.locations.dataScans.jobs.html @@ -180,16 +180,6 @@

Method Details

An object of the form: { # A DataScanJob represents an instance of DataScan execution. - "dataDocumentationResult": { # The output of a DataDocumentation scan. # Output only. The result of the data documentation scan. - "queries": [ # Output only. The list of generated queries. - { # A query in data documentation - "description": "A String", # Output only. The description for the query. - "sql": "A String", # Output only. The SQL query string which can be executed. - }, - ], - }, - "dataDocumentationSpec": { # DataDocumentation scan related spec. # Output only. DataDocumentationScan related setting. - }, "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan. "postScanActionsResult": { # The result of post scan actions of DataProfileScan job. # Output only. The result of post scan actions. "bigqueryExportResult": { # The result of BigQuery export post scan action. # Output only. The result of BigQuery export post scan action. @@ -444,16 +434,6 @@
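With the data-documentation fields removed, a DataScanJob now reports results only through the profile and quality blocks shown above. A minimal read sketch, assuming Application Default Credentials and a placeholder job name; `rowCount` is assumed to be present only for profile scans.

from googleapiclient.discovery import build

dataplex = build("dataplex", "v1")  # uses Application Default Credentials

job = dataplex.projects().locations().dataScans().jobs().get(
    name=("projects/my-project/locations/us-central1/"
          "dataScans/my-scan/jobs/my-job-id")
).execute()

print(job.get("state"))
print(job.get("dataProfileResult", {}).get("rowCount"))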

Method Details

{ # List DataScanJobs response. "dataScanJobs": [ # DataScanJobs (BASIC view only) under a given dataScan. { # A DataScanJob represents an instance of DataScan execution. - "dataDocumentationResult": { # The output of a DataDocumentation scan. # Output only. The result of the data documentation scan. - "queries": [ # Output only. The list of generated queries. - { # A query in data documentation - "description": "A String", # Output only. The description for the query. - "sql": "A String", # Output only. The SQL query string which can be executed. - }, - ], - }, - "dataDocumentationSpec": { # DataDocumentation scan related spec. # Output only. DataDocumentationScan related setting. - }, "dataProfileResult": { # DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result. # Output only. The result of the data profile scan. "postScanActionsResult": { # The result of post scan actions of DataProfileScan job. # Output only. The result of post scan actions. "bigqueryExportResult": { # The result of BigQuery export post scan action. # Output only. The result of BigQuery export post scan action. diff --git a/docs/dyn/dataproc_v1.projects.locations.batches.html b/docs/dyn/dataproc_v1.projects.locations.batches.html index c6b76eefb5..eb2141b91b 100644 --- a/docs/dyn/dataproc_v1.projects.locations.batches.html +++ b/docs/dyn/dataproc_v1.projects.locations.batches.html @@ -74,6 +74,9 @@

Cloud Dataproc API . projects . locations . batches

Instance Methods

+ analyze(name, body=None, x__xgafv=None)
+ Analyze a Batch for possible recommendations and insights.

close()

Close httplib2 connections.

@@ -93,6 +96,48 @@

Instance Methods

list_next()

Retrieves the next page of results.

Method Details

+ analyze(name, body=None, x__xgafv=None)
+ Analyze a Batch for possible recommendations and insights.
+
+Args:
+  name: string, Required. The fully qualified name of the batch to analyze in the format "projects/PROJECT_ID/locations/DATAPROC_REGION/batches/BATCH_ID" (required)
+  body: object, The request body.
+    The object takes the form of:
+
+{ # A request to analyze a batch workload.
+  "requestId": "A String", # Optional. A unique ID used to identify the request. If the service receives two AnalyzeBatchRequest (http://cloud/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.AnalyzeBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first request created and stored in the backend is returned.Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # This resource represents a long-running operation that is the result of a network API call.
+  "done": True or False, # If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.
+  "error": { # The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details.You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+    "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+    "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+      {
+        "a_key": "", # Properties of the object. Contains field @type with type URL.
+      },
+    ],
+    "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+  },
+  "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+  "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.
+  "response": { # The normal, successful response of the operation. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
+    "a_key": "", # Properties of the object. Contains field @type with type URL.
+  },
+}
+
+
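A minimal sketch of calling the new `analyze` method with the generated client, assuming Application Default Credentials and placeholder project, region, and batch IDs; the call returns a long-running Operation whose name can then be polled.

import uuid
from googleapiclient.discovery import build

dataproc = build("dataproc", "v1")  # uses Application Default Credentials

op = dataproc.projects().locations().batches().analyze(
    name="projects/my-project/locations/us-central1/batches/my-batch",
    body={"requestId": str(uuid.uuid4())},  # optional idempotency token
).execute()

# Long-running operation; poll the operation named op["name"] until done is True.
print(op["name"], op.get("done", False))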
close()
Close httplib2 connections.
diff --git a/docs/dyn/datastore_v1.projects.html b/docs/dyn/datastore_v1.projects.html index 5927476408..ac5c894e49 100644 --- a/docs/dyn/datastore_v1.projects.html +++ b/docs/dyn/datastore_v1.projects.html @@ -296,6 +296,11 @@

Method Details

}, }, }, + "propertyMask": { # The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity. # The properties to write in this mutation. None of the properties in the mask may have a reserved name, except for `__key__`. This field is ignored for `delete`. If the entity already exists, only properties referenced in the mask are updated, others are left untouched. Properties referenced in the mask but not in the entity are deleted. + "paths": [ # The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value. + "A String", + ], + }, "update": { # A Datastore data object. Must not exceed 1 MiB - 4 bytes. # The entity to update. The entity must already exist. Must have a complete key path. "key": { # A unique identifier for an entity. If a key's partition ID or any of its path kinds or names are reserved/read-only, the key is reserved/read-only. A reserved/read-only key is forbidden in certain documented contexts. # The entity's key. An entity must have a key, unless otherwise documented (for example, an entity in `Value.entity_value` may have no key). An entity's kind is its key path's last element's kind, or null if it has no key. "partitionId": { # A partition ID identifies a grouping of entities. The grouping is always by project and namespace, however the namespace ID may be empty. A partition ID contains several dimensions: project ID and namespace ID. Partition dimensions: - May be `""`. - Must be valid UTF-8 bytes. - Must have values that match regex `[A-Za-z\d\.\-_]{1,100}` If the value of any dimension matches regex `__.*__`, the partition is reserved/read-only. A reserved/read-only partition ID is forbidden in certain documented contexts. Foreign partition IDs (in which the project ID does not match the context project ID ) are discouraged. Reads and writes of foreign partition IDs may fail if the project is not in an active state. # Entities are partitioned into subsets, currently identified by a project ID and namespace ID. Queries are scoped to a single partition. @@ -585,6 +590,11 @@
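The new `propertyMask` on a mutation limits the write to the listed property paths. A minimal commit sketch, assuming Application Default Credentials and a placeholder project and entity; only `done` is written and any other stored properties keep their current values.

from googleapiclient.discovery import build

datastore = build("datastore", "v1")  # uses Application Default Credentials

mutation = {
    "update": {
        "key": {"path": [{"kind": "Task", "name": "task-1"}]},
        "properties": {"done": {"booleanValue": True}},
    },
    # Only the masked path is written; unmasked stored properties are untouched.
    "propertyMask": {"paths": ["done"]},
}

resp = datastore.projects().commit(
    projectId="my-project",
    body={"mode": "NON_TRANSACTIONAL", "mutations": [mutation]},
).execute()
print(resp.get("indexUpdates"))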

Method Details

], }, ], + "propertyMask": { # The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity. # The properties to return. Defaults to returning all properties. If this field is set and an entity has a property not referenced in the mask, it will be absent from LookupResponse.found.entity.properties. The entity's key is always returned. + "paths": [ # The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value. + "A String", + ], + }, "readOptions": { # The options shared by read requests. # The options for this lookup request. "newTransaction": { # Options for beginning a new transaction. Transactions can be created explicitly with calls to Datastore.BeginTransaction or implicitly by setting ReadOptions.new_transaction in read requests. # Options for beginning a new transaction for this request. The new transaction identifier will be returned in the corresponding response as either LookupResponse.transaction or RunQueryResponse.transaction. "readOnly": { # Options specific to read-only transactions. # The transaction should only allow reads. @@ -930,6 +940,9 @@
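The same mask shape on a lookup request restricts which properties come back. A short self-contained sketch under the same assumptions (placeholder project and entity):

from googleapiclient.discovery import build

datastore = build("datastore", "v1")  # uses Application Default Credentials

resp = datastore.projects().lookup(
    projectId="my-project",
    body={
        "keys": [{"path": [{"kind": "Task", "name": "task-1"}]}],
        # Only `priority` is populated on returned entities; the key is always returned.
        "propertyMask": {"paths": ["priority"]},
    },
).execute()

for result in resp.get("found", []):
    print(result["entity"]["key"], result["entity"].get("properties", {}))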

Method Details

}, }, "databaseId": "A String", # The ID of the database against which to make the request. '(default)' is not allowed; please use empty string '' to refer the default database. + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "gqlQuery": { # A [GQL query](https://cloud.google.com/datastore/docs/apis/gql/gql_reference). # The GQL query to run. This query must be an aggregation query. "allowLiterals": True or False, # When false, the query string must not contain any literals and instead must bind all values. For example, `SELECT * FROM Kind WHERE a = 'string literal'` is not allowed, while `SELECT * FROM Kind WHERE a = @value` is. "namedBindings": { # For each non-reserved named binding site in the query string, there must be a named parameter with that name, but not necessarily the inverse. Key must match regex `A-Za-z_$*`, must not match regex `__.*__`, and must not be `""`. @@ -1088,6 +1101,23 @@

Method Details

"moreResults": "A String", # The state of the query after the current batch. Only COUNT(*) aggregations are supported in the initial launch. Therefore, expected result type is limited to `NO_MORE_RESULTS`. "readTime": "A String", # Read timestamp this batch was returned from. In a single transaction, subsequent query result batches for the same query can have a greater timestamp. Each batch's read timestamp is valid for all preceding batches. }, + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "query": { # Datastore query for running an aggregation over a Query. # The parsed form of the `GqlQuery` from the request, if it was set. "aggregations": [ # Optional. Series of aggregations to apply over the results of the `nested_query`. Requires: * A minimum of one and maximum of five aggregations per query. { # Defines an aggregation that produces a single result. @@ -1203,6 +1233,9 @@

Method Details

{ # The request for Datastore.RunQuery. "databaseId": "A String", # The ID of the database against which to make the request. '(default)' is not allowed; please use empty string '' to refer the default database. + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "gqlQuery": { # A [GQL query](https://cloud.google.com/datastore/docs/apis/gql/gql_reference). # The GQL query to run. This query must be a non-aggregation query. "allowLiterals": True or False, # When false, the query string must not contain any literals and instead must bind all values. For example, `SELECT * FROM Kind WHERE a = 'string literal'` is not allowed, while `SELECT * FROM Kind WHERE a = @value` is. "namedBindings": { # For each non-reserved named binding site in the query string, there must be a named parameter with that name, but not necessarily the inverse. Key must match regex `A-Za-z_$*`, must not match regex `__.*__`, and must not be `""`. @@ -1292,6 +1325,11 @@

Method Details

"namespaceId": "A String", # If not empty, the ID of the namespace to which the entities belong. "projectId": "A String", # The ID of the project to which the entities belong. }, + "propertyMask": { # The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity. # The properties to return. This field must not be set for a projection query. See LookupRequest.property_mask. + "paths": [ # The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value. + "A String", + ], + }, "query": { # A query for entities. # The query to run. "distinctOn": [ # The properties to make distinct. The query results will contain the first result for each distinct combination of values for the given properties (if empty, all results are returned). Requires: * If `order` is specified, the set of distinct on properties must appear before the non-distinct on properties in `order`. { # A reference to a property relative to the kind expressions. @@ -1466,6 +1504,23 @@

Method Details

"skippedResults": 42, # The number of results skipped, typically because of an offset. "snapshotVersion": "A String", # The version number of the snapshot this batch was returned from. This applies to the range of results from the query's `start_cursor` (or the beginning of the query if no cursor was given) to this batch's `end_cursor` (not the query's `end_cursor`). In a single transaction, subsequent query result batches for the same query can have a greater snapshot version number. Each batch's snapshot version is valid for all preceding batches. The value will be zero for eventually consistent queries. }, + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "query": { # A query for entities. # The parsed form of the `GqlQuery` from the request, if it was set. "distinctOn": [ # The properties to make distinct. The query results will contain the first result for each distinct combination of values for the given properties (if empty, all results are returned). Requires: * If `order` is specified, the set of distinct on properties must appear before the non-distinct on properties in `order`. { # A reference to a property relative to the kind expressions. diff --git a/docs/dyn/datastore_v1beta3.projects.html b/docs/dyn/datastore_v1beta3.projects.html index b5e36b989a..00450ec0c8 100644 --- a/docs/dyn/datastore_v1beta3.projects.html +++ b/docs/dyn/datastore_v1beta3.projects.html @@ -272,6 +272,11 @@

Method Details

}, }, }, + "propertyMask": { # The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity. # The properties to write in this mutation. None of the properties in the mask may have a reserved name, except for `__key__`. This field is ignored for `delete`. If the entity already exists, only properties referenced in the mask are updated, others are left untouched. Properties referenced in the mask but not in the entity are deleted. + "paths": [ # The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value. + "A String", + ], + }, "update": { # A Datastore data object. Must not exceed 1 MiB - 4 bytes. # The entity to update. The entity must already exist. Must have a complete key path. "key": { # A unique identifier for an entity. If a key's partition ID or any of its path kinds or names are reserved/read-only, the key is reserved/read-only. A reserved/read-only key is forbidden in certain documented contexts. # The entity's key. An entity must have a key, unless otherwise documented (for example, an entity in `Value.entity_value` may have no key). An entity's kind is its key path's last element's kind, or null if it has no key. "partitionId": { # A partition ID identifies a grouping of entities. The grouping is always by project and namespace, however the namespace ID may be empty. A partition ID contains several dimensions: project ID and namespace ID. Partition dimensions: - May be `""`. - Must be valid UTF-8 bytes. - Must have values that match regex `[A-Za-z\d\.\-_]{1,100}` If the value of any dimension matches regex `__.*__`, the partition is reserved/read-only. A reserved/read-only partition ID is forbidden in certain documented contexts. Foreign partition IDs (in which the project ID does not match the context project ID ) are discouraged. Reads and writes of foreign partition IDs may fail if the project is not in an active state. # Entities are partitioned into subsets, currently identified by a project ID and namespace ID. Queries are scoped to a single partition. @@ -440,6 +445,11 @@

Method Details

], }, ], + "propertyMask": { # The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity. # The properties to return. Defaults to returning all properties. If this field is set and an entity has a property not referenced in the mask, it will be absent from LookupResponse.found.entity.properties. The entity's key is always returned. + "paths": [ # The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value. + "A String", + ], + }, "readOptions": { # The options shared by read requests. # The options for this lookup request. "readConsistency": "A String", # The non-transactional read consistency to use. "readTime": "A String", # Reads entities as they were at the given time. This value is only supported for Cloud Firestore in Datastore mode. This must be a microsecond precision timestamp within the past one hour, or if Point-in-Time Recovery is enabled, can additionally be a whole minute timestamp within the past 7 days. @@ -767,6 +777,9 @@

Method Details

"startCursor": "A String", # A starting point for the query results. Query cursors are returned in query result batches and [can only be used to continue the same query](https://cloud.google.com/datastore/docs/concepts/queries#cursors_limits_and_offsets). }, }, + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "gqlQuery": { # A [GQL query](https://cloud.google.com/datastore/docs/apis/gql/gql_reference). # The GQL query to run. This query must be an aggregation query. "allowLiterals": True or False, # When false, the query string must not contain any literals and instead must bind all values. For example, `SELECT * FROM Kind WHERE a = 'string literal'` is not allowed, while `SELECT * FROM Kind WHERE a = @value` is. "namedBindings": { # For each non-reserved named binding site in the query string, there must be a named parameter with that name, but not necessarily the inverse. Key must match regex `A-Za-z_$*`, must not match regex `__.*__`, and must not be `""`. @@ -913,6 +926,23 @@

Method Details

"moreResults": "A String", # The state of the query after the current batch. Only COUNT(*) aggregations are supported in the initial launch. Therefore, expected result type is limited to `NO_MORE_RESULTS`. "readTime": "A String", # Read timestamp this batch was returned from. In a single transaction, subsequent query result batches for the same query can have a greater timestamp. Each batch's read timestamp is valid for all preceding batches. }, + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "query": { # Datastore query for running an aggregation over a Query. # The parsed form of the `GqlQuery` from the request, if it was set. "aggregations": [ # Optional. Series of aggregations to apply over the results of the `nested_query`. Requires: * A minimum of one and maximum of five aggregations per query. { # Defines an aggregation that produces a single result. @@ -1025,6 +1055,9 @@

Method Details

The object takes the form of: { # The request for Datastore.RunQuery. + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "gqlQuery": { # A [GQL query](https://cloud.google.com/datastore/docs/apis/gql/gql_reference). # The GQL query to run. This query must be a non-aggregation query. "allowLiterals": True or False, # When false, the query string must not contain any literals and instead must bind all values. For example, `SELECT * FROM Kind WHERE a = 'string literal'` is not allowed, while `SELECT * FROM Kind WHERE a = @value` is. "namedBindings": { # For each non-reserved named binding site in the query string, there must be a named parameter with that name, but not necessarily the inverse. Key must match regex `A-Za-z_$*`, must not match regex `__.*__`, and must not be `""`. @@ -1111,6 +1144,11 @@

Method Details

"namespaceId": "A String", # If not empty, the ID of the namespace to which the entities belong. "projectId": "A String", # The ID of the project to which the entities belong. }, + "propertyMask": { # The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity. # The properties to return. This field must not be set for a projection query. See LookupRequest.property_mask. + "paths": [ # The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value. + "A String", + ], + }, "query": { # A query for entities. # The query to run. "distinctOn": [ # The properties to make distinct. The query results will contain the first result for each distinct combination of values for the given properties (if empty, all results are returned). Requires: * If `order` is specified, the set of distinct on properties must appear before the non-distinct on properties in `order`. { # A reference to a property relative to the kind expressions. @@ -1274,6 +1312,23 @@

Method Details

"skippedResults": 42, # The number of results skipped, typically because of an offset. "snapshotVersion": "A String", # The version number of the snapshot this batch was returned from. This applies to the range of results from the query's `start_cursor` (or the beginning of the query if no cursor was given) to this batch's `end_cursor` (not the query's `end_cursor`). In a single transaction, subsequent query result batches for the same query can have a greater snapshot version number. Each batch's snapshot version is valid for all preceding batches. The value will be zero for eventually consistent queries. }, + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "query": { # A query for entities. # The parsed form of the `GqlQuery` from the request, if it was set. "distinctOn": [ # The properties to make distinct. The query results will contain the first result for each distinct combination of values for the given properties (if empty, all results are returned). Requires: * If `order` is specified, the set of distinct on properties must appear before the non-distinct on properties in `order`. { # A reference to a property relative to the kind expressions. diff --git a/docs/dyn/firestore_v1.projects.databases.backupSchedules.html b/docs/dyn/firestore_v1.projects.databases.backupSchedules.html index 77a5d7592b..4e8b99b09b 100644 --- a/docs/dyn/firestore_v1.projects.databases.backupSchedules.html +++ b/docs/dyn/firestore_v1.projects.databases.backupSchedules.html @@ -109,7 +109,7 @@

Method Details

{ # A backup schedule for a Cloud Firestore Database. This resource is owned by the database it is backing up, and is deleted along with the database. The actual backups are not though. "createTime": "A String", # Output only. The timestamp at which this backup schedule was created and effective since. No backups will be created for this schedule before this time. - "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. + "dailyRecurrence": { # Represents a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. }, "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. @@ -129,7 +129,7 @@
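A backup schedule is created under a database; for a daily schedule the request simply sets an empty `dailyRecurrence` message (the schema above exposes no fields on it) plus a `retention` duration. A hedged sketch with the dynamic client; the database name and retention value are placeholders:

from googleapiclient.discovery import build

firestore = build("firestore", "v1")

parent = "projects/my-project/databases/(default)"  # placeholder database

schedule = {
    # Runs once a day at a UTC time; keep each backup for 7 days.
    "dailyRecurrence": {},
    "retention": "604800s",
}

created = (
    firestore.projects()
    .databases()
    .backupSchedules()
    .create(parent=parent, body=schedule)
    .execute()
)
print(created["name"])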

Method Details

{ # A backup schedule for a Cloud Firestore Database. This resource is owned by the database it is backing up, and is deleted along with the database. The actual backups are not though. "createTime": "A String", # Output only. The timestamp at which this backup schedule was created and effective since. No backups will be created for this schedule before this time. - "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. + "dailyRecurrence": { # Represents a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. }, "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. @@ -145,7 +145,7 @@

Method Details

Deletes a backup schedule.
 
 Args:
-  name: string, Required. The name of backup schedule. Format `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` (required)
+  name: string, Required. The name of the backup schedule. Format `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` (required)
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
@@ -174,7 +174,7 @@ 

Method Details

{ # A backup schedule for a Cloud Firestore Database. This resource is owned by the database it is backing up, and is deleted along with the database. The actual backups are not though. "createTime": "A String", # Output only. The timestamp at which this backup schedule was created and effective since. No backups will be created for this schedule before this time. - "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. + "dailyRecurrence": { # Represents a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. }, "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. @@ -203,7 +203,7 @@

Method Details

"backupSchedules": [ # List of all backup schedules. { # A backup schedule for a Cloud Firestore Database. This resource is owned by the database it is backing up, and is deleted along with the database. The actual backups are not though. "createTime": "A String", # Output only. The timestamp at which this backup schedule was created and effective since. No backups will be created for this schedule before this time. - "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. + "dailyRecurrence": { # Represents a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. }, "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. @@ -227,7 +227,7 @@

Method Details

{ # A backup schedule for a Cloud Firestore Database. This resource is owned by the database it is backing up, and is deleted along with the database. The actual backups are not though. "createTime": "A String", # Output only. The timestamp at which this backup schedule was created and effective since. No backups will be created for this schedule before this time. - "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. + "dailyRecurrence": { # Represents a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. }, "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. @@ -248,7 +248,7 @@

Method Details

{ # A backup schedule for a Cloud Firestore Database. This resource is owned by the database it is backing up, and is deleted along with the database. The actual backups are not though. "createTime": "A String", # Output only. The timestamp at which this backup schedule was created and effective since. No backups will be created for this schedule before this time. - "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. + "dailyRecurrence": { # Represents a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time. }, "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}` "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. diff --git a/docs/dyn/firestore_v1.projects.databases.documents.html b/docs/dyn/firestore_v1.projects.databases.documents.html index e91bcffb56..33a0c3af4f 100644 --- a/docs/dyn/firestore_v1.projects.databases.documents.html +++ b/docs/dyn/firestore_v1.projects.databases.documents.html @@ -1750,6 +1750,9 @@

Method Details

The object takes the form of: { # The request for Firestore.RunAggregationQuery. + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "newTransaction": { # Options for creating a new transaction. # Starts a new transaction as part of the query, defaulting to read-only. The new transaction ID will be returned as the first response in the stream. "readOnly": { # Options for a transaction that can only be used to read documents. # The transaction can only be used for read operations. "readTime": "A String", # Reads documents at the given time. This must be a microsecond precision timestamp within the past one hour, or if Point-in-Time Recovery is enabled, can additionally be a whole minute timestamp within the past 7 days. @@ -1918,6 +1921,23 @@

Method Details

An object of the form: { # The response for Firestore.RunAggregationQuery. + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "readTime": "A String", # The time at which the aggregate result was computed. This is always monotonically increasing; in this case, the previous AggregationResult in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `result` will be sent, and this represents the time at which the query was run. "result": { # The result of a single bucket from a Firestore aggregation query. The keys of `aggregate_fields` are the same for all results in an aggregation query, unlike document queries which can have different fields present for each result. # A single aggregation result. Not present when reporting partial progress. "aggregateFields": { # The result of the aggregation functions, ex: `COUNT(*) AS total_docs`. The key is the alias assigned to the aggregation function on input and the size of this map equals the number of aggregation functions in the query. @@ -1961,6 +1981,9 @@
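The same explain flow applies to Firestore: put `explainOptions` on the request and read `explainMetrics` from the last message in the response stream. A hedged sketch using a structured aggregation query; the database path and collection ID are placeholders, and it assumes the REST transport returns the streamed responses as a list:

from googleapiclient.discovery import build

firestore = build("firestore", "v1")

parent = "projects/my-project/databases/(default)/documents"  # placeholder

body = {
    "explainOptions": {"analyze": True},
    "structuredAggregationQuery": {
        "structuredQuery": {"from": [{"collectionId": "tasks"}]},  # placeholder
        "aggregations": [{"alias": "total", "count": {}}],
    },
}

# explainMetrics is only attached to the last message in the stream.
responses = (
    firestore.projects()
    .databases()
    .documents()
    .runAggregationQuery(parent=parent, body=body)
    .execute()
)
for message in responses:
    if "result" in message:
        print(message["result"]["aggregateFields"])
    if "explainMetrics" in message:
        print(message["explainMetrics"])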

Method Details

The object takes the form of: { # The request for Firestore.RunQuery. + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "newTransaction": { # Options for creating a new transaction. # Starts a new transaction and reads the documents. Defaults to a read-only transaction. The new transaction ID will be returned as the first response in the stream. "readOnly": { # Options for a transaction that can only be used to read documents. # The transaction can only be used for read operations. "readTime": "A String", # Reads documents at the given time. This must be a microsecond precision timestamp within the past one hour, or if Point-in-Time Recovery is enabled, can additionally be a whole minute timestamp within the past 7 days. @@ -2141,6 +2164,23 @@

Method Details

"updateTime": "A String", # Output only. The time at which the document was last changed. This value is initially set to the `create_time` then increases monotonically with each change to the document. It can also be compared to values from other documents and the `read_time` of a query. }, "done": True or False, # If present, Firestore has completely finished the request and no more documents will be returned. + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "readTime": "A String", # The time at which the document was read. This may be monotonically increasing; in this case, the previous documents in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `document` will be sent, and this represents the time at which the query was run. "skippedResults": 42, # The number of results that have been skipped due to an offset between the last response and the current response. "transaction": "A String", # The transaction that was started as part of this request. Can only be set in the first response, and only if RunQueryRequest.new_transaction was set in the request. If set, no other fields will be set in this response. diff --git a/docs/dyn/firestore_v1beta1.projects.databases.documents.html b/docs/dyn/firestore_v1beta1.projects.databases.documents.html index 11ff075e4d..87635483f5 100644 --- a/docs/dyn/firestore_v1beta1.projects.databases.documents.html +++ b/docs/dyn/firestore_v1beta1.projects.databases.documents.html @@ -1750,6 +1750,9 @@

Method Details

The object takes the form of: { # The request for Firestore.RunAggregationQuery. + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "newTransaction": { # Options for creating a new transaction. # Starts a new transaction as part of the query, defaulting to read-only. The new transaction ID will be returned as the first response in the stream. "readOnly": { # Options for a transaction that can only be used to read documents. # The transaction can only be used for read operations. "readTime": "A String", # Reads documents at the given time. This must be a microsecond precision timestamp within the past one hour, or if Point-in-Time Recovery is enabled, can additionally be a whole minute timestamp within the past 7 days. @@ -1918,6 +1921,23 @@

Method Details

An object of the form: { # The response for Firestore.RunAggregationQuery. + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "readTime": "A String", # The time at which the aggregate result was computed. This is always monotonically increasing; in this case, the previous AggregationResult in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `result` will be sent, and this represents the time at which the query was run. "result": { # The result of a single bucket from a Firestore aggregation query. The keys of `aggregate_fields` are the same for all results in an aggregation query, unlike document queries which can have different fields present for each result. # A single aggregation result. Not present when reporting partial progress. "aggregateFields": { # The result of the aggregation functions, ex: `COUNT(*) AS total_docs`. The key is the alias assigned to the aggregation function on input and the size of this map equals the number of aggregation functions in the query. @@ -1961,6 +1981,9 @@

Method Details

The object takes the form of: { # The request for Firestore.RunQuery. + "explainOptions": { # Explain options for the query. # Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned. + "analyze": True or False, # Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics. + }, "newTransaction": { # Options for creating a new transaction. # Starts a new transaction and reads the documents. Defaults to a read-only transaction. The new transaction ID will be returned as the first response in the stream. "readOnly": { # Options for a transaction that can only be used to read documents. # The transaction can only be used for read operations. "readTime": "A String", # Reads documents at the given time. This must be a microsecond precision timestamp within the past one hour, or if Point-in-Time Recovery is enabled, can additionally be a whole minute timestamp within the past 7 days. @@ -2141,6 +2164,23 @@

Method Details

"updateTime": "A String", # Output only. The time at which the document was last changed. This value is initially set to the `create_time` then increases monotonically with each change to the document. It can also be compared to values from other documents and the `read_time` of a query. }, "done": True or False, # If present, Firestore has completely finished the request and no more documents will be returned. + "explainMetrics": { # Explain metrics for the query. # Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream. + "executionStats": { # Execution statistics for the query. # Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true. + "debugStats": { # Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { "indexes_entries_scanned": "1000", "documents_scanned": "20", "billing_details" : { "documents_billable": "20", "index_entries_billable": "1000", "min_query_cost": "0" } } + "a_key": "", # Properties of the object. + }, + "executionDuration": "A String", # Total time to execute the query in the backend. + "readOperations": "A String", # Total billable read operations. + "resultsReturned": "A String", # Total number of results returned, including documents, projections, aggregation results, keys. + }, + "planSummary": { # Planning phase information for the query. # Planning phase information for the query. + "indexesUsed": [ # The indexes selected for the query. For example: [ {"query_scope": "Collection", "properties": "(foo ASC, __name__ ASC)"}, {"query_scope": "Collection", "properties": "(bar ASC, __name__ ASC)"} ] + { + "a_key": "", # Properties of the object. + }, + ], + }, + }, "readTime": "A String", # The time at which the document was read. This may be monotonically increasing; in this case, the previous documents in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `document` will be sent, and this represents the time at which the query was run. "skippedResults": 42, # The number of results that have been skipped due to an offset between the last response and the current response. "transaction": "A String", # The transaction that was started as part of this request. Can only be set in the first response, and only if RunQueryRequest.new_transaction was set in the request. If set, no other fields will be set in this response. diff --git a/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.html b/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.html index ceacb4ab23..35e03f01e6 100644 --- a/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.html +++ b/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.html @@ -124,7 +124,7 @@

Method Details

body: object, The request body. The object takes the form of: -{ # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). Next id: 29 +{ # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). "allNamespaces": True or False, # Output only. If True, all namespaces were included in the Backup. "clusterMetadata": { # Information about the GKE cluster from which this Backup was created. # Output only. Information about the GKE cluster from which this Backup was created. "anthosVersion": "A String", # Output only. Anthos version @@ -258,7 +258,7 @@
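Most Backup fields are output only, so a create call usually just names the parent BackupPlan, picks an ID, and supplies the few writable fields. A hedged sketch; the project, location, plan name, and the `backupId` keyword argument are placeholders and assumptions rather than values taken from this page:

from googleapiclient.discovery import build

gkebackup = build("gkebackup", "v1")

parent = "projects/my-project/locations/us-central1/backupPlans/my-plan"  # placeholder

backup = {
    "description": "Ad-hoc backup before upgrade",
    "labels": {"reason": "upgrade"},
}

operation = (
    gkebackup.projects()
    .locations()
    .backupPlans()
    .backups()
    .create(parent=parent, backupId="pre-upgrade", body=backup)
    .execute()
)
print(operation["name"])  # name of the long-running operation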

Method Details

Returns: An object of the form: - { # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). Next id: 29 + { # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). "allNamespaces": True or False, # Output only. If True, all namespaces were included in the Backup. "clusterMetadata": { # Information about the GKE cluster from which this Backup was created. # Output only. Information about the GKE cluster from which this Backup was created. "anthosVersion": "A String", # Output only. Anthos version @@ -380,7 +380,7 @@

Method Details

{ # Response message for ListBackups. "backups": [ # The list of Backups matching the given criteria. - { # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). Next id: 29 + { # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). "allNamespaces": True or False, # Output only. If True, all namespaces were included in the Backup. "clusterMetadata": { # Information about the GKE cluster from which this Backup was created. # Output only. Information about the GKE cluster from which this Backup was created. "anthosVersion": "A String", # Output only. Anthos version @@ -460,7 +460,7 @@

Method Details

body: object, The request body. The object takes the form of: -{ # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). Next id: 29 +{ # Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). "allNamespaces": True or False, # Output only. If True, all namespaces were included in the Backup. "clusterMetadata": { # Information about the GKE cluster from which this Backup was created. # Output only. Information about the GKE cluster from which this Backup was created. "anthosVersion": "A String", # Output only. Anthos version diff --git a/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.volumeBackups.html b/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.volumeBackups.html index 4cdf863706..039a8b5f4d 100644 --- a/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.volumeBackups.html +++ b/docs/dyn/gkebackup_v1.projects.locations.backupPlans.backups.volumeBackups.html @@ -115,7 +115,7 @@

Method Details

Returns: An object of the form: - { # Represents the backup of a specific persistent volume as a component of a Backup - both the record of the operation and a pointer to the underlying storage-specific artifacts. Next id: 14 + { # Represents the backup of a specific persistent volume as a component of a Backup - both the record of the operation and a pointer to the underlying storage-specific artifacts. "completeTime": "A String", # Output only. The timestamp when the associated underlying volume backup operation completed. "createTime": "A String", # Output only. The timestamp when this VolumeBackup resource was created. "diskSizeBytes": "A String", # Output only. The minimum size of the disk to which this VolumeBackup can be restored. @@ -204,7 +204,7 @@

Method Details

{ # Response message for ListVolumeBackups. "nextPageToken": "A String", # A token which may be sent as page_token in a subsequent `ListVolumeBackups` call to retrieve the next page of results. If this field is omitted or empty, then there are no more results to return. "volumeBackups": [ # The list of VolumeBackups matching the given criteria. - { # Represents the backup of a specific persistent volume as a component of a Backup - both the record of the operation and a pointer to the underlying storage-specific artifacts. Next id: 14 + { # Represents the backup of a specific persistent volume as a component of a Backup - both the record of the operation and a pointer to the underlying storage-specific artifacts. "completeTime": "A String", # Output only. The timestamp when the associated underlying volume backup operation completed. "createTime": "A String", # Output only. The timestamp when this VolumeBackup resource was created. "diskSizeBytes": "A String", # Output only. The minimum size of the disk to which this VolumeBackup can be restored. diff --git a/docs/dyn/gkebackup_v1.projects.locations.restorePlans.html b/docs/dyn/gkebackup_v1.projects.locations.restorePlans.html index 2781e2194d..3a5b8cab21 100644 --- a/docs/dyn/gkebackup_v1.projects.locations.restorePlans.html +++ b/docs/dyn/gkebackup_v1.projects.locations.restorePlans.html @@ -124,7 +124,7 @@

Method Details

body: object, The request body. The object takes the form of: -{ # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. Next id: 13 +{ # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. "backupPlan": "A String", # Required. Immutable. A reference to the BackupPlan from which Backups may be used as the source for Restores created via this RestorePlan. Format: `projects/*/locations/*/backupPlans/*`. "cluster": "A String", # Required. Immutable. The target cluster into which Restores created via this RestorePlan will restore data. NOTE: the cluster's region must be the same as the RestorePlan. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` "createTime": "A String", # Output only. The timestamp when this RestorePlan resource was created. @@ -134,7 +134,7 @@
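A RestorePlan ties a BackupPlan to a target cluster and carries the RestoreConfig that every Restore created from it will use. A hedged sketch; the `restorePlanId` keyword and the RestoreConfig fields beyond those shown above are assumptions about the schema, and all resource names are placeholders:

from googleapiclient.discovery import build

gkebackup = build("gkebackup", "v1")

parent = "projects/my-project/locations/us-central1"  # placeholder

restore_plan = {
    "backupPlan": "projects/my-project/locations/us-central1/backupPlans/my-plan",
    "cluster": "projects/my-project/locations/us-central1/clusters/my-cluster",
    "description": "Restore all namespaced resources",
    "restoreConfig": {
        "allNamespaces": True,
        # Assumed additional RestoreConfig fields; a real plan usually also
        # spells out how volume data and resource conflicts are handled.
        "namespacedResourceRestoreMode": "FAIL_ON_CONFLICT",
        "volumeDataRestorePolicy": "RESTORE_VOLUME_DATA_FROM_BACKUP",
    },
}

operation = (
    gkebackup.projects()
    .locations()
    .restorePlans()
    .create(parent=parent, restorePlanId="restore-all", body=restore_plan)
    .execute()
)
print(operation["name"])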

Method Details

"a_key": "A String", }, "name": "A String", # Output only. The full name of the RestorePlan resource. Format: `projects/*/locations/*/restorePlans/*`. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Required. Configuration of Restores created via this RestorePlan. + "restoreConfig": { # Configuration of a restore. # Required. Configuration of Restores created via this RestorePlan. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. @@ -303,7 +303,7 @@

Method Details

Returns: An object of the form: - { # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. Next id: 13 + { # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. "backupPlan": "A String", # Required. Immutable. A reference to the BackupPlan from which Backups may be used as the source for Restores created via this RestorePlan. Format: `projects/*/locations/*/backupPlans/*`. "cluster": "A String", # Required. Immutable. The target cluster into which Restores created via this RestorePlan will restore data. NOTE: the cluster's region must be the same as the RestorePlan. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` "createTime": "A String", # Output only. The timestamp when this RestorePlan resource was created. @@ -313,7 +313,7 @@

Method Details

"a_key": "A String", }, "name": "A String", # Output only. The full name of the RestorePlan resource. Format: `projects/*/locations/*/restorePlans/*`. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Required. Configuration of Restores created via this RestorePlan. + "restoreConfig": { # Configuration of a restore. # Required. Configuration of Restores created via this RestorePlan. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. @@ -471,7 +471,7 @@

Method Details

{ # Response message for ListRestorePlans. "nextPageToken": "A String", # A token which may be sent as page_token in a subsequent `ListRestorePlans` call to retrieve the next page of results. If this field is omitted or empty, then there are no more results to return. "restorePlans": [ # The list of RestorePlans matching the given criteria. - { # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. Next id: 13 + { # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. "backupPlan": "A String", # Required. Immutable. A reference to the BackupPlan from which Backups may be used as the source for Restores created via this RestorePlan. Format: `projects/*/locations/*/backupPlans/*`. "cluster": "A String", # Required. Immutable. The target cluster into which Restores created via this RestorePlan will restore data. NOTE: the cluster's region must be the same as the RestorePlan. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` "createTime": "A String", # Output only. The timestamp when this RestorePlan resource was created. @@ -481,7 +481,7 @@
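As a usage sketch for the ListRestorePlans response above: paging through restore plans with the dynamic Python client. The project and location values are placeholders, application default credentials are assumed, and list_next() is the standard pagination helper the client generates for list methods.

from googleapiclient import discovery

# Build the GKE Backup client (credentials come from the environment).
gkebackup = discovery.build('gkebackup', 'v1')

parent = 'projects/my-project/locations/us-central1'  # placeholder values
request = gkebackup.projects().locations().restorePlans().list(parent=parent)
while request is not None:
    response = request.execute()
    for plan in response.get('restorePlans', []):
        # Each entry has the RestorePlan shape documented above.
        print(plan['name'], plan.get('backupPlan'))
    # list_next() returns None once nextPageToken is omitted or empty.
    request = gkebackup.projects().locations().restorePlans().list_next(
        previous_request=request, previous_response=response)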

Method Details

"a_key": "A String", }, "name": "A String", # Output only. The full name of the RestorePlan resource. Format: `projects/*/locations/*/restorePlans/*`. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Required. Configuration of Restores created via this RestorePlan. + "restoreConfig": { # Configuration of a restore. # Required. Configuration of Restores created via this RestorePlan. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. @@ -598,7 +598,7 @@

Method Details

body: object, The request body. The object takes the form of: -{ # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. Next id: 13 +{ # The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. "backupPlan": "A String", # Required. Immutable. A reference to the BackupPlan from which Backups may be used as the source for Restores created via this RestorePlan. Format: `projects/*/locations/*/backupPlans/*`. "cluster": "A String", # Required. Immutable. The target cluster into which Restores created via this RestorePlan will restore data. NOTE: the cluster's region must be the same as the RestorePlan. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` "createTime": "A String", # Output only. The timestamp when this RestorePlan resource was created. @@ -608,7 +608,7 @@
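A hedged sketch of updating the restoreConfig on an existing RestorePlan, assuming the usual patch(name, body, updateMask) form of the generated client; the resource name is a placeholder, and only fields named in updateMask need to appear in the body.

from googleapiclient import discovery

gkebackup = discovery.build('gkebackup', 'v1')

name = 'projects/my-project/locations/us-central1/restorePlans/my-restore-plan'  # placeholder
body = {
    'restoreConfig': {
        # Per the field docs above, allNamespaces may only be set to True.
        'allNamespaces': True,
    },
}
result = gkebackup.projects().locations().restorePlans().patch(
    name=name, updateMask='restoreConfig', body=body).execute()
print(result)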

Method Details

"a_key": "A String", }, "name": "A String", # Output only. The full name of the RestorePlan resource. Format: `projects/*/locations/*/restorePlans/*`. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Required. Configuration of Restores created via this RestorePlan. + "restoreConfig": { # Configuration of a restore. # Required. Configuration of Restores created via this RestorePlan. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. diff --git a/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.html b/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.html index 791ee2086f..7ef86c5547 100644 --- a/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.html +++ b/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.html @@ -124,7 +124,7 @@

Method Details

body: object, The request body. The object takes the form of: -{ # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. Next id: 20 +{ # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. "backup": "A String", # Required. Immutable. A reference to the Backup used as the source from which this Restore will restore. Note that this Backup must be a sub-resource of the RestorePlan's backup_plan. Format: `projects/*/locations/*/backupPlans/*/backups/*`. "cluster": "A String", # Output only. The target cluster into which this Restore will restore data. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` Inherited from parent RestorePlan's cluster value. "completeTime": "A String", # Output only. Timestamp of when the restore operation completed. @@ -138,7 +138,7 @@
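A sketch of creating a Restore under an existing RestorePlan with the request body shown above. Only `backup` is marked Required; cluster and restoreConfig are inherited from the parent RestorePlan. The restoreId query parameter and all resource names here are assumptions/placeholders.

from googleapiclient import discovery

gkebackup = discovery.build('gkebackup', 'v1')

parent = 'projects/my-project/locations/us-central1/restorePlans/my-restore-plan'  # placeholder
body = {
    # Must be a Backup under the parent RestorePlan's backup_plan.
    'backup': ('projects/my-project/locations/us-central1/'
               'backupPlans/my-backup-plan/backups/my-backup'),
    'description': 'Restore triggered from a maintenance runbook',
}
result = gkebackup.projects().locations().restorePlans().restores().create(
    parent=parent,
    restoreId='my-restore',  # assumed: the id assigned to the new Restore resource
    body=body).execute()
print(result)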

Method Details

"resourcesExcludedCount": 42, # Output only. Number of resources excluded during the restore execution. "resourcesFailedCount": 42, # Output only. Number of resources that failed to be restored during the restore execution. "resourcesRestoredCount": 42, # Output only. Number of resources restored during the restore execution. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. + "restoreConfig": { # Configuration of a restore. # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. @@ -308,7 +308,7 @@

Method Details

Returns: An object of the form: - { # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. Next id: 20 + { # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. "backup": "A String", # Required. Immutable. A reference to the Backup used as the source from which this Restore will restore. Note that this Backup must be a sub-resource of the RestorePlan's backup_plan. Format: `projects/*/locations/*/backupPlans/*/backups/*`. "cluster": "A String", # Output only. The target cluster into which this Restore will restore data. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` Inherited from parent RestorePlan's cluster value. "completeTime": "A String", # Output only. Timestamp of when the restore operation completed. @@ -322,7 +322,7 @@

Method Details

"resourcesExcludedCount": 42, # Output only. Number of resources excluded during the restore execution. "resourcesFailedCount": 42, # Output only. Number of resources that failed to be restored during the restore execution. "resourcesRestoredCount": 42, # Output only. Number of resources restored during the restore execution. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. + "restoreConfig": { # Configuration of a restore. # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. @@ -481,7 +481,7 @@

Method Details

{ # Response message for ListRestores. "nextPageToken": "A String", # A token which may be sent as page_token in a subsequent `ListRestores` call to retrieve the next page of results. If this field is omitted or empty, then there are no more results to return. "restores": [ # The list of Restores matching the given criteria. - { # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. Next id: 20 + { # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. "backup": "A String", # Required. Immutable. A reference to the Backup used as the source from which this Restore will restore. Note that this Backup must be a sub-resource of the RestorePlan's backup_plan. Format: `projects/*/locations/*/backupPlans/*/backups/*`. "cluster": "A String", # Output only. The target cluster into which this Restore will restore data. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` Inherited from parent RestorePlan's cluster value. "completeTime": "A String", # Output only. Timestamp of when the restore operation completed. @@ -495,7 +495,7 @@

Method Details

"resourcesExcludedCount": 42, # Output only. Number of resources excluded during the restore execution. "resourcesFailedCount": 42, # Output only. Number of resources that failed to be restored during the restore execution. "resourcesRestoredCount": 42, # Output only. Number of resources restored during the restore execution. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. + "restoreConfig": { # Configuration of a restore. # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. @@ -613,7 +613,7 @@

Method Details

body: object, The request body. The object takes the form of: -{ # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. Next id: 20 +{ # Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. "backup": "A String", # Required. Immutable. A reference to the Backup used as the source from which this Restore will restore. Note that this Backup must be a sub-resource of the RestorePlan's backup_plan. Format: `projects/*/locations/*/backupPlans/*/backups/*`. "cluster": "A String", # Output only. The target cluster into which this Restore will restore data. Valid formats: - `projects/*/locations/*/clusters/*` - `projects/*/zones/*/clusters/*` Inherited from parent RestorePlan's cluster value. "completeTime": "A String", # Output only. Timestamp of when the restore operation completed. @@ -627,7 +627,7 @@

Method Details

"resourcesExcludedCount": 42, # Output only. Number of resources excluded during the restore execution. "resourcesFailedCount": 42, # Output only. Number of resources that failed to be restored during the restore execution. "resourcesRestoredCount": 42, # Output only. Number of resources restored during the restore execution. - "restoreConfig": { # Configuration of a restore. Next id: 14 # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. + "restoreConfig": { # Configuration of a restore. # Output only. Configuration of the Restore. Inherited from parent RestorePlan's restore_config. "allNamespaces": True or False, # Restore all namespaced resources in the Backup if set to "True". Specifying this field to "False" is an error. "clusterResourceConflictPolicy": "A String", # Optional. Defines the behavior for handling the situation where cluster-scoped resources being restored already exist in the target cluster. This MUST be set to a value other than CLUSTER_RESOURCE_CONFLICT_POLICY_UNSPECIFIED if cluster_resource_restore_scope is not empty. "clusterResourceRestoreScope": { # Defines the scope of cluster-scoped resources to restore. Some group kinds are not reasonable choices for a restore, and will cause an error if selected here. Any scope selection that would restore "all valid" resources automatically excludes these group kinds. - gkebackup.gke.io/BackupJob - gkebackup.gke.io/RestoreJob - metrics.k8s.io/NodeMetrics - migration.k8s.io/StorageState - migration.k8s.io/StorageVersionMigration - Node - snapshot.storage.k8s.io/VolumeSnapshotContent - storage.k8s.io/CSINode Some group kinds are driven by restore configuration elsewhere, and will cause an error if selected here. - Namespace - PersistentVolume # Optional. Identifies the cluster-scoped resources to restore from the Backup. Not specifying it means NO cluster resource will be restored. diff --git a/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.volumeRestores.html b/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.volumeRestores.html index 1abed94c68..4a1e480c23 100644 --- a/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.volumeRestores.html +++ b/docs/dyn/gkebackup_v1.projects.locations.restorePlans.restores.volumeRestores.html @@ -115,7 +115,7 @@

Method Details

Returns: An object of the form: - { # Represents the operation of restoring a volume from a VolumeBackup. Next id: 13 + { # Represents the operation of restoring a volume from a VolumeBackup. "completeTime": "A String", # Output only. The timestamp when the associated underlying volume restoration completed. "createTime": "A String", # Output only. The timestamp when this VolumeRestore resource was created. "etag": "A String", # Output only. `etag` is used for optimistic concurrency control as a way to help prevent simultaneous updates of a volume restore from overwriting each other. It is strongly suggested that systems make use of the `etag` in the read-modify-write cycle to perform volume restore updates in order to avoid race conditions. @@ -203,7 +203,7 @@
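For completeness, reading back a single VolumeRestore as documented above; the resource name is a placeholder that would normally come from a ListVolumeRestores call.

from googleapiclient import discovery

gkebackup = discovery.build('gkebackup', 'v1')

name = ('projects/my-project/locations/us-central1/restorePlans/my-restore-plan/'
        'restores/my-restore/volumeRestores/my-volume-restore')  # placeholder
volume_restore = (gkebackup.projects().locations().restorePlans().restores()
                  .volumeRestores().get(name=name).execute())
print(volume_restore.get('createTime'), volume_restore.get('completeTime'))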

Method Details

{ # Response message for ListVolumeRestores. "nextPageToken": "A String", # A token which may be sent as page_token in a subsequent `ListVolumeRestores` call to retrieve the next page of results. If this field is omitted or empty, then there are no more results to return. "volumeRestores": [ # The list of VolumeRestores matching the given criteria. - { # Represents the operation of restoring a volume from a VolumeBackup. Next id: 13 + { # Represents the operation of restoring a volume from a VolumeBackup. "completeTime": "A String", # Output only. The timestamp when the associated underlying volume restoration completed. "createTime": "A String", # Output only. The timestamp when this VolumeRestore resource was created. "etag": "A String", # Output only. `etag` is used for optimistic concurrency control as a way to help prevent simultaneous updates of a volume restore from overwriting each other. It is strongly suggested that systems make use of the `etag` in the read-modify-write cycle to perform volume restore updates in order to avoid race conditions. diff --git a/docs/dyn/gkehub_v1.projects.locations.features.html b/docs/dyn/gkehub_v1.projects.locations.features.html index f7fae4ab69..e346b016e5 100644 --- a/docs/dyn/gkehub_v1.projects.locations.features.html +++ b/docs/dyn/gkehub_v1.projects.locations.features.html @@ -189,6 +189,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -358,6 +383,31 @@
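To make the new ldapConfig fields concrete, a hedged sketch of one Identity Service auth method that uses only field names from the documentation above. The values, the DN layout, and where this block nests inside a gkehub Feature spec are assumptions, not part of this diff.

ldap_auth_method = {
    'name': 'ldap-basic',  # Identifier for auth config.
    'ldapConfig': {
        'server': {
            'host': 'ldap.example.com:636',          # hostname[:port]; port defaults to 389
            'connectionType': 'ldaps',               # CA data is required for ldaps/startTLS
            'certificateAuthorityData': '<base64-encoded PEM CA certificate>',
        },
        'user': {
            'baseDn': 'ou=Users,dc=example,dc=com',
            'loginAttribute': 'userPrincipalName',   # matched against the typed-in username
            'idAttribute': 'userPrincipalName',      # identity used in RBAC policies
        },
        'group': {
            'baseDn': 'ou=Groups,dc=example,dc=com',
            'idAttribute': 'distinguishedName',
        },
        'serviceAccount': {
            'simpleBindCredentials': {
                'dn': 'cn=query-svc,ou=ServiceAccounts,dc=example,dc=com',
                'password': 'REPLACE_ME',            # input only; read back as encryptedPassword
            },
        },
    },
}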

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -646,6 +696,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1029,6 +1104,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1198,6 +1298,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1486,6 +1611,31 @@
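A heavily hedged sketch of where an auth method like the one above might be sent. The membershipSpecs / identityservice / authMethods nesting and the updateMask value come from the broader gkehub Feature schema and are assumptions not shown in this diff; only the auth-method field names are taken from the documentation above.

from googleapiclient import discovery

gkehub = discovery.build('gkehub', 'v1')

feature_name = 'projects/my-project/locations/global/features/identityservice'   # placeholder
membership = 'projects/my-project/locations/us-central1/memberships/my-cluster'  # placeholder
auth_method = {
    'name': 'ldap-basic',
    'ldapConfig': {
        'server': {'host': 'ldap.example.com'},
        'user': {'baseDn': 'ou=Users,dc=example,dc=com'},
        'serviceAccount': {
            'simpleBindCredentials': {
                'dn': 'cn=query-svc,dc=example,dc=com',
                'password': 'REPLACE_ME',
            },
        },
    },
}
body = {
    # Assumed nesting: Identity Service config is a per-membership spec keyed
    # by the membership resource name.
    'membershipSpecs': {
        membership: {'identityservice': {'authMethods': [auth_method]}},
    },
}
result = gkehub.projects().locations().features().patch(
    name=feature_name, updateMask='membership_specs', body=body).execute()
print(result)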

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1857,6 +2007,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2026,6 +2201,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2314,6 +2514,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2641,6 +2866,31 @@
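Example (editorial, not part of the generated reference): a sketch of how the ldapConfig block described above might be populated for one auth method. Every host, DN, and password value below is a placeholder, and the exact accepted spelling of the connectionType value should be verified against the API before use.

# Hypothetical ldapConfig payload matching the fields documented above.
ldap_config = {
    'server': {
        'host': 'ldap.example.com:636',               # placeholder host:port
        'connectionType': 'ldaps',                    # assumption: verify the accepted enum spelling
        'certificateAuthorityData': 'BASE64_PEM_CA',  # required for ldaps/startTLS connections
    },
    'serviceAccount': {
        'simpleBindCredentials': {
            'dn': 'CN=fleet-reader,OU=Service Accounts,DC=example,DC=com',  # placeholder DN
            'password': 'REPLACE_ME',                 # input only; the API returns only encryptedPassword
        },
    },
    'user': {
        'baseDn': 'OU=Users,DC=example,DC=com',
        'loginAttribute': 'sAMAccountName',           # what users type at login
        'idAttribute': 'userPrincipalName',           # identity used in RBAC policies
    },
    'group': {
        'baseDn': 'OU=Groups,DC=example,DC=com',
    },
}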

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2810,6 +3060,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -3098,6 +3373,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. diff --git a/docs/dyn/gkehub_v1.projects.locations.scopes.html b/docs/dyn/gkehub_v1.projects.locations.scopes.html index 41c974fdcd..aced32c941 100644 --- a/docs/dyn/gkehub_v1.projects.locations.scopes.html +++ b/docs/dyn/gkehub_v1.projects.locations.scopes.html @@ -102,6 +102,18 @@

Instance Methods

list(parent, pageSize=None, pageToken=None, x__xgafv=None)

Lists Scopes.

+

+ listMemberships(scopeName, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists Memberships bound to a Scope. The response includes relevant Memberships from all regions.

+

+ listMemberships_next()

+

Retrieves the next page of results.

+

+ listPermitted(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists permitted Scopes.

+

+ listPermitted_next()

+

Retrieves the next page of results.

list_next()

Retrieves the next page of results.

@@ -330,6 +342,180 @@

Method Details

}
+
+ listMemberships(scopeName, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists Memberships bound to a Scope. The response includes relevant Memberships from all regions.
+
+Args:
+  scopeName: string, Required. Name of the Scope, in the format `projects/*/locations/global/scopes/*`, to which the Memberships are bound. (required)
+  filter: string, Optional. Lists Memberships that match the filter expression, following the syntax outlined in https://google.aip.dev/160. Currently, filtering can be done only based on a Membership's `name`, `labels`, `create_time`, `update_time`, and `unique_id`.
+  pageSize: integer, Optional. When requesting a 'page' of resources, `page_size` specifies the number of resources to return. If unspecified or set to 0, all resources will be returned. Pagination is currently not supported; therefore, setting this field currently has no effect.
+  pageToken: string, Optional. Token returned by a previous call to `ListBoundMemberships`, which specifies the position in the list from which to continue listing the resources.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # List of Memberships bound to a Scope.
+  "memberships": [ # The list of Memberships bound to the given Scope.
+    { # Membership contains information about a member cluster.
+      "authority": { # Authority encodes how Google will recognize identities from this Membership. See the workload identity documentation for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity # Optional. How to identify workloads from this Membership. See the documentation on Workload Identity for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
+        "identityProvider": "A String", # Output only. An identity provider that reflects the `issuer` in the workload identity pool.
+        "issuer": "A String", # Optional. A JSON Web Token (JWT) issuer URI. `issuer` must start with `https://` and be a valid URL with length <2000 characters; for GKE clusters, it must use `location` rather than `zone`. If set, then Google will allow valid OIDC tokens from this issuer to authenticate within the workload_identity_pool. OIDC discovery will be performed on this URI to validate tokens from the issuer. Clearing `issuer` disables Workload Identity. `issuer` cannot be directly modified; it must be cleared (and Workload Identity disabled) before using a new issuer (and re-enabling Workload Identity).
+        "oidcJwks": "A String", # Optional. OIDC verification keys for this Membership in JWKS format (RFC 7517). When this field is set, OIDC discovery will NOT be performed on `issuer`, and instead OIDC tokens will be validated using this field.
+        "workloadIdentityPool": "A String", # Output only. The name of the workload identity pool in which `issuer` will be recognized. There is a single Workload Identity Pool per Hub that is shared between all Memberships that belong to that Hub. For a Hub hosted in {PROJECT_ID}, the workload pool format is `{PROJECT_ID}.hub.id.goog`, although this is subject to change in newer versions of this API.
+      },
+      "createTime": "A String", # Output only. When the Membership was created.
+      "deleteTime": "A String", # Output only. When the Membership was deleted.
+      "description": "A String", # Output only. Description of this membership, limited to 63 characters. Must match the regex: `a-zA-Z0-9*`. This field is present for legacy purposes.
+      "endpoint": { # MembershipEndpoint contains information needed to contact a Kubernetes API, endpoint and any additional Kubernetes metadata. # Optional. Endpoint information to reach this member.
+        "applianceCluster": { # ApplianceCluster contains information specific to GDC Edge Appliance Clusters. # Optional. Specific information for a GDC Edge Appliance cluster.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the Appliance Cluster. For example: //transferappliance.googleapis.com/projects/my-project/locations/us-west1-a/appliances/my-appliance
+        },
+        "edgeCluster": { # EdgeCluster contains information specific to Google Edge Clusters. # Optional. Specific information for a Google Edge cluster.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the Edge Cluster. For example: //edgecontainer.googleapis.com/projects/my-project/locations/us-west1-a/clusters/my-cluster
+        },
+        "gkeCluster": { # GkeCluster contains information specific to GKE clusters. # Optional. Specific information for a GKE-on-GCP cluster.
+          "clusterMissing": True or False, # Output only. If cluster_missing is set then it denotes that the GKE cluster no longer exists in the GKE Control Plane.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the GKE cluster. For example: //container.googleapis.com/projects/my-project/locations/us-west1-a/clusters/my-cluster Zonal clusters are also supported.
+        },
+        "googleManaged": True or False, # Output only. Whether the lifecycle of this membership is managed by a google cluster platform service.
+        "kubernetesMetadata": { # KubernetesMetadata provides informational metadata for Memberships representing Kubernetes clusters. # Output only. Useful Kubernetes-specific metadata.
+          "kubernetesApiServerVersion": "A String", # Output only. Kubernetes API server version string as reported by `/version`.
+          "memoryMb": 42, # Output only. The total memory capacity as reported by the sum of all Kubernetes nodes resources, defined in MB.
+          "nodeCount": 42, # Output only. Node count as reported by Kubernetes nodes resources.
+          "nodeProviderId": "A String", # Output only. Node providerID as reported by the first node in the list of nodes on the Kubernetes endpoint. On Kubernetes platforms that support zero-node clusters (like GKE-on-GCP), the node_count will be zero and the node_provider_id will be empty.
+          "updateTime": "A String", # Output only. The time at which these details were last updated. This update_time is different from the Membership-level update_time since EndpointDetails are updated internally for API consumers.
+          "vcpuCount": 42, # Output only. vCPU count as reported by Kubernetes nodes resources.
+        },
+        "kubernetesResource": { # KubernetesResource contains the YAML manifests and configuration for Membership Kubernetes resources in the cluster. After CreateMembership or UpdateMembership, these resources should be re-applied in the cluster. # Optional. The in-cluster Kubernetes Resources that should be applied for a correctly registered cluster, in the steady state. These resources: * Ensure that the cluster is exclusively registered to one and only one Hub Membership. * Propagate Workload Pool Information available in the Membership Authority field. * Ensure proper initial configuration of default Hub Features.
+          "connectResources": [ # Output only. The Kubernetes resources for installing the GKE Connect agent. This field is only populated in the Membership returned from a successful long-running operation from CreateMembership or UpdateMembership. It is not populated during normal GetMembership or ListMemberships requests. To get the resource manifest after the initial registration, the caller should make an UpdateMembership call with an empty field mask.
+            { # ResourceManifest represents a single Kubernetes resource to be applied to the cluster.
+              "clusterScoped": True or False, # Whether the resource provided in the manifest is `cluster_scoped`. If unset, the manifest is assumed to be namespace scoped. This field is used for REST mapping when applying the resource in a cluster.
+              "manifest": "A String", # YAML manifest of the resource.
+            },
+          ],
+          "membershipCrManifest": "A String", # Input only. The YAML representation of the Membership CR. This field is ignored for GKE clusters where Hub can read the CR directly. Callers should provide the CR that is currently present in the cluster during CreateMembership or UpdateMembership, or leave this field empty if none exists. The CR manifest is used to validate the cluster has not been registered with another Membership.
+          "membershipResources": [ # Output only. Additional Kubernetes resources that need to be applied to the cluster after Membership creation, and after every update. This field is only populated in the Membership returned from a successful long-running operation from CreateMembership or UpdateMembership. It is not populated during normal GetMembership or ListMemberships requests. To get the resource manifest after the initial registration, the caller should make an UpdateMembership call with an empty field mask.
+            { # ResourceManifest represents a single Kubernetes resource to be applied to the cluster.
+              "clusterScoped": True or False, # Whether the resource provided in the manifest is `cluster_scoped`. If unset, the manifest is assumed to be namespace scoped. This field is used for REST mapping when applying the resource in a cluster.
+              "manifest": "A String", # YAML manifest of the resource.
+            },
+          ],
+          "resourceOptions": { # ResourceOptions represent options for Kubernetes resource generation. # Optional. Options for Kubernetes resource generation.
+            "connectVersion": "A String", # Optional. The Connect agent version to use for connect_resources. Defaults to the latest GKE Connect version. The version must be a currently supported version, obsolete versions will be rejected.
+            "k8sVersion": "A String", # Optional. Major version of the Kubernetes cluster. This is only used to determine which version to use for the CustomResourceDefinition resources, `apiextensions/v1beta1` or `apiextensions/v1`.
+            "v1beta1Crd": True or False, # Optional. Use `apiextensions/v1beta1` instead of `apiextensions/v1` for CustomResourceDefinition resources. This option should be set for clusters with Kubernetes apiserver versions <1.16.
+          },
+        },
+        "multiCloudCluster": { # MultiCloudCluster contains information specific to GKE Multi-Cloud clusters. # Optional. Specific information for a GKE Multi-Cloud cluster.
+          "clusterMissing": True or False, # Output only. If cluster_missing is set, it denotes that the API (gkemulticloud.googleapis.com) resource for this GKE Multi-Cloud cluster no longer exists.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the GKE Multi-Cloud cluster. For example: //gkemulticloud.googleapis.com/projects/my-project/locations/us-west1-a/awsClusters/my-cluster //gkemulticloud.googleapis.com/projects/my-project/locations/us-west1-a/azureClusters/my-cluster //gkemulticloud.googleapis.com/projects/my-project/locations/us-west1-a/attachedClusters/my-cluster
+        },
+        "onPremCluster": { # OnPremCluster contains information specific to GKE On-Prem clusters. # Optional. Specific information for a GKE On-Prem cluster. An on-prem user cluster that has no resourceLink is not allowed to use this field; it should have a nil "type" instead.
+          "adminCluster": True or False, # Immutable. Whether the cluster is an admin cluster.
+          "clusterMissing": True or False, # Output only. If cluster_missing is set, it denotes that the API (gkeonprem.googleapis.com) resource for this GKE On-Prem cluster no longer exists.
+          "clusterType": "A String", # Immutable. The on-prem cluster's type.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the GKE On-Prem cluster. For example: //gkeonprem.googleapis.com/projects/my-project/locations/us-west1-a/vmwareClusters/my-cluster //gkeonprem.googleapis.com/projects/my-project/locations/us-west1-a/bareMetalClusters/my-cluster
+        },
+      },
+      "externalId": "A String", # Optional. An externally-generated and managed ID for this Membership. This ID may be modified after creation, but this is not recommended. The ID must match the regex: `a-zA-Z0-9*`. If this Membership represents a Kubernetes cluster, this value should be set to the UID of the `kube-system` namespace object.
+      "labels": { # Optional. Labels for this membership.
+        "a_key": "A String",
+      },
+      "lastConnectionTime": "A String", # Output only. For clusters using Connect, the timestamp of the most recent connection established with Google Cloud. This time is updated every several minutes, not continuously. For clusters that do not use GKE Connect, or that have never connected successfully, this field will be unset.
+      "monitoringConfig": { # MonitoringConfig informs Fleet-based applications/services/UIs how metrics for the underlying cluster are reported to cloud monitoring services. It can be set from empty to non-empty, but can't be mutated directly, to prevent accidentally breaking the continuity of metrics. # Optional. The monitoring config information for this membership.
+        "cluster": "A String", # Optional. Cluster name used to report metrics. For Anthos on VMWare/Baremetal/MultiCloud clusters, it would be in the format {cluster_type}/{cluster_name}, e.g., "awsClusters/cluster_1".
+        "clusterHash": "A String", # Optional. For GKE and Multicloud clusters, this is the UUID of the cluster resource. For VMWare and Baremetal clusters, this is the kube-system UID.
+        "kubernetesMetricsPrefix": "A String", # Optional. Kubernetes system metrics, if available, are written to this prefix. This defaults to kubernetes.io for GKE and, eventually, kubernetes.io/anthos for Anthos. Note: Anthos MultiCloud uses the kubernetes.io prefix today but will migrate to kubernetes.io/anthos.
+        "location": "A String", # Optional. Location used to report metrics.
+        "projectId": "A String", # Optional. Project used to report metrics.
+      },
+      "name": "A String", # Output only. The full, unique name of this Membership resource in the format `projects/*/locations/*/memberships/{membership_id}`, set during creation. `membership_id` must be a valid RFC 1123 compliant DNS label: 1. At most 63 characters in length 2. It must consist of lower case alphanumeric characters or `-` 3. It must start and end with an alphanumeric character. This can be expressed as the regex: `[a-z0-9]([-a-z0-9]*[a-z0-9])?`, with a maximum length of 63 characters.
+      "state": { # MembershipState describes the state of a Membership resource. # Output only. State of the Membership resource.
+        "code": "A String", # Output only. The current state of the Membership resource.
+      },
+      "uniqueId": "A String", # Output only. Google-generated UUID for this resource. This is unique across all Membership resources. If a Membership resource is deleted and another resource with the same name is created, it gets a different unique_id.
+      "updateTime": "A String", # Output only. When the Membership was last updated.
+    },
+  ],
+  "nextPageToken": "A String", # A token to request the next page of resources from the `ListBoundMemberships` method. The value of an empty string means that there are no more resources to return.
+  "unreachable": [ # List of locations that could not be reached while fetching this list.
+    "A String",
+  ],
+}
+
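Example (editorial, not part of the generated reference): a minimal sketch of calling listMemberships with the standard google-api-python-client discovery flow, assuming Application Default Credentials are available. The project and scope names and the AIP-160 filter string are placeholders.

from googleapiclient import discovery

# Build the GKE Hub client; uses Application Default Credentials by default.
service = discovery.build('gkehub', 'v1')

# List Memberships bound to a Scope, optionally narrowed with an AIP-160 filter.
response = service.projects().locations().scopes().listMemberships(
    scopeName='projects/my-project/locations/global/scopes/my-scope',
    filter='labels.env = "prod"',
).execute()

for membership in response.get('memberships', []):
    print(membership['name'], membership.get('uniqueId'))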
+ +
+ listMemberships_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
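Since the bound Memberships can span pages, the _next helper can be used to walk all pages. A sketch, reusing the `service` object and placeholder scope name from the previous example:

scopes = service.projects().locations().scopes()
request = scopes.listMemberships(
    scopeName='projects/my-project/locations/global/scopes/my-scope')
while request is not None:
    response = request.execute()
    for membership in response.get('memberships', []):
        print(membership['name'])
    # listMemberships_next returns None once the last page has been fetched.
    request = scopes.listMemberships_next(request, response)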
+ +
+ listPermitted(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists permitted Scopes.
+
+Args:
+  parent: string, Required. The parent (project and location) where the Scope will be listed. Specified in the format `projects/*/locations/*`. (required)
+  pageSize: integer, Optional. When requesting a 'page' of resources, `page_size` specifies the number of resources to return. If unspecified or set to 0, all resources will be returned.
+  pageToken: string, Optional. Token returned by a previous call to `ListPermittedScopes`, which specifies the position in the list from which to continue listing the resources.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # List of permitted Scopes.
+  "nextPageToken": "A String", # A token to request the next page of resources from the `ListPermittedScopes` method. The value of an empty string means that there are no more resources to return.
+  "scopes": [ # The list of permitted Scopes.
+    { # Scope represents a Scope in a Fleet.
+      "createTime": "A String", # Output only. When the scope was created.
+      "deleteTime": "A String", # Output only. When the scope was deleted.
+      "labels": { # Optional. Labels for this Scope.
+        "a_key": "A String",
+      },
+      "name": "A String", # The resource name for the scope `projects/{project}/locations/{location}/scopes/{scope}`
+      "namespaceLabels": { # Optional. Scope-level cluster namespace labels. For the member clusters bound to the Scope, these labels are applied to each namespace under the Scope. Scope-level labels take precedence over Namespace-level labels (`namespace_labels` in the Fleet Namespace resource) if they share a key. Keys and values must be Kubernetes-conformant.
+        "a_key": "A String",
+      },
+      "state": { # ScopeLifecycleState describes the state of a Scope resource. # Output only. State of the scope resource.
+        "code": "A String", # Output only. The current state of the scope resource.
+      },
+      "uid": "A String", # Output only. Google-generated UUID for this resource. This is unique across all scope resources. If a scope resource is deleted and another resource with the same name is created, it gets a different uid.
+      "updateTime": "A String", # Output only. When the scope was last updated.
+    },
+  ],
+}
+
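A similar sketch for listPermitted, listing the Scopes the caller is permitted to see under a parent location; it reuses the `service` object built above, and the parent value is a placeholder.

response = service.projects().locations().scopes().listPermitted(
    parent='projects/my-project/locations/global',
).execute()

for scope in response.get('scopes', []):
    print(scope['name'], scope.get('state', {}).get('code'))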
+ +
+ listPermitted_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+
list_next()
Retrieves the next page of results.
diff --git a/docs/dyn/gkehub_v1alpha.projects.locations.features.html b/docs/dyn/gkehub_v1alpha.projects.locations.features.html
index 142b67a914..ec252f9476 100644
--- a/docs/dyn/gkehub_v1alpha.projects.locations.features.html
+++ b/docs/dyn/gkehub_v1alpha.projects.locations.features.html
@@ -192,6 +192,31 @@ 

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -374,6 +399,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -680,6 +730,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1193,6 +1268,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1375,6 +1475,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1681,6 +1806,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2182,6 +2332,31 @@
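The snippet below is a minimal sketch of how the ldapConfig block documented in these hunks might be assembled in Python before it is attached to an auth method. Every hostname, DN, and credential is a placeholder, and the base64 handling simply follows the certificateAuthorityData comment above (a base64-encoded, PEM-formatted CA certificate is required for "ldaps" and "startTLS" connections).

<pre>
import base64

# Placeholder values throughout; the keys mirror the ldapConfig structure documented above.
with open("ldap-ca.pem", "rb") as pem_file:
    ca_data = base64.b64encode(pem_file.read()).decode("utf-8")

ldap_config = {
    "server": {
        # "ldaps" and "startTLS" connection types require certificateAuthorityData.
        "host": "ldap.corp.example.com:636",
        "connectionType": "ldaps",
        "certificateAuthorityData": ca_data,
    },
    "serviceAccount": {
        "simpleBindCredentials": {
            "dn": "CN=directory-reader,OU=ServiceAccounts,DC=example,DC=com",
            "password": "placeholder-password",  # input only; read back as encryptedPassword
        },
    },
    "user": {
        "baseDn": "OU=Users,DC=example,DC=com",
        "filter": "(objectClass=User)",
        "loginAttribute": "sAMAccountName",   # what the user types when logging in
        "idAttribute": "userPrincipalName",   # what RBAC policies refer to
    },
    "group": {
        "baseDn": "OU=Groups,DC=example,DC=com",
        "filter": "(objectClass=Group)",
        "idAttribute": "distinguishedName",
    },
}
</pre>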

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2364,6 +2539,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2670,6 +2870,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -3127,6 +3352,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -3309,6 +3559,31 @@
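The loginAttribute comment above loses its placeholder in the generated HTML (it renders as "(=)"); the intent is that the login attribute is matched against the typed username and then combined with the optional user filter. The helper below is only an approximation of that combination, written to make the sAMAccountName/userPrincipalName example concrete; the exact filter the Identity Service agent builds is not specified on this page.

<pre>
# Approximation only: shows how a login attribute and the optional user filter
# would typically combine into one LDAP search expression.
def build_user_search_filter(login_attribute: str, username: str, user_filter: str) -> str:
    return f"(&{user_filter}({login_attribute}={username}))"

# With loginAttribute "sAMAccountName", a user logging in as "bsmith" would be
# looked up with something like "(&(objectClass=User)(sAMAccountName=bsmith))",
# while RBAC policies reference the idAttribute value, e.g. "bsmith@example.com".
print(build_user_search_filter("sAMAccountName", "bsmith", "(objectClass=User)"))
</pre>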

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -3615,6 +3890,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. diff --git a/docs/dyn/gkehub_v1beta.projects.locations.features.html b/docs/dyn/gkehub_v1beta.projects.locations.features.html index 0c2f5bd322..b526b34420 100644 --- a/docs/dyn/gkehub_v1beta.projects.locations.features.html +++ b/docs/dyn/gkehub_v1beta.projects.locations.features.html @@ -192,6 +192,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -373,6 +398,31 @@
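To show where this structure is used in the gkehub_v1beta surface, the sketch below attaches the ldap_config dict from the earlier snippet as an auth method and patches the Identity Service feature. The membershipSpecs / identityservice / authMethods body shape, the feature and membership names, and the update mask are assumptions for illustration; check the request body documented on this page before relying on them.

<pre>
from googleapiclient import discovery

# Sketch under the assumptions stated above; not a definitive request shape.
gkehub = discovery.build("gkehub", "v1beta")

feature_name = "projects/my-project/locations/global/features/identityservice"
body = {
    "membershipSpecs": {
        "projects/my-project/locations/global/memberships/my-cluster": {
            "identityservice": {
                "authMethods": [
                    {
                        "name": "ldap-corp",        # identifier for this auth config
                        "ldapConfig": ldap_config,  # built in the earlier snippet
                    }
                ]
            }
        }
    }
}

response = (
    gkehub.projects()
    .locations()
    .features()
    .patch(name=feature_name, updateMask="membership_specs", body=body)
    .execute()
)
</pre>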

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -670,6 +720,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1068,6 +1143,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1249,6 +1349,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1546,6 +1671,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -1932,6 +2082,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2113,6 +2288,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2410,6 +2610,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2752,6 +2977,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -2933,6 +3183,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. @@ -3230,6 +3505,31 @@

Method Details

"googleConfig": { # Configuration for the Google Plugin Auth flow. # GoogleConfig specific configuration. "disable": True or False, # Disable automatic configuration of Google Plugin on supported platforms. }, + "ldapConfig": { # Configuration for the LDAP Auth flow. # LDAP specific configuration. + "group": { # Contains the properties for locating and authenticating groups in the directory. # Optional. Contains the properties for locating and authenticating groups in the directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for group entries. + "filter": "A String", # Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to "(objectClass=Group)". + "idAttribute": "A String", # Optional. The identifying name of each group a user belongs to. For example, if this is set to "distinguishedName" then RBACs and other group expectations should be written as full DNs. This defaults to "distinguishedName". + }, + "server": { # Server settings for the external LDAP server. # Required. Server settings for the external LDAP server. + "certificateAuthorityData": "A String", # Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the "ldaps" and "startTLS" connections. + "connectionType": "A String", # Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty. + "host": "A String", # Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, "ldap.server.example" or "10.10.10.10:389". + }, + "serviceAccount": { # Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. # Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate. + "simpleBindCredentials": { # The structure holds the LDAP simple binding credential. # Credentials for basic auth. + "dn": "A String", # Required. The distinguished name(DN) of the service account object/user. + "encryptedPassword": "A String", # Output only. The encrypted password of the service account object/user. + "password": "A String", # Required. Input only. The password of the service account object/user. + }, + }, + "user": { # Defines where users exist in the LDAP directory. # Required. Defines where users exist in the LDAP directory. + "baseDn": "A String", # Required. The location of the subtree in the LDAP directory to search for user entries. + "filter": "A String", # Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to "(objectClass=User)". + "idAttribute": "A String", # Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). 
For example, setting loginAttribute to "sAMAccountName" and identifierAttribute to "userPrincipalName" would allow a user to login as "bsmith", but actual RBAC policies for the user would be written as "bsmith@example.com". Using "userPrincipalName" is recommended since this will be unique for each user. This defaults to "userPrincipalName". + "loginAttribute": "A String", # Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. "(=)" and is combined with the optional filter field. This defaults to "userPrincipalName". + }, + }, "name": "A String", # Identifier for auth config. "oidcConfig": { # Configuration for OIDC Auth flow. # OIDC specific configuration. "certificateAuthorityData": "A String", # PEM-encoded CA for OIDC provider. diff --git a/docs/dyn/gkehub_v1beta.projects.locations.scopes.html b/docs/dyn/gkehub_v1beta.projects.locations.scopes.html index 059493c4d0..2ff169e55f 100644 --- a/docs/dyn/gkehub_v1beta.projects.locations.scopes.html +++ b/docs/dyn/gkehub_v1beta.projects.locations.scopes.html @@ -102,6 +102,18 @@

Instance Methods

list(parent, pageSize=None, pageToken=None, x__xgafv=None)

Lists Scopes.

+

+ listMemberships(scopeName, filter=None, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists Memberships bound to a Scope. The response includes relevant Memberships from all regions.

+

+ listMemberships_next()

+

Retrieves the next page of results.

+

+ listPermitted(parent, pageSize=None, pageToken=None, x__xgafv=None)

+

Lists permitted Scopes.

+

+ listPermitted_next()

+

Retrieves the next page of results.

list_next()

Retrieves the next page of results.

@@ -330,6 +342,180 @@

Method Details

}
+
+ listMemberships(scopeName, filter=None, pageSize=None, pageToken=None, x__xgafv=None) +
Lists Memberships bound to a Scope. The response includes relevant Memberships from all regions.
+
+Args:
+  scopeName: string, Required. Name of the Scope, in the format `projects/*/locations/global/scopes/*`, to which the Memberships are bound. (required)
+  filter: string, Optional. Lists Memberships that match the filter expression, following the syntax outlined in https://google.aip.dev/160. Currently, filtering can be done only based on a Membership's `name`, `labels`, `create_time`, `update_time`, and `unique_id`.
+  pageSize: integer, Optional. When requesting a 'page' of resources, `page_size` specifies number of resources to return. If unspecified or set to 0, all resources will be returned. Pagination is currently not supported; therefore, setting this field does not have any impact for now.
+  pageToken: string, Optional. Token returned by previous call to `ListBoundMemberships` which specifies the position in the list from where to continue listing the resources.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # List of Memberships bound to a Scope.
+  "memberships": [ # The list of Memberships bound to the given Scope.
+    { # Membership contains information about a member cluster.
+      "authority": { # Authority encodes how Google will recognize identities from this Membership. See the workload identity documentation for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity # Optional. How to identify workloads from this Membership. See the documentation on Workload Identity for more details: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
+        "identityProvider": "A String", # Output only. An identity provider that reflects the `issuer` in the workload identity pool.
+        "issuer": "A String", # Optional. A JSON Web Token (JWT) issuer URI. `issuer` must start with `https://` and be a valid URL with length <2000 characters, it must use `location` rather than `zone` for GKE clusters. If set, then Google will allow valid OIDC tokens from this issuer to authenticate within the workload_identity_pool. OIDC discovery will be performed on this URI to validate tokens from the issuer. Clearing `issuer` disables Workload Identity. `issuer` cannot be directly modified; it must be cleared (and Workload Identity disabled) before using a new issuer (and re-enabling Workload Identity).
+        "oidcJwks": "A String", # Optional. OIDC verification keys for this Membership in JWKS format (RFC 7517). When this field is set, OIDC discovery will NOT be performed on `issuer`, and instead OIDC tokens will be validated using this field.
+        "workloadIdentityPool": "A String", # Output only. The name of the workload identity pool in which `issuer` will be recognized. There is a single Workload Identity Pool per Hub that is shared between all Memberships that belong to that Hub. For a Hub hosted in {PROJECT_ID}, the workload pool format is `{PROJECT_ID}.hub.id.goog`, although this is subject to change in newer versions of this API.
+      },
+      "createTime": "A String", # Output only. When the Membership was created.
+      "deleteTime": "A String", # Output only. When the Membership was deleted.
+      "description": "A String", # Output only. Description of this membership, limited to 63 characters. Must match the regex: `a-zA-Z0-9*` This field is present for legacy purposes.
+      "endpoint": { # MembershipEndpoint contains information needed to contact a Kubernetes API, endpoint and any additional Kubernetes metadata. # Optional. Endpoint information to reach this member.
+        "applianceCluster": { # ApplianceCluster contains information specific to GDC Edge Appliance Clusters. # Optional. Specific information for a GDC Edge Appliance cluster.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the Appliance Cluster. For example: //transferappliance.googleapis.com/projects/my-project/locations/us-west1-a/appliances/my-appliance
+        },
+        "edgeCluster": { # EdgeCluster contains information specific to Google Edge Clusters. # Optional. Specific information for a Google Edge cluster.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the Edge Cluster. For example: //edgecontainer.googleapis.com/projects/my-project/locations/us-west1-a/clusters/my-cluster
+        },
+        "gkeCluster": { # GkeCluster contains information specific to GKE clusters. # Optional. Specific information for a GKE-on-GCP cluster.
+          "clusterMissing": True or False, # Output only. If cluster_missing is set then it denotes that the GKE cluster no longer exists in the GKE Control Plane.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the GKE cluster. For example: //container.googleapis.com/projects/my-project/locations/us-west1-a/clusters/my-cluster Zonal clusters are also supported.
+        },
+        "googleManaged": True or False, # Output only. Whether the lifecycle of this membership is managed by a google cluster platform service.
+        "kubernetesMetadata": { # KubernetesMetadata provides informational metadata for Memberships representing Kubernetes clusters. # Output only. Useful Kubernetes-specific metadata.
+          "kubernetesApiServerVersion": "A String", # Output only. Kubernetes API server version string as reported by `/version`.
+          "memoryMb": 42, # Output only. The total memory capacity as reported by the sum of all Kubernetes nodes resources, defined in MB.
+          "nodeCount": 42, # Output only. Node count as reported by Kubernetes nodes resources.
+          "nodeProviderId": "A String", # Output only. Node providerID as reported by the first node in the list of nodes on the Kubernetes endpoint. On Kubernetes platforms that support zero-node clusters (like GKE-on-GCP), the node_count will be zero and the node_provider_id will be empty.
+          "updateTime": "A String", # Output only. The time at which these details were last updated. This update_time is different from the Membership-level update_time since EndpointDetails are updated internally for API consumers.
+          "vcpuCount": 42, # Output only. vCPU count as reported by Kubernetes nodes resources.
+        },
+        "kubernetesResource": { # KubernetesResource contains the YAML manifests and configuration for Membership Kubernetes resources in the cluster. After CreateMembership or UpdateMembership, these resources should be re-applied in the cluster. # Optional. The in-cluster Kubernetes Resources that should be applied for a correctly registered cluster, in the steady state. These resources: * Ensure that the cluster is exclusively registered to one and only one Hub Membership. * Propagate Workload Pool Information available in the Membership Authority field. * Ensure proper initial configuration of default Hub Features.
+          "connectResources": [ # Output only. The Kubernetes resources for installing the GKE Connect agent This field is only populated in the Membership returned from a successful long-running operation from CreateMembership or UpdateMembership. It is not populated during normal GetMembership or ListMemberships requests. To get the resource manifest after the initial registration, the caller should make a UpdateMembership call with an empty field mask.
+            { # ResourceManifest represents a single Kubernetes resource to be applied to the cluster.
+              "clusterScoped": True or False, # Whether the resource provided in the manifest is `cluster_scoped`. If unset, the manifest is assumed to be namespace scoped. This field is used for REST mapping when applying the resource in a cluster.
+              "manifest": "A String", # YAML manifest of the resource.
+            },
+          ],
+          "membershipCrManifest": "A String", # Input only. The YAML representation of the Membership CR. This field is ignored for GKE clusters where Hub can read the CR directly. Callers should provide the CR that is currently present in the cluster during CreateMembership or UpdateMembership, or leave this field empty if none exists. The CR manifest is used to validate the cluster has not been registered with another Membership.
+          "membershipResources": [ # Output only. Additional Kubernetes resources that need to be applied to the cluster after Membership creation, and after every update. This field is only populated in the Membership returned from a successful long-running operation from CreateMembership or UpdateMembership. It is not populated during normal GetMembership or ListMemberships requests. To get the resource manifest after the initial registration, the caller should make a UpdateMembership call with an empty field mask.
+            { # ResourceManifest represents a single Kubernetes resource to be applied to the cluster.
+              "clusterScoped": True or False, # Whether the resource provided in the manifest is `cluster_scoped`. If unset, the manifest is assumed to be namespace scoped. This field is used for REST mapping when applying the resource in a cluster.
+              "manifest": "A String", # YAML manifest of the resource.
+            },
+          ],
+          "resourceOptions": { # ResourceOptions represent options for Kubernetes resource generation. # Optional. Options for Kubernetes resource generation.
+            "connectVersion": "A String", # Optional. The Connect agent version to use for connect_resources. Defaults to the latest GKE Connect version. The version must be a currently supported version, obsolete versions will be rejected.
+            "k8sVersion": "A String", # Optional. Major version of the Kubernetes cluster. This is only used to determine which version to use for the CustomResourceDefinition resources, `apiextensions/v1beta1` or`apiextensions/v1`.
+            "v1beta1Crd": True or False, # Optional. Use `apiextensions/v1beta1` instead of `apiextensions/v1` for CustomResourceDefinition resources. This option should be set for clusters with Kubernetes apiserver versions <1.16.
+          },
+        },
+        "multiCloudCluster": { # MultiCloudCluster contains information specific to GKE Multi-Cloud clusters. # Optional. Specific information for a GKE Multi-Cloud cluster.
+          "clusterMissing": True or False, # Output only. If cluster_missing is set then it denotes that API(gkemulticloud.googleapis.com) resource for this GKE Multi-Cloud cluster no longer exists.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the GKE Multi-Cloud cluster. For example: //gkemulticloud.googleapis.com/projects/my-project/locations/us-west1-a/awsClusters/my-cluster //gkemulticloud.googleapis.com/projects/my-project/locations/us-west1-a/azureClusters/my-cluster //gkemulticloud.googleapis.com/projects/my-project/locations/us-west1-a/attachedClusters/my-cluster
+        },
+        "onPremCluster": { # OnPremCluster contains information specific to GKE On-Prem clusters. # Optional. Specific information for a GKE On-Prem cluster. An onprem user-cluster who has no resourceLink is not allowed to use this field, it should have a nil "type" instead.
+          "adminCluster": True or False, # Immutable. Whether the cluster is an admin cluster.
+          "clusterMissing": True or False, # Output only. If cluster_missing is set then it denotes that API(gkeonprem.googleapis.com) resource for this GKE On-Prem cluster no longer exists.
+          "clusterType": "A String", # Immutable. The on prem cluster's type.
+          "resourceLink": "A String", # Immutable. Self-link of the Google Cloud resource for the GKE On-Prem cluster. For example: //gkeonprem.googleapis.com/projects/my-project/locations/us-west1-a/vmwareClusters/my-cluster //gkeonprem.googleapis.com/projects/my-project/locations/us-west1-a/bareMetalClusters/my-cluster
+        },
+      },
+      "externalId": "A String", # Optional. An externally-generated and managed ID for this Membership. This ID may be modified after creation, but this is not recommended. The ID must match the regex: `a-zA-Z0-9*` If this Membership represents a Kubernetes cluster, this value should be set to the UID of the `kube-system` namespace object.
+      "labels": { # Optional. Labels for this membership.
+        "a_key": "A String",
+      },
+      "lastConnectionTime": "A String", # Output only. For clusters using Connect, the timestamp of the most recent connection established with Google Cloud. This time is updated every several minutes, not continuously. For clusters that do not use GKE Connect, or that have never connected successfully, this field will be unset.
+      "monitoringConfig": { # MonitoringConfig informs Fleet-based applications/services/UIs how the metrics for the underlying cluster is reported to cloud monitoring services. It can be set from empty to non-empty, but can't be mutated directly to prevent accidentally breaking the constinousty of metrics. # Optional. The monitoring config information for this membership.
+        "cluster": "A String", # Optional. Cluster name used to report metrics. For Anthos on VMWare/Baremetal/MultiCloud clusters, it would be in format {cluster_type}/{cluster_name}, e.g., "awsClusters/cluster_1".
+        "clusterHash": "A String", # Optional. For GKE and Multicloud clusters, this is the UUID of the cluster resource. For VMWare and Baremetal clusters, this is the kube-system UID.
+        "kubernetesMetricsPrefix": "A String", # Optional. Kubernetes system metrics, if available, are written to this prefix. This defaults to kubernetes.io for GKE, and kubernetes.io/anthos for Anthos eventually. Noted: Anthos MultiCloud will have kubernetes.io prefix today but will migration to be under kubernetes.io/anthos.
+        "location": "A String", # Optional. Location used to report Metrics
+        "projectId": "A String", # Optional. Project used to report Metrics
+      },
+      "name": "A String", # Output only. The full, unique name of this Membership resource in the format `projects/*/locations/*/memberships/{membership_id}`, set during creation. `membership_id` must be a valid RFC 1123 compliant DNS label: 1. At most 63 characters in length 2. It must consist of lower case alphanumeric characters or `-` 3. It must start and end with an alphanumeric character Which can be expressed as the regex: `[a-z0-9]([-a-z0-9]*[a-z0-9])?`, with a maximum length of 63 characters.
+      "state": { # MembershipState describes the state of a Membership resource. # Output only. State of the Membership resource.
+        "code": "A String", # Output only. The current state of the Membership resource.
+      },
+      "uniqueId": "A String", # Output only. Google-generated UUID for this resource. This is unique across all Membership resources. If a Membership resource is deleted and another resource with the same name is created, it gets a different unique_id.
+      "updateTime": "A String", # Output only. When the Membership was last updated.
+    },
+  ],
+  "nextPageToken": "A String", # A token to request the next page of resources from the `ListBoundMemberships` method. The value of an empty string means that there are no more resources to return.
+  "unreachable": [ # List of locations that could not be reached while fetching this list.
+    "A String",
+  ],
+}
+
+ +
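As a rough usage sketch, the snippet below shows how `listMemberships` might be invoked through the discovery-based Python client; the project, scope, and filter values are placeholders, and credentials are assumed to come from Application Default Credentials.
<pre>
from googleapiclient.discovery import build

# Placeholder scope name; assumes Application Default Credentials are configured.
gkehub = build("gkehub", "v1beta")
scope_name = "projects/my-project/locations/global/scopes/my-scope"

response = gkehub.projects().locations().scopes().listMemberships(
    scopeName=scope_name,
    filter='labels.env = "prod"',   # optional AIP-160 filter over Membership fields
).execute()

for membership in response.get("memberships", []):
    print(membership["name"], membership.get("uniqueId", ""))
</pre>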
+ listMemberships_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
+
+ +
+ listPermitted(parent, pageSize=None, pageToken=None, x__xgafv=None) +
Lists permitted Scopes.
+
+Args:
+  parent: string, Required. The parent (project and location) where the Scope will be listed. Specified in the format `projects/*/locations/*`. (required)
+  pageSize: integer, Optional. When requesting a 'page' of resources, `page_size` specifies number of resources to return. If unspecified or set to 0, all resources will be returned.
+  pageToken: string, Optional. Token returned by previous call to `ListPermittedScopes` which specifies the position in the list from where to continue listing the resources.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # List of permitted Scopes.
+  "nextPageToken": "A String", # A token to request the next page of resources from the `ListPermittedScopes` method. The value of an empty string means that there are no more resources to return.
+  "scopes": [ # The list of permitted Scopes
+    { # Scope represents a Scope in a Fleet.
+      "createTime": "A String", # Output only. When the scope was created.
+      "deleteTime": "A String", # Output only. When the scope was deleted.
+      "labels": { # Optional. Labels for this Scope.
+        "a_key": "A String",
+      },
+      "name": "A String", # The resource name for the scope `projects/{project}/locations/{location}/scopes/{scope}`
+      "namespaceLabels": { # Optional. Scope-level cluster namespace labels. For the member clusters bound to the Scope, these labels are applied to each namespace under the Scope. Scope-level labels take precedence over Namespace-level labels (`namespace_labels` in the Fleet Namespace resource) if they share a key. Keys and values must be Kubernetes-conformant.
+        "a_key": "A String",
+      },
+      "state": { # ScopeLifecycleState describes the state of a Scope resource. # Output only. State of the scope resource.
+        "code": "A String", # Output only. The current state of the scope resource.
+      },
+      "uid": "A String", # Output only. Google-generated UUID for this resource. This is unique across all scope resources. If a scope resource is deleted and another resource with the same name is created, it gets a different uid.
+      "updateTime": "A String", # Output only. When the scope was last updated.
+    },
+  ],
+}
+
+ +
+ listPermitted_next() +
Retrieves the next page of results.
+
+        Args:
+          previous_request: The request for the previous page. (required)
+          previous_response: The response from the request for the previous page. (required)
+
+        Returns:
+          A request object that you can call 'execute()' on to request the next
+          page. Returns None if there are no more items in the collection.
+        
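Similarly, a minimal sketch of paging through permitted Scopes with `listPermitted` and `listPermitted_next`; the parent value is a placeholder.
<pre>
from googleapiclient.discovery import build

gkehub = build("gkehub", "v1beta")
scopes_api = gkehub.projects().locations().scopes()

# Placeholder parent; iterate pages until the collection is exhausted.
request = scopes_api.listPermitted(parent="projects/my-project/locations/global", pageSize=100)
while request is not None:
    response = request.execute()
    for scope in response.get("scopes", []):
        print(scope["name"])
    # Returns None when the response carries no nextPageToken.
    request = scopes_api.listPermitted_next(request, response)
</pre>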
+
+
list_next()
Retrieves the next page of results.
diff --git a/docs/dyn/healthcare_v1.projects.locations.datasets.fhirStores.html b/docs/dyn/healthcare_v1.projects.locations.datasets.fhirStores.html
index 27313f8541..d05893eb81 100644
--- a/docs/dyn/healthcare_v1.projects.locations.datasets.fhirStores.html
+++ b/docs/dyn/healthcare_v1.projects.locations.datasets.fhirStores.html
@@ -1466,7 +1466,7 @@ 

Method Details

], }, "force": True or False, # Optional. When enabled, changes will be reverted without explicit confirmation - "inputGcsObject": "A String", # Optional. GCS object containing list of {resourceType}/{resourceId} lines, identifying resources to be reverted + "inputGcsObject": "A String", # Optional. Cloud Storage object containing list of {resourceType}/{resourceId} lines, identifying resources to be reverted "resultGcsBucket": "A String", # Required. Bucket to deposit result "rollbackTime": "A String", # Required. Time point to rollback to. "type": [ # Optional. If specified, revert only resources of these types diff --git a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.html b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.html index 88641fac9e..802629b555 100644 --- a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.html +++ b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.html @@ -122,7 +122,7 @@

Method Details

SetBlobStorageSettings sets the blob storage settings of the specified resources.
 
 Args:
-  resource: string, Required. The path of the resource to update the blob storage settings in the format of `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. (required)
+  resource: string, Required. The path of the resource to update the blob storage settings in the format of `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. (required)
   body: object, The request body.
     The object takes the form of:
 
@@ -130,7 +130,7 @@ 

Method Details

"blobStorageSettings": { # Settings for data stored in Blob storage. # The blob storage settings to update for the specified resources. Only fields listed in `update_mask` are applied. "blobStorageClass": "A String", # The Storage class in which the Blob data is stored. }, - "filterConfig": { # Specifies the filter configuration for DICOM resources. # Optional. A filter configuration. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. + "filterConfig": { # Specifies the filter configuration for DICOM resources. # Optional. A filter configuration. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. "resourcePathsGcsUri": "A String", # The Cloud Storage location of the filter configuration file. The `gcs_uri` must be in the format `gs://bucket/path/to/object`. The filter configuration file must contain a list of resource paths separated by newline characters (\n or \r\n). Each resource path must be in the format "/studies/{studyUID}[/series/{seriesUID}[/instances/{instanceUID}]]" The Cloud Healthcare API service account must have the `roles/storage.objectViewer` Cloud IAM role for this Cloud Storage location. }, } diff --git a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.series.instances.html b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.series.instances.html index 5d4ecdaeb5..e4c1092989 100644 --- a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.series.instances.html +++ b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.dicomWeb.studies.series.instances.html @@ -91,7 +91,7 @@

Method Details

GetStorageInfo returns the storage info of the specified resource.
 
 Args:
-  resource: string, Required. The path of the resource for which the storage info is requested (for exaxmple for a DICOM Instance: `projects/{projectid}/datasets/{datasetid}/dicomStores/{dicomStoreId}/dicomWeb/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}`) (required)
+  resource: string, Required. The path of the resource for which the storage info is requested (for example, for a DICOM Instance: `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreId}/dicomWeb/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}`) (required)
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
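A short, hedged sketch of requesting storage info for a single DICOM instance using the path format shown above; all identifiers are placeholders.
<pre>
from googleapiclient.discovery import build

healthcare = build("healthcare", "v1beta1")

# Placeholder instance path using the location-qualified format shown above.
instance = ("projects/my-project/locations/us-central1/datasets/my-dataset/"
            "dicomStores/my-store/dicomWeb/studies/1.2.3/series/1.2.4/instances/1.2.5")

info = (healthcare.projects().locations().datasets().dicomStores()
        .dicomWeb().studies().series().instances()
        .getStorageInfo(resource=instance)
        .execute())
print(info.get("referencedResource"), info.get("blobStorageInfo"), info.get("structuredStorageInfo"))
</pre>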
@@ -106,7 +106,7 @@ 

Method Details

"storageClass": "A String", # The storage class in which the Blob data is stored. "storageClassUpdateTime": "A String", # The time at which the storage class was updated. This is used to compute early deletion fees of the resource. }, - "referencedResource": "A String", # The resource whose storage info is returned. For example, to specify the resource path of a DICOM Instance: `projects/{projectid}/datasets/{datasetid}/dicomStores/{dicom_store_id}/dicomWeb/studi/{study_uid}/series/{series_uid}/instances/{instance_uid}` + "referencedResource": "A String", # The resource whose storage info is returned. For example, to specify the resource path of a DICOM Instance: `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicom_store_id}/dicomWeb/studi/{study_uid}/series/{series_uid}/instances/{instance_uid}` "structuredStorageInfo": { # StructuredStorageInfo contains details about the data stored in Structured Storage for the referenced resource. # Info about the data stored in structured storage for the resource. "sizeBytes": "A String", # Size in bytes of data stored in structured storage. }, diff --git a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.html b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.html index 54e84382b8..0f8eba6bf4 100644 --- a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.html +++ b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.dicomStores.html @@ -876,7 +876,7 @@

Method Details

SetBlobStorageSettings sets the blob storage settings of the specified resources.
 
 Args:
-  resource: string, Required. The path of the resource to update the blob storage settings in the format of `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. (required)
+  resource: string, Required. The path of the resource to update the blob storage settings in the format of `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. (required)
   body: object, The request body.
     The object takes the form of:
 
@@ -884,7 +884,7 @@ 

Method Details

"blobStorageSettings": { # Settings for data stored in Blob storage. # The blob storage settings to update for the specified resources. Only fields listed in `update_mask` are applied. "blobStorageClass": "A String", # The Storage class in which the Blob data is stored. }, - "filterConfig": { # Specifies the filter configuration for DICOM resources. # Optional. A filter configuration. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. + "filterConfig": { # Specifies the filter configuration for DICOM resources. # Optional. A filter configuration. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`. "resourcePathsGcsUri": "A String", # The Cloud Storage location of the filter configuration file. The `gcs_uri` must be in the format `gs://bucket/path/to/object`. The filter configuration file must contain a list of resource paths separated by newline characters (\n or \r\n). Each resource path must be in the format "/studies/{studyUID}[/series/{seriesUID}[/instances/{instanceUID}]]" The Cloud Healthcare API service account must have the `roles/storage.objectViewer` Cloud IAM role for this Cloud Storage location. }, } diff --git a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html index 1cb66db273..40448db506 100644 --- a/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html +++ b/docs/dyn/healthcare_v1beta1.projects.locations.datasets.fhirStores.html @@ -1149,8 +1149,8 @@

Method Details

{ # A single consent scope that provides info on who has access to the requested resource scope for a particular purpose and environment, enforced by which consent. "accessorScope": { # The accessor scope that describes who can access, for what purpose, in which environment. # The accessor scope that describes who can access, for what purpose, and in which environment. "actor": "A String", # An individual, group, or access role that identifies the accessor or a characteristic of the accessor. This can be a resource ID (such as `{resourceType}/{id}`) or an external URI. This value must be present. - "environment": "A String", # An abstract identifier that describes the environment or conditions under which the accessor is acting. Can be “*” if it applies to all environments. - "purpose": "A String", # The intent of data use. Can be “*” if it applies to all purposes. + "environment": "A String", # An abstract identifier that describes the environment or conditions under which the accessor is acting. Can be "*" if it applies to all environments. + "purpose": "A String", # The intent of data use. Can be "*" if it applies to all purposes. }, "decision": "A String", # Whether the current consent scope is permitted or denied access on the requested resource. "enforcingConsents": [ # Metadata of the consent resources that enforce the consent scope's access. @@ -1158,13 +1158,13 @@

Method Details

"cascadeOrigins": [ # The compartment base resources that matched a cascading policy. Each resource has the following format: `projects/{project_id}/locations/{location_id}/datasets/{dataset_id}/fhirStores/{fhir_store_id}/fhir/{resource_type}/{resource_id}` "A String", ], - "consentResource": "A String", # The resource name of this consent resource. Format: `projects/{projectId}/datasets/{datasetId}/fhirStores/{fhirStoreId}/fhir/{resourceType}/{id}`. + "consentResource": "A String", # The resource name of this consent resource. Format: `projects/{projectId}/locations/{locationId}/datasets/{datasetId}/fhirStores/{fhirStoreId}/fhir/{resourceType}/{id}`. "enforcementTime": "A String", # Last enforcement timestamp of this consent resource. "matchingAccessorScopes": [ # A list of all the matching accessor scopes of this consent policy that enforced ExplainDataAccessConsentScope.accessor_scope. { # The accessor scope that describes who can access, for what purpose, in which environment. "actor": "A String", # An individual, group, or access role that identifies the accessor or a characteristic of the accessor. This can be a resource ID (such as `{resourceType}/{id}`) or an external URI. This value must be present. - "environment": "A String", # An abstract identifier that describes the environment or conditions under which the accessor is acting. Can be “*” if it applies to all environments. - "purpose": "A String", # The intent of data use. Can be “*” if it applies to all purposes. + "environment": "A String", # An abstract identifier that describes the environment or conditions under which the accessor is acting. Can be "*" if it applies to all environments. + "purpose": "A String", # The intent of data use. Can be "*" if it applies to all purposes. }, ], "patientConsentOwner": "A String", # The patient owning the consent (only applicable for patient consents), in the format: `projects/{project_id}/locations/{location_id}/datasets/{dataset_id}/fhirStores/{fhir_store_id}/fhir/Patient/{patient_id}` diff --git a/docs/dyn/iam_v1.locations.workforcePools.providers.html b/docs/dyn/iam_v1.locations.workforcePools.providers.html index e9efd43858..4469894315 100644 --- a/docs/dyn/iam_v1.locations.workforcePools.providers.html +++ b/docs/dyn/iam_v1.locations.workforcePools.providers.html @@ -125,7 +125,7 @@

Method Details

{ # A configuration for an external identity provider. "attributeCondition": "A String", # A [Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. `google.profile_photo`, `google.display_name` and `google.posix_username` are not supported. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credentials will be accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ``` - "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The linux username used by OS login. This is an optional field and the mapped posix username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` + "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The Linux username used by OS Login. This is an optional field and the mapped POSIX username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` "a_key": "A String", }, "description": "A String", # A user-specified description of the provider. Cannot exceed 256 characters. @@ -238,7 +238,7 @@
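As a rough illustration of the `attributeMapping` and `attributeCondition` fields described above, the sketch below builds a provider body and calls `create` through the Python client. The pool name, provider ID, display name, and the `oidc` settings are placeholders, and the `locations().workforcePools().providers().create(...)` call path and its `workforcePoolProviderId` parameter are assumed from this reference; this is not a definitive recipe.

```
from googleapiclient import discovery

# Application Default Credentials are picked up automatically.
iam = discovery.build("iam", "v1")

# Hypothetical workforce pool; replace with a real pool name.
parent = "locations/global/workforcePools/my-pool"

provider_body = {
    "displayName": "Example OIDC provider",
    # google.subject is required for OIDC providers (see the field
    # description above); the other mappings are optional examples.
    "attributeMapping": {
        "google.subject": "assertion.sub",
        "google.groups": "assertion.groups",
        "attribute.department": "assertion.department",
    },
    # Only accept credentials whose mapped groups include "admins".
    "attributeCondition": "'admins' in google.groups",
    # Provider-specific settings; issuer and client ID are placeholders.
    "oidc": {
        "issuerUri": "https://idp.example.com",
        "clientId": "example-client-id",
    },
}

operation = (
    iam.locations()
    .workforcePools()
    .providers()
    .create(
        parent=parent,
        workforcePoolProviderId="example-provider",
        body=provider_body,
    )
    .execute()
)
print(operation.get("name"))
```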

Method Details

{ # A configuration for an external identity provider. "attributeCondition": "A String", # A [Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. `google.profile_photo`, `google.display_name` and `google.posix_username` are not supported. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credentials will be accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ``` - "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The linux username used by OS login. This is an optional field and the mapped posix username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` + "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The Linux username used by OS Login. This is an optional field and the mapped POSIX username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` "a_key": "A String", }, "description": "A String", # A user-specified description of the provider. Cannot exceed 256 characters. @@ -293,7 +293,7 @@

Method Details

"workforcePoolProviders": [ # A list of providers. { # A configuration for an external identity provider. "attributeCondition": "A String", # A [Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. `google.profile_photo`, `google.display_name` and `google.posix_username` are not supported. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credentials will be accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ``` - "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The linux username used by OS login. This is an optional field and the mapped posix username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` + "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The Linux username used by OS Login. This is an optional field and the mapped POSIX username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` "a_key": "A String", }, "description": "A String", # A user-specified description of the provider. Cannot exceed 256 characters. @@ -353,7 +353,7 @@
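Since this hunk documents the list response (`workforcePoolProviders`), here is a short sketch of paging through it with the generated `list`/`list_next` helpers. The pool name is a placeholder; only the `workforcePoolProviders`, `description`, and `state` fields are taken from the documented response.

```
from googleapiclient import discovery

iam = discovery.build("iam", "v1")
providers = iam.locations().workforcePools().providers()

# Hypothetical pool name; page through every provider in it.
request = providers.list(parent="locations/global/workforcePools/my-pool")
while request is not None:
    response = request.execute()
    for provider in response.get("workforcePoolProviders", []):
        print(provider.get("description"), provider.get("state"))
    # Standard generated pagination helper; returns None on the last page.
    request = providers.list_next(
        previous_request=request, previous_response=response
    )
```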

Method Details

{ # A configuration for an external identity provider. "attributeCondition": "A String", # A [Common Expression Language](https://opensource.google/projects/cel) expression, in plain text, to restrict what otherwise valid authentication credentials issued by the provider should not be accepted. The expression must output a boolean representing whether to allow the federation. The following keywords may be referenced in the expressions: * `assertion`: JSON representing the authentication credential issued by the provider. * `google`: The Google attributes mapped from the assertion in the `attribute_mappings`. `google.profile_photo`, `google.display_name` and `google.posix_username` are not supported. * `attribute`: The custom attributes mapped from the assertion in the `attribute_mappings`. The maximum length of the attribute condition expression is 4096 characters. If unspecified, all valid authentication credentials will be accepted. The following example shows how to only allow credentials with a mapped `google.groups` value of `admins`: ``` "'admins' in google.groups" ``` - "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The linux username used by OS login. This is an optional field and the mapped posix username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` + "attributeMapping": { # Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The Linux username used by OS Login. This is an optional field and the mapped POSIX username cannot exceed 32 characters, The key must match the regex "^a-zA-Z0-9._{0,31}$". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. 
For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {"google.subject": "assertion.sub"} ``` "a_key": "A String", }, "description": "A String", # A user-specified description of the provider. Cannot exceed 256 characters. diff --git a/docs/dyn/iam_v1.organizations.roles.html b/docs/dyn/iam_v1.organizations.roles.html index 369942bea1..a08055cf1c 100644 --- a/docs/dyn/iam_v1.organizations.roles.html +++ b/docs/dyn/iam_v1.organizations.roles.html @@ -121,7 +121,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }, @@ -143,7 +143,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -171,7 +171,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -198,7 +198,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -235,7 +235,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }, @@ -273,7 +273,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. } @@ -294,7 +294,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -328,7 +328,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
diff --git a/docs/dyn/iam_v1.projects.locations.workloadIdentityPools.providers.html b/docs/dyn/iam_v1.projects.locations.workloadIdentityPools.providers.html index b0071f5fdd..ea9cffe269 100644 --- a/docs/dyn/iam_v1.projects.locations.workloadIdentityPools.providers.html +++ b/docs/dyn/iam_v1.projects.locations.workloadIdentityPools.providers.html @@ -144,9 +144,11 @@

Method Details

"jwksJson": "A String", # Optional. OIDC JWKs in JSON String format. For details on the definition of a JWK, see https://tools.ietf.org/html/rfc7517. If not set, the `jwks_uri` from the discovery document(fetched from the .well-known path of the `issuer_uri`) will be used. Currently, RSA and EC asymmetric keys are supported. The JWK must use following format and include only the following fields: { "keys": [ { "kty": "RSA/EC", "alg": "", "use": "sig", "kid": "", "n": "", "e": "", "x": "", "y": "", "crv": "" } ] } }, "saml": { # Represents an SAML 2.0 identity provider. # An SAML 2.0 identity provider. - "idpMetadataXml": "A String", # Required. SAML Identity provider configuration metadata xml doc. The xml document should comply with [SAML 2.0 specification](https://www.oasis-open.org/committees/download.php/56785/sstc-saml-metadata-errata-2.0-wd-05.pdf). The max size of the acceptable xml document will be bounded to 128k characters. The metadata xml document should satisfy the following constraints: 1) Must contain an Identity Provider Entity ID. 2) Must contain at least one non-expired signing key certificate. 3) For each signing key: a) Valid from should be no more than 7 days from now. b) Valid to should be no more than 15 years in the future. 4) Upto 3 IdP signing keys are allowed in the metadata xml. When updating the provider's metadata xml, at lease one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata + "idpMetadataXml": "A String", # Required. SAML identity provider (IdP) configuration metadata XML doc. The XML document must comply with the [SAML 2.0 specification](https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf). The maximum size of an acceptable XML document is 128K characters. The SAML metadata XML document must satisfy the following constraints: * Must contain an IdP Entity ID. * Must contain at least one non-expired signing certificate. * For each signing certificate, the expiration must be: * From no more than 7 days in the future. * To no more than 15 years in the future. * Up to three IdP signing keys are allowed. When updating the provider's metadata XML, at least one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata. }, "state": "A String", # Output only. The state of the provider. + "x509": { # An X.509-type identity provider represents a CA. It is trusted to assert a client identity if the client has a certificate that chains up to this CA. # An X.509-type identity provider. + }, } workloadIdentityPoolProviderId: string, Required. The ID for the provider, which becomes the final component of the resource name. This value must be 4-32 characters, and may contain the characters [a-z0-9-]. The prefix `gcp-` is reserved for use by Google, and may not be specified. @@ -249,9 +251,11 @@

Method Details

"jwksJson": "A String", # Optional. OIDC JWKs in JSON String format. For details on the definition of a JWK, see https://tools.ietf.org/html/rfc7517. If not set, the `jwks_uri` from the discovery document(fetched from the .well-known path of the `issuer_uri`) will be used. Currently, RSA and EC asymmetric keys are supported. The JWK must use following format and include only the following fields: { "keys": [ { "kty": "RSA/EC", "alg": "", "use": "sig", "kid": "", "n": "", "e": "", "x": "", "y": "", "crv": "" } ] } }, "saml": { # Represents an SAML 2.0 identity provider. # An SAML 2.0 identity provider. - "idpMetadataXml": "A String", # Required. SAML Identity provider configuration metadata xml doc. The xml document should comply with [SAML 2.0 specification](https://www.oasis-open.org/committees/download.php/56785/sstc-saml-metadata-errata-2.0-wd-05.pdf). The max size of the acceptable xml document will be bounded to 128k characters. The metadata xml document should satisfy the following constraints: 1) Must contain an Identity Provider Entity ID. 2) Must contain at least one non-expired signing key certificate. 3) For each signing key: a) Valid from should be no more than 7 days from now. b) Valid to should be no more than 15 years in the future. 4) Upto 3 IdP signing keys are allowed in the metadata xml. When updating the provider's metadata xml, at lease one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata + "idpMetadataXml": "A String", # Required. SAML identity provider (IdP) configuration metadata XML doc. The XML document must comply with the [SAML 2.0 specification](https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf). The maximum size of an acceptable XML document is 128K characters. The SAML metadata XML document must satisfy the following constraints: * Must contain an IdP Entity ID. * Must contain at least one non-expired signing certificate. * For each signing certificate, the expiration must be: * From no more than 7 days in the future. * To no more than 15 years in the future. * Up to three IdP signing keys are allowed. When updating the provider's metadata XML, at least one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata. }, "state": "A String", # Output only. The state of the provider. + "x509": { # An X.509-type identity provider represents a CA. It is trusted to assert a client identity if the client has a certificate that chains up to this CA. # An X.509-type identity provider. + }, }
@@ -296,9 +300,11 @@

Method Details

"jwksJson": "A String", # Optional. OIDC JWKs in JSON String format. For details on the definition of a JWK, see https://tools.ietf.org/html/rfc7517. If not set, the `jwks_uri` from the discovery document(fetched from the .well-known path of the `issuer_uri`) will be used. Currently, RSA and EC asymmetric keys are supported. The JWK must use following format and include only the following fields: { "keys": [ { "kty": "RSA/EC", "alg": "", "use": "sig", "kid": "", "n": "", "e": "", "x": "", "y": "", "crv": "" } ] } }, "saml": { # Represents an SAML 2.0 identity provider. # An SAML 2.0 identity provider. - "idpMetadataXml": "A String", # Required. SAML Identity provider configuration metadata xml doc. The xml document should comply with [SAML 2.0 specification](https://www.oasis-open.org/committees/download.php/56785/sstc-saml-metadata-errata-2.0-wd-05.pdf). The max size of the acceptable xml document will be bounded to 128k characters. The metadata xml document should satisfy the following constraints: 1) Must contain an Identity Provider Entity ID. 2) Must contain at least one non-expired signing key certificate. 3) For each signing key: a) Valid from should be no more than 7 days from now. b) Valid to should be no more than 15 years in the future. 4) Upto 3 IdP signing keys are allowed in the metadata xml. When updating the provider's metadata xml, at lease one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata + "idpMetadataXml": "A String", # Required. SAML identity provider (IdP) configuration metadata XML doc. The XML document must comply with the [SAML 2.0 specification](https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf). The maximum size of an acceptable XML document is 128K characters. The SAML metadata XML document must satisfy the following constraints: * Must contain an IdP Entity ID. * Must contain at least one non-expired signing certificate. * For each signing certificate, the expiration must be: * From no more than 7 days in the future. * To no more than 15 years in the future. * Up to three IdP signing keys are allowed. When updating the provider's metadata XML, at least one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata. }, "state": "A String", # Output only. The state of the provider. + "x509": { # An X.509-type identity provider represents a CA. It is trusted to assert a client identity if the client has a certificate that chains up to this CA. # An X.509-type identity provider. + }, }, ], }
@@ -348,9 +354,11 @@

Method Details

"jwksJson": "A String", # Optional. OIDC JWKs in JSON String format. For details on the definition of a JWK, see https://tools.ietf.org/html/rfc7517. If not set, the `jwks_uri` from the discovery document(fetched from the .well-known path of the `issuer_uri`) will be used. Currently, RSA and EC asymmetric keys are supported. The JWK must use following format and include only the following fields: { "keys": [ { "kty": "RSA/EC", "alg": "", "use": "sig", "kid": "", "n": "", "e": "", "x": "", "y": "", "crv": "" } ] } }, "saml": { # Represents an SAML 2.0 identity provider. # An SAML 2.0 identity provider. - "idpMetadataXml": "A String", # Required. SAML Identity provider configuration metadata xml doc. The xml document should comply with [SAML 2.0 specification](https://www.oasis-open.org/committees/download.php/56785/sstc-saml-metadata-errata-2.0-wd-05.pdf). The max size of the acceptable xml document will be bounded to 128k characters. The metadata xml document should satisfy the following constraints: 1) Must contain an Identity Provider Entity ID. 2) Must contain at least one non-expired signing key certificate. 3) For each signing key: a) Valid from should be no more than 7 days from now. b) Valid to should be no more than 15 years in the future. 4) Upto 3 IdP signing keys are allowed in the metadata xml. When updating the provider's metadata xml, at lease one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata + "idpMetadataXml": "A String", # Required. SAML identity provider (IdP) configuration metadata XML doc. The XML document must comply with the [SAML 2.0 specification](https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf). The maximum size of an acceptable XML document is 128K characters. The SAML metadata XML document must satisfy the following constraints: * Must contain an IdP Entity ID. * Must contain at least one non-expired signing certificate. * For each signing certificate, the expiration must be: * From no more than 7 days in the future. * To no more than 15 years in the future. * Up to three IdP signing keys are allowed. When updating the provider's metadata XML, at least one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata. }, "state": "A String", # Output only. The state of the provider. + "x509": { # An X.509-type identity provider represents a CA. It is trusted to assert a client identity if the client has a certificate that chains up to this CA. # An X.509-type identity provider. + }, } updateMask: string, Required. The list of fields to update. diff --git a/docs/dyn/iam_v1.projects.roles.html b/docs/dyn/iam_v1.projects.roles.html index 5eaec4b428..b3cb371141 100644 --- a/docs/dyn/iam_v1.projects.roles.html +++ b/docs/dyn/iam_v1.projects.roles.html @@ -121,7 +121,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }, @@ -143,7 +143,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -171,7 +171,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -198,7 +198,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -235,7 +235,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }, @@ -273,7 +273,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. } @@ -294,7 +294,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }
@@ -328,7 +328,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. } diff --git a/docs/dyn/iam_v1.roles.html b/docs/dyn/iam_v1.roles.html index ae0fe8b9f0..5e3ba01f57 100644 --- a/docs/dyn/iam_v1.roles.html +++ b/docs/dyn/iam_v1.roles.html @@ -119,7 +119,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. } @@ -156,7 +156,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }, @@ -211,7 +211,7 @@

Method Details

"includedPermissions": [ # The names of the permissions this role grants when bound in an IAM policy. "A String", ], - "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles. + "name": "A String", # The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles. "stage": "A String", # The current launch stage of the role. If the `ALPHA` launch stage has been selected for a role, the `stage` field will not be included in the returned definition for the role. "title": "A String", # Optional. A human-readable title for the role. Typically this is limited to 100 UTF-8 bytes. }, diff --git a/docs/dyn/iap_v1.v1.html b/docs/dyn/iap_v1.v1.html index db4f7dc179..ace089cffd 100644 --- a/docs/dyn/iap_v1.v1.html +++ b/docs/dyn/iap_v1.v1.html @@ -94,7 +94,7 @@

Instance Methods

Updates the IAP settings on a particular IAP protected resource. It replaces all fields unless the `update_mask` is set.

validateAttributeExpression(name, expression=None, x__xgafv=None)

- Validates a given CEL expression conforms to IAP restrictions.
+ Validates that a given CEL expression conforms to IAP restrictions.

Method Details

close()
@@ -478,11 +478,11 @@

Method Details

validateAttributeExpression(name, expression=None, x__xgafv=None)
- Validates a given CEL expression conforms to IAP restrictions.
+ Validates that a given CEL expression conforms to IAP restrictions.
 
 Args:
   name: string, Required. The resource name of the IAP protected resource. (required)
-  expression: string, Required. User input string expression. Should be of the form 'attributes.saml_attributes.filter(attribute, attribute.name in ['{attribute_name}', '{attribute_name}'])'
+  expression: string, Required. User input string expression. Should be of the form `attributes.saml_attributes.filter(attribute, attribute.name in ['{attribute_name}', '{attribute_name}'])`
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
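A minimal usage sketch for this method follows; the IAP-protected resource name and the SAML attribute names are placeholders rather than values taken from this page.

from googleapiclient import discovery

iap = discovery.build("iap", "v1")

result = iap.v1().validateAttributeExpression(
    name="projects/123456789/iap_web/appengine-my-project",  # placeholder resource name
    expression=("attributes.saml_attributes.filter(attribute, "
                "attribute.name in ['group', 'department'])"),
).execute()
print(result)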
diff --git a/docs/dyn/identitytoolkit_v2.projects.html b/docs/dyn/identitytoolkit_v2.projects.html
index d7b4ce2170..3949cfb2c0 100644
--- a/docs/dyn/identitytoolkit_v2.projects.html
+++ b/docs/dyn/identitytoolkit_v2.projects.html
@@ -287,7 +287,7 @@ 

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. @@ -502,7 +502,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. @@ -716,7 +716,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. diff --git a/docs/dyn/identitytoolkit_v2.projects.tenants.html b/docs/dyn/identitytoolkit_v2.projects.tenants.html index cfc5babf0b..a05ea12dd1 100644 --- a/docs/dyn/identitytoolkit_v2.projects.tenants.html +++ b/docs/dyn/identitytoolkit_v2.projects.tenants.html @@ -206,7 +206,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. @@ -311,7 +311,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. @@ -441,7 +441,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. @@ -614,7 +614,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. @@ -737,7 +737,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. @@ -843,7 +843,7 @@

Method Details

"endScore": 3.14, # The end score (inclusive) of the score range for an action. Must be a value between 0.0 and 1.0, at 11 discrete values; e.g. 0, 0.1, 0.2, 0.3, ... 0.9, 1.0. A score of 0.0 indicates the riskiest request (likely a bot), whereas 1.0 indicates the safest request (likely a human). See https://cloud.google.com/recaptcha-enterprise/docs/interpret-assessment. }, ], - "recaptchaKeys": [ # Output only. The reCAPTCHA keys. + "recaptchaKeys": [ # The reCAPTCHA keys. { # The reCAPTCHA key config. reCAPTCHA Enterprise offers different keys for different client platforms. "key": "A String", # The reCAPTCHA Enterprise key resource name, e.g. "projects/{project}/keys/{key}" "type": "A String", # The client's platform type. diff --git a/docs/dyn/metastore_v1.projects.locations.services.html b/docs/dyn/metastore_v1.projects.locations.services.html index 050e940c35..7832536e73 100644 --- a/docs/dyn/metastore_v1.projects.locations.services.html +++ b/docs/dyn/metastore_v1.projects.locations.services.html @@ -84,11 +84,6 @@

Instance Methods

Returns the metadataImports Resource.

-
-   migrationExecutions()
-
- Returns the migrationExecutions Resource.
-

alterLocation(service, body=None, x__xgafv=None)

Alter metadata resource location. The metadata resource can be a database, table, or partition. This functionality only updates the parent directory for the respective metadata resource and does not transfer any existing data to the new location.
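A rough sketch of issuing this call with the client library is shown below; the service name, metadata resource name, and request body fields are assumptions based on the Dataproc Metastore API rather than text from this page.

from googleapiclient import discovery

metastore = discovery.build("metastore", "v1")

# Point a table's parent directory at a new Cloud Storage location.
operation = metastore.projects().locations().services().alterLocation(
    service="projects/my-project/locations/us-central1/services/my-service",  # placeholder
    body={
        "resourceName": "databases/db1/tables/t1",         # assumed field name
        "locationUri": "gs://my-bucket/new-warehouse/t1",  # assumed field name
    },
).execute()
print(operation["name"])  # name of the long-running operation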

diff --git a/docs/dyn/metastore_v1alpha.projects.locations.services.html b/docs/dyn/metastore_v1alpha.projects.locations.services.html
index 4c73d5c9d1..0926809e08 100644
--- a/docs/dyn/metastore_v1alpha.projects.locations.services.html
+++ b/docs/dyn/metastore_v1alpha.projects.locations.services.html
@@ -89,11 +89,6 @@

Instance Methods

Returns the metadataImports Resource.

-
-   migrationExecutions()
-
- Returns the migrationExecutions Resource.
-

alterLocation(service, body=None, x__xgafv=None)

Alter metadata resource location. The metadata resource can be a database, table, or partition. This functionality only updates the parent directory for the respective metadata resource and does not transfer any existing data to the new location.

diff --git a/docs/dyn/metastore_v1beta.projects.locations.services.html b/docs/dyn/metastore_v1beta.projects.locations.services.html
index b7543a47ba..8b02baedc0 100644
--- a/docs/dyn/metastore_v1beta.projects.locations.services.html
+++ b/docs/dyn/metastore_v1beta.projects.locations.services.html
@@ -89,11 +89,6 @@

Instance Methods

Returns the metadataImports Resource.

-
-   migrationExecutions()
-
- Returns the migrationExecutions Resource.
-

alterLocation(service, body=None, x__xgafv=None)

Alter metadata resource location. The metadata resource can be a database, table, or partition. This functionality only updates the parent directory for the respective metadata resource and does not transfer any existing data to the new location.

diff --git a/docs/dyn/networkservices_v1.projects.locations.lbRouteExtensions.html b/docs/dyn/networkservices_v1.projects.locations.lbRouteExtensions.html
index 47a4563de8..71d34f134f 100644
--- a/docs/dyn/networkservices_v1.projects.locations.lbRouteExtensions.html
+++ b/docs/dyn/networkservices_v1.projects.locations.lbRouteExtensions.html
@@ -131,7 +131,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -139,7 +139,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -249,7 +249,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -257,7 +257,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -307,7 +307,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -315,7 +315,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -374,7 +374,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -382,7 +382,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). diff --git a/docs/dyn/networkservices_v1.projects.locations.lbTrafficExtensions.html b/docs/dyn/networkservices_v1.projects.locations.lbTrafficExtensions.html index 94bc77b227..272627b99d 100644 --- a/docs/dyn/networkservices_v1.projects.locations.lbTrafficExtensions.html +++ b/docs/dyn/networkservices_v1.projects.locations.lbTrafficExtensions.html @@ -131,7 +131,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -139,7 +139,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -249,7 +249,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -257,7 +257,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -307,7 +307,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -315,7 +315,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -374,7 +374,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -382,7 +382,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). diff --git a/docs/dyn/networkservices_v1beta1.projects.locations.lbRouteExtensions.html b/docs/dyn/networkservices_v1beta1.projects.locations.lbRouteExtensions.html index 87ea0eee37..6868175542 100644 --- a/docs/dyn/networkservices_v1beta1.projects.locations.lbRouteExtensions.html +++ b/docs/dyn/networkservices_v1beta1.projects.locations.lbRouteExtensions.html @@ -131,7 +131,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -139,7 +139,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -249,7 +249,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -257,7 +257,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -307,7 +307,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -315,7 +315,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -374,7 +374,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -382,7 +382,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LbRouteExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). diff --git a/docs/dyn/networkservices_v1beta1.projects.locations.lbTrafficExtensions.html b/docs/dyn/networkservices_v1beta1.projects.locations.lbTrafficExtensions.html index c447b48deb..a5863b27b2 100644 --- a/docs/dyn/networkservices_v1beta1.projects.locations.lbTrafficExtensions.html +++ b/docs/dyn/networkservices_v1beta1.projects.locations.lbTrafficExtensions.html @@ -131,7 +131,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -139,7 +139,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -249,7 +249,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -257,7 +257,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -307,7 +307,7 @@

Method Details

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -315,7 +315,7 @@

Method Details

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). @@ -374,7 +374,7 @@

}, ], "matchCondition": { # Conditions under which this chain is invoked for a request. # Required. Conditions under which this chain is invoked for a request. - "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference). + "celExpression": "A String", # Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference). }, "name": "A String", # Required. The name for this extension chain. The name is logged as part of the HTTP request logs. The name must conform with RFC-1034, is restricted to lower-cased letters, numbers and hyphens, and can have a maximum length of 63 characters. Additionally, the first character must be a letter and the last a letter or a number. }, @@ -382,7 +382,7 @@

"forwardingRules": [ # Required. A list of references to the forwarding rules to which this service extension is attached to. At least one forwarding rule is required. There can be only one `LBTrafficExtension` resource per forwarding rule. "A String", ], - "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources. + "labels": { # Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources. "a_key": "A String", }, "loadBalancingScheme": "A String", # Required. All backend services and forwarding rules referenced by this extension must share the same load balancing scheme. Supported values: `INTERNAL_MANAGED`, `EXTERNAL_MANAGED`. For more information, refer to [Choosing a load balancer](https://cloud.google.com/load-balancing/docs/backend-service). diff --git a/docs/dyn/privateca_v1.projects.locations.caPools.certificateAuthorities.html b/docs/dyn/privateca_v1.projects.locations.caPools.certificateAuthorities.html index a8c674027d..04a0b5d1ea 100644 --- a/docs/dyn/privateca_v1.projects.locations.caPools.certificateAuthorities.html +++ b/docs/dyn/privateca_v1.projects.locations.caPools.certificateAuthorities.html @@ -377,7 +377,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -876,7 +876,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -1208,7 +1208,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -1549,7 +1549,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. diff --git a/docs/dyn/privateca_v1.projects.locations.caPools.certificates.html b/docs/dyn/privateca_v1.projects.locations.caPools.certificates.html index 2f7cb3fa4a..efe65976a8 100644 --- a/docs/dyn/privateca_v1.projects.locations.caPools.certificates.html +++ b/docs/dyn/privateca_v1.projects.locations.caPools.certificates.html @@ -299,7 +299,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -605,7 +605,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -914,7 +914,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -1229,7 +1229,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -1553,7 +1553,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -1857,7 +1857,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. @@ -2174,7 +2174,7 @@

}, }, "subjectKeyId": { # A KeyId identifies a specific public key, usually by hashing the public key. # Optional. When specified this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CAS, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. - "keyId": "A String", # Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. + "keyId": "A String", # Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key. }, "x509Config": { # An X509Parameters is used to describe certain fields of an X.509 certificate, such as the key usage fields, fields specific to CA certificates, certificate policy extensions and custom extensions. # Required. Describes how some of the technical X.509 fields in a certificate should be populated. "additionalExtensions": [ # Optional. Describes custom X.509 extensions. diff --git a/docs/dyn/recaptchaenterprise_v1.projects.firewallpolicies.html b/docs/dyn/recaptchaenterprise_v1.projects.firewallpolicies.html index cbee251177..59a75f5fd8 100644 --- a/docs/dyn/recaptchaenterprise_v1.projects.firewallpolicies.html +++ b/docs/dyn/recaptchaenterprise_v1.projects.firewallpolicies.html @@ -249,7 +249,7 @@

Returns: An object of the form: - { # Response to request to list firewall policies belonging to a key. + { # Response to request to list firewall policies belonging to a project. "firewallPolicies": [ # Policy details. { # A FirewallPolicy represents a single matching pattern and resulting actions to take. "actions": [ # Optional. The actions that the caller should take regarding user access. There should be at most one terminal action. A terminal action is any action that forces a response, such as `AllowAction`, `BlockAction` or `SubstituteAction`. Zero or more non-terminal actions such as `SetHeader` might be specified. A single policy can contain up to 16 actions. diff --git a/docs/dyn/redis_v1.projects.locations.clusters.html b/docs/dyn/redis_v1.projects.locations.clusters.html index ca8ea00fb0..ced59c01bc 100644 --- a/docs/dyn/redis_v1.projects.locations.clusters.html +++ b/docs/dyn/redis_v1.projects.locations.clusters.html @@ -126,6 +126,16 @@
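The response comment above now states that listed firewall policies belong to a project rather than a key. As a hedged sketch (the project ID is a placeholder, and the presence of a `list_next` helper is assumed from the generated clients' usual paging pattern), collecting all policies might look like:

    from googleapiclient import discovery

    recaptcha = discovery.build("recaptchaenterprise", "v1")

    policies = []
    request = recaptcha.projects().firewallpolicies().list(parent="projects/my-project")
    while request is not None:
        response = request.execute()
        policies.extend(response.get("firewallPolicies", []))
        # Standard next-page pattern of the generated clients.
        request = recaptcha.projects().firewallpolicies().list_next(request, response)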

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -140,6 +150,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. @@ -248,6 +261,16 @@

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -262,6 +285,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. @@ -336,6 +362,16 @@

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -350,6 +386,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. @@ -407,6 +446,16 @@

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -421,6 +470,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. diff --git a/docs/dyn/redis_v1beta1.projects.locations.clusters.html b/docs/dyn/redis_v1beta1.projects.locations.clusters.html index c7076992eb..aefe48b1bc 100644 --- a/docs/dyn/redis_v1beta1.projects.locations.clusters.html +++ b/docs/dyn/redis_v1beta1.projects.locations.clusters.html @@ -126,6 +126,16 @@

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -140,6 +150,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. @@ -248,6 +261,16 @@

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -262,6 +285,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. @@ -336,6 +362,16 @@

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -350,6 +386,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. @@ -407,6 +446,16 @@

}, ], "name": "A String", # Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}` + "persistenceConfig": { # Configuration of the persistence functionality. # Optional. Persistence config (RDB, AOF) for the cluster. + "aofConfig": { # Configuration of the AOF based persistence. # Optional. AOF configuration. This field will be ignored if mode is not AOF. + "appendFsync": "A String", # Optional. fsync configuration. + }, + "mode": "A String", # Optional. The mode of persistence. + "rdbConfig": { # Configuration of the RDB based persistence. # Optional. RDB configuration. This field will be ignored if mode is not RDB. + "rdbSnapshotPeriod": "A String", # Optional. Period between RDB snapshots. + "rdbSnapshotStartTime": "A String", # Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used. + }, + }, "pscConfigs": [ # Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported. { "network": "A String", # Required. The network where the IP address of the discovery endpoint will be reserved, in the form of projects/{network_project}/global/networks/{network_id}. @@ -421,6 +470,9 @@

"pscConnectionId": "A String", # Output only. The PSC connection id of the forwarding rule connected to the service attachment. }, ], + "redisConfigs": { # Optional. Key/Value pairs of customer overrides for mutable Redis Configs + "a_key": "A String", + }, "replicaCount": 42, # Optional. The number of replica nodes per shard. "shardCount": 42, # Required. Number of shards for the Redis cluster. "sizeGb": 42, # Output only. Redis memory size in GB for the entire cluster rounded up to the next integer. diff --git a/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.html b/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.html index 299f45c6c3..b946c78011 100644 --- a/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.html +++ b/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.html @@ -257,8 +257,8 @@

The object takes the form of: { # Request message for ImportProducerOverrides - "force": True or False, # Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - "forceOnly": [ # The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. + "force": True or False, # Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + "forceOnly": [ # The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. "A String", ], "inlineSource": { # Import data embedded in the request message # The import data is specified in the request message itself diff --git a/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.limits.producerOverrides.html b/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.limits.producerOverrides.html index cb5b582891..6263c281fd 100644 --- a/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.limits.producerOverrides.html +++ b/docs/dyn/serviceconsumermanagement_v1beta1.services.consumerQuotaMetrics.limits.producerOverrides.html @@ -118,8 +118,8 @@

"unit": "A String", # The limit unit of the limit to which this override applies. An example unit would be: `1/{project}/{region}` Note that `{project}` and `{region}` are not placeholders in this example; the literal characters `{` and `}` occur in the string. } - force: boolean, Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. @@ -159,8 +159,8 @@

Args: name: string, The resource name of the override to delete. An example name would be: `services/compute.googleapis.com/projects/123/consumerQuotaMetrics/compute.googleapis.com%2Fcpus/limits/%2Fproject%2Fregion/producerOverrides/4a3f2c1d` (required) - force: boolean, Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. @@ -261,8 +261,8 @@
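Several of the force/forceOnly parameters above now recommend sending a case id in the "X-Goog-Request-Reason" header. One way to attach that header with the Python client is to set it on the generated HttpRequest before executing it; the parent resource name, override value and case id below are placeholders:

    from googleapiclient import discovery

    scm = discovery.build("serviceconsumermanagement", "v1beta1")

    request = scm.services().consumerQuotaMetrics().limits().producerOverrides().create(
        parent=(
            "services/example.googleapis.com/projects/123/consumerQuotaMetrics/"
            "example.googleapis.com%2Fdefault_requests/limits/%2Fmin%2Fproject"
        ),
        force=True,
        body={"overrideValue": "5000"},
    )
    # The HttpRequest keeps a per-request headers dict that is sent with the call.
    request.headers["X-Goog-Request-Reason"] = "casenumber/123456"
    operation = request.execute()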

"unit": "A String", # The limit unit of the limit to which this override applies. An example unit would be: `1/{project}/{region}` Note that `{project}` and `{region}` are not placeholders in this example; the literal characters `{` and `}` occur in the string. } - force: boolean, Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. diff --git a/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.html b/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.html index dccb642780..209dd86cad 100644 --- a/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.html +++ b/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.html @@ -268,8 +268,8 @@

The object takes the form of: { # Request message for ImportAdminOverrides - "force": True or False, # Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - "forceOnly": [ # The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. + "force": True or False, # Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + "forceOnly": [ # The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. "A String", ], "inlineSource": { # Import data embedded in the request message # The import data is specified in the request message itself @@ -327,8 +327,8 @@

The object takes the form of: { # Request message for ImportConsumerOverrides - "force": True or False, # Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - "forceOnly": [ # The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. + "force": True or False, # Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + "forceOnly": [ # The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. "A String", ], "inlineSource": { # Import data embedded in the request message # The import data is specified in the request message itself diff --git a/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.adminOverrides.html b/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.adminOverrides.html index 027b7e1b17..34b465bcd7 100644 --- a/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.adminOverrides.html +++ b/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.adminOverrides.html @@ -118,8 +118,8 @@

"unit": "A String", # The limit unit of the limit to which this override applies. An example unit would be: `1/{project}/{region}` Note that `{project}` and `{region}` are not placeholders in this example; the literal characters `{` and `}` occur in the string. } - force: boolean, Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. @@ -159,8 +159,8 @@

Args: name: string, The resource name of the override to delete. An example name would be: `projects/123/services/compute.googleapis.com/consumerQuotaMetrics/compute.googleapis.com%2Fcpus/limits/%2Fproject%2Fregion/adminOverrides/4a3f2c1d` (required) - force: boolean, Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. @@ -261,8 +261,8 @@

"unit": "A String", # The limit unit of the limit to which this override applies. An example unit would be: `1/{project}/{region}` Note that `{project}` and `{region}` are not placeholders in this example; the literal characters `{` and `}` occur in the string. } - force: boolean, Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. diff --git a/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.consumerOverrides.html b/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.consumerOverrides.html index 278b6e996c..bf8177e182 100644 --- a/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.consumerOverrides.html +++ b/docs/dyn/serviceusage_v1beta1.services.consumerQuotaMetrics.limits.consumerOverrides.html @@ -118,8 +118,8 @@

"unit": "A String", # The limit unit of the limit to which this override applies. An example unit would be: `1/{project}/{region}` Note that `{project}` and `{region}` are not placeholders in this example; the literal characters `{` and `}` occur in the string. } - force: boolean, Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. @@ -159,8 +159,8 @@

Args: name: string, The resource name of the override to delete. An example name would be: `projects/123/services/compute.googleapis.com/consumerQuotaMetrics/compute.googleapis.com%2Fcpus/limits/%2Fproject%2Fregion/consumerOverrides/4a3f2c1d` (required) - force: boolean, Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. @@ -261,8 +261,8 @@

"unit": "A String", # The limit unit of the limit to which this override applies. An example unit would be: `1/{project}/{region}` Note that `{project}` and `{region}` are not placeholders in this example; the literal characters `{` and `}` occur in the string. } - force: boolean, Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. - forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. (repeated) + force: boolean, Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. + forceOnly: string, The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in "X-Goog-Request-Reason" header when sending the request. (repeated) Allowed values QUOTA_SAFETY_CHECK_UNSPECIFIED - Unspecified quota safety check. LIMIT_DECREASE_BELOW_USAGE - Validates that a quota mutation would not cause the consumer's effective limit to be lower than the consumer's quota usage. diff --git a/docs/dyn/sheets_v4.spreadsheets.html b/docs/dyn/sheets_v4.spreadsheets.html index c43b01caff..8cc1ebe2e3 100644 --- a/docs/dyn/sheets_v4.spreadsheets.html +++ b/docs/dyn/sheets_v4.spreadsheets.html @@ -8087,6 +8087,7 @@

"verticalAlignment": "A String", # The vertical alignment of the value in the cell. "wrapStrategy": "A String", # The wrap strategy for the value in the cell. }, + "importFunctionsExternalUrlAccessAllowed": True or False, # Whether to allow external url access for image and import functions. Read only when true. When false, you can set to true. "iterativeCalculationSettings": { # Settings to control how circular dependencies are resolved with iterative calculation. # Determines whether and how circular references are resolved with iterative calculation. Absence of this field means that circular references result in calculation errors. "convergenceThreshold": 3.14, # When iterative calculation is enabled and successive results differ by less than this threshold value, the calculation rounds stop. "maxIterations": 42, # When iterative calculation is enabled, the maximum number of calculation rounds to perform. @@ -11680,6 +11681,7 @@

Method Details

"verticalAlignment": "A String", # The vertical alignment of the value in the cell. "wrapStrategy": "A String", # The wrap strategy for the value in the cell. }, + "importFunctionsExternalUrlAccessAllowed": True or False, # Whether to allow external url access for image and import functions. Read only when true. When false, you can set to true. "iterativeCalculationSettings": { # Settings to control how circular dependencies are resolved with iterative calculation. # Determines whether and how circular references are resolved with iterative calculation. Absence of this field means that circular references result in calculation errors. "convergenceThreshold": 3.14, # When iterative calculation is enabled and successive results differ by less than this threshold value, the calculation rounds stop. "maxIterations": 42, # When iterative calculation is enabled, the maximum number of calculation rounds to perform. @@ -15339,6 +15341,7 @@

Method Details

"verticalAlignment": "A String", # The vertical alignment of the value in the cell. "wrapStrategy": "A String", # The wrap strategy for the value in the cell. }, + "importFunctionsExternalUrlAccessAllowed": True or False, # Whether to allow external url access for image and import functions. Read only when true. When false, you can set to true. "iterativeCalculationSettings": { # Settings to control how circular dependencies are resolved with iterative calculation. # Determines whether and how circular references are resolved with iterative calculation. Absence of this field means that circular references result in calculation errors. "convergenceThreshold": 3.14, # When iterative calculation is enabled and successive results differ by less than this threshold value, the calculation rounds stop. "maxIterations": 42, # When iterative calculation is enabled, the maximum number of calculation rounds to perform. @@ -18991,6 +18994,7 @@

Method Details

"verticalAlignment": "A String", # The vertical alignment of the value in the cell. "wrapStrategy": "A String", # The wrap strategy for the value in the cell. }, + "importFunctionsExternalUrlAccessAllowed": True or False, # Whether to allow external url access for image and import functions. Read only when true. When false, you can set to true. "iterativeCalculationSettings": { # Settings to control how circular dependencies are resolved with iterative calculation. # Determines whether and how circular references are resolved with iterative calculation. Absence of this field means that circular references result in calculation errors. "convergenceThreshold": 3.14, # When iterative calculation is enabled and successive results differ by less than this threshold value, the calculation rounds stop. "maxIterations": 42, # When iterative calculation is enabled, the maximum number of calculation rounds to perform. @@ -22652,6 +22656,7 @@

Method Details

"verticalAlignment": "A String", # The vertical alignment of the value in the cell. "wrapStrategy": "A String", # The wrap strategy for the value in the cell. }, + "importFunctionsExternalUrlAccessAllowed": True or False, # Whether to allow external url access for image and import functions. Read only when true. When false, you can set to true. "iterativeCalculationSettings": { # Settings to control how circular dependencies are resolved with iterative calculation. # Determines whether and how circular references are resolved with iterative calculation. Absence of this field means that circular references result in calculation errors. "convergenceThreshold": 3.14, # When iterative calculation is enabled and successive results differ by less than this threshold value, the calculation rounds stop. "maxIterations": 42, # When iterative calculation is enabled, the maximum number of calculation rounds to perform. @@ -26349,6 +26354,7 @@

Method Details

"verticalAlignment": "A String", # The vertical alignment of the value in the cell. "wrapStrategy": "A String", # The wrap strategy for the value in the cell. }, + "importFunctionsExternalUrlAccessAllowed": True or False, # Whether to allow external url access for image and import functions. Read only when true. When false, you can set to true. "iterativeCalculationSettings": { # Settings to control how circular dependencies are resolved with iterative calculation. # Determines whether and how circular references are resolved with iterative calculation. Absence of this field means that circular references result in calculation errors. "convergenceThreshold": 3.14, # When iterative calculation is enabled and successive results differ by less than this threshold value, the calculation rounds stop. "maxIterations": 42, # When iterative calculation is enabled, the maximum number of calculation rounds to perform. diff --git a/docs/dyn/sqladmin_v1.instances.html b/docs/dyn/sqladmin_v1.instances.html index f5f85fbfb2..6185edae68 100644 --- a/docs/dyn/sqladmin_v1.instances.html +++ b/docs/dyn/sqladmin_v1.instances.html @@ -1181,6 +1181,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -1239,6 +1247,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # Optional. The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -1287,6 +1299,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors). Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -1312,7 +1325,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -1580,6 +1593,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -1638,6 +1659,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # Optional. The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -1686,6 +1711,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors). Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -1711,7 +1737,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -1945,6 +1971,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -2003,6 +2037,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # Optional. The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -2051,6 +2089,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors). Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -2076,7 +2115,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -2248,6 +2287,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -2306,6 +2353,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # Optional. The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -2354,6 +2405,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors). Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -2379,7 +2431,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -3868,6 +3920,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -3926,6 +3986,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # Optional. The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -3974,6 +4038,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors). Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -3999,7 +4064,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. diff --git a/docs/dyn/sqladmin_v1.projects.instances.html b/docs/dyn/sqladmin_v1.projects.instances.html index 428bf57831..4e20613d8b 100644 --- a/docs/dyn/sqladmin_v1.projects.instances.html +++ b/docs/dyn/sqladmin_v1.projects.instances.html @@ -541,6 +541,7 @@

Method Details

The object takes the form of: { # Instance start external sync request. + "migrationType": "A String", # Optional. MigrationType determines whether the migration is a physical file-based migration or a logical migration. "mysqlSyncConfig": { # MySQL-specific external server sync settings. # MySQL-specific settings for start external sync. "initialSyncFlags": [ # Flags to use for the initial dump. { # Initial sync flags for certain Cloud SQL APIs. Currently used for the MySQL external server initial dump. @@ -678,6 +679,7 @@

Method Details

The object takes the form of: { # Instance verify external sync settings request. + "migrationType": "A String", # Optional. MigrationType determines whether the migration is a physical file-based migration or a logical migration. "mysqlSyncConfig": { # MySQL-specific external server sync settings. # Optional. MySQL-specific settings for start external sync. "initialSyncFlags": [ # Flags to use for the initial dump. { # Initial sync flags for certain Cloud SQL APIs. Currently used for the MySQL external server initial dump. diff --git a/docs/dyn/sqladmin_v1beta4.instances.html b/docs/dyn/sqladmin_v1beta4.instances.html index 76b0e3cc0d..32768113b1 100644 --- a/docs/dyn/sqladmin_v1beta4.instances.html +++ b/docs/dyn/sqladmin_v1beta4.instances.html @@ -1181,6 +1181,14 @@
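Illustrative sketch for the new optional migrationType field: verifying external sync settings for a logical migration with sqladmin v1. The "LOGICAL" enum literal and the syncMode value are assumptions based on the field descriptions, not spelled out in this diff.

from googleapiclient.discovery import build

sqladmin = build("sqladmin", "v1")
body = {
    "syncMode": "ONLINE",        # assumed value
    "migrationType": "LOGICAL",  # assumed literal; a physical file-based migration would use a different value
}
result = sqladmin.projects().instances().verifyExternalSyncSettings(
    project="my-project", instance="my-replica", body=body
).execute()
print(result.get("errors", []))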

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini instance configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether Gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -1239,6 +1247,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -1287,6 +1299,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -1312,7 +1325,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -1580,6 +1593,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini instance configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether Gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -1638,6 +1659,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -1686,6 +1711,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -1711,7 +1737,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -1945,6 +1971,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini instance configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether Gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -2003,6 +2037,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -2051,6 +2089,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -2076,7 +2115,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -2248,6 +2287,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini instance configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether Gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -2306,6 +2353,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -2354,6 +2405,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -2379,7 +2431,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. @@ -3868,6 +3920,14 @@

Method Details

"name": "A String", # The name of the failover replica. If specified at instance creation, a failover replica is created for the instance. The name doesn't include the project ID. }, "gceZone": "A String", # The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance. + "geminiConfig": { # Gemini configuration. # Gemini instance configuration. + "activeQueryEnabled": True or False, # Output only. Whether active query is enabled. + "entitled": True or False, # Output only. Whether Gemini is enabled. + "flagRecommenderEnabled": True or False, # Output only. Whether flag recommender is enabled. + "googleVacuumMgmtEnabled": True or False, # Output only. Whether vacuum management is enabled. + "indexAdvisorEnabled": True or False, # Output only. Whether index advisor is enabled. + "oomSessionCancelEnabled": True or False, # Output only. Whether oom session cancel is enabled. + }, "instanceType": "A String", # The instance type. "ipAddresses": [ # The assigned IP addresses for the instance. { # Database instance IP mapping @@ -3926,6 +3986,10 @@

Method Details

"replicaNames": [ # The replicas of the instance. "A String", ], + "replicationCluster": { # Primary-DR replica pair # The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure. + "drReplica": True or False, # Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary. + "failoverDrReplicaName": "A String", # Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica. + }, "rootPassword": "A String", # Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances. "satisfiesPzs": True or False, # The status indicating if instance satisfiesPzs. Reserved for future use. "scheduledMaintenance": { # Any scheduled maintenance for this instance. # The start time of any upcoming scheduled maintenance for this instance. @@ -3974,6 +4038,7 @@

Method Details

"replicationLogArchivingEnabled": True or False, # Reserved for future use. "startTime": "A String", # Start time for the daily backup configuration in UTC timezone in the 24 hour format - `HH:MM`. "transactionLogRetentionDays": 42, # The number of days of transaction logs we retain for point in time restore, from 1-7. + "transactionalLogStorageState": "A String", # Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery. }, "collation": "A String", # The name of server Instance collation. "connectorEnforcement": "A String", # Specifies if connections must use Cloud SQL connectors. Option values include the following: `NOT_REQUIRED` (Cloud SQL instances can be connected without Cloud SQL Connectors) and `REQUIRED` (Only allow connections that use Cloud SQL Connectors) Note that using REQUIRED disables all existing authorized networks. If this field is not specified when creating a new instance, NOT_REQUIRED is used. If this field is not specified when patching or updating an existing instance, it is left unchanged in the instance. @@ -3999,7 +4064,7 @@

Method Details

}, ], "edition": "A String", # Optional. The edition of the instance. - "enableGoogleMlIntegration": True or False, # Optional. Configuration to enable Cloud SQL Vertex AI Integration + "enableGoogleMlIntegration": True or False, # Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances. "insightsConfig": { # Insights configuration. This specifies when Cloud SQL Insights feature is enabled and optional configuration. # Insights configuration, for now relevant only for Postgres. "queryInsightsEnabled": True or False, # Whether Query Insights feature is enabled. "queryPlansPerMinute": 42, # Number of query execution plans captured by Insights per minute for all queries combined. Default is 5. diff --git a/docs/dyn/sqladmin_v1beta4.projects.instances.html b/docs/dyn/sqladmin_v1beta4.projects.instances.html index aac7d439e7..582e668801 100644 --- a/docs/dyn/sqladmin_v1beta4.projects.instances.html +++ b/docs/dyn/sqladmin_v1beta4.projects.instances.html @@ -541,6 +541,7 @@

Method Details

The object takes the form of: { + "migrationType": "A String", # Optional. MigrationType determines whether the migration is a physical file-based migration or a logical migration. "mysqlSyncConfig": { # MySQL-specific external server sync settings. # MySQL-specific settings for start external sync. "initialSyncFlags": [ # Flags to use for the initial dump. { # Initial sync flags for certain Cloud SQL APIs. Currently used for the MySQL external server initial dump. @@ -678,6 +679,7 @@

Method Details

The object takes the form of: { + "migrationType": "A String", # Optional. MigrationType determines whether the migration is a physical file-based migration or a logical migration. "mysqlSyncConfig": { # MySQL-specific external server sync settings. # Optional. MySQL-specific settings for start external sync. "initialSyncFlags": [ # Flags to use for the initial dump. { # Initial sync flags for certain Cloud SQL APIs. Currently used for the MySQL external server initial dump. diff --git a/docs/dyn/storage_v1.objects.html b/docs/dyn/storage_v1.objects.html index 89637c6ae7..d837870d9f 100644 --- a/docs/dyn/storage_v1.objects.html +++ b/docs/dyn/storage_v1.objects.html @@ -1292,8 +1292,8 @@

Method Details

Args: bucket: string, Name of the bucket in which the object resides. (required) object: string, Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts. (required) - copySourceAcl: boolean, If true, copies the source object's ACL; otherwise, uses the bucket's default object ACL. The default is false. generation: string, Selects a specific revision of this object. (required) + copySourceAcl: boolean, If true, copies the source object's ACL; otherwise, uses the bucket's default object ACL. The default is false. ifGenerationMatch: string, Makes the operation conditional on whether the object's one live generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object. ifGenerationNotMatch: string, Makes the operation conditional on whether none of the object's live generations match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object. ifMetagenerationMatch: string, Makes the operation conditional on whether the object's one live metageneration matches the given value. diff --git a/docs/dyn/tasks_v1.tasklists.html b/docs/dyn/tasks_v1.tasklists.html index b8833fa588..7533afcd96 100644 --- a/docs/dyn/tasks_v1.tasklists.html +++ b/docs/dyn/tasks_v1.tasklists.html @@ -85,10 +85,10 @@
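Illustrative sketch for the reordered restore parameters above: restoring a soft-deleted object generation while copying the source object's ACL. Bucket, object, and generation values are placeholders.

from googleapiclient.discovery import build

storage = build("storage", "v1")
restored = storage.objects().restore(
    bucket="my-bucket",
    object="path/to/object.txt",
    generation="1700000000000000",  # placeholder generation of the soft-deleted object
    copySourceAcl=True,             # otherwise the bucket's default object ACL is used
).execute()
print(restored["name"], restored["generation"])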

Instance Methods

Returns the authenticated user's specified task list.

insert(body=None, x__xgafv=None)

- Creates a new task list and adds it to the authenticated user's task lists.
+ Creates a new task list and adds it to the authenticated user's task lists. A user can have up to 2000 lists at a time.

list(maxResults=None, pageToken=None, x__xgafv=None)

- Returns all the authenticated user's task lists.
+ Returns all the authenticated user's task lists. A user can have up to 2000 lists at a time.

list_next()

Retrieves the next page of results.

@@ -136,14 +136,14 @@

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). }
insert(body=None, x__xgafv=None) -
Creates a new task list and adds it to the authenticated user's task lists.
+  
Creates a new task list and adds it to the authenticated user's task lists. A user can have up to 2000 lists at a time.
 
 Args:
   body: object, The request body.
@@ -154,7 +154,7 @@ 

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). } @@ -171,14 +171,14 @@

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). }
list(maxResults=None, pageToken=None, x__xgafv=None) -
Returns all the authenticated user's task lists.
+  
Returns all the authenticated user's task lists. A user can have up to 2000 lists at a time.
 
 Args:
   maxResults: integer, Maximum number of task lists returned on one page. Optional. The default is 20 (max allowed: 100).
@@ -199,7 +199,7 @@ 

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). }, ], @@ -236,7 +236,7 @@

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). } @@ -253,7 +253,7 @@

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). }
@@ -272,7 +272,7 @@

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). } @@ -289,7 +289,7 @@

Method Details

"id": "A String", # Task list identifier. "kind": "A String", # Type of the resource. This is always "tasks#taskList". "selfLink": "A String", # URL pointing to this task list. Used to retrieve, update, or delete this task list. - "title": "A String", # Title of the task list. + "title": "A String", # Title of the task list. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task list (as a RFC 3339 timestamp). }
diff --git a/docs/dyn/tasks_v1.tasks.html b/docs/dyn/tasks_v1.tasks.html index e7c1b92474..5b5ae6e057 100644 --- a/docs/dyn/tasks_v1.tasks.html +++ b/docs/dyn/tasks_v1.tasks.html @@ -88,16 +88,16 @@

Instance Methods

Returns the specified task.

insert(tasklist, body=None, parent=None, previous=None, x__xgafv=None)

-

Creates a new task on the specified task list.

+

Creates a new task on the specified task list. A user can have up to 20,000 uncompleted tasks per list and up to 100,000 tasks in total at a time.

list(tasklist, completedMax=None, completedMin=None, dueMax=None, dueMin=None, maxResults=None, pageToken=None, showCompleted=None, showDeleted=None, showHidden=None, updatedMin=None, x__xgafv=None)

-

Returns all tasks in the specified task list.

+

Returns all tasks in the specified task list. A user can have up to 20,000 uncompleted tasks per list and up to 100,000 tasks in total at a time.

list_next()

Retrieves the next page of results.

move(tasklist, task, parent=None, previous=None, x__xgafv=None)

-

Moves the specified task to another position in the task list. This can include putting it as a child task under a new parent and/or move it to a different position among its sibling tasks.

+

Moves the specified task to another position in the task list. This can include putting it as a child task under a new parent and/or moving it to a different position among its sibling tasks. A user can have up to 2,000 subtasks per task.

patch(tasklist, task, body=None, x__xgafv=None)

Updates the specified task. This method supports patch semantics.

@@ -167,12 +167,12 @@

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. } @@ -180,7 +180,7 @@

Method Details

insert(tasklist, body=None, parent=None, previous=None, x__xgafv=None) -
Creates a new task on the specified task list.
+  
Creates a new task on the specified task list. A user can have up to 20,000 uncompleted tasks per list and up to 100,000 tasks in total at a time.
 
 Args:
   tasklist: string, Task list identifier. (required)
@@ -202,12 +202,12 @@ 

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. } @@ -237,12 +237,12 @@

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. }
@@ -250,7 +250,7 @@

Method Details

list(tasklist, completedMax=None, completedMin=None, dueMax=None, dueMin=None, maxResults=None, pageToken=None, showCompleted=None, showDeleted=None, showHidden=None, updatedMin=None, x__xgafv=None) -
Returns all tasks in the specified task list.
+  
Returns all tasks in the specified task list. A user can have up to 20,000 uncompleted tasks per list and up to 100,000 tasks in total at a time.
 
 Args:
   tasklist: string, Task list identifier. (required)
@@ -290,12 +290,12 @@ 

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. }, @@ -321,7 +321,7 @@

Method Details

move(tasklist, task, parent=None, previous=None, x__xgafv=None) -
Moves the specified task to another position in the task list. This can include putting it as a child task under a new parent and/or move it to a different position among its sibling tasks.
+  
Moves the specified task to another position in the task list. This can include putting it as a child task under a new parent and/or moving it to a different position among its sibling tasks. A user can have up to 2,000 subtasks per task.
 
 Args:
   tasklist: string, Task list identifier. (required)
@@ -351,12 +351,12 @@ 

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. }
@@ -387,12 +387,12 @@

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. } @@ -420,12 +420,12 @@

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. }
@@ -456,12 +456,12 @@

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. } @@ -489,12 +489,12 @@

Method Details

"type": "A String", # Type of the link, e.g. "email". }, ], - "notes": "A String", # Notes describing the task. Optional. + "notes": "A String", # Notes describing the task. Optional. Maximum length allowed: 8192 characters. "parent": "A String", # Parent task identifier. This field is omitted if it is a top-level task. This field is read-only. Use the "move" method to move the task under a different parent or to the top level. "position": "A String", # String indicating the position of the task among its sibling tasks under the same parent task or at the top level. If this string is greater than another task's corresponding position string according to lexicographical ordering, the task is positioned after the other task under the same parent task (or at the top level). This field is read-only. Use the "move" method to move the task to another position. "selfLink": "A String", # URL pointing to this task. Used to retrieve, update, or delete this task. "status": "A String", # Status of the task. This is either "needsAction" or "completed". - "title": "A String", # Title of the task. + "title": "A String", # Title of the task. Maximum length allowed: 1024 characters. "updated": "A String", # Last modification time of the task (as a RFC 3339 timestamp). "webViewLink": "A String", # An absolute link to the task in the Google Tasks Web UI. This field is read-only. }
diff --git a/docs/dyn/workloadmanager_v1.projects.locations.evaluations.html b/docs/dyn/workloadmanager_v1.projects.locations.evaluations.html index 83bfcbdfea..6d10c0c4e3 100644 --- a/docs/dyn/workloadmanager_v1.projects.locations.evaluations.html +++ b/docs/dyn/workloadmanager_v1.projects.locations.evaluations.html @@ -86,7 +86,7 @@

Instance Methods

create(parent, body=None, evaluationId=None, requestId=None, x__xgafv=None)

Creates a new Evaluation in a given project and location.

- delete(name, requestId=None, x__xgafv=None)

+ delete(name, force=None, requestId=None, x__xgafv=None)

Deletes a single Evaluation.

get(name, x__xgafv=None)

@@ -188,11 +188,12 @@

Method Details

- delete(name, requestId=None, x__xgafv=None) + delete(name, force=None, requestId=None, x__xgafv=None)
Deletes a single Evaluation.
 
 Args:
   name: string, Required. Name of the resource (required)
+  force: boolean, Optional. Follows the cascading-delete best practice from https://aip.dev/135#cascading-delete
   requestId: string, Optional. An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported (00000000-0000-0000-0000-000000000000).
   x__xgafv: string, V1 error format.
     Allowed values
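A hedged sketch of the new force flag on evaluations.delete(), assuming `creds` carries the cloud-platform scope and that the method returns a long-running operation that can be polled:

    from googleapiclient.discovery import build

    wlm = build("workloadmanager", "v1", credentials=creds)

    name = "projects/my-project/locations/us-central1/evaluations/my-evaluation"
    operation = wlm.projects().locations().evaluations().delete(
        name=name,
        force=True,  # cascade the delete to child resources, per AIP-135
    ).execute()
    print(operation.get("name"))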
diff --git a/docs/dyn/workloadmanager_v1.projects.locations.html b/docs/dyn/workloadmanager_v1.projects.locations.html
index 5a6af86e1f..ec408ae532 100644
--- a/docs/dyn/workloadmanager_v1.projects.locations.html
+++ b/docs/dyn/workloadmanager_v1.projects.locations.html
@@ -94,11 +94,6 @@ 

Instance Methods

Returns the rules Resource.

-

- workloadProfiles() -

-

Returns the workloadProfiles Resource.

-

close()

Close httplib2 connections.

diff --git a/docs/dyn/workloadmanager_v1.projects.locations.insights.html b/docs/dyn/workloadmanager_v1.projects.locations.insights.html index c623750f6c..22ac749969 100644 --- a/docs/dyn/workloadmanager_v1.projects.locations.insights.html +++ b/docs/dyn/workloadmanager_v1.projects.locations.insights.html @@ -96,20 +96,25 @@

Method Details

The object takes the form of: { # Request for sending the data insights. + "agentVersion": "A String", # Optional. The agent version collected this data point. "insight": { # A presentation of host resource usage where the workload runs. # Required. The metrics data details. "instanceId": "A String", # Required. The instance id where the insight is generated from "sapDiscovery": { # The schema of SAP system discovery data. # The insights data for SAP system discovery. This is a copy of SAP System proto and should get updated whenever that one changes. "applicationLayer": { # Message describing the system component. # Optional. An SAP system may run without an application layer. "applicationProperties": { # A set of properties describing an SAP Application layer. # Optional. The component is a SAP application. "abap": True or False, # Optional. Indicates whether this is a Java or ABAP Netweaver instance. true means it is ABAP, false means it is Java. + "appInstanceNumber": "A String", # Optional. Instance number of the SAP application instance. "applicationType": "A String", # Required. Type of the application. Netweaver, etc. + "ascsInstanceNumber": "A String", # Optional. Instance number of the ASCS instance. "ascsUri": "A String", # Optional. Resource URI of the recognized ASCS host of the application. "kernelVersion": "A String", # Optional. Kernel version for Netweaver running in the system. "nfsUri": "A String", # Optional. Resource URI of the recognized shared NFS of the application. May be empty if the application server has only a single node. }, "databaseProperties": { # A set of properties describing an SAP Database layer. # Optional. The component is a SAP database. + "databaseSid": "A String", # Optional. SID of the system database. "databaseType": "A String", # Required. Type of the database. HANA, DB2, etc. "databaseVersion": "A String", # Optional. The version of the database software running in the system. + "instanceNumber": "A String", # Optional. Instance number of the SAP instance. "primaryInstanceUri": "A String", # Required. URI of the recognized primary instance of the database. "sharedNfsUri": "A String", # Optional. URI of the recognized shared NFS of the database. May be empty if the database has only a single node. }, @@ -123,6 +128,7 @@

Method Details

"clusterInstances": [ # Optional. A list of instance URIs that are part of a cluster with this one. "A String", ], + "instanceNumber": "A String", # Optional. The VM's instance number. "virtualHostname": "A String", # Optional. A virtual hostname of the instance if it has one. }, "relatedResources": [ # Optional. A list of resource URIs related to this resource. @@ -140,14 +146,18 @@

Method Details

"databaseLayer": { # Message describing the system component. # Required. An SAP System must have a database. "applicationProperties": { # A set of properties describing an SAP Application layer. # Optional. The component is a SAP application. "abap": True or False, # Optional. Indicates whether this is a Java or ABAP Netweaver instance. true means it is ABAP, false means it is Java. + "appInstanceNumber": "A String", # Optional. Instance number of the SAP application instance. "applicationType": "A String", # Required. Type of the application. Netweaver, etc. + "ascsInstanceNumber": "A String", # Optional. Instance number of the ASCS instance. "ascsUri": "A String", # Optional. Resource URI of the recognized ASCS host of the application. "kernelVersion": "A String", # Optional. Kernel version for Netweaver running in the system. "nfsUri": "A String", # Optional. Resource URI of the recognized shared NFS of the application. May be empty if the application server has only a single node. }, "databaseProperties": { # A set of properties describing an SAP Database layer. # Optional. The component is a SAP database. + "databaseSid": "A String", # Optional. SID of the system database. "databaseType": "A String", # Required. Type of the database. HANA, DB2, etc. "databaseVersion": "A String", # Optional. The version of the database software running in the system. + "instanceNumber": "A String", # Optional. Instance number of the SAP instance. "primaryInstanceUri": "A String", # Required. URI of the recognized primary instance of the database. "sharedNfsUri": "A String", # Optional. URI of the recognized shared NFS of the database. May be empty if the database has only a single node. }, @@ -161,6 +171,7 @@

Method Details

"clusterInstances": [ # Optional. A list of instance URIs that are part of a cluster with this one. "A String", ], + "instanceNumber": "A String", # Optional. The VM's instance number. "virtualHostname": "A String", # Optional. A virtual hostname of the instance if it has one. }, "relatedResources": [ # Optional. A list of resource URIs related to this resource. diff --git a/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json b/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json index 5f3ba45dd9..586300ebcb 100644 --- a/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json +++ b/googleapiclient/discovery_cache/documents/acceleratedmobilepageurl.v1.json @@ -115,7 +115,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://acceleratedmobilepageurl.googleapis.com/", "schemas": { "AmpUrl": { diff --git a/googleapiclient/discovery_cache/documents/accessapproval.v1.json b/googleapiclient/discovery_cache/documents/accessapproval.v1.json index 4297900e88..e9ab4f06ce 100644 --- a/googleapiclient/discovery_cache/documents/accessapproval.v1.json +++ b/googleapiclient/discovery_cache/documents/accessapproval.v1.json @@ -913,7 +913,7 @@ } } }, -"revision": "20240315", +"revision": "20240322", "rootUrl": "https://accessapproval.googleapis.com/", "schemas": { "AccessApprovalServiceAccount": { diff --git a/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json b/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json index a539861eb1..5c650adc80 100644 --- a/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json +++ b/googleapiclient/discovery_cache/documents/accesscontextmanager.v1.json @@ -1290,7 +1290,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://accesscontextmanager.googleapis.com/", "schemas": { "AccessContextManagerOperationMetadata": { diff --git a/googleapiclient/discovery_cache/documents/acmedns.v1.json b/googleapiclient/discovery_cache/documents/acmedns.v1.json index 13a97cc859..9603ec5a40 100644 --- a/googleapiclient/discovery_cache/documents/acmedns.v1.json +++ b/googleapiclient/discovery_cache/documents/acmedns.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://acmedns.googleapis.com/", "schemas": { "AcmeChallengeSet": { diff --git a/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json b/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json index 5a638f2d7f..60c7c2db68 100644 --- a/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json +++ b/googleapiclient/discovery_cache/documents/adexchangebuyer2.v2beta1.json @@ -3115,7 +3115,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://adexchangebuyer.googleapis.com/", "schemas": { "AbsoluteDateRange": { diff --git a/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json b/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json index f6c4d763b4..579e211011 100644 --- a/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json +++ b/googleapiclient/discovery_cache/documents/admin.datatransfer_v1.json @@ -272,7 +272,7 @@ } } }, -"revision": "20240304", +"revision": "20240321", "rootUrl": "https://admin.googleapis.com/", "schemas": { "Application": { diff --git a/googleapiclient/discovery_cache/documents/admin.directory_v1.json b/googleapiclient/discovery_cache/documents/admin.directory_v1.json index 
a92141a80a..c902c278d1 100644 --- a/googleapiclient/discovery_cache/documents/admin.directory_v1.json +++ b/googleapiclient/discovery_cache/documents/admin.directory_v1.json @@ -4671,7 +4671,7 @@ } } }, -"revision": "20240304", +"revision": "20240321", "rootUrl": "https://admin.googleapis.com/", "schemas": { "Alias": { diff --git a/googleapiclient/discovery_cache/documents/admin.reports_v1.json b/googleapiclient/discovery_cache/documents/admin.reports_v1.json index a744105c87..d28b653688 100644 --- a/googleapiclient/discovery_cache/documents/admin.reports_v1.json +++ b/googleapiclient/discovery_cache/documents/admin.reports_v1.json @@ -626,7 +626,7 @@ } } }, -"revision": "20240304", +"revision": "20240321", "rootUrl": "https://admin.googleapis.com/", "schemas": { "Activities": { diff --git a/googleapiclient/discovery_cache/documents/admob.v1.json b/googleapiclient/discovery_cache/documents/admob.v1.json index 01ecd41cb0..2c55ea89ee 100644 --- a/googleapiclient/discovery_cache/documents/admob.v1.json +++ b/googleapiclient/discovery_cache/documents/admob.v1.json @@ -321,7 +321,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://admob.googleapis.com/", "schemas": { "AdUnit": { diff --git a/googleapiclient/discovery_cache/documents/admob.v1beta.json b/googleapiclient/discovery_cache/documents/admob.v1beta.json index 02ef958fe6..84c47152ee 100644 --- a/googleapiclient/discovery_cache/documents/admob.v1beta.json +++ b/googleapiclient/discovery_cache/documents/admob.v1beta.json @@ -758,7 +758,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://admob.googleapis.com/", "schemas": { "AdSource": { diff --git a/googleapiclient/discovery_cache/documents/adsense.v2.json b/googleapiclient/discovery_cache/documents/adsense.v2.json index fcc0282dc4..f1e47610d4 100644 --- a/googleapiclient/discovery_cache/documents/adsense.v2.json +++ b/googleapiclient/discovery_cache/documents/adsense.v2.json @@ -324,7 +324,7 @@ "adunits": { "methods": { "create": { -"description": "Creates an ad unit. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product. Note that ad units can only be created for ad clients with an \"AFC\" product code. For more info see the [AdClient resource](/adsense/management/reference/rest/v2/accounts.adclients). For now, this method can only be used to create `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566", +"description": "Creates an ad unit. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method. Note that ad units can only be created for ad clients with an \"AFC\" product code. For more info see the [AdClient resource](/adsense/management/reference/rest/v2/accounts.adclients). For now, this method can only be used to create `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566", "flatPath": "v2/accounts/{accountsId}/adclients/{adclientsId}/adunits", "httpMethod": "POST", "id": "adsense.accounts.adclients.adunits.create", @@ -478,7 +478,7 @@ ] }, "patch": { -"description": "Updates an ad unit. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product. For now, this method can only be used to update `DISPLAY` ad units. 
See: https://support.google.com/adsense/answer/9183566", +"description": "Updates an ad unit. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method. For now, this method can only be used to update `DISPLAY` ad units. See: https://support.google.com/adsense/answer/9183566", "flatPath": "v2/accounts/{accountsId}/adclients/{adclientsId}/adunits/{adunitsId}", "httpMethod": "PATCH", "id": "adsense.accounts.adclients.adunits.patch", @@ -516,7 +516,7 @@ "customchannels": { "methods": { "create": { -"description": "Creates a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.", +"description": "Creates a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.", "flatPath": "v2/accounts/{accountsId}/adclients/{adclientsId}/customchannels", "httpMethod": "POST", "id": "adsense.accounts.adclients.customchannels.create", @@ -544,7 +544,7 @@ ] }, "delete": { -"description": "Deletes a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.", +"description": "Deletes a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.", "flatPath": "v2/accounts/{accountsId}/adclients/{adclientsId}/customchannels/{customchannelsId}", "httpMethod": "DELETE", "id": "adsense.accounts.adclients.customchannels.delete", @@ -669,7 +669,7 @@ ] }, "patch": { -"description": "Updates a custom channel. This method can only be used by projects enabled for the [AdSense for Platforms](https://developers.google.com/adsense/platforms/) product.", +"description": "Updates a custom channel. This method can be called only by a restricted set of projects, which are usually owned by [AdSense for Platforms](https://developers.google.com/adsense/platforms/) publishers. Contact your account manager if you need to use this method.", "flatPath": "v2/accounts/{accountsId}/adclients/{adclientsId}/customchannels/{customchannelsId}", "httpMethod": "PATCH", "id": "adsense.accounts.adclients.customchannels.patch", @@ -838,6 +838,73 @@ } } }, +"policyIssues": { +"methods": { +"get": { +"description": "Gets information about the selected policy issue.", +"flatPath": "v2/accounts/{accountsId}/policyIssues/{policyIssuesId}", +"httpMethod": "GET", +"id": "adsense.accounts.policyIssues.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Name of the policy issue. 
Format: accounts/{account}/policyIssues/{policy_issue}", +"location": "path", +"pattern": "^accounts/[^/]+/policyIssues/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+name}", +"response": { +"$ref": "PolicyIssue" +}, +"scopes": [ +"https://www.googleapis.com/auth/adsense", +"https://www.googleapis.com/auth/adsense.readonly" +] +}, +"list": { +"description": "Lists all the policy issues for the specified account.", +"flatPath": "v2/accounts/{accountsId}/policyIssues", +"httpMethod": "GET", +"id": "adsense.accounts.policyIssues.list", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "The maximum number of policy issues to include in the response, used for paging. If unspecified, at most 10000 policy issues will be returned. The maximum value is 10000; values above 10000 will be coerced to 10000.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "A page token, received from a previous `ListPolicyIssues` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListPolicyIssues` must match the call that provided the page token.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The account for which policy issues are being retrieved. Format: accounts/{account}", +"location": "path", +"pattern": "^accounts/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v2/{+parent}/policyIssues", +"response": { +"$ref": "ListPolicyIssuesResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/adsense", +"https://www.googleapis.com/auth/adsense.readonly" +] +} +} +}, "reports": { "methods": { "generate": { @@ -1845,7 +1912,7 @@ } } }, -"revision": "20240318", +"revision": "20240324", "rootUrl": "https://adsense.googleapis.com/", "schemas": { "Account": { @@ -2392,6 +2459,24 @@ true }, "type": "object" }, +"ListPolicyIssuesResponse": { +"description": "Response definition for the policy issues list rpc. Policy issues are reported only if the publisher has at least one AFC ad client in READY or GETTING_READY state. If the publisher has no such AFC ad client, the response will be an empty list.", +"id": "ListPolicyIssuesResponse", +"properties": { +"nextPageToken": { +"description": "Continuation token used to page through policy issues. To retrieve the next page of the results, set the next request's \"page_token\" value to this.", +"type": "string" +}, +"policyIssues": { +"description": "The policy issues returned in the list response.", +"items": { +"$ref": "PolicyIssue" +}, +"type": "array" +} +}, +"type": "object" +}, "ListSavedReportsResponse": { "description": "Response definition for the saved reports list rpc.", "id": "ListSavedReportsResponse", @@ -2468,6 +2553,111 @@ true }, "type": "object" }, +"PolicyIssue": { +"description": "Representation of a policy issue for a single entity (site, site-section, or page). All issues for a single entity are represented by a single PolicyIssue resource, though that PolicyIssue can have multiple causes (or \"topics\") that can change over time. Policy issues are removed if there are no issues detected recently or if there's a recent successful appeal for the entity.", +"id": "PolicyIssue", +"properties": { +"action": { +"description": "Required. 
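A sketch of listing the new policyIssues collection with the generated Python client, assuming `creds` carries the adsense.readonly scope and `pub-0000000000000000` stands in for a real publisher account:

    from googleapiclient.discovery import build

    adsense = build("adsense", "v2", credentials=creds)

    request = adsense.accounts().policyIssues().list(
        parent="accounts/pub-0000000000000000", pageSize=100
    )
    while request is not None:
        response = request.execute()
        for issue in response.get("policyIssues", []):
            print(issue["name"], issue["action"], issue.get("site"))
        request = adsense.accounts().policyIssues().list_next(request, response)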
The most severe action taken on the entity over the past seven days.", +"enum": [ +"ENFORCEMENT_ACTION_UNSPECIFIED", +"WARNED", +"AD_SERVING_RESTRICTED", +"AD_SERVING_DISABLED", +"AD_SERVED_WITH_CLICK_CONFIRMATION", +"AD_PERSONALIZATION_RESTRICTED" +], +"enumDescriptions": [ +"The action is unspecified.", +"No ad serving enforcement is currently present, but enforcement will start on the `warning_escalation_date` if the issue is not resolved.", +"Ad serving demand has been restricted on the entity.", +"Ad serving has been disabled on the entity.", +"Ads are being served for the entity but Confirmed Click is being applied to the ads. See https://support.google.com/adsense/answer/10025624.", +"Ad personalization is restricted because the ad requests coming from the EEA and UK do not have a TCF string or the Consent Management Platform (CMP) indicated by the TCF string is not Google certified. As a result, basic/limited ads will be served. See https://support.google.com/adsense/answer/13554116" +], +"type": "string" +}, +"adClients": { +"description": "Optional. List of ad clients associated with the policy issue (either as the primary ad client or an associated host/secondary ad client). In the latter case, this will be an ad client that is not owned by the current account.", +"items": { +"type": "string" +}, +"type": "array" +}, +"adRequestCount": { +"description": "Required. Total number of ad requests affected by the policy violations over the past seven days.", +"format": "int64", +"type": "string" +}, +"entityType": { +"description": "Required. Type of the entity indicating if the entity is a site, site-section, or page.", +"enum": [ +"ENTITY_TYPE_UNSPECIFIED", +"SITE", +"SITE_SECTION", +"PAGE" +], +"enumDescriptions": [ +"The entity type is unspecified.", +"The enforced entity is an entire website.", +"The enforced entity is a particular section of a website. All the pages with this prefix are enforced.", +"The enforced entity is a single web page." +], +"type": "string" +}, +"firstDetectedDate": { +"$ref": "Date", +"description": "Required. The date (in the America/Los_Angeles timezone) when policy violations were first detected on the entity." +}, +"lastDetectedDate": { +"$ref": "Date", +"description": "Required. The date (in the America/Los_Angeles timezone) when policy violations were last detected on the entity." +}, +"name": { +"description": "Required. Resource name of the entity with policy issues. Format: accounts/{account}/policyIssues/{policy_issue}", +"type": "string" +}, +"policyTopics": { +"description": "Required. Unordered list. The policy topics that this entity was found to violate over the past seven days.", +"items": { +"$ref": "PolicyTopic" +}, +"type": "array" +}, +"site": { +"description": "Required. Hostname/domain of the entity (for example \"foo.com\" or \"www.foo.com\"). This _should_ be a bare domain/host name without any protocol. This will be present for all policy issues.", +"type": "string" +}, +"siteSection": { +"description": "Optional. Prefix of the site-section having policy issues (For example \"foo.com/bar-section\"). This will be present if the `entity_type` is `SITE_SECTION` and will be absent for other entity types.", +"type": "string" +}, +"uri": { +"description": "Optional. URI of the page having policy violations (for example \"foo.com/bar\" or \"www.foo.com/bar\"). 
This will be present if the `entity_type` is `PAGE` and will be absent for other entity types.", +"type": "string" +}, +"warningEscalationDate": { +"$ref": "Date", +"description": "Optional. The date (in the America/Los_Angeles timezone) when the entity will have ad serving demand restricted or ad serving disabled. This is present only for issues with a `WARNED` enforcement action. See https://support.google.com/adsense/answer/11066888." +} +}, +"type": "object" +}, +"PolicyTopic": { +"description": "Information about a particular policy topic. A policy topic represents a single class of policy issue that can impact ad serving for your site. For example, sexual content or having ads that obscure your content. A single policy issue can have multiple policy topics for a single entity.", +"id": "PolicyTopic", +"properties": { +"mustFix": { +"description": "Required. Indicates if this is a policy violation or not. When the value is true, issues that are instances of this topic must be addressed to remain in compliance with the partner's agreements with Google. A false value indicates that it's not mandatory to fix the issues but advertising demand might be restricted.", +"type": "boolean" +}, +"topic": { +"description": "Required. The policy topic. For example, \"sexual-content\" or \"ads-obscuring-content\".\"", +"type": "string" +} +}, +"type": "object" +}, "ReportResult": { "description": "Result of a generated report.", "id": "ReportResult", diff --git a/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json b/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json index 17d438affb..c1cc200d7d 100644 --- a/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json +++ b/googleapiclient/discovery_cache/documents/advisorynotifications.v1.json @@ -357,7 +357,7 @@ } } }, -"revision": "20240313", +"revision": "20240317", "rootUrl": "https://advisorynotifications.googleapis.com/", "schemas": { "GoogleCloudAdvisorynotificationsV1Attachment": { diff --git a/googleapiclient/discovery_cache/documents/aiplatform.v1.json b/googleapiclient/discovery_cache/documents/aiplatform.v1.json index 91e3eece34..51dfb4ce9d 100644 --- a/googleapiclient/discovery_cache/documents/aiplatform.v1.json +++ b/googleapiclient/discovery_cache/documents/aiplatform.v1.json @@ -3242,7 +3242,7 @@ ], "parameters": { "filter": { -"description": "Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports = and !=. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports = and, != * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:* or labels:key - key existence * A key including a space must be quoted. `labels.\"a key\"`. * `base_model_name` only supports = Some examples: * `endpoint=1` * `displayName=\"myDisplayName\"` * `labels.myKey=\"myValue\"` * `baseModelName=\"text-bison\"`", +"description": "Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports `=` and `!=`. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports `=` and `!=`. * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:*` or `labels:key` - key existence * A key including a space must be quoted. 
`labels.\"a key\"`. * `base_model_name` only supports `=`. Some examples: * `endpoint=1` * `displayName=\"myDisplayName\"` * `labels.myKey=\"myValue\"` * `baseModelName=\"text-bison\"`", "location": "query", "type": "string" }, @@ -11797,6 +11797,187 @@ "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] +}, +"reboot": { +"description": "Reboots a PersistentResource.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/persistentResources/{persistentResourcesId}:reboot", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.persistentResources.reboot", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The name of the PersistentResource resource. Format: `projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/persistentResources/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}:reboot", +"request": { +"$ref": "GoogleCloudAiplatformV1RebootPersistentResourceRequest" +}, +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +}, +"resources": { +"operations": { +"methods": { +"cancel": { +"description": "Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to `Code.CANCELLED`.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/persistentResources/{persistentResourcesId}/operations/{operationsId}:cancel", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.persistentResources.operations.cancel", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The name of the operation resource to be cancelled.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/persistentResources/[^/]+/operations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}:cancel", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"delete": { +"description": "Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. 
If the server doesn't support this method, it returns `google.rpc.Code.UNIMPLEMENTED`.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/persistentResources/{persistentResourcesId}/operations/{operationsId}", +"httpMethod": "DELETE", +"id": "aiplatform.projects.locations.persistentResources.operations.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The name of the operation resource to be deleted.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/persistentResources/[^/]+/operations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleProtobufEmpty" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"get": { +"description": "Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/persistentResources/{persistentResourcesId}/operations/{operationsId}", +"httpMethod": "GET", +"id": "aiplatform.projects.locations.persistentResources.operations.get", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The name of the operation resource.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/persistentResources/[^/]+/operations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"list": { +"description": "Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/persistentResources/{persistentResourcesId}/operations", +"httpMethod": "GET", +"id": "aiplatform.projects.locations.persistentResources.operations.list", +"parameterOrder": [ +"name" +], +"parameters": { +"filter": { +"description": "The standard list filter.", +"location": "query", +"type": "string" +}, +"name": { +"description": "The name of the operation's parent resource.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/persistentResources/[^/]+$", +"required": true, +"type": "string" +}, +"pageSize": { +"description": "The standard list page size.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "The standard list page token.", +"location": "query", +"type": "string" +} +}, +"path": "v1/{+name}/operations", +"response": { +"$ref": "GoogleLongrunningListOperationsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"wait": { +"description": "Waits until the specified long-running operation is done or reaches at most a specified timeout, returning the latest state. If the operation is already done, the latest state is immediately returned. If the timeout specified is greater than the default HTTP/RPC timeout, the HTTP/RPC timeout is used. If the server does not support this method, it returns `google.rpc.Code.UNIMPLEMENTED`. Note that this method is on a best-effort basis. 
It may return the latest state before the specified timeout (including immediately), meaning even an immediate response is no guarantee that the operation is done.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/persistentResources/{persistentResourcesId}/operations/{operationsId}:wait", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.persistentResources.operations.wait", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "The name of the operation resource to wait on.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/persistentResources/[^/]+/operations/[^/]+$", +"required": true, +"type": "string" +}, +"timeout": { +"description": "The maximum duration to wait before timing out. If left blank, the wait will be at most the time permitted by the underlying HTTP/RPC protocol. If RPC context deadline is also specified, the shorter one will be used.", +"format": "google-duration", +"location": "query", +"type": "string" +} +}, +"path": "v1/{+name}:wait", +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} } } }, @@ -15858,7 +16039,7 @@ } } }, -"revision": "20240313", +"revision": "20240320", "rootUrl": "https://aiplatform.googleapis.com/", "schemas": { "CloudAiLargeModelsVisionEmbedVideoResponse": { @@ -19012,6 +19193,10 @@ "description": "Optional. The full name of the Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the Job should be peered. For example, `projects/12345/global/networks/myVPC`. [Format](/compute/docs/reference/rest/v1/networks/insert) is of the form `projects/{project}/global/networks/{network}`. Where {project} is a project number, as in `12345`, and {network} is a network name. To specify this field, you must have already [configured VPC Network Peering for Vertex AI](https://cloud.google.com/vertex-ai/docs/general/vpc-peering). If this field is left unspecified, the job is not peered with any network.", "type": "string" }, +"persistentResourceId": { +"description": "Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.", +"type": "string" +}, "protectedArtifactLocationId": { "description": "The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations", "type": "string" @@ -22038,6 +22223,10 @@ "description": "Response message for FeatureOnlineStoreService.FetchFeatureValues", "id": "GoogleCloudAiplatformV1FetchFeatureValuesResponse", "properties": { +"dataKey": { +"$ref": "GoogleCloudAiplatformV1FeatureViewDataKey", +"description": "The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs." +}, "keyValues": { "$ref": "GoogleCloudAiplatformV1FetchFeatureValuesResponseFeatureNameValuePairList", "description": "Feature values in KeyValue format." @@ -22343,13 +22532,6 @@ }, "type": "array" }, -"systemInstructions": { -"description": "Optional. 
The user provided system instructions for the model.", -"items": { -"$ref": "GoogleCloudAiplatformV1Content" -}, -"type": "array" -}, "tools": { "description": "Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.", "items": { @@ -27789,6 +27971,10 @@ "$ref": "GoogleCloudAiplatformV1PublisherModelCallToActionDeployGke", "description": "Optional. Deploy PublisherModel to Google Kubernetes Engine." }, +"multiDeployVertex": { +"$ref": "GoogleCloudAiplatformV1PublisherModelCallToActionDeployVertex", +"description": "Optional. Multiple setups to deploy the PublisherModel to Vertex Endpoint." +}, "openEvaluationPipeline": { "$ref": "GoogleCloudAiplatformV1PublisherModelCallToActionRegionalResourceReferences", "description": "Optional. Open evaluation pipeline of the PublisherModel." @@ -27889,6 +28075,20 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1PublisherModelCallToActionDeployVertex": { +"description": "Multiple setups to deploy the PublisherModel.", +"id": "GoogleCloudAiplatformV1PublisherModelCallToActionDeployVertex", +"properties": { +"multiDeployVertex": { +"description": "Optional. One click deployment configurations.", +"items": { +"$ref": "GoogleCloudAiplatformV1PublisherModelCallToActionDeploy" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1PublisherModelCallToActionOpenFineTuningPipelines": { "description": "Open fine tuning pipelines.", "id": "GoogleCloudAiplatformV1PublisherModelCallToActionOpenFineTuningPipelines", @@ -28480,6 +28680,12 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1RebootPersistentResourceRequest": { +"description": "Request message for PersistentResourceService.RebootPersistentResource.", +"id": "GoogleCloudAiplatformV1RebootPersistentResourceRequest", +"properties": {}, +"type": "object" +}, "GoogleCloudAiplatformV1RemoveContextChildrenRequest": { "description": "Request message for MetadataService.DeleteContextChildrenRequest.", "id": "GoogleCloudAiplatformV1RemoveContextChildrenRequest", @@ -29082,6 +29288,10 @@ "description": "Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed.", "id": "GoogleCloudAiplatformV1Schema", "properties": { +"default": { +"description": "Optional. Default value of the data.", +"type": "any" +}, "description": { "description": "Optional. The description of the data.", "type": "string" @@ -29098,22 +29308,66 @@ "type": "any" }, "format": { -"description": "Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64", +"description": "Optional. The format of the data. Supported formats: for NUMBER type: \"float\", \"double\" for INTEGER type: \"int32\", \"int64\" for STRING type: \"email\", \"byte\", etc", "type": "string" }, "items": { "$ref": "GoogleCloudAiplatformV1Schema", -"description": "Optional. Schema of the elements of Type.ARRAY." +"description": "Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY." +}, +"maxItems": { +"description": "Optional. Maximum number of the elements for Type.ARRAY.", +"format": "int64", +"type": "string" +}, +"maxLength": { +"description": "Optional. 
Maximum length of the Type.STRING", +"format": "int64", +"type": "string" +}, +"maxProperties": { +"description": "Optional. Maximum number of the properties for Type.OBJECT.", +"format": "int64", +"type": "string" +}, +"maximum": { +"description": "Optional. Maximum value of the Type.INTEGER and Type.NUMBER", +"format": "double", +"type": "number" +}, +"minItems": { +"description": "Optional. Minimum number of the elements for Type.ARRAY.", +"format": "int64", +"type": "string" +}, +"minLength": { +"description": "Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING", +"format": "int64", +"type": "string" +}, +"minProperties": { +"description": "Optional. Minimum number of the properties for Type.OBJECT.", +"format": "int64", +"type": "string" +}, +"minimum": { +"description": "Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER", +"format": "double", +"type": "number" }, "nullable": { "description": "Optional. Indicates if the value may be null.", "type": "boolean" }, +"pattern": { +"description": "Optional. Pattern of the Type.STRING to restrict a string to a regular expression.", +"type": "string" +}, "properties": { "additionalProperties": { "$ref": "GoogleCloudAiplatformV1Schema" }, -"description": "Optional. Properties of Type.OBJECT.", +"description": "Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.", "type": "object" }, "required": { @@ -29123,6 +29377,10 @@ }, "type": "array" }, +"title": { +"description": "Optional. The title of the Schema.", +"type": "string" +}, "type": { "description": "Optional. The type of the data.", "enum": [ @@ -35078,7 +35336,7 @@ false "id": "GoogleCloudAiplatformV1VertexAISearch", "properties": { "datastore": { -"description": "Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<>", +"description": "Required. Fully-qualified Vertex AI Search's datastore resource ID. 
Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}", "type": "string" } }, @@ -35830,6 +36088,8 @@ false "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -36155,6 +36415,8 @@ false "", "", "", +"", +"", "Bard ARCADE finetune dataset.", "Mobile assistant finetune datasets.", "", @@ -36594,6 +36856,8 @@ false "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -36919,6 +37183,8 @@ false "", "", "", +"", +"", "Bard ARCADE finetune dataset.", "Mobile assistant finetune datasets.", "", @@ -37369,6 +37635,8 @@ false "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -37694,6 +37962,8 @@ false "", "", "", +"", +"", "Bard ARCADE finetune dataset", "Mobile assistant finetune datasets.", "", @@ -38133,6 +38403,8 @@ false "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -38458,6 +38730,8 @@ false "", "", "", +"", +"", "Bard ARCADE finetune dataset", "Mobile assistant finetune datasets.", "", @@ -39150,7 +39424,7 @@ false "id": "LearningGenaiRootGroundingMetadataCitation", "properties": { "endIndex": { -"description": "Index in the prediction output where the citation ends (exclusive). Must be > start_index and < len(output).", +"description": "Index in the prediction output where the citation ends (exclusive). Must be > start_index and <= len(output).", "format": "int32", "type": "integer" }, @@ -39940,14 +40214,16 @@ false "RETURN", "STOP", "MAX_TOKENS", -"FILTER" +"FILTER", +"TOP_N_FILTERED" ], "enumDescriptions": [ "", "Return all the tokens back. This typically implies no filtering or stop sequence was triggered.", "Finished due to provided stop sequence.", "Model has emitted the maximum number of tokens as specified by max_decoding_steps.", -"Finished due to triggering some post-processing filter." +"Finished due to triggering some post-processing filter.", +"Filtered out due to Top_N < Response_Candidates.Size()" ], "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json b/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json index 1722305cb2..c1f9316330 100644 --- a/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/aiplatform.v1beta1.json @@ -3428,7 +3428,7 @@ ], "parameters": { "filter": { -"description": "Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports = and !=. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports = and, != * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:* or labels:key - key existence * A key including a space must be quoted. `labels.\"a key\"`. 
* `base_model_name` only supports = Some examples: * `endpoint=1` * `displayName=\"myDisplayName\"` * `labels.myKey=\"myValue\"` * `baseModelName=\"text-bison\"`", +"description": "Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. * `endpoint` supports `=` and `!=`. `endpoint` represents the Endpoint ID, i.e. the last segment of the Endpoint's resource name. * `display_name` supports `=` and `!=`. * `labels` supports general map functions that is: * `labels.key=value` - key:value equality * `labels.key:*` or `labels:key` - key existence * A key including a space must be quoted. `labels.\"a key\"`. * `base_model_name` only supports `=`. Some examples: * `endpoint=1` * `displayName=\"myDisplayName\"` * `labels.myKey=\"myValue\"` * `baseModelName=\"text-bison\"`", "location": "query", "type": "string" }, @@ -4492,7 +4492,7 @@ "type": "string" }, "updateMask": { -"description": "Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description`", +"description": "Required. Mask specifying which fields to update. Supported fields: * `display_name` * `description` * `tool_use_examples`", "format": "google-fieldmask", "location": "query", "type": "string" @@ -4508,6 +4508,34 @@ "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] +}, +"query": { +"description": "Queries an extension with a default controller.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/extensions/{extensionsId}:query", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.extensions.query", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Name (identifier) of the extension; Format: `projects/{project}/locations/{location}/extensions/{extension}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/extensions/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+name}:query", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1QueryExtensionRequest" +}, +"response": { +"$ref": "GoogleCloudAiplatformV1beta1QueryExtensionResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] } }, "resources": { @@ -5973,6 +6001,34 @@ "https://www.googleapis.com/auth/cloud-platform" ] }, +"streamingFetchFeatureValues": { +"description": "Bidirectional streaming RPC to fetch feature values under a FeatureView. Requests may not have a one-to-one mapping to responses and responses may be returned out-of-order to reduce latency.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/featureOnlineStores/{featureOnlineStoresId}/featureViews/{featureViewsId}:streamingFetchFeatureValues", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.featureOnlineStores.featureViews.streamingFetchFeatureValues", +"parameterOrder": [ +"featureView" +], +"parameters": { +"featureView": { +"description": "Required. 
FeatureView resource format `projects/{project}/locations/{location}/featureOnlineStores/{featureOnlineStore}/featureViews/{featureView}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/featureOnlineStores/[^/]+/featureViews/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+featureView}:streamingFetchFeatureValues", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1StreamingFetchFeatureValuesRequest" +}, +"response": { +"$ref": "GoogleCloudAiplatformV1beta1StreamingFetchFeatureValuesResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "sync": { "description": "Triggers on-demand sync for the FeatureView.", "flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/featureOnlineStores/{featureOnlineStoresId}/featureViews/{featureViewsId}:sync", @@ -13387,6 +13443,34 @@ "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] +}, +"reboot": { +"description": "Reboots a PersistentResource.", +"flatPath": "v1beta1/projects/{projectsId}/locations/{locationsId}/persistentResources/{persistentResourcesId}:reboot", +"httpMethod": "POST", +"id": "aiplatform.projects.locations.persistentResources.reboot", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The name of the PersistentResource resource. Format: `projects/{project_id_or_number}/locations/{location_id}/persistentResources/{persistent_resource_id}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/persistentResources/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta1/{+name}:reboot", +"request": { +"$ref": "GoogleCloudAiplatformV1beta1RebootPersistentResourceRequest" +}, +"response": { +"$ref": "GoogleLongrunningOperation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] } }, "resources": { @@ -18168,7 +18252,7 @@ } } }, -"revision": "20240313", +"revision": "20240320", "rootUrl": "https://aiplatform.googleapis.com/", "schemas": { "CloudAiLargeModelsVisionEmbedVideoResponse": { @@ -19696,7 +19780,7 @@ "id": "GoogleCloudAiplatformV1beta1AuthConfigApiKeyConfig", "properties": { "apiKeySecret": { -"description": "Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}`", +"description": "Required. The name of the SecretManager secret version resource storing the API key. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.", "type": "string" }, "httpElementLocation": { @@ -19731,7 +19815,7 @@ "id": "GoogleCloudAiplatformV1beta1AuthConfigGoogleServiceAccountConfig", "properties": { "serviceAccount": { -"description": "Optional. The service account that the extension execution service runs as. - If it is not specified, the Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) will be used. - If the service account is provided, the service account should grant Vertex AI Extension Service Agent `iam.serviceAccounts.getAccessToken` permission.", +"description": "Optional. The service account that the extension execution service runs as. 
- If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified service account. - If not specified, the Vertex AI Extension Service Agent will be used to execute the Extension.", "type": "string" } }, @@ -19742,7 +19826,7 @@ "id": "GoogleCloudAiplatformV1beta1AuthConfigHttpBasicAuthConfig", "properties": { "credentialSecret": { -"description": "Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}`", +"description": "Required. The name of the SecretManager secret version resource storing the base64 encoded credentials. Format: `projects/{project}/secrets/{secrete}/versions/{version}` - If specified, the `secretmanager.versions.access` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the specified resource.", "type": "string" } }, @@ -19759,11 +19843,11 @@ "id": "GoogleCloudAiplatformV1beta1AuthConfigOauthConfig", "properties": { "accessToken": { -"description": "Access token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.", +"description": "Access token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.", "type": "string" }, "serviceAccount": { -"description": "The service account that the extension execution service will use to query extension. Used for generating OAuth token on behalf of provided service account. - If the service account is provided, the service account should grant Vertex AI Service Agent `iam.serviceAccounts.getAccessToken` permission.", +"description": "The service account used to generate access tokens for executing the Extension. - If the service account is specified, the `iam.serviceAccounts.getAccessToken` permission should be granted to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents) on the provided service account.", "type": "string" } }, @@ -19774,7 +19858,11 @@ "id": "GoogleCloudAiplatformV1beta1AuthConfigOidcConfig", "properties": { "idToken": { -"description": "OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from ExecuteExtensionRequest.runtime_auth_config at request time.", +"description": "OpenID Connect formatted ID token for extension endpoint. Only used to propagate token from [[ExecuteExtensionRequest.runtime_auth_config]] at request time.", +"type": "string" +}, +"serviceAccount": { +"description": "The service account used to generate an OpenID Connect (OIDC)-compatible JWT token signed by the Google OIDC Provider (accounts.google.com) for extension endpoint (https://cloud.google.com/iam/docs/create-short-lived-credentials-direct#sa-credentials-oidc). - The audience for the token will be set to the URL in the server url defined in the OpenApi spec. 
- If the service account is provided, the service account should grant `iam.serviceAccounts.getOpenIdToken` permission to Vertex AI Extension Service Agent (https://cloud.google.com/vertex-ai/docs/general/access-control#service-agents).", "type": "string" } }, @@ -20667,6 +20755,17 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1CheckPoint": { +"description": "Placeholder for all checkpoint related data. Any data needed to restore a request and more go/vertex-extension-query-operation", +"id": "GoogleCloudAiplatformV1beta1CheckPoint", +"properties": { +"content": { +"description": "Required. encoded checkpoint", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1CheckTrialEarlyStoppingStateMetatdata": { "description": "This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.", "id": "GoogleCloudAiplatformV1beta1CheckTrialEarlyStoppingStateMetatdata", @@ -22920,7 +23019,7 @@ "id": "GoogleCloudAiplatformV1beta1ExecuteExtensionRequest", "properties": { "operationId": { -"description": "Required. The operation to be executed in this extension as defined in ExtensionOperation.operation_id.", +"description": "Required. The desired ID of the operation to be executed in this extension as defined in ExtensionOperation.operation_id.", "type": "string" }, "operationParams": { @@ -22945,15 +23044,6 @@ "content": { "description": "Response content from the extension. The content should be conformant to the response.content schema in the extension's manifest/OpenAPI spec.", "type": "string" -}, -"output": { -"additionalProperties": { -"description": "Properties of the object.", -"type": "any" -}, -"deprecated": true, -"description": "Output from the extension. The output should be conformant to the extension's manifest/OpenAPI spec. The output can contain values for keys like \"content\", \"headers\", etc. This field is deprecated, please use content field below for the extension execution result.", -"type": "object" } }, "type": "object" @@ -23039,6 +23129,56 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1ExecutionPlan": { +"description": "Execution plan for a request.", +"id": "GoogleCloudAiplatformV1beta1ExecutionPlan", +"properties": { +"steps": { +"description": "Required. Sequence of steps to execute a request.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1ExecutionPlanStep" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1ExecutionPlanStep": { +"description": "Single step in query execution plan.", +"id": "GoogleCloudAiplatformV1beta1ExecutionPlanStep", +"properties": { +"extensionExecution": { +"$ref": "GoogleCloudAiplatformV1beta1ExecutionPlanStepExtensionExecution", +"description": "Extension execution step." +}, +"respondToUser": { +"$ref": "GoogleCloudAiplatformV1beta1ExecutionPlanStepRespondToUser", +"description": "Respond to user step." +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1ExecutionPlanStepExtensionExecution": { +"description": "Extension execution step.", +"id": "GoogleCloudAiplatformV1beta1ExecutionPlanStepExtensionExecution", +"properties": { +"extension": { +"description": "Required. extension resource name", +"type": "string" +}, +"operationId": { +"description": "Required. 
the operation id", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1ExecutionPlanStepRespondToUser": { +"description": "Respond to user step.", +"id": "GoogleCloudAiplatformV1beta1ExecutionPlanStepRespondToUser", +"properties": {}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1ExplainRequest": { "description": "Request message for PredictionService.Explain.", "id": "GoogleCloudAiplatformV1beta1ExplainRequest", @@ -24893,6 +25033,10 @@ "description": "Response message for FeatureOnlineStoreService.FetchFeatureValues", "id": "GoogleCloudAiplatformV1beta1FetchFeatureValuesResponse", "properties": { +"dataKey": { +"$ref": "GoogleCloudAiplatformV1beta1FeatureViewDataKey", +"description": "The data key associated with this response. Will only be populated for FeatureOnlineStoreService.StreamingFetchFeatureValues RPCs." +}, "keyValues": { "$ref": "GoogleCloudAiplatformV1beta1FetchFeatureValuesResponseFeatureNameValuePairList", "description": "Feature values in KeyValue format." @@ -25233,13 +25377,6 @@ }, "type": "array" }, -"systemInstructions": { -"description": "Optional. The user provided system instructions for the model.", -"items": { -"$ref": "GoogleCloudAiplatformV1beta1Content" -}, -"type": "array" -}, "tools": { "description": "Optional. A list of `Tools` the model may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the model.", "items": { @@ -30780,6 +30917,10 @@ "$ref": "GoogleCloudAiplatformV1beta1PublisherModelCallToActionDeployGke", "description": "Optional. Deploy PublisherModel to Google Kubernetes Engine." }, +"multiDeployVertex": { +"$ref": "GoogleCloudAiplatformV1beta1PublisherModelCallToActionDeployVertex", +"description": "Optional. Multiple setups to deploy the PublisherModel to Vertex Endpoint." +}, "openEvaluationPipeline": { "$ref": "GoogleCloudAiplatformV1beta1PublisherModelCallToActionRegionalResourceReferences", "description": "Optional. Open evaluation pipeline of the PublisherModel." @@ -30880,6 +31021,20 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1PublisherModelCallToActionDeployVertex": { +"description": "Multiple setups to deploy the PublisherModel.", +"id": "GoogleCloudAiplatformV1beta1PublisherModelCallToActionDeployVertex", +"properties": { +"multiDeployVertex": { +"description": "Optional. One click deployment configurations.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1PublisherModelCallToActionDeploy" +}, +"type": "array" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1PublisherModelCallToActionOpenFineTuningPipelines": { "description": "Open fine tuning pipelines.", "id": "GoogleCloudAiplatformV1beta1PublisherModelCallToActionOpenFineTuningPipelines", @@ -31218,6 +31373,143 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1QueryExtensionRequest": { +"description": "Request message for ExtensionExecutionService.QueryExtension.", +"id": "GoogleCloudAiplatformV1beta1QueryExtensionRequest", +"properties": { +"contents": { +"description": "Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. 
For multi-turn queries, this is a repeated field that contains conversation history + latest request.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1Content" +}, +"type": "array" +}, +"query": { +"$ref": "GoogleCloudAiplatformV1beta1QueryRequestQuery", +"deprecated": true, +"description": "Required. User provided input query message." +}, +"useFunctionCall": { +"description": "Optional. Experiment control on whether to use function call.", +"type": "boolean" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1QueryExtensionResponse": { +"description": "Response message for ExtensionExecutionService.QueryExtension.", +"id": "GoogleCloudAiplatformV1beta1QueryExtensionResponse", +"properties": { +"failureMessage": { +"description": "Failure message if any.", +"type": "string" +}, +"metadata": { +"$ref": "GoogleCloudAiplatformV1beta1QueryResponseResponseMetadata", +"deprecated": true, +"description": "Metadata related to the query execution." +}, +"queryResponseMetadata": { +"$ref": "GoogleCloudAiplatformV1beta1QueryResponseQueryResponseMetadata", +"deprecated": true +}, +"response": { +"deprecated": true, +"description": "Response to the user's query.", +"type": "string" +}, +"steps": { +"description": "Steps of extension or LLM interaction, can contain function call, function response, or text response. The last step contains the final response to the query.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1Content" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1QueryRequestQuery": { +"description": "User provided query message.", +"id": "GoogleCloudAiplatformV1beta1QueryRequestQuery", +"properties": { +"query": { +"description": "Required. The query from user.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1QueryResponseQueryResponseMetadata": { +"id": "GoogleCloudAiplatformV1beta1QueryResponseQueryResponseMetadata", +"properties": { +"steps": { +"description": "ReAgent execution steps.", +"items": { +"$ref": "GoogleCloudAiplatformV1beta1QueryResponseQueryResponseMetadataReAgentSteps" +}, +"type": "array" +}, +"useCreativity": { +"description": "Whether the reasoning agent used creativity (instead of extensions provided) to build the response.", +"type": "boolean" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1QueryResponseQueryResponseMetadataReAgentSteps": { +"description": "ReAgent execution steps.", +"id": "GoogleCloudAiplatformV1beta1QueryResponseQueryResponseMetadataReAgentSteps", +"properties": { +"error": { +"description": "Error messages from the extension or during response parsing.", +"type": "string" +}, +"extensionInstruction": { +"description": "Planner's instruction to the extension.", +"type": "string" +}, +"extensionInvoked": { +"description": "Planner's choice of extension to invoke.", +"type": "string" +}, +"response": { +"description": "Response of the extension.", +"type": "string" +}, +"success": { +"description": "When set to False, either the extension fails to execute or the response cannot be summarized.", +"type": "boolean" +}, +"thought": { +"description": "Planner's thought.", +"type": "string" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1QueryResponseResponseMetadata": { +"description": "Metadata for response", +"id": "GoogleCloudAiplatformV1beta1QueryResponseResponseMetadata", +"properties": { +"checkpoint": { +"$ref": "GoogleCloudAiplatformV1beta1CheckPoint", +"description": "Optional. 
Checkpoint to restore a request" +}, +"executionPlan": { +"$ref": "GoogleCloudAiplatformV1beta1ExecutionPlan", +"description": "Optional. Execution plan for the request." +}, +"flowOutputs": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "To surface the v2 flow output.", +"type": "object" +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1RawPredictRequest": { "description": "Request message for PredictionService.RawPredict.", "id": "GoogleCloudAiplatformV1beta1RawPredictRequest", @@ -31486,6 +31778,12 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1RebootPersistentResourceRequest": { +"description": "Request message for PersistentResourceService.RebootPersistentResource.", +"id": "GoogleCloudAiplatformV1beta1RebootPersistentResourceRequest", +"properties": {}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1RemoveContextChildrenRequest": { "description": "Request message for MetadataService.DeleteContextChildrenRequest.", "id": "GoogleCloudAiplatformV1beta1RemoveContextChildrenRequest", @@ -32183,6 +32481,10 @@ "description": "Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema). More fields may be added in the future as needed.", "id": "GoogleCloudAiplatformV1beta1Schema", "properties": { +"default": { +"description": "Optional. Default value of the data.", +"type": "any" +}, "description": { "description": "Optional. The description of the data.", "type": "string" @@ -32199,22 +32501,66 @@ "type": "any" }, "format": { -"description": "Optional. The format of the data. Supported formats: for NUMBER type: float, double for INTEGER type: int32, int64", +"description": "Optional. The format of the data. Supported formats: for NUMBER type: \"float\", \"double\" for INTEGER type: \"int32\", \"int64\" for STRING type: \"email\", \"byte\", etc", "type": "string" }, "items": { "$ref": "GoogleCloudAiplatformV1beta1Schema", -"description": "Optional. Schema of the elements of Type.ARRAY." +"description": "Optional. SCHEMA FIELDS FOR TYPE ARRAY Schema of the elements of Type.ARRAY." +}, +"maxItems": { +"description": "Optional. Maximum number of the elements for Type.ARRAY.", +"format": "int64", +"type": "string" +}, +"maxLength": { +"description": "Optional. Maximum length of the Type.STRING", +"format": "int64", +"type": "string" +}, +"maxProperties": { +"description": "Optional. Maximum number of the properties for Type.OBJECT.", +"format": "int64", +"type": "string" +}, +"maximum": { +"description": "Optional. Maximum value of the Type.INTEGER and Type.NUMBER", +"format": "double", +"type": "number" +}, +"minItems": { +"description": "Optional. Minimum number of the elements for Type.ARRAY.", +"format": "int64", +"type": "string" +}, +"minLength": { +"description": "Optional. SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING", +"format": "int64", +"type": "string" +}, +"minProperties": { +"description": "Optional. Minimum number of the properties for Type.OBJECT.", +"format": "int64", +"type": "string" +}, +"minimum": { +"description": "Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER", +"format": "double", +"type": "number" }, "nullable": { "description": "Optional. Indicates if the value may be null.", "type": "boolean" }, +"pattern": { +"description": "Optional. 
Pattern of the Type.STRING to restrict a string to a regular expression.", +"type": "string" +}, "properties": { "additionalProperties": { "$ref": "GoogleCloudAiplatformV1beta1Schema" }, -"description": "Optional. Properties of Type.OBJECT.", +"description": "Optional. SCHEMA FIELDS FOR TYPE OBJECT Properties of Type.OBJECT.", "type": "object" }, "required": { @@ -32224,6 +32570,10 @@ }, "type": "array" }, +"title": { +"description": "Optional. The title of the Schema.", +"type": "string" +}, "type": { "description": "Optional. The type of the data.", "enum": [ @@ -36174,6 +36524,56 @@ }, "type": "object" }, +"GoogleCloudAiplatformV1beta1StreamingFetchFeatureValuesRequest": { +"description": "Request message for FeatureOnlineStoreService.StreamingFetchFeatureValues. For the entities requested, all features under the requested feature view will be returned.", +"id": "GoogleCloudAiplatformV1beta1StreamingFetchFeatureValuesRequest", +"properties": { +"dataFormat": { +"description": "Specify response data format. If not set, KeyValue format will be used.", +"enum": [ +"FEATURE_VIEW_DATA_FORMAT_UNSPECIFIED", +"KEY_VALUE", +"PROTO_STRUCT" +], +"enumDescriptions": [ +"Not set. Will be treated as the KeyValue format.", +"Return response data in key-value format.", +"Return response data in proto Struct format." +], +"type": "string" +}, +"dataKeys": { +"items": { +"$ref": "GoogleCloudAiplatformV1beta1FeatureViewDataKey" +}, +"type": "array" +} +}, +"type": "object" +}, +"GoogleCloudAiplatformV1beta1StreamingFetchFeatureValuesResponse": { +"description": "Response message for FeatureOnlineStoreService.StreamingFetchFeatureValues.", +"id": "GoogleCloudAiplatformV1beta1StreamingFetchFeatureValuesResponse", +"properties": { +"data": { +"items": { +"$ref": "GoogleCloudAiplatformV1beta1FetchFeatureValuesResponse" +}, +"type": "array" +}, +"dataKeysWithError": { +"items": { +"$ref": "GoogleCloudAiplatformV1beta1FeatureViewDataKey" +}, +"type": "array" +}, +"status": { +"$ref": "GoogleRpcStatus", +"description": "Response status. If OK, then StreamingFetchFeatureValuesResponse.data will be populated. Otherwise StreamingFetchFeatureValuesResponse.data_keys_with_error will be populated with the appropriate data keys. The error only applies to the listed data keys - the stream will remain open for further FeatureOnlineStoreService.StreamingFetchFeatureValuesRequest requests." +} +}, +"type": "object" +}, "GoogleCloudAiplatformV1beta1StreamingPredictRequest": { "description": "Request message for PredictionService.StreamingPredict. The first message must contain endpoint field and optionally input. The subsequent messages must contain input.", "id": "GoogleCloudAiplatformV1beta1StreamingPredictRequest", @@ -38174,7 +38574,7 @@ "id": "GoogleCloudAiplatformV1beta1VertexAISearch", "properties": { "datastore": { -"description": "Required. Fully-qualified Vertex AI Search's datastore resource ID. projects/<>/locations/<>/collections/<>/dataStores/<>", +"description": "Required. Fully-qualified Vertex AI Search's datastore resource ID. 
Format: projects/{project}/locations/{location}/collections/{collection}/dataStores/{dataStore}", "type": "string" } }, @@ -38963,6 +39363,8 @@ "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -39288,6 +39690,8 @@ "", "", "", +"", +"", "Bard ARCADE finetune dataset.", "Mobile assistant finetune datasets.", "", @@ -39727,6 +40131,8 @@ "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -40052,6 +40458,8 @@ "", "", "", +"", +"", "Bard ARCADE finetune dataset.", "Mobile assistant finetune datasets.", "", @@ -40502,6 +40910,8 @@ "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -40827,6 +41237,8 @@ "", "", "", +"", +"", "Bard ARCADE finetune dataset", "Mobile assistant finetune datasets.", "", @@ -41266,6 +41678,8 @@ "DUET_GOOGLESQL_GENERATION", "DUET_CLOUD_IX_PROMPTS", "DUET_RAD", +"DUET_STACKOVERFLOW_ISSUES", +"DUET_STACKOVERFLOW_ANSWERS", "BARD_ARCADE_GITHUB", "MOBILE_ASSISTANT_MAGI_FILTERED_0825_373K", "MOBILE_ASSISTANT_PALM24B_FILTERED_400K", @@ -41591,6 +42005,8 @@ "", "", "", +"", +"", "Bard ARCADE finetune dataset", "Mobile assistant finetune datasets.", "", @@ -42283,7 +42699,7 @@ false "id": "LearningGenaiRootGroundingMetadataCitation", "properties": { "endIndex": { -"description": "Index in the prediction output where the citation ends (exclusive). Must be > start_index and < len(output).", +"description": "Index in the prediction output where the citation ends (exclusive). Must be > start_index and <= len(output).", "format": "int32", "type": "integer" }, @@ -43073,14 +43489,16 @@ false "RETURN", "STOP", "MAX_TOKENS", -"FILTER" +"FILTER", +"TOP_N_FILTERED" ], "enumDescriptions": [ "", "Return all the tokens back. This typically implies no filtering or stop sequence was triggered.", "Finished due to provided stop sequence.", "Model has emitted the maximum number of tokens as specified by max_decoding_steps.", -"Finished due to triggering some post-processing filter." 
+"Finished due to triggering some post-processing filter.", +"Filtered out due to Top_N < Response_Candidates.Size()" ], "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json b/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json index bddd6fb114..9e4f2f4580 100644 --- a/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/alertcenter.v1beta1.json @@ -423,7 +423,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://alertcenter.googleapis.com/", "schemas": { "AbuseDetected": { diff --git a/googleapiclient/discovery_cache/documents/alloydb.v1.json b/googleapiclient/discovery_cache/documents/alloydb.v1.json index 1a946529a0..50aa601ccc 100644 --- a/googleapiclient/discovery_cache/documents/alloydb.v1.json +++ b/googleapiclient/discovery_cache/documents/alloydb.v1.json @@ -1461,9 +1461,20 @@ } } }, -"revision": "20240306", +"revision": "20240315", "rootUrl": "https://alloydb.googleapis.com/", "schemas": { +"AuthorizedNetwork": { +"description": "AuthorizedNetwork contains metadata for an authorized network.", +"id": "AuthorizedNetwork", +"properties": { +"cidrRange": { +"description": "CIDR range for one authorzied network of the instance.", +"type": "string" +} +}, +"type": "object" +}, "AutomatedBackupPolicy": { "description": "Message describing the user-specified automated backup policy. All fields in the automated backup policy are optional. Defaults for each field are provided if they are not set.", "id": "AutomatedBackupPolicy", @@ -1947,6 +1958,11 @@ false "name": { "description": "The name of the ConnectionInfo singleton resource, e.g.: projects/{project}/locations/{location}/clusters/*/instances/*/connectionInfo This field currently has no semantic meaning.", "type": "string" +}, +"publicIpAddress": { +"description": "Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application.", +"readOnly": true, +"type": "string" } }, "type": "object" @@ -2304,6 +2320,10 @@ false "readOnly": true, "type": "string" }, +"networkConfig": { +"$ref": "InstanceNetworkConfig", +"description": "Optional. Instance level network configuration." +}, "nodes": { "description": "Output only. List of available read-only VMs in this instance, including the standby for a PRIMARY instance.", "items": { @@ -2312,6 +2332,11 @@ false "readOnly": true, "type": "array" }, +"publicIpAddress": { +"description": "Output only. The public IP addresses for the Instance. This is available ONLY when enable_public_ip is set. This is the connection endpoint for an end-user application.", +"readOnly": true, +"type": "string" +}, "queryInsightsConfig": { "$ref": "QueryInsightsInstanceConfig", "description": "Configuration for query insights." @@ -2376,6 +2401,24 @@ false }, "type": "object" }, +"InstanceNetworkConfig": { +"description": "Metadata related to instance level network configuration.", +"id": "InstanceNetworkConfig", +"properties": { +"authorizedExternalNetworks": { +"description": "Optional. A list of external network authorized to access this instance.", +"items": { +"$ref": "AuthorizedNetwork" +}, +"type": "array" +}, +"enablePublicIp": { +"description": "Optional. 
Enabling public ip for the instance.", +"type": "boolean" +} +}, +"type": "object" +}, "IntegerRestrictions": { "description": "Restrictions on INTEGER type values.", "id": "IntegerRestrictions", @@ -3257,7 +3300,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", @@ -3736,7 +3779,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", diff --git a/googleapiclient/discovery_cache/documents/alloydb.v1alpha.json b/googleapiclient/discovery_cache/documents/alloydb.v1alpha.json index 317c0138d4..ea837754a9 100644 --- a/googleapiclient/discovery_cache/documents/alloydb.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/alloydb.v1alpha.json @@ -1461,7 +1461,7 @@ } } }, -"revision": "20240306", +"revision": "20240315", "rootUrl": "https://alloydb.googleapis.com/", "schemas": { "AuthorizedNetwork": { @@ -3598,7 +3598,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", @@ -4077,7 +4077,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", diff --git a/googleapiclient/discovery_cache/documents/alloydb.v1beta.json b/googleapiclient/discovery_cache/documents/alloydb.v1beta.json index 1451303953..0295481fa3 100644 --- a/googleapiclient/discovery_cache/documents/alloydb.v1beta.json +++ b/googleapiclient/discovery_cache/documents/alloydb.v1beta.json @@ -1458,7 +1458,7 @@ } } }, -"revision": "20240306", +"revision": "20240315", "rootUrl": "https://alloydb.googleapis.com/", "schemas": { "AuthorizedNetwork": { @@ -3574,7 +3574,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", @@ -4053,7 +4053,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", diff --git a/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json b/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json index 85dc51ce9d..d450b711c0 100644 --- a/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json +++ 
b/googleapiclient/discovery_cache/documents/analyticsadmin.v1alpha.json @@ -4457,7 +4457,7 @@ } } }, -"revision": "20240317", +"revision": "20240322", "rootUrl": "https://analyticsadmin.googleapis.com/", "schemas": { "GoogleAnalyticsAdminV1alphaAccessBetweenFilter": { diff --git a/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json b/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json index d3a955c21c..ecc73587d7 100644 --- a/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json +++ b/googleapiclient/discovery_cache/documents/analyticsadmin.v1beta.json @@ -1628,7 +1628,7 @@ } } }, -"revision": "20240317", +"revision": "20240322", "rootUrl": "https://analyticsadmin.googleapis.com/", "schemas": { "GoogleAnalyticsAdminV1betaAccessBetweenFilter": { diff --git a/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json b/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json index 58978c3b8c..57c7bc0233 100644 --- a/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json +++ b/googleapiclient/discovery_cache/documents/analyticsdata.v1beta.json @@ -440,7 +440,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://analyticsdata.googleapis.com/", "schemas": { "ActiveMetricRestriction": { diff --git a/googleapiclient/discovery_cache/documents/analyticshub.v1.json b/googleapiclient/discovery_cache/documents/analyticshub.v1.json index 812bb142db..962edbaf09 100644 --- a/googleapiclient/discovery_cache/documents/analyticshub.v1.json +++ b/googleapiclient/discovery_cache/documents/analyticshub.v1.json @@ -1022,7 +1022,7 @@ } } }, -"revision": "20240307", +"revision": "20240318", "rootUrl": "https://analyticshub.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json b/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json index 749037bbf2..777258d6df 100644 --- a/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/analyticshub.v1beta1.json @@ -695,7 +695,7 @@ } } }, -"revision": "20240307", +"revision": "20240318", "rootUrl": "https://analyticshub.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json b/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json index ec171c725a..121d273100 100644 --- a/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json +++ b/googleapiclient/discovery_cache/documents/androiddeviceprovisioning.v1.json @@ -851,7 +851,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://androiddeviceprovisioning.googleapis.com/", "schemas": { "ClaimDeviceRequest": { diff --git a/googleapiclient/discovery_cache/documents/androidenterprise.v1.json b/googleapiclient/discovery_cache/documents/androidenterprise.v1.json index e4a3f304b6..1a4e7765b4 100644 --- a/googleapiclient/discovery_cache/documents/androidenterprise.v1.json +++ b/googleapiclient/discovery_cache/documents/androidenterprise.v1.json @@ -2649,7 +2649,7 @@ } } }, -"revision": "20240318", +"revision": "20240321", "rootUrl": "https://androidenterprise.googleapis.com/", "schemas": { "Administrator": { diff --git a/googleapiclient/discovery_cache/documents/androidmanagement.v1.json b/googleapiclient/discovery_cache/documents/androidmanagement.v1.json index 977485ba71..55f5e55b7f 100644 --- 
a/googleapiclient/discovery_cache/documents/androidmanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/androidmanagement.v1.json @@ -1163,7 +1163,7 @@ } } }, -"revision": "20240313", +"revision": "20240321", "rootUrl": "https://androidmanagement.googleapis.com/", "schemas": { "AdbShellCommandEvent": { @@ -2778,13 +2778,15 @@ "MINIMUM_WIFI_SECURITY_LEVEL_UNSPECIFIED", "OPEN_NETWORK_SECURITY", "PERSONAL_NETWORK_SECURITY", -"ENTERPRISE_NETWORK_SECURITY" +"ENTERPRISE_NETWORK_SECURITY", +"ENTERPRISE_BIT192_NETWORK_SECURITY" ], "enumDescriptions": [ "Defaults to OPEN_NETWORK_SECURITY, which means the device will be able to connect to all types of Wi-Fi networks.", "The device will be able to connect to all types of Wi-Fi networks.", "A personal network such as WEP, WPA2-PSK is the minimum required security. The device will not be able to connect to open wifi networks. This is stricter than OPEN_NETWORK_SECURITY. A nonComplianceDetail with API_LEVEL is reported if the Android version is less than 13.", -"An enterprise EAP network is the minimum required security level. The device will not be able to connect to Wi-Fi network below this security level. This is stricter than PERSONAL_NETWORK_SECURITY. A nonComplianceDetail with API_LEVEL is reported if the Android version is less than 13." +"An enterprise EAP network is the minimum required security level. The device will not be able to connect to Wi-Fi network below this security level. This is stricter than PERSONAL_NETWORK_SECURITY. A nonComplianceDetail with API_LEVEL is reported if the Android version is less than 13.", +"A 192-bit enterprise network is the minimum required security level. The device will not be able to connect to Wi-Fi network below this security level. This is stricter than ENTERPRISE_NETWORK_SECURITY. A nonComplianceDetail with API_LEVEL is reported if the Android version is less than 13." ], "type": "string" }, @@ -6111,7 +6113,9 @@ false "STOP_LOST_MODE_USER_ATTEMPT", "LOST_MODE_OUTGOING_PHONE_CALL", "LOST_MODE_LOCATION", -"ENROLLMENT_COMPLETE" +"ENROLLMENT_COMPLETE", +"MAX_DEVICES_REGISTRATION_QUOTA_WARNING", +"MAX_DEVICES_REGISTRATION_QUOTA_EXHAUSTED" ], "enumDescriptions": [ "This value is not used", @@ -6145,7 +6149,9 @@ false "Indicates stopLostModeUserAttemptEvent has been set.", "Indicates lostModeOutgoingPhoneCallEvent has been set.", "Indicates lostModeLocationEvent has been set.", -"Indicates enrollment_complete_event has been set." +"Indicates enrollment_complete_event has been set.", +"Indicates max_devices_registration_quota_warning_event has been set.", +"Indicates max_devices_registration_quota_exhausted_event has been set." ], "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/androidpublisher.v3.json b/googleapiclient/discovery_cache/documents/androidpublisher.v3.json index d8f22fc248..41a6487477 100644 --- a/googleapiclient/discovery_cache/documents/androidpublisher.v3.json +++ b/googleapiclient/discovery_cache/documents/androidpublisher.v3.json @@ -4279,6 +4279,11 @@ "location": "query", "type": "string" }, +"includeQuantityBasedPartialRefund": { +"description": "Optional. Whether to include voided purchases of quantity-based partial refunds, which are applicable only to multi-quantity purchases. If true, additional voided purchases may be returned with voidedQuantity that indicates the refund quantity of a quantity-based partial refund. 
The default value is false.", +"location": "query", +"type": "boolean" +}, "maxResults": { "description": "Defines how many results the list operation should return. The default number depends on the resource collection.", "format": "uint32", @@ -4726,7 +4731,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://androidpublisher.googleapis.com/", "schemas": { "Abi": { @@ -7706,6 +7711,11 @@ false "format": "int32", "type": "integer" }, +"refundableQuantity": { +"description": "The quantity eligible for refund, i.e. quantity that hasn't been refunded. The value reflects quantity-based partial refunds and full refunds.", +"format": "int32", +"type": "integer" +}, "regionCode": { "description": "ISO 3166-1 alpha-2 billing region code of the user at the time the product was granted.", "type": "string" @@ -9744,6 +9754,11 @@ false "description": "The token which uniquely identifies a one-time purchase or subscription. To uniquely identify subscription renewals use order_id (available starting from version 3 of the API).", "type": "string" }, +"voidedQuantity": { +"description": "The voided quantity as the result of a quantity-based partial refund. Voided purchases of quantity-based partial refunds may only be returned when includeQuantityBasedPartialRefund is set to true.", +"format": "int32", +"type": "integer" +}, "voidedReason": { "description": "The reason why the purchase was voided, possible values are: 0. Other 1. Remorse 2. Not_received 3. Defective 4. Accidental_purchase 5. Fraud 6. Friendly_fraud 7. Chargeback", "format": "int32", diff --git a/googleapiclient/discovery_cache/documents/apigateway.v1.json b/googleapiclient/discovery_cache/documents/apigateway.v1.json index 65c7f30652..1bca330600 100644 --- a/googleapiclient/discovery_cache/documents/apigateway.v1.json +++ b/googleapiclient/discovery_cache/documents/apigateway.v1.json @@ -1083,7 +1083,7 @@ } } }, -"revision": "20240227", +"revision": "20240313", "rootUrl": "https://apigateway.googleapis.com/", "schemas": { "ApigatewayApi": { diff --git a/googleapiclient/discovery_cache/documents/apigateway.v1beta.json b/googleapiclient/discovery_cache/documents/apigateway.v1beta.json index c909ca95f7..6db2bdd93f 100644 --- a/googleapiclient/discovery_cache/documents/apigateway.v1beta.json +++ b/googleapiclient/discovery_cache/documents/apigateway.v1beta.json @@ -1083,7 +1083,7 @@ } } }, -"revision": "20240227", +"revision": "20240313", "rootUrl": "https://apigateway.googleapis.com/", "schemas": { "ApigatewayApi": { diff --git a/googleapiclient/discovery_cache/documents/apigee.v1.json b/googleapiclient/discovery_cache/documents/apigee.v1.json index 63bf880b4e..57dcda8f8b 100644 --- a/googleapiclient/discovery_cache/documents/apigee.v1.json +++ b/googleapiclient/discovery_cache/documents/apigee.v1.json @@ -10013,7 +10013,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://apigee.googleapis.com/", "schemas": { "EdgeConfigstoreBundleBadBundle": { @@ -10115,6 +10115,21 @@ }, "type": "object" }, +"GoogleCloudApigeeV1AccessLoggingConfig": { +"description": "Access logging configuration enables customers to ship the access logs from the tenant projects to their own project's cloud logging. The feature is at the instance level ad disabled by default. It can be enabled during CreateInstance or UpdateInstance.", +"id": "GoogleCloudApigeeV1AccessLoggingConfig", +"properties": { +"enabled": { +"description": "Optional. 
Boolean flag that specifies whether the customer access log feature is enabled.", +"type": "boolean" +}, +"filter": { +"description": "Optional. Ship the access log entries that match the status_code defined in the filter. The status_code is the only expected/supported filter field. (Ex: status_code) The filter will parse it to the Common Expression Language semantics for expression evaluation to build the filter condition. (Ex: \"filter\": status_code >= 200 && status_code < 300 )", +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudApigeeV1AccessRemove": { "description": "Remove action. For example, \"Remove\" : { \"name\" : \"target.name\", \"success\" : true }", "id": "GoogleCloudApigeeV1AccessRemove", @@ -13583,6 +13598,10 @@ "description": "Apigee runtime instance.", "id": "GoogleCloudApigeeV1Instance", "properties": { +"accessLoggingConfig": { +"$ref": "GoogleCloudApigeeV1AccessLoggingConfig", +"description": "Optional. Access logging configuration enables the access logging feature at the instance. Apigee customers can enable access logging to ship the access logs to their own project's cloud logging." +}, "consumerAcceptList": { "description": "Optional. Customer accept list represents the list of projects (id/number) on customer side that can privately connect to the service attachment. It is an optional field which the customers can provide during the instance creation. By default, the customer project associated with the Apigee organization will be included to the list.", "items": { diff --git a/googleapiclient/discovery_cache/documents/apikeys.v2.json b/googleapiclient/discovery_cache/documents/apikeys.v2.json index d615d84a0f..177f4dfad5 100644 --- a/googleapiclient/discovery_cache/documents/apikeys.v2.json +++ b/googleapiclient/discovery_cache/documents/apikeys.v2.json @@ -396,7 +396,7 @@ } } }, -"revision": "20240310", +"revision": "20240324", "rootUrl": "https://apikeys.googleapis.com/", "schemas": { "Operation": { diff --git a/googleapiclient/discovery_cache/documents/appengine.v1.json b/googleapiclient/discovery_cache/documents/appengine.v1.json index 2ab41292b2..8fb0d5a1b6 100644 --- a/googleapiclient/discovery_cache/documents/appengine.v1.json +++ b/googleapiclient/discovery_cache/documents/appengine.v1.json @@ -1718,7 +1718,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://appengine.googleapis.com/", "schemas": { "ApiConfigHandler": { diff --git a/googleapiclient/discovery_cache/documents/appengine.v1alpha.json b/googleapiclient/discovery_cache/documents/appengine.v1alpha.json index 5613541c2a..2f8ebe4ba1 100644 --- a/googleapiclient/discovery_cache/documents/appengine.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/appengine.v1alpha.json @@ -946,7 +946,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://appengine.googleapis.com/", "schemas": { "AuthorizedCertificate": { diff --git a/googleapiclient/discovery_cache/documents/appengine.v1beta.json b/googleapiclient/discovery_cache/documents/appengine.v1beta.json index ac872a16a9..d5d4e2ac82 100644 --- a/googleapiclient/discovery_cache/documents/appengine.v1beta.json +++ b/googleapiclient/discovery_cache/documents/appengine.v1beta.json @@ -1918,7 +1918,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://appengine.googleapis.com/", "schemas": { "ApiConfigHandler": { diff --git a/googleapiclient/discovery_cache/documents/apphub.v1.json b/googleapiclient/discovery_cache/documents/apphub.v1.json index 
b6dcdaa7bb..31cd3de0c1 100644 --- a/googleapiclient/discovery_cache/documents/apphub.v1.json +++ b/googleapiclient/discovery_cache/documents/apphub.v1.json @@ -1346,7 +1346,7 @@ } } }, -"revision": "20240311", +"revision": "20240313", "rootUrl": "https://apphub.googleapis.com/", "schemas": { "Application": { diff --git a/googleapiclient/discovery_cache/documents/apphub.v1alpha.json b/googleapiclient/discovery_cache/documents/apphub.v1alpha.json index a9bb48d3ee..201e8da44e 100644 --- a/googleapiclient/discovery_cache/documents/apphub.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/apphub.v1alpha.json @@ -1438,7 +1438,7 @@ } } }, -"revision": "20240311", +"revision": "20240313", "rootUrl": "https://apphub.googleapis.com/", "schemas": { "Application": { diff --git a/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json b/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json index 5adf93d938..7db0ef5ce9 100644 --- a/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/area120tables.v1alpha1.json @@ -586,7 +586,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://area120tables.googleapis.com/", "schemas": { "BatchCreateRowsRequest": { diff --git a/googleapiclient/discovery_cache/documents/artifactregistry.v1beta1.json b/googleapiclient/discovery_cache/documents/artifactregistry.v1beta1.json index 6027439ea9..0e217d2e6e 100644 --- a/googleapiclient/discovery_cache/documents/artifactregistry.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/artifactregistry.v1beta1.json @@ -936,7 +936,7 @@ } } }, -"revision": "20240313", +"revision": "20240322", "rootUrl": "https://artifactregistry.googleapis.com/", "schemas": { "Binding": { diff --git a/googleapiclient/discovery_cache/documents/artifactregistry.v1beta2.json b/googleapiclient/discovery_cache/documents/artifactregistry.v1beta2.json index 973b9826b3..8c21e1e514 100644 --- a/googleapiclient/discovery_cache/documents/artifactregistry.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/artifactregistry.v1beta2.json @@ -1208,7 +1208,7 @@ } } }, -"revision": "20240313", +"revision": "20240322", "rootUrl": "https://artifactregistry.googleapis.com/", "schemas": { "AptArtifact": { diff --git a/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json b/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json index f2be33ff91..537c420913 100644 --- a/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json +++ b/googleapiclient/discovery_cache/documents/authorizedbuyersmarketplace.v1.json @@ -1307,7 +1307,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://authorizedbuyersmarketplace.googleapis.com/", "schemas": { "AcceptProposalRequest": { diff --git a/googleapiclient/discovery_cache/documents/backupdr.v1.json b/googleapiclient/discovery_cache/documents/backupdr.v1.json index 29c0c17541..f00a5d2931 100644 --- a/googleapiclient/discovery_cache/documents/backupdr.v1.json +++ b/googleapiclient/discovery_cache/documents/backupdr.v1.json @@ -567,7 +567,7 @@ } } }, -"revision": "20240228", +"revision": "20240313", "rootUrl": "https://backupdr.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/baremetalsolution.v2.json b/googleapiclient/discovery_cache/documents/baremetalsolution.v2.json index 6849738471..e703187f04 100644 --- 
a/googleapiclient/discovery_cache/documents/baremetalsolution.v2.json +++ b/googleapiclient/discovery_cache/documents/baremetalsolution.v2.json @@ -1638,7 +1638,7 @@ } } }, -"revision": "20240228", +"revision": "20240321", "rootUrl": "https://baremetalsolution.googleapis.com/", "schemas": { "AllowedClient": { diff --git a/googleapiclient/discovery_cache/documents/batch.v1.json b/googleapiclient/discovery_cache/documents/batch.v1.json index 2aaf0167a7..e803721f4f 100644 --- a/googleapiclient/discovery_cache/documents/batch.v1.json +++ b/googleapiclient/discovery_cache/documents/batch.v1.json @@ -12,7 +12,7 @@ "baseUrl": "https://batch.googleapis.com/", "batchPath": "batch", "canonicalName": "Batch", -"description": "An API to manage the running of batch resources on Google Cloud Platform.", +"description": "An API to manage the running of Batch resources on Google Cloud Platform.", "discoveryVersion": "v1", "documentationLink": "https://cloud.google.com/batch/", "fullyEncodeReservedExpansion": true, @@ -561,7 +561,7 @@ } } }, -"revision": "20240305", +"revision": "20240315", "rootUrl": "https://batch.googleapis.com/", "schemas": { "Accelerator": { @@ -907,7 +907,7 @@ "description": "Environment variables to set before running the Task." }, "maxRunDuration": { -"description": "Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit.", +"description": "Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. The valid value range for max_run_duration in seconds is [0, 315576000000.999999999],", "format": "google-duration", "type": "string" }, @@ -2193,7 +2193,7 @@ "type": "integer" }, "maxRunDuration": { -"description": "Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit.", +"description": "Maximum duration the task should run. The task will be killed and marked as FAILED if over this limit. 
The valid value range for max_run_duration in seconds is [0, 315576000000.999999999],", "format": "google-duration", "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/beyondcorp.v1.json b/googleapiclient/discovery_cache/documents/beyondcorp.v1.json index 64fab774b6..53360b7376 100644 --- a/googleapiclient/discovery_cache/documents/beyondcorp.v1.json +++ b/googleapiclient/discovery_cache/documents/beyondcorp.v1.json @@ -1804,7 +1804,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://beyondcorp.googleapis.com/", "schemas": { "AllocatedConnection": { diff --git a/googleapiclient/discovery_cache/documents/beyondcorp.v1alpha.json b/googleapiclient/discovery_cache/documents/beyondcorp.v1alpha.json index ece2d0d4aa..7afeb3a92f 100644 --- a/googleapiclient/discovery_cache/documents/beyondcorp.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/beyondcorp.v1alpha.json @@ -3907,7 +3907,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://beyondcorp.googleapis.com/", "schemas": { "AllocatedConnection": { diff --git a/googleapiclient/discovery_cache/documents/biglake.v1.json b/googleapiclient/discovery_cache/documents/biglake.v1.json index ae0baba14d..96025c3fc3 100644 --- a/googleapiclient/discovery_cache/documents/biglake.v1.json +++ b/googleapiclient/discovery_cache/documents/biglake.v1.json @@ -616,7 +616,7 @@ } } }, -"revision": "20240311", +"revision": "20240317", "rootUrl": "https://biglake.googleapis.com/", "schemas": { "Catalog": { diff --git a/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json b/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json index 7db78eed92..053c756c6a 100644 --- a/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json +++ b/googleapiclient/discovery_cache/documents/bigquerydatapolicy.v1.json @@ -395,7 +395,7 @@ } } }, -"revision": "20240311", +"revision": "20240317", "rootUrl": "https://bigquerydatapolicy.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/bigquerydatatransfer.v1.json b/googleapiclient/discovery_cache/documents/bigquerydatatransfer.v1.json index 7b841fbec6..df9c83b562 100644 --- a/googleapiclient/discovery_cache/documents/bigquerydatatransfer.v1.json +++ b/googleapiclient/discovery_cache/documents/bigquerydatatransfer.v1.json @@ -150,7 +150,7 @@ ], "parameters": { "name": { -"description": "The name of the project resource in the form: `projects/{project_id}`", +"description": "Required. The name of the project resource in the form: `projects/{project_id}`", "location": "path", "pattern": "^projects/[^/]+$", "required": true, @@ -282,7 +282,7 @@ ], "parameters": { "name": { -"description": "The name of the project resource in the form: `projects/{project_id}`", +"description": "Required. The name of the project resource in the form: `projects/{project_id}`", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+$", "required": true, @@ -381,7 +381,7 @@ ], "parameters": { "name": { -"description": "The name of the project resource in the form: `projects/{project_id}`", +"description": "Required. The name of the project resource in the form: `projects/{project_id}`", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+$", "required": true, @@ -658,7 +658,7 @@ "type": "string" }, "name": { -"description": "The resource name of the transfer config. 
Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config.", +"description": "Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config.", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/transferConfigs/[^/]+$", "required": true, @@ -732,7 +732,7 @@ ], "parameters": { "parent": { -"description": "Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`.", +"description": "Required. Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`.", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/transferConfigs/[^/]+$", "required": true, @@ -1106,7 +1106,7 @@ "type": "string" }, "name": { -"description": "The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config.", +"description": "Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config.", "location": "path", "pattern": "^projects/[^/]+/transferConfigs/[^/]+$", "required": true, @@ -1180,7 +1180,7 @@ ], "parameters": { "parent": { -"description": "Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`.", +"description": "Required. Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}` or `projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}`.", "location": "path", "pattern": "^projects/[^/]+/transferConfigs/[^/]+$", "required": true, @@ -1398,7 +1398,7 @@ } } }, -"revision": "20240312", +"revision": "20240323", "rootUrl": "https://bigquerydatatransfer.googleapis.com/", "schemas": { "CheckValidCredsRequest": { @@ -1978,7 +1978,7 @@ "description": "The encryption configuration part. Currently, it is only used for the optional KMS key name. The BigQuery service account of your project must be granted permissions to use the key. Read methods will return the key name applied in effect. Write methods will apply the key if it is present, or otherwise try to apply project default keys if it is absent." }, "name": { -"description": "The resource name of the transfer config. 
Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config.", +"description": "Identifier. The resource name of the transfer config. Transfer config names have the form either `projects/{project_id}/locations/{region}/transferConfigs/{config_id}` or `projects/{project_id}/transferConfigs/{config_id}`, where `config_id` is usually a UUID, even though it is not guaranteed or required. The name is ignored when creating a transfer config.", "type": "string" }, "nextRunTime": { @@ -2109,7 +2109,7 @@ "description": "Status of the transfer run." }, "name": { -"description": "The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run.", +"description": "Identifier. The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run.", "type": "string" }, "notificationPubsubTopic": { diff --git a/googleapiclient/discovery_cache/documents/bigqueryreservation.v1.json b/googleapiclient/discovery_cache/documents/bigqueryreservation.v1.json index 8a326bf8cb..498a4aa07e 100644 --- a/googleapiclient/discovery_cache/documents/bigqueryreservation.v1.json +++ b/googleapiclient/discovery_cache/documents/bigqueryreservation.v1.json @@ -579,6 +579,35 @@ "https://www.googleapis.com/auth/cloud-platform" ] }, +"failoverReservation": { +"description": "Failover a reservation to the secondary location. The operation should be done in the current secondary location, which will be promoted to the new primary location for the reservation. Attempting to failover a reservation in the current primary location will fail with the error code `google.rpc.Code.FAILED_PRECONDITION`.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/reservations/{reservationsId}:failoverReservation", +"httpMethod": "POST", +"id": "bigqueryreservation.projects.locations.reservations.failoverReservation", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Resource name of the reservation to failover. 
E.g., `projects/myproject/locations/US/reservations/team1-prod`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/reservations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}:failoverReservation", +"request": { +"$ref": "FailoverReservationRequest" +}, +"response": { +"$ref": "Reservation" +}, +"scopes": [ +"https://www.googleapis.com/auth/bigquery", +"https://www.googleapis.com/auth/cloud-platform" +] +}, "get": { "description": "Returns information about the reservation.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/reservations/{reservationsId}", @@ -851,7 +880,7 @@ } } }, -"revision": "20240313", +"revision": "20240321", "rootUrl": "https://bigqueryreservation.googleapis.com/", "schemas": { "Assignment": { @@ -1114,6 +1143,12 @@ false "properties": {}, "type": "object" }, +"FailoverReservationRequest": { +"description": "The request for ReservationService.FailoverReservation.", +"id": "FailoverReservationRequest", +"properties": {}, +"type": "object" +}, "ListAssignmentsResponse": { "description": "The response for ReservationService.ListAssignments.", "id": "ListAssignmentsResponse", @@ -1244,6 +1279,18 @@ false "description": "The resource name of the reservation, e.g., `projects/*/locations/*/reservations/team1-prod`. The reservation_id must only contain lower case alphanumeric characters or dashes. It must start with a letter and must not end with a dash. Its maximum length is 64 characters.", "type": "string" }, +"originalPrimaryLocation": { +"description": "Optional. The original primary location of the reservation which is set only during its creation and remains unchanged afterwards. It can be used by the customer to answer questions about disaster recovery billing. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions.", +"type": "string" +}, +"primaryLocation": { +"description": "Optional. The primary location of the reservation. The field is only meaningful for reservation used for cross region disaster recovery. The field is output only for customers and should not be specified, however, the google.api.field_behavior is not set to OUTPUT_ONLY since these fields are set in rerouted requests sent across regions.", +"type": "string" +}, +"secondaryLocation": { +"description": "Optional. The secondary location of the reservation which is used for cross region disaster recovery purposes. Customer can set this in create/update reservation calls to create a failover reservation or convert a non-failover reservation to a failover reservation.", +"type": "string" +}, "slotCapacity": { "description": "Baseline slots available to this reservation. A slot is a unit of computational power in BigQuery, and serves as the unit of parallelism. Queries using this reservation might use more slots during runtime if ignore_idle_slots is set to false, or autoscaling is enabled. If edition is EDITION_UNSPECIFIED and total slot_capacity of the reservation and its siblings exceeds the total slot_count of all capacity commitments, the request will fail with `google.rpc.Code.RESOURCE_EXHAUSTED`. If edition is any value but EDITION_UNSPECIFIED, then the above requirement is not needed. The total slot_capacity of the reservation and its siblings may exceed the total slot_count of capacity commitments. In that case, the exceeding slots will be charged with the autoscale SKU. 
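The bigqueryreservation.v1 hunk above adds a `failoverReservation` method (empty `FailoverReservationRequest` body, `Reservation` response) together with the new `primaryLocation`, `secondaryLocation`, and `originalPrimaryLocation` fields. A minimal sketch of invoking it through google-api-python-client follows; the project, location, and reservation IDs are placeholders, and, as the method description notes, the call has to be issued against the reservation's current secondary location.

from googleapiclient import discovery

# Minimal sketch: promote the secondary location of a failover reservation.
# The body is an empty FailoverReservationRequest, as defined in the hunk above;
# the resource name is a placeholder.
reservations = discovery.build("bigqueryreservation", "v1")

name = "projects/my-project/locations/EU/reservations/team1-prod"
updated = (
    reservations.projects()
    .locations()
    .reservations()
    .failoverReservation(name=name, body={})
    .execute()
)
print(updated.get("primaryLocation"), updated.get("secondaryLocation"))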
You can increase the number of baseline slots in a reservation every few minutes. If you want to decrease your baseline slots, you are limited to once an hour if you have recently changed your baseline slot capacity and your baseline slots exceed your committed slots. Otherwise, you can decrease your baseline slots every few minutes.", "format": "int64", diff --git a/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json b/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json index 6843349445..b3786345fa 100644 --- a/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json +++ b/googleapiclient/discovery_cache/documents/bigtableadmin.v2.json @@ -2098,13 +2098,17 @@ } } }, -"revision": "20240306", +"revision": "20240318", "rootUrl": "https://bigtableadmin.googleapis.com/", "schemas": { "AppProfile": { "description": "A configuration object describing how Cloud Bigtable should treat traffic from a particular end user application.", "id": "AppProfile", "properties": { +"dataBoostIsolationReadOnly": { +"$ref": "DataBoostIsolationReadOnly", +"description": "Specifies that this app profile is intended for read-only usage via the Data Boost feature." +}, "description": { "description": "Long form description of the use case for this AppProfile.", "type": "string" @@ -2409,6 +2413,14 @@ "consistencyToken": { "description": "Required. The token created using GenerateConsistencyToken for the Table.", "type": "string" +}, +"dataBoostReadLocalWrites": { +"$ref": "DataBoostReadLocalWrites", +"description": "Checks that reads using an app profile with `DataBoostIsolationReadOnly` can see all writes committed before the token was created, but only if the read and write target the same cluster." +}, +"standardReadRemoteWrites": { +"$ref": "StandardReadRemoteWrites", +"description": "Checks that reads using an app profile with `StandardIsolation` can see all writes committed before the token was created, even if the read and write target different clusters." } }, "type": "object" @@ -2559,6 +2571,10 @@ "$ref": "ColumnFamilyStats", "description": "Output only. Only available with STATS_VIEW, this includes summary statistics about column family contents. For statistics over an entire table, see TableStats above.", "readOnly": true +}, +"valueType": { +"$ref": "Type", +"description": "The type of data stored in each of this family's cell values, including its full encoding. If omitted, the family only serves raw untyped bytes. For now, only the `Aggregate` type is supported. `Aggregate` can only be set at family creation and is immutable afterwards. If `value_type` is `Aggregate`, written data must be compatible with: * `value_type.input_type` for `AddInput` mutations" } }, "type": "object" @@ -2805,6 +2821,31 @@ }, "type": "object" }, +"DataBoostIsolationReadOnly": { +"description": "Data Boost is a serverless compute capability that lets you run high-throughput read jobs on your Bigtable data, without impacting the performance of the clusters that handle your application traffic. Currently, Data Boost exclusively supports read-only use-cases with single-cluster routing. Data Boost reads are only guaranteed to see the results of writes that were written at least 30 minutes ago. This means newly written values may not become visible for up to 30m, and also means that old values may remain visible for up to 30m after being deleted or overwritten. 
To mitigate the staleness of the data, users may either wait 30m, or use CheckConsistency.", +"id": "DataBoostIsolationReadOnly", +"properties": { +"computeBillingOwner": { +"description": "The Compute Billing Owner for this Data Boost App Profile.", +"enum": [ +"COMPUTE_BILLING_OWNER_UNSPECIFIED", +"HOST_PAYS" +], +"enumDescriptions": [ +"Unspecified value.", +"The host Cloud Project containing the targeted Bigtable Instance / Table pays for compute." +], +"type": "string" +} +}, +"type": "object" +}, +"DataBoostReadLocalWrites": { +"description": "Checks that all writes before the consistency token was generated in the same cluster are readable by Databoost.", +"id": "DataBoostReadLocalWrites", +"properties": {}, +"type": "object" +}, "DropRowRangeRequest": { "description": "Request message for google.bigtable.admin.v2.BigtableTableAdmin.DropRowRange", "id": "DropRowRangeRequest", @@ -3003,6 +3044,93 @@ }, "type": "object" }, +"GoogleBigtableAdminV2TypeAggregate": { +"description": "A value that combines incremental updates into a summarized value. Data is never directly written or read using type `Aggregate`. Writes will provide either the `input_type` or `state_type`, and reads will always return the `state_type` .", +"id": "GoogleBigtableAdminV2TypeAggregate", +"properties": { +"inputType": { +"$ref": "Type", +"description": "Type of the inputs that are accumulated by this `Aggregate`, which must specify a full encoding. Use `AddInput` mutations to accumulate new inputs." +}, +"stateType": { +"$ref": "Type", +"description": "Output only. Type that holds the internal accumulator state for the `Aggregate`. This is a function of the `input_type` and `aggregator` chosen, and will always specify a full encoding.", +"readOnly": true +}, +"sum": { +"$ref": "GoogleBigtableAdminV2TypeAggregateSum", +"description": "Sum aggregator." +} +}, +"type": "object" +}, +"GoogleBigtableAdminV2TypeAggregateSum": { +"description": "Computes the sum of the input values. Allowed input: `Int64` State: same as input", +"id": "GoogleBigtableAdminV2TypeAggregateSum", +"properties": {}, +"type": "object" +}, +"GoogleBigtableAdminV2TypeBytes": { +"description": "Bytes Values of type `Bytes` are stored in `Value.bytes_value`.", +"id": "GoogleBigtableAdminV2TypeBytes", +"properties": { +"encoding": { +"$ref": "GoogleBigtableAdminV2TypeBytesEncoding", +"description": "The encoding to use when converting to/from lower level types." +} +}, +"type": "object" +}, +"GoogleBigtableAdminV2TypeBytesEncoding": { +"description": "Rules used to convert to/from lower level types.", +"id": "GoogleBigtableAdminV2TypeBytesEncoding", +"properties": { +"raw": { +"$ref": "GoogleBigtableAdminV2TypeBytesEncodingRaw", +"description": "Use `Raw` encoding." +} +}, +"type": "object" +}, +"GoogleBigtableAdminV2TypeBytesEncodingRaw": { +"description": "Leaves the value \"as-is\" * Natural sort? Yes * Self-delimiting? No * Compatibility? N/A", +"id": "GoogleBigtableAdminV2TypeBytesEncodingRaw", +"properties": {}, +"type": "object" +}, +"GoogleBigtableAdminV2TypeInt64": { +"description": "Int64 Values of type `Int64` are stored in `Value.int_value`.", +"id": "GoogleBigtableAdminV2TypeInt64", +"properties": { +"encoding": { +"$ref": "GoogleBigtableAdminV2TypeInt64Encoding", +"description": "The encoding to use when converting to/from lower level types." 
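The bigtableadmin.v2 hunk above introduces Data Boost: a `dataBoostIsolationReadOnly` field on `AppProfile`, a `computeBillingOwner` enum, and new `dataBoostReadLocalWrites` / `standardReadRemoteWrites` modes for `CheckConsistency`. The sketch below shows one plausible way to create such an app profile with google-api-python-client; the project, instance, cluster, and profile IDs are placeholders, and single-cluster routing is used because the description states Data Boost currently supports only that routing mode.

from googleapiclient import discovery

# Plausible sketch: create a read-only Data Boost app profile. Resource IDs
# are placeholders; field names follow the schemas added in the hunk above.
bigtable_admin = discovery.build("bigtableadmin", "v2")

app_profile = {
    "description": "High-throughput analytical reads via Data Boost",
    "dataBoostIsolationReadOnly": {"computeBillingOwner": "HOST_PAYS"},
    # Data Boost is documented as read-only with single-cluster routing.
    "singleClusterRouting": {
        "clusterId": "my-cluster",
        "allowTransactionalWrites": False,
    },
}

created = (
    bigtable_admin.projects()
    .instances()
    .appProfiles()
    .create(
        parent="projects/my-project/instances/my-instance",
        appProfileId="databoost-profile",
        body=app_profile,
    )
    .execute()
)
print(created["name"])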
+} +}, +"type": "object" +}, +"GoogleBigtableAdminV2TypeInt64Encoding": { +"description": "Rules used to convert to/from lower level types.", +"id": "GoogleBigtableAdminV2TypeInt64Encoding", +"properties": { +"bigEndianBytes": { +"$ref": "GoogleBigtableAdminV2TypeInt64EncodingBigEndianBytes", +"description": "Use `BigEndianBytes` encoding." +} +}, +"type": "object" +}, +"GoogleBigtableAdminV2TypeInt64EncodingBigEndianBytes": { +"description": "Encodes the value as an 8-byte big endian twos complement `Bytes` value. * Natural sort? No (positive values only) * Self-delimiting? Yes * Compatibility? - BigQuery Federation `BINARY` encoding - HBase `Bytes.toBytes` - Java `ByteBuffer.putLong()` with `ByteOrder.BIG_ENDIAN`", +"id": "GoogleBigtableAdminV2TypeInt64EncodingBigEndianBytes", +"properties": { +"bytesType": { +"$ref": "GoogleBigtableAdminV2TypeBytes", +"description": "The underlying `Bytes` type, which may be able to encode further." +} +}, +"type": "object" +}, "HotTablet": { "description": "A tablet is a defined by a start and end key and is explained in https://cloud.google.com/bigtable/docs/overview#architecture and https://cloud.google.com/bigtable/docs/performance#optimization. A Hot tablet is a tablet that exhibits high average cpu usage during the time interval from start time to end time.", "id": "HotTablet", @@ -3691,6 +3819,12 @@ }, "type": "object" }, +"StandardReadRemoteWrites": { +"description": "Checks that all writes before the consistency token was generated are replicated in every cluster and readable.", +"id": "StandardReadRemoteWrites", +"properties": {}, +"type": "object" +}, "Status": { "description": "The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).", "id": "Status", @@ -3867,6 +4001,25 @@ }, "type": "object" }, +"Type": { +"description": "`Type` represents the type of data that is written to, read from, or stored in Bigtable. It is heavily based on the GoogleSQL standard to help maintain familiarity and consistency across products and features. For compatibility with Bigtable's existing untyped APIs, each `Type` includes an `Encoding` which describes how to convert to/from the underlying data. This might involve composing a series of steps into an \"encoding chain,\" for example to convert from INT64 -> STRING -> raw bytes. In most cases, a \"link\" in the encoding chain will be based an on existing GoogleSQL conversion function like `CAST`. Each link in the encoding chain also defines the following properties: * Natural sort: Does the encoded value sort consistently with the original typed value? Note that Bigtable will always sort data based on the raw encoded value, *not* the decoded type. - Example: STRING values sort in the same order as their UTF-8 encodings. - Counterexample: Encoding INT64 to a fixed-width STRING does *not* preserve sort order when dealing with negative numbers. INT64(1) > INT64(-1), but STRING(\"-00001\") > STRING(\"00001). - The overall encoding chain sorts naturally if *every* link does. * Self-delimiting: If we concatenate two encoded values, can we always tell where the first one ends and the second one begins? 
- Example: If we encode INT64s to fixed-width STRINGs, the first value will always contain exactly N digits, possibly preceded by a sign. - Counterexample: If we concatenate two UTF-8 encoded STRINGs, we have no way to tell where the first one ends. - The overall encoding chain is self-delimiting if *any* link is. * Compatibility: Which other systems have matching encoding schemes? For example, does this encoding have a GoogleSQL equivalent? HBase? Java?", +"id": "Type", +"properties": { +"aggregateType": { +"$ref": "GoogleBigtableAdminV2TypeAggregate", +"description": "Aggregate" +}, +"bytesType": { +"$ref": "GoogleBigtableAdminV2TypeBytes", +"description": "Bytes" +}, +"int64Type": { +"$ref": "GoogleBigtableAdminV2TypeInt64", +"description": "Int64" +} +}, +"type": "object" +}, "UndeleteTableMetadata": { "description": "Metadata type for the operation returned by google.bigtable.admin.v2.BigtableTableAdmin.UndeleteTable.", "id": "UndeleteTableMetadata", diff --git a/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json b/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json index 9d72b9591e..f17fadb87a 100644 --- a/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json +++ b/googleapiclient/discovery_cache/documents/binaryauthorization.v1.json @@ -742,7 +742,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://binaryauthorization.googleapis.com/", "schemas": { "AdmissionRule": { diff --git a/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json b/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json index 8ebc8352c1..c84412392e 100644 --- a/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/binaryauthorization.v1beta1.json @@ -551,7 +551,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://binaryauthorization.googleapis.com/", "schemas": { "AdmissionRule": { diff --git a/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json b/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json index 29c61d023e..5a166d7484 100644 --- a/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json +++ b/googleapiclient/discovery_cache/documents/blockchainnodeengine.v1.json @@ -487,7 +487,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://blockchainnodeengine.googleapis.com/", "schemas": { "BlockchainNode": { diff --git a/googleapiclient/discovery_cache/documents/blogger.v2.json b/googleapiclient/discovery_cache/documents/blogger.v2.json index df41a33bd7..6e239af801 100644 --- a/googleapiclient/discovery_cache/documents/blogger.v2.json +++ b/googleapiclient/discovery_cache/documents/blogger.v2.json @@ -401,7 +401,7 @@ } } }, -"revision": "20240318", +"revision": "20240324", "rootUrl": "https://blogger.googleapis.com/", "schemas": { "Blog": { diff --git a/googleapiclient/discovery_cache/documents/blogger.v3.json b/googleapiclient/discovery_cache/documents/blogger.v3.json index 77f5977e21..67e7e700fa 100644 --- a/googleapiclient/discovery_cache/documents/blogger.v3.json +++ b/googleapiclient/discovery_cache/documents/blogger.v3.json @@ -1710,7 +1710,7 @@ } } }, -"revision": "20240318", +"revision": "20240324", "rootUrl": "https://blogger.googleapis.com/", "schemas": { "Blog": { diff --git a/googleapiclient/discovery_cache/documents/books.v1.json b/googleapiclient/discovery_cache/documents/books.v1.json index 
f4b7ece63c..1cca4c9102 100644 --- a/googleapiclient/discovery_cache/documents/books.v1.json +++ b/googleapiclient/discovery_cache/documents/books.v1.json @@ -2677,7 +2677,7 @@ } } }, -"revision": "20240312", +"revision": "20240319", "rootUrl": "https://books.googleapis.com/", "schemas": { "Annotation": { diff --git a/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json b/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json index 5333b01676..58c3a4f895 100644 --- a/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json +++ b/googleapiclient/discovery_cache/documents/businessprofileperformance.v1.json @@ -417,7 +417,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://businessprofileperformance.googleapis.com/", "schemas": { "DailyMetricTimeSeries": { diff --git a/googleapiclient/discovery_cache/documents/calendar.v3.json b/googleapiclient/discovery_cache/documents/calendar.v3.json index eb59bafb37..5b75d3885e 100644 --- a/googleapiclient/discovery_cache/documents/calendar.v3.json +++ b/googleapiclient/discovery_cache/documents/calendar.v3.json @@ -1759,7 +1759,7 @@ } } }, -"revision": "20240308", +"revision": "20240314", "rootUrl": "https://www.googleapis.com/", "schemas": { "Acl": { diff --git a/googleapiclient/discovery_cache/documents/certificatemanager.v1.json b/googleapiclient/discovery_cache/documents/certificatemanager.v1.json index 40e823553e..547824bf77 100644 --- a/googleapiclient/discovery_cache/documents/certificatemanager.v1.json +++ b/googleapiclient/discovery_cache/documents/certificatemanager.v1.json @@ -1280,9 +1280,20 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://certificatemanager.googleapis.com/", "schemas": { +"AllowlistedCertificate": { +"description": "Defines an allowlisted certificate.", +"id": "AllowlistedCertificate", +"properties": { +"pemCertificate": { +"description": "Required. PEM certificate that is allowlisted. The certificate can be up to 5k bytes, and must be a parseable X.509 certificate.", +"type": "string" +} +}, +"type": "object" +}, "AuthorizationAttemptInfo": { "description": "State of the latest attempt to authorize a domain for certificate issuance.", "id": "AuthorizationAttemptInfo", @@ -2185,6 +2196,13 @@ "description": "Defines a trust config.", "id": "TrustConfig", "properties": { +"allowlistedCertificates": { +"description": "Optional. A certificate matching an allowlisted certificate is always considered valid as long as the certificate is parseable, proof of private key possession is established, and constraints on the certificate\u2019s SAN field are met.", +"items": { +"$ref": "AllowlistedCertificate" +}, +"type": "array" +}, "createTime": { "description": "Output only. 
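The certificatemanager.v1 hunk above defines an `AllowlistedCertificate` message (a single required `pemCertificate`) and an `allowlistedCertificates` list on `TrustConfig`. The following is only a sketch of attaching an allowlisted certificate to an existing trust config via google-api-python-client; the trust config name and PEM contents are placeholders, and the use of `patch` with an `updateMask` is assumed from the existing Certificate Manager surface rather than shown in this diff.

from googleapiclient import discovery

# Sketch only: allowlist one PEM certificate on an existing TrustConfig.
# The resource name and PEM payload are placeholders.
certmanager = discovery.build("certificatemanager", "v1")

pem_cert = (
    "-----BEGIN CERTIFICATE-----\n"
    "...base64 certificate body (placeholder)...\n"
    "-----END CERTIFICATE-----\n"
)

operation = (
    certmanager.projects()
    .locations()
    .trustConfigs()
    .patch(
        name="projects/my-project/locations/global/trustConfigs/my-trust-config",
        updateMask="allowlistedCertificates",
        body={"allowlistedCertificates": [{"pemCertificate": pem_cert}]},
    )
    .execute()
)
print(operation["name"])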
The creation timestamp of a TrustConfig.", "format": "google-datetime", diff --git a/googleapiclient/discovery_cache/documents/chat.v1.json b/googleapiclient/discovery_cache/documents/chat.v1.json index 4de3b470ae..b60f9c54d3 100644 --- a/googleapiclient/discovery_cache/documents/chat.v1.json +++ b/googleapiclient/discovery_cache/documents/chat.v1.json @@ -990,7 +990,7 @@ } } }, -"revision": "20240315", +"revision": "20240319", "rootUrl": "https://chat.googleapis.com/", "schemas": { "ActionParameter": { diff --git a/googleapiclient/discovery_cache/documents/checks.v1alpha.json b/googleapiclient/discovery_cache/documents/checks.v1alpha.json index 251e000545..3c030e5080 100644 --- a/googleapiclient/discovery_cache/documents/checks.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/checks.v1alpha.json @@ -414,7 +414,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://checks.googleapis.com/", "schemas": { "CancelOperationRequest": { @@ -797,7 +797,8 @@ "QUEBEC_BILL_64", "CHINA_PIPL", "SOUTH_KOREA_PIPA", -"SOUTH_AFRICA_POPIA" +"SOUTH_AFRICA_POPIA", +"JAPAN_APPI" ], "enumDescriptions": [ "Not specified.", @@ -821,7 +822,8 @@ "Quebec Bill 64: An Act to Modernize Legislative Provisions as Regards the Protection of Personal Information.", "China Personal Information Protection Law.", "South Korea Personal Information Protection Act.", -"South Africa Protection of Personal Information Act." +"South Africa Protection of Personal Information Act.", +"Japan Act on the Protection of Personal Information." ], "type": "string" } diff --git a/googleapiclient/discovery_cache/documents/chromemanagement.v1.json b/googleapiclient/discovery_cache/documents/chromemanagement.v1.json index 69e1fe2376..98a1aa9523 100644 --- a/googleapiclient/discovery_cache/documents/chromemanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/chromemanagement.v1.json @@ -902,7 +902,7 @@ ], "parameters": { "filter": { -"description": "Optional. Only include resources that match the filter. Supported filter fields: - org_unit_id - serial_number - device_id - reports_timestamp The \"reports_timestamp\" filter accepts either the Unix Epoch milliseconds format or the RFC3339 UTC \"Zulu\" format with nanosecond resolution and up to nine fractional digits. Both formats should be surrounded by simple double quotes. Examples: \"2014-10-02T15:01:23Z\", \"2014-10-02T15:01:23.045123456Z\", \"1679283943823\".", +"description": "Optional. Only include resources that match the filter. Requests that don't specify a \"reports_timestamp\" value will default to returning only recent reports. Specify \"reports_timestamp>=0\" to get all report data. Supported filter fields: - org_unit_id - serial_number - device_id - reports_timestamp The \"reports_timestamp\" filter accepts either the Unix Epoch milliseconds format or the RFC3339 UTC \"Zulu\" format with nanosecond resolution and up to nine fractional digits. Both formats should be surrounded by simple double quotes. 
Examples: \"2014-10-02T15:01:23Z\", \"2014-10-02T15:01:23.045123456Z\", \"1679283943823\".", "location": "query", "type": "string" }, @@ -1172,7 +1172,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://chromemanagement.googleapis.com/", "schemas": { "GoogleChromeManagementV1AndroidAppInfo": { diff --git a/googleapiclient/discovery_cache/documents/chromepolicy.v1.json b/googleapiclient/discovery_cache/documents/chromepolicy.v1.json index 1ea02aae7b..553bb10636 100644 --- a/googleapiclient/discovery_cache/documents/chromepolicy.v1.json +++ b/googleapiclient/discovery_cache/documents/chromepolicy.v1.json @@ -557,7 +557,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://chromepolicy.googleapis.com/", "schemas": { "GoogleChromePolicyVersionsV1AdditionalTargetKeyName": { diff --git a/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json b/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json index c2b95a250f..7c2f81111d 100644 --- a/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json +++ b/googleapiclient/discovery_cache/documents/chromeuxreport.v1.json @@ -131,7 +131,7 @@ } } }, -"revision": "20240312", +"revision": "20240321", "rootUrl": "https://chromeuxreport.googleapis.com/", "schemas": { "Bin": { diff --git a/googleapiclient/discovery_cache/documents/civicinfo.v2.json b/googleapiclient/discovery_cache/documents/civicinfo.v2.json index 937e2cf4ed..00c1bf24c1 100644 --- a/googleapiclient/discovery_cache/documents/civicinfo.v2.json +++ b/googleapiclient/discovery_cache/documents/civicinfo.v2.json @@ -365,7 +365,7 @@ } } }, -"revision": "20240312", +"revision": "20240319", "rootUrl": "https://civicinfo.googleapis.com/", "schemas": { "AdministrationRegion": { diff --git a/googleapiclient/discovery_cache/documents/classroom.v1.json b/googleapiclient/discovery_cache/documents/classroom.v1.json index dc6f691ede..98c0233f5a 100644 --- a/googleapiclient/discovery_cache/documents/classroom.v1.json +++ b/googleapiclient/discovery_cache/documents/classroom.v1.json @@ -2400,7 +2400,7 @@ } } }, -"revision": "20240312", +"revision": "20240318", "rootUrl": "https://classroom.googleapis.com/", "schemas": { "Announcement": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1.json index 33082f9d7c..b966a57395 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1.json @@ -1095,7 +1095,7 @@ } } }, -"revision": "20240302", +"revision": "20240322", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AccessSelector": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json index 7e97069501..5b4442b5de 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1beta1.json @@ -411,7 +411,7 @@ } } }, -"revision": "20240302", +"revision": "20240322", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json index 95f8ac13b5..af4b5aaa51 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1p1beta1.json @@ 
-108,7 +108,7 @@ "iamPolicies": { "methods": { "searchAll": { -"description": "Searches all the IAM policies within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the IAM policies within a scope, even if they don't have `.getIamPolicy` permission of all the IAM policies. Callers should have `cloud.assets.SearchAllIamPolicies` permission on the requested scope, otherwise the request will be rejected.", +"description": "Searches all the IAM policies within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the IAM policies within a scope, even if they don't have `.getIamPolicy` permission of all the IAM policies. Callers should have `cloudasset.assets.searchAllIamPolicies` permission on the requested scope, otherwise the request will be rejected.", "flatPath": "v1p1beta1/{v1p1beta1Id}/{v1p1beta1Id1}/iamPolicies:searchAll", "httpMethod": "GET", "id": "cloudasset.iamPolicies.searchAll", @@ -153,7 +153,7 @@ "resources": { "methods": { "searchAll": { -"description": "Searches all the resources within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the resources within a scope, even if they don't have `.get` permission of all the resources. Callers should have `cloud.assets.SearchAllResources` permission on the requested scope, otherwise the request will be rejected.", +"description": "Searches all the resources within a given accessible Resource Manager scope (project/folder/organization). This RPC gives callers especially administrators the ability to search all the resources within a scope, even if they don't have `.get` permission of all the resources. 
Callers should have `cloudasset.assets.searchAllResources` permission on the requested scope, otherwise the request will be rejected.", "flatPath": "v1p1beta1/{v1p1beta1Id}/{v1p1beta1Id1}/resources:searchAll", "httpMethod": "GET", "id": "cloudasset.resources.searchAll", @@ -207,7 +207,7 @@ } } }, -"revision": "20240302", +"revision": "20240322", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json index 1c8000d665..22e9bde7fb 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1p5beta1.json @@ -177,7 +177,7 @@ } } }, -"revision": "20240302", +"revision": "20240322", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json b/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json index 6e6df79ed5..4b1bd2854b 100644 --- a/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudasset.v1p7beta1.json @@ -167,7 +167,7 @@ } } }, -"revision": "20240302", +"revision": "20240322", "rootUrl": "https://cloudasset.googleapis.com/", "schemas": { "AnalyzeIamPolicyLongrunningMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudbilling.v1.json b/googleapiclient/discovery_cache/documents/cloudbilling.v1.json index 7ef94e7973..6eff853f53 100644 --- a/googleapiclient/discovery_cache/documents/cloudbilling.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudbilling.v1.json @@ -751,7 +751,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://cloudbilling.googleapis.com/", "schemas": { "AggregationInfo": { diff --git a/googleapiclient/discovery_cache/documents/cloudbuild.v1.json b/googleapiclient/discovery_cache/documents/cloudbuild.v1.json index 212885a164..d71cc2b705 100644 --- a/googleapiclient/discovery_cache/documents/cloudbuild.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudbuild.v1.json @@ -2346,7 +2346,7 @@ } } }, -"revision": "20240310", +"revision": "20240321", "rootUrl": "https://cloudbuild.googleapis.com/", "schemas": { "ApprovalConfig": { diff --git a/googleapiclient/discovery_cache/documents/cloudbuild.v2.json b/googleapiclient/discovery_cache/documents/cloudbuild.v2.json index fabd824c99..3003c04a3e 100644 --- a/googleapiclient/discovery_cache/documents/cloudbuild.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudbuild.v2.json @@ -671,7 +671,7 @@ ], "parameters": { "pageSize": { -"description": "Optional. Number of results to return in the list. Default to 100.", +"description": "Optional. Number of results to return in the list. 
Default to 20.", "format": "int32", "location": "query", "type": "integer" @@ -844,7 +844,7 @@ } } }, -"revision": "20240310", +"revision": "20240321", "rootUrl": "https://cloudbuild.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/cloudchannel.v1.json b/googleapiclient/discovery_cache/documents/cloudchannel.v1.json index 9131b4e47a..838cdc6f46 100644 --- a/googleapiclient/discovery_cache/documents/cloudchannel.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudchannel.v1.json @@ -2183,7 +2183,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://cloudchannel.googleapis.com/", "schemas": { "GoogleCloudChannelV1ActivateEntitlementRequest": { diff --git a/googleapiclient/discovery_cache/documents/clouddeploy.v1.json b/googleapiclient/discovery_cache/documents/clouddeploy.v1.json index 779b616484..9361ba74fb 100644 --- a/googleapiclient/discovery_cache/documents/clouddeploy.v1.json +++ b/googleapiclient/discovery_cache/documents/clouddeploy.v1.json @@ -2065,7 +2065,7 @@ } } }, -"revision": "20240310", +"revision": "20240311", "rootUrl": "https://clouddeploy.googleapis.com/", "schemas": { "AbandonReleaseRequest": { diff --git a/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json b/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json index 009f9fd86c..842939d3d9 100644 --- a/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/clouderrorreporting.v1beta1.json @@ -214,7 +214,7 @@ ] }, "report": { -"description": "Report an individual error event and record the event to a log. This endpoint accepts **either** an OAuth token, **or** an [API key](https://support.google.com/cloud/answer/6158862) for authentication. To use an API key, append it to the URL as the value of a `key` parameter. For example: `POST https://clouderrorreporting.googleapis.com/v1beta1/{projectName}/events:report?key=123ABC456` **Note:** [Error Reporting] (https://cloud.google.com/error-reporting) is a global service built on Cloud Logging and doesn't analyze logs stored in regional log buckets or logs routed to other Google Cloud projects.", +"description": "Report an individual error event and record the event to a log. This endpoint accepts **either** an OAuth token, **or** an [API key](https://support.google.com/cloud/answer/6158862) for authentication. To use an API key, append it to the URL as the value of a `key` parameter. 
For example: `POST https://clouderrorreporting.googleapis.com/v1beta1/{projectName}/events:report?key=123ABC456` **Note:** [Error Reporting] (https://cloud.google.com/error-reporting) is a global service built on Cloud Logging and doesn't analyze logs stored in regional log buckets.", "flatPath": "v1beta1/projects/{projectsId}/events:report", "httpMethod": "POST", "id": "clouderrorreporting.projects.events.report", @@ -431,7 +431,7 @@ } } }, -"revision": "20240307", +"revision": "20240315", "rootUrl": "https://clouderrorreporting.googleapis.com/", "schemas": { "DeleteEventsResponse": { diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json index 7c265b014f..8626f397d8 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v1.json @@ -552,7 +552,7 @@ } } }, -"revision": "20240307", +"revision": "20240315", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json index ef17375cf1..e236475cdd 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v2.json @@ -716,7 +716,7 @@ } } }, -"revision": "20240307", +"revision": "20240315", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AbortFunctionUpgradeRequest": { diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json index 4ebcdf7760..57ad479c70 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v2alpha.json @@ -716,7 +716,7 @@ } } }, -"revision": "20240307", +"revision": "20240315", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AbortFunctionUpgradeRequest": { diff --git a/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json b/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json index 0a2415634e..892d55ef77 100644 --- a/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json +++ b/googleapiclient/discovery_cache/documents/cloudfunctions.v2beta.json @@ -716,7 +716,7 @@ } } }, -"revision": "20240307", +"revision": "20240315", "rootUrl": "https://cloudfunctions.googleapis.com/", "schemas": { "AbortFunctionUpgradeRequest": { diff --git a/googleapiclient/discovery_cache/documents/cloudidentity.v1.json b/googleapiclient/discovery_cache/documents/cloudidentity.v1.json index bf4f83b01a..760a8724fa 100644 --- a/googleapiclient/discovery_cache/documents/cloudidentity.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudidentity.v1.json @@ -1990,7 +1990,7 @@ } } }, -"revision": "20240312", +"revision": "20240319", "rootUrl": "https://cloudidentity.googleapis.com/", "schemas": { "AddIdpCredentialOperationMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json b/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json index 2ec172e0b5..fa073ac726 100644 --- a/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudidentity.v1beta1.json @@ -2015,7 +2015,7 @@ } } }, -"revision": "20240312", +"revision": "20240319", "rootUrl": 
"https://cloudidentity.googleapis.com/", "schemas": { "AddIdpCredentialOperationMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudprofiler.v2.json b/googleapiclient/discovery_cache/documents/cloudprofiler.v2.json index 913b23db49..8fe4fbd22f 100644 --- a/googleapiclient/discovery_cache/documents/cloudprofiler.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudprofiler.v2.json @@ -254,7 +254,7 @@ } } }, -"revision": "20240311", +"revision": "20240317", "rootUrl": "https://cloudprofiler.googleapis.com/", "schemas": { "CreateProfileRequest": { diff --git a/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json b/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json index 22f3a395cf..a55e5af559 100644 --- a/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudscheduler.v1.json @@ -418,7 +418,7 @@ } } }, -"revision": "20240301", +"revision": "20240312", "rootUrl": "https://cloudscheduler.googleapis.com/", "schemas": { "AppEngineHttpTarget": { diff --git a/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json b/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json index b947fd09e0..1411cb3e4c 100644 --- a/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudscheduler.v1beta1.json @@ -433,7 +433,7 @@ } } }, -"revision": "20240301", +"revision": "20240312", "rootUrl": "https://cloudscheduler.googleapis.com/", "schemas": { "AppEngineHttpTarget": { diff --git a/googleapiclient/discovery_cache/documents/cloudshell.v1.json b/googleapiclient/discovery_cache/documents/cloudshell.v1.json index 267de1b58e..aa179f69d1 100644 --- a/googleapiclient/discovery_cache/documents/cloudshell.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudshell.v1.json @@ -374,7 +374,7 @@ } } }, -"revision": "20240313", +"revision": "20240320", "rootUrl": "https://cloudshell.googleapis.com/", "schemas": { "AddPublicKeyMetadata": { diff --git a/googleapiclient/discovery_cache/documents/cloudsupport.v2.json b/googleapiclient/discovery_cache/documents/cloudsupport.v2.json index e98dc1a7bc..2a0f8d80b8 100644 --- a/googleapiclient/discovery_cache/documents/cloudsupport.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudsupport.v2.json @@ -552,7 +552,7 @@ } } }, -"revision": "20240317", +"revision": "20240322", "rootUrl": "https://cloudsupport.googleapis.com/", "schemas": { "Actor": { diff --git a/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json b/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json index ce98b2cb95..b1818977ba 100644 --- a/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json +++ b/googleapiclient/discovery_cache/documents/cloudsupport.v2beta.json @@ -125,26 +125,6 @@ "location": "query", "type": "string" }, -"product.productLine": { -"description": "The Product Line of the Product.", -"enum": [ -"PRODUCT_LINE_UNSPECIFIED", -"GOOGLE_CLOUD", -"GOOGLE_MAPS" -], -"enumDescriptions": [ -"Unknown product type.", -"Google Cloud", -"Google Maps" -], -"location": "query", -"type": "string" -}, -"product.productSubline": { -"description": "The Product Subline of the Product, such as \"Maps Billing\".", -"location": "query", -"type": "string" -}, "query": { "description": "An expression used to filter case classifications. If it's an empty string, then no filtering happens. 
Otherwise, case classifications will be returned that match the filter.", "location": "query", @@ -303,21 +283,6 @@ "pattern": "^[^/]+/[^/]+$", "required": true, "type": "string" -}, -"productLine": { -"description": "The product line for which to request cases for. If unspecified, only Google Cloud cases will be returned.", -"enum": [ -"PRODUCT_LINE_UNSPECIFIED", -"GOOGLE_CLOUD", -"GOOGLE_MAPS" -], -"enumDescriptions": [ -"Unknown product type.", -"Google Cloud", -"Google Maps" -], -"location": "query", -"type": "string" } }, "path": "v2beta/{+parent}/cases", @@ -583,7 +548,7 @@ } } }, -"revision": "20240317", +"revision": "20240322", "rootUrl": "https://cloudsupport.googleapis.com/", "schemas": { "Actor": { @@ -818,10 +783,6 @@ "id": { "description": "The unique ID for a classification. Must be specified for case creation. To retrieve valid classification IDs for case creation, use `caseClassifications.search`. Classification IDs returned by `caseClassifications.search` are guaranteed to be valid for at least 6 months. If a given classification is deactiveated, it will immediately stop being returned. After 6 months, `case.create` requests using the classification ID will fail.", "type": "string" -}, -"product": { -"$ref": "Product", -"description": "The full product the classification corresponds to." } }, "type": "object" @@ -1367,31 +1328,6 @@ }, "type": "object" }, -"Product": { -"description": "The full product a case may be associated with, including Product Line and Product Subline.", -"id": "Product", -"properties": { -"productLine": { -"description": "The Product Line of the Product.", -"enum": [ -"PRODUCT_LINE_UNSPECIFIED", -"GOOGLE_CLOUD", -"GOOGLE_MAPS" -], -"enumDescriptions": [ -"Unknown product type.", -"Google Cloud", -"Google Maps" -], -"type": "string" -}, -"productSubline": { -"description": "The Product Subline of the Product, such as \"Maps Billing\".", -"type": "string" -} -}, -"type": "object" -}, "SearchCaseClassificationsResponse": { "description": "The response message for SearchCaseClassifications endpoint.", "id": "SearchCaseClassificationsResponse", diff --git a/googleapiclient/discovery_cache/documents/cloudtasks.v2.json b/googleapiclient/discovery_cache/documents/cloudtasks.v2.json index eb9e225e17..32fe5b01bc 100644 --- a/googleapiclient/discovery_cache/documents/cloudtasks.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudtasks.v2.json @@ -779,7 +779,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://cloudtasks.googleapis.com/", "schemas": { "AppEngineHttpRequest": { diff --git a/googleapiclient/discovery_cache/documents/cloudtasks.v2beta2.json b/googleapiclient/discovery_cache/documents/cloudtasks.v2beta2.json index 04276efad8..dc014db43d 100644 --- a/googleapiclient/discovery_cache/documents/cloudtasks.v2beta2.json +++ b/googleapiclient/discovery_cache/documents/cloudtasks.v2beta2.json @@ -935,7 +935,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://cloudtasks.googleapis.com/", "schemas": { "AcknowledgeTaskRequest": { diff --git a/googleapiclient/discovery_cache/documents/cloudtasks.v2beta3.json b/googleapiclient/discovery_cache/documents/cloudtasks.v2beta3.json index 316f5bd896..c36120ca46 100644 --- a/googleapiclient/discovery_cache/documents/cloudtasks.v2beta3.json +++ b/googleapiclient/discovery_cache/documents/cloudtasks.v2beta3.json @@ -791,7 +791,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://cloudtasks.googleapis.com/", "schemas": { 
"AppEngineHttpQueue": { diff --git a/googleapiclient/discovery_cache/documents/cloudtrace.v1.json b/googleapiclient/discovery_cache/documents/cloudtrace.v1.json index b6966af661..9c96c3da7f 100644 --- a/googleapiclient/discovery_cache/documents/cloudtrace.v1.json +++ b/googleapiclient/discovery_cache/documents/cloudtrace.v1.json @@ -257,7 +257,7 @@ } } }, -"revision": "20240301", +"revision": "20240315", "rootUrl": "https://cloudtrace.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/cloudtrace.v2.json b/googleapiclient/discovery_cache/documents/cloudtrace.v2.json index ebf02a07ed..eea62dd8bd 100644 --- a/googleapiclient/discovery_cache/documents/cloudtrace.v2.json +++ b/googleapiclient/discovery_cache/documents/cloudtrace.v2.json @@ -181,7 +181,7 @@ } } }, -"revision": "20240301", +"revision": "20240315", "rootUrl": "https://cloudtrace.googleapis.com/", "schemas": { "Annotation": { diff --git a/googleapiclient/discovery_cache/documents/cloudtrace.v2beta1.json b/googleapiclient/discovery_cache/documents/cloudtrace.v2beta1.json index 205b9f5569..1ca20f7b17 100644 --- a/googleapiclient/discovery_cache/documents/cloudtrace.v2beta1.json +++ b/googleapiclient/discovery_cache/documents/cloudtrace.v2beta1.json @@ -273,7 +273,7 @@ } } }, -"revision": "20240301", +"revision": "20240315", "rootUrl": "https://cloudtrace.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/composer.v1.json b/googleapiclient/discovery_cache/documents/composer.v1.json index d26456aa8d..9478aae80c 100644 --- a/googleapiclient/discovery_cache/documents/composer.v1.json +++ b/googleapiclient/discovery_cache/documents/composer.v1.json @@ -938,7 +938,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://composer.googleapis.com/", "schemas": { "AirflowMetadataRetentionPolicyConfig": { @@ -1166,7 +1166,7 @@ "properties": { "airflowMetadataRetentionConfig": { "$ref": "AirflowMetadataRetentionPolicyConfig", -"description": "Optional. The retention policy for airflow metadata database. Details: go/composer-database-retention-2" +"description": "Optional. The retention policy for airflow metadata database." 
}, "taskLogsRetentionConfig": { "$ref": "TaskLogsRetentionConfig", diff --git a/googleapiclient/discovery_cache/documents/composer.v1beta1.json b/googleapiclient/discovery_cache/documents/composer.v1beta1.json index e23b92fbce..e0dba0be49 100644 --- a/googleapiclient/discovery_cache/documents/composer.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/composer.v1beta1.json @@ -994,7 +994,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://composer.googleapis.com/", "schemas": { "AirflowMetadataRetentionPolicyConfig": { diff --git a/googleapiclient/discovery_cache/documents/compute.alpha.json b/googleapiclient/discovery_cache/documents/compute.alpha.json index 4f1c9c08e0..fcf0ef10f4 100644 --- a/googleapiclient/discovery_cache/documents/compute.alpha.json +++ b/googleapiclient/discovery_cache/documents/compute.alpha.json @@ -18438,6 +18438,100 @@ } } }, +"networkPlacements": { +"methods": { +"get": { +"description": "Returns the specified network placement.", +"flatPath": "projects/{project}/global/networkPlacements/{networkPlacement}", +"httpMethod": "GET", +"id": "compute.networkPlacements.get", +"parameterOrder": [ +"project", +"networkPlacement" +], +"parameters": { +"networkPlacement": { +"description": "Name of the network placement to return.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/global/networkPlacements/{networkPlacement}", +"response": { +"$ref": "NetworkPlacement" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, +"list": { +"description": "Retrieves a list of network placements available to the specified project.", +"flatPath": "projects/{project}/global/networkPlacements", +"httpMethod": "GET", +"id": "compute.networkPlacements.list", +"parameterOrder": [ +"project" +], +"parameters": { +"filter": { +"description": "A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. 
For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = \"Intel Skylake\") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = \"Intel Skylake\") OR (cpuPlatform = \"Intel Broadwell\") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq \"double quoted literal\"` `(fieldname1 eq literal) (fieldname2 ne \"literal\")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name \"instance\", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.", +"location": "query", +"type": "string" +}, +"maxResults": { +"default": "500", +"description": "The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. (Default: `500`)", +"format": "uint32", +"location": "query", +"minimum": "0", +"type": "integer" +}, +"orderBy": { +"description": "Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy=\"creationTimestamp desc\"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.", +"location": "query", +"type": "string" +}, +"pageToken": { +"description": "Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.", +"location": "query", +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"returnPartialSuccess": { +"description": "Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. 
For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.", +"location": "query", +"type": "boolean" +} +}, +"path": "projects/{project}/global/networkPlacements", +"response": { +"$ref": "NetworkPlacementsListResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +} +} +}, "networks": { "methods": { "addPeering": { @@ -43897,7 +43991,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://compute.googleapis.com/", "schemas": { "AWSV4Signature": { @@ -59360,7 +59454,7 @@ false "additionalProperties": { "type": "string" }, -"description": "Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only.", +"description": "Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources.", "type": "object" } }, @@ -62184,7 +62278,7 @@ false "type": "string" }, "type": { -"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL.", +"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL.", "enum": [ "HIERARCHY", "NETWORK", @@ -69768,11 +69862,13 @@ false "description": "The type of vNIC to be used on this interface. This may be gVNIC or VirtioNet.", "enum": [ "GVNIC", +"IDPF", "UNSPECIFIED_NIC_TYPE", "VIRTIO_NET" ], "enumDescriptions": [ "GVNIC", +"IDPF", "No type specified.", "VIRTIO" ], @@ -70108,6 +70204,485 @@ false }, "type": "object" }, +"NetworkPlacement": { +"description": "NetworkPlacement Represents a Google managed network placement resource.", +"id": "NetworkPlacement", +"properties": { +"creationTimestamp": { +"description": "[Output Only] Creation timestamp in RFC3339 text format.", +"type": "string" +}, +"description": { +"description": "[Output Only] An optional description of this resource.", +"type": "string" +}, +"features": { +"$ref": "NetworkPlacementNetworkFeatures", +"description": "[Output Only] Features supported by the network." +}, +"id": { +"description": "[Output Only] The unique identifier for the resource. This identifier is defined by the server.", +"format": "uint64", +"type": "string" +}, +"kind": { +"default": "compute#networkPlacement", +"description": "[Output Only] Type of the resource. 
Always compute#networkPlacement for network placements.", +"type": "string" +}, +"name": { +"description": "[Output Only] Name of the resource.", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"type": "string" +}, +"selfLink": { +"description": "[Output Only] Server-defined URL for the resource.", +"type": "string" +}, +"selfLinkWithId": { +"description": "[Output Only] Server-defined URL for this resource with the resource id.", +"type": "string" +}, +"zone": { +"description": "[Output Only] Zone to which the network is restricted.", +"type": "string" +} +}, +"type": "object" +}, +"NetworkPlacementNetworkFeatures": { +"id": "NetworkPlacementNetworkFeatures", +"properties": { +"allowAutoModeSubnet": { +"description": "Specifies whether auto mode subnet creation is allowed.", +"enum": [ +"AUTO_MODE_SUBNET_ALLOWED", +"AUTO_MODE_SUBNET_BLOCKED", +"AUTO_MODE_SUBNET_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowCloudNat": { +"description": "Specifies whether cloud NAT creation is allowed.", +"enum": [ +"CLOUD_NAT_ALLOWED", +"CLOUD_NAT_BLOCKED", +"CLOUD_NAT_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowCloudRouter": { +"description": "Specifies whether cloud router creation is allowed.", +"enum": [ +"CLOUD_ROUTER_ALLOWED", +"CLOUD_ROUTER_BLOCKED", +"CLOUD_ROUTER_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowInterconnect": { +"description": "Specifies whether Cloud Interconnect creation is allowed.", +"enum": [ +"INTERCONNECT_ALLOWED", +"INTERCONNECT_BLOCKED", +"INTERCONNECT_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowLoadBalancing": { +"description": "Specifies whether cloud load balancing is allowed.", +"enum": [ +"LOAD_BALANCING_ALLOWED", +"LOAD_BALANCING_BLOCKED", +"LOAD_BALANCING_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowMultiNicInSameNetwork": { +"description": "Specifies whether multi-nic in the same network is allowed.", +"enum": [ +"MULTI_NIC_IN_SAME_NETWORK_ALLOWED", +"MULTI_NIC_IN_SAME_NETWORK_BLOCKED", +"MULTI_NIC_IN_SAME_NETWORK_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowPacketMirroring": { +"description": "Specifies whether Packet Mirroring 1.0 is supported.", +"enum": [ +"PACKET_MIRRORING_ALLOWED", +"PACKET_MIRRORING_BLOCKED", +"PACKET_MIRRORING_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowPrivateGoogleAccess": { +"description": "Specifies whether private Google access is allowed.", +"enum": [ +"PRIVATE_GOOGLE_ACCESS_ALLOWED", +"PRIVATE_GOOGLE_ACCESS_BLOCKED", +"PRIVATE_GOOGLE_ACCESS_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowPsc": { +"description": "Specifies whether PSC creation is allowed.", +"enum": [ +"PSC_ALLOWED", +"PSC_BLOCKED", +"PSC_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowSameNetworkUnicast": { +"description": "Specifies whether unicast within the same network is allowed.", +"enum": [ +"SAME_NETWORK_UNICAST_ALLOWED", +"SAME_NETWORK_UNICAST_BLOCKED", +"SAME_NETWORK_UNICAST_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowStaticRoutes": { +"description": "Specifies whether static route creation is allowed.", +"enum": [ +"STATIC_ROUTES_ALLOWED", +"STATIC_ROUTES_BLOCKED", +"STATIC_ROUTES_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": 
"string" +}, +"allowVpcPeering": { +"description": "Specifies whether VPC peering is allowed.", +"enum": [ +"VPC_PEERING_ALLOWED", +"VPC_PEERING_BLOCKED", +"VPC_PEERING_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowVpn": { +"description": "Specifies whether VPN creation is allowed.", +"enum": [ +"VPN_ALLOWED", +"VPN_BLOCKED", +"VPN_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"allowedSubnetPurposes": { +"description": "Specifies which subnetwork purposes are supported.", +"items": { +"enum": [ +"SUBNET_PURPOSE_CUSTOM_HARDWARE", +"SUBNET_PURPOSE_PRIVATE", +"SUBNET_PURPOSE_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"type": "array" +}, +"allowedSubnetStackTypes": { +"description": "Specifies which subnetwork stack types are supported.", +"items": { +"enum": [ +"SUBNET_STACK_TYPE_IPV4_IPV6", +"SUBNET_STACK_TYPE_IPV4_ONLY", +"SUBNET_STACK_TYPE_IPV6_ONLY", +"SUBNET_STACK_TYPE_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"", +"" +], +"type": "string" +}, +"type": "array" +}, +"interfaceTypes": { +"description": "If set, limits the interface types that the network supports. If empty, all interface types are supported.", +"items": { +"enum": [ +"GVNIC", +"IDPF", +"UNSPECIFIED_NIC_TYPE", +"VIRTIO_NET" +], +"enumDescriptions": [ +"GVNIC", +"IDPF", +"No type specified.", +"VIRTIO" +], +"type": "string" +}, +"type": "array" +}, +"multicast": { +"description": "Specifies which type of multicast is supported.", +"enum": [ +"MULTICAST_SDN", +"MULTICAST_ULL", +"MULTICAST_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +}, +"unicast": { +"description": "Specifies which type of unicast is supported.", +"enum": [ +"UNICAST_SDN", +"UNICAST_ULL", +"UNICAST_UNSPECIFIED" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +} +}, +"type": "object" +}, +"NetworkPlacementsListResponse": { +"description": "Contains a list of network placements.", +"id": "NetworkPlacementsListResponse", +"properties": { +"etag": { +"type": "string" +}, +"id": { +"description": "[Output Only] Unique identifier for the resource; defined by the server.", +"type": "string" +}, +"items": { +"description": "A list of NetworkPlacement resources.", +"items": { +"$ref": "NetworkPlacement" +}, +"type": "array" +}, +"kind": { +"default": "compute#networkPlacementList", +"description": "[Output Only] Type of resource. Always compute#networkPlacementList for network placements.", +"type": "string" +}, +"nextPageToken": { +"description": "[Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. Subsequent list requests will have their own nextPageToken to continue paging through the results.", +"type": "string" +}, +"selfLink": { +"description": "[Output Only] Server-defined URL for this resource.", +"type": "string" +}, +"unreachables": { +"description": "[Output Only] Unreachable resources. end_interface: MixerListResponseWithEtagBuilder", +"items": { +"type": "string" +}, +"type": "array" +}, +"warning": { +"description": "[Output Only] Informational warning message.", +"properties": { +"code": { +"description": "[Output Only] A warning code, if applicable. 
For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.", +"enum": [ +"CLEANUP_FAILED", +"DEPRECATED_RESOURCE_USED", +"DEPRECATED_TYPE_USED", +"DISK_SIZE_LARGER_THAN_IMAGE_SIZE", +"EXPERIMENTAL_TYPE_USED", +"EXTERNAL_API_WARNING", +"FIELD_VALUE_OVERRIDEN", +"INJECTED_KERNELS_DEPRECATED", +"INVALID_HEALTH_CHECK_FOR_DYNAMIC_WIEGHTED_LB", +"LARGE_DEPLOYMENT_WARNING", +"LIST_OVERHEAD_QUOTA_EXCEED", +"MISSING_TYPE_DEPENDENCY", +"NEXT_HOP_ADDRESS_NOT_ASSIGNED", +"NEXT_HOP_CANNOT_IP_FORWARD", +"NEXT_HOP_INSTANCE_HAS_NO_IPV6_INTERFACE", +"NEXT_HOP_INSTANCE_NOT_FOUND", +"NEXT_HOP_INSTANCE_NOT_ON_NETWORK", +"NEXT_HOP_NOT_RUNNING", +"NOT_CRITICAL_ERROR", +"NO_RESULTS_ON_PAGE", +"PARTIAL_SUCCESS", +"REQUIRED_TOS_AGREEMENT", +"RESOURCE_IN_USE_BY_OTHER_RESOURCE_WARNING", +"RESOURCE_NOT_DELETED", +"SCHEMA_VALIDATION_IGNORED", +"SINGLE_INSTANCE_PROPERTY_TEMPLATE", +"UNDECLARED_PROPERTIES", +"UNREACHABLE" +], +"enumDeprecated": [ +false, +false, +false, +false, +false, +false, +true, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false +], +"enumDescriptions": [ +"Warning about failed cleanup of transient changes made by a failed operation.", +"A link to a deprecated resource was created.", +"When deploying and at least one of the resources has a type marked as deprecated", +"The user created a boot disk that is larger than image size.", +"When deploying and at least one of the resources has a type marked as experimental", +"Warning that is present in an external api call", +"Warning that value of a field has been overridden. Deprecated unused field.", +"The operation involved use of an injected kernel, which is deprecated.", +"A WEIGHTED_MAGLEV backend service is associated with a health check that is not of type HTTP/HTTPS/HTTP2.", +"When deploying a deployment with a exceedingly large number of resources", +"Resource can't be retrieved due to list overhead quota exceed which captures the amount of resources filtered out by user-defined list filter.", +"A resource depends on a missing type", +"The route's nextHopIp address is not assigned to an instance on the network.", +"The route's next hop instance cannot ip forward.", +"The route's nextHopInstance URL refers to an instance that does not have an ipv6 interface on the same network as the route.", +"The route's nextHopInstance URL refers to an instance that does not exist.", +"The route's nextHopInstance URL refers to an instance that is not on the same network as the route.", +"The route's next hop instance does not have a status of RUNNING.", +"Error which is not critical. We decided to continue the process despite the mentioned error.", +"No results are present on a particular list page.", +"Success is reported, but some results may be missing due to errors", +"The user attempted to use a resource that requires a TOS they have not accepted.", +"Warning that a resource is in use.", +"One or more of the resources set to auto-delete could not be deleted because they were in use.", +"When a resource schema validation is ignored.", +"Instance template used in instance group manager is valid as such, but its application does not make a lot of sense, because it allows only single instance in instance group.", +"When undeclared properties in the schema are present", +"A given scope cannot be reached." 
+], +"type": "string" +}, +"data": { +"description": "[Output Only] Metadata about this warning in key: value format. For example: \"data\": [ { \"key\": \"scope\", \"value\": \"zones/us-east1-d\" } ", +"items": { +"properties": { +"key": { +"description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).", +"type": "string" +}, +"value": { +"description": "[Output Only] A warning data value corresponding to the key.", +"type": "string" +} +}, +"type": "object" +}, +"type": "array" +}, +"message": { +"description": "[Output Only] A human-readable description of the warning code.", +"type": "string" +} +}, +"type": "object" +} +}, +"type": "object" +}, "NetworkRoutingConfig": { "description": "A routing configuration attached to a network resource. The message includes the list of routers associated with the network, and a flag indicating the type of routing behavior to enforce network-wide.", "id": "NetworkRoutingConfig", @@ -78088,11 +78663,6 @@ false }, "type": "array" }, -"skipInapplicableInstances": { -"deprecated": true, -"description": "Skip instances which cannot be deleted (instances not belonging to this managed group, already being deleted or being abandoned). If `false`, fail whole flow, if such instance is passed. DEPRECATED: Use skip_instances_on_validation_error instead.", -"type": "boolean" -}, "skipInstancesOnValidationError": { "description": "Specifies whether the request should proceed despite the inclusion of instances that are not members of the group or that are already in the process of being deleted or abandoned. If this field is set to `false` and such an instance is specified in the request, the operation fails. The operation always fails if the request contains a malformed instance URL or a reference to an instance that exists in a zone or region other than the group's zone or region.", "type": "boolean" diff --git a/googleapiclient/discovery_cache/documents/compute.beta.json b/googleapiclient/discovery_cache/documents/compute.beta.json index e96fd14926..f1abc8fbbc 100644 --- a/googleapiclient/discovery_cache/documents/compute.beta.json +++ b/googleapiclient/discovery_cache/documents/compute.beta.json @@ -5309,11 +5309,6 @@ "required": true, "type": "string" }, -"paths": { -"location": "query", -"repeated": true, -"type": "string" -}, "project": { "description": "Project ID for this request.", "location": "path", @@ -10684,6 +10679,21 @@ "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", "required": true, "type": "string" +}, +"view": { +"description": "View of the instance template.", +"enum": [ +"BASIC", +"FULL", +"INSTANCE_VIEW_UNSPECIFIED" +], +"enumDescriptions": [ +"Include everything except Partner Metadata.", +"Include everything.", +"The default / unset value. The API will default to the BASIC view." +], +"location": "query", +"type": "string" } }, "path": "projects/{project}/global/instanceTemplates/{instanceTemplate}", @@ -10814,6 +10824,21 @@ "description": "Opt-in for partial success behavior which provides partial results in case of failure. 
The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.", "location": "query", "type": "boolean" +}, +"view": { +"description": "View of the instance template.", +"enum": [ +"BASIC", +"FULL", +"INSTANCE_VIEW_UNSPECIFIED" +], +"enumDescriptions": [ +"Include everything except Partner Metadata.", +"Include everything.", +"The default / unset value. The API will default to the BASIC view." +], +"location": "query", +"type": "string" } }, "path": "projects/{project}/global/instanceTemplates", @@ -11362,6 +11387,21 @@ "required": true, "type": "string" }, +"view": { +"description": "View of the instance.", +"enum": [ +"BASIC", +"FULL", +"INSTANCE_VIEW_UNSPECIFIED" +], +"enumDescriptions": [ +"Include everything except Partner Metadata.", +"Include everything.", +"The default / unset value. The API will default to the BASIC view." +], +"location": "query", +"type": "string" +}, "zone": { "description": "The name of the zone for this request.", "location": "path", @@ -11532,6 +11572,54 @@ "https://www.googleapis.com/auth/compute.readonly" ] }, +"getPartnerMetadata": { +"description": "Gets partner metadata of the specified instance and namespaces.", +"flatPath": "projects/{project}/zones/{zone}/instances/{instance}/getPartnerMetadata", +"httpMethod": "GET", +"id": "compute.instances.getPartnerMetadata", +"parameterOrder": [ +"project", +"zone", +"instance" +], +"parameters": { +"instance": { +"description": "Name of the instance scoping this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +}, +"namespaces": { +"description": "Comma separated partner metadata namespaces.", +"location": "query", +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"zone": { +"description": "The name of the zone for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/zones/{zone}/instances/{instance}/getPartnerMetadata", +"response": { +"$ref": "PartnerMetadata" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, "getScreenshot": { "description": "Returns the screenshot from the specified instance.", "flatPath": "projects/{project}/zones/{zone}/instances/{instance}/screenshot", @@ -11816,6 +11904,21 @@ "location": "query", "type": "boolean" }, +"view": { +"description": "View of the instance.", +"enum": [ +"BASIC", +"FULL", +"INSTANCE_VIEW_UNSPECIFIED" +], +"enumDescriptions": [ +"Include everything except Partner Metadata.", +"Include everything.", +"The default / unset value. The API will default to the BASIC view." 
+], +"location": "query", +"type": "string" +}, "zone": { "description": "The name of the zone for this request.", "location": "path", @@ -11905,6 +12008,56 @@ "https://www.googleapis.com/auth/compute.readonly" ] }, +"patchPartnerMetadata": { +"description": "Patches partner metadata of the specified instance.", +"flatPath": "projects/{project}/zones/{zone}/instances/{instance}/patchPartnerMetadata", +"httpMethod": "POST", +"id": "compute.instances.patchPartnerMetadata", +"parameterOrder": [ +"project", +"zone", +"instance" +], +"parameters": { +"instance": { +"description": "Name of the instance scoping this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"requestId": { +"description": "An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).", +"location": "query", +"type": "string" +}, +"zone": { +"description": "The name of the zone for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/zones/{zone}/instances/{instance}/patchPartnerMetadata", +"request": { +"$ref": "PartnerMetadata" +}, +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute" +] +}, "performMaintenance": { "description": "Perform a manual maintenance on the instance.", "flatPath": "projects/{project}/zones/{zone}/instances/{instance}/performMaintenance", @@ -26280,6 +26433,21 @@ "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", "required": true, "type": "string" +}, +"view": { +"description": "View of the instance template.", +"enum": [ +"BASIC", +"FULL", +"INSTANCE_VIEW_UNSPECIFIED" +], +"enumDescriptions": [ +"Include everything except Partner Metadata.", +"Include everything.", +"The default / unset value. The API will default to the BASIC view." +], +"location": "query", +"type": "string" } }, "path": "projects/{project}/regions/{region}/instanceTemplates/{instanceTemplate}", @@ -26385,6 +26553,21 @@ "description": "Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. 
For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.", "location": "query", "type": "boolean" +}, +"view": { +"description": "View of the instance template.", +"enum": [ +"BASIC", +"FULL", +"INSTANCE_VIEW_UNSPECIFIED" +], +"enumDescriptions": [ +"Include everything except Partner Metadata.", +"Include everything.", +"The default / unset value. The API will default to the BASIC view." +], +"location": "query", +"type": "string" } }, "path": "projects/{project}/regions/{region}/instanceTemplates", @@ -32184,62 +32367,19 @@ "https://www.googleapis.com/auth/compute" ] }, -"get": { -"description": "Returns the specified Router resource.", -"flatPath": "projects/{project}/regions/{region}/routers/{router}", -"httpMethod": "GET", -"id": "compute.routers.get", -"parameterOrder": [ -"project", -"region", -"router" -], -"parameters": { -"project": { -"description": "Project ID for this request.", -"location": "path", -"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", -"required": true, -"type": "string" -}, -"region": { -"description": "Name of the region for this request.", -"location": "path", -"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", -"required": true, -"type": "string" -}, -"router": { -"description": "Name of the Router resource to return.", -"location": "path", -"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", -"required": true, -"type": "string" -} -}, -"path": "projects/{project}/regions/{region}/routers/{router}", -"response": { -"$ref": "Router" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform", -"https://www.googleapis.com/auth/compute", -"https://www.googleapis.com/auth/compute.readonly" -] -}, -"getNatIpInfo": { -"description": "Retrieves runtime NAT IP information.", -"flatPath": "projects/{project}/regions/{region}/routers/{router}/getNatIpInfo", -"httpMethod": "GET", -"id": "compute.routers.getNatIpInfo", +"deleteRoutePolicy": { +"description": "Deletes Route Policy", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/deleteRoutePolicy", +"httpMethod": "POST", +"id": "compute.routers.deleteRoutePolicy", "parameterOrder": [ "project", "region", "router" ], "parameters": { -"natName": { -"description": "Name of the nat service to filter the NAT IP information. If it is omitted, all nats for this router will be returned. Name should conform to RFC1035.", +"policy": { +"description": "The Policy name for this request. Name must conform to RFC1035", "location": "query", "type": "string" }, @@ -32257,104 +32397,246 @@ "required": true, "type": "string" }, -"router": { -"description": "Name of the Router resource to query for Nat IP information. 
The name should conform to RFC1035.", -"location": "path", -"required": true, -"type": "string" -} -}, -"path": "projects/{project}/regions/{region}/routers/{router}/getNatIpInfo", -"response": { -"$ref": "NatIpInfoResponse" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform", -"https://www.googleapis.com/auth/compute", -"https://www.googleapis.com/auth/compute.readonly" -] -}, -"getNatMappingInfo": { -"description": "Retrieves runtime Nat mapping information of VM endpoints.", -"flatPath": "projects/{project}/regions/{region}/routers/{router}/getNatMappingInfo", -"httpMethod": "GET", -"id": "compute.routers.getNatMappingInfo", -"parameterOrder": [ -"project", -"region", -"router" -], -"parameters": { -"filter": { -"description": "A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = \"Intel Skylake\") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = \"Intel Skylake\") OR (cpuPlatform = \"Intel Broadwell\") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq \"double quoted literal\"` `(fieldname1 eq literal) (fieldname2 ne \"literal\")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name \"instance\", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.", -"location": "query", -"type": "string" -}, -"maxResults": { -"default": "500", -"description": "The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. 
(Default: `500`)", -"format": "uint32", -"location": "query", -"minimum": "0", -"type": "integer" -}, -"natName": { -"description": "Name of the nat service to filter the Nat Mapping information. If it is omitted, all nats for this router will be returned. Name should conform to RFC1035.", -"location": "query", -"type": "string" -}, -"orderBy": { -"description": "Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy=\"creationTimestamp desc\"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.", -"location": "query", -"type": "string" -}, -"pageToken": { -"description": "Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.", +"requestId": { +"description": "An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).", "location": "query", "type": "string" }, -"project": { -"description": "Project ID for this request.", -"location": "path", -"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", -"required": true, -"type": "string" -}, -"region": { -"description": "Name of the region for this request.", -"location": "path", -"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", -"required": true, -"type": "string" -}, -"returnPartialSuccess": { -"description": "Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. 
For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.", -"location": "query", -"type": "boolean" -}, "router": { -"description": "Name of the Router resource to query for Nat Mapping information of VM endpoints.", +"description": "Name of the Router resource where Route Policy is defined.", "location": "path", "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", "required": true, "type": "string" } }, -"path": "projects/{project}/regions/{region}/routers/{router}/getNatMappingInfo", +"path": "projects/{project}/regions/{region}/routers/{router}/deleteRoutePolicy", "response": { -"$ref": "VmEndpointNatMappingsList" +"$ref": "Operation" }, "scopes": [ "https://www.googleapis.com/auth/cloud-platform", -"https://www.googleapis.com/auth/compute", -"https://www.googleapis.com/auth/compute.readonly" +"https://www.googleapis.com/auth/compute" ] }, -"getRouterStatus": { -"description": "Retrieves runtime information of the specified router.", -"flatPath": "projects/{project}/regions/{region}/routers/{router}/getRouterStatus", +"get": { +"description": "Returns the specified Router resource.", +"flatPath": "projects/{project}/regions/{region}/routers/{router}", "httpMethod": "GET", -"id": "compute.routers.getRouterStatus", +"id": "compute.routers.get", +"parameterOrder": [ +"project", +"region", +"router" +], +"parameters": { +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"region": { +"description": "Name of the region for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +}, +"router": { +"description": "Name of the Router resource to return.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/regions/{region}/routers/{router}", +"response": { +"$ref": "Router" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, +"getNatIpInfo": { +"description": "Retrieves runtime NAT IP information.", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/getNatIpInfo", +"httpMethod": "GET", +"id": "compute.routers.getNatIpInfo", +"parameterOrder": [ +"project", +"region", +"router" +], +"parameters": { +"natName": { +"description": "Name of the nat service to filter the NAT IP information. If it is omitted, all nats for this router will be returned. Name should conform to RFC1035.", +"location": "query", +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"region": { +"description": "Name of the region for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +}, +"router": { +"description": "Name of the Router resource to query for Nat IP information. 
The name should conform to RFC1035.", +"location": "path", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/regions/{region}/routers/{router}/getNatIpInfo", +"response": { +"$ref": "NatIpInfoResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, +"getNatMappingInfo": { +"description": "Retrieves runtime Nat mapping information of VM endpoints.", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/getNatMappingInfo", +"httpMethod": "GET", +"id": "compute.routers.getNatMappingInfo", +"parameterOrder": [ +"project", +"region", +"router" +], +"parameters": { +"filter": { +"description": "A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = \"Intel Skylake\") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = \"Intel Skylake\") OR (cpuPlatform = \"Intel Broadwell\") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq \"double quoted literal\"` `(fieldname1 eq literal) (fieldname2 ne \"literal\")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name \"instance\", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.", +"location": "query", +"type": "string" +}, +"maxResults": { +"default": "500", +"description": "The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. 
(Default: `500`)", +"format": "uint32", +"location": "query", +"minimum": "0", +"type": "integer" +}, +"natName": { +"description": "Name of the nat service to filter the Nat Mapping information. If it is omitted, all nats for this router will be returned. Name should conform to RFC1035.", +"location": "query", +"type": "string" +}, +"orderBy": { +"description": "Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy=\"creationTimestamp desc\"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.", +"location": "query", +"type": "string" +}, +"pageToken": { +"description": "Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.", +"location": "query", +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"region": { +"description": "Name of the region for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +}, +"returnPartialSuccess": { +"description": "Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.", +"location": "query", +"type": "boolean" +}, +"router": { +"description": "Name of the Router resource to query for Nat Mapping information of VM endpoints.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/regions/{region}/routers/{router}/getNatMappingInfo", +"response": { +"$ref": "VmEndpointNatMappingsList" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, +"getRoutePolicy": { +"description": "Returns specified Route Policy", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/getRoutePolicy", +"httpMethod": "GET", +"id": "compute.routers.getRoutePolicy", +"parameterOrder": [ +"project", +"region", +"router" +], +"parameters": { +"policy": { +"description": "The Policy name for this request. Name must conform to RFC1035", +"location": "query", +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"region": { +"description": "Name of the region for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +}, +"router": { +"description": "Name of the Router resource to query for the route policy. 
The name should conform to RFC1035.", +"location": "path", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/regions/{region}/routers/{router}/getRoutePolicy", +"response": { +"$ref": "RoutersGetRoutePolicyResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, +"getRouterStatus": { +"description": "Retrieves runtime information of the specified router.", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/getRouterStatus", +"httpMethod": "GET", +"id": "compute.routers.getRouterStatus", "parameterOrder": [ "project", "region", @@ -32498,6 +32780,196 @@ "https://www.googleapis.com/auth/compute.readonly" ] }, +"listBgpRoutes": { +"description": "Retrieves a list of router bgp routes available to the specified project.", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/listBgpRoutes", +"httpMethod": "GET", +"id": "compute.routers.listBgpRoutes", +"parameterOrder": [ +"project", +"region", +"router" +], +"parameters": { +"addressFamily": { +"default": "UNSPECIFIED_IP_VERSION", +"description": "(Required) limit results to this address family (either IPv4 or IPv6)", +"enum": [ +"IPV4", +"IPV6", +"UNSPECIFIED_IP_VERSION" +], +"enumDescriptions": [ +"", +"", +"" +], +"location": "query", +"type": "string" +}, +"destinationPrefix": { +"description": "Limit results to destinations that are subnets of this CIDR range", +"location": "query", +"type": "string" +}, +"filter": { +"description": "A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = \"Intel Skylake\") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = \"Intel Skylake\") OR (cpuPlatform = \"Intel Broadwell\") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. 
Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq \"double quoted literal\"` `(fieldname1 eq literal) (fieldname2 ne \"literal\")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name \"instance\", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.", +"location": "query", +"type": "string" +}, +"maxResults": { +"default": "500", +"description": "The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. (Default: `500`)", +"format": "uint32", +"location": "query", +"minimum": "0", +"type": "integer" +}, +"orderBy": { +"description": "Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy=\"creationTimestamp desc\"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.", +"location": "query", +"type": "string" +}, +"pageToken": { +"description": "Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.", +"location": "query", +"type": "string" +}, +"peer": { +"description": "(Required) limit results to the BGP peer with the given name. Name should conform to RFC1035.", +"location": "query", +"type": "string" +}, +"policyApplied": { +"default": "true", +"description": "When true, the method returns post-policy routes. Otherwise, it returns pre-policy routes.", +"location": "query", +"type": "boolean" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"region": { +"description": "Name of the region for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +}, +"returnPartialSuccess": { +"description": "Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.", +"location": "query", +"type": "boolean" +}, +"routeType": { +"default": "UNSPECIFIED_ROUTE_TYPE", +"description": "(Required) limit results to this type of route (either LEARNED or ADVERTISED)", +"enum": [ +"ADVERTISED", +"LEARNED", +"UNSPECIFIED_ROUTE_TYPE" +], +"enumDescriptions": [ +"", +"", +"" +], +"location": "query", +"type": "string" +}, +"router": { +"description": "Name or id of the resource for this request. 
Name should conform to RFC1035.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/regions/{region}/routers/{router}/listBgpRoutes", +"response": { +"$ref": "RoutersListBgpRoutes" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, +"listRoutePolicies": { +"description": "Retrieves a list of router route policy subresources available to the specified project.", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/listRoutePolicies", +"httpMethod": "GET", +"id": "compute.routers.listRoutePolicies", +"parameterOrder": [ +"project", +"region", +"router" +], +"parameters": { +"filter": { +"description": "A filter expression that filters resources listed in the response. Most Compute resources support two types of filter expressions: expressions that support regular expressions and expressions that follow API improvement proposal AIP-160. These two types of filter expressions cannot be mixed in one request. If you want to use AIP-160, your expression must specify the field name, an operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The operator must be either `=`, `!=`, `>`, `<`, `<=`, `>=` or `:`. For example, if you are filtering Compute Engine instances, you can exclude instances named `example-instance` by specifying `name != example-instance`. The `:*` comparison can be used to test whether a key has been defined. For example, to find all objects with `owner` label use: ``` labels.owner:* ``` You can also filter nested fields. For example, you could specify `scheduling.automaticRestart = false` to include instances only if they are not scheduled for automatic restarts. You can use filtering on nested fields to filter based on resource labels. To filter on multiple expressions, provide each separate expression within parentheses. For example: ``` (scheduling.automaticRestart = true) (cpuPlatform = \"Intel Skylake\") ``` By default, each expression is an `AND` expression. However, you can include `AND` and `OR` expressions explicitly. For example: ``` (cpuPlatform = \"Intel Skylake\") OR (cpuPlatform = \"Intel Broadwell\") AND (scheduling.automaticRestart = true) ``` If you want to use a regular expression, use the `eq` (equal) or `ne` (not equal) operator against a single un-parenthesized expression with or without quotes or against multiple parenthesized expressions. Examples: `fieldname eq unquoted literal` `fieldname eq 'single quoted literal'` `fieldname eq \"double quoted literal\"` `(fieldname1 eq literal) (fieldname2 ne \"literal\")` The literal value is interpreted as a regular expression using Google RE2 library syntax. The literal value must match the entire field. For example, to filter for instances that do not end with name \"instance\", you would use `name ne .*instance`. You cannot combine constraints on multiple fields using regular expressions.", +"location": "query", +"type": "string" +}, +"maxResults": { +"default": "500", +"description": "The maximum number of results per page that should be returned. If the number of available results is larger than `maxResults`, Compute Engine returns a `nextPageToken` that can be used to get the next page of results in subsequent list requests. Acceptable values are `0` to `500`, inclusive. 
(Default: `500`)", +"format": "uint32", +"location": "query", +"minimum": "0", +"type": "integer" +}, +"orderBy": { +"description": "Sorts list results by a certain order. By default, results are returned in alphanumerical order based on the resource name. You can also sort results in descending order based on the creation timestamp using `orderBy=\"creationTimestamp desc\"`. This sorts results based on the `creationTimestamp` field in reverse chronological order (newest result first). Use this to sort resources like operations so that the newest operation is returned first. Currently, only sorting by `name` or `creationTimestamp desc` is supported.", +"location": "query", +"type": "string" +}, +"pageToken": { +"description": "Specifies a page token to use. Set `pageToken` to the `nextPageToken` returned by a previous list request to get the next page of results.", +"location": "query", +"type": "string" +}, +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"region": { +"description": "Name of the region for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +}, +"returnPartialSuccess": { +"description": "Opt-in for partial success behavior which provides partial results in case of failure. The default value is false. For example, when partial success behavior is enabled, aggregatedList for a single zone scope either returns all resources in the zone or no resources, with an error code.", +"location": "query", +"type": "boolean" +}, +"router": { +"description": "Name or id of the resource for this request. Name should conform to RFC1035.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/regions/{region}/routers/{router}/listRoutePolicies", +"response": { +"$ref": "RoutersListRoutePolicies" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, "patch": { "description": "Patches the specified Router resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules.", "flatPath": "projects/{project}/regions/{region}/routers/{router}", @@ -32689,6 +33161,56 @@ "https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/compute" ] +}, +"updateRoutePolicy": { +"description": "Updates or creates new Route Policy", +"flatPath": "projects/{project}/regions/{region}/routers/{router}/updateRoutePolicy", +"httpMethod": "POST", +"id": "compute.routers.updateRoutePolicy", +"parameterOrder": [ +"project", +"region", +"router" +], +"parameters": { +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"region": { +"description": "Name of the region for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +}, +"requestId": { +"description": "An optional request ID to identify requests. 
Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).", +"location": "query", +"type": "string" +}, +"router": { +"description": "Name of the Router resource where Route Policy is defined.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/regions/{region}/routers/{router}/updateRoutePolicy", +"request": { +"$ref": "RoutePolicy" +}, +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute" +] } } }, @@ -41057,7 +41579,7 @@ } } }, -"revision": "20240227", +"revision": "20240312", "rootUrl": "https://compute.googleapis.com/", "schemas": { "AWSV4Signature": { @@ -45598,6 +46120,91 @@ false }, "type": "object" }, +"BgpRoute": { +"id": "BgpRoute", +"properties": { +"asPaths": { +"description": "[Output only] AS-PATH for the route", +"items": { +"$ref": "BgpRouteAsPath" +}, +"type": "array" +}, +"communities": { +"description": "[Output only] BGP communities in human-readable A:B format.", +"items": { +"type": "string" +}, +"type": "array" +}, +"destination": { +"$ref": "BgpRouteNetworkLayerReachabilityInformation", +"description": "[Output only] Destination IP range for the route, in human-readable CIDR format" +}, +"med": { +"description": "[Output only] BGP multi-exit discriminator", +"format": "uint32", +"type": "integer" +}, +"origin": { +"description": "[Output only] BGP origin (EGP, IGP or INCOMPLETE)", +"enum": [ +"BGP_ORIGIN_EGP", +"BGP_ORIGIN_IGP", +"BGP_ORIGIN_INCOMPLETE" +], +"enumDescriptions": [ +"", +"", +"" +], +"type": "string" +} +}, +"type": "object" +}, +"BgpRouteAsPath": { +"id": "BgpRouteAsPath", +"properties": { +"asns": { +"description": "[Output only] ASNs in the path segment. When type is SEQUENCE, these are ordered.", +"items": { +"format": "int32", +"type": "integer" +}, +"type": "array" +}, +"type": { +"description": "[Output only] Type of AS-PATH segment (SEQUENCE or SET)", +"enum": [ +"AS_PATH_TYPE_SEQUENCE", +"AS_PATH_TYPE_SET" +], +"enumDescriptions": [ +"", +"" +], +"type": "string" +} +}, +"type": "object" +}, +"BgpRouteNetworkLayerReachabilityInformation": { +"description": "Network Layer Reachability Information (NLRI) for a route.", +"id": "BgpRouteNetworkLayerReachabilityInformation", +"properties": { +"pathId": { +"description": "If the BGP session supports multiple paths (RFC 7911), the path identifier for this route.", +"format": "uint32", +"type": "integer" +}, +"prefix": { +"description": "Human readable CIDR notation for a prefix. E.g. 10.42.0.0/16.", +"type": "string" +} +}, +"type": "object" +}, "Binding": { "description": "Associates `members`, or principals, with a `role`.", "id": "Binding", @@ -46649,7 +47256,7 @@ false "type": "array" }, "allowOriginRegexes": { -"description": "Specifies a regular expression that matches allowed origins. 
For more information about the regular expression syntax, see Syntax. An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.", +"description": "Specifies a regular expression that matches allowed origins. For more information, see regular expression syntax . An origin is allowed if it matches either an item in allowOrigins or an item in allowOriginRegexes. Regular expressions can only be used when the loadBalancingScheme is set to INTERNAL_SELF_MANAGED.", "items": { "type": "string" }, @@ -46663,7 +47270,7 @@ false "type": "array" }, "disabled": { -"description": "If true, the setting specifies the CORS policy is disabled. The default value of false, which indicates that the CORS policy is in effect.", +"description": "If true, disables the CORS policy. The default value is false, which indicates that the CORS policy is in effect.", "type": "boolean" }, "exposeHeaders": { @@ -53972,6 +54579,13 @@ false "$ref": "InstanceParams", "description": "Input only. [Input Only] Additional params passed with the request, but not persisted as part of resource payload." }, +"partnerMetadata": { +"additionalProperties": { +"$ref": "StructuredEntries" +}, +"description": "Partner Metadata assigned to the instance. A map from a subdomain (namespace) to entries map.", +"type": "object" +}, "postKeyRevocationActionType": { "description": "PostKeyRevocationActionType of the instance.", "enum": [ @@ -55357,7 +55971,7 @@ false "additionalProperties": { "type": "string" }, -"description": "Resource manager tags to be bound to the instance group manager. Tag keys and values have the same definition as resource manager tags. Keys must be in the format `tagKeys/123`, and values are in the format `tagValues/456`. The field is allowed for INSERT only.", +"description": "Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see Manage tags for resources.", "type": "object" } }, @@ -55368,7 +55982,8 @@ false "id": "InstanceGroupManagerResizeRequest", "properties": { "count": { -"description": "The count of instances to create as part of this resize request.", +"deprecated": true, +"description": "This field is deprecated, please use resize_by instead. The count of instances to create as part of this resize request.", "format": "int32", "type": "integer" }, @@ -55425,14 +56040,25 @@ false "CREATING", "FAILED", "PROVISIONING", +"STATE_UNSPECIFIED", "SUCCEEDED" ], +"enumDeprecated": [ +false, +false, +false, +false, +true, +false, +false +], "enumDescriptions": [ "The request was created successfully and was accepted for provisioning when the capacity becomes available.", "The request is cancelled.", -"resize request is being created and may still fail creation.", +"Resize request is being created and may still fail creation.", "The request failed before or during provisioning. If the request fails during provisioning, any VMs that were created during provisioning are rolled back and removed from the MIG.", -"The target resource(s) are being provisioned.", +"The value is deprecated. ResizeRequests would stay in the ACCEPTED state during provisioning attempts. The target resource(s) are being provisioned.", +"Default value. This value should never be returned.", "The request succeeded." 
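For illustration only: the hunk above deprecates the resize request's `count` field and points at `resize_by` instead. A minimal sketch of creating a resize request through the generated Python client, assuming the camelCase JSON name `resizeBy`, an `instanceGroupManagerResizeRequests.insert` surface in this client version, and placeholder project/zone/MIG names (none of these values come from this change):

from googleapiclient import discovery

# Assumes Application Default Credentials; the version string and all names
# below are placeholders rather than values taken from this change.
compute = discovery.build("compute", "beta")

body = {
    "name": "grow-by-three",
    # "count" is deprecated above in favor of resize_by; assumed to map to the
    # camelCase field "resizeBy" in JSON request bodies.
    "resizeBy": 3,
}

operation = compute.instanceGroupManagerResizeRequests().insert(
    project="my-project",
    zone="us-central1-a",
    instanceGroupManager="my-mig",
    body=body,
).execute()
print(operation.get("status"))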
], "type": "string" @@ -57334,6 +57960,13 @@ false "$ref": "NetworkPerformanceConfig", "description": "Note that for MachineImage, this is not supported yet." }, +"partnerMetadata": { +"additionalProperties": { +"$ref": "StructuredEntries" +}, +"description": "Partner Metadata assigned to the instance properties. A map from a subdomain (namespace) to entries map.", +"type": "object" +}, "postKeyRevocationActionType": { "description": "PostKeyRevocationActionType of the instance.", "enum": [ @@ -58096,7 +58729,7 @@ false "type": "string" }, "type": { -"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL.", +"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL.", "enum": [ "HIERARCHY", "NETWORK", @@ -69159,6 +69792,25 @@ false }, "type": "object" }, +"PartnerMetadata": { +"description": "Model definition of partner_metadata field. To be used in dedicated Partner Metadata methods and to be inlined in the Instance and InstanceTemplate resources.", +"id": "PartnerMetadata", +"properties": { +"fingerprint": { +"description": "Instance-level hash to be used for optimistic locking.", +"format": "byte", +"type": "string" +}, +"partnerMetadata": { +"additionalProperties": { +"$ref": "StructuredEntries" +}, +"description": "Partner Metadata assigned to the instance. A map from a subdomain to entries map. Subdomain name must be compliant with RFC1035 definition. The total size of all keys and values must be less than 2MB. Subdomain 'metadata.compute.googleapis.com' is reserverd for instance's metadata.", +"type": "object" +} +}, +"type": "object" +}, "PathMatcher": { "description": "A matcher for the path portion of the URL. The BackendService from the longest-matched rule will serve the URL. If no rule was matched, the default service is used.", "id": "PathMatcher", @@ -72573,7 +73225,7 @@ false "type": "array" }, "type": { -"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL.", +"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL.", "enum": [ "HIERARCHY", "NETWORK", @@ -74636,6 +75288,61 @@ false }, "type": "object" }, +"RoutePolicy": { +"id": "RoutePolicy", +"properties": { +"fingerprint": { +"description": "A fingerprint for the Route Policy being applied to this Router, which is essentially a hash of the Route Policy used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update Route Policy. You must always provide an up-to-date fingerprint hash in order to update or change labels. To see the latest fingerprint, make a getRoutePolicy() request to retrieve a Route Policy.", +"format": "byte", +"type": "string" +}, +"name": { +"description": "Route Policy name, which must be a resource ID segment and unique within all the router's Route Policies. Name should conform to RFC1035.", +"type": "string" +}, +"terms": { +"description": "List of terms (the order in the list is not important, they are evaluated in order of priority). 
Order of policies is not retained and might change when getting policy later.", +"items": { +"$ref": "RoutePolicyPolicyTerm" +}, +"type": "array" +}, +"type": { +"enum": [ +"ROUTE_POLICY_TYPE_EXPORT", +"ROUTE_POLICY_TYPE_IMPORT" +], +"enumDescriptions": [ +"The Route Policy is an Export Policy.", +"The Route Policy is an Import Policy." +], +"type": "string" +} +}, +"type": "object" +}, +"RoutePolicyPolicyTerm": { +"id": "RoutePolicyPolicyTerm", +"properties": { +"actions": { +"description": "CEL expressions to evaluate to modify a route when this term matches.", +"items": { +"$ref": "Expr" +}, +"type": "array" +}, +"match": { +"$ref": "Expr", +"description": "CEL expression evaluated against a route to determine if this term applies. When not set, the term applies to all routes." +}, +"priority": { +"description": "The evaluation priority for this term, which must be between 0 (inclusive) and 2^31 (exclusive), and unique within the list.", +"format": "int32", +"type": "integer" +} +}, +"type": "object" +}, "Router": { "description": "Represents a Cloud Router resource. For more information about Cloud Router, read the Cloud Router overview.", "id": "Router", @@ -75920,6 +76627,337 @@ false }, "type": "object" }, +"RoutersGetRoutePolicyResponse": { +"id": "RoutersGetRoutePolicyResponse", +"properties": { +"resource": { +"$ref": "RoutePolicy" +} +}, +"type": "object" +}, +"RoutersListBgpRoutes": { +"id": "RoutersListBgpRoutes", +"properties": { +"etag": { +"type": "string" +}, +"id": { +"description": "[Output Only] The unique identifier for the resource. This identifier is defined by the server.", +"type": "string" +}, +"kind": { +"default": "compute#routersListBgpRoutes", +"description": "[Output Only] Type of resource. Always compute#routersListBgpRoutes for lists of bgp routes.", +"type": "string" +}, +"nextPageToken": { +"description": "[Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. Subsequent list requests will have their own nextPageToken to continue paging through the results.", +"type": "string" +}, +"result": { +"description": "[Output Only] A list of bgp routes.", +"items": { +"$ref": "BgpRoute" +}, +"type": "array" +}, +"selfLink": { +"description": "[Output Only] Server-defined URL for this resource.", +"type": "string" +}, +"unreachables": { +"description": "[Output Only] Unreachable resources.", +"items": { +"type": "string" +}, +"type": "array" +}, +"warning": { +"description": "[Output Only] Informational warning message.", +"properties": { +"code": { +"description": "[Output Only] A warning code, if applicable. 
For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.", +"enum": [ +"CLEANUP_FAILED", +"DEPRECATED_RESOURCE_USED", +"DEPRECATED_TYPE_USED", +"DISK_SIZE_LARGER_THAN_IMAGE_SIZE", +"EXPERIMENTAL_TYPE_USED", +"EXTERNAL_API_WARNING", +"FIELD_VALUE_OVERRIDEN", +"INJECTED_KERNELS_DEPRECATED", +"INVALID_HEALTH_CHECK_FOR_DYNAMIC_WIEGHTED_LB", +"LARGE_DEPLOYMENT_WARNING", +"LIST_OVERHEAD_QUOTA_EXCEED", +"MISSING_TYPE_DEPENDENCY", +"NEXT_HOP_ADDRESS_NOT_ASSIGNED", +"NEXT_HOP_CANNOT_IP_FORWARD", +"NEXT_HOP_INSTANCE_HAS_NO_IPV6_INTERFACE", +"NEXT_HOP_INSTANCE_NOT_FOUND", +"NEXT_HOP_INSTANCE_NOT_ON_NETWORK", +"NEXT_HOP_NOT_RUNNING", +"NOT_CRITICAL_ERROR", +"NO_RESULTS_ON_PAGE", +"PARTIAL_SUCCESS", +"REQUIRED_TOS_AGREEMENT", +"RESOURCE_IN_USE_BY_OTHER_RESOURCE_WARNING", +"RESOURCE_NOT_DELETED", +"SCHEMA_VALIDATION_IGNORED", +"SINGLE_INSTANCE_PROPERTY_TEMPLATE", +"UNDECLARED_PROPERTIES", +"UNREACHABLE" +], +"enumDeprecated": [ +false, +false, +false, +false, +false, +false, +true, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false +], +"enumDescriptions": [ +"Warning about failed cleanup of transient changes made by a failed operation.", +"A link to a deprecated resource was created.", +"When deploying and at least one of the resources has a type marked as deprecated", +"The user created a boot disk that is larger than image size.", +"When deploying and at least one of the resources has a type marked as experimental", +"Warning that is present in an external api call", +"Warning that value of a field has been overridden. Deprecated unused field.", +"The operation involved use of an injected kernel, which is deprecated.", +"A WEIGHTED_MAGLEV backend service is associated with a health check that is not of type HTTP/HTTPS/HTTP2.", +"When deploying a deployment with a exceedingly large number of resources", +"Resource can't be retrieved due to list overhead quota exceed which captures the amount of resources filtered out by user-defined list filter.", +"A resource depends on a missing type", +"The route's nextHopIp address is not assigned to an instance on the network.", +"The route's next hop instance cannot ip forward.", +"The route's nextHopInstance URL refers to an instance that does not have an ipv6 interface on the same network as the route.", +"The route's nextHopInstance URL refers to an instance that does not exist.", +"The route's nextHopInstance URL refers to an instance that is not on the same network as the route.", +"The route's next hop instance does not have a status of RUNNING.", +"Error which is not critical. We decided to continue the process despite the mentioned error.", +"No results are present on a particular list page.", +"Success is reported, but some results may be missing due to errors", +"The user attempted to use a resource that requires a TOS they have not accepted.", +"Warning that a resource is in use.", +"One or more of the resources set to auto-delete could not be deleted because they were in use.", +"When a resource schema validation is ignored.", +"Instance template used in instance group manager is valid as such, but its application does not make a lot of sense, because it allows only single instance in instance group.", +"When undeclared properties in the schema are present", +"A given scope cannot be reached." 
+], +"type": "string" +}, +"data": { +"description": "[Output Only] Metadata about this warning in key: value format. For example: \"data\": [ { \"key\": \"scope\", \"value\": \"zones/us-east1-d\" } ", +"items": { +"properties": { +"key": { +"description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).", +"type": "string" +}, +"value": { +"description": "[Output Only] A warning data value corresponding to the key.", +"type": "string" +} +}, +"type": "object" +}, +"type": "array" +}, +"message": { +"description": "[Output Only] A human-readable description of the warning code.", +"type": "string" +} +}, +"type": "object" +} +}, +"type": "object" +}, +"RoutersListRoutePolicies": { +"id": "RoutersListRoutePolicies", +"properties": { +"etag": { +"type": "string" +}, +"id": { +"description": "[Output Only] The unique identifier for the resource. This identifier is defined by the server.", +"type": "string" +}, +"kind": { +"default": "compute#routersListRoutePolicies", +"description": "[Output Only] Type of resource. Always compute#routersListRoutePolicies for lists of route policies.", +"type": "string" +}, +"nextPageToken": { +"description": "[Output Only] This token allows you to get the next page of results for list requests. If the number of results is larger than maxResults, use the nextPageToken as a value for the query parameter pageToken in the next list request. Subsequent list requests will have their own nextPageToken to continue paging through the results.", +"type": "string" +}, +"result": { +"description": "[Output Only] A list of route policies.", +"items": { +"$ref": "RoutePolicy" +}, +"type": "array" +}, +"selfLink": { +"description": "[Output Only] Server-defined URL for this resource.", +"type": "string" +}, +"unreachables": { +"description": "[Output Only] Unreachable resources.", +"items": { +"type": "string" +}, +"type": "array" +}, +"warning": { +"description": "[Output Only] Informational warning message.", +"properties": { +"code": { +"description": "[Output Only] A warning code, if applicable. 
For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.", +"enum": [ +"CLEANUP_FAILED", +"DEPRECATED_RESOURCE_USED", +"DEPRECATED_TYPE_USED", +"DISK_SIZE_LARGER_THAN_IMAGE_SIZE", +"EXPERIMENTAL_TYPE_USED", +"EXTERNAL_API_WARNING", +"FIELD_VALUE_OVERRIDEN", +"INJECTED_KERNELS_DEPRECATED", +"INVALID_HEALTH_CHECK_FOR_DYNAMIC_WIEGHTED_LB", +"LARGE_DEPLOYMENT_WARNING", +"LIST_OVERHEAD_QUOTA_EXCEED", +"MISSING_TYPE_DEPENDENCY", +"NEXT_HOP_ADDRESS_NOT_ASSIGNED", +"NEXT_HOP_CANNOT_IP_FORWARD", +"NEXT_HOP_INSTANCE_HAS_NO_IPV6_INTERFACE", +"NEXT_HOP_INSTANCE_NOT_FOUND", +"NEXT_HOP_INSTANCE_NOT_ON_NETWORK", +"NEXT_HOP_NOT_RUNNING", +"NOT_CRITICAL_ERROR", +"NO_RESULTS_ON_PAGE", +"PARTIAL_SUCCESS", +"REQUIRED_TOS_AGREEMENT", +"RESOURCE_IN_USE_BY_OTHER_RESOURCE_WARNING", +"RESOURCE_NOT_DELETED", +"SCHEMA_VALIDATION_IGNORED", +"SINGLE_INSTANCE_PROPERTY_TEMPLATE", +"UNDECLARED_PROPERTIES", +"UNREACHABLE" +], +"enumDeprecated": [ +false, +false, +false, +false, +false, +false, +true, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false, +false +], +"enumDescriptions": [ +"Warning about failed cleanup of transient changes made by a failed operation.", +"A link to a deprecated resource was created.", +"When deploying and at least one of the resources has a type marked as deprecated", +"The user created a boot disk that is larger than image size.", +"When deploying and at least one of the resources has a type marked as experimental", +"Warning that is present in an external api call", +"Warning that value of a field has been overridden. Deprecated unused field.", +"The operation involved use of an injected kernel, which is deprecated.", +"A WEIGHTED_MAGLEV backend service is associated with a health check that is not of type HTTP/HTTPS/HTTP2.", +"When deploying a deployment with a exceedingly large number of resources", +"Resource can't be retrieved due to list overhead quota exceed which captures the amount of resources filtered out by user-defined list filter.", +"A resource depends on a missing type", +"The route's nextHopIp address is not assigned to an instance on the network.", +"The route's next hop instance cannot ip forward.", +"The route's nextHopInstance URL refers to an instance that does not have an ipv6 interface on the same network as the route.", +"The route's nextHopInstance URL refers to an instance that does not exist.", +"The route's nextHopInstance URL refers to an instance that is not on the same network as the route.", +"The route's next hop instance does not have a status of RUNNING.", +"Error which is not critical. We decided to continue the process despite the mentioned error.", +"No results are present on a particular list page.", +"Success is reported, but some results may be missing due to errors", +"The user attempted to use a resource that requires a TOS they have not accepted.", +"Warning that a resource is in use.", +"One or more of the resources set to auto-delete could not be deleted because they were in use.", +"When a resource schema validation is ignored.", +"Instance template used in instance group manager is valid as such, but its application does not make a lot of sense, because it allows only single instance in instance group.", +"When undeclared properties in the schema are present", +"A given scope cannot be reached." 
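For illustration only: a minimal sketch of calling the new `compute.routers.listBgpRoutes` method declared above through the generated Python client, assuming this surface regenerates into the client under the version string used below, that the usual generated `*_next` pager exists, and placeholder project/region/router/peer names. `listRoutePolicies` follows the same list-and-paginate pattern.

from googleapiclient import discovery

# Assumes Application Default Credentials; version string and resource names
# are placeholders, and the generated *_next pager is assumed to exist.
compute = discovery.build("compute", "beta")

request = compute.routers().listBgpRoutes(
    project="my-project",
    region="us-central1",
    router="my-router",
    peer="my-bgp-peer",        # required query parameter per the declaration above
    routeType="LEARNED",       # or "ADVERTISED"
    policyApplied=True,        # post-policy routes (the default)
)
while request is not None:
    response = request.execute()
    for route in response.get("result", []):
        nlri = route.get("destination", {})
        print(nlri.get("prefix"), route.get("origin"), route.get("med"))
    request = compute.routers().listBgpRoutes_next(request, response)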
+], +"type": "string" +}, +"data": { +"description": "[Output Only] Metadata about this warning in key: value format. For example: \"data\": [ { \"key\": \"scope\", \"value\": \"zones/us-east1-d\" } ", +"items": { +"properties": { +"key": { +"description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding).", +"type": "string" +}, +"value": { +"description": "[Output Only] A warning data value corresponding to the key.", +"type": "string" +} +}, +"type": "object" +}, +"type": "array" +}, +"message": { +"description": "[Output Only] A human-readable description of the warning code.", +"type": "string" +} +}, +"type": "object" +} +}, +"type": "object" +}, "RoutersPreviewResponse": { "id": "RoutersPreviewResponse", "properties": { @@ -82336,6 +83374,19 @@ false }, "type": "object" }, +"StructuredEntries": { +"id": "StructuredEntries", +"properties": { +"entries": { +"additionalProperties": { +"type": "any" +}, +"description": "Map of a partner metadata that belong to the same subdomain. It accepts any value including google.protobuf.Struct.", +"type": "object" +} +}, +"type": "object" +}, "Subnetwork": { "description": "Represents a Subnetwork resource. A subnetwork (also known as a subnet) is a logical partition of a Virtual Private Cloud network with one primary IP range and zero or more secondary IP ranges. For more information, read Virtual Private Cloud (VPC) Network.", "id": "Subnetwork", diff --git a/googleapiclient/discovery_cache/documents/compute.v1.json b/googleapiclient/discovery_cache/documents/compute.v1.json index 2a1134538d..348e6166b0 100644 --- a/googleapiclient/discovery_cache/documents/compute.v1.json +++ b/googleapiclient/discovery_cache/documents/compute.v1.json @@ -9304,6 +9304,92 @@ } } }, +"instanceSettings": { +"methods": { +"get": { +"description": "Get Instance settings.", +"flatPath": "projects/{project}/zones/{zone}/instanceSettings", +"httpMethod": "GET", +"id": "compute.instanceSettings.get", +"parameterOrder": [ +"project", +"zone" +], +"parameters": { +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"zone": { +"description": "Name of the zone for this request.", +"location": "path", +"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/zones/{zone}/instanceSettings", +"response": { +"$ref": "InstanceSettings" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute", +"https://www.googleapis.com/auth/compute.readonly" +] +}, +"patch": { +"description": "Patch Instance settings", +"flatPath": "projects/{project}/zones/{zone}/instanceSettings", +"httpMethod": "PATCH", +"id": "compute.instanceSettings.patch", +"parameterOrder": [ +"project", +"zone" +], +"parameters": { +"project": { +"description": "Project ID for this request.", +"location": "path", +"pattern": 
"(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z0-9](?:[-a-z0-9]{0,61}[a-z0-9])?))", +"required": true, +"type": "string" +}, +"requestId": { +"description": "An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).", +"location": "query", +"type": "string" +}, +"updateMask": { +"description": "update_mask indicates fields to be updated as part of this request.", +"format": "google-fieldmask", +"location": "query", +"type": "string" +}, +"zone": { +"description": "The zone scoping this request. It should conform to RFC1035.", +"location": "path", +"required": true, +"type": "string" +} +}, +"path": "projects/{project}/zones/{zone}/instanceSettings", +"request": { +"$ref": "InstanceSettings" +}, +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform", +"https://www.googleapis.com/auth/compute" +] +} +} +}, "instanceTemplates": { "methods": { "aggregatedList": { @@ -37285,7 +37371,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://compute.googleapis.com/", "schemas": { "AWSV4Signature": { @@ -52326,6 +52412,49 @@ false }, "type": "object" }, +"InstanceSettings": { +"description": "Represents a Instance Settings resource. You can use instance settings to configure default settings for Compute Engine VM instances. For example, you can use it to configure default machine type of Compute Engine VM instances.", +"id": "InstanceSettings", +"properties": { +"fingerprint": { +"description": "Specifies a fingerprint for instance settings, which is essentially a hash of the instance settings resource's contents and used for optimistic locking. The fingerprint is initially generated by Compute Engine and changes after every request to modify or update the instance settings resource. You must always provide an up-to-date fingerprint hash in order to update or change the resource, otherwise the request will fail with error 412 conditionNotMet. To see the latest fingerprint, make a get() request to retrieve the resource.", +"format": "byte", +"type": "string" +}, +"kind": { +"default": "compute#instanceSettings", +"description": "[Output Only] Type of the resource. Always compute#instance_settings for instance settings.", +"type": "string" +}, +"metadata": { +"$ref": "InstanceSettingsMetadata", +"description": "The metadata key/value pairs assigned to all the instances in the corresponding scope." +}, +"zone": { +"description": "[Output Only] URL of the zone where the resource resides You must specify this field as part of the HTTP request URL. It is not settable as a field in the request body.", +"type": "string" +} +}, +"type": "object" +}, +"InstanceSettingsMetadata": { +"id": "InstanceSettingsMetadata", +"properties": { +"items": { +"additionalProperties": { +"type": "string" +}, +"description": "A metadata key/value items map. 
The total size of all keys and values must be less than 512KB.", +"type": "object" +}, +"kind": { +"default": "compute#metadata", +"description": "[Output Only] Type of the resource. Always compute#metadata for metadata.", +"type": "string" +} +}, +"type": "object" +}, "InstanceTemplate": { "description": "Represents an Instance Template resource. Google Compute Engine has two Instance Template resources: * [Global](/compute/docs/reference/rest/v1/instanceTemplates) * [Regional](/compute/docs/reference/rest/v1/regionInstanceTemplates) You can reuse a global instance template in different regions whereas you can use a regional instance template in a specified region only. If you want to reduce cross-region dependency or achieve data residency, use a regional instance template. To create VMs, managed instance groups, and reservations, you can use either global or regional instance templates. For more information, read Instance Templates.", "id": "InstanceTemplate", @@ -52935,7 +53064,7 @@ false "type": "string" }, "type": { -"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL.", +"description": "[Output Only] The type of the firewall policy. Can be one of HIERARCHY, NETWORK, NETWORK_REGIONAL, SYSTEM_GLOBAL, SYSTEM_REGIONAL.", "enum": [ "HIERARCHY", "NETWORK", diff --git a/googleapiclient/discovery_cache/documents/connectors.v1.json b/googleapiclient/discovery_cache/documents/connectors.v1.json index d2a6d20ba4..4815145051 100644 --- a/googleapiclient/discovery_cache/documents/connectors.v1.json +++ b/googleapiclient/discovery_cache/documents/connectors.v1.json @@ -1143,6 +1143,37 @@ "https://www.googleapis.com/auth/cloud-platform" ] } +}, +"resources": { +"customConnectorVersions": { +"methods": { +"delete": { +"description": "Deletes a single CustomConnectorVersion.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/customConnectors/{customConnectorsId}/customConnectorVersions/{customConnectorVersionsId}", +"httpMethod": "DELETE", +"id": "connectors.projects.locations.customConnectors.customConnectorVersions.delete", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. Resource name of the form: `projects/{project}/locations/{location}/customConnectors/{custom_connector}/customConnectorVersions/{custom_connector_version}`", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/customConnectors/[^/]+/customConnectorVersions/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}", +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +} +} +} } }, "endpointAttachments": { @@ -2327,7 +2358,7 @@ } } }, -"revision": "20240305", +"revision": "20240319", "rootUrl": "https://connectors.googleapis.com/", "schemas": { "AuditConfig": { @@ -3117,6 +3148,11 @@ "readOnly": true, "type": "array" }, +"authOverrideEnabled": { +"description": "Output only. Flag to mark the dynamic auth override.", +"readOnly": true, +"type": "boolean" +}, "configVariableTemplates": { "description": "Output only. List of config variables needed to create a connection.", "items": { @@ -3209,6 +3245,10 @@ "readOnly": true, "type": "array" }, +"schemaRefreshConfig": { +"$ref": "SchemaRefreshConfig", +"description": "Connection Schema Refresh Config" +}, "sslConfigTemplate": { "$ref": "SslConfigTemplate", "description": "Output only. 
Ssl configuration supported by the Connector.", @@ -5501,7 +5541,7 @@ false "type": "object" }, "MaintenancePolicy": { -"description": "LINT.IfChange Defines policies to service maintenance events.", +"description": "Defines policies to service maintenance events.", "id": "MaintenancePolicy", "properties": { "createTime": { @@ -6532,6 +6572,21 @@ false }, "type": "object" }, +"SchemaRefreshConfig": { +"description": "Config for connection schema refresh", +"id": "SchemaRefreshConfig", +"properties": { +"useActionDisplayNames": { +"description": "Whether to use displayName for actions in UI.", +"type": "boolean" +}, +"useSynchronousSchemaRefresh": { +"description": "Whether to use synchronous schema refresh.", +"type": "boolean" +} +}, +"type": "object" +}, "Secret": { "description": "Secret provides a reference to entries in Secret Manager.", "id": "Secret", diff --git a/googleapiclient/discovery_cache/documents/connectors.v2.json b/googleapiclient/discovery_cache/documents/connectors.v2.json index 4f94a1fdf8..c99f562718 100644 --- a/googleapiclient/discovery_cache/documents/connectors.v2.json +++ b/googleapiclient/discovery_cache/documents/connectors.v2.json @@ -660,7 +660,7 @@ } } }, -"revision": "20240305", +"revision": "20240319", "rootUrl": "https://connectors.googleapis.com/", "schemas": { "AccessCredentials": { @@ -1707,7 +1707,7 @@ false "type": "object" }, "MaintenancePolicy": { -"description": "LINT.IfChange Defines policies to service maintenance events.", +"description": "Defines policies to service maintenance events.", "id": "MaintenancePolicy", "properties": { "createTime": { diff --git a/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json b/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json index 32b7324d6b..072f60abcc 100644 --- a/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/contactcenteraiplatform.v1alpha1.json @@ -512,7 +512,7 @@ } } }, -"revision": "20240306", +"revision": "20240320", "rootUrl": "https://contactcenteraiplatform.googleapis.com/", "schemas": { "AdminUser": { diff --git a/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json b/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json index e9cf929902..4abf241db6 100644 --- a/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json +++ b/googleapiclient/discovery_cache/documents/contactcenterinsights.v1.json @@ -1473,7 +1473,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://contactcenterinsights.googleapis.com/", "schemas": { "GoogleCloudContactcenterinsightsV1Analysis": { diff --git a/googleapiclient/discovery_cache/documents/container.v1.json b/googleapiclient/discovery_cache/documents/container.v1.json index ead1b7a324..5222a1b7cb 100644 --- a/googleapiclient/discovery_cache/documents/container.v1.json +++ b/googleapiclient/discovery_cache/documents/container.v1.json @@ -2540,7 +2540,7 @@ } } }, -"revision": "20240214", +"revision": "20240313", "rootUrl": "https://container.googleapis.com/", "schemas": { "AcceleratorConfig": { @@ -3534,10 +3534,18 @@ "$ref": "DNSConfig", "description": "DNSConfig contains clusterDNS config for this cluster." 
}, +"desiredEnableCiliumClusterwideNetworkPolicy": { +"description": "Enable/Disable Cilium Clusterwide Network Policy for the cluster.", +"type": "boolean" +}, "desiredEnableFqdnNetworkPolicy": { "description": "Enable/Disable FQDN Network Policy for the cluster.", "type": "boolean" }, +"desiredEnableMultiNetworking": { +"description": "Enable/Disable Multi-Networking for the cluster", +"type": "boolean" +}, "desiredEnablePrivateEndpoint": { "description": "Enable/Disable private endpoint for the cluster's master.", "type": "boolean" @@ -3927,10 +3935,49 @@ "description": "Configuration of etcd encryption.", "id": "DatabaseEncryption", "properties": { +"currentState": { +"description": "Output only. The current state of etcd encryption.", +"enum": [ +"CURRENT_STATE_UNSPECIFIED", +"CURRENT_STATE_ENCRYPTED", +"CURRENT_STATE_DECRYPTED", +"CURRENT_STATE_ENCRYPTION_PENDING", +"CURRENT_STATE_ENCRYPTION_ERROR", +"CURRENT_STATE_DECRYPTION_PENDING", +"CURRENT_STATE_DECRYPTION_ERROR" +], +"enumDescriptions": [ +"Should never be set", +"Secrets in etcd are encrypted.", +"Secrets in etcd are stored in plain text (at etcd level) - this is unrelated to Compute Engine level full disk encryption.", +"Encryption (or re-encryption with a different CloudKMS key) of Secrets is in progress.", +"Encryption (or re-encryption with a different CloudKMS key) of Secrets in etcd encountered an error.", +"De-crypting Secrets to plain text in etcd is in progress.", +"De-crypting Secrets to plain text in etcd encountered an error." +], +"readOnly": true, +"type": "string" +}, +"decryptionKeys": { +"description": "Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. Each item is a CloudKMS key resource.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, "keyName": { "description": "Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key", "type": "string" }, +"lastOperationErrors": { +"description": "Output only. Records errors seen during DatabaseEncryption update operations.", +"items": { +"$ref": "OperationError" +}, +"readOnly": true, +"type": "array" +}, "state": { "description": "The desired state of etcd encryption.", "enum": [ @@ -4965,6 +5012,10 @@ "$ref": "DNSConfig", "description": "DNSConfig contains clusterDNS config for this cluster." }, +"enableCiliumClusterwideNetworkPolicy": { +"description": "Whether CiliumClusterwideNetworkPolicy is enabled on this cluster.", +"type": "boolean" +}, "enableFqdnNetworkPolicy": { "description": "Whether FQDN Network Policy is enabled on this cluster.", "type": "boolean" @@ -5266,6 +5317,10 @@ "$ref": "SandboxConfig", "description": "Sandbox configuration for this node." }, +"secondaryBootDiskUpdateStrategy": { +"$ref": "SecondaryBootDiskUpdateStrategy", +"description": "Secondary boot disk update strategy." 
+}, "secondaryBootDisks": { "description": "List of secondary boot disks attached to the nodes.", "items": { @@ -5861,6 +5916,26 @@ false }, "type": "object" }, +"OperationError": { +"description": "OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration.", +"id": "OperationError", +"properties": { +"errorMessage": { +"description": "Description of the error seen during the operation.", +"type": "string" +}, +"keyName": { +"description": "CloudKMS key resource that had the error.", +"type": "string" +}, +"timestamp": { +"description": "Time when the CloudKMS error was seen.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, "OperationProgress": { "description": "Information about operation (or operation stage) progress.", "id": "OperationProgress", @@ -6309,6 +6384,12 @@ false }, "type": "object" }, +"SecondaryBootDiskUpdateStrategy": { +"description": "SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks.", +"id": "SecondaryBootDiskUpdateStrategy", +"properties": {}, +"type": "object" +}, "SecurityBulletinEvent": { "description": "SecurityBulletinEvent is a notification sent to customers when a security bulletin has been posted that they are vulnerable to.", "id": "SecurityBulletinEvent", diff --git a/googleapiclient/discovery_cache/documents/container.v1beta1.json b/googleapiclient/discovery_cache/documents/container.v1beta1.json index df9795ae6f..df45985d4f 100644 --- a/googleapiclient/discovery_cache/documents/container.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/container.v1beta1.json @@ -2565,7 +2565,7 @@ } } }, -"revision": "20240213", +"revision": "20240313", "rootUrl": "https://container.googleapis.com/", "schemas": { "AcceleratorConfig": { @@ -3717,6 +3717,10 @@ "description": "Enable/Disable FQDN Network Policy for the cluster.", "type": "boolean" }, +"desiredEnableMultiNetworking": { +"description": "Enable/Disable Multi-Networking for the cluster", +"type": "boolean" +}, "desiredEnablePrivateEndpoint": { "description": "Enable/Disable private endpoint for the cluster's master.", "type": "boolean" @@ -4139,10 +4143,49 @@ "description": "Configuration of etcd encryption.", "id": "DatabaseEncryption", "properties": { +"currentState": { +"description": "Output only. The current state of etcd encryption.", +"enum": [ +"CURRENT_STATE_UNSPECIFIED", +"CURRENT_STATE_ENCRYPTED", +"CURRENT_STATE_DECRYPTED", +"CURRENT_STATE_ENCRYPTION_PENDING", +"CURRENT_STATE_ENCRYPTION_ERROR", +"CURRENT_STATE_DECRYPTION_PENDING", +"CURRENT_STATE_DECRYPTION_ERROR" +], +"enumDescriptions": [ +"Should never be set", +"Secrets in etcd are encrypted.", +"Secrets in etcd are stored in plain text (at etcd level) - this is unrelated to Compute Engine level full disk encryption.", +"Encryption (or re-encryption with a different CloudKMS key) of Secrets is in progress.", +"Encryption (or re-encryption with a different CloudKMS key) of Secrets in etcd encountered an error.", +"De-crypting Secrets to plain text in etcd is in progress.", +"De-crypting Secrets to plain text in etcd encountered an error." +], +"readOnly": true, +"type": "string" +}, +"decryptionKeys": { +"description": "Output only. Keys in use by the cluster for decrypting existing objects, in addition to the key in `key_name`. 
Each item is a CloudKMS key resource.", +"items": { +"type": "string" +}, +"readOnly": true, +"type": "array" +}, "keyName": { "description": "Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key", "type": "string" }, +"lastOperationErrors": { +"description": "Output only. Records errors seen during DatabaseEncryption update operations.", +"items": { +"$ref": "OperationError" +}, +"readOnly": true, +"type": "array" +}, "state": { "description": "The desired state of etcd encryption.", "enum": [ @@ -5670,6 +5713,10 @@ false "$ref": "SandboxConfig", "description": "Sandbox configuration for this node." }, +"secondaryBootDiskUpdateStrategy": { +"$ref": "SecondaryBootDiskUpdateStrategy", +"description": "Secondary boot disk update strategy." +}, "secondaryBootDisks": { "description": "List of secondary boot disks attached to the nodes.", "items": { @@ -6269,6 +6316,26 @@ false }, "type": "object" }, +"OperationError": { +"description": "OperationError records errors seen from CloudKMS keys encountered during updates to DatabaseEncryption configuration.", +"id": "OperationError", +"properties": { +"errorMessage": { +"description": "Description of the error seen during the operation.", +"type": "string" +}, +"keyName": { +"description": "CloudKMS key resource that had the error.", +"type": "string" +}, +"timestamp": { +"description": "Time when the CloudKMS error was seen.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, "OperationProgress": { "description": "Information about operation (or operation stage) progress.", "id": "OperationProgress", @@ -6799,6 +6866,12 @@ false }, "type": "object" }, +"SecondaryBootDiskUpdateStrategy": { +"description": "SecondaryBootDiskUpdateStrategy is a placeholder which will be extended in the future to define different options for updating secondary boot disks.", +"id": "SecondaryBootDiskUpdateStrategy", +"properties": {}, +"type": "object" +}, "SecretManagerConfig": { "description": "SecretManagerConfig is config for secret manager enablement.", "id": "SecretManagerConfig", diff --git a/googleapiclient/discovery_cache/documents/containeranalysis.v1.json b/googleapiclient/discovery_cache/documents/containeranalysis.v1.json index 52aa8a096e..b5dcbf33b0 100644 --- a/googleapiclient/discovery_cache/documents/containeranalysis.v1.json +++ b/googleapiclient/discovery_cache/documents/containeranalysis.v1.json @@ -1065,7 +1065,7 @@ } } }, -"revision": "20240308", +"revision": "20240315", "rootUrl": "https://containeranalysis.googleapis.com/", "schemas": { "AliasContext": { diff --git a/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json b/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json index e984708318..97f9fd5ac4 100644 --- a/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/containeranalysis.v1alpha1.json @@ -1233,7 +1233,7 @@ } } }, -"revision": "20240308", +"revision": "20240315", "rootUrl": "https://containeranalysis.googleapis.com/", "schemas": { "AnalysisCompleted": { diff --git a/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json b/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json index 42ac4d1c8b..2037afabad 100644 --- a/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/containeranalysis.v1beta1.json @@ 
-1121,7 +1121,7 @@ } } }, -"revision": "20240308", +"revision": "20240315", "rootUrl": "https://containeranalysis.googleapis.com/", "schemas": { "AliasContext": { diff --git a/googleapiclient/discovery_cache/documents/content.v2.1.json b/googleapiclient/discovery_cache/documents/content.v2.1.json index 8c4e251b70..f81e979fb8 100644 --- a/googleapiclient/discovery_cache/documents/content.v2.1.json +++ b/googleapiclient/discovery_cache/documents/content.v2.1.json @@ -6186,7 +6186,7 @@ } } }, -"revision": "20240317", +"revision": "20240323", "rootUrl": "https://shoppingcontent.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/customsearch.v1.json b/googleapiclient/discovery_cache/documents/customsearch.v1.json index e418f3c903..1cbcc27693 100644 --- a/googleapiclient/discovery_cache/documents/customsearch.v1.json +++ b/googleapiclient/discovery_cache/documents/customsearch.v1.json @@ -372,6 +372,12 @@ false "location": "query", "type": "string" }, +"snippetLength": { +"description": "Optional. Maximum length of snippet text, in characters, to be returned with results. * Valid values are integers between 1 and 160, inclusive.", +"format": "int32", +"location": "query", +"type": "integer" +}, "sort": { "description": "The sort expression to apply to the results. The sort parameter specifies that the results be sorted according to the specified expression i.e. sort by date. [Example: sort=date](https://developers.google.com/custom-search/docs/structured_search#sort-by-attribute).", "location": "query", @@ -668,6 +674,12 @@ false "location": "query", "type": "string" }, +"snippetLength": { +"description": "Optional. Maximum length of snippet text, in characters, to be returned with results. * Valid values are integers between 1 and 160, inclusive.", +"format": "int32", +"location": "query", +"type": "integer" +}, "sort": { "description": "The sort expression to apply to the results. The sort parameter specifies that the results be sorted according to the specified expression i.e. sort by date. [Example: sort=date](https://developers.google.com/custom-search/docs/structured_search#sort-by-attribute).", "location": "query", @@ -690,7 +702,7 @@ false } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://customsearch.googleapis.com/", "schemas": { "Promotion": { diff --git a/googleapiclient/discovery_cache/documents/datacatalog.v1.json b/googleapiclient/discovery_cache/documents/datacatalog.v1.json index 07ac770763..8fdff3d552 100644 --- a/googleapiclient/discovery_cache/documents/datacatalog.v1.json +++ b/googleapiclient/discovery_cache/documents/datacatalog.v1.json @@ -2144,7 +2144,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://datacatalog.googleapis.com/", "schemas": { "Binding": { @@ -4205,6 +4205,18 @@ "description": "A tag template defines a tag that can have one or more typed fields. The template is used to create tags that are attached to Google Cloud resources. [Tag template roles] (https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. For example, see the [TagTemplate User] (https://cloud.google.com/data-catalog/docs/how-to/template-user) role that includes a permission to use the tag template to tag resources.", "id": "GoogleCloudDatacatalogV1TagTemplate", "properties": { +"dataplexTransferStatus": { +"description": "Optional. 
Transfer status of the TagTemplate", +"enum": [ +"DATAPLEX_TRANSFER_STATUS_UNSPECIFIED", +"MIGRATED" +], +"enumDescriptions": [ +"Default value. TagTemplate and its tags are only visible and editable in DataCatalog.", +"TagTemplate and its tags are auto-copied to Dataplex service. Visible in both services. Editable in DataCatalog, read-only in Dataplex." +], +"type": "string" +}, "displayName": { "description": "Display name for this template. Defaults to an empty string. The name must contain only Unicode letters, numbers (0-9), underscores (_), dashes (-), spaces ( ), and can't start or end with spaces. The maximum length is 200 characters.", "type": "string" diff --git a/googleapiclient/discovery_cache/documents/datacatalog.v1beta1.json b/googleapiclient/discovery_cache/documents/datacatalog.v1beta1.json index 755eafb70c..52b6bd0955 100644 --- a/googleapiclient/discovery_cache/documents/datacatalog.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/datacatalog.v1beta1.json @@ -1813,7 +1813,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://datacatalog.googleapis.com/", "schemas": { "Binding": { @@ -4268,6 +4268,19 @@ "description": "A tag template defines a tag, which can have one or more typed fields. The template is used to create and attach the tag to Google Cloud resources. [Tag template roles](https://cloud.google.com/iam/docs/understanding-roles#data-catalog-roles) provide permissions to create, edit, and use the template. See, for example, the [TagTemplate User](https://cloud.google.com/data-catalog/docs/how-to/template-user) role, which includes permission to use the tag template to tag resources.", "id": "GoogleCloudDatacatalogV1beta1TagTemplate", "properties": { +"dataplexTransferStatus": { +"description": "Output only. Transfer status of the TagTemplate", +"enum": [ +"DATAPLEX_TRANSFER_STATUS_UNSPECIFIED", +"MIGRATED" +], +"enumDescriptions": [ +"Default value. TagTemplate and its tags are only visible and editable in DataCatalog.", +"TagTemplate and its tags are auto-copied to Dataplex service. Visible in both services. Editable in DataCatalog, read-only in Dataplex." +], +"readOnly": true, +"type": "string" +}, "displayName": { "description": "The display name for this template. Defaults to an empty string.", "type": "string" diff --git a/googleapiclient/discovery_cache/documents/dataflow.v1b3.json b/googleapiclient/discovery_cache/documents/dataflow.v1b3.json index f9c4ee60bf..c17304d3f9 100644 --- a/googleapiclient/discovery_cache/documents/dataflow.v1b3.json +++ b/googleapiclient/discovery_cache/documents/dataflow.v1b3.json @@ -31,6 +31,11 @@ "description": "Regional Endpoint", "endpointUrl": "https://dataflow.europe-west3.rep.googleapis.com/", "location": "europe-west3" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://dataflow.europe-west9.rep.googleapis.com/", +"location": "europe-west9" } ], "fullyEncodeReservedExpansion": true, @@ -1918,7 +1923,7 @@ ], "parameters": { "dynamicTemplate.gcsPath": { -"description": "Path to dynamic template spec file on Cloud Storage. The file must be a Json serialized DynamicTemplateFieSpec object.", +"description": "Path to the dynamic template specification file on Cloud Storage. The file must be a JSON serialized `DynamicTemplateFileSpec` object.", "location": "query", "type": "string" }, @@ -1928,7 +1933,7 @@ "type": "string" }, "gcsPath": { -"description": "A Cloud Storage path to the template from which to create the job. 
Must be valid Cloud Storage URL, beginning with 'gs://'.", +"description": "A Cloud Storage path to the template to use to create the job. Must be valid Cloud Storage URL, beginning with `gs://`.", "location": "query", "type": "string" }, @@ -2133,7 +2138,7 @@ ], "parameters": { "dynamicTemplate.gcsPath": { -"description": "Path to dynamic template spec file on Cloud Storage. The file must be a Json serialized DynamicTemplateFieSpec object.", +"description": "Path to the dynamic template specification file on Cloud Storage. The file must be a JSON serialized `DynamicTemplateFileSpec` object.", "location": "query", "type": "string" }, @@ -2143,7 +2148,7 @@ "type": "string" }, "gcsPath": { -"description": "A Cloud Storage path to the template from which to create the job. Must be valid Cloud Storage URL, beginning with 'gs://'.", +"description": "A Cloud Storage path to the template to use to create the job. Must be valid Cloud Storage URL, beginning with `gs://`.", "location": "query", "type": "string" }, @@ -2182,7 +2187,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://dataflow.googleapis.com/", "schemas": { "ApproximateProgress": { @@ -5413,7 +5418,7 @@ "type": "object" }, "RuntimeEnvironment": { -"description": "The environment values to set at runtime.", +"description": "The environment values to set at runtime. LINT.IfChange", "id": "RuntimeEnvironment", "properties": { "additionalExperiments": { diff --git a/googleapiclient/discovery_cache/documents/dataform.v1beta1.json b/googleapiclient/discovery_cache/documents/dataform.v1beta1.json index 20f790fad1..a50c96c42f 100644 --- a/googleapiclient/discovery_cache/documents/dataform.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/dataform.v1beta1.json @@ -2103,7 +2103,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://dataform.googleapis.com/", "schemas": { "Assertion": { diff --git a/googleapiclient/discovery_cache/documents/datalineage.v1.json b/googleapiclient/discovery_cache/documents/datalineage.v1.json index 5c0ccb045a..3e4b6659c4 100644 --- a/googleapiclient/discovery_cache/documents/datalineage.v1.json +++ b/googleapiclient/discovery_cache/documents/datalineage.v1.json @@ -798,7 +798,7 @@ } } }, -"revision": "20240301", +"revision": "20240313", "rootUrl": "https://datalineage.googleapis.com/", "schemas": { "GoogleCloudDatacatalogLineageV1BatchSearchLinkProcessesRequest": { diff --git a/googleapiclient/discovery_cache/documents/datamigration.v1.json b/googleapiclient/discovery_cache/documents/datamigration.v1.json index 3a8d9e4c26..a8f52520a8 100644 --- a/googleapiclient/discovery_cache/documents/datamigration.v1.json +++ b/googleapiclient/discovery_cache/documents/datamigration.v1.json @@ -2125,7 +2125,7 @@ } } }, -"revision": "20240312", +"revision": "20240315", "rootUrl": "https://datamigration.googleapis.com/", "schemas": { "AlloyDbConnectionProfile": { @@ -5692,7 +5692,7 @@ "id": "SqlServerHomogeneousMigrationJobConfig", "properties": { "backupFilePattern": { -"description": "Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. 
The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_backup_20230802_155400.trn, use pattern: (?.*)_backup_(?\\d{4})(?\\d{2})(?\\d{2})_(?\\d{2})(?\\d{2})(?\\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB_backup_1691448254.trn, use pattern: (?.*)_backup_(?.*).trn", +"description": "Required. Pattern that describes the default backup naming strategy. The specified pattern should ensure lexicographical order of backups. The pattern must define one of the following capture group sets: Capture group set #1 yy/yyyy - year, 2 or 4 digits mm - month number, 1-12 dd - day of month, 1-31 hh - hour of day, 00-23 mi - minutes, 00-59 ss - seconds, 00-59 Example: For backup file TestDB_20230802_155400.trn, use pattern: (?.*)_backup_(?\\d{4})(?\\d{2})(?\\d{2})_(?\\d{2})(?\\d{2})(?\\d{2}).trn Capture group set #2 timestamp - unix timestamp Example: For backup file TestDB.1691448254.trn, use pattern: (?.*)\\.(?\\d*).trn or (?.*)\\.(?\\d*).trn", "type": "string" }, "databaseBackups": { diff --git a/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json b/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json index f380b7f0a2..2d108dd5c0 100644 --- a/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/datamigration.v1beta1.json @@ -1049,7 +1049,7 @@ } } }, -"revision": "20240312", +"revision": "20240315", "rootUrl": "https://datamigration.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/datapipelines.v1.json b/googleapiclient/discovery_cache/documents/datapipelines.v1.json index 5d1f09678b..dc676a35b6 100644 --- a/googleapiclient/discovery_cache/documents/datapipelines.v1.json +++ b/googleapiclient/discovery_cache/documents/datapipelines.v1.json @@ -369,7 +369,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://datapipelines.googleapis.com/", "schemas": { "GoogleCloudDatapipelinesV1DataflowJobDetails": { diff --git a/googleapiclient/discovery_cache/documents/dataplex.v1.json b/googleapiclient/discovery_cache/documents/dataplex.v1.json index daaa1ce2ee..07dab71de5 100644 --- a/googleapiclient/discovery_cache/documents/dataplex.v1.json +++ b/googleapiclient/discovery_cache/documents/dataplex.v1.json @@ -5271,7 +5271,7 @@ } } }, -"revision": "20240311", +"revision": "20240317", "rootUrl": "https://dataplex.googleapis.com/", "schemas": { "Empty": { @@ -6381,44 +6381,6 @@ }, "type": "object" }, -"GoogleCloudDataplexV1DataDocumentationResult": { -"description": "The output of a DataDocumentation scan.", -"id": "GoogleCloudDataplexV1DataDocumentationResult", -"properties": { -"queries": { -"description": "Output only. The list of generated queries.", -"items": { -"$ref": "GoogleCloudDataplexV1DataDocumentationResultQuery" -}, -"readOnly": true, -"type": "array" -} -}, -"type": "object" -}, -"GoogleCloudDataplexV1DataDocumentationResultQuery": { -"description": "A query in data documentation", -"id": "GoogleCloudDataplexV1DataDocumentationResultQuery", -"properties": { -"description": { -"description": "Output only. The description for the query.", -"readOnly": true, -"type": "string" -}, -"sql": { -"description": "Output only. 
The SQL query string which can be executed.", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, -"GoogleCloudDataplexV1DataDocumentationSpec": { -"description": "DataDocumentation scan related spec.", -"id": "GoogleCloudDataplexV1DataDocumentationSpec", -"properties": {}, -"type": "object" -}, "GoogleCloudDataplexV1DataProfileResult": { "description": "DataProfileResult defines the output of DataProfileScan. Each field of the table will have field type specific profile result.", "id": "GoogleCloudDataplexV1DataProfileResult", @@ -7333,15 +7295,6 @@ "$ref": "GoogleCloudDataplexV1DataSource", "description": "Required. The data source for DataScan." }, -"dataDocumentationResult": { -"$ref": "GoogleCloudDataplexV1DataDocumentationResult", -"description": "Output only. The result of the data documentation scan.", -"readOnly": true -}, -"dataDocumentationSpec": { -"$ref": "GoogleCloudDataplexV1DataDocumentationSpec", -"description": "DataDocumentationScan related setting." -}, "dataProfileResult": { "$ref": "GoogleCloudDataplexV1DataProfileResult", "description": "Output only. The result of the data profile scan.", @@ -7413,14 +7366,12 @@ "enum": [ "DATA_SCAN_TYPE_UNSPECIFIED", "DATA_QUALITY", -"DATA_PROFILE", -"DATA_DOCUMENTATION" +"DATA_PROFILE" ], "enumDescriptions": [ "The DataScan type is unspecified.", "Data Quality scan.", -"Data Profile scan.", -"Data Documentation scan." +"Data Profile scan." ], "readOnly": true, "type": "string" @@ -7725,16 +7676,6 @@ "description": "A DataScanJob represents an instance of DataScan execution.", "id": "GoogleCloudDataplexV1DataScanJob", "properties": { -"dataDocumentationResult": { -"$ref": "GoogleCloudDataplexV1DataDocumentationResult", -"description": "Output only. The result of the data documentation scan.", -"readOnly": true -}, -"dataDocumentationSpec": { -"$ref": "GoogleCloudDataplexV1DataDocumentationSpec", -"description": "Output only. DataDocumentationScan related setting.", -"readOnly": true -}, "dataProfileResult": { "$ref": "GoogleCloudDataplexV1DataProfileResult", "description": "Output only. The result of the data profile scan.", @@ -7805,14 +7746,12 @@ "enum": [ "DATA_SCAN_TYPE_UNSPECIFIED", "DATA_QUALITY", -"DATA_PROFILE", -"DATA_DOCUMENTATION" +"DATA_PROFILE" ], "enumDescriptions": [ "The DataScan type is unspecified.", "Data Quality scan.", -"Data Profile scan.", -"Data Documentation scan." +"Data Profile scan." ], "readOnly": true, "type": "string" diff --git a/googleapiclient/discovery_cache/documents/dataportability.v1.json b/googleapiclient/discovery_cache/documents/dataportability.v1.json index 4068292ed0..65c19fcfec 100644 --- a/googleapiclient/discovery_cache/documents/dataportability.v1.json +++ b/googleapiclient/discovery_cache/documents/dataportability.v1.json @@ -2,6 +2,9 @@ "auth": { "oauth2": { "scopes": { +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions": { +"description": "Move a copy of the Google Alerts subscriptions you created." +}, "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations": { "description": "Move a copy of messages between you and the businesses you have conversations with across Google services." }, @@ -26,6 +29,18 @@ "https://www.googleapis.com/auth/dataportability.chrome.settings": { "description": "Move a copy of your settings in Chrome." }, +"https://www.googleapis.com/auth/dataportability.discover.follows": { +"description": "Move a copy of searches and sites you follow, saved by Discover." 
+}, +"https://www.googleapis.com/auth/dataportability.discover.likes": { +"description": "Move a copy of links to your liked documents, saved by Discover." +}, +"https://www.googleapis.com/auth/dataportability.discover.not_interested": { +"description": "Move a copy of content you marked as not interested, saved by Discover." +}, +"https://www.googleapis.com/auth/dataportability.maps.aliased_places": { +"description": "Move a copy of the places you labeled on Maps." +}, "https://www.googleapis.com/auth/dataportability.maps.commute_routes": { "description": "Move a copy of your pinned trips on Maps." }, @@ -35,12 +50,21 @@ "https://www.googleapis.com/auth/dataportability.maps.ev_profile": { "description": "Move a copy of your electric vehicle profile on Maps." }, +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions": { +"description": "Move a copy of the corrections you made to places or map information on Maps." +}, "https://www.googleapis.com/auth/dataportability.maps.offering_contributions": { "description": "Move a copy of your updates to places on Maps." }, "https://www.googleapis.com/auth/dataportability.maps.photos_videos": { "description": "Move a copy of the photos and videos you posted on Maps." }, +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback": { +"description": "Move a copy of feedback you gave after completing trips using Maps directions." +}, +"https://www.googleapis.com/auth/dataportability.maps.questions_answers": { +"description": "Move a copy of the questions and answers you posted on Maps." +}, "https://www.googleapis.com/auth/dataportability.maps.reviews": { "description": "Move a copy of your reviews and posts on Maps." }, @@ -50,6 +74,12 @@ "https://www.googleapis.com/auth/dataportability.myactivity.maps": { "description": "Move a copy of your Maps activity." }, +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter": { +"description": "Move a copy of your My Ad Center activity." +}, +"https://www.googleapis.com/auth/dataportability.myactivity.play": { +"description": "Move a copy of your Google Play activity." +}, "https://www.googleapis.com/auth/dataportability.myactivity.search": { "description": "Move a copy of your Google Search activity." }, @@ -59,15 +89,72 @@ "https://www.googleapis.com/auth/dataportability.myactivity.youtube": { "description": "Move a copy of your YouTube activity." }, +"https://www.googleapis.com/auth/dataportability.mymaps.maps": { +"description": "Move a copy of the maps you created in My Maps." +}, +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations": { +"description": "Move a copy of your food purchase and reservation activity." +}, +"https://www.googleapis.com/auth/dataportability.play.devices": { +"description": "Move a copy of information about your devices with Google Play Store installed." +}, +"https://www.googleapis.com/auth/dataportability.play.grouping": { +"description": "Move a copy of your Google Play Store Grouping tags created by app developers." +}, +"https://www.googleapis.com/auth/dataportability.play.installs": { +"description": "Move a copy of your Google Play Store app installations." +}, +"https://www.googleapis.com/auth/dataportability.play.library": { +"description": "Move a copy of your Google Play Store downloads, including books, games, and apps." +}, +"https://www.googleapis.com/auth/dataportability.play.playpoints": { +"description": "Move a copy of information about your Google Play Store Points." 
+}, +"https://www.googleapis.com/auth/dataportability.play.promotions": { +"description": "Move a copy of information about your Google Play Store promotions." +}, +"https://www.googleapis.com/auth/dataportability.play.purchases": { +"description": "Move a copy of your Google Play Store purchases." +}, +"https://www.googleapis.com/auth/dataportability.play.redemptions": { +"description": "Move a copy of your Google Play Store redemption activities." +}, +"https://www.googleapis.com/auth/dataportability.play.subscriptions": { +"description": "Move a copy of your Google Play Store subscriptions." +}, +"https://www.googleapis.com/auth/dataportability.play.usersettings": { +"description": "Move a copy of your Google Play Store user settings and preferences." +}, "https://www.googleapis.com/auth/dataportability.saved.collections": { "description": "Move a copy of your saved links, images, places, and collections from your use of Google services." }, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars": { +"description": "Move a copy of your media reviews on Google Search." +}, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers": { +"description": "Move a copy of your self-reported video streaming provider preferences from Google Search and Google TV." +}, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs": { +"description": "Move a copy of your indicated thumbs up and thumbs down on media in Google Search and Google TV." +}, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched": { +"description": "Move a copy of information about the movies and TV shows you marked as watched on Google Search and Google TV." +}, +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings": { +"description": "Move a copy of your notification settings on the Google Search app." +}, +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions": { +"description": "Move a copy of your notification subscriptions on Google Search app." +}, "https://www.googleapis.com/auth/dataportability.shopping.addresses": { "description": "Move a copy of your shipping information on Shopping." }, "https://www.googleapis.com/auth/dataportability.shopping.reviews": { "description": "Move a copy of reviews you wrote about products or online stores on Google Search." }, +"https://www.googleapis.com/auth/dataportability.streetview.imagery": { +"description": "Move a copy of the images and videos you uploaded to Street View." +}, "https://www.googleapis.com/auth/dataportability.youtube.channel": { "description": "Move a copy of information about your YouTube channel." 
}, @@ -234,6 +321,7 @@ "$ref": "PortabilityArchiveState" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -242,20 +330,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", +"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", +"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", +"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", +"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", 
+"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -297,6 +413,7 @@ "$ref": "RetryPortabilityArchiveResponse" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -305,20 +422,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", +"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", +"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", +"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", +"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", 
+"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", +"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -354,6 +499,7 @@ "$ref": "Empty" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -362,20 +508,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", +"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", +"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", +"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", +"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", 
+"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", +"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -411,6 +585,7 @@ "$ref": "InitiatePortabilityArchiveResponse" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -419,20 +594,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", +"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", +"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", +"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", 
+"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", +"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -452,7 +655,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://dataportability.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/dataportability.v1beta.json b/googleapiclient/discovery_cache/documents/dataportability.v1beta.json index 2ddd693252..5fff441c54 100644 --- a/googleapiclient/discovery_cache/documents/dataportability.v1beta.json +++ b/googleapiclient/discovery_cache/documents/dataportability.v1beta.json @@ -2,6 +2,9 @@ "auth": { "oauth2": { "scopes": { +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions": { +"description": "Move a copy of the Google Alerts subscriptions you created." +}, "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations": { "description": "Move a copy of messages between you and the businesses you have conversations with across Google services." }, @@ -26,6 +29,18 @@ "https://www.googleapis.com/auth/dataportability.chrome.settings": { "description": "Move a copy of your settings in Chrome." }, +"https://www.googleapis.com/auth/dataportability.discover.follows": { +"description": "Move a copy of searches and sites you follow, saved by Discover." +}, +"https://www.googleapis.com/auth/dataportability.discover.likes": { +"description": "Move a copy of links to your liked documents, saved by Discover." +}, +"https://www.googleapis.com/auth/dataportability.discover.not_interested": { +"description": "Move a copy of content you marked as not interested, saved by Discover." +}, +"https://www.googleapis.com/auth/dataportability.maps.aliased_places": { +"description": "Move a copy of the places you labeled on Maps." +}, "https://www.googleapis.com/auth/dataportability.maps.commute_routes": { "description": "Move a copy of your pinned trips on Maps." }, @@ -35,12 +50,21 @@ "https://www.googleapis.com/auth/dataportability.maps.ev_profile": { "description": "Move a copy of your electric vehicle profile on Maps." }, +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions": { +"description": "Move a copy of the corrections you made to places or map information on Maps." 
+}, "https://www.googleapis.com/auth/dataportability.maps.offering_contributions": { "description": "Move a copy of your updates to places on Maps." }, "https://www.googleapis.com/auth/dataportability.maps.photos_videos": { "description": "Move a copy of the photos and videos you posted on Maps." }, +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback": { +"description": "Move a copy of feedback you gave after completing trips using Maps directions." +}, +"https://www.googleapis.com/auth/dataportability.maps.questions_answers": { +"description": "Move a copy of the questions and answers you posted on Maps." +}, "https://www.googleapis.com/auth/dataportability.maps.reviews": { "description": "Move a copy of your reviews and posts on Maps." }, @@ -50,6 +74,12 @@ "https://www.googleapis.com/auth/dataportability.myactivity.maps": { "description": "Move a copy of your Maps activity." }, +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter": { +"description": "Move a copy of your My Ad Center activity." +}, +"https://www.googleapis.com/auth/dataportability.myactivity.play": { +"description": "Move a copy of your Google Play activity." +}, "https://www.googleapis.com/auth/dataportability.myactivity.search": { "description": "Move a copy of your Google Search activity." }, @@ -59,15 +89,72 @@ "https://www.googleapis.com/auth/dataportability.myactivity.youtube": { "description": "Move a copy of your YouTube activity." }, +"https://www.googleapis.com/auth/dataportability.mymaps.maps": { +"description": "Move a copy of the maps you created in My Maps." +}, +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations": { +"description": "Move a copy of your food purchase and reservation activity." +}, +"https://www.googleapis.com/auth/dataportability.play.devices": { +"description": "Move a copy of information about your devices with Google Play Store installed." +}, +"https://www.googleapis.com/auth/dataportability.play.grouping": { +"description": "Move a copy of your Google Play Store Grouping tags created by app developers." +}, +"https://www.googleapis.com/auth/dataportability.play.installs": { +"description": "Move a copy of your Google Play Store app installations." +}, +"https://www.googleapis.com/auth/dataportability.play.library": { +"description": "Move a copy of your Google Play Store downloads, including books, games, and apps." +}, +"https://www.googleapis.com/auth/dataportability.play.playpoints": { +"description": "Move a copy of information about your Google Play Store Points." +}, +"https://www.googleapis.com/auth/dataportability.play.promotions": { +"description": "Move a copy of information about your Google Play Store promotions." +}, +"https://www.googleapis.com/auth/dataportability.play.purchases": { +"description": "Move a copy of your Google Play Store purchases." +}, +"https://www.googleapis.com/auth/dataportability.play.redemptions": { +"description": "Move a copy of your Google Play Store redemption activities." +}, +"https://www.googleapis.com/auth/dataportability.play.subscriptions": { +"description": "Move a copy of your Google Play Store subscriptions." +}, +"https://www.googleapis.com/auth/dataportability.play.usersettings": { +"description": "Move a copy of your Google Play Store user settings and preferences." +}, "https://www.googleapis.com/auth/dataportability.saved.collections": { "description": "Move a copy of your saved links, images, places, and collections from your use of Google services." 
}, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars": { +"description": "Move a copy of your media reviews on Google Search." +}, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers": { +"description": "Move a copy of your self-reported video streaming provider preferences from Google Search and Google TV." +}, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs": { +"description": "Move a copy of your indicated thumbs up and thumbs down on media in Google Search and Google TV." +}, +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched": { +"description": "Move a copy of information about the movies and TV shows you marked as watched on Google Search and Google TV." +}, +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings": { +"description": "Move a copy of your notification settings on the Google Search app." +}, +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions": { +"description": "Move a copy of your notification subscriptions on Google Search app." +}, "https://www.googleapis.com/auth/dataportability.shopping.addresses": { "description": "Move a copy of your shipping information on Shopping." }, "https://www.googleapis.com/auth/dataportability.shopping.reviews": { "description": "Move a copy of reviews you wrote about products or online stores on Google Search." }, +"https://www.googleapis.com/auth/dataportability.streetview.imagery": { +"description": "Move a copy of the images and videos you uploaded to Street View." +}, "https://www.googleapis.com/auth/dataportability.youtube.channel": { "description": "Move a copy of information about your YouTube channel." }, @@ -234,6 +321,7 @@ "$ref": "PortabilityArchiveState" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -242,20 +330,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", +"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", +"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", 
+"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", +"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", +"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -297,6 +413,7 @@ "$ref": "RetryPortabilityArchiveResponse" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -305,20 +422,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", +"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", 
+"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", +"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", +"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", +"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -354,6 +499,7 @@ "$ref": "Empty" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -362,20 +508,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", +"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", 
+"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", +"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", +"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", +"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", +"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -411,6 +585,7 @@ "$ref": "InitiatePortabilityArchiveResponse" }, "scopes": [ +"https://www.googleapis.com/auth/dataportability.alerts.subscriptions", "https://www.googleapis.com/auth/dataportability.businessmessaging.conversations", "https://www.googleapis.com/auth/dataportability.chrome.autofill", "https://www.googleapis.com/auth/dataportability.chrome.bookmarks", @@ -419,20 +594,48 @@ "https://www.googleapis.com/auth/dataportability.chrome.history", "https://www.googleapis.com/auth/dataportability.chrome.reading_list", "https://www.googleapis.com/auth/dataportability.chrome.settings", +"https://www.googleapis.com/auth/dataportability.discover.follows", +"https://www.googleapis.com/auth/dataportability.discover.likes", +"https://www.googleapis.com/auth/dataportability.discover.not_interested", 
+"https://www.googleapis.com/auth/dataportability.maps.aliased_places", "https://www.googleapis.com/auth/dataportability.maps.commute_routes", "https://www.googleapis.com/auth/dataportability.maps.commute_settings", "https://www.googleapis.com/auth/dataportability.maps.ev_profile", +"https://www.googleapis.com/auth/dataportability.maps.factual_contributions", "https://www.googleapis.com/auth/dataportability.maps.offering_contributions", "https://www.googleapis.com/auth/dataportability.maps.photos_videos", +"https://www.googleapis.com/auth/dataportability.maps.post_trip_feedback", +"https://www.googleapis.com/auth/dataportability.maps.questions_answers", "https://www.googleapis.com/auth/dataportability.maps.reviews", "https://www.googleapis.com/auth/dataportability.maps.starred_places", "https://www.googleapis.com/auth/dataportability.myactivity.maps", +"https://www.googleapis.com/auth/dataportability.myactivity.myadcenter", +"https://www.googleapis.com/auth/dataportability.myactivity.play", "https://www.googleapis.com/auth/dataportability.myactivity.search", "https://www.googleapis.com/auth/dataportability.myactivity.shopping", "https://www.googleapis.com/auth/dataportability.myactivity.youtube", +"https://www.googleapis.com/auth/dataportability.mymaps.maps", +"https://www.googleapis.com/auth/dataportability.order_reserve.purchases_reservations", +"https://www.googleapis.com/auth/dataportability.play.devices", +"https://www.googleapis.com/auth/dataportability.play.grouping", +"https://www.googleapis.com/auth/dataportability.play.installs", +"https://www.googleapis.com/auth/dataportability.play.library", +"https://www.googleapis.com/auth/dataportability.play.playpoints", +"https://www.googleapis.com/auth/dataportability.play.promotions", +"https://www.googleapis.com/auth/dataportability.play.purchases", +"https://www.googleapis.com/auth/dataportability.play.redemptions", +"https://www.googleapis.com/auth/dataportability.play.subscriptions", +"https://www.googleapis.com/auth/dataportability.play.usersettings", "https://www.googleapis.com/auth/dataportability.saved.collections", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.reviews_and_stars", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.streaming_video_providers", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.thumbs", +"https://www.googleapis.com/auth/dataportability.search_ugc.media.watched", +"https://www.googleapis.com/auth/dataportability.searchnotifications.settings", +"https://www.googleapis.com/auth/dataportability.searchnotifications.subscriptions", "https://www.googleapis.com/auth/dataportability.shopping.addresses", "https://www.googleapis.com/auth/dataportability.shopping.reviews", +"https://www.googleapis.com/auth/dataportability.streetview.imagery", "https://www.googleapis.com/auth/dataportability.youtube.channel", "https://www.googleapis.com/auth/dataportability.youtube.comments", "https://www.googleapis.com/auth/dataportability.youtube.live_chat", @@ -452,7 +655,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://dataportability.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/dataproc.v1.json b/googleapiclient/discovery_cache/documents/dataproc.v1.json index 61fb644a37..9786d7156b 100644 --- a/googleapiclient/discovery_cache/documents/dataproc.v1.json +++ b/googleapiclient/discovery_cache/documents/dataproc.v1.json @@ -18,11 +18,6 @@ "endpoints": [ { "description": "Regional Endpoint", 
-"endpointUrl": "https://dataproc.me-central2.rep.googleapis.com/", -"location": "me-central2" -}, -{ -"description": "Regional Endpoint", "endpointUrl": "https://dataproc.europe-west3.rep.googleapis.com/", "location": "europe-west3" }, @@ -30,6 +25,11 @@ "description": "Regional Endpoint", "endpointUrl": "https://dataproc.europe-west9.rep.googleapis.com/", "location": "europe-west9" +}, +{ +"description": "Regional Endpoint", +"endpointUrl": "https://dataproc.me-central2.rep.googleapis.com/", +"location": "me-central2" } ], "fullyEncodeReservedExpansion": true, @@ -358,6 +358,34 @@ }, "batches": { "methods": { +"analyze": { +"description": "Analyze a Batch for possible recommendations and insights.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/batches/{batchesId}:analyze", +"httpMethod": "POST", +"id": "dataproc.projects.locations.batches.analyze", +"parameterOrder": [ +"name" +], +"parameters": { +"name": { +"description": "Required. The fully qualified name of the batch to analyze in the format \"projects/PROJECT_ID/locations/DATAPROC_REGION/batches/BATCH_ID\"", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/batches/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+name}:analyze", +"request": { +"$ref": "AnalyzeBatchRequest" +}, +"response": { +"$ref": "Operation" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "create": { "description": "Creates a batch workload that executes asynchronously.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/batches", @@ -3044,7 +3072,7 @@ } } }, -"revision": "20240309", +"revision": "20240320", "rootUrl": "https://dataproc.googleapis.com/", "schemas": { "AcceleratorConfig": { @@ -3063,6 +3091,17 @@ }, "type": "object" }, +"AnalyzeBatchRequest": { +"description": "A request to analyze a batch workload.", +"id": "AnalyzeBatchRequest", +"properties": { +"requestId": { +"description": "Optional. A unique ID used to identify the request. If the service receives two AnalyzeBatchRequest (http://cloud/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.AnalyzeBatchRequest)s with the same request_id, the second request is ignored and the Operation that corresponds to the first request created and stored in the backend is returned.Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.", +"type": "string" +} +}, +"type": "object" +}, "AnalyzeOperationMetadata": { "description": "Metadata describing the Analyze operation.", "id": "AnalyzeOperationMetadata", diff --git a/googleapiclient/discovery_cache/documents/datastore.v1.json b/googleapiclient/discovery_cache/documents/datastore.v1.json index 8540c3911c..dfd49e1f58 100644 --- a/googleapiclient/discovery_cache/documents/datastore.v1.json +++ b/googleapiclient/discovery_cache/documents/datastore.v1.json @@ -654,7 +654,7 @@ } } }, -"revision": "20240307", +"revision": "20240317", "rootUrl": "https://datastore.googleapis.com/", "schemas": { "Aggregation": { @@ -993,6 +993,62 @@ }, "type": "object" }, +"ExecutionStats": { +"description": "Execution statistics for the query.", +"id": "ExecutionStats", +"properties": { +"debugStats": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "Debugging statistics from the execution of the query. 
Note that the debugging stats are subject to change as Firestore evolves. It could include: { \"indexes_entries_scanned\": \"1000\", \"documents_scanned\": \"20\", \"billing_details\" : { \"documents_billable\": \"20\", \"index_entries_billable\": \"1000\", \"min_query_cost\": \"0\" } }", +"type": "object" +}, +"executionDuration": { +"description": "Total time to execute the query in the backend.", +"format": "google-duration", +"type": "string" +}, +"readOperations": { +"description": "Total billable read operations.", +"format": "int64", +"type": "string" +}, +"resultsReturned": { +"description": "Total number of results returned, including documents, projections, aggregation results, keys.", +"format": "int64", +"type": "string" +} +}, +"type": "object" +}, +"ExplainMetrics": { +"description": "Explain metrics for the query.", +"id": "ExplainMetrics", +"properties": { +"executionStats": { +"$ref": "ExecutionStats", +"description": "Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true." +}, +"planSummary": { +"$ref": "PlanSummary", +"description": "Planning phase information for the query." +} +}, +"type": "object" +}, +"ExplainOptions": { +"description": "Explain options for the query.", +"id": "ExplainOptions", +"properties": { +"analyze": { +"description": "Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics.", +"type": "boolean" +} +}, +"type": "object" +}, "Filter": { "description": "A holder for any type of filter.", "id": "Filter", @@ -1823,6 +1879,10 @@ }, "type": "array" }, +"propertyMask": { +"$ref": "PropertyMask", +"description": "The properties to return. Defaults to returning all properties. If this field is set and an entity has a property not referenced in the mask, it will be absent from LookupResponse.found.entity.properties. The entity's key is always returned." +}, "readOptions": { "$ref": "ReadOptions", "description": "The options for this lookup request." @@ -1885,6 +1945,10 @@ "$ref": "Entity", "description": "The entity to insert. The entity must not already exist. The entity key's final path element may be incomplete." }, +"propertyMask": { +"$ref": "PropertyMask", +"description": "The properties to write in this mutation. None of the properties in the mask may have a reserved name, except for `__key__`. This field is ignored for `delete`. If the entity already exists, only properties referenced in the mask are updated, others are left untouched. Properties referenced in the mask but not in the entity are deleted." +}, "update": { "$ref": "Entity", "description": "The entity to update. The entity must already exist. Must have a complete key path." @@ -1970,6 +2034,24 @@ }, "type": "object" }, +"PlanSummary": { +"description": "Planning phase information for the query.", +"id": "PlanSummary", +"properties": { +"indexesUsed": { +"description": "The indexes selected for the query. 
For example: [ {\"query_scope\": \"Collection\", \"properties\": \"(foo ASC, __name__ ASC)\"}, {\"query_scope\": \"Collection\", \"properties\": \"(bar ASC, __name__ ASC)\"} ]", +"items": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"type": "object" +}, +"type": "array" +} +}, +"type": "object" +}, "Projection": { "description": "A representation of a property in a projection.", "id": "Projection", @@ -2024,6 +2106,20 @@ }, "type": "object" }, +"PropertyMask": { +"description": "The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity.", +"id": "PropertyMask", +"properties": { +"paths": { +"description": "The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "PropertyOrder": { "description": "The desired order for a specific property.", "id": "PropertyOrder", @@ -2309,6 +2405,10 @@ "description": "The ID of the database against which to make the request. '(default)' is not allowed; please use empty string '' to refer the default database.", "type": "string" }, +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "gqlQuery": { "$ref": "GqlQuery", "description": "The GQL query to run. This query must be an aggregation query." @@ -2332,6 +2432,10 @@ "$ref": "AggregationResultBatch", "description": "A batch of aggregation results. Always present." }, +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." +}, "query": { "$ref": "AggregationQuery", "description": "The parsed form of the `GqlQuery` from the request, if it was set." @@ -2352,6 +2456,10 @@ "description": "The ID of the database against which to make the request. '(default)' is not allowed; please use empty string '' to refer the default database.", "type": "string" }, +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "gqlQuery": { "$ref": "GqlQuery", "description": "The GQL query to run. This query must be a non-aggregation query." @@ -2360,6 +2468,10 @@ "$ref": "PartitionId", "description": "Entities are partitioned into subsets, identified by a partition ID. Queries are scoped to a single partition. This partition ID is normalized with the standard default context partition ID." }, +"propertyMask": { +"$ref": "PropertyMask", +"description": "The properties to return. This field must not be set for a projection query. See LookupRequest.property_mask." +}, "query": { "$ref": "Query", "description": "The query to run." @@ -2379,6 +2491,10 @@ "$ref": "QueryResultBatch", "description": "A batch of query results (always present)." 
}, +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." +}, "query": { "$ref": "Query", "description": "The parsed form of the `GqlQuery` from the request, if it was set." diff --git a/googleapiclient/discovery_cache/documents/datastore.v1beta1.json b/googleapiclient/discovery_cache/documents/datastore.v1beta1.json index f4ab2cb828..3c05e205a3 100644 --- a/googleapiclient/discovery_cache/documents/datastore.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/datastore.v1beta1.json @@ -168,7 +168,7 @@ } } }, -"revision": "20240307", +"revision": "20240317", "rootUrl": "https://datastore.googleapis.com/", "schemas": { "GoogleDatastoreAdminV1CommonMetadata": { diff --git a/googleapiclient/discovery_cache/documents/datastore.v1beta3.json b/googleapiclient/discovery_cache/documents/datastore.v1beta3.json index 4ca03816f7..4e9d1882c2 100644 --- a/googleapiclient/discovery_cache/documents/datastore.v1beta3.json +++ b/googleapiclient/discovery_cache/documents/datastore.v1beta3.json @@ -336,7 +336,7 @@ } } }, -"revision": "20240307", +"revision": "20240317", "rootUrl": "https://datastore.googleapis.com/", "schemas": { "Aggregation": { @@ -653,6 +653,62 @@ }, "type": "object" }, +"ExecutionStats": { +"description": "Execution statistics for the query.", +"id": "ExecutionStats", +"properties": { +"debugStats": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. It could include: { \"indexes_entries_scanned\": \"1000\", \"documents_scanned\": \"20\", \"billing_details\" : { \"documents_billable\": \"20\", \"index_entries_billable\": \"1000\", \"min_query_cost\": \"0\" } }", +"type": "object" +}, +"executionDuration": { +"description": "Total time to execute the query in the backend.", +"format": "google-duration", +"type": "string" +}, +"readOperations": { +"description": "Total billable read operations.", +"format": "int64", +"type": "string" +}, +"resultsReturned": { +"description": "Total number of results returned, including documents, projections, aggregation results, keys.", +"format": "int64", +"type": "string" +} +}, +"type": "object" +}, +"ExplainMetrics": { +"description": "Explain metrics for the query.", +"id": "ExplainMetrics", +"properties": { +"executionStats": { +"$ref": "ExecutionStats", +"description": "Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true." +}, +"planSummary": { +"$ref": "PlanSummary", +"description": "Planning phase information for the query." +} +}, +"type": "object" +}, +"ExplainOptions": { +"description": "Explain options for the query.", +"id": "ExplainOptions", +"properties": { +"analyze": { +"description": "Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics.", +"type": "boolean" +} +}, +"type": "object" +}, "Filter": { "description": "A holder for any type of filter.", "id": "Filter", @@ -1278,6 +1334,10 @@ }, "type": "array" }, +"propertyMask": { +"$ref": "PropertyMask", +"description": "The properties to return. 
Defaults to returning all properties. If this field is set and an entity has a property not referenced in the mask, it will be absent from LookupResponse.found.entity.properties. The entity's key is always returned." +}, "readOptions": { "$ref": "ReadOptions", "description": "The options for this lookup request." @@ -1335,6 +1395,10 @@ "$ref": "Entity", "description": "The entity to insert. The entity must not already exist. The entity key's final path element may be incomplete." }, +"propertyMask": { +"$ref": "PropertyMask", +"description": "The properties to write in this mutation. None of the properties in the mask may have a reserved name, except for `__key__`. This field is ignored for `delete`. If the entity already exists, only properties referenced in the mask are updated, others are left untouched. Properties referenced in the mask but not in the entity are deleted." +}, "update": { "$ref": "Entity", "description": "The entity to update. The entity must already exist. Must have a complete key path." @@ -1416,6 +1480,24 @@ }, "type": "object" }, +"PlanSummary": { +"description": "Planning phase information for the query.", +"id": "PlanSummary", +"properties": { +"indexesUsed": { +"description": "The indexes selected for the query. For example: [ {\"query_scope\": \"Collection\", \"properties\": \"(foo ASC, __name__ ASC)\"}, {\"query_scope\": \"Collection\", \"properties\": \"(bar ASC, __name__ ASC)\"} ]", +"items": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"type": "object" +}, +"type": "array" +} +}, +"type": "object" +}, "Projection": { "description": "A representation of a property in a projection.", "id": "Projection", @@ -1470,6 +1552,20 @@ }, "type": "object" }, +"PropertyMask": { +"description": "The set of arbitrarily nested property paths used to restrict an operation to only a subset of properties in an entity.", +"id": "PropertyMask", +"properties": { +"paths": { +"description": "The paths to the properties covered by this mask. A path is a list of property names separated by dots (`.`), for example `foo.bar` means the property `bar` inside the entity property `foo` inside the entity associated with this path. If a property name contains a dot `.` or a backslash `\\`, then that name must be escaped. A path must not be empty, and may not reference a value inside an array value.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "PropertyOrder": { "description": "The desired order for a specific property.", "id": "PropertyOrder", @@ -1743,6 +1839,10 @@ "$ref": "AggregationQuery", "description": "The query to run." }, +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "gqlQuery": { "$ref": "GqlQuery", "description": "The GQL query to run. This query must be an aggregation query." @@ -1766,6 +1866,10 @@ "$ref": "AggregationResultBatch", "description": "A batch of aggregation results. Always present." }, +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." +}, "query": { "$ref": "AggregationQuery", "description": "The parsed form of the `GqlQuery` from the request, if it was set." 
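The Datastore v1/v1beta3 hunks above introduce ExplainOptions, ExplainMetrics and PropertyMask and wire them into RunQueryRequest and RunAggregationQueryRequest. Below is a minimal sketch of how these new request fields might be exercised through the generated Python client; the project ID, kind name and property paths are placeholder values (not taken from this change), and the snippet assumes Application Default Credentials are available.

# Hypothetical usage of the new Datastore v1 runQuery request fields added above.
# "my-project", the "Task" kind and the property paths are illustrative only.
from googleapiclient.discovery import build

datastore = build("datastore", "v1")  # assumes Application Default Credentials

body = {
    "databaseId": "",  # empty string selects the default database
    "partitionId": {"projectId": "my-project"},
    "query": {"kind": [{"name": "Task"}]},
    # New field: plan *and* execute the query so ExplainMetrics (including
    # ExecutionStats) is returned alongside the results.
    "explainOptions": {"analyze": True},
    # New field: only return these property paths on the matched entities.
    "propertyMask": {"paths": ["priority", "owner.email"]},
}

response = datastore.projects().runQuery(projectId="my-project", body=body).execute()

# explainMetrics is present only because explainOptions was set in the request.
print(response.get("explainMetrics", {}).get("executionStats"))

Per the ExplainOptions description above, leaving "analyze" at its default of False would plan the query and return planning metrics only, without executing it.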
@@ -1777,6 +1881,10 @@ "description": "The request for Datastore.RunQuery.", "id": "RunQueryRequest", "properties": { +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "gqlQuery": { "$ref": "GqlQuery", "description": "The GQL query to run. This query must be a non-aggregation query." @@ -1785,6 +1893,10 @@ "$ref": "PartitionId", "description": "Entities are partitioned into subsets, identified by a partition ID. Queries are scoped to a single partition. This partition ID is normalized with the standard default context partition ID." }, +"propertyMask": { +"$ref": "PropertyMask", +"description": "The properties to return. This field must not be set for a projection query. See LookupRequest.property_mask." +}, "query": { "$ref": "Query", "description": "The query to run." @@ -1804,6 +1916,10 @@ "$ref": "QueryResultBatch", "description": "A batch of query results (always present)." }, +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." +}, "query": { "$ref": "Query", "description": "The parsed form of the `GqlQuery` from the request, if it was set." diff --git a/googleapiclient/discovery_cache/documents/datastream.v1.json b/googleapiclient/discovery_cache/documents/datastream.v1.json index 0d398e4ca9..3d683e45a6 100644 --- a/googleapiclient/discovery_cache/documents/datastream.v1.json +++ b/googleapiclient/discovery_cache/documents/datastream.v1.json @@ -1250,7 +1250,7 @@ } } }, -"revision": "20240305", +"revision": "20240310", "rootUrl": "https://datastream.googleapis.com/", "schemas": { "AvroFileFormat": { diff --git a/googleapiclient/discovery_cache/documents/datastream.v1alpha1.json b/googleapiclient/discovery_cache/documents/datastream.v1alpha1.json index 553f8f6a61..7a87294921 100644 --- a/googleapiclient/discovery_cache/documents/datastream.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/datastream.v1alpha1.json @@ -1224,7 +1224,7 @@ } } }, -"revision": "20240305", +"revision": "20240310", "rootUrl": "https://datastream.googleapis.com/", "schemas": { "AvroFileFormat": { diff --git a/googleapiclient/discovery_cache/documents/deploymentmanager.alpha.json b/googleapiclient/discovery_cache/documents/deploymentmanager.alpha.json index 4136a8e5f1..1785b52a2f 100644 --- a/googleapiclient/discovery_cache/documents/deploymentmanager.alpha.json +++ b/googleapiclient/discovery_cache/documents/deploymentmanager.alpha.json @@ -1588,7 +1588,7 @@ } } }, -"revision": "20240306", +"revision": "20240320", "rootUrl": "https://deploymentmanager.googleapis.com/", "schemas": { "AsyncOptions": { diff --git a/googleapiclient/discovery_cache/documents/deploymentmanager.v2.json b/googleapiclient/discovery_cache/documents/deploymentmanager.v2.json index 9f85633788..b76f98d9f8 100644 --- a/googleapiclient/discovery_cache/documents/deploymentmanager.v2.json +++ b/googleapiclient/discovery_cache/documents/deploymentmanager.v2.json @@ -988,7 +988,7 @@ } } }, -"revision": "20240306", +"revision": "20240320", "rootUrl": "https://deploymentmanager.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/deploymentmanager.v2beta.json b/googleapiclient/discovery_cache/documents/deploymentmanager.v2beta.json index 
197cc7b005..e33c796574 100644 --- a/googleapiclient/discovery_cache/documents/deploymentmanager.v2beta.json +++ b/googleapiclient/discovery_cache/documents/deploymentmanager.v2beta.json @@ -1552,7 +1552,7 @@ } } }, -"revision": "20240306", +"revision": "20240320", "rootUrl": "https://deploymentmanager.googleapis.com/", "schemas": { "AsyncOptions": { diff --git a/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json b/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json index 90445ede30..5b69c251b4 100644 --- a/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json +++ b/googleapiclient/discovery_cache/documents/dialogflow.v2beta1.json @@ -7695,7 +7695,7 @@ } } }, -"revision": "20240313", +"revision": "20240314", "rootUrl": "https://dialogflow.googleapis.com/", "schemas": { "GoogleCloudDialogflowCxV3AdvancedSettings": { diff --git a/googleapiclient/discovery_cache/documents/dialogflow.v3.json b/googleapiclient/discovery_cache/documents/dialogflow.v3.json index c9c91f9426..c289d86c96 100644 --- a/googleapiclient/discovery_cache/documents/dialogflow.v3.json +++ b/googleapiclient/discovery_cache/documents/dialogflow.v3.json @@ -4453,7 +4453,7 @@ } } }, -"revision": "20240313", +"revision": "20240314", "rootUrl": "https://dialogflow.googleapis.com/", "schemas": { "GoogleCloudDialogflowCxV3AdvancedSettings": { diff --git a/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json b/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json index fd2a7e7c9c..598e8c9fb6 100644 --- a/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json +++ b/googleapiclient/discovery_cache/documents/dialogflow.v3beta1.json @@ -4453,7 +4453,7 @@ } } }, -"revision": "20240313", +"revision": "20240314", "rootUrl": "https://dialogflow.googleapis.com/", "schemas": { "GoogleCloudDialogflowCxV3AdvancedSettings": { diff --git a/googleapiclient/discovery_cache/documents/dlp.v2.json b/googleapiclient/discovery_cache/documents/dlp.v2.json index bf0c8638be..6fcba5f005 100644 --- a/googleapiclient/discovery_cache/documents/dlp.v2.json +++ b/googleapiclient/discovery_cache/documents/dlp.v2.json @@ -4164,7 +4164,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://dlp.googleapis.com/", "schemas": { "GooglePrivacyDlpV2Action": { @@ -4901,7 +4901,11 @@ "TYPE_NUMERIC", "TYPE_RECORD", "TYPE_BIGNUMERIC", -"TYPE_JSON" +"TYPE_JSON", +"TYPE_INTERVAL", +"TYPE_RANGE_DATE", +"TYPE_RANGE_DATETIME", +"TYPE_RANGE_TIMESTAMP" ], "enumDescriptions": [ "Invalid type.", @@ -4918,7 +4922,11 @@ "Encoded as a decimal string.", "Container of ordered fields, each with a type and field name.", "Decimal type.", -"Json type." +"Json type.", +"Interval type.", +"Range type.", +"Range type.", +"Range type." 
], "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/dns.v1.json b/googleapiclient/discovery_cache/documents/dns.v1.json index f79ab7d400..b7d4ef3b3c 100644 --- a/googleapiclient/discovery_cache/documents/dns.v1.json +++ b/googleapiclient/discovery_cache/documents/dns.v1.json @@ -1824,7 +1824,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://dns.googleapis.com/", "schemas": { "Change": { diff --git a/googleapiclient/discovery_cache/documents/dns.v1beta2.json b/googleapiclient/discovery_cache/documents/dns.v1beta2.json index be4d7a24e5..7f1c9b0ee1 100644 --- a/googleapiclient/discovery_cache/documents/dns.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/dns.v1beta2.json @@ -1821,7 +1821,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://dns.googleapis.com/", "schemas": { "Change": { diff --git a/googleapiclient/discovery_cache/documents/docs.v1.json b/googleapiclient/discovery_cache/documents/docs.v1.json index 37738a9fb7..b9be4c359a 100644 --- a/googleapiclient/discovery_cache/documents/docs.v1.json +++ b/googleapiclient/discovery_cache/documents/docs.v1.json @@ -216,7 +216,7 @@ } } }, -"revision": "20240305", +"revision": "20240319", "rootUrl": "https://docs.googleapis.com/", "schemas": { "AutoText": { diff --git a/googleapiclient/discovery_cache/documents/domains.v1.json b/googleapiclient/discovery_cache/documents/domains.v1.json index b771648e10..f67599788f 100644 --- a/googleapiclient/discovery_cache/documents/domains.v1.json +++ b/googleapiclient/discovery_cache/documents/domains.v1.json @@ -848,7 +848,7 @@ } } }, -"revision": "20240311", +"revision": "20240313", "rootUrl": "https://domains.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/domains.v1alpha2.json b/googleapiclient/discovery_cache/documents/domains.v1alpha2.json index 63fac43411..358bff838c 100644 --- a/googleapiclient/discovery_cache/documents/domains.v1alpha2.json +++ b/googleapiclient/discovery_cache/documents/domains.v1alpha2.json @@ -848,7 +848,7 @@ } } }, -"revision": "20240311", +"revision": "20240313", "rootUrl": "https://domains.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/domains.v1beta1.json b/googleapiclient/discovery_cache/documents/domains.v1beta1.json index 7af49ef6f4..f0e45fdbdc 100644 --- a/googleapiclient/discovery_cache/documents/domains.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/domains.v1beta1.json @@ -848,7 +848,7 @@ } } }, -"revision": "20240311", +"revision": "20240313", "rootUrl": "https://domains.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/domainsrdap.v1.json b/googleapiclient/discovery_cache/documents/domainsrdap.v1.json index 189ce2f3b1..2d5d0e1e61 100644 --- a/googleapiclient/discovery_cache/documents/domainsrdap.v1.json +++ b/googleapiclient/discovery_cache/documents/domainsrdap.v1.json @@ -289,7 +289,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://domainsrdap.googleapis.com/", "schemas": { "HttpBody": { diff --git a/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json b/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json index fe74e44206..ff814e0c72 100644 --- a/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json +++ b/googleapiclient/discovery_cache/documents/doubleclickbidmanager.v2.json @@ -319,7 +319,7 @@ } } }, 
-"revision": "20240305", +"revision": "20240311", "rootUrl": "https://doubleclickbidmanager.googleapis.com/", "schemas": { "ChannelGrouping": { diff --git a/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json b/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json index c64c774a25..a6fa7a825e 100644 --- a/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json +++ b/googleapiclient/discovery_cache/documents/doubleclicksearch.v2.json @@ -543,7 +543,7 @@ } } }, -"revision": "20240312", +"revision": "20240320", "rootUrl": "https://doubleclicksearch.googleapis.com/", "schemas": { "Availability": { diff --git a/googleapiclient/discovery_cache/documents/drive.v2.json b/googleapiclient/discovery_cache/documents/drive.v2.json index 870fa1968f..13bf436119 100644 --- a/googleapiclient/discovery_cache/documents/drive.v2.json +++ b/googleapiclient/discovery_cache/documents/drive.v2.json @@ -3842,7 +3842,7 @@ } } }, -"revision": "20240310", +"revision": "20240314", "rootUrl": "https://www.googleapis.com/", "schemas": { "About": { diff --git a/googleapiclient/discovery_cache/documents/drive.v3.json b/googleapiclient/discovery_cache/documents/drive.v3.json index ea57f8cf36..b40ad65e21 100644 --- a/googleapiclient/discovery_cache/documents/drive.v3.json +++ b/googleapiclient/discovery_cache/documents/drive.v3.json @@ -2503,7 +2503,7 @@ } } }, -"revision": "20240310", +"revision": "20240314", "rootUrl": "https://www.googleapis.com/", "schemas": { "About": { diff --git a/googleapiclient/discovery_cache/documents/driveactivity.v2.json b/googleapiclient/discovery_cache/documents/driveactivity.v2.json index fc2832ac86..fc16855f2a 100644 --- a/googleapiclient/discovery_cache/documents/driveactivity.v2.json +++ b/googleapiclient/discovery_cache/documents/driveactivity.v2.json @@ -132,7 +132,7 @@ } } }, -"revision": "20240310", +"revision": "20240319", "rootUrl": "https://driveactivity.googleapis.com/", "schemas": { "Action": { diff --git a/googleapiclient/discovery_cache/documents/drivelabels.v2.json b/googleapiclient/discovery_cache/documents/drivelabels.v2.json index 9a596a5706..9ee475274a 100644 --- a/googleapiclient/discovery_cache/documents/drivelabels.v2.json +++ b/googleapiclient/discovery_cache/documents/drivelabels.v2.json @@ -1032,7 +1032,7 @@ } } }, -"revision": "20240313", +"revision": "20240319", "rootUrl": "https://drivelabels.googleapis.com/", "schemas": { "GoogleAppsDriveLabelsV2BadgeColors": { diff --git a/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json b/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json index 6fa6ff508f..ee8d55a81f 100644 --- a/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json +++ b/googleapiclient/discovery_cache/documents/drivelabels.v2beta.json @@ -1032,7 +1032,7 @@ } } }, -"revision": "20240313", +"revision": "20240319", "rootUrl": "https://drivelabels.googleapis.com/", "schemas": { "GoogleAppsDriveLabelsV2betaBadgeColors": { diff --git a/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json b/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json index 4dd69c3414..6b900625b2 100644 --- a/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json +++ b/googleapiclient/discovery_cache/documents/essentialcontacts.v1.json @@ -850,7 +850,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://essentialcontacts.googleapis.com/", "schemas": { "GoogleCloudEssentialcontactsV1ComputeContactsResponse": { diff --git 
a/googleapiclient/discovery_cache/documents/eventarc.v1.json b/googleapiclient/discovery_cache/documents/eventarc.v1.json index 6a0df53d8b..291ace7a0d 100644 --- a/googleapiclient/discovery_cache/documents/eventarc.v1.json +++ b/googleapiclient/discovery_cache/documents/eventarc.v1.json @@ -1197,7 +1197,7 @@ } } }, -"revision": "20240308", +"revision": "20240315", "rootUrl": "https://eventarc.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/eventarc.v1beta1.json b/googleapiclient/discovery_cache/documents/eventarc.v1beta1.json index 5d7728d87d..33872b5a6c 100644 --- a/googleapiclient/discovery_cache/documents/eventarc.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/eventarc.v1beta1.json @@ -584,7 +584,7 @@ } } }, -"revision": "20240308", +"revision": "20240315", "rootUrl": "https://eventarc.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json b/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json index b8eb7dd935..531ecaceb4 100644 --- a/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/factchecktools.v1alpha1.json @@ -304,7 +304,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://factchecktools.googleapis.com/", "schemas": { "GoogleFactcheckingFactchecktoolsV1alpha1Claim": { diff --git a/googleapiclient/discovery_cache/documents/fcm.v1.json b/googleapiclient/discovery_cache/documents/fcm.v1.json index b002a880e3..36018da671 100644 --- a/googleapiclient/discovery_cache/documents/fcm.v1.json +++ b/googleapiclient/discovery_cache/documents/fcm.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240301", +"revision": "20240320", "rootUrl": "https://fcm.googleapis.com/", "schemas": { "AndroidConfig": { diff --git a/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json b/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json index 00fc699c46..58d54f1f4a 100644 --- a/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/fcmdata.v1beta1.json @@ -154,7 +154,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://fcmdata.googleapis.com/", "schemas": { "GoogleFirebaseFcmDataV1beta1AndroidDeliveryData": { diff --git a/googleapiclient/discovery_cache/documents/firebase.v1beta1.json b/googleapiclient/discovery_cache/documents/firebase.v1beta1.json index ef0ac6befb..36e22f3598 100644 --- a/googleapiclient/discovery_cache/documents/firebase.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/firebase.v1beta1.json @@ -1324,7 +1324,7 @@ } } }, -"revision": "20240315", +"revision": "20240322", "rootUrl": "https://firebase.googleapis.com/", "schemas": { "AddFirebaseRequest": { diff --git a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json index d0b99f25be..d499f3d4e4 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1.json @@ -1343,7 +1343,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://firebaseappcheck.googleapis.com/", "schemas": { "GoogleFirebaseAppcheckV1AppAttestConfig": { diff --git a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json 
index 1f6598b810..5fdd9adf45 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json +++ b/googleapiclient/discovery_cache/documents/firebaseappcheck.v1beta.json @@ -1823,7 +1823,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://firebaseappcheck.googleapis.com/", "schemas": { "GoogleFirebaseAppcheckV1betaAppAttestConfig": { diff --git a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json index 0b89eaf71d..7504d9c800 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1.json @@ -941,7 +941,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://firebaseappdistribution.googleapis.com/", "schemas": { "GdataBlobstore2Info": { diff --git a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json index 9de2e93746..7b14ab88f2 100644 --- a/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/firebaseappdistribution.v1alpha.json @@ -585,7 +585,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://firebaseappdistribution.googleapis.com/", "schemas": { "GoogleFirebaseAppdistroV1Release": { diff --git a/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json b/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json index 71181a6162..3a48dc0a4c 100644 --- a/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json +++ b/googleapiclient/discovery_cache/documents/firebasedatabase.v1beta.json @@ -351,7 +351,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://firebasedatabase.googleapis.com/", "schemas": { "DatabaseInstance": { diff --git a/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json b/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json index 8295b0c135..dd86bfc842 100644 --- a/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json +++ b/googleapiclient/discovery_cache/documents/firebasedynamiclinks.v1.json @@ -224,7 +224,7 @@ } } }, -"revision": "20240315", +"revision": "20240325", "rootUrl": "https://firebasedynamiclinks.googleapis.com/", "schemas": { "AnalyticsInfo": { diff --git a/googleapiclient/discovery_cache/documents/firebasehosting.v1.json b/googleapiclient/discovery_cache/documents/firebasehosting.v1.json index d9acd9e8bd..9acdb8ed96 100644 --- a/googleapiclient/discovery_cache/documents/firebasehosting.v1.json +++ b/googleapiclient/discovery_cache/documents/firebasehosting.v1.json @@ -269,7 +269,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://firebasehosting.googleapis.com/", "schemas": { "CancelOperationRequest": { @@ -379,7 +379,7 @@ }, "quickSetupUpdates": { "$ref": "DnsUpdates", -"description": "A set of DNS record updates that allow Hosting to serve secure content on your domain name. The record type determines the update's purpose: - `A` and `AAAA`: Updates your domain name's IP addresses so that they direct traffic to Hosting servers. - `TXT`: Updates ownership permissions on your domain name, letting Hosting know that your custom domain's project has permission to perfrom actions for that domain name. 
- `CAA`: Updates your domain name's list of authorized Certificate Authorities (CAs). Only present if you have existing `CAA` records that prohibit Hosting's CA from minting certs for your domain name. These updates include all DNS changes you'll need to get started with Hosting, but, if made all at once, can result in a brief period of downtime for your domain name--while Hosting creates and uploads an SSL cert, for example. If you'd like to add your domain name to Hosting without downtime, complete the `liveMigrationSteps` first, before making the remaining updates in this field." +"description": "A set of DNS record updates that allow Hosting to serve secure content on your domain name. The record type determines the update's purpose: - `A` and `AAAA`: Updates your domain name's IP addresses so that they direct traffic to Hosting servers. - `TXT`: Updates ownership permissions on your domain name, letting Hosting know that your custom domain's project has permission to perform actions for that domain name. - `CAA`: Updates your domain name's list of authorized Certificate Authorities (CAs). Only present if you have existing `CAA` records that prohibit Hosting's CA from minting certs for your domain name. These updates include all DNS changes you'll need to get started with Hosting, but, if made all at once, can result in a brief period of downtime for your domain name--while Hosting creates and uploads an SSL cert, for example. If you'd like to add your domain name to Hosting without downtime, complete the `liveMigrationSteps` first, before making the remaining updates in this field." } }, "type": "object" diff --git a/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json b/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json index 5ca5cd4aaf..ad961317ed 100644 --- a/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/firebasehosting.v1beta1.json @@ -2422,7 +2422,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://firebasehosting.googleapis.com/", "schemas": { "ActingUser": { @@ -2885,7 +2885,7 @@ }, "quickSetupUpdates": { "$ref": "DnsUpdates", -"description": "A set of DNS record updates that allow Hosting to serve secure content on your domain name. The record type determines the update's purpose: - `A` and `AAAA`: Updates your domain name's IP addresses so that they direct traffic to Hosting servers. - `TXT`: Updates ownership permissions on your domain name, letting Hosting know that your custom domain's project has permission to perfrom actions for that domain name. - `CAA`: Updates your domain name's list of authorized Certificate Authorities (CAs). Only present if you have existing `CAA` records that prohibit Hosting's CA from minting certs for your domain name. These updates include all DNS changes you'll need to get started with Hosting, but, if made all at once, can result in a brief period of downtime for your domain name--while Hosting creates and uploads an SSL cert, for example. If you'd like to add your domain name to Hosting without downtime, complete the `liveMigrationSteps` first, before making the remaining updates in this field." +"description": "A set of DNS record updates that allow Hosting to serve secure content on your domain name. The record type determines the update's purpose: - `A` and `AAAA`: Updates your domain name's IP addresses so that they direct traffic to Hosting servers. 
- `TXT`: Updates ownership permissions on your domain name, letting Hosting know that your custom domain's project has permission to perform actions for that domain name. - `CAA`: Updates your domain name's list of authorized Certificate Authorities (CAs). Only present if you have existing `CAA` records that prohibit Hosting's CA from minting certs for your domain name. These updates include all DNS changes you'll need to get started with Hosting, but, if made all at once, can result in a brief period of downtime for your domain name--while Hosting creates and uploads an SSL cert, for example. If you'd like to add your domain name to Hosting without downtime, complete the `liveMigrationSteps` first, before making the remaining updates in this field." } }, "type": "object" diff --git a/googleapiclient/discovery_cache/documents/firebaseml.v1.json b/googleapiclient/discovery_cache/documents/firebaseml.v1.json index cd9cb9d085..ce3d897805 100644 --- a/googleapiclient/discovery_cache/documents/firebaseml.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaseml.v1.json @@ -204,7 +204,7 @@ } } }, -"revision": "20240221", +"revision": "20240322", "rootUrl": "https://firebaseml.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json b/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json index f131537275..12e0a2d308 100644 --- a/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/firebaseml.v1beta2.json @@ -318,7 +318,7 @@ } } }, -"revision": "20240221", +"revision": "20240322", "rootUrl": "https://firebaseml.googleapis.com/", "schemas": { "DownloadModelResponse": { diff --git a/googleapiclient/discovery_cache/documents/firebaserules.v1.json b/googleapiclient/discovery_cache/documents/firebaserules.v1.json index 15aaf5d281..c9ddf78f18 100644 --- a/googleapiclient/discovery_cache/documents/firebaserules.v1.json +++ b/googleapiclient/discovery_cache/documents/firebaserules.v1.json @@ -477,7 +477,7 @@ } } }, -"revision": "20240214", +"revision": "20240311", "rootUrl": "https://firebaserules.googleapis.com/", "schemas": { "Arg": { diff --git a/googleapiclient/discovery_cache/documents/firestore.v1.json b/googleapiclient/discovery_cache/documents/firestore.v1.json index fe5c7b6875..340fcb54bd 100644 --- a/googleapiclient/discovery_cache/documents/firestore.v1.json +++ b/googleapiclient/discovery_cache/documents/firestore.v1.json @@ -394,7 +394,7 @@ ], "parameters": { "name": { -"description": "Required. The name of backup schedule. Format `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`", +"description": "Required. The name of the backup schedule. Format `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`", "location": "path", "pattern": "^projects/[^/]+/databases/[^/]+/backupSchedules/[^/]+$", "required": true, @@ -1672,7 +1672,7 @@ } } }, -"revision": "20240307", +"revision": "20240317", "rootUrl": "https://firestore.googleapis.com/", "schemas": { "Aggregation": { @@ -2158,6 +2158,36 @@ "properties": {}, "type": "object" }, +"ExecutionStats": { +"description": "Execution statistics for the query.", +"id": "ExecutionStats", +"properties": { +"debugStats": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. 
It could include: { \"indexes_entries_scanned\": \"1000\", \"documents_scanned\": \"20\", \"billing_details\" : { \"documents_billable\": \"20\", \"index_entries_billable\": \"1000\", \"min_query_cost\": \"0\" } }", +"type": "object" +}, +"executionDuration": { +"description": "Total time to execute the query in the backend.", +"format": "google-duration", +"type": "string" +}, +"readOperations": { +"description": "Total billable read operations.", +"format": "int64", +"type": "string" +}, +"resultsReturned": { +"description": "Total number of results returned, including documents, projections, aggregation results, keys.", +"format": "int64", +"type": "string" +} +}, +"type": "object" +}, "ExistenceFilter": { "description": "A digest of all the documents that match a given target.", "id": "ExistenceFilter", @@ -2179,6 +2209,32 @@ }, "type": "object" }, +"ExplainMetrics": { +"description": "Explain metrics for the query.", +"id": "ExplainMetrics", +"properties": { +"executionStats": { +"$ref": "ExecutionStats", +"description": "Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true." +}, +"planSummary": { +"$ref": "PlanSummary", +"description": "Planning phase information for the query." +} +}, +"type": "object" +}, +"ExplainOptions": { +"description": "Explain options for the query.", +"id": "ExplainOptions", +"properties": { +"analyze": { +"description": "Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics.", +"type": "boolean" +} +}, +"type": "object" +}, "FieldFilter": { "description": "A filter on a specific field.", "id": "FieldFilter", @@ -2416,7 +2472,7 @@ "type": "object" }, "GoogleFirestoreAdminV1DailyRecurrence": { -"description": "Represent a recurring schedule that runs at a specific time every day. The time zone is UTC.", +"description": "Represents a recurring schedule that runs at a specific time every day. The time zone is UTC.", "id": "GoogleFirestoreAdminV1DailyRecurrence", "properties": {}, "type": "object" @@ -3705,6 +3761,24 @@ }, "type": "object" }, +"PlanSummary": { +"description": "Planning phase information for the query.", +"id": "PlanSummary", +"properties": { +"indexesUsed": { +"description": "The indexes selected for the query. For example: [ {\"query_scope\": \"Collection\", \"properties\": \"(foo ASC, __name__ ASC)\"}, {\"query_scope\": \"Collection\", \"properties\": \"(bar ASC, __name__ ASC)\"} ]", +"items": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"type": "object" +}, +"type": "array" +} +}, +"type": "object" +}, "Precondition": { "description": "A precondition on a document, used for conditional operations.", "id": "Precondition", @@ -3790,6 +3864,10 @@ "description": "The request for Firestore.RunAggregationQuery.", "id": "RunAggregationQueryRequest", "properties": { +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "newTransaction": { "$ref": "TransactionOptions", "description": "Starts a new transaction as part of the query, defaulting to read-only. The new transaction ID will be returned as the first response in the stream." 
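The firestore.v1.json hunks above add the same query-explain surface (ExecutionStats, ExplainMetrics, ExplainOptions, PlanSummary) plus an explainOptions field on RunAggregationQueryRequest. The following is a minimal, hypothetical sketch of an explained aggregation query through the generated Python client; the project, database and collection names are placeholders, and it assumes the REST stream comes back from the client as a JSON array.

# Hypothetical sketch of the new explainOptions field on Firestore's
# RunAggregationQueryRequest; all names below are illustrative only.
from googleapiclient.discovery import build

firestore = build("firestore", "v1")  # assumes Application Default Credentials

parent = "projects/my-project/databases/(default)/documents"
body = {
    "structuredAggregationQuery": {
        "structuredQuery": {"from": [{"collectionId": "tasks"}]},
        "aggregations": [{"alias": "total", "count": {}}],
    },
    # analyze defaults to False (plan only); True also executes the query and
    # returns ExecutionStats with the final streamed response.
    "explainOptions": {"analyze": True},
}

responses = (
    firestore.projects()
    .databases()
    .documents()
    .runAggregationQuery(parent=parent, body=body)
    .execute()
)

# Per the schema above, explainMetrics is sent only once, with the last
# response in the stream.
print(responses[-1].get("explainMetrics"))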
@@ -3815,6 +3893,10 @@ "description": "The response for Firestore.RunAggregationQuery.", "id": "RunAggregationQueryResponse", "properties": { +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." +}, "readTime": { "description": "The time at which the aggregate result was computed. This is always monotonically increasing; in this case, the previous AggregationResult in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `result` will be sent, and this represents the time at which the query was run.", "format": "google-datetime", @@ -3836,6 +3918,10 @@ "description": "The request for Firestore.RunQuery.", "id": "RunQueryRequest", "properties": { +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "newTransaction": { "$ref": "TransactionOptions", "description": "Starts a new transaction and reads the documents. Defaults to a read-only transaction. The new transaction ID will be returned as the first response in the stream." @@ -3869,6 +3955,10 @@ "description": "If present, Firestore has completely finished the request and no more documents will be returned.", "type": "boolean" }, +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." +}, "readTime": { "description": "The time at which the document was read. This may be monotonically increasing; in this case, the previous documents in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `document` will be sent, and this represents the time at which the query was run.", "format": "google-datetime", diff --git a/googleapiclient/discovery_cache/documents/firestore.v1beta1.json b/googleapiclient/discovery_cache/documents/firestore.v1beta1.json index dfa040d111..fc67269b35 100644 --- a/googleapiclient/discovery_cache/documents/firestore.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/firestore.v1beta1.json @@ -950,7 +950,7 @@ } } }, -"revision": "20240307", +"revision": "20240317", "rootUrl": "https://firestore.googleapis.com/", "schemas": { "Aggregation": { @@ -1436,6 +1436,36 @@ "properties": {}, "type": "object" }, +"ExecutionStats": { +"description": "Execution statistics for the query.", +"id": "ExecutionStats", +"properties": { +"debugStats": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"description": "Debugging statistics from the execution of the query. Note that the debugging stats are subject to change as Firestore evolves. 
It could include: { \"indexes_entries_scanned\": \"1000\", \"documents_scanned\": \"20\", \"billing_details\" : { \"documents_billable\": \"20\", \"index_entries_billable\": \"1000\", \"min_query_cost\": \"0\" } }", +"type": "object" +}, +"executionDuration": { +"description": "Total time to execute the query in the backend.", +"format": "google-duration", +"type": "string" +}, +"readOperations": { +"description": "Total billable read operations.", +"format": "int64", +"type": "string" +}, +"resultsReturned": { +"description": "Total number of results returned, including documents, projections, aggregation results, keys.", +"format": "int64", +"type": "string" +} +}, +"type": "object" +}, "ExistenceFilter": { "description": "A digest of all the documents that match a given target.", "id": "ExistenceFilter", @@ -1457,6 +1487,32 @@ }, "type": "object" }, +"ExplainMetrics": { +"description": "Explain metrics for the query.", +"id": "ExplainMetrics", +"properties": { +"executionStats": { +"$ref": "ExecutionStats", +"description": "Aggregated stats from the execution of the query. Only present when ExplainOptions.analyze is set to true." +}, +"planSummary": { +"$ref": "PlanSummary", +"description": "Planning phase information for the query." +} +}, +"type": "object" +}, +"ExplainOptions": { +"description": "Explain options for the query.", +"id": "ExplainOptions", +"properties": { +"analyze": { +"description": "Optional. Whether to execute this query. When false (the default), the query will be planned, returning only metrics from the planning stages. When true, the query will be planned and executed, returning the full query results along with both planning and execution stage metrics.", +"type": "boolean" +} +}, +"type": "object" +}, "FieldFilter": { "description": "A filter on a specific field.", "id": "FieldFilter", @@ -2223,6 +2279,24 @@ }, "type": "object" }, +"PlanSummary": { +"description": "Planning phase information for the query.", +"id": "PlanSummary", +"properties": { +"indexesUsed": { +"description": "The indexes selected for the query. For example: [ {\"query_scope\": \"Collection\", \"properties\": \"(foo ASC, __name__ ASC)\"}, {\"query_scope\": \"Collection\", \"properties\": \"(bar ASC, __name__ ASC)\"} ]", +"items": { +"additionalProperties": { +"description": "Properties of the object.", +"type": "any" +}, +"type": "object" +}, +"type": "array" +} +}, +"type": "object" +}, "Precondition": { "description": "A precondition on a document, used for conditional operations.", "id": "Precondition", @@ -2308,6 +2382,10 @@ "description": "The request for Firestore.RunAggregationQuery.", "id": "RunAggregationQueryRequest", "properties": { +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "newTransaction": { "$ref": "TransactionOptions", "description": "Starts a new transaction as part of the query, defaulting to read-only. The new transaction ID will be returned as the first response in the stream." @@ -2333,6 +2411,10 @@ "description": "The response for Firestore.RunAggregationQuery.", "id": "RunAggregationQueryResponse", "properties": { +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunAggregationQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." 
+}, "readTime": { "description": "The time at which the aggregate result was computed. This is always monotonically increasing; in this case, the previous AggregationResult in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `result` will be sent, and this represents the time at which the query was run.", "format": "google-datetime", @@ -2354,6 +2436,10 @@ "description": "The request for Firestore.RunQuery.", "id": "RunQueryRequest", "properties": { +"explainOptions": { +"$ref": "ExplainOptions", +"description": "Optional. Explain options for the query. If set, additional query statistics will be returned. If not, only query results will be returned." +}, "newTransaction": { "$ref": "TransactionOptions", "description": "Starts a new transaction and reads the documents. Defaults to a read-only transaction. The new transaction ID will be returned as the first response in the stream." @@ -2387,6 +2473,10 @@ "description": "If present, Firestore has completely finished the request and no more documents will be returned.", "type": "boolean" }, +"explainMetrics": { +"$ref": "ExplainMetrics", +"description": "Query explain metrics. This is only present when the RunQueryRequest.explain_options is provided, and it is sent only once with the last response in the stream." +}, "readTime": { "description": "The time at which the document was read. This may be monotonically increasing; in this case, the previous documents in the result stream are guaranteed not to have changed between their `read_time` and this one. If the query returns no results, a response with `read_time` and no `document` will be sent, and this represents the time at which the query was run.", "format": "google-datetime", diff --git a/googleapiclient/discovery_cache/documents/firestore.v1beta2.json b/googleapiclient/discovery_cache/documents/firestore.v1beta2.json index b1f5436d10..0e36503e93 100644 --- a/googleapiclient/discovery_cache/documents/firestore.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/firestore.v1beta2.json @@ -415,7 +415,7 @@ } } }, -"revision": "20240307", +"revision": "20240317", "rootUrl": "https://firestore.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/fitness.v1.json b/googleapiclient/discovery_cache/documents/fitness.v1.json index d78f10a626..ddefc93806 100644 --- a/googleapiclient/discovery_cache/documents/fitness.v1.json +++ b/googleapiclient/discovery_cache/documents/fitness.v1.json @@ -832,7 +832,7 @@ } } }, -"revision": "20240317", +"revision": "20240322", "rootUrl": "https://fitness.googleapis.com/", "schemas": { "AggregateBucket": { diff --git a/googleapiclient/discovery_cache/documents/forms.v1.json b/googleapiclient/discovery_cache/documents/forms.v1.json index 595354e247..c5f160c85b 100644 --- a/googleapiclient/discovery_cache/documents/forms.v1.json +++ b/googleapiclient/discovery_cache/documents/forms.v1.json @@ -423,7 +423,7 @@ } } }, -"revision": "20240305", +"revision": "20240321", "rootUrl": "https://forms.googleapis.com/", "schemas": { "Answer": { diff --git a/googleapiclient/discovery_cache/documents/gkebackup.v1.json b/googleapiclient/discovery_cache/documents/gkebackup.v1.json index c9f367d33f..fc3a387dd8 100644 --- a/googleapiclient/discovery_cache/documents/gkebackup.v1.json +++ b/googleapiclient/discovery_cache/documents/gkebackup.v1.json @@ -1688,7 +1688,7 @@ } } }, -"revision": "20240306", +"revision": 
"20240313", "rootUrl": "https://gkebackup.googleapis.com/", "schemas": { "AuditConfig": { @@ -1740,7 +1740,7 @@ "type": "object" }, "Backup": { -"description": "Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups). Next id: 29", +"description": "Represents a request to perform a single point-in-time capture of some portion of the state of a GKE cluster, the record of the backup operation itself, and an anchor for the underlying artifacts that comprise the Backup (the config backup and VolumeBackups).", "id": "Backup", "properties": { "allNamespaces": { @@ -2590,7 +2590,7 @@ "type": "object" }, "Restore": { -"description": "Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself. Next id: 20", +"description": "Represents both a request to Restore some portion of a Backup into a target GKE cluster and a record of the restore operation itself.", "id": "Restore", "properties": { "backup": { @@ -2705,7 +2705,7 @@ "type": "object" }, "RestoreConfig": { -"description": "Configuration of a restore. Next id: 14", +"description": "Configuration of a restore.", "id": "RestoreConfig", "properties": { "allNamespaces": { @@ -2794,7 +2794,7 @@ "type": "object" }, "RestorePlan": { -"description": "The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan. Next id: 13", +"description": "The configuration of a potential series of Restore operations to be performed against Backups belong to a particular BackupPlan.", "id": "RestorePlan", "properties": { "backupPlan": { @@ -3051,7 +3051,7 @@ "type": "object" }, "VolumeBackup": { -"description": "Represents the backup of a specific persistent volume as a component of a Backup - both the record of the operation and a pointer to the underlying storage-specific artifacts. Next id: 14", +"description": "Represents the backup of a specific persistent volume as a component of a Backup - both the record of the operation and a pointer to the underlying storage-specific artifacts.", "id": "VolumeBackup", "properties": { "completeTime": { @@ -3154,7 +3154,7 @@ "type": "object" }, "VolumeRestore": { -"description": "Represents the operation of restoring a volume from a VolumeBackup. Next id: 13", +"description": "Represents the operation of restoring a volume from a VolumeBackup.", "id": "VolumeRestore", "properties": { "completeTime": { diff --git a/googleapiclient/discovery_cache/documents/gkehub.v1.json b/googleapiclient/discovery_cache/documents/gkehub.v1.json index ee855fce57..ae4c744355 100644 --- a/googleapiclient/discovery_cache/documents/gkehub.v1.json +++ b/googleapiclient/discovery_cache/documents/gkehub.v1.json @@ -1421,6 +1421,83 @@ "https://www.googleapis.com/auth/cloud-platform" ] }, +"listMemberships": { +"description": "Lists Memberships bound to a Scope. The response includes relevant Memberships from all regions.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/scopes/{scopesId}:listMemberships", +"httpMethod": "GET", +"id": "gkehub.projects.locations.scopes.listMemberships", +"parameterOrder": [ +"scopeName" +], +"parameters": { +"filter": { +"description": "Optional. Lists Memberships that match the filter expression, following the syntax outlined in https://google.aip.dev/160. 
Currently, filtering can be done only based on Memberships's `name`, `labels`, `create_time`, `update_time`, and `unique_id`.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. When requesting a 'page' of resources, `page_size` specifies number of resources to return. If unspecified or set to 0, all resources will be returned. Pagination is currently not supported; therefore, setting this field does not have any impact for now.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. Token returned by previous call to `ListBoundMemberships` which specifies the position in the list from where to continue listing the resources.", +"location": "query", +"type": "string" +}, +"scopeName": { +"description": "Required. Name of the Scope, in the format `projects/*/locations/global/scopes/*`, to which the Memberships are bound.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/scopes/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+scopeName}:listMemberships", +"response": { +"$ref": "ListBoundMembershipsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"listPermitted": { +"description": "Lists permitted Scopes.", +"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/scopes:listPermitted", +"httpMethod": "GET", +"id": "gkehub.projects.locations.scopes.listPermitted", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. When requesting a 'page' of resources, `page_size` specifies number of resources to return. If unspecified or set to 0, all resources will be returned.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. Token returned by previous call to `ListPermittedScopes` which specifies the position in the list from where to continue listing the resources.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The parent (project and location) where the Scope will be listed. Specified in the format `projects/*/locations/*`.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1/{+parent}/scopes:listPermitted", +"response": { +"$ref": "ListPermittedScopesResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "patch": { "description": "Updates a scopes.", "flatPath": "v1/projects/{projectsId}/locations/{locationsId}/scopes/{scopesId}", @@ -1834,7 +1911,7 @@ } } }, -"revision": "20240307", +"revision": "20240318", "rootUrl": "https://gkehub.googleapis.com/", "schemas": { "AppDevExperienceFeatureSpec": { @@ -3722,6 +3799,10 @@ "$ref": "IdentityServiceGoogleConfig", "description": "GoogleConfig specific configuration." }, +"ldapConfig": { +"$ref": "IdentityServiceLdapConfig", +"description": "LDAP specific configuration." +}, "name": { "description": "Identifier for auth config.", "type": "string" @@ -3789,6 +3870,48 @@ }, "type": "object" }, +"IdentityServiceGroupConfig": { +"description": "Contains the properties for locating and authenticating groups in the directory.", +"id": "IdentityServiceGroupConfig", +"properties": { +"baseDn": { +"description": "Required. The location of the subtree in the LDAP directory to search for group entries.", +"type": "string" +}, +"filter": { +"description": "Optional. Optional filter to be used when searching for groups a user belongs to. 
This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to \"(objectClass=Group)\".", +"type": "string" +}, +"idAttribute": { +"description": "Optional. The identifying name of each group a user belongs to. For example, if this is set to \"distinguishedName\" then RBACs and other group expectations should be written as full DNs. This defaults to \"distinguishedName\".", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceLdapConfig": { +"description": "Configuration for the LDAP Auth flow.", +"id": "IdentityServiceLdapConfig", +"properties": { +"group": { +"$ref": "IdentityServiceGroupConfig", +"description": "Optional. Contains the properties for locating and authenticating groups in the directory." +}, +"server": { +"$ref": "IdentityServiceServerConfig", +"description": "Required. Server settings for the external LDAP server." +}, +"serviceAccount": { +"$ref": "IdentityServiceServiceAccountConfig", +"description": "Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate." +}, +"user": { +"$ref": "IdentityServiceUserConfig", +"description": "Required. Defines where users exist in the LDAP directory." +} +}, +"type": "object" +}, "IdentityServiceMembershipSpec": { "description": "**Anthos Identity Service**: Configuration for a single Membership.", "id": "IdentityServiceMembershipSpec", @@ -3946,6 +4069,81 @@ }, "type": "object" }, +"IdentityServiceServerConfig": { +"description": "Server settings for the external LDAP server.", +"id": "IdentityServiceServerConfig", +"properties": { +"certificateAuthorityData": { +"description": "Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the \"ldaps\" and \"startTLS\" connections.", +"format": "byte", +"type": "string" +}, +"connectionType": { +"description": "Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty.", +"type": "string" +}, +"host": { +"description": "Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, \"ldap.server.example\" or \"10.10.10.10:389\".", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceServiceAccountConfig": { +"description": "Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate.", +"id": "IdentityServiceServiceAccountConfig", +"properties": { +"simpleBindCredentials": { +"$ref": "IdentityServiceSimpleBindCredentials", +"description": "Credentials for basic auth." +} +}, +"type": "object" +}, +"IdentityServiceSimpleBindCredentials": { +"description": "The structure holds the LDAP simple binding credential.", +"id": "IdentityServiceSimpleBindCredentials", +"properties": { +"dn": { +"description": "Required. The distinguished name(DN) of the service account object/user.", +"type": "string" +}, +"encryptedPassword": { +"description": "Output only. The encrypted password of the service account object/user.", +"format": "byte", +"readOnly": true, +"type": "string" +}, +"password": { +"description": "Required. Input only. 
The password of the service account object/user.", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceUserConfig": { +"description": "Defines where users exist in the LDAP directory.", +"id": "IdentityServiceUserConfig", +"properties": { +"baseDn": { +"description": "Required. The location of the subtree in the LDAP directory to search for user entries.", +"type": "string" +}, +"filter": { +"description": "Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to \"(objectClass=User)\".", +"type": "string" +}, +"idAttribute": { +"description": "Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). For example, setting loginAttribute to \"sAMAccountName\" and identifierAttribute to \"userPrincipalName\" would allow a user to login as \"bsmith\", but actual RBAC policies for the user would be written as \"bsmith@example.com\". Using \"userPrincipalName\" is recommended since this will be unique for each user. This defaults to \"userPrincipalName\".", +"type": "string" +}, +"loginAttribute": { +"description": "Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. \"(=)\" and is combined with the optional filter field. This defaults to \"userPrincipalName\".", +"type": "string" +} +}, +"type": "object" +}, "KubernetesMetadata": { "description": "KubernetesMetadata provides informational metadata for Memberships representing Kubernetes clusters.", "id": "KubernetesMetadata", @@ -4018,6 +4216,31 @@ }, "type": "object" }, +"ListBoundMembershipsResponse": { +"description": "List of Memberships bound to a Scope.", +"id": "ListBoundMembershipsResponse", +"properties": { +"memberships": { +"description": "The list of Memberships bound to the given Scope.", +"items": { +"$ref": "Membership" +}, +"type": "array" +}, +"nextPageToken": { +"description": "A token to request the next page of resources from the `ListBoundMemberships` method. The value of an empty string means that there are no more resources to return.", +"type": "string" +}, +"unreachable": { +"description": "List of locations that could not be reached while fetching this list.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "ListFeaturesResponse": { "description": "Response message for the `GkeHub.ListFeatures` method.", "id": "ListFeaturesResponse", @@ -4133,6 +4356,24 @@ }, "type": "object" }, +"ListPermittedScopesResponse": { +"description": "List of permitted Scopes.", +"id": "ListPermittedScopesResponse", +"properties": { +"nextPageToken": { +"description": "A token to request the next page of resources from the `ListPermittedScopes` method. 
The value of an empty string means that there are no more resources to return.", +"type": "string" +}, +"scopes": { +"description": "The list of permitted Scopes", +"items": { +"$ref": "Scope" +}, +"type": "array" +} +}, +"type": "object" +}, "ListScopeNamespacesResponse": { "description": "List of fleet namespaces.", "id": "ListScopeNamespacesResponse", diff --git a/googleapiclient/discovery_cache/documents/gkehub.v1alpha.json b/googleapiclient/discovery_cache/documents/gkehub.v1alpha.json index 8e89731451..f25fb1c49a 100644 --- a/googleapiclient/discovery_cache/documents/gkehub.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/gkehub.v1alpha.json @@ -2175,7 +2175,7 @@ } } }, -"revision": "20240307", +"revision": "20240318", "rootUrl": "https://gkehub.googleapis.com/", "schemas": { "AnthosObservabilityFeatureSpec": { @@ -4313,6 +4313,10 @@ "$ref": "IdentityServiceGoogleConfig", "description": "GoogleConfig specific configuration." }, +"ldapConfig": { +"$ref": "IdentityServiceLdapConfig", +"description": "LDAP specific configuration." +}, "name": { "description": "Identifier for auth config.", "type": "string" @@ -4380,6 +4384,48 @@ }, "type": "object" }, +"IdentityServiceGroupConfig": { +"description": "Contains the properties for locating and authenticating groups in the directory.", +"id": "IdentityServiceGroupConfig", +"properties": { +"baseDn": { +"description": "Required. The location of the subtree in the LDAP directory to search for group entries.", +"type": "string" +}, +"filter": { +"description": "Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to \"(objectClass=Group)\".", +"type": "string" +}, +"idAttribute": { +"description": "Optional. The identifying name of each group a user belongs to. For example, if this is set to \"distinguishedName\" then RBACs and other group expectations should be written as full DNs. This defaults to \"distinguishedName\".", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceLdapConfig": { +"description": "Configuration for the LDAP Auth flow.", +"id": "IdentityServiceLdapConfig", +"properties": { +"group": { +"$ref": "IdentityServiceGroupConfig", +"description": "Optional. Contains the properties for locating and authenticating groups in the directory." +}, +"server": { +"$ref": "IdentityServiceServerConfig", +"description": "Required. Server settings for the external LDAP server." +}, +"serviceAccount": { +"$ref": "IdentityServiceServiceAccountConfig", +"description": "Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate." +}, +"user": { +"$ref": "IdentityServiceUserConfig", +"description": "Required. Defines where users exist in the LDAP directory." +} +}, +"type": "object" +}, "IdentityServiceMembershipSpec": { "description": "**Anthos Identity Service**: Configuration for a single Membership.", "id": "IdentityServiceMembershipSpec", @@ -4537,6 +4583,81 @@ }, "type": "object" }, +"IdentityServiceServerConfig": { +"description": "Server settings for the external LDAP server.", +"id": "IdentityServiceServerConfig", +"properties": { +"certificateAuthorityData": { +"description": "Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. 
This must be provided for the \"ldaps\" and \"startTLS\" connections.", +"format": "byte", +"type": "string" +}, +"connectionType": { +"description": "Optional. Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty.", +"type": "string" +}, +"host": { +"description": "Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, \"ldap.server.example\" or \"10.10.10.10:389\".", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceServiceAccountConfig": { +"description": "Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate.", +"id": "IdentityServiceServiceAccountConfig", +"properties": { +"simpleBindCredentials": { +"$ref": "IdentityServiceSimpleBindCredentials", +"description": "Credentials for basic auth." +} +}, +"type": "object" +}, +"IdentityServiceSimpleBindCredentials": { +"description": "The structure holds the LDAP simple binding credential.", +"id": "IdentityServiceSimpleBindCredentials", +"properties": { +"dn": { +"description": "Required. The distinguished name(DN) of the service account object/user.", +"type": "string" +}, +"encryptedPassword": { +"description": "Output only. The encrypted password of the service account object/user.", +"format": "byte", +"readOnly": true, +"type": "string" +}, +"password": { +"description": "Required. Input only. The password of the service account object/user.", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceUserConfig": { +"description": "Defines where users exist in the LDAP directory.", +"id": "IdentityServiceUserConfig", +"properties": { +"baseDn": { +"description": "Required. The location of the subtree in the LDAP directory to search for user entries.", +"type": "string" +}, +"filter": { +"description": "Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to \"(objectClass=User)\".", +"type": "string" +}, +"idAttribute": { +"description": "Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). For example, setting loginAttribute to \"sAMAccountName\" and identifierAttribute to \"userPrincipalName\" would allow a user to login as \"bsmith\", but actual RBAC policies for the user would be written as \"bsmith@example.com\". Using \"userPrincipalName\" is recommended since this will be unique for each user. This defaults to \"userPrincipalName\".", +"type": "string" +}, +"loginAttribute": { +"description": "Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. \"(=)\" and is combined with the optional filter field. 
This defaults to \"userPrincipalName\".", +"type": "string" +} +}, +"type": "object" +}, "KubernetesMetadata": { "description": "KubernetesMetadata provides informational metadata for Memberships representing Kubernetes clusters.", "id": "KubernetesMetadata", diff --git a/googleapiclient/discovery_cache/documents/gkehub.v1beta.json b/googleapiclient/discovery_cache/documents/gkehub.v1beta.json index 9b519d1d20..e45d61b7be 100644 --- a/googleapiclient/discovery_cache/documents/gkehub.v1beta.json +++ b/googleapiclient/discovery_cache/documents/gkehub.v1beta.json @@ -1611,6 +1611,83 @@ "https://www.googleapis.com/auth/cloud-platform" ] }, +"listMemberships": { +"description": "Lists Memberships bound to a Scope. The response includes relevant Memberships from all regions.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/scopes/{scopesId}:listMemberships", +"httpMethod": "GET", +"id": "gkehub.projects.locations.scopes.listMemberships", +"parameterOrder": [ +"scopeName" +], +"parameters": { +"filter": { +"description": "Optional. Lists Memberships that match the filter expression, following the syntax outlined in https://google.aip.dev/160. Currently, filtering can be done only based on Memberships's `name`, `labels`, `create_time`, `update_time`, and `unique_id`.", +"location": "query", +"type": "string" +}, +"pageSize": { +"description": "Optional. When requesting a 'page' of resources, `page_size` specifies number of resources to return. If unspecified or set to 0, all resources will be returned. Pagination is currently not supported; therefore, setting this field does not have any impact for now.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. Token returned by previous call to `ListBoundMemberships` which specifies the position in the list from where to continue listing the resources.", +"location": "query", +"type": "string" +}, +"scopeName": { +"description": "Required. Name of the Scope, in the format `projects/*/locations/global/scopes/*`, to which the Memberships are bound.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+/scopes/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+scopeName}:listMemberships", +"response": { +"$ref": "ListBoundMembershipsResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, +"listPermitted": { +"description": "Lists permitted Scopes.", +"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/scopes:listPermitted", +"httpMethod": "GET", +"id": "gkehub.projects.locations.scopes.listPermitted", +"parameterOrder": [ +"parent" +], +"parameters": { +"pageSize": { +"description": "Optional. When requesting a 'page' of resources, `page_size` specifies number of resources to return. If unspecified or set to 0, all resources will be returned.", +"format": "int32", +"location": "query", +"type": "integer" +}, +"pageToken": { +"description": "Optional. Token returned by previous call to `ListPermittedScopes` which specifies the position in the list from where to continue listing the resources.", +"location": "query", +"type": "string" +}, +"parent": { +"description": "Required. The parent (project and location) where the Scope will be listed. 
Specified in the format `projects/*/locations/*`.", +"location": "path", +"pattern": "^projects/[^/]+/locations/[^/]+$", +"required": true, +"type": "string" +} +}, +"path": "v1beta/{+parent}/scopes:listPermitted", +"response": { +"$ref": "ListPermittedScopesResponse" +}, +"scopes": [ +"https://www.googleapis.com/auth/cloud-platform" +] +}, "patch": { "description": "Updates a scopes.", "flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/scopes/{scopesId}", @@ -2024,7 +2101,7 @@ } } }, -"revision": "20240307", +"revision": "20240318", "rootUrl": "https://gkehub.googleapis.com/", "schemas": { "AnthosObservabilityFeatureSpec": { @@ -4017,6 +4094,10 @@ "$ref": "IdentityServiceGoogleConfig", "description": "GoogleConfig specific configuration." }, +"ldapConfig": { +"$ref": "IdentityServiceLdapConfig", +"description": "LDAP specific configuration." +}, "name": { "description": "Identifier for auth config.", "type": "string" @@ -4084,6 +4165,48 @@ }, "type": "object" }, +"IdentityServiceGroupConfig": { +"description": "Contains the properties for locating and authenticating groups in the directory.", +"id": "IdentityServiceGroupConfig", +"properties": { +"baseDn": { +"description": "Required. The location of the subtree in the LDAP directory to search for group entries.", +"type": "string" +}, +"filter": { +"description": "Optional. Optional filter to be used when searching for groups a user belongs to. This can be used to explicitly match only certain groups in order to reduce the amount of groups returned for each user. This defaults to \"(objectClass=Group)\".", +"type": "string" +}, +"idAttribute": { +"description": "Optional. The identifying name of each group a user belongs to. For example, if this is set to \"distinguishedName\" then RBACs and other group expectations should be written as full DNs. This defaults to \"distinguishedName\".", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceLdapConfig": { +"description": "Configuration for the LDAP Auth flow.", +"id": "IdentityServiceLdapConfig", +"properties": { +"group": { +"$ref": "IdentityServiceGroupConfig", +"description": "Optional. Contains the properties for locating and authenticating groups in the directory." +}, +"server": { +"$ref": "IdentityServiceServerConfig", +"description": "Required. Server settings for the external LDAP server." +}, +"serviceAccount": { +"$ref": "IdentityServiceServiceAccountConfig", +"description": "Required. Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate." +}, +"user": { +"$ref": "IdentityServiceUserConfig", +"description": "Required. Defines where users exist in the LDAP directory." +} +}, +"type": "object" +}, "IdentityServiceMembershipSpec": { "description": "**Anthos Identity Service**: Configuration for a single Membership.", "id": "IdentityServiceMembershipSpec", @@ -4241,6 +4364,81 @@ }, "type": "object" }, +"IdentityServiceServerConfig": { +"description": "Server settings for the external LDAP server.", +"id": "IdentityServiceServerConfig", +"properties": { +"certificateAuthorityData": { +"description": "Optional. Contains a Base64 encoded, PEM formatted certificate authority certificate for the LDAP server. This must be provided for the \"ldaps\" and \"startTLS\" connections.", +"format": "byte", +"type": "string" +}, +"connectionType": { +"description": "Optional. 
Defines the connection type to communicate with the LDAP server. If `starttls` or `ldaps` is specified, the certificate_authority_data should not be empty.", +"type": "string" +}, +"host": { +"description": "Required. Defines the hostname or IP of the LDAP server. Port is optional and will default to 389, if unspecified. For example, \"ldap.server.example\" or \"10.10.10.10:389\".", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceServiceAccountConfig": { +"description": "Contains the credentials of the service account which is authorized to perform the LDAP search in the directory. The credentials can be supplied by the combination of the DN and password or the client certificate.", +"id": "IdentityServiceServiceAccountConfig", +"properties": { +"simpleBindCredentials": { +"$ref": "IdentityServiceSimpleBindCredentials", +"description": "Credentials for basic auth." +} +}, +"type": "object" +}, +"IdentityServiceSimpleBindCredentials": { +"description": "The structure holds the LDAP simple binding credential.", +"id": "IdentityServiceSimpleBindCredentials", +"properties": { +"dn": { +"description": "Required. The distinguished name(DN) of the service account object/user.", +"type": "string" +}, +"encryptedPassword": { +"description": "Output only. The encrypted password of the service account object/user.", +"format": "byte", +"readOnly": true, +"type": "string" +}, +"password": { +"description": "Required. Input only. The password of the service account object/user.", +"type": "string" +} +}, +"type": "object" +}, +"IdentityServiceUserConfig": { +"description": "Defines where users exist in the LDAP directory.", +"id": "IdentityServiceUserConfig", +"properties": { +"baseDn": { +"description": "Required. The location of the subtree in the LDAP directory to search for user entries.", +"type": "string" +}, +"filter": { +"description": "Optional. Filter to apply when searching for the user. This can be used to further restrict the user accounts which are allowed to login. This defaults to \"(objectClass=User)\".", +"type": "string" +}, +"idAttribute": { +"description": "Optional. Determines which attribute to use as the user's identity after they are authenticated. This is distinct from the loginAttribute field to allow users to login with a username, but then have their actual identifier be an email address or full Distinguished Name (DN). For example, setting loginAttribute to \"sAMAccountName\" and identifierAttribute to \"userPrincipalName\" would allow a user to login as \"bsmith\", but actual RBAC policies for the user would be written as \"bsmith@example.com\". Using \"userPrincipalName\" is recommended since this will be unique for each user. This defaults to \"userPrincipalName\".", +"type": "string" +}, +"loginAttribute": { +"description": "Optional. The name of the attribute which matches against the input username. This is used to find the user in the LDAP database e.g. \"(=)\" and is combined with the optional filter field. 
This defaults to \"userPrincipalName\".", +"type": "string" +} +}, +"type": "object" +}, "KubernetesMetadata": { "description": "KubernetesMetadata provides informational metadata for Memberships representing Kubernetes clusters.", "id": "KubernetesMetadata", @@ -4313,6 +4511,31 @@ }, "type": "object" }, +"ListBoundMembershipsResponse": { +"description": "List of Memberships bound to a Scope.", +"id": "ListBoundMembershipsResponse", +"properties": { +"memberships": { +"description": "The list of Memberships bound to the given Scope.", +"items": { +"$ref": "Membership" +}, +"type": "array" +}, +"nextPageToken": { +"description": "A token to request the next page of resources from the `ListBoundMemberships` method. The value of an empty string means that there are no more resources to return.", +"type": "string" +}, +"unreachable": { +"description": "List of locations that could not be reached while fetching this list.", +"items": { +"type": "string" +}, +"type": "array" +} +}, +"type": "object" +}, "ListFeaturesResponse": { "description": "Response message for the `GkeHub.ListFeatures` method.", "id": "ListFeaturesResponse", @@ -4446,6 +4669,24 @@ }, "type": "object" }, +"ListPermittedScopesResponse": { +"description": "List of permitted Scopes.", +"id": "ListPermittedScopesResponse", +"properties": { +"nextPageToken": { +"description": "A token to request the next page of resources from the `ListPermittedScopes` method. The value of an empty string means that there are no more resources to return.", +"type": "string" +}, +"scopes": { +"description": "The list of permitted Scopes", +"items": { +"$ref": "Scope" +}, +"type": "array" +} +}, +"type": "object" +}, "ListScopeNamespacesResponse": { "description": "List of fleet namespaces.", "id": "ListScopeNamespacesResponse", diff --git a/googleapiclient/discovery_cache/documents/gkehub.v1beta1.json b/googleapiclient/discovery_cache/documents/gkehub.v1beta1.json index bcfc4b987d..2fd4e51b25 100644 --- a/googleapiclient/discovery_cache/documents/gkehub.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/gkehub.v1beta1.json @@ -712,7 +712,7 @@ } } }, -"revision": "20240307", +"revision": "20240318", "rootUrl": "https://gkehub.googleapis.com/", "schemas": { "ApplianceCluster": { diff --git a/googleapiclient/discovery_cache/documents/gkehub.v2alpha.json b/googleapiclient/discovery_cache/documents/gkehub.v2alpha.json index e2c687750c..87ea1d6490 100644 --- a/googleapiclient/discovery_cache/documents/gkehub.v2alpha.json +++ b/googleapiclient/discovery_cache/documents/gkehub.v2alpha.json @@ -280,7 +280,7 @@ } } }, -"revision": "20240307", +"revision": "20240318", "rootUrl": "https://gkehub.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/gmail.v1.json b/googleapiclient/discovery_cache/documents/gmail.v1.json index ceeee47399..abc6ef403c 100644 --- a/googleapiclient/discovery_cache/documents/gmail.v1.json +++ b/googleapiclient/discovery_cache/documents/gmail.v1.json @@ -3077,7 +3077,7 @@ } } }, -"revision": "20240312", +"revision": "20240318", "rootUrl": "https://gmail.googleapis.com/", "schemas": { "AutoForwarding": { diff --git a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json index 14ea32c9f8..673f8863f3 100644 --- a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json +++ b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1.json @@ -265,7 +265,7 @@ } 
} }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://gmailpostmastertools.googleapis.com/", "schemas": { "DeliveryError": { diff --git a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json index ce9e074460..a7eb344248 100644 --- a/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/gmailpostmastertools.v1beta1.json @@ -265,7 +265,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://gmailpostmastertools.googleapis.com/", "schemas": { "DeliveryError": { diff --git a/googleapiclient/discovery_cache/documents/groupsmigration.v1.json b/googleapiclient/discovery_cache/documents/groupsmigration.v1.json index a30524b418..7e99d30666 100644 --- a/googleapiclient/discovery_cache/documents/groupsmigration.v1.json +++ b/googleapiclient/discovery_cache/documents/groupsmigration.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240305", +"revision": "20240311", "rootUrl": "https://groupsmigration.googleapis.com/", "schemas": { "Groups": { diff --git a/googleapiclient/discovery_cache/documents/healthcare.v1.json b/googleapiclient/discovery_cache/documents/healthcare.v1.json index b5bb6486d0..e8b68e11bb 100644 --- a/googleapiclient/discovery_cache/documents/healthcare.v1.json +++ b/googleapiclient/discovery_cache/documents/healthcare.v1.json @@ -4554,7 +4554,7 @@ } } }, -"revision": "20240228", +"revision": "20240312", "rootUrl": "https://healthcare.googleapis.com/", "schemas": { "ActivateConsentRequest": { @@ -7135,7 +7135,7 @@ "type": "boolean" }, "inputGcsObject": { -"description": "Optional. GCS object containing list of {resourceType}/{resourceId} lines, identifying resources to be reverted", +"description": "Optional. Cloud Storage object containing list of {resourceType}/{resourceId} lines, identifying resources to be reverted", "type": "string" }, "resultGcsBucket": { diff --git a/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json b/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json index af5d6a0104..7b1528ca18 100644 --- a/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/healthcare.v1beta1.json @@ -2650,7 +2650,7 @@ ], "parameters": { "resource": { -"description": "Required. The path of the resource to update the blob storage settings in the format of `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`.", +"description": "Required. 
The path of the resource to update the blob storage settings in the format of `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`.", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/datasets/[^/]+/dicomStores/[^/]+$", "required": true, @@ -2806,7 +2806,7 @@ ], "parameters": { "resource": { -"description": "Required. The path of the resource to update the blob storage settings in the format of `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`.", +"description": "Required. The path of the resource to update the blob storage settings in the format of `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}`, `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/`, or `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}/dicomWeb/studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}`. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`.", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/datasets/[^/]+/dicomStores/[^/]+/dicomWeb/studies/.*$", "required": true, @@ -2869,7 +2869,7 @@ ], "parameters": { "resource": { -"description": "Required. The path of the resource for which the storage info is requested (for exaxmple for a DICOM Instance: `projects/{projectid}/datasets/{datasetid}/dicomStores/{dicomStoreId}/dicomWeb/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}`)", +"description": "Required. The path of the resource for which the storage info is requested (for exaxmple for a DICOM Instance: `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreId}/dicomWeb/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}`)", "location": "path", "pattern": "^projects/[^/]+/locations/[^/]+/datasets/[^/]+/dicomStores/[^/]+/dicomWeb/studies/[^/]+/series/[^/]+/instances/[^/]+$", "required": true, @@ -5614,7 +5614,7 @@ } } }, -"revision": "20240228", +"revision": "20240312", "rootUrl": "https://healthcare.googleapis.com/", "schemas": { "AccessDeterminationLogConfig": { @@ -5632,7 +5632,7 @@ "enumDescriptions": [ "No log level specified. 
This value is unused.", "No additional consent-related logging is added to audit logs.", -"The following information is included: * One of the following [`consentMode`](https://cloud.google.com/healthcare-api/docs/fhir-consent#audit_logs) fields: (`off`|`emptyScope`|`enforced`|`btg`|`bypass`). * The accessor's request headers * The `log_level` of the [AccessDeterminationLogConfig](https://cloud.google.com/healthcare-api/docs/reference/rest/v1beta1/projects.locations.datasets.fhirStores#AccessDeterminationLogConfig) * The final consent evaluation (`PERMIT`, `DENY`, or `NO_CONSENT`) * A human-readable summary of the evaluation", +"The following information is included: * One of the following [`consentMode`](https://cloud.google.com/healthcare-api/docs/fhir-consent#audit_logs) fields: (`off`|`emptyScope`|`enforced`|`btg`|`bypass`). * The accessor's request headers * The `log_level` of the AccessDeterminationLogConfig * The final consent evaluation (`PERMIT`, `DENY`, or `NO_CONSENT`) * A human-readable summary of the evaluation", "Includes `MINIMUM` and, for each resource owner, returns: * The resource owner's name * Most specific part of the `X-Consent-Scope` resulting in consensual determination * Timestamp of the applied enforcement leading to the decision * Enforcement version at the time the applicable consents were applied * The Consent resource name * The timestamp of the Consent resource used for enforcement * Policy type (`PATIENT` or `ADMIN`) Note that this mode adds some overhead to CRUD operations." ], "type": "string" @@ -6400,11 +6400,11 @@ "type": "string" }, "environment": { -"description": "An abstract identifier that describes the environment or conditions under which the accessor is acting. Can be \u201c*\u201d if it applies to all environments.", +"description": "An abstract identifier that describes the environment or conditions under which the accessor is acting. Can be \"*\" if it applies to all environments.", "type": "string" }, "purpose": { -"description": "The intent of data use. Can be \u201c*\u201d if it applies to all purposes.", +"description": "The intent of data use. Can be \"*\" if it applies to all purposes.", "type": "string" } }, @@ -7188,7 +7188,7 @@ "type": "array" }, "consentResource": { -"description": "The resource name of this consent resource. Format: `projects/{projectId}/datasets/{datasetId}/fhirStores/{fhirStoreId}/fhir/{resourceType}/{id}`.", +"description": "The resource name of this consent resource. Format: `projects/{projectId}/locations/{locationId}/datasets/{datasetId}/fhirStores/{fhirStoreId}/fhir/{resourceType}/{id}`.", "type": "string" }, "enforcementTime": { @@ -7225,9 +7225,9 @@ "description": "The consent's variant combinations. A single consent may have multiple variants.", "items": { "enum": [ -"VARIANT_UNSPECIFIED", -"VARIANT_STANDARD", -"VARIANT_CASCADE" +"CONSENT_VARIANT_UNSPECIFIED", +"CONSENT_VARIANT_STANDARD", +"CONSENT_VARIANT_CASCADE" ], "enumDescriptions": [ "Consent variant unspecified.", @@ -9709,7 +9709,7 @@ }, "filterConfig": { "$ref": "DicomFilterConfig", -"description": "Optional. A filter configuration. If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`." +"description": "Optional. A filter configuration. 
If `filter_config` is specified, set the value of `resource` to the resource name of a DICOM store in the format `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicomStoreID}`." } }, "type": "object" @@ -9799,7 +9799,7 @@ "description": "Info about the data stored in blob storage for the resource." }, "referencedResource": { -"description": "The resource whose storage info is returned. For example, to specify the resource path of a DICOM Instance: `projects/{projectid}/datasets/{datasetid}/dicomStores/{dicom_store_id}/dicomWeb/studi/{study_uid}/series/{series_uid}/instances/{instance_uid}`", +"description": "The resource whose storage info is returned. For example, to specify the resource path of a DICOM Instance: `projects/{projectID}/locations/{locationID}/datasets/{datasetID}/dicomStores/{dicom_store_id}/dicomWeb/studi/{study_uid}/series/{series_uid}/instances/{instance_uid}`", "type": "string" }, "structuredStorageInfo": { diff --git a/googleapiclient/discovery_cache/documents/homegraph.v1.json b/googleapiclient/discovery_cache/documents/homegraph.v1.json index 944f3b1f11..343b2846c8 100644 --- a/googleapiclient/discovery_cache/documents/homegraph.v1.json +++ b/googleapiclient/discovery_cache/documents/homegraph.v1.json @@ -216,7 +216,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://homegraph.googleapis.com/", "schemas": { "AgentDeviceId": { diff --git a/googleapiclient/discovery_cache/documents/iam.v1.json b/googleapiclient/discovery_cache/documents/iam.v1.json index 2220bd21d8..c91c646a0b 100644 --- a/googleapiclient/discovery_cache/documents/iam.v1.json +++ b/googleapiclient/discovery_cache/documents/iam.v1.json @@ -12,7 +12,7 @@ "baseUrl": "https://iam.googleapis.com/", "batchPath": "batch", "canonicalName": "Iam", -"description": "Manages identity and access control for Google Cloud Platform resources, including the creation of service accounts, which you can use to authenticate to Google and make API calls. ", +"description": "Manages identity and access control for Google Cloud resources, including the creation of service accounts, which you can use to authenticate to Google and make API calls. Enabling this API also enables the IAM Service Account Credentials API (iamcredentials.googleapis.com). However, disabling this API doesn't disable the IAM Service Account Credentials API. ", "discoveryVersion": "v1", "documentationLink": "https://cloud.google.com/iam/", "fullyEncodeReservedExpansion": true, @@ -2850,7 +2850,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://iam.googleapis.com/", "schemas": { "AccessRestrictions": { @@ -3916,6 +3916,37 @@ }, "type": "object" }, +"ReconciliationOperationMetadata": { +"description": "Operation metadata returned by the CLH during resource state reconciliation.", +"id": "ReconciliationOperationMetadata", +"properties": { +"deleteResource": { +"deprecated": true, +"description": "DEPRECATED. Use exclusive_action instead.", +"type": "boolean" +}, +"exclusiveAction": { +"description": "Excluisive action returned by the CLH.", +"enum": [ +"UNKNOWN_REPAIR_ACTION", +"DELETE", +"RETRY" +], +"enumDeprecated": [ +false, +true, +false +], +"enumDescriptions": [ +"Unknown repair action.", +"The resource has to be deleted. When using this bit, the CLH should fail the operation. DEPRECATED. Instead use DELETE_RESOURCE OperationSignal in SideChannel.", +"This resource could not be repaired but the repair should be tried again at a later time. 
This can happen if there is a dependency that needs to be resolved first- e.g. if a parent resource must be repaired before a child resource." +], +"type": "string" +} +}, +"type": "object" +}, "Role": { "description": "A role in the Identity and Access Management API.", "id": "Role", @@ -3941,7 +3972,7 @@ "type": "array" }, "name": { -"description": "The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/my-role` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/my-role` for project-level custom roles.", +"description": "The name of the role. When `Role` is used in `CreateRole`, the role name must not be set. When `Role` is used in output and other input such as `UpdateRole`, the role name is the complete path. For example, `roles/logging.viewer` for predefined roles, `organizations/{ORGANIZATION_ID}/roles/myRole` for organization-level custom roles, and `projects/{PROJECT_ID}/roles/myRole` for project-level custom roles.", "type": "string" }, "stage": { @@ -3976,7 +4007,7 @@ "id": "Saml", "properties": { "idpMetadataXml": { -"description": "Required. SAML Identity provider configuration metadata xml doc. The xml document should comply with [SAML 2.0 specification](https://www.oasis-open.org/committees/download.php/56785/sstc-saml-metadata-errata-2.0-wd-05.pdf). The max size of the acceptable xml document will be bounded to 128k characters. The metadata xml document should satisfy the following constraints: 1) Must contain an Identity Provider Entity ID. 2) Must contain at least one non-expired signing key certificate. 3) For each signing key: a) Valid from should be no more than 7 days from now. b) Valid to should be no more than 15 years in the future. 4) Upto 3 IdP signing keys are allowed in the metadata xml. When updating the provider's metadata xml, at lease one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata", +"description": "Required. SAML identity provider (IdP) configuration metadata XML doc. The XML document must comply with the [SAML 2.0 specification](https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf). The maximum size of an acceptable XML document is 128K characters. The SAML metadata XML document must satisfy the following constraints: * Must contain an IdP Entity ID. * Must contain at least one non-expired signing certificate. * For each signing certificate, the expiration must be: * From no more than 7 days in the future. * To no more than 15 years in the future. * Up to three IdP signing keys are allowed. When updating the provider's metadata XML, at least one non-expired signing key must overlap with the existing metadata. This requirement is skipped if there are no non-expired signing keys present in the existing metadata.", "type": "string" } }, @@ -4417,7 +4448,7 @@ "additionalProperties": { "type": "string" }, -"description": "Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. 
You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The linux username used by OS login. This is an optional field and the mapped posix username cannot exceed 32 characters, The key must match the regex \"^a-zA-Z0-9._{0,31}$\". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {\"google.subject\": \"assertion.sub\"} ```", +"description": "Required. Maps attributes from the authentication credentials issued by an external identity provider to Google Cloud attributes, such as `subject` and `segment`. Each key must be a string specifying the Google Cloud IAM attribute to map to. The following keys are supported: * `google.subject`: The principal IAM is authenticating. You can reference this value in IAM bindings. This is also the subject that appears in Cloud Logging logs. This is a required field and the mapped subject cannot exceed 127 bytes. * `google.groups`: Groups the authenticating user belongs to. You can grant groups access to resources using an IAM `principalSet` binding; access applies to all members of the group. * `google.display_name`: The name of the authenticated user. 
This is an optional field and the mapped display name cannot exceed 100 bytes. If not set, `google.subject` will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.profile_photo`: The URL that specifies the authenticated user's thumbnail photo. This is an optional field. When set, the image will be visible as the user's profile picture. If not set, a generic user icon will be displayed instead. This attribute cannot be referenced in IAM bindings. * `google.posix_username`: The Linux username used by OS Login. This is an optional field and the mapped POSIX username cannot exceed 32 characters, The key must match the regex \"^a-zA-Z0-9._{0,31}$\". This attribute cannot be referenced in IAM bindings. You can also provide custom attributes by specifying `attribute.{custom_attribute}`, where {custom_attribute} is the name of the custom attribute to be mapped. You can define a maximum of 50 custom attributes. The maximum length of a mapped attribute key is 100 characters, and the key may only contain the characters [a-z0-9_]. You can reference these attributes in IAM policies to define fine-grained access for a workforce pool to Google Cloud resources. For example: * `google.subject`: `principal://iam.googleapis.com/locations/global/workforcePools/{pool}/subject/{value}` * `google.groups`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/group/{value}` * `attribute.{custom_attribute}`: `principalSet://iam.googleapis.com/locations/global/workforcePools/{pool}/attribute.{custom_attribute}/{value}` Each value must be a [Common Expression Language] (https://opensource.google/projects/cel) function that maps an identity provider credential to the normalized attribute specified by the corresponding map key. You can use the `assertion` keyword in the expression to access a JSON representation of the authentication credential issued by the provider. The maximum length of an attribute mapping expression is 2048 characters. When evaluated, the total size of all mapped attributes must not exceed 4KB. For OIDC providers, you must supply a custom mapping that includes the `google.subject` attribute. For example, the following maps the `sub` claim of the incoming credential to the `subject` attribute on a Google token: ``` {\"google.subject\": \"assertion.sub\"} ```", "type": "object" }, "description": { @@ -4633,6 +4664,10 @@ ], "readOnly": true, "type": "string" +}, +"x509": { +"$ref": "X509", +"description": "An X.509-type identity provider." } }, "type": "object" @@ -4685,6 +4720,12 @@ } }, "type": "object" +}, +"X509": { +"description": "An X.509-type identity provider represents a CA. It is trusted to assert a client identity if the client has a certificate that chains up to this CA.", +"id": "X509", +"properties": {}, +"type": "object" } }, "servicePath": "", diff --git a/googleapiclient/discovery_cache/documents/iam.v2beta.json b/googleapiclient/discovery_cache/documents/iam.v2beta.json index 3a81af4b97..f5b53470a2 100644 --- a/googleapiclient/discovery_cache/documents/iam.v2beta.json +++ b/googleapiclient/discovery_cache/documents/iam.v2beta.json @@ -12,7 +12,7 @@ "baseUrl": "https://iam.googleapis.com/", "batchPath": "batch", "canonicalName": "Iam", -"description": "Manages identity and access control for Google Cloud Platform resources, including the creation of service accounts, which you can use to authenticate to Google and make API calls. 
", +"description": "Manages identity and access control for Google Cloud resources, including the creation of service accounts, which you can use to authenticate to Google and make API calls. Enabling this API also enables the IAM Service Account Credentials API (iamcredentials.googleapis.com). However, disabling this API doesn't disable the IAM Service Account Credentials API. ", "discoveryVersion": "v1", "documentationLink": "https://cloud.google.com/iam/", "fullyEncodeReservedExpansion": true, @@ -293,9 +293,40 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://iam.googleapis.com/", "schemas": { +"CloudControl2SharedOperationsReconciliationOperationMetadata": { +"description": "Operation metadata returned by the CLH during resource state reconciliation.", +"id": "CloudControl2SharedOperationsReconciliationOperationMetadata", +"properties": { +"deleteResource": { +"deprecated": true, +"description": "DEPRECATED. Use exclusive_action instead.", +"type": "boolean" +}, +"exclusiveAction": { +"description": "Excluisive action returned by the CLH.", +"enum": [ +"UNKNOWN_REPAIR_ACTION", +"DELETE", +"RETRY" +], +"enumDeprecated": [ +false, +true, +false +], +"enumDescriptions": [ +"Unknown repair action.", +"The resource has to be deleted. When using this bit, the CLH should fail the operation. DEPRECATED. Instead use DELETE_RESOURCE OperationSignal in SideChannel.", +"This resource could not be repaired but the repair should be tried again at a later time. This can happen if there is a dependency that needs to be resolved first- e.g. if a parent resource must be repaired before a child resource." +], +"type": "string" +} +}, +"type": "object" +}, "GoogleCloudCommonOperationMetadata": { "description": "Represents the metadata of the long-running operation.", "id": "GoogleCloudCommonOperationMetadata", diff --git a/googleapiclient/discovery_cache/documents/iamcredentials.v1.json b/googleapiclient/discovery_cache/documents/iamcredentials.v1.json index 2793425303..c81cd12d18 100644 --- a/googleapiclient/discovery_cache/documents/iamcredentials.v1.json +++ b/googleapiclient/discovery_cache/documents/iamcredentials.v1.json @@ -226,7 +226,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://iamcredentials.googleapis.com/", "schemas": { "GenerateAccessTokenRequest": { diff --git a/googleapiclient/discovery_cache/documents/iap.v1.json b/googleapiclient/discovery_cache/documents/iap.v1.json index d51c63487b..dd8b231fea 100644 --- a/googleapiclient/discovery_cache/documents/iap.v1.json +++ b/googleapiclient/discovery_cache/documents/iap.v1.json @@ -650,7 +650,7 @@ ] }, "validateAttributeExpression": { -"description": "Validates a given CEL expression conforms to IAP restrictions.", +"description": "Validates that a given CEL expression conforms to IAP restrictions.", "flatPath": "v1/{v1Id}:validateAttributeExpression", "httpMethod": "POST", "id": "iap.validateAttributeExpression", @@ -659,7 +659,7 @@ ], "parameters": { "expression": { -"description": "Required. User input string expression. Should be of the form 'attributes.saml_attributes.filter(attribute, attribute.name in ['{attribute_name}', '{attribute_name}'])'", +"description": "Required. User input string expression. 
Should be of the form `attributes.saml_attributes.filter(attribute, attribute.name in ['{attribute_name}', '{attribute_name}'])`", "location": "query", "type": "string" }, @@ -682,7 +682,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://iap.googleapis.com/", "schemas": { "AccessDeniedPageSettings": { diff --git a/googleapiclient/discovery_cache/documents/iap.v1beta1.json b/googleapiclient/discovery_cache/documents/iap.v1beta1.json index 5d23bc8297..810176d08a 100644 --- a/googleapiclient/discovery_cache/documents/iap.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/iap.v1beta1.json @@ -194,7 +194,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://iap.googleapis.com/", "schemas": { "Binding": { diff --git a/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json b/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json index f9b7469d79..ac5aba61a6 100644 --- a/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json +++ b/googleapiclient/discovery_cache/documents/identitytoolkit.v2.json @@ -1655,7 +1655,7 @@ } } }, -"revision": "20240307", +"revision": "20240313", "rootUrl": "https://identitytoolkit.googleapis.com/", "schemas": { "GoogleCloudIdentitytoolkitAdminV2AllowByDefault": { @@ -2638,11 +2638,10 @@ "type": "array" }, "recaptchaKeys": { -"description": "Output only. The reCAPTCHA keys.", +"description": "The reCAPTCHA keys.", "items": { "$ref": "GoogleCloudIdentitytoolkitAdminV2RecaptchaKey" }, -"readOnly": true, "type": "array" }, "useAccountDefender": { diff --git a/googleapiclient/discovery_cache/documents/indexing.v3.json b/googleapiclient/discovery_cache/documents/indexing.v3.json index a6b14a464f..5c1c0c4b58 100644 --- a/googleapiclient/discovery_cache/documents/indexing.v3.json +++ b/googleapiclient/discovery_cache/documents/indexing.v3.json @@ -149,7 +149,7 @@ } } }, -"revision": "20240312", +"revision": "20240319", "rootUrl": "https://indexing.googleapis.com/", "schemas": { "PublishUrlNotificationResponse": { diff --git a/googleapiclient/discovery_cache/documents/integrations.v1alpha.json b/googleapiclient/discovery_cache/documents/integrations.v1alpha.json index 1c8c7549a2..dd9385392a 100644 --- a/googleapiclient/discovery_cache/documents/integrations.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/integrations.v1alpha.json @@ -3299,7 +3299,7 @@ } } }, -"revision": "20240305", +"revision": "20240325", "rootUrl": "https://integrations.googleapis.com/", "schemas": { "CrmlogErrorCode": { diff --git a/googleapiclient/discovery_cache/documents/keep.v1.json b/googleapiclient/discovery_cache/documents/keep.v1.json index 5d6f73c0a3..005aedbf11 100644 --- a/googleapiclient/discovery_cache/documents/keep.v1.json +++ b/googleapiclient/discovery_cache/documents/keep.v1.json @@ -314,7 +314,7 @@ } } }, -"revision": "20240312", +"revision": "20240319", "rootUrl": "https://keep.googleapis.com/", "schemas": { "Attachment": { diff --git a/googleapiclient/discovery_cache/documents/kgsearch.v1.json b/googleapiclient/discovery_cache/documents/kgsearch.v1.json index 7632582426..6d95a3a285 100644 --- a/googleapiclient/discovery_cache/documents/kgsearch.v1.json +++ b/googleapiclient/discovery_cache/documents/kgsearch.v1.json @@ -151,7 +151,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://kgsearch.googleapis.com/", "schemas": { "SearchResponse": { diff --git a/googleapiclient/discovery_cache/documents/kmsinventory.v1.json 
b/googleapiclient/discovery_cache/documents/kmsinventory.v1.json index 21ecb4c59b..4cebd48c3f 100644 --- a/googleapiclient/discovery_cache/documents/kmsinventory.v1.json +++ b/googleapiclient/discovery_cache/documents/kmsinventory.v1.json @@ -242,7 +242,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://kmsinventory.googleapis.com/", "schemas": { "GoogleCloudKmsInventoryV1ListCryptoKeysResponse": { diff --git a/googleapiclient/discovery_cache/documents/language.v1.json b/googleapiclient/discovery_cache/documents/language.v1.json index 0ce4009442..6820e8188a 100644 --- a/googleapiclient/discovery_cache/documents/language.v1.json +++ b/googleapiclient/discovery_cache/documents/language.v1.json @@ -246,7 +246,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://language.googleapis.com/", "schemas": { "AnalyzeEntitiesRequest": { diff --git a/googleapiclient/discovery_cache/documents/language.v1beta2.json b/googleapiclient/discovery_cache/documents/language.v1beta2.json index 84becc1e5c..1d4bc87be9 100644 --- a/googleapiclient/discovery_cache/documents/language.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/language.v1beta2.json @@ -246,7 +246,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://language.googleapis.com/", "schemas": { "AnalyzeEntitiesRequest": { diff --git a/googleapiclient/discovery_cache/documents/language.v2.json b/googleapiclient/discovery_cache/documents/language.v2.json index 5b33e20155..c92e3e6ea7 100644 --- a/googleapiclient/discovery_cache/documents/language.v2.json +++ b/googleapiclient/discovery_cache/documents/language.v2.json @@ -208,7 +208,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://language.googleapis.com/", "schemas": { "AnalyzeEntitiesRequest": { diff --git a/googleapiclient/discovery_cache/documents/libraryagent.v1.json b/googleapiclient/discovery_cache/documents/libraryagent.v1.json index fa4724b2da..ba8b8802ca 100644 --- a/googleapiclient/discovery_cache/documents/libraryagent.v1.json +++ b/googleapiclient/discovery_cache/documents/libraryagent.v1.json @@ -279,7 +279,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://libraryagent.googleapis.com/", "schemas": { "GoogleExampleLibraryagentV1Book": { diff --git a/googleapiclient/discovery_cache/documents/licensing.v1.json b/googleapiclient/discovery_cache/documents/licensing.v1.json index cc40d34cf6..8b2a8d4c1b 100644 --- a/googleapiclient/discovery_cache/documents/licensing.v1.json +++ b/googleapiclient/discovery_cache/documents/licensing.v1.json @@ -400,7 +400,7 @@ } } }, -"revision": "20240315", +"revision": "20240317", "rootUrl": "https://licensing.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json b/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json index d4b1f700cd..41c59d01e3 100644 --- a/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json +++ b/googleapiclient/discovery_cache/documents/lifesciences.v2beta.json @@ -312,7 +312,7 @@ } } }, -"revision": "20240313", +"revision": "20240315", "rootUrl": "https://lifesciences.googleapis.com/", "schemas": { "Accelerator": { diff --git a/googleapiclient/discovery_cache/documents/localservices.v1.json b/googleapiclient/discovery_cache/documents/localservices.v1.json index 1c94e841ca..f4d7f372d6 100644 --- a/googleapiclient/discovery_cache/documents/localservices.v1.json +++ 
b/googleapiclient/discovery_cache/documents/localservices.v1.json @@ -250,7 +250,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://localservices.googleapis.com/", "schemas": { "GoogleAdsHomeservicesLocalservicesV1AccountReport": { diff --git a/googleapiclient/discovery_cache/documents/logging.v2.json b/googleapiclient/discovery_cache/documents/logging.v2.json index fee32415da..f8f73a8850 100644 --- a/googleapiclient/discovery_cache/documents/logging.v2.json +++ b/googleapiclient/discovery_cache/documents/logging.v2.json @@ -7768,7 +7768,7 @@ } } }, -"revision": "20240311", +"revision": "20240312", "rootUrl": "https://logging.googleapis.com/", "schemas": { "BigQueryDataset": { diff --git a/googleapiclient/discovery_cache/documents/looker.v1.json b/googleapiclient/discovery_cache/documents/looker.v1.json index 025a46c1e4..54db4180ce 100644 --- a/googleapiclient/discovery_cache/documents/looker.v1.json +++ b/googleapiclient/discovery_cache/documents/looker.v1.json @@ -731,7 +731,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://looker.googleapis.com/", "schemas": { "AdminSettings": { diff --git a/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json b/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json index e71c85d6c1..77a71826ed 100644 --- a/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/marketingplatformadmin.v1alpha.json @@ -235,7 +235,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://marketingplatformadmin.googleapis.com/", "schemas": { "AnalyticsAccountLink": { diff --git a/googleapiclient/discovery_cache/documents/metastore.v1.json b/googleapiclient/discovery_cache/documents/metastore.v1.json index 46ede86652..1e4116ec94 100644 --- a/googleapiclient/discovery_cache/documents/metastore.v1.json +++ b/googleapiclient/discovery_cache/documents/metastore.v1.json @@ -1359,40 +1359,6 @@ ] } } -}, -"migrationExecutions": { -"methods": { -"delete": { -"description": "Deletes a single migration execution.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/services/{servicesId}/migrationExecutions/{migrationExecutionsId}", -"httpMethod": "DELETE", -"id": "metastore.projects.locations.services.migrationExecutions.delete", -"parameterOrder": [ -"name" -], -"parameters": { -"name": { -"description": "Required. The relative resource name of the migrationExecution to delete, in the following form:projects/{project_number}/locations/{location_id}/services/{service_id}/migrationExecutions/{migration_execution_id}.", -"location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/services/[^/]+/migrationExecutions/[^/]+$", -"required": true, -"type": "string" -}, -"requestId": { -"description": "Optional. A request ID. Specify a unique request ID to allow the server to ignore the request if it has completed. 
The server will ignore subsequent requests that provide a duplicate request ID for at least 60 minutes after the first request.For example, if an initial request times out, followed by another request with the same request ID, the server ignores the second request to prevent the creation of duplicate commitments.The request ID must be a valid UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier#Format) A zero UUID (00000000-0000-0000-0000-000000000000) is not supported.", -"location": "query", -"type": "string" -} -}, -"path": "v1/{+name}", -"response": { -"$ref": "Operation" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform" -] -} -} } } } @@ -1401,7 +1367,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://metastore.googleapis.com/", "schemas": { "AlterMetadataResourceLocationRequest": { diff --git a/googleapiclient/discovery_cache/documents/metastore.v1alpha.json b/googleapiclient/discovery_cache/documents/metastore.v1alpha.json index 155f1ae8f6..b983adf6ce 100644 --- a/googleapiclient/discovery_cache/documents/metastore.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/metastore.v1alpha.json @@ -1599,40 +1599,6 @@ ] } } -}, -"migrationExecutions": { -"methods": { -"delete": { -"description": "Deletes a single migration execution.", -"flatPath": "v1alpha/projects/{projectsId}/locations/{locationsId}/services/{servicesId}/migrationExecutions/{migrationExecutionsId}", -"httpMethod": "DELETE", -"id": "metastore.projects.locations.services.migrationExecutions.delete", -"parameterOrder": [ -"name" -], -"parameters": { -"name": { -"description": "Required. The relative resource name of the migrationExecution to delete, in the following form:projects/{project_number}/locations/{location_id}/services/{service_id}/migrationExecutions/{migration_execution_id}.", -"location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/services/[^/]+/migrationExecutions/[^/]+$", -"required": true, -"type": "string" -}, -"requestId": { -"description": "Optional. A request ID. Specify a unique request ID to allow the server to ignore the request if it has completed. 
The server will ignore subsequent requests that provide a duplicate request ID for at least 60 minutes after the first request.For example, if an initial request times out, followed by another request with the same request ID, the server ignores the second request to prevent the creation of duplicate commitments.The request ID must be a valid UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier#Format) A zero UUID (00000000-0000-0000-0000-000000000000) is not supported.", -"location": "query", -"type": "string" -} -}, -"path": "v1alpha/{+name}", -"response": { -"$ref": "Operation" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform" -] -} -} } } } @@ -1641,7 +1607,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://metastore.googleapis.com/", "schemas": { "AlterMetadataResourceLocationRequest": { diff --git a/googleapiclient/discovery_cache/documents/metastore.v1beta.json b/googleapiclient/discovery_cache/documents/metastore.v1beta.json index 9c14af0440..b5d6233f2e 100644 --- a/googleapiclient/discovery_cache/documents/metastore.v1beta.json +++ b/googleapiclient/discovery_cache/documents/metastore.v1beta.json @@ -1599,40 +1599,6 @@ ] } } -}, -"migrationExecutions": { -"methods": { -"delete": { -"description": "Deletes a single migration execution.", -"flatPath": "v1beta/projects/{projectsId}/locations/{locationsId}/services/{servicesId}/migrationExecutions/{migrationExecutionsId}", -"httpMethod": "DELETE", -"id": "metastore.projects.locations.services.migrationExecutions.delete", -"parameterOrder": [ -"name" -], -"parameters": { -"name": { -"description": "Required. The relative resource name of the migrationExecution to delete, in the following form:projects/{project_number}/locations/{location_id}/services/{service_id}/migrationExecutions/{migration_execution_id}.", -"location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/services/[^/]+/migrationExecutions/[^/]+$", -"required": true, -"type": "string" -}, -"requestId": { -"description": "Optional. A request ID. Specify a unique request ID to allow the server to ignore the request if it has completed. 
The server will ignore subsequent requests that provide a duplicate request ID for at least 60 minutes after the first request.For example, if an initial request times out, followed by another request with the same request ID, the server ignores the second request to prevent the creation of duplicate commitments.The request ID must be a valid UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier#Format) A zero UUID (00000000-0000-0000-0000-000000000000) is not supported.", -"location": "query", -"type": "string" -} -}, -"path": "v1beta/{+name}", -"response": { -"$ref": "Operation" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform" -] -} -} } } } @@ -1641,7 +1607,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://metastore.googleapis.com/", "schemas": { "AlterMetadataResourceLocationRequest": { diff --git a/googleapiclient/discovery_cache/documents/migrationcenter.v1.json b/googleapiclient/discovery_cache/documents/migrationcenter.v1.json index 7cd29c53d0..0f3e8479b0 100644 --- a/googleapiclient/discovery_cache/documents/migrationcenter.v1.json +++ b/googleapiclient/discovery_cache/documents/migrationcenter.v1.json @@ -2099,7 +2099,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://migrationcenter.googleapis.com/", "schemas": { "AddAssetsToGroupRequest": { diff --git a/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json b/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json index 8434a32460..34b7a15c24 100644 --- a/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/migrationcenter.v1alpha1.json @@ -2107,7 +2107,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://migrationcenter.googleapis.com/", "schemas": { "AddAssetsToGroupRequest": { diff --git a/googleapiclient/discovery_cache/documents/monitoring.v1.json b/googleapiclient/discovery_cache/documents/monitoring.v1.json index ad155bf7b9..0ebb94052a 100644 --- a/googleapiclient/discovery_cache/documents/monitoring.v1.json +++ b/googleapiclient/discovery_cache/documents/monitoring.v1.json @@ -753,7 +753,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://monitoring.googleapis.com/", "schemas": { "Aggregation": { diff --git a/googleapiclient/discovery_cache/documents/monitoring.v3.json b/googleapiclient/discovery_cache/documents/monitoring.v3.json index 15061b7a55..a2a63397f2 100644 --- a/googleapiclient/discovery_cache/documents/monitoring.v3.json +++ b/googleapiclient/discovery_cache/documents/monitoring.v3.json @@ -2714,7 +2714,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://monitoring.googleapis.com/", "schemas": { "Aggregation": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json b/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json index 6ac2022d11..33bc3349c3 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessaccountmanagement.v1.json @@ -530,7 +530,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://mybusinessaccountmanagement.googleapis.com/", "schemas": { "AcceptInvitationRequest": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json b/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json 
index a35bd13993..86d95c8a5c 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessbusinessinformation.v1.json @@ -612,7 +612,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://mybusinessbusinessinformation.googleapis.com/", "schemas": { "AdWordsLocationExtensions": { diff --git a/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json b/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json index 85b3430289..f271d5b564 100644 --- a/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinesslodging.v1.json @@ -194,7 +194,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://mybusinesslodging.googleapis.com/", "schemas": { "Accessibility": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json b/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json index c34b3e4a5c..51f9220c55 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessnotifications.v1.json @@ -154,7 +154,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://mybusinessnotifications.googleapis.com/", "schemas": { "NotificationSetting": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json b/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json index 686faee843..530ade3285 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessplaceactions.v1.json @@ -281,7 +281,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://mybusinessplaceactions.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json b/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json index 016f381712..2d6f2d71c4 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessqanda.v1.json @@ -323,7 +323,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://mybusinessqanda.googleapis.com/", "schemas": { "Answer": { diff --git a/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json b/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json index 3ffa3a375c..dc757be6bf 100644 --- a/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json +++ b/googleapiclient/discovery_cache/documents/mybusinessverifications.v1.json @@ -237,7 +237,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://mybusinessverifications.googleapis.com/", "schemas": { "AddressVerificationData": { diff --git a/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json b/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json index 9742c37ac8..9edade32fd 100644 --- a/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json +++ b/googleapiclient/discovery_cache/documents/networkconnectivity.v1.json @@ -2630,7 +2630,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://networkconnectivity.googleapis.com/", "schemas": { "AcceptHubSpokeRequest": { diff --git 
a/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json b/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json index 4345594a84..9ddbff0469 100644 --- a/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/networkconnectivity.v1alpha1.json @@ -1116,7 +1116,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://networkconnectivity.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/networkmanagement.v1.json b/googleapiclient/discovery_cache/documents/networkmanagement.v1.json index e5fe544f2a..161b77612d 100644 --- a/googleapiclient/discovery_cache/documents/networkmanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/networkmanagement.v1.json @@ -591,7 +591,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://networkmanagement.googleapis.com/", "schemas": { "AbortInfo": { diff --git a/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json b/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json index 7c0850677d..6b4fdbf7a2 100644 --- a/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/networkmanagement.v1beta1.json @@ -591,7 +591,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://networkmanagement.googleapis.com/", "schemas": { "AbortInfo": { diff --git a/googleapiclient/discovery_cache/documents/networksecurity.v1.json b/googleapiclient/discovery_cache/documents/networksecurity.v1.json index a7a319adb6..10a45b2d35 100644 --- a/googleapiclient/discovery_cache/documents/networksecurity.v1.json +++ b/googleapiclient/discovery_cache/documents/networksecurity.v1.json @@ -3162,7 +3162,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://networksecurity.googleapis.com/", "schemas": { "AddAddressGroupItemsRequest": { diff --git a/googleapiclient/discovery_cache/documents/networksecurity.v1beta1.json b/googleapiclient/discovery_cache/documents/networksecurity.v1beta1.json index 412bc74530..7b4653fd97 100644 --- a/googleapiclient/discovery_cache/documents/networksecurity.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/networksecurity.v1beta1.json @@ -3162,7 +3162,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://networksecurity.googleapis.com/", "schemas": { "AddAddressGroupItemsRequest": { diff --git a/googleapiclient/discovery_cache/documents/networkservices.v1.json b/googleapiclient/discovery_cache/documents/networkservices.v1.json index 9678e22866..950c96cfc0 100644 --- a/googleapiclient/discovery_cache/documents/networkservices.v1.json +++ b/googleapiclient/discovery_cache/documents/networkservices.v1.json @@ -2756,7 +2756,7 @@ } } }, -"revision": "20240306", +"revision": "20240315", "rootUrl": "https://networkservices.googleapis.com/", "schemas": { "AuditConfig": { @@ -3073,7 +3073,7 @@ "id": "ExtensionChainMatchCondition", "properties": { "celExpression": { -"description": "Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference).", +"description": "Required. 
A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference).", "type": "string" } }, @@ -4091,7 +4091,7 @@ "additionalProperties": { "type": "string" }, -"description": "Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources.", +"description": "Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources.", "type": "object" }, "loadBalancingScheme": { @@ -4153,7 +4153,7 @@ "additionalProperties": { "type": "string" }, -"description": "Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources.", +"description": "Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources.", "type": "object" }, "loadBalancingScheme": { diff --git a/googleapiclient/discovery_cache/documents/networkservices.v1beta1.json b/googleapiclient/discovery_cache/documents/networkservices.v1beta1.json index 940ee98dc8..4d7766aa17 100644 --- a/googleapiclient/discovery_cache/documents/networkservices.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/networkservices.v1beta1.json @@ -2483,7 +2483,7 @@ } } }, -"revision": "20240306", +"revision": "20240315", "rootUrl": "https://networkservices.googleapis.com/", "schemas": { "AuditConfig": { @@ -2757,7 +2757,7 @@ "id": "ExtensionChainMatchCondition", "properties": { "celExpression": { -"description": "Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](/service-extensions/docs/cel-matcher-language-reference).", +"description": "Required. A Common Expression Language (CEL) expression that is used to match requests for which the extension chain is executed. For more information, see [CEL matcher language reference](https://cloud.google.com/service-extensions/docs/cel-matcher-language-reference).", "type": "string" } }, @@ -3775,7 +3775,7 @@ "additionalProperties": { "type": "string" }, -"description": "Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources.", +"description": "Optional. Set of labels associated with the `LbRouteExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources.", "type": "object" }, "loadBalancingScheme": { @@ -3837,7 +3837,7 @@ "additionalProperties": { "type": "string" }, -"description": "Optional. Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](/compute/docs/labeling-resources#requirements) for Google Cloud resources.", +"description": "Optional. 
Set of labels associated with the `LbTrafficExtension` resource. The format must comply with [the requirements for labels](https://cloud.google.com/compute/docs/labeling-resources#requirements) for Google Cloud resources.", "type": "object" }, "loadBalancingScheme": { diff --git a/googleapiclient/discovery_cache/documents/notebooks.v1.json b/googleapiclient/discovery_cache/documents/notebooks.v1.json index 981dafca79..fadadb9849 100644 --- a/googleapiclient/discovery_cache/documents/notebooks.v1.json +++ b/googleapiclient/discovery_cache/documents/notebooks.v1.json @@ -2008,7 +2008,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://notebooks.googleapis.com/", "schemas": { "AcceleratorConfig": { @@ -2031,6 +2031,7 @@ "NVIDIA_TESLA_T4", "NVIDIA_TESLA_A100", "NVIDIA_L4", +"NVIDIA_A100_80GB", "NVIDIA_TESLA_T4_VWS", "NVIDIA_TESLA_P100_VWS", "NVIDIA_TESLA_P4_VWS", @@ -2046,6 +2047,7 @@ "Accelerator type is Nvidia Tesla T4.", "Accelerator type is Nvidia Tesla A100.", "Accelerator type is Nvidia Tesla L4.", +"Accelerator type is Nvidia Tesla A100 80GB.", "Accelerator type is NVIDIA Tesla T4 Virtual Workstations.", "Accelerator type is NVIDIA Tesla P100 Virtual Workstations.", "Accelerator type is NVIDIA Tesla P4 Virtual Workstations.", @@ -4052,6 +4054,7 @@ false "NVIDIA_TESLA_T4", "NVIDIA_TESLA_A100", "NVIDIA_L4", +"NVIDIA_A100_80GB", "NVIDIA_TESLA_T4_VWS", "NVIDIA_TESLA_P100_VWS", "NVIDIA_TESLA_P4_VWS", @@ -4067,6 +4070,7 @@ false "Accelerator type is Nvidia Tesla T4.", "Accelerator type is Nvidia Tesla A100.", "Accelerator type is Nvidia Tesla L4.", +"Accelerator type is Nvidia Tesla A100 80GB.", "Accelerator type is NVIDIA Tesla T4 Virtual Workstations.", "Accelerator type is NVIDIA Tesla P100 Virtual Workstations.", "Accelerator type is NVIDIA Tesla P4 Virtual Workstations.", diff --git a/googleapiclient/discovery_cache/documents/notebooks.v2.json b/googleapiclient/discovery_cache/documents/notebooks.v2.json index 26e0977a53..4c1645a17d 100644 --- a/googleapiclient/discovery_cache/documents/notebooks.v2.json +++ b/googleapiclient/discovery_cache/documents/notebooks.v2.json @@ -876,7 +876,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://notebooks.googleapis.com/", "schemas": { "AcceleratorConfig": { diff --git a/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json b/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json index 3bd7843841..6619a2648e 100644 --- a/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json +++ b/googleapiclient/discovery_cache/documents/ondemandscanning.v1.json @@ -339,7 +339,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://ondemandscanning.googleapis.com/", "schemas": { "AliasContext": { diff --git a/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json b/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json index 5e8747948a..8f6161000a 100644 --- a/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/ondemandscanning.v1beta1.json @@ -339,7 +339,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://ondemandscanning.googleapis.com/", "schemas": { "AliasContext": { diff --git a/googleapiclient/discovery_cache/documents/orgpolicy.v2.json b/googleapiclient/discovery_cache/documents/orgpolicy.v2.json index 70d06e24fa..53d3688eb1 100644 --- a/googleapiclient/discovery_cache/documents/orgpolicy.v2.json +++ 
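The notebooks.v1 and notebooks.v2 hunks above add an `NVIDIA_A100_80GB` value to the `AcceleratorConfig` accelerator-type enum. Below is a minimal sketch of requesting that accelerator when creating a notebooks v1 instance through the generated Python client; the project, zone, instance ID, machine type, and VM image are placeholders, and the exact set of `Instance` fields your environment requires may differ.

```python
from googleapiclient import discovery

# Sketch: create a notebooks v1 instance that requests the newly added
# NVIDIA_A100_80GB accelerator type. All resource names are placeholders.
service = discovery.build("notebooks", "v1")

parent = "projects/my-project/locations/us-central1-a"
body = {
    "machineType": "a2-ultragpu-1g",
    "vmImage": {
        "project": "deeplearning-platform-release",
        "imageFamily": "common-cu121-notebooks",
    },
    "acceleratorConfig": {
        "type": "NVIDIA_A100_80GB",  # enum value introduced in this revision
        "coreCount": "1",
    },
}

op = (
    service.projects()
    .locations()
    .instances()
    .create(parent=parent, instanceId="my-instance", body=body)
    .execute()
)
print(op["name"])
```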
b/googleapiclient/discovery_cache/documents/orgpolicy.v2.json @@ -930,7 +930,7 @@ } } }, -"revision": "20240310", +"revision": "20240318", "rootUrl": "https://orgpolicy.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/osconfig.v1.json b/googleapiclient/discovery_cache/documents/osconfig.v1.json index 76040657ed..017268ce89 100644 --- a/googleapiclient/discovery_cache/documents/osconfig.v1.json +++ b/googleapiclient/discovery_cache/documents/osconfig.v1.json @@ -1063,7 +1063,7 @@ } } }, -"revision": "20240314", +"revision": "20240317", "rootUrl": "https://osconfig.googleapis.com/", "schemas": { "AptSettings": { diff --git a/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json b/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json index 0e54e9f5d9..8ca1159f81 100644 --- a/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/osconfig.v1alpha.json @@ -687,7 +687,7 @@ } } }, -"revision": "20240314", +"revision": "20240317", "rootUrl": "https://osconfig.googleapis.com/", "schemas": { "CVSSv3": { diff --git a/googleapiclient/discovery_cache/documents/osconfig.v1beta.json b/googleapiclient/discovery_cache/documents/osconfig.v1beta.json index 09d1fe3000..e08022d0cd 100644 --- a/googleapiclient/discovery_cache/documents/osconfig.v1beta.json +++ b/googleapiclient/discovery_cache/documents/osconfig.v1beta.json @@ -689,7 +689,7 @@ } } }, -"revision": "20240314", +"revision": "20240324", "rootUrl": "https://osconfig.googleapis.com/", "schemas": { "AptRepository": { diff --git a/googleapiclient/discovery_cache/documents/oslogin.v1alpha.json b/googleapiclient/discovery_cache/documents/oslogin.v1alpha.json index 1a52bacff2..ec1f270aaa 100644 --- a/googleapiclient/discovery_cache/documents/oslogin.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/oslogin.v1alpha.json @@ -477,7 +477,7 @@ } } }, -"revision": "20240303", +"revision": "20240308", "rootUrl": "https://oslogin.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/oslogin.v1beta.json b/googleapiclient/discovery_cache/documents/oslogin.v1beta.json index f1d5d31561..962b33307c 100644 --- a/googleapiclient/discovery_cache/documents/oslogin.v1beta.json +++ b/googleapiclient/discovery_cache/documents/oslogin.v1beta.json @@ -447,7 +447,7 @@ } } }, -"revision": "20240303", +"revision": "20240308", "rootUrl": "https://oslogin.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json b/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json index c5f31a667a..ef498a22bb 100644 --- a/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json +++ b/googleapiclient/discovery_cache/documents/pagespeedonline.v5.json @@ -193,7 +193,7 @@ } } }, -"revision": "20240315", +"revision": "20240322", "rootUrl": "https://pagespeedonline.googleapis.com/", "schemas": { "AuditRefs": { diff --git a/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json b/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json index 8c41077eb6..ece22e93ff 100644 --- a/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json +++ b/googleapiclient/discovery_cache/documents/paymentsresellersubscription.v1.json @@ -435,7 +435,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": 
"https://paymentsresellersubscription.googleapis.com/", "schemas": { "GoogleCloudPaymentsResellerSubscriptionV1Amount": { diff --git a/googleapiclient/discovery_cache/documents/people.v1.json b/googleapiclient/discovery_cache/documents/people.v1.json index 9ba1dfb210..4117eb6062 100644 --- a/googleapiclient/discovery_cache/documents/people.v1.json +++ b/googleapiclient/discovery_cache/documents/people.v1.json @@ -1190,7 +1190,7 @@ } } }, -"revision": "20240317", +"revision": "20240320", "rootUrl": "https://people.googleapis.com/", "schemas": { "Address": { diff --git a/googleapiclient/discovery_cache/documents/places.v1.json b/googleapiclient/discovery_cache/documents/places.v1.json index 6757058402..196fa6cc57 100644 --- a/googleapiclient/discovery_cache/documents/places.v1.json +++ b/googleapiclient/discovery_cache/documents/places.v1.json @@ -276,7 +276,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://places.googleapis.com/", "schemas": { "GoogleGeoTypeViewport": { diff --git a/googleapiclient/discovery_cache/documents/playcustomapp.v1.json b/googleapiclient/discovery_cache/documents/playcustomapp.v1.json index d94aa40197..836aaa312c 100644 --- a/googleapiclient/discovery_cache/documents/playcustomapp.v1.json +++ b/googleapiclient/discovery_cache/documents/playcustomapp.v1.json @@ -158,7 +158,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://playcustomapp.googleapis.com/", "schemas": { "CustomApp": { diff --git a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json index bea2b58731..bea72b3f5c 100644 --- a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1alpha1.json @@ -947,7 +947,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://playdeveloperreporting.googleapis.com/", "schemas": { "GooglePlayDeveloperReportingV1alpha1Anomaly": { diff --git a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json index e4471d39dc..8eb12de351 100644 --- a/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/playdeveloperreporting.v1beta1.json @@ -947,7 +947,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://playdeveloperreporting.googleapis.com/", "schemas": { "GooglePlayDeveloperReportingV1beta1Anomaly": { diff --git a/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json b/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json index cf3c3a1721..3311733278 100644 --- a/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/playgrouping.v1alpha1.json @@ -177,7 +177,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://playgrouping.googleapis.com/", "schemas": { "CreateOrUpdateTagsRequest": { diff --git a/googleapiclient/discovery_cache/documents/playintegrity.v1.json b/googleapiclient/discovery_cache/documents/playintegrity.v1.json index 34f11565fd..3bb8fe1bd0 100644 --- a/googleapiclient/discovery_cache/documents/playintegrity.v1.json +++ b/googleapiclient/discovery_cache/documents/playintegrity.v1.json @@ -138,7 +138,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", 
"rootUrl": "https://playintegrity.googleapis.com/", "schemas": { "AccountActivity": { diff --git a/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json b/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json index 418da70f7a..242ab2cbe8 100644 --- a/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json +++ b/googleapiclient/discovery_cache/documents/policyanalyzer.v1.json @@ -163,7 +163,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://policyanalyzer.googleapis.com/", "schemas": { "GoogleCloudPolicyanalyzerV1Activity": { diff --git a/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json b/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json index 788d058058..5960de006e 100644 --- a/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/policyanalyzer.v1beta1.json @@ -163,7 +163,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://policyanalyzer.googleapis.com/", "schemas": { "GoogleCloudPolicyanalyzerV1beta1Activity": { diff --git a/googleapiclient/discovery_cache/documents/policysimulator.v1.json b/googleapiclient/discovery_cache/documents/policysimulator.v1.json index 0efdbbc591..b2d83da3fc 100644 --- a/googleapiclient/discovery_cache/documents/policysimulator.v1.json +++ b/googleapiclient/discovery_cache/documents/policysimulator.v1.json @@ -942,7 +942,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://policysimulator.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json b/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json index cac710173b..c6653ceb19 100644 --- a/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json +++ b/googleapiclient/discovery_cache/documents/policysimulator.v1alpha.json @@ -1078,7 +1078,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://policysimulator.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json b/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json index 1eb3d5c8cb..fd27cec6d7 100644 --- a/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json +++ b/googleapiclient/discovery_cache/documents/policysimulator.v1beta.json @@ -1078,7 +1078,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://policysimulator.googleapis.com/", "schemas": { "GoogleCloudOrgpolicyV2AlternatePolicySpec": { diff --git a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json index a682f6cb4a..aff493a2ce 100644 --- a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json +++ b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://policytroubleshooter.googleapis.com/", "schemas": { "GoogleCloudPolicytroubleshooterV1AccessTuple": { diff --git a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json index 29511d8f5a..cfff3b8b52 100644 --- a/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json +++ 
b/googleapiclient/discovery_cache/documents/policytroubleshooter.v1beta.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://policytroubleshooter.googleapis.com/", "schemas": { "GoogleCloudPolicytroubleshooterV1betaAccessTuple": { diff --git a/googleapiclient/discovery_cache/documents/privateca.v1.json b/googleapiclient/discovery_cache/documents/privateca.v1.json index b6451e1e52..02b47d7940 100644 --- a/googleapiclient/discovery_cache/documents/privateca.v1.json +++ b/googleapiclient/discovery_cache/documents/privateca.v1.json @@ -1605,7 +1605,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://privateca.googleapis.com/", "schemas": { "AccessUrls": { @@ -2059,7 +2059,7 @@ "id": "CertificateConfigKeyId", "properties": { "keyId": { -"description": "Optional. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key.", +"description": "Required. The value of this KeyId encoded in lowercase hexadecimal. This is most likely the 160 bit SHA-1 hash of the public key.", "type": "string" } }, diff --git a/googleapiclient/discovery_cache/documents/privateca.v1beta1.json b/googleapiclient/discovery_cache/documents/privateca.v1beta1.json index 0e2fe8d33e..dfb10f58c2 100644 --- a/googleapiclient/discovery_cache/documents/privateca.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/privateca.v1beta1.json @@ -580,7 +580,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": "https://privateca.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json b/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json index da68cb43a6..6c8fbee775 100644 --- a/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/prod_tt_sasportal.v1alpha1.json @@ -2653,7 +2653,7 @@ } } }, -"revision": "20240303", +"revision": "20240318", "rootUrl": "https://prod-tt-sasportal.googleapis.com/", "schemas": { "SasPortalAssignment": { diff --git a/googleapiclient/discovery_cache/documents/publicca.v1.json b/googleapiclient/discovery_cache/documents/publicca.v1.json index b48a9ccd71..b7f5276b5e 100644 --- a/googleapiclient/discovery_cache/documents/publicca.v1.json +++ b/googleapiclient/discovery_cache/documents/publicca.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://publicca.googleapis.com/", "schemas": { "ExternalAccountKey": { diff --git a/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json b/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json index 4bfaca0d4d..fb4d36c060 100644 --- a/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/publicca.v1alpha1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://publicca.googleapis.com/", "schemas": { "ExternalAccountKey": { diff --git a/googleapiclient/discovery_cache/documents/publicca.v1beta1.json b/googleapiclient/discovery_cache/documents/publicca.v1beta1.json index 4bf828e0ba..bb37a2b6d7 100644 --- a/googleapiclient/discovery_cache/documents/publicca.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/publicca.v1beta1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240311", +"revision": "20240318", "rootUrl": "https://publicca.googleapis.com/", "schemas": { 
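The privateca.v1 hunk above changes `CertificateConfigKeyId.keyId` from optional to required and describes it as a lowercase-hexadecimal value, "most likely the 160 bit SHA-1 hash of the public key". The sketch below derives such a value with `hashlib` and the `cryptography` package; hashing the DER-encoded SubjectPublicKeyInfo (rather than only the inner key bits) is an assumption, so treat the exact input bytes as illustrative.

```python
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Sketch: compute a lowercase-hex SHA-1 key identifier from a certificate's
# public key. Hashing the full DER SubjectPublicKeyInfo is an assumption; the
# discovery description only says "most likely the 160 bit SHA-1 hash".
with open("leaf.pem", "rb") as f:  # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

spki_der = cert.public_key().public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
key_id = hashlib.sha1(spki_der).hexdigest()  # lowercase hexadecimal
print(key_id)
```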
"ExternalAccountKey": { diff --git a/googleapiclient/discovery_cache/documents/pubsub.v1.json b/googleapiclient/discovery_cache/documents/pubsub.v1.json index faef994581..882f875816 100644 --- a/googleapiclient/discovery_cache/documents/pubsub.v1.json +++ b/googleapiclient/discovery_cache/documents/pubsub.v1.json @@ -1583,7 +1583,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://pubsub.googleapis.com/", "schemas": { "AcknowledgeRequest": { diff --git a/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json b/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json index 0583f0c55f..b044aa704f 100644 --- a/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json +++ b/googleapiclient/discovery_cache/documents/pubsub.v1beta1a.json @@ -474,7 +474,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://pubsub.googleapis.com/", "schemas": { "AcknowledgeRequest": { diff --git a/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json b/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json index 869b0c2be0..66c52a9e84 100644 --- a/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/pubsub.v1beta2.json @@ -741,7 +741,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://pubsub.googleapis.com/", "schemas": { "AcknowledgeRequest": { diff --git a/googleapiclient/discovery_cache/documents/pubsublite.v1.json b/googleapiclient/discovery_cache/documents/pubsublite.v1.json index b4019625b9..5bdb617937 100644 --- a/googleapiclient/discovery_cache/documents/pubsublite.v1.json +++ b/googleapiclient/discovery_cache/documents/pubsublite.v1.json @@ -1040,7 +1040,7 @@ } } }, -"revision": "20240301", +"revision": "20240315", "rootUrl": "https://pubsublite.googleapis.com/", "schemas": { "CancelOperationRequest": { diff --git a/googleapiclient/discovery_cache/documents/rapidmigrationassessment.v1.json b/googleapiclient/discovery_cache/documents/rapidmigrationassessment.v1.json index 3f648253d0..dffc39a61c 100644 --- a/googleapiclient/discovery_cache/documents/rapidmigrationassessment.v1.json +++ b/googleapiclient/discovery_cache/documents/rapidmigrationassessment.v1.json @@ -633,7 +633,7 @@ } } }, -"revision": "20240223", +"revision": "20240321", "rootUrl": "https://rapidmigrationassessment.googleapis.com/", "schemas": { "Annotation": { diff --git a/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json b/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json index 3c3a6ffb17..307b85a9dd 100644 --- a/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json +++ b/googleapiclient/discovery_cache/documents/readerrevenuesubscriptionlinking.v1.json @@ -207,7 +207,7 @@ } } }, -"revision": "20240312", +"revision": "20240324", "rootUrl": "https://readerrevenuesubscriptionlinking.googleapis.com/", "schemas": { "DeleteReaderResponse": { diff --git a/googleapiclient/discovery_cache/documents/realtimebidding.v1.json b/googleapiclient/discovery_cache/documents/realtimebidding.v1.json index 084bdc72ee..cdbb0109d9 100644 --- a/googleapiclient/discovery_cache/documents/realtimebidding.v1.json +++ b/googleapiclient/discovery_cache/documents/realtimebidding.v1.json @@ -1305,7 +1305,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://realtimebidding.googleapis.com/", "schemas": { "ActivatePretargetingConfigRequest": { @@ -2897,13 +2897,15 @@ 
"USER_ID_TYPE_UNSPECIFIED", "HOSTED_MATCH_DATA", "GOOGLE_COOKIE", -"DEVICE_ID" +"DEVICE_ID", +"PUBLISHER_PROVIDED_ID" ], "enumDescriptions": [ "Placeholder for unspecified user identifier.", "Hosted match data, referring to hosted_match_data in the bid request.", "Google cookie, referring to google_user_id in the bid request.", -"Mobile device advertising ID." +"Mobile device advertising ID.", +"The request has a publisher-provided ID available to the bidder." ], "type": "string" }, diff --git a/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json b/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json index bd89f881f8..f3216267fa 100644 --- a/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json +++ b/googleapiclient/discovery_cache/documents/recaptchaenterprise.v1.json @@ -694,7 +694,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://recaptchaenterprise.googleapis.com/", "schemas": { "GoogleCloudRecaptchaenterpriseV1AccountDefenderAssessment": { @@ -1415,7 +1415,7 @@ true "type": "object" }, "GoogleCloudRecaptchaenterpriseV1ListFirewallPoliciesResponse": { -"description": "Response to request to list firewall policies belonging to a key.", +"description": "Response to request to list firewall policies belonging to a project.", "id": "GoogleCloudRecaptchaenterpriseV1ListFirewallPoliciesResponse", "properties": { "firewallPolicies": { diff --git a/googleapiclient/discovery_cache/documents/recommender.v1.json b/googleapiclient/discovery_cache/documents/recommender.v1.json index 0fe874cc79..5500c3ab8d 100644 --- a/googleapiclient/discovery_cache/documents/recommender.v1.json +++ b/googleapiclient/discovery_cache/documents/recommender.v1.json @@ -1686,7 +1686,7 @@ } } }, -"revision": "20240305", +"revision": "20240317", "rootUrl": "https://recommender.googleapis.com/", "schemas": { "GoogleCloudRecommenderV1CostProjection": { diff --git a/googleapiclient/discovery_cache/documents/redis.v1.json b/googleapiclient/discovery_cache/documents/redis.v1.json index 7d3ee29e03..5ee5a8a292 100644 --- a/googleapiclient/discovery_cache/documents/redis.v1.json +++ b/googleapiclient/discovery_cache/documents/redis.v1.json @@ -821,9 +821,32 @@ } } }, -"revision": "20240307", +"revision": "20240319", "rootUrl": "https://redis.googleapis.com/", "schemas": { +"AOFConfig": { +"description": "Configuration of the AOF based persistence.", +"id": "AOFConfig", +"properties": { +"appendFsync": { +"description": "Optional. fsync configuration.", +"enum": [ +"APPEND_FSYNC_UNSPECIFIED", +"NO", +"EVERYSEC", +"ALWAYS" +], +"enumDescriptions": [ +"Not set. Default: EVERYSEC", +"Never fsync. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning.", +"fsync every second. Fast enough, and you may lose 1 second of data if there is a disaster", +"fsync every time new commands are appended to the AOF. It has the best data loss protection at the cost of performance" +], +"type": "string" +} +}, +"type": "object" +}, "AvailabilityConfiguration": { "description": "Configuration for availability of database instance", "id": "AvailabilityConfiguration", @@ -972,6 +995,10 @@ "description": "Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}`", "type": "string" }, +"persistenceConfig": { +"$ref": "ClusterPersistenceConfig", +"description": "Optional. 
Persistence config (RDB, AOF) for the cluster." +}, "pscConfigs": { "description": "Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported.", "items": { @@ -987,6 +1014,13 @@ "readOnly": true, "type": "array" }, +"redisConfigs": { +"additionalProperties": { +"type": "string" +}, +"description": "Optional. Key/Value pairs of customer overrides for mutable Redis Configs", +"type": "object" +}, "replicaCount": { "description": "Optional. The number of replica nodes per shard.", "format": "int32", @@ -1049,6 +1083,37 @@ }, "type": "object" }, +"ClusterPersistenceConfig": { +"description": "Configuration of the persistence functionality.", +"id": "ClusterPersistenceConfig", +"properties": { +"aofConfig": { +"$ref": "AOFConfig", +"description": "Optional. AOF configuration. This field will be ignored if mode is not AOF." +}, +"mode": { +"description": "Optional. The mode of persistence.", +"enum": [ +"PERSISTENCE_MODE_UNSPECIFIED", +"DISABLED", +"RDB", +"AOF" +], +"enumDescriptions": [ +"Not set.", +"Persistence is disabled, and any snapshot data is deleted.", +"RDB based persistence is enabled.", +"AOF based persistence is enabled." +], +"type": "string" +}, +"rdbConfig": { +"$ref": "RDBConfig", +"description": "Optional. RDB configuration. This field will be ignored if mode is not RDB." +} +}, +"type": "object" +}, "Compliance": { "description": "Contains compliance information about a security standard indicating unmet recommendations.", "id": "Compliance", @@ -1301,7 +1366,7 @@ "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", @@ -1780,7 +1845,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", @@ -2936,6 +3001,36 @@ false }, "type": "object" }, +"RDBConfig": { +"description": "Configuration of the RDB based persistence.", +"id": "RDBConfig", +"properties": { +"rdbSnapshotPeriod": { +"description": "Optional. Period between RDB snapshots.", +"enum": [ +"SNAPSHOT_PERIOD_UNSPECIFIED", +"ONE_HOUR", +"SIX_HOURS", +"TWELVE_HOURS", +"TWENTY_FOUR_HOURS" +], +"enumDescriptions": [ +"Not set.", +"One hour.", +"Six hours.", +"Twelve hours.", +"Twenty four hours." +], +"type": "string" +}, +"rdbSnapshotStartTime": { +"description": "Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. 
If not provided, the current time will be used.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, "ReconciliationOperationMetadata": { "description": "Operation metadata returned by the CLH during resource state reconciliation.", "id": "ReconciliationOperationMetadata", diff --git a/googleapiclient/discovery_cache/documents/redis.v1beta1.json b/googleapiclient/discovery_cache/documents/redis.v1beta1.json index 1173f6e8f2..ea925e33de 100644 --- a/googleapiclient/discovery_cache/documents/redis.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/redis.v1beta1.json @@ -821,9 +821,32 @@ } } }, -"revision": "20240307", +"revision": "20240319", "rootUrl": "https://redis.googleapis.com/", "schemas": { +"AOFConfig": { +"description": "Configuration of the AOF based persistence.", +"id": "AOFConfig", +"properties": { +"appendFsync": { +"description": "Optional. fsync configuration.", +"enum": [ +"APPEND_FSYNC_UNSPECIFIED", +"NO", +"EVERYSEC", +"ALWAYS" +], +"enumDescriptions": [ +"Not set. Default: EVERYSEC", +"Never fsync. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning.", +"fsync every second. Fast enough, and you may lose 1 second of data if there is a disaster", +"fsync every time new commands are appended to the AOF. It has the best data loss protection at the cost of performance" +], +"type": "string" +} +}, +"type": "object" +}, "AvailabilityConfiguration": { "description": "Configuration for availability of database instance", "id": "AvailabilityConfiguration", @@ -972,6 +995,10 @@ "description": "Required. Unique name of the resource in this scope including project and location using the form: `projects/{project_id}/locations/{location_id}/clusters/{cluster_id}`", "type": "string" }, +"persistenceConfig": { +"$ref": "ClusterPersistenceConfig", +"description": "Optional. Persistence config (RDB, AOF) for the cluster." +}, "pscConfigs": { "description": "Required. Each PscConfig configures the consumer network where IPs will be designated to the cluster for client access through Private Service Connect Automation. Currently, only one PscConfig is supported.", "items": { @@ -987,6 +1014,13 @@ "readOnly": true, "type": "array" }, +"redisConfigs": { +"additionalProperties": { +"type": "string" +}, +"description": "Optional. Key/Value pairs of customer overrides for mutable Redis Configs", +"type": "object" +}, "replicaCount": { "description": "Optional. The number of replica nodes per shard.", "format": "int32", @@ -1049,6 +1083,37 @@ }, "type": "object" }, +"ClusterPersistenceConfig": { +"description": "Configuration of the persistence functionality.", +"id": "ClusterPersistenceConfig", +"properties": { +"aofConfig": { +"$ref": "AOFConfig", +"description": "Optional. AOF configuration. This field will be ignored if mode is not AOF." +}, +"mode": { +"description": "Optional. The mode of persistence.", +"enum": [ +"PERSISTENCE_MODE_UNSPECIFIED", +"DISABLED", +"RDB", +"AOF" +], +"enumDescriptions": [ +"Not set.", +"Persistence is disabled, and any snapshot data is deleted.", +"RDB based persistence is enabled.", +"AOF based persistence is enabled." +], +"type": "string" +}, +"rdbConfig": { +"$ref": "RDBConfig", +"description": "Optional. RDB configuration. This field will be ignored if mode is not RDB." 
+} +}, +"type": "object" +}, "Compliance": { "description": "Contains compliance information about a security standard indicating unmet recommendations.", "id": "Compliance", @@ -1301,7 +1366,7 @@ "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", @@ -1780,7 +1845,7 @@ false "SIGNAL_TYPE_DATABASE_AUDITING_DISABLED", "SIGNAL_TYPE_RESTRICT_AUTHORIZED_NETWORKS", "SIGNAL_TYPE_VIOLATE_POLICY_RESTRICT_PUBLIC_IP", -"SIGNAL_TYPE_CLUSTER_QUOTA_LIMIT", +"SIGNAL_TYPE_QUOTA_LIMIT", "SIGNAL_TYPE_NO_PASSWORD_POLICY", "SIGNAL_TYPE_CONNECTIONS_PERFORMANCE_IMPACT", "SIGNAL_TYPE_TMP_TABLES_PERFORMANCE_IMPACT", @@ -2943,6 +3008,36 @@ false }, "type": "object" }, +"RDBConfig": { +"description": "Configuration of the RDB based persistence.", +"id": "RDBConfig", +"properties": { +"rdbSnapshotPeriod": { +"description": "Optional. Period between RDB snapshots.", +"enum": [ +"SNAPSHOT_PERIOD_UNSPECIFIED", +"ONE_HOUR", +"SIX_HOURS", +"TWELVE_HOURS", +"TWENTY_FOUR_HOURS" +], +"enumDescriptions": [ +"Not set.", +"One hour.", +"Six hours.", +"Twelve hours.", +"Twenty four hours." +], +"type": "string" +}, +"rdbSnapshotStartTime": { +"description": "Optional. The time that the first snapshot was/will be attempted, and to which future snapshots will be aligned. If not provided, the current time will be used.", +"format": "google-datetime", +"type": "string" +} +}, +"type": "object" +}, "ReconciliationOperationMetadata": { "description": "Operation metadata returned by the CLH during resource state reconciliation.", "id": "ReconciliationOperationMetadata", diff --git a/googleapiclient/discovery_cache/documents/resourcesettings.v1.json b/googleapiclient/discovery_cache/documents/resourcesettings.v1.json index 33f366d932..798fa5fe0d 100644 --- a/googleapiclient/discovery_cache/documents/resourcesettings.v1.json +++ b/googleapiclient/discovery_cache/documents/resourcesettings.v1.json @@ -499,7 +499,7 @@ } } }, -"revision": "20240313", +"revision": "20240324", "rootUrl": "https://resourcesettings.googleapis.com/", "schemas": { "GoogleCloudResourcesettingsV1ListSettingsResponse": { diff --git a/googleapiclient/discovery_cache/documents/retail.v2.json b/googleapiclient/discovery_cache/documents/retail.v2.json index 45e973a292..e777402242 100644 --- a/googleapiclient/discovery_cache/documents/retail.v2.json +++ b/googleapiclient/discovery_cache/documents/retail.v2.json @@ -2087,7 +2087,7 @@ } } }, -"revision": "20240315", +"revision": "20240319", "rootUrl": "https://retail.googleapis.com/", "schemas": { "GoogleApiHttpBody": { diff --git a/googleapiclient/discovery_cache/documents/retail.v2alpha.json b/googleapiclient/discovery_cache/documents/retail.v2alpha.json index d19dd4ca6d..523e734986 100644 --- a/googleapiclient/discovery_cache/documents/retail.v2alpha.json +++ b/googleapiclient/discovery_cache/documents/retail.v2alpha.json @@ -2475,7 +2475,7 @@ } } }, -"revision": "20240315", +"revision": "20240319", "rootUrl": "https://retail.googleapis.com/", "schemas": { "GoogleApiHttpBody": { diff --git a/googleapiclient/discovery_cache/documents/retail.v2beta.json b/googleapiclient/discovery_cache/documents/retail.v2beta.json index 91b920c74c..eeabf86291 100644 --- a/googleapiclient/discovery_cache/documents/retail.v2beta.json +++ 
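The redis.v1 and redis.v1beta1 hunks above add a `ClusterPersistenceConfig` (with `AOFConfig` and `RDBConfig`) and a `redisConfigs` map to the `Cluster` schema. A minimal sketch of a cluster create request carrying those new fields through the generated Python client follows; the parent, cluster ID, and the config override key are placeholders, the use of the standard `projects.locations.clusters.create` method is an assumption, and other required `Cluster` fields (such as `pscConfigs`) are elided for brevity.

```python
from googleapiclient import discovery

# Sketch: create a Memorystore for Redis cluster using the persistenceConfig
# and redisConfigs fields introduced in this revision. Resource names and the
# redisConfigs entry are placeholders; required fields like pscConfigs are
# omitted to keep the example short.
service = discovery.build("redis", "v1")

parent = "projects/my-project/locations/us-central1"
body = {
    "shardCount": 3,
    "persistenceConfig": {
        "mode": "RDB",
        "rdbConfig": {
            "rdbSnapshotPeriod": "SIX_HOURS",
            "rdbSnapshotStartTime": "2024-04-01T00:00:00Z",
        },
        # For AOF persistence instead:
        # {"mode": "AOF", "aofConfig": {"appendFsync": "EVERYSEC"}}
    },
    "redisConfigs": {"maxmemory-policy": "allkeys-lru"},
}

op = (
    service.projects()
    .locations()
    .clusters()
    .create(parent=parent, clusterId="my-cluster", body=body)
    .execute()
)
print(op["name"])
```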
b/googleapiclient/discovery_cache/documents/retail.v2beta.json @@ -2115,7 +2115,7 @@ } } }, -"revision": "20240315", +"revision": "20240319", "rootUrl": "https://retail.googleapis.com/", "schemas": { "GoogleApiHttpBody": { diff --git a/googleapiclient/discovery_cache/documents/run.v1.json b/googleapiclient/discovery_cache/documents/run.v1.json index f3db58c29a..9d9604a203 100644 --- a/googleapiclient/discovery_cache/documents/run.v1.json +++ b/googleapiclient/discovery_cache/documents/run.v1.json @@ -2614,7 +2614,7 @@ } } }, -"revision": "20240310", +"revision": "20240315", "rootUrl": "https://run.googleapis.com/", "schemas": { "Addressable": { diff --git a/googleapiclient/discovery_cache/documents/run.v2.json b/googleapiclient/discovery_cache/documents/run.v2.json index f7c430d14e..d59f4d334c 100644 --- a/googleapiclient/discovery_cache/documents/run.v2.json +++ b/googleapiclient/discovery_cache/documents/run.v2.json @@ -1323,7 +1323,7 @@ } } }, -"revision": "20240310", +"revision": "20240315", "rootUrl": "https://run.googleapis.com/", "schemas": { "GoogleCloudRunV2BinaryAuthorization": { diff --git a/googleapiclient/discovery_cache/documents/safebrowsing.v4.json b/googleapiclient/discovery_cache/documents/safebrowsing.v4.json index 029c9c0795..95258b383c 100644 --- a/googleapiclient/discovery_cache/documents/safebrowsing.v4.json +++ b/googleapiclient/discovery_cache/documents/safebrowsing.v4.json @@ -261,7 +261,7 @@ } } }, -"revision": "20240303", +"revision": "20240317", "rootUrl": "https://safebrowsing.googleapis.com/", "schemas": { "GoogleProtobufEmpty": { diff --git a/googleapiclient/discovery_cache/documents/safebrowsing.v5.json b/googleapiclient/discovery_cache/documents/safebrowsing.v5.json index c3e845e96a..be3a284d54 100644 --- a/googleapiclient/discovery_cache/documents/safebrowsing.v5.json +++ b/googleapiclient/discovery_cache/documents/safebrowsing.v5.json @@ -121,7 +121,7 @@ } } }, -"revision": "20240303", +"revision": "20240317", "rootUrl": "https://safebrowsing.googleapis.com/", "schemas": { "GoogleSecuritySafebrowsingV5FullHash": { diff --git a/googleapiclient/discovery_cache/documents/sasportal.v1alpha1.json b/googleapiclient/discovery_cache/documents/sasportal.v1alpha1.json index 89f9a50de8..5841763681 100644 --- a/googleapiclient/discovery_cache/documents/sasportal.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/sasportal.v1alpha1.json @@ -2652,7 +2652,7 @@ } } }, -"revision": "20240303", +"revision": "20240318", "rootUrl": "https://sasportal.googleapis.com/", "schemas": { "SasPortalAssignment": { diff --git a/googleapiclient/discovery_cache/documents/script.v1.json b/googleapiclient/discovery_cache/documents/script.v1.json index 6f1f5ef3f1..73bcfdf9a1 100644 --- a/googleapiclient/discovery_cache/documents/script.v1.json +++ b/googleapiclient/discovery_cache/documents/script.v1.json @@ -891,7 +891,7 @@ } } }, -"revision": "20240310", +"revision": "20240317", "rootUrl": "https://script.googleapis.com/", "schemas": { "Content": { diff --git a/googleapiclient/discovery_cache/documents/searchconsole.v1.json b/googleapiclient/discovery_cache/documents/searchconsole.v1.json index d7c7bfec6c..4e4b551e80 100644 --- a/googleapiclient/discovery_cache/documents/searchconsole.v1.json +++ b/googleapiclient/discovery_cache/documents/searchconsole.v1.json @@ -400,7 +400,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://searchconsole.googleapis.com/", "schemas": { "AmpInspectionResult": { diff --git 
a/googleapiclient/discovery_cache/documents/secretmanager.v1.json b/googleapiclient/discovery_cache/documents/secretmanager.v1.json index f1f845f5e4..f5303542e3 100644 --- a/googleapiclient/discovery_cache/documents/secretmanager.v1.json +++ b/googleapiclient/discovery_cache/documents/secretmanager.v1.json @@ -1115,7 +1115,7 @@ } } }, -"revision": "20240314", +"revision": "20240320", "rootUrl": "https://secretmanager.googleapis.com/", "schemas": { "AccessSecretVersionResponse": { diff --git a/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json b/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json index 6dd3bdb9af..9fa6ada97f 100644 --- a/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/secretmanager.v1beta1.json @@ -635,7 +635,7 @@ } } }, -"revision": "20240314", +"revision": "20240320", "rootUrl": "https://secretmanager.googleapis.com/", "schemas": { "AccessSecretVersionResponse": { diff --git a/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json b/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json index a7c588d796..4718c1dccb 100644 --- a/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/secretmanager.v1beta2.json @@ -15,6 +15,13 @@ "description": "Stores sensitive data such as API keys, passwords, and certificates. Provides convenience while improving security. ", "discoveryVersion": "v1", "documentationLink": "https://cloud.google.com/secret-manager/", +"endpoints": [ +{ +"description": "Regional Endpoint", +"endpointUrl": "https://secretmanager.me-central2.rep.googleapis.com/", +"location": "me-central2" +} +], "fullyEncodeReservedExpansion": true, "icons": { "x16": "http://www.google.com/images/icons/product/search-16.gif", @@ -1108,7 +1115,7 @@ } } }, -"revision": "20240309", +"revision": "20240320", "rootUrl": "https://secretmanager.googleapis.com/", "schemas": { "AccessSecretVersionResponse": { diff --git a/googleapiclient/discovery_cache/documents/securitycenter.v1.json b/googleapiclient/discovery_cache/documents/securitycenter.v1.json index cd485d2364..0d9bcc62a9 100644 --- a/googleapiclient/discovery_cache/documents/securitycenter.v1.json +++ b/googleapiclient/discovery_cache/documents/securitycenter.v1.json @@ -5820,7 +5820,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://securitycenter.googleapis.com/", "schemas": { "Access": { diff --git a/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json b/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json index 4bb2a58995..1086e2de37 100644 --- a/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/securitycenter.v1beta1.json @@ -896,7 +896,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://securitycenter.googleapis.com/", "schemas": { "Access": { diff --git a/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json b/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json index 181626cd04..04ad2896af 100644 --- a/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/securitycenter.v1beta2.json @@ -1906,7 +1906,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://securitycenter.googleapis.com/", "schemas": { "Access": { diff --git 
a/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1.json b/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1.json index 38c1b93198..45f87af6c5 100644 --- a/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1.json @@ -542,7 +542,7 @@ } } }, -"revision": "20240310", +"revision": "20240324", "rootUrl": "https://serviceconsumermanagement.googleapis.com/", "schemas": { "AddTenantProjectRequest": { diff --git a/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1beta1.json b/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1beta1.json index bc3b30a3e2..73b84eea2b 100644 --- a/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/serviceconsumermanagement.v1beta1.json @@ -315,12 +315,12 @@ ], "parameters": { "force": { -"description": "Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -364,12 +364,12 @@ ], "parameters": { "force": { -"description": "Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. 
Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -446,12 +446,12 @@ ], "parameters": { "force": { -"description": "Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -500,7 +500,7 @@ } } }, -"revision": "20240310", +"revision": "20240324", "rootUrl": "https://serviceconsumermanagement.googleapis.com/", "schemas": { "Api": { @@ -2696,11 +2696,11 @@ "id": "V1Beta1ImportProducerOverridesRequest", "properties": { "force": { -"description": "Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. 
If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "items": { "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", diff --git a/googleapiclient/discovery_cache/documents/servicecontrol.v1.json b/googleapiclient/discovery_cache/documents/servicecontrol.v1.json index bb3c3ea4f1..07a8384d62 100644 --- a/googleapiclient/discovery_cache/documents/servicecontrol.v1.json +++ b/googleapiclient/discovery_cache/documents/servicecontrol.v1.json @@ -197,7 +197,7 @@ } } }, -"revision": "20240309", +"revision": "20240315", "rootUrl": "https://servicecontrol.googleapis.com/", "schemas": { "AllocateInfo": { diff --git a/googleapiclient/discovery_cache/documents/servicecontrol.v2.json b/googleapiclient/discovery_cache/documents/servicecontrol.v2.json index f8cfabb4fb..73c57ad0f8 100644 --- a/googleapiclient/discovery_cache/documents/servicecontrol.v2.json +++ b/googleapiclient/discovery_cache/documents/servicecontrol.v2.json @@ -169,7 +169,7 @@ } } }, -"revision": "20240309", +"revision": "20240315", "rootUrl": "https://servicecontrol.googleapis.com/", "schemas": { "Api": { diff --git a/googleapiclient/discovery_cache/documents/servicemanagement.v1.json b/googleapiclient/discovery_cache/documents/servicemanagement.v1.json index d7b73baaff..eed1232379 100644 --- a/googleapiclient/discovery_cache/documents/servicemanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/servicemanagement.v1.json @@ -830,7 +830,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://servicemanagement.googleapis.com/", "schemas": { "Advice": { diff --git a/googleapiclient/discovery_cache/documents/servicenetworking.v1.json b/googleapiclient/discovery_cache/documents/servicenetworking.v1.json index 0dfabb3968..5c92f098fc 100644 --- a/googleapiclient/discovery_cache/documents/servicenetworking.v1.json +++ b/googleapiclient/discovery_cache/documents/servicenetworking.v1.json @@ -1029,7 +1029,7 @@ } } }, -"revision": "20240317", +"revision": "20240320", "rootUrl": "https://servicenetworking.googleapis.com/", "schemas": { "AddDnsRecordSetMetadata": { diff --git a/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json b/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json index fbf709e8c6..dea8a3688f 100644 --- a/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json +++ b/googleapiclient/discovery_cache/documents/servicenetworking.v1beta.json @@ -307,7 +307,7 @@ } } }, -"revision": "20240317", +"revision": "20240320", "rootUrl": "https://servicenetworking.googleapis.com/", "schemas": { "AddDnsRecordSetMetadata": { diff --git a/googleapiclient/discovery_cache/documents/serviceusage.v1.json b/googleapiclient/discovery_cache/documents/serviceusage.v1.json index 5f908b2f9f..f967760400 100644 --- a/googleapiclient/discovery_cache/documents/serviceusage.v1.json +++ b/googleapiclient/discovery_cache/documents/serviceusage.v1.json @@ -426,7 +426,7 @@ } } }, -"revision": "20240310", +"revision": "20240324", "rootUrl": "https://serviceusage.googleapis.com/", "schemas": { "AddEnableRulesMetadata": { diff --git a/googleapiclient/discovery_cache/documents/serviceusage.v1beta1.json b/googleapiclient/discovery_cache/documents/serviceusage.v1beta1.json index 29be06f871..103fbd209d 100644 --- a/googleapiclient/discovery_cache/documents/serviceusage.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/serviceusage.v1beta1.json @@ -581,12 +581,12 @@ ], "parameters": { "force": { 
-"description": "Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -631,12 +631,12 @@ ], "parameters": { "force": { -"description": "Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -715,12 +715,12 @@ ], "parameters": { "force": { -"description": "Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. 
If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -775,12 +775,12 @@ ], "parameters": { "force": { -"description": "Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the creation of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -825,12 +825,12 @@ ], "parameters": { "force": { -"description": "Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the deletion of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. 
Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -909,12 +909,12 @@ ], "parameters": { "force": { -"description": "Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the update of the quota override. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "location": "query", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", "LIMIT_DECREASE_BELOW_USAGE", @@ -964,7 +964,7 @@ } } }, -"revision": "20240310", +"revision": "20240324", "rootUrl": "https://serviceusage.googleapis.com/", "schemas": { "AddEnableRulesMetadata": { @@ -2609,11 +2609,11 @@ "id": "ImportAdminOverridesRequest", "properties": { "force": { -"description": "Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. 
If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "items": { "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", @@ -2681,11 +2681,11 @@ "id": "ImportConsumerOverridesRequest", "properties": { "force": { -"description": "Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations.", +"description": "Whether to force the creation of the quota overrides. Setting the force parameter to 'true' ignores all quota safety checks that would fail the request. QuotaSafetyCheck lists all such validations. If force is set to true, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "type": "boolean" }, "forceOnly": { -"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set.", +"description": "The list of quota safety checks to ignore before the override mutation. Unlike 'force' field that ignores all the quota safety checks, the 'force_only' field ignores only the specified checks; other checks are still enforced. The 'force' and 'force_only' fields cannot both be set. If force_only is specified, it is recommended to include a case id in \"X-Goog-Request-Reason\" header when sending the request.", "items": { "enum": [ "QUOTA_SAFETY_CHECK_UNSPECIFIED", diff --git a/googleapiclient/discovery_cache/documents/sheets.v4.json b/googleapiclient/discovery_cache/documents/sheets.v4.json index eafc987706..1d5774a736 100644 --- a/googleapiclient/discovery_cache/documents/sheets.v4.json +++ b/googleapiclient/discovery_cache/documents/sheets.v4.json @@ -870,7 +870,7 @@ } } }, -"revision": "20240305", +"revision": "20240319", "rootUrl": "https://sheets.googleapis.com/", "schemas": { "AddBandingRequest": { @@ -3178,7 +3178,8 @@ "MISSING_COLUMN_ALIAS", "OBJECT_NOT_FOUND", "OBJECT_IN_ERROR_STATE", -"OBJECT_SPEC_INVALID" +"OBJECT_SPEC_INVALID", +"DATA_EXECUTION_CANCELLED" ], "enumDescriptions": [ "Default value, do not use.", @@ -3199,7 +3200,8 @@ "The data execution returns columns with missing aliases.", "The data source object does not exist.", "The data source object is currently in error state. To force refresh, set force in RefreshDataSourceRequest.", -"The data source object specification is invalid." +"The data source object specification is invalid.", +"The data execution has been cancelled." ], "type": "string" }, @@ -3218,6 +3220,7 @@ "DATA_EXECUTION_STATE_UNSPECIFIED", "NOT_STARTED", "RUNNING", +"CANCELLING", "SUCCEEDED", "FAILED" ], @@ -3225,6 +3228,7 @@ "Default value, do not use.", "The data execution has not started.", "The data execution has started and is running.", +"The data execution is currently being cancelled.", "The data execution has completed successfully.", "The data execution has completed with errors." ], @@ -6830,6 +6834,10 @@ "$ref": "CellFormat", "description": "The default format of all cells in the spreadsheet. CellData.effectiveFormat will not be set if the cell's format is equal to this default format. This field is read-only." }, +"importFunctionsExternalUrlAccessAllowed": { +"description": "Whether to allow external url access for image and import functions. Read only when true. 
When false, you can set to true.", +"type": "boolean" +}, "iterativeCalculationSettings": { "$ref": "IterativeCalculationSettings", "description": "Determines whether and how circular references are resolved with iterative calculation. Absence of this field means that circular references result in calculation errors." diff --git a/googleapiclient/discovery_cache/documents/slides.v1.json b/googleapiclient/discovery_cache/documents/slides.v1.json index 43e0f9d019..760e0e545f 100644 --- a/googleapiclient/discovery_cache/documents/slides.v1.json +++ b/googleapiclient/discovery_cache/documents/slides.v1.json @@ -313,7 +313,7 @@ } } }, -"revision": "20240305", +"revision": "20240319", "rootUrl": "https://slides.googleapis.com/", "schemas": { "AffineTransform": { diff --git a/googleapiclient/discovery_cache/documents/smartdevicemanagement.v1.json b/googleapiclient/discovery_cache/documents/smartdevicemanagement.v1.json index c979a800fc..c6f3ca9e4c 100644 --- a/googleapiclient/discovery_cache/documents/smartdevicemanagement.v1.json +++ b/googleapiclient/discovery_cache/documents/smartdevicemanagement.v1.json @@ -312,7 +312,7 @@ } } }, -"revision": "20240303", +"revision": "20240317", "rootUrl": "https://smartdevicemanagement.googleapis.com/", "schemas": { "GoogleHomeEnterpriseSdmV1Device": { diff --git a/googleapiclient/discovery_cache/documents/sourcerepo.v1.json b/googleapiclient/discovery_cache/documents/sourcerepo.v1.json index aa6b64e4c2..68ce9fb275 100644 --- a/googleapiclient/discovery_cache/documents/sourcerepo.v1.json +++ b/googleapiclient/discovery_cache/documents/sourcerepo.v1.json @@ -450,7 +450,7 @@ } } }, -"revision": "20240311", +"revision": "20240324", "rootUrl": "https://sourcerepo.googleapis.com/", "schemas": { "AuditConfig": { diff --git a/googleapiclient/discovery_cache/documents/sqladmin.v1.json b/googleapiclient/discovery_cache/documents/sqladmin.v1.json index 9eababf7f1..30a30249b0 100644 --- a/googleapiclient/discovery_cache/documents/sqladmin.v1.json +++ b/googleapiclient/discovery_cache/documents/sqladmin.v1.json @@ -2267,7 +2267,7 @@ } } }, -"revision": "20240304", +"revision": "20240317", "rootUrl": "https://sqladmin.googleapis.com/", "schemas": { "AclEntry": { @@ -2403,6 +2403,25 @@ "description": "The number of days of transaction logs we retain for point in time restore, from 1-7.", "format": "int32", "type": "integer" +}, +"transactionalLogStorageState": { +"description": "Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery.", +"enum": [ +"TRANSACTIONAL_LOG_STORAGE_STATE_UNSPECIFIED", +"DISK", +"SWITCHING_TO_CLOUD_STORAGE", +"SWITCHED_TO_CLOUD_STORAGE", +"CLOUD_STORAGE" +], +"enumDescriptions": [ +"Unspecified.", +"The transaction logs for the instance are stored on a data disk.", +"The transaction logs for the instance are switching from being stored on a data disk to being stored in Cloud Storage.", +"The transaction logs for the instance are now stored in Cloud Storage. Previously, they were stored on a data disk.", +"The transaction logs for the instance are stored in Cloud Storage." +], +"readOnly": true, +"type": "string" } }, "type": "object" @@ -3163,6 +3182,10 @@ false "description": "The Compute Engine zone that the instance is currently serving from. This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. 
WARNING: Changing this might restart the instance.", "type": "string" }, +"geminiConfig": { +"$ref": "GeminiInstanceConfig", +"description": "Gemini configuration." +}, "instanceType": { "description": "The instance type.", "enum": [ @@ -3251,6 +3274,10 @@ false }, "type": "array" }, +"replicationCluster": { +"$ref": "ReplicationCluster", +"description": "Optional. The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure." +}, "rootPassword": { "description": "Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances.", "type": "string" @@ -3907,6 +3934,43 @@ false }, "type": "object" }, +"GeminiInstanceConfig": { +"description": "Gemini configuration.", +"id": "GeminiInstanceConfig", +"properties": { +"activeQueryEnabled": { +"description": "Output only. Whether active query is enabled.", +"readOnly": true, +"type": "boolean" +}, +"entitled": { +"description": "Output only. Whether gemini is enabled.", +"readOnly": true, +"type": "boolean" +}, +"flagRecommenderEnabled": { +"description": "Output only. Whether flag recommender is enabled.", +"readOnly": true, +"type": "boolean" +}, +"googleVacuumMgmtEnabled": { +"description": "Output only. Whether vacuum management is enabled.", +"readOnly": true, +"type": "boolean" +}, +"indexAdvisorEnabled": { +"description": "Output only. Whether index advisor is enabled.", +"readOnly": true, +"type": "boolean" +}, +"oomSessionCancelEnabled": { +"description": "Output only. Whether oom session cancel is enabled.", +"readOnly": true, +"type": "boolean" +} +}, +"type": "object" +}, "GenerateEphemeralCertRequest": { "description": "Ephemeral certificate creation request.", "id": "GenerateEphemeralCertRequest", @@ -4642,7 +4706,8 @@ false "REENCRYPT", "SWITCHOVER", "ACQUIRE_SSRS_LEASE", -"RELEASE_SSRS_LEASE" +"RELEASE_SSRS_LEASE", +"RECONFIGURE_OLD_PRIMARY" ], "enumDeprecated": [ false, @@ -4685,6 +4750,7 @@ false, false, false, false, +false, false ], "enumDescriptions": [ @@ -4728,7 +4794,8 @@ false "Re-encrypts CMEK instances with latest key version.", "Switches over to replica instance from primary.", "Acquire a lease for the setup of SQL Server Reporting Services (SSRS).", -"Release a lease for the setup of SQL Server Reporting Services (SSRS)." +"Release a lease for the setup of SQL Server Reporting Services (SSRS).", +"Reconfigures old primary after a promote replica operation. Effect of a promote operation to the old primary is executed in this operation, asynchronously from the promote replica operation executed to the replica." ], "type": "string" }, @@ -4994,6 +5061,22 @@ false }, "type": "object" }, +"ReplicationCluster": { +"description": "Primary-DR replica pair", +"id": "ReplicationCluster", +"properties": { +"drReplica": { +"description": "Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary.", +"readOnly": true, +"type": "boolean" +}, +"failoverDrReplicaName": { +"description": "Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. Users can set this field to set a designated DR replica for a primary. 
Removing this field removes the DR replica.", +"type": "string" +} +}, +"type": "object" +}, "Reschedule": { "id": "Reschedule", "properties": { @@ -5211,7 +5294,7 @@ true "type": "string" }, "enableGoogleMlIntegration": { -"description": "Optional. Configuration to enable Cloud SQL Vertex AI Integration", +"description": "Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances.", "type": "boolean" }, "insightsConfig": { @@ -5510,6 +5593,20 @@ true "description": "Instance start external sync request.", "id": "SqlInstancesStartExternalSyncRequest", "properties": { +"migrationType": { +"description": "Optional. MigrationType decides if the migration is a physical file based migration or logical migration.", +"enum": [ +"MIGRATION_TYPE_UNSPECIFIED", +"LOGICAL", +"PHYSICAL" +], +"enumDescriptions": [ +"Default value is logical migration", +"Logical Migrations", +"Physical file based Migrations" +], +"type": "string" +}, "mysqlSyncConfig": { "$ref": "MySqlSyncConfig", "description": "MySQL-specific settings for start external sync." @@ -5555,6 +5652,20 @@ true "description": "Instance verify external sync settings request.", "id": "SqlInstancesVerifyExternalSyncSettingsRequest", "properties": { +"migrationType": { +"description": "Optional. MigrationType decides if the migration is a physical file based migration or logical migration", +"enum": [ +"MIGRATION_TYPE_UNSPECIFIED", +"LOGICAL", +"PHYSICAL" +], +"enumDescriptions": [ +"Default value is logical migration", +"Logical Migrations", +"Physical file based Migrations" +], +"type": "string" +}, "mysqlSyncConfig": { "$ref": "MySqlSyncConfig", "description": "Optional. MySQL-specific settings for start external sync." diff --git a/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json b/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json index 53e1db78c7..dddda5d3a2 100644 --- a/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json +++ b/googleapiclient/discovery_cache/documents/sqladmin.v1beta4.json @@ -2267,7 +2267,7 @@ } } }, -"revision": "20240304", +"revision": "20240317", "rootUrl": "https://sqladmin.googleapis.com/", "schemas": { "AclEntry": { @@ -2403,6 +2403,25 @@ "description": "The number of days of transaction logs we retain for point in time restore, from 1-7.", "format": "int32", "type": "integer" +}, +"transactionalLogStorageState": { +"description": "Output only. This value contains the storage location of transactional logs for the database for point-in-time recovery.", +"enum": [ +"TRANSACTIONAL_LOG_STORAGE_STATE_UNSPECIFIED", +"DISK", +"SWITCHING_TO_CLOUD_STORAGE", +"SWITCHED_TO_CLOUD_STORAGE", +"CLOUD_STORAGE" +], +"enumDescriptions": [ +"Unspecified.", +"The transaction logs for the instance are stored on a data disk.", +"The transaction logs for the instance are switching from being stored on a data disk to being stored in Cloud Storage.", +"The transaction logs for the instance are now stored in Cloud Storage. Previously, they were stored on a data disk.", +"The transaction logs for the instance are stored in Cloud Storage." +], +"readOnly": true, +"type": "string" } }, "type": "object" @@ -3163,6 +3182,10 @@ false "description": "The Compute Engine zone that the instance is currently serving from. 
This value could be different from the zone that was specified when the instance was created if the instance has failed over to its secondary zone. WARNING: Changing this might restart the instance.", "type": "string" }, +"geminiConfig": { +"$ref": "GeminiInstanceConfig", +"description": "Gemini instance configuration." +}, "instanceType": { "description": "The instance type.", "enum": [ @@ -3251,6 +3274,10 @@ false }, "type": "array" }, +"replicationCluster": { +"$ref": "ReplicationCluster", +"description": "The pair of a primary instance and disaster recovery (DR) replica. A DR replica is a cross-region replica that you designate for failover in the event that the primary instance has regional failure." +}, "rootPassword": { "description": "Initial root password. Use only on creation. You must set root passwords before you can connect to PostgreSQL instances.", "type": "string" @@ -3908,6 +3935,43 @@ false }, "type": "object" }, +"GeminiInstanceConfig": { +"description": "Gemini configuration.", +"id": "GeminiInstanceConfig", +"properties": { +"activeQueryEnabled": { +"description": "Output only. Whether active query is enabled.", +"readOnly": true, +"type": "boolean" +}, +"entitled": { +"description": "Output only. Whether Gemini is enabled.", +"readOnly": true, +"type": "boolean" +}, +"flagRecommenderEnabled": { +"description": "Output only. Whether flag recommender is enabled.", +"readOnly": true, +"type": "boolean" +}, +"googleVacuumMgmtEnabled": { +"description": "Output only. Whether vacuum management is enabled.", +"readOnly": true, +"type": "boolean" +}, +"indexAdvisorEnabled": { +"description": "Output only. Whether index advisor is enabled.", +"readOnly": true, +"type": "boolean" +}, +"oomSessionCancelEnabled": { +"description": "Output only. Whether oom session cancel is enabled.", +"readOnly": true, +"type": "boolean" +} +}, +"type": "object" +}, "GenerateEphemeralCertRequest": { "description": "Ephemeral certificate creation request.", "id": "GenerateEphemeralCertRequest", @@ -4643,7 +4707,8 @@ false "REENCRYPT", "SWITCHOVER", "ACQUIRE_SSRS_LEASE", -"RELEASE_SSRS_LEASE" +"RELEASE_SSRS_LEASE", +"RECONFIGURE_OLD_PRIMARY" ], "enumDeprecated": [ false, @@ -4686,6 +4751,7 @@ false, false, false, false, +false, false ], "enumDescriptions": [ @@ -4729,7 +4795,8 @@ false "Re-encrypts CMEK instances with latest key version.", "Switches over to replica instance from primary.", "Acquire a lease for the setup of SQL Server Reporting Services (SSRS).", -"Release a lease for the setup of SQL Server Reporting Services (SSRS)." +"Release a lease for the setup of SQL Server Reporting Services (SSRS).", +"Reconfigures old primary after a promote replica operation. Effect of a promote operation to the old primary is executed in this operation, asynchronously from the promote replica operation executed to the replica." ], "type": "string" }, @@ -4995,6 +5062,22 @@ false }, "type": "object" }, +"ReplicationCluster": { +"description": "Primary-DR replica pair", +"id": "ReplicationCluster", +"properties": { +"drReplica": { +"description": "Output only. read-only field that indicates if the replica is a dr_replica; not set for a primary.", +"readOnly": true, +"type": "boolean" +}, +"failoverDrReplicaName": { +"description": "Optional. If the instance is a primary instance, then this field identifies the disaster recovery (DR) replica. A DR replica is an optional configuration for Enterprise Plus edition instances. If the instance is a read replica, then the field is not set. 
Users can set this field to set a designated DR replica for a primary. Removing this field removes the DR replica.", +"type": "string" +} +}, +"type": "object" +}, "Reschedule": { "id": "Reschedule", "properties": { @@ -5212,7 +5295,7 @@ true "type": "string" }, "enableGoogleMlIntegration": { -"description": "Optional. Configuration to enable Cloud SQL Vertex AI Integration", +"description": "Optional. When this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI. The default value is false. This applies only to Cloud SQL for PostgreSQL instances.", "type": "boolean" }, "insightsConfig": { @@ -5510,6 +5593,20 @@ true "SqlInstancesStartExternalSyncRequest": { "id": "SqlInstancesStartExternalSyncRequest", "properties": { +"migrationType": { +"description": "Optional. MigrationType decides if the migration is a physical file based migration or logical migration.", +"enum": [ +"MIGRATION_TYPE_UNSPECIFIED", +"LOGICAL", +"PHYSICAL" +], +"enumDescriptions": [ +"If no migration type is specified it will be defaulted to LOGICAL.", +"Logical Migrations", +"Physical file based Migrations" +], +"type": "string" +}, "mysqlSyncConfig": { "$ref": "MySqlSyncConfig", "description": "MySQL-specific settings for start external sync." @@ -5554,6 +5651,20 @@ true "SqlInstancesVerifyExternalSyncSettingsRequest": { "id": "SqlInstancesVerifyExternalSyncSettingsRequest", "properties": { +"migrationType": { +"description": "Optional. MigrationType field decides if the migration is a physical file based migration or logical migration", +"enum": [ +"MIGRATION_TYPE_UNSPECIFIED", +"LOGICAL", +"PHYSICAL" +], +"enumDescriptions": [ +"If no migration type is specified it will be defaulted to LOGICAL.", +"Logical Migrations", +"Physical file based Migrations" +], +"type": "string" +}, "mysqlSyncConfig": { "$ref": "MySqlSyncConfig", "description": "Optional. MySQL-specific settings for start external sync." 
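The Cloud SQL Admin hunks above add a settable `replicationCluster.failoverDrReplicaName` field on the instance resource and an optional `migrationType` field on the external-sync request bodies. As a minimal sketch (not part of the generated diff), the snippet below shows how these new fields could be passed through the discovery-based Python client; the project, instance, and replica names are placeholders, and the method names assume the existing `sqladmin` v1beta4 surface (`instances.patch`, `projects.instances.startExternalSync`) is otherwise unchanged.

```python
# Sketch only: exercises the new Cloud SQL Admin v1beta4 fields introduced above
# (DatabaseInstance.replicationCluster.failoverDrReplicaName and
# SqlInstancesStartExternalSyncRequest.migrationType). Project/instance names are
# placeholders, and Application Default Credentials are assumed to be configured.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

# Designate a cross-region disaster-recovery (DR) replica on a primary instance.
patch_body = {
    "replicationCluster": {
        # Hypothetical replica identifier; see the ReplicationCluster schema above.
        "failoverDrReplicaName": "my-project:my-dr-replica",
    }
}
service.instances().patch(
    project="my-project", instance="my-primary", body=patch_body
).execute()

# Start an external sync as a physical, file-based migration (defaults to LOGICAL
# when migrationType is unspecified, per the enum descriptions above).
service.projects().instances().startExternalSync(
    project="my-project",
    instance="my-replica",
    body={"migrationType": "PHYSICAL"},
).execute()
```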
diff --git a/googleapiclient/discovery_cache/documents/storage.v1.json b/googleapiclient/discovery_cache/documents/storage.v1.json index e82a65cd6f..85300eee84 100644 --- a/googleapiclient/discovery_cache/documents/storage.v1.json +++ b/googleapiclient/discovery_cache/documents/storage.v1.json @@ -33,7 +33,7 @@ "location": "me-central2" } ], -"etag": "\"31383132363637383635323832393938363535\"", +"etag": "\"33303333323233383838323039393532373539\"", "icons": { "x16": "https://www.google.com/images/icons/product/cloud_storage-16.png", "x32": "https://www.google.com/images/icons/product/cloud_storage-32.png" @@ -3146,7 +3146,8 @@ "id": "storage.objects.restore", "parameterOrder": [ "bucket", -"object" +"object", +"generation" ], "parameters": { "bucket": { @@ -4042,7 +4043,7 @@ } } }, -"revision": "20240315", +"revision": "20240319", "rootUrl": "https://storage.googleapis.com/", "schemas": { "AnywhereCache": { diff --git a/googleapiclient/discovery_cache/documents/storagetransfer.v1.json b/googleapiclient/discovery_cache/documents/storagetransfer.v1.json index d496ec68cd..e5dd828fc3 100644 --- a/googleapiclient/discovery_cache/documents/storagetransfer.v1.json +++ b/googleapiclient/discovery_cache/documents/storagetransfer.v1.json @@ -632,7 +632,7 @@ } } }, -"revision": "20240311", +"revision": "20240315", "rootUrl": "https://storagetransfer.googleapis.com/", "schemas": { "AgentPool": { diff --git a/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json b/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json index 8212c34853..ab7f03bab6 100644 --- a/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json +++ b/googleapiclient/discovery_cache/documents/streetviewpublish.v1.json @@ -534,7 +534,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://streetviewpublish.googleapis.com/", "schemas": { "BatchDeletePhotosRequest": { diff --git a/googleapiclient/discovery_cache/documents/sts.v1.json b/googleapiclient/discovery_cache/documents/sts.v1.json index edb7e8ecde..11c93b8616 100644 --- a/googleapiclient/discovery_cache/documents/sts.v1.json +++ b/googleapiclient/discovery_cache/documents/sts.v1.json @@ -116,7 +116,7 @@ } } }, -"revision": "20240307", +"revision": "20240320", "rootUrl": "https://sts.googleapis.com/", "schemas": { "GoogleIamV1Binding": { diff --git a/googleapiclient/discovery_cache/documents/sts.v1beta.json b/googleapiclient/discovery_cache/documents/sts.v1beta.json index 8837918ccd..5c84c4bf25 100644 --- a/googleapiclient/discovery_cache/documents/sts.v1beta.json +++ b/googleapiclient/discovery_cache/documents/sts.v1beta.json @@ -116,7 +116,7 @@ } } }, -"revision": "20240307", +"revision": "20240320", "rootUrl": "https://sts.googleapis.com/", "schemas": { "GoogleIamV1Binding": { diff --git a/googleapiclient/discovery_cache/documents/tagmanager.v1.json b/googleapiclient/discovery_cache/documents/tagmanager.v1.json index 737794dedc..6dffca9333 100644 --- a/googleapiclient/discovery_cache/documents/tagmanager.v1.json +++ b/googleapiclient/discovery_cache/documents/tagmanager.v1.json @@ -1932,7 +1932,7 @@ } } }, -"revision": "20240313", +"revision": "20240320", "rootUrl": "https://tagmanager.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/tagmanager.v2.json b/googleapiclient/discovery_cache/documents/tagmanager.v2.json index 9bb421e488..8fb87712d3 100644 --- a/googleapiclient/discovery_cache/documents/tagmanager.v2.json +++ 
b/googleapiclient/discovery_cache/documents/tagmanager.v2.json @@ -3890,7 +3890,7 @@ } } }, -"revision": "20240313", +"revision": "20240320", "rootUrl": "https://tagmanager.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/tasks.v1.json b/googleapiclient/discovery_cache/documents/tasks.v1.json index 7c59cdc93c..844f31e379 100644 --- a/googleapiclient/discovery_cache/documents/tasks.v1.json +++ b/googleapiclient/discovery_cache/documents/tasks.v1.json @@ -157,7 +157,7 @@ ] }, "insert": { -"description": "Creates a new task list and adds it to the authenticated user's task lists.", +"description": "Creates a new task list and adds it to the authenticated user's task lists. A user can have up to 2000 lists at a time.", "flatPath": "tasks/v1/users/@me/lists", "httpMethod": "POST", "id": "tasks.tasklists.insert", @@ -175,7 +175,7 @@ ] }, "list": { -"description": "Returns all the authenticated user's task lists.", +"description": "Returns all the authenticated user's task lists. A user can have up to 2000 lists at a time.", "flatPath": "tasks/v1/users/@me/lists", "httpMethod": "GET", "id": "tasks.tasklists.list", @@ -342,7 +342,7 @@ ] }, "insert": { -"description": "Creates a new task on the specified task list.", +"description": "Creates a new task on the specified task list. A user can have up to 20,000 uncompleted tasks per list and up to 100,000 tasks in total at a time.", "flatPath": "tasks/v1/lists/{tasklist}/tasks", "httpMethod": "POST", "id": "tasks.tasks.insert", @@ -379,7 +379,7 @@ ] }, "list": { -"description": "Returns all tasks in the specified task list.", +"description": "Returns all tasks in the specified task list. A user can have up to 20,000 uncompleted tasks per list and up to 100,000 tasks in total at a time.", "flatPath": "tasks/v1/lists/{tasklist}/tasks", "httpMethod": "GET", "id": "tasks.tasks.list", @@ -455,7 +455,7 @@ ] }, "move": { -"description": "Moves the specified task to another position in the task list. This can include putting it as a child task under a new parent and/or move it to a different position among its sibling tasks.", +"description": "Moves the specified task to another position in the task list. This can include putting it as a child task under a new parent and/or move it to a different position among its sibling tasks. A user can have up to 2,000 subtasks per task.", "flatPath": "tasks/v1/lists/{tasklist}/tasks/{task}/move", "httpMethod": "POST", "id": "tasks.tasks.move", @@ -566,7 +566,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://tasks.googleapis.com/", "schemas": { "Task": { @@ -622,7 +622,7 @@ "type": "array" }, "notes": { -"description": "Notes describing the task. Optional.", +"description": "Notes describing the task. Optional. Maximum length allowed: 8192 characters.", "type": "string" }, "parent": { @@ -642,7 +642,7 @@ "type": "string" }, "title": { -"description": "Title of the task.", +"description": "Title of the task. Maximum length allowed: 1024 characters.", "type": "string" }, "updated": { @@ -676,7 +676,7 @@ "type": "string" }, "title": { -"description": "Title of the task list.", +"description": "Title of the task list. 
Maximum length allowed: 1024 characters.", "type": "string" }, "updated": { diff --git a/googleapiclient/discovery_cache/documents/testing.v1.json b/googleapiclient/discovery_cache/documents/testing.v1.json index dca05ec1c4..b7fc50b320 100644 --- a/googleapiclient/discovery_cache/documents/testing.v1.json +++ b/googleapiclient/discovery_cache/documents/testing.v1.json @@ -16,7 +16,8 @@ "batchPath": "batch", "description": "Allows developers to run automated tests for their mobile applications on Google infrastructure.", "discoveryVersion": "v1", -"documentationLink": "https://developers.google.com/cloud-test-lab/", +"documentationLink": "https://firebase.google.com/docs/test-lab/", +"fullyEncodeReservedExpansion": true, "icons": { "x16": "http://www.google.com/images/icons/product/search-16.gif", "x32": "http://www.google.com/images/icons/product/search-32.gif" @@ -448,7 +449,7 @@ } } }, -"revision": "20240311", +"revision": "20240319", "rootUrl": "https://testing.googleapis.com/", "schemas": { "Account": { diff --git a/googleapiclient/discovery_cache/documents/texttospeech.v1.json b/googleapiclient/discovery_cache/documents/texttospeech.v1.json index c9181f0a54..5c1bb7ce52 100644 --- a/googleapiclient/discovery_cache/documents/texttospeech.v1.json +++ b/googleapiclient/discovery_cache/documents/texttospeech.v1.json @@ -318,7 +318,7 @@ } } }, -"revision": "20240307", +"revision": "20240313", "rootUrl": "https://texttospeech.googleapis.com/", "schemas": { "AudioConfig": { diff --git a/googleapiclient/discovery_cache/documents/texttospeech.v1beta1.json b/googleapiclient/discovery_cache/documents/texttospeech.v1beta1.json index c540bda69b..65fccf1d47 100644 --- a/googleapiclient/discovery_cache/documents/texttospeech.v1beta1.json +++ b/googleapiclient/discovery_cache/documents/texttospeech.v1beta1.json @@ -261,7 +261,7 @@ } } }, -"revision": "20240307", +"revision": "20240313", "rootUrl": "https://texttospeech.googleapis.com/", "schemas": { "AudioConfig": { diff --git a/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json b/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json index 1f7f2aada1..eb58bcb316 100644 --- a/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json +++ b/googleapiclient/discovery_cache/documents/toolresults.v1beta3.json @@ -1463,7 +1463,7 @@ } } }, -"revision": "20240314", +"revision": "20240325", "rootUrl": "https://toolresults.googleapis.com/", "schemas": { "ANR": { @@ -2643,7 +2643,8 @@ "GREY_MAX_O", "GREY_MAX_P", "GREY_MAX_Q", -"GREY_MAX_R" +"GREY_MAX_R", +"GREY_MAX_S" ], "enumDescriptions": [ "", @@ -2653,6 +2654,7 @@ "", "", "", +"", "" ], "type": "string" diff --git a/googleapiclient/discovery_cache/documents/trafficdirector.v3.json b/googleapiclient/discovery_cache/documents/trafficdirector.v3.json index 3054fa81c2..6daa49d323 100644 --- a/googleapiclient/discovery_cache/documents/trafficdirector.v3.json +++ b/googleapiclient/discovery_cache/documents/trafficdirector.v3.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240307", +"revision": "20240312", "rootUrl": "https://trafficdirector.googleapis.com/", "schemas": { "Address": { diff --git a/googleapiclient/discovery_cache/documents/transcoder.v1.json b/googleapiclient/discovery_cache/documents/transcoder.v1.json index be3e60ffe9..8f4e3a260a 100644 --- a/googleapiclient/discovery_cache/documents/transcoder.v1.json +++ b/googleapiclient/discovery_cache/documents/transcoder.v1.json @@ -385,7 +385,7 @@ } } }, -"revision": "20240306", +"revision": "20240313", "rootUrl": 
"https://transcoder.googleapis.com/", "schemas": { "AdBreak": { diff --git a/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json b/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json index dfd470df0f..2e6ef1030c 100644 --- a/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json +++ b/googleapiclient/discovery_cache/documents/travelimpactmodel.v1.json @@ -116,7 +116,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://travelimpactmodel.googleapis.com/", "schemas": { "ComputeFlightEmissionsRequest": { diff --git a/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json b/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json index 6efcf650b8..e4206b6276 100644 --- a/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json +++ b/googleapiclient/discovery_cache/documents/verifiedaccess.v1.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240305", +"revision": "20240319", "rootUrl": "https://verifiedaccess.googleapis.com/", "schemas": { "Challenge": { diff --git a/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json b/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json index 555f65bd6f..354dbd85b6 100644 --- a/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json +++ b/googleapiclient/discovery_cache/documents/verifiedaccess.v2.json @@ -146,7 +146,7 @@ } } }, -"revision": "20240305", +"revision": "20240319", "rootUrl": "https://verifiedaccess.googleapis.com/", "schemas": { "Challenge": { diff --git a/googleapiclient/discovery_cache/documents/versionhistory.v1.json b/googleapiclient/discovery_cache/documents/versionhistory.v1.json index 425ba02736..fbde6962ef 100644 --- a/googleapiclient/discovery_cache/documents/versionhistory.v1.json +++ b/googleapiclient/discovery_cache/documents/versionhistory.v1.json @@ -271,7 +271,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://versionhistory.googleapis.com/", "schemas": { "Channel": { diff --git a/googleapiclient/discovery_cache/documents/videointelligence.v1.json b/googleapiclient/discovery_cache/documents/videointelligence.v1.json index c5ad0b0cdb..f6a33c0fee 100644 --- a/googleapiclient/discovery_cache/documents/videointelligence.v1.json +++ b/googleapiclient/discovery_cache/documents/videointelligence.v1.json @@ -350,7 +350,7 @@ } } }, -"revision": "20240308", +"revision": "20240325", "rootUrl": "https://videointelligence.googleapis.com/", "schemas": { "GoogleCloudVideointelligenceV1_AnnotateVideoProgress": { diff --git a/googleapiclient/discovery_cache/documents/videointelligence.v1beta2.json b/googleapiclient/discovery_cache/documents/videointelligence.v1beta2.json index 418249175b..225bbf7d1e 100644 --- a/googleapiclient/discovery_cache/documents/videointelligence.v1beta2.json +++ b/googleapiclient/discovery_cache/documents/videointelligence.v1beta2.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240308", +"revision": "20240325", "rootUrl": "https://videointelligence.googleapis.com/", "schemas": { "GoogleCloudVideointelligenceV1_AnnotateVideoProgress": { diff --git a/googleapiclient/discovery_cache/documents/videointelligence.v1p1beta1.json b/googleapiclient/discovery_cache/documents/videointelligence.v1p1beta1.json index 9e278c2291..d274995644 100644 --- a/googleapiclient/discovery_cache/documents/videointelligence.v1p1beta1.json +++ b/googleapiclient/discovery_cache/documents/videointelligence.v1p1beta1.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240308", +"revision": 
"20240325", "rootUrl": "https://videointelligence.googleapis.com/", "schemas": { "GoogleCloudVideointelligenceV1_AnnotateVideoProgress": { diff --git a/googleapiclient/discovery_cache/documents/videointelligence.v1p2beta1.json b/googleapiclient/discovery_cache/documents/videointelligence.v1p2beta1.json index 8f97d1411e..8fcb94e904 100644 --- a/googleapiclient/discovery_cache/documents/videointelligence.v1p2beta1.json +++ b/googleapiclient/discovery_cache/documents/videointelligence.v1p2beta1.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240308", +"revision": "20240325", "rootUrl": "https://videointelligence.googleapis.com/", "schemas": { "GoogleCloudVideointelligenceV1_AnnotateVideoProgress": { diff --git a/googleapiclient/discovery_cache/documents/videointelligence.v1p3beta1.json b/googleapiclient/discovery_cache/documents/videointelligence.v1p3beta1.json index 8ad63de1ab..d752f401a9 100644 --- a/googleapiclient/discovery_cache/documents/videointelligence.v1p3beta1.json +++ b/googleapiclient/discovery_cache/documents/videointelligence.v1p3beta1.json @@ -128,7 +128,7 @@ } } }, -"revision": "20240308", +"revision": "20240325", "rootUrl": "https://videointelligence.googleapis.com/", "schemas": { "GoogleCloudVideointelligenceV1_AnnotateVideoProgress": { diff --git a/googleapiclient/discovery_cache/documents/vmmigration.v1.json b/googleapiclient/discovery_cache/documents/vmmigration.v1.json index e741c7360f..3b2e65e3f5 100644 --- a/googleapiclient/discovery_cache/documents/vmmigration.v1.json +++ b/googleapiclient/discovery_cache/documents/vmmigration.v1.json @@ -2220,7 +2220,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://vmmigration.googleapis.com/", "schemas": { "AccessKeyCredentials": { diff --git a/googleapiclient/discovery_cache/documents/vmmigration.v1alpha1.json b/googleapiclient/discovery_cache/documents/vmmigration.v1alpha1.json index e7001dcb50..2d7acc3be3 100644 --- a/googleapiclient/discovery_cache/documents/vmmigration.v1alpha1.json +++ b/googleapiclient/discovery_cache/documents/vmmigration.v1alpha1.json @@ -2220,7 +2220,7 @@ } } }, -"revision": "20240307", +"revision": "20240314", "rootUrl": "https://vmmigration.googleapis.com/", "schemas": { "AccessKeyCredentials": { diff --git a/googleapiclient/discovery_cache/documents/walletobjects.v1.json b/googleapiclient/discovery_cache/documents/walletobjects.v1.json index 121e0b5e3e..bd66cddc5f 100644 --- a/googleapiclient/discovery_cache/documents/walletobjects.v1.json +++ b/googleapiclient/discovery_cache/documents/walletobjects.v1.json @@ -2681,7 +2681,7 @@ } } }, -"revision": "20240318", +"revision": "20240325", "rootUrl": "https://walletobjects.googleapis.com/", "schemas": { "ActivationOptions": { diff --git a/googleapiclient/discovery_cache/documents/webfonts.v1.json b/googleapiclient/discovery_cache/documents/webfonts.v1.json index 91b8331a6b..a6c11b22ca 100644 --- a/googleapiclient/discovery_cache/documents/webfonts.v1.json +++ b/googleapiclient/discovery_cache/documents/webfonts.v1.json @@ -161,7 +161,7 @@ } } }, -"revision": "20240306", +"revision": "20240320", "rootUrl": "https://webfonts.googleapis.com/", "schemas": { "Axis": { diff --git a/googleapiclient/discovery_cache/documents/webrisk.v1.json b/googleapiclient/discovery_cache/documents/webrisk.v1.json index fe70ce8ec1..5e543f8ad1 100644 --- a/googleapiclient/discovery_cache/documents/webrisk.v1.json +++ b/googleapiclient/discovery_cache/documents/webrisk.v1.json @@ -420,7 +420,7 @@ } } }, -"revision": "20240315", +"revision": 
"20240318", "rootUrl": "https://webrisk.googleapis.com/", "schemas": { "GoogleCloudWebriskV1ComputeThreatListDiffResponse": { diff --git a/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json b/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json index 4e1230a88b..94194802e0 100644 --- a/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json +++ b/googleapiclient/discovery_cache/documents/workflowexecutions.v1.json @@ -457,7 +457,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://workflowexecutions.googleapis.com/", "schemas": { "Callback": { diff --git a/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json b/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json index ad9326076c..201ca2e7b3 100644 --- a/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json +++ b/googleapiclient/discovery_cache/documents/workflowexecutions.v1beta.json @@ -269,7 +269,7 @@ } } }, -"revision": "20240305", +"revision": "20240312", "rootUrl": "https://workflowexecutions.googleapis.com/", "schemas": { "CancelExecutionRequest": { diff --git a/googleapiclient/discovery_cache/documents/workflows.v1.json b/googleapiclient/discovery_cache/documents/workflows.v1.json index 0a905d8bdd..c2d5bbd1fa 100644 --- a/googleapiclient/discovery_cache/documents/workflows.v1.json +++ b/googleapiclient/discovery_cache/documents/workflows.v1.json @@ -485,7 +485,7 @@ } } }, -"revision": "20240221", +"revision": "20240313", "rootUrl": "https://workflows.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/workflows.v1beta.json b/googleapiclient/discovery_cache/documents/workflows.v1beta.json index 8039395796..548665d54f 100644 --- a/googleapiclient/discovery_cache/documents/workflows.v1beta.json +++ b/googleapiclient/discovery_cache/documents/workflows.v1beta.json @@ -444,7 +444,7 @@ } } }, -"revision": "20240221", +"revision": "20240313", "rootUrl": "https://workflows.googleapis.com/", "schemas": { "Empty": { diff --git a/googleapiclient/discovery_cache/documents/workloadmanager.v1.json b/googleapiclient/discovery_cache/documents/workloadmanager.v1.json index 752aec2156..0c3865eda4 100644 --- a/googleapiclient/discovery_cache/documents/workloadmanager.v1.json +++ b/googleapiclient/discovery_cache/documents/workloadmanager.v1.json @@ -226,6 +226,11 @@ "name" ], "parameters": { +"force": { +"description": "Optional. Followed the best practice from https://aip.dev/135#cascading-delete", +"location": "query", +"type": "boolean" +}, "name": { "description": "Required. Name of the resource", "location": "path", @@ -761,179 +766,15 @@ ] } } -}, -"workloadProfiles": { -"methods": { -"get": { -"description": "Gets details of a single workload.", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/workloadProfiles/{workloadProfilesId}", -"httpMethod": "GET", -"id": "workloadmanager.projects.locations.workloadProfiles.get", -"parameterOrder": [ -"name" -], -"parameters": { -"name": { -"description": "Required. 
Name of the resource", -"location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+/workloadProfiles/[^/]+$", -"required": true, -"type": "string" -} -}, -"path": "v1/{+name}", -"response": { -"$ref": "WorkloadProfile" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform" -] -}, -"list": { -"deprecated": true, -"description": "List workloads", -"flatPath": "v1/projects/{projectsId}/locations/{locationsId}/workloadProfiles", -"httpMethod": "GET", -"id": "workloadmanager.projects.locations.workloadProfiles.list", -"parameterOrder": [ -"parent" -], -"parameters": { -"filter": { -"description": "Optional. Filtering results", -"location": "query", -"type": "string" -}, -"pageSize": { -"description": "Optional. Requested page size. Server may return fewer items than requested. If unspecified, server will pick an appropriate default.", -"format": "int32", -"location": "query", -"type": "integer" -}, -"pageToken": { -"description": "Optional. A token identifying a page of results the server should return.", -"location": "query", -"type": "string" -}, -"parent": { -"description": "Required. Parent value for ListWorkloadRequest", -"location": "path", -"pattern": "^projects/[^/]+/locations/[^/]+$", -"required": true, -"type": "string" -} -}, -"path": "v1/{+parent}/workloadProfiles", -"response": { -"$ref": "ListWorkloadProfilesResponse" -}, -"scopes": [ -"https://www.googleapis.com/auth/cloud-platform" -] -} -} } } } } } }, -"revision": "20240228", +"revision": "20240322", "rootUrl": "https://workloadmanager.googleapis.com/", "schemas": { -"APILayerServer": { -"description": "The API layer server", -"id": "APILayerServer", -"properties": { -"name": { -"description": "Output only. The api layer name", -"readOnly": true, -"type": "string" -}, -"osVersion": { -"description": "Output only. OS information", -"readOnly": true, -"type": "string" -}, -"resources": { -"description": "Output only. resources in the component", -"items": { -"$ref": "CloudResource" -}, -"readOnly": true, -"type": "array" -} -}, -"type": "object" -}, -"AvailabilityGroup": { -"description": "The availability groups for sqlserver", -"id": "AvailabilityGroup", -"properties": { -"databases": { -"description": "Output only. The databases", -"items": { -"type": "string" -}, -"readOnly": true, -"type": "array" -}, -"name": { -"description": "Output only. The availability group name", -"readOnly": true, -"type": "string" -}, -"primaryServer": { -"description": "Output only. The primary server", -"readOnly": true, -"type": "string" -}, -"secondaryServers": { -"description": "Output only. The secondary servers", -"items": { -"type": "string" -}, -"readOnly": true, -"type": "array" -} -}, -"type": "object" -}, -"BackendServer": { -"description": "The backend server", -"id": "BackendServer", -"properties": { -"backupFile": { -"description": "Output only. The backup file", -"readOnly": true, -"type": "string" -}, -"backupSchedule": { -"description": "Output only. The backup schedule", -"readOnly": true, -"type": "string" -}, -"name": { -"description": "Output only. The backend name", -"readOnly": true, -"type": "string" -}, -"osVersion": { -"description": "Output only. OS information", -"readOnly": true, -"type": "string" -}, -"resources": { -"description": "Output only. 
resources in the component", -"items": { -"$ref": "CloudResource" -}, -"readOnly": true, -"type": "array" -} -}, -"type": "object" -}, "BigQueryDestination": { "description": "Message describing big query destination", "id": "BigQueryDestination", @@ -955,98 +796,6 @@ "properties": {}, "type": "object" }, -"CloudResource": { -"description": "The resource on GCP", -"id": "CloudResource", -"properties": { -"kind": { -"description": "Output only. ComputeInstance, ComputeDisk, VPC, Bare Metal server, etc.", -"enum": [ -"RESOURCE_KIND_UNSPECIFIED", -"RESOURCE_KIND_INSTANCE", -"RESOURCE_KIND_DISK", -"RESOURCE_KIND_ADDRESS", -"RESOURCE_KIND_FILESTORE", -"RESOURCE_KIND_HEALTH_CHECK", -"RESOURCE_KIND_FORWARDING_RULE", -"RESOURCE_KIND_BACKEND_SERVICE", -"RESOURCE_KIND_SUBNETWORK", -"RESOURCE_KIND_NETWORK", -"RESOURCE_KIND_PUBLIC_ADDRESS", -"RESOURCE_KIND_INSTANCE_GROUP" -], -"enumDescriptions": [ -"Unspecified resource kind.", -"This is a compute instance.", -"This is a compute disk.", -"This is a compute address.", -"This is a filestore instance.", -"This is a compute health check.", -"This is a compute forwarding rule.", -"This is a compute backend service.", -"This is a compute subnetwork.", -"This is a compute network.", -"This is a public accessible IP Address.", -"This is a compute instance group." -], -"readOnly": true, -"type": "string" -}, -"name": { -"description": "Output only. resource name", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, -"Cluster": { -"description": "The cluster for sqlserver", -"id": "Cluster", -"properties": { -"nodes": { -"description": "Output only. The nodes", -"items": { -"type": "string" -}, -"readOnly": true, -"type": "array" -}, -"witnessServer": { -"description": "Output only. The witness server", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, -"Database": { -"description": "The database for sqlserver", -"id": "Database", -"properties": { -"backupFile": { -"description": "Output only. The backup file", -"readOnly": true, -"type": "string" -}, -"backupSchedule": { -"description": "Output only. The backup schedule", -"readOnly": true, -"type": "string" -}, -"hostVm": { -"description": "Output only. The host VM", -"readOnly": true, -"type": "string" -}, -"name": { -"description": "Output only. The database name", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, "Empty": { "description": "A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }", "id": "Empty", @@ -1226,31 +975,6 @@ }, "type": "object" }, -"FrontEndServer": { -"description": "The front end server", -"id": "FrontEndServer", -"properties": { -"name": { -"description": "Output only. The frontend name", -"readOnly": true, -"type": "string" -}, -"osVersion": { -"description": "Output only. OS information", -"readOnly": true, -"type": "string" -}, -"resources": { -"description": "Output only. resources in the component", -"items": { -"$ref": "CloudResource" -}, -"readOnly": true, -"type": "array" -} -}, -"type": "object" -}, "GceInstanceFilter": { "description": "Message describing compute engine instance filter", "id": "GceInstanceFilter", @@ -1294,81 +1018,6 @@ }, "type": "object" }, -"Instance": { -"description": "a vm instance", -"id": "Instance", -"properties": { -"name": { -"description": "Output only. 
name of the VM", -"readOnly": true, -"type": "string" -}, -"region": { -"description": "Output only. The location of the VM", -"readOnly": true, -"type": "string" -}, -"status": { -"description": "Output only. The state of the VM", -"enum": [ -"INSTANCESTATE_UNSPECIFIED", -"PROVISIONING", -"STAGING", -"RUNNING", -"STOPPING", -"STOPPED", -"TERMINATED", -"SUSPENDING", -"SUSPENDED", -"REPAIRING", -"DEPROVISIONING" -], -"enumDescriptions": [ -"The Status of the VM is unspecified", -"Resources are being allocated for the instance.", -"All required resources have been allocated and the instance is being started.", -"The instance is running.", -"The instance is currently stopping (either being deleted or killed).", -"The instance has stopped due to various reasons (user request, VM preemption, project freezing, etc.).", -"The instance has failed in some way.", -"The instance is suspending.", -"The instance is suspended.", -"The instance is in repair.", -"The instance is in de-provisioning state." -], -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, -"Layer": { -"description": "The database layer", -"id": "Layer", -"properties": { -"applicationType": { -"description": "the application layer", -"type": "string" -}, -"databaseType": { -"description": "Optional. the database layer", -"type": "string" -}, -"instances": { -"description": "Optional. instances in a layer", -"items": { -"$ref": "Instance" -}, -"type": "array" -}, -"sid": { -"description": "Output only. system identification of a layer", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, "ListEvaluationsResponse": { "description": "Message for response to listing Evaluations", "id": "ListEvaluationsResponse", @@ -1509,50 +1158,6 @@ }, "type": "object" }, -"ListWorkloadProfilesResponse": { -"description": "List workloadResponse returns a response with the list of workload overview", -"id": "ListWorkloadProfilesResponse", -"properties": { -"nextPageToken": { -"description": "Output only. A token identifying a page of results the server should return", -"readOnly": true, -"type": "string" -}, -"unreachable": { -"description": "Locations that could not be reached.", -"items": { -"type": "string" -}, -"type": "array" -}, -"workloadOverviews": { -"description": "Output only. The list of Workload Overview", -"items": { -"$ref": "WorkloadProfileOverview" -}, -"readOnly": true, -"type": "array" -} -}, -"type": "object" -}, -"LoadBalancerServer": { -"description": "The load balancer for sqlserver", -"id": "LoadBalancerServer", -"properties": { -"ip": { -"description": "Output only. The IP address", -"readOnly": true, -"type": "string" -}, -"vm": { -"description": "Output only. The VM name", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, "Location": { "description": "A resource that represents a Google Cloud location.", "id": "Location", @@ -1822,47 +1427,6 @@ }, "type": "object" }, -"SapComponent": { -"description": "The component of sap workload", -"id": "SapComponent", -"properties": { -"haHosts": { -"description": "A list of host URIs that are part of the HA configuration if present. An empty list indicates the component is not configured for HA.", -"items": { -"type": "string" -}, -"type": "array" -}, -"resources": { -"description": "Output only. resources in the component", -"items": { -"$ref": "CloudResource" -}, -"readOnly": true, -"type": "array" -}, -"sid": { -"description": "Output only. 
sid is the sap component identificator", -"readOnly": true, -"type": "string" -}, -"topologyType": { -"description": "The detected topology of the component.", -"enum": [ -"TOPOLOGY_TYPE_UNSPECIFIED", -"TOPOLOGY_SCALE_UP", -"TOPOLOGY_SCALE_OUT" -], -"enumDescriptions": [ -"Unspecified topology.", -"A scale-up single node system.", -"A scale-out multi-node system." -], -"type": "string" -} -}, -"type": "object" -}, "SapDiscovery": { "description": "The schema of SAP system discovery data.", "id": "SapDiscovery", @@ -1959,6 +1523,10 @@ "description": "Optional. Indicates whether this is a Java or ABAP Netweaver instance. true means it is ABAP, false means it is Java.", "type": "boolean" }, +"appInstanceNumber": { +"description": "Optional. Instance number of the SAP application instance.", +"type": "string" +}, "applicationType": { "description": "Required. Type of the application. Netweaver, etc.", "enum": [ @@ -1971,6 +1539,10 @@ ], "type": "string" }, +"ascsInstanceNumber": { +"description": "Optional. Instance number of the ASCS instance.", +"type": "string" +}, "ascsUri": { "description": "Optional. Resource URI of the recognized ASCS host of the application.", "type": "string" @@ -1990,6 +1562,10 @@ "description": "A set of properties describing an SAP Database layer.", "id": "SapDiscoveryComponentDatabaseProperties", "properties": { +"databaseSid": { +"description": "Optional. SID of the system database.", +"type": "string" +}, "databaseType": { "description": "Required. Type of the database. HANA, DB2, etc.", "enum": [ @@ -2010,6 +1586,10 @@ "description": "Optional. The version of the database software running in the system.", "type": "string" }, +"instanceNumber": { +"description": "Optional. Instance number of the SAP instance.", +"type": "string" +}, "primaryInstanceUri": { "description": "Required. URI of the recognized primary instance of the database.", "type": "string" @@ -2130,6 +1710,11 @@ }, "type": "array" }, +"instanceNumber": { +"description": "Optional. The VM's instance number.", +"format": "int64", +"type": "string" +}, "virtualHostname": { "description": "Optional. A virtual hostname of the instance if it has one.", "type": "string" @@ -2260,53 +1845,6 @@ }, "type": "object" }, -"SapWorkload": { -"description": "The body of sap workload", -"id": "SapWorkload", -"properties": { -"application": { -"$ref": "SapComponent", -"description": "Output only. the acsc componment", -"readOnly": true -}, -"database": { -"$ref": "SapComponent", -"description": "Output only. the database componment", -"readOnly": true -}, -"metadata": { -"additionalProperties": { -"type": "string" -}, -"description": "Output only. The metadata for SAP workload.", -"readOnly": true, -"type": "object" -} -}, -"type": "object" -}, -"SapWorkloadOverview": { -"description": "The overview of sap workload", -"id": "SapWorkloadOverview", -"properties": { -"appSid": { -"description": "Output only. The application SID", -"readOnly": true, -"type": "string" -}, -"dbSid": { -"description": "Output only. The database SID", -"readOnly": true, -"type": "string" -}, -"sapSystemId": { -"description": "Output only. The UUID for a SAP workload", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, "ScannedResource": { "description": "Message of scanned resource", "id": "ScannedResource", @@ -2410,59 +1948,6 @@ }, "type": "object" }, -"SqlserverWorkload": { -"description": "The body of sqlserver workload", -"id": "SqlserverWorkload", -"properties": { -"ags": { -"description": "Output only. 
The availability groups for sqlserver", -"items": { -"$ref": "AvailabilityGroup" -}, -"readOnly": true, -"type": "array" -}, -"cluster": { -"$ref": "Cluster", -"description": "Output only. The cluster for sqlserver", -"readOnly": true -}, -"databases": { -"description": "Output only. The databases for sqlserver", -"items": { -"$ref": "Database" -}, -"readOnly": true, -"type": "array" -}, -"loadBalancerServer": { -"$ref": "LoadBalancerServer", -"description": "Output only. The load balancer for sqlserver", -"readOnly": true -} -}, -"type": "object" -}, -"SqlserverWorkloadOverview": { -"description": "The overview of sqlserver workload", -"id": "SqlserverWorkloadOverview", -"properties": { -"availabilityGroup": { -"description": "Output only. The availability groups", -"items": { -"type": "string" -}, -"readOnly": true, -"type": "array" -}, -"sqlserverSystemId": { -"description": "Output only. The UUID for a Sqlserver workload", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, "Status": { "description": "The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors).", "id": "Status", @@ -2490,45 +1975,6 @@ }, "type": "object" }, -"ThreeTierWorkload": { -"description": "The body of three tier workload", -"id": "ThreeTierWorkload", -"properties": { -"apiLayer": { -"$ref": "APILayerServer", -"description": "Output only. The API layer for three tier workload", -"readOnly": true -}, -"backend": { -"$ref": "BackendServer", -"description": "Output only. The backend for three tier workload", -"readOnly": true -}, -"endpoint": { -"description": "Output only. the workload endpoint", -"readOnly": true, -"type": "string" -}, -"frontend": { -"$ref": "FrontEndServer", -"description": "Output only. The frontend for three tier workload", -"readOnly": true -} -}, -"type": "object" -}, -"ThreeTierWorkloadOverview": { -"description": "The overview of three tier workload", -"id": "ThreeTierWorkloadOverview", -"properties": { -"threeTierSystemId": { -"description": "Output only. The UUID for a three tier workload", -"readOnly": true, -"type": "string" -} -}, -"type": "object" -}, "ViolationDetails": { "description": "Message describing the violdation in execution result", "id": "ViolationDetails", @@ -2551,115 +1997,14 @@ }, "type": "object" }, -"WorkloadProfile": { -"description": "workload resource", -"id": "WorkloadProfile", -"properties": { -"application": { -"$ref": "Layer", -"deprecated": true, -"description": "Optional. The application layer" -}, -"ascs": { -"$ref": "Layer", -"deprecated": true, -"description": "Optional. The ascs layer" -}, -"database": { -"$ref": "Layer", -"deprecated": true, -"description": "Optional. The database layer" -}, -"labels": { -"additionalProperties": { -"type": "string" -}, -"description": "Optional. such as name, description, version. More example can be found in deployment", -"type": "object" -}, -"name": { -"description": "Identifier. name of resource names have the form 'projects/{project_id}/workloads/{workload_id}'", -"type": "string" -}, -"refreshedTime": { -"description": "Required. 
time when the workload data was refreshed", -"format": "google-datetime", -"type": "string" -}, -"sapWorkload": { -"$ref": "SapWorkload", -"description": "The sap workload content" -}, -"sqlserverWorkload": { -"$ref": "SqlserverWorkload", -"description": "The sqlserver workload content" -}, -"state": { -"deprecated": true, -"description": "Output only. [output only] the current state if a a workload", -"enum": [ -"STATE_UNSPECIFIED", -"ACTIVE", -"DEPLOYING", -"DESTROYING", -"MAINTENANCE" -], -"enumDescriptions": [ -"unspecified", -"ACTIVE state", -"workload is in Deploying state", -"The workload is in Destroying state", -"The Workload is undermaintance" -], -"readOnly": true, -"type": "string" -}, -"threeTierWorkload": { -"$ref": "ThreeTierWorkload", -"description": "The 3 tier web app workload content" -}, -"workloadType": { -"description": "Required. The type of the workload", -"enum": [ -"WORKLOAD_TYPE_UNSPECIFIED", -"S4_HANA", -"SQL_SERVER", -"THREE_TIER_WEB_APP" -], -"enumDescriptions": [ -"unspecified workload type", -"running sap workload s4/hana", -"running sqlserver workload", -"running 3 tier web app workload" -], -"type": "string" -} -}, -"type": "object" -}, -"WorkloadProfileOverview": { -"description": "a workload profile overview", -"id": "WorkloadProfileOverview", -"properties": { -"sapWorkloadOverview": { -"$ref": "SapWorkloadOverview", -"description": "The sap workload overview" -}, -"sqlserverWorkloadOverview": { -"$ref": "SqlserverWorkloadOverview", -"description": "The sqlserver workload overview" -}, -"threeTierWorkloadOverview": { -"$ref": "ThreeTierWorkloadOverview", -"description": "The three tier workload overview" -} -}, -"type": "object" -}, "WriteInsightRequest": { "description": "Request for sending the data insights.", "id": "WriteInsightRequest", "properties": { +"agentVersion": { +"description": "Optional. The agent version collected this data point.", +"type": "string" +}, "insight": { "$ref": "Insight", "description": "Required. The metrics data details." 
diff --git a/googleapiclient/discovery_cache/documents/workspaceevents.v1.json b/googleapiclient/discovery_cache/documents/workspaceevents.v1.json index 0a81527161..9ac8cf0bcb 100644 --- a/googleapiclient/discovery_cache/documents/workspaceevents.v1.json +++ b/googleapiclient/discovery_cache/documents/workspaceevents.v1.json @@ -424,7 +424,7 @@ } } }, -"revision": "20240312", +"revision": "20240319", "rootUrl": "https://workspaceevents.googleapis.com/", "schemas": { "ListSubscriptionsResponse": { diff --git a/googleapiclient/discovery_cache/documents/youtube.v3.json b/googleapiclient/discovery_cache/documents/youtube.v3.json index 4421c45f69..0dbf81b806 100644 --- a/googleapiclient/discovery_cache/documents/youtube.v3.json +++ b/googleapiclient/discovery_cache/documents/youtube.v3.json @@ -4037,7 +4037,7 @@ } } }, -"revision": "20240317", +"revision": "20240324", "rootUrl": "https://youtube.googleapis.com/", "schemas": { "AbuseReport": { diff --git a/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json b/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json index b55c49f3c7..f647e6d05b 100644 --- a/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json +++ b/googleapiclient/discovery_cache/documents/youtubeAnalytics.v2.json @@ -421,7 +421,7 @@ } } }, -"revision": "20240313", +"revision": "20240322", "rootUrl": "https://youtubeanalytics.googleapis.com/", "schemas": { "EmptyResponse": { diff --git a/googleapiclient/discovery_cache/documents/youtubereporting.v1.json b/googleapiclient/discovery_cache/documents/youtubereporting.v1.json index 1284851a4f..d1156b7cce 100644 --- a/googleapiclient/discovery_cache/documents/youtubereporting.v1.json +++ b/googleapiclient/discovery_cache/documents/youtubereporting.v1.json @@ -411,7 +411,7 @@ } } }, -"revision": "20240313", +"revision": "20240322", "rootUrl": "https://youtubereporting.googleapis.com/", "schemas": { "Empty": {