ci(java): ignore bot users for generate-files-bot
Depends on googleapis/repo-automation-bots#1254

Fixes googleapis/repo-automation-bots#1096

Source-Author: Jeff Ching <chingor@google.com>
Source-Date: Tue Dec 15 16:16:07 2020 -0800
Source-Repo: googleapis/synthtool
Source-Sha: 3f67ceece7e797a5736a25488aae35405649b90b
Source-Link: googleapis/synthtool@3f67cee
yoshi-automation committed Dec 16, 2020
1 parent 6471488 commit 516195f
Showing 13 changed files with 390 additions and 624 deletions.
14 changes: 7 additions & 7 deletions docs/dyn/dataproc_v1.projects.locations.workflowTemplates.html
@@ -251,7 +251,7 @@ <h3>Method Details</h3>
},
&quot;scheduling&quot;: { # Job scheduling options. # Optional. Job scheduling configuration.
&quot;maxFailuresPerHour&quot;: 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
-&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
+&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240
},
&quot;sparkJob&quot;: { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
&quot;archiveUris&quot;: [ # Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
@@ -686,7 +686,7 @@ <h3>Method Details</h3>
},
&quot;scheduling&quot;: { # Job scheduling options. # Optional. Job scheduling configuration.
&quot;maxFailuresPerHour&quot;: 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
-&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
+&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240
},
&quot;sparkJob&quot;: { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
&quot;archiveUris&quot;: [ # Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
@@ -1148,7 +1148,7 @@ <h3>Method Details</h3>
},
&quot;scheduling&quot;: { # Job scheduling options. # Optional. Job scheduling configuration.
&quot;maxFailuresPerHour&quot;: 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
-&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
+&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240
},
&quot;sparkJob&quot;: { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
&quot;archiveUris&quot;: [ # Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
@@ -1674,7 +1674,7 @@ <h3>Method Details</h3>
},
&quot;scheduling&quot;: { # Job scheduling options. # Optional. Job scheduling configuration.
&quot;maxFailuresPerHour&quot;: 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
-&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
+&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240
},
&quot;sparkJob&quot;: { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
&quot;archiveUris&quot;: [ # Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
@@ -2150,7 +2150,7 @@ <h3>Method Details</h3>
},
&quot;scheduling&quot;: { # Job scheduling options. # Optional. Job scheduling configuration.
&quot;maxFailuresPerHour&quot;: 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
-&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
+&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240
},
&quot;sparkJob&quot;: { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
&quot;archiveUris&quot;: [ # Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
@@ -2691,7 +2691,7 @@ <h3>Method Details</h3>
},
&quot;scheduling&quot;: { # Job scheduling options. # Optional. Job scheduling configuration.
&quot;maxFailuresPerHour&quot;: 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
-&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
+&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240
},
&quot;sparkJob&quot;: { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
&quot;archiveUris&quot;: [ # Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
@@ -3126,7 +3126,7 @@ <h3>Method Details</h3>
},
&quot;scheduling&quot;: { # Job scheduling options. # Optional. Job scheduling configuration.
&quot;maxFailuresPerHour&quot;: 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
-&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
+&quot;maxFailuresTotal&quot;: 42, # Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240
},
&quot;sparkJob&quot;: { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
&quot;archiveUris&quot;: [ # Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
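For orientation, the scheduling object touched in every hunk above is the per-job scheduling configuration of a Dataproc workflow template. A minimal sketch of supplying these fields through the google-api-python-client surface documented in this file, assuming Application Default Credentials are available; the template ID, project, region, bucket, and class names are placeholders:

from googleapiclient import discovery

# Build the Dataproc v1 client; in this sketch credentials come from
# Application Default Credentials.
dataproc = discovery.build("dataproc", "v1")

template = {
    "id": "example-template",  # hypothetical template ID
    "placement": {"clusterSelector": {"clusterLabels": {"env": "dev"}}},
    "jobs": [
        {
            "stepId": "spark-step",
            "sparkJob": {
                "mainClass": "org.example.SparkApp",             # placeholder
                "jarFileUris": ["gs://example-bucket/app.jar"],  # placeholder
            },
            # The scheduling options shown in the hunks above. Per the field
            # comments, maxFailuresPerHour may not exceed 10 and
            # maxFailuresTotal may not exceed 240.
            "scheduling": {"maxFailuresPerHour": 5, "maxFailuresTotal": 20},
        }
    ],
}

created = dataproc.projects().locations().workflowTemplates().create(
    parent="projects/my-project/locations/us-central1",  # placeholder
    body=template,
).execute()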
48 changes: 0 additions & 48 deletions docs/dyn/dataproc_v1.projects.regions.clusters.html
@@ -92,9 +92,6 @@ <h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="#getIamPolicy">getIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.</p>
<p class="toc_element">
<code><a href="#injectCredentials">injectCredentials(project, region, cluster, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Inject encrypted credentials into all of the VMs in a cluster.The target cluster must be a personal auth cluster assigned to the user who is issuing the RPC.</p>
<p class="toc_element">
<code><a href="#list">list(projectId, region, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all regions/{region}/clusters in a project alphabetically.</p>
@@ -734,51 +731,6 @@ <h3>Method Details</h3>
}</pre>
</div>

<div class="method">
<code class="details" id="injectCredentials">injectCredentials(project, region, cluster, body=None, x__xgafv=None)</code>
<pre>Inject encrypted credentials into all of the VMs in a cluster.The target cluster must be a personal auth cluster assigned to the user who is issuing the RPC.

Args:
project: string, Required. The ID of the Google Cloud Platform project the cluster belongs to, of the form projects/. (required)
region: string, Required. The region containing the cluster, of the form regions/. (required)
cluster: string, Required. The cluster, in the form clusters/. (required)
body: object, The request body.
The object takes the form of:

{ # A request to inject credentials into a cluster.
&quot;clusterUuid&quot;: &quot;A String&quot;, # Required. The cluster UUID.
&quot;credentialsCiphertext&quot;: &quot;A String&quot;, # Required. The encrypted credentials being injected in to the cluster.The client is responsible for encrypting the credentials in a way that is supported by the cluster.A wrapped value is used here so that the actual contents of the encrypted credentials are not written to audit logs.
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # This resource represents a long-running operation that is the result of a network API call.
&quot;done&quot;: True or False, # If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.
&quot;error&quot;: { # The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details.You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
&quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
&quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
],
&quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
&quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
&quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.
&quot;response&quot;: { # The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
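The injectCredentials method deleted above returned the long-running Operation shown in its Returns section. A hedged sketch of how the documented signature and the usual poll-until-done loop fit together; all resource names, the UUID, and the ciphertext are placeholders, and since this regeneration removes the method from the documented surface, the sketch reflects the pre-removal docs only:

import time

from googleapiclient import discovery

dataproc = discovery.build("dataproc", "v1")

# Signature as documented above: three path-style identifiers plus the
# request body, all placeholders here.
operation = dataproc.projects().regions().clusters().injectCredentials(
    project="projects/my-project",
    region="regions/us-central1",
    cluster="clusters/my-cluster",
    body={
        "clusterUuid": "00000000-0000-0000-0000-000000000000",
        "credentialsCiphertext": "...",  # encrypted by the caller beforehand
    },
).execute()

# Poll until done is true, then inspect error or response, following the
# Operation schema documented above.
while not operation.get("done", False):
    time.sleep(5)
    operation = (
        dataproc.projects().regions().operations()
        .get(name=operation["name"]).execute()
    )
if "error" in operation:
    raise RuntimeError(operation["error"].get("message", "operation failed"))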

<div class="method">
<code class="details" id="list">list(projectId, region, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</code>
<pre>Lists all regions/{region}/clusters in a project alphabetically.
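The list call above is paginated; a minimal sketch of draining every page with the client's list_next helper, with project and region as placeholders (the method's full response schema is truncated above):

from googleapiclient import discovery

dataproc = discovery.build("dataproc", "v1")

clusters = []
request = dataproc.projects().regions().clusters().list(
    projectId="my-project", region="us-central1"  # placeholders
)
# list_next returns None once the final page has been fetched.
while request is not None:
    response = request.execute()
    clusters.extend(response.get("clusters", []))
    request = dataproc.projects().regions().clusters().list_next(request, response)

print(f"{len(clusters)} clusters found")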
