docs/dyn/dataproc_v1.projects.locations.autoscalingPolicies.html (+6)
@@ -124,6 +124,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -167,6 +168,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -235,6 +237,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -333,6 +336,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -482,6 +486,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -525,6 +530,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
docs/dyn/dataproc_v1.projects.regions.autoscalingPolicies.html (+6)
@@ -124,6 +124,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -167,6 +168,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -235,6 +237,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -333,6 +336,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -482,6 +486,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
@@ -525,6 +530,7 @@ <h3>Method Details</h3>
 "cooldownPeriod": "A String", # Optional. Duration between scaling events. A scaling period starts after the update operation from the previous event has completed. Bounds: 2m, 1d. Default: 2m.
 "gracefulDecommissionTimeout": "A String", # Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for a Spark worker to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.
+"removeOnlyIdleWorkers": True or False, # Optional. Remove only idle workers when scaling down the cluster.
 "scaleDownFactor": 3.14, # Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
 "scaleDownMinWorkerFraction": 3.14, # Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
 "scaleUpFactor": 3.14, # Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
docs/dyn/dataproc_v1.projects.regions.clusters.nodeGroups.html (+6)
@@ -158,6 +158,9 @@ <h3>Method Details</h3>
 "minNumInstances": 42, # Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
 "numInstances": 42, # Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
 "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
+"startupConfig": { # Configuration to handle the startup of instances during cluster create and update process. # Optional. Configuration to handle the startup of instances during cluster create and update process.
+  "requiredRegistrationFraction": 3.14, # Optional. The config setting to enable cluster creation/update to be successful only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This will include instance creation, agent registration, and service registration (if enabled).
+},
 },
 "roles": [ # Required. Node group roles.
   "A String",
@@ -266,6 +269,9 @@ <h3>Method Details</h3>
 "minNumInstances": 42, # Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
 "numInstances": 42, # Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
 "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
+"startupConfig": { # Configuration to handle the startup of instances during cluster create and update process. # Optional. Configuration to handle the startup of instances during cluster create and update process.
+  "requiredRegistrationFraction": 3.14, # Optional. The config setting to enable cluster creation/update to be successful only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This will include instance creation, agent registration, and service registration (if enabled).
+},
 },
 "roles": [ # Required. Node group roles.
googleapiclient/discovery_cache/documents/dataproc.v1.json (+21, -1)
@@ -3001,7 +3001,7 @@
 }
 }
 },
-"revision": "20230919",
+"revision": "20230926",
 "rootUrl": "https://dataproc.googleapis.com/",
 "schemas": {
 "AcceleratorConfig": {
@@ -4496,6 +4496,10 @@
 "Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot).This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features."
 ],
 "type": "string"
+},
+"startupConfig": {
+  "$ref": "StartupConfig",
+  "description": "Optional. Configuration to handle the startup of instances during cluster create and update process."
 }
 },
 "type": "object"
@@ -6640,6 +6644,10 @@
 "format": "google-duration",
 "type": "string"
 },
+"removeOnlyIdleWorkers": {
+  "description": "Optional. Remove only idle workers when scaling down cluster",
+  "type": "boolean"
+},
 "scaleDownFactor": {
 "description": "Required. Fraction of required executors to remove from Spark Serverless clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job.(more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling donw (less aggressive scaling).Bounds: 0.0, 1.0.",
 "format": "double",
@@ -6678,6 +6686,18 @@
 },
 "type": "object"
 },
+"StartupConfig": {
+  "description": "Configuration to handle the startup of instances during cluster create and update process.",
+  "id": "StartupConfig",
+  "properties": {
+    "requiredRegistrationFraction": {
+      "description": "Optional. The config setting to enable cluster creation/ updation to be successful only after required_registration_fraction of instances are up and running. This configuration is applicable to only secondary workers for now. The cluster will fail if required_registration_fraction of instances are not available. This will include instance creation, agent registration, and service registration (if enabled).",