
feat(firestore): update the api
#### firestore:v1

The following keys were added:
- schemas.Aggregation.properties.avg.$ref (Total Keys: 1)
- schemas.Aggregation.properties.sum.$ref (Total Keys: 1)
- schemas.Avg (Total Keys: 3)
- schemas.GoogleFirestoreAdminV1RestoreDatabaseMetadata.properties.progressPercentage.$ref (Total Keys: 1)
- schemas.Sum (Total Keys: 3)

#### firestore:v1beta1

The following keys were added:
- schemas.Aggregation.properties.avg.$ref (Total Keys: 1)
- schemas.Aggregation.properties.sum.$ref (Total Keys: 1)
- schemas.Avg (Total Keys: 3)
- schemas.GoogleFirestoreAdminV1Progress (Total Keys: 6)
- schemas.GoogleFirestoreAdminV1RestoreDatabaseMetadata.properties.progressPercentage.$ref (Total Keys: 1)
- schemas.Sum (Total Keys: 3)

#### firestore:v1beta2

The following keys were added:
- schemas.GoogleFirestoreAdminV1Progress (Total Keys: 6)
- schemas.GoogleFirestoreAdminV1RestoreDatabaseMetadata.properties.progressPercentage.$ref (Total Keys: 1)
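
The `progressPercentage` additions surface restore progress on the long-running operation's metadata. As a minimal sketch with the discovery-based Python client — the project, database, and operation names below are hypothetical, and reading `completedWork`/`estimatedWork` assumes those are the `GoogleFirestoreAdminV1Progress` field names:

```python
from googleapiclient.discovery import build

# Uses Application Default Credentials; all resource names here are hypothetical.
firestore = build("firestore", "v1")

op_name = "projects/my-project/databases/my-db/operations/my-restore-op"
op = firestore.projects().databases().operations().get(name=op_name).execute()

# For a restore, `metadata` is a GoogleFirestoreAdminV1RestoreDatabaseMetadata;
# its new `progressPercentage` field is a GoogleFirestoreAdminV1Progress.
progress = op.get("metadata", {}).get("progressPercentage", {})
print(progress.get("completedWork"), "/", progress.get("estimatedWork"))
```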
yoshi-automation committed Aug 4, 2023
1 parent e289b1e commit 569043e
Showing 6 changed files with 144 additions and 18 deletions.
12 changes: 6 additions & 6 deletions docs/dyn/firestore_v1.projects.databases.backupSchedules.html
@@ -112,7 +112,7 @@ <h3>Method Details</h3>
   "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time.
   },
   "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`
-  "retention": "A String", # At what relative time in the future, compared to the creation time of the backup should the backup be deleted, i.e. keep backups for 7 days.
+  "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days.
   "updateTime": "A String", # Output only. The timestamp at which this backup schedule was most recently updated. When a backup schedule is first created, this is the same as create_time.
   "weeklyRecurrence": { # Represents a recurring schedule that runs on a specified day of the week. The time zone is UTC. # For a schedule that runs weekly on a specific day and time.
     "day": "A String", # The day of week to run. DAY_OF_WEEK_UNSPECIFIED is not allowed.
@@ -132,7 +132,7 @@ <h3>Method Details</h3>
   "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time.
   },
   "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`
-  "retention": "A String", # At what relative time in the future, compared to the creation time of the backup should the backup be deleted, i.e. keep backups for 7 days.
+  "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days.
   "updateTime": "A String", # Output only. The timestamp at which this backup schedule was most recently updated. When a backup schedule is first created, this is the same as create_time.
   "weeklyRecurrence": { # Represents a recurring schedule that runs on a specified day of the week. The time zone is UTC. # For a schedule that runs weekly on a specific day and time.
     "day": "A String", # The day of week to run. DAY_OF_WEEK_UNSPECIFIED is not allowed.
@@ -177,7 +177,7 @@ <h3>Method Details</h3>
   "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time.
   },
   "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`
-  "retention": "A String", # At what relative time in the future, compared to the creation time of the backup should the backup be deleted, i.e. keep backups for 7 days.
+  "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days.
   "updateTime": "A String", # Output only. The timestamp at which this backup schedule was most recently updated. When a backup schedule is first created, this is the same as create_time.
   "weeklyRecurrence": { # Represents a recurring schedule that runs on a specified day of the week. The time zone is UTC. # For a schedule that runs weekly on a specific day and time.
     "day": "A String", # The day of week to run. DAY_OF_WEEK_UNSPECIFIED is not allowed.
@@ -206,7 +206,7 @@ <h3>Method Details</h3>
   "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time.
   },
   "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`
-  "retention": "A String", # At what relative time in the future, compared to the creation time of the backup should the backup be deleted, i.e. keep backups for 7 days.
+  "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days.
   "updateTime": "A String", # Output only. The timestamp at which this backup schedule was most recently updated. When a backup schedule is first created, this is the same as create_time.
   "weeklyRecurrence": { # Represents a recurring schedule that runs on a specified day of the week. The time zone is UTC. # For a schedule that runs weekly on a specific day and time.
     "day": "A String", # The day of week to run. DAY_OF_WEEK_UNSPECIFIED is not allowed.
@@ -230,7 +230,7 @@ <h3>Method Details</h3>
   "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time.
   },
   "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`
-  "retention": "A String", # At what relative time in the future, compared to the creation time of the backup should the backup be deleted, i.e. keep backups for 7 days.
+  "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days.
   "updateTime": "A String", # Output only. The timestamp at which this backup schedule was most recently updated. When a backup schedule is first created, this is the same as create_time.
   "weeklyRecurrence": { # Represents a recurring schedule that runs on a specified day of the week. The time zone is UTC. # For a schedule that runs weekly on a specific day and time.
     "day": "A String", # The day of week to run. DAY_OF_WEEK_UNSPECIFIED is not allowed.
@@ -251,7 +251,7 @@ <h3>Method Details</h3>
   "dailyRecurrence": { # Represent a recurring schedule that runs at a specific time every day. The time zone is UTC. # For a schedule that runs daily at a specified time.
   },
   "name": "A String", # Output only. The unique backup schedule identifier across all locations and databases for the given project. This will be auto-assigned. Format is `projects/{project}/databases/{database}/backupSchedules/{backup_schedule}`
-  "retention": "A String", # At what relative time in the future, compared to the creation time of the backup should the backup be deleted, i.e. keep backups for 7 days.
+  "retention": "A String", # At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days.
   "updateTime": "A String", # Output only. The timestamp at which this backup schedule was most recently updated. When a backup schedule is first created, this is the same as create_time.
   "weeklyRecurrence": { # Represents a recurring schedule that runs on a specified day of the week. The time zone is UTC. # For a schedule that runs weekly on a specific day and time.
     "day": "A String", # The day of week to run. DAY_OF_WEEK_UNSPECIFIED is not allowed.
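
The reworded `retention` field in the hunks above is a protobuf Duration, so "keep backups for 7 days" is expressed as `604800s`. A minimal creation sketch with the discovery-based client (project and database names are hypothetical):

```python
from googleapiclient.discovery import build

firestore = build("firestore", "v1")

parent = "projects/my-project/databases/(default)"  # hypothetical
schedule = firestore.projects().databases().backupSchedules().create(
    parent=parent,
    body={
        "dailyRecurrence": {},   # daily schedule; the message carries no settable fields
        "retention": "604800s",  # delete each backup 7 days after its creation time
    },
).execute()
print(schedule["name"])  # auto-assigned: projects/.../backupSchedules/{backup_schedule}
```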
14 changes: 12 additions & 2 deletions docs/dyn/firestore_v1.projects.databases.documents.html
@@ -1398,9 +1398,9 @@ <h3>Method Details</h3>
     ],
   },
   "filter": { # A digest of all the documents that match a given target. # A filter to apply to the set of documents previously returned for the given target. Returned when documents may have been removed from the given target, but the exact documents are unknown.
-    "count": 42, # The total count of documents that match target_id. If different from the count of documents in the client that match, the client must manually determine which documents no longer match the target. The client can use the `unchanged_names` bloom filter to assist with this determination.
+    "count": 42, # The total count of documents that match target_id. If different from the count of documents in the client that match, the client must manually determine which documents no longer match the target. The client can use the `unchanged_names` bloom filter to assist with this determination by testing ALL the document names against the filter; if the document name is NOT in the filter, it means the document no longer matches the target.
     "targetId": 42, # The target ID to which this filter applies.
-    "unchangedNames": { # A bloom filter (https://en.wikipedia.org/wiki/Bloom_filter). The bloom filter hashes the entries with MD5 and treats the resulting 128-bit hash as 2 distinct 64-bit hash values, interpreted as unsigned integers using 2's complement encoding. These two hash values, named `h1` and `h2`, are then used to compute the `hash_count` hash values using the formula, starting at `i=0`: h(i) = h1 + (i * h2) These resulting values are then taken modulo the number of bits in the bloom filter to get the bits of the bloom filter to test for the given entry. # A bloom filter that contains the UTF-8 byte encodings of the resource names of the documents that match target_id, in the form `projects/{project_id}/databases/{database_id}/documents/{document_path}` that have NOT changed since the query results indicated by the resume token or timestamp given in `Target.resume_type`. This bloom filter may be omitted at the server's discretion, such as if it is deemed that the client will not make use of it or if it is too computationally expensive to calculate or transmit. Clients must gracefully handle this field being absent by falling back to the logic used before this field existed; that is, re-add the target without a resume token to figure out which documents in the client's cache are out of sync.
+    "unchangedNames": { # A bloom filter (https://en.wikipedia.org/wiki/Bloom_filter). The bloom filter hashes the entries with MD5 and treats the resulting 128-bit hash as 2 distinct 64-bit hash values, interpreted as unsigned integers using 2's complement encoding. These two hash values, named `h1` and `h2`, are then used to compute the `hash_count` hash values using the formula, starting at `i=0`: h(i) = h1 + (i * h2) These resulting values are then taken modulo the number of bits in the bloom filter to get the bits of the bloom filter to test for the given entry. # A bloom filter that, despite its name, contains the UTF-8 byte encodings of the resource names of ALL the documents that match target_id, in the form `projects/{project_id}/databases/{database_id}/documents/{document_path}`. This bloom filter may be omitted at the server's discretion, such as if it is deemed that the client will not make use of it or if it is too computationally expensive to calculate or transmit. Clients must gracefully handle this field being absent by falling back to the logic used before this field existed; that is, re-add the target without a resume token to figure out which documents in the client's cache are out of sync.
       "bits": { # A sequence of bits, encoded in a byte array. Each byte in the `bitmap` byte array stores 8 bits of the sequence. The only exception is the last byte, which may store 8 _or fewer_ bits. The `padding` defines the number of bits of the last byte to be ignored as "padding". The values of these "padding" bits are unspecified and must be ignored. To retrieve the first bit, bit 0, calculate: `(bitmap[0] & 0x01) != 0`. To retrieve the second bit, bit 1, calculate: `(bitmap[0] & 0x02) != 0`. To retrieve the third bit, bit 2, calculate: `(bitmap[0] & 0x04) != 0`. To retrieve the fourth bit, bit 3, calculate: `(bitmap[0] & 0x08) != 0`. To retrieve bit n, calculate: `(bitmap[n / 8] & (0x01 << (n % 8))) != 0`. The "size" of a `BitSequence` (the number of bits it contains) is calculated by this formula: `(bitmap.length * 8) - padding`. # The bloom filter data.
         "bitmap": "A String", # The bytes that encode the bit sequence. May have a length of zero.
         "padding": 42, # The number of bits of the last byte in `bitmap` to ignore as "padding". If the length of `bitmap` is zero, then this value must be `0`. Otherwise, this value must be between 0 and 7, inclusive.
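
The `unchangedNames` description above spells out the whole membership test. A minimal sketch of it in Python; the 64-bit wrap-around and the little-endian split of the MD5 digest are assumptions not stated in the doc text, and `hash_count` is the sibling field of `bits` on the `BloomFilter` message:

```python
import base64
import hashlib

def might_contain(doc_name: str, bitmap_b64: str, padding: int, hash_count: int) -> bool:
    """Test one document resource name against the `unchanged_names` bloom filter."""
    bitmap = base64.b64decode(bitmap_b64)
    size = len(bitmap) * 8 - padding  # BitSequence size: (bitmap.length * 8) - padding
    if size <= 0:
        return False
    digest = hashlib.md5(doc_name.encode("utf-8")).digest()
    # Split the 128-bit MD5 hash into two unsigned 64-bit values h1 and h2
    # (little-endian byte order is an assumption).
    h1 = int.from_bytes(digest[:8], "little")
    h2 = int.from_bytes(digest[8:], "little")
    for i in range(hash_count):
        # h(i) = h1 + (i * h2), wrapped to 64 bits, then modulo the bit count.
        bit = ((h1 + i * h2) & 0xFFFFFFFFFFFFFFFF) % size
        if not (bitmap[bit // 8] & (0x01 << (bit % 8))):  # bit-n test from BitSequence
            return False  # a clear bit: the name is definitely not in the filter
    return True  # every bit set: the name is possibly in the filter
```

Per the reworded `count` comment, any cached document whose name fails this test can be treated as no longer matching the target.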
@@ -1763,9 +1763,19 @@ <h3>Method Details</h3>
   "aggregations": [ # Optional. Series of aggregations to apply over the results of the `structured_query`. Requires: * A minimum of one and maximum of five aggregations per query.
     { # Defines an aggregation that produces a single result.
       "alias": "A String", # Optional. Optional name of the field to store the result of the aggregation into. If not provided, Firestore will pick a default name following the format `field_`. For example: ``` AGGREGATE COUNT_UP_TO(1) AS count_up_to_1, COUNT_UP_TO(2), COUNT_UP_TO(3) AS count_up_to_3, COUNT(*) OVER ( ... ); ``` becomes: ``` AGGREGATE COUNT_UP_TO(1) AS count_up_to_1, COUNT_UP_TO(2) AS field_1, COUNT_UP_TO(3) AS count_up_to_3, COUNT(*) AS field_2 OVER ( ... ); ``` Requires: * Must be unique across all aggregation aliases. * Conform to document field name limitations.
+      "avg": { # Average of the values of the requested field. * Only numeric values will be aggregated. All non-numeric values including `NULL` are skipped. * If the aggregated values contain `NaN`, returns `NaN`. * If the aggregated value set is empty, returns `NULL`. * Always returns the result as a double. # Average aggregator.
+        "field": { # A reference to a field in a document, ex: `stats.operations`. # The field to aggregate on.
+          "fieldPath": "A String", # The relative path of the document being referenced. Requires: * Conform to document field name limitations.
+        },
+      },
       "count": { # Count of documents that match the query. The `COUNT(*)` aggregation function operates on the entire document so it does not require a field reference. # Count aggregator.
         "upTo": "A String", # Optional. Optional constraint on the maximum number of documents to count. This provides a way to set an upper bound on the number of documents to scan, limiting latency, and cost. Unspecified is interpreted as no bound. High-Level Example: ``` AGGREGATE COUNT_UP_TO(1000) OVER ( SELECT * FROM k ); ``` Requires: * Must be greater than zero when present.
       },
+      "sum": { # Sum of the values of the requested field. * Only numeric values will be aggregated. All non-numeric values including `NULL` are skipped. * If the aggregated values contain `NaN`, returns `NaN`. * If the aggregated value set is empty, returns 0. * Returns a 64-bit integer if the sum result is an integer value and does not overflow. Otherwise, the result is returned as a double. Note that even if all the aggregated values are integers, the result is returned as a double if it cannot fit within a 64-bit signed integer. When this occurs, the returned value will lose precision. * When underflow occurs, floating-point aggregation is non-deterministic. This means that running the same query repeatedly without any changes to the underlying values could produce slightly different results each time. In those cases, values should be stored as integers over floating-point numbers. # Sum aggregator.
+        "field": { # A reference to a field in a document, ex: `stats.operations`. # The field to aggregate on.
+          "fieldPath": "A String", # The relative path of the document being referenced. Requires: * Conform to document field name limitations.
+        },
+      },
     },
   ],
   "structuredQuery": { # A Firestore query. # Nested structured query.
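
The new `avg` and `sum` aggregators sit alongside `count` in the `aggregations` list above. A minimal `runAggregationQuery` sketch with the discovery-based client (project, collection, and field names are hypothetical):

```python
from googleapiclient.discovery import build

firestore = build("firestore", "v1")

parent = "projects/my-project/databases/(default)/documents"  # hypothetical
body = {
    "structuredAggregationQuery": {
        "structuredQuery": {"from": [{"collectionId": "orders"}]},
        "aggregations": [
            {"alias": "order_count", "count": {}},
            {"alias": "avg_total", "avg": {"field": {"fieldPath": "total"}}},
            {"alias": "sum_total", "sum": {"field": {"fieldPath": "total"}}},
        ],
    }
}
# The method streams results; the discovery client returns them as a list.
for response in (
    firestore.projects().databases().documents()
    .runAggregationQuery(parent=parent, body=body).execute()
):
    print(response.get("result", {}).get("aggregateFields"))
```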
