From accb18a1e7d8beda4d7d65505eda095027391637 Mon Sep 17 00:00:00 2001
From: Colin Hryniowski <13041044+co-jo@users.noreply.github.com>
Date: Tue, 26 Oct 2021 15:30:45 -0700
Subject: [PATCH] Feature - Large Appends: Fix Conflict & Update with Master. (#6398)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Issue 6130: Fix HealthServiceManager Race Condition (#6131)
  Fixed a race condition in HealthServiceManager.
  Signed-off-by: Colin Hryniowski

* Issue 6117: Transactions get aborted due to lease expiry on failure or slowness of pingTxn API (#6120)
  Signed-off-by: Shashwat Sharma
  - Increase the timeout of transactions to 10 minutes.

* Issue 6099: Table Segment Key Count (#6123)
  Created a new API to retrieve information about a Table Segment: i) length and start offset, ii) key length, and iii) entry count.
  Signed-off-by: Andrei Paduroiu

* Issue 6147: Stream Tag documentation updates. (#6148)
  Ensure the Stream Tag option, which is now a part of the StreamConfiguration, is updated as part of the Controller documentation.
  Signed-off-by: Sandeep

* Issue 6121: SLTS - Make CHUNKED_STORAGE default. (#6122)
  Make CHUNKED_STORAGE the default storage option.
  Signed-off-by: Sachin Joshi

* Issue 6132: Update README with new public token (#6133)
  Updates the snapshot builds section with the new public token. Mentions an alternative option for pre-release artifacts. Fixes a broken link to the Quick Start page.
  Signed-off-by: Igor Medvedev

* Issue 6128: Ensure RevisionedStreamClient logs the fact that no new updates are present to read post a revision. (#6127)
  Signed-off-by: Sandeep
  Co-authored-by: Tom Kaitchuck

* Issue 6156: (Controller) Bugfix for ListStreamsForTag API's handling of continuation tokens. (#6157)
  Fix IndexOutOfBoundsException on the listStreamsForTag API.
  Signed-off-by: Sandeep

* Issue 2431: Reduce warnings (#6153)
  Reduce warnings in the build and in the IDE.
  Signed-off-by: Tom Kaitchuck
  Co-authored-by: Andrei Paduroiu

* Issue 6164: Typo in Pravega concepts Transactions section (#6165)
  Grammar fix: removed a stray "a" in a sentence in the Transactions section.
  Signed-off-by: James Kim
  Co-authored-by: Andrei Paduroiu

* Issue 6145: Custom thread pool (#6134)
  This provides a custom implementation of the same interfaces as ScheduledThreadPoolExecutor in the standard library. However, instead of using a heap data structure under a lock to store the queued tasks, it uses a lock-free queue. The queue works by splitting the delayed and non-delayed tasks. The non-delayed tasks are kept in a deque, so adding and removing them can be done in O(1) without holding a lock. The delayed tasks still take O(log(n)) but do so with a lock-free data structure, so that we can have parallelism when adding and removing them. (A minimal sketch of this idea follows.)
  Signed-off-by: Tom Kaitchuck
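The split-queue idea above can be pictured with a short sketch. This is illustrative only: the class and method names are hypothetical, not the actual ThreadPoolScheduledExecutorService added under io.pravega.common.concurrent. Immediate tasks flow through a lock-free deque in O(1), while delayed tasks sit in a concurrent skip list ordered by deadline, so neither path holds a global lock.

```java
// Sketch of the split between non-delayed and delayed tasks. Hypothetical
// names; not the actual ThreadPoolScheduledExecutorService implementation.
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

final class SplitTaskQueue {
    // Non-delayed tasks: lock-free, O(1) insertion and removal.
    private final ConcurrentLinkedDeque<Runnable> immediate = new ConcurrentLinkedDeque<>();
    // Delayed tasks: O(log n) but lock-free; ordered by deadline, sequence breaks ties.
    private final ConcurrentSkipListMap<long[], Runnable> delayed =
            new ConcurrentSkipListMap<>((a, b) -> a[0] != b[0]
                    ? Long.compare(a[0], b[0]) : Long.compare(a[1], b[1]));
    private final AtomicLong sequence = new AtomicLong();

    void execute(Runnable task) {
        immediate.addLast(task);
    }

    void schedule(Runnable task, long delayNanos) {
        delayed.put(new long[]{System.nanoTime() + delayNanos, sequence.incrementAndGet()}, task);
    }

    // Worker threads call this: immediate tasks first, then any delayed task that is due.
    Runnable poll() {
        Runnable task = immediate.pollFirst();
        if (task != null) {
            return task;
        }
        Map.Entry<long[], Runnable> first = delayed.firstEntry();
        if (first != null && first.getKey()[0] <= System.nanoTime()
                && delayed.remove(first.getKey(), first.getValue())) {
            return first.getValue();
        }
        return null;
    }
}
```

A production executor would also park idle workers until the earliest deadline; the sketch only shows how the two-queue split removes the single lock around ScheduledThreadPoolExecutor's heap.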
* Issue 5699: SLTS - Old system journal files should be garbage collected. (#6154)
  Old system journal files are now added to the garbage collector queue after a new snapshot is saved.
  Signed-off-by: Sachin Joshi
  Co-authored-by: Andrei Paduroiu

* Issue 6162: Extra spaces on Pravega.io docs -> Understanding Pravega -> Concepts (#6163)
  Removed all extra spaces in pravega-concepts.md to keep the spacing consistent among the sentences.
  Signed-off-by: James Kim

* Issue 6159: Extra space on Pravega.io docs (#6161)
  Removed an extra space in pravega-concepts.md.
  Signed-off-by: James Kim
  Co-authored-by: Andrei Paduroiu

* Issue 5525: Adding support for reading influxdb credentials from a file (#6173)
  Currently, influxdb credentials are set as part of the Java options in plain text. Made a change in pravega-operator to pass those credentials as a secret and mount them in a volume. Changes are done to read the credentials from the file and populate them in the Java options.
  Signed-off-by: anishakj

* Issue 6139: (SegmentStore) Improving Table Segment Pre-Caching and Background Indexing (#6149)
  - Made TableExtensionConfig externally configurable. This should allow us to properly tweak it if we have to (for debugging/repair purposes).
  - Changed ContainerKeyIndex pre-caching to execute in batches no bigger than 128MB. This enables pre-caching of larger segments without the risk of running out of memory (see the sketch below).
  - Changed WriterTableProcessor to index a maximum of 128MB at once, even if the backlog is bigger. Remaining unindexed data will be processed in later iterations.
  Signed-off-by: Andrei Paduroiu
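The batching in the second bullet is essentially a bounded loop: read at most a fixed number of bytes, index them, then continue. A minimal sketch, assuming a hypothetical readAndIndex() helper; names are illustrative, not the actual ContainerKeyIndex code.

```java
// Sketch: pre-cache a segment range in batches of at most 128 MB, so a
// large segment never has to fit in memory at once. Hypothetical names.
import java.util.concurrent.CompletableFuture;

final class BatchedPreCacher {
    private static final long MAX_BATCH_BYTES = 128 * 1024 * 1024;

    CompletableFuture<Void> preCache(long startOffset, long endOffset) {
        if (startOffset >= endOffset) {
            return CompletableFuture.completedFuture(null);
        }
        long batchEnd = Math.min(startOffset + MAX_BATCH_BYTES, endOffset);
        // Only begin the next batch once this one has been read and indexed.
        return readAndIndex(startOffset, batchEnd)
                .thenCompose(v -> preCache(batchEnd, endOffset));
    }

    private CompletableFuture<Void> readAndIndex(long from, long to) {
        // Placeholder: read [from, to) from the segment and populate the key cache.
        return CompletableFuture.completedFuture(null);
    }
}
```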
* Issue 6011: Implement Stream Tag REST APIs (#6143)
  Implement the Stream Tag REST APIs.
  Signed-off-by: Atharva

* Issue 6155: Secure Access to Segment Store CLI (#6141)
  Added a delegation token for Segment Store CLI commands. Updated unit tests for testing Segment Store CLI commands with TLS and auth.
  Signed-off-by: dellThejas

* Issue 5560: Read the contents of a segment from the storage (#6172)
  The CLI command "segmentstore read-segment" has been tweaked to write the obtained segment data into a specified file.
  Signed-off-by: anirudhkovuru

* Issue 6191: Fix transaction lease period in system tests (#6190)
  Changed the default transaction lease max time to 600 seconds in the remaining places.
  Signed-off-by: Raúl Gracia

* Issue 6193: Temporary revert of Retry predicate change for Controller Events (#6064) (#6194)
  This PR temporarily reverts PR #6064 as it shows instability in system tests.
  Signed-off-by: Sandeep

* Issue 5791: Fix LocalPravegaEmulator dependencies for external projects (#5795)
  Overrides gRPC's older transitive protobuf-java version with our force-upgraded protobuf-java version; otherwise, external projects pick up an older protobuf-java version that Pravega was not compiled against. Adds the jjwt dependency to pravega-standalone for users of LocalPravegaEmulator. For some reason, pravega-standalone requires jjwt, and jjwt is otherwise excluded as a transitive dependency by a rule in pravega-client.
  Signed-off-by: Derek Moore
  Co-authored-by: Sandeep
  Co-authored-by: Andrei Paduroiu

* Issue 6086: SLTS - Robust garbage collection (PDP-53). (#6108)
  Initial implementation of PDP-53. Makes garbage collection in SLTS more robust, comprehensively covering additional failure modes without adding complexity.
  Signed-off-by: Sachin Joshi
  Co-authored-by: Andrei Paduroiu
  Co-authored-by: Raúl Gracia

* Issue 6184: SLTS - Fix generation of inconsistent snapshots (#6185)
  SLTS - SystemJournal - Fix a concurrency issue with inconsistent metadata during snapshot creation.
  Signed-off-by: Sachin Joshi

* Issue 6187: Disabling authentication for the health check APIs (#6188)
  Disabled authentication for the health check API endpoints, so that no credentials are required to invoke them.
  Signed-off-by: SrishT

* Issue 6195: Bug fix for KeyValueTableImpl.exists() when the key actually exists (#6196)
  Fixed a bug in KeyValueTableImpl where exists() would throw an unexpected exception instead of returning true. Also properly covered this method in unit tests.
  Signed-off-by: Andrei Paduroiu
  Co-authored-by: Tom Kaitchuck

* Issue 6093: Conditional Segment Merge (#6138)
  Adds a new mergeStreamSegment() call that makes it possible to conditionally update a set of attributes on the target Segment and merge the source Segment into the target Segment if the attributes have been successfully updated.
  Signed-off-by: Raúl Gracia

* Issue 6070: Configuration of TLSv1.3 in Pravega Components (#6114)
  - This PR implements TLSv1.3 support in Pravega components: Segment Store and Controller.
  - The Segment Store's and the Controller's gRPC endpoints use the Netty library to support secure communication.
  - The Controller's REST endpoint uses the Grizzly library for TLS communication.
  - This PR also includes changes for Pravega standalone to support the TLSv1.3 protocol.
  - Implements strict mode (either TLSv1.2 or TLSv1.3) or mixed mode (TLSv1.2 and TLSv1.3) communication; a sketch of the protocol selection follows this entry.
  Signed-off-by: SaiCharan
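On the Netty side, the strict/mixed distinction boils down to which protocol versions are handed to the SslContext. A hedged sketch; the certificate paths are placeholders and this is not Pravega's actual configuration code.

```java
// Sketch: restrict a Netty server SslContext to the chosen TLS versions.
// Strict mode passes a single version; mixed mode passes both.
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import java.io.File;
import javax.net.ssl.SSLException;

final class TlsContextFactory {
    static SslContext forServer(boolean strictTls13) throws SSLException {
        String[] protocols = strictTls13
                ? new String[]{"TLSv1.3"}               // strict mode
                : new String[]{"TLSv1.2", "TLSv1.3"};   // mixed mode
        return SslContextBuilder
                .forServer(new File("server-cert.pem"), new File("server-key.pem")) // placeholder paths
                .protocols(protocols)
                .build();
    }
}
```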
* Issue 6200: SLTS Bootstrap Improvements. (#6201)
  Immediately after bootstrap, persist an extra snapshot to the journals and use it for identification/elimination of zombie journal records.
  Signed-off-by: Sachin Joshi

* Issue 6198: (Bugfix) Fix incorrect handling of empty value during listStreams (#6211)
  Fix incorrect handling of an empty value during listStreams.
  Signed-off-by: Sandeep sandeep.shridhar@emc.com

* Issue 6144: Health Checks for Controller Services (#6192)
  Adding liveness and readiness checks for the Controller.
  Signed-off-by: anishakj

* Issue 6012: System test for Stream Tags (#6151)
  Signed-off-by: Shashwat Sharma
  Co-authored-by: Sandeep

* Issue 6210: Configure REST ServerConfig's (SegmentStore) TLS Certificates in JKS format (#6214)
  Added new properties which load the TLS certificates and keys.
  Signed-off-by: SaiCharan

* Issue 6219: Feature HealthCheck for SegmentStore (#6197)
  Added a health check for the following feature in the Segment Store: the ZooKeeper connection.
  Signed-off-by: dellThejas

* Issue 6205: ListKeyValueTables API throws NullPointerException if metadataTable storing list of KVTs in Scope is deleted. (#6207)
  Signed-off-by: pbelgundi

* Issue 6223: SLTS - GC deleteSegment delete/update metadata in small batches. (#6224)
  GC deleteSegment now deletes/updates metadata in small batches.
  Signed-off-by: Sachin Joshi

* Issue 6158: Expose Flush-To-Storage API in Admin CLI (#6208)
  Exposes the flushToStorage API for a given container ID through the admin CLI. The command allows a forced flush of all Tier 1 data to LTS.
  Signed-off-by: anirudhkovuru
  Co-authored-by: Andrei Paduroiu

* Issue 4790: Pruning logs for standalone Pravega (#6234)
  Signed-off-by: Abhin Balur
  Co-authored-by: Sandeep

* Issue 6183: Change readSegment method parameter from int to long in SegmentHelper (#6239)
  SegmentHelper.readSegment() takes an integer segment offset, whereas the underlying WireCommand it invokes takes a long offset. Using an integer offset can limit reading to at most the first 2GB of data in a Segment, so this API was changed to take a long offset instead of an int.
  Signed-off-by: Shashwat Sharma
  Co-authored-by: Andrei Paduroiu

* Issue 6223: SLTS - Enable loading third-party storages. (#6231)
  Signed-off-by: Sachin Joshi

* Issue 6226: Update of security documentation. (#6227)
  Ensure the TLSv1.3 configuration is updated as a part of the security documentation.
  Signed-off-by: SaiCharan
  Co-authored-by: Andrei Paduroiu

* Issue 6021: Make sure that commitProcessor can be unblocked if queueProcessor terminates exceptionally (#6220)
  Make sure that commitProcessor can be unblocked if queueProcessor terminates exceptionally in the OperationProcessor class.
  Signed-off-by: Raúl Gracia

* Issue 6232: (Bugfix) Ensure StateSynchronizer handles stale updates from RevisionedStreamClient. (#6233)
  Ensure StateSynchronizer handles a scenario where its in-memory state is newer than the update received from the RevisionedStreamClient.
  Signed-off-by: Sandeep
  Co-authored-by: Andrei Paduroiu
  Co-authored-by: Tom Kaitchuck

* Issue 6252: SLTS - Add robustness to system journal scan. (#6253)
  Instead of stopping the journal scan at the first gap, the code should stop only after a gap greater than maxJournalWriteAttempts. The code should use an optimistic approach to read journals and remove redundant calls. Also fixes a unit test bug by properly injecting failures in doCreateWithContent and doCreate.
  Signed-off-by: Sachin Joshi

* Issue 6150: Change Health Check REST port in Pravega standalone (#6251)
  Change the REST listening port from 9092 to the default 6061 in Pravega standalone.
  Signed-off-by: Brian Zhou

* Issue 6238: Security related fixes (#6237)
  - Change the Pravega docker base image to Alpine+AdoptOpenJDK.
  - Update some dependencies to newer versions.
  Signed-off-by: Igor Medvedev
  Co-authored-by: Sandeep

* Issue 6112: System Test Log Bundle Format (#6115)
  If available, the system test log collection will default to bundling the log files using zip instead of a compressed tar.
  Signed-off-by: Colin Hryniowski

* Issue 5551: Fix CommandEncoder memory leak (#5552)
  Set a memory limit for CommandEncoder; when that limit is reached, shut down the TCP connection and let the caller re-create a new one.
  Signed-off-by: Wenqi Mou

* Issue 6142: Expose segment rolloverSizeBytes to client. (#6203)
  Add a rollover size for stream segments and table segments.
  Signed-off-by: Wenqi Mou

* Issue 6265: Enforce JUnit version 4.12 (#6264)
  A recent upgrade of fasterxml.jackson caused an upgrade of the JUnit dependency to 4.13.1, which causes multiple test failures.
  Signed-off-by: Sandeep

* Issue 6229: Setting tlsProtocolVersion defaults in InProcPravegaCluster (#6235)
  Signed-off-by: SaiCharan

* Issue 6179: Implement scope list command on pravega-cli (#6249)
  Adds the scope list command to pravega-cli; when called, the existing scopes are displayed alphabetically.
  Signed-off-by: James Kim

* Issue 6219: Additional Segment Store Health-checks (#6245)
  Added health-checks for the Segment Store cache manager and container registry.
  Signed-off-by: dellThejas

* Issue 5554: Test Consumption based Stream Retention with Single Subscriber (#5617)
  Added a new system test to check consumption-based Stream Retention through a single subscriber.
  Signed-off-by: anirudhkovuru

* Issue 6278: Add missing jq tool (#6279)
  Added installation of the jq tool in the Dockerfile.
  Signed-off-by: Igor Medvedev

* Issue 6269: (Bugfix) Controller failed to update readergroup (#6270)
  Change log description: In ZKCheckpointStore, the AtomicBoolean isZKConnected is initialized to false. In the constructor, we add a listener that listens for state changes to the Curator client connection state. However, by the time this listener is added, the ZKConnectionState is already "CONNECTED" and there are no further state changes; hence, the value of isZKConnected remains false and never changes to true even though the Curator client is connected to ZooKeeper. This causes ZKCheckpointStore.isZKConnected() to always return false.
  Signed-off-by: Shashwat Sharma
  Fixes #6270
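The shape of such a fix is to seed the flag from the client's current state, so that a CONNECTED transition that happened before the listener was registered is not lost. A sketch with illustrative names; this is not the actual ZKCheckpointStore code.

```java
// Sketch: track ZooKeeper connectivity without missing the initial
// CONNECTED event. Illustrative names only.
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.curator.framework.CuratorFramework;

final class ZkConnectionTracker {
    private final AtomicBoolean connected = new AtomicBoolean(false);

    ZkConnectionTracker(CuratorFramework client) {
        // React to future state transitions...
        client.getConnectionStateListenable().addListener(
                (c, newState) -> connected.set(newState.isConnected()));
        // ...and also capture the state that existed before the listener was
        // added; otherwise an already-connected client leaves the flag false forever.
        connected.set(client.getZookeeperClient().isConnected());
    }

    boolean isConnected() {
        return connected.get();
    }
}
```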
* Issue 6061, 6109: Transaction performance improvements (#6283)
  Fixes for the open-transaction metric count and improvements to recording commit offsets for transactions.
  - Issue 6109: Change metric computation for Open Transaction Count to be based on Table Segment Entry Count (#6126). The Open Transactions Count metric computation has been changed to use the GetTableSegmentInfo WireCommand to get the number of entries in each transactionsInEpochTable.
  - Issue 6061: Reduce update time for recording of commit offsets post transaction segment merge. (#6125) During transaction Commit Event processing, we recordCommitOffsets for every transaction after all transaction segments have been merged. This causes one network call to be made to the Segment Store per transaction to update the ActiveTxnRecord. This can instead be done as a bulk update operation after the segments for all transactions have been merged.
  Signed-off-by: pbelgundi

* Issue 6284: Log Collection Unbound Variable Fix. (#6286)
  Add a variable at the start of the script to avoid any unbound variable errors.
  Signed-off-by: Colin Hryniowski

* Issue 6272: Removing JAVA_HOME from entrypoint.sh and adding it to Dockerfile, so that bookkeeper OSS and branch builds can use a different Java runtime. (#6287)
  Signed-off-by: gaddas3
  Co-authored-by: gaddas3
  Co-authored-by: Raúl Gracia

* Issue 6294: Need for setting memory limits in Kubernetes system test pods (#6295)
  Added memory limits for the JVM heap size and Direct Memory in Kubernetes system test pods.
  Signed-off-by: Raúl

* Issue 6281: Reduce connection pool size in Controller (#6280)
  Sets the number of connections per Segment Store in the Controller connection pool to 1.
  Signed-off-by: Raúl

* Issue 6289: Update Reader Group never completes post Controller restart (#6296)
  Added an isRunning() check to the isReady() method in ControllerEventProcessors so that we wait for the Service to be started completely before attempting to sweep tasks to it. Also corrected logging mistakes in the ControllerImpl class.
  Signed-off-by: pbelgundi

* Issue 6300: Update pravegaVersion in master to 0.11.0 (#6301)
  Signed-off-by: pbelgundi

* Issue 6221: Setting BookieID for every bookie instance (#6222)
  Sets a unique, non-network-related bookie id value for every newly created BookKeeper instance (and keeps the bookie id across restarts of the same instance).
  Signed-off-by: SrishT

* Issue 6314: ContainerEventProcessor close method may exhaust threads if executed in bursts (#6317)
  Replace the call to super.close() with super.stopAsync() in the EventProcessorImpl.close() method to avoid blocking threads until the service is stopped (see the sketch below).
  Signed-off-by: Raúl
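The distinction here is requesting shutdown versus waiting for it. A minimal sketch with Guava's service API; illustrative, not the actual EventProcessorImpl code.

```java
// Sketch: close() triggers an asynchronous stop instead of blocking the
// calling thread until the service reaches TERMINATED.
import com.google.common.util.concurrent.AbstractService;

abstract class AsyncClosableService extends AbstractService implements AutoCloseable {
    @Override
    public void close() {
        // stopAsync() merely requests the stop and returns immediately; a
        // blocking wait (e.g. awaitTerminated()) here could tie up one thread
        // per instance when many processors are closed in a burst.
        stopAsync();
    }
}
```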
* Issue 6306: SLTS - avoid deadlock in truncate (#6307)
  - In the case of system segments, do not attempt to delete block read index entries.
  - Add debug logs for the openRead, openWrite, and getStreamSegmentInfo methods.
  - Reduce log noise in system journals.
  Signed-off-by: Sachin Joshi

* Issue 6266: Fixing the bookkeeper image spec in the system test framework (#6267)
  Modifies the image specification within the bookkeeper cluster spec so that it reflects the right directory structure.
  Signed-off-by: SrishT

* Issue 6315: Exception on transactions open metric count computation (#6316)
  Added entries for expected and failing replies for WireCommands.GetTableSegmentInfo in SegmentHelper.
  Signed-off-by: pbelgundi

* Issue 6290: Fix some Segment Store flaky tests (#6273)
  Fixes a couple of flaky Segment Store tests (related to the OperationProcessor termination state and the StreamSegmentContainer attributes test).
  Signed-off-by: Raúl
  Co-authored-by: Andrei Paduroiu

* Issue 6338: Controller log improvements (#6339)
  Signed-off-by: pbelgundi

* Issue 6334: Defensive fix for update Reader Group remaining in stuck state (#6343)
  Signed-off-by: pbelgundi

* Issue 6335: Segment storage and core threads stats always have zero values (#6336)
  getSnapshot() in ExecutorServiceHelpers can now get values for the executor type ThreadPoolScheduledExecutorService.
  Signed-off-by: dellThejas
  Co-authored-by: Raúl Gracia

* Issue 6268: validate segmentstore memory settings (#6313)
  Validate the Segment Store memory settings before starting the process.
  Signed-off-by: Abhin Balur

* Issue 6340: Fix CommitRequestHandler may get stuck throwing NPE (#6355)
  NPE fix.
  Signed-off-by: pbelgundi

* Issue 6356: ContainerEventProcessor internal segment could be evicted from memory (#6357)
  Make sure to always call registerPinnedSegment when loading/creating the internal segment for ContainerEventProcessor.
  Signed-off-by: Raúl Gracia

* Issue 6341: Validating operator versions (#6342)
  Signed-off-by: SrishT

* Issue 6292: EventStreamWriter.flush() should throw RetriesExhaustedException in case of consistent connectivity failures. (#6291)
  Ensure EventStreamWriter.flush() throws RetriesExhaustedException consistently in case all retries to establish a connection fail.
  Signed-off-by: Sandeep

* Issue 6345: Fix system test false positives (#6369)
  Assign resources to ZK in the system test framework.
  Signed-off-by: Abhin Balur

* Issue 6375: Defensive fix to ensure the notifiers work in case of an empty StateSynchronizer state. (#6373)
  Signed-off-by: Sandeep

* Issue 6204: make InProcPravegaCluster listen on all IPv4 interfaces (#6274)
  Make InProcPravegaCluster listen on all IPv4 interfaces instead of just localhost, so that Docker can redirect traffic to the segment store.
  Signed-off-by: Derek Moore

* Issue 6236: Table Segment Admin Commands (#6303)
  Implements the following commands:
  - `table-segment get-info`
  - `table-segment get`
  - `table-segment put`
  - `table-segment set-key-serializer`
  - `table-segment set-value-serializer`
  The serializer commands allow the user to choose the required built-in serializer for the type of table segment being queried. The currently available built-in serializers are for container metadata and SLTS metadata.
  Signed-off-by: anirudhkovuru
  Co-authored-by: Andrei Paduroiu

* Issue 6370: Change the default log level in system tests to INFO (#6374)
  Signed-off-by: anishakj

* Issue 6135: Update/truncate stream workflows should not be retried on a sealed stream (#6212)
  Truncate and update workflows on streams do not fail even if the operation is being performed on a sealed stream, and are instead retried indefinitely. This is unnecessary, since the stream state can never be modified once it has been sealed.
  Signed-off-by: SrishT

* Issue 5624: Adding API to fetch current head of Byte Stream (#6372)
  Signed-off-by: SrishT

* SLTS: Implement S3 binding using AWS Java SDK v2 (#6384)
  Initial implementation of the S3 binding using the AWS Java SDK v2. Fix tests to handle ChunkStorage, which is strictly no-append. Fix bugs found in DefragmentOperation.
  Signed-off-by: Sachin Joshi

* Issue 6386: LTS - Enable use of Custom binding in system tests. (#6388)
  Signed-off-by: Sachin Joshi

* Issue 6392: Upgrade grpc library to 1.36.2 (#6393)
  Signed-off-by: Sandeep

* Fix test compilation issues.
  Signed-off-by: Colin Hryniowski

Co-authored-by: Shashwat Sharma <85731764+shshashwat@users.noreply.github.com>
Co-authored-by: Andrei Paduroiu
Co-authored-by: Sandeep
Co-authored-by: Sachin Jayant Joshi <44757683+sachin-j-joshi@users.noreply.github.com>
Co-authored-by: Igor Medvedev <55915597+medvedevigorek@users.noreply.github.com>
Co-authored-by: Tom Kaitchuck
Co-authored-by: James Kim <33760507+kgh475926@users.noreply.github.com>
Co-authored-by: anishakj <43978302+anishakj@users.noreply.github.com>
Co-authored-by: Atharva Joshi
Co-authored-by: dellThejas <55413736+dellThejas@users.noreply.github.com>
Co-authored-by: anirudhkovuru <31472818+anirudhkovuru@users.noreply.github.com>
Co-authored-by: Raúl Gracia
Co-authored-by: Derek P.
Moore Co-authored-by: Srishti Thakkar Co-authored-by: SaiCharan Co-authored-by: Prajakta Belgundi Co-authored-by: abhinb <88840194+abhinb@users.noreply.github.com> Co-authored-by: Brian Zhou Co-authored-by: Wenqi Mou Co-authored-by: Subba Gaddamadugu Co-authored-by: gaddas3 --- README.md | 12 +- .../extendeds3/ExtendedS3StorageFactory.java | 1 + .../filesystem/FileSystemStorageFactory.java | 1 + .../storage/hdfs/HDFSChunkStorage.java | 2 +- .../storage/hdfs/HDFSExceptionHelpers.java | 2 +- .../storage/hdfs/HDFSStorageFactory.java | 1 + .../io/pravega/storage/s3/S3ChunkStorage.java | 394 ++++++ .../storage/s3/S3SimpleStorageFactory.java | 85 ++ .../pravega/storage/s3/S3StorageConfig.java | 128 ++ .../storage/s3/S3StorageFactoryCreator.java | 53 + ...segmentstore.storage.StorageFactoryCreator | 1 + .../pravega/storage/StorageFactoryTests.java | 64 +- .../ExtendedS3SimpleStorageTests.java | 2 + .../FileSystemChunkStorageMockTest.java | 5 +- .../filesystem/FileSystemMockTests.java | 4 +- .../FileSystemSimpleStorageTest.java | 3 + .../storage/hdfs/HDFSSimpleStorageTest.java | 11 + .../io/pravega/storage/s3/S3ClientMock.java | 117 ++ .../java/io/pravega/storage/s3/S3Mock.java | 276 +++++ .../storage/s3/S3SimpleStorageTests.java | 156 +++ .../storage/s3/S3StorageConfigTest.java | 59 + .../io/pravega/storage/s3/S3TestContext.java | 61 + build.gradle | 76 +- checkstyle/import-control.xml | 7 +- cli/admin/README.md | 24 +- .../io/pravega/cli/admin/AdminCommand.java | 64 +- .../pravega/cli/admin/AdminCommandState.java | 8 + .../admin/controller/ControllerCommand.java | 4 +- .../ControllerDescribeStreamCommand.java | 4 +- .../dataRecovery/DataRecoveryCommand.java | 2 +- .../DurableLogRecoveryCommand.java | 5 +- .../StorageListSegmentsCommand.java | 2 +- .../admin/segmentstore/ContainerCommand.java | 29 + .../segmentstore/FlushToStorageCommand.java | 80 ++ .../GetSegmentAttributeCommand.java | 2 +- .../segmentstore/GetSegmentInfoCommand.java | 2 +- .../segmentstore/ReadSegmentRangeCommand.java | 97 +- .../segmentstore/SegmentStoreCommand.java | 7 +- .../UpdateSegmentAttributeCommand.java | 2 +- .../GetTableSegmentEntryCommand.java | 68 ++ .../GetTableSegmentInfoCommand.java | 79 ++ .../ListTableSegmentKeysCommand.java | 72 ++ .../tableSegment/ModifyTableSegmentEntry.java | 93 ++ .../PutTableSegmentEntryCommand.java | 59 + .../tableSegment/SetSerializerCommand.java | 64 + .../tableSegment/TableSegmentCommand.java | 110 ++ .../admin/serializers/AbstractSerializer.java | 81 ++ .../serializers/ContainerKeySerializer.java | 38 + .../ContainerMetadataSerializer.java | 117 ++ .../admin/serializers/SltsKeySerializer.java | 38 + .../serializers/SltsMetadataSerializer.java | 215 ++++ .../cli/admin/utils/AdminSegmentHelper.java | 125 ++ ...LIControllerConfig.java => CLIConfig.java} | 47 +- .../io/pravega/cli/admin/utils/ZKHelper.java | 1 + .../bookkeeper/BookkeeperCommandsTest.java | 4 +- .../controller/ControllerCommandsTest.java | 11 +- .../SecureControllerCommandsTest.java | 3 +- .../admin/dataRecovery/DataRecoveryTest.java | 4 +- .../AbstractSegmentStoreCommandsTest.java | 399 ++++++ .../SegmentStoreCommandsTest.java | 133 -- .../ContainerKeySerializerTest.java | 32 + .../ContainerMetadataSerializerTest.java | 60 + .../cli/admin/serializers/SerializerTest.java | 63 + .../serializers/SltsKeySerializerTest.java | 32 + .../SltsMetadataSerializerTest.java | 126 ++ .../io/pravega/cli/admin/utils/TestUtils.java | 35 +- .../java/io/pravega/cli/user/Command.java | 1 + .../cli/user/config/InteractiveConfig.java | 7 + 
.../cli/user/kvs/KeyValueTableCommand.java | 2 + .../pravega/cli/user/scope/ScopeCommand.java | 33 + .../user/utils/BackgroundConsoleListener.java | 1 + .../cli/user/scope/ScopeCommandsTest.java | 28 + .../client/ByteStreamClientFactory.java | 6 +- .../java/io/pravega/client/ClientConfig.java | 4 +- .../client/byteStream/ByteStreamReader.java | 6 + .../client/byteStream/ByteStreamWriter.java | 6 + .../impl/BufferedByteStreamWriterImpl.java | 5 + .../byteStream/impl/ByteStreamReaderImpl.java | 6 + .../byteStream/impl/ByteStreamWriterImpl.java | 5 + .../connection/impl/CommandEncoder.java | 27 +- .../pravega/client/connection/impl/Flow.java | 2 +- .../connection/impl/TcpClientConnection.java | 2 +- .../client/control/impl/ControllerImpl.java | 39 +- .../client/control/impl/ModelHelper.java | 10 +- .../impl/AsyncSegmentInputStreamImpl.java | 4 +- .../pravega/client/segment/impl/Segment.java | 5 +- .../segment/impl/SegmentMetadataClient.java | 7 + .../impl/SegmentMetadataClientImpl.java | 13 +- .../segment/impl/SegmentOutputStreamImpl.java | 8 +- .../client/state/impl/RevisionImpl.java | 5 +- .../impl/RevisionedStreamClientImpl.java | 12 +- .../state/impl/StateSynchronizerImpl.java | 19 +- .../client/stream/EventWriterConfig.java | 4 +- .../client/stream/ReaderGroupConfig.java | 5 +- .../io/pravega/client/stream/Sequence.java | 5 +- .../client/stream/StreamConfiguration.java | 14 +- .../client/stream/impl/ClientFactoryImpl.java | 6 +- .../stream/impl/EventStreamReaderImpl.java | 4 + .../stream/impl/EventStreamWriterImpl.java | 14 +- .../client/stream/impl/StreamCutImpl.java | 5 +- .../client/stream/impl/StreamImpl.java | 5 +- .../notifier/EndOfDataNotifier.java | 4 +- .../notifier/SegmentNotifier.java | 26 +- .../tables/KeyValueTableConfiguration.java | 17 +- .../client/tables/impl/KeyValueTableImpl.java | 1 + .../client/tables/impl/TableSegment.java | 15 + .../client/tables/impl/TableSegmentImpl.java | 10 + .../tables/impl/TableSegmentKeyVersion.java | 5 +- .../client/CredentialsExtractorTest.java | 2 + .../batch/impl/BatchClientImplTest.java | 1 + .../byteStream/ByteStreamReaderTest.java | 10 +- .../byteStream/ByteStreamWriterTest.java | 11 +- .../connection/impl/CommandEncoderTest.java | 82 +- .../impl/ConnectionFactoryImplTest.java | 4 +- .../client/connection/impl/RawClientTest.java | 2 + .../control/impl/ControllerImplLBTest.java | 2 +- .../control/impl/ControllerImplTest.java | 2 +- .../client/control/impl/ModelHelperTest.java | 36 +- .../auth/JwtTokenProviderImplTest.java | 2 +- .../impl/ConditionalOutputStreamTest.java | 4 +- .../impl/EventSegmentReaderImplTest.java | 5 + .../segment/impl/SegmentInputStreamTest.java | 4 +- .../impl/SegmentMetadataClientTest.java | 3 +- .../segment/impl/SegmentOutputStreamTest.java | 210 ++-- .../client/state/impl/SynchronizerTest.java | 92 +- .../stream/StreamConfigurationTest.java | 6 +- .../pravega/client/stream/StreamCutTest.java | 2 +- .../stream/impl/DefaultCredentialsTest.java | 3 + .../stream/impl/EventStreamReaderTest.java | 4 +- .../stream/impl/EventStreamWriterTest.java | 5 + .../client/stream/impl/PingerTest.java | 4 +- .../stream/impl/SegmentTransactionTest.java | 1 + .../client/stream/mock/MockController.java | 6 +- .../stream/mock/MockSegmentIoStreams.java | 6 + .../notifications/EndOfDataNotifierTest.java | 22 + .../notifications/SegmentNotifierTest.java | 24 + .../tables/impl/KeyValueTableImplTests.java | 1 + .../impl/KeyValueTableIteratorImplTests.java | 2 +- .../tables/impl/KeyValueTableTestBase.java | 2 + 
.../tables/impl/MockTableSegmentFactory.java | 9 + .../tables/impl/TableSegmentImplTest.java | 16 + .../watermark/WatermarkSerializerTest.java | 3 +- .../common/concurrent/AsyncSemaphore.java | 4 +- .../concurrent/ExecutorServiceFactory.java | 46 +- .../concurrent/ExecutorServiceHelpers.java | 7 +- .../pravega/common/concurrent/Scheduled.java | 36 + .../common/concurrent/ScheduledQueue.java | 353 ++++++ .../ThreadPoolScheduledExecutorService.java | 396 ++++++ .../io/serialization/VersionedSerializer.java | 2 +- .../java/io/pravega/common/lang/Int96.java | 9 +- .../common/security/TLSProtocolVersion.java | 41 + .../io/pravega/common/util/BitConverter.java | 1 + .../util/PriorityBlockingDrainingQueue.java | 2 +- .../pravega/common/util/TypedProperties.java | 36 +- .../concurrent/AsyncSemaphoreTests.java | 2 +- .../ExecutorServiceFactoryTests.java | 6 +- .../ExecutorServiceHelpersTests.java | 23 +- .../common/concurrent/ScheduledQueueTest.java | 187 +++ ...hreadPoolScheduledExecutorServiceTest.java | 299 +++++ .../FileModificationEventWatcherTests.java | 2 +- .../security/TLSProtocolVersionTest.java | 47 + .../util/BlockingDrainingQueueTests.java | 1 + .../common/util/BufferViewTestBase.java | 2 +- .../common/util/CompositeBufferViewTests.java | 1 + .../util/CompositeByteArraySegmentTests.java | 2 + .../io/pravega/common/util/RetryTests.java | 2 + .../common/util/SortedIndexTestBase.java | 6 +- .../common/util/ToStringUtilsTest.java | 5 +- .../common/util/TypedPropertiesTests.java | 17 +- .../pravega/common/util/btree/BTreeIndex.java | 1 + .../common/util/btree/BTreePageTests.java | 2 +- config/admin-cli.properties | 14 +- config/config.properties | 6 +- config/standalone-config.properties | 5 + .../src/conf/controller.config.properties | 1 + .../impl/EventProcessorGroupImpl.java | 2 +- .../fault/ControllerClusterListener.java | 27 + .../fault/SegmentContainerMonitor.java | 12 +- .../controller/server/ControllerService.java | 25 +- .../server/ControllerServiceStarter.java | 26 +- .../io/pravega/controller/server/Main.java | 1 + .../controller/server/SegmentHelper.java | 117 +- .../server/SegmentStoreConnectionManager.java | 421 ------- .../server/bucket/BucketManager.java | 2 + .../server/bucket/InMemoryBucketManager.java | 10 + .../server/bucket/ZooKeeperBucketManager.java | 10 + .../ControllerEventProcessors.java | 65 +- .../eventProcessor/LocalController.java | 60 +- .../AbstractRequestProcessor.java | 24 +- .../requesthandlers/CommitRequestHandler.java | 106 +- .../CreateReaderGroupTask.java | 4 +- .../requesthandlers/StreamRequestHandler.java | 41 +- .../requesthandlers/TruncateStreamTask.java | 35 +- .../UpdateReaderGroupTask.java | 17 +- .../requesthandlers/UpdateStreamTask.java | 36 +- .../kvtable/CreateTableTask.java | 4 +- .../ClusterListenerHealthContributor.java | 45 + .../EventProcessorHealthContributor.java | 43 + .../health/GRPCServerHealthContributor.java | 41 + .../RetentionServiceHealthContributor.java | 46 + ...mentContainerMonitorHealthContributor.java | 46 + .../WatermarkingServiceHealthContributor.java | 45 + .../controller/server/rest/ModelHelper.java | 59 +- .../server/rest/generated/api/Bootstrap.java | 2 +- .../server/rest/generated/api/ScopesApi.java | 5 +- .../rest/generated/api/ScopesApiService.java | 2 +- .../api/impl/ScopesApiServiceImpl.java | 2 +- .../generated/model/CreateScopeRequest.java | 7 +- .../generated/model/CreateStreamRequest.java | 79 +- .../generated/model/ReaderGroupProperty.java | 2 +- .../generated/model/ReaderGroupsList.java | 2 +- 
.../model/ReaderGroupsListReaderGroups.java | 2 +- .../rest/generated/model/RetentionConfig.java | 2 +- .../rest/generated/model/ScaleMetadata.java | 2 +- .../rest/generated/model/ScalingConfig.java | 2 +- .../generated/model/ScalingEventList.java | 2 +- .../rest/generated/model/ScopeProperty.java | 2 +- .../rest/generated/model/ScopesList.java | 2 +- .../server/rest/generated/model/Segment.java | 2 +- .../rest/generated/model/StreamProperty.java | 79 +- .../rest/generated/model/StreamState.java | 2 +- .../rest/generated/model/StreamsList.java | 2 +- .../server/rest/generated/model/TagsList.java | 64 + .../generated/model/TimeBasedRetention.java | 2 +- .../generated/model/UpdateStreamRequest.java | 79 +- .../resources/StreamMetadataResourceImpl.java | 82 +- .../controller/server/rest/v1/ApiV1.java | 6 +- .../server/rpc/grpc/GRPCServer.java | 14 + .../server/rpc/grpc/GRPCServerConfig.java | 7 + .../rpc/grpc/impl/GRPCServerConfigImpl.java | 12 +- .../controller/store/InMemoryScope.java | 2 +- .../controller/store/PravegaTablesScope.java | 81 +- .../store/PravegaTablesStoreHelper.java | 42 +- .../controller/store/ZKStoreHelper.java | 15 +- .../store/checkpoint/CheckpointStore.java | 7 + .../checkpoint/InMemoryCheckpointStore.java | 10 + .../store/checkpoint/ZKCheckpointStore.java | 20 +- .../store/kvtable/AbstractKVTableBase.java | 1 + .../kvtable/AbstractKVTableMetadataStore.java | 1 + .../store/kvtable/KVTableMetadataStore.java | 2 +- .../PravegaTablesKVTMetadataStore.java | 8 + .../store/kvtable/PravegaTablesKVTable.java | 1 + .../kvtable/records/KVTSegmentRecord.java | 1 + .../stream/AbstractStreamMetadataStore.java | 19 +- .../store/stream/InMemoryStream.java | 33 +- .../store/stream/PersistentStreamBase.java | 130 +- .../store/stream/PravegaTablesStream.java | 59 +- .../PravegaTablesStreamMetadataStore.java | 2 +- .../controller/store/stream/Stream.java | 7 +- .../store/stream/StreamMetadataStore.java | 23 +- .../store/stream/TxnWriterMark.java | 33 + .../stream/VersionedTransactionData.java | 2 +- .../store/stream/ZKGarbageCollector.java | 30 +- .../controller/store/stream/ZKStream.java | 6 +- .../store/stream/ZookeeperBucketStore.java | 4 + .../store/stream/records/ActiveTxnRecord.java | 2 +- .../store/stream/records/EpochRecord.java | 1 + .../records/ReaderGroupConfigRecord.java | 6 +- .../records/StreamConfigurationRecord.java | 20 +- .../stream/records/StreamSegmentRecord.java | 1 + .../KeyValueTable/TableMetadataTasks.java | 14 +- .../task/Stream/StreamMetadataTasks.java | 99 +- .../StreamTransactionMetadataTasks.java | 35 +- .../io/pravega/controller/util/Config.java | 19 +- .../impl/CheckpointStoreTests.java | 2 +- .../impl/ConcurrentEPSerializedRHTest.java | 2 +- .../impl/ConcurrentEventProcessorTest.java | 2 +- .../impl/EventProcessorTest.java | 2 +- .../impl/SerializedRequestHandlerTest.java | 2 +- .../impl/ZKCheckpointStoreTests.java | 1 + .../ZkCheckpointStoreConnectivityTest.java | 2 +- .../fault/ControllerClusterListenerTest.java | 4 + .../fault/SegmentContainerMonitorTest.java | 1 - .../controller/mocks/SegmentHelperMock.java | 18 +- .../controller/rest/v1/ModelHelperTest.java | 25 +- .../pravega/controller/rest/v1/PingTest.java | 2 + .../v1/StreamMetaDataAuthFocusedTests.java | 4 +- .../rest/v1/StreamMetaDataTests.java | 37 +- .../server/ControllerServiceConfigTest.java | 2 +- .../server/ControllerServiceMainTest.java | 7 +- .../server/ControllerServiceStarterTest.java | 3 +- .../ControllerServiceWithKVTableTest.java | 2 +- ...erServiceWithPravegaTablesKVTableTest.java 
| 2 +- .../ControllerServiceWithStreamTest.java | 5 +- ...egaTablesControllerServiceStarterTest.java | 4 +- .../controller/server/SegmentHelperTest.java | 80 +- .../SegmentStoreConnectionManagerTest.java | 422 ------- .../ZKBackedControllerServiceStarterTest.java | 2 + .../server/bucket/BucketServiceTest.java | 2 +- .../server/bucket/WatermarkWorkflowTest.java | 4 +- ...EventProcessorPravegaTablesStreamTest.java | 98 ++ .../ControllerEventProcessorTest.java | 17 +- .../ControllerEventProcessorsTest.java | 131 +- .../PravegaTablesScaleRequestHandlerTest.java | 2 +- .../eventProcessor/RequestHandlersTest.java | 128 +- .../ScaleRequestHandlerTest.java | 9 +- .../StreamRequestProcessorTest.java | 24 +- .../ClusterListenerHealthContributorTest.java | 83 ++ .../EventProcessorHealthContributorTest.java | 193 +++ .../GRPCServerHealthContributorTest.java | 66 + ...RetentionServiceHealthContributorTest.java | 81 ++ ...ContainerMonitorHealthContributorTest.java | 78 ++ ...termarkingServiceHealthContibutorTest.java | 80 ++ .../grpc/impl/GRPCServerConfigImplTest.java | 2 +- .../grpc/v1/ControllerServiceImplTest.java | 74 ++ .../security/auth/StreamAuthParamsTest.java | 2 +- .../server/v1/ControllerServiceTest.java | 2 +- .../store/PravegaTablesScopeTest.java | 85 ++ .../store/PravegaTablesStoreHelperTest.java | 6 +- .../store/client/StoreClientFactoryTest.java | 2 +- .../store/index/ZkHostIndexTest.java | 2 +- .../PravegaTablesKVTMetadataStoreTest.java | 43 +- .../store/stream/BucketStoreTest.java | 2 +- .../store/stream/HostStoreTest.java | 2 +- .../PravegaTablesStreamMetadataStoreTest.java | 36 +- .../store/stream/StreamMetadataStoreTest.java | 125 +- .../controller/store/stream/StreamTest.java | 2 +- .../store/stream/StreamTestBase.java | 33 +- .../store/stream/TestStreamStoreFactory.java | 13 +- .../store/stream/ZKCounterTest.java | 2 +- .../store/stream/ZkGarbageCollectorTest.java | 2 +- .../controller/store/stream/ZkStreamTest.java | 9 +- .../KeyValueTable/TableMetadataTasksTest.java | 2 +- .../Stream/IntermittentCnxnFailureTest.java | 4 +- .../task/Stream/RequestSweeperTest.java | 2 +- .../task/Stream/StreamMetadataTasksTest.java | 130 +- .../StreamTransactionMetadataTasksTest.java | 32 +- .../Stream/ZkStreamMetadataTasksTest.java | 8 + .../task/TaskMetadataStoreTests.java | 2 +- .../io/pravega/controller/task/TaskTest.java | 2 +- .../timeout/TimeoutServiceTest.java | 2 +- docker/bookkeeper/Dockerfile | 1 + docker/bookkeeper/entrypoint.sh | 29 +- docker/pravega/Dockerfile | 31 +- docker/pravega/scripts/common.sh | 2 +- docker/pravega/scripts/init_controller.sh | 5 + docker/pravega/scripts/init_kubernetes.sh | 2 +- docker/pravega/scripts/init_segmentstore.sh | 4 + documentation/src/docs/controller-service.md | 13 +- documentation/src/docs/pravega-concepts.md | 69 +- documentation/src/docs/rest/restapis.md | 545 ++++++++- .../pravega-security-configurations.md | 41 +- .../securing-distributed-mode-cluster.md | 35 +- .../securing-standalone-mode-cluster.md | 3 + gradle.properties | 20 +- gradle/java.gradle | 12 +- gradle/protobuf.gradle | 2 + .../contracts/BadSegmentTypeException.java | 16 +- .../segmentstore/contracts/SegmentApi.java | 242 ++++ .../contracts/StreamSegmentStore.java | 207 +--- .../contracts/tables/TableSegmentConfig.java | 4 +- .../contracts/tables/TableSegmentInfo.java | 68 ++ .../contracts/tables/TableStore.java | 44 +- .../contracts/SegmentTypeTests.java | 2 + .../server/host/ServiceStarter.java | 78 +- .../handler/AbstractConnectionListener.java | 89 +- 
.../host/handler/AdminConnectionListener.java | 30 +- .../handler/AdminRequestProcessorImpl.java | 59 +- .../handler/PravegaConnectionListener.java | 58 +- .../host/handler/PravegaRequestProcessor.java | 102 +- .../SegmentContainerHealthContributor.java | 57 + ...entContainerRegistryHealthContributor.java | 52 + .../host/health/ZKHealthContributor.java | 56 + .../TLSConfigChangeEventConsumer.java | 4 +- .../security/TLSConfigChangeFileConsumer.java | 4 +- .../host/security/TLSConfigChangeHandler.java | 3 +- .../server/host/security/TLSHelper.java | 14 +- .../server/host/stat/SegmentAggregates.java | 2 +- .../host/stat/TableSegmentStatsRecorder.java | 13 + .../stat/TableSegmentStatsRecorderImpl.java | 10 + .../host/ExtendedS3IntegrationTest.java | 2 + .../host/FileSystemIntegrationTest.java | 2 + .../server/host/HDFSIntegrationTest.java | 2 + .../NonAppendExtendedS3IntegrationTest.java | 2 + .../server/host/S3IntegrationTest.java | 153 +++ .../server/host/ServiceStarterTest.java | 102 +- .../server/host/StorageLoaderTest.java | 5 +- .../handler/AdminConnectionListenerTest.java | 5 +- .../AdminRequestProcessorAuthFailedTest.java | 58 + .../AdminRequestProcessorImplTest.java | 51 + .../host/handler/AppendProcessorTest.java | 1 + .../host/handler/ConnectionTrackerTests.java | 4 +- .../PravegaConnectionListenerTest.java | 48 +- ...PravegaRequestProcessorAuthFailedTest.java | 2 +- .../handler/PravegaRequestProcessorTest.java | 257 ++-- ...SegmentContainerHealthContributorTest.java | 68 ++ ...ontainerRegistryHealthContributorTest.java | 65 + .../server/host/load/AttributeLoadTests.java | 2 +- .../TLSConfigChangeEventConsumerTests.java | 10 +- .../TLSConfigChangeFileConsumerTests.java | 10 +- .../server/host/security/TLSHelperTests.java | 14 +- .../host/stat/AutoScaleProcessorTest.java | 3 +- .../stat/TableSegmentStatsRecorderTest.java | 5 + .../segmentstore/server/CacheManager.java | 38 + .../segmentstore/server/SegmentContainer.java | 4 +- .../server/SegmentContainerRegistry.java | 16 +- .../attributes/AttributeIndexConfig.java | 2 +- .../ContainerEventProcessorImpl.java | 28 +- .../server/containers/MetadataStore.java | 6 +- .../containers/ReadOnlySegmentContainer.java | 6 + .../containers/StorageEventProcessor.java | 143 +++ .../containers/StreamSegmentContainer.java | 66 +- .../server/logs/OperationProcessor.java | 24 +- .../SegmentMetadataUpdateTransaction.java | 7 +- .../operations/MergeSegmentOperation.java | 62 +- .../mocks/SynchronousStreamSegmentStore.java | 15 + .../server/reading/CacheIndexEntry.java | 2 +- .../store/SegmentContainerCollection.java | 18 +- .../server/store/ServiceBuilder.java | 9 +- .../server/store/ServiceConfig.java | 27 +- .../store/StreamSegmentContainerRegistry.java | 19 +- .../server/store/StreamSegmentService.java | 19 +- .../server/tables/ContainerKeyCache.java | 11 + .../server/tables/ContainerKeyIndex.java | 112 +- .../tables/ContainerTableExtensionImpl.java | 32 +- .../server/tables/DeltaIteratorState.java | 1 + .../FixedKeyLengthTableSegmentLayout.java | 19 + .../server/tables/HashTableSegmentLayout.java | 24 +- .../server/tables/SegmentKeyCache.java | 10 + .../server/tables/TableExtensionConfig.java | 85 +- .../server/tables/TableSegmentLayout.java | 10 + .../server/tables/TableService.java | 22 +- .../server/tables/TableWriterConnector.java | 8 + .../server/tables/WriterTableProcessor.java | 90 +- .../server/writer/SegmentAggregator.java | 3 + .../server/CacheManagerTests.java | 29 +- .../segmentstore/server/ReadResultMock.java | 4 +- 
.../server/SegmentStoreMetricsTests.java | 19 + .../segmentstore/server/TableStoreMock.java | 14 +- .../DebugStreamSegmentContainerTests.java | 4 +- .../containers/MetadataStoreTestBase.java | 2 +- .../ReadOnlySegmentContainerTests.java | 2 + .../StorageEventProcessorTests.java | 147 +++ .../StreamSegmentContainerTests.java | 269 +++- .../StreamSegmentMetadataTests.java | 7 +- .../containers/TableMetadataStoreTests.java | 5 +- .../server/logs/OperationProcessorTests.java | 47 + .../server/logs/ThrottlerTests.java | 2 +- ...ConditionalMergeSegmentOperationTests.java | 37 + .../store/ServiceBuilderConfigTests.java | 9 +- .../server/store/ServiceConfigTests.java | 1 + .../StreamSegmentContainerRegistryTests.java | 37 + .../store/StreamSegmentStoreTestBase.java | 81 ++ .../server/tables/ContainerKeyCacheTests.java | 18 +- .../server/tables/ContainerKeyIndexTests.java | 56 +- ...FixedKeyLengthTableSegmentLayoutTests.java | 2 +- .../tables/HashTableSegmentLayoutTests.java | 11 +- .../server/tables/TableContext.java | 6 + .../tables/TableEntryDeltaIteratorTests.java | 5 +- .../tables/TableExtensionConfigTests.java | 71 ++ .../tables/TableSegmentLayoutTestBase.java | 37 +- .../server/tables/TableServiceTests.java | 17 +- .../tables/WriterTableProcessorTests.java | 116 +- .../server/writer/StorageWriterTests.java | 2 +- .../impl/bookkeeper/BookKeeperLog.java | 2 +- .../impl/bookkeeper/BookKeeperLogFactory.java | 2 +- .../SequentialAsyncProcessorTests.java | 3 + .../storage/StorageFactoryInfo.java | 1 + .../segmentstore/storage/SyncStorage.java | 1 + .../chunklayer/AbstractTaskQueueManager.java | 42 + .../storage/chunklayer/BaseChunkStorage.java | 1 + .../storage/chunklayer/ChunkIterator.java | 15 +- .../storage/chunklayer/ChunkStorage.java | 18 +- .../chunklayer/ChunkStorageMetrics.java | 13 + .../chunklayer/ChunkedSegmentStorage.java | 199 +-- .../ChunkedSegmentStorageConfig.java | 9 + .../storage/chunklayer/ConcatOperation.java | 22 +- .../chunklayer/DefragmentOperation.java | 127 +- .../storage/chunklayer/GarbageCollector.java | 726 +++++++---- .../storage/chunklayer/ReadOperation.java | 1 + .../storage/chunklayer/SystemJournal.java | 290 +++-- .../storage/chunklayer/TruncateOperation.java | 43 +- .../storage/chunklayer/WriteOperation.java | 99 +- .../storage/metadata/BaseMetadataStore.java | 23 +- .../storage/metadata/ChunkMetadataStore.java | 12 + .../storage/metadata/MetadataTransaction.java | 2 + .../storage/metadata/SegmentMetadata.java | 2 +- .../storage/mocks/InMemoryChunkStorage.java | 27 + .../storage/mocks/InMemoryMetadataStore.java | 2 + .../storage/IdempotentStorageTestBase.java | 2 +- .../segmentstore/storage/StorageTestBase.java | 7 +- .../storage/chunklayer/ChunkStorageTests.java | 115 +- .../ChunkedRollingStorageTests.java | 14 +- .../ChunkedSegmentStorageConfigTests.java | 3 + .../ChunkedSegmentStorageMockTests.java | 73 +- .../ChunkedSegmentStorageTests.java | 272 ++++- .../chunklayer/GarbageCollectorTests.java | 1078 +++++++++++------ .../NoAppendSimpleStorageTests.java | 34 +- .../chunklayer/SimpleStorageTests.java | 10 +- .../SystemJournalOperationsTests.java | 438 ++++++- .../chunklayer/SystemJournalRecordsTests.java | 430 +++++++ .../chunklayer/SystemJournalTests.java | 442 +++---- .../storage/chunklayer/TestUtils.java | 56 +- .../TableBasedMetadataStoreMockTests.java | 12 +- .../TableBasedMetadataStoreTests.java | 6 + .../mocks/InMemorySimpleStorageTests.java | 2 + .../storage/mocks/InMemoryTableStore.java | 27 +- .../mocks/InMemoryTaskQueueManager.java | 59 + 
.../storage/noop/NoOpSimpleStorageTests.java | 2 + .../NoOpStorageUserDataWriteOnlyTests.java | 4 + .../java/io/pravega/auth/FakeAuthHandler.java | 4 +- .../java/io/pravega/auth/MockPrincipal.java | 26 + .../java/io/pravega/auth/TestAuthHandler.java | 4 +- .../io/pravega/common/cluster/Cluster.java | 6 + .../common/cluster/zkImpl/ClusterZKImpl.java | 12 + .../shared/controller/event/AbortEvent.java | 1 + .../controller/event/AutoScaleEvent.java | 1 + .../shared/controller/event/CommitEvent.java | 1 + .../event/CreateReaderGroupEvent.java | 2 +- .../event/DeleteReaderGroupEvent.java | 1 + .../controller/event/DeleteStreamEvent.java | 1 + .../controller/event/RGStreamCutRecord.java | 11 +- .../shared/controller/event/ScaleOpEvent.java | 1 + .../controller/event/SealStreamEvent.java | 1 + .../controller/event/TruncateStreamEvent.java | 1 + .../event/UpdateReaderGroupEvent.java | 1 + .../controller/event/UpdateStreamEvent.java | 1 + .../event/kvtable/CreateTableEvent.java | 11 + .../event/kvtable/DeleteTableEvent.java | 1 + .../src/main/proto/Controller.proto | 5 + .../src/main/swagger/Controller.yaml | 40 +- .../event/ControllerEventSerializerTests.java | 2 +- .../tracing/RPCTracingHelpersTest.java | 4 +- .../health/bindings/resources/HealthImpl.java | 4 - .../shared/health/bindings/HealthTests.java | 28 +- .../shared/health/HealthContributor.java | 1 + .../shared/health/HealthServiceManager.java | 33 +- .../impl/AbstractHealthContributor.java | 19 +- .../health/impl/HealthServiceUpdaterImpl.java | 3 +- .../shared/health/HealthManagerTests.java | 10 +- .../health/HealthServiceUpdaterTests.java | 8 +- .../shared/health/TestHealthContributors.java | 3 + .../java/io/pravega/shared/MetricsNames.java | 19 +- .../shared/metrics/StatsLoggerImpl.java | 4 +- .../java/io/pravega/shared/NameUtils.java | 13 +- .../protocol/netty/AdminRequestProcessor.java | 3 +- .../netty/DelegatingRequestProcessor.java | 11 +- .../protocol/netty/FailingReplyProcessor.java | 14 +- .../netty/FailingRequestProcessor.java | 15 +- .../shared/protocol/netty/ReplyProcessor.java | 4 + .../protocol/netty/RequestProcessor.java | 8 +- .../protocol/netty/WireCommandType.java | 8 +- .../shared/protocol/netty/WireCommands.java | 268 ++-- .../shared/StreamSegmentNameUtilsTests.java | 2 +- .../netty/DelegatingRequestProcessorTest.java | 8 +- .../netty/FailingReplyProcessorTest.java | 5 +- .../netty/FailingRequestProcessorTest.java | 10 +- .../protocol/netty/WireCommandsTest.java | 91 +- .../shared/watermarks/WatermarksTest.java | 2 +- .../io/pravega/shared/rest/RESTServer.java | 3 +- .../pravega/shared/rest/RESTServerConfig.java | 6 + .../rest/impl/RESTServerConfigImpl.java | 21 +- .../rest/impl/RESTServerConfigImplTests.java | 10 +- .../shared/rest/impl/RESTServerTest.java | 4 +- .../rest/security/PravegaAuthManagerTest.java | 2 +- .../crypto/StrongPasswordProcessor.java | 2 +- .../pravega/local/InProcPravegaCluster.java | 20 +- .../pravega/local/LocalPravegaEmulator.java | 7 +- .../io/pravega/local/SingleNodeConfig.java | 9 + .../AuthEnabledInProcPravegaClusterTest.java | 2 +- .../local/InProcPravegaClusterTest.java | 2 +- .../local/PravegaEmulatorResource.java | 24 +- .../local/SecurePravegaClusterTest.java | 2 +- .../TlsEnabledInProcPravegaClusterTest.java | 2 +- .../local/TlsProtocolVersion12Test.java | 33 + .../local/TlsProtocolVersion13Test.java | 33 + .../test/integration/demo/ClusterWrapper.java | 13 +- .../integration/demo/ControllerWrapper.java | 7 +- .../demo/EndToEndAutoScaleDownTest.java | 3 +- 
.../demo/EndToEndAutoScaleUpTest.java | 2 +- .../demo/EndToEndAutoScaleUpWithTxnTest.java | 3 +- .../demo/EndToEndTransactionTest.java | 3 +- .../test/integration/selftest/Reporter.java | 2 +- .../test/integration/selftest/TestState.java | 2 +- ...InProcessListenerWithRealStoreAdapter.java | 1 + .../adapters/InProcessMockClientAdapter.java | 25 +- .../adapters/OutOfProcessAdapter.java | 2 + .../test/integration/utils/SetupUtils.java | 92 +- .../pravega/test/integration/AppendTest.java | 6 +- .../test/integration/BatchClientAuthTest.java | 1 + .../test/integration/BatchClientTest.java | 2 +- .../test/integration/ByteStreamTest.java | 8 +- .../test/integration/ClusterWrapperTest.java | 2 + .../integration/ControllerRestApiTest.java | 49 +- .../test/integration/KeyValueTableTest.java | 3 +- .../pravega/test/integration/MetricsTest.java | 5 +- .../ReadFromDeletedStreamTest.java | 3 +- .../ReaderGroupStreamCutUpdateTest.java | 2 +- .../RestoreBackUpDataRecoveryTest.java | 4 +- .../test/integration/StreamMetricsTest.java | 3 +- .../test/integration/WatermarkingTest.java | 1 + .../server/ControllerServiceTest.java | 2 +- .../controller/server/EventProcessorTest.java | 1 + .../endtoendtest/EndToEndStatsTest.java | 3 +- .../EndToEndTransactionOrderTest.java | 3 +- .../endtoendtest/EndToEndTruncationTest.java | 28 + .../endtoendtest/EndToEndTxnWithTest.java | 4 +- .../endtoendtest/EndToEndUpdateTest.java | 21 +- test/system/kubernetes/fluentBitSetup.sh | 29 +- test/system/kubernetes/setupTestPod.sh | 11 +- .../test/system/SingleJUnitTestRunner.java | 2 +- .../framework/DockerBasedTestExecutor.java | 2 +- .../framework/TestFrameworkException.java | 4 +- .../framework/kubernetes/K8sClient.java | 48 +- .../services/docker/DockerBasedService.java | 2 +- .../PravegaControllerDockerService.java | 2 +- .../services/kubernetes/AbstractService.java | 47 +- .../kubernetes/BookkeeperK8sService.java | 2 +- .../kubernetes/K8SequentialExecutor.java | 20 +- .../PravegaControllerK8sService.java | 2 +- .../PravegaSegmentStoreK8sService.java | 2 +- .../kubernetes/ZookeeperK8sService.java | 14 + .../marathon/PravegaControllerService.java | 2 +- .../test/system/DynamicRestApiTest.java | 6 +- ...ubscriberUpdateRetentionStreamCutTest.java | 210 ++++ .../pravega/test/system/StreamCutsTest.java | 9 +- .../StreamsAndScopesManagementTest.java | 76 +- .../src/test/resources/pravega.properties | 8 +- .../resources/pravega_withAuth.properties | 8 +- .../test/resources/pravega_withTLS.properties | 8 +- .../pravega/test/common/AssertExtensions.java | 2 +- .../test/common/SecurityConfigDefaults.java | 2 + 622 files changed, 18023 insertions(+), 4723 deletions(-) create mode 100644 bindings/src/main/java/io/pravega/storage/s3/S3ChunkStorage.java create mode 100644 bindings/src/main/java/io/pravega/storage/s3/S3SimpleStorageFactory.java create mode 100644 bindings/src/main/java/io/pravega/storage/s3/S3StorageConfig.java create mode 100644 bindings/src/main/java/io/pravega/storage/s3/S3StorageFactoryCreator.java create mode 100644 bindings/src/test/java/io/pravega/storage/s3/S3ClientMock.java create mode 100644 bindings/src/test/java/io/pravega/storage/s3/S3Mock.java create mode 100644 bindings/src/test/java/io/pravega/storage/s3/S3SimpleStorageTests.java create mode 100644 bindings/src/test/java/io/pravega/storage/s3/S3StorageConfigTest.java create mode 100644 bindings/src/test/java/io/pravega/storage/s3/S3TestContext.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ContainerCommand.java create mode 
100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/FlushToStorageCommand.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentEntryCommand.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentInfoCommand.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ListTableSegmentKeysCommand.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ModifyTableSegmentEntry.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/PutTableSegmentEntryCommand.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/SetSerializerCommand.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/TableSegmentCommand.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/serializers/AbstractSerializer.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerKeySerializer.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializer.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsKeySerializer.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsMetadataSerializer.java create mode 100644 cli/admin/src/main/java/io/pravega/cli/admin/utils/AdminSegmentHelper.java rename cli/admin/src/main/java/io/pravega/cli/admin/utils/{CLIControllerConfig.java => CLIConfig.java} (67%) create mode 100644 cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/AbstractSegmentStoreCommandsTest.java delete mode 100644 cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommandsTest.java create mode 100644 cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerKeySerializerTest.java create mode 100644 cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializerTest.java create mode 100644 cli/admin/src/test/java/io/pravega/cli/admin/serializers/SerializerTest.java create mode 100644 cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsKeySerializerTest.java create mode 100644 cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsMetadataSerializerTest.java create mode 100644 common/src/main/java/io/pravega/common/concurrent/Scheduled.java create mode 100644 common/src/main/java/io/pravega/common/concurrent/ScheduledQueue.java create mode 100644 common/src/main/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorService.java create mode 100644 common/src/main/java/io/pravega/common/security/TLSProtocolVersion.java create mode 100644 common/src/test/java/io/pravega/common/concurrent/ScheduledQueueTest.java create mode 100644 common/src/test/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorServiceTest.java create mode 100644 common/src/test/java/io/pravega/common/security/TLSProtocolVersionTest.java delete mode 100644 controller/src/main/java/io/pravega/controller/server/SegmentStoreConnectionManager.java create mode 100644 controller/src/main/java/io/pravega/controller/server/health/ClusterListenerHealthContributor.java create mode 100644 controller/src/main/java/io/pravega/controller/server/health/EventProcessorHealthContributor.java create mode 100644 controller/src/main/java/io/pravega/controller/server/health/GRPCServerHealthContributor.java create mode 100644 
controller/src/main/java/io/pravega/controller/server/health/RetentionServiceHealthContributor.java create mode 100644 controller/src/main/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributor.java create mode 100644 controller/src/main/java/io/pravega/controller/server/health/WatermarkingServiceHealthContributor.java create mode 100644 controller/src/main/java/io/pravega/controller/server/rest/generated/model/TagsList.java create mode 100644 controller/src/main/java/io/pravega/controller/store/stream/TxnWriterMark.java delete mode 100644 controller/src/test/java/io/pravega/controller/server/SegmentStoreConnectionManagerTest.java create mode 100644 controller/src/test/java/io/pravega/controller/server/health/ClusterListenerHealthContributorTest.java create mode 100644 controller/src/test/java/io/pravega/controller/server/health/EventProcessorHealthContributorTest.java create mode 100644 controller/src/test/java/io/pravega/controller/server/health/GRPCServerHealthContributorTest.java create mode 100644 controller/src/test/java/io/pravega/controller/server/health/RetentionServiceHealthContributorTest.java create mode 100644 controller/src/test/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributorTest.java create mode 100644 controller/src/test/java/io/pravega/controller/server/health/WatermarkingServiceHealthContibutorTest.java create mode 100644 controller/src/test/java/io/pravega/controller/store/PravegaTablesScopeTest.java create mode 100644 segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/SegmentApi.java create mode 100644 segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentInfo.java create mode 100644 segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributor.java create mode 100644 segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributor.java create mode 100644 segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/ZKHealthContributor.java create mode 100644 segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/S3IntegrationTest.java create mode 100644 segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorAuthFailedTest.java create mode 100644 segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImplTest.java create mode 100644 segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributorTest.java create mode 100644 segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributorTest.java create mode 100644 segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StorageEventProcessor.java create mode 100644 segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StorageEventProcessorTests.java create mode 100644 segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/operations/ConditionalMergeSegmentOperationTests.java create mode 100644 segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableExtensionConfigTests.java create mode 100644 segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/AbstractTaskQueueManager.java create mode 100644 
segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalRecordsTests.java create mode 100644 segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTaskQueueManager.java create mode 100644 shared/authplugin/src/test/java/io/pravega/auth/MockPrincipal.java create mode 100644 standalone/src/test/java/io/pravega/local/TlsProtocolVersion12Test.java create mode 100644 standalone/src/test/java/io/pravega/local/TlsProtocolVersion13Test.java create mode 100644 test/system/src/test/java/io/pravega/test/system/SingleSubscriberUpdateRetentionStreamCutTest.java diff --git a/README.md b/README.md index 6d7164b6dc1..4b330e55789 100644 --- a/README.md +++ b/README.md @@ -74,7 +74,7 @@ The latest pravega releases can be found on the [Github Release](https://github. ## Snapshot artifacts -All snapshot artifacts from `master` and `release` branches are available in GitHUB Packages Registry +All snapshot artifacts from `master` and `release` branches are available in GitHub Packages Registry Add the following to your repositories list and import dependencies as usual. @@ -83,15 +83,19 @@ maven { url "https://maven.pkg.github.com/pravega/pravega" credentials { username = "pravega-public" - password = "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" + password = "\u0067\u0068\u0070\u005F\u0048\u0034\u0046\u0079\u0047\u005A\u0031\u006B\u0056\u0030\u0051\u0070\u006B\u0079\u0058\u006D\u0035\u0063\u0034\u0055\u0033\u006E\u0032\u0065\u0078\u0039\u0032\u0046\u006E\u0071\u0033\u0053\u0046\u0076\u005A\u0049" } } ``` -Note GitHub Packages requires authentication to download packages thus credentials above are required +Note that GitHub Packages requires authentication to download packages, thus the credentials above are required. Use the provided password as-is; please do not decode it. + +If you need a dedicated token to use in your repository (and GitHub Actions), please reach out to us. + +As an alternative option, you can use JitPack (https://jitpack.io/#pravega/pravega) to get pre-release artifacts. ## Quick Start -Read [Getting Started](documentation/src/docs/getting-started/getting-started.md) page for more information, and also visit [sample-apps](https://github.com/pravega/pravega-samples) repo for more applications. +Read the [Getting Started](documentation/src/docs/getting-started/quick-start.md) page for more information, and also visit the [sample-apps](https://github.com/pravega/pravega-samples) repo for more applications.
## Running Pravega diff --git a/bindings/src/main/java/io/pravega/storage/extendeds3/ExtendedS3StorageFactory.java b/bindings/src/main/java/io/pravega/storage/extendeds3/ExtendedS3StorageFactory.java index 178ad60fd7c..96e6d5e9a0e 100644 --- a/bindings/src/main/java/io/pravega/storage/extendeds3/ExtendedS3StorageFactory.java +++ b/bindings/src/main/java/io/pravega/storage/extendeds3/ExtendedS3StorageFactory.java @@ -38,6 +38,7 @@ public class ExtendedS3StorageFactory implements StorageFactory { @Getter private final ExecutorService executor; + @Override public Storage createStorageAdapter() { return new AsyncStorageWrapper(new RollingStorage(createS3Storage()), this.executor); } diff --git a/bindings/src/main/java/io/pravega/storage/filesystem/FileSystemStorageFactory.java b/bindings/src/main/java/io/pravega/storage/filesystem/FileSystemStorageFactory.java index 8464a42bba7..0aad6690d9d 100644 --- a/bindings/src/main/java/io/pravega/storage/filesystem/FileSystemStorageFactory.java +++ b/bindings/src/main/java/io/pravega/storage/filesystem/FileSystemStorageFactory.java @@ -38,6 +38,7 @@ public class FileSystemStorageFactory implements StorageFactory { @Getter private final ExecutorService executor; + @Override public Storage createStorageAdapter() { FileSystemStorage s = new FileSystemStorage(this.config); return new AsyncStorageWrapper(new RollingStorage(s), this.executor); diff --git a/bindings/src/main/java/io/pravega/storage/hdfs/HDFSChunkStorage.java b/bindings/src/main/java/io/pravega/storage/hdfs/HDFSChunkStorage.java index 665e174aef0..0e6d20a05c8 100644 --- a/bindings/src/main/java/io/pravega/storage/hdfs/HDFSChunkStorage.java +++ b/bindings/src/main/java/io/pravega/storage/hdfs/HDFSChunkStorage.java @@ -59,7 +59,7 @@ */ @Slf4j -class HDFSChunkStorage extends BaseChunkStorage { +public class HDFSChunkStorage extends BaseChunkStorage { private static final FsPermission READWRITE_PERMISSION = new FsPermission(FsAction.READ_WRITE, FsAction.NONE, FsAction.NONE); private static final FsPermission READONLY_PERMISSION = new FsPermission(FsAction.READ, FsAction.READ, FsAction.READ); diff --git a/bindings/src/main/java/io/pravega/storage/hdfs/HDFSExceptionHelpers.java b/bindings/src/main/java/io/pravega/storage/hdfs/HDFSExceptionHelpers.java index 11bcbc8842a..53bc8ca7e1d 100644 --- a/bindings/src/main/java/io/pravega/storage/hdfs/HDFSExceptionHelpers.java +++ b/bindings/src/main/java/io/pravega/storage/hdfs/HDFSExceptionHelpers.java @@ -39,7 +39,7 @@ final class HDFSExceptionHelpers { * @param e The exception to be translated. * @return The exception to be thrown. 
*/ - static StreamSegmentException convertException(String segmentName, Throwable e) { + static StreamSegmentException convertException(String segmentName, Throwable e) { if (e instanceof RemoteException) { e = ((RemoteException) e).unwrapRemoteException(); } diff --git a/bindings/src/main/java/io/pravega/storage/hdfs/HDFSStorageFactory.java b/bindings/src/main/java/io/pravega/storage/hdfs/HDFSStorageFactory.java index 196d2d978c1..1dc2d01ebee 100644 --- a/bindings/src/main/java/io/pravega/storage/hdfs/HDFSStorageFactory.java +++ b/bindings/src/main/java/io/pravega/storage/hdfs/HDFSStorageFactory.java @@ -38,6 +38,7 @@ public class HDFSStorageFactory implements StorageFactory { @Getter private final Executor executor; + @Override public Storage createStorageAdapter() { HDFSStorage s = new HDFSStorage(this.config); return new AsyncStorageWrapper(new RollingStorage(s), this.executor); diff --git a/bindings/src/main/java/io/pravega/storage/s3/S3ChunkStorage.java b/bindings/src/main/java/io/pravega/storage/s3/S3ChunkStorage.java new file mode 100644 index 00000000000..91ac570b127 --- /dev/null +++ b/bindings/src/main/java/io/pravega/storage/s3/S3ChunkStorage.java @@ -0,0 +1,394 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.storage.s3; + +import io.pravega.segmentstore.storage.chunklayer.BaseChunkStorage; +import io.pravega.segmentstore.storage.chunklayer.ChunkAlreadyExistsException; +import io.pravega.segmentstore.storage.chunklayer.ChunkHandle; +import io.pravega.segmentstore.storage.chunklayer.ChunkInfo; +import io.pravega.segmentstore.storage.chunklayer.ChunkNotFoundException; +import io.pravega.segmentstore.storage.chunklayer.ChunkStorage; +import io.pravega.segmentstore.storage.chunklayer.ChunkStorageException; +import io.pravega.segmentstore.storage.chunklayer.ConcatArgument; +import software.amazon.awssdk.core.ResponseBytes; +import software.amazon.awssdk.core.sync.RequestBody; +import software.amazon.awssdk.services.s3.S3Client; +import com.google.common.base.Preconditions; +import com.google.common.base.Strings; +import io.pravega.common.io.StreamHelpers; +import lombok.SneakyThrows; +import lombok.extern.slf4j.Slf4j; +import lombok.val; +import org.apache.http.HttpStatus; +import software.amazon.awssdk.services.s3.model.AbortMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.CompletedMultipartUpload; +import software.amazon.awssdk.services.s3.model.CompletedPart; +import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.DeleteObjectRequest; +import software.amazon.awssdk.services.s3.model.GetObjectRequest; +import software.amazon.awssdk.services.s3.model.GetObjectResponse; +import software.amazon.awssdk.services.s3.model.HeadObjectRequest; +import software.amazon.awssdk.services.s3.model.Permission; +import software.amazon.awssdk.services.s3.model.PutObjectRequest; +import software.amazon.awssdk.services.s3.model.S3Exception; +import software.amazon.awssdk.services.s3.model.UploadPartCopyRequest; + +import java.io.InputStream; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.Executor; +import java.util.concurrent.atomic.AtomicBoolean; + +/** + * {@link ChunkStorage} for S3-based storage. + * + * Each chunk is represented as a single Object on the underlying storage. + * + * This implementation works under the assumption that each object is only created once and never modified. + * The concat operation is implemented as a multi-part copy.
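+ * For example, {@code doConcat} concatenates chunks into the first chunk's object by creating a multipart upload on the target key, issuing one {@code UploadPartCopy} per non-empty source chunk, and then completing the upload; if anything fails before completion, the upload is aborted.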
+ */ +@Slf4j +public class S3ChunkStorage extends BaseChunkStorage { + + public static final String NO_SUCH_KEY = "NoSuchKey"; + public static final String PRECONDITION_FAILED = "PreconditionFailed"; + public static final String INVALID_RANGE = "InvalidRange"; + public static final String INVALID_ARGUMENT = "InvalidArgument"; + public static final String METHOD_NOT_ALLOWED = "MethodNotAllowed"; + public static final String ACCESS_DENIED = "AccessDenied"; + public static final String INVALID_PART = "InvalidPart"; + + //region members + private final S3StorageConfig config; + private final S3Client client; + private final boolean shouldCloseClient; + private final AtomicBoolean closed; + + //endregion + + //region constructor + public S3ChunkStorage(S3Client client, S3StorageConfig config, Executor executor, boolean shouldCloseClient) { + super(executor); + this.config = Preconditions.checkNotNull(config, "config"); + this.client = Preconditions.checkNotNull(client, "client"); + this.closed = new AtomicBoolean(false); + this.shouldCloseClient = shouldCloseClient; + } + //endregion + + //region capabilities + + @Override + public boolean supportsConcat() { + return true; + } + + @Override + public boolean supportsAppend() { + return false; + } + + @Override + public boolean supportsTruncation() { + return false; + } + + //endregion + + //region implementation + + @Override + protected ChunkHandle doOpenRead(String chunkName) throws ChunkStorageException { + if (!checkExists(chunkName)) { + throw new ChunkNotFoundException(chunkName, "doOpenRead"); + } + return ChunkHandle.readHandle(chunkName); + } + + @Override + protected ChunkHandle doOpenWrite(String chunkName) throws ChunkStorageException { + if (!checkExists(chunkName)) { + throw new ChunkNotFoundException(chunkName, "doOpenWrite"); + } + + return new ChunkHandle(chunkName, false); + } + + @Override + protected int doRead(ChunkHandle handle, long fromOffset, int length, byte[] buffer, int bufferOffset) throws ChunkStorageException { + try { + GetObjectRequest objectRequest = GetObjectRequest + .builder() + .key(getObjectPath(handle.getChunkName())) + .range(getRangeWithLength(fromOffset, length)) + .bucket(config.getBucket()) + .build(); + + ResponseBytes<GetObjectResponse> objectBytes = client.getObjectAsBytes(objectRequest); + try (val inputStream = objectBytes.asInputStream()) { + return StreamHelpers.readAll(inputStream, buffer, bufferOffset, length); + } + } catch (Exception e) { + throw convertException(handle.getChunkName(), "doRead", e); + } + } + + @Override + protected int doWrite(ChunkHandle handle, long offset, int length, InputStream data) { + throw new UnsupportedOperationException("S3ChunkStorage does not support writing to already existing objects."); + } + + @Override + public int doConcat(ConcatArgument[] chunks) throws ChunkStorageException { + int totalBytesConcatenated = 0; + String targetPath = getObjectPath(chunks[0].getName()); + String uploadId = null; + boolean isCompleted = false; + try { + int partNumber = 1; + + val response = client.createMultipartUpload(CreateMultipartUploadRequest.builder() + .bucket(config.getBucket()) + .key(targetPath) + .build()); + uploadId = response.uploadId(); + + // check whether the target exists + if (!checkExists(chunks[0].getName())) { + throw new ChunkNotFoundException(chunks[0].getName(), "doConcat - Target segment does not exist"); + } + CompletedPart[] completedParts = new CompletedPart[chunks.length]; + + //Copy the parts + for (int i = 0; i < chunks.length; i++) { + if (0 != 
chunks[i].getLength()) { + val sourceHandle = chunks[i]; + long objectSize = client.headObject(HeadObjectRequest.builder() + .bucket(this.config.getBucket()) + .key(getObjectPath(sourceHandle.getName())) + .build()).contentLength(); + + Preconditions.checkState(objectSize >= chunks[i].getLength(), + "Length of object should be equal or greater. Length on LTS=%s provided=%s", + objectSize, chunks[i].getLength()); + + UploadPartCopyRequest copyRequest = UploadPartCopyRequest.builder() + .destinationBucket(config.getBucket()) + .destinationKey(targetPath) + .sourceBucket(config.getBucket()) + .sourceKey(getObjectPath(sourceHandle.getName())) + .uploadId(uploadId) + .partNumber(partNumber) + .copySourceRange(getRangeWithLength(0, chunks[i].getLength())) + .build(); + val copyResult = client.uploadPartCopy(copyRequest); + val eTag = copyResult.copyPartResult().eTag(); + + completedParts[i] = CompletedPart.builder() + .partNumber(partNumber) + .eTag(eTag) + .build(); + + partNumber++; + totalBytesConcatenated += chunks[i].getLength(); + } + } + + //Close the upload + CompletedMultipartUpload completedRequest = CompletedMultipartUpload.builder() + .parts(completedParts) + .build(); + client.completeMultipartUpload(CompleteMultipartUploadRequest.builder() + .bucket(config.getBucket()) + .key(targetPath) + .multipartUpload(completedRequest) + .uploadId(uploadId) + .build()); + isCompleted = true; + } catch (RuntimeException e) { + // Make spotbugs happy. Wants us to catch RuntimeException in a separate catch block. + // Error message is REC_CATCH_EXCEPTION: Exception is caught when Exception is not thrown + throw convertException(chunks[0].getName(), "doConcat", e); + } catch (Exception e) { + throw convertException(chunks[0].getName(), "doConcat", e); + } finally { + if (!isCompleted && null != uploadId) { + try { + client.abortMultipartUpload(AbortMultipartUploadRequest.builder() + .bucket(config.getBucket()) + .key(targetPath) + .uploadId(uploadId) + .build()); + } catch (Exception e) { + throw convertException(chunks[0].getName(), "doConcat", e); + } + } + } + return totalBytesConcatenated; + } + + @Override + protected void doSetReadOnly(ChunkHandle handle, boolean isReadOnly) throws ChunkStorageException { + try { + setPermission(handle, isReadOnly ? 
Permission.READ : Permission.FULL_CONTROL); + } catch (Exception e) { + throw convertException(handle.getChunkName(), "doSetReadOnly", e); + } + } + + private void setPermission(ChunkHandle handle, Permission permission) { + throw new UnsupportedOperationException("S3ChunkStorage does not support ACL"); + } + + @Override + protected ChunkInfo doGetInfo(String chunkName) throws ChunkStorageException { + try { + val objectPath = getObjectPath(chunkName); + val response = client.headObject(HeadObjectRequest.builder() + .bucket(this.config.getBucket()) + .key(objectPath) + .build()); + + return ChunkInfo.builder() + .name(chunkName) + .length(response.contentLength()) + .build(); + } catch (Exception e) { + throw convertException(chunkName, "doGetInfo", e); + } + } + + @Override + protected ChunkHandle doCreate(String chunkName) { + throw new UnsupportedOperationException("S3ChunkStorage does not support creating object without content."); + } + + @Override + protected ChunkHandle doCreateWithContent(String chunkName, int length, InputStream data) throws ChunkStorageException { + try { + val objectPath = getObjectPath(chunkName); + + Map<String, String> metadata = new HashMap<>(); + metadata.put("Content-Type", "application/octet-stream"); + metadata.put("Content-Length", Integer.toString(length)); + val request = PutObjectRequest.builder() + .bucket(this.config.getBucket()) + .key(objectPath) + .metadata(metadata) + .build(); + client.putObject(request, RequestBody.fromInputStream(data, length)); + + return ChunkHandle.writeHandle(chunkName); + } catch (Exception e) { + throw convertException(chunkName, "doCreateWithContent", e); + } + } + + @Override + protected boolean checkExists(String chunkName) throws ChunkStorageException { + try { + val objectPath = getObjectPath(chunkName); + val response = client.headObject(HeadObjectRequest.builder() + .bucket(this.config.getBucket()) + .key(objectPath) + .build()); + return true; + } catch (S3Exception e) { + if (e.awsErrorDetails().errorCode().equals(NO_SUCH_KEY)) { + return false; + } else { + throw convertException(chunkName, "checkExists", e); + } + } + } + + @Override + protected void doDelete(ChunkHandle handle) throws ChunkStorageException { + try { + // check whether the chunk exists + if (!checkExists(handle.getChunkName())) { + throw new ChunkNotFoundException(handle.getChunkName(), "doDelete"); + } + DeleteObjectRequest deleteRequest = DeleteObjectRequest.builder() + .bucket(this.config.getBucket()) + .key(getObjectPath(handle.getChunkName())) + .build(); + client.deleteObject(deleteRequest); + } catch (Exception e) { + throw convertException(handle.getChunkName(), "doDelete", e); + } + } + + @Override + @SneakyThrows + public void close() { + if (shouldCloseClient && !this.closed.getAndSet(true)) { + this.client.close(); + } + super.close(); + } + + /** + * Create formatted string for range. 
+ */ + private String getRangeWithLength(long fromOffset, long length) { + return String.format("bytes=%d-%d", fromOffset, fromOffset + length - 1); + } + + private ChunkStorageException convertException(String chunkName, String message, Exception e) { + ChunkStorageException retValue = null; + if (e instanceof ChunkStorageException) { + return (ChunkStorageException) e; + } + if (e instanceof S3Exception) { + S3Exception s3Exception = (S3Exception) e; + String errorCode = Strings.nullToEmpty(s3Exception.awsErrorDetails().errorCode()); + + if (errorCode.equals(NO_SUCH_KEY)) { + retValue = new ChunkNotFoundException(chunkName, message, e); + } + + if (errorCode.equals(PRECONDITION_FAILED)) { + retValue = new ChunkAlreadyExistsException(chunkName, message, e); + } + + if (errorCode.equals(INVALID_RANGE) + || errorCode.equals(INVALID_ARGUMENT) + || errorCode.equals(METHOD_NOT_ALLOWED) + || s3Exception.awsErrorDetails().sdkHttpResponse().statusCode() == HttpStatus.SC_REQUESTED_RANGE_NOT_SATISFIABLE) { + throw new IllegalArgumentException(chunkName, e); + } + + if (errorCode.equals(ACCESS_DENIED)) { + retValue = new ChunkStorageException(chunkName, String.format("Access denied for chunk %s - %s.", chunkName, message), e); + } + } + + if (retValue == null) { + retValue = new ChunkStorageException(chunkName, message, e); + } + + return retValue; + } + + private String getObjectPath(String objectName) { + return config.getPrefix() + objectName; + } + + //endregion + +} diff --git a/bindings/src/main/java/io/pravega/storage/s3/S3SimpleStorageFactory.java b/bindings/src/main/java/io/pravega/storage/s3/S3SimpleStorageFactory.java new file mode 100644 index 00000000000..212eedcf8c7 --- /dev/null +++ b/bindings/src/main/java/io/pravega/storage/s3/S3SimpleStorageFactory.java @@ -0,0 +1,85 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.storage.s3; + +import software.amazon.awssdk.auth.credentials.AwsBasicCredentials; +import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider; +import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider; +import software.amazon.awssdk.regions.Region; +import software.amazon.awssdk.services.s3.S3Client; +import software.amazon.awssdk.services.s3.S3ClientBuilder; +import io.pravega.segmentstore.storage.SimpleStorageFactory; +import io.pravega.segmentstore.storage.Storage; +import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorage; +import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorageConfig; +import io.pravega.segmentstore.storage.metadata.ChunkMetadataStore; +import lombok.Getter; +import lombok.NonNull; +import lombok.RequiredArgsConstructor; + +import java.net.URI; +import java.util.concurrent.ScheduledExecutorService; + +/** + * Factory for S3 {@link Storage} implemented using {@link ChunkedSegmentStorage} and {@link S3ChunkStorage}. 
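+ * Each call to {@code createStorageAdapter(containerId, metadataStore)} builds a fresh {@link S3Client} from the {@link S3StorageConfig} (static credentials, region and an optional endpoint override) and wraps an {@link S3ChunkStorage} in a {@link ChunkedSegmentStorage} for that container.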
+ */ +@RequiredArgsConstructor +public class S3SimpleStorageFactory implements SimpleStorageFactory { + @NonNull + @Getter + private final ChunkedSegmentStorageConfig chunkedSegmentStorageConfig; + + @NonNull + private final S3StorageConfig config; + + @NonNull + @Getter + private final ScheduledExecutorService executor; + + @Override + public Storage createStorageAdapter(int containerId, ChunkMetadataStore metadataStore) { + S3Client s3Client = createS3Client(this.config); + ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(containerId, + new S3ChunkStorage(s3Client, this.config, this.executor, true), + metadataStore, + this.executor, + this.chunkedSegmentStorageConfig); + return chunkedSegmentStorage; + } + + /** + * Creates a new instance of a Storage adapter. + */ + @Override + public Storage createStorageAdapter() { + throw new UnsupportedOperationException("SimpleStorageFactory requires ChunkMetadataStore"); + } + + static S3Client createS3Client(S3StorageConfig config) { + S3ClientBuilder builder = S3Client.builder() + .credentialsProvider(getCredentialsProvider(config)) + .region(Region.of(config.getRegion())); + if (config.isShouldOverrideUri()) { + builder = builder.endpointOverride(URI.create(config.getS3Config())); + } + return builder.build(); + } + + private static AwsCredentialsProvider getCredentialsProvider(S3StorageConfig config) { + AwsBasicCredentials credentials = AwsBasicCredentials.create(config.getAccessKey(), config.getSecretKey()); + return StaticCredentialsProvider.create(credentials); + } +} diff --git a/bindings/src/main/java/io/pravega/storage/s3/S3StorageConfig.java b/bindings/src/main/java/io/pravega/storage/s3/S3StorageConfig.java new file mode 100644 index 00000000000..147d3bc6995 --- /dev/null +++ b/bindings/src/main/java/io/pravega/storage/s3/S3StorageConfig.java @@ -0,0 +1,128 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.storage.s3; + +import com.google.common.base.Preconditions; +import io.pravega.common.util.ConfigBuilder; +import io.pravega.common.util.ConfigurationException; +import io.pravega.common.util.Property; +import io.pravega.common.util.TypedProperties; +import lombok.Getter; +import lombok.extern.slf4j.Slf4j; +import software.amazon.awssdk.regions.Region; + +/** + * Configuration for the S3 Storage component.
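+ * Properties are read under the {@code s3} component code, so the fully-qualified property names are, for example, {@code s3.bucket}, {@code s3.prefix} and {@code s3.connect.config.uri}.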
+ */ +@Slf4j +public class S3StorageConfig { + //region Config Names + public static final Property<Boolean> OVERRIDE_CONFIGURI = Property.named("connect.config.uri.override", false); + public static final Property<String> CONFIGURI = Property.named("connect.config.uri", "", "configUri"); + public static final Property<String> ACCESS_KEY = Property.named("connect.config.access.key", ""); + public static final Property<String> SECRET_KEY = Property.named("connect.config.secret.key", ""); + public static final Property<String> REGION = Property.named("connect.config.region", Region.US_EAST_1.toString()); + public static final Property<String> BUCKET = Property.named("bucket", ""); + public static final Property<String> PREFIX = Property.named("prefix", "/"); + public static final Property<Boolean> USENONEMATCH = Property.named("noneMatch.enable", false, "useNoneMatch"); + + private static final String COMPONENT_CODE = "s3"; + private static final String PATH_SEPARATOR = "/"; + + //endregion + + //region Members + + /** + * The complete client config URI of the S3 REST interface. + */ + @Getter + private final String s3Config; + + /** + * The S3 region to use. + */ + @Getter + private final String region; + + /** + * The S3 access key id - this is equivalent to the user. + */ + @Getter + private final String accessKey; + + /** + * The S3 secret key associated with the accessKey. + */ + @Getter + private final String secretKey; + + /** + * A unique bucket name to store objects. + */ + @Getter + private final String bucket; + + /** + * Prefix of the Pravega owned S3 path under the assigned buckets. All the objects under this path will be + * exclusively owned by Pravega. + */ + @Getter + private final String prefix; + + /** + * Whether to use the if-none-match header or not. + */ + @Getter + private final boolean useNoneMatch; + + /** + * Whether to use an endpoint other than the default. + */ + @Getter + private final boolean shouldOverrideUri; + //endregion + + //region Constructor + + /** + * Creates a new instance of the S3StorageConfig class. + * + * @param properties The TypedProperties object to read Properties from. + */ + private S3StorageConfig(TypedProperties properties) throws ConfigurationException { + this.shouldOverrideUri = properties.getBoolean(OVERRIDE_CONFIGURI); + this.s3Config = Preconditions.checkNotNull(properties.get(CONFIGURI), "configUri"); + this.region = Preconditions.checkNotNull(properties.get(REGION), "region"); + this.accessKey = Preconditions.checkNotNull(properties.get(ACCESS_KEY), "accessKey"); + this.secretKey = Preconditions.checkNotNull(properties.get(SECRET_KEY), "secretKey"); + this.bucket = Preconditions.checkNotNull(properties.get(BUCKET), "bucket"); + String givenPrefix = Preconditions.checkNotNull(properties.get(PREFIX), "prefix"); + this.prefix = givenPrefix.endsWith(PATH_SEPARATOR) ? givenPrefix : givenPrefix + PATH_SEPARATOR; + this.useNoneMatch = properties.getBoolean(USENONEMATCH); + } + + /** + * Creates a new ConfigBuilder that can be used to create instances of this class. + * + * @return A new Builder for this class. + */ + public static ConfigBuilder<S3StorageConfig> builder() { + return new ConfigBuilder<>(COMPONENT_CODE, S3StorageConfig::new); + } + + //endregion +} diff --git a/bindings/src/main/java/io/pravega/storage/s3/S3StorageFactoryCreator.java b/bindings/src/main/java/io/pravega/storage/s3/S3StorageFactoryCreator.java new file mode 100644 index 00000000000..38171bc13c8 --- /dev/null +++ b/bindings/src/main/java/io/pravega/storage/s3/S3StorageFactoryCreator.java @@ -0,0 +1,53 @@ +/** + * Copyright Pravega Authors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.storage.s3; + +import com.google.common.base.Preconditions; +import io.pravega.segmentstore.storage.ConfigSetup; +import io.pravega.segmentstore.storage.StorageFactory; +import io.pravega.segmentstore.storage.StorageFactoryCreator; +import io.pravega.segmentstore.storage.StorageFactoryInfo; +import io.pravega.segmentstore.storage.StorageLayoutType; +import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorageConfig; + +import java.util.concurrent.ScheduledExecutorService; + +public class S3StorageFactoryCreator implements StorageFactoryCreator { + @Override + public StorageFactory createFactory(StorageFactoryInfo storageFactoryInfo, ConfigSetup setup, ScheduledExecutorService executor) { + Preconditions.checkNotNull(storageFactoryInfo, "storageFactoryInfo"); + Preconditions.checkNotNull(setup, "setup"); + Preconditions.checkNotNull(executor, "executor"); + Preconditions.checkArgument(storageFactoryInfo.getName().equals("S3")); + if (storageFactoryInfo.getStorageLayoutType().equals(StorageLayoutType.CHUNKED_STORAGE)) { + return new S3SimpleStorageFactory(setup.getConfig(ChunkedSegmentStorageConfig::builder), + setup.getConfig(S3StorageConfig::builder), + executor); + } else { + throw new UnsupportedOperationException("S3StorageFactoryCreator only supports CHUNKED_STORAGE."); + } + } + + @Override + public StorageFactoryInfo[] getStorageFactories() { + return new StorageFactoryInfo[]{ + StorageFactoryInfo.builder() + .name("S3") + .storageLayoutType(StorageLayoutType.CHUNKED_STORAGE) + .build() + }; + } +} diff --git a/bindings/src/main/resources/META-INF/services/io.pravega.segmentstore.storage.StorageFactoryCreator b/bindings/src/main/resources/META-INF/services/io.pravega.segmentstore.storage.StorageFactoryCreator index 3942ab38ba4..c625253f75f 100644 --- a/bindings/src/main/resources/META-INF/services/io.pravega.segmentstore.storage.StorageFactoryCreator +++ b/bindings/src/main/resources/META-INF/services/io.pravega.segmentstore.storage.StorageFactoryCreator @@ -20,3 +20,4 @@ io.pravega.storage.filesystem.FileSystemStorageFactoryCreator io.pravega.storage.hdfs.HDFSStorageFactoryCreator io.pravega.storage.extendeds3.ExtendedS3StorageFactoryCreator +io.pravega.storage.s3.S3StorageFactoryCreator diff --git a/bindings/src/test/java/io/pravega/storage/StorageFactoryTests.java b/bindings/src/test/java/io/pravega/storage/StorageFactoryTests.java index 2db76fdad84..7b3bfe10478 100644 --- a/bindings/src/test/java/io/pravega/storage/StorageFactoryTests.java +++ b/bindings/src/test/java/io/pravega/storage/StorageFactoryTests.java @@ -25,18 +25,25 @@ import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorage; import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorageConfig; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; +import io.pravega.storage.extendeds3.ExtendedS3ChunkStorage; import io.pravega.storage.extendeds3.ExtendedS3SimpleStorageFactory; import 
io.pravega.storage.extendeds3.ExtendedS3StorageConfig; import io.pravega.storage.extendeds3.ExtendedS3StorageFactory; import io.pravega.storage.extendeds3.ExtendedS3StorageFactoryCreator; +import io.pravega.storage.filesystem.FileSystemChunkStorage; import io.pravega.storage.filesystem.FileSystemSimpleStorageFactory; import io.pravega.storage.filesystem.FileSystemStorageConfig; import io.pravega.storage.filesystem.FileSystemStorageFactory; import io.pravega.storage.filesystem.FileSystemStorageFactoryCreator; +import io.pravega.storage.hdfs.HDFSChunkStorage; import io.pravega.storage.hdfs.HDFSSimpleStorageFactory; import io.pravega.storage.hdfs.HDFSStorageConfig; import io.pravega.storage.hdfs.HDFSStorageFactory; import io.pravega.storage.hdfs.HDFSStorageFactoryCreator; +import io.pravega.storage.s3.S3ChunkStorage; +import io.pravega.storage.s3.S3SimpleStorageFactory; +import io.pravega.storage.s3.S3StorageConfig; +import io.pravega.storage.s3.S3StorageFactoryCreator; import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.ThreadPooledTestSuite; import lombok.Cleanup; @@ -83,7 +90,7 @@ public void testHDFSStorageFactoryCreator() { @Cleanup Storage storage1 = ((HDFSSimpleStorageFactory) factory1).createStorageAdapter(42, new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService())); Assert.assertTrue(storage1 instanceof ChunkedSegmentStorage); - + Assert.assertTrue(((ChunkedSegmentStorage) storage1).getChunkStorage() instanceof HDFSChunkStorage); // Legacy Storage ConfigSetup configSetup2 = mock(ConfigSetup.class); when(configSetup2.getConfig(any())).thenReturn(HDFSStorageConfig.builder().build()); @@ -96,6 +103,11 @@ public void testHDFSStorageFactoryCreator() { SyncStorage syncStorage = factory2.createSyncStorage(); Assert.assertNotNull(syncStorage); + + AssertExtensions.assertThrows( + "createStorageAdapter should throw UnsupportedOperationException.", + () -> factory1.createStorageAdapter(), + ex -> ex instanceof UnsupportedOperationException); } @Test @@ -130,6 +142,7 @@ public void testExtendedS3StorageFactoryCreator() { @Cleanup Storage storage1 = ((ExtendedS3SimpleStorageFactory) factory1).createStorageAdapter(42, new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService())); Assert.assertTrue(storage1 instanceof ChunkedSegmentStorage); + Assert.assertTrue(((ChunkedSegmentStorage) storage1).getChunkStorage() instanceof ExtendedS3ChunkStorage); // Legacy Storage ConfigSetup configSetup2 = mock(ConfigSetup.class); @@ -144,6 +157,49 @@ public void testExtendedS3StorageFactoryCreator() { @Cleanup SyncStorage syncStorage = factory2.createSyncStorage(); Assert.assertNotNull(syncStorage); + + AssertExtensions.assertThrows( + "createStorageAdapter should throw UnsupportedOperationException.", + () -> factory1.createStorageAdapter(), + ex -> ex instanceof UnsupportedOperationException); + } + + @Test + public void testS3StorageFactoryCreator() { + StorageFactoryCreator factoryCreator = new S3StorageFactoryCreator(); + val expected = new StorageFactoryInfo[]{ + StorageFactoryInfo.builder() + .name("S3") + .storageLayoutType(StorageLayoutType.CHUNKED_STORAGE) + .build() + }; + + val factoryInfoList = factoryCreator.getStorageFactories(); + Assert.assertEquals(1, factoryInfoList.length); + Assert.assertArrayEquals(expected, factoryInfoList); + + // Simple Storage + ConfigSetup configSetup1 = mock(ConfigSetup.class); + val config = S3StorageConfig.builder() + .with(S3StorageConfig.CONFIGURI, "http://127.0.0.1") + 
.with(S3StorageConfig.BUCKET, "bucket") + .with(S3StorageConfig.PREFIX, "samplePrefix") + .with(S3StorageConfig.ACCESS_KEY, "user") + .with(S3StorageConfig.SECRET_KEY, "secret") + .build(); + when(configSetup1.getConfig(any())).thenReturn(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, config); + val factory1 = factoryCreator.createFactory(expected[0], configSetup1, executorService()); + Assert.assertTrue(factory1 instanceof S3SimpleStorageFactory); + + @Cleanup + Storage storage1 = ((S3SimpleStorageFactory) factory1).createStorageAdapter(42, new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService())); + Assert.assertTrue(storage1 instanceof ChunkedSegmentStorage); + Assert.assertTrue(((ChunkedSegmentStorage) storage1).getChunkStorage() instanceof S3ChunkStorage); + + AssertExtensions.assertThrows( + "createStorageAdapter should throw UnsupportedOperationException.", + () -> factory1.createStorageAdapter(), + ex -> ex instanceof UnsupportedOperationException); } @Test @@ -173,6 +229,7 @@ public void testFileSystemStorageFactoryCreator() { @Cleanup Storage storage1 = ((FileSystemSimpleStorageFactory) factory1).createStorageAdapter(42, new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService())); Assert.assertTrue(storage1 instanceof ChunkedSegmentStorage); + Assert.assertTrue(((ChunkedSegmentStorage) storage1).getChunkStorage() instanceof FileSystemChunkStorage); // Legacy Storage ConfigSetup configSetup2 = mock(ConfigSetup.class); @@ -186,6 +243,11 @@ public void testFileSystemStorageFactoryCreator() { @Cleanup SyncStorage syncStorage = factory2.createSyncStorage(); Assert.assertNotNull(syncStorage); + + AssertExtensions.assertThrows( + "createStorageAdapter should throw UnsupportedOperationException.", + () -> factory1.createStorageAdapter(), + ex -> ex instanceof UnsupportedOperationException); } @Test diff --git a/bindings/src/test/java/io/pravega/storage/extendeds3/ExtendedS3SimpleStorageTests.java b/bindings/src/test/java/io/pravega/storage/extendeds3/ExtendedS3SimpleStorageTests.java index 698402dcef9..46adc5a6814 100644 --- a/bindings/src/test/java/io/pravega/storage/extendeds3/ExtendedS3SimpleStorageTests.java +++ b/bindings/src/test/java/io/pravega/storage/extendeds3/ExtendedS3SimpleStorageTests.java @@ -177,12 +177,14 @@ protected ChunkStorage getChunkStorage() { public static class NoAppendExtendedS3ChunkStorageSystemJournalTests extends SystemJournalTests { private ExtendedS3TestContext testContext = null; + @Override @Before public void before() throws Exception { this.testContext = new ExtendedS3TestContext(); super.before(); } + @Override @After public void after() throws Exception { if (this.testContext != null) { diff --git a/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemChunkStorageMockTest.java b/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemChunkStorageMockTest.java index d4ef532ef19..74db19812fb 100644 --- a/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemChunkStorageMockTest.java +++ b/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemChunkStorageMockTest.java @@ -35,6 +35,7 @@ import java.nio.file.Files; import java.time.Duration; import java.util.List; +import lombok.Cleanup; import static org.junit.Assert.assertEquals; import static org.mockito.ArgumentMatchers.any; @@ -75,7 +76,7 @@ public void testWithNonRegularFile() throws Exception { FileSystemWrapper fileSystemWrapper = mock(FileSystemWrapper.class); 
when(fileSystemWrapper.exists(any())).thenReturn(true); when(fileSystemWrapper.isRegularFile(any())).thenReturn(false); - + @Cleanup FileSystemChunkStorage testStorage = new FileSystemChunkStorage(storageConfig, fileSystemWrapper, executorService()); AssertExtensions.assertFutureThrows( " openRead should throw ChunkStorageException.", @@ -99,6 +100,7 @@ public void testWithRandomException() throws Exception { when(fileSystemWrapper.createDirectories(any())).thenThrow(new IOException("Random")); doThrow(new IOException("Random")).when(fileSystemWrapper).delete(any()); + @Cleanup FileSystemChunkStorage testStorage = new FileSystemChunkStorage(storageConfig, fileSystemWrapper, executorService()); AssertExtensions.assertThrows( " doDelete should throw ChunkStorageException.", @@ -169,6 +171,7 @@ private void doReadTest(int index, int bufferSize) throws Exception { when(fileSystemWrapper.getFileChannel(any(), any())).thenReturn(channel); when(fileSystemWrapper.getFileSize(any())).thenReturn(2L * bufferSize); + @Cleanup FileSystemChunkStorage testStorage = new FileSystemChunkStorage(storageConfig, fileSystemWrapper, executorService()); ChunkHandle handle = ChunkHandle.readHandle(chunkName); diff --git a/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemMockTests.java b/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemMockTests.java index c3dc4d7497d..9a779a8f2cf 100644 --- a/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemMockTests.java +++ b/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemMockTests.java @@ -17,6 +17,7 @@ import io.pravega.segmentstore.storage.SegmentHandle; import io.pravega.test.common.AssertExtensions; +import lombok.Cleanup; import lombok.Getter; import lombok.Setter; import org.junit.Before; @@ -64,6 +65,7 @@ public void setUp() throws Exception { public void testListSegmentsNumberIoException() { FileChannel channel1 = mock(FileChannel.class); FileSystemStorageConfig storageConfig = FileSystemStorageConfig.builder().build(); + @Cleanup TestFileSystemStorage testFileSystemStorage = new TestFileSystemStorage(storageConfig, channel1); AssertExtensions.assertThrows(IOException.class, () -> testFileSystemStorage.listSegments()); } @@ -84,7 +86,7 @@ private void doReadTest(int index, int bufferSize) throws Exception { FileChannel channel = mock(FileChannel.class); fixChannelMock(channel); String segmentName = "test"; - + @Cleanup TestFileSystemStorage testStorage = new TestFileSystemStorage(storageConfig, channel); testStorage.setSizeToReturn(2L * bufferSize); SegmentHandle handle = FileSystemSegmentHandle.readHandle(segmentName); diff --git a/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemSimpleStorageTest.java b/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemSimpleStorageTest.java index 22c6f6efa47..d068c6762bb 100644 --- a/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemSimpleStorageTest.java +++ b/bindings/src/test/java/io/pravega/storage/filesystem/FileSystemSimpleStorageTest.java @@ -42,6 +42,7 @@ private static ChunkStorage newChunkStorage(Executor executor) throws IOExceptio executor); } + @Override protected ChunkStorage getChunkStorage() throws Exception { return newChunkStorage(executorService()); } @@ -50,6 +51,7 @@ protected ChunkStorage getChunkStorage() throws Exception { * {@link ChunkedRollingStorageTests} tests for {@link FileSystemChunkStorage} based {@link io.pravega.segmentstore.storage.Storage}. 
*/ public static class FileSystemRollingTests extends ChunkedRollingStorageTests { + @Override protected ChunkStorage getChunkStorage() throws Exception { return newChunkStorage(executorService()); } @@ -68,6 +70,7 @@ protected ChunkStorage createChunkStorage() throws Exception { /** * Test default capabilities. */ + @Override @Test public void testCapabilities() { assertEquals(true, getChunkStorage().supportsAppend()); diff --git a/bindings/src/test/java/io/pravega/storage/hdfs/HDFSSimpleStorageTest.java b/bindings/src/test/java/io/pravega/storage/hdfs/HDFSSimpleStorageTest.java index 8d698bad0ed..007652fd21d 100644 --- a/bindings/src/test/java/io/pravega/storage/hdfs/HDFSSimpleStorageTest.java +++ b/bindings/src/test/java/io/pravega/storage/hdfs/HDFSSimpleStorageTest.java @@ -43,18 +43,21 @@ public class HDFSSimpleStorageTest extends SimpleStorageTests { public Timeout globalTimeout = Timeout.seconds(TIMEOUT.getSeconds()); private TestContext testContext = new TestContext(executorService()); + @Override @Before public void before() throws Exception { super.before(); testContext.setUp(); } + @Override @After public void after() throws Exception { testContext.tearDown(); super.after(); } + @Override protected ChunkStorage getChunkStorage() throws Exception { return testContext.getChunkStorage(executorService()); } @@ -67,18 +70,21 @@ public static class HDFSRollingTests extends ChunkedRollingStorageTests { public Timeout globalTimeout = Timeout.seconds(TIMEOUT.getSeconds()); private TestContext testContext = new TestContext(executorService()); + @Override @Before public void before() throws Exception { super.before(); testContext.setUp(); } + @Override @After public void after() throws Exception { testContext.tearDown(); super.after(); } + @Override protected ChunkStorage getChunkStorage() throws Exception { return testContext.getChunkStorage(executorService()); } @@ -92,12 +98,14 @@ public static class HDFSChunkStorageTests extends ChunkStorageTests { public Timeout globalTimeout = Timeout.seconds(TIMEOUT.getSeconds()); private TestContext testContext = new TestContext(executorService()); + @Override @Before public void before() throws Exception { testContext.setUp(); super.before(); } + @Override @After public void after() throws Exception { super.after(); @@ -112,6 +120,7 @@ protected ChunkStorage createChunkStorage() throws Exception { /** * Test default capabilities. */ + @Override @Test public void testCapabilities() { assertEquals(true, getChunkStorage().supportsAppend()); @@ -126,12 +135,14 @@ public void testCapabilities() { public static class HDFSChunkStorageSystemJournalTests extends SystemJournalTests { private TestContext testContext = new TestContext(executorService()); + @Override @Before public void before() throws Exception { testContext.setUp(); super.before(); } + @Override @After public void after() throws Exception { testContext.tearDown(); diff --git a/bindings/src/test/java/io/pravega/storage/s3/S3ClientMock.java b/bindings/src/test/java/io/pravega/storage/s3/S3ClientMock.java new file mode 100644 index 00000000000..e80bb12abce --- /dev/null +++ b/bindings/src/test/java/io/pravega/storage/s3/S3ClientMock.java @@ -0,0 +1,117 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.storage.s3; + +import lombok.NonNull; + +import software.amazon.awssdk.awscore.exception.AwsServiceException; +import software.amazon.awssdk.core.ResponseBytes; +import software.amazon.awssdk.core.exception.SdkClientException; +import software.amazon.awssdk.core.sync.RequestBody; +import software.amazon.awssdk.services.s3.S3Client; +import software.amazon.awssdk.services.s3.model.AbortMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.AbortMultipartUploadResponse; +import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadResponse; +import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.CreateMultipartUploadResponse; +import software.amazon.awssdk.services.s3.model.DeleteObjectRequest; +import software.amazon.awssdk.services.s3.model.DeleteObjectResponse; +import software.amazon.awssdk.services.s3.model.DeleteObjectsRequest; +import software.amazon.awssdk.services.s3.model.DeleteObjectsResponse; +import software.amazon.awssdk.services.s3.model.GetObjectRequest; +import software.amazon.awssdk.services.s3.model.GetObjectResponse; +import software.amazon.awssdk.services.s3.model.HeadObjectRequest; +import software.amazon.awssdk.services.s3.model.HeadObjectResponse; +import software.amazon.awssdk.services.s3.model.PutObjectRequest; +import software.amazon.awssdk.services.s3.model.PutObjectResponse; +import software.amazon.awssdk.services.s3.model.S3Exception; +import software.amazon.awssdk.services.s3.model.UploadPartCopyRequest; +import software.amazon.awssdk.services.s3.model.UploadPartCopyResponse; + +/** + * {@link S3Client} implementation that communicates with a {@link S3Mock} storage. 
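+ * Only the {@link S3Client} operations exercised by {@link S3ChunkStorage} are implemented; each call simply delegates to the backing {@link S3Mock} instance.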
+ */ +public class S3ClientMock implements S3Client { + private final S3Mock s3Impl; + + public S3ClientMock(@NonNull S3Mock s3Impl) { + this.s3Impl = s3Impl; + } + + @Override + public String serviceName() { + return "S3"; + } + + @Override + public PutObjectResponse putObject(PutObjectRequest putObjectRequest, RequestBody requestBody) throws AwsServiceException, + SdkClientException { + return s3Impl.putObject(putObjectRequest, requestBody); + } + + @Override + public DeleteObjectResponse deleteObject(DeleteObjectRequest deleteObjectRequest) throws AwsServiceException, + SdkClientException { + return s3Impl.deleteObject(deleteObjectRequest); + } + + @Override + public DeleteObjectsResponse deleteObjects(DeleteObjectsRequest deleteObjectsRequest) throws AwsServiceException, + SdkClientException { + return s3Impl.deleteObjects(deleteObjectsRequest); + } + + @Override + public ResponseBytes<GetObjectResponse> getObjectAsBytes(GetObjectRequest getObjectRequest) throws + AwsServiceException, SdkClientException { + return s3Impl.readObjectStream(getObjectRequest); + } + + @Override + public HeadObjectResponse headObject(HeadObjectRequest headObjectRequest) throws AwsServiceException, + SdkClientException { + return s3Impl.headObject(headObjectRequest); + } + + @Override + public CreateMultipartUploadResponse createMultipartUpload(CreateMultipartUploadRequest createMultipartUploadRequest) + throws AwsServiceException, SdkClientException, S3Exception { + return s3Impl.createMultipartUpload(createMultipartUploadRequest); + } + + @Override + public UploadPartCopyResponse uploadPartCopy(UploadPartCopyRequest uploadPartCopyRequest) throws AwsServiceException, + SdkClientException { + return s3Impl.uploadPartCopy(uploadPartCopyRequest); + } + + @Override + public AbortMultipartUploadResponse abortMultipartUpload(AbortMultipartUploadRequest abortMultipartUploadRequest) + throws AwsServiceException, SdkClientException, S3Exception { + return s3Impl.abortMultipartUpload(abortMultipartUploadRequest); + } + + @Override + public CompleteMultipartUploadResponse completeMultipartUpload(CompleteMultipartUploadRequest completeMultipartUploadRequest) + throws AwsServiceException, SdkClientException { + return s3Impl.completeMultipartUpload(completeMultipartUploadRequest); + } + + @Override + public void close() { + } +} diff --git a/bindings/src/test/java/io/pravega/storage/s3/S3Mock.java b/bindings/src/test/java/io/pravega/storage/s3/S3Mock.java new file mode 100644 index 00000000000..c39baa1a738 --- /dev/null +++ b/bindings/src/test/java/io/pravega/storage/s3/S3Mock.java @@ -0,0 +1,276 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.storage.s3; + +import io.pravega.common.util.BufferView; +import io.pravega.common.util.ByteArraySegment; +import lombok.AllArgsConstructor; +import lombok.val; +import org.apache.http.HttpStatus; +import software.amazon.awssdk.awscore.exception.AwsErrorDetails; +import software.amazon.awssdk.awscore.exception.AwsServiceException; +import software.amazon.awssdk.core.ResponseBytes; +import software.amazon.awssdk.core.exception.SdkClientException; +import software.amazon.awssdk.core.sync.RequestBody; +import software.amazon.awssdk.http.SdkHttpResponse; +import software.amazon.awssdk.services.s3.model.AbortMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.AbortMultipartUploadResponse; +import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadResponse; +import software.amazon.awssdk.services.s3.model.CopyPartResult; +import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest; +import software.amazon.awssdk.services.s3.model.CreateMultipartUploadResponse; +import software.amazon.awssdk.services.s3.model.DeleteObjectRequest; +import software.amazon.awssdk.services.s3.model.DeleteObjectResponse; +import software.amazon.awssdk.services.s3.model.DeleteObjectsRequest; +import software.amazon.awssdk.services.s3.model.DeleteObjectsResponse; +import software.amazon.awssdk.services.s3.model.GetObjectRequest; +import software.amazon.awssdk.services.s3.model.GetObjectResponse; +import software.amazon.awssdk.services.s3.model.HeadObjectRequest; +import software.amazon.awssdk.services.s3.model.HeadObjectResponse; +import software.amazon.awssdk.services.s3.model.ObjectCannedACL; +import software.amazon.awssdk.services.s3.model.PutObjectRequest; +import software.amazon.awssdk.services.s3.model.PutObjectResponse; +import software.amazon.awssdk.services.s3.model.S3Exception; +import software.amazon.awssdk.services.s3.model.UploadPartCopyRequest; +import software.amazon.awssdk.services.s3.model.UploadPartCopyResponse; + +import javax.annotation.concurrent.GuardedBy; +import javax.annotation.concurrent.NotThreadSafe; +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.atomic.AtomicLong; + +/** + * In-memory mock for S3. 
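+ * Objects live in an in-memory map keyed by bucket and object name; in-flight multipart uploads and their parts are tracked in separate maps until the upload is completed or aborted.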
+ */ +public class S3Mock { + //region Private + + @GuardedBy("objects") + private final Map<String, Map<Integer, UploadPartCopyRequest>> multipartUploadParts; + @GuardedBy("objects") + private final Map<String, String> multipartUploads; + + private final AtomicLong multipartNextId = new AtomicLong(0); + private final AtomicLong eTags = new AtomicLong(0); + + @GuardedBy("objects") + private final Map<String, ObjectData> objects; + + //endregion + + //region Constructor + + public S3Mock() { + this.objects = new HashMap<>(); + this.multipartUploads = new HashMap<>(); + this.multipartUploadParts = new HashMap<>(); + } + + //endregion + + //region Mock Implementation + + private String getObjectName(String bucketName, String key) { + return String.format("%s-%s", bucketName, key); + } + + PutObjectResponse putObject(PutObjectRequest request, RequestBody requestBody) { + String objectName = getObjectName(request.bucket(), request.key()); + synchronized (this.objects) { + if (this.objects.containsKey(objectName)) { + throw S3Exception.builder().build(); + } + if (null != requestBody) { + try (val inputStream = requestBody.contentStreamProvider().newStream()) { + val bufferView = new ByteArraySegment(inputStream.readAllBytes()); + this.objects.put(objectName, new ObjectData(bufferView, request.acl())); + } catch (IOException ex) { + throw getException("Copy error", "Copy error", HttpStatus.SC_INTERNAL_SERVER_ERROR); + } + } else { + this.objects.put(objectName, new ObjectData(BufferView.empty(), request.acl())); + } + return PutObjectResponse.builder().build(); + } + } + + DeleteObjectResponse deleteObject(DeleteObjectRequest deleteObjectRequest) { + String objectName = getObjectName(deleteObjectRequest.bucket(), deleteObjectRequest.key()); + return deleteObject(objectName); + } + + DeleteObjectResponse deleteObject(String objectName) { + synchronized (this.objects) { + if (this.objects.remove(objectName) == null) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + return DeleteObjectResponse.builder().build(); + } + } + + DeleteObjectsResponse deleteObjects(DeleteObjectsRequest deleteObjectsRequest) { + for (var object: deleteObjectsRequest.delete().objects()) { + String objectName = getObjectName(deleteObjectsRequest.bucket(), object.key()); + deleteObject(objectName); + } + return DeleteObjectsResponse.builder().build(); + } + + HeadObjectResponse headObject(HeadObjectRequest headObjectRequest) throws AwsServiceException, + SdkClientException { + ObjectData result; + String objectName = getObjectName(headObjectRequest.bucket(), headObjectRequest.key()); + synchronized (this.objects) { + ObjectData od = this.objects.get(objectName); + + if (od == null) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + result = od; + } + var objectData = result; + return HeadObjectResponse.builder() + .contentLength((long) objectData.content.getLength()) + .build(); + } + + ResponseBytes<GetObjectResponse> readObjectStream(GetObjectRequest getObjectRequest) { + String objectName = getObjectName(getObjectRequest.bucket(), getObjectRequest.key()); + synchronized (this.objects) { + ObjectData od = this.objects.get(objectName); + + if (od == null) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + int offset = 0; + int length = od.content.getLength(); + if (null != getObjectRequest.range()) { + var parts = getObjectRequest.range().replace("bytes=", "").split("-"); + offset = Integer.parseInt(parts[0]); + length = 
Integer.parseInt(parts[1]) - offset + 1; + } + try { + return ResponseBytes.fromInputStream(GetObjectResponse.builder().build(), + od.content.slice(offset, length).getReader()); + } catch (Exception ex) { + throw getException(S3ChunkStorage.INVALID_RANGE, S3ChunkStorage.INVALID_RANGE, HttpStatus.SC_REQUESTED_RANGE_NOT_SATISFIABLE); + } + } + } + + AwsServiceException getException(String message, String errorCode, int status) { + return S3Exception.builder() + .message(message) + .statusCode(status) + .awsErrorDetails(AwsErrorDetails.builder() + .errorCode(errorCode) + .sdkHttpResponse(SdkHttpResponse.builder() + .statusCode(status) + .build()) + .build()) + .build(); + } + + CreateMultipartUploadResponse createMultipartUpload(CreateMultipartUploadRequest createMultipartUploadRequest) { + String objectName = getObjectName(createMultipartUploadRequest.bucket(), createMultipartUploadRequest.key()); + synchronized (this.objects) { + val id = this.multipartNextId.incrementAndGet(); + this.multipartUploads.put(Long.toString(id), objectName); + this.multipartUploadParts.put(Long.toString(id), new HashMap<>()); + return CreateMultipartUploadResponse.builder() + .uploadId(Long.toString(id)) + .build(); + } + } + + UploadPartCopyResponse uploadPartCopy(UploadPartCopyRequest uploadPartCopyRequest) { + synchronized (this.objects) { + val parts = this.multipartUploadParts.get(uploadPartCopyRequest.uploadId()); + if (null == parts) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + parts.put(uploadPartCopyRequest.partNumber(), uploadPartCopyRequest); + return UploadPartCopyResponse.builder() + .copyPartResult(CopyPartResult.builder().eTag(Long.toString(eTags.incrementAndGet())).build()) + .build(); + } + } + + CompleteMultipartUploadResponse completeMultipartUpload(CompleteMultipartUploadRequest completeMultipartUploadRequest) { + String objectName = getObjectName(completeMultipartUploadRequest.bucket(), completeMultipartUploadRequest.key()); + synchronized (this.objects) { + val partMap = this.multipartUploadParts.get(completeMultipartUploadRequest.uploadId()); + if (partMap == null) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + + if (!this.objects.containsKey(objectName)) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + + val builder = BufferView.builder(); + + for (int i = 1; i <= partMap.size(); i++) { + val part = partMap.get(i); + if (null == part) { + // Make sure all the parts are there. 
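+                    // Part numbers are 1-based and must form a contiguous run 1..partMap.size();
+                    // a gap means that part was never uploaded, so the upload cannot be completed.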
+ throw getException(S3ChunkStorage.INVALID_PART, S3ChunkStorage.INVALID_PART, HttpStatus.SC_BAD_REQUEST); + } + String partObjectName = getObjectName(part.sourceBucket(), part.sourceKey()); + ObjectData od = this.objects.get(partObjectName); + if (od == null) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + builder.add(od.content); + } + + this.objects.get(objectName).content = builder.build(); + this.multipartUploads.remove(completeMultipartUploadRequest.uploadId()); + this.multipartUploadParts.remove(completeMultipartUploadRequest.uploadId()); + + return CompleteMultipartUploadResponse.builder().build(); + } + } + + AbortMultipartUploadResponse abortMultipartUpload(AbortMultipartUploadRequest request) { + synchronized (this.objects) { + val partMap = this.multipartUploads.remove(request.uploadId()); + if (partMap == null) { + throw getException(S3ChunkStorage.NO_SUCH_KEY, S3ChunkStorage.NO_SUCH_KEY, HttpStatus.SC_NOT_FOUND); + } + this.multipartUploadParts.remove(request.uploadId()); + + return AbortMultipartUploadResponse.builder().build(); + } + } + + //endregion + + //region Helper classes + + @NotThreadSafe + @AllArgsConstructor + static class ObjectData { + volatile BufferView content; + volatile ObjectCannedACL acl; + } + + //endregion +} + diff --git a/bindings/src/test/java/io/pravega/storage/s3/S3SimpleStorageTests.java b/bindings/src/test/java/io/pravega/storage/s3/S3SimpleStorageTests.java new file mode 100644 index 00000000000..0a9643f2399 --- /dev/null +++ b/bindings/src/test/java/io/pravega/storage/s3/S3SimpleStorageTests.java @@ -0,0 +1,156 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.storage.s3; + +import io.pravega.segmentstore.storage.chunklayer.ChunkStorage; +import io.pravega.segmentstore.storage.chunklayer.ChunkStorageTests; +import io.pravega.segmentstore.storage.chunklayer.ChunkedRollingStorageTests; +import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorageConfig; +import io.pravega.segmentstore.storage.chunklayer.SimpleStorageTests; +import io.pravega.segmentstore.storage.chunklayer.SystemJournalTests; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; + +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; + +/** + * Unit tests for {@link S3ChunkStorage} based {@link io.pravega.segmentstore.storage.Storage}. 
+ */
+public class S3SimpleStorageTests extends SimpleStorageTests {
+    private S3TestContext testContext = null;
+
+    @Override
+    @Before
+    public void before() throws Exception {
+        this.testContext = new S3TestContext();
+        super.before();
+    }
+
+    @Override
+    @After
+    public void after() throws Exception {
+        if (this.testContext != null) {
+            this.testContext.close();
+        }
+        super.after();
+    }
+
+    @Override
+    protected ChunkStorage getChunkStorage() {
+        return new S3ChunkStorage(testContext.s3Client, testContext.adapterConfig, executorService(), false);
+    }
+
+    /**
+     * {@link ChunkedRollingStorageTests} tests for {@link S3ChunkStorage} based {@link io.pravega.segmentstore.storage.Storage}.
+     */
+    public static class S3StorageRollingTests extends ChunkedRollingStorageTests {
+        private S3TestContext testContext = null;
+
+        @Before
+        public void setUp() throws Exception {
+            this.testContext = new S3TestContext();
+        }
+
+        @After
+        public void tearDown() throws Exception {
+            if (this.testContext != null) {
+                this.testContext.close();
+            }
+        }
+
+        @Override
+        protected ChunkStorage getChunkStorage() {
+            return new S3ChunkStorage(testContext.s3Client, testContext.adapterConfig, executorService(), false);
+        }
+
+        @Override
+        protected ChunkedSegmentStorageConfig getDefaultConfig() {
+            // Use a long literal for the 5 GB limit; plain int arithmetic would silently overflow.
+            return ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
+                    .minSizeLimitForConcat(5 * 1024 * 1024)
+                    .maxSizeLimitForConcat(5L * 1024 * 1024 * 1024)
+                    .build();
+        }
+    }
+
+    /**
+     * {@link ChunkStorageTests} tests for {@link S3ChunkStorage} based {@link io.pravega.segmentstore.storage.Storage}.
+     */
+    public static class S3ChunkStorageTests extends ChunkStorageTests {
+        private S3TestContext testContext = null;
+
+        @Override
+        @Before
+        public void before() throws Exception {
+            this.testContext = new S3TestContext();
+            super.before();
+        }
+
+        @Override
+        @After
+        public void after() throws Exception {
+            if (this.testContext != null) {
+                this.testContext.close();
+            }
+            super.after();
+        }
+
+        @Override
+        protected ChunkStorage createChunkStorage() {
+            return new S3ChunkStorage(testContext.s3Client, testContext.adapterConfig, executorService(), false);
+        }
+
+        /**
+         * Test default capabilities.
+         */
+        @Override
+        @Test
+        public void testCapabilities() {
+            assertFalse(getChunkStorage().supportsAppend());
+            assertFalse(getChunkStorage().supportsTruncation());
+            assertTrue(getChunkStorage().supportsConcat());
+        }
+    }
+
+    /**
+     * {@link SystemJournalTests} tests for {@link S3ChunkStorage} based {@link io.pravega.segmentstore.storage.Storage}.
+     */
+    public static class S3ChunkStorageSystemJournalTests extends SystemJournalTests {
+        private S3TestContext testContext = null;
+
+        @Override
+        @Before
+        public void before() throws Exception {
+            this.testContext = new S3TestContext();
+            super.before();
+        }
+
+        @Override
+        @After
+        public void after() throws Exception {
+            if (this.testContext != null) {
+                this.testContext.close();
+            }
+            super.after();
+        }
+
+        @Override
+        protected ChunkStorage getChunkStorage() {
+            return new S3ChunkStorage(testContext.s3Client, testContext.adapterConfig, executorService(), false);
+        }
+    }
+}
diff --git a/bindings/src/test/java/io/pravega/storage/s3/S3StorageConfigTest.java b/bindings/src/test/java/io/pravega/storage/s3/S3StorageConfigTest.java
new file mode 100644
index 00000000000..e2bb74de9a3
--- /dev/null
+++ b/bindings/src/test/java/io/pravega/storage/s3/S3StorageConfigTest.java
@@ -0,0 +1,59 @@
+/**
+ * Copyright Pravega Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.pravega.storage.s3;
+
+import io.pravega.common.util.ConfigBuilder;
+import io.pravega.common.util.Property;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+public class S3StorageConfigTest {
+
+    @Test
+    public void testDefaultS3Config() {
+        ConfigBuilder<S3StorageConfig> builder = S3StorageConfig.builder();
+        builder.with(Property.named("configUri"), "http://127.0.0.1:9020")
+                .with(Property.named("bucket"), "testBucket")
+                .with(Property.named("prefix"), "testPrefix");
+        S3StorageConfig config = builder.build();
+        assertEquals("testBucket", config.getBucket());
+        assertEquals("testPrefix/", config.getPrefix());
+        assertEquals("us-east-1", config.getRegion());
+        assertEquals(false, config.isShouldOverrideUri());
+    }
+
+    @Test
+    public void testConstructS3Config() {
+        ConfigBuilder<S3StorageConfig> builder = S3StorageConfig.builder();
+        builder.with(Property.named("connect.config.uri"), "http://example.com")
+                .with(Property.named("bucket"), "testBucket")
+                .with(Property.named("prefix"), "testPrefix")
+                .with(Property.named("connect.config.region"), "my-region")
+                .with(Property.named("connect.config.access.key"), "key")
+                .with(Property.named("connect.config.secret.key"), "secret")
+                .with(Property.named("connect.config.uri.override"), true);
+        S3StorageConfig config = builder.build();
+        assertEquals("testBucket", config.getBucket());
+        assertEquals("testPrefix/", config.getPrefix());
+        assertEquals("my-region", config.getRegion());
+        assertEquals(true, config.isShouldOverrideUri());
+        assertEquals("http://example.com", config.getS3Config());
+        assertEquals("key", config.getAccessKey());
+        assertEquals("secret", config.getSecretKey());
+    }
+}
diff --git a/bindings/src/test/java/io/pravega/storage/s3/S3TestContext.java b/bindings/src/test/java/io/pravega/storage/s3/S3TestContext.java
new file mode 100644
index 00000000000..2aeff3506ae
--- /dev/null
+++ b/bindings/src/test/java/io/pravega/storage/s3/S3TestContext.java
@@ -0,0 +1,61 @@
+/**
+ * Copyright Pravega Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.pravega.storage.s3;
+
+import io.pravega.test.common.TestUtils;
+import software.amazon.awssdk.services.s3.S3Client;
+
+import java.util.UUID;
+
+/**
+ * Test context for S3 tests.
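+ * Creates an {@link S3Mock} wired into a mock {@link S3Client}, plus an adapter config whose
+ * prefix is randomized per instance so tests do not collide.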
+ */ +public class S3TestContext { + public static final String BUCKET_NAME_PREFIX = "pravega-unit-test-"; + public final S3StorageConfig adapterConfig; + + public final int port; + public final String configUri; + public final S3Client s3Client; + public final S3Mock s3Mock; + + public S3TestContext() { + try { + this.port = TestUtils.getAvailableListenPort(); + this.configUri = "https://localhost"; + String bucketName = "test-bucket"; + String prefix = BUCKET_NAME_PREFIX + UUID.randomUUID(); + this.adapterConfig = S3StorageConfig.builder() + .with(S3StorageConfig.CONFIGURI, configUri) + .with(S3StorageConfig.BUCKET, bucketName) + .with(S3StorageConfig.PREFIX, prefix) + .with(S3StorageConfig.ACCESS_KEY, "access") + .with(S3StorageConfig.SECRET_KEY, "secret") + .build(); + s3Mock = new S3Mock(); + s3Client = new S3ClientMock(this.s3Mock); + } catch (Exception e) { + close(); + throw e; + } + } + + public void close() { + if (null != s3Client) { + s3Client.close(); + } + } +} diff --git a/build.gradle b/build.gradle index 3c30e63e1e4..84d60e0ae7e 100644 --- a/build.gradle +++ b/build.gradle @@ -120,6 +120,7 @@ allprojects { force "io.netty:netty-handler-proxy:" + nettyVersion force "io.netty:netty-transport-native-epoll:" + nettyVersion force "io.netty:netty-tcnative-boringssl-static:" + nettyBoringSSLVersion + force "junit:junit:" + junitVersion // Netty 4 uber jar exclude group: 'io.netty', module: 'netty-all' // Netty 3 @@ -187,7 +188,6 @@ def withoutJaxbAndJjwt = { exclude group: 'javax.xml.bind', module: 'jaxb-api' project('shared:authplugin') { dependencies { compile group: 'com.google.guava', name: 'guava', version: guavaVersion - testCompile project(":shared:security") } tasks.withType(JavaCompile) { @@ -321,6 +321,7 @@ project ('shared:rest') { testCompile project(':test:testcommon') testCompile project(":shared:authplugin").sourceSets.test.output testCompile "io.grpc:grpc-netty:" + grpcVersion + testCompile group: 'ch.qos.logback', name: 'logback-classic', version: qosLogbackVersion } } @@ -480,6 +481,9 @@ project ('bindings') { //For Extended S3 compile group: 'com.emc.ecs', name: 'object-client', version: ecsObjectClientVersion, withoutLogger + compile group: 'software.amazon.awssdk', name: 's3', version: awsSdkVersion + compile group: 'software.amazon.awssdk', name: 'sts', version: awsSdkVersion + // These were previously brought in as transitive dependencies of HDFS, although they were required for running // ExtendedS3SimpleStorageTests. But since the HDFS 3.x upgrade, these transitive dependencies are excluded, // so adding explicit dependencies is necessary (as well as desirable). 
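For reference: the `software.amazon.awssdk` artifacts added above are the AWS SDK v2 libraries the new S3 binding compiles against. A minimal sketch of building a v2 `S3Client` against an S3-compatible endpoint follows; the endpoint, region, and credential values are illustrative placeholders echoing the test defaults, not values taken from this patch.

```
import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

public final class S3ClientFactorySketch {
    // Placeholder values; a real deployment would take these from its storage configuration.
    public static S3Client build() {
        return S3Client.builder()
                .region(Region.of("us-east-1"))
                // Only needed when pointing at an S3-compatible store rather than AWS itself.
                .endpointOverride(URI.create("http://127.0.0.1:9020"))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("access", "secret")))
                // Many S3-compatible stores require path-style addressing.
                .serviceConfiguration(S3Configuration.builder()
                        .pathStyleAccessEnabled(true)
                        .build())
                .build();
    }
}
```

The unit tests above never build a networked client at all: `S3ClientMock` wraps the in-memory `S3Mock`, so requests are served from the maps shown earlier.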
@@ -509,6 +513,7 @@ project('segmentstore:server') {
     compile project(':segmentstore:storage')
     compile project(':shared:metrics')
     compile project(':shared:rest')
+    compile project(':shared:health')
     testCompile project(':test:testcommon')
 }
 javadoc {
@@ -912,7 +917,7 @@ project('standalone') {
     applicationName = "pravega-standalone"
     mainClassName = "io.pravega.local.LocalPravegaEmulator"
     applicationDefaultJvmArgs = ["-server", "-Xmx4g", "-XX:+HeapDumpOnOutOfMemoryError",
-                                 "-Dlogback.configurationFile=PRAVEGA_APP_HOME/conf/logback.xml",
+                                 "-Dlogback.configurationFile=PRAVEGA_APP_HOME/conf/standalone-logback.xml",
                                  "-Dsinglenode.configurationFile=PRAVEGA_APP_HOME/conf/standalone-config.properties",
                                  "-Dlog.dir=PRAVEGA_APP_HOME/logs",
                                  "-Dlog.name=pravega",
@@ -949,6 +954,7 @@ project('standalone') {
     compile group: 'org.apache.hadoop', name: 'hadoop-hdfs', version: hadoopVersion, withoutLogger
     compile group: 'org.apache.hadoop', name: 'hadoop-minicluster', version: hadoopVersion, withoutLogger
     compile group: 'io.netty', name: 'netty-tcnative-boringssl-static', version: nettyBoringSSLVersion
+    compile group: 'io.jsonwebtoken', name: 'jjwt', version: jjwtVersion
     testCompile project(':test:testcommon')
     testCompile project(path:':common', configuration:'testRuntime')
     testCompile project(path:':segmentstore:server', configuration:'testRuntime')
@@ -1081,16 +1087,22 @@ project('test:system') {
     description 'Used to copy test artifacts to the Kubernetes Cluster.'
     dependsOn 'jar'
+    environment "dockerRegistryUrl", System.getProperty("dockerRegistryUrl")
+    environment "imagePrefix", System.getProperty("imagePrefix", "pravega")
     environment "tier2Type", System.getProperty("tier2Type", "nfs")
-    environment "pravegaOperatorVersion", System.getProperty("pravegaOperatorVersion", "latest")
-    environment "bookkeeperOperatorVersion", System.getProperty("bookkeeperOperatorVersion", "latest")
-    environment "zookeeperOperatorVersion", System.getProperty("zookeeperOperatorVersion", "latest")
+    environment "pravegaOperatorChartVersion", System.getProperty("pravegaOperatorChartVersion", "0.6.1")
+    environment "bookkeeperOperatorChartVersion", System.getProperty("bookkeeperOperatorChartVersion", "0.2.1")
+    environment "zookeeperOperatorChartVersion", System.getProperty("zookeeperOperatorChartVersion", "0.2.13")
+    environment "pravegaOperatorVersion", System.getProperty("pravegaOperatorVersion", "0.5.5")
+    environment "bookkeeperOperatorVersion", System.getProperty("bookkeeperOperatorVersion", "0.1.6")
+    environment "zookeeperOperatorVersion", System.getProperty("zookeeperOperatorVersion", "0.2.13")
+    environment "pravegaOperatorImageName", System.getProperty("pravegaOperatorImageName", "pravega-operator")
+    environment "bookkeeperOperatorImageName", System.getProperty("bookkeeperOperatorImageName", "bookkeeper-operator")
+    environment "zookeeperOperatorImageName", System.getProperty("zookeeperOperatorImageName", "zookeeper-operator")
     // The below properties are used to specify the pravega published chart and repository to deploy pravega, bookkeeper & zookeeper operators using helm docker image properties
     environment "publishedChartName", System.getProperty("publishedChartName", "pravega")
     environment "helmRepository", System.getProperty("helmRepository", "https://charts.pravega.io")
-    // To update config map with desired pravega version
-    environment "desiredPravegaCMVersion", System.getProperty("desiredPravegaCMVersion", "0.9.0")
-    environment "desiredBookkeeperCMVersion",
System.getProperty("desiredBookkeeperCMVersion", "0.9.0") + environment "helmHookImageName", System.getProperty("helmHookImageName", "lachlanevenson/k8s-kubectl") environment "tlsEnabled", System.getProperty("tlsEnabled", "false") environment "skipServiceInstallation", System.getProperty("skipServiceInstallation", "false") environment "testPodImage", System.getProperty("testPodImage", "openjdk:8u181-jre-alpine") @@ -1145,13 +1157,19 @@ project('test:system') { systemProperty "tier2Type", System.getProperty("tier2Type", "nfs") // tier2 configuration, specified as comma separated key values k1=v1,k2=v2 systemProperty "tier2Config", System.getProperty("tier2Config") - systemProperty "pravegaOperatorVersion",System.getProperty("pravegaOperatorVersion", "latest") - systemProperty "bookkeeperOperatorVersion",System.getProperty("bookkeeperOperatorVersion", "latest") - systemProperty "zookeeperOperatorVersion",System.getProperty("zookeeperOperatorVersion", "latest") - systemProperty "desiredPravegaCMVersion",System.getProperty("desiredPravegaCMVersion", "0.9.0") - systemProperty "desiredBookkeeperCMVersion",System.getProperty("desiredBookkeeperCMVersion", "0.9.0") - systemProperty "publishedChartName",System.getProperty("publishedChartName", "pravega") - systemProperty "helmRepository",System.getProperty("helmRepository", "pravega") + systemProperty "tier2Env", System.getProperty("tier2Env") + systemProperty "pravegaOperatorChartVersion", System.getProperty("pravegaOperatorChartVersion", "0.6.1") + systemProperty "bookkeeperOperatorChartVersion", System.getProperty("bookkeeperOperatorChartVersion", "0.2.1") + systemProperty "zookeeperOperatorChartVersion", System.getProperty("zookeeperOperatorChartVersion", "0.2.13") + systemProperty "pravegaOperatorVersion", System.getProperty("pravegaOperatorVersion", "0.5.5") + systemProperty "bookkeeperOperatorVersion", System.getProperty("bookkeeperOperatorVersion", "0.1.6") + systemProperty "zookeeperOperatorVersion", System.getProperty("zookeeperOperatorVersion", "0.2.13") + systemProperty "pravegaOperatorImageName", System.getProperty("pravegaOperatorImageName", "pravega-operator") + systemProperty "bookkeeperOperatorImageName", System.getProperty("bookkeeperOperatorImageName", "bookkeeper-operator") + systemProperty "zookeeperOperatorImageName", System.getProperty("zookeeperOperatorImageName", "zookeeper-operator") + systemProperty "publishedChartName", System.getProperty("publishedChartName", "pravega") + systemProperty "helmRepository", System.getProperty("helmRepository", "pravega") + systemProperty "helmHookImageName", System.getProperty("helmHookImageName", "lachlanevenson/k8s-kubectl") // target versions wrt Upgrade for all operators & clusters systemProperty "controllerLabel", System.getProperty("controllerLabel", "pravega-controller"); @@ -1162,12 +1180,12 @@ project('test:system') { systemProperty "bookkeeperID", System.getProperty("bookkeeperID", "pravega-bk"); systemProperty "bookkeeperConfigMap", System.getProperty("bookkeeperConfigMap", "bk-config-map"); - systemProperty "targetPravegaOperatorVersion",System.getProperty("targetPravegaOperatorVersion", "latest") - systemProperty "targetBookkeeperOperatorVersion",System.getProperty("targetBookkeeperOperatorVersion", "latest") - systemProperty "targetZookeeperOperatorVersion",System.getProperty("targetZookeeperOperatorVersion", "latest") - systemProperty "targetPravegaVersion",System.getProperty("targetPravegaVersion", "latest") - systemProperty 
"targetBookkeeperVersion",System.getProperty("targetBookkeeperVersion", "latest") - systemProperty "targetZookeeperVersion",System.getProperty("targetZookeeperVersion", "latest") + systemProperty "targetPravegaOperatorVersion", System.getProperty("targetPravegaOperatorVersion", "latest") + systemProperty "targetBookkeeperOperatorVersion", System.getProperty("targetBookkeeperOperatorVersion", "latest") + systemProperty "targetZookeeperOperatorVersion", System.getProperty("targetZookeeperOperatorVersion", "latest") + systemProperty "targetPravegaVersion", System.getProperty("targetPravegaVersion", "latest") + systemProperty "targetBookkeeperVersion", System.getProperty("targetBookkeeperVersion", "latest") + systemProperty "targetZookeeperVersion", System.getProperty("targetZookeeperVersion", "latest") //testPodImage , default is openjdk:8u181-jre-alpine systemProperty "testPodImage", System.getProperty("testPodImage", "openjdk:8u181-jre-alpine") systemProperty "testServiceAccount", System.getProperty("testServiceAccount", "test-framework"); @@ -1176,10 +1194,18 @@ project('test:system') { if (System.getProperty("imageVersion") == null) { systemProperty "imageVersion", "INVALID_IMAGE_VERSION" } + // validating that bookkeeper operator version >= 0.1.5 + if (System.getProperty("bookkeeperOperatorVersion") ==~ /^(0)\.(1)\.(0|1|2|3|4)(-.+)*/) { + throw new InvalidUserDataException('Requires Bookkeeper Operator Version >= 0.1.5') + } + // validating that pravega operator version >= 0.5.4 + if (System.getProperty("pravegaOperatorVersion") ==~ /^((0)\.(1|2|3|4)\.(\d)(-.+)*)|((0)\.(5)\.(0|1|2|3)(-.+)*)/) { + throw new InvalidUserDataException('Requires Pravega Operator Version >= 0.5.4') + } systemProperty "securityEnabled", System.getProperty("securityEnabled", "false") systemProperty "tlsEnabled", System.getProperty("tlsEnabled", "false") - systemProperty "logLevel", System.getProperty("log.level", "DEBUG") + systemProperty "log.level", System.getProperty("log.level", "DEBUG") if (System.getProperty("configFile") != null) { systemProperty "configs", new File(System.getProperty("configFile")).text @@ -1275,7 +1301,7 @@ project('test:system') { } maxParallelForks = 1 systemProperty "dockerImageRegistry", "${dockerRegistryUrl}" - systemProperty "logLevel", System.getProperty("log.level", "DEBUG") + systemProperty "log.level", System.getProperty("log.level", "DEBUG") systemProperty "tlsEnabled", System.getProperty("tlsEnabled", "false") } @@ -1447,7 +1473,7 @@ task preparePravegaImage(type: Copy) { from "docker/pravega" from (installDist) { into "pravega" - exclude "**/*.bat" + exclude "**/*.bat", "**/log4j*.jar" } } @@ -1537,4 +1563,4 @@ class DockerPushTask extends Exec { return tag } } -} \ No newline at end of file +} diff --git a/checkstyle/import-control.xml b/checkstyle/import-control.xml index 1d33c8fefed..3498953cdd3 100644 --- a/checkstyle/import-control.xml +++ b/checkstyle/import-control.xml @@ -43,5 +43,10 @@ - + + + + + + diff --git a/cli/admin/README.md b/cli/admin/README.md index 4ed55cfa0e1..6409536037c 100644 --- a/cli/admin/README.md +++ b/cli/admin/README.md @@ -65,18 +65,19 @@ Pravega Admin CLI. 
 Initial configuration:
     cli.store.metadata.backend=segmentstore
-    cli.controller.connect.credentials.username=admin
+    cli.credentials.username=admin
     pravegaservice.admin.gateway.port=9999
     pravegaservice.storage.impl.name=FILESYSTEM
-    cli.controller.connect.credentials.pwd=1111_aaaa
+    cli.credentials.pwd=1111_aaaa
     bookkeeper.ledger.path=/pravega/pravega/bookkeeper/ledgers
-    cli.controller.connect.channel.auth=false
-    cli.controller.connect.channel.tls=false
+    cli.channel.auth=false
+    cli.channel.tls=false
     pravegaservice.clusterName=pravega/pravega
     pravegaservice.zk.connect.uri=localhost:2181
     cli.controller.connect.rest.uri=localhost:9091
     cli.controller.connect.grpc.uri=localhost:9090
-    cli.controller.connect.trustStore.location=conf/ca-cert.crt
+    cli.trustStore.location=conf/ca-cert.crt
+    cli.trustStore.access.token.ttl.seconds=300
     pravegaservice.container.count=4
 ```
@@ -241,5 +242,18 @@ the appropriate implementation in `AdminRequestProcessorImpl`.
 - The last step would be to add each new `segmentstore` command as a separate class in the package where the existing commands are already placed.
 
+## Enable TLS and auth in the CLI
+Make sure to update the following fields in the configuration to enable TLS and auth in the CLI:
+```
+cli.channel.auth=true
+cli.channel.tls=true
+
+cli.credentials.username=admin
+cli.credentials.pwd=1111_aaaa
+cli.trustStore.location=conf/ca-cert.crt
+cli.trustStore.access.token.ttl.seconds=600
+```
+Set the above fields to match the username, password, and certificate location in the environment.
+
 ## Support
 If you find any issue or you have any suggestion, please report an issue to [this repository](https://github.com/pravega/pravega/issues).
\ No newline at end of file
diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommand.java
index bcd71242a8d..b6827ba6932 100644
--- a/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommand.java
+++ b/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommand.java
@@ -44,11 +44,19 @@
 import io.pravega.cli.admin.cluster.ListContainersCommand;
 import io.pravega.cli.admin.config.ConfigListCommand;
 import io.pravega.cli.admin.config.ConfigSetCommand;
+import io.pravega.cli.admin.segmentstore.FlushToStorageCommand;
 import io.pravega.cli.admin.segmentstore.GetSegmentAttributeCommand;
 import io.pravega.cli.admin.segmentstore.GetSegmentInfoCommand;
 import io.pravega.cli.admin.segmentstore.ReadSegmentRangeCommand;
 import io.pravega.cli.admin.segmentstore.UpdateSegmentAttributeCommand;
-import io.pravega.cli.admin.utils.CLIControllerConfig;
+import io.pravega.cli.admin.segmentstore.tableSegment.GetTableSegmentEntryCommand;
+import io.pravega.cli.admin.segmentstore.tableSegment.GetTableSegmentInfoCommand;
+import io.pravega.cli.admin.segmentstore.tableSegment.ListTableSegmentKeysCommand;
+import io.pravega.cli.admin.segmentstore.tableSegment.ModifyTableSegmentEntry;
+import io.pravega.cli.admin.segmentstore.tableSegment.PutTableSegmentEntryCommand;
+import io.pravega.cli.admin.segmentstore.tableSegment.SetSerializerCommand;
+import io.pravega.cli.admin.utils.AdminSegmentHelper;
+import io.pravega.cli.admin.utils.CLIConfig;
 import io.pravega.client.ClientConfig;
 import io.pravega.client.connection.impl.ConnectionPool;
 import io.pravega.client.connection.impl.ConnectionPoolImpl;
@@ -136,10 +144,10 @@ protected ServiceConfig getServiceConfig() {
     }
 
     /**
-     * Creates a new instance of the CLIControllerConfig class from the shared AdminCommandState passed in via the
Constructor. + * Creates a new instance of the CLIConfig class from the shared AdminCommandState passed in via the Constructor. */ - protected CLIControllerConfig getCLIControllerConfig() { - return getCommandArgs().getState().getConfigBuilder().build().getConfig(CLIControllerConfig::builder); + protected CLIConfig getCLIControllerConfig() { + return getCommandArgs().getState().getConfigBuilder().build().getConfig(CLIConfig::builder); } /** @@ -176,6 +184,7 @@ protected void prettyJSONOutput(String key, Object value) { protected boolean confirmContinue() { output("Do you want to continue?[yes|no]"); + @SuppressWarnings("resource") Scanner s = new Scanner(System.in); String input = s.nextLine(); return input.equals("yes"); @@ -286,6 +295,13 @@ public static class Factory { .put(ReadSegmentRangeCommand::descriptor, ReadSegmentRangeCommand::new) .put(GetSegmentAttributeCommand::descriptor, GetSegmentAttributeCommand::new) .put(UpdateSegmentAttributeCommand::descriptor, UpdateSegmentAttributeCommand::new) + .put(FlushToStorageCommand::descriptor, FlushToStorageCommand::new) + .put(GetTableSegmentInfoCommand::descriptor, GetTableSegmentInfoCommand::new) + .put(GetTableSegmentEntryCommand::descriptor, GetTableSegmentEntryCommand::new) + .put(PutTableSegmentEntryCommand::descriptor, PutTableSegmentEntryCommand::new) + .put(SetSerializerCommand::descriptor, SetSerializerCommand::new) + .put(ListTableSegmentKeysCommand::descriptor, ListTableSegmentKeysCommand::new) + .put(ModifyTableSegmentEntry::descriptor, ModifyTableSegmentEntry::new) .build()); /** @@ -383,19 +399,43 @@ private String objectToJSON(Object object) { @VisibleForTesting public SegmentHelper instantiateSegmentHelper(CuratorFramework zkClient) { + HostControllerStore hostStore = createHostControllerStore(zkClient); + ConnectionPool pool = createConnectionPool(); + return new SegmentHelper(pool, hostStore, pool.getInternalExecutor()); + } + + @VisibleForTesting + public AdminSegmentHelper instantiateAdminSegmentHelper(CuratorFramework zkClient) { + HostControllerStore hostStore = createHostControllerStore(zkClient); + ConnectionPool pool = createConnectionPool(); + return new AdminSegmentHelper(pool, hostStore, pool.getInternalExecutor()); + } + + private HostControllerStore createHostControllerStore(CuratorFramework zkClient) { HostMonitorConfig hostMonitorConfig = HostMonitorConfigImpl.builder() .hostMonitorEnabled(true) .hostMonitorMinRebalanceInterval(Config.CLUSTER_MIN_REBALANCE_INTERVAL) .containerCount(getServiceConfig().getContainerCount()) .build(); - HostControllerStore hostStore = HostStoreFactory.createStore(hostMonitorConfig, StoreClientFactory.createZKStoreClient(zkClient)); - ClientConfig clientConfig = ClientConfig.builder() - .controllerURI(URI.create(getCLIControllerConfig().getControllerGrpcURI())) - .validateHostName(getCLIControllerConfig().isAuthEnabled()) - .credentials(new DefaultCredentials(getCLIControllerConfig().getPassword(), getCLIControllerConfig().getUserName())) - .build(); - ConnectionPool pool = new ConnectionPoolImpl(clientConfig, new SocketConnectionFactoryImpl(clientConfig)); - return new SegmentHelper(pool, hostStore, pool.getInternalExecutor()); + return HostStoreFactory.createStore(hostMonitorConfig, StoreClientFactory.createZKStoreClient(zkClient)); + } + + private ConnectionPool createConnectionPool() { + ClientConfig.ClientConfigBuilder clientConfigBuilder = ClientConfig.builder() + .controllerURI(URI.create(getCLIControllerConfig().getControllerGrpcURI())); + + if 
(getCLIControllerConfig().isAuthEnabled()) { + clientConfigBuilder.credentials(new DefaultCredentials(getCLIControllerConfig().getPassword(), + getCLIControllerConfig().getUserName())); + } + if (getCLIControllerConfig().isTlsEnabled()) { + clientConfigBuilder.trustStore(getCLIControllerConfig().getTruststore()) + .validateHostName(false); + } + + ClientConfig clientConfig = clientConfigBuilder.build(); + + return new ConnectionPoolImpl(clientConfig, new SocketConnectionFactoryImpl(clientConfig)); } //endregion diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommandState.java b/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommandState.java index 1e13abd911c..74485355c58 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommandState.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/AdminCommandState.java @@ -15,12 +15,14 @@ */ package io.pravega.cli.admin; +import io.pravega.cli.admin.serializers.AbstractSerializer; import io.pravega.common.concurrent.ExecutorServiceHelpers; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import java.io.FileNotFoundException; import java.io.IOException; import java.util.concurrent.ScheduledExecutorService; import lombok.Getter; +import lombok.Setter; /** * Keeps state between commands. @@ -30,6 +32,12 @@ public class AdminCommandState implements AutoCloseable { private final ServiceBuilderConfig.Builder configBuilder; @Getter private final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(2, "password-tools"); + @Getter + @Setter + private AbstractSerializer keySerializer = null; + @Getter + @Setter + private AbstractSerializer valueSerializer = null; /** * Creates a new instance of the AdminCommandState class. diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerCommand.java index b4203eb8796..490e7cdbc0a 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerCommand.java @@ -18,7 +18,7 @@ import com.google.common.annotations.VisibleForTesting; import io.pravega.cli.admin.AdminCommand; import io.pravega.cli.admin.CommandArgs; -import io.pravega.cli.admin.utils.CLIControllerConfig; +import io.pravega.cli.admin.utils.CLIConfig; import io.pravega.controller.server.rest.generated.api.JacksonJsonProvider; import lombok.AccessLevel; import lombok.RequiredArgsConstructor; @@ -66,7 +66,7 @@ public abstract class ControllerCommand extends AdminCommand { * @return REST client. 
*/ protected Context createContext() { - CLIControllerConfig config = getCLIControllerConfig(); + CLIConfig config = getCLIControllerConfig(); ClientConfig clientConfig = new ClientConfig(); clientConfig.register(JacksonJsonProvider.class); clientConfig.property("sun.net.http.allowRestrictedHeaders", "true"); diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerDescribeStreamCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerDescribeStreamCommand.java index 715a4aab030..2860bdd87a1 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerDescribeStreamCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/controller/ControllerDescribeStreamCommand.java @@ -16,7 +16,7 @@ package io.pravega.cli.admin.controller; import io.pravega.cli.admin.CommandArgs; -import io.pravega.cli.admin.utils.CLIControllerConfig; +import io.pravega.cli.admin.utils.CLIConfig; import io.pravega.client.stream.StreamConfiguration; import io.pravega.controller.server.SegmentHelper; import io.pravega.controller.server.security.auth.GrpcAuthHelper; @@ -61,7 +61,7 @@ public void execute() { // (tables). We need to instantiate the correct type of metadata store object based on the cluster at hand. StreamMetadataStore store; SegmentHelper segmentHelper = null; - if (getCLIControllerConfig().getMetadataBackend().equals(CLIControllerConfig.MetadataBackends.ZOOKEEPER.name())) { + if (getCLIControllerConfig().getMetadataBackend().equals(CLIConfig.MetadataBackends.ZOOKEEPER.name())) { store = StreamStoreFactory.createZKStore(zkClient, executor); } else { segmentHelper = instantiateSegmentHelper(zkClient); diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DataRecoveryCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DataRecoveryCommand.java index 50fdb541717..6e3749f1ba2 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DataRecoveryCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DataRecoveryCommand.java @@ -49,7 +49,7 @@ public abstract class DataRecoveryCommand extends AdminCommand { StorageFactory createStorageFactory(ScheduledExecutorService executorService) { ServiceBuilder.ConfigSetupHelper configSetupHelper = new ServiceBuilder.ConfigSetupHelper(getCommandArgs().getState().getConfigBuilder().build()); StorageLoader loader = new StorageLoader(); - return loader.load(configSetupHelper, getServiceConfig().getStorageImplementation().toString(), + return loader.load(configSetupHelper, getServiceConfig().getStorageImplementation(), getServiceConfig().getStorageLayout(), executorService); } diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DurableLogRecoveryCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DurableLogRecoveryCommand.java index 99f062688d4..5ac97cc2133 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DurableLogRecoveryCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/DurableLogRecoveryCommand.java @@ -36,6 +36,7 @@ import io.pravega.segmentstore.server.reading.ReadIndexConfig; import io.pravega.segmentstore.server.tables.ContainerTableExtension; import io.pravega.segmentstore.server.tables.ContainerTableExtensionImpl; +import io.pravega.segmentstore.server.tables.TableExtensionConfig; import io.pravega.segmentstore.server.writer.StorageWriterFactory; import io.pravega.segmentstore.server.writer.WriterConfig; import 
io.pravega.segmentstore.storage.Storage; @@ -113,7 +114,7 @@ public void execute() throws Exception { outputInfo("Started ZK Client at %s.", getServiceConfig().getZkURL()); storage.initialize(CONTAINER_EPOCH); - outputInfo("Loaded %s Storage.", getServiceConfig().getStorageImplementation().toString()); + outputInfo("Loaded %s Storage.", getServiceConfig().getStorageImplementation()); outputInfo("Starting recovery..."); // create back up of metadata segments @@ -217,7 +218,7 @@ public SegmentContainerFactory.CreateExtensions getDefaultExtensions() { } private ContainerTableExtension createTableExtension(SegmentContainer c, ScheduledExecutorService e) { - return new ContainerTableExtensionImpl(c, this.cacheManager, e); + return new ContainerTableExtensionImpl(TableExtensionConfig.builder().build(), c, this.cacheManager, e); } @Override diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/StorageListSegmentsCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/StorageListSegmentsCommand.java index f60770863f2..dd262151393 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/StorageListSegmentsCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/dataRecovery/StorageListSegmentsCommand.java @@ -116,7 +116,7 @@ public void execute() throws Exception { // Get the storage using the config. storage.initialize(CONTAINER_EPOCH); - outputInfo("Loaded %s Storage.", getServiceConfig().getStorageImplementation().toString()); + outputInfo("Loaded %s Storage.", getServiceConfig().getStorageImplementation()); // Gets total number of segments listed. int segmentsCount = 0; diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ContainerCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ContainerCommand.java new file mode 100644 index 00000000000..35f69f676f9 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ContainerCommand.java @@ -0,0 +1,29 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore; + +import io.pravega.cli.admin.CommandArgs; + +/** + * Base class for all Segment Container related commands. + */ +public abstract class ContainerCommand extends SegmentStoreCommand { + static final String COMPONENT = "container"; + + public ContainerCommand(CommandArgs args) { + super(args); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/FlushToStorageCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/FlushToStorageCommand.java new file mode 100644 index 00000000000..7c6143b1ca0 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/FlushToStorageCommand.java @@ -0,0 +1,80 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore; + +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.utils.AdminSegmentHelper; +import io.pravega.shared.protocol.netty.PravegaNodeUri; +import io.pravega.shared.protocol.netty.WireCommands; +import lombok.Cleanup; +import org.apache.curator.framework.CuratorFramework; + +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.TimeUnit; + +import static java.lang.Integer.parseInt; + +/** + * Executes a FlushToStorage request against the chosen Segment Store instance. + */ +public class FlushToStorageCommand extends ContainerCommand { + + private static final int REQUEST_TIMEOUT_SECONDS = 30; + private static final String ALL_CONTAINERS = "all"; + + /** + * Creates new instance of the FlushToStorageCommand. + * + * @param args The arguments for the command. + */ + public FlushToStorageCommand(CommandArgs args) { + super(args); + } + + @Override + public void execute() throws Exception { + ensureArgCount(2); + + final String containerId = getArg(0); + final String segmentStoreHost = getArg(1); + @Cleanup + CuratorFramework zkClient = createZKClient(); + @Cleanup + AdminSegmentHelper adminSegmentHelper = instantiateAdminSegmentHelper(zkClient); + if (containerId.equalsIgnoreCase(ALL_CONTAINERS)) { + int containerCount = getServiceConfig().getContainerCount(); + for (int id = 0; id < containerCount; id++) { + flushContainerToStorage(adminSegmentHelper, id, segmentStoreHost); + } + } else { + flushContainerToStorage(adminSegmentHelper, parseInt(containerId), segmentStoreHost); + } + } + + private void flushContainerToStorage(AdminSegmentHelper adminSegmentHelper, int containerId, String segmentStoreHost) throws Exception { + CompletableFuture reply = adminSegmentHelper.flushToStorage(containerId, + new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), super.authHelper.retrieveMasterToken()); + reply.get(REQUEST_TIMEOUT_SECONDS, TimeUnit.SECONDS); + output("Flushed the Segment Container with containerId %d to Storage.", containerId); + } + + public static CommandDescriptor descriptor() { + return new CommandDescriptor(COMPONENT, "flush-to-storage", "Persist the given Segment Container into Storage.", + new ArgDescriptor("container-id", "The container Id of the Segment Container that needs to be persisted, " + + "if given as \"all\" all the containers will be persisted."), + new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request.")); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentAttributeCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentAttributeCommand.java index 8a9eed26eeb..6d17fed23a2 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentAttributeCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentAttributeCommand.java @@ -48,7 +48,7 @@ public void execute() { @Cleanup SegmentHelper segmentHelper = instantiateSegmentHelper(zkClient); CompletableFuture reply = 
segmentHelper.getSegmentAttribute(fullyQualifiedSegmentName, - attributeId, new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), ""); + attributeId, new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), super.authHelper.retrieveMasterToken()); output("GetSegmentAttribute: %s", reply.join().toString()); } diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentInfoCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentInfoCommand.java index 675f8238552..0a7cede00e0 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentInfoCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/GetSegmentInfoCommand.java @@ -49,7 +49,7 @@ public void execute() { @Cleanup SegmentHelper segmentHelper = instantiateSegmentHelper(zkClient); CompletableFuture reply = segmentHelper.getSegmentInfo(fullyQualifiedSegmentName, - new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), "", 0L); + new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), super.authHelper.retrieveMasterToken(), 0L); output("StreamSegmentInfo: %s", reply.join().toString()); } diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ReadSegmentRangeCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ReadSegmentRangeCommand.java index 481fae7cb31..7dec3416abf 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ReadSegmentRangeCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/ReadSegmentRangeCommand.java @@ -15,6 +15,7 @@ */ package io.pravega.cli.admin.segmentstore; +import com.google.common.base.Preconditions; import io.pravega.cli.admin.CommandArgs; import io.pravega.controller.server.SegmentHelper; import io.pravega.shared.protocol.netty.PravegaNodeUri; @@ -22,15 +23,18 @@ import lombok.Cleanup; import org.apache.curator.framework.CuratorFramework; -import java.nio.charset.StandardCharsets; +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.nio.file.FileAlreadyExistsException; import java.util.concurrent.CompletableFuture; -import java.util.concurrent.ExecutionException; import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; public class ReadSegmentRangeCommand extends SegmentStoreCommand { private static final int REQUEST_TIMEOUT_SECONDS = 10; + private static final int READ_WRITE_BUFFER_SIZE = 2 * 1024 * 1024; + private static final String PROGRESS_BAR = "|/-\\"; /** * Creates a new instance of the ReadSegmentRangeCommand. 
@@ -42,29 +46,94 @@ public ReadSegmentRangeCommand(CommandArgs args) {
     }
 
     @Override
-    public void execute() throws ExecutionException, InterruptedException, TimeoutException {
-        ensureArgCount(4);
+    public void execute() throws Exception {
+        ensureArgCount(5);
         final String fullyQualifiedSegmentName = getArg(0);
-        final int offset = getIntArg(1);
-        final int length = getIntArg(2);
+        final long offset = getLongArg(1);
+        final long length = getLongArg(2);
         final String segmentStoreHost = getArg(3);
+        final String fileName = getArg(4);
+
+        Preconditions.checkArgument(offset >= 0, "The provided offset cannot be negative.");
+        Preconditions.checkArgument(length >= 0, "The provided length cannot be negative.");
+
         @Cleanup
         CuratorFramework zkClient = createZKClient();
         @Cleanup
         SegmentHelper segmentHelper = instantiateSegmentHelper(zkClient);
-        CompletableFuture<WireCommands.SegmentRead> reply = segmentHelper.readSegment(fullyQualifiedSegmentName,
-                offset, length, new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), "");
-        WireCommands.SegmentRead segmentRead = reply.get(REQUEST_TIMEOUT_SECONDS, TimeUnit.SECONDS);
-        output("ReadSegment: %s", segmentRead.toString());
-        output("SegmentRead content: %s", segmentRead.getData().toString(StandardCharsets.UTF_8));
+        readAndWriteSegmentToFile(segmentHelper, segmentStoreHost, fullyQualifiedSegmentName, offset, length, fileName);
+        output("\nThe segment data has been successfully written into %s", fileName);
+    }
+
+    /**
+     * Creates the file (and parent directory if required) into which the segment data is written.
+     *
+     * @param fileName The name of the file to create.
+     * @return A {@link File} object representing the filename provided.
+     * @throws FileAlreadyExistsException if the file already exists, to avoid any accidental overwrites.
+     * @throws IOException if the file/directory creation fails.
+     */
+    private File createFileAndDirectory(String fileName) throws IOException {
+        File f = new File(fileName);
+        // If the file exists, throw FileAlreadyExistsException; an existing file should not be overwritten with new data.
+        if (f.exists()) {
+            throw new FileAlreadyExistsException("Cannot write segment data into a file that already exists.");
+        }
+        if (!f.getParentFile().exists()) {
+            f.getParentFile().mkdirs();
+        }
+        f.createNewFile();
+        return f;
+    }
+
+    /**
+     * Reads the contents of the segment starting from the given offset and writes into the provided file.
+     *
+     * @param segmentHelper A {@link SegmentHelper} instance to read the segment.
+     * @param segmentStoreHost Address of the segment-store to read from.
+     * @param fullyQualifiedSegmentName The name of the segment.
+     * @param offset The starting point from where the segment is to be read.
+     * @param length The number of bytes to read.
+     * @param fileName The name of the file to which the data will be written.
+     * @throws IOException if the file create/write fails.
+     * @throws Exception if the request fails.
+     */
+    private void readAndWriteSegmentToFile(SegmentHelper segmentHelper, String segmentStoreHost, String fullyQualifiedSegmentName,
+                                           long offset, long length, String fileName) throws IOException, Exception {
+        File file = createFileAndDirectory(fileName);
+
+        output("Downloading %d bytes from offset %d into %s.", length, offset, fileName);
+        long currentOffset = offset;
+        long bytesToRead = length;
+        int progress = 0;
+        @Cleanup
+        FileOutputStream fileOutputStream = new FileOutputStream(file, true);
+        while (bytesToRead > 0) {
+            long bufferLength = Math.min(READ_WRITE_BUFFER_SIZE, bytesToRead);
+            CompletableFuture<WireCommands.SegmentRead> reply = segmentHelper.readSegment(fullyQualifiedSegmentName,
+                    currentOffset, (int) bufferLength, new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), super.authHelper.retrieveMasterToken());
+            WireCommands.SegmentRead bufferRead = reply.get(REQUEST_TIMEOUT_SECONDS, TimeUnit.SECONDS);
+            int bytesRead = bufferRead.getData().readableBytes();
+            // Write the buffer into the file.
+            bufferRead.getData().readBytes(fileOutputStream, bytesRead);
+
+            currentOffset += bytesRead;
+            bytesToRead -= bytesRead;
+            showProgress(progress++, String.format("Written %d/%d bytes.", length - bytesToRead, length));
+        }
+    }
+
+    private void showProgress(int progress, String message) {
+        System.out.print("\r Processing " + PROGRESS_BAR.charAt(progress % PROGRESS_BAR.length()) + " : " + message);
+    }
 
     public static CommandDescriptor descriptor() {
-        return new CommandDescriptor(COMPONENT, "read-segment", "Read a range from a given Segment.",
+        return new CommandDescriptor(COMPONENT, "read-segment", "Read a range from a given Segment into a given file.",
                 new ArgDescriptor("qualified-segment-name", "Fully qualified name of the Segment to get info from (e.g., scope/stream/0.#epoch.0)."),
                 new ArgDescriptor("offset", "Starting point of the read request within the target Segment."),
                 new ArgDescriptor("length", "Number of bytes to read."),
-                new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request."));
+                new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request."),
+                new ArgDescriptor("file-name", "Name of the file to write the contents into."));
     }
 }
diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommand.java
index 953ea5bea7f..f5c203668cc 100644
--- a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommand.java
+++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommand.java
@@ -14,17 +14,22 @@
  * limitations under the License.
  */
 package io.pravega.cli.admin.segmentstore;
-
 import io.pravega.cli.admin.AdminCommand;
 import io.pravega.cli.admin.CommandArgs;
+import io.pravega.controller.server.security.auth.GrpcAuthHelper;
 
 /**
  * Base class for all the Segment Store related commands.
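+ * Each command builds a {@link GrpcAuthHelper} so it can attach a token to the admin requests
+ * it sends to the Segment Store.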
*/ public abstract class SegmentStoreCommand extends AdminCommand { static final String COMPONENT = "segmentstore"; + protected final GrpcAuthHelper authHelper; public SegmentStoreCommand(CommandArgs args) { super(args); + + authHelper = new GrpcAuthHelper(true, + "secret", + 600); } } diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/UpdateSegmentAttributeCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/UpdateSegmentAttributeCommand.java index 2c4919308fb..b5088f401e4 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/UpdateSegmentAttributeCommand.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/UpdateSegmentAttributeCommand.java @@ -50,7 +50,7 @@ public void execute() { @Cleanup SegmentHelper segmentHelper = instantiateSegmentHelper(zkClient); CompletableFuture reply = segmentHelper.updateSegmentAttribute(fullyQualifiedSegmentName, - attributeId, newValue, existingValue, new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), ""); + attributeId, newValue, existingValue, new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), super.authHelper.retrieveMasterToken()); output("UpdateSegmentAttribute: %s", reply.join().toString()); } diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentEntryCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentEntryCommand.java new file mode 100644 index 00000000000..ee814d3ff09 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentEntryCommand.java @@ -0,0 +1,68 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore.tableSegment; + +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.utils.AdminSegmentHelper; +import lombok.Cleanup; +import org.apache.curator.framework.CuratorFramework; + +import java.util.Map; + +import static io.pravega.cli.admin.serializers.AbstractSerializer.parseStringData; + +public class GetTableSegmentEntryCommand extends TableSegmentCommand { + + /** + * Creates a new instance of the GetTableSegmentEntryCommand. + * + * @param args The arguments for the command. 
+ */ + public GetTableSegmentEntryCommand(CommandArgs args) { + super(args); + } + + @Override + public void execute() { + ensureArgCount(3); + ensureSerializersExist(); + + final String fullyQualifiedTableSegmentName = getArg(0); + final String key = getArg(1); + final String segmentStoreHost = getArg(2); + @Cleanup + CuratorFramework zkClient = createZKClient(); + @Cleanup + AdminSegmentHelper adminSegmentHelper = instantiateAdminSegmentHelper(zkClient); + String value = getTableEntry(fullyQualifiedTableSegmentName, key, segmentStoreHost, adminSegmentHelper); + output("For the given key: %s", key); + userFriendlyOutput(value); + } + + public static CommandDescriptor descriptor() { + return new CommandDescriptor(COMPONENT, "get", "Get the entry for the given key in the table. " + + "Use the command \"table-segment set-serializer <serializer-name>\" to use the appropriate serializer before using this command.", + new ArgDescriptor("qualified-table-segment-name", "Fully qualified name of the table segment to get info from."), + new ArgDescriptor("key", "The key to be queried."), + new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request to.")); + } + + private void userFriendlyOutput(String data) { + Map<String, String> dataMap = parseStringData(data); + output("%s metadata info: ", getCommandArgs().getState().getValueSerializer().getName()); + dataMap.forEach((k, v) -> output("%s = %s;", k, v)); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentInfoCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentInfoCommand.java new file mode 100644 index 00000000000..6cc365dc673 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/GetTableSegmentInfoCommand.java @@ -0,0 +1,79 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.pravega.cli.admin.segmentstore.tableSegment; + +import com.google.common.collect.ImmutableMap; +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.utils.AdminSegmentHelper; +import io.pravega.shared.protocol.netty.PravegaNodeUri; +import io.pravega.shared.protocol.netty.WireCommands; +import lombok.Cleanup; +import org.apache.curator.framework.CuratorFramework; + +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.function.Function; + +public class GetTableSegmentInfoCommand extends TableSegmentCommand { + + public static final String SEGMENT_NAME = "segmentName"; + public static final String START_OFFSET = "startOffset"; + public static final String LENGTH = "length"; + public static final String ENTRY_COUNT = "entryCount"; + public static final String KEY_LENGTH = "keyLength"; + + private static final Map<String, Function<WireCommands.TableSegmentInfo, Object>> SEGMENT_INFO_FIELD_MAP = + ImmutableMap.<String, Function<WireCommands.TableSegmentInfo, Object>>builder() + .put(SEGMENT_NAME, WireCommands.TableSegmentInfo::getSegmentName) + .put(START_OFFSET, WireCommands.TableSegmentInfo::getStartOffset) + .put(LENGTH, WireCommands.TableSegmentInfo::getLength) + .put(ENTRY_COUNT, WireCommands.TableSegmentInfo::getEntryCount) + .put(KEY_LENGTH, WireCommands.TableSegmentInfo::getKeyLength) + .build(); + + /** + * Creates a new instance of the GetTableSegmentInfoCommand. + * + * @param args The arguments for the command. + */ + public GetTableSegmentInfoCommand(CommandArgs args) { + super(args); + } + + @Override + public void execute() { + ensureArgCount(2); + + final String fullyQualifiedTableSegmentName = getArg(0); + final String segmentStoreHost = getArg(1); + @Cleanup + CuratorFramework zkClient = createZKClient(); + @Cleanup + AdminSegmentHelper adminSegmentHelper = instantiateAdminSegmentHelper(zkClient); + CompletableFuture<WireCommands.TableSegmentInfo> reply = adminSegmentHelper.getTableSegmentInfo(fullyQualifiedTableSegmentName, + new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), super.authHelper.retrieveMasterToken()); + + WireCommands.TableSegmentInfo tableSegmentInfo = reply.join(); + output("TableSegmentInfo for %s: ", fullyQualifiedTableSegmentName); + SEGMENT_INFO_FIELD_MAP.forEach((name, f) -> output("%s = %s", name, f.apply(tableSegmentInfo))); + } + + public static CommandDescriptor descriptor() { + return new CommandDescriptor(COMPONENT, "get-info", "Get the details of a given table segment.", + new ArgDescriptor("qualified-table-segment-name", "Fully qualified name of the table segment to get info from."), + new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request to.")); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ListTableSegmentKeysCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ListTableSegmentKeysCommand.java new file mode 100644 index 00000000000..369ac88e3f5 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ListTableSegmentKeysCommand.java @@ -0,0 +1,72 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore.tableSegment; + +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.utils.AdminSegmentHelper; +import io.pravega.client.tables.impl.HashTableIteratorItem; +import io.pravega.client.tables.impl.TableSegmentKey; +import io.pravega.shared.protocol.netty.PravegaNodeUri; +import lombok.Cleanup; +import org.apache.curator.framework.CuratorFramework; + +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.stream.Collectors; + +public class ListTableSegmentKeysCommand extends TableSegmentCommand { + + /** + * Creates a new instance of ListTableSegmentKeysCommand. + * + * @param args The arguments for the command. + */ + public ListTableSegmentKeysCommand(CommandArgs args) { + super(args); + } + + @Override + public void execute() { + ensureArgCount(3); + ensureSerializersExist(); + + final String fullyQualifiedTableSegmentName = getArg(0); + final int keyCount = getIntArg(1); + final String segmentStoreHost = getArg(2); + + @Cleanup + CuratorFramework zkClient = createZKClient(); + @Cleanup + AdminSegmentHelper adminSegmentHelper = instantiateAdminSegmentHelper(zkClient); + CompletableFuture<HashTableIteratorItem<TableSegmentKey>> reply = adminSegmentHelper.readTableKeys(fullyQualifiedTableSegmentName, + new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), keyCount, + HashTableIteratorItem.State.EMPTY, super.authHelper.retrieveMasterToken(), 0L); + + List<String> keys = reply.join().getItems() + .stream() + .map(tableSegmentKey -> getCommandArgs().getState().getKeySerializer().deserialize(getByteBuffer(tableSegmentKey.getKey()))) + .collect(Collectors.toList()); + output("List of at most %s keys in %s: ", keyCount, fullyQualifiedTableSegmentName); + keys.forEach(k -> output("- %s", k)); + } + + public static CommandDescriptor descriptor() { + return new CommandDescriptor(COMPONENT, "list-keys", "List at most the given number of keys from the table segment.", + new ArgDescriptor("qualified-table-segment-name", "Fully qualified name of the table segment."), + new ArgDescriptor("key-count", "The upper limit for the number of keys to be listed."), + new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request to.")); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ModifyTableSegmentEntry.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ModifyTableSegmentEntry.java new file mode 100644 index 00000000000..48ca3cf44cb --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/ModifyTableSegmentEntry.java @@ -0,0 +1,93 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.pravega.cli.admin.segmentstore.tableSegment; + +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.utils.AdminSegmentHelper; +import lombok.Cleanup; +import org.apache.curator.framework.CuratorFramework; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; + +import static io.pravega.cli.admin.serializers.AbstractSerializer.appendField; +import static io.pravega.cli.admin.serializers.AbstractSerializer.parseStringData; + +public class ModifyTableSegmentEntry extends TableSegmentCommand { + + /** + * Creates a new instance of the ModifyTableSegmentEntry command. + * + * @param args The arguments for the command. + */ + public ModifyTableSegmentEntry(CommandArgs args) { + super(args); + } + + @Override + public void execute() { + ensureArgCount(4); + ensureSerializersExist(); + + final String fullyQualifiedTableSegmentName = getArg(0); + final String segmentStoreHost = getArg(1); + final String key = getArg(2); + Map<String, String> newFieldMap = parseStringData(getArg(3)); + + @Cleanup + CuratorFramework zkClient = createZKClient(); + @Cleanup + AdminSegmentHelper adminSegmentHelper = instantiateAdminSegmentHelper(zkClient); + String currentValue = getTableEntry(fullyQualifiedTableSegmentName, key, segmentStoreHost, adminSegmentHelper); + + List<String> changedFields = new ArrayList<>(); + Map<String, String> currentValueFieldMap = parseStringData(currentValue); + // Update the fields that currently exist in the entry. + // If a given field name does not exist, the user is notified of it. + newFieldMap.forEach((f, v) -> { + if (currentValueFieldMap.containsKey(f)) { + currentValueFieldMap.put(f, v); + changedFields.add(f); + } else { + output("%s field does not exist.", f); + } + }); + // If no field of the current entry was changed, return. + if (changedFields.isEmpty()) { + output("No fields provided to modify."); + return; + } + + StringBuilder updatedValueBuilder = new StringBuilder(); + currentValueFieldMap.forEach((f, v) -> appendField(updatedValueBuilder, f, v)); + String updatedValue = updatedValueBuilder.toString(); + long version = updateTableEntry(fullyQualifiedTableSegmentName, key, updatedValue, segmentStoreHost, adminSegmentHelper); + + output("Successfully modified the following fields in the value for key %s in table %s with version %s: %s", + key, fullyQualifiedTableSegmentName, version, String.join(",", changedFields)); + } + + public static CommandDescriptor descriptor() { + return new CommandDescriptor(COMPONENT, "modify", "Modify the entry for the given key in the table using the given field info. "
+ + "Use the command \"table-segment set-serializer \" to use the appropriate serializer before using this command.", + new ArgDescriptor("qualified-table-segment-name", "Fully qualified name of the table segment to get info from."), + new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request."), + new ArgDescriptor("key", "The key whose entry is to be modified."), + new ArgDescriptor("value", "The fields of the entry and the values to which they need to be modified, " + + "provided as \"key1=value1;key2=value2;key3=value3;...\".")); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/PutTableSegmentEntryCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/PutTableSegmentEntryCommand.java new file mode 100644 index 00000000000..67a5b34ba3a --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/PutTableSegmentEntryCommand.java @@ -0,0 +1,59 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore.tableSegment; + +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.utils.AdminSegmentHelper; +import lombok.Cleanup; +import org.apache.curator.framework.CuratorFramework; + +public class PutTableSegmentEntryCommand extends TableSegmentCommand { + + /** + * Creates a new instance of the PutTableSegmentEntryCommand. + * + * @param args The arguments for the command. + */ + public PutTableSegmentEntryCommand(CommandArgs args) { + super(args); + } + + @Override + public void execute() { + ensureArgCount(4); + ensureSerializersExist(); + + final String fullyQualifiedTableSegmentName = getArg(0); + final String segmentStoreHost = getArg(1); + final String key = getArg(2); + final String value = getArg(3); + @Cleanup + CuratorFramework zkClient = createZKClient(); + @Cleanup + AdminSegmentHelper adminSegmentHelper = instantiateAdminSegmentHelper(zkClient); + long version = updateTableEntry(fullyQualifiedTableSegmentName, key, value, segmentStoreHost, adminSegmentHelper); + output("Successfully updated the key %s in table %s with version %s", key, fullyQualifiedTableSegmentName, version); + } + + public static CommandDescriptor descriptor() { + return new CommandDescriptor(COMPONENT, "put", "Update the given key in the table with the provided value." 
+ + "Use the command \"table-segment set-serializer \" to use the appropriate serializer before using this command.", + new ArgDescriptor("qualified-table-segment-name", "Fully qualified name of the table segment to get info from."), + new ArgDescriptor("segmentstore-endpoint", "Address of the Segment Store we want to send this request."), + new ArgDescriptor("key", "The key whose value is to be updated."), + new ArgDescriptor("value", "The new value, provided as \"key1=value1;key2=value2;key3=value3;...\".")); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/SetSerializerCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/SetSerializerCommand.java new file mode 100644 index 00000000000..96b1c3f1c78 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/SetSerializerCommand.java @@ -0,0 +1,64 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore.tableSegment; + +import com.google.common.collect.ImmutableMap; +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.serializers.AbstractSerializer; +import io.pravega.cli.admin.serializers.ContainerKeySerializer; +import io.pravega.cli.admin.serializers.ContainerMetadataSerializer; +import io.pravega.cli.admin.serializers.SltsKeySerializer; +import io.pravega.cli.admin.serializers.SltsMetadataSerializer; +import org.apache.commons.lang3.tuple.ImmutablePair; + +import java.util.Map; + +public class SetSerializerCommand extends TableSegmentCommand { + private static final Map> SERIALIZERS = + ImmutableMap.>builder() + .put("slts", ImmutablePair.of(new SltsKeySerializer(), new SltsMetadataSerializer())) + .put("container_meta", ImmutablePair.of(new ContainerKeySerializer(), new ContainerMetadataSerializer())) + .build(); + + /** + * Creates a new instance of the SetSerializerCommand. + * + * @param args The arguments for the command. + */ + public SetSerializerCommand(CommandArgs args) { + super(args); + } + + @Override + public void execute() { + ensureArgCount(1); + + String identifier = getArg(0).toLowerCase(); + if (!SERIALIZERS.containsKey(identifier)) { + output("Serializers named %s do not exist.", identifier); + } else { + getCommandArgs().getState().setKeySerializer(SERIALIZERS.get(identifier).getLeft()); + getCommandArgs().getState().setValueSerializer(SERIALIZERS.get(identifier).getRight()); + output("Serializers changed to %s successfully.", identifier); + } + } + + public static CommandDescriptor descriptor() { + return new CommandDescriptor(COMPONENT, "set-serializer", "Set the serializer for keys and values that are obtained from, and updated to table segments.", + new ArgDescriptor("serializer-name", "The required serializer. 
" + + "Serializer-names for built-in serializers are " + String.join(",", SERIALIZERS.keySet()))); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/TableSegmentCommand.java b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/TableSegmentCommand.java new file mode 100644 index 00000000000..6e922366c4d --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/segmentstore/tableSegment/TableSegmentCommand.java @@ -0,0 +1,110 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore.tableSegment; + +import com.google.common.base.Preconditions; +import io.netty.buffer.ByteBuf; +import io.pravega.cli.admin.CommandArgs; +import io.pravega.cli.admin.segmentstore.SegmentStoreCommand; +import io.pravega.cli.admin.utils.AdminSegmentHelper; +import io.pravega.client.tables.impl.TableSegmentEntry; +import io.pravega.client.tables.impl.TableSegmentKey; +import io.pravega.client.tables.impl.TableSegmentKeyVersion; +import io.pravega.common.util.ByteArraySegment; +import io.pravega.shared.protocol.netty.PravegaNodeUri; + +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.CompletableFuture; + +public abstract class TableSegmentCommand extends SegmentStoreCommand { + static final String COMPONENT = "table-segment"; + + public TableSegmentCommand(CommandArgs args) { + super(args); + } + + /** + * Method to check if the serializers are set. + */ + void ensureSerializersExist() { + Preconditions.checkArgument(getCommandArgs().getState().getKeySerializer() != null && getCommandArgs().getState().getValueSerializer() != null, + "The serializers have not been set. Use the command \"table-segment set-serializer \" and try again."); + } + + /** + * Method to get the entry corresponding to the provided key in the table segment. + * + * @param tableSegmentName The name of the table segment. + * @param key The key. + * @param segmentStoreHost The address of the segment store instance. + * @param adminSegmentHelper An instance of {@link AdminSegmentHelper}. + * @return A string, obtained through deserialization, containing the contents of the queried table segment entry. 
+ */ + String getTableEntry(String tableSegmentName, + String key, + String segmentStoreHost, + AdminSegmentHelper adminSegmentHelper) { + ByteArraySegment serializedKey = new ByteArraySegment(getCommandArgs().getState().getKeySerializer().serialize(key)); + + CompletableFuture<List<TableSegmentEntry>> reply = adminSegmentHelper.readTable(tableSegmentName, + new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), + Collections.singletonList(TableSegmentKey.unversioned(serializedKey.getCopy())), + super.authHelper.retrieveMasterToken(), 0L); + + ByteBuffer serializedValue = getByteBuffer(reply.join().get(0).getValue()); + return getCommandArgs().getState().getValueSerializer().deserialize(serializedValue); + } + + /** + * Method to update the entry corresponding to the provided key in the table segment. + * + * @param tableSegmentName The name of the table segment. + * @param key The key. + * @param value The entry to be updated in the table segment. + * @param segmentStoreHost The address of the segment store instance. + * @param adminSegmentHelper An instance of {@link AdminSegmentHelper}. + * @return A long indicating the version obtained from updating the provided key in the table segment. + */ + long updateTableEntry(String tableSegmentName, + String key, String value, + String segmentStoreHost, + AdminSegmentHelper adminSegmentHelper) { + ByteArraySegment serializedKey = new ByteArraySegment(getCommandArgs().getState().getKeySerializer().serialize(key)); + ByteArraySegment serializedValue = new ByteArraySegment(getCommandArgs().getState().getValueSerializer().serialize(value)); + TableSegmentEntry updatedEntry = TableSegmentEntry.unversioned(serializedKey.getCopy(), serializedValue.getCopy()); + + CompletableFuture<List<TableSegmentKeyVersion>> reply = adminSegmentHelper.updateTableEntries(tableSegmentName, + new PravegaNodeUri(segmentStoreHost, getServiceConfig().getAdminGatewayPort()), + Collections.singletonList(updatedEntry), super.authHelper.retrieveMasterToken(), 0L); + return reply.join().get(0).getSegmentVersion(); + } + + /** + * Method to convert a {@link ByteBuf} to a {@link ByteBuffer}. + * + * @param byteBuf The {@link ByteBuf} instance. + * @return A {@link ByteBuffer} containing the data present in the provided {@link ByteBuf}. + */ + ByteBuffer getByteBuffer(ByteBuf byteBuf) { + final byte[] bytes = new byte[byteBuf.readableBytes()]; + final int readerIndex = byteBuf.readerIndex(); + byteBuf.getBytes(readerIndex, bytes); + return ByteBuffer.wrap(bytes); + } +} + diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/serializers/AbstractSerializer.java b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/AbstractSerializer.java new file mode 100644 index 00000000000..2b184352391 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/AbstractSerializer.java @@ -0,0 +1,81 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.pravega.cli.admin.serializers; + +import com.google.common.base.Preconditions; +import io.pravega.client.stream.Serializer; + +import java.util.Arrays; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; + +/** + * Base class for serializers. + */ +public abstract class AbstractSerializer implements Serializer<String> { + + /** + * Method to return the name of the metadata being serialized. + * + * @return A string representing the metadata that is dealt with by this serializer. + */ + public abstract String getName(); + + /** + * Append the given field name-value in a user-friendly format to the StringBuilder. + * + * @param builder The StringBuilder to append to. + * @param name The name of the field. + * @param value The value of the field. + */ + public static void appendField(StringBuilder builder, String name, String value) { + builder.append(name).append("=").append(value).append(";"); + } + + /** + * Parse the given string into a map of keys and values. + * + * @param stringData The string to parse. + * @return A map containing all the key-value pairs parsed from the string. + */ + public static Map<String, String> parseStringData(String stringData) { + Map<String, String> parsedData = new LinkedHashMap<>(); + Arrays.stream(stringData.split(";")).forEachOrdered(kv -> { + List<String> pair = Arrays.asList(kv.split("=")); + Preconditions.checkArgument(pair.size() == 2, String.format("Incomplete key-value pair provided in %s", kv)); + if (!parsedData.containsKey(pair.get(0))) { + parsedData.put(pair.get(0), pair.get(1)); + } + }); + return parsedData; + } + + /** + * Checks if the given key exists in the map. If it exists, it returns the value corresponding to the key and removes + * the key from the map. + * + * @param data The map to check in. + * @param key The key. + * @return The value of the key if it exists. + * @throws IllegalArgumentException if the key does not exist. + */ + public static String getAndRemoveIfExists(Map<String, String> data, String key) { + String value = data.remove(key); + Preconditions.checkArgument(value != null, String.format("%s not provided.", key)); + return value; + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerKeySerializer.java b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerKeySerializer.java new file mode 100644 index 00000000000..0fffc17d310 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerKeySerializer.java @@ -0,0 +1,38 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.pravega.cli.admin.serializers; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; + +public class ContainerKeySerializer extends AbstractSerializer { + @Override + public String getName() { + return "container"; + } + + @Override + public ByteBuffer serialize(String value) { + return ByteBuffer.wrap(value.getBytes(StandardCharsets.UTF_8)); + } + + @Override + public String deserialize(ByteBuffer serializedValue) { + return StandardCharsets.UTF_8.decode(serializedValue).toString(); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializer.java b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializer.java new file mode 100644 index 00000000000..04b8d0f637e --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializer.java @@ -0,0 +1,117 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.serializers; + +import com.google.common.collect.ImmutableMap; +import io.pravega.client.stream.Serializer; +import io.pravega.common.util.ByteArraySegment; +import io.pravega.segmentstore.contracts.AttributeId; +import io.pravega.segmentstore.contracts.SegmentProperties; +import io.pravega.segmentstore.contracts.StreamSegmentInformation; +import io.pravega.segmentstore.server.containers.MetadataStore.SegmentInfo; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.HashMap; +import java.util.Map; +import java.util.UUID; +import java.util.function.Function; + +/** + * An implementation of {@link Serializer} that converts container metadata to and from a user-friendly string representation. + */ +public class ContainerMetadataSerializer extends AbstractSerializer { + + public static final String SEGMENT_ID = "segmentId"; + public static final String SEGMENT_PROPERTIES_NAME = "name"; + public static final String SEGMENT_PROPERTIES_SEALED = "sealed"; + public static final String SEGMENT_PROPERTIES_START_OFFSET = "startOffset"; + public static final String SEGMENT_PROPERTIES_LENGTH = "length"; + + private static final Map<String, Function<SegmentProperties, Object>> SEGMENT_PROPERTIES_FIELD_MAP = + ImmutableMap.<String, Function<SegmentProperties, Object>>builder() + .put(SEGMENT_PROPERTIES_NAME, SegmentProperties::getName) + .put(SEGMENT_PROPERTIES_SEALED, SegmentProperties::isSealed) + .put(SEGMENT_PROPERTIES_START_OFFSET, SegmentProperties::getStartOffset) + .put(SEGMENT_PROPERTIES_LENGTH, SegmentProperties::getLength) + .build(); + + private static final SegmentInfo.SegmentInfoSerializer SERIALIZER = new SegmentInfo.SegmentInfoSerializer(); + + @Override + public String getName() { + return "container"; + } + + @Override + public ByteBuffer serialize(String value) { + ByteBuffer buf; + try { + // Convert string to map with fields and values.
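+ // Example input (hypothetical values): "segmentId=1;name=scope/stream/0.#epoch.0;sealed=false;startOffset=0;length=1024;"; any remaining "UUID=long" pairs are treated as attributes.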
+ Map<String, String> data = parseStringData(value); + long segmentId = Long.parseLong(getAndRemoveIfExists(data, SEGMENT_ID)); + // Use the map to build SegmentProperties. The fields/keys are removed after being queried to ensure attributes + // can be handled without interference. If a queried field/key does not exist, an IllegalArgumentException is thrown. + StreamSegmentInformation properties = StreamSegmentInformation.builder() + .name(getAndRemoveIfExists(data, SEGMENT_PROPERTIES_NAME)) + .sealed(Boolean.parseBoolean(getAndRemoveIfExists(data, SEGMENT_PROPERTIES_SEALED))) + .startOffset(Long.parseLong(getAndRemoveIfExists(data, SEGMENT_PROPERTIES_START_OFFSET))) + .length(Long.parseLong(getAndRemoveIfExists(data, SEGMENT_PROPERTIES_LENGTH))) + .attributes(getAttributes(data)) + .build(); + + SegmentInfo segment = SegmentInfo.builder() + .segmentId(segmentId) + .properties(properties) + .build(); + buf = SERIALIZER.serialize(segment).asByteBuffer(); + } catch (IOException e) { + throw new RuntimeException(e); + } + return buf; + } + + @Override + public String deserialize(ByteBuffer serializedValue) { + StringBuilder stringValueBuilder; + try { + SegmentInfo data = SERIALIZER.deserialize(new ByteArraySegment(serializedValue).getReader()); + stringValueBuilder = new StringBuilder(); + + appendField(stringValueBuilder, SEGMENT_ID, String.valueOf(data.getSegmentId())); + SegmentProperties sp = data.getProperties(); + SEGMENT_PROPERTIES_FIELD_MAP.forEach((name, f) -> appendField(stringValueBuilder, name, String.valueOf(f.apply(sp)))); + + sp.getAttributes().forEach((attributeId, attributeValue) -> appendField(stringValueBuilder, attributeId.toString(), attributeValue.toString())); + } catch (IOException e) { + throw new RuntimeException(e); + } + return stringValueBuilder.toString(); + } + + /** + * Reads the remaining data map for attribute Ids and their values. + * Note: The data map should only contain attribute information. + * + * @param segmentMap The map containing segment attributes in String form. + * @return A map of segment attributes as attributeId-value pairs. + */ + private Map<AttributeId, Long> getAttributes(Map<String, String> segmentMap) { + Map<AttributeId, Long> attributes = new HashMap<>(); + segmentMap.forEach((k, v) -> attributes.put(AttributeId.fromUUID(UUID.fromString(k)), Long.parseLong(v))); + return attributes; + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsKeySerializer.java b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsKeySerializer.java new file mode 100644 index 00000000000..a6cfb1fae4d --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsKeySerializer.java @@ -0,0 +1,38 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.pravega.cli.admin.serializers; + +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; + +public class SltsKeySerializer extends AbstractSerializer { + @Override + public String getName() { + return "SLTS"; + } + + @Override + public ByteBuffer serialize(String value) { + return ByteBuffer.wrap(value.getBytes(StandardCharsets.UTF_8)); + } + + @Override + public String deserialize(ByteBuffer serializedValue) { + return StandardCharsets.UTF_8.decode(serializedValue).toString(); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsMetadataSerializer.java b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsMetadataSerializer.java new file mode 100644 index 00000000000..8d85348dc82 --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/serializers/SltsMetadataSerializer.java @@ -0,0 +1,215 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.serializers; + +import com.google.common.collect.ImmutableMap; +import io.pravega.client.stream.Serializer; +import io.pravega.common.util.ByteArraySegment; +import io.pravega.segmentstore.storage.metadata.BaseMetadataStore.TransactionData; +import io.pravega.segmentstore.storage.metadata.ChunkMetadata; +import io.pravega.segmentstore.storage.metadata.ReadIndexBlockMetadata; +import io.pravega.segmentstore.storage.metadata.SegmentMetadata; +import io.pravega.segmentstore.storage.metadata.StorageMetadata; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.Map; +import java.util.function.Function; + +/** + * An implementation of {@link Serializer} that converts SLTS metadata to and from a user-friendly string representation.
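+ * The string form is a ";"-separated list of "field=value" pairs and must include "key", "version", and a "metadataType" of ChunkMetadata, SegmentMetadata, or ReadIndexBlockMetadata.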
+ */ +public class SltsMetadataSerializer extends AbstractSerializer { + + static final String TRANSACTION_DATA_KEY = "key"; + static final String TRANSACTION_DATA_VERSION = "version"; + + static final String CHUNK_METADATA_NAME = "name"; + static final String CHUNK_METADATA_LENGTH = "length"; + static final String CHUNK_METADATA_NEXT_CHUNK = "nextChunk"; + static final String CHUNK_METADATA_STATUS = "status"; + + static final String SEGMENT_METADATA_NAME = "name"; + static final String SEGMENT_METADATA_LENGTH = "length"; + static final String SEGMENT_METADATA_CHUNK_COUNT = "chunkCount"; + static final String SEGMENT_METADATA_START_OFFSET = "startOffset"; + static final String SEGMENT_METADATA_STATUS = "status"; + static final String SEGMENT_METADATA_MAX_ROLLING_LENGTH = "maxRollingLength"; + static final String SEGMENT_METADATA_FIRST_CHUNK = "firstChunk"; + static final String SEGMENT_METADATA_LAST_CHUNK = "lastChunk"; + static final String SEGMENT_METADATA_LAST_MODIFIED = "lastModified"; + static final String SEGMENT_METADATA_FIRST_CHUNK_START_OFFSET = "firstChunkStartOffset"; + static final String SEGMENT_METADATA_LAST_CHUNK_START_OFFSET = "lastChunkStartOffset"; + static final String SEGMENT_METADATA_OWNER_EPOCH = "ownerEpoch"; + + static final String READ_INDEX_BLOCK_METADATA_NAME = "name"; + static final String READ_INDEX_BLOCK_METADATA_CHUNK_NAME = "chunkName"; + static final String READ_INDEX_BLOCK_METADATA_START_OFFSET = "startOffset"; + static final String READ_INDEX_BLOCK_METADATA_STATUS = "status"; + + static final String METADATA_TYPE = "metadataType"; + static final String CHUNK_METADATA = "ChunkMetadata"; + static final String SEGMENT_METADATA = "SegmentMetadata"; + static final String READ_INDEX_BLOCK_METADATA = "ReadIndexBlockMetadata"; + + private static final TransactionData.TransactionDataSerializer SERIALIZER = new TransactionData.TransactionDataSerializer(); + + private static final Map<String, Function<ChunkMetadata, Object>> CHUNK_METADATA_FIELD_MAP = + ImmutableMap.<String, Function<ChunkMetadata, Object>>builder() + .put(CHUNK_METADATA_NAME, ChunkMetadata::getKey) + .put(CHUNK_METADATA_LENGTH, ChunkMetadata::getLength) + .put(CHUNK_METADATA_NEXT_CHUNK, ChunkMetadata::getNextChunk) + .put(CHUNK_METADATA_STATUS, ChunkMetadata::getStatus) + .build(); + + private static final Map<String, Function<SegmentMetadata, Object>> SEGMENT_METADATA_FIELD_MAP = + ImmutableMap.<String, Function<SegmentMetadata, Object>>builder() + .put(SEGMENT_METADATA_NAME, SegmentMetadata::getKey) + .put(SEGMENT_METADATA_LENGTH, SegmentMetadata::getLength) + .put(SEGMENT_METADATA_CHUNK_COUNT, SegmentMetadata::getChunkCount) + .put(SEGMENT_METADATA_START_OFFSET, SegmentMetadata::getStartOffset) + .put(SEGMENT_METADATA_STATUS, SegmentMetadata::getStatus) + .put(SEGMENT_METADATA_MAX_ROLLING_LENGTH, SegmentMetadata::getMaxRollinglength) + .put(SEGMENT_METADATA_FIRST_CHUNK, SegmentMetadata::getFirstChunk) + .put(SEGMENT_METADATA_LAST_CHUNK, SegmentMetadata::getLastChunk) + .put(SEGMENT_METADATA_LAST_MODIFIED, SegmentMetadata::getLastModified) + .put(SEGMENT_METADATA_FIRST_CHUNK_START_OFFSET, SegmentMetadata::getFirstChunkStartOffset) + .put(SEGMENT_METADATA_LAST_CHUNK_START_OFFSET, SegmentMetadata::getLastChunkStartOffset) + .put(SEGMENT_METADATA_OWNER_EPOCH, SegmentMetadata::getOwnerEpoch) + .build(); + + private static final Map<String, Function<ReadIndexBlockMetadata, Object>> READ_INDEX_BLOCK_METADATA_FIELD_MAP = + ImmutableMap.<String, Function<ReadIndexBlockMetadata, Object>>builder() + .put(READ_INDEX_BLOCK_METADATA_NAME, ReadIndexBlockMetadata::getKey) + .put(READ_INDEX_BLOCK_METADATA_CHUNK_NAME, ReadIndexBlockMetadata::getChunkName) + .put(READ_INDEX_BLOCK_METADATA_START_OFFSET, ReadIndexBlockMetadata::getStartOffset) + 
.put(READ_INDEX_BLOCK_METADATA_STATUS, ReadIndexBlockMetadata::getStatus) + .build(); + + @Override + public String getName() { + return "SLTS"; + } + + @Override + public ByteBuffer serialize(String value) { + ByteBuffer buf; + try { + // Convert string to map with fields and values. + Map<String, String> data = parseStringData(value); + // Use the map to build TransactionData. If a queried field/key does not exist, an IllegalArgumentException is thrown. + // The value is handled by reading the metadataType field, which identifies the specific StorageMetadata implementation. + // The matching instance of StorageMetadata is then generated. + TransactionData transactionData = TransactionData.builder() + .key(getAndRemoveIfExists(data, TRANSACTION_DATA_KEY)) + .version(Long.parseLong(getAndRemoveIfExists(data, TRANSACTION_DATA_VERSION))) + .value(generateStorageMetadataValue(data)) + .build(); + buf = SERIALIZER.serialize(transactionData).asByteBuffer(); + } catch (IOException e) { + throw new RuntimeException(e); + } + return buf; + } + + @Override + public String deserialize(ByteBuffer serializedValue) { + StringBuilder stringValueBuilder; + try { + TransactionData data = SERIALIZER.deserialize(new ByteArraySegment(serializedValue).getReader()); + stringValueBuilder = new StringBuilder(); + + appendField(stringValueBuilder, TRANSACTION_DATA_KEY, data.getKey()); + appendField(stringValueBuilder, TRANSACTION_DATA_VERSION, String.valueOf(data.getVersion())); + handleStorageMetadataValue(stringValueBuilder, data.getValue()); + } catch (IOException e) { + throw new RuntimeException(e); + } + return stringValueBuilder.toString(); + } + + /** + * Convert the given {@link StorageMetadata} into a string of fields and values and append it to the given StringBuilder. + * + * @param builder The given StringBuilder. + * @param metadata The StorageMetadata instance. + */ + private void handleStorageMetadataValue(StringBuilder builder, StorageMetadata metadata) { + if (metadata instanceof ChunkMetadata) { + appendField(builder, METADATA_TYPE, CHUNK_METADATA); + ChunkMetadata chunkMetadata = (ChunkMetadata) metadata; + CHUNK_METADATA_FIELD_MAP.forEach((name, f) -> appendField(builder, name, String.valueOf(f.apply(chunkMetadata)))); + + } else if (metadata instanceof SegmentMetadata) { + appendField(builder, METADATA_TYPE, SEGMENT_METADATA); + SegmentMetadata segmentMetadata = (SegmentMetadata) metadata; + SEGMENT_METADATA_FIELD_MAP.forEach((name, f) -> appendField(builder, name, String.valueOf(f.apply(segmentMetadata)))); + + } else if (metadata instanceof ReadIndexBlockMetadata) { + appendField(builder, METADATA_TYPE, READ_INDEX_BLOCK_METADATA); + ReadIndexBlockMetadata readIndexBlockMetadata = (ReadIndexBlockMetadata) metadata; + READ_INDEX_BLOCK_METADATA_FIELD_MAP.forEach((name, f) -> appendField(builder, name, String.valueOf(f.apply(readIndexBlockMetadata)))); + } + } + + /** + * Convert the data map into the required {@link StorageMetadata} instance. + * + * @param storageMetadataMap The map containing StorageMetadata in String form. + * @return The required StorageMetadata instance. + * @throws IllegalArgumentException if any of the queried fields do not correspond to any valid StorageMetadata implementation.
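+ * Example input map for a chunk (hypothetical values): {metadataType=ChunkMetadata, name=chunk-0, length=100, nextChunk=chunk-1, status=0}.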
+ */ + private StorageMetadata generateStorageMetadataValue(Map<String, String> storageMetadataMap) { + String metadataType = getAndRemoveIfExists(storageMetadataMap, METADATA_TYPE); + switch (metadataType) { + case CHUNK_METADATA: + return ChunkMetadata.builder() + .name(getAndRemoveIfExists(storageMetadataMap, CHUNK_METADATA_NAME)) + .length(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, CHUNK_METADATA_LENGTH))) + .nextChunk(getAndRemoveIfExists(storageMetadataMap, CHUNK_METADATA_NEXT_CHUNK)) + .status(Integer.parseInt(getAndRemoveIfExists(storageMetadataMap, CHUNK_METADATA_STATUS))) + .build(); + + case SEGMENT_METADATA: + return SegmentMetadata.builder() + .name(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_NAME)) + .length(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_LENGTH))) + .chunkCount(Integer.parseInt(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_CHUNK_COUNT))) + .startOffset(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_START_OFFSET))) + .status(Integer.parseInt(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_STATUS))) + .maxRollinglength(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_MAX_ROLLING_LENGTH))) + .firstChunk(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_FIRST_CHUNK)) + .lastChunk(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_LAST_CHUNK)) + .lastModified(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_LAST_MODIFIED))) + .firstChunkStartOffset(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_FIRST_CHUNK_START_OFFSET))) + .lastChunkStartOffset(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_LAST_CHUNK_START_OFFSET))) + .ownerEpoch(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, SEGMENT_METADATA_OWNER_EPOCH))) + .build(); + + case READ_INDEX_BLOCK_METADATA: + return ReadIndexBlockMetadata.builder() + .name(getAndRemoveIfExists(storageMetadataMap, READ_INDEX_BLOCK_METADATA_NAME)) + .chunkName(getAndRemoveIfExists(storageMetadataMap, READ_INDEX_BLOCK_METADATA_CHUNK_NAME)) + .startOffset(Long.parseLong(getAndRemoveIfExists(storageMetadataMap, READ_INDEX_BLOCK_METADATA_START_OFFSET))) + .status(Integer.parseInt(getAndRemoveIfExists(storageMetadataMap, READ_INDEX_BLOCK_METADATA_STATUS))) + .build(); + + default: + throw new IllegalArgumentException("The metadataType value provided does not correspond to any valid SLTS metadata. " + + "The following are valid values: " + CHUNK_METADATA + ", " + SEGMENT_METADATA + ", " + READ_INDEX_BLOCK_METADATA); + } + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/utils/AdminSegmentHelper.java b/cli/admin/src/main/java/io/pravega/cli/admin/utils/AdminSegmentHelper.java new file mode 100644 index 00000000000..cd1c2bf1c0b --- /dev/null +++ b/cli/admin/src/main/java/io/pravega/cli/admin/utils/AdminSegmentHelper.java @@ -0,0 +1,125 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.utils; + +import com.google.common.collect.ImmutableMap; +import com.google.common.collect.ImmutableSet; +import io.pravega.client.connection.impl.ConnectionPool; +import io.pravega.client.connection.impl.RawClient; +import io.pravega.controller.server.SegmentHelper; +import io.pravega.controller.store.host.HostControllerStore; +import io.pravega.shared.protocol.netty.ConnectionFailedException; +import io.pravega.shared.protocol.netty.PravegaNodeUri; +import io.pravega.shared.protocol.netty.Reply; +import io.pravega.shared.protocol.netty.Request; +import io.pravega.shared.protocol.netty.WireCommandType; +import io.pravega.shared.protocol.netty.WireCommands; +import lombok.SneakyThrows; + +import java.util.Map; +import java.util.Set; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ScheduledExecutorService; + +/** + * Used by the Admin CLI for interacting with the admin-gateway on the Segment Store. + */ +public class AdminSegmentHelper extends SegmentHelper implements AutoCloseable { + + private static final Map<Class<? extends Request>, Set<Class<? extends Reply>>> EXPECTED_SUCCESS_REPLIES = + ImmutableMap.<Class<? extends Request>, Set<Class<? extends Reply>>>builder() + .put(WireCommands.FlushToStorage.class, ImmutableSet.of(WireCommands.StorageFlushed.class)) + .put(WireCommands.GetTableSegmentInfo.class, ImmutableSet.of(WireCommands.TableSegmentInfo.class)) + .build(); + + private static final Map<Class<? extends Request>, Set<Class<? extends Reply>>> EXPECTED_FAILING_REPLIES = + ImmutableMap.<Class<? extends Request>, Set<Class<? extends Reply>>>builder() + .put(WireCommands.GetTableSegmentInfo.class, ImmutableSet.of(WireCommands.NoSuchSegment.class)) + .build(); + + public AdminSegmentHelper(final ConnectionPool connectionPool, HostControllerStore hostStore, + ScheduledExecutorService executorService) { + super(connectionPool, hostStore, executorService); + } + + /** + * This method sends a WireCommand to flush the container corresponding to the given containerId to storage. + * + * @param containerId The Id of the container that needs to be persisted to storage. + * @param uri The uri of the Segment Store instance. + * @param delegationToken The token to be presented to the Segment Store. + * @return A CompletableFuture that will complete normally when the container has been flushed to storage. + * If the operation failed, the future will be failed with the causing exception. If the exception can be + * retried, the future will be failed with a retryable exception. + */ + public CompletableFuture<WireCommands.StorageFlushed> flushToStorage(int containerId, PravegaNodeUri uri, String delegationToken) { + final WireCommandType type = WireCommandType.FLUSH_TO_STORAGE; + RawClient connection = new RawClient(uri, connectionPool); + final long requestId = connection.getFlow().asLong(); + + WireCommands.FlushToStorage request = new WireCommands.FlushToStorage(containerId, delegationToken, requestId); + return sendRequest(connection, requestId, request) + .thenApply(r -> { + handleReply(requestId, r, connection, null, WireCommands.FlushToStorage.class, type); + assert r instanceof WireCommands.StorageFlushed; + return (WireCommands.StorageFlushed) r; + }); + } + + /** + * This method sends a WireCommand to get table segment info for the given table segment name. + * + * @param qualifiedName StreamSegmentName + * @param uri The uri of the Segment Store instance. + * @param delegationToken The token to be presented to the Segment Store.
+ * @return A CompletableFuture that will return the table segment info as a WireCommand. + */ + public CompletableFuture<WireCommands.TableSegmentInfo> getTableSegmentInfo(String qualifiedName, PravegaNodeUri uri, String delegationToken) { + final WireCommandType type = WireCommandType.GET_TABLE_SEGMENT_INFO; + RawClient connection = new RawClient(uri, connectionPool); + final long requestId = connection.getFlow().asLong(); + + WireCommands.GetTableSegmentInfo request = new WireCommands.GetTableSegmentInfo(requestId, + qualifiedName, delegationToken); + + return sendRequest(connection, requestId, request) + .thenApply(r -> { + handleReply(requestId, r, connection, qualifiedName, WireCommands.GetTableSegmentInfo.class, type); + assert r instanceof WireCommands.TableSegmentInfo; + return (WireCommands.TableSegmentInfo) r; + }); + } + + /** + * This method handles the reply returned from RawClient.sendRequest. + * + * @param callerRequestId request id issued by the client + * @param reply actual reply received + * @param client RawClient for sending request + * @param qualifiedStreamSegmentName StreamSegmentName + * @param requestType the request type whose reply needs to be handled + * @param type the WireCommandType of this request + */ + @SneakyThrows(ConnectionFailedException.class) + private void handleReply(long callerRequestId, + Reply reply, + RawClient client, + String qualifiedStreamSegmentName, + Class<? extends Request> requestType, + WireCommandType type) { + handleExpectedReplies(callerRequestId, reply, client, qualifiedStreamSegmentName, requestType, type, EXPECTED_SUCCESS_REPLIES, EXPECTED_FAILING_REPLIES); + } +} diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/utils/CLIControllerConfig.java b/cli/admin/src/main/java/io/pravega/cli/admin/utils/CLIConfig.java similarity index 67% rename from cli/admin/src/main/java/io/pravega/cli/admin/utils/CLIControllerConfig.java rename to cli/admin/src/main/java/io/pravega/cli/admin/utils/CLIConfig.java index 8e82d787b15..239a87eb6ef 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/utils/CLIControllerConfig.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/utils/CLIConfig.java @@ -21,10 +21,13 @@ import io.pravega.common.util.TypedProperties; import lombok.Getter; +import java.time.Duration; +import java.time.temporal.ChronoUnit; + /** - * Configuration for CLI client, specially related to the Controller service in Pravega. + * Configuration for CLI client.
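+ * Covers the Controller REST/GRPC endpoints, authentication and TLS settings, and the metadata backend used by CLI commands.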
*/ -public final class CLIControllerConfig { +public final class CLIConfig { public enum MetadataBackends { SEGMENTSTORE, ZOOKEEPER @@ -32,29 +35,30 @@ public enum MetadataBackends { private static final Property<String> CONTROLLER_REST_URI = Property.named("controller.connect.rest.uri", "localhost:9091"); private static final Property<String> CONTROLLER_GRPC_URI = Property.named("controller.connect.grpc.uri", "localhost:9090"); - private static final Property<Boolean> AUTH_ENABLED = Property.named("controller.connect.channel.auth", false); - private static final Property<String> CONTROLLER_USER_NAME = Property.named("controller.connect.credentials.username", ""); - private static final Property<String> CONTROLLER_PASSWORD = Property.named("controller.connect.credentials.pwd", ""); - private static final Property<Boolean> TLS_ENABLED = Property.named("controller.connect.channel.tls", false); - private static final Property<String> TRUSTSTORE_JKS = Property.named("controller.connect.trustStore.location", ""); + private static final Property<Boolean> AUTH_ENABLED = Property.named("channel.auth", true); + private static final Property<String> USER_NAME = Property.named("credentials.username", ""); + private static final Property<String> PASSWORD = Property.named("credentials.pwd", ""); + private static final Property<Boolean> TLS_ENABLED = Property.named("channel.tls", false); + private static final Property<String> TRUSTSTORE_JKS = Property.named("trustStore.location", ""); + private static final Property<Integer> TRUSTSTORE_ACCESS_TOKEN_TTL_SECONDS = Property.named("trustStore.access.token.ttl.seconds", 10); private static final Property<String> METADATA_BACKEND = Property.named("store.metadata.backend", MetadataBackends.SEGMENTSTORE.name()); private static final String COMPONENT_CODE = "cli"; /** - * The Controller REST URI. Recall to set "http" or "https" depending on the TLS configuration of the Controller. + * The Controller REST URI. Remember to set "http" or "https" depending on the TLS configuration of the CLI. */ @Getter private final String controllerRestURI; /** - * The Controller GRPC URI. Recall to set "tcp" or "tls" depending on the TLS configuration of the Controller. + * The Controller GRPC URI. Remember to set "tcp" or "tls" depending on the TLS configuration of the CLI. */ @Getter private final String controllerGrpcURI; /** - * Defines whether or not to use authentication in Controller requests. + * Defines whether or not to use authentication in CLI requests. */ @Getter private final boolean authEnabled; @@ -66,37 +70,44 @@ public enum MetadataBackends { private final boolean tlsEnabled; /** - * User name if authentication is configured in the Controller. + * User name if authentication is configured in the CLI. */ @Getter private final String userName; /** - * Password if authentication is configured in the Controller. + * Password if authentication is configured in the CLI. */ @Getter private final String password; /** - * Truststore if TLS is configured in the Controller. + * Truststore if TLS is configured in the CLI. */ @Getter private final String truststore; + /** + * Truststore access token TTL if TLS is configured in the CLI. + */ + @Getter + private final Duration accessTokenTtl; + /** * Controller metadata backend. At the moment, its values can only be "segmentstore" or "zookeeper".
*/ @Getter private final String metadataBackend; - private CLIControllerConfig(TypedProperties properties) throws ConfigurationException { + private CLIConfig(TypedProperties properties) throws ConfigurationException { this.tlsEnabled = properties.getBoolean(TLS_ENABLED); this.controllerRestURI = (this.isTlsEnabled() ? "https://" : "http://") + properties.get(CONTROLLER_REST_URI); this.controllerGrpcURI = (this.isTlsEnabled() ? "tls://" : "tcp://") + properties.get(CONTROLLER_GRPC_URI); this.authEnabled = properties.getBoolean(AUTH_ENABLED); - this.userName = properties.get(CONTROLLER_USER_NAME); - this.password = properties.get(CONTROLLER_PASSWORD); + this.userName = properties.get(USER_NAME); + this.password = properties.get(PASSWORD); this.truststore = properties.get(TRUSTSTORE_JKS); + this.accessTokenTtl = properties.getDuration(TRUSTSTORE_ACCESS_TOKEN_TTL_SECONDS, ChronoUnit.SECONDS); this.metadataBackend = properties.get(METADATA_BACKEND); } @@ -105,7 +116,7 @@ private CLIControllerConfig(TypedProperties properties) throws ConfigurationExce * * @return A new Builder for this class. */ - public static ConfigBuilder<CLIControllerConfig> builder() { - return new ConfigBuilder<>(COMPONENT_CODE, CLIControllerConfig::new); + public static ConfigBuilder<CLIConfig> builder() { + return new ConfigBuilder<>(COMPONENT_CODE, CLIConfig::new); } } diff --git a/cli/admin/src/main/java/io/pravega/cli/admin/utils/ZKHelper.java b/cli/admin/src/main/java/io/pravega/cli/admin/utils/ZKHelper.java index e1395870554..4824d2f4cf7 100644 --- a/cli/admin/src/main/java/io/pravega/cli/admin/utils/ZKHelper.java +++ b/cli/admin/src/main/java/io/pravega/cli/admin/utils/ZKHelper.java @@ -190,6 +190,7 @@ private void startZKClient() throws ZKConnectionFailedException { /** * Close the ZKHelper. */ + @Override public void close() { zkClient.close(); } diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/bookkeeper/BookkeeperCommandsTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/bookkeeper/BookkeeperCommandsTest.java index 7e868b6b2c8..d66af6fb154 100644 --- a/cli/admin/src/test/java/io/pravega/cli/admin/bookkeeper/BookkeeperCommandsTest.java +++ b/cli/admin/src/test/java/io/pravega/cli/admin/bookkeeper/BookkeeperCommandsTest.java @@ -65,6 +65,7 @@ public BookkeeperCommandsTest() { super(1); } + @Override @Before public void setUp() throws Exception { baseConf.setLedgerManagerFactoryClassName("org.apache.bookkeeper.meta.FlatLedgerManagerFactory"); @@ -100,6 +101,7 @@ public void setUp() throws Exception { System.setOut(new PrintStream(outContent)); } + @Override @After public void tearDown() { System.setOut(originalOut); @@ -191,7 +193,7 @@ public void testBookKeeperRecoveryCommand() throws Exception { command.unwrapDataCorruptionException(new DataCorruptionException("test")); command.unwrapDataCorruptionException(new DataCorruptionException("test", "test")); command.unwrapDataCorruptionException(new DataCorruptionException("test", Arrays.asList("test", "test"))); - command.unwrapDataCorruptionException(new DataCorruptionException("test", null)); + command.unwrapDataCorruptionException(new DataCorruptionException("test", (DataCorruptionException) null)); // Check that exception is thrown if ZK is not available.
this.zkUtil.stopCluster(); AssertExtensions.assertThrows(DataLogNotAvailableException.class, () -> TestUtils.executeCommand("container recover 0", STATE.get())); diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/controller/ControllerCommandsTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/controller/ControllerCommandsTest.java index a6e4dc1e39a..dfcf0787ee9 100644 --- a/cli/admin/src/test/java/io/pravega/cli/admin/controller/ControllerCommandsTest.java +++ b/cli/admin/src/test/java/io/pravega/cli/admin/controller/ControllerCommandsTest.java @@ -19,7 +19,7 @@ import io.pravega.cli.admin.AdminCommandState; import io.pravega.cli.admin.CommandArgs; import io.pravega.cli.admin.Parser; -import io.pravega.cli.admin.utils.CLIControllerConfig; +import io.pravega.cli.admin.utils.CLIConfig; import io.pravega.cli.admin.utils.TestUtils; import io.pravega.client.ClientConfig; import io.pravega.client.admin.StreamManager; @@ -82,7 +82,8 @@ public class ControllerCommandsTest extends SecureControllerCommandsTest { static { CLUSTER.start(); STATE = createAdminCLIConfig(getCLIControllerRestUri(CLUSTER.controllerRestUri()), - getCLIControllerUri(CLUSTER.controllerUri()), CLUSTER.zookeeperConnectString(), CLUSTER.getContainerCount(), false, false); + getCLIControllerUri(CLUSTER.controllerUri()), CLUSTER.zookeeperConnectString(), CLUSTER.getContainerCount(), + false, false, CLUSTER.getAccessTokenTtl()); String scope = "testScope"; String testStream = "testStream"; ClientConfig clientConfig = prepareValidClientConfig(CLUSTER.controllerUri(), false, false); @@ -105,6 +106,7 @@ public class ControllerCommandsTest extends SecureControllerCommandsTest { assertTrue("Failed to create the stream ", isStreamCreated); } + @Override protected AdminCommandState cliConfig() { return STATE; } @@ -117,6 +119,7 @@ public static void shutDown() { STATE.close(); } + @Override @Test @SneakyThrows public void testDescribeReaderGroupCommand() { @@ -154,11 +157,11 @@ public void testDescribeStreamCommand() { // Try the Zookeeper backend, which is expected to fail and be handled by the command. 
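(For context on the ControllerCommandsTest static block above, which now threads the cluster's access token TTL into the test helper: a self-contained sketch of the widened call follows. The endpoints, container count, and TTL are placeholder values; only the parameter order and types mirror the new createAdminCLIConfig signature.)

    import io.pravega.cli.admin.AdminCommandState;
    import io.pravega.cli.admin.utils.TestUtils;

    import java.time.Duration;

    // Hypothetical wiring sketch against a local cluster.
    public final class AdminCliWiringSketch {
        public static AdminCommandState wire() {
            return TestUtils.createAdminCLIConfig(
                    "localhost:9091",         // controller REST URI (scheme is added by CLIConfig)
                    "localhost:9090",         // controller gRPC URI
                    "localhost:2181",         // ZooKeeper connect string
                    4,                        // container count
                    false,                    // authEnabled
                    false,                    // tlsEnabled
                    Duration.ofSeconds(600)); // accessTokenTtl, the new parameter
        }
    }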
Properties properties = new Properties(); - properties.setProperty("cli.store.metadata.backend", CLIControllerConfig.MetadataBackends.ZOOKEEPER.name()); + properties.setProperty("cli.store.metadata.backend", CLIConfig.MetadataBackends.ZOOKEEPER.name()); cliConfig().getConfigBuilder().include(properties); commandArgs = new CommandArgs(Arrays.asList(scope, testStream), cliConfig()); new ControllerDescribeStreamCommand(commandArgs).execute(); - properties.setProperty("cli.store.metadata.backend", CLIControllerConfig.MetadataBackends.SEGMENTSTORE.name()); + properties.setProperty("cli.store.metadata.backend", CLIConfig.MetadataBackends.SEGMENTSTORE.name()); cliConfig().getConfigBuilder().include(properties); } diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/controller/SecureControllerCommandsTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/controller/SecureControllerCommandsTest.java index 64ace14e072..7332976b4ab 100644 --- a/cli/admin/src/test/java/io/pravega/cli/admin/controller/SecureControllerCommandsTest.java +++ b/cli/admin/src/test/java/io/pravega/cli/admin/controller/SecureControllerCommandsTest.java @@ -51,7 +51,8 @@ public class SecureControllerCommandsTest { static { CLUSTER.start(); STATE = createAdminCLIConfig(getCLIControllerRestUri(CLUSTER.controllerRestUri()), - getCLIControllerUri(CLUSTER.controllerUri()), CLUSTER.zookeeperConnectString(), CLUSTER.getContainerCount(), true, true); + getCLIControllerUri(CLUSTER.controllerUri()), CLUSTER.zookeeperConnectString(), CLUSTER.getContainerCount(), + true, true, CLUSTER.getAccessTokenTtl()); String scope = "testScope"; String testStream = "testStream"; ClientConfig clientConfig = prepareValidClientConfig(CLUSTER.controllerUri(), true, true); diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/dataRecovery/DataRecoveryTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/dataRecovery/DataRecoveryTest.java index 68301564489..20b4fd05277 100644 --- a/cli/admin/src/test/java/io/pravega/cli/admin/dataRecovery/DataRecoveryTest.java +++ b/cli/admin/src/test/java/io/pravega/cli/admin/dataRecovery/DataRecoveryTest.java @@ -375,7 +375,7 @@ private static class ControllerRunner implements AutoCloseable { private final Controller controller; private final URI controllerURI = URI.create("tcp://" + serviceHost + ":" + controllerPort); - ControllerRunner(int bkPort, int servicePort, int containerCount) throws InterruptedException { + ControllerRunner(int bkPort, int servicePort, int containerCount) { this.controllerWrapper = new ControllerWrapper("localhost:" + bkPort, false, controllerPort, serviceHost, servicePort, containerCount); this.controllerWrapper.awaitRunning(); @@ -435,7 +435,7 @@ private static class PravegaRunner implements AutoCloseable { } private void restartControllerAndSegmentStore(StorageFactory storageFactory, BookKeeperLogFactory dataLogFactory) - throws DurableDataLogException, InterruptedException { + throws DurableDataLogException { this.segmentStoreRunner = new SegmentStoreRunner(storageFactory, dataLogFactory, this.containerCount); log.info("bk port to be connected = {}", this.bookKeeperRunner.bkPort); this.controllerRunner = new ControllerRunner(this.bookKeeperRunner.bkPort, this.segmentStoreRunner.servicePort, containerCount); diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/AbstractSegmentStoreCommandsTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/AbstractSegmentStoreCommandsTest.java new file mode 100644 index 00000000000..ed0b34bc043 --- /dev/null 
+++ b/cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/AbstractSegmentStoreCommandsTest.java @@ -0,0 +1,399 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.segmentstore; + +import io.pravega.cli.admin.AdminCommandState; +import io.pravega.cli.admin.serializers.ContainerKeySerializer; +import io.pravega.cli.admin.serializers.ContainerMetadataSerializer; +import io.pravega.cli.admin.serializers.SltsKeySerializer; +import io.pravega.cli.admin.serializers.SltsMetadataSerializer; +import io.pravega.cli.admin.utils.TestUtils; +import io.pravega.client.ClientConfig; +import io.pravega.client.EventStreamClientFactory; +import io.pravega.client.stream.EventStreamWriter; +import io.pravega.client.stream.EventWriterConfig; +import io.pravega.client.stream.StreamConfiguration; +import io.pravega.client.stream.impl.JavaSerializer; +import io.pravega.controller.server.WireCommandFailedException; +import io.pravega.segmentstore.contracts.Attributes; +import io.pravega.shared.security.auth.DefaultCredentials; +import io.pravega.test.common.AssertExtensions; +import io.pravega.test.common.SecurityConfigDefaults; +import io.pravega.test.integration.utils.SetupUtils; +import lombok.Cleanup; +import org.junit.Assert; +import org.junit.Rule; +import org.junit.Test; +import org.junit.Before; +import org.junit.After; +import org.junit.rules.Timeout; + +import java.io.File; +import java.nio.file.FileAlreadyExistsException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.Paths; +import java.util.Arrays; +import java.util.Properties; +import java.util.UUID; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicReference; + +import static io.pravega.cli.admin.segmentstore.tableSegment.GetTableSegmentInfoCommand.ENTRY_COUNT; +import static io.pravega.cli.admin.segmentstore.tableSegment.GetTableSegmentInfoCommand.KEY_LENGTH; +import static io.pravega.cli.admin.segmentstore.tableSegment.GetTableSegmentInfoCommand.LENGTH; +import static io.pravega.cli.admin.segmentstore.tableSegment.GetTableSegmentInfoCommand.SEGMENT_NAME; +import static io.pravega.cli.admin.segmentstore.tableSegment.GetTableSegmentInfoCommand.START_OFFSET; +import static io.pravega.cli.admin.serializers.AbstractSerializer.appendField; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_ID; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_LENGTH; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_NAME; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_SEALED; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_START_OFFSET; +import static io.pravega.shared.NameUtils.getMetadataSegmentName; +import static io.pravega.test.integration.utils.TestUtils.pathToConfig; + + +/** + * Tests for 
the segment store cli commands. + */ +public abstract class AbstractSegmentStoreCommandsTest { + // Setup utility. + protected static final SetupUtils SETUP_UTILS = new SetupUtils(); + protected static final AtomicReference STATE = new AtomicReference<>(); + protected static final int CONTAINER_COUNT = 1; + + @Rule + public final Timeout globalTimeout = new Timeout(60, TimeUnit.SECONDS); + + private ClientConfig clientConfig; + + public void setup(boolean enableAuth, boolean enableTls) throws Exception { + ClientConfig.ClientConfigBuilder clientConfigBuilder = ClientConfig.builder().controllerURI(SETUP_UTILS.getControllerUri()); + + STATE.set(new AdminCommandState()); + SETUP_UTILS.startAllServices(enableAuth, enableTls); + Properties pravegaProperties = new Properties(); + pravegaProperties.setProperty("cli.controller.rest.uri", SETUP_UTILS.getControllerRestUri().toString()); + pravegaProperties.setProperty("cli.controller.grpc.uri", SETUP_UTILS.getControllerUri().toString()); + pravegaProperties.setProperty("pravegaservice.zk.connect.uri", SETUP_UTILS.getZkTestServer().getConnectString()); + pravegaProperties.setProperty("pravegaservice.container.count", String.valueOf(CONTAINER_COUNT)); + pravegaProperties.setProperty("pravegaservice.admin.gateway.port", String.valueOf(SETUP_UTILS.getAdminPort())); + + if (enableAuth) { + clientConfigBuilder = clientConfigBuilder.credentials(new DefaultCredentials(SecurityConfigDefaults.AUTH_ADMIN_PASSWORD, + SecurityConfigDefaults.AUTH_ADMIN_USERNAME)); + pravegaProperties.setProperty("cli.channel.auth", Boolean.toString(true)); + pravegaProperties.setProperty("cli.credentials.username", SecurityConfigDefaults.AUTH_ADMIN_USERNAME); + pravegaProperties.setProperty("cli.credentials.pwd", SecurityConfigDefaults.AUTH_ADMIN_PASSWORD); + } + if (enableTls) { + clientConfigBuilder = clientConfigBuilder.trustStore(pathToConfig() + SecurityConfigDefaults.TLS_CA_CERT_FILE_NAME) + .validateHostName(false); + pravegaProperties.setProperty("cli.channel.tls", Boolean.toString(true)); + pravegaProperties.setProperty("cli.trustStore.location", "../../config/" + SecurityConfigDefaults.TLS_CA_CERT_FILE_NAME); + pravegaProperties.setProperty("cli.trustStore.access.token.ttl.seconds", Integer.toString(300)); + } + STATE.get().getConfigBuilder().include(pravegaProperties); + + clientConfig = clientConfigBuilder.build(); + } + + @Test + public void testGetSegmentInfoCommand() throws Exception { + TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "getinfo", StreamConfiguration.builder().build()); + String commandResult = TestUtils.executeCommand("segmentstore get-segment-info segmentstore/getinfo/0.#epoch.0 localhost", STATE.get()); + + Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_abortStream/0.#epoch.0 localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_requeststream/0.#epoch.0 localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGcommitStreamReaders/0.#epoch.0 localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGscaleGroup/0.#epoch.0 localhost", STATE.get()); + 
Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGkvtStreamReaders/0.#epoch.0 localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGabortStreamReaders/0.#epoch.0 localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/containers/metadata_0 localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); + AssertExtensions.assertThrows(WireCommandFailedException.class, () -> TestUtils.executeCommand("segmentstore get-segment-info not/exists/0 localhost", STATE.get())); + Assert.assertNotNull(GetSegmentInfoCommand.descriptor()); + } + + @Test + public void testReadSegmentRangeCommand() throws Exception { + // Create a temporary directory. + Path tempDirPath = Files.createTempDirectory("readSegmentDir"); + String filename = Paths.get(tempDirPath.toString(), "tmp" + System.currentTimeMillis(), "readSegmentTest.txt").toString(); + + TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "readsegment", StreamConfiguration.builder().build()); + + @Cleanup + EventStreamClientFactory factory = EventStreamClientFactory.withScope("segmentstore", clientConfig); + @Cleanup + EventStreamWriter writer = factory.createEventWriter("readsegment", new JavaSerializer<>(), EventWriterConfig.builder().build()); + writer.writeEvents("rk", Arrays.asList("a", "2", "3")); + writer.flush(); + + // Check to make sure that the file exists and data is written into it. + String commandResult = TestUtils.executeCommand("segmentstore read-segment segmentstore/readsegment/0.#epoch.0 0 8 localhost " + filename, STATE.get()); + Assert.assertTrue(commandResult.contains("The segment data has been successfully written into")); + File file = new File(filename); + Assert.assertTrue(file.exists()); + Assert.assertNotEquals(0, file.length()); + + AssertExtensions.assertThrows(FileAlreadyExistsException.class, () -> + TestUtils.executeCommand("segmentstore read-segment _system/_RGcommitStreamReaders/0.#epoch.0 0 8 localhost " + filename, STATE.get())); + // Delete file created during the test. + Files.deleteIfExists(Paths.get(filename)); + + AssertExtensions.assertThrows(WireCommandFailedException.class, () -> + TestUtils.executeCommand("segmentstore read-segment not/exists/0 0 1 localhost " + filename, STATE.get())); + Assert.assertNotNull(ReadSegmentRangeCommand.descriptor()); + // Delete file created during the test. + Files.deleteIfExists(Paths.get(filename)); + + // Delete the temporary directory. 
+ tempDirPath.toFile().deleteOnExit(); + } + + @Test + public void testGetSegmentAttributeCommand() throws Exception { + TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "getattribute", StreamConfiguration.builder().build()); + String commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute segmentstore/getattribute/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); + commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute _system/_abortStream/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); + AssertExtensions.assertThrows(WireCommandFailedException.class, () -> TestUtils.executeCommand("segmentstore get-segment-attribute not/exists/0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get())); + Assert.assertNotNull(GetSegmentAttributeCommand.descriptor()); + } + + @Test + public void testUpdateSegmentAttributeCommand() throws Exception { + TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "updateattribute", StreamConfiguration.builder().build()); + // First, get the existing value of that attribute for the segment. + String commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute segmentstore/updateattribute/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); + long oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); + Assert.assertNotEquals(0L, oldValue); + // Update the Segment to a value of 0. + commandResult = TestUtils.executeCommand("segmentstore update-segment-attribute segmentstore/updateattribute/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " 0 " + oldValue + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("UpdateSegmentAttribute:")); + // Check that the value has been updated. + commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute segmentstore/updateattribute/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); + oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); + Assert.assertEquals(0L, oldValue); + + // Do the same for an internal segment. + commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute _system/_abortStream/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); + oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); + Assert.assertNotEquals(0L, oldValue); + // Update the Segment to a value of 0. + commandResult = TestUtils.executeCommand("segmentstore update-segment-attribute _system/_abortStream/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " 0 " + oldValue + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("UpdateSegmentAttribute:")); + // Check that the value has been updated. 
+ commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute _system/_abortStream/0.#epoch.0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); + oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); + Assert.assertEquals(0L, oldValue); + + AssertExtensions.assertThrows(WireCommandFailedException.class, () -> TestUtils.executeCommand("segmentstore update-segment-attribute not/exists/0 " + + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " 0 0 localhost", STATE.get())); + Assert.assertNotNull(UpdateSegmentAttributeCommand.descriptor()); + } + + @Test + public void testFlushToStorageCommandAllCase() throws Exception { + String commandResult = TestUtils.executeCommand("container flush-to-storage all localhost", STATE.get()); + for (int id = 1; id < CONTAINER_COUNT; id++) { + Assert.assertTrue(commandResult.contains("Flushed the Segment Container with containerId " + id + " to Storage.")); + } + Assert.assertNotNull(FlushToStorageCommand.descriptor()); + } + + @Test + public void testFlushToStorageCommand() throws Exception { + String commandResult = TestUtils.executeCommand("container flush-to-storage 0 localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("Flushed the Segment Container with containerId 0 to Storage.")); + Assert.assertNotNull(FlushToStorageCommand.descriptor()); + } + + @Test + public void testSetSerializerCommand() throws Exception { + Assert.assertNull(STATE.get().getKeySerializer()); + Assert.assertNull(STATE.get().getValueSerializer()); + + String commandResult = TestUtils.executeCommand("table-segment set-serializer dummy", STATE.get()); + Assert.assertTrue(commandResult.contains("Serializers named dummy do not exist.")); + Assert.assertNull(STATE.get().getKeySerializer()); + Assert.assertNull(STATE.get().getValueSerializer()); + + commandResult = TestUtils.executeCommand("table-segment set-serializer slts", STATE.get()); + Assert.assertTrue(commandResult.contains("Serializers changed to slts successfully.")); + Assert.assertTrue(STATE.get().getKeySerializer() instanceof SltsKeySerializer); + Assert.assertTrue(STATE.get().getValueSerializer() instanceof SltsMetadataSerializer); + + commandResult = TestUtils.executeCommand("table-segment set-serializer container_meta", STATE.get()); + Assert.assertTrue(commandResult.contains("Serializers changed to container_meta successfully.")); + Assert.assertTrue(STATE.get().getKeySerializer() instanceof ContainerKeySerializer); + Assert.assertTrue(STATE.get().getValueSerializer() instanceof ContainerMetadataSerializer); + } + + @Test + public void testGetTableSegmentInfoCommand() throws Exception { + String tableSegmentName = getMetadataSegmentName(0); + String commandResult = TestUtils.executeCommand("table-segment get-info " + tableSegmentName + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains(tableSegmentName)); + Assert.assertTrue(commandResult.contains(SEGMENT_NAME)); + Assert.assertTrue(commandResult.contains(START_OFFSET)); + Assert.assertTrue(commandResult.contains(LENGTH)); + Assert.assertTrue(commandResult.contains(ENTRY_COUNT)); + Assert.assertTrue(commandResult.contains(KEY_LENGTH)); + } + + @Test + public void testListTableSegmentKeysCommand() throws Exception { + String setSerializerResult = TestUtils.executeCommand("table-segment set-serializer container_meta", STATE.get()); + Assert.assertTrue(setSerializerResult.contains("Serializers changed to container_meta 
successfully.")); + Assert.assertTrue(STATE.get().getKeySerializer() instanceof ContainerKeySerializer); + Assert.assertTrue(STATE.get().getValueSerializer() instanceof ContainerMetadataSerializer); + + String tableSegmentName = getMetadataSegmentName(0); + int keyCount = 5; + String commandResult = TestUtils.executeCommand("table-segment list-keys " + tableSegmentName + " " + keyCount + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("List of at most " + keyCount + " keys in " + tableSegmentName)); + } + + @Test + public void testGetTableSegmentEntryCommand() throws Exception { + String setSerializerResult = TestUtils.executeCommand("table-segment set-serializer container_meta", STATE.get()); + Assert.assertTrue(setSerializerResult.contains("Serializers changed to container_meta successfully.")); + Assert.assertTrue(STATE.get().getKeySerializer() instanceof ContainerKeySerializer); + Assert.assertTrue(STATE.get().getValueSerializer() instanceof ContainerMetadataSerializer); + + String tableSegmentName = getMetadataSegmentName(0); + String key = "_system/_RGkvtStreamReaders/0.#epoch.0"; + String commandResult = TestUtils.executeCommand("table-segment get " + tableSegmentName + " " + key + " localhost", STATE.get()); + Assert.assertTrue(commandResult.contains("container metadata info:")); + Assert.assertTrue(commandResult.contains(SEGMENT_ID)); + Assert.assertTrue(commandResult.contains(SEGMENT_PROPERTIES_NAME)); + Assert.assertTrue(commandResult.contains(SEGMENT_PROPERTIES_SEALED)); + Assert.assertTrue(commandResult.contains(SEGMENT_PROPERTIES_START_OFFSET)); + Assert.assertTrue(commandResult.contains(SEGMENT_PROPERTIES_LENGTH)); + } + + @Test + public void testPutTableSegmentEntryCommand() throws Exception { + String setSerializerResult = TestUtils.executeCommand("table-segment set-serializer container_meta", STATE.get()); + Assert.assertTrue(setSerializerResult.contains("Serializers changed to container_meta successfully.")); + Assert.assertTrue(STATE.get().getKeySerializer() instanceof ContainerKeySerializer); + Assert.assertTrue(STATE.get().getValueSerializer() instanceof ContainerMetadataSerializer); + + String tableSegmentName = getMetadataSegmentName(0); + String key = "_system/_RGkvtStreamReaders/0.#epoch.0"; + StringBuilder newValueBuilder = new StringBuilder(); + appendField(newValueBuilder, SEGMENT_ID, "1"); + appendField(newValueBuilder, SEGMENT_PROPERTIES_NAME, key); + appendField(newValueBuilder, SEGMENT_PROPERTIES_SEALED, "false"); + appendField(newValueBuilder, SEGMENT_PROPERTIES_START_OFFSET, "0"); + appendField(newValueBuilder, SEGMENT_PROPERTIES_LENGTH, "10"); + appendField(newValueBuilder, "80000000-0000-0000-0000-000000000000", "1632728432718"); + + String commandResult = TestUtils.executeCommand("table-segment put " + tableSegmentName + " localhost " + + key + " " + newValueBuilder.toString(), + STATE.get()); + Assert.assertTrue(commandResult.contains("Successfully updated the key " + key + " in table " + tableSegmentName)); + } + + @Test + public void testModifyTableSegmentEntryCommandValidFieldCase() throws Exception { + String setSerializerResult = TestUtils.executeCommand("table-segment set-serializer container_meta", STATE.get()); + Assert.assertTrue(setSerializerResult.contains("Serializers changed to container_meta successfully.")); + Assert.assertTrue(STATE.get().getKeySerializer() instanceof ContainerKeySerializer); + Assert.assertTrue(STATE.get().getValueSerializer() instanceof ContainerMetadataSerializer); + + String tableSegmentName 
= getMetadataSegmentName(0); + String key = "_system/_RGkvtStreamReaders/0.#epoch.0"; + StringBuilder newFieldValueBuilder = new StringBuilder(); + appendField(newFieldValueBuilder, SEGMENT_PROPERTIES_START_OFFSET, "20"); + appendField(newFieldValueBuilder, SEGMENT_PROPERTIES_LENGTH, "30"); + appendField(newFieldValueBuilder, "80000000-0000-0000-0000-000000000000", "1632728432718"); + appendField(newFieldValueBuilder, "dummy_field", "dummy"); + + String commandResult = TestUtils.executeCommand("table-segment modify " + tableSegmentName + " localhost " + + key + " " + newFieldValueBuilder.toString(), + STATE.get()); + Assert.assertTrue(commandResult.contains("dummy_field field does not exist.")); + Assert.assertTrue(commandResult.contains("Successfully modified the following fields in the value for key " + key + " in table " + tableSegmentName)); + } + + @Test + public void testModifyTableSegmentEntryCommandInValidFieldCase() throws Exception { + String setSerializerResult = TestUtils.executeCommand("table-segment set-serializer container_meta", STATE.get()); + Assert.assertTrue(setSerializerResult.contains("Serializers changed to container_meta successfully.")); + Assert.assertTrue(STATE.get().getKeySerializer() instanceof ContainerKeySerializer); + Assert.assertTrue(STATE.get().getValueSerializer() instanceof ContainerMetadataSerializer); + + String tableSegmentName = getMetadataSegmentName(0); + String key = "_system/_RGkvtStreamReaders/0.#epoch.0"; + StringBuilder newFieldValueBuilder = new StringBuilder(); + appendField(newFieldValueBuilder, "dummy_field", "dummy"); + + String commandResult = TestUtils.executeCommand("table-segment modify " + tableSegmentName + " localhost " + + key + " " + newFieldValueBuilder.toString(), + STATE.get()); + Assert.assertTrue(commandResult.contains("dummy_field field does not exist.")); + Assert.assertTrue(commandResult.contains("No fields provided to modify.")); + } + + @After + public void tearDown() throws Exception { + SETUP_UTILS.stopAllServices(); + STATE.get().close(); + } + + //endregion + + //region Actual Test Implementations + + public static class SecureSegmentStoreCommandsTest extends AbstractSegmentStoreCommandsTest { + @Before + public void startUp() throws Exception { + setup(true, true); + } + } + + public static class SegmentStoreCommandsTest extends AbstractSegmentStoreCommandsTest { + @Before + public void startUp() throws Exception { + setup(false, false); + } + } + + //endregion +} \ No newline at end of file diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommandsTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommandsTest.java deleted file mode 100644 index 46495d33f75..00000000000 --- a/cli/admin/src/test/java/io/pravega/cli/admin/segmentstore/SegmentStoreCommandsTest.java +++ /dev/null @@ -1,133 +0,0 @@ -/** - * Copyright Pravega Authors. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package io.pravega.cli.admin.segmentstore; - -import io.pravega.cli.admin.AbstractAdminCommandTest; -import io.pravega.cli.admin.utils.TestUtils; -import io.pravega.client.ClientConfig; -import io.pravega.client.EventStreamClientFactory; -import io.pravega.client.stream.EventStreamWriter; -import io.pravega.client.stream.EventWriterConfig; -import io.pravega.client.stream.StreamConfiguration; -import io.pravega.client.stream.impl.JavaSerializer; -import io.pravega.controller.server.WireCommandFailedException; -import io.pravega.segmentstore.contracts.Attributes; -import io.pravega.test.common.AssertExtensions; -import lombok.Cleanup; -import org.junit.Assert; -import org.junit.Test; - -import java.util.Arrays; -import java.util.UUID; - -public class SegmentStoreCommandsTest extends AbstractAdminCommandTest { - - @Test - public void testGetSegmentInfoCommand() throws Exception { - TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "getinfo", StreamConfiguration.builder().build()); - String commandResult = TestUtils.executeCommand("segmentstore get-segment-info segmentstore/getinfo/0.#epoch.0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_abortStream/0.#epoch.0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_requeststream/0.#epoch.0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGcommitStreamReaders/0.#epoch.0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGscaleGroup/0.#epoch.0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGkvtStreamReaders/0.#epoch.0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/_RGabortStreamReaders/0.#epoch.0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-info _system/containers/metadata_0 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("StreamSegmentInfo:")); - AssertExtensions.assertThrows(WireCommandFailedException.class, () -> TestUtils.executeCommand("segmentstore get-segment-info not/exists/0 localhost", STATE.get())); - Assert.assertNotNull(GetSegmentInfoCommand.descriptor()); - } - - @Test - public void testReadSegmentRangeCommand() throws Exception { - TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "readsegment", StreamConfiguration.builder().build()); - ClientConfig clientConfig = ClientConfig.builder().controllerURI(SETUP_UTILS.getControllerUri()).build(); - @Cleanup - EventStreamClientFactory factory = EventStreamClientFactory.withScope("segmentstore", clientConfig); - @Cleanup - EventStreamWriter writer = factory.createEventWriter("readsegment", new JavaSerializer<>(), EventWriterConfig.builder().build()); - writer.writeEvents("rk", Arrays.asList("a", "2", "3")); - writer.flush(); - String commandResult = 
TestUtils.executeCommand("segmentstore read-segment segmentstore/readsegment/0.#epoch.0 0 8 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("ReadSegment:")); - commandResult = TestUtils.executeCommand("segmentstore read-segment _system/_RGcommitStreamReaders/0.#epoch.0 0 8 localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("ReadSegment:")); - AssertExtensions.assertThrows(WireCommandFailedException.class, () -> TestUtils.executeCommand("segmentstore read-segment not/exists/0 0 1 localhost", STATE.get())); - Assert.assertNotNull(ReadSegmentRangeCommand.descriptor()); - } - - @Test - public void testGetSegmentAttributeCommand() throws Exception { - TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "getattribute", StreamConfiguration.builder().build()); - String commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute segmentstore/getattribute/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); - commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute _system/_abortStream/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); - AssertExtensions.assertThrows(WireCommandFailedException.class, () -> TestUtils.executeCommand("segmentstore get-segment-attribute not/exists/0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get())); - Assert.assertNotNull(GetSegmentAttributeCommand.descriptor()); - } - - @Test - public void testUpdateSegmentAttributeCommand() throws Exception { - TestUtils.createScopeStream(SETUP_UTILS.getController(), "segmentstore", "updateattribute", StreamConfiguration.builder().build()); - // First, get the existing value of that attribute for the segment. - String commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute segmentstore/updateattribute/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); - long oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); - Assert.assertNotEquals(0L, oldValue); - // Update the Segment to a value of 0. - commandResult = TestUtils.executeCommand("segmentstore update-segment-attribute segmentstore/updateattribute/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " 0 " + oldValue + " localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("UpdateSegmentAttribute:")); - // Check that the value has been updated. - commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute segmentstore/updateattribute/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); - oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); - Assert.assertEquals(0L, oldValue); - - // Do the same for an internal segment. 
- commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute _system/_abortStream/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("GetSegmentAttribute:")); - oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); - Assert.assertNotEquals(0L, oldValue); - // Update the Segment to a value of 0. - commandResult = TestUtils.executeCommand("segmentstore update-segment-attribute _system/_abortStream/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " 0 " + oldValue + " localhost", STATE.get()); - Assert.assertTrue(commandResult.contains("UpdateSegmentAttribute:")); - // Check that the value has been updated. - commandResult = TestUtils.executeCommand("segmentstore get-segment-attribute _system/_abortStream/0.#epoch.0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " localhost", STATE.get()); - oldValue = Long.parseLong(commandResult.substring(commandResult.lastIndexOf("=") + 1, commandResult.indexOf(")"))); - Assert.assertEquals(0L, oldValue); - - AssertExtensions.assertThrows(WireCommandFailedException.class, () -> TestUtils.executeCommand("segmentstore update-segment-attribute not/exists/0 " - + new UUID(Attributes.CORE_ATTRIBUTE_ID_PREFIX, 0) + " 0 0 localhost", STATE.get())); - Assert.assertNotNull(UpdateSegmentAttributeCommand.descriptor()); - } - -} diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerKeySerializerTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerKeySerializerTest.java new file mode 100644 index 00000000000..f05d47a61a6 --- /dev/null +++ b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerKeySerializerTest.java @@ -0,0 +1,32 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.serializers; + +import org.junit.Assert; +import org.junit.Test; + +import java.nio.ByteBuffer; + +public class ContainerKeySerializerTest { + + @Test + public void testContainerKeySerializer() { + String testString = "test"; + ContainerKeySerializer serializer = new ContainerKeySerializer(); + ByteBuffer buffer = serializer.serialize(testString); + Assert.assertEquals(testString, serializer.deserialize(buffer)); + } +} diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializerTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializerTest.java new file mode 100644 index 00000000000..7cb61369f4b --- /dev/null +++ b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/ContainerMetadataSerializerTest.java @@ -0,0 +1,60 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.serializers; + +import org.junit.Test; + +import java.nio.ByteBuffer; + +import static io.pravega.cli.admin.serializers.AbstractSerializer.appendField; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_ID; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_LENGTH; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_NAME; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_SEALED; +import static io.pravega.cli.admin.serializers.ContainerMetadataSerializer.SEGMENT_PROPERTIES_START_OFFSET; +import static org.junit.Assert.assertEquals; + +public class ContainerMetadataSerializerTest { + + @Test + public void testContainerMetadataSerializer() { + StringBuilder userGeneratedMetadataBuilder = new StringBuilder(); + appendField(userGeneratedMetadataBuilder, SEGMENT_ID, "1"); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_NAME, "segment-name"); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_SEALED, "false"); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_START_OFFSET, "0"); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_LENGTH, "10"); + appendField(userGeneratedMetadataBuilder, "80000000-0000-0000-0000-000000000000", "1632728432718"); + + String userString = userGeneratedMetadataBuilder.toString(); + ContainerMetadataSerializer serializer = new ContainerMetadataSerializer(); + ByteBuffer buf = serializer.serialize(userString); + assertEquals(userString, serializer.deserialize(buf)); + } + + @Test(expected = IllegalArgumentException.class) + public void testContainerMetadataSerializerArgumentFailure() { + StringBuilder userGeneratedMetadataBuilder = new StringBuilder(); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_NAME, "segment-name"); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_SEALED, "false"); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_START_OFFSET, "0"); + appendField(userGeneratedMetadataBuilder, SEGMENT_PROPERTIES_LENGTH, "10"); + + String userString = userGeneratedMetadataBuilder.toString(); + ContainerMetadataSerializer serializer = new ContainerMetadataSerializer(); + serializer.serialize(userString); + } +} diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SerializerTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SerializerTest.java new file mode 100644 index 00000000000..43f0d8ed8f9 --- /dev/null +++ b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SerializerTest.java @@ -0,0 +1,63 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.serializers; + +import io.pravega.test.common.AssertExtensions; +import org.junit.Assert; +import org.junit.Test; + +import java.util.HashMap; +import java.util.Map; + +import static io.pravega.cli.admin.serializers.AbstractSerializer.appendField; +import static io.pravega.cli.admin.serializers.AbstractSerializer.getAndRemoveIfExists; +import static io.pravega.cli.admin.serializers.AbstractSerializer.parseStringData; +import static java.util.stream.IntStream.range; + +public class SerializerTest { + + @Test + public void testAppendField() { + String testKey = "key"; + String testValue = "value"; + StringBuilder testBuilder = new StringBuilder(); + appendField(testBuilder, testKey, testValue); + Assert.assertTrue(testBuilder.toString().contains(String.format("%s=%s;", testKey, testValue))); + } + + @Test + public void testParseStringData() { + int total = 4; + StringBuilder testBuilder = new StringBuilder(); + range(1, total).forEach(i -> appendField(testBuilder, "key" + i, "value" + i)); + Map dataMap = parseStringData(testBuilder.toString()); + range(1, total).forEach(i -> { + Assert.assertTrue(dataMap.containsKey("key" + i)); + Assert.assertEquals("value" + i, dataMap.get("key" + i)); + }); + } + + @Test + public void testGetAndRemoveIfExists() { + String testKey = "key1"; + String testValue = "value1"; + Map testMap = new HashMap<>(); + testMap.put(testKey, testValue); + Assert.assertEquals(testValue, getAndRemoveIfExists(testMap, testKey)); + Assert.assertFalse(testMap.containsKey(testKey)); + AssertExtensions.assertThrows(IllegalArgumentException.class, () -> getAndRemoveIfExists(testMap, testKey)); + } +} diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsKeySerializerTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsKeySerializerTest.java new file mode 100644 index 00000000000..2dce8daf5b1 --- /dev/null +++ b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsKeySerializerTest.java @@ -0,0 +1,32 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.cli.admin.serializers; + +import org.junit.Assert; +import org.junit.Test; + +import java.nio.ByteBuffer; + +public class SltsKeySerializerTest { + + @Test + public void testSltsKeySerializer() { + String testString = "test"; + SltsKeySerializer serializer = new SltsKeySerializer(); + ByteBuffer buffer = serializer.serialize(testString); + Assert.assertEquals(testString, serializer.deserialize(buffer)); + } +} diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsMetadataSerializerTest.java b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsMetadataSerializerTest.java new file mode 100644 index 00000000000..912a0ee7625 --- /dev/null +++ b/cli/admin/src/test/java/io/pravega/cli/admin/serializers/SltsMetadataSerializerTest.java @@ -0,0 +1,126 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.cli.admin.serializers; + +import org.junit.Test; + +import java.nio.ByteBuffer; + +import static io.pravega.cli.admin.serializers.AbstractSerializer.appendField; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.CHUNK_METADATA; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.CHUNK_METADATA_LENGTH; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.CHUNK_METADATA_NAME; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.CHUNK_METADATA_NEXT_CHUNK; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.CHUNK_METADATA_STATUS; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.METADATA_TYPE; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.READ_INDEX_BLOCK_METADATA; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.READ_INDEX_BLOCK_METADATA_CHUNK_NAME; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.READ_INDEX_BLOCK_METADATA_NAME; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.READ_INDEX_BLOCK_METADATA_START_OFFSET; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.READ_INDEX_BLOCK_METADATA_STATUS; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_CHUNK_COUNT; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_FIRST_CHUNK; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_FIRST_CHUNK_START_OFFSET; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_LAST_CHUNK; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_LAST_CHUNK_START_OFFSET; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_LAST_MODIFIED; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_LENGTH; +import static 
io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_MAX_ROLLING_LENGTH; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_NAME; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_OWNER_EPOCH; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_START_OFFSET; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.SEGMENT_METADATA_STATUS; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.TRANSACTION_DATA_KEY; +import static io.pravega.cli.admin.serializers.SltsMetadataSerializer.TRANSACTION_DATA_VERSION; +import static org.junit.Assert.assertEquals; + +public class SltsMetadataSerializerTest { + + @Test + public void testSltsChunkMetadataSerializer() { + StringBuilder userGeneratedMetadataBuilder = new StringBuilder(); + appendField(userGeneratedMetadataBuilder, TRANSACTION_DATA_KEY, "k"); + appendField(userGeneratedMetadataBuilder, TRANSACTION_DATA_VERSION, "1062"); + appendField(userGeneratedMetadataBuilder, METADATA_TYPE, CHUNK_METADATA); + appendField(userGeneratedMetadataBuilder, CHUNK_METADATA_NAME, "chunk0"); + appendField(userGeneratedMetadataBuilder, CHUNK_METADATA_LENGTH, "10"); + appendField(userGeneratedMetadataBuilder, CHUNK_METADATA_NEXT_CHUNK, "chunk1"); + appendField(userGeneratedMetadataBuilder, CHUNK_METADATA_STATUS, "0"); + + String userString = userGeneratedMetadataBuilder.toString(); + SltsMetadataSerializer serializer = new SltsMetadataSerializer(); + ByteBuffer buf = serializer.serialize(userString); + assertEquals(userString, serializer.deserialize(buf)); + } + + @Test + public void testSltsSegmentMetadataSerializer() { + StringBuilder userGeneratedMetadataBuilder = new StringBuilder(); + appendField(userGeneratedMetadataBuilder, TRANSACTION_DATA_KEY, "k"); + appendField(userGeneratedMetadataBuilder, TRANSACTION_DATA_VERSION, "1062"); + appendField(userGeneratedMetadataBuilder, METADATA_TYPE, SEGMENT_METADATA); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_NAME, "segment-name"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_LENGTH, "10"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_CHUNK_COUNT, "5"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_START_OFFSET, "0"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_STATUS, "1"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_MAX_ROLLING_LENGTH, "10"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_FIRST_CHUNK, "chunk0"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_LAST_CHUNK, "chunk4"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_LAST_MODIFIED, "1000"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_FIRST_CHUNK_START_OFFSET, "10"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_LAST_CHUNK_START_OFFSET, "50"); + appendField(userGeneratedMetadataBuilder, SEGMENT_METADATA_OWNER_EPOCH, "12345"); + + String userString = userGeneratedMetadataBuilder.toString(); + SltsMetadataSerializer serializer = new SltsMetadataSerializer(); + ByteBuffer buf = serializer.serialize(userString); + assertEquals(userString, serializer.deserialize(buf)); + } + + @Test + public void testSltsReadIndexBlockMetadataSerializer() { + StringBuilder userGeneratedMetadataBuilder = new StringBuilder(); + appendField(userGeneratedMetadataBuilder, TRANSACTION_DATA_KEY, "k"); + appendField(userGeneratedMetadataBuilder, 
TRANSACTION_DATA_VERSION, "1062"); + appendField(userGeneratedMetadataBuilder, METADATA_TYPE, READ_INDEX_BLOCK_METADATA); + appendField(userGeneratedMetadataBuilder, READ_INDEX_BLOCK_METADATA_NAME, "r1"); + appendField(userGeneratedMetadataBuilder, READ_INDEX_BLOCK_METADATA_CHUNK_NAME, "chunk0"); + appendField(userGeneratedMetadataBuilder, READ_INDEX_BLOCK_METADATA_START_OFFSET, "10"); + appendField(userGeneratedMetadataBuilder, READ_INDEX_BLOCK_METADATA_STATUS, "0"); + + String userString = userGeneratedMetadataBuilder.toString(); + SltsMetadataSerializer serializer = new SltsMetadataSerializer(); + ByteBuffer buf = serializer.serialize(userString); + assertEquals(userString, serializer.deserialize(buf)); + } + + @Test(expected = IllegalArgumentException.class) + public void testSltsSerializerArgumentFailure() { + StringBuilder userGeneratedMetadataBuilder = new StringBuilder(); + appendField(userGeneratedMetadataBuilder, TRANSACTION_DATA_KEY, "k"); + appendField(userGeneratedMetadataBuilder, TRANSACTION_DATA_VERSION, "1062"); + appendField(userGeneratedMetadataBuilder, METADATA_TYPE, "random_value"); + appendField(userGeneratedMetadataBuilder, CHUNK_METADATA_NAME, "chunk0"); + appendField(userGeneratedMetadataBuilder, CHUNK_METADATA_LENGTH, "10"); + appendField(userGeneratedMetadataBuilder, CHUNK_METADATA_STATUS, "0"); + + String userString = userGeneratedMetadataBuilder.toString(); + SltsMetadataSerializer serializer = new SltsMetadataSerializer(); + serializer.serialize(userString); + } +} diff --git a/cli/admin/src/test/java/io/pravega/cli/admin/utils/TestUtils.java b/cli/admin/src/test/java/io/pravega/cli/admin/utils/TestUtils.java index 8802a863d2b..553f461a39c 100644 --- a/cli/admin/src/test/java/io/pravega/cli/admin/utils/TestUtils.java +++ b/cli/admin/src/test/java/io/pravega/cli/admin/utils/TestUtils.java @@ -54,12 +54,12 @@ import java.net.URI; import java.nio.charset.StandardCharsets; import java.time.Duration; -import java.util.Arrays; -import java.util.HashMap; -import java.util.HashSet; -import java.util.Map; import java.util.Properties; import java.util.Set; +import java.util.Map; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Arrays; /** * Class to contain convenient utilities for writing test cases. 
@@ -109,15 +109,21 @@ public static String pathToConfig() { * @return A local Pravega cluster */ public static ClusterWrapper createPravegaCluster(boolean authEnabled, boolean tlsEnabled) { - ClusterWrapper.ClusterWrapperBuilder clusterWrapperBuilder = ClusterWrapper.builder().authEnabled(authEnabled); + ClusterWrapper.ClusterWrapperBuilder clusterWrapperBuilder = ClusterWrapper.builder(); + if (authEnabled) { + clusterWrapperBuilder.authEnabled(authEnabled); + } + if (tlsEnabled) { clusterWrapperBuilder .tlsEnabled(true) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .tlsServerCertificatePath(pathToConfig() + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME) .tlsServerKeyPath(pathToConfig() + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME) .tlsHostVerificationEnabled(false) .tlsServerKeystorePath(pathToConfig() + SecurityConfigDefaults.TLS_SERVER_KEYSTORE_NAME) - .tlsServerKeystorePasswordPath(pathToConfig() + SecurityConfigDefaults.TLS_PASSWORD_FILE_NAME); + .tlsServerKeystorePasswordPath(pathToConfig() + SecurityConfigDefaults.TLS_PASSWORD_FILE_NAME) + .tokenSigningKeyBasis("secret"); } return clusterWrapperBuilder.controllerRestEnabled(true).build(); } @@ -131,10 +137,11 @@ public static ClusterWrapper createPravegaCluster(boolean authEnabled, boolean t * @param containerCount the container count. * @param authEnabled whether the cli requires authentication to access the cluster. * @param tlsEnabled whether the cli requires TLS to access the cluster. + * @param accessTokenTtl how long the access token will last */ @SneakyThrows public static AdminCommandState createAdminCLIConfig(String controllerRestUri, String controllerUri, String zkConnectUri, - int containerCount, boolean authEnabled, boolean tlsEnabled) { + int containerCount, boolean authEnabled, boolean tlsEnabled, Duration accessTokenTtl) { AdminCommandState state = new AdminCommandState(); Properties pravegaProperties = new Properties(); System.out.println("REST URI: " + controllerRestUri); @@ -142,11 +149,12 @@ public static AdminCommandState createAdminCLIConfig(String controllerRestUri, S pravegaProperties.setProperty("cli.controller.connect.grpc.uri", controllerUri); pravegaProperties.setProperty("pravegaservice.zk.connect.uri", zkConnectUri); pravegaProperties.setProperty("pravegaservice.container.count", Integer.toString(containerCount)); - pravegaProperties.setProperty("cli.controller.connect.channel.auth", Boolean.toString(authEnabled)); - pravegaProperties.setProperty("cli.controller.connect.credentials.username", SecurityConfigDefaults.AUTH_ADMIN_USERNAME); - pravegaProperties.setProperty("cli.controller.connect.credentials.pwd", SecurityConfigDefaults.AUTH_ADMIN_PASSWORD); - pravegaProperties.setProperty("cli.controller.connect.channel.tls", Boolean.toString(tlsEnabled)); - pravegaProperties.setProperty("cli.controller.connect.trustStore.location", pathToConfig() + SecurityConfigDefaults.TLS_CA_CERT_FILE_NAME); + pravegaProperties.setProperty("cli.channel.auth", Boolean.toString(authEnabled)); + pravegaProperties.setProperty("cli.credentials.username", SecurityConfigDefaults.AUTH_ADMIN_USERNAME); + pravegaProperties.setProperty("cli.credentials.pwd", SecurityConfigDefaults.AUTH_ADMIN_PASSWORD); + pravegaProperties.setProperty("cli.channel.tls", Boolean.toString(tlsEnabled)); + pravegaProperties.setProperty("cli.trustStore.location", pathToConfig() + SecurityConfigDefaults.TLS_CA_CERT_FILE_NAME); + pravegaProperties.setProperty("cli.trustStore.access.token.ttl.seconds", 
Long.toString(accessTokenTtl.toSeconds())); state.getConfigBuilder().include(pravegaProperties); return state; } @@ -204,6 +212,7 @@ public static void createScopeStream(Controller controller, String scopeName, St ClientConfig clientConfig = ClientConfig.builder().build(); @Cleanup ConnectionPool cp = new ConnectionPoolImpl(clientConfig, new SocketConnectionFactoryImpl(clientConfig)); + @SuppressWarnings("resource") //Don't close the controller. StreamManager streamManager = new StreamManagerImpl(controller, cp); //create scope Boolean createScopeStatus = streamManager.createScope(scopeName); @@ -258,4 +267,4 @@ public static void readAllEvents(String scope, String streamName, ClientFactoryI Assert.assertEquals("Event written and read back don't match", EVENT, eventRead); } } -} \ No newline at end of file +} diff --git a/cli/user/src/main/java/io/pravega/cli/user/Command.java b/cli/user/src/main/java/io/pravega/cli/user/Command.java index c4915f48834..acfa9f64450 100644 --- a/cli/user/src/main/java/io/pravega/cli/user/Command.java +++ b/cli/user/src/main/java/io/pravega/cli/user/Command.java @@ -265,6 +265,7 @@ public static class Factory { .put(ConfigCommand.Set::descriptor, ConfigCommand.Set::new) .put(ScopeCommand.Create::descriptor, ScopeCommand.Create::new) .put(ScopeCommand.Delete::descriptor, ScopeCommand.Delete::new) + .put(ScopeCommand.List::descriptor, ScopeCommand.List::new) .put(StreamCommand.Create::descriptor, StreamCommand.Create::new) .put(StreamCommand.Delete::descriptor, StreamCommand.Delete::new) .put(StreamCommand.List::descriptor, StreamCommand.List::new) diff --git a/cli/user/src/main/java/io/pravega/cli/user/config/InteractiveConfig.java b/cli/user/src/main/java/io/pravega/cli/user/config/InteractiveConfig.java index afe520cd8ac..78f409d09b4 100644 --- a/cli/user/src/main/java/io/pravega/cli/user/config/InteractiveConfig.java +++ b/cli/user/src/main/java/io/pravega/cli/user/config/InteractiveConfig.java @@ -33,6 +33,7 @@ public class InteractiveConfig { public static final String TIMEOUT_MILLIS = "timeout-millis"; public static final String MAX_LIST_ITEMS = "max-list-items"; public static final String PRETTY_PRINT = "pretty-print"; + public static final String ROLLOVER_SIZE_BYTES = "rollover-size-bytes"; public static final String AUTH_ENABLED = "auth-enabled"; public static final String CONTROLLER_USER_NAME = "auth-username"; @@ -50,6 +51,7 @@ public class InteractiveConfig { private String password; private boolean tlsEnabled; private String truststore; + private long rolloverSizeBytes; public static InteractiveConfig getDefault() { return InteractiveConfig.builder() @@ -63,6 +65,7 @@ public static InteractiveConfig getDefault() { .password("") .tlsEnabled(false) .truststore("") + .rolloverSizeBytes(0) .build(); } @@ -98,6 +101,9 @@ InteractiveConfig set(String propertyName, String value) { case TRUSTSTORE_JKS: setTruststore(value); break; + case ROLLOVER_SIZE_BYTES: + setRolloverSizeBytes(Long.parseLong(value)); + break; default: throw new IllegalArgumentException(String.format("Unrecognized property name '%s'.", propertyName)); } @@ -116,6 +122,7 @@ Map getAll() { .put(CONTROLLER_PASSWORD, getPassword()) .put(TLS_ENABLED, isTlsEnabled()) .put(TRUSTSTORE_JKS, getTruststore()) + .put(ROLLOVER_SIZE_BYTES, getRolloverSizeBytes()) .build(); } } diff --git a/cli/user/src/main/java/io/pravega/cli/user/kvs/KeyValueTableCommand.java b/cli/user/src/main/java/io/pravega/cli/user/kvs/KeyValueTableCommand.java index a65747e0b7b..3c630c2a3a8 100644 --- 
a/cli/user/src/main/java/io/pravega/cli/user/kvs/KeyValueTableCommand.java +++ b/cli/user/src/main/java/io/pravega/cli/user/kvs/KeyValueTableCommand.java @@ -167,6 +167,7 @@ public void execute() { .partitionCount(getConfig().getDefaultSegmentCount()) .primaryKeyLength(pkLength) .secondaryKeyLength(skLength) + .rolloverSizeBytes(getConfig().getRolloverSizeBytes()) .build(); val success = m.createKeyValueTable(s.getScope(), s.getName(), kvtConfig); if (success) { @@ -286,6 +287,7 @@ protected int[] getTableFormatColumnLengths() { protected abstract void executeInternal(ScopedName kvtName, KeyValueTable kvt) throws Exception; + @Override public void execute() throws Exception { ensurePreconditions(); val kvtName = getScopedNameArg(0); diff --git a/cli/user/src/main/java/io/pravega/cli/user/scope/ScopeCommand.java b/cli/user/src/main/java/io/pravega/cli/user/scope/ScopeCommand.java index 73b1d3f3fac..5ee4af03089 100644 --- a/cli/user/src/main/java/io/pravega/cli/user/scope/ScopeCommand.java +++ b/cli/user/src/main/java/io/pravega/cli/user/scope/ScopeCommand.java @@ -22,6 +22,8 @@ import lombok.Cleanup; import lombok.NonNull; import lombok.val; +import java.util.ArrayList; +import java.util.Collections; public abstract class ScopeCommand extends Command { static final String COMPONENT = "scope"; @@ -90,4 +92,35 @@ public static CommandDescriptor descriptor() { .build(); } } + + public static class List extends StreamCommand { + public List(@NonNull CommandArgs commandArgs) { + super(commandArgs); + } + + @Override + public void execute() { + ensureArgCount(0); + @Cleanup + val sm = StreamManager.create(getClientConfig()); + val scopeIterator = sm.listScopes(); + ArrayList scopeList = new ArrayList<>(); + + while (scopeIterator.hasNext()) { + scopeList.add(scopeIterator.next()); + } + + Collections.sort(scopeList); + + for (String scope : scopeList) { + output("\t%s", scope); + } + + } + + public static CommandDescriptor descriptor() { + return createDescriptor("list", "Lists all Scopes in Pravega.") + .build(); + } + } } diff --git a/cli/user/src/main/java/io/pravega/cli/user/utils/BackgroundConsoleListener.java b/cli/user/src/main/java/io/pravega/cli/user/utils/BackgroundConsoleListener.java index bf0e85a0dad..493ef7b3740 100644 --- a/cli/user/src/main/java/io/pravega/cli/user/utils/BackgroundConsoleListener.java +++ b/cli/user/src/main/java/io/pravega/cli/user/utils/BackgroundConsoleListener.java @@ -44,6 +44,7 @@ public void start() { this.triggered.set(false); val t = new Thread(() -> { System.out.println(String.format("Press '%s ' to cancel ongoing operation.", this.token)); + @SuppressWarnings("resource") Scanner s = new Scanner(System.in); while (!this.triggered.get()) { String input = s.next(); diff --git a/cli/user/src/test/java/io/pravega/cli/user/scope/ScopeCommandsTest.java b/cli/user/src/test/java/io/pravega/cli/user/scope/ScopeCommandsTest.java index 562aaf715c2..1bd815095eb 100644 --- a/cli/user/src/test/java/io/pravega/cli/user/scope/ScopeCommandsTest.java +++ b/cli/user/src/test/java/io/pravega/cli/user/scope/ScopeCommandsTest.java @@ -59,6 +59,8 @@ public void testCreateScope() { String commandResult = TestUtils.executeCommand("scope create " + scope, cliConfig()); Assert.assertTrue(commandResult.contains("created successfully")); Assert.assertNotNull(ScopeCommand.Create.descriptor()); + + String cleanUp = TestUtils.executeCommand("scope delete " + scope, cliConfig()); } @Test(timeout = 5000) @@ -73,6 +75,32 @@ public void testDeleteScope() { 
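For context on the ScopeCommand.List addition above: the command buffers scope names from StreamManager.listScopes(), sorts them lexicographically, and prints one tab-indented name per line. An illustrative session (scope names assumed, not part of the patch) would look like:

    > scope list
            _system
            a
            aaa
            b
            c
            z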
Assert.assertNotNull(ScopeCommand.Delete.descriptor()); } + @Test(timeout = 10000) + @SneakyThrows + public void testListScope() { + final String scope1 = "b"; + final String scope2 = "z"; + final String scope3 = "c"; + final String scope4 = "a"; + final String scope5 = "aaa"; + + TestUtils.executeCommand("scope create " + scope1, cliConfig()); + TestUtils.executeCommand("scope create " + scope2, cliConfig()); + TestUtils.executeCommand("scope create " + scope3, cliConfig()); + TestUtils.executeCommand("scope create " + scope4, cliConfig()); + TestUtils.executeCommand("scope create " + scope5, cliConfig()); + + String commandResult = TestUtils.executeCommand("scope list", cliConfig()); + Assert.assertTrue(commandResult.equals( + "\t_system\n" + + "\ta\n" + + "\taaa\n" + + "\tb\n" + + "\tc\n" + + "\tz\n")); + Assert.assertNotNull(ScopeCommand.List.descriptor()); + } + public static class SecureScopeCommandsTest extends ScopeCommandsTest { private static final ClusterWrapper CLUSTER = createPravegaCluster(true, true); private static final InteractiveConfig CONFIG = createCLIConfig(getCLIControllerUri(CLUSTER.controllerUri()), true, true); diff --git a/client/src/main/java/io/pravega/client/ByteStreamClientFactory.java b/client/src/main/java/io/pravega/client/ByteStreamClientFactory.java index 979df849c33..4a19361fa82 100644 --- a/client/src/main/java/io/pravega/client/ByteStreamClientFactory.java +++ b/client/src/main/java/io/pravega/client/ByteStreamClientFactory.java @@ -58,7 +58,11 @@ static ByteStreamClientFactory withScope(String scope, ClientConfig config) { } /** - * Creates a new ByteStreamReader on the specified stream initialized to offset 0. + * Creates a new ByteStreamReader on the specified stream initialized with the last offset which was passed to + * ByteStreamWriter::truncateDataBefore(offset), or 0 if truncateDataBefore has not ever been called on this stream. + * + * The first byte read from the return value of this method will be the first available byte in the stream, + * considering any possible truncation. * * @param streamName the stream to read from. 
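A minimal sketch of the reader-initialization behavior documented above (scope, stream, and config names are illustrative; exception handling elided):

    // Sketch: a ByteStreamReader now starts at the last truncation point, not at offset 0.
    ByteStreamClientFactory factory = ByteStreamClientFactory.withScope("myScope", clientConfig);
    ByteStreamWriter writer = factory.createByteStreamWriter("myStream");
    writer.write(new byte[100]);
    writer.flush();
    writer.truncateDataBefore(50);   // discard the first 50 bytes
    writer.close();
    ByteStreamReader reader = factory.createByteStreamReader("myStream");
    long initialOffset = reader.getOffset();   // 50 after this change, previously 0
    reader.close();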
* @return A new ByteStreamReader diff --git a/client/src/main/java/io/pravega/client/ClientConfig.java b/client/src/main/java/io/pravega/client/ClientConfig.java index f57fa7a467b..a36cc66e321 100644 --- a/client/src/main/java/io/pravega/client/ClientConfig.java +++ b/client/src/main/java/io/pravega/client/ClientConfig.java @@ -295,10 +295,10 @@ private Credentials extractCredentialsFromEnv(Map env) { Map retVal = env.entrySet() .stream() .filter(entry -> entry.getKey().toString().startsWith(AUTH_PROPS_PREFIX_ENV)) - .collect(Collectors.toMap(entry -> (String) entry.getKey().toString() + .collect(Collectors.toMap(entry -> entry.getKey().toString() .replace("_", ".") .substring(AUTH_PROPS_PREFIX.length()), - value -> (String) value.getValue())); + value -> value.getValue())); if (retVal.containsKey(AUTH_METHOD)) { return credentialFromMap(retVal); } else { diff --git a/client/src/main/java/io/pravega/client/byteStream/ByteStreamReader.java b/client/src/main/java/io/pravega/client/byteStream/ByteStreamReader.java index e8f25cc363d..acdf3ac712f 100644 --- a/client/src/main/java/io/pravega/client/byteStream/ByteStreamReader.java +++ b/client/src/main/java/io/pravega/client/byteStream/ByteStreamReader.java @@ -59,6 +59,12 @@ public abstract class ByteStreamReader extends InputStream implements Asynchrono @Override public abstract int available(); + /** + * This makes a synchronous RPC call to the server to obtain the current head of the stream. + * @return The current head offset + */ + public abstract long fetchHeadOffset(); + /** * This make an RPC to the server to fetch the offset at which new bytes would be written. This * is the same as the length of the segment (assuming no truncation). This offset can also be diff --git a/client/src/main/java/io/pravega/client/byteStream/ByteStreamWriter.java b/client/src/main/java/io/pravega/client/byteStream/ByteStreamWriter.java index e5d1f063799..1ebe4556954 100644 --- a/client/src/main/java/io/pravega/client/byteStream/ByteStreamWriter.java +++ b/client/src/main/java/io/pravega/client/byteStream/ByteStreamWriter.java @@ -95,6 +95,12 @@ public abstract class ByteStreamWriter extends OutputStream { */ public abstract void closeAndSeal() throws IOException; + /** + * This makes a synchronous RPC call to the server to obtain the current head of the stream. + * @return The current head offset + */ + public abstract long fetchHeadOffset(); + /** * This makes a synchronous RPC call to the server to obtain the total number of bytes written * to the segment in its history. 
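Both fetchHeadOffset() additions surface the segment's start offset through a synchronous RPC; together with fetchTailOffset() they bracket the readable range. A sketch (variable names illustrative):

    // Sketch: head advances on truncation, tail advances on writes.
    long head = reader.fetchHeadOffset();   // first readable byte
    long tail = reader.fetchTailOffset();   // offset where the next write would land
    long readableBytes = tail - head;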
This is the sum total of the bytes written in all calls to diff --git a/client/src/main/java/io/pravega/client/byteStream/impl/BufferedByteStreamWriterImpl.java b/client/src/main/java/io/pravega/client/byteStream/impl/BufferedByteStreamWriterImpl.java index 7460d048f39..9cd53de3c2f 100644 --- a/client/src/main/java/io/pravega/client/byteStream/impl/BufferedByteStreamWriterImpl.java +++ b/client/src/main/java/io/pravega/client/byteStream/impl/BufferedByteStreamWriterImpl.java @@ -104,6 +104,11 @@ public void closeAndSeal() throws IOException { out.closeAndSeal(); } + @Override + public long fetchHeadOffset() { + return out.fetchHeadOffset(); + } + @Override public long fetchTailOffset() { return out.fetchTailOffset(); diff --git a/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamReaderImpl.java b/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamReaderImpl.java index a995e56dbfc..8e789fee930 100644 --- a/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamReaderImpl.java +++ b/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamReaderImpl.java @@ -76,6 +76,12 @@ public void close() { } } + @Override + public long fetchHeadOffset() { + return Futures.getThrowingException(meta.fetchCurrentSegmentHeadOffset()); + } + + @Override public long fetchTailOffset() { return Futures.getThrowingException(meta.fetchCurrentSegmentLength()); diff --git a/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamWriterImpl.java b/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamWriterImpl.java index e2e9c21ee17..e34394a0a97 100644 --- a/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamWriterImpl.java +++ b/client/src/main/java/io/pravega/client/byteStream/impl/ByteStreamWriterImpl.java @@ -67,6 +67,11 @@ public void closeAndSeal() throws IOException { meta.close(); } + @Override + public long fetchHeadOffset() { + return Futures.getThrowingException(meta.fetchCurrentSegmentHeadOffset()); + } + @Override public long fetchTailOffset() { return Futures.getThrowingException(meta.fetchCurrentSegmentLength()); diff --git a/client/src/main/java/io/pravega/client/connection/impl/CommandEncoder.java b/client/src/main/java/io/pravega/client/connection/impl/CommandEncoder.java index 9bd3ca8a293..221ee6b30c7 100644 --- a/client/src/main/java/io/pravega/client/connection/impl/CommandEncoder.java +++ b/client/src/main/java/io/pravega/client/connection/impl/CommandEncoder.java @@ -20,10 +20,13 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufOutputStream; import io.netty.buffer.Unpooled; +import io.pravega.common.Exceptions; import io.pravega.shared.metrics.MetricNotifier; import io.pravega.shared.protocol.netty.Append; import io.pravega.shared.protocol.netty.AppendBatchSizeTracker; import io.pravega.shared.protocol.netty.InvalidMessageException; +import io.pravega.shared.protocol.netty.PravegaNodeUri; +import io.pravega.shared.protocol.netty.ReplyProcessor; import io.pravega.shared.protocol.netty.WireCommand; import io.pravega.shared.protocol.netty.WireCommandType; import io.pravega.shared.protocol.netty.WireCommands.AppendBlock; @@ -39,6 +42,7 @@ import java.util.List; import java.util.Map; import java.util.UUID; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; import java.util.function.Function; import javax.annotation.concurrent.GuardedBy; @@ -62,7 +66,9 @@ public class CommandEncoder { static final int MAX_QUEUED_EVENTS = 500; @VisibleForTesting static final 
int MAX_QUEUED_SIZE = 1024 * 1024; // 1MB - + @VisibleForTesting + static final int MAX_SETUP_SEGMENTS_SIZE = 2000; + private static final byte[] LENGTH_PLACEHOLDER = new byte[4]; private final Function appendTracker; private final MetricNotifier metricNotifier; @@ -83,6 +89,10 @@ public class CommandEncoder { private final OutputStream output; private final ByteBuf buffer = Unpooled.buffer(1024 * 1024); + private final AtomicBoolean closed = new AtomicBoolean(false); + private final ReplyProcessor callback; + private final PravegaNodeUri location; + @RequiredArgsConstructor @VisibleForTesting final class Session { @@ -179,9 +189,20 @@ private void flushAllToBuffer() { @Synchronized public void write(WireCommand msg) throws IOException { + Exceptions.checkNotClosed(this.closed.get(), this); + if (msg instanceof SetupAppend) { breakCurrentAppend(); flushAllToBuffer(); + if (setupSegments.size() >= MAX_SETUP_SEGMENTS_SIZE) { + log.debug("CommandEncoder {} setupSegments map reached maximum size of {}", this.location, MAX_SETUP_SEGMENTS_SIZE); + flushBuffer(); + closed.compareAndSet(false, true); + if (callback != null) { + callback.connectionDropped(); + } + throw new IOException("CommandEncoder " + this.location + " closed due to memory limit reached"); + } writeMessage(msg, buffer); SetupAppend setup = (SetupAppend) msg; setupSegments.put(new SimpleImmutableEntry<>(setup.getSegment(), setup.getWriterId()), @@ -202,6 +223,8 @@ public void write(WireCommand msg) throws IOException { @Synchronized public void write(Append append) throws IOException { + Exceptions.checkNotClosed(this.closed.get(), this); + Session session = setupSegments.get(new SimpleImmutableEntry<>(append.getSegment(), append.getWriterId())); validateAppend(append, session); final ByteBuf data = append.getData().slice(); @@ -424,5 +447,5 @@ public long batchTimeout(long token) { closeQuietly(output, log, "Closing output failed"); } return result; - } + } } diff --git a/client/src/main/java/io/pravega/client/connection/impl/Flow.java b/client/src/main/java/io/pravega/client/connection/impl/Flow.java index 80c879dd7cc..2ee54dd8d2d 100644 --- a/client/src/main/java/io/pravega/client/connection/impl/Flow.java +++ b/client/src/main/java/io/pravega/client/connection/impl/Flow.java @@ -82,7 +82,7 @@ public static int toFlowID(long requestID) { */ @Synchronized public long asLong() { - return ((long) flowId << 32) | ((long) requestSequenceNumber & 0xFFFFFFFL); + return ((long) flowId << 32) | (requestSequenceNumber & 0xFFFFFFFL); } /** diff --git a/client/src/main/java/io/pravega/client/connection/impl/TcpClientConnection.java b/client/src/main/java/io/pravega/client/connection/impl/TcpClientConnection.java index 2572d3e30d9..8acbbee88c9 100644 --- a/client/src/main/java/io/pravega/client/connection/impl/TcpClientConnection.java +++ b/client/src/main/java/io/pravega/client/connection/impl/TcpClientConnection.java @@ -218,7 +218,7 @@ public static CompletableFuture connect(PravegaNodeUri loca reader.start(); // We use the flow id on both CommandEncoder and ConnectionReader to locate AppendBatchSizeTrackers. 
CommandEncoder encoder = new CommandEncoder(requestId -> - flowToBatchSizeTracker.getAppendBatchSizeTrackerByFlowId(Flow.toFlowID(requestId)), null, socket.getOutputStream()); + flowToBatchSizeTracker.getAppendBatchSizeTrackerByFlowId(Flow.toFlowID(requestId)), null, socket.getOutputStream(), callback, location); return new TcpClientConnection(socket, encoder, reader, location, onClose, executor); } catch (Exception e) { closeQuietly(socket, log, "Failed to close socket while failing."); diff --git a/client/src/main/java/io/pravega/client/control/impl/ControllerImpl.java b/client/src/main/java/io/pravega/client/control/impl/ControllerImpl.java index 0cf7701693f..18aca8741f7 100644 --- a/client/src/main/java/io/pravega/client/control/impl/ControllerImpl.java +++ b/client/src/main/java/io/pravega/client/control/impl/ControllerImpl.java @@ -667,6 +667,9 @@ public CompletableFuture updateStream(String scope, String streamName, case STREAM_NOT_FOUND: log.warn(requestId, "Stream does not exist: {}", streamName); throw new IllegalArgumentException("Stream does not exist: " + streamConfig); + case STREAM_SEALED: + log.warn(requestId, "Stream is sealed: {}", streamName); + throw new UnsupportedOperationException("Stream is sealed: " + streamConfig); case SUCCESS: log.info(requestId, "Successfully updated stream: {}", streamName); return true; @@ -800,6 +803,9 @@ private CompletableFuture truncateStream(final String scope, final Stri case STREAM_NOT_FOUND: log.warn(requestId, "Stream does not exist: {}/{}", scope, stream); throw new IllegalArgumentException("Stream does not exist: " + stream); + case STREAM_SEALED: + log.warn(requestId, "Stream is sealed: {}/{}", scope, stream); + throw new UnsupportedOperationException("Stream is sealed: " + stream); case SUCCESS: log.info(requestId, "Successfully updated stream: {}/{}", scope, stream); return true; @@ -1791,22 +1797,23 @@ public CompletableFuture createReaderGroup(String scope, Stri return callback.getFuture(); }, this.executor); return result.thenApplyAsync(x -> { + final String rgScopedName = NameUtils.getScopedReaderGroupName(scope, rgName); switch (x.getStatus()) { case FAILURE: - log.warn(requestId, "Failed to create reader group: {}", rgName); - throw new ControllerFailureException("Failed to create reader group: " + rgName); + log.warn(requestId, "Failed to create Reader Group: {}", rgScopedName); + throw new ControllerFailureException("Failed to create Reader Group: " + rgScopedName); case INVALID_RG_NAME: - log.warn(requestId, "Illegal Reader Group Name: {}", rgName); - throw new IllegalArgumentException("Illegal readergroup name: " + rgName); + log.warn(requestId, "Failed to create Reader Group {} as name is illegal.", rgScopedName); + throw new IllegalArgumentException("Failed to create Reader Group, as name is illegal." 
+ " " + rgScopedName); case SCOPE_NOT_FOUND: - log.warn(requestId, "Scope not found: {}", scope); - throw new IllegalArgumentException("Scope does not exist: " + scope); + log.warn(requestId, "Failed to create Reader Group {} as Scope {} does not exist.", rgScopedName, scope); + throw new IllegalArgumentException("Failed to create Reader Group as Scope does not exist: " + rgScopedName); case SUCCESS: - log.info(requestId, "ReaderGroup created successfully: {}", rgName); + log.info(requestId, "Reader Group created successfully: {}", rgScopedName); return encode(x.getConfig()); case UNRECOGNIZED: default: - throw new ControllerFailureException("Unknown return status creating reader group " + rgName + throw new ControllerFailureException("Unknown return status creating reader group " + rgScopedName + " " + x.getStatus()); } }, this.executor).whenComplete((x, e) -> { @@ -1837,27 +1844,27 @@ public CompletableFuture<Long> updateReaderGroup(String scope, String rgName, fi final String rgScopedName = NameUtils.getScopedReaderGroupName(scope, rgName); switch (x.getStatus()) { case FAILURE: - log.warn(requestId, "Failed to create reader group: {}", rgScopedName); - throw new ControllerFailureException("Failed to create readergroup: " + rgScopedName); + log.warn(requestId, "Failed to update Reader Group: {}", rgScopedName); + throw new ControllerFailureException("Failed to update Reader Group: " + rgScopedName); case INVALID_CONFIG: - log.warn(requestId, "Illegal Reader Group Config for reader group {}: {}", rgScopedName, rgConfig); + log.warn(requestId, "Failed to update Reader Group {} as Config was invalid: {}", rgScopedName, rgConfig); throw new ReaderGroupConfigRejectedException("Invalid Reader Group Config: " + rgConfig.toString()); case RG_NOT_FOUND: - log.warn(requestId, "Reader Group not found: {}", rgScopedName); + log.warn(requestId, "Failed to update Reader Group {} as Reader Group was not found.", rgScopedName); throw new ReaderGroupNotFoundException(rgScopedName); case SUCCESS: - log.info(requestId, "ReaderGroup created successfully: {}", rgScopedName); + log.info(requestId, "Reader Group updated successfully: {}", rgScopedName); return x.getGeneration(); case UNRECOGNIZED: default: - throw new ControllerFailureException("Unknown return status creating reader group " + rgScopedName + throw new ControllerFailureException("Unknown return status updating reader group " + rgScopedName + " " + x.getStatus()); } }, this.executor).whenComplete((x, e) -> { if (e != null) { - log.warn(requestId, "createReaderGroup {}/{} failed: ", scope, rgName, e); + log.warn(requestId, "updateReaderGroup {}/{} failed: ", scope, rgName, e); } - LoggerHelpers.traceLeave(log, "createReaderGroup", traceId, rgConfig, requestId); + LoggerHelpers.traceLeave(log, "updateReaderGroup", traceId, rgConfig, requestId); }); } diff --git a/client/src/main/java/io/pravega/client/control/impl/ModelHelper.java b/client/src/main/java/io/pravega/client/control/impl/ModelHelper.java index 6037dcf430c..3159efe318a 100644 --- a/client/src/main/java/io/pravega/client/control/impl/ModelHelper.java +++ b/client/src/main/java/io/pravega/client/control/impl/ModelHelper.java @@ -132,6 +132,8 @@ public static final StreamConfiguration encode(final StreamConfig config) { .scalingPolicy(encode(config.getScalingPolicy())) .retentionPolicy(encode(config.getRetentionPolicy())) .tags(config.getTags().getTagList()) + .timestampAggregationTimeout(config.getTimestampAggregationTimeout()) + .rolloverSizeBytes(config.getRolloverSizeBytes()) .build(); } @@
-152,6 +154,7 @@ public static final KeyValueTableConfiguration encode(final KeyValueTableConfig .partitionCount(config.getPartitionCount()) .primaryKeyLength(config.getPrimaryKeyLength()) .secondaryKeyLength(config.getSecondaryKeyLength()) + .rolloverSizeBytes(config.getRolloverSizeBytes()) .build(); } @@ -380,6 +383,8 @@ public static final StreamConfig decode(String scope, String streamName, final S builder.setRetentionPolicy(decode(configModel.getRetentionPolicy())); } builder.setTags(Controller.Tags.newBuilder().addAllTag(configModel.getTags()).build()); + builder.setTimestampAggregationTimeout(configModel.getTimestampAggregationTimeout()); + builder.setRolloverSizeBytes(configModel.getRolloverSizeBytes()); return builder.build(); } @@ -441,11 +446,14 @@ public static final KeyValueTableConfig decode(String scopeName, String kvtName, Preconditions.checkArgument(config.getPartitionCount() > 0, "Number of partitions should be > 0."); Preconditions.checkArgument(config.getPrimaryKeyLength() > 0, "Length of primary key should be > 0."); Preconditions.checkArgument(config.getSecondaryKeyLength() >= 0, "Length of secondary key should be >= 0."); + Preconditions.checkArgument(config.getRolloverSizeBytes() >= 0, "Rollover size should be >= 0."); return KeyValueTableConfig.newBuilder().setScope(scopeName) .setKvtName(kvtName) .setPartitionCount(config.getPartitionCount()) .setPrimaryKeyLength(config.getPrimaryKeyLength()) - .setSecondaryKeyLength(config.getSecondaryKeyLength()).build(); + .setSecondaryKeyLength(config.getSecondaryKeyLength()) + .setRolloverSizeBytes(config.getRolloverSizeBytes()) + .build(); } /** diff --git a/client/src/main/java/io/pravega/client/segment/impl/AsyncSegmentInputStreamImpl.java b/client/src/main/java/io/pravega/client/segment/impl/AsyncSegmentInputStreamImpl.java index bc18eb39979..71324b1146e 100644 --- a/client/src/main/java/io/pravega/client/segment/impl/AsyncSegmentInputStreamImpl.java +++ b/client/src/main/java/io/pravega/client/segment/impl/AsyncSegmentInputStreamImpl.java @@ -94,7 +94,7 @@ public void noSuchSegment(WireCommands.NoSuchSegment noSuchSegment) { log.info("Received noSuchSegment {}", noSuchSegment); CompletableFuture future = grabFuture(noSuchSegment.getSegment(), noSuchSegment.getOffset()); if (future != null) { - future.completeExceptionally(new SegmentTruncatedException("Segment no longer exists.")); + future.completeExceptionally(new SegmentTruncatedException(String.format("Segment %s no longer exists.", noSuchSegment.getSegment()))); } } @@ -103,7 +103,7 @@ public void segmentIsTruncated(SegmentIsTruncated segmentIsTruncated) { log.info("Received segmentIsTruncated {}", segmentIsTruncated); CompletableFuture future = grabFuture(segmentIsTruncated.getSegment(), segmentIsTruncated.getOffset()); if (future != null) { - future.completeExceptionally(new SegmentTruncatedException()); + future.completeExceptionally(new SegmentTruncatedException(segmentIsTruncated.toString())); } } diff --git a/client/src/main/java/io/pravega/client/segment/impl/Segment.java b/client/src/main/java/io/pravega/client/segment/impl/Segment.java index 80fd34c7fa7..f895e771c2e 100644 --- a/client/src/main/java/io/pravega/client/segment/impl/Segment.java +++ b/client/src/main/java/io/pravega/client/segment/impl/Segment.java @@ -18,7 +18,6 @@ import io.pravega.client.stream.Stream; import io.pravega.client.stream.impl.StreamImpl; import io.pravega.shared.NameUtils; -import java.io.ObjectStreamException; import java.io.Serializable; import java.util.List; import 
lombok.Data; @@ -110,7 +109,7 @@ public int compareTo(Segment o) { return result; } - private Object writeReplace() throws ObjectStreamException { + private Object writeReplace() { return new SerializedForm(getScopedName()); } @@ -118,7 +117,7 @@ private Object writeReplace() throws ObjectStreamException { private static class SerializedForm implements Serializable { private static final long serialVersionUID = 1L; private final String value; - Object readResolve() throws ObjectStreamException { + Object readResolve() { return Segment.fromScopedName(value); } } diff --git a/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClient.java b/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClient.java index 52ae67ea7a7..bc924af0f07 100644 --- a/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClient.java +++ b/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClient.java @@ -28,6 +28,13 @@ public interface SegmentMetadataClient extends AutoCloseable { * @return a future containing the Metadata about the segment. */ abstract CompletableFuture getSegmentInfo(); + + /** + * Returns the head of the current segment. + * + * @return a future containing the head of the current segment. + */ + abstract CompletableFuture fetchCurrentSegmentHeadOffset(); /** * Returns the length of the current segment. i.e. the total length of all data written to the segment. diff --git a/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClientImpl.java b/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClientImpl.java index a5fad0eb5cd..591c45a2ca8 100644 --- a/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClientImpl.java +++ b/client/src/main/java/io/pravega/client/segment/impl/SegmentMetadataClientImpl.java @@ -187,7 +187,7 @@ private CompletableFuture updatePropertyAsync(UUID attr private CompletableFuture truncateSegmentAsync(Segment segment, long offset, DelegationTokenProvider tokenProvider) { - log.trace("Truncating segment: {}", segment); + log.debug("Truncating segment: {} at offset {}", segment, offset); RawClient connection = getConnection(); long requestId = connection.getFlow().getNextSequenceNumber(); @@ -208,6 +208,15 @@ private CompletableFuture sealSegmentAsync(Segment segment, Deleg .thenApply(r -> transformReply(r, SegmentSealed.class)); } + @Override + public CompletableFuture fetchCurrentSegmentHeadOffset() { + Exceptions.checkNotClosed(closed.get(), this); + val result = RETRY_SCHEDULE.retryingOn(ConnectionFailedException.class) + .throwingOn(NoSuchSegmentException.class) + .runAsync(this::getStreamSegmentInfo, executor()); + return result.thenApply(info -> info.getStartOffset()); + } + @Override public CompletableFuture fetchCurrentSegmentLength() { Exceptions.checkNotClosed(closed.get(), this); @@ -264,7 +273,7 @@ public CompletableFuture truncateSegment(long offset) { .runAsync(() -> truncateSegmentAsync(segmentId, offset, tokenProvider).exceptionally(t -> { final Throwable ex = Exceptions.unwrap(t); if (ex.getCause() instanceof SegmentTruncatedException) { - log.debug("Segment already truncated at offset {}. Details: {}", + log.debug("Segment {} already truncated at offset {}. 
Details: {}", segmentId, offset, ex.getCause().getMessage()); return null; diff --git a/client/src/main/java/io/pravega/client/segment/impl/SegmentOutputStreamImpl.java b/client/src/main/java/io/pravega/client/segment/impl/SegmentOutputStreamImpl.java index d6395d3144d..7f17a84e360 100644 --- a/client/src/main/java/io/pravega/client/segment/impl/SegmentOutputStreamImpl.java +++ b/client/src/main/java/io/pravega/client/segment/impl/SegmentOutputStreamImpl.java @@ -211,7 +211,7 @@ private void failConnection(Throwable throwable) { } log.info("Handling exception {} for connection {} on writer {}. SetupCompleted: {}, Closed: {}", throwable, connection, writerId, connectionSetupCompleted == null ? null : connectionSetupCompleted.isDone(), closed); - if (exception == null) { + if (exception == null || throwable instanceof RetriesExhaustedException) { exception = throwable; } connection = null; @@ -573,6 +573,7 @@ public void flush() throws SegmentSealedException { } catch (Exception e) { failConnection(e); if (e instanceof RetriesExhaustedException) { + log.error("Flush on segment {} by writer {} failed after all retries", segmentName, writerId); //throw an exception to the external world that the flush failed due to RetriesExhaustedException throw Exceptions.sneakyThrow(e); } @@ -587,6 +588,11 @@ public void flush() throws SegmentSealedException { throw new SegmentSealedException(segmentName + " sealed for writer " + writerId); } + } else if (state.exception instanceof RetriesExhaustedException) { + // All attempts to connect with SSS have failed. + // The number of retry attempts is based on EventWriterConfig + log.error("Flush on segment {} by writer {} failed after all retries", segmentName, writerId); + throw Exceptions.sneakyThrow(state.exception); } } diff --git a/client/src/main/java/io/pravega/client/state/impl/RevisionImpl.java b/client/src/main/java/io/pravega/client/state/impl/RevisionImpl.java index 66e3128a555..aee715df459 100644 --- a/client/src/main/java/io/pravega/client/state/impl/RevisionImpl.java +++ b/client/src/main/java/io/pravega/client/state/impl/RevisionImpl.java @@ -18,7 +18,6 @@ import com.google.common.base.Preconditions; import io.pravega.client.segment.impl.Segment; import io.pravega.client.state.Revision; -import java.io.ObjectStreamException; import java.io.Serializable; import lombok.AccessLevel; import lombok.Data; @@ -71,7 +70,7 @@ public static Revision fromString(String scopedName) { } } - private Object writeReplace() throws ObjectStreamException { + private Object writeReplace() { return new SerializedForm(toString()); } @@ -79,7 +78,7 @@ private Object writeReplace() throws ObjectStreamException { private static class SerializedForm implements Serializable { private static final long serialVersionUID = 1L; private final String value; - Object readResolve() throws ObjectStreamException { + Object readResolve() { return Revision.fromString(value); } } diff --git a/client/src/main/java/io/pravega/client/state/impl/RevisionedStreamClientImpl.java b/client/src/main/java/io/pravega/client/state/impl/RevisionedStreamClientImpl.java index 27f7df53d27..67142304912 100644 --- a/client/src/main/java/io/pravega/client/state/impl/RevisionedStreamClientImpl.java +++ b/client/src/main/java/io/pravega/client/state/impl/RevisionedStreamClientImpl.java @@ -133,14 +133,17 @@ public void writeUnconditionally(T value) { } @Override - public Iterator> readFrom(Revision start) { - log.trace("Read segment {} from revision {}", segment, start); + public Iterator> 
readFrom(Revision revision) { + log.trace("Read segment {} from revision {}", segment, revision); synchronized (lock) { - long startOffset = start.asImpl().getOffsetInSegment(); + long startOffset = revision.asImpl().getOffsetInSegment(); SegmentInfo segmentInfo = Futures.getThrowingException(meta.getSegmentInfo()); long endOffset = segmentInfo.getWriteOffset(); if (startOffset < segmentInfo.getStartingOffset()) { - throw new TruncatedDataException(format("Data at the supplied revision {%s} has been truncated. The current segment info is {%s}", start, segmentInfo)); + throw new TruncatedDataException(format("Data at the supplied revision {%s} has been truncated. The current segment info is {%s}", revision, segmentInfo)); + } + if (startOffset == endOffset) { + log.debug("No new updates to be read from revision {}", revision); } log.debug("Creating iterator from {} until {} for segment {} ", startOffset, endOffset, segment); return new StreamIterator(startOffset, endOffset); @@ -232,6 +235,7 @@ public Revision fetchOldestRevision() { @Override public void truncateToRevision(Revision newStart) { Futures.getThrowingException(meta.truncateSegment(newStart.asImpl().getOffsetInSegment())); + log.info("Truncate segment {} to revision {}", newStart.asImpl().getSegment(), newStart); } @Override diff --git a/client/src/main/java/io/pravega/client/state/impl/StateSynchronizerImpl.java b/client/src/main/java/io/pravega/client/state/impl/StateSynchronizerImpl.java index 999b35fee6d..8e6490850f0 100644 --- a/client/src/main/java/io/pravega/client/state/impl/StateSynchronizerImpl.java +++ b/client/src/main/java/io/pravega/client/state/impl/StateSynchronizerImpl.java @@ -91,15 +91,13 @@ public void fetchUpdates() { log.trace("Found entry {} ", entry.getValue()); if (entry.getValue().isInit()) { InitialUpdate init = entry.getValue().getInit(); - if (isNewer(entry.getKey())) { - updateCurrentState(init.create(segment.getScopedStreamName(), entry.getKey())); - } + updateCurrentState(init.create(segment.getScopedStreamName(), entry.getKey())); } else { applyUpdates(entry.getKey().asImpl(), entry.getValue().getUpdates()); } } } catch (TruncatedDataException e) { - log.info("{} encountered truncation on segment {}", this, segment); + log.info("{} encountered truncation on segment {}, Details: {}", this, segment, e.getMessage()); RETRY_INDEFINITELY .retryingOn(TruncatedDataException.class) .throwingOn(RuntimeException.class) @@ -119,10 +117,8 @@ private Void handleTruncation() { if (entry.getValue().isInit()) { log.trace("Found entry {} ", entry.getValue()); InitialUpdate init = entry.getValue().getInit(); - if (isNewer(currentRevision)) { - updateCurrentState(init.create(segment.getScopedStreamName(), currentRevision)); - foundInit = true; - } + foundInit = true; + updateCurrentState(init.create(segment.getScopedStreamName(), currentRevision)); } } if (!foundInit) { @@ -214,6 +210,7 @@ public void compact(Function> compactor) { Revision oldMark = client.getMark(); if (oldMark == null || oldMark.compareTo(newMark) < 0) { client.compareAndSetMark(oldMark, newMark); + log.info("Compacted state is written at {} the oldMark is {}", newMark, oldMark); } if (oldMark != null) { client.truncateToRevision(oldMark); @@ -254,7 +251,11 @@ private void conditionallyWrite(Function> generator @Synchronized private boolean isNewer(Revision revision) { - return currentState == null || currentState.getRevision().compareTo(revision) < 0; + boolean result = currentState == null || currentState.getRevision().compareTo(revision) < 0; 
+ if (!result ) { + log.debug("In memory state {} is newer than the provided revision {}", currentState.getRevision(), revision); + } + return result; } @Synchronized diff --git a/client/src/main/java/io/pravega/client/stream/EventWriterConfig.java b/client/src/main/java/io/pravega/client/stream/EventWriterConfig.java index c78dbca8e89..b96547ff0cb 100644 --- a/client/src/main/java/io/pravega/client/stream/EventWriterConfig.java +++ b/client/src/main/java/io/pravega/client/stream/EventWriterConfig.java @@ -80,7 +80,7 @@ public class EventWriterConfig implements Serializable { * of the lease renewal period. The 1,000 is hardcoded and has been chosen arbitrarily * to be a large enough value. * - * The maximum allowed lease time by default is 120s, see: + * The maximum allowed lease time by default is 600s, see: * * {@link io.pravega.controller.util.Config.PROPERTY_TXN_MAX_LEASE} * @@ -114,7 +114,7 @@ public static final class EventWriterConfigBuilder { private int maxBackoffMillis = 20000; private int retryAttempts = 10; private int backoffMultiple = 10; - private long transactionTimeoutTime = 90 * 1000 - 1; + private long transactionTimeoutTime = 600 * 1000 - 1; private boolean automaticallyNoteTime = false; // connection pooling for event writers is disabled by default. private boolean enableConnectionPooling = false; diff --git a/client/src/main/java/io/pravega/client/stream/ReaderGroupConfig.java b/client/src/main/java/io/pravega/client/stream/ReaderGroupConfig.java index f206f028940..64ae62ac7bb 100644 --- a/client/src/main/java/io/pravega/client/stream/ReaderGroupConfig.java +++ b/client/src/main/java/io/pravega/client/stream/ReaderGroupConfig.java @@ -50,7 +50,6 @@ public class ReaderGroupConfig implements Serializable { public static final UUID DEFAULT_UUID = new UUID(0L, 0L); public static final long DEFAULT_GENERATION = -1; - private static final long serialVersionUID = 1L; private static final ReaderGroupConfigSerializer SERIALIZER = new ReaderGroupConfigSerializer(); private final long groupRefreshTimeMillis; @@ -68,7 +67,6 @@ public class ReaderGroupConfig implements Serializable { private final long generation; @EqualsAndHashCode.Exclude private final UUID readerGroupId; - /** * If a Reader Group wants unconsumed data to be retained in a Stream, * the retentionType in {@link ReaderGroupConfig} should be set to @@ -280,7 +278,8 @@ public ReaderGroupConfig build() { "Outstanding checkpoint request should be greater than zero"); return new ReaderGroupConfig(groupRefreshTimeMillis, automaticCheckpointIntervalMillis, - startingStreamCuts, endingStreamCuts, maxOutstandingCheckpointRequest, retentionType, generation, readerGroupId); + startingStreamCuts, endingStreamCuts, maxOutstandingCheckpointRequest, retentionType, + generation, readerGroupId); } private void validateStartAndEndStreamCuts(Map startStreamCuts, diff --git a/client/src/main/java/io/pravega/client/stream/Sequence.java b/client/src/main/java/io/pravega/client/stream/Sequence.java index a85335cb425..6872651edfe 100644 --- a/client/src/main/java/io/pravega/client/stream/Sequence.java +++ b/client/src/main/java/io/pravega/client/stream/Sequence.java @@ -15,7 +15,6 @@ */ package io.pravega.client.stream; -import java.io.ObjectStreamException; import java.io.Serializable; import java.util.UUID; import lombok.Data; @@ -41,7 +40,7 @@ public int compareTo(Sequence o) { return result; } - private Object writeReplace() throws ObjectStreamException { + private Object writeReplace() { return new SerializedForm(new UUID(highOrder, 
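Given the new 600 s default ceiling on transaction leases documented above, a writer wanting a shorter timeout would set it explicitly; a sketch with an illustrative value:

    // Sketch: transactionTimeoutTime must not exceed the server-side maximum lease
    // (600 s by default after this change).
    EventWriterConfig writerConfig = EventWriterConfig.builder()
            .transactionTimeoutTime(120 * 1000)   // request a 2-minute lease
            .build();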
lowOrder)); } @@ -49,7 +48,7 @@ private Object writeReplace() throws ObjectStreamException { private static class SerializedForm implements Serializable { private static final long serialVersionUID = 1L; private final UUID value; - Object readResolve() throws ObjectStreamException { + Object readResolve() { return new Sequence(value.getMostSignificantBits(), value.getLeastSignificantBits()); } } diff --git a/client/src/main/java/io/pravega/client/stream/StreamConfiguration.java b/client/src/main/java/io/pravega/client/stream/StreamConfiguration.java index 809fdd26580..ec96fad9a23 100644 --- a/client/src/main/java/io/pravega/client/stream/StreamConfiguration.java +++ b/client/src/main/java/io/pravega/client/stream/StreamConfiguration.java @@ -83,12 +83,24 @@ public class StreamConfiguration implements Serializable { @EqualsAndHashCode.Exclude private final Set tags; + /** + * API to return segment rollover size. + * The default value for this field is 0. + * If default value is passed down to the server, a non-zero value defined in the server + * will be used for the actual rollover size. + * + * @param rolloverSizeBytes The segment rollover size in this stream. + * @return Rollover size for the segment in this Stream. + */ + private final long rolloverSizeBytes; + public static final class StreamConfigurationBuilder { private ScalingPolicy scalingPolicy = ScalingPolicy.fixed(1); public StreamConfiguration build() { Set tagSet = validateTags(this.tags); - return new StreamConfiguration(this.scalingPolicy, this.retentionPolicy, this.timestampAggregationTimeout, tagSet); + Preconditions.checkArgument(this.rolloverSizeBytes >= 0, String.format("Segment rollover size bytes cannot be less than 0, actual is %s", this.rolloverSizeBytes)); + return new StreamConfiguration(this.scalingPolicy, this.retentionPolicy, this.timestampAggregationTimeout, tagSet, this.rolloverSizeBytes); } private Set validateTags(List tags) { diff --git a/client/src/main/java/io/pravega/client/stream/impl/ClientFactoryImpl.java b/client/src/main/java/io/pravega/client/stream/impl/ClientFactoryImpl.java index 29d3e530e71..397b82db148 100644 --- a/client/src/main/java/io/pravega/client/stream/impl/ClientFactoryImpl.java +++ b/client/src/main/java/io/pravega/client/stream/impl/ClientFactoryImpl.java @@ -65,11 +65,11 @@ import io.pravega.shared.NameUtils; import io.pravega.shared.security.auth.AccessOperation; import java.util.UUID; +import java.util.concurrent.ExecutorService; import java.util.concurrent.ScheduledExecutorService; -import java.util.concurrent.ThreadPoolExecutor; import java.util.function.Supplier; -import lombok.extern.slf4j.Slf4j; import lombok.val; +import lombok.extern.slf4j.Slf4j; import static io.pravega.common.concurrent.ExecutorServiceHelpers.newScheduledThreadPool; @@ -170,7 +170,7 @@ public EventStreamWriter createEventWriter(String writerId, String stream NameUtils.validateWriterId(writerId); log.info("Creating writer: {} for stream: {} with configuration: {}", writerId, streamName, config); Stream stream = new StreamImpl(scope, streamName); - ThreadPoolExecutor retransmitPool = ExecutorServiceHelpers.getShrinkingExecutor(1, 100, + ExecutorService retransmitPool = ExecutorServiceHelpers.getShrinkingExecutor(1, 100, "ScalingRetransmission-" + stream.getScopedName()); try { return new EventStreamWriterImpl(stream, writerId, controller, outFactory, s, config, retransmitPool, connectionPool.getInternalExecutor(), connectionPool); diff --git 
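A construction sketch for the new StreamConfiguration rolloverSizeBytes field (the 64 MB value is illustrative; 0, the default, defers to the server-side setting, and negative values are rejected at build() time):

    StreamConfiguration streamConfig = StreamConfiguration.builder()
            .scalingPolicy(ScalingPolicy.fixed(2))
            .rolloverSizeBytes(64 * 1024 * 1024)   // roll segment chunks at ~64 MB
            .build();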
a/client/src/main/java/io/pravega/client/stream/impl/EventStreamReaderImpl.java b/client/src/main/java/io/pravega/client/stream/impl/EventStreamReaderImpl.java index 7f6577b8c68..d38f4d4d627 100644 --- a/client/src/main/java/io/pravega/client/stream/impl/EventStreamReaderImpl.java +++ b/client/src/main/java/io/pravega/client/stream/impl/EventStreamReaderImpl.java @@ -393,6 +393,10 @@ private void handleSegmentTruncated(EventSegmentReader segmentReader) throws Tru DelegationTokenProviderFactory.create(controller, segmentId, AccessOperation.READ)); try { long startingOffset = Futures.getThrowingException(metadataClient.getSegmentInfo()).getStartingOffset(); + if (segmentReader.getOffset() == startingOffset) { + log.warn("Attempt to fetch the next available read offset on the segment {} returned a truncated offset {}", + segmentId, startingOffset); + } segmentReader.setOffset(startingOffset); } catch (NoSuchSegmentException e) { handleEndOfSegment(segmentReader, true); diff --git a/client/src/main/java/io/pravega/client/stream/impl/EventStreamWriterImpl.java b/client/src/main/java/io/pravega/client/stream/impl/EventStreamWriterImpl.java index 4ba07bce9cf..56440aa1a8d 100644 --- a/client/src/main/java/io/pravega/client/stream/impl/EventStreamWriterImpl.java +++ b/client/src/main/java/io/pravega/client/stream/impl/EventStreamWriterImpl.java @@ -295,6 +295,7 @@ public void flush() { Preconditions.checkState(!closed.get()); synchronized (writeFlushLock) { boolean success = false; + RuntimeException retriesExhaustedException = null; while (!success) { success = true; for (SegmentOutputStream writer : selector.getWriters().values()) { @@ -304,12 +305,23 @@ public void flush() { // Segment sealed exception observed during a flush. Re-run flush on all the // available writers. success = false; - log.warn("Flush on segment {} failed due to {}, it will be retried.", writer.getSegmentName(), e.getMessage()); + log.warn("Flush on segment {} by event writer {} failed due to {}, it will be retried.", + writer.getSegmentName(), writerId, e.getMessage()); tryWaitForSuccessors(); break; + } catch (RetriesExhaustedException e1) { + // Ensure a flush is invoked on all the segment writers before throwing a RetriesExhaustedException. 
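Since flush() now rethrows RetriesExhaustedException only after attempting a flush on every segment writer, callers can treat it as a terminal failure; a sketch (the recovery hook is hypothetical):

    try {
        writer.flush();
    } catch (RetriesExhaustedException e) {
        // All connection retries configured in EventWriterConfig are exhausted;
        // the writer should be closed and the application must recover or fail over.
        recoverFromWriterFailure(e);   // hypothetical application callback
    }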
+ log.warn("Flush on segment {} by event writer {} failed after all configured retries", + writer.getSegmentName(), writerId); + retriesExhaustedException = e1; } } } + if (retriesExhaustedException != null) { + log.error("Flush by writer {} on Stream {} failed after all retries to connect with Pravega exhausted.", + writerId, stream.getScopedName()); + throw retriesExhaustedException; + } } } diff --git a/client/src/main/java/io/pravega/client/stream/impl/StreamCutImpl.java b/client/src/main/java/io/pravega/client/stream/impl/StreamCutImpl.java index 2d8c3634bee..72abc36ee3d 100644 --- a/client/src/main/java/io/pravega/client/stream/impl/StreamCutImpl.java +++ b/client/src/main/java/io/pravega/client/stream/impl/StreamCutImpl.java @@ -29,7 +29,6 @@ import io.pravega.common.util.ToStringUtils; import io.pravega.shared.NameUtils; import java.io.IOException; -import java.io.ObjectStreamException; import java.io.Serializable; import java.nio.ByteBuffer; import java.util.ArrayList; @@ -219,7 +218,7 @@ public static StreamCutInternal fromBytes(ByteBuffer buff) { } @SneakyThrows(IOException.class) - private Object writeReplace() throws ObjectStreamException { + private Object writeReplace() { return new SerializedForm(SERIALIZER.serialize(this).getCopy()); } @@ -228,7 +227,7 @@ private static class SerializedForm implements Serializable { private static final long serialVersionUID = 1L; private final byte[] value; @SneakyThrows(IOException.class) - Object readResolve() throws ObjectStreamException { + Object readResolve() { return SERIALIZER.deserialize(new ByteArraySegment(value)); } } diff --git a/client/src/main/java/io/pravega/client/stream/impl/StreamImpl.java b/client/src/main/java/io/pravega/client/stream/impl/StreamImpl.java index a13c0686001..4d5a7a74072 100644 --- a/client/src/main/java/io/pravega/client/stream/impl/StreamImpl.java +++ b/client/src/main/java/io/pravega/client/stream/impl/StreamImpl.java @@ -16,7 +16,6 @@ package io.pravega.client.stream.impl; import com.google.common.base.Preconditions; -import java.io.ObjectStreamException; import java.io.Serializable; import lombok.AllArgsConstructor; import lombok.Data; @@ -41,7 +40,7 @@ public StreamImpl(String scope, String streamName) { this.streamName = streamName; } - private Object writeReplace() throws ObjectStreamException { + private Object writeReplace() { return new SerializedForm(getScopedName()); } @@ -50,7 +49,7 @@ private Object writeReplace() throws ObjectStreamException { private static class SerializedForm implements Serializable { private static final long serialVersionUID = 1L; private String value; - Object readResolve() throws ObjectStreamException { + Object readResolve() { return StreamInternal.fromScopedName(value); } } diff --git a/client/src/main/java/io/pravega/client/stream/notifications/notifier/EndOfDataNotifier.java b/client/src/main/java/io/pravega/client/stream/notifications/notifier/EndOfDataNotifier.java index d5b0bedcc5d..7d5434088f6 100644 --- a/client/src/main/java/io/pravega/client/stream/notifications/notifier/EndOfDataNotifier.java +++ b/client/src/main/java/io/pravega/client/stream/notifications/notifier/EndOfDataNotifier.java @@ -60,7 +60,9 @@ public String getType() { private void checkAndTriggerEndOfStreamNotification() { this.synchronizer.fetchUpdates(); ReaderGroupState state = this.synchronizer.getState(); - if (state.isEndOfData()) { + if (state == null) { + log.warn("Current state of StateSynchronizer {} is null, will try again.", synchronizer); + } else if (state.isEndOfData()) { 
notifySystem.notify(new EndOfDataNotification()); } } diff --git a/client/src/main/java/io/pravega/client/stream/notifications/notifier/SegmentNotifier.java b/client/src/main/java/io/pravega/client/stream/notifications/notifier/SegmentNotifier.java index c06d7e86646..41bbf1160e6 100644 --- a/client/src/main/java/io/pravega/client/stream/notifications/notifier/SegmentNotifier.java +++ b/client/src/main/java/io/pravega/client/stream/notifications/notifier/SegmentNotifier.java @@ -25,6 +25,7 @@ import io.pravega.client.stream.notifications.NotificationSystem; import io.pravega.client.stream.notifications.SegmentNotification; import javax.annotation.concurrent.GuardedBy; + import lombok.Synchronized; import lombok.extern.slf4j.Slf4j; @@ -65,17 +66,22 @@ public String getType() { private void checkAndTriggerSegmentNotification() { this.synchronizer.fetchUpdates(); ReaderGroupState state = this.synchronizer.getState(); - int newNumberOfSegments = state.getNumberOfSegments(); - checkState(newNumberOfSegments > 0, "Number of segments cannot be zero"); + if (state == null ) { + log.warn("Current state of StateSynchronizer {} is null, will try again.", synchronizer); + } else { + int newNumberOfSegments = state.getNumberOfSegments(); + log.debug("Number of segments in {} is {}", synchronizer, newNumberOfSegments); + checkState(newNumberOfSegments > 0, "Number of segments cannot be zero"); - //Trigger a notification with the initial number of segments. - //Subsequent notifications are triggered only if there is a change in the number of segments. - if (this.numberOfSegments != newNumberOfSegments) { - this.numberOfSegments = newNumberOfSegments; - SegmentNotification notification = SegmentNotification.builder().numOfSegments(state.getNumberOfSegments()) - .numOfReaders(state.getOnlineReaders().size()) - .build(); - notifySystem.notify(notification); + //Trigger a notification with the initial number of segments. + //Subsequent notifications are triggered only if there is a change in the number of segments. + if (this.numberOfSegments != newNumberOfSegments) { + this.numberOfSegments = newNumberOfSegments; + SegmentNotification notification = SegmentNotification.builder().numOfSegments(state.getNumberOfSegments()) + .numOfReaders(state.getOnlineReaders().size()) + .build(); + notifySystem.notify(notification); + } } } } diff --git a/client/src/main/java/io/pravega/client/tables/KeyValueTableConfiguration.java b/client/src/main/java/io/pravega/client/tables/KeyValueTableConfiguration.java index a3f1ced0a46..689075c7cbe 100644 --- a/client/src/main/java/io/pravega/client/tables/KeyValueTableConfiguration.java +++ b/client/src/main/java/io/pravega/client/tables/KeyValueTableConfiguration.java @@ -50,10 +50,22 @@ public class KeyValueTableConfiguration implements Serializable { * The number of bytes for the Secondary Key. This value cannot be changed after the Key-Value Table has been created. * * @param secondaryKeyLength The number of bytes for the Secondary Key. - * @return The number of bytes for the Primary Key. + * @return The number of bytes for the Secondary Key. */ private final int secondaryKeyLength; + /** + * The rollover size for table segment in LTS. + * + * The default value for this field is 0. + * If default value is passed down to the server, a non-zero value defined in the server + * will be used for the actual rollover size. + * + * @param rolloverSizeBytes The rollover size for the table segment in LTS. + * @return The rollover size for the table segment in LTS. 
+ */ + private final long rolloverSizeBytes; + /** * The total number of bytes for the key (includes Primary and Secondary). * @@ -73,7 +85,8 @@ public KeyValueTableConfiguration build() { Preconditions.checkArgument(this.partitionCount > 0, "partitionCount must be a positive integer. Given %s.", this.partitionCount); Preconditions.checkArgument(this.primaryKeyLength > 0, "primaryKeyLength must be a positive integer. Given %s.", this.primaryKeyLength); Preconditions.checkArgument(this.secondaryKeyLength >= 0, "secondaryKeyLength must be a non-negative integer. Given %s.", this.secondaryKeyLength); - return new KeyValueTableConfiguration(this.partitionCount, this.primaryKeyLength, this.secondaryKeyLength); + Preconditions.checkArgument(this.rolloverSizeBytes >= 0, String.format("Segment rollover size bytes cannot be less than 0, actual is %s", this.rolloverSizeBytes)); + return new KeyValueTableConfiguration(this.partitionCount, this.primaryKeyLength, this.secondaryKeyLength, this.rolloverSizeBytes); } } } diff --git a/client/src/main/java/io/pravega/client/tables/impl/KeyValueTableImpl.java b/client/src/main/java/io/pravega/client/tables/impl/KeyValueTableImpl.java index 00f3b155ff0..56f93c4b7d0 100644 --- a/client/src/main/java/io/pravega/client/tables/impl/KeyValueTableImpl.java +++ b/client/src/main/java/io/pravega/client/tables/impl/KeyValueTableImpl.java @@ -153,6 +153,7 @@ public CompletableFuture exists(@NonNull TableKey key) { return update(new Remove(key, Version.NOT_EXISTS)) .handle((r, ex) -> { if (ex != null) { + ex = Exceptions.unwrap(ex); if (ex instanceof ConditionalTableUpdateException) { return true; } else { diff --git a/client/src/main/java/io/pravega/client/tables/impl/TableSegment.java b/client/src/main/java/io/pravega/client/tables/impl/TableSegment.java index e542822753d..d85b6e0264c 100644 --- a/client/src/main/java/io/pravega/client/tables/impl/TableSegment.java +++ b/client/src/main/java/io/pravega/client/tables/impl/TableSegment.java @@ -176,6 +176,21 @@ default CompletableFuture get(ByteBuf key) { */ AsyncIterator> entryIterator(SegmentIteratorArgs args); + /** + * Gets the number of entries in the Table Segment. + * + * NOTE: this is an "eventually consistent" value: + *
+ * <ul>
+ * <li>In-flight (not yet acknowledged) updates and removals are not included.</li>
+ * <li>Recently acknowledged updates and removals may or may not be included (depending on whether they were
+ * conditional or not). As the index is updated (in the background), this value will eventually converge towards the
+ * actual number of entries in the Table Segment.</li>
+ * </ul>
+ * + * @return A CompletableFuture that, when completed, will contain the number of entries in the Table Segment. + */ + CompletableFuture getEntryCount(); + /** * Gets a value indicating the internal Id of the Table Segment, as assigned by the Controller. * diff --git a/client/src/main/java/io/pravega/client/tables/impl/TableSegmentImpl.java b/client/src/main/java/io/pravega/client/tables/impl/TableSegmentImpl.java index 974db97a765..54de33917a1 100644 --- a/client/src/main/java/io/pravega/client/tables/impl/TableSegmentImpl.java +++ b/client/src/main/java/io/pravega/client/tables/impl/TableSegmentImpl.java @@ -200,6 +200,16 @@ public AsyncIterator> entryIterator(@NonNull Seg .asSequential(this.connectionPool.getInternalExecutor()); } + @Override + public CompletableFuture getEntryCount() { + return this.readContext.execute((state, requestId) -> { + val request = new WireCommands.GetTableSegmentInfo(requestId, this.segmentName, state.getToken()); + + return sendRequest(request, state, WireCommands.TableSegmentInfo.class) + .thenApply(WireCommands.TableSegmentInfo::getEntryCount); + }); + } + /** * Fetches a collection of items as part of an async iterator. * diff --git a/client/src/main/java/io/pravega/client/tables/impl/TableSegmentKeyVersion.java b/client/src/main/java/io/pravega/client/tables/impl/TableSegmentKeyVersion.java index f978526c99d..08f0426fd6c 100644 --- a/client/src/main/java/io/pravega/client/tables/impl/TableSegmentKeyVersion.java +++ b/client/src/main/java/io/pravega/client/tables/impl/TableSegmentKeyVersion.java @@ -23,7 +23,6 @@ import io.pravega.common.util.ByteBufferUtils; import io.pravega.shared.protocol.netty.WireCommands; import java.io.IOException; -import java.io.ObjectStreamException; import java.io.Serializable; import java.nio.ByteBuffer; import lombok.Builder; @@ -128,7 +127,7 @@ private void write00(TableSegmentKeyVersion version, RevisionDataOutput revision * {@link java.io.ObjectOutputStream} is preparing to write the object to the stream. */ @SneakyThrows(IOException.class) - private Object writeReplace() throws ObjectStreamException { + private Object writeReplace() { return new TableSegmentKeyVersion.SerializedForm(SERIALIZER.serialize(this).getCopy()); } @@ -138,7 +137,7 @@ private static class SerializedForm implements Serializable { private final byte[] value; @SneakyThrows(IOException.class) - Object readResolve() throws ObjectStreamException { + Object readResolve() { return SERIALIZER.deserialize(new ByteArraySegment(value)); } } diff --git a/client/src/test/java/io/pravega/client/CredentialsExtractorTest.java b/client/src/test/java/io/pravega/client/CredentialsExtractorTest.java index db233aed942..aacf7b13cb0 100644 --- a/client/src/test/java/io/pravega/client/CredentialsExtractorTest.java +++ b/client/src/test/java/io/pravega/client/CredentialsExtractorTest.java @@ -273,6 +273,7 @@ public String getAuthenticationToken() { } } + @SuppressWarnings("deprecation") public static class LegacyCredentials1 implements io.pravega.client.stream.impl.Credentials { private static final String TOKEN = "custom-token-legacy"; private static final String AUTHENTICATION_METHOD = "custom-method-legacy"; @@ -293,6 +294,7 @@ public String getAuthenticationToken() { * See how they are loaded differently in the corresponding service definition files under * resources/META-INF/services. 
*/ + @SuppressWarnings("deprecation") public static class LegacyCredentials2 implements io.pravega.client.stream.impl.Credentials { private static final String TOKEN = "custom-token-legacy-2"; private static final String AUTHENTICATION_METHOD = "custom-method-legacy-2"; diff --git a/client/src/test/java/io/pravega/client/batch/impl/BatchClientImplTest.java b/client/src/test/java/io/pravega/client/batch/impl/BatchClientImplTest.java index 0be68b3a4ee..191fa8c5e8a 100644 --- a/client/src/test/java/io/pravega/client/batch/impl/BatchClientImplTest.java +++ b/client/src/test/java/io/pravega/client/batch/impl/BatchClientImplTest.java @@ -86,6 +86,7 @@ public void testGetSegmentsWithStreamCut() throws Exception { MockConnectionFactoryImpl connectionFactory = getMockConnectionFactory(location); MockController mockController = new MockController(location.getEndpoint(), location.getPort(), connectionFactory, false); Stream stream = createStream(SCOPE, STREAM, 3, mockController); + @Cleanup BatchClientFactoryImpl client = new BatchClientFactoryImpl(mockController, ClientConfig.builder().maxConnectionsPerSegmentStore(1).build(), connectionFactory); Iterator boundedSegments = client.getSegments(stream, getStreamCut(5L, 0, 1, 2), getStreamCut(15L, 0, 1, 2)).getIterator(); diff --git a/client/src/test/java/io/pravega/client/byteStream/ByteStreamReaderTest.java b/client/src/test/java/io/pravega/client/byteStream/ByteStreamReaderTest.java index cc23bac0c9c..99a06018d07 100644 --- a/client/src/test/java/io/pravega/client/byteStream/ByteStreamReaderTest.java +++ b/client/src/test/java/io/pravega/client/byteStream/ByteStreamReaderTest.java @@ -22,7 +22,6 @@ import io.pravega.client.stream.mock.MockConnectionFactoryImpl; import io.pravega.client.stream.mock.MockController; import io.pravega.client.stream.mock.MockSegmentStreamFactory; -import io.pravega.shared.protocol.netty.ConnectionFailedException; import io.pravega.shared.protocol.netty.PravegaNodeUri; import io.pravega.test.common.AssertExtensions; import lombok.Cleanup; @@ -42,7 +41,7 @@ public class ByteStreamReaderTest { private ByteStreamClientFactory clientFactory; @Before - public void setup() throws ConnectionFailedException { + public void setup() { PravegaNodeUri endpoint = new PravegaNodeUri("localhost", 0); connectionFactory = new MockConnectionFactoryImpl(); controller = new MockController(endpoint.getEndpoint(), endpoint.getPort(), connectionFactory, false); @@ -67,6 +66,7 @@ public void testReadWritten() throws Exception { @Cleanup ByteStreamWriter writer = clientFactory.createByteStreamWriter(STREAM); byte[] value = new byte[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; + int headOffset = 0; writer.write(value); writer.flush(); @Cleanup @@ -74,8 +74,14 @@ public void testReadWritten() throws Exception { for (int i = 0; i < 10; i++) { assertEquals(i, reader.read()); } + assertEquals(headOffset, reader.fetchHeadOffset()); + assertEquals(value.length, reader.fetchTailOffset()); + headOffset = 3; + writer.truncateDataBefore(headOffset); writer.write(value); writer.flush(); + assertEquals(headOffset, reader.fetchHeadOffset()); + assertEquals(value.length * 2, reader.fetchTailOffset()); byte[] read = new byte[5]; assertEquals(5, reader.read(read)); assertArrayEquals(new byte[] { 0, 1, 2, 3, 4 }, read); diff --git a/client/src/test/java/io/pravega/client/byteStream/ByteStreamWriterTest.java b/client/src/test/java/io/pravega/client/byteStream/ByteStreamWriterTest.java index 51e5b88ba12..2b60c7304c2 100644 --- 
a/client/src/test/java/io/pravega/client/byteStream/ByteStreamWriterTest.java +++ b/client/src/test/java/io/pravega/client/byteStream/ByteStreamWriterTest.java @@ -26,11 +26,11 @@ import io.pravega.client.stream.mock.MockConnectionFactoryImpl; import io.pravega.client.stream.mock.MockController; import io.pravega.client.stream.mock.MockSegmentStreamFactory; -import io.pravega.shared.protocol.netty.ConnectionFailedException; import io.pravega.shared.protocol.netty.PravegaNodeUri; import java.io.IOException; import java.nio.ByteBuffer; import java.util.Arrays; + import lombok.Cleanup; import org.junit.After; import org.junit.Before; @@ -49,7 +49,7 @@ public class ByteStreamWriterTest { private ByteStreamClientFactory clientFactory; @Before - public void setup() throws ConnectionFailedException { + public void setup() { PravegaNodeUri endpoint = new PravegaNodeUri("localhost", 0); connectionFactory = new MockConnectionFactoryImpl(); ClientConnection connection = Mockito.mock(ClientConnection.class); @@ -75,13 +75,18 @@ public void teardown() { public void testWrite() throws Exception { @Cleanup ByteStreamWriter writer = clientFactory.createByteStreamWriter(STREAM); - byte[] value = new byte[] { 1, 2, 3, 4, 5 }; + byte[] value = new byte[] { 1, 2, 3, 4, 5, 6, 7 }; + int headoffset = 0; writer.write(value); writer.flush(); + assertEquals(headoffset, writer.fetchHeadOffset()); assertEquals(value.length, writer.fetchTailOffset()); writer.write(value); writer.write(value); + headoffset = 5; + writer.truncateDataBefore(headoffset); writer.flush(); + assertEquals(headoffset, writer.fetchHeadOffset()); assertEquals(value.length * 3, writer.fetchTailOffset()); } diff --git a/client/src/test/java/io/pravega/client/connection/impl/CommandEncoderTest.java b/client/src/test/java/io/pravega/client/connection/impl/CommandEncoderTest.java index 29f53920253..ae5a3bf670c 100644 --- a/client/src/test/java/io/pravega/client/connection/impl/CommandEncoderTest.java +++ b/client/src/test/java/io/pravega/client/connection/impl/CommandEncoderTest.java @@ -17,9 +17,12 @@ import io.netty.buffer.ByteBuf; import io.netty.buffer.Unpooled; +import io.pravega.common.ObjectClosedException; import io.pravega.shared.protocol.netty.Append; import io.pravega.shared.protocol.netty.AppendBatchSizeTracker; +import io.pravega.shared.protocol.netty.FailingReplyProcessor; import io.pravega.shared.protocol.netty.InvalidMessageException; +import io.pravega.shared.protocol.netty.PravegaNodeUri; import io.pravega.shared.protocol.netty.WireCommand; import io.pravega.shared.protocol.netty.WireCommands; import io.pravega.shared.protocol.netty.WireCommands.AppendBlock; @@ -30,6 +33,8 @@ import java.io.OutputStream; import java.util.ArrayList; import java.util.UUID; +import java.util.concurrent.atomic.AtomicInteger; + import lombok.RequiredArgsConstructor; import org.junit.Test; import org.mockito.Mockito; @@ -40,7 +45,8 @@ import static org.junit.Assert.assertTrue; public class CommandEncoderTest { - + private static final int SERVICE_PORT = 12345; + @RequiredArgsConstructor private static class FixedBatchSizeTracker implements AppendBatchSizeTracker { private final int batchSize; @@ -90,11 +96,12 @@ public void write(byte[] buf, int offset, int length) throws IOException { public void testRoundTrip() throws IOException { AppendBatchSizeTrackerImpl batchSizeTracker = new AppendBatchSizeTrackerImpl(); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, 
output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); WireCommand command = new WireCommands.Hello(0, 1); commandEncoder.write(command); assertEquals(output.decoded.remove(0), command); - command = new WireCommands.CreateTableSegment(0, "segment", false, 16, ""); + command = new WireCommands.CreateTableSegment(0, "segment", false, 16, "", 1024 * 1024 * 1024); commandEncoder.write(command); assertEquals(output.decoded.remove(0), command); command = new WireCommands.TruncateSegment(12, "s", 354, "d"); @@ -106,7 +113,8 @@ public void testRoundTrip() throws IOException { public void testAppendsAreBatched() throws IOException { AppendBatchSizeTracker batchSizeTracker = new FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId, "seg", ""); commandEncoder.write(setupAppend); @@ -133,7 +141,8 @@ public void testAppendsAreBatched() throws IOException { public void testExactBatch() throws IOException { AppendBatchSizeTracker batchSizeTracker = new FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId, "seg", ""); commandEncoder.write(setupAppend); @@ -156,7 +165,8 @@ public void testExactBatch() throws IOException { public void testOverBatchSize() throws IOException { AppendBatchSizeTracker batchSizeTracker = new FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId, "seg", ""); commandEncoder.write(setupAppend); @@ -179,7 +189,8 @@ public void testOverBatchSize() throws IOException { public void testBatchInterupted() throws IOException { AppendBatchSizeTracker batchSizeTracker = new FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId, "seg", ""); commandEncoder.write(setupAppend); @@ -207,7 +218,8 @@ public void testBatchInterupted() throws IOException { public void testBatchTimeout() throws IOException { AppendBatchSizeTracker batchSizeTracker = new 
FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId, "seg", ""); commandEncoder.write(setupAppend); @@ -235,7 +247,8 @@ public void testBatchTimeout() throws IOException { public void testAppendsQueued() throws IOException { AppendBatchSizeTracker batchSizeTracker = new FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId1 = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId1, "seg", ""); commandEncoder.write(setupAppend); @@ -278,7 +291,8 @@ public void testAppendsQueued() throws IOException { public void testAppendsQueuedBreak() throws IOException { AppendBatchSizeTracker batchSizeTracker = new FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId1 = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId1, "seg", ""); commandEncoder.write(setupAppend); @@ -323,7 +337,8 @@ public void testAppendsQueuedBreak() throws IOException { public void testAppendSizeQueuedBreak() throws IOException { AppendBatchSizeTracker batchSizeTracker = new FixedBatchSizeTracker(100); DecodingOutputStream output = new DecodingOutputStream(); - CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, null, endpoint); UUID writerId1 = UUID.randomUUID(); WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId1, "seg", ""); commandEncoder.write(setupAppend); @@ -373,5 +388,48 @@ public void testValidateAppend() { assertThrows(InvalidMessageException.class, () -> CommandEncoder.validateAppend(new Append("", writerId, -1, event, 1), s)); assertThrows(IllegalArgumentException.class, () -> CommandEncoder.validateAppend(new Append("", writerId, 1, event, 132, 1), s)); } - + + @Test + public void testShutdown() throws IOException { + AppendBatchSizeTrackerImpl batchSizeTracker = new AppendBatchSizeTrackerImpl(); + DecodingOutputStream output = new DecodingOutputStream(); + AtomicInteger counter = new AtomicInteger(0); + PravegaNodeUri endpoint = new PravegaNodeUri("localhost", SERVICE_PORT); + CommandEncoder commandEncoder = new CommandEncoder(x -> batchSizeTracker, null, output, new FailingReplyProcessor() { + @Override + public void connectionDropped() { + counter.getAndAdd(1); + } + + @Override + public void processingFailure(Exception error) { + + } + }, endpoint); + + // maximum setup requests + for (int i = 0; i < 
CommandEncoder.MAX_SETUP_SEGMENTS_SIZE; i++) { + UUID writerId = UUID.randomUUID(); + WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId, "seg", ""); + commandEncoder.write(setupAppend); + } + + // further setup request should throw IOException + UUID writerId = UUID.randomUUID(); + final WireCommand setupAppend = new WireCommands.SetupAppend(0, writerId, "seg", ""); + assertThrows(IOException.class, () -> commandEncoder.write(setupAppend)); + + // then connection is closed, ObjectClosedException should be thrown + writerId = UUID.randomUUID(); + final WireCommand setupAppend2 = new WireCommands.SetupAppend(0, writerId, "seg", ""); + assertThrows(ObjectClosedException.class, () -> commandEncoder.write(setupAppend2)); + + writerId = UUID.randomUUID(); + ByteBuf data = Unpooled.wrappedBuffer(new byte[40]); + WireCommands.Event event = new WireCommands.Event(data); + Append append = new Append("", writerId, 1, event, 1); + assertThrows(ObjectClosedException.class, () -> commandEncoder.write(append)); + + assertEquals(counter.get(), 1); + } } diff --git a/client/src/test/java/io/pravega/client/connection/impl/ConnectionFactoryImplTest.java b/client/src/test/java/io/pravega/client/connection/impl/ConnectionFactoryImplTest.java index 8b065409b0c..02d2cb8443a 100644 --- a/client/src/test/java/io/pravega/client/connection/impl/ConnectionFactoryImplTest.java +++ b/client/src/test/java/io/pravega/client/connection/impl/ConnectionFactoryImplTest.java @@ -148,7 +148,7 @@ public void authTokenCheckFailed(WireCommands.AuthTokenCheckFailed authTokenChec } @Test - public void getActiveChannelTestWithConnectionPooling() throws InterruptedException, ConnectionFailedException { + public void getActiveChannelTestWithConnectionPooling() { ClientConfig config = ClientConfig.builder() .controllerURI(URI.create((this.ssl ? "tls://" : "tcp://") + "localhost")) .trustStore(SecurityConfigDefaults.TLS_CA_CERT_PATH) @@ -191,7 +191,7 @@ public void authTokenCheckFailed(WireCommands.AuthTokenCheckFailed authTokenChec } @Test - public void getActiveChannelTestWithoutConnectionPooling() throws InterruptedException, ConnectionFailedException { + public void getActiveChannelTestWithoutConnectionPooling() { @Cleanup SocketConnectionFactoryImpl factory = new SocketConnectionFactoryImpl(ClientConfig.builder() .controllerURI(URI.create((this.ssl ? "tls://" : "tcp://") + "localhost")) diff --git a/client/src/test/java/io/pravega/client/connection/impl/RawClientTest.java b/client/src/test/java/io/pravega/client/connection/impl/RawClientTest.java index 4f0db86cfaf..e30585cf43c 100644 --- a/client/src/test/java/io/pravega/client/connection/impl/RawClientTest.java +++ b/client/src/test/java/io/pravega/client/connection/impl/RawClientTest.java @@ -202,10 +202,12 @@ public void testExceptionWhileObtainingConnection() { }).when(connectionPool).getClientConnection(Mockito.any(Flow.class), Mockito.eq(endpoint), Mockito.any(ReplyProcessor.class), Mockito.>any()); // Test exception paths. 
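The TableSegment.getEntryCount() API introduced earlier in this patch is wired up but never demonstrated in these tests; a minimal caller-side sketch, assuming a TableSegment instance named tableSegment and the CompletableFuture<Long> return type implied by TableSegmentInfo::getEntryCount:

    // Sketch only: per the new Javadoc, the count is eventually consistent.
    // In-flight updates are excluded, recently acknowledged ones may lag, and
    // the value converges as background indexing catches up.
    CompletableFuture<Long> entryCount = tableSegment.getEntryCount();
    entryCount.thenAccept(count -> System.out.println("Entries currently indexed: " + count));
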
+ @Cleanup RawClient rawClient = new RawClient(endpoint, connectionPool); CompletableFuture reply = rawClient.sendRequest(100L, new WireCommands.Hello(0, 0)); assertFutureThrows("RawClient did not wrap the exception into ConnectionFailedException", reply, t -> t instanceof ConnectionFailedException); + @Cleanup RawClient rawClient1 = new RawClient(controller, connectionPool, new Segment("scope", "stream", 1)); CompletableFuture reply1 = rawClient1.sendRequest(101L, new WireCommands.Hello(0, 0)); assertFutureThrows("RawClient did not wrap the exception into ConnectionFailedException", reply1, t -> t instanceof ConnectionFailedException); diff --git a/client/src/test/java/io/pravega/client/control/impl/ControllerImplLBTest.java b/client/src/test/java/io/pravega/client/control/impl/ControllerImplLBTest.java index 07e103c8ae0..f7d41d302f0 100644 --- a/client/src/test/java/io/pravega/client/control/impl/ControllerImplLBTest.java +++ b/client/src/test/java/io/pravega/client/control/impl/ControllerImplLBTest.java @@ -114,7 +114,7 @@ public void getURI(SegmentId request, StreamObserver responseObserver) } @After - public void tearDown() throws IOException { + public void tearDown() { testRPCServer1.shutdownNow(); testRPCServer2.shutdownNow(); testRPCServer3.shutdownNow(); diff --git a/client/src/test/java/io/pravega/client/control/impl/ControllerImplTest.java b/client/src/test/java/io/pravega/client/control/impl/ControllerImplTest.java index 32e33e8bccc..1323d4b8d91 100644 --- a/client/src/test/java/io/pravega/client/control/impl/ControllerImplTest.java +++ b/client/src/test/java/io/pravega/client/control/impl/ControllerImplTest.java @@ -1495,7 +1495,7 @@ public void testKeepAliveWithServer() throws Exception { } @Test - public void testRetries() throws IOException, ExecutionException, InterruptedException { + public void testRetries() throws ExecutionException, InterruptedException { // Verify retries exhausted error after multiple attempts. 
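The ModelHelperTest hunks just below exercise the new rolloverSizeBytes field end to end; for orientation, this is roughly how client code opts in on both configuration types (values are illustrative; 0 is the default and defers to the rollover size configured on the server, per the KeyValueTableConfiguration Javadoc above):

    KeyValueTableConfiguration kvtConfig = KeyValueTableConfiguration.builder()
            .partitionCount(2)
            .primaryKeyLength(Integer.BYTES)
            .secondaryKeyLength(Long.BYTES)
            .rolloverSizeBytes(1024L)   // illustrative; 0 would defer to the server-side default
            .build();

    StreamConfiguration streamConfig = StreamConfiguration.builder()
            .scalingPolicy(ScalingPolicy.byEventRate(100, 2, 3))
            .rolloverSizeBytes(1024L)
            .build();
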
@Cleanup diff --git a/client/src/test/java/io/pravega/client/control/impl/ModelHelperTest.java b/client/src/test/java/io/pravega/client/control/impl/ModelHelperTest.java index c5a95907fdf..e547270f8e3 100644 --- a/client/src/test/java/io/pravega/client/control/impl/ModelHelperTest.java +++ b/client/src/test/java/io/pravega/client/control/impl/ModelHelperTest.java @@ -131,11 +131,11 @@ public void encodeScalingPolicy() { public void encodeRetentionPolicy() { RetentionPolicy policy1 = ModelHelper.encode(decode(RetentionPolicy.bySizeBytes(1000L))); assertEquals(RetentionPolicy.RetentionType.SIZE, policy1.getRetentionType()); - assertEquals(1000L, (long) policy1.getRetentionParam()); + assertEquals(1000L, policy1.getRetentionParam()); RetentionPolicy policy2 = ModelHelper.encode(decode(RetentionPolicy.byTime(Duration.ofDays(100L)))); assertEquals(RetentionPolicy.RetentionType.TIME, policy2.getRetentionType()); - assertEquals(Duration.ofDays(100L).toMillis(), (long) policy2.getRetentionParam()); + assertEquals(Duration.ofDays(100L).toMillis(), policy2.getRetentionParam()); RetentionPolicy policy3 = ModelHelper.encode(decode((RetentionPolicy) null)); assertNull(policy3); @@ -165,6 +165,8 @@ public void decodeStreamConfig() { StreamConfig config = decode("scope", "test", StreamConfiguration.builder() .scalingPolicy(ScalingPolicy.byEventRate(100, 2, 3)) .retentionPolicy(RetentionPolicy.byTime(Duration.ofDays(100L))) + .timestampAggregationTimeout(1000L) + .rolloverSizeBytes(1024L) .build()); assertEquals("test", config.getStreamInfo().getStream()); Controller.ScalingPolicy policy = config.getScalingPolicy(); @@ -176,6 +178,8 @@ public void decodeStreamConfig() { assertEquals(Controller.RetentionPolicy.RetentionPolicyType.TIME, retentionPolicy.getRetentionType()); assertEquals(Duration.ofDays(100L).toMillis(), retentionPolicy.getRetentionParam()); assertEquals(Collections.emptyList(), config.getTags().getTagList()); + assertEquals(1000L, config.getTimestampAggregationTimeout()); + assertEquals(1024L, config.getRolloverSizeBytes()); } @Test @@ -207,6 +211,8 @@ public void encodeStreamConfig() { StreamConfiguration config = ModelHelper.encode(ModelHelper.decode("scope", "test", StreamConfiguration.builder() .scalingPolicy(ScalingPolicy.byEventRate(100, 2, 3)) .retentionPolicy(RetentionPolicy.bySizeBytes(1000L)) + .timestampAggregationTimeout(1000L) + .rolloverSizeBytes(1024L) .build())); ScalingPolicy policy = config.getScalingPolicy(); assertEquals(ScalingPolicy.ScaleType.BY_RATE_IN_EVENTS_PER_SEC, policy.getScaleType()); @@ -215,8 +221,10 @@ public void encodeStreamConfig() { assertEquals(3, policy.getMinNumSegments()); RetentionPolicy retentionPolicy = config.getRetentionPolicy(); assertEquals(RetentionPolicy.RetentionType.SIZE, retentionPolicy.getRetentionType()); - assertEquals(1000L, (long) retentionPolicy.getRetentionParam()); + assertEquals(1000L, retentionPolicy.getRetentionParam()); assertEquals(Collections.emptySet(), config.getTags()); + assertEquals(1000L, config.getTimestampAggregationTimeout()); + assertEquals(1024L, config.getRolloverSizeBytes()); } @Test @@ -249,7 +257,7 @@ public void encodeStreamConfigWithTags() { assertEquals(3, policy.getMinNumSegments()); RetentionPolicy retentionPolicy = config.getRetentionPolicy(); assertEquals(RetentionPolicy.RetentionType.SIZE, retentionPolicy.getRetentionType()); - assertEquals(1000L, (long) retentionPolicy.getRetentionParam()); + assertEquals(1000L, retentionPolicy.getRetentionParam()); assertEquals(ImmutableSet.of("tag1", "tag2"), 
config.getTags()); } @@ -324,21 +332,32 @@ public void encodeSegmentRange() { @Test public void encodeKeyValueTableConfig() { Controller.KeyValueTableConfig config = Controller.KeyValueTableConfig.newBuilder() - .setScope("scope").setKvtName("kvtable").setPartitionCount(2) - .setPrimaryKeyLength(Integer.BYTES).setSecondaryKeyLength(Long.BYTES).build(); + .setScope("scope").setKvtName("kvtable") + .setPartitionCount(2) + .setPrimaryKeyLength(Integer.BYTES) + .setSecondaryKeyLength(Long.BYTES) + .setRolloverSizeBytes(1024L) + .build(); KeyValueTableConfiguration configuration = ModelHelper.encode(config); assertEquals(config.getPartitionCount(), configuration.getPartitionCount()); assertEquals(config.getPrimaryKeyLength(), configuration.getPrimaryKeyLength()); assertEquals(config.getSecondaryKeyLength(), configuration.getSecondaryKeyLength()); + assertEquals(config.getRolloverSizeBytes(), configuration.getRolloverSizeBytes()); } @Test public void decodeKeyValueTableConfig() { Controller.KeyValueTableConfig config = ModelHelper.decode("scope", "kvtable", - KeyValueTableConfiguration.builder().partitionCount(2).primaryKeyLength(Integer.BYTES).secondaryKeyLength(Long.BYTES).build()); + KeyValueTableConfiguration.builder() + .partitionCount(2) + .primaryKeyLength(Integer.BYTES) + .secondaryKeyLength(Long.BYTES) + .rolloverSizeBytes(1024L) + .build()); assertEquals(2, config.getPartitionCount()); assertEquals(Integer.BYTES, config.getPrimaryKeyLength()); assertEquals(Long.BYTES, config.getSecondaryKeyLength()); + assertEquals(1024L, config.getRolloverSizeBytes()); } @Test @@ -366,7 +385,8 @@ public void testReaderGroupConfig() { ImmutableMap positions = ImmutableMap.builder().put(new Segment(scope, stream, 0), 90L).build(); StreamCut sc = new StreamCutImpl(Stream.of(scope, stream), positions); ReaderGroupConfig config = ReaderGroupConfig.builder().disableAutomaticCheckpoints() - .stream(getScopedStreamName(scope, stream), StreamCut.UNBOUNDED, sc).build(); + .stream(getScopedStreamName(scope, stream), StreamCut.UNBOUNDED, sc) + .build(); Controller.ReaderGroupConfiguration decodedConfig = decode(scope, "group", config); assertEquals(config, ModelHelper.encode(decodedConfig)); } diff --git a/client/src/test/java/io/pravega/client/security/auth/JwtTokenProviderImplTest.java b/client/src/test/java/io/pravega/client/security/auth/JwtTokenProviderImplTest.java index e4b2ffff370..e30c963d452 100644 --- a/client/src/test/java/io/pravega/client/security/auth/JwtTokenProviderImplTest.java +++ b/client/src/test/java/io/pravega/client/security/auth/JwtTokenProviderImplTest.java @@ -283,7 +283,7 @@ public void testRefreshTokenCompletesUponFailure() { } @Test - public void testTokenRefreshFutureIsClearedUponFailure() throws InterruptedException { + public void testTokenRefreshFutureIsClearedUponFailure() { ClientConfig config = ClientConfig.builder().controllerURI( URI.create("tcp://non-existent-cluster:9090")).build(); @Cleanup("shutdownNow") diff --git a/client/src/test/java/io/pravega/client/segment/impl/ConditionalOutputStreamTest.java b/client/src/test/java/io/pravega/client/segment/impl/ConditionalOutputStreamTest.java index 33da285ec2c..d32e36d42e3 100644 --- a/client/src/test/java/io/pravega/client/segment/impl/ConditionalOutputStreamTest.java +++ b/client/src/test/java/io/pravega/client/segment/impl/ConditionalOutputStreamTest.java @@ -89,7 +89,7 @@ public Void answer(InvocationOnMock invocation) throws Throwable { } @Test(timeout = 10000) - public void testClose() throws SegmentSealedException { + 
public void testClose() { @Cleanup MockConnectionFactoryImpl connectionFactory = new MockConnectionFactoryImpl(); @Cleanup @@ -155,7 +155,7 @@ public Void answer(InvocationOnMock invocation) throws Throwable { } @Test(timeout = 10000) - public void testSegmentSealed() throws ConnectionFailedException, SegmentSealedException { + public void testSegmentSealed() throws ConnectionFailedException { @Cleanup MockConnectionFactoryImpl connectionFactory = new MockConnectionFactoryImpl(); @Cleanup diff --git a/client/src/test/java/io/pravega/client/segment/impl/EventSegmentReaderImplTest.java b/client/src/test/java/io/pravega/client/segment/impl/EventSegmentReaderImplTest.java index a805067cd33..5342c26c74e 100644 --- a/client/src/test/java/io/pravega/client/segment/impl/EventSegmentReaderImplTest.java +++ b/client/src/test/java/io/pravega/client/segment/impl/EventSegmentReaderImplTest.java @@ -21,6 +21,7 @@ import io.pravega.shared.protocol.netty.WireCommands; import java.nio.ByteBuffer; import java.util.concurrent.TimeUnit; +import lombok.Cleanup; import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; @@ -44,6 +45,7 @@ public class EventSegmentReaderImplTest { public void testHeaderTimeout() throws SegmentTruncatedException, EndOfSegmentException { // Setup Mocks SegmentInputStream segmentInputStream = mock(SegmentInputStream.class); + @Cleanup EventSegmentReaderImpl segmentReader = new EventSegmentReaderImpl(segmentInputStream); //return a value less than WireCommands.TYPE_PLUS_LENGTH_SIZE = 8 bytes. when(segmentInputStream.read(any(ByteBuffer.class), eq(1000L))).thenReturn(5); @@ -60,6 +62,7 @@ public void testHeaderTimeout() throws SegmentTruncatedException, EndOfSegmentEx public void testEventDataTimeout() throws SegmentTruncatedException, EndOfSegmentException { // Setup Mocks SegmentInputStream segmentInputStream = mock(SegmentInputStream.class); + @Cleanup EventSegmentReaderImpl segmentReader = new EventSegmentReaderImpl(segmentInputStream); doAnswer(i -> { ByteBuffer headerReadingBuffer = i.getArgument(0); @@ -82,6 +85,7 @@ public void testEventDataTimeout() throws SegmentTruncatedException, EndOfSegmen public void testEventDataTimeoutZeroLength() throws SegmentTruncatedException, EndOfSegmentException { // Setup Mocks SegmentInputStream segmentInputStream = mock(SegmentInputStream.class); + @Cleanup EventSegmentReaderImpl segmentReader = new EventSegmentReaderImpl(segmentInputStream); doAnswer(i -> { ByteBuffer headerReadingBuffer = i.getArgument(0); @@ -103,6 +107,7 @@ public void testEventDataTimeoutZeroLength() throws SegmentTruncatedException, E public void testEventDataPartialTimeout() throws SegmentTruncatedException, EndOfSegmentException { // Setup Mocks SegmentInputStream segmentInputStream = mock(SegmentInputStream.class); + @Cleanup EventSegmentReaderImpl segmentReader = new EventSegmentReaderImpl(segmentInputStream); doAnswer(i -> { ByteBuffer headerReadingBuffer = i.getArgument(0); diff --git a/client/src/test/java/io/pravega/client/segment/impl/SegmentInputStreamTest.java b/client/src/test/java/io/pravega/client/segment/impl/SegmentInputStreamTest.java index 6b898ac48dc..a9191088098 100644 --- a/client/src/test/java/io/pravega/client/segment/impl/SegmentInputStreamTest.java +++ b/client/src/test/java/io/pravega/client/segment/impl/SegmentInputStreamTest.java @@ -219,7 +219,7 @@ public void testExceptionRecovery() throws EndOfSegmentException, SegmentTruncat } @Test - public void testStreamTruncated() throws EndOfSegmentException { + public void 
testStreamTruncated() { byte[] data = new byte[]{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; val wireData = createEventFromData(data); TestAsyncSegmentInputStream fakeNetwork = new TestAsyncSegmentInputStream(segment, 6); @@ -235,7 +235,7 @@ public void testStreamTruncated() throws EndOfSegmentException { } @Test - public void testStreamTruncatedWithPartialEvent() throws EndOfSegmentException { + public void testStreamTruncatedWithPartialEvent() { val trailingData = Unpooled.wrappedBuffer(new byte[]{0, 1}); TestAsyncSegmentInputStream fakeNetwork = new TestAsyncSegmentInputStream(segment, 1); @Cleanup diff --git a/client/src/test/java/io/pravega/client/segment/impl/SegmentMetadataClientTest.java b/client/src/test/java/io/pravega/client/segment/impl/SegmentMetadataClientTest.java index d18bea3e7fe..5c76988cfb0 100644 --- a/client/src/test/java/io/pravega/client/segment/impl/SegmentMetadataClientTest.java +++ b/client/src/test/java/io/pravega/client/segment/impl/SegmentMetadataClientTest.java @@ -75,7 +75,9 @@ public Void answer(InvocationOnMock invocation) throws Throwable { return null; } }).when(connection).send(any(WireCommands.GetStreamSegmentInfo.class)); + long head = client.fetchCurrentSegmentHeadOffset().join(); long length = client.fetchCurrentSegmentLength().join(); + assertEquals(121, head); assertEquals(123, length); } @@ -278,7 +280,6 @@ public Void answer(InvocationOnMock invocation) throws Throwable { return null; } }).when(connection).send(any(WireCommands.GetStreamSegmentInfo.class)); - long length = client.fetchCurrentSegmentLength().join(); InOrder order = Mockito.inOrder(connection, cf); order.verify(cf).establishConnection(eq(endpoint), any(ReplyProcessor.class)); diff --git a/client/src/test/java/io/pravega/client/segment/impl/SegmentOutputStreamTest.java b/client/src/test/java/io/pravega/client/segment/impl/SegmentOutputStreamTest.java index 0b11799dea7..2ed426a49f0 100644 --- a/client/src/test/java/io/pravega/client/segment/impl/SegmentOutputStreamTest.java +++ b/client/src/test/java/io/pravega/client/segment/impl/SegmentOutputStreamTest.java @@ -94,7 +94,7 @@ private static ByteBuffer getBuffer(String s) { } @Test(timeout = 10000) - public void testConnectAndSend() throws SegmentSealedException, ConnectionFailedException { + public void testConnectAndSend() throws ConnectionFailedException { UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); MockConnectionFactoryImpl cf = new MockConnectionFactoryImpl(); @@ -113,7 +113,7 @@ public void testConnectAndSend() throws SegmentSealedException, ConnectionFailed } @Test(timeout = 10000) - public void testRecvErrorMessage() throws SegmentSealedException, ConnectionFailedException { + public void testRecvErrorMessage() throws SegmentSealedException { int requestId = 0; UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); @@ -122,6 +122,7 @@ public void testRecvErrorMessage() throws SegmentSealedException, ConnectionFail MockController controller = new MockController(uri.getEndpoint(), uri.getPort(), cf, true); ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); + @Cleanup SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -135,7 +136,7 @@ public void testRecvErrorMessage() throws SegmentSealedException, ConnectionFail @Test(timeout = 10000) - 
public void testReconnectWorksWithTokenTaskInInternalExecutor() { + public void testReconnectWorksWithTokenTaskInInternalExecutor() throws SegmentSealedException { UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); @@ -151,6 +152,7 @@ public void testReconnectWorksWithTokenTaskInInternalExecutor() { ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); + @Cleanup SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.create(controller, "scope", "stream", AccessOperation.ANY)); output.reconnect(); @@ -246,29 +248,33 @@ public void testConnectWithMultipleFailures() throws Exception { ClientConnection connection = mock(ClientConnection.class); InOrder verify = inOrder(connection); cf.provideConnection(uri, connection); - @Cleanup SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, - retryConfig, DelegationTokenProviderFactory.createWithEmptyToken()); - output.reconnect(); - verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - //simulate a processing Failure and ensure SetupAppend is executed. - ReplyProcessor processor = cf.getProcessor(uri); - processor.processingFailure(new IOException()); - verify.verify(connection).close(); - verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - verifyNoMoreInteractions(connection); - processor.connectionDropped(); - verify.verify(connection).close(); - verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - processor.wrongHost(new WireCommands.WrongHost(output.getRequestId(), SEGMENT, "newHost", "SomeException")); - verify.verify(connection).close(); - verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - verifyNoMoreInteractions(connection); - processor.processingFailure(new IOException()); - assertTrue( "Connection is exceptionally closed with RetriesExhaustedException", output.getConnection().isCompletedExceptionally()); - AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(output.getConnection())); - verify.verify(connection).close(); - verifyNoMoreInteractions(connection); + retryConfig, DelegationTokenProviderFactory.createWithEmptyToken()); + try { + output.reconnect(); + verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + //simulate a processing Failure and ensure SetupAppend is executed. 
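The restructuring in this hunk (and the two that follow) applies one pattern, sketched here in miniature: once the writer has exhausted its retries, close() itself must surface RetriesExhaustedException, so that assertion moves into a finally block that runs however the mock choreography plays out. driveFailures() below is a hypothetical stand-in for that choreography:

    SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid,
            segmentSealedCallback, retryConfig, DelegationTokenProviderFactory.createWithEmptyToken());
    try {
        driveFailures(output);   // hypothetical: the processingFailure/connectionDropped sequence above
    } finally {
        // The writer is already failed at this point; close() must report why.
        AssertExtensions.assertThrows(RetriesExhaustedException.class, output::close);
    }
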
+ ReplyProcessor processor = cf.getProcessor(uri); + processor.processingFailure(new IOException()); + verify.verify(connection).close(); + verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + verifyNoMoreInteractions(connection); + processor.connectionDropped(); + verify.verify(connection).close(); + verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + processor.wrongHost(new WireCommands.WrongHost(output.getRequestId(), SEGMENT, "newHost", "SomeException")); + verify.verify(connection).close(); + verify.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + verifyNoMoreInteractions(connection); + processor.processingFailure(new IOException()); + assertTrue("Connection is exceptionally closed with RetriesExhaustedException", output.getConnection().isCompletedExceptionally()); + AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(output.getConnection())); + verify.verify(connection).close(); + verifyNoMoreInteractions(connection); + } finally { + // Verify that a close on the SegmentOutputStream does throw a RetriesExhaustedException. + AssertExtensions.assertThrows(RetriesExhaustedException.class, output::close); + } } @Test(timeout = 10000) @@ -286,36 +292,42 @@ public void testInflightWithMultipleConnectFailures() throws Exception { cf.provideConnection(uri, connection); SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, retryConfig, DelegationTokenProviderFactory.createWithEmptyToken()); - output.reconnect(); - verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - //Simulate a successful connection setup. - cf.getProcessor(uri).appendSetup(new AppendSetup(output.getRequestId(), SEGMENT, cid, 0)); - - // try sending an event. - byte[] eventData = "test data".getBytes(); - CompletableFuture ack1 = new CompletableFuture<>(); - output.write(PendingEvent.withoutHeader(null, ByteBuffer.wrap(eventData), ack1)); - verify(connection).send(new Append(SEGMENT, cid, 1, 1, Unpooled.wrappedBuffer(eventData), null, output.getRequestId())); - reset(connection); - //simulate a connection drop and verify if the writer tries to establish a new connection. - cf.getProcessor(uri).connectionDropped(); - verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - reset(connection); - // Simulate a connection drop again. - cf.getProcessor(uri).connectionDropped(); - // Since we have exceeded the retry attempts verify we do not try to establish a connection. - verify(connection, never()).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - assertTrue( "Connection is exceptionally closed with RetriesExhaustedException", output.getConnection().isCompletedExceptionally()); - AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(output.getConnection())); - // Verify that the inflight event future is completed exceptionally. - AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(ack1)); - - //Write an additional event to a writer that has failed with RetriesExhaustedException. 
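Reduced to its essentials, the contract asserted around here is that a writer that has failed with RetriesExhaustedException neither reconnects nor silently drops later writes: each event's acknowledgment future is completed exceptionally with the same cause. A sketch using the test's own helpers:

    CompletableFuture<Void> ack = new CompletableFuture<>();
    output.write(PendingEvent.withoutHeader(null, ByteBuffer.wrap("test data".getBytes()), ack));
    // No SetupAppend is sent, and the ack fails with the original cause:
    AssertExtensions.assertThrows(RetriesExhaustedException.class,
            () -> Futures.getThrowingException(ack));
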
- CompletableFuture ack2 = new CompletableFuture<>(); - output.write(PendingEvent.withoutHeader(null, ByteBuffer.wrap(eventData), ack2)); - verify(connection, never()).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(ack2)); + try { + output.reconnect(); + verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + //Simulate a successful connection setup. + cf.getProcessor(uri).appendSetup(new AppendSetup(output.getRequestId(), SEGMENT, cid, 0)); + // try sending an event. + byte[] eventData = "test data".getBytes(); + CompletableFuture ack1 = new CompletableFuture<>(); + output.write(PendingEvent.withoutHeader(null, ByteBuffer.wrap(eventData), ack1)); + verify(connection).send(new Append(SEGMENT, cid, 1, 1, Unpooled.wrappedBuffer(eventData), null, output.getRequestId())); + reset(connection); + //simulate a connection drop and verify if the writer tries to establish a new connection. + cf.getProcessor(uri).connectionDropped(); + verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + reset(connection); + // Simulate a connection drop again. + cf.getProcessor(uri).connectionDropped(); + // Since we have exceeded the retry attempts verify we do not try to establish a connection. + verify(connection, never()).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + assertTrue("Connection is exceptionally closed with RetriesExhaustedException", output.getConnection().isCompletedExceptionally()); + AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(output.getConnection())); + // Verify that the inflight event future is completed exceptionally. + AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(ack1)); + + //Write an additional event to a writer that has failed with RetriesExhaustedException. + CompletableFuture ack2 = new CompletableFuture<>(); + output.write(PendingEvent.withoutHeader(null, ByteBuffer.wrap(eventData), ack2)); + verify(connection, never()).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(ack2)); + // Verify that a flush on the SegmentOutputStream does throw a RetriesExhaustedException. + AssertExtensions.assertThrows(RetriesExhaustedException.class, output::flush); + } finally { + // Verify that a close on the SegmentOutputStream does throw a RetriesExhaustedException. + AssertExtensions.assertThrows(RetriesExhaustedException.class, output::close); + } } @Test(timeout = 10000) @@ -333,31 +345,36 @@ public void testFlushWithMultipleConnectFailures() throws Exception { cf.provideConnection(uri, connection); SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, retryConfig, DelegationTokenProviderFactory.createWithEmptyToken()); - output.reconnect(); - verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - //Simulate a successful connection setup. - cf.getProcessor(uri).appendSetup(new AppendSetup(output.getRequestId(), SEGMENT, cid, 0)); - - // try sending an event. - byte[] eventData = "test data".getBytes(); - CompletableFuture acked = new CompletableFuture<>(); - // this is an inflight event and the client will track it until there is a response from SSS. 
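The flush semantics pinned down just below deserve a gloss: flush() must block while an event is inflight and may only complete, here exceptionally, once retries are exhausted. AssertExtensions.assertBlocks captures that by, judging from its use here, running the first lambda on another thread, asserting it stays blocked, then running the second to unblock it:

    // flush() stays blocked until the final connection drop pushes the writer
    // past its retry limit, then surfaces RetriesExhaustedException.
    AssertExtensions.assertBlocks(
            () -> AssertExtensions.assertThrows(RetriesExhaustedException.class, output::flush),
            () -> cf.getProcessor(uri).connectionDropped());
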
- output.write(PendingEvent.withoutHeader(null, ByteBuffer.wrap(eventData), acked)); - verify(connection).send(new Append(SEGMENT, cid, 1, 1, Unpooled.wrappedBuffer(eventData), null, output.getRequestId())); - reset(connection); - //simulate a connection drop and verify if the writer tries to establish a new connection. - cf.getProcessor(uri).connectionDropped(); - verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - reset(connection); + try { + output.reconnect(); + verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + //Simulate a successful connection setup. + cf.getProcessor(uri).appendSetup(new AppendSetup(output.getRequestId(), SEGMENT, cid, 0)); - // Verify flush blocks until there is a response from SSS. Incase of connection error the client retries. If the - // retry count more than the configuration ensure flush returns exceptionally. - AssertExtensions.assertBlocks(() -> AssertExtensions.assertThrows(RetriesExhaustedException.class, output::flush), - () -> cf.getProcessor(uri).connectionDropped()); - assertTrue( "Connection is exceptionally closed with RetriesExhaustedException", output.getConnection().isCompletedExceptionally()); - AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(output.getConnection())); - // Verify that the inflight event future is completed exceptionally. - AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(acked)); + // try sending an event. + byte[] eventData = "test data".getBytes(); + CompletableFuture acked = new CompletableFuture<>(); + // this is an inflight event and the client will track it until there is a response from SSS. + output.write(PendingEvent.withoutHeader(null, ByteBuffer.wrap(eventData), acked)); + verify(connection).send(new Append(SEGMENT, cid, 1, 1, Unpooled.wrappedBuffer(eventData), null, output.getRequestId())); + reset(connection); + //simulate a connection drop and verify if the writer tries to establish a new connection. + cf.getProcessor(uri).connectionDropped(); + verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); + reset(connection); + + // Verify flush blocks until there is a response from SSS. Incase of connection error the client retries. If the + // retry count more than the configuration ensure flush returns exceptionally. + AssertExtensions.assertBlocks(() -> AssertExtensions.assertThrows(RetriesExhaustedException.class, output::flush), + () -> cf.getProcessor(uri).connectionDropped()); + assertTrue("Connection is exceptionally closed with RetriesExhaustedException", output.getConnection().isCompletedExceptionally()); + AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(output.getConnection())); + // Verify that the inflight event future is completed exceptionally. + AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(acked)); + } finally { + // Verify that a close on the SegmentOutputStream does throw a RetriesExhaustedException. 
+ AssertExtensions.assertThrows(RetriesExhaustedException.class, output::close); + } } @SuppressWarnings("unchecked") @@ -386,7 +403,7 @@ public Object answer(InvocationOnMock invocation) throws Exception { } @Test(timeout = 10000) - public void testConditionalSend() throws SegmentSealedException, ConnectionFailedException { + public void testConditionalSend() throws ConnectionFailedException { UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); MockConnectionFactoryImpl cf = new MockConnectionFactoryImpl(); @@ -456,7 +473,7 @@ public Void answer(InvocationOnMock invocation) throws Throwable { } private void sendAndVerifyEvent(UUID cid, ClientConnection connection, SegmentOutputStreamImpl output, - ByteBuffer data, int num) throws SegmentSealedException, ConnectionFailedException { + ByteBuffer data, int num) throws ConnectionFailedException { CompletableFuture acked = new CompletableFuture<>(); output.write(PendingEvent.withoutHeader(null, data, acked)); verify(connection).send(new Append(SEGMENT, cid, num, 1, Unpooled.wrappedBuffer(data), null, output.getRequestId())); @@ -464,7 +481,7 @@ private void sendAndVerifyEvent(UUID cid, ClientConnection connection, SegmentOu } @Test(timeout = 10000) - public void testClose() throws ConnectionFailedException, SegmentSealedException { + public void testClose() throws ConnectionFailedException { UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); MockConnectionFactoryImpl cf = new MockConnectionFactoryImpl(); @@ -536,7 +553,7 @@ public void testFlush() throws ConnectionFailedException, SegmentSealedException } @Test(timeout = 10000) - public void testReconnectOnMissedAcks() throws ConnectionFailedException, SegmentSealedException { + public void testReconnectOnMissedAcks() throws ConnectionFailedException { UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); MockConnectionFactoryImpl cf = spy(new MockConnectionFactoryImpl()); @@ -547,6 +564,7 @@ public void testReconnectOnMissedAcks() throws ConnectionFailedException, Segmen ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -578,7 +596,7 @@ public void testReconnectOnMissedAcks() throws ConnectionFailedException, Segmen } @Test(timeout = 10000) - public void testReconnectSendsSetupAppendOnTokenExpiration() throws ConnectionFailedException { + public void testReconnectSendsSetupAppendOnTokenExpiration() throws ConnectionFailedException, SegmentSealedException { UUID writerId = UUID.randomUUID(); PravegaNodeUri segmentStoreUri = new PravegaNodeUri("endpoint", SERVICE_PORT); @@ -593,7 +611,7 @@ public void testReconnectSendsSetupAppendOnTokenExpiration() throws ConnectionFa ClientConnection mockConnection = mock(ClientConnection.class); mockConnectionFactory.provideConnection(segmentStoreUri, mockConnection); - + @Cleanup SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, mockConnectionFactory, writerId, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); @@ -613,7 +631,7 @@ public void testReconnectSendsSetupAppendOnTokenExpiration() throws ConnectionFa 
} @Test(timeout = 10000) - public void testReconnectDoesNotSetupAppendOnTokenCheckFailure() throws ConnectionFailedException { + public void testReconnectDoesNotSetupAppendOnTokenCheckFailure() throws ConnectionFailedException, SegmentSealedException { UUID writerId = UUID.randomUUID(); PravegaNodeUri segmentStoreUri = new PravegaNodeUri("endpoint", SERVICE_PORT); @@ -628,7 +646,7 @@ public void testReconnectDoesNotSetupAppendOnTokenCheckFailure() throws Connecti ClientConnection mockConnection = mock(ClientConnection.class); mockConnectionFactory.provideConnection(segmentStoreUri, mockConnection); - + @Cleanup SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, mockConnectionFactory, writerId, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); @@ -649,7 +667,7 @@ public void testReconnectDoesNotSetupAppendOnTokenCheckFailure() throws Connecti } @Test(timeout = 10000) - public void testReconnectOnBadAcks() throws ConnectionFailedException, SegmentSealedException { + public void testReconnectOnBadAcks() throws ConnectionFailedException { UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); MockConnectionFactoryImpl cf = spy(new MockConnectionFactoryImpl()); @@ -660,6 +678,7 @@ public void testReconnectOnBadAcks() throws ConnectionFailedException, SegmentSe ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -699,6 +718,7 @@ public void testConnectionFailure() throws Exception { MockController controller = new MockController(uri.getEndpoint(), uri.getPort(), cf, true); ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -752,6 +772,7 @@ public void testConnectionFailureWithSegmentSealed() throws Exception { MockController controller = new MockController(uri.getEndpoint(), uri.getPort(), cf, true); ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); @@ -796,7 +817,7 @@ public Void answer(InvocationOnMock invocation) throws Throwable { output.write(PendingEvent.withoutHeader(null, data, acked3)); inOrder.verify(connection).send(append); inOrder.verify(connection).send(new SetupAppend(output.getRequestId(), cid, SEGMENT, "")); - inOrder.verify(connection).send(any(Append.class)); + inOrder.verify(connection).send(eq(append2)); // the setup append should not transmit the inflight events given that the segment is sealed. 
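One small but meaningful tightening in the hunk above: after the reconnect of a sealed segment, any(Append.class) would accept whatever append happened to be sent, including a wrongly retransmitted inflight event, while eq(append2) pins the exact wire command and its position in the order. The distinction, side by side:

    // Loose: passes if *any* Append follows the SetupAppend.
    inOrder.verify(connection).send(any(Append.class));
    // Tight: only this specific append, in this position, satisfies the test.
    inOrder.verify(connection).send(eq(append2));
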
inOrder.verifyNoMoreInteractions(); assertFalse(acked.isDone()); @@ -906,6 +927,7 @@ public void testSealedBeforeFlush() throws Exception { ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -932,6 +954,7 @@ public void testSealedAfterFlush() throws Exception { ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -971,6 +994,7 @@ public void testFlushIsBlockedUntilCallBackInvoked() throws Exception { ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -1132,6 +1156,7 @@ public void testExceptionSealedCallback() throws Exception { throw new IllegalStateException(); } }; + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, exceptionCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -1170,7 +1195,7 @@ public void testNoSuchSegment() throws Exception { ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); - + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -1204,7 +1229,7 @@ public void testAlreadySealedSegment() throws Exception { ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); - + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -1224,7 +1249,7 @@ public void testFlushDuringTransactionAbort() throws Exception { ClientConnection connection = mock(ClientConnection.class); cf.provideConnection(uri, connection); InOrder order = Mockito.inOrder(connection); - + @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(TXN_SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); output.reconnect(); @@ -1311,6 +1336,7 @@ public void testSegmentSealedFollowedbyConnectionDrop() throws Exception { InOrder order = Mockito.inOrder(connection); // Create a Segment writer. 
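A note on the @SuppressWarnings("resource") annotations added just below and throughout this file: these writers sit on mocked connections and are deliberately left unclosed, presumably because in several tests close() throwing is itself the behavior under test, so the resource-leak warning is suppressed rather than fixed with @Cleanup (which would force a close at end of scope and change what the test observes). The pattern:

    @SuppressWarnings("resource")   // intentionally not closed; the mock needs no cleanup
    SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid,
            segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken());
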
+ @SuppressWarnings("resource") SegmentOutputStreamImpl output = new SegmentOutputStreamImpl(SEGMENT, true, controller, cf, cid, segmentSealedCallback, RETRY_SCHEDULE, DelegationTokenProviderFactory.createWithEmptyToken()); @@ -1359,7 +1385,7 @@ public void testSegmentSealedFollowedbyConnectionDrop() throws Exception { } @Test(timeout = 10000) - public void testConnectAndSendWithoutConnectionPooling() throws SegmentSealedException, ConnectionFailedException { + public void testConnectAndSendWithoutConnectionPooling() throws ConnectionFailedException { UUID cid = UUID.randomUUID(); PravegaNodeUri uri = new PravegaNodeUri("endpoint", SERVICE_PORT); MockConnectionFactoryImpl cf = new MockConnectionFactoryImpl(); diff --git a/client/src/test/java/io/pravega/client/state/impl/SynchronizerTest.java b/client/src/test/java/io/pravega/client/state/impl/SynchronizerTest.java index cb16851e08c..a38098ffb58 100644 --- a/client/src/test/java/io/pravega/client/state/impl/SynchronizerTest.java +++ b/client/src/test/java/io/pravega/client/state/impl/SynchronizerTest.java @@ -18,7 +18,7 @@ import io.pravega.client.ClientConfig; import io.pravega.client.SynchronizerClientFactory; import io.pravega.client.connection.impl.ConnectionPoolImpl; -import io.pravega.client.segment.impl.EndOfSegmentException; +import io.pravega.client.control.impl.Controller; import io.pravega.client.segment.impl.Segment; import io.pravega.client.segment.impl.SegmentAttribute; import io.pravega.client.state.InitialUpdate; @@ -35,11 +35,11 @@ import io.pravega.client.stream.TruncatedDataException; import io.pravega.client.stream.impl.ByteArraySerializer; import io.pravega.client.stream.impl.ClientFactoryImpl; -import io.pravega.client.control.impl.Controller; import io.pravega.client.stream.impl.JavaSerializer; import io.pravega.client.stream.impl.StreamSegments; import io.pravega.client.stream.mock.MockClientFactory; import io.pravega.client.stream.mock.MockSegmentStreamFactory; +import io.pravega.common.concurrent.ExecutorServiceHelpers; import io.pravega.common.util.ReusableLatch; import io.pravega.test.common.AssertExtensions; import java.io.Serializable; @@ -50,6 +50,8 @@ import java.util.Map.Entry; import java.util.TreeMap; import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import lombok.Cleanup; @@ -232,7 +234,7 @@ public void testLocking() { } @Test(timeout = 20000) - public void testCompaction() throws EndOfSegmentException { + public void testCompaction() { String streamName = "streamName"; String scope = "scope"; @@ -332,7 +334,7 @@ public void testConsistency() { } @Test(timeout = 20000) - public void testReturnValue() throws EndOfSegmentException { + public void testReturnValue() { String streamName = "streamName"; String scope = "scope"; @@ -386,7 +388,7 @@ public void testReturnValue() throws EndOfSegmentException { } @Test(timeout = 20000) - public void testCompactionShrinksSet() throws EndOfSegmentException { + public void testCompactionShrinksSet() { String streamName = "testCompactionShrinksSet"; String scope = "scope"; @@ -414,7 +416,7 @@ public void testCompactionShrinksSet() throws EndOfSegmentException { } @Test(timeout = 20000) - public void testSetOperations() throws EndOfSegmentException { + public void testSetOperations() { String streamName = "testCompactionShrinksSet"; String scope = "scope"; @@ -446,7 +448,7 @@ 
public void testSetOperations() throws EndOfSegmentException { } @Test(timeout = 20000) - public void testCompactWithTruncation() throws EndOfSegmentException { + public void testCompactWithTruncation() { String streamName = "streamName"; String scope = "scope"; @@ -600,9 +602,9 @@ public void testFetchUpdatesWithMultipleTruncation() { String streamName = "streamName"; String scope = "scope"; - RevisionedStreamClient<UpdateOrInit<RegularUpdate>> revisionedClient = - (RevisionedStreamClient<UpdateOrInit<RegularUpdate>>) mock(RevisionedStreamClient.class); + RevisionedStreamClient<UpdateOrInit<RegularUpdate>> revisionedClient = mock(RevisionedStreamClient.class); final Segment segment = new Segment(scope, streamName, 0L); + @Cleanup StateSynchronizerImpl<RegularUpdate> syncA = new StateSynchronizerImpl<>(segment, revisionedClient); Revision firstMark = new RevisionImpl(segment, 10L, 1); @@ -625,6 +627,78 @@ public void testFetchUpdatesWithMultipleTruncation() { assertEquals("x", syncA.getState().getValue()); } + @Test(timeout = 20000) + @SuppressWarnings("unchecked") + public void testConcurrentFetchUpdatesAfterTruncation() { + String streamName = "streamName"; + String scope = "scope"; + + // Mock of the RevisionedStreamClient. + RevisionedStreamClient<UpdateOrInit<RegularUpdate>> revisionedStreamClient = mock(RevisionedStreamClient.class); + + final Segment segment = new Segment(scope, streamName, 0L); + @Cleanup + StateSynchronizerImpl<RegularUpdate> syncA = new StateSynchronizerImpl<>(segment, revisionedStreamClient); + + Revision firstMark = new RevisionImpl(segment, 10L, 1); + Revision secondMark = new RevisionImpl(segment, 20L, 2); + final AbstractMap.SimpleImmutableEntry<Revision, UpdateOrInit<RegularUpdate>> entry = + new AbstractMap.SimpleImmutableEntry<>(secondMark, new UpdateOrInit<>(new RegularUpdate("x"))); + + // Mock iterators to simulate concurrent revisionedStreamClient.readFrom(firstMark) calls. + Iterator<Entry<Revision, UpdateOrInit<RegularUpdate>>> iterator1 = + Collections.<Entry<Revision, UpdateOrInit<RegularUpdate>>>singletonList(entry).iterator(); + Iterator<Entry<Revision, UpdateOrInit<RegularUpdate>>> iterator2 = + Collections.<Entry<Revision, UpdateOrInit<RegularUpdate>>>singletonList(entry).iterator(); + // Latch to ensure both threads encounter the truncation exception. + CountDownLatch truncationLatch = new CountDownLatch(2); + // Latch to ensure both threads attempt to read from the same revision. + // This simulates the race condition where the in-memory state is newer than the state returned by RevisionedStreamClient. + CountDownLatch raceLatch = new CountDownLatch(2); + + // Set up the mock. + when(revisionedStreamClient.getMark()).thenReturn(firstMark); + when(revisionedStreamClient.readFrom(firstMark)) + // simulate multiple TruncatedDataExceptions. + .thenAnswer(invocation -> { + truncationLatch.countDown(); + truncationLatch.await(); // wait until the other thread encounters the TruncatedDataException. + throw new TruncatedDataException(); + }) + .thenAnswer(invocation -> { + throw new TruncatedDataException(); + }) + .thenAnswer(invocation -> { + truncationLatch.countDown(); + raceLatch.await(); // wait until the other thread attempts to fetch updates from SSS post truncation and updates internal state. + return iterator1; + }).thenAnswer(invocation -> { + raceLatch.countDown(); + return iterator2; + }); + + // Return an iterator whose hasNext is false. + when(revisionedStreamClient.readFrom(secondMark)).thenAnswer(invocation -> { + raceLatch.countDown(); // release the waiting thread which is fetching updates from SSS when the internal state is already updated. + return iterator2; + }); + + // Simulate concurrent invocations of the fetchUpdates API.
+ @Cleanup("shutdownNow") + ScheduledExecutorService exec = ExecutorServiceHelpers.newScheduledThreadPool(2, "test-pool"); + CompletableFuture cf1 = CompletableFuture.supplyAsync(() -> { + syncA.fetchUpdates(); + return null; + }, exec); + CompletableFuture cf2 = CompletableFuture.supplyAsync(() -> { + syncA.fetchUpdates(); + return null; + }, exec); + // Wait until the completion of both the fetchUpdates() API. + CompletableFuture.allOf(cf1, cf2).join(); + assertEquals("x", syncA.getState().getValue()); + } + @Test(timeout = 5000) public void testSynchronizerClientFactory() { ClientConfig config = ClientConfig.builder().controllerURI(URI.create("tls://localhost:9090")).build(); diff --git a/client/src/test/java/io/pravega/client/stream/StreamConfigurationTest.java b/client/src/test/java/io/pravega/client/stream/StreamConfigurationTest.java index 845583268ee..38f65795164 100644 --- a/client/src/test/java/io/pravega/client/stream/StreamConfigurationTest.java +++ b/client/src/test/java/io/pravega/client/stream/StreamConfigurationTest.java @@ -37,6 +37,7 @@ public void testStreamConfigDefault() { StreamConfiguration streamConfig = StreamConfiguration.builder().build(); assertEquals(ScalingPolicy.fixed(1), streamConfig.getScalingPolicy() ); assertEquals(0, streamConfig.getTags().size()); + assertEquals(0L, streamConfig.getRolloverSizeBytes()); } @Test @@ -49,9 +50,10 @@ public void testStreamBuilder() { assertEquals(2, streamConfig.getTags().size()); assertEquals(tagList, streamConfig.getTags()); - streamConfig = StreamConfiguration.builder().tags(tagList).build(); + streamConfig = StreamConfiguration.builder().tags(tagList).rolloverSizeBytes(1024).build(); assertEquals(2, streamConfig.getTags().size()); assertEquals(tagList, streamConfig.getTags()); + assertEquals(1024L, streamConfig.getRolloverSizeBytes()); } @Test @@ -65,6 +67,8 @@ public void testInvalidStreamConfig() { // Exceed the permissible number of tags for String. List tags = IntStream.range(0, 130).mapToObj(String::valueOf).collect(Collectors.toList()); assertThrows(IllegalArgumentException.class, () -> StreamConfiguration.builder().tags(tags).build()); + // Invalid rollover size + assertThrows(IllegalArgumentException.class, () -> StreamConfiguration.builder().rolloverSizeBytes(-1024).build()); } @Test diff --git a/client/src/test/java/io/pravega/client/stream/StreamCutTest.java b/client/src/test/java/io/pravega/client/stream/StreamCutTest.java index f7fd29b6efa..1451a6c85b4 100644 --- a/client/src/test/java/io/pravega/client/stream/StreamCutTest.java +++ b/client/src/test/java/io/pravega/client/stream/StreamCutTest.java @@ -90,7 +90,7 @@ private void write00(StreamCutInternal cut, RevisionDataOutput revisionDataOutpu (out, offset) -> out.writeCompactLong(offset)); } - private void read00(RevisionDataInput revisionDataInput, StreamCutInternal target) throws IOException { + private void read00(RevisionDataInput revisionDataInput, StreamCutInternal target) { // NOP. 
} } diff --git a/client/src/test/java/io/pravega/client/stream/impl/DefaultCredentialsTest.java b/client/src/test/java/io/pravega/client/stream/impl/DefaultCredentialsTest.java index 37579b1ad22..9b34da0f108 100644 --- a/client/src/test/java/io/pravega/client/stream/impl/DefaultCredentialsTest.java +++ b/client/src/test/java/io/pravega/client/stream/impl/DefaultCredentialsTest.java @@ -22,12 +22,14 @@ public class DefaultCredentialsTest { + @SuppressWarnings("deprecation") @Test public void testObjectIsAssignableToBothInterfaces() { io.pravega.shared.security.auth.Credentials credentials = new DefaultCredentials("pwd", "user"); Credentials legacyCredentials = new DefaultCredentials("pwd", "username"); } + @SuppressWarnings("deprecation") @Test public void testLegacyObjectDelegatesToNewObject() { io.pravega.shared.security.auth.Credentials credentials = @@ -37,6 +39,7 @@ public void testLegacyObjectDelegatesToNewObject() { assertEquals(AuthConstants.BASIC, credentials.getAuthenticationType()); } + @SuppressWarnings("deprecation") @Test public void testEqualsAndHashCode() { io.pravega.shared.security.auth.Credentials credentials1 = new DefaultCredentials("pwd", "user"); diff --git a/client/src/test/java/io/pravega/client/stream/impl/EventStreamReaderTest.java b/client/src/test/java/io/pravega/client/stream/impl/EventStreamReaderTest.java index fffe98be450..ad44e3d0a3e 100644 --- a/client/src/test/java/io/pravega/client/stream/impl/EventStreamReaderTest.java +++ b/client/src/test/java/io/pravega/client/stream/impl/EventStreamReaderTest.java @@ -100,7 +100,7 @@ public void tearDown() { } @Test(timeout = 10000) - public void testEndOfSegmentWithoutSuccessors() throws SegmentSealedException, ReaderNotInReaderGroupException { + public void testEndOfSegmentWithoutSuccessors() throws ReaderNotInReaderGroupException { AtomicLong clock = new AtomicLong(); MockSegmentStreamFactory segmentStreamFactory = new MockSegmentStreamFactory(); Orderer orderer = new Orderer(); @@ -322,7 +322,7 @@ public void testReleaseSegment() throws SegmentSealedException, ReaderNotInReade reader.close(); } - private ByteBuffer writeInt(SegmentOutputStream stream, int value) throws SegmentSealedException { + private ByteBuffer writeInt(SegmentOutputStream stream, int value) { ByteBuffer buffer = ByteBuffer.allocate(4).putInt(value); buffer.flip(); stream.write(PendingEvent.withHeader(null, buffer, new CompletableFuture())); diff --git a/client/src/test/java/io/pravega/client/stream/impl/EventStreamWriterTest.java b/client/src/test/java/io/pravega/client/stream/impl/EventStreamWriterTest.java index 4f57623ddfb..9951adbd1bf 100644 --- a/client/src/test/java/io/pravega/client/stream/impl/EventStreamWriterTest.java +++ b/client/src/test/java/io/pravega/client/stream/impl/EventStreamWriterTest.java @@ -575,6 +575,8 @@ public void testSealInvokesFlushError() throws SegmentSealedException { outputStream1.invokeSealedCallBack(); // Verify that the inflight event which is written to segment2 due to the sealed segment fails in case of a connection failure. AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> Futures.getThrowingException(writerFuture)); + // Verify that a flush() does indicate this failure.
+ AssertExtensions.assertThrows(RetriesExhaustedException.class, () -> writer.flush()); } @Test @@ -870,6 +872,7 @@ public void testNoteTime() { Mockito.when(mockOutputStream.getLastObservedWriteOffset()).thenReturn(1111L); JavaSerializer serializer = new JavaSerializer<>(); + @Cleanup EventStreamWriter writer = new EventStreamWriterImpl<>(stream, "id", controller, streamFactory, serializer, config, executorService(), executorService(), null); writer.noteTime(123); @@ -893,6 +896,7 @@ public void testNoteTimeFail() { Mockito.when(controller.getCurrentSegments(scope, streamName)).thenReturn(getSegmentsFuture(segment1)); JavaSerializer serializer = new JavaSerializer<>(); + @Cleanup EventStreamWriter writer = new EventStreamWriterImpl<>(stream, "id", controller, streamFactory, serializer, config, executorService(), executorService(), null); AssertExtensions.assertThrows(IllegalStateException.class, () -> writer.noteTime(123)); @@ -917,6 +921,7 @@ public void testAutoNoteTime() { CollectingExecutor executor = new CollectingExecutor(); JavaSerializer serializer = new JavaSerializer<>(); + @Cleanup EventStreamWriter writer = new EventStreamWriterImpl<>(stream, "id", controller, streamFactory, serializer, config, executor, executor, null); diff --git a/client/src/test/java/io/pravega/client/stream/impl/PingerTest.java b/client/src/test/java/io/pravega/client/stream/impl/PingerTest.java index 33d83617a56..5d99b52cc8f 100644 --- a/client/src/test/java/io/pravega/client/stream/impl/PingerTest.java +++ b/client/src/test/java/io/pravega/client/stream/impl/PingerTest.java @@ -124,7 +124,7 @@ public void startTxnKeepAliveError() throws Exception { pinger.startPing(txnID); verify(executor, times(1)).scheduleAtFixedRate(any(Runnable.class), anyLong(), - longThat(l -> l > 0 && l <= 10000), eq(TimeUnit.MILLISECONDS)); + longThat(l -> l > 0 && l <= 50000), eq(TimeUnit.MILLISECONDS)); verify(controller, times(1)).pingTransaction(eq(stream), eq(txnID), eq(config.getTransactionTimeoutTime())); } @@ -138,7 +138,7 @@ public void startTxnKeepAliveMultiple() throws Exception { pinger.startPing(txnID1); pinger.startPing(txnID2); verify(executor, times(1)).scheduleAtFixedRate(any(Runnable.class), anyLong(), - longThat(l -> l > 0 && l <= 10000), eq(TimeUnit.MILLISECONDS)); + longThat(l -> l > 0 && l <= 50000), eq(TimeUnit.MILLISECONDS)); } @Test diff --git a/client/src/test/java/io/pravega/client/stream/impl/SegmentTransactionTest.java b/client/src/test/java/io/pravega/client/stream/impl/SegmentTransactionTest.java index 8243f119385..fef41cd7429 100644 --- a/client/src/test/java/io/pravega/client/stream/impl/SegmentTransactionTest.java +++ b/client/src/test/java/io/pravega/client/stream/impl/SegmentTransactionTest.java @@ -56,6 +56,7 @@ public Void answer(InvocationOnMock invocation) throws Throwable { public void testSegmentDoesNotExist() { UUID uuid = UUID.randomUUID(); SegmentOutputStream outputStream = Mockito.mock(SegmentOutputStream.class); + @SuppressWarnings("resource") SegmentTransactionImpl txn = new SegmentTransactionImpl<>(uuid, outputStream, new JavaSerializer()); Mockito.doAnswer(new Answer() { @Override diff --git a/client/src/test/java/io/pravega/client/stream/mock/MockController.java b/client/src/test/java/io/pravega/client/stream/mock/MockController.java index e0f24549419..a4ec3779d30 100644 --- a/client/src/test/java/io/pravega/client/stream/mock/MockController.java +++ b/client/src/test/java/io/pravega/client/stream/mock/MockController.java @@ -393,7 +393,7 @@ private boolean createSegment(String 
name) { } CompletableFuture result = new CompletableFuture<>(); FailingReplyProcessor replyProcessor = createReplyProcessorCreateSegment(result); - CreateSegment command = new WireCommands.CreateSegment(idGenerator.get(), name, WireCommands.CreateSegment.NO_SCALE, 0, ""); + CreateSegment command = new WireCommands.CreateSegment(idGenerator.get(), name, WireCommands.CreateSegment.NO_SCALE, 0, "", 0); sendRequestOverNewConnection(command, replyProcessor, result); return getAndHandleExceptions(result, RuntimeException::new); } @@ -405,7 +405,7 @@ private boolean createTableSegment(String name) { CompletableFuture result = new CompletableFuture<>(); FailingReplyProcessor replyProcessor = createReplyProcessorCreateSegment(result); - WireCommands.CreateTableSegment command = new WireCommands.CreateTableSegment(idGenerator.get(), name, false, 0, ""); + WireCommands.CreateTableSegment command = new WireCommands.CreateTableSegment(idGenerator.get(), name, false, 0, "", 0); sendRequestOverNewConnection(command, replyProcessor, result); return getAndHandleExceptions(result, RuntimeException::new); } @@ -704,7 +704,7 @@ public void authTokenCheckFailed(WireCommands.AuthTokenCheckFailed authTokenChec }; String transactionName = NameUtils.getTransactionNameFromId(segment.getScopedName(), txId); sendRequestOverNewConnection(new CreateSegment(idGenerator.get(), transactionName, WireCommands.CreateSegment.NO_SCALE, - 0, ""), replyProcessor, result); + 0, "", 0), replyProcessor, result); return result; } diff --git a/client/src/test/java/io/pravega/client/stream/mock/MockSegmentIoStreams.java b/client/src/test/java/io/pravega/client/stream/mock/MockSegmentIoStreams.java index 613ac39a767..d02e6f01c84 100644 --- a/client/src/test/java/io/pravega/client/stream/mock/MockSegmentIoStreams.java +++ b/client/src/test/java/io/pravega/client/stream/mock/MockSegmentIoStreams.java @@ -84,6 +84,12 @@ public long getOffset() { return readOffset; } + @Override + @Synchronized + public CompletableFuture fetchCurrentSegmentHeadOffset() { + return CompletableFuture.completedFuture(startingOffset); + } + @Override @Synchronized public CompletableFuture fetchCurrentSegmentLength() { diff --git a/client/src/test/java/io/pravega/client/stream/notifications/EndOfDataNotifierTest.java b/client/src/test/java/io/pravega/client/stream/notifications/EndOfDataNotifierTest.java index c43ec21be42..8c8a27d9ae2 100644 --- a/client/src/test/java/io/pravega/client/stream/notifications/EndOfDataNotifierTest.java +++ b/client/src/test/java/io/pravega/client/stream/notifications/EndOfDataNotifierTest.java @@ -80,6 +80,28 @@ public void endOfStreamNotifierTest() throws Exception { verify(system, times(1)).removeListeners(EndOfDataNotification.class.getSimpleName()); } + @Test(timeout = 10000) + public void endOfStreamNotifierWithEmptyState() throws Exception { + AtomicBoolean listenerInvoked = new AtomicBoolean(); + + when(state.isEndOfData()).thenReturn(false).thenReturn(true); + when(sync.getState()).thenReturn(null).thenReturn(state); + + Listener listener1 = notification -> { + log.info("listener 1 invoked"); + listenerInvoked.set(true); + }; + + EndOfDataNotifier notifier = new EndOfDataNotifier(system, sync, executor); + notifier.registerListener(listener1); + verify(executor, times(1)).scheduleAtFixedRate(any(Runnable.class), eq(0L), anyLong(), any(TimeUnit.class)); + notifier.pollNow(); + verify(state, times(1)).isEndOfData(); + notifier.pollNow(); + verify(state, times(2)).isEndOfData(); + assertTrue(listenerInvoked.get()); + } + 
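The new endOfStreamNotifierWithEmptyState test pins down a null-tolerant polling pattern: the first poll may find that the synchronizer has no state yet (getState() returns null) and must skip quietly, while a later poll sees real state and fires the listener. A minimal, self-contained sketch of that pattern (StateSupplier and pollOnce are hypothetical stand-ins, not Pravega APIs):

import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

public class NullTolerantPollerSketch {
    interface StateSupplier<S> { S getState(); }

    // Skip the tick when state is not initialized yet; notify once it appears.
    static <S> void pollOnce(StateSupplier<S> sync, Consumer<S> listener) {
        S state = sync.getState();
        if (state == null) {
            return; // nothing to report yet; the scheduled poller will try again
        }
        listener.accept(state);
    }

    public static void main(String[] args) {
        AtomicReference<String> holder = new AtomicReference<>(); // starts out null
        StateSupplier<String> sync = holder::get;
        pollOnce(sync, s -> System.out.println("notified: " + s)); // no output
        holder.set("end-of-data");
        pollOnce(sync, s -> System.out.println("notified: " + s)); // prints "notified: end-of-data"
    }
}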
@After public void cleanup() { ExecutorServiceHelpers.shutdown(executor); diff --git a/client/src/test/java/io/pravega/client/stream/notifications/SegmentNotifierTest.java b/client/src/test/java/io/pravega/client/stream/notifications/SegmentNotifierTest.java index 16a952e1350..f9ae91e96f5 100644 --- a/client/src/test/java/io/pravega/client/stream/notifications/SegmentNotifierTest.java +++ b/client/src/test/java/io/pravega/client/stream/notifications/SegmentNotifierTest.java @@ -93,6 +93,30 @@ public void segmentNotifierTest() throws Exception { verify(system, times(1)).removeListeners(SegmentNotification.class.getSimpleName()); } + @Test(timeout = 5000) + public void segmentNotifierTestWithEmptyState() throws Exception { + AtomicBoolean listenerInvoked = new AtomicBoolean(); + AtomicInteger segmentCount = new AtomicInteger(0); + + when(state.getOnlineReaders()).thenReturn(new HashSet<>(singletonList("reader1"))); + when(state.getNumberOfSegments()).thenReturn(1, 1, 2 ).thenReturn(2); + // simulate a null being returned. + when(sync.getState()).thenReturn(null).thenReturn(state); + + Listener listener1 = e -> { + log.info("listener 1 invoked"); + listenerInvoked.set(true); + segmentCount.set(e.getNumOfSegments()); + + }; + SegmentNotifier notifier = new SegmentNotifier(system, sync, executor); + notifier.registerListener(listener1); + verify(executor, times(1)).scheduleAtFixedRate(any(Runnable.class), eq(0L), anyLong(), any(TimeUnit.class)); + notifier.pollNow(); + assertTrue(listenerInvoked.get()); + assertEquals(1, segmentCount.get()); + } + @After public void cleanup() { ExecutorServiceHelpers.shutdown(executor); diff --git a/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableImplTests.java b/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableImplTests.java index 57878dd6384..7b42ae60fb5 100644 --- a/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableImplTests.java +++ b/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableImplTests.java @@ -54,6 +54,7 @@ protected KeyValueTable createKeyValueTable(KeyValueTableInfo kvt, KeyValueTable return new KeyValueTableImpl(kvt, segmentFactory, this.controller, executorService()); } + @Override @Before public void setup() throws Exception { super.setup(); diff --git a/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableIteratorImplTests.java b/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableIteratorImplTests.java index 15119df1178..54bd621d203 100644 --- a/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableIteratorImplTests.java +++ b/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableIteratorImplTests.java @@ -232,7 +232,7 @@ private void checkSegmentIteratorArgs(SegmentIteratorArgs iteratorArgs, ByteBuff for (int i = 0; i < DEFAULT_CONFIG.getSecondaryKeyLength(); i++) { Assert.assertEquals(0, fromSK.get(i)); - Assert.assertEquals(0xFF, (int) toSK.get(i) & 0xFF); + Assert.assertEquals(0xFF, toSK.get(i) & 0xFF); } } diff --git a/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableTestBase.java b/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableTestBase.java index 84161c1bfa5..dcfb880372c 100644 --- a/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableTestBase.java +++ b/client/src/test/java/io/pravega/client/tables/impl/KeyValueTableTestBase.java @@ -776,6 +776,8 @@ private void checkValues(int iteration, Map keyVersions, Ke val expectedValue = k.getValue() == null ? 
null : new TableEntry(new TableKey(k.getKey()), k.getValue(), getValue(k.getKey(), iteration)); val actualEntry = kvt.get(new TableKey(k.getKey())).join(); Assert.assertTrue(areEqual(expectedValue, actualEntry)); + boolean exists = kvt.exists(new TableKey(k.getKey())).join(); + Assert.assertEquals(expectedValue != null, exists); } // Using multi-get. diff --git a/client/src/test/java/io/pravega/client/tables/impl/MockTableSegmentFactory.java b/client/src/test/java/io/pravega/client/tables/impl/MockTableSegmentFactory.java index dd4e2787817..224b2b1f778 100644 --- a/client/src/test/java/io/pravega/client/tables/impl/MockTableSegmentFactory.java +++ b/client/src/test/java/io/pravega/client/tables/impl/MockTableSegmentFactory.java @@ -187,6 +187,15 @@ public AsyncIterator> entryIterator(SegmentItera return getIterator(args, TableSegmentEntry::versioned, e -> e.getKey().getKey()); } + @Override + public CompletableFuture getEntryCount() { + return CompletableFuture.supplyAsync(() -> { + synchronized (this.data) { + return (long) this.data.size(); + } + }, this.executorService); + } + private AsyncIterator> getIterator(SegmentIteratorArgs initialArgs, IteratorConverter converter, Function getKey) { Preconditions.checkNotNull(initialArgs.getFromKey(), "initialArgs.fromKey"); diff --git a/client/src/test/java/io/pravega/client/tables/impl/TableSegmentImplTest.java b/client/src/test/java/io/pravega/client/tables/impl/TableSegmentImplTest.java index 47dbbe86e30..622aa7b8d9c 100644 --- a/client/src/test/java/io/pravega/client/tables/impl/TableSegmentImplTest.java +++ b/client/src/test/java/io/pravega/client/tables/impl/TableSegmentImplTest.java @@ -171,6 +171,22 @@ public void testGet() throws Exception { AssertExtensions.assertListEquals("Unexpected return value", expectedEntries, actualEntries, this::entryEquals); } + /** + * Tests the {@link TableSegmentImpl#getEntryCount()} method. + */ + @Test + public void testGetEntryCount() throws Exception { + @Cleanup + val context = new TestContext(); + val getInfoResult = context.segment.getEntryCount(); + val wireCommand = (WireCommands.GetTableSegmentInfo) context.getConnection().getLastSentWireCommand(); + Assert.assertEquals(SEGMENT.getKVTScopedName(), wireCommand.getSegmentName()); + context.sendReply(new WireCommands.TableSegmentInfo(context.getConnection().getLastRequestId(), SEGMENT.getKVTScopedName(), + 1, 2, 3L, 4)); + val actualResult = getInfoResult.get(SHORT_TIMEOUT, TimeUnit.MILLISECONDS); + Assert.assertEquals("Unexpected return value", 3L, (long) actualResult); + } + /** * Tests the {@link TableSegmentImpl#get} method when the response coming back from the server is truncated. * Connection reset failures are not tested here; they're checked in {@link #testReconnect()}. 
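The testGetEntryCount test above shows the whole round trip for the new entry-count call: the client issues a WireCommands.GetTableSegmentInfo request and completes the future with the entry count (3L in the reply used by the test) once the server answers with TableSegmentInfo. A hedged usage sketch, reducing TableSegment to just the method exercised here and faking the server reply:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class EntryCountSketch {
    // Hypothetical single-method view of the table segment client.
    interface TableSegment {
        CompletableFuture<Long> getEntryCount();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in segment whose "server" reports 3 entries, mirroring the
        // TableSegmentInfo(..., 1, 2, 3L, 4) reply in testGetEntryCount().
        TableSegment segment = () -> CompletableFuture.completedFuture(3L);
        long entryCount = segment.getEntryCount().get(10, TimeUnit.SECONDS);
        System.out.println("entries: " + entryCount); // prints: entries: 3
    }
}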
diff --git a/client/src/test/java/io/pravega/client/watermark/WatermarkSerializerTest.java b/client/src/test/java/io/pravega/client/watermark/WatermarkSerializerTest.java index 65b2418b64a..939407a6923 100644 --- a/client/src/test/java/io/pravega/client/watermark/WatermarkSerializerTest.java +++ b/client/src/test/java/io/pravega/client/watermark/WatermarkSerializerTest.java @@ -18,7 +18,6 @@ import com.google.common.collect.ImmutableMap; import io.pravega.shared.watermarks.SegmentWithRange; import io.pravega.shared.watermarks.Watermark; -import java.io.IOException; import java.nio.ByteBuffer; import org.junit.Test; @@ -26,7 +25,7 @@ public class WatermarkSerializerTest { @Test - public void testWatermark() throws IOException { + public void testWatermark() { SegmentWithRange segmentWithRange1 = new SegmentWithRange(0L, 0.0, 0.5); SegmentWithRange segmentWithRange2 = new SegmentWithRange(1L, 0.5, 1.0); ImmutableMap map = ImmutableMap.of(segmentWithRange1, 1L, segmentWithRange2, 1L); diff --git a/common/src/main/java/io/pravega/common/concurrent/AsyncSemaphore.java b/common/src/main/java/io/pravega/common/concurrent/AsyncSemaphore.java index 6a082e56557..ee1464cc45e 100644 --- a/common/src/main/java/io/pravega/common/concurrent/AsyncSemaphore.java +++ b/common/src/main/java/io/pravega/common/concurrent/AsyncSemaphore.java @@ -49,7 +49,7 @@ public class AsyncSemaphore implements AutoCloseable { @GuardedBy("queue") private long usedCredits; @GuardedBy("queue") - private final ArrayDeque queue; + private final ArrayDeque> queue; @GuardedBy("queue") private boolean closed; @@ -80,7 +80,7 @@ public AsyncSemaphore(long totalCredits, long usedCredits, String logId) { @Override public void close() { - List toCancel = null; + List> toCancel = null; synchronized (this.queue) { if (!this.closed) { toCancel = new ArrayList<>(this.queue); diff --git a/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceFactory.java b/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceFactory.java index 3c78d48c085..872386b614a 100644 --- a/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceFactory.java +++ b/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceFactory.java @@ -17,10 +17,10 @@ import com.google.common.annotations.VisibleForTesting; import java.util.concurrent.BlockingQueue; +import java.util.concurrent.ExecutorService; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.RejectedExecutionHandler; import java.util.concurrent.ScheduledExecutorService; -import java.util.concurrent.ScheduledThreadPoolExecutor; import java.util.concurrent.ThreadFactory; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; @@ -71,7 +71,7 @@ final class ExecutorServiceFactory { // In all of the below, the ThreadFactory is created in this class, and its toString() returns the pool name. 
if (this.detectionLevel == ThreadLeakDetectionLevel.None) { - this.createScheduledExecutor = (size, factory) -> new ScheduledThreadPoolExecutor(size, factory, new CallerRuns(factory.toString())); + this.createScheduledExecutor = (size, factory) -> new ThreadPoolScheduledExecutorService(size, factory); this.createShrinkingExecutor = (maxThreadCount, threadTimeout, factory) -> new ThreadPoolExecutor(0, maxThreadCount, threadTimeout, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(), factory, new CallerRuns(factory.toString())); @@ -79,7 +79,7 @@ final class ExecutorServiceFactory { // Light and Aggressive need a special executor that overrides the finalize() method. this.createScheduledExecutor = (size, factory) -> { logNewThreadPoolCreated(factory.toString()); - return new LeakDetectorScheduledExecutorService(size, factory, new CallerRuns(factory.toString())); + return new LeakDetectorScheduledExecutorService(size, factory); }; this.createShrinkingExecutor = (maxThreadCount, threadTimeout, factory) -> { logNewThreadPoolCreated(factory.toString()); @@ -155,17 +155,11 @@ ScheduledExecutorService newScheduledThreadPool(int size, String poolName, int t ThreadFactory threadFactory = getThreadFactory(poolName, threadPriority); // Caller runs only occurs after shutdown, as queue size is unbounded. - ScheduledThreadPoolExecutor result = this.createScheduledExecutor.apply(size, threadFactory); - - // Do not execute any periodic tasks after shutdown. - result.setContinueExistingPeriodicTasksAfterShutdownPolicy(false); - - // Do not execute any delayed tasks after shutdown. - result.setExecuteExistingDelayedTasksAfterShutdownPolicy(false); - - // Remove tasks from the executor once they are done executing. By default, even when canceled, these tasks are - // not removed; if this setting is not enabled we could end up with leaked (and obsolete) tasks. 
- result.setRemoveOnCancelPolicy(true); + ThreadPoolScheduledExecutorService result = this.createScheduledExecutor.apply(size, threadFactory); + // ThreadPoolScheduledExecutorService implies: + // setContinueExistingPeriodicTasksAfterShutdownPolicy(false), + // setExecuteExistingDelayedTasksAfterShutdownPolicy(false), + // setRemoveOnCancelPolicy(true); return result; } @@ -200,7 +194,9 @@ private static class CallerRuns implements RejectedExecutionHandler { @Override public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) { log.debug("Caller to executor: " + poolName + " rejected and run in the caller."); - r.run(); + if (!executor.isShutdown()) { + r.run(); + } } } @@ -208,15 +204,17 @@ public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) { //region Leak Detection Pools - private class LeakDetectorScheduledExecutorService extends ScheduledThreadPoolExecutor { + private class LeakDetectorScheduledExecutorService extends ThreadPoolScheduledExecutorService { private final Exception stackTraceEx; - LeakDetectorScheduledExecutorService(int corePoolSize, ThreadFactory threadFactory, RejectedExecutionHandler handler) { - super(corePoolSize, threadFactory, handler); + LeakDetectorScheduledExecutorService(int corePoolSize, ThreadFactory threadFactory) { + super(corePoolSize, threadFactory); this.stackTraceEx = new Exception(); } - protected void finalize() { + @SuppressWarnings("deprecation") + @Override + protected void finalize() throws Throwable { checkThreadPoolLeak(this, this.stackTraceEx); super.finalize(); } @@ -231,6 +229,8 @@ private class LeakDetectorThreadPoolExecutor extends ThreadPoolExecutor { this.stackTraceEx = new Exception(); } + @SuppressWarnings("deprecation") + @Override protected void finalize() { checkThreadPoolLeak(this, this.stackTraceEx); super.finalize(); @@ -246,15 +246,15 @@ private void logNewThreadPoolCreated(String poolName) { } @VisibleForTesting - void checkThreadPoolLeak(ThreadPoolExecutor e, Exception stackTraceEx) { + void checkThreadPoolLeak(ExecutorService e, Exception stackTraceEx) { if (this.detectionLevel == ThreadLeakDetectionLevel.None) { // Not doing anything in this case. return; } - if (!e.isShutdown() || !e.isTerminated()) { + if (!e.isShutdown()) { log.warn("THREAD POOL LEAK: {} (ShutDown={}, Terminated={}) finalized without being properly shut down.", - e.getThreadFactory(), e.isShutdown(), e.isTerminated(), stackTraceEx); + e, e.isShutdown(), e.isTerminated(), stackTraceEx); if (this.detectionLevel == ThreadLeakDetectionLevel.Aggressive) { // Not pretty, but outputting this stack trace on System.err helps with those unit tests that turned off // logging. 
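The CallerRuns change above makes the rejection handler a no-op once the pool has shut down, so late submissions are dropped instead of running on the caller's thread. A self-contained sketch of that guarded policy (my own reduction, using only standard java.util.concurrent types):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GuardedCallerRunsSketch {
    public static void main(String[] args) throws Exception {
        RejectedExecutionHandler guardedCallerRuns = (r, executor) -> {
            if (!executor.isShutdown()) {
                r.run(); // pool merely saturated: run inline on the caller
            }
            // pool shut down: drop the task rather than execute it post-shutdown
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(), guardedCallerRuns);
        pool.shutdown();
        pool.execute(() -> System.out.println("should not print")); // silently dropped
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}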
@@ -286,7 +286,7 @@ enum ThreadLeakDetectionLevel { @FunctionalInterface private interface CreateScheduledExecutor { - ScheduledThreadPoolExecutor apply(int size, ThreadFactory factory); + ThreadPoolScheduledExecutorService apply(int size, ThreadFactory factory); } @FunctionalInterface diff --git a/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceHelpers.java b/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceHelpers.java index 0ee3c6c605e..073cbdabbcd 100644 --- a/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceHelpers.java +++ b/common/src/main/java/io/pravega/common/concurrent/ExecutorServiceHelpers.java @@ -32,8 +32,8 @@ import lombok.AccessLevel; import lombok.AllArgsConstructor; import lombok.Getter; -import lombok.extern.slf4j.Slf4j; import lombok.val; +import lombok.extern.slf4j.Slf4j; /** * Helper methods for ExecutorService. @@ -99,6 +99,9 @@ public static Snapshot getSnapshot(ExecutorService service) { } else if (service instanceof ForkJoinPool) { val fjp = (ForkJoinPool) service; return new Snapshot(fjp.getQueuedSubmissionCount(), fjp.getActiveThreadCount(), fjp.getPoolSize()); + } else if (service instanceof ThreadPoolScheduledExecutorService) { + val tpse = (ThreadPoolScheduledExecutorService) service; + return new Snapshot(tpse.getRunner().getQueue().size(), tpse.getRunner().getActiveCount(), tpse.getRunner().getPoolSize()); } else { return null; } @@ -111,7 +114,7 @@ public static Snapshot getSnapshot(ExecutorService service) { * @param threadTimeout the number of milliseconds that a thread should sit idle before shutting down. * @param poolName The name of the threadpool. */ - public static ThreadPoolExecutor getShrinkingExecutor(int maxThreadCount, int threadTimeout, String poolName) { + public static ExecutorService getShrinkingExecutor(int maxThreadCount, int threadTimeout, String poolName) { return FACTORY.newShrinkingExecutor(maxThreadCount, threadTimeout, poolName); } diff --git a/common/src/main/java/io/pravega/common/concurrent/Scheduled.java b/common/src/main/java/io/pravega/common/concurrent/Scheduled.java new file mode 100644 index 00000000000..60df5897fc4 --- /dev/null +++ b/common/src/main/java/io/pravega/common/concurrent/Scheduled.java @@ -0,0 +1,36 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package io.pravega.common.concurrent; + +/** + * An item scheduled for a point in time in the future. + * This is used by {@link ThreadPoolScheduledExecutorService}. + */ +interface Scheduled { + + /** + * Returns the time in nanos (as defined by {@link System#nanoTime()}) that this item is + * scheduled for. + */ + long getScheduledTimeNanos(); + + /** + * Returns true if this item was scheduled with a delay, i.e. the time returned by + * {@link #getScheduledTimeNanos()} is, or at some point was, in the future.
+ */ + boolean isDelayed(); + +} \ No newline at end of file diff --git a/common/src/main/java/io/pravega/common/concurrent/ScheduledQueue.java b/common/src/main/java/io/pravega/common/concurrent/ScheduledQueue.java new file mode 100644 index 00000000000..40d1d908038 --- /dev/null +++ b/common/src/main/java/io/pravega/common/concurrent/ScheduledQueue.java @@ -0,0 +1,353 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package io.pravega.common.concurrent; + +import com.google.common.collect.Iterators; +import java.util.AbstractQueue; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Iterator; +import java.util.List; +import java.util.Map.Entry; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.ConcurrentLinkedQueue; +import java.util.concurrent.ConcurrentSkipListMap; +import java.util.concurrent.Semaphore; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; +import javax.annotation.Nonnull; +import lombok.Data; +import lombok.val; + +/** + * Provides an unbounded blocking queue to which {@link Scheduled} items can be added. + * Items which are scheduled will not be returned from {@link #poll()} or {@link #take()} until their scheduled time. + * + * This class is similar to DelayQueue, but the delay is optional, which allows adding and polling non-delayed tasks in O(1). + * It is also lock-free, and may offer higher throughput under contention. + */ +public class ScheduledQueue<E extends Scheduled> extends AbstractQueue<E> implements BlockingQueue<E> { + + private final AtomicLong itemsAdded = new AtomicLong(0); + private final AtomicLong itemsRemoved = new AtomicLong(0); + private final ConcurrentSkipListMap<FireTime, E> delayedTasks = new ConcurrentSkipListMap<>(); + private final ConcurrentLinkedQueue<E> readyTasks = new ConcurrentLinkedQueue<>(); + private final Semaphore blocker = new Semaphore(1); + + @Data + private static final class FireTime implements Comparable<FireTime> { + private final long timeNanos; + private final long sequenceNumber; + + @Override + public int compareTo(FireTime other) { + if (this.timeNanos < other.timeNanos) { + return -1; + } else if (this.timeNanos > other.timeNanos) { + return 1; + } else { + return Long.compare(this.sequenceNumber, other.sequenceNumber); + } + } + } + + /** + * Retrieves and removes the head of this queue, waiting if necessary + * until an element becomes available.
+ * + * @return the head of this queue + * @throws InterruptedException if interrupted while waiting + */ + @Override + public E take() throws InterruptedException { + return poll(Long.MAX_VALUE, TimeUnit.NANOSECONDS); + } + + + @Override + public E poll(long timeout, TimeUnit unit) throws InterruptedException { + @SuppressWarnings("unused") + boolean ignored = blocker.tryAcquire(); + E result = readyTasks.poll(); + if (result != null) { + this.itemsRemoved.incrementAndGet(); + return result; + } + long startTime = System.nanoTime(); + long now = startTime; + while (now - startTime <= timeout) { + val delayed = delayedTasks.firstEntry(); + if (delayed != null && delayed.getKey().getTimeNanos() <= now) { + if (delayedTasks.remove(delayed.getKey(), delayed.getValue())) { + val next = delayedTasks.firstEntry(); + if (next != null && next.getKey().timeNanos <= now) { + //In case there are more tasks that are ready wake up other callers. + blocker.release(); + } + this.itemsRemoved.incrementAndGet(); + return delayed.getValue(); + } else { + continue; // Some other thread grabbed the task. + } + } else { + //If readyTasks is non-empty then blocker must have permits. + ignored = blocker.tryAcquire(sleepTimeout(timeout, startTime, now, delayed), TimeUnit.NANOSECONDS); + result = readyTasks.poll(); + if (result != null) { + this.itemsRemoved.incrementAndGet(); + return result; + } + } + now = System.nanoTime(); + } + return null; + } + + @Override + public E poll() { + @SuppressWarnings("unused") + boolean ignored = blocker.tryAcquire(); + E result = readyTasks.poll(); + if (result != null) { + this.itemsRemoved.incrementAndGet(); + return result; + } + while (true) { + val delayed = delayedTasks.firstEntry(); + if (delayed == null || delayed.getKey().getTimeNanos() > System.nanoTime()) { + return null; + } + if (delayedTasks.remove(delayed.getKey(), delayed.getValue())) { + this.itemsRemoved.incrementAndGet(); + return delayed.getValue(); + } + } + } + + private long sleepTimeout(long timeout, long startTime, long now, Entry delayed) { + long sleepTimeout = timeout - (now - startTime); + if (delayed != null) { + sleepTimeout = Math.min(sleepTimeout, delayed.getKey().getTimeNanos() - now); + } + return sleepTimeout; + } + + /** + * Inserts the specified element into this delay queue. + * + * @param e the element to add + * @return {@code true} (as specified by {@link Collection#add}) + * @throws NullPointerException if the specified element is null + */ + @Override + public boolean add(E e) { + return offer(e); + } + + /** + * Inserts the specified element into this delay queue. + * + * @param e the element to add + * @return {@code true} + * @throws NullPointerException if the specified element is null + */ + @Override + public boolean offer(E e) { + long seq = itemsAdded.incrementAndGet(); + if (!e.isDelayed()) { + readyTasks.add(e); + } else { + delayedTasks.put(new FireTime(e.getScheduledTimeNanos(), seq), e); + } + // This is done unconditionally even if delayed because it could be the + // new shortest delay in which case some thread should wake up to + // re-schedule its sleep. + blocker.release(); + return true; + } + + /** + * Inserts the specified element into this delay queue. As the queue is + * unbounded this method will never block. 
+ * + * @param e the element to add + * @param timeout This parameter is ignored as the method never blocks + * @param unit This parameter is ignored as the method never blocks + * @return {@code true} + * @throws NullPointerException {@inheritDoc} + */ + @Override + public boolean offer(E e, long timeout, TimeUnit unit) { + return offer(e); + } + + /** + * Inserts the specified element into this delay queue. As the queue is + * unbounded this method will never block. + * + * @param e the element to add + * @throws NullPointerException {@inheritDoc} + */ + @Override + public void put(E e) { + offer(e); + } + + /** + * Retrieves, but does not remove, the head of this queue, or + * returns {@code null} if this queue is empty. + * + * Unlike {@code poll}, this method may return an element whose scheduled time has not yet arrived. + * + * (Items added concurrently may or may not be observed) + * + * @return the head of this queue, or {@code null} if this queue is empty + */ + @Override + public E peek() { + val ready = readyTasks.peek(); + if (ready == null) { + val result = delayedTasks.firstEntry(); + if (result == null) { + return null; + } else { + return result.getValue(); + } + } else { + return ready; + } + } + + /** + * Returns the size of this collection. + * The value is only guaranteed to be accurate if there are not concurrent operations being performed. + * If there are, it may reflect the operation or not. + * This call is O(1). + */ + @Override + public int size() { + return (int) Long.min(itemsAdded.get() - itemsRemoved.get(), Integer.MAX_VALUE); + } + + /** + * Always returns {@code Integer.MAX_VALUE} because + * a {@code ScheduledQueue} is not capacity constrained. + * + * @return {@code Integer.MAX_VALUE} + */ + @Override + public int remainingCapacity() { + return Integer.MAX_VALUE; + } + + /** + * Returns an array containing all of the elements in this queue. + * (Items added concurrently may or may not be included) + * + * @param a the array into which the elements of the queue are to + * be stored, if it is big enough; otherwise, a new array of the + * same runtime type is allocated for this purpose + * @return an array containing all of the elements in this queue + */ + @Override + public <T> T[] toArray(@Nonnull T[] a) { + ArrayList<E> result = new ArrayList<>(); + for (E val : readyTasks) { + result.add(val); + } + for (E val : delayedTasks.values()) { + result.add(val); + } + return result.toArray(a); + } + + /** + * Removes a single instance of the specified element from this + * queue, if it is present, whether or not it has expired. + */ + @Override + public boolean remove(Object o) { + if (readyTasks.remove(o)) { + this.itemsRemoved.incrementAndGet(); + return true; + } else { + if (delayedTasks.values().remove(o)) { + this.itemsRemoved.incrementAndGet(); + return true; + } + return false; + } + } + + /** + * Returns an iterator over all the items in the queue. + * + *

The returned iterator is + * weakly consistent. + * + * @return an iterator over the elements in this queue + */ + @Override + public Iterator iterator() { + return Iterators.unmodifiableIterator(Iterators.concat(readyTasks.iterator(), delayedTasks.values().iterator())); + } + + @Override + public int drainTo(Collection c) { + return drainTo(c, Integer.MAX_VALUE); + } + + @Override + public int drainTo(Collection c, int maxElements) { + int itemCount = 0; + blocker.drainPermits(); + while (itemCount < maxElements) { + E item = readyTasks.poll(); + if (item == null) { + break; + } + c.add(item); + itemCount++; + } + while (itemCount < maxElements) { + Entry item = delayedTasks.pollFirstEntry(); + if (item == null) { + break; + } + c.add(item.getValue()); + itemCount++; + } + itemsRemoved.addAndGet(itemCount); + blocker.release(); //In case there are items remaining + return itemCount; + } + + /** + * Clears all delayed tasks from the queue but leaves those which can be polled immediately. + */ + public List drainDelayed() { + ArrayList result = new ArrayList<>(); + Entry item = delayedTasks.pollFirstEntry(); + while (item != null) { + result.add(item.getValue()); + itemsRemoved.incrementAndGet(); + item = delayedTasks.pollFirstEntry(); + } + return result; + } + +} diff --git a/common/src/main/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorService.java b/common/src/main/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorService.java new file mode 100644 index 00000000000..c27601fa747 --- /dev/null +++ b/common/src/main/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorService.java @@ -0,0 +1,396 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package io.pravega.common.concurrent; + +import java.util.List; +import java.util.concurrent.AbstractExecutorService; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.Callable; +import java.util.concurrent.CancellationException; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Delayed; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.Executors; +import java.util.concurrent.RejectedExecutionException; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.ThreadPoolExecutor.AbortPolicy; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLong; +import lombok.Data; +import lombok.Getter; +import lombok.EqualsAndHashCode; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import lombok.extern.slf4j.Slf4j; +import lombok.AccessLevel; + +import static java.util.concurrent.TimeUnit.MILLISECONDS; +import static java.util.concurrent.TimeUnit.NANOSECONDS; + +/** + * An implementation of {@link ScheduledExecutorService} which uses a thread pool. + * + * This class is similar to ScheduledThreadPoolExecutor but differs in the following ways: + * + * 1. The thread pool supports growing, i.e. {@code maxPoolSize} and {@code corePoolSize} don't have to be the same. + * 2. Queued tasks are stored in a lock-free queue so that no bottlenecks can occur on submit. + * 3. Scheduling a task without a delay is O(1) as opposed to O(log(n)). + * 4. Canceling a task actually removes it from the queue and is O(n) as opposed to a no-op which leaves it in the queue or O(log(n)) when {@code setRemoveOnCancelPolicy(true)}. + * 5. The {@code mayInterruptIfRunning} flag on cancel is ignored and assumed to be false. + * 6. {@code ContinueExistingPeriodicTasksAfterShutdown} and {@code ExecuteExistingDelayedTasksAfterShutdown} are always false. + */ +@Slf4j +@ToString(of = "runner") +public class ThreadPoolScheduledExecutorService extends AbstractExecutorService implements ScheduledExecutorService { + + private static final AtomicLong COUNTER = new AtomicLong(0); + @Getter(AccessLevel.PACKAGE) + private final ThreadPoolExecutor runner; + private final ScheduledQueue<ScheduledRunnable<?>> queue; + + /** + * Creates a fixed size thread pool (similar to ScheduledThreadPoolExecutor). + * + * @param corePoolSize The number of threads in the pool + * @param threadFactory The factory used to create the threads. + */ + public ThreadPoolScheduledExecutorService(int corePoolSize, ThreadFactory threadFactory) { + this.queue = new ScheduledQueue<ScheduledRunnable<?>>(); + // While this cast looks invalid, it is ok because runner is private and will only + // be given ScheduledRunnable, which by definition implements Runnable.
+ @SuppressWarnings("unchecked") + BlockingQueue queue = (BlockingQueue) this.queue; + runner = new ThreadPoolExecutor(corePoolSize, + corePoolSize, + 100, + MILLISECONDS, + queue, + threadFactory, + new AbortPolicy()); + runner.prestartAllCoreThreads(); + } + + @RequiredArgsConstructor + @EqualsAndHashCode + private final class CancelableFuture implements ScheduledFuture { + + private final ScheduledRunnable task; + + @Override + public long getDelay(TimeUnit unit) { + if (!task.isDelayed) { + return 0; + } + return unit.convert(task.scheduledTimeNanos - System.nanoTime(), TimeUnit.NANOSECONDS); + } + + @Override + public int compareTo(Delayed other) { + if (other == this) { // compare zero if same object + return 0; + } + if (other instanceof CancelableFuture) { + return Long.compare(this.task.scheduledTimeNanos, + ((CancelableFuture) other).task.scheduledTimeNanos); + } else { + long diff = getDelay(NANOSECONDS) - other.getDelay(NANOSECONDS); + return (diff < 0) ? -1 : (diff > 0) ? 1 : 0; + } + } + + /** + * Cancels a pending task. Note: The {@code mayInterruptIfRunning} parameter is ignored (and + * assumed to be false) as it is unsupported. + * + * @param mayInterruptIfRunning Ignored. + */ + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + return ThreadPoolScheduledExecutorService.this.cancel(task); + } + + @Override + public boolean isCancelled() { + return task.future.isCancelled(); + } + + @Override + public boolean isDone() { + return task.future.isDone(); + } + + @Override + public R get() throws InterruptedException, ExecutionException { + return task.future.get(); + } + + @Override + public R get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException { + return task.future.get(timeout, unit); + } + } + + @Data + private static final class ScheduledRunnable implements Runnable, Scheduled { + private final long id; + private final boolean isDelayed; + private final long scheduledTimeNanos; + @EqualsAndHashCode.Exclude + private final Callable task; + @EqualsAndHashCode.Exclude + private final CompletableFuture future; + + private ScheduledRunnable(Callable task) { + this.id = COUNTER.incrementAndGet(); + this.isDelayed = false; + this.scheduledTimeNanos = 0; + this.task = task; + this.future = new CompletableFuture(); + } + + private ScheduledRunnable(Callable task, long delay, TimeUnit unit) { + this.id = COUNTER.incrementAndGet(); + this.isDelayed = true; + this.scheduledTimeNanos = unit.toNanos(delay) + System.nanoTime(); + this.task = task; + this.future = new CompletableFuture(); + } + + private ScheduledRunnable(Callable task, long scheduledTimeNanos) { + this.id = COUNTER.incrementAndGet(); + this.isDelayed = true; + this.scheduledTimeNanos = scheduledTimeNanos; + this.task = task; + this.future = new CompletableFuture(); + } + + @Override + public void run() { + try { + future.complete(task.call()); + } catch (Throwable e) { + future.completeExceptionally(e); + } + } + } + + @Override + public void shutdown() { + cancelDelayed(); + runner.shutdown(); + } + + private boolean cancel(ScheduledRunnable task) { + if (queue.remove(task)) { + task.future.cancel(false); + return true; + } + return false; + } + + private void cancelDelayed() { + for (ScheduledRunnable item : queue.drainDelayed()) { + item.future.cancel(false); + } + } + + @Override + public List shutdownNow() { + cancelDelayed(); + return runner.shutdownNow(); + } + + @Override + public boolean isShutdown() { + return runner.isShutdown(); + } + + 
@Override + public boolean isTerminated() { + return runner.isTerminated(); + } + + @Override + public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException { + return runner.awaitTermination(timeout, unit); + } + + @Override + public void execute(Runnable command) { + runner.execute(new ScheduledRunnable<>(Executors.callable(command))); + } + + @Override + public ScheduledFuture schedule(Runnable command, long delay, TimeUnit unit) { + ScheduledRunnable task = new ScheduledRunnable<>(Executors.callable(command), delay, unit); + runner.execute(task); + return new CancelableFuture<>(task); + } + + @Override + public ScheduledFuture schedule(Callable callable, long delay, TimeUnit unit) { + ScheduledRunnable task = new ScheduledRunnable<>(callable, delay, unit); + runner.execute(task); + return new CancelableFuture<>(task); + } + + @Override + public ScheduledFuture scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit) { + FixedRateLoop loop = new FixedRateLoop(command, period, unit); + ScheduledRunnable task = new ScheduledRunnable<>(loop, initialDelay, unit); + runner.execute(task); + return loop; + } + + @Override + public ScheduledFuture scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit) { + FixedDelayLoop loop = new FixedDelayLoop(command, delay, unit); + ScheduledRunnable task = new ScheduledRunnable<>(loop, initialDelay, unit); + runner.execute(task); + return loop; + } + + @RequiredArgsConstructor + private abstract class ScheduleLoop implements Callable, ScheduledFuture { + final Runnable command; + final AtomicBoolean canceled = new AtomicBoolean(false); + final CompletableFuture shutdownFuture = new CompletableFuture<>(); + + @Override + public Void call() { + if (!canceled.get()) { + try { + command.run(); + } catch (Throwable t) { + canceled.set(true); + log.error("Exception thrown out of root of recurring task: " + command + " This task will not run again!", t); + shutdownFuture.completeExceptionally(t); + return null; + } + if (!canceled.get()) { + try { + schedule(); + } catch (RejectedExecutionException e) { + //Pool has shutdown + log.debug("Shutting down task {} because pool {} has shutdown.", command, runner); + cancel(false); + } + } + } + return null; + } + + abstract void schedule(); + + @Override + public int compareTo(Delayed other) { + if (other == this) { // compare zero if same object + return 0; + } + long diff = getDelay(NANOSECONDS) - other.getDelay(NANOSECONDS); + return (diff < 0) ? -1 : (diff > 0) ? 
+        }
+
+        @Override
+        public boolean cancel(boolean mayInterruptIfRunning) {
+            if (canceled.getAndSet(true)) {
+                return false;
+            }
+            shutdownFuture.completeExceptionally(new CancellationException());
+            return true;
+        }
+
+        @Override
+        public boolean isCancelled() {
+            return canceled.get();
+        }
+
+        @Override
+        public boolean isDone() {
+            return canceled.get();
+        }
+
+        @Override
+        public Void get() throws InterruptedException, ExecutionException {
+            return shutdownFuture.get();
+        }
+
+        @Override
+        public Void get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
+            return shutdownFuture.get(timeout, unit);
+        }
+    }
+
+    private class FixedDelayLoop extends ScheduleLoop {
+        private final long delay;
+        private final TimeUnit unit;
+
+        public FixedDelayLoop(Runnable command, long delay, TimeUnit unit) {
+            super(command);
+            this.delay = delay;
+            this.unit = unit;
+        }
+
+        @Override
+        public long getDelay(TimeUnit returnUnit) {
+            return returnUnit.convert(delay, unit);
+        }
+
+        @Override
+        void schedule() {
+            ThreadPoolScheduledExecutorService.this.schedule(this, delay, unit);
+        }
+    }
+
+    private class FixedRateLoop extends ScheduleLoop {
+        private final long periodNanos;
+        private final AtomicLong startTimeNanos;
+
+        public FixedRateLoop(Runnable command, long period, TimeUnit unit) {
+            super(command);
+            this.startTimeNanos = new AtomicLong(System.nanoTime());
+            this.periodNanos = unit.toNanos(period);
+        }
+
+        @Override
+        public long getDelay(TimeUnit returnUnit) {
+            return returnUnit.convert(periodNanos, NANOSECONDS);
+        }
+
+        @Override
+        public Void call() {
+            startTimeNanos.set(System.nanoTime());
+            return super.call();
+        }
+
+        @Override
+        void schedule() {
+            runner.execute(new ScheduledRunnable<>(this, startTimeNanos.get() + periodNanos));
+        }
+    }
+
+    ThreadFactory getThreadFactory() {
+        return runner.getThreadFactory();
+    }
+
+}
diff --git a/common/src/main/java/io/pravega/common/io/serialization/VersionedSerializer.java b/common/src/main/java/io/pravega/common/io/serialization/VersionedSerializer.java
index f6dd765eb84..883bcaf01e9 100644
--- a/common/src/main/java/io/pravega/common/io/serialization/VersionedSerializer.java
+++ b/common/src/main/java/io/pravega/common/io/serialization/VersionedSerializer.java
@@ -269,7 +269,7 @@ private static abstract class SingleType extends Version
          */
         @SuppressWarnings("unchecked")
         SingleType() {
-            this.versions = (FormatVersion[]) new FormatVersion[Byte.MAX_VALUE];
+            this.versions = new FormatVersion[Byte.MAX_VALUE];
             declareVersions();
             Preconditions.checkArgument(this.versions[getWriteVersion()] != null, "Write version %s is not defined.", getWriteVersion());
         }
diff --git a/common/src/main/java/io/pravega/common/lang/Int96.java b/common/src/main/java/io/pravega/common/lang/Int96.java
index 0c206a12a73..650388627f9 100644
--- a/common/src/main/java/io/pravega/common/lang/Int96.java
+++ b/common/src/main/java/io/pravega/common/lang/Int96.java
@@ -25,7 +25,7 @@
 * first comparing msbs and if msbs are equal then we compare lsbs.
 */
 @Data
-public class Int96 implements Comparable {
+public class Int96 implements Comparable<Int96> {
     public static final Int96 ZERO = new Int96(0, 0L);
     private final int msb;
     private final long lsb;
@@ -39,12 +39,7 @@ public Int96(int msb, long lsb) {
     }
 
     @Override
-    public int compareTo(Object o) {
-        if (!(o instanceof Int96)) {
-            throw new RuntimeException("incomparable objects");
-        }
-        Int96 other = (Int96) o;
-
+    public int compareTo(Int96 other) {
         if (msb != other.msb) {
             return Integer.compare(msb, other.msb);
         } else {
diff --git a/common/src/main/java/io/pravega/common/security/TLSProtocolVersion.java b/common/src/main/java/io/pravega/common/security/TLSProtocolVersion.java
new file mode 100644
index 00000000000..ee2841cf9c6
--- /dev/null
+++ b/common/src/main/java/io/pravega/common/security/TLSProtocolVersion.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright Pravega Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.pravega.common.security;
+
+import java.util.Arrays;
+
+public class TLSProtocolVersion {
+
+    private final String[] protocols;
+
+    public TLSProtocolVersion(String s) {
+        protocols = parse(s);
+    }
+
+    public String[] getProtocols() {
+        return Arrays.copyOf(protocols, protocols.length);
+    }
+
+    public static final String[] parse(String s) {
+        String[] protocols = s.split(",");
+        for (String a : protocols) {
+            if (!a.matches("TLSv1\\.(2|3)")) {
+                throw new IllegalArgumentException("Invalid TLS Protocol Version");
+            }
+        }
+        return protocols;
+    }
+}
\ No newline at end of file
diff --git a/common/src/main/java/io/pravega/common/util/BitConverter.java b/common/src/main/java/io/pravega/common/util/BitConverter.java
index 92cb3414cc4..d6c20dab488 100644
--- a/common/src/main/java/io/pravega/common/util/BitConverter.java
+++ b/common/src/main/java/io/pravega/common/util/BitConverter.java
@@ -216,6 +216,7 @@ public static long readLong(byte[] source, int position) {
     * @param b8 Byte #8.
     * @return The composed number.
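     *         For example, {@code makeLong(0x12, 0x34, 0, 0, 0, 0, 0, 0xFF)} yields
     *         {@code 0x12340000000000FFL}: b1 supplies the most-significant byte and b8 the least-significant byte.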
     */
+    @SuppressWarnings("cast")
    public static long makeLong(int b1, int b2, int b3, int b4, int b5, int b6, int b7, int b8) {
        return ((long) b1 << 56) +
                ((long) (b2 & 255) << 48) +
diff --git a/common/src/main/java/io/pravega/common/util/PriorityBlockingDrainingQueue.java b/common/src/main/java/io/pravega/common/util/PriorityBlockingDrainingQueue.java
index 4eee886718b..da54c076c0f 100644
--- a/common/src/main/java/io/pravega/common/util/PriorityBlockingDrainingQueue.java
+++ b/common/src/main/java/io/pravega/common/util/PriorityBlockingDrainingQueue.java
@@ -120,7 +120,7 @@ private int getFirstIndex() {
 
     @SuppressWarnings("unchecked")
     private SimpleDeque<T> getQueue(int index) {
-        return (SimpleDeque) this.queues[index];
+        return this.queues[index];
     }
 
     @SuppressWarnings("unchecked")
diff --git a/common/src/main/java/io/pravega/common/util/TypedProperties.java b/common/src/main/java/io/pravega/common/util/TypedProperties.java
index 290ed31d9ae..e5013758230 100644
--- a/common/src/main/java/io/pravega/common/util/TypedProperties.java
+++ b/common/src/main/java/io/pravega/common/util/TypedProperties.java
@@ -17,12 +17,11 @@
 import com.google.common.base.Preconditions;
 import io.pravega.common.Exceptions;
-import lombok.extern.slf4j.Slf4j;
-
 import java.time.Duration;
 import java.time.temporal.TemporalUnit;
 import java.util.Properties;
 import java.util.function.Function;
+import lombok.extern.slf4j.Slf4j;
 
 /**
 *
 *
@@ -162,6 +161,39 @@ public int getPositiveInt(Property<Integer> property) {
         return value;
     }
 
+    /**
+     * Gets the value of an Integer property only if it is non-negative (greater than or equal to 0).
+     *
+     * @param property The Property to get.
+     * @return The property value or default value, if no such is defined in the base Properties.
+     * @throws ConfigurationException When the given property name does not exist within the current component and the property
+     *                                does not have a default value set, or when the property cannot be parsed as a
+     *                                non-negative Integer.
+     */
+    public int getNonNegativeInt(Property<Integer> property) {
+        int value = getInt(property);
+        if (value < 0) {
+            throw new ConfigurationException(String.format("Property '%s' must be a non-negative integer.", property));
+        }
+        return value;
+    }
+
+    /**
+     * Gets the value of a Long property only if it is greater than 0.
+     *
+     * @param property The Property to get.
+     * @return The property value or default value, if no such is defined in the base Properties.
+     * @throws ConfigurationException When the given property name does not exist within the current component and the property
+     *                                does not have a default value set, or when the property cannot be parsed as a positive Long.
+     */
+    public long getPositiveLong(Property<Long> property) {
+        long value = getLong(property);
+        if (value <= 0) {
+            throw new ConfigurationException(String.format("Property '%s' must be a positive long.", property));
+        }
+        return value;
+    }
+
     /**
     * Gets a Duration from an Integer property only if it is greater than 0.
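     * For example, with {@code ChronoUnit.SECONDS} as the unit argument, a stored value of 30 would be
     * returned as {@code Duration.ofSeconds(30)} (illustrative reading of this method's contract).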
* diff --git a/common/src/test/java/io/pravega/common/concurrent/AsyncSemaphoreTests.java b/common/src/test/java/io/pravega/common/concurrent/AsyncSemaphoreTests.java index 6aacfdd6f98..5affffe805a 100644 --- a/common/src/test/java/io/pravega/common/concurrent/AsyncSemaphoreTests.java +++ b/common/src/test/java/io/pravega/common/concurrent/AsyncSemaphoreTests.java @@ -61,7 +61,7 @@ public void testInvalidArguments() { "constructor: usedCredits < 0", () -> new AsyncSemaphore(1, -1, ""), ex -> ex instanceof IllegalArgumentException); - + @Cleanup val s = new AsyncSemaphore(credits, 0, ""); AssertExtensions.assertThrows( "release: credits < 0", diff --git a/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceFactoryTests.java b/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceFactoryTests.java index d28d1869274..d4938d84681 100644 --- a/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceFactoryTests.java +++ b/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceFactoryTests.java @@ -16,7 +16,7 @@ package io.pravega.common.concurrent; import io.pravega.test.common.IntentionalException; -import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.ExecutorService; import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.Function; import lombok.Cleanup; @@ -64,7 +64,7 @@ public void testGetDetectionLevel() { @Test public void testScheduledThreadPoolLeak() { - testLeaks(factory -> (ThreadPoolExecutor) factory.newScheduledThreadPool(1, "test", 1)); + testLeaks(factory -> (ThreadPoolScheduledExecutorService) factory.newScheduledThreadPool(1, "test", 1)); } @Test @@ -72,7 +72,7 @@ public void testShrinkingThreadPoolLeak() { testLeaks(factory -> factory.newShrinkingExecutor(1, 1, "test")); } - private void testLeaks(Function newExecutor) { + private void testLeaks(Function newExecutor) { for (val level : ExecutorServiceFactory.ThreadLeakDetectionLevel.values()) { val invoked = new AtomicBoolean(false); Runnable callback = () -> invoked.set(true); diff --git a/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceHelpersTests.java b/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceHelpersTests.java index 2fc6ec8a0c6..c05df568e6c 100644 --- a/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceHelpersTests.java +++ b/common/src/test/java/io/pravega/common/concurrent/ExecutorServiceHelpersTests.java @@ -16,12 +16,17 @@ package io.pravega.common.concurrent; import io.pravega.test.common.AssertExtensions; +import io.pravega.test.common.InlineExecutor; import io.pravega.test.common.IntentionalException; import io.pravega.test.common.ThreadPooledTestSuite; +import java.time.Duration; import java.util.concurrent.Executors; import java.util.concurrent.RejectedExecutionException; +import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; + +import lombok.Cleanup; import lombok.val; import org.junit.Assert; import org.junit.Test; @@ -69,7 +74,7 @@ public void testExecute() { // Scheduling exception val closedExecutor = Executors.newSingleThreadExecutor(); - closedExecutor.shutdown(); + ExecutorServiceHelpers.shutdown(closedExecutor); runCount.set(0); exceptionHolder.set(null); finallyCount.set(0); @@ -85,4 +90,20 @@ public void testExecute() { Assert.assertNull("Unexpected exception set (rejected execution)", exceptionHolder.get()); Assert.assertEquals("Unexpected number of finally runs 
(rejected execution)", 1, finallyCount.get()); } + + @Test + public void testSnapshot() { + @Cleanup("shutdown") + ScheduledExecutorService coreExecutor = ExecutorServiceHelpers.newScheduledThreadPool(30, "core", Thread.NORM_PRIORITY); + + ExecutorServiceHelpers.Snapshot snapshot = ExecutorServiceHelpers.getSnapshot(coreExecutor); + Assert.assertEquals("Unexpected pool size", 30, snapshot.getPoolSize()); + Assert.assertEquals("Unexpected queue size", 0, snapshot.getQueueSize()); + + ScheduledExecutorService inlineExecutor = new InlineExecutor(); + ExecutorServiceHelpers.Snapshot inlineSnapshot = ExecutorServiceHelpers.getSnapshot(inlineExecutor); + Assert.assertNull("Unexpected snapshot", inlineSnapshot); + + ExecutorServiceHelpers.shutdown(Duration.ofSeconds(1), coreExecutor, inlineExecutor); + } } diff --git a/common/src/test/java/io/pravega/common/concurrent/ScheduledQueueTest.java b/common/src/test/java/io/pravega/common/concurrent/ScheduledQueueTest.java new file mode 100644 index 00000000000..9139ee785c1 --- /dev/null +++ b/common/src/test/java/io/pravega/common/concurrent/ScheduledQueueTest.java @@ -0,0 +1,187 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package io.pravega.common.concurrent; + +import java.util.ArrayList; +import java.util.concurrent.TimeUnit; +import lombok.Data; +import org.junit.Test; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertTrue; + +public class ScheduledQueueTest { + + @Data + private static class NoDelay implements Scheduled { + final int id; + + @Override + public long getScheduledTimeNanos() { + return 0; + } + + @Override + public boolean isDelayed() { + return false; + } + } + + @Data + private static class Delay implements Scheduled { + final long time; + + @Override + public long getScheduledTimeNanos() { + return time; + } + + @Override + public boolean isDelayed() { + return true; + } + } + + @Test(timeout = 5000) + public void testNonDelayed() { + ScheduledQueue queue = new ScheduledQueue(); + queue.add(new NoDelay(1)); + queue.add(new NoDelay(2)); + queue.add(new NoDelay(3)); + assertEquals(1, queue.poll().id); + assertEquals(2, queue.poll().id); + assertEquals(3, queue.poll().id); + assertEquals(null, queue.poll()); + } + + @Test(timeout = 5000) + public void testDelayed() { + ScheduledQueue queue = new ScheduledQueue(); + queue.add(new Delay(1)); + queue.add(new Delay(3)); + queue.add(new Delay(2)); + assertEquals(1, queue.poll().time); + assertEquals(2, queue.poll().time); + assertEquals(3, queue.poll().time); + assertEquals(null, queue.poll()); + } + + @Test(timeout = 5000) + public void testPoll() throws InterruptedException { + ScheduledQueue queue = new ScheduledQueue(); + assertNull(queue.poll(5, TimeUnit.SECONDS)); + queue.add(new Delay(Long.MAX_VALUE)); + assertNull(queue.poll(0, TimeUnit.SECONDS)); + queue.add(new Delay(1)); + queue.add(new 
+        queue.add(new Delay(2));
+        queue.add(new NoDelay(4));
+        assertEquals(0, queue.poll(5, TimeUnit.SECONDS).getScheduledTimeNanos());
+        assertEquals(1, queue.take().getScheduledTimeNanos());
+        assertEquals(2, queue.poll(5, TimeUnit.SECONDS).getScheduledTimeNanos());
+        assertEquals(3, queue.poll(5, TimeUnit.SECONDS).getScheduledTimeNanos());
+        assertEquals(null, queue.poll(0, TimeUnit.SECONDS));
+        assertEquals(1, queue.size());
+    }
+
+    @Test(timeout = 5000)
+    public void testPeek() {
+        ScheduledQueue<Scheduled> queue = new ScheduledQueue<>();
+        assertNull(queue.peek());
+        Delay delay = new Delay(1);
+        queue.add(delay);
+        assertEquals(delay, queue.peek());
+        NoDelay noDelay = new NoDelay(1);
+        queue.add(noDelay);
+        assertEquals(noDelay, queue.peek());
+        queue.poll();
+        assertEquals(delay, queue.peek());
+        queue.poll();
+        assertNull(queue.peek());
+    }
+
+    @Test(timeout = 5000)
+    public void testSize() {
+        ScheduledQueue<Scheduled> queue = new ScheduledQueue<>();
+        assertEquals(0, queue.size());
+        queue.add(new NoDelay(1));
+        assertEquals(1, queue.size());
+        NoDelay nd2 = new NoDelay(2);
+        queue.add(nd2);
+        assertEquals(2, queue.size());
+        queue.add(new NoDelay(3));
+        assertEquals(3, queue.size());
+        queue.add(new Delay(1));
+        assertEquals(4, queue.size());
+        Delay d3 = new Delay(3);
+        queue.add(d3);
+        assertEquals(5, queue.size());
+        queue.add(new Delay(2));
+        assertEquals(6, queue.size());
+        assertNotNull(queue.poll());
+        assertEquals(5, queue.size());
+        assertTrue(queue.remove(nd2));
+        assertEquals(4, queue.size());
+        assertTrue(queue.remove(d3));
+        assertEquals(3, queue.size());
+        queue.drainDelayed();
+        assertEquals(1, queue.size());
+        queue.clear();
+        assertEquals(0, queue.size());
+    }
+
+    @Test(timeout = 5000)
+    public void testToArray() {
+        ScheduledQueue<Scheduled> queue = new ScheduledQueue<>();
+        queue.add(new Delay(Long.MAX_VALUE));
+        queue.add(new Delay(1));
+        queue.add(new Delay(3));
+        queue.add(new Delay(2));
+        queue.add(new NoDelay(4));
+        Object[] objects = queue.toArray();
+        assertEquals(5, objects.length);
+        assertEquals(new NoDelay(4), objects[0]);
+        assertEquals(new Delay(1), objects[1]);
+        assertEquals(new Delay(2), objects[2]);
+        assertEquals(new Delay(3), objects[3]);
+        assertEquals(new Delay(Long.MAX_VALUE), objects[4]);
+    }
+
+    @Test(timeout = 5000)
+    public void testDrainTo() {
+        ScheduledQueue<Scheduled> queue = new ScheduledQueue<>();
+        queue.add(new Delay(Long.MAX_VALUE));
+        queue.add(new Delay(1));
+        queue.add(new Delay(3));
+        queue.add(new Delay(2));
+        queue.add(new NoDelay(4));
+        ArrayList<Scheduled> result = new ArrayList<>();
+        queue.drainTo(result, 2);
+        assertEquals(2, result.size());
+        assertEquals(new NoDelay(4), result.get(0));
+        assertEquals(new Delay(1), result.get(1));
+        queue.drainTo(result);
+        assertEquals(5, result.size());
+        assertEquals(new NoDelay(4), result.get(0));
+        assertEquals(new Delay(1), result.get(1));
+        assertEquals(new Delay(2), result.get(2));
+        assertEquals(new Delay(3), result.get(3));
+        assertEquals(new Delay(Long.MAX_VALUE), result.get(4));
+    }
+}
diff --git a/common/src/test/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorServiceTest.java b/common/src/test/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorServiceTest.java
new file mode 100644
index 00000000000..fac87091d26
--- /dev/null
+++ b/common/src/test/java/io/pravega/common/concurrent/ThreadPoolScheduledExecutorServiceTest.java
@@ -0,0 +1,299 @@
+/**
+ * Copyright Pravega Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.pravega.common.concurrent;
+
+import io.pravega.common.util.ReusableLatch;
+import io.pravega.test.common.AssertExtensions;
+import java.util.List;
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CyclicBarrier;
+import java.util.concurrent.Future;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+import org.junit.Test;
+
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+public class ThreadPoolScheduledExecutorServiceTest {
+
+    private static ThreadPoolScheduledExecutorService createPool(int threads) {
+        return new ThreadPoolScheduledExecutorService(threads,
+                ExecutorServiceHelpers.getThreadFactory("ThreadPoolScheduledExecutorServiceTest"));
+    }
+
+    @Test(timeout = 10000)
+    public void testRunsTask() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        CompletableFuture<Integer> result = new CompletableFuture<>();
+        Future<Boolean> future = pool.submit(() -> result.complete(5));
+        assertEquals(Integer.valueOf(5), result.get(5, SECONDS));
+        assertEquals(true, future.get());
+        pool.shutdown();
+        assertTrue(pool.awaitTermination(5, SECONDS));
+    }
+
+    @Test(timeout = 10000)
+    public void testRunsDelayTask() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        CompletableFuture<Long> result = new CompletableFuture<>();
+        long startTime = System.nanoTime();
+        Future<Boolean> future = pool.schedule(() -> result.complete(System.nanoTime()), 100, MILLISECONDS);
+        long runTime = result.get(5, SECONDS);
+        assertTrue(runTime > startTime + 50 * 1000 * 1000);
+        assertTrue(future.get(5, SECONDS));
+        pool.shutdown();
+        assertTrue(pool.awaitTermination(5, SECONDS));
+    }
+
+    @Test(timeout = 10000)
+    public void testCancelsDelayTask() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        AtomicInteger count = new AtomicInteger(0);
+        ScheduledFuture<Integer> future = pool.schedule(() -> count.incrementAndGet(), 100, SECONDS);
+        assertTrue(future.cancel(false));
+        assertTrue(future.isCancelled());
+        assertTrue(future.isDone());
+        AssertExtensions.assertThrows(CancellationException.class, () -> future.get());
+        pool.shutdown();
+        assertTrue(pool.awaitTermination(5, SECONDS));
+    }
+
+    @Test(timeout = 10000)
+    public void testSpawnsOptionalThreads() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(3);
+        AtomicInteger count = new AtomicInteger(0);
+        CyclicBarrier barrier = new CyclicBarrier(4);
+        AtomicReference<Exception> error = new AtomicReference<>();
+        pool.submit(() -> {
+            count.incrementAndGet();
+            try {
+                barrier.await();
+            } catch (Exception e) {
+                error.set(e);
+            }
+        });
+        pool.submit(() -> {
+            count.incrementAndGet();
+            try {
+                barrier.await();
+            } catch (Exception e) {
+                error.set(e);
+            }
+        });
+        pool.submit(() -> {
+            count.incrementAndGet();
+            try {
+                barrier.await();
+            } catch (Exception e) {
+                error.set(e);
+            }
+        });
+        barrier.await(5, SECONDS);
+        assertEquals(3, count.get());
+        assertNull(error.get());
+        pool.shutdown();
+        assertTrue(pool.awaitTermination(5, SECONDS));
+    }
+
+    @Test(timeout = 10000)
+    public void testRunsDelayLoop() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        AtomicInteger count = new AtomicInteger(0);
+        long startTime = System.nanoTime();
+        ScheduledFuture<?> future = pool.scheduleWithFixedDelay(() -> {
+            int value = count.incrementAndGet();
+            if (value >= 20) {
+                throw new RuntimeException("Expected test error");
+            }
+        }, 10, 10, MILLISECONDS);
+        AssertExtensions.assertEventuallyEquals(20, () -> count.get(), 5000);
+        AssertExtensions.assertThrows(RuntimeException.class, () -> future.get(5000, MILLISECONDS));
+        assertTrue(System.nanoTime() > startTime + 19 * 10 * 1000 * 1000L);
+        pool.shutdown();
+        assertTrue(pool.awaitTermination(5, SECONDS));
+        assertEquals(20, count.get());
+    }
+
+    @Test(timeout = 10000)
+    public void testRunsRateLoop() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        AtomicInteger count = new AtomicInteger(0);
+        long startTime = System.nanoTime();
+        ScheduledFuture<?> future = pool.scheduleAtFixedRate(() -> {
+            int value = count.incrementAndGet();
+            if (value >= 20) {
+                throw new RuntimeException("Expected test error");
+            }
+        }, 10, 10, MILLISECONDS);
+        AssertExtensions.assertEventuallyEquals(20, () -> count.get(), 5000);
+        AssertExtensions.assertThrows(RuntimeException.class, () -> future.get(5000, MILLISECONDS));
+        assertTrue(System.nanoTime() > startTime + 19 * 10 * 1000 * 1000L);
+        pool.shutdown();
+        assertTrue(pool.awaitTermination(5, SECONDS));
+        assertEquals(20, count.get());
+    }
+
+    @Test(timeout = 10000)
+    public void testShutdown() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        AtomicInteger count = new AtomicInteger(0);
+        ReusableLatch latch = new ReusableLatch(false);
+        AtomicReference<Exception> error = new AtomicReference<>();
+        pool.submit(() -> {
+            count.incrementAndGet();
+            try {
+                latch.await();
+            } catch (Exception e) {
+                error.set(e);
+            }
+        });
+        pool.submit(() -> count.incrementAndGet());
+        assertFalse(pool.isShutdown());
+        assertFalse(pool.isTerminated());
+        pool.shutdown();
+        assertTrue(pool.isShutdown());
+        AssertExtensions.assertThrows(RejectedExecutionException.class,
+                () -> pool.submit(() -> count.incrementAndGet()));
+        latch.release();
+        pool.awaitTermination(1, SECONDS);
+        assertNull(error.get());
+        assertTrue(pool.isTerminated());
+        assertEquals(2, count.get());
+    }
+
+    @Test(timeout = 10000)
+    public void testShutdownNow() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        AtomicInteger count = new AtomicInteger(0);
+        ReusableLatch latch = new ReusableLatch(false);
+        AtomicReference<Exception> error = new AtomicReference<>();
+        pool.submit(() -> {
+            count.incrementAndGet();
+            try {
+                latch.await();
+            } catch (Exception e) {
+                error.set(e);
+            }
+        });
+        pool.submit(() -> count.incrementAndGet());
+        assertFalse(pool.isShutdown());
+        assertFalse(pool.isTerminated());
+        AssertExtensions.assertEventuallyEquals(1, count::get, 5000);
+        List<Runnable> remaining = pool.shutdownNow();
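+        // shutdownNow() interrupts the blocked first task and returns the still-queued second task un-run.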
+        assertEquals(1, remaining.size());
+        assertTrue(pool.isShutdown());
+        AssertExtensions.assertThrows(RejectedExecutionException.class,
+                () -> pool.submit(() -> count.incrementAndGet()));
+        // No need to call latch.release() because the thread should be interrupted.
+        assertTrue(pool.awaitTermination(5, SECONDS));
+        assertTrue(pool.isTerminated());
+        assertNotNull(error.get());
+        assertEquals(InterruptedException.class, error.get().getClass());
+        assertEquals(1, count.get());
+    }
+
+    @Test(timeout = 10000)
+    public void testCancel() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        AtomicInteger count = new AtomicInteger(0);
+        ScheduledFuture<Integer> future = pool.schedule(() -> count.incrementAndGet(), 20, SECONDS);
+        assertTrue(future.cancel(false));
+        AssertExtensions.assertThrows(CancellationException.class, () -> future.get());
+        assertEquals(0, count.get());
+        assertTrue(pool.shutdownNow().isEmpty());
+        assertTrue(pool.awaitTermination(1, SECONDS));
+    }
+
+    @Test(timeout = 10000)
+    public void testCancelRecurring() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        ReusableLatch latch = new ReusableLatch(false);
+        AtomicInteger count = new AtomicInteger(0);
+        AtomicReference<Exception> error = new AtomicReference<>();
+        ScheduledFuture<?> future = pool.scheduleAtFixedRate(() -> {
+            count.incrementAndGet();
+            try {
+                latch.await();
+            } catch (Exception e) {
+                error.set(e);
+            }
+        }, 0, 20, SECONDS);
+        assertFalse(future.isCancelled());
+        assertFalse(future.isDone());
+        AssertExtensions.assertEventuallyEquals(1, count::get, 5000);
+        assertTrue(future.cancel(false));
+        latch.release();
+        AssertExtensions.assertThrows(CancellationException.class, () -> future.get());
+        assertTrue(future.isCancelled());
+        assertTrue(future.isDone());
+        assertEquals(1, count.get());
+        assertTrue(pool.shutdownNow().isEmpty());
+        assertTrue(pool.awaitTermination(1, SECONDS));
+    }
+
+    @Test(timeout = 10000)
+    public void testShutdownWithRecurring() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        ReusableLatch latch = new ReusableLatch(false);
+        AtomicInteger count = new AtomicInteger(0);
+        AtomicReference<Exception> error = new AtomicReference<>();
+        ScheduledFuture<?> future = pool.scheduleAtFixedRate(() -> {
+            count.incrementAndGet();
+            try {
+                latch.await();
+            } catch (Exception e) {
+                error.set(e);
+            }
+        }, 0, 20, SECONDS);
+        assertFalse(future.isCancelled());
+        assertFalse(future.isDone());
+        AssertExtensions.assertEventuallyEquals(1, count::get, 5000);
+        pool.shutdown();
+        latch.release();
+        AssertExtensions.assertThrows(CancellationException.class, () -> future.get());
+        assertTrue(future.isCancelled());
+        assertTrue(future.isDone());
+        assertEquals(1, count.get());
+        assertTrue(pool.awaitTermination(1, SECONDS));
+    }
+
+    @Test(timeout = 10000)
+    public void testDelays() throws Exception {
+        ThreadPoolScheduledExecutorService pool = createPool(1);
+        ScheduledFuture<?> f20 = pool.schedule(() -> { }, 20, SECONDS);
+        ScheduledFuture<?> f30 = pool.schedule(() -> { }, 30, SECONDS);
+        assertTrue(f20.getDelay(SECONDS) <= 20);
+        assertTrue(f20.getDelay(SECONDS) > 18);
+        assertTrue(f30.getDelay(SECONDS) <= 30);
+        assertTrue(f30.getDelay(SECONDS) > 28);
+        assertTrue(f20.compareTo(f30) < 0);
+        assertTrue(f30.compareTo(f20) > 0);
+        assertTrue(f30.compareTo(f30) == 0);
+        pool.shutdown();
+        assertTrue(pool.awaitTermination(1, SECONDS));
+    }
+}
diff --git a/common/src/test/java/io/pravega/common/io/filesystem/FileModificationEventWatcherTests.java
b/common/src/test/java/io/pravega/common/io/filesystem/FileModificationEventWatcherTests.java index 7d70a8b1fe6..c77445d8cc4 100644 --- a/common/src/test/java/io/pravega/common/io/filesystem/FileModificationEventWatcherTests.java +++ b/common/src/test/java/io/pravega/common/io/filesystem/FileModificationEventWatcherTests.java @@ -70,7 +70,7 @@ FileModificationMonitor prepareObjectUnderTest(Path path, boolean checkForFileEx @Test(timeout = 15000) public void testInvokesCallBackForFileModification() throws IOException, InterruptedException { - File tempFile = this.createTempFile(); + File tempFile = FileModificationMonitorTests.createTempFile(); AtomicBoolean isCallbackInvoked = new AtomicBoolean(false); FileModificationEventWatcher watcher = new FileModificationEventWatcher(tempFile.toPath(), diff --git a/common/src/test/java/io/pravega/common/security/TLSProtocolVersionTest.java b/common/src/test/java/io/pravega/common/security/TLSProtocolVersionTest.java new file mode 100644 index 00000000000..9c1822a81a0 --- /dev/null +++ b/common/src/test/java/io/pravega/common/security/TLSProtocolVersionTest.java @@ -0,0 +1,47 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.common.security; + +import io.pravega.test.common.AssertExtensions; +import org.junit.Assert; +import org.junit.Test; + +/** + * Test to check the correctness of TLS Protocol version enum. 
+ */ + +public class TLSProtocolVersionTest { + + @Test + public void passingValidTlsProtocolVersionTest() { + String tls12 = "TLSv1.2"; + String tls13 = "TLSv1.3"; + String tls1213 = "TLSv1.2,TLSv1.3"; + String tls1312 = "TLSv1.3,TLSv1.2"; + + Assert.assertArrayEquals(new String[] {"TLSv1.2"}, new TLSProtocolVersion(tls12).getProtocols()); + Assert.assertArrayEquals(new String[] {"TLSv1.3"}, new TLSProtocolVersion(tls13).getProtocols()); + Assert.assertArrayEquals(new String[] {"TLSv1.2", "TLSv1.3"}, new TLSProtocolVersion(tls1213).getProtocols()); + Assert.assertArrayEquals(new String[] {"TLSv1.3", "TLSv1.2"}, new TLSProtocolVersion(tls1312).getProtocols()); + } + + @Test + public void passingInValidTlsProtocolVersionTest() { + AssertExtensions.assertThrows(IllegalArgumentException.class, () -> new TLSProtocolVersion("TLSv1.1").getProtocols()); + AssertExtensions.assertThrows(IllegalArgumentException.class, () -> new TLSProtocolVersion("TLSv1_2").getProtocols()); + AssertExtensions.assertThrows(IllegalArgumentException.class, () -> new TLSProtocolVersion("TLSv1.4").getProtocols()); + } +} \ No newline at end of file diff --git a/common/src/test/java/io/pravega/common/util/BlockingDrainingQueueTests.java b/common/src/test/java/io/pravega/common/util/BlockingDrainingQueueTests.java index 88171c1fbb1..890bfa657b9 100644 --- a/common/src/test/java/io/pravega/common/util/BlockingDrainingQueueTests.java +++ b/common/src/test/java/io/pravega/common/util/BlockingDrainingQueueTests.java @@ -341,6 +341,7 @@ public ScheduledFuture schedule(Runnable command, long delay, TimeUnit unit) * {@link #take} and returns an instance of {@link InterceptableFuture}. */ private static class InterceptableQueue extends BlockingDrainingQueue { + @Override @VisibleForTesting protected CompletableFuture> newTakeResult() { return new InterceptableFuture<>(); diff --git a/common/src/test/java/io/pravega/common/util/BufferViewTestBase.java b/common/src/test/java/io/pravega/common/util/BufferViewTestBase.java index e817384bb68..a2a8af30653 100644 --- a/common/src/test/java/io/pravega/common/util/BufferViewTestBase.java +++ b/common/src/test/java/io/pravega/common/util/BufferViewTestBase.java @@ -52,7 +52,7 @@ public void testBasicFunctionality() throws Exception { val expectedData = new byte[data.getLength() - SKIP_COUNT]; System.arraycopy(data.array(), data.arrayOffset() + SKIP_COUNT, expectedData, 0, expectedData.length); - val wrapData = (ArrayView) data.slice(SKIP_COUNT, data.getLength() - SKIP_COUNT); + val wrapData = data.slice(SKIP_COUNT, data.getLength() - SKIP_COUNT); @Cleanup("release") val bufferView = toBufferView(wrapData); diff --git a/common/src/test/java/io/pravega/common/util/CompositeBufferViewTests.java b/common/src/test/java/io/pravega/common/util/CompositeBufferViewTests.java index 92ea102832d..677cc5c0091 100644 --- a/common/src/test/java/io/pravega/common/util/CompositeBufferViewTests.java +++ b/common/src/test/java/io/pravega/common/util/CompositeBufferViewTests.java @@ -86,6 +86,7 @@ public void testWrapRecursive() throws IOException { /** * Tests {@link CompositeBufferView#getReader()}. 
*/ + @Override @Test public void testGetReader() throws IOException { val components = createComponents(); diff --git a/common/src/test/java/io/pravega/common/util/CompositeByteArraySegmentTests.java b/common/src/test/java/io/pravega/common/util/CompositeByteArraySegmentTests.java index a07a06b95b9..77688075612 100644 --- a/common/src/test/java/io/pravega/common/util/CompositeByteArraySegmentTests.java +++ b/common/src/test/java/io/pravega/common/util/CompositeByteArraySegmentTests.java @@ -137,6 +137,7 @@ public void testCopyFromOverflow() { /** * Tests the {@link CompositeByteArraySegment#copyTo(ByteBuffer)} method. */ + @Override @Test public void testCopyToByteBuffer() { testProgressiveCopies((expectedData, s, offset, length) -> { @@ -149,6 +150,7 @@ public void testCopyToByteBuffer() { /** * Tests the functionality of {@link CompositeByteArraySegment#copyTo(OutputStream)}. */ + @Override @Test public void testCopyToStream() { testProgressiveCopies((expectedData, s, offset, length) -> { diff --git a/common/src/test/java/io/pravega/common/util/RetryTests.java b/common/src/test/java/io/pravega/common/util/RetryTests.java index e9e20bdf68e..ee625ec2530 100644 --- a/common/src/test/java/io/pravega/common/util/RetryTests.java +++ b/common/src/test/java/io/pravega/common/util/RetryTests.java @@ -113,6 +113,7 @@ public void retryTests() { @Test public void retryFutureTests() { + @Cleanup("shutdownNow") ScheduledExecutorService executorService = ExecutorServiceHelpers.newScheduledThreadPool(5, "testpool"); // 1. series of retryable exceptions followed by a failure @@ -171,6 +172,7 @@ public void retryFutureTests() { @Test public void retryFutureInExecutorTests() throws ExecutionException { + @Cleanup("shutdownNow") ScheduledExecutorService executorService = ExecutorServiceHelpers.newScheduledThreadPool(5, "testpool"); // 1. series of retryable exceptions followed by a failure diff --git a/common/src/test/java/io/pravega/common/util/SortedIndexTestBase.java b/common/src/test/java/io/pravega/common/util/SortedIndexTestBase.java index 2547988215d..6201831cd05 100644 --- a/common/src/test/java/io/pravega/common/util/SortedIndexTestBase.java +++ b/common/src/test/java/io/pravega/common/util/SortedIndexTestBase.java @@ -280,8 +280,8 @@ public void testSortedInput() { //Get + GetCeiling. for (int key = 0; key < ITEM_COUNT; key++) { - Assert.assertEquals("Unexpected value from get() for key " + key, key, (long) index.get(key).key()); - Assert.assertEquals("Unexpected value from getCeiling() for key " + key, key, (long) index.getCeiling(key).key()); + Assert.assertEquals("Unexpected value from get() for key " + key, key, index.get(key).key()); + Assert.assertEquals("Unexpected value from getCeiling() for key " + key, key, index.getCeiling(key).key()); } // Remove + get. 
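        // (Illustration: with keys {0..N-1}, after index.remove(k) the smallest remaining key >= k is k + 1,
        // so getCeiling(k).key() == k + 1, while getCeiling(N - 1) becomes null; the next hunk asserts exactly this.)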
@@ -292,7 +292,7 @@ public void testSortedInput() { if (key == ITEM_COUNT - 1) { Assert.assertNull("Unexpected value from getCeiling() for removed key " + key, index.getCeiling(key)); } else { - Assert.assertEquals("Unexpected value from getCeiling() for removed key " + key, key + 1, (long) index.getCeiling(key).key()); + Assert.assertEquals("Unexpected value from getCeiling() for removed key " + key, key + 1, index.getCeiling(key).key()); } } } diff --git a/common/src/test/java/io/pravega/common/util/ToStringUtilsTest.java b/common/src/test/java/io/pravega/common/util/ToStringUtilsTest.java index c602b2db954..10d9aaa25e8 100644 --- a/common/src/test/java/io/pravega/common/util/ToStringUtilsTest.java +++ b/common/src/test/java/io/pravega/common/util/ToStringUtilsTest.java @@ -16,15 +16,12 @@ package io.pravega.common.util; import io.pravega.test.common.AssertExtensions; - -import java.io.IOException; import java.nio.charset.StandardCharsets; import java.util.Arrays; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Random; - import org.junit.Test; import static io.pravega.common.util.ToStringUtils.compressToBase64; @@ -73,7 +70,7 @@ public void testBadListValues() { } @Test - public void testCompressBase64() throws IOException { + public void testCompressBase64() { //generate a random string. byte[] array = new byte[10]; new Random().nextBytes(array); diff --git a/common/src/test/java/io/pravega/common/util/TypedPropertiesTests.java b/common/src/test/java/io/pravega/common/util/TypedPropertiesTests.java index 92d70bca659..4431bb47bce 100644 --- a/common/src/test/java/io/pravega/common/util/TypedPropertiesTests.java +++ b/common/src/test/java/io/pravega/common/util/TypedPropertiesTests.java @@ -16,7 +16,6 @@ package io.pravega.common.util; import io.pravega.test.common.AssertExtensions; - import java.time.Duration; import java.time.temporal.ChronoUnit; import java.util.Arrays; @@ -141,10 +140,26 @@ public void testGetPositiveInt() { TypedProperties typedProps = new TypedProperties(props, "getPositiveInteger"); Assert.assertEquals(1000, typedProps.getPositiveInt(Property.named("positiveInteger"))); AssertExtensions.assertThrows(ConfigurationException.class, () -> typedProps.getPositiveInt(Property.named("zero"))); + Assert.assertEquals(0, typedProps.getNonNegativeInt(Property.named("zero"))); AssertExtensions.assertThrows(ConfigurationException.class, () -> typedProps.getPositiveInt(Property.named("negativeInteger"))); + AssertExtensions.assertThrows(ConfigurationException.class, () -> typedProps.getNonNegativeInt(Property.named("negativeInteger"))); AssertExtensions.assertThrows(ConfigurationException.class, () -> typedProps.getPositiveInt(Property.named("notAnInteger"))); } + @Test + public void testGetPositiveLong() { + Properties props = new Properties(); + props.setProperty("getPositiveLong.positiveLong", "1000000000000"); + props.setProperty("getPositiveLong.zero", "0"); + props.setProperty("getPositiveLong.negativeLong", "-1"); + props.setProperty("getPositiveLong.notALong", "hello"); + TypedProperties typedProps = new TypedProperties(props, "getPositiveLong"); + Assert.assertEquals(1000000000000L, typedProps.getPositiveLong(Property.named("positiveLong"))); + AssertExtensions.assertThrows(ConfigurationException.class, () -> typedProps.getPositiveLong(Property.named("zero"))); + AssertExtensions.assertThrows(ConfigurationException.class, () -> typedProps.getPositiveLong(Property.named("negativeLong"))); + 
AssertExtensions.assertThrows(ConfigurationException.class, () -> typedProps.getPositiveLong(Property.named("notALong"))); + } + @Test public void testGetDuration() { Properties props = new Properties(); diff --git a/common_server/src/main/java/io/pravega/common/util/btree/BTreeIndex.java b/common_server/src/main/java/io/pravega/common/util/btree/BTreeIndex.java index e1f42ab1a4d..7fd97454c91 100644 --- a/common_server/src/main/java/io/pravega/common/util/btree/BTreeIndex.java +++ b/common_server/src/main/java/io/pravega/common/util/btree/BTreeIndex.java @@ -769,6 +769,7 @@ private long deserializePointerMinOffset(ByteArraySegment serialization) { return serialization.getLong(Long.BYTES + Short.BYTES); } + @SuppressWarnings("all") private ByteArraySegment generateMinKey() { byte[] result = new byte[this.indexPageConfig.getKeyLength()]; if (BufferViewComparator.MIN_VALUE != 0) { diff --git a/common_server/src/test/java/io/pravega/common/util/btree/BTreePageTests.java b/common_server/src/test/java/io/pravega/common/util/btree/BTreePageTests.java index d6cfe715aa0..c412e85d1f6 100644 --- a/common_server/src/test/java/io/pravega/common/util/btree/BTreePageTests.java +++ b/common_server/src/test/java/io/pravega/common/util/btree/BTreePageTests.java @@ -345,7 +345,7 @@ public void testSplit() { int headerId = page.getHeaderId(); for (int item = 0; item < count; item++) { // Add one more entry to the page. - page.update(Collections.singletonList(new PageEntry(serializeInt(item), serializeLong((long) (item + 1))))); + page.update(Collections.singletonList(new PageEntry(serializeInt(item), serializeLong(item + 1)))); boolean expectedSplit = page.getLength() > CONFIG.getMaxPageSize(); val splitResult = page.splitIfNecessary(); diff --git a/config/admin-cli.properties b/config/admin-cli.properties index 5c3e9452a68..bc3672285ff 100644 --- a/config/admin-cli.properties +++ b/config/admin-cli.properties @@ -14,12 +14,14 @@ # limitations under the License. # -# Config for CLI credentials (at the moment, Controller auth is only supported). -cli.controller.connect.channel.auth=false -cli.controller.connect.channel.tls=false -cli.controller.connect.credentials.username=admin -cli.controller.connect.credentials.pwd=1111_aaaa -cli.controller.connect.trustStore.location=conf/ca-cert.crt +# Config for CLI credentials . +cli.channel.auth=true +cli.channel.tls=true +cli.credentials.username=admin +cli.credentials.pwd=1111_aaaa +cli.trustStore.location=conf/ca-cert.crt +cli.trustStore.access.token.ttl.seconds=600 + # Config for CLI Controller REST/GRPC endpoints. cli.controller.connect.rest.uri=localhost:9091 diff --git a/config/config.properties b/config/config.properties index 92f66147f8c..d4dee6b5331 100644 --- a/config/config.properties +++ b/config/config.properties @@ -98,7 +98,9 @@ pravegaservice.zk.connect.uri=localhost:2181 pravegaservice.dataLog.impl.name=BOOKKEEPER # Storage implementation for Long-Term Storage. -# Valid values: HDFS, FILESYSTEM, EXTENDEDS3, INMEMORY. +# Valid values: Name of valid storage implementation. +# For built in implementations: HDFS, FILESYSTEM, EXTENDEDS3, INMEMORY. +# For custom implementations: Any valid identifier. Eg. MY_STORAGE. # Default value: HDFS # pravegaservice.storage.impl.name=HDFS @@ -108,7 +110,7 @@ pravegaservice.dataLog.impl.name=BOOKKEEPER # CHUNKED_STORAGE - Using ChunkedSegmentStorage. # ROLLING_STORAGE - Using RollingStorage. 
# Default value: ROLLING_STORAGE -# pravegaservice.storage.layout=ROLLING_STORAGE +pravegaservice.storage.layout=CHUNKED_STORAGE # NOTE: ChunkedSegmentStorage is an experimental feature. Its behavior and/or API should be expected to be in flux until fully released. # Size of chunk in bytes above which it is no longer considered a small object. diff --git a/config/standalone-config.properties b/config/standalone-config.properties index 168b92321d6..05fe0ab24ee 100644 --- a/config/standalone-config.properties +++ b/config/standalone-config.properties @@ -55,6 +55,11 @@ # Default value: false #singlenode.security.tls.enable=false +# TLS Protocol Version +# Valid values: 'TLSv1.2' (strict) or 'TLSv1.3' (strict) or 'TLSv1.2,TLSv1.3' (mixed) +# Default value: 'TLSv1.2,TLSv1.3' +#singlenode.security.tls.protocolVersion=TLSv1.2,TLSv1.3 + # Location of server's key file for TLS communication. #singlenode.security.tls.privateKey.location=../config/server-key.key diff --git a/controller/src/conf/controller.config.properties b/controller/src/conf/controller.config.properties index b856b0894d9..0fbeeece61d 100644 --- a/controller/src/conf/controller.config.properties +++ b/controller/src/conf/controller.config.properties @@ -49,6 +49,7 @@ controller.security.auth.enable=${AUTHORIZATION_ENABLED} controller.security.auth.delegationToken.signingKey.basis=${TOKEN_SIGNING_KEY} controller.security.auth.delegationToken.ttl.seconds=${ACCESS_TOKEN_TTL_IN_SECONDS} controller.security.tls.enable=${TLS_ENABLED} +controller.security.tls.protocolVersion=${TLS_PROTOCOL_VERSION} controller.security.tls.server.certificate.location=${TLS_CERT_FILE} controller.security.tls.trustStore.location=${TLS_TRUST_STORE} controller.security.tls.server.privateKey.location=${TLS_KEY_FILE} diff --git a/controller/src/main/java/io/pravega/controller/eventProcessor/impl/EventProcessorGroupImpl.java b/controller/src/main/java/io/pravega/controller/eventProcessor/impl/EventProcessorGroupImpl.java index c5a4ba79a14..3ef9c0dd2f7 100644 --- a/controller/src/main/java/io/pravega/controller/eventProcessor/impl/EventProcessorGroupImpl.java +++ b/controller/src/main/java/io/pravega/controller/eventProcessor/impl/EventProcessorGroupImpl.java @@ -100,7 +100,7 @@ public final class EventProcessorGroupImpl extends Ab .clientFactory .createEventWriter(eventProcessorConfig.getConfig().getStreamName(), eventProcessorConfig.getSerializer(), - EventWriterConfig.builder().retryAttempts(Integer.MAX_VALUE).build()); + EventWriterConfig.builder().enableConnectionPooling(true).retryAttempts(Integer.MAX_VALUE).build()); this.checkpointStore = checkpointStore; this.rebalancePeriodMillis = eventProcessorConfig.getRebalancePeriodMillis(); } diff --git a/controller/src/main/java/io/pravega/controller/fault/ControllerClusterListener.java b/controller/src/main/java/io/pravega/controller/fault/ControllerClusterListener.java index b21b62d9959..8e37a003735 100644 --- a/controller/src/main/java/io/pravega/controller/fault/ControllerClusterListener.java +++ b/controller/src/main/java/io/pravega/controller/fault/ControllerClusterListener.java @@ -67,6 +67,33 @@ public ControllerClusterListener(final Host host, final Cluster cluster, this.sweepers = Lists.newArrayList(sweepers); } + /** + * Get the zookeeper health status. + * + * @return true if zookeeper is connected. + */ + public boolean isMetadataServiceConnected() { + return cluster.isHealthy(); + } + + /** + * Get the sweepers status. + * + * @return true if all sweepers are ready. 
+     */
+    public boolean areAllSweepersReady() {
+        return sweepers.stream().allMatch(s -> s.isReady());
+    }
+
+    /**
+     * Checks whether the service is ready.
+     *
+     * @return true if zookeeper is connected and all sweepers are ready.
+     */
+    public boolean isReady() {
+        return isMetadataServiceConnected() && areAllSweepersReady();
+    }
+
     @Override
     protected void startUp() throws InterruptedException {
         long traceId = LoggerHelpers.traceEnter(log, objectId, "startUp");
diff --git a/controller/src/main/java/io/pravega/controller/fault/SegmentContainerMonitor.java b/controller/src/main/java/io/pravega/controller/fault/SegmentContainerMonitor.java
index cee444bbbf4..d6dce8a9529 100644
--- a/controller/src/main/java/io/pravega/controller/fault/SegmentContainerMonitor.java
+++ b/controller/src/main/java/io/pravega/controller/fault/SegmentContainerMonitor.java
@@ -23,6 +23,8 @@
 import org.apache.curator.framework.recipes.leader.LeaderSelector;
 import org.apache.curator.utils.ZKPaths;
 
+import java.util.concurrent.atomic.AtomicBoolean;
+
 /**
 * Class used to monitor the pravega host cluster for failures and ensure the segment containers owned by them are
 * assigned to the other pravega hosts.
@@ -41,6 +43,8 @@ public class SegmentContainerMonitor extends AbstractIdleService {
     //The ZK path which is monitored for leader selection.
     private final String leaderZKPath;
 
+    private final AtomicBoolean zkConnected = new AtomicBoolean(false);
+
     /**
     * Monitor to manage pravega host addition and removal in the cluster.
     *
@@ -61,9 +65,11 @@ public SegmentContainerMonitor(HostControllerStore hostStore, CuratorFramework c
         segmentMonitorLeader = new SegmentMonitorLeader(hostStore, balancer, minRebalanceInterval);
         leaderSelector = new LeaderSelector(client, leaderZKPath, segmentMonitorLeader);
 
-        //Listen for any zookeeper connectivity error and relinquish leadership.
+        this.zkConnected.set(client.getZookeeperClient().isConnected());
+        // Listen for any zookeeper connection state changes.
         client.getConnectionStateListenable().addListener(
                 (curatorClient, newState) -> {
+                    this.zkConnected.set(newState.isConnected());
                     switch (newState) {
                         case LOST:
                             log.warn("Connection to zookeeper lost, attempting to interrupt the leader thread");
@@ -89,6 +95,10 @@ public SegmentContainerMonitor(HostControllerStore hostStore, CuratorFramework c
         );
     }
 
+    public boolean isZKConnected() {
+        return zkConnected.get();
+    }
+
     /**
     * Start the leader selection process.
     */
diff --git a/controller/src/main/java/io/pravega/controller/server/ControllerService.java b/controller/src/main/java/io/pravega/controller/server/ControllerService.java
index 767c3810263..920bcbcf4c8 100644
--- a/controller/src/main/java/io/pravega/controller/server/ControllerService.java
+++ b/controller/src/main/java/io/pravega/controller/server/ControllerService.java
@@ -130,6 +130,7 @@ public CompletableFuture createKeyValueTable(String s
         Preconditions.checkArgument(kvtConfig.getPartitionCount() > 0);
         Preconditions.checkArgument(kvtConfig.getPrimaryKeyLength() > 0);
         Preconditions.checkArgument(kvtConfig.getSecondaryKeyLength() >= 0);
+        Preconditions.checkArgument(kvtConfig.getRolloverSizeBytes() >= 0);
         Timer timer = new Timer();
         try {
             NameUtils.validateUserKeyValueTableName(kvtName);
@@ -217,8 +218,8 @@ public CompletableFuture deleteKeyValueTable(final String s
     * @return Create ReaderGroup status future.
*/ public CompletableFuture createReaderGroup(String scope, String rgName, - final ReaderGroupConfig rgConfig, - final long createTimestamp, + final ReaderGroupConfig rgConfig, + final long createTimestamp, final long requestId) { Preconditions.checkNotNull(scope, "ReaderGroup scope is null"); Preconditions.checkNotNull(rgName, "ReaderGroup name is null"); @@ -347,6 +348,9 @@ public CompletableFuture createStream(String scope, String s final long createTimestamp, long requestId) { Preconditions.checkNotNull(streamConfig, "streamConfig"); Preconditions.checkArgument(createTimestamp >= 0); + Preconditions.checkArgument(streamConfig.getRolloverSizeBytes() >= 0, + String.format("Segment rollover size bytes cannot be less than 0, actual is %s", streamConfig.getRolloverSizeBytes())); + Timer timer = new Timer(); try { NameUtils.validateStreamName(stream); @@ -382,17 +386,17 @@ public CompletableFuture createStream(String scope, String s * @param stream stream * @param streamConfig stream configuration * @param requestId request id - * @return Update stream status future. + * @return Update stream status future. */ - public CompletableFuture updateStream(String scope, String stream, final StreamConfiguration streamConfig, + public CompletableFuture updateStream(String scope, String stream, final StreamConfiguration streamConfig, long requestId) { Preconditions.checkNotNull(streamConfig, "streamConfig"); Timer timer = new Timer(); return streamMetadataTasks.updateStream(scope, stream, streamConfig, requestId) - .thenApplyAsync(status -> { - reportUpdateStreamMetrics(scope, stream, status, timer.getElapsed()); - return UpdateStreamStatus.newBuilder().setStatus(status).build(); - }, executor); + .thenApplyAsync(status -> { + reportUpdateStreamMetrics(scope, stream, status, timer.getElapsed()); + return UpdateStreamStatus.newBuilder().setStatus(status).build(); + }, executor); } /** @@ -630,8 +634,9 @@ public CompletableFuture>> createTransaction(final Exceptions.checkNotNullOrEmpty(scope, "scope"); Exceptions.checkNotNullOrEmpty(stream, "stream"); Timer timer = new Timer(); - - return streamTransactionMetadataTasks.createTxn(scope, stream, lease, requestId) + OperationContext context = streamStore.createStreamContext(scope, stream, requestId); + return streamStore.getConfiguration(scope, stream, context, executor).thenCompose(streamConfig -> + streamTransactionMetadataTasks.createTxn(scope, stream, lease, requestId, streamConfig.getRolloverSizeBytes())) .thenApply(pair -> { VersionedTransactionData data = pair.getKey(); List segments = pair.getValue(); diff --git a/controller/src/main/java/io/pravega/controller/server/ControllerServiceStarter.java b/controller/src/main/java/io/pravega/controller/server/ControllerServiceStarter.java index 2576ab9dd9e..baa49d56aa4 100644 --- a/controller/src/main/java/io/pravega/controller/server/ControllerServiceStarter.java +++ b/controller/src/main/java/io/pravega/controller/server/ControllerServiceStarter.java @@ -48,6 +48,12 @@ import io.pravega.controller.server.rest.resources.StreamMetadataResourceImpl; import io.pravega.shared.health.HealthServiceManager; import io.pravega.shared.rest.RESTServer; +import io.pravega.controller.server.health.ClusterListenerHealthContributor; +import io.pravega.controller.server.health.EventProcessorHealthContributor; +import io.pravega.controller.server.health.GRPCServerHealthContributor; +import io.pravega.controller.server.health.RetentionServiceHealthContributor; +import 
io.pravega.controller.server.health.SegmentContainerMonitorHealthContributor; +import io.pravega.controller.server.health.WatermarkingServiceHealthContributor; import io.pravega.controller.server.rpc.grpc.GRPCServer; import io.pravega.controller.server.rpc.grpc.GRPCServerConfig; import io.pravega.controller.server.security.auth.GrpcAuthHelper; @@ -220,6 +226,9 @@ protected void startUp() { GRPCServerConfig grpcServerConfig = serviceConfig.getGRPCServerConfig().get(); RequestTracker requestTracker = new RequestTracker(grpcServerConfig.isRequestTracingEnabled()); + // Create a Health Service Manager instance. + healthServiceManager = new HealthServiceManager(serviceConfig.getHealthCheckFrequency()); + if (serviceConfig.getHostMonitorConfig().isHostMonitorEnabled()) { //Start the Segment Container Monitor. monitor = new SegmentContainerMonitor(hostStore, (CuratorFramework) storeClient.getClient(), @@ -227,6 +236,8 @@ protected void startUp() { serviceConfig.getHostMonitorConfig().getHostMonitorMinRebalanceInterval()); log.info("Starting segment container monitor"); monitor.startAsync(); + SegmentContainerMonitorHealthContributor segmentContainerMonitorHC = new SegmentContainerMonitorHealthContributor("segmentContainerMonitor", monitor ); + healthServiceManager.register(segmentContainerMonitorHC); } // This client config is used by the segment store helper (SegmentHelper) to connect to the segment store. @@ -241,7 +252,8 @@ protected void startUp() { clientConfigBuilder.enableTlsToSegmentStore(tlsEnabledForSegmentStore.get()); } - ClientConfig clientConfig = clientConfigBuilder.build(); + // Use one connection per Segment Store to save up resources. + ClientConfig clientConfig = clientConfigBuilder.maxConnectionsPerSegmentStore(1).build(); connectionFactory = connectionFactoryRef.orElseGet(() -> new SocketConnectionFactoryImpl(clientConfig)); connectionPool = new ConnectionPoolImpl(clientConfig, connectionFactory); segmentHelper = segmentHelperRef.orElseGet(() -> new SegmentHelper(connectionPool, hostStore, controllerExecutor)); @@ -268,6 +280,8 @@ protected void startUp() { log.info("starting background periodic service for retention"); retentionService.startAsync(); retentionService.awaitRunning(); + RetentionServiceHealthContributor retentionServiceHC = new RetentionServiceHealthContributor("retentionService", retentionService); + healthServiceManager.register(retentionServiceHC); Duration executionDurationWatermarking = Duration.ofSeconds(Config.MINIMUM_WATERMARKING_FREQUENCY_IN_SECONDS); watermarkingWork = new PeriodicWatermarking(streamStore, bucketStore, @@ -278,6 +292,8 @@ protected void startUp() { log.info("starting background periodic service for watermarking"); watermarkingService.startAsync(); watermarkingService.awaitRunning(); + WatermarkingServiceHealthContributor watermarkingServiceHC = new WatermarkingServiceHealthContributor("watermarkingService", watermarkingService); + healthServiceManager.register(watermarkingServiceHC); // Controller has a mechanism to track the currently active controller host instances. 
On detecting a failure of // any controller instance, the failure detector stores the failed HostId in a failed hosts directory (FH), and @@ -322,6 +338,8 @@ protected void startUp() { eventProcessorFuture = controllerEventProcessors.bootstrap(streamTransactionMetadataTasks, streamMetadataTasks, kvtMetadataTasks) .thenAcceptAsync(x -> controllerEventProcessors.startAsync(), eventExecutor); + EventProcessorHealthContributor eventProcessorHC = new EventProcessorHealthContributor("eventProcessor", controllerEventProcessors); + healthServiceManager.register(eventProcessorHC); } // Setup and start controller cluster listener after all sweepers have been initialized. @@ -339,10 +357,12 @@ protected void startUp() { log.info("Starting controller cluster listener"); controllerClusterListener.startAsync(); + ClusterListenerHealthContributor clusterListenerHC = new ClusterListenerHealthContributor("clusterListener", controllerClusterListener); + healthServiceManager.register(clusterListenerHC); } // Start the Health Service. - healthServiceManager = new HealthServiceManager(serviceConfig.getHealthCheckFrequency()); + log.info("Starting health manager"); healthServiceManager.start(); // Start RPC server. @@ -351,6 +371,8 @@ protected void startUp() { grpcServer.startAsync(); log.info("Awaiting start of rpc server"); grpcServer.awaitRunning(); + GRPCServerHealthContributor grpcServerHC = new GRPCServerHealthContributor("GRPCServer", grpcServer); + healthServiceManager.register(grpcServerHC); } // Start REST server. diff --git a/controller/src/main/java/io/pravega/controller/server/Main.java b/controller/src/main/java/io/pravega/controller/server/Main.java index c0e3b90b3a1..62bb8627818 100644 --- a/controller/src/main/java/io/pravega/controller/server/Main.java +++ b/controller/src/main/java/io/pravega/controller/server/Main.java @@ -91,6 +91,7 @@ public static void main(String[] args) { .host(Config.REST_SERVER_IP) .port(Config.REST_SERVER_PORT) .tlsEnabled(Config.TLS_ENABLED) + .tlsProtocolVersion(Config.TLS_PROTOCOL_VERSION.toArray(new String[Config.TLS_PROTOCOL_VERSION.size()])) .keyFilePath(Config.REST_KEYSTORE_FILE_PATH) .keyFilePasswordPath(Config.REST_KEYSTORE_PASSWORD_FILE_PATH) .build(); diff --git a/controller/src/main/java/io/pravega/controller/server/SegmentHelper.java b/controller/src/main/java/io/pravega/controller/server/SegmentHelper.java index 1ddf18aa7d0..ac76fffc905 100644 --- a/controller/src/main/java/io/pravega/controller/server/SegmentHelper.java +++ b/controller/src/main/java/io/pravega/controller/server/SegmentHelper.java @@ -105,6 +105,7 @@ public class SegmentHelper implements AutoCloseable { .put(WireCommands.ReadTable.class, ImmutableSet.of(WireCommands.TableRead.class)) .put(WireCommands.ReadTableKeys.class, ImmutableSet.of(WireCommands.TableKeysRead.class)) .put(WireCommands.ReadTableEntries.class, ImmutableSet.of(WireCommands.TableEntriesRead.class)) + .put(WireCommands.GetTableSegmentInfo.class, ImmutableSet.of(WireCommands.TableSegmentInfo.class)) .build(); private static final Map, Set>> EXPECTED_FAILING_REPLIES = @@ -120,12 +121,13 @@ public class SegmentHelper implements AutoCloseable { .put(WireCommands.ReadTable.class, ImmutableSet.of(WireCommands.NoSuchSegment.class)) .put(WireCommands.ReadTableKeys.class, ImmutableSet.of(WireCommands.NoSuchSegment.class)) .put(WireCommands.ReadTableEntries.class, ImmutableSet.of(WireCommands.NoSuchSegment.class)) + .put(WireCommands.GetTableSegmentInfo.class, ImmutableSet.of(WireCommands.NoSuchSegment.class)) .build(); + 
protected final ConnectionPool connectionPool; + protected final ScheduledExecutorService executorService; + protected final AtomicReference timeout; private final HostControllerStore hostStore; - private final ConnectionPool connectionPool; - private final ScheduledExecutorService executorService; - private final AtomicReference timeout; public SegmentHelper(final ConnectionPool connectionPool, HostControllerStore hostStore, ScheduledExecutorService executorService) { @@ -157,7 +159,8 @@ public CompletableFuture createSegment(final String scope, final long segmentId, final ScalingPolicy policy, final String controllerToken, - final long clientRequestId) { + final long clientRequestId, + final long rolloverSizeBytes) { final String qualifiedStreamSegmentName = getQualifiedStreamSegmentName(scope, stream, segmentId); final Controller.NodeUri uri = getSegmentUri(scope, stream, segmentId); final WireCommandType type = WireCommandType.CREATE_SEGMENT; @@ -167,7 +170,7 @@ public CompletableFuture createSegment(final String scope, Pair extracted = extractFromPolicy(policy); return sendRequest(connection, clientRequestId, new WireCommands.CreateSegment(requestId, qualifiedStreamSegmentName, - extracted.getLeft(), extracted.getRight(), controllerToken)) + extracted.getLeft(), extracted.getRight(), controllerToken, rolloverSizeBytes)) .thenAccept(r -> handleReply(clientRequestId, r, connection, qualifiedStreamSegmentName, WireCommands.CreateSegment.class, type)); } @@ -238,7 +241,8 @@ public CompletableFuture createTransaction(final String scope, final long segmentId, final UUID txId, final String delegationToken, - final long clientRequestId) { + final long clientRequestId, + final long rolloverSizeBytes) { final Controller.NodeUri uri = getSegmentUri(scope, stream, segmentId); final String transactionName = getTransactionName(scope, stream, segmentId, txId); final WireCommandType type = WireCommandType.CREATE_SEGMENT; @@ -247,7 +251,7 @@ public CompletableFuture createTransaction(final String scope, final long requestId = connection.getFlow().asLong(); WireCommands.CreateSegment request = new WireCommands.CreateSegment(requestId, transactionName, - WireCommands.CreateSegment.NO_SCALE, 0, delegationToken); + WireCommands.CreateSegment.NO_SCALE, 0, delegationToken, rolloverSizeBytes); return sendRequest(connection, clientRequestId, request) .thenAccept(r -> handleReply(clientRequestId, r, connection, transactionName, WireCommands.CreateSegment.class, @@ -375,6 +379,7 @@ public CompletableFuture getSegmentInfo(String q * @param sortedTableSegment Boolean flag indicating if the Table Segment should be created in sorted order. * @param keyLength Key Length. If 0, a Hash Table Segment (Variable Key Length) will be created, otherwise * a Fixed-Key-Length Table Segment will be created with this value for the key length. + * @param rolloverSizeBytes The rollover size of segment in LTS. * @return A CompletableFuture that, when completed normally, will indicate the table segment creation completed * successfully. If the operation failed, the future will be failed with the causing exception. If the exception * can be retried then the future will be failed with {@link WireCommandFailedException}. 
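The common thread in these SegmentHelper changes is that every segment-creating wire command now carries the rollover size, so LTS can roll segment chunks at a stream-configured boundary. A caller-side sketch (variable names are illustrative; createStream validates that the value is non-negative earlier in this patch):

    // Propagate the stream's configured rollover size into segment creation.
    long rolloverSizeBytes = streamConfig.getRolloverSizeBytes();
    segmentHelper.createSegment(scope, stream, segmentId, policy, controllerToken,
            requestId, rolloverSizeBytes);
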
@@ -383,7 +388,8 @@ public CompletableFuture createTableSegment(final String tableName, String delegationToken, final long clientRequestId, final boolean sortedTableSegment, - final int keyLength) { + final int keyLength, + final long rolloverSizeBytes) { final Controller.NodeUri uri = getTableUri(tableName); final WireCommandType type = WireCommandType.CREATE_TABLE_SEGMENT; @@ -392,7 +398,7 @@ public CompletableFuture createTableSegment(final String tableName, final long requestId = connection.getFlow().asLong(); // All Controller Metadata Segments are non-sorted. - return sendRequest(connection, clientRequestId, new WireCommands.CreateTableSegment(requestId, tableName, sortedTableSegment, keyLength, delegationToken)) + return sendRequest(connection, clientRequestId, new WireCommands.CreateTableSegment(requestId, tableName, sortedTableSegment, keyLength, delegationToken, rolloverSizeBytes)) .thenAccept(rpl -> handleReply(clientRequestId, rpl, connection, tableName, WireCommands.CreateTableSegment.class, type)); } @@ -468,6 +474,53 @@ public CompletableFuture> updateTableEntries(final }); } + /** + * This method sends a WireCommand to get information about a Table Segment. + * + * @param tableName Qualified table name. + * @param delegationToken The token to be presented to the segmentstore. + * @param clientRequestId Request id. + * @return A CompletableFuture that, when completed successfully, will return information about the Table Segment. + * If the operation failed, the future will be failed with the causing exception. If the exception + * can be retried then the future will be failed with {@link WireCommandFailedException}. + */ + public CompletableFuture getTableSegmentInfo(final String tableName, + String delegationToken, + final long clientRequestId) { + + final Controller.NodeUri uri = getTableUri(tableName); + final WireCommandType type = WireCommandType.GET_TABLE_SEGMENT_INFO; + + RawClient connection = new RawClient(ModelHelper.encode(uri), connectionPool); + final long requestId = connection.getFlow().asLong(); + + // All Controller Metadata Segments are non-sorted. + return sendRequest(connection, clientRequestId, new WireCommands.GetTableSegmentInfo(requestId, tableName, delegationToken)) + .thenApply(r -> { + handleReply(clientRequestId, r, connection, tableName, WireCommands.GetTableSegmentInfo.class, + type); + assert r instanceof WireCommands.TableSegmentInfo; + return (WireCommands.TableSegmentInfo) r; + }); + } + + /** + * This method gets the entry count for a Table Segment. + * + * @param tableName Qualified table name. + * @param delegationToken The token to be presented to the segmentstore. + * @param clientRequestId Request id. + * @return A CompletableFuture that, when completed successfully, will return entry count of a Table Segment. + * If the operation failed, the future will be failed with the causing exception. If the exception + * can be retried then the future will be failed with {@link WireCommandFailedException}. + */ + public CompletableFuture getTableSegmentEntryCount(final String tableName, + String delegationToken, + final long clientRequestId) { + return getTableSegmentInfo(tableName, delegationToken, clientRequestId) + .thenApply(WireCommands.TableSegmentInfo::getEntryCount); + } + /** * This method sends a WireCommand to remove table keys. 
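A usage sketch for the two new read-only table APIs above (the table name and token are placeholders, handleCount is a hypothetical callback; getEntryCount() is the same accessor getTableSegmentEntryCount itself uses):

    segmentHelper.getTableSegmentInfo(qualifiedTableName, delegationToken, requestId)
            .thenAccept(info -> handleCount(info.getEntryCount()));   // handleCount: hypothetical
    // Or, when only the count is needed:
    segmentHelper.getTableSegmentEntryCount(qualifiedTableName, delegationToken, requestId)
            .thenAccept(count -> handleCount(count));
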
* @@ -560,9 +613,18 @@ public CompletableFuture> readTableKeys(f final HashTableIteratorItem.State state, final String delegationToken, final long clientRequestId) { - final Controller.NodeUri uri = getTableUri(tableName); + return readTableKeys(tableName, ModelHelper.encode(getTableUri(tableName)), suggestedKeyCount, state, + delegationToken, clientRequestId); + } + + public CompletableFuture> readTableKeys(final String tableName, + final PravegaNodeUri uri, + final int suggestedKeyCount, + final HashTableIteratorItem.State state, + final String delegationToken, + final long clientRequestId) { final WireCommandType type = WireCommandType.READ_TABLE_KEYS; - RawClient connection = new RawClient(ModelHelper.encode(uri), connectionPool); + RawClient connection = new RawClient(uri, connectionPool); final long requestId = connection.getFlow().asLong(); final HashTableIteratorItem.State token = (state == null) ? HashTableIteratorItem.State.EMPTY : state; @@ -635,7 +697,7 @@ public CompletableFuture> readTableEntr }); } - public CompletableFuture readSegment(String qualifiedName, int offset, int length, + public CompletableFuture readSegment(String qualifiedName, long offset, int length, PravegaNodeUri uri, String delegationToken) { final WireCommandType type = WireCommandType.READ_SEGMENT; RawClient connection = new RawClient(uri, connectionPool); @@ -737,7 +799,7 @@ private void closeConnection(Reply reply, RawClient client, long callerRequestId } } - private CompletableFuture sendRequest(RawClient connection, long clientRequestId, + protected CompletableFuture sendRequest(RawClient connection, long clientRequestId, T request) { log.trace(clientRequestId, "Sending request to segment store with: flowId: {}: request: {}", request.getRequestId(), request); @@ -781,7 +843,6 @@ void processAndRethrowException(long callerReq * @param qualifiedStreamSegmentName StreamSegmentName * @param requestType request which reply need to be transformed * @param type WireCommand for this request - * @return true if reply is in the expected reply set for the given requestType or throw exception. */ @SneakyThrows(ConnectionFailedException.class) private void handleReply(long callerRequestId, @@ -790,9 +851,33 @@ private void handleReply(long callerRequestId, String qualifiedStreamSegmentName, Class requestType, WireCommandType type) { + handleExpectedReplies(callerRequestId, reply, client, qualifiedStreamSegmentName, requestType, type, EXPECTED_SUCCESS_REPLIES, EXPECTED_FAILING_REPLIES); + } + + /** + * This method handles the reply returned from RawClient.sendRequest given the expected success and failure cases. 
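Note the visibility changes above: connectionPool, sendRequest, and now handleExpectedReplies are protected rather than private, which together form an extension seam letting a subclass issue its own wire commands while reusing the connection and reply-validation plumbing. A hypothetical subclass sketch:

    class CustomSegmentHelper extends SegmentHelper {
        CustomSegmentHelper(ConnectionPool pool, HostControllerStore hosts, ScheduledExecutorService executor) {
            super(pool, hosts, executor);
        }
        // The subclass can call sendRequest(connection, requestId, command) for its own
        // command, then validate the reply via handleExpectedReplies(...) against its
        // own success/failure reply tables instead of the static defaults.
    }
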
+ * + * @param callerRequestId request id issued by the client + * @param reply actual reply received + * @param client RawClient for sending request + * @param qualifiedStreamSegmentName StreamSegmentName + * @param requestType request whose reply needs to be transformed + * @param type WireCommand for this request + * @param expectedSuccessReplies the expected replies for a successful case + * @param expectedFailureReplies the expected replies for a failing case + * @throws ConnectionFailedException in case the reply is unexpected + */ + protected void handleExpectedReplies(long callerRequestId, + Reply reply, + RawClient client, + String qualifiedStreamSegmentName, + Class requestType, + WireCommandType type, + Map, Set>> expectedSuccessReplies, + Map, Set>> expectedFailureReplies) throws ConnectionFailedException { closeConnection(reply, client, callerRequestId); - Set> expectedReplies = EXPECTED_SUCCESS_REPLIES.get(requestType); - Set> expectedFailingReplies = EXPECTED_FAILING_REPLIES.get(requestType); + Set> expectedReplies = expectedSuccessReplies.get(requestType); + Set> expectedFailingReplies = expectedFailureReplies.get(requestType); if (expectedReplies != null && expectedReplies.contains(reply.getClass())) { log.debug(callerRequestId, "{} {} {} {}.", requestType.getSimpleName(), qualifiedStreamSegmentName, reply.getClass().getSimpleName(), reply.getRequestId()); diff --git a/controller/src/main/java/io/pravega/controller/server/SegmentStoreConnectionManager.java b/controller/src/main/java/io/pravega/controller/server/SegmentStoreConnectionManager.java deleted file mode 100644 index a0901873f20..00000000000 --- a/controller/src/main/java/io/pravega/controller/server/SegmentStoreConnectionManager.java +++ /dev/null @@ -1,421 +0,0 @@ -/** - * Copyright Pravega Authors. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License.
- */ -package io.pravega.controller.server; - -import com.google.common.annotations.VisibleForTesting; -import com.google.common.cache.CacheBuilder; -import com.google.common.cache.CacheLoader; -import com.google.common.cache.LoadingCache; -import com.google.common.cache.RemovalListener; -import io.pravega.client.connection.impl.ClientConnection; -import io.pravega.client.connection.impl.ConnectionFactory; -import io.pravega.common.Exceptions; -import io.pravega.common.util.ResourcePool; -import io.pravega.controller.util.Config; -import io.pravega.shared.protocol.netty.ConnectionFailedException; -import io.pravega.shared.protocol.netty.PravegaNodeUri; -import io.pravega.shared.protocol.netty.ReplyProcessor; -import io.pravega.shared.protocol.netty.WireCommand; -import io.pravega.shared.protocol.netty.WireCommands; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicReference; -import java.util.function.BiConsumer; -import java.util.function.Consumer; -import javax.annotation.ParametersAreNonnullByDefault; -import lombok.extern.slf4j.Slf4j; - -/** - * Connection Manager class that maintains a cache of connection pools to connection to different segment stores. - * Users of connection manager class can request for a connection pool and then request connections on the pool and return - * the connections when done. - * This maintains a cache of connection pools to segment stores. Any segmentstore for which no new connection request - * comes for a while, its pool is evicted from the cache which triggers a shutdown on the pool. - * If newer requests are received for the said segmentstore, a new pool gets created. However, any callers using the existing - * pool can continue to do so. The pool in shutdown mode simply drains all available connections. And when it has no references left - * it can be garbage collected. - */ -@Slf4j -class SegmentStoreConnectionManager implements AutoCloseable { - private static final int MAX_CONCURRENT_CONNECTIONS = 500; - private static final int MAX_IDLE_CONNECTIONS = 100; - // cache of connection manager for segment store nodes. - // Pravega Connection Manager maintains a pool of connection for a segment store and returns a connection from - // the pool on the need basis. - private final LoadingCache cache; - - SegmentStoreConnectionManager(final ConnectionFactory clientCF) { - this.cache = CacheBuilder.newBuilder() - .maximumSize(Config.HOST_STORE_CONTAINER_COUNT) - // if a host is not accessed for 5 minutes, remove it from the cache - .expireAfterAccess(5, TimeUnit.MINUTES) - .removalListener((RemovalListener) removalNotification -> { - // Whenever a connection manager is evicted from the cache call shutdown on it. - removalNotification.getValue().shutdown(); - }) - .build(new CacheLoader() { - @Override - @ParametersAreNonnullByDefault - public SegmentStoreConnectionPool load(PravegaNodeUri nodeUri) { - return new SegmentStoreConnectionPool(nodeUri, clientCF); - } - }); - - } - - - CompletableFuture getConnection(PravegaNodeUri uri, ReplyProcessor replyProcessor) { - return cache.getUnchecked(uri).getConnection(replyProcessor); - } - - @Override - public void close() { - cache.invalidateAll(); - cache.cleanUp(); - } - - /** - * This is a connection manager class to manage connection to a given segmentStore node identified by PravegaNodeUri. 
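This hand-rolled pool (and everything below it in this file) is deleted because the Controller now relies on the client's shared connection pool, capped at one connection per Segment Store in ControllerServiceStarter above. The replacement wiring, in sketch form (builder usage mirrors the starter code):

    ClientConfig config = ClientConfig.builder().maxConnectionsPerSegmentStore(1).build();
    ConnectionFactory factory = new SocketConnectionFactoryImpl(config);
    ConnectionPool pool = new ConnectionPoolImpl(config, factory);
    // SegmentHelper then opens a flow per request over the shared pool:
    RawClient connection = new RawClient(nodeUri, pool);   // nodeUri: a PravegaNodeUri
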
- * It uses {@link ResourcePool} to create pool of available connections and specify a maximum number of concurrent connections. - * Users can request for connection from this class and it will opportunistically use existing connections or create new - * connections to a given segment store server and return the connection. - * It ensures that there are only a limited number of concurrent connections created. If more users request for connection - * it would add them to wait queue and as connections become available (when existing connections are returned), - * this class opportunistically tries to reuse the returned connection to fulfil waiting requests. If there are no - * waiting requests, this class tries to maintain an available connection pool of predetermined size. If more number of - * connections than max available size is returned to it, the classes closes those connections to free up resources. - * - * It is important to note that the intent is not to multiplex multiple requests over a single connection concurrently. - * It simply reuses already created connections to send additional commands over it. - * As users finish their processing, they should return the connection back to this class. - * - * The connectionManager can be shutdown as well. However, the shutdown trigger does not prevent callers to attempt - * to create new connections and new connections will be served. Shutdown ensures that it drains all available connections - * and as connections are returned, they are not reused. - */ - static class SegmentStoreConnectionPool extends ResourcePool { - @VisibleForTesting - SegmentStoreConnectionPool(PravegaNodeUri pravegaNodeUri, ConnectionFactory clientCF) { - this(pravegaNodeUri, clientCF, MAX_CONCURRENT_CONNECTIONS, MAX_IDLE_CONNECTIONS); - } - - @VisibleForTesting - SegmentStoreConnectionPool(PravegaNodeUri pravegaNodeUri, ConnectionFactory clientCF, int maxConcurrent, int maxIdle) { - super(() -> { - ReusableReplyProcessor rp = new ReusableReplyProcessor(); - return clientCF.establishConnection(pravegaNodeUri, rp) - .thenApply(connection -> new ConnectionObject(connection, rp)); - }, connectionObj -> connectionObj.connection.close(), maxConcurrent, maxIdle); - } - - CompletableFuture getConnection(ReplyProcessor replyProcessor) { - return getResource() - .thenApply(closeableResource -> { - ConnectionObject connectionObject = closeableResource.getResource(); - connectionObject.reusableReplyProcessor.initialize(replyProcessor); - return new ConnectionWrapper(closeableResource); - }); - } - } - - static class ConnectionWrapper implements AutoCloseable { - private final ResourcePool.CloseableResource resource; - private AtomicBoolean isClosed; - private ConnectionWrapper(ResourcePool.CloseableResource resource) { - this.resource = resource; - this.isClosed = new AtomicBoolean(false); - } - - void failConnection() { - resource.getResource().failConnection(); - } - - void sendAsync(WireCommand request, CompletableFuture resultFuture) { - resource.getResource().sendAsync(request, resultFuture); - } - - // region for testing - @VisibleForTesting - ConnectionObject.ConnectionState getState() { - return resource.getResource().state.get(); - } - - @VisibleForTesting - ClientConnection getConnection() { - return resource.getResource().connection; - } - - @VisibleForTesting - ReplyProcessor getReplyProcessor() { - return resource.getResource().reusableReplyProcessor.replyProcessor.get(); - } - // endregion - - @Override - public void close() { - if (isClosed.compareAndSet(false, 
true)) { - ConnectionObject connectionObject = resource.getResource(); - connectionObject.reusableReplyProcessor.uninitialize(); - if (!connectionObject.state.get().equals(ConnectionObject.ConnectionState.CONNECTED)) { - resource.invalidate(); - } - this.resource.close(); - } - } - } - - private static class ConnectionObject { - private final ClientConnection connection; - private final ReusableReplyProcessor reusableReplyProcessor; - private final AtomicReference state; - - ConnectionObject(ClientConnection connection, ReusableReplyProcessor processor) { - this.connection = connection; - this.reusableReplyProcessor = processor; - state = new AtomicReference<>(ConnectionState.CONNECTED); - } - - private void failConnection() { - state.set(ConnectionState.DISCONNECTED); - } - - private void sendAsync(WireCommand request, CompletableFuture resultFuture) { - try { - connection.send(request); - } catch (ConnectionFailedException cfe) { - Throwable cause = Exceptions.unwrap(cfe); - resultFuture.completeExceptionally(new WireCommandFailedException(cause, request.getType(), - WireCommandFailedException.Reason.ConnectionFailed)); - state.set(ConnectionState.DISCONNECTED); - } - } - - private enum ConnectionState { - CONNECTED, - DISCONNECTED - } - } - - /** - * A reusable reply processor class which can be initialized and uninitialized with new ReplyProcessor. - * This same replyProcessor can be reused with the same connection for handling different replies from servers for - * different calls. - */ - @VisibleForTesting - static class ReusableReplyProcessor implements ReplyProcessor { - private final AtomicReference replyProcessor = new AtomicReference<>(); - - // initialize the reusable reply processor class with a new reply processor - void initialize(ReplyProcessor replyProcessor) { - this.replyProcessor.set(replyProcessor); - } - - // unset reply processor - void uninitialize() { - replyProcessor.set(null); - } - - private void execute(BiConsumer toInvoke, T arg) { - ReplyProcessor rp = replyProcessor.get(); - if (rp != null) { - toInvoke.accept(rp, arg); - } - } - - private void execute(Consumer toInvoke) { - ReplyProcessor rp = replyProcessor.get(); - if (rp != null) { - toInvoke.accept(rp); - } - } - - @Override - public void hello(WireCommands.Hello hello) { - execute(ReplyProcessor::hello, hello); - } - - @Override - public void wrongHost(WireCommands.WrongHost wrongHost) { - execute(ReplyProcessor::wrongHost, wrongHost); - } - - @Override - public void segmentAlreadyExists(WireCommands.SegmentAlreadyExists segmentAlreadyExists) { - execute(ReplyProcessor::segmentAlreadyExists, segmentAlreadyExists); - } - - @Override - public void segmentIsSealed(WireCommands.SegmentIsSealed segmentIsSealed) { - execute(ReplyProcessor::segmentIsSealed, segmentIsSealed); - } - - @Override - public void segmentIsTruncated(WireCommands.SegmentIsTruncated segmentIsTruncated) { - execute(ReplyProcessor::segmentIsTruncated, segmentIsTruncated); - } - - @Override - public void noSuchSegment(WireCommands.NoSuchSegment noSuchSegment) { - execute(ReplyProcessor::noSuchSegment, noSuchSegment); - } - - @Override - public void tableSegmentNotEmpty(WireCommands.TableSegmentNotEmpty tableSegmentNotEmpty) { - execute(ReplyProcessor::tableSegmentNotEmpty, tableSegmentNotEmpty); - } - - @Override - public void invalidEventNumber(WireCommands.InvalidEventNumber invalidEventNumber) { - execute(ReplyProcessor::invalidEventNumber, invalidEventNumber); - } - - @Override - public void appendSetup(WireCommands.AppendSetup 
appendSetup) { - execute(ReplyProcessor::appendSetup, appendSetup); - } - - @Override - public void dataAppended(WireCommands.DataAppended dataAppended) { - execute(ReplyProcessor::dataAppended, dataAppended); - } - - @Override - public void conditionalCheckFailed(WireCommands.ConditionalCheckFailed dataNotAppended) { - execute(ReplyProcessor::conditionalCheckFailed, dataNotAppended); - } - - @Override - public void segmentRead(WireCommands.SegmentRead segmentRead) { - execute(ReplyProcessor::segmentRead, segmentRead); - } - - @Override - public void segmentAttributeUpdated(WireCommands.SegmentAttributeUpdated segmentAttributeUpdated) { - execute(ReplyProcessor::segmentAttributeUpdated, segmentAttributeUpdated); - } - - @Override - public void segmentAttribute(WireCommands.SegmentAttribute segmentAttribute) { - execute(ReplyProcessor::segmentAttribute, segmentAttribute); - } - - @Override - public void streamSegmentInfo(WireCommands.StreamSegmentInfo streamInfo) { - execute(ReplyProcessor::streamSegmentInfo, streamInfo); - } - - @Override - public void segmentCreated(WireCommands.SegmentCreated segmentCreated) { - execute(ReplyProcessor::segmentCreated, segmentCreated); - } - - @Override - public void segmentsMerged(WireCommands.SegmentsMerged segmentsMerged) { - execute(ReplyProcessor::segmentsMerged, segmentsMerged); - } - - @Override - public void segmentSealed(WireCommands.SegmentSealed segmentSealed) { - execute(ReplyProcessor::segmentSealed, segmentSealed); - } - - @Override - public void segmentTruncated(WireCommands.SegmentTruncated segmentTruncated) { - execute(ReplyProcessor::segmentTruncated, segmentTruncated); - } - - @Override - public void segmentDeleted(WireCommands.SegmentDeleted segmentDeleted) { - execute(ReplyProcessor::segmentDeleted, segmentDeleted); - } - - @Override - public void operationUnsupported(WireCommands.OperationUnsupported operationUnsupported) { - execute(ReplyProcessor::operationUnsupported, operationUnsupported); - } - - @Override - public void keepAlive(WireCommands.KeepAlive keepAlive) { - execute(ReplyProcessor::keepAlive, keepAlive); - } - - @Override - public void connectionDropped() { - execute(ReplyProcessor::connectionDropped); - } - - @Override - public void segmentPolicyUpdated(WireCommands.SegmentPolicyUpdated segmentPolicyUpdated) { - execute(ReplyProcessor::segmentPolicyUpdated, segmentPolicyUpdated); - } - - @Override - public void processingFailure(Exception error) { - execute(ReplyProcessor::processingFailure, error); - } - - @Override - public void authTokenCheckFailed(WireCommands.AuthTokenCheckFailed authTokenCheckFailed) { - execute(ReplyProcessor::authTokenCheckFailed, authTokenCheckFailed); - } - - @Override - public void tableEntriesUpdated(WireCommands.TableEntriesUpdated tableEntriesUpdated) { - execute(ReplyProcessor::tableEntriesUpdated, tableEntriesUpdated); - } - - @Override - public void tableKeysRemoved(WireCommands.TableKeysRemoved tableKeysRemoved) { - execute(ReplyProcessor::tableKeysRemoved, tableKeysRemoved); - } - - @Override - public void tableRead(WireCommands.TableRead tableRead) { - execute(ReplyProcessor::tableRead, tableRead); - } - - @Override - public void tableKeyDoesNotExist(WireCommands.TableKeyDoesNotExist tableKeyDoesNotExist) { - execute(ReplyProcessor::tableKeyDoesNotExist, tableKeyDoesNotExist); - } - - @Override - public void tableKeyBadVersion(WireCommands.TableKeyBadVersion tableKeyBadVersion) { - execute(ReplyProcessor::tableKeyBadVersion, tableKeyBadVersion); - } - - @Override - public void 
tableKeysRead(WireCommands.TableKeysRead tableKeysRead) { - execute(ReplyProcessor::tableKeysRead, tableKeysRead); - } - - @Override - public void tableEntriesRead(WireCommands.TableEntriesRead tableEntriesRead) { - execute(ReplyProcessor::tableEntriesRead, tableEntriesRead); - } - - @Override - public void tableEntriesDeltaRead(WireCommands.TableEntriesDeltaRead tableEntriesDeltaRead) { - execute(ReplyProcessor::tableEntriesDeltaRead, tableEntriesDeltaRead); - } - - @Override - public void errorMessage(WireCommands.ErrorMessage errorMessage) { - execute(ReplyProcessor::errorMessage, errorMessage); - } - } -} diff --git a/controller/src/main/java/io/pravega/controller/server/bucket/BucketManager.java b/controller/src/main/java/io/pravega/controller/server/bucket/BucketManager.java index ba18d27df2b..2b5af4304db 100644 --- a/controller/src/main/java/io/pravega/controller/server/bucket/BucketManager.java +++ b/controller/src/main/java/io/pravega/controller/server/bucket/BucketManager.java @@ -83,6 +83,8 @@ protected void doStart() { protected abstract int getBucketCount(); + public abstract boolean isHealthy(); + CompletableFuture tryTakeOwnership(int bucket) { return takeBucketOwnership(bucket, processId, executor) .thenCompose(isOwner -> { diff --git a/controller/src/main/java/io/pravega/controller/server/bucket/InMemoryBucketManager.java b/controller/src/main/java/io/pravega/controller/server/bucket/InMemoryBucketManager.java index f042518eefd..0da959b2f53 100644 --- a/controller/src/main/java/io/pravega/controller/server/bucket/InMemoryBucketManager.java +++ b/controller/src/main/java/io/pravega/controller/server/bucket/InMemoryBucketManager.java @@ -40,6 +40,16 @@ protected int getBucketCount() { return bucketStore.getBucketCount(getServiceType()); } + /** + * Get the health status. + * + * @return true by default. + */ + @Override + public boolean isHealthy() { + return true; + } + @Override void startBucketOwnershipListener() { diff --git a/controller/src/main/java/io/pravega/controller/server/bucket/ZooKeeperBucketManager.java b/controller/src/main/java/io/pravega/controller/server/bucket/ZooKeeperBucketManager.java index 890a25ef762..633d0478b95 100644 --- a/controller/src/main/java/io/pravega/controller/server/bucket/ZooKeeperBucketManager.java +++ b/controller/src/main/java/io/pravega/controller/server/bucket/ZooKeeperBucketManager.java @@ -45,6 +45,16 @@ public class ZooKeeperBucketManager extends BucketManager { this.bucketStore = bucketStore; } + /** + * Get the health status. + * + * @return true if zookeeper is connected. 
+ */ + @Override + public boolean isHealthy() { + return this.bucketStore.isZKConnected(); + } + @Override protected int getBucketCount() { return bucketStore.getBucketCount(getServiceType()); diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessors.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessors.java index 8e3e39ee66d..3a8760d840f 100755 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessors.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessors.java @@ -80,10 +80,12 @@ import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; import java.util.function.Supplier; import java.util.stream.Collectors; import lombok.extern.slf4j.Slf4j; +import lombok.Getter; import org.apache.commons.lang3.tuple.ImmutablePair; import static io.pravega.controller.util.RetryHelper.RETRYABLE_PREDICATE; @@ -121,6 +123,8 @@ public class ControllerEventProcessors extends AbstractIdleService implements Fa private final long rebalanceIntervalMillis; private final AtomicLong truncationInterval; private ScheduledExecutorService rebalanceExecutor; + @Getter + private final AtomicBoolean bootstrapCompleted = new AtomicBoolean(false); public ControllerEventProcessors(final String host, final ControllerEventProcessorConfig config, @@ -139,7 +143,7 @@ public ControllerEventProcessors(final String host, } @VisibleForTesting - ControllerEventProcessors(final String host, + public ControllerEventProcessors(final String host, final ControllerEventProcessorConfig config, final Controller controller, final CheckpointStore checkpointStore, @@ -181,6 +185,39 @@ public ControllerEventProcessors(final String host, this.truncationInterval = new AtomicLong(TRUNCATION_INTERVAL_MILLIS); } + /** + * Get the health status. + * + * @return true if zookeeper is connected. + */ + public boolean isMetadataServiceConnected() { + return checkpointStore.isHealthy(); + } + + /** + * Get bootstrap completed status. + * + * @return true if bootstrapCompleted is set to true. + */ + public boolean isBootstrapCompleted() { + return this.bootstrapCompleted.get(); + } + + /** + * Get the health status. + * + * @return true if zookeeper is connected and bootstrap is completed. 
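The bootstrapCompleted flag added below is what separates "running" from "ready": it is flipped only as the final step of bootstrap(), so isReady() cannot report true before the event processors are actually usable. The pattern, condensed from this patch:

    private final AtomicBoolean bootstrapCompleted = new AtomicBoolean(false);

    // inside bootstrap(): flip the flag last, after reader groups and truncation loops exist
    bootstrapCompleted.set(true);

    // readiness then requires all three signals, as implemented in isReady() below
    boolean ready = isMetadataServiceConnected() && bootstrapCompleted.get() && isRunning();
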
+ */ + @Override + public boolean isReady() { + boolean isMetaConnected = isMetadataServiceConnected(); + boolean isBootstrapComplete = isBootstrapCompleted(); + boolean isSvcRunning = this.isRunning(); + boolean isReady = isMetaConnected && isBootstrapComplete && isSvcRunning; + log.debug("IsReady={} as isMetaConnected={}, isBootstrapComplete={}, isSvcRunning={}", isReady, isMetaConnected, isBootstrapComplete, isSvcRunning); + return isReady; + } + @Override protected void startUp() throws Exception { long traceId = LoggerHelpers.traceEnterWithContext(log, this.objectId, "startUp"); @@ -208,12 +245,7 @@ protected void shutDown() { } } - @Override - public boolean isReady() { - return isRunning(); - } - - @Override + @Override public CompletableFuture sweepFailedProcesses(final Supplier> processes) { List> futures = new ArrayList<>(); @@ -343,6 +375,8 @@ public CompletableFuture bootstrap(final StreamTransactionMetadataTasks st Futures.loop(this::isRunning, () -> Futures.delayedFuture( () -> truncate(config.getKvtStreamName(), config.getKvtReaderGroupName(), streamMetadataTasks), delay, executor), executor); + this.bootstrapCompleted.set(true); + log.info("Completed bootstrapping event processors."); }, executor); } @@ -472,7 +506,7 @@ private void initialize() { .minRebalanceIntervalMillis(rebalanceIntervalMillis) .build(); - log.info("Creating commit event processors"); + log.debug("Creating commit event processors"); Retry.indefinitelyWithExpBackoff(DELAY, MULTIPLIER, MAX_DELAY, e -> log.warn("Error creating commit event processor group", e)) .run(() -> { @@ -501,7 +535,7 @@ private void initialize() { .minRebalanceIntervalMillis(rebalanceIntervalMillis) .build(); - log.info("Creating abort event processors"); + log.debug("Creating abort event processors"); Retry.indefinitelyWithExpBackoff(DELAY, MULTIPLIER, MAX_DELAY, e -> log.warn("Error creating commit event processor group", e)) .run(() -> { @@ -529,7 +563,7 @@ private void initialize() { .minRebalanceIntervalMillis(rebalanceIntervalMillis) .build(); - log.info("Creating stream request event processors"); + log.debug("Creating stream request event processors"); Retry.indefinitelyWithExpBackoff(DELAY, MULTIPLIER, MAX_DELAY, e -> log.warn("Error creating request event processor group", e)) .run(() -> { @@ -557,7 +591,7 @@ private void initialize() { .minRebalanceIntervalMillis(rebalanceIntervalMillis) .build(); - log.info("Creating kvt request event processors"); + log.debug("Creating kvt request event processors"); Retry.indefinitelyWithExpBackoff(DELAY, MULTIPLIER, MAX_DELAY, e -> log.warn("Error creating request event processor group", e)) .run(() -> { @@ -566,14 +600,15 @@ private void initialize() { }); // endregion - log.info("Awaiting start of commit event processors"); + log.info("Awaiting start of event processors..."); commitEventProcessors.awaitRunning(); - log.info("Awaiting start of abort event processors"); + log.info("Commit event processor started."); abortEventProcessors.awaitRunning(); - log.info("Awaiting start of stream request event processors"); + log.info("Abort event processor started."); requestEventProcessors.awaitRunning(); - log.info("Awaiting start of kvt request event processors"); + log.info("Stream request event processor started."); kvtRequestEventProcessors.awaitRunning(); + log.info("KVT request event processor started."); } private void stopEventProcessors() { diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/LocalController.java 
b/controller/src/main/java/io/pravega/controller/server/eventProcessor/LocalController.java index bfc3c0a1d04..ff0a5666582 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/LocalController.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/LocalController.java @@ -184,11 +184,11 @@ public CompletableFuture createStream(String scope, String streamName, return this.controller.createStream(scope, streamName, streamConfig, System.currentTimeMillis(), requestIdGenerator.nextLong()).thenApply(x -> { switch (x.getStatus()) { case FAILURE: - throw new ControllerFailureException("Failed to create stream: " + streamConfig); + throw new ControllerFailureException(String.format("Failed to create stream: %s/%s with config: %s", scope, streamName, streamConfig)); case INVALID_STREAM_NAME: - throw new IllegalArgumentException("Illegal stream name: " + streamName + " config: " + streamConfig); + throw new IllegalArgumentException(String.format("Failed to create stream. Illegal Stream name: %s", streamName)); case SCOPE_NOT_FOUND: - throw new IllegalArgumentException("Scope does not exist: " + scope + " config: " + streamConfig); + throw new IllegalArgumentException(String.format("Failed to create stream: %s as Scope %s does not exist.", streamName, scope)); case STREAM_EXISTS: return false; case SUCCESS: @@ -203,18 +203,20 @@ public CompletableFuture createStream(String scope, String streamName, @Override public CompletableFuture updateStream(String scope, String streamName, final StreamConfiguration streamConfig) { return this.controller.updateStream(scope, streamName, streamConfig, requestIdGenerator.nextLong()).thenApply(x -> { + String scopedStreamName = NameUtils.getScopedStreamName(scope, streamName); switch (x.getStatus()) { case FAILURE: - throw new ControllerFailureException("Failed to update stream: " + streamConfig); + throw new ControllerFailureException(String.format("Failed to update Stream: %s, config: %s", scopedStreamName, streamConfig)); case SCOPE_NOT_FOUND: - throw new IllegalArgumentException("Scope does not exist: " + scope + " config: " + streamConfig); + throw new IllegalArgumentException(String.format("Failed to update Stream %s as Scope %s does not exist.", scope, streamName)); case STREAM_NOT_FOUND: - throw new IllegalArgumentException("Stream does not exist: " + streamName + " in scope: " + scope + " config: " + streamConfig); + throw new IllegalArgumentException(String.format("Failed to update Stream: %s as Stream does not exist under Scope: %s", streamName, scope)); + case STREAM_SEALED: + throw new UnsupportedOperationException(String.format("Failed to update Stream: %s as Stream is sealed", streamName)); case SUCCESS: return true; default: - throw new ControllerFailureException("Unknown return status updating stream " + streamConfig - + " " + x.getStatus()); + throw new ControllerFailureException(String.format("Failed to update Stream: %s. 
Unknown return status updating stream %s", scopedStreamName, x.getStatus())); } }); } @@ -227,16 +229,15 @@ public CompletableFuture createReaderGroup(String scopeName, final String scopedRGName = NameUtils.getScopedReaderGroupName(scopeName, rgName); switch (x.getStatus()) { case FAILURE: - throw new ControllerFailureException("Failed to create ReaderGroup: " + scopedRGName); + throw new ControllerFailureException(String.format("Failed to create Reader Group: %s", scopedRGName)); case INVALID_RG_NAME: - throw new IllegalArgumentException("Illegal ReaderGroup name: " + rgName); + throw new IllegalArgumentException(String.format("Failed to create Reader Group: %s due to Illegal Reader Group name: %s", scopedRGName, rgName)); case SCOPE_NOT_FOUND: - throw new IllegalArgumentException("Scope does not exist: " + scopeName + " config: " + scopeName); + throw new IllegalArgumentException(String.format("Failed to create Reader Group: %s as Scope: %s does not exist.", scopedRGName, scopeName)); case SUCCESS: return ModelHelper.encode(x.getConfig()); default: - throw new ControllerFailureException("Unknown return status creating ReaderGroup " + scopedRGName - + " " + x); + throw new ControllerFailureException(String.format("Unknown return status creating ReaderGroup %s: Status: %s", scopedRGName, x)); } }); } @@ -247,16 +248,15 @@ public CompletableFuture updateReaderGroup(String scopeName, String rgName final String scopedRGName = NameUtils.getScopedReaderGroupName(scopeName, rgName); switch (x.getStatus()) { case FAILURE: - throw new ControllerFailureException("Failed to create ReaderGroup: " + scopedRGName); + throw new ControllerFailureException(String.format("Failed to update Reader Group: %s", scopedRGName)); case INVALID_CONFIG: - throw new ReaderGroupConfigRejectedException("Invalid Reader Group Config: " + config.toString()); + throw new ReaderGroupConfigRejectedException(String.format("Failed to update Reader Group: %s, Invalid Reader Group Config: %s.", scopedRGName, config.toString())); case RG_NOT_FOUND: - throw new ReaderGroupNotFoundException("Scope does not exist: " + scopeName + " config: " + scopedRGName); + throw new ReaderGroupNotFoundException(String.format("Failed to update Reader Group as Reader Group: %s does not exist.", scopedRGName)); case SUCCESS: return x.getGeneration(); default: - throw new ControllerFailureException("Unknown return status creating ReaderGroup " + scopedRGName - + " " + x.getStatus()); + throw new ControllerFailureException(String.format("Failed to update Reader Group: %s, due to unknown return status %s.", scopedRGName, x.getStatus())); } }); } @@ -267,14 +267,13 @@ public CompletableFuture getReaderGroupConfig(String scopeNam final String scopedRGName = NameUtils.getScopedReaderGroupName(scopeName, rgName); switch (x.getStatus()) { case FAILURE: - throw new ControllerFailureException("Failed to get Config for ReaderGroup: " + scopedRGName); + throw new ControllerFailureException(String.format("Failed to get configuration for Reader Group: %s.", scopedRGName)); case RG_NOT_FOUND: - throw new ReaderGroupNotFoundException("Could not find Reader Group: " + scopedRGName); + throw new ReaderGroupNotFoundException(String.format("Failed to get configuration for Reader Group %s, as Reader Group could not be found.", scopedRGName)); case SUCCESS: return ModelHelper.encode(x.getConfig()); default: - throw new ControllerFailureException("Unknown return status getting config for ReaderGroup " + scopedRGName - + " " + x.getStatus()); + throw new 
ControllerFailureException(String.format("Failed to get configuration for Reader Group: %s, due to unknown return status: %s.", scopedRGName, x.getStatus())); } }); } @@ -286,14 +285,13 @@ public CompletableFuture deleteReaderGroup(final String scopeName, fina final String scopedRGName = NameUtils.getScopedReaderGroupName(scopeName, rgName); switch (x.getStatus()) { case FAILURE: - throw new ControllerFailureException("Failed to create ReaderGroup: " + scopedRGName); + throw new ControllerFailureException(String.format("Failed to delete Reader Group: %s", scopedRGName)); case RG_NOT_FOUND: - throw new ReaderGroupNotFoundException("Reader group not found: " + scopedRGName); + throw new ReaderGroupNotFoundException(String.format("Failed to delete Reader Group as Reader Group %s could not be found.", scopedRGName)); case SUCCESS: return true; default: - throw new ControllerFailureException("Unknown return status creating ReaderGroup " + scopedRGName - + " " + x.getStatus()); + throw new ControllerFailureException(String.format("Failed to delete Reader Group: %s, due to unknown return status: %s", scopedRGName, x.getStatus())); } }); } @@ -301,17 +299,17 @@ public CompletableFuture deleteReaderGroup(final String scopeName, fina @Override public CompletableFuture> listSubscribers(final String scope, final String streamName) { return this.controller.listSubscribers(scope, streamName, requestIdGenerator.nextLong()).thenApply(x -> { + String scopedStreamName = NameUtils.getScopedStreamName(scope, streamName); switch (x.getStatus()) { case FAILURE: - throw new ControllerFailureException("Failed to listSubscribers for stream: " + scope + "/" + streamName); + throw new ControllerFailureException(String.format("Failed to listSubscribers for Stream: %s", scopedStreamName)); case STREAM_NOT_FOUND: - throw new IllegalArgumentException("Stream does not exist: " + streamName); + throw new IllegalArgumentException(String.format("Failed to listSubscribers for Stream: %s, as Stream does not exist.", scopedStreamName)); case SUCCESS: return new ArrayList<>(x.getSubscribersList()); default: throw new ControllerFailureException( - String.format("Unknown return status for listSubscribers on stream %s/%s %s", - scope, streamName, x.getStatus())); + String.format("Failed to listSubscribers for Stream: %s due to unknown return status %s", scopedStreamName, x.getStatus())); } }); } @@ -358,6 +356,8 @@ public CompletableFuture truncateStream(final String scope, final Strin throw new IllegalArgumentException("Scope does not exist: " + scope); case STREAM_NOT_FOUND: throw new IllegalArgumentException("Stream does not exist: " + stream + " in scope: " + scope); + case STREAM_SEALED: + throw new UnsupportedOperationException("Failed to update Stream: " + stream + " as stream is sealed"); case SUCCESS: return true; default: diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/AbstractRequestProcessor.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/AbstractRequestProcessor.java index 7ffb2ffed6a..1cc2a37b5ef 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/AbstractRequestProcessor.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/AbstractRequestProcessor.java @@ -21,7 +21,6 @@ import io.pravega.common.tracing.RequestTag; import io.pravega.common.util.Retry; import io.pravega.controller.eventProcessor.impl.SerializedRequestHandler; -import 
io.pravega.controller.store.stream.EpochTransitionOperationExceptions; import io.pravega.controller.store.stream.OperationContext; import io.pravega.controller.store.stream.StoreException; import io.pravega.controller.store.stream.StreamMetadataStore; @@ -67,18 +66,8 @@ @Slf4j public abstract class AbstractRequestProcessor extends SerializedRequestHandler implements StreamRequestProcessor { - protected static final Predicate ILLEGAL_STATE_PREDICATE = e -> Exceptions.unwrap(e) instanceof StoreException.IllegalStateException; - protected static final Predicate DATA_NOT_FOUND_PREDICATE = e -> Exceptions.unwrap(e) instanceof StoreException.DataNotFoundException; - protected static final Predicate SEGMENT_NOT_FOUND_PREDICATE = e -> Exceptions.unwrap(e) instanceof StoreException.DataContainerNotFoundException; - protected static final Predicate NON_RETRYABLE_EXCEPTIONS = ILLEGAL_STATE_PREDICATE.or(DATA_NOT_FOUND_PREDICATE) - .or(SEGMENT_NOT_FOUND_PREDICATE) - .or(e -> Exceptions.unwrap(e) instanceof IllegalArgumentException) - .or(e -> Exceptions.unwrap(e) instanceof NullPointerException); - protected static final Predicate EVENT_RETRY_PREDICATE = NON_RETRYABLE_EXCEPTIONS.negate(); - protected static final Predicate SCALE_EVENT_RETRY_PREDICATE = NON_RETRYABLE_EXCEPTIONS.or(e -> e instanceof EpochTransitionOperationExceptions.ConditionInvalidException) - .or(e -> e instanceof EpochTransitionOperationExceptions.InputInvalidException) - .or(e -> e instanceof EpochTransitionOperationExceptions.PreConditionFailureException) - .negate(); + protected static final Predicate OPERATION_NOT_ALLOWED_PREDICATE = e -> Exceptions.unwrap(e) + instanceof StoreException.OperationNotAllowedException; protected final StreamMetadataStore streamMetadataStore; @@ -178,7 +167,7 @@ protected CompletableFuture withCompletion(Str stream, getProcessorName(), context, executor), null, "Exception while trying to create waiting request. Logged and ignored.") .thenCompose(ignore -> retryIndefinitelyThenComplete( - () -> task.writeBack(event), resultFuture, null)); + () -> task.writeBack(event), resultFuture, ex)); } else { // Processing was done for this event, whether it succeeded or failed, we should remove // the waiting request if it matches the current processor. @@ -199,11 +188,8 @@ stream, getProcessorName(), context, executor), + event + " so that waiting processor" + waitingRequestProcessor + " can work. 
")); } }).exceptionally(e -> { - if (writeBackPredicate.test(e)) { - retryIndefinitelyThenComplete( - () -> task.writeBack(event), resultFuture, null); - } - return null; + resultFuture.completeExceptionally(e); + return null; }); return resultFuture; diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CommitRequestHandler.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CommitRequestHandler.java index 51734863198..f7ae1276479 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CommitRequestHandler.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CommitRequestHandler.java @@ -17,6 +17,7 @@ import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Preconditions; +import com.google.common.collect.ImmutableMap; import io.pravega.client.stream.ScalingPolicy; import io.pravega.common.Exceptions; import io.pravega.common.Timer; @@ -29,11 +30,13 @@ import io.pravega.controller.store.stream.StreamMetadataStore; import io.pravega.controller.store.VersionedMetadata; import io.pravega.controller.store.stream.State; +import io.pravega.controller.store.stream.TxnWriterMark; import io.pravega.controller.store.stream.records.CommittingTransactionsRecord; import io.pravega.controller.store.stream.records.EpochRecord; import io.pravega.controller.task.Stream.StreamMetadataTasks; import io.pravega.controller.task.Stream.StreamTransactionMetadataTasks; import io.pravega.shared.controller.event.CommitEvent; +import org.apache.curator.shaded.com.google.common.base.Strings; import org.slf4j.LoggerFactory; import static io.pravega.shared.NameUtils.computeSegmentId; @@ -41,6 +44,8 @@ import java.util.ArrayList; import java.util.List; import java.util.UUID; +import java.util.Map; +import java.util.HashMap; import java.util.concurrent.BlockingQueue; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ScheduledExecutorService; @@ -51,7 +56,7 @@ * Request handler for processing commit events in commit-stream. 
*/ public class CommitRequestHandler extends AbstractRequestProcessor implements StreamTask { - private static final TagLogger log = new TagLogger(LoggerFactory.getLogger(AutoScaleTask.class)); + private static final TagLogger log = new TagLogger(LoggerFactory.getLogger(CommitRequestHandler.class)); private static final int MAX_TRANSACTION_COMMIT_BATCH_SIZE = 100; @@ -87,7 +92,7 @@ public CommitRequestHandler(final StreamMetadataStore streamMetadataStore, @Override public CompletableFuture processCommitTxnRequest(CommitEvent event) { - return withCompletion(this, event, event.getScope(), event.getStream(), EVENT_RETRY_PREDICATE); + return withCompletion(this, event, event.getScope(), event.getStream(), OPERATION_NOT_ALLOWED_PREDICATE); } /** @@ -159,24 +164,27 @@ private CompletableFuture tryCommitTransactions(final String scope, final String stream, final OperationContext context) { Timer timer = new Timer(); + Map writerMarks = new HashMap<>(); + Map txnIdToWriterId = new HashMap<>(); + return streamMetadataStore.getVersionedState(scope, stream, context, executor) .thenComposeAsync(state -> { final AtomicReference> stateRecord = new AtomicReference<>(state); - CompletableFuture> commitFuture = streamMetadataStore.startCommitTransactions(scope, stream, MAX_TRANSACTION_COMMIT_BATCH_SIZE, context, executor) - .thenComposeAsync(versionedMetadata -> { - if (versionedMetadata.getObject().equals(CommittingTransactionsRecord.EMPTY)) { + .thenComposeAsync(txnsTuple -> { + VersionedMetadata committingTxnsRecord = txnsTuple.getKey(); + if (committingTxnsRecord.getObject().equals(CommittingTransactionsRecord.EMPTY)) { // there are no transactions found to commit. // reset state conditionally in case we were left with stale committing state from // a previous execution // that died just before updating the state back to ACTIVE but after having // completed all the work. - return CompletableFuture.completedFuture(versionedMetadata); + return CompletableFuture.completedFuture(committingTxnsRecord); } else { - int txnEpoch = versionedMetadata.getObject().getEpoch(); - List txnList = versionedMetadata.getObject().getTransactionsToCommit(); + int txnEpoch = committingTxnsRecord.getObject().getEpoch(); + List txnList = committingTxnsRecord.getObject().getTransactionsToCommit(); log.info(context.getRequestId(), "Committing {} transactions on epoch {} on stream {}/{}", @@ -198,6 +206,16 @@ private CompletableFuture tryCommitTransactions(final String scope, .thenAccept(stateRecord::set); } + txnsTuple.getValue().forEach(txn -> { + if (!Strings.isNullOrEmpty(txn.getWriterId())) { + txnIdToWriterId.put(txn.getId(), txn.getWriterId()); + if (!writerMarks.containsKey(txn.getWriterId()) + || writerMarks.get(txn.getWriterId()).getTimestamp() < txn.getCommitTime()) { + writerMarks.put(txn.getWriterId(), + new TxnWriterMark(txn.getCommitTime(), ImmutableMap.of(), txn.getId())); + } + } + }); // Note: since we have set the state to COMMITTING_TXN (or it was already sealing), // the active epoch that we fetch now cannot change until we perform rolling txn. 
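The txnIdToWriterId and writerMarks maps threaded through the commit path implement a two-phase writer-mark protocol for watermarking. Phase one, sketched below, records the newest commit time per writer with empty placeholder offsets; phase two (in commitTransactions) replaces the placeholders once each segment-store merge reports the committed offsets, and completeCommitTransactions then persists the result:

    // txn is one of the records returned by startCommitTransactions (sketch).
    if (!Strings.isNullOrEmpty(txn.getWriterId())) {
        TxnWriterMark prev = writerMarks.get(txn.getWriterId());
        if (prev == null || prev.getTimestamp() < txn.getCommitTime()) {
            writerMarks.put(txn.getWriterId(),
                    new TxnWriterMark(txn.getCommitTime(), ImmutableMap.of(), txn.getId()));
        }
    }
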
// TxnCommittingRecord ensures no other rollingTxn can run concurrently @@ -211,11 +229,11 @@ private CompletableFuture tryCommitTransactions(final String scope, // we can commit transactions immediately return commitTransactions(scope, stream, new ArrayList<>(activeEpochRecord.getSegmentIds()), txnList, - context, timer) - .thenApply(x -> versionedMetadata); + context, txnIdToWriterId, writerMarks) + .thenApply(txnOffsets -> committingTxnsRecord); } else { return rollTransactions(scope, stream, txnEpochRecord, activeEpochRecord, - versionedMetadata, context, timer); + committingTxnsRecord, context, txnIdToWriterId, writerMarks); } })); } @@ -224,16 +242,19 @@ private CompletableFuture tryCommitTransactions(final String scope, // once all commits are done, reset the committing txn record. // reset state to ACTIVE if it was COMMITTING_TXN return commitFuture - .thenCompose(versionedMetadata -> streamMetadataStore.completeCommitTransactions( - scope, stream, versionedMetadata, context, executor) + .thenCompose(committingTxnsRecord -> streamMetadataStore.completeCommitTransactions( + scope, stream, committingTxnsRecord, context, executor, writerMarks) .thenCompose(v -> resetStateConditionally(scope, stream, stateRecord.get(), context)) - .thenApply(v -> versionedMetadata.getObject().getEpoch())); + .thenRun(() -> TransactionMetrics.getInstance().commitTransaction(scope, stream, timer.getElapsed())) + .thenApply(v -> committingTxnsRecord.getObject().getEpoch())); }, executor); } private CompletableFuture> rollTransactions( - String scope, String stream, EpochRecord txnEpoch, EpochRecord activeEpoch, - VersionedMetadata existing, OperationContext context, Timer timer) { + String scope, String stream, EpochRecord txnEpoch, EpochRecord activeEpoch, + VersionedMetadata existing, OperationContext context, + Map txnIdToWriterId, + Map writerMarks) { CompletableFuture> future = CompletableFuture.completedFuture(existing); if (!existing.getObject().isRollingTxnRecord()) { future = future.thenCompose( @@ -245,7 +266,7 @@ private CompletableFuture> rollT if (activeEpoch.getEpoch() > record.getObject().getCurrentEpoch()) { return CompletableFuture.completedFuture(record); } else { - return runRollingTxn(scope, stream, txnEpoch, activeEpoch, record, context, timer) + return runRollingTxn(scope, stream, txnEpoch, activeEpoch, record, context, txnIdToWriterId, writerMarks) .thenApply(v -> record); } }); @@ -254,7 +275,8 @@ private CompletableFuture> rollT private CompletableFuture runRollingTxn(String scope, String stream, EpochRecord txnEpoch, EpochRecord activeEpoch, VersionedMetadata existing, - OperationContext context, Timer timer) { + OperationContext context, Map txnIdToWriterId, + Map writerMarks) { String delegationToken = streamMetadataTasks.retrieveDelegationToken(); long timestamp = System.currentTimeMillis(); @@ -269,7 +291,7 @@ private CompletableFuture runRollingTxn(String scope, String stream, Epoch newActiveEpoch)) .collect(Collectors.toList()); List transactionsToCommit = existing.getObject().getTransactionsToCommit(); - return copyTxnEpochSegmentsAndCommitTxns(scope, stream, transactionsToCommit, txnEpochDuplicate, context, timer) + return copyTxnEpochSegmentsAndCommitTxns(scope, stream, transactionsToCommit, txnEpochDuplicate, context, txnIdToWriterId, writerMarks) .thenCompose(v -> streamMetadataTasks.notifyNewSegments(scope, stream, activeEpochDuplicate, context, delegationToken, context.getRequestId())) .thenCompose(v -> streamMetadataTasks.getSealedSegmentsSize(scope, stream, 
txnEpochDuplicate, @@ -300,8 +322,8 @@ private CompletableFuture<Void> runRollingTxn(String scope, String stream, Epoch * those duplicate segments. */ private CompletableFuture<Void> copyTxnEpochSegmentsAndCommitTxns(String scope, String stream, List<UUID> transactionsToCommit, - List<Long> segmentIds, OperationContext context, - Timer timer) { + List<Long> segmentIds, OperationContext context, Map<UUID, String> txnIdToWriterId, + Map<String, TxnWriterMark> writerMarks) { // 1. create duplicate segments // 2. merge transactions in those segments // 3. seal txn epoch segments @@ -309,8 +331,9 @@ private CompletableFuture<Void> copyTxnEpochSegmentsAndCommitTxns(String scope, CompletableFuture<Void> createSegmentsFuture = Futures.allOf(segmentIds.stream().map(segment -> { // Use fixed scaling policy for these segments as they are created, merged into and sealed and are not // supposed to auto scale. - return streamMetadataTasks.notifyNewSegment(scope, stream, segment, ScalingPolicy.fixed(1), delegationToken, - context.getRequestId()); + return streamMetadataStore.getConfiguration(scope, stream, context, executor).thenCompose(config -> + streamMetadataTasks.notifyNewSegment(scope, stream, segment, ScalingPolicy.fixed(1), delegationToken, + context.getRequestId(), config.getRolloverSizeBytes())); }).collect(Collectors.toList())); return createSegmentsFuture @@ -319,9 +342,9 @@ private CompletableFuture<Void> copyTxnEpochSegmentsAndCommitTxns(String scope, "Rolling transaction, successfully created duplicate txn epoch {} for stream {}/{}", segmentIds, scope, stream); // now commit transactions into these newly created segments - return commitTransactions(scope, stream, segmentIds, transactionsToCommit, context, timer); + return commitTransactions(scope, stream, segmentIds, transactionsToCommit, context, txnIdToWriterId, writerMarks); }) - .thenCompose(v -> streamMetadataTasks.notifySealedSegments(scope, stream, segmentIds, delegationToken, + .thenAccept(v -> streamMetadataTasks.notifySealedSegments(scope, stream, segmentIds, delegationToken, context.getRequestId())); } @@ -330,30 +353,32 @@ private CompletableFuture<Void> copyTxnEpochSegmentsAndCommitTxns(String scope, * At the end of this method's execution, all transactions in the list would have committed into given list of segments. */ private CompletableFuture<Void> commitTransactions(String scope, String stream, List<Long> segments, - List<UUID> transactionsToCommit, OperationContext context, Timer timer) { + List<UUID> transactionsToCommit, OperationContext context, + Map<UUID, String> txnIdToWriterId, + Map<String, TxnWriterMark> writerMarks) { // Chain all transaction commit futures one after the other. This will ensure that order of commit // is honoured and is based on the order in the list. + boolean noteTime = writerMarks.size() > 0; CompletableFuture<Void> future = CompletableFuture.completedFuture(null); for (UUID txnId : transactionsToCommit) { log.info(context.getRequestId(), "Committing transaction {} on stream {}/{}", txnId, scope, stream); // commit transaction in segment store future = future - // Note, we can use the same segments and transaction id as only - // primary id is taken for creation of txn-segment name and secondary part is erased and replaced with - // transaction's epoch. - // And we are creating duplicates of txn epoch keeping the primary same. - // After committing transactions, we collect the current sizes of segments and update the offset - // at which the transaction was committed into ActiveTxnRecord in an idempotent fashion.
- // Note: if it's a rerun, transaction commit offsets may have been updated already in previous iteration - // so this will not update/modify it. .thenCompose(v -> streamMetadataTasks.notifyTxnCommit(scope, stream, segments, txnId, context.getRequestId())) - .thenCompose(map -> streamMetadataStore.recordCommitOffsets(scope, stream, txnId, map, context, executor)) - .thenRun(() -> TransactionMetrics.getInstance().commitTransaction(scope, stream, timer.getElapsed())); + .thenAccept(txnOffsets -> { + String writerId = txnIdToWriterId.get(txnId); + if (!Strings.isNullOrEmpty(writerId) && writerMarks.get(writerId).getTransactionId().equals(txnId)) { + TxnWriterMark mark = writerMarks.get(writerId); + writerMarks.put(writerId, new TxnWriterMark(mark.getTimestamp(), txnOffsets, mark.getTransactionId())); + } + } + ); } - - return future - .thenCompose(v -> bucketStore.addStreamToBucketStore(BucketStore.ServiceType.WatermarkingService, scope, - stream, executor)); + return future.thenAcceptAsync(x -> { + if (noteTime) { + bucketStore.addStreamToBucketStore(BucketStore.ServiceType.WatermarkingService, scope, stream, executor); + } + }); } /** @@ -387,4 +412,5 @@ public CompletableFuture<Boolean> hasTaskStarted(CommitEvent event) { return streamMetadataStore.getState(event.getScope(), event.getStream(), true, null, executor) .thenApply(state -> state.equals(State.COMMITTING_TXN) || state.equals(State.SEALING)); } + } diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CreateReaderGroupTask.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CreateReaderGroupTask.java index f1383a85d77..33e576ed517 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CreateReaderGroupTask.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/CreateReaderGroupTask.java @@ -91,10 +91,10 @@ private ReaderGroupConfig getConfigFromEvent(CreateReaderGroupEvent request) { .groupRefreshTimeMillis(request.getGroupRefreshTimeMillis()) .automaticCheckpointIntervalMillis(request.getAutomaticCheckpointIntervalMillis()) .maxOutstandingCheckpointRequest(request.getMaxOutstandingCheckpointRequest()) - .retentionType(ReaderGroupConfig.StreamDataRetention.values()[request.getRetentionTypeOrdinal()]) .startingStreamCuts(startStreamCut) - .endingStreamCuts(endStreamCut).build(); + .endingStreamCuts(endStreamCut) + .build(); return ReaderGroupConfig.cloneConfig(conf, request.getReaderGroupId(), request.getGeneration()); } diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestHandler.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestHandler.java index 803eeea26c8..0105bd6be39 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestHandler.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestHandler.java @@ -16,6 +16,7 @@ package io.pravega.controller.server.eventProcessor.requesthandlers; import io.pravega.common.tracing.TagLogger; +import io.pravega.controller.store.stream.EpochTransitionOperationExceptions; import io.pravega.controller.store.stream.StreamMetadataStore; import io.pravega.shared.controller.event.AutoScaleEvent; import io.pravega.shared.controller.event.ControllerEvent; @@ -31,6 +32,7 @@ import org.slf4j.LoggerFactory; import 
java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionException; import java.util.concurrent.ScheduledExecutorService; public class StreamRequestHandler extends AbstractRequestProcessor<ControllerEvent> { @@ -79,7 +81,7 @@ public CompletableFuture<Void> processScaleOpRequest(ScaleOpEvent scaleOpEvent) log.info(scaleOpEvent.getRequestId(), "Processing scale request for stream {}/{}", scaleOpEvent.getScope(), scaleOpEvent.getStream()); return withCompletion(scaleOperationTask, scaleOpEvent, scaleOpEvent.getScope(), scaleOpEvent.getStream(), - SCALE_EVENT_RETRY_PREDICATE) + OPERATION_NOT_ALLOWED_PREDICATE.or(e -> e instanceof EpochTransitionOperationExceptions.ConflictException)) .thenAccept(v -> { log.info(scaleOpEvent.getRequestId(), "Processing scale request for stream {}/{} complete", scaleOpEvent.getScope(), scaleOpEvent.getStream()); @@ -91,7 +93,7 @@ public CompletableFuture<Void> processUpdateStream(UpdateStreamEvent updateStrea log.info(updateStreamEvent.getRequestId(), "Processing update request for stream {}/{}", updateStreamEvent.getScope(), updateStreamEvent.getStream()); return withCompletion(updateStreamTask, updateStreamEvent, updateStreamEvent.getScope(), updateStreamEvent.getStream(), - EVENT_RETRY_PREDICATE) + OPERATION_NOT_ALLOWED_PREDICATE) .thenAccept(v -> { log.info(updateStreamEvent.getRequestId(), "Processing update request for stream {}/{} complete", updateStreamEvent.getScope(), updateStreamEvent.getStream()); @@ -103,7 +105,7 @@ public CompletableFuture<Void> processTruncateStream(TruncateStreamEvent truncat log.info(truncateStreamEvent.getRequestId(), "Processing truncate request for stream {}/{}", truncateStreamEvent.getScope(), truncateStreamEvent.getStream()); return withCompletion(truncateStreamTask, truncateStreamEvent, truncateStreamEvent.getScope(), truncateStreamEvent.getStream(), - EVENT_RETRY_PREDICATE) + OPERATION_NOT_ALLOWED_PREDICATE) .thenAccept(v -> { log.info(truncateStreamEvent.getRequestId(), "Processing truncate request for stream {}/{} complete", truncateStreamEvent.getScope(), truncateStreamEvent.getStream()); @@ -115,7 +117,7 @@ public CompletableFuture<Void> processSealStream(SealStreamEvent sealStreamEvent log.info(sealStreamEvent.getRequestId(), "Processing seal request for stream {}/{}", sealStreamEvent.getScope(), sealStreamEvent.getStream()); return withCompletion(sealStreamTask, sealStreamEvent, sealStreamEvent.getScope(), sealStreamEvent.getStream(), - EVENT_RETRY_PREDICATE) + OPERATION_NOT_ALLOWED_PREDICATE) .thenAccept(v -> { log.info(sealStreamEvent.getRequestId(), "Processing seal request for stream {}/{} complete", sealStreamEvent.getScope(), sealStreamEvent.getStream()); @@ -127,7 +129,7 @@ public CompletableFuture<Void> processDeleteStream(DeleteStreamEvent deleteStrea log.info(deleteStreamEvent.getRequestId(), "Processing delete request for stream {}/{}", deleteStreamEvent.getScope(), deleteStreamEvent.getStream()); return withCompletion(deleteStreamTask, deleteStreamEvent, deleteStreamEvent.getScope(), deleteStreamEvent.getStream(), - EVENT_RETRY_PREDICATE) + OPERATION_NOT_ALLOWED_PREDICATE) .thenAccept(v -> { log.info(deleteStreamEvent.getRequestId(), "Processing delete request for stream {}/{} complete", deleteStreamEvent.getScope(), deleteStreamEvent.getStream()); @@ -138,20 +140,43 @@ public CompletableFuture<Void> processDeleteStrea public CompletableFuture<Void> processCreateReaderGroup(CreateReaderGroupEvent createRGEvent) { log.info(createRGEvent.getRequestId(), "Processing create request for ReaderGroup {}/{}",
createRGEvent.getScope(), createRGEvent.getRgName()); - return createRGTask.execute(createRGEvent); + return createRGTask.execute(createRGEvent).thenAccept(v -> { + log.info(createRGEvent.getRequestId(), "Processing of create event for Reader Group {}/{} completed successfully.", + createRGEvent.getScope(), createRGEvent.getRgName()); + }).exceptionally(ex -> { + log.error(createRGEvent.getRequestId(), String.format("Error processing create event for Reader Group %s/%s. Unexpected exception.", + createRGEvent.getScope(), createRGEvent.getRgName()), ex); + throw new CompletionException(ex); + }); } @Override public CompletableFuture<Void> processDeleteReaderGroup(DeleteReaderGroupEvent deleteRGEvent) { log.info(deleteRGEvent.getRequestId(), "Processing delete request for ReaderGroup {}/{}", deleteRGEvent.getScope(), deleteRGEvent.getRgName()); - return deleteRGTask.execute(deleteRGEvent); + return deleteRGTask.execute(deleteRGEvent) + .thenAccept(v -> log.info(deleteRGEvent.getRequestId(), "Processing of delete event for Reader Group {}/{} completed successfully.", + deleteRGEvent.getScope(), deleteRGEvent.getRgName())) + .exceptionally(ex -> { + log.error(deleteRGEvent.getRequestId(), String.format("Error processing delete event for Reader Group %s/%s. Unexpected exception.", + deleteRGEvent.getScope(), deleteRGEvent.getRgName()), ex); + throw new CompletionException(ex); + }); } @Override public CompletableFuture<Void> processUpdateReaderGroup(UpdateReaderGroupEvent updateRGEvent) { log.info(updateRGEvent.getRequestId(), "Processing update request for ReaderGroup {}/{}", updateRGEvent.getScope(), updateRGEvent.getRgName()); - return updateRGTask.execute(updateRGEvent); + return updateRGTask.execute(updateRGEvent) + .thenAccept(v -> { + log.info(updateRGEvent.getRequestId(), "Processing of update event for Reader Group {}/{} completed successfully.", + updateRGEvent.getScope(), updateRGEvent.getRgName()); + }) + .exceptionally(ex -> { + log.error(updateRGEvent.getRequestId(), String.format("Error processing update event for Reader Group %s/%s. 
Unexpected exception.", + updateRGEvent.getScope(), updateRGEvent.getRgName()), ex); + throw new CompletionException(ex); + }); } } diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/TruncateStreamTask.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/TruncateStreamTask.java index 91f6306f26a..ce0b3db58ec 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/TruncateStreamTask.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/TruncateStreamTask.java @@ -24,6 +24,7 @@ import io.pravega.controller.store.stream.State; import io.pravega.controller.store.stream.records.StreamTruncationRecord; import io.pravega.controller.task.Stream.StreamMetadataTasks; +import io.pravega.shared.NameUtils; import io.pravega.shared.controller.event.TruncateStreamEvent; import io.pravega.shared.metrics.DynamicLogger; import io.pravega.shared.metrics.MetricsProvider; @@ -69,19 +70,31 @@ public CompletableFuture execute(final TruncateStreamEvent request) { final OperationContext context = streamMetadataStore.createStreamContext(scope, stream, requestId); return streamMetadataStore.getVersionedState(scope, stream, context, executor) - .thenCompose(versionedState -> streamMetadataStore.getTruncationRecord(scope, stream, context, executor) - .thenCompose(versionedMetadata -> { - if (!versionedMetadata.getObject().isUpdating()) { - if (versionedState.getObject().equals(State.TRUNCATING)) { - return Futures.toVoid(streamMetadataStore.updateVersionedState(scope, stream, State.ACTIVE, - versionedState, context, executor)); + .thenCompose(versionedState -> { + if (versionedState.getObject().equals(State.SEALED)) { + // truncation should not be allowed since the stream is in SEALED state + // hence, we need to complete the truncation by updating the metadata + // and then throw an exception + return streamMetadataStore.getTruncationRecord(scope, stream, context, executor) + .thenCompose(versionedMetadata -> streamMetadataStore.completeTruncation(scope, stream, versionedMetadata, context, executor) + .thenAccept(v -> { + throw new UnsupportedOperationException("Cannot truncate a sealed stream: " + NameUtils.getScopedStreamName(scope, stream)); + })); + } + return streamMetadataStore.getTruncationRecord(scope, stream, context, executor) + .thenCompose(versionedMetadata -> { + if (!versionedMetadata.getObject().isUpdating()) { + if (versionedState.getObject().equals(State.TRUNCATING)) { + return Futures.toVoid(streamMetadataStore.updateVersionedState(scope, stream, State.ACTIVE, + versionedState, context, executor)); + } else { + return CompletableFuture.completedFuture(null); + } } else { - return CompletableFuture.completedFuture(null); + return processTruncate(scope, stream, versionedMetadata, versionedState, context, requestId); } - } else { - return processTruncate(scope, stream, versionedMetadata, versionedState, context, requestId); - } - })); + }); + }); } private CompletableFuture processTruncate(String scope, String stream, VersionedMetadata versionedTruncationRecord, diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateReaderGroupTask.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateReaderGroupTask.java index 7e66fe325c8..02164dc7b60 100644 --- 
a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateReaderGroupTask.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateReaderGroupTask.java @@ -29,19 +29,24 @@ import io.pravega.controller.util.RetryHelper; import io.pravega.shared.NameUtils; import io.pravega.shared.controller.event.UpdateReaderGroupEvent; +import io.pravega.shared.protocol.netty.ConnectionFailedException; import org.slf4j.LoggerFactory; import java.util.Iterator; import java.util.UUID; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ScheduledExecutorService; +import java.util.function.Predicate; /** * Request handler for executing an update operation for a ReaderGroup. */ public class UpdateReaderGroupTask implements ReaderGroupTask<UpdateReaderGroupEvent> { private static final TagLogger log = new TagLogger(LoggerFactory.getLogger(UpdateReaderGroupTask.class)); - + private static final Predicate<Throwable> UPDATE_RETRY_PREDICATE = e -> { + Throwable t = Exceptions.unwrap(e); + return t instanceof RetryableException || t instanceof ConnectionFailedException; + }; private final StreamMetadataStore streamMetadataStore; private final StreamMetadataTasks streamMetadataTasks; private final ScheduledExecutorService executor; @@ -109,14 +114,14 @@ public CompletableFuture<Void> execute(final UpdateReaderGroupEvent request) { } return CompletableFuture.completedFuture(null); }) - .thenCompose(v -> streamMetadataStore.completeRGConfigUpdate(scope, - readerGroup, rgConfigRecord, context, executor)); + .thenCompose(v -> streamMetadataStore.completeRGConfigUpdate(scope, readerGroup, rgConfigRecord, context, executor)); } - return streamMetadataStore.completeRGConfigUpdate(scope, readerGroup, - rgConfigRecord, context, executor); + // We get here for non-transition updates + return streamMetadataStore.completeRGConfigUpdate(scope, readerGroup, rgConfigRecord, context, executor); } return CompletableFuture.completedFuture(null); }); - }), e -> Exceptions.unwrap(e) instanceof RetryableException, Integer.MAX_VALUE, executor); + }), UPDATE_RETRY_PREDICATE, Integer.MAX_VALUE, executor); } + } \ No newline at end of file diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateStreamTask.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateStreamTask.java index bf99c8e8401..9eb234c83f6 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateStreamTask.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/UpdateStreamTask.java @@ -29,6 +29,7 @@ import io.pravega.controller.store.stream.records.EpochTransitionRecord; import io.pravega.controller.store.stream.records.StreamConfigurationRecord; import io.pravega.controller.task.Stream.StreamMetadataTasks; +import io.pravega.shared.NameUtils; import io.pravega.shared.controller.event.UpdateStreamEvent; import java.util.AbstractMap; @@ -38,6 +39,7 @@ import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletionException; import java.util.concurrent.ScheduledExecutorService; + import org.slf4j.LoggerFactory; import java.util.stream.Collectors; import java.util.stream.IntStream; @@ -78,19 +80,31 @@ public CompletableFuture<Void> execute(final UpdateStreamEvent request) { final OperationContext context = streamMetadataStore.createStreamContext(scope, stream, requestId); return streamMetadataStore.getVersionedState(scope, 
stream, context, executor) - .thenCompose(versionedState -> streamMetadataStore.getConfigurationRecord(scope, stream, context, executor) - .thenCompose(versionedMetadata -> { - if (!versionedMetadata.getObject().isUpdating()) { - if (versionedState.getObject().equals(State.UPDATING)) { - return Futures.toVoid(streamMetadataStore.updateVersionedState(scope, stream, State.ACTIVE, - versionedState, context, executor)); + .thenCompose(versionedState -> { + if (versionedState.getObject().equals(State.SEALED)) { + // updating the stream should not be allowed since it has been SEALED + // hence, we need to update the configuration in the store with updating flag = false + // and then throw an exception + return streamMetadataStore.getConfigurationRecord(scope, stream, context, executor) + .thenCompose(versionedMetadata -> streamMetadataStore.completeUpdateConfiguration(scope, stream, versionedMetadata, context, executor) + .thenAccept(v -> { + throw new UnsupportedOperationException("Cannot update a sealed stream: " + NameUtils.getScopedStreamName(scope, stream)); + })); + } + return streamMetadataStore.getConfigurationRecord(scope, stream, context, executor) + .thenCompose(versionedMetadata -> { + if (!versionedMetadata.getObject().isUpdating()) { + if (versionedState.getObject().equals(State.UPDATING)) { + return Futures.toVoid(streamMetadataStore.updateVersionedState(scope, stream, State.ACTIVE, + versionedState, context, executor)); + } else { + return CompletableFuture.completedFuture(null); + } } else { - return CompletableFuture.completedFuture(null); + return processUpdate(scope, stream, versionedMetadata, versionedState, context, requestId); } - } else { - return processUpdate(scope, stream, versionedMetadata, versionedState, context, requestId); - } - })); + }); + }); } private CompletableFuture<Void> processUpdate(String scope, String stream, VersionedMetadata<StreamConfigurationRecord> record, diff --git a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/kvtable/CreateTableTask.java b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/kvtable/CreateTableTask.java index 351a48300f0..e35794ccfcb 100644 --- a/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/kvtable/CreateTableTask.java +++ b/controller/src/main/java/io/pravega/controller/server/eventProcessor/requesthandlers/kvtable/CreateTableTask.java @@ -67,11 +67,13 @@ public CompletableFuture<Void> execute(final CreateTableEvent request) { int secondaryKeyLength = request.getSecondaryKeyLength(); long creationTime = request.getTimestamp(); long requestId = request.getRequestId(); + long rolloverSize = request.getRolloverSizeBytes(); String kvTableId = request.getTableId().toString(); KeyValueTableConfiguration config = KeyValueTableConfiguration.builder() .partitionCount(partitionCount) .primaryKeyLength(primaryKeyLength) .secondaryKeyLength(secondaryKeyLength) + .rolloverSizeBytes(rolloverSize) .build(); final OperationContext context = kvtMetadataStore.createContext(scope, kvt, requestId); @@ -95,7 +97,7 @@ public CompletableFuture<Void> execute(final CreateTableEvent request) { .boxed() .map(x -> NameUtils.computeSegmentId(x, 0)) .collect(Collectors.toList()); - kvtMetadataTasks.createNewSegments(scope, kvt, newSegments, keyLength, requestId) + kvtMetadataTasks.createNewSegments(scope, kvt, newSegments, keyLength, requestId, config.getRolloverSizeBytes()) .thenCompose(y -> { kvtMetadataStore.getVersionedState(scope, kvt, context, executor) .thenCompose(state -> { diff 
--git a/controller/src/main/java/io/pravega/controller/server/health/ClusterListenerHealthContributor.java b/controller/src/main/java/io/pravega/controller/server/health/ClusterListenerHealthContributor.java new file mode 100644 index 00000000000..3a7dca58224 --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/server/health/ClusterListenerHealthContributor.java @@ -0,0 +1,45 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.server.health; + +import com.google.common.base.Preconditions; +import io.pravega.controller.fault.ControllerClusterListener; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; +import lombok.extern.slf4j.Slf4j; + +@Slf4j +public class ClusterListenerHealthContributor extends AbstractHealthContributor { + private final ControllerClusterListener controllerClusterListener; + + public ClusterListenerHealthContributor(String name, ControllerClusterListener controllerClusterListener) { + super(name); + this.controllerClusterListener = Preconditions.checkNotNull(controllerClusterListener, "controllerClusterListener"); + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) throws Exception { + Status status = Status.DOWN; + if (controllerClusterListener.isRunning()) { + status = Status.NEW; + if (controllerClusterListener.isReady()) { + status = Status.UP; + } + } + return status; + } +} diff --git a/controller/src/main/java/io/pravega/controller/server/health/EventProcessorHealthContributor.java b/controller/src/main/java/io/pravega/controller/server/health/EventProcessorHealthContributor.java new file mode 100644 index 00000000000..4f785d581dd --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/server/health/EventProcessorHealthContributor.java @@ -0,0 +1,43 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
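The health contributors introduced in this change all map service state to a health Status the same way: DOWN when the underlying service is not running, NEW once it is running, and UP only once it is also ready (or healthy, or ZK-connected, depending on the service). A hedged sketch of that ladder as a standalone helper; the patch inlines this logic in each doHealthCheck, so statusFor is hypothetical:

    import io.pravega.shared.health.Status;

    final class StatusLadder {
        private StatusLadder() {
        }

        // DOWN when not running, NEW when running but not yet ready, UP when both.
        static Status statusFor(boolean running, boolean ready) {
            if (!running) {
                return Status.DOWN;
            }
            return ready ? Status.UP : Status.NEW;
        }
    }

GRPCServerHealthContributor is the one exception: it has no separate readiness probe, so it reports UP as soon as the server is running.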
+ */ +package io.pravega.controller.server.health; + +import com.google.common.base.Preconditions; +import io.pravega.controller.server.eventProcessor.ControllerEventProcessors; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; + +public class EventProcessorHealthContributor extends AbstractHealthContributor { + private final ControllerEventProcessors controllerEventProcessors; + + public EventProcessorHealthContributor(String name, ControllerEventProcessors controllerEventProcessors) { + super(name); + this.controllerEventProcessors = Preconditions.checkNotNull(controllerEventProcessors, "controllerEventProcessors"); + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) throws Exception { + Status status = Status.DOWN; + if (controllerEventProcessors.isRunning()) { + status = Status.NEW; + if (controllerEventProcessors.isReady()) { + status = Status.UP; + } + } + return status; + } +} diff --git a/controller/src/main/java/io/pravega/controller/server/health/GRPCServerHealthContributor.java b/controller/src/main/java/io/pravega/controller/server/health/GRPCServerHealthContributor.java new file mode 100644 index 00000000000..e0546ac96fc --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/server/health/GRPCServerHealthContributor.java @@ -0,0 +1,41 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.server.health; + +import com.google.common.base.Preconditions; +import io.pravega.controller.server.rpc.grpc.GRPCServer; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; + +public class GRPCServerHealthContributor extends AbstractHealthContributor { + + private final GRPCServer grpcServer; + + public GRPCServerHealthContributor(String name, GRPCServer grpcServer) { + super(name); + this.grpcServer = Preconditions.checkNotNull(grpcServer, "grpcServer"); + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) throws Exception { + Status status = Status.DOWN; + if (grpcServer.isRunning()) { + status = Status.UP; + } + return status; + } +} \ No newline at end of file diff --git a/controller/src/main/java/io/pravega/controller/server/health/RetentionServiceHealthContributor.java b/controller/src/main/java/io/pravega/controller/server/health/RetentionServiceHealthContributor.java new file mode 100644 index 00000000000..68641c46dd3 --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/server/health/RetentionServiceHealthContributor.java @@ -0,0 +1,46 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.server.health; + +import com.google.common.base.Preconditions; +import io.pravega.controller.server.bucket.BucketManager; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; +import lombok.extern.slf4j.Slf4j; + +@Slf4j +public class RetentionServiceHealthContributor extends AbstractHealthContributor { + private final BucketManager retentionService; + + public RetentionServiceHealthContributor(String name, BucketManager retentionService) { + super(name); + this.retentionService = Preconditions.checkNotNull(retentionService, "retentionService"); + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) throws Exception { + Status status = Status.DOWN; + if (retentionService.isRunning()) { + status = Status.NEW; + if (retentionService.isHealthy()) { + status = Status.UP; + } + } + return status; + } +} + diff --git a/controller/src/main/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributor.java b/controller/src/main/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributor.java new file mode 100644 index 00000000000..92a853833f7 --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributor.java @@ -0,0 +1,46 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.controller.server.health; + +import com.google.common.base.Preconditions; +import io.pravega.controller.fault.SegmentContainerMonitor; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; +import lombok.extern.slf4j.Slf4j; + +@Slf4j +public class SegmentContainerMonitorHealthContributor extends AbstractHealthContributor { + private final SegmentContainerMonitor segmentContainerMonitor; + + public SegmentContainerMonitorHealthContributor(String name, SegmentContainerMonitor segmentContainerMonitor) { + super(name); + this.segmentContainerMonitor = Preconditions.checkNotNull(segmentContainerMonitor, "segmentContainerMonitor"); + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) throws Exception { + Status status = Status.DOWN; + if (segmentContainerMonitor.isRunning()) { + status = Status.NEW; + if (segmentContainerMonitor.isZKConnected()) { + status = Status.UP; + } + } + return status; + } +} + diff --git a/controller/src/main/java/io/pravega/controller/server/health/WatermarkingServiceHealthContributor.java b/controller/src/main/java/io/pravega/controller/server/health/WatermarkingServiceHealthContributor.java new file mode 100644 index 00000000000..c27e0e7b000 --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/server/health/WatermarkingServiceHealthContributor.java @@ -0,0 +1,45 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.controller.server.health; + +import com.google.common.base.Preconditions; +import io.pravega.controller.server.bucket.BucketManager; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; +import lombok.extern.slf4j.Slf4j; + +@Slf4j +public class WatermarkingServiceHealthContributor extends AbstractHealthContributor { + private final BucketManager watermarkingService; + + public WatermarkingServiceHealthContributor(String name, BucketManager watermarkingService) { + super(name); + this.watermarkingService = Preconditions.checkNotNull(watermarkingService, "watermarkingService"); + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) throws Exception { + Status status = Status.DOWN; + if (watermarkingService.isRunning()) { + status = Status.NEW; + if (watermarkingService.isHealthy()) { + status = Status.UP; + } + } + return status; + } +} \ No newline at end of file diff --git a/controller/src/main/java/io/pravega/controller/server/rest/ModelHelper.java b/controller/src/main/java/io/pravega/controller/server/rest/ModelHelper.java index 4138801f0c6..b749b4e2837 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/ModelHelper.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/ModelHelper.java @@ -16,15 +16,16 @@ package io.pravega.controller.server.rest; import io.pravega.client.stream.Stream; +import io.pravega.client.stream.RetentionPolicy; +import io.pravega.client.stream.ScalingPolicy; +import io.pravega.client.stream.StreamConfiguration; import io.pravega.controller.server.rest.generated.model.CreateStreamRequest; import io.pravega.controller.server.rest.generated.model.RetentionConfig; -import io.pravega.controller.server.rest.generated.model.TimeBasedRetention; import io.pravega.controller.server.rest.generated.model.ScalingConfig; import io.pravega.controller.server.rest.generated.model.StreamProperty; +import io.pravega.controller.server.rest.generated.model.TagsList; +import io.pravega.controller.server.rest.generated.model.TimeBasedRetention; import io.pravega.controller.server.rest.generated.model.UpdateStreamRequest; -import io.pravega.client.stream.RetentionPolicy; -import io.pravega.client.stream.ScalingPolicy; -import io.pravega.client.stream.StreamConfiguration; import io.pravega.controller.store.stream.records.ReaderGroupConfigRecord; import io.pravega.controller.stream.api.grpc.v1.Controller; import org.apache.commons.lang3.NotImplementedException; @@ -98,10 +99,26 @@ public static final StreamConfiguration getCreateStreamConfig(final CreateStream throw new NotImplementedException("retention policy type not supported"); } } - return StreamConfiguration.builder() - .scalingPolicy(scalingPolicy) - .retentionPolicy(retentionPolicy) - .build(); + + TagsList tagsList = new TagsList(); + if (createStreamRequest.getStreamTags() != null) { + tagsList = createStreamRequest.getStreamTags(); + } + + StreamConfiguration.StreamConfigurationBuilder builder = StreamConfiguration.builder() + .scalingPolicy(scalingPolicy) + .retentionPolicy(retentionPolicy) + .tags(tagsList); + + if (createStreamRequest.getTimestampAggregationTimeout() != null) { + builder.timestampAggregationTimeout(createStreamRequest.getTimestampAggregationTimeout()); + } + + if (createStreamRequest.getRolloverSizeBytes() != null) { + builder.rolloverSizeBytes(createStreamRequest.getRolloverSizeBytes()); + } + + return builder.build(); } /** @@ 
-143,10 +160,26 @@ public static final StreamConfiguration getUpdateStreamConfig(final UpdateStream throw new NotImplementedException("retention policy type not supported"); } } - return StreamConfiguration.builder() + + TagsList tagsList = new TagsList(); + if (updateStreamRequest.getStreamTags() != null) { + tagsList = updateStreamRequest.getStreamTags(); + } + + StreamConfiguration.StreamConfigurationBuilder builder = StreamConfiguration.builder() .scalingPolicy(scalingPolicy) .retentionPolicy(retentionPolicy) - .build(); + .tags(tagsList); + + if (updateStreamRequest.getTimestampAggregationTimeout() != null) { + builder.timestampAggregationTimeout(updateStreamRequest.getTimestampAggregationTimeout()); + } + + if (updateStreamRequest.getRolloverSizeBytes() != null) { + builder.rolloverSizeBytes(updateStreamRequest.getRolloverSizeBytes()); + } + + return builder.build(); } /** @@ -203,11 +236,17 @@ public static final StreamProperty encodeStreamResponse(String scope, String str } } + TagsList tagList = new TagsList(); + tagList.addAll(streamConfiguration.getTags()); + StreamProperty streamProperty = new StreamProperty(); streamProperty.setScopeName(scope); streamProperty.setStreamName(streamName); streamProperty.setScalingPolicy(scalingPolicy); streamProperty.setRetentionPolicy(retentionConfig); + streamProperty.setTags(tagList); + streamProperty.setTimestampAggregationTimeout(streamConfiguration.getTimestampAggregationTimeout()); + streamProperty.setRolloverSizeBytes(streamConfiguration.getRolloverSizeBytes()); return streamProperty; } diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/Bootstrap.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/Bootstrap.java index e492131bd49..0ef53e54ba7 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/Bootstrap.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/Bootstrap.java @@ -16,7 +16,7 @@ public class Bootstrap extends HttpServlet { public void init(ServletConfig config) throws ServletException { Info info = new Info() .title("Swagger Server") - .description("List of admin REST APIs for the pravega controller service.") + .description("List of admin REST APIs for the Pravega controller service.") .termsOfService("") .contact(new Contact() .email("")) diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApi.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApi.java index 6a6b487403c..750ae43f137 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApi.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApi.java @@ -245,10 +245,11 @@ public Response listScopes(@Context SecurityContext securityContext) @io.swagger.annotations.ApiResponse(code = 500, message = "Internal server error while fetching the list of streams for the given scope", response = StreamsList.class) }) public Response listStreams(@ApiParam(value = "Scope name",required=true) @PathParam("scopeName") String scopeName -,@ApiParam(value = "Optional flag whether to display system created streams. If not specified only user created streams will be returned") @QueryParam("showInternalStreams") String showInternalStreams +,@ApiParam(value = "Filter options", allowableValues="showInternalStreams, tag") @QueryParam("filter_type") String filterType +,@ApiParam(value = "value to be passed. 
must match the type passed with it.") @QueryParam("filter_value") String filterValue ,@Context SecurityContext securityContext) throws NotFoundException { - return delegate.listStreams(scopeName,showInternalStreams,securityContext); + return delegate.listStreams(scopeName,filterType,filterValue,securityContext); } @PUT @Path("/{scopeName}/streams/{streamName}") diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApiService.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApiService.java index 5d1edc6c015..262aa965994 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApiService.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/ScopesApiService.java @@ -37,7 +37,7 @@ public abstract class ScopesApiService { public abstract Response getStream(String scopeName,String streamName,SecurityContext securityContext) throws NotFoundException; public abstract Response listReaderGroups(String scopeName,SecurityContext securityContext) throws NotFoundException; public abstract Response listScopes(SecurityContext securityContext) throws NotFoundException; - public abstract Response listStreams(String scopeName, String showInternalStreams,SecurityContext securityContext) throws NotFoundException; + public abstract Response listStreams(String scopeName, String filterType, String filterValue,SecurityContext securityContext) throws NotFoundException; public abstract Response updateStream(String scopeName,String streamName,UpdateStreamRequest updateStreamRequest,SecurityContext securityContext) throws NotFoundException; public abstract Response updateStreamState(String scopeName,String streamName,StreamState updateStreamStateRequest,SecurityContext securityContext) throws NotFoundException; } diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/impl/ScopesApiServiceImpl.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/impl/ScopesApiServiceImpl.java index 06833cfd95a..120898208eb 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/api/impl/ScopesApiServiceImpl.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/api/impl/ScopesApiServiceImpl.java @@ -78,7 +78,7 @@ public Response listScopes(SecurityContext securityContext) throws NotFoundExcep return Response.ok().entity(new ApiResponseMessage(ApiResponseMessage.OK, "magic!")).build(); } @Override - public Response listStreams(String scopeName, String showInternalStreams, SecurityContext securityContext) throws NotFoundException { + public Response listStreams(String scopeName, String filterType, String filterValue, SecurityContext securityContext) throws NotFoundException { // do some magic! return Response.ok().entity(new ApiResponseMessage(ApiResponseMessage.OK, "magic!")).build(); } diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateScopeRequest.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateScopeRequest.java index e7dceada527..40d2b39b8a7 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateScopeRequest.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateScopeRequest.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. 
+ * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * @@ -14,10 +14,11 @@ package io.pravega.controller.server.rest.generated.model; import java.util.Objects; - import com.fasterxml.jackson.annotation.JsonProperty; - +import com.fasterxml.jackson.annotation.JsonCreator; +import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; +import javax.validation.constraints.*; /** * CreateScopeRequest diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateStreamRequest.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateStreamRequest.java index 5b738b1410b..9794dcbc97f 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateStreamRequest.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/CreateStreamRequest.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * @@ -18,6 +18,7 @@ import com.fasterxml.jackson.annotation.JsonCreator; import io.pravega.controller.server.rest.generated.model.RetentionConfig; import io.pravega.controller.server.rest.generated.model.ScalingConfig; +import io.pravega.controller.server.rest.generated.model.TagsList; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; import javax.validation.constraints.*; @@ -36,6 +37,15 @@ public class CreateStreamRequest { @JsonProperty("retentionPolicy") private RetentionConfig retentionPolicy = null; + @JsonProperty("streamTags") + private TagsList streamTags = null; + + @JsonProperty("timestampAggregationTimeout") + private Long timestampAggregationTimeout = null; + + @JsonProperty("rolloverSizeBytes") + private Long rolloverSizeBytes = null; + public CreateStreamRequest streamName(String streamName) { this.streamName = streamName; return this; @@ -93,6 +103,63 @@ public void setRetentionPolicy(RetentionConfig retentionPolicy) { this.retentionPolicy = retentionPolicy; } + public CreateStreamRequest streamTags(TagsList streamTags) { + this.streamTags = streamTags; + return this; + } + + /** + * Get streamTags + * @return streamTags + **/ + @JsonProperty("streamTags") + @ApiModelProperty(value = "") + public TagsList getStreamTags() { + return streamTags; + } + + public void setStreamTags(TagsList streamTags) { + this.streamTags = streamTags; + } + + public CreateStreamRequest timestampAggregationTimeout(Long timestampAggregationTimeout) { + this.timestampAggregationTimeout = timestampAggregationTimeout; + return this; + } + + /** + * Get timestampAggregationTimeout + * @return timestampAggregationTimeout + **/ + @JsonProperty("timestampAggregationTimeout") + @ApiModelProperty(value = "") + public Long getTimestampAggregationTimeout() { + return timestampAggregationTimeout; + } + + public void setTimestampAggregationTimeout(Long timestampAggregationTimeout) { + this.timestampAggregationTimeout = timestampAggregationTimeout; + } + + public CreateStreamRequest rolloverSizeBytes(Long rolloverSizeBytes) { + this.rolloverSizeBytes = rolloverSizeBytes; + return this; + } + + /** + * Get rolloverSizeBytes + * @return rolloverSizeBytes + **/ + @JsonProperty("rolloverSizeBytes") + @ApiModelProperty(value = "") + public Long getRolloverSizeBytes() { + return rolloverSizeBytes; + } + + public void setRolloverSizeBytes(Long rolloverSizeBytes) 
{ + this.rolloverSizeBytes = rolloverSizeBytes; + } + @Override public boolean equals(java.lang.Object o) { @@ -105,12 +172,15 @@ public boolean equals(java.lang.Object o) { CreateStreamRequest createStreamRequest = (CreateStreamRequest) o; return Objects.equals(this.streamName, createStreamRequest.streamName) && Objects.equals(this.scalingPolicy, createStreamRequest.scalingPolicy) && - Objects.equals(this.retentionPolicy, createStreamRequest.retentionPolicy); + Objects.equals(this.retentionPolicy, createStreamRequest.retentionPolicy) && + Objects.equals(this.streamTags, createStreamRequest.streamTags) && + Objects.equals(this.timestampAggregationTimeout, createStreamRequest.timestampAggregationTimeout) && + Objects.equals(this.rolloverSizeBytes, createStreamRequest.rolloverSizeBytes); } @Override public int hashCode() { - return Objects.hash(streamName, scalingPolicy, retentionPolicy); + return Objects.hash(streamName, scalingPolicy, retentionPolicy, streamTags, timestampAggregationTimeout, rolloverSizeBytes); } @@ -122,6 +192,9 @@ public String toString() { sb.append(" streamName: ").append(toIndentedString(streamName)).append("\n"); sb.append(" scalingPolicy: ").append(toIndentedString(scalingPolicy)).append("\n"); sb.append(" retentionPolicy: ").append(toIndentedString(retentionPolicy)).append("\n"); + sb.append(" streamTags: ").append(toIndentedString(streamTags)).append("\n"); + sb.append(" timestampAggregationTimeout: ").append(toIndentedString(timestampAggregationTimeout)).append("\n"); + sb.append(" rolloverSizeBytes: ").append(toIndentedString(rolloverSizeBytes)).append("\n"); sb.append("}"); return sb.toString(); } diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupProperty.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupProperty.java index dfd1c1edf4d..5ad2724c419 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupProperty.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupProperty.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsList.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsList.java index 9d9feff9b66..790f08cc30f 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsList.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsList.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. 
* * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsListReaderGroups.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsListReaderGroups.java index 77d6bca30e4..093545d73af 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsListReaderGroups.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ReaderGroupsListReaderGroups.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/RetentionConfig.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/RetentionConfig.java index 563e9d14e19..c485df3641e 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/RetentionConfig.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/RetentionConfig.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScaleMetadata.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScaleMetadata.java index ae8f2d56dbb..2f7f0ab8cef 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScaleMetadata.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScaleMetadata.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingConfig.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingConfig.java index 0da18b6e2e4..c76d110fcf8 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingConfig.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingConfig.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingEventList.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingEventList.java index d294c8668fa..ae6970f9519 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingEventList.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScalingEventList.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. 
* * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopeProperty.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopeProperty.java index 9547063bf4a..537ace23317 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopeProperty.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopeProperty.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopesList.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopesList.java index 21746164cfa..785d5f66f14 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopesList.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/ScopesList.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/Segment.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/Segment.java index fc1870f31bc..f44959347ad 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/Segment.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/Segment.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamProperty.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamProperty.java index e229bb6df31..1f830d530bd 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamProperty.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamProperty.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. 
* * OpenAPI spec version: 0.0.1 * @@ -18,6 +18,7 @@ import com.fasterxml.jackson.annotation.JsonCreator; import io.pravega.controller.server.rest.generated.model.RetentionConfig; import io.pravega.controller.server.rest.generated.model.ScalingConfig; +import io.pravega.controller.server.rest.generated.model.TagsList; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; import javax.validation.constraints.*; @@ -39,6 +40,15 @@ public class StreamProperty { @JsonProperty("retentionPolicy") private RetentionConfig retentionPolicy = null; + @JsonProperty("tags") + private TagsList tags = null; + + @JsonProperty("timestampAggregationTimeout") + private Long timestampAggregationTimeout = null; + + @JsonProperty("rolloverSizeBytes") + private Long rolloverSizeBytes = null; + public StreamProperty scopeName(String scopeName) { this.scopeName = scopeName; return this; @@ -115,6 +125,63 @@ public void setRetentionPolicy(RetentionConfig retentionPolicy) { this.retentionPolicy = retentionPolicy; } + public StreamProperty tags(TagsList tags) { + this.tags = tags; + return this; + } + + /** + * Get tags + * @return tags + **/ + @JsonProperty("tags") + @ApiModelProperty(value = "") + public TagsList getTags() { + return tags; + } + + public void setTags(TagsList tags) { + this.tags = tags; + } + + public StreamProperty timestampAggregationTimeout(Long timestampAggregationTimeout) { + this.timestampAggregationTimeout = timestampAggregationTimeout; + return this; + } + + /** + * Get timestampAggregationTimeout + * @return timestampAggregationTimeout + **/ + @JsonProperty("timestampAggregationTimeout") + @ApiModelProperty(value = "") + public Long getTimestampAggregationTimeout() { + return timestampAggregationTimeout; + } + + public void setTimestampAggregationTimeout(Long timestampAggregationTimeout) { + this.timestampAggregationTimeout = timestampAggregationTimeout; + } + + public StreamProperty rolloverSizeBytes(Long rolloverSizeBytes) { + this.rolloverSizeBytes = rolloverSizeBytes; + return this; + } + + /** + * Get rolloverSizeBytes + * @return rolloverSizeBytes + **/ + @JsonProperty("rolloverSizeBytes") + @ApiModelProperty(value = "") + public Long getRolloverSizeBytes() { + return rolloverSizeBytes; + } + + public void setRolloverSizeBytes(Long rolloverSizeBytes) { + this.rolloverSizeBytes = rolloverSizeBytes; + } + @Override public boolean equals(java.lang.Object o) { @@ -128,12 +195,15 @@ public boolean equals(java.lang.Object o) { return Objects.equals(this.scopeName, streamProperty.scopeName) && Objects.equals(this.streamName, streamProperty.streamName) && Objects.equals(this.scalingPolicy, streamProperty.scalingPolicy) && - Objects.equals(this.retentionPolicy, streamProperty.retentionPolicy); + Objects.equals(this.retentionPolicy, streamProperty.retentionPolicy) && + Objects.equals(this.tags, streamProperty.tags) && + Objects.equals(this.timestampAggregationTimeout, streamProperty.timestampAggregationTimeout) && + Objects.equals(this.rolloverSizeBytes, streamProperty.rolloverSizeBytes); } @Override public int hashCode() { - return Objects.hash(scopeName, streamName, scalingPolicy, retentionPolicy); + return Objects.hash(scopeName, streamName, scalingPolicy, retentionPolicy, tags, timestampAggregationTimeout, rolloverSizeBytes); } @@ -146,6 +216,9 @@ public String toString() { sb.append(" streamName: ").append(toIndentedString(streamName)).append("\n"); sb.append(" scalingPolicy: ").append(toIndentedString(scalingPolicy)).append("\n"); sb.append(" 
retentionPolicy: ").append(toIndentedString(retentionPolicy)).append("\n"); + sb.append(" tags: ").append(toIndentedString(tags)).append("\n"); + sb.append(" timestampAggregationTimeout: ").append(toIndentedString(timestampAggregationTimeout)).append("\n"); + sb.append(" rolloverSizeBytes: ").append(toIndentedString(rolloverSizeBytes)).append("\n"); sb.append("}"); return sb.toString(); } diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamState.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamState.java index 547a99011ba..3b6ff5fbcf3 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamState.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamState.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamsList.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamsList.java index c1fa053929f..9ed071b8511 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamsList.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/StreamsList.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/TagsList.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/TagsList.java new file mode 100644 index 00000000000..f58c776facd --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/TagsList.java @@ -0,0 +1,64 @@ +/* + * Pravega Controller APIs + * List of admin REST APIs for the Pravega controller service. + * + * OpenAPI spec version: 0.0.1 + * + * + * NOTE: This class is auto generated by the swagger code generator program. + * https://github.com/swagger-api/swagger-codegen.git + * Do not edit the class manually. + */ + + +package io.pravega.controller.server.rest.generated.model; + +import java.util.Objects; +import java.util.ArrayList; +import java.util.List; +import javax.validation.constraints.*; + +/** + * TagsList + */ + +public class TagsList extends ArrayList { + + @Override + public boolean equals(java.lang.Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + return true; + } + + @Override + public int hashCode() { + return Objects.hash(super.hashCode()); + } + + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("class TagsList {\n"); + sb.append(" ").append(toIndentedString(super.toString())).append("\n"); + sb.append("}"); + return sb.toString(); + } + + /** + * Convert the given object to string with each line indented by 4 spaces + * (except the first line). 
+ */ + private String toIndentedString(java.lang.Object o) { + if (o == null) { + return "null"; + } + return o.toString().replace("\n", "\n "); + } +} + diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/TimeBasedRetention.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/TimeBasedRetention.java index 67b2695860a..0d10abfced0 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/TimeBasedRetention.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/TimeBasedRetention.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * diff --git a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/UpdateStreamRequest.java b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/UpdateStreamRequest.java index fd867091839..3bcaea5c006 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/generated/model/UpdateStreamRequest.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/generated/model/UpdateStreamRequest.java @@ -1,6 +1,6 @@ /* * Pravega Controller APIs - * List of admin REST APIs for the pravega controller service. + * List of admin REST APIs for the Pravega controller service. * * OpenAPI spec version: 0.0.1 * @@ -18,6 +18,7 @@ import com.fasterxml.jackson.annotation.JsonCreator; import io.pravega.controller.server.rest.generated.model.RetentionConfig; import io.pravega.controller.server.rest.generated.model.ScalingConfig; +import io.pravega.controller.server.rest.generated.model.TagsList; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; import javax.validation.constraints.*; @@ -33,6 +34,15 @@ public class UpdateStreamRequest { @JsonProperty("retentionPolicy") private RetentionConfig retentionPolicy = null; + @JsonProperty("streamTags") + private TagsList streamTags = null; + + @JsonProperty("timestampAggregationTimeout") + private Long timestampAggregationTimeout = null; + + @JsonProperty("rolloverSizeBytes") + private Long rolloverSizeBytes = null; + public UpdateStreamRequest scalingPolicy(ScalingConfig scalingPolicy) { this.scalingPolicy = scalingPolicy; return this; @@ -71,6 +81,63 @@ public void setRetentionPolicy(RetentionConfig retentionPolicy) { this.retentionPolicy = retentionPolicy; } + public UpdateStreamRequest streamTags(TagsList streamTags) { + this.streamTags = streamTags; + return this; + } + + /** + * Get streamTags + * @return streamTags + **/ + @JsonProperty("streamTags") + @ApiModelProperty(value = "") + public TagsList getStreamTags() { + return streamTags; + } + + public void setStreamTags(TagsList streamTags) { + this.streamTags = streamTags; + } + + public UpdateStreamRequest timestampAggregationTimeout(Long timestampAggregationTimeout) { + this.timestampAggregationTimeout = timestampAggregationTimeout; + return this; + } + + /** + * Get timestampAggregationTimeout + * @return timestampAggregationTimeout + **/ + @JsonProperty("timestampAggregationTimeout") + @ApiModelProperty(value = "") + public Long getTimestampAggregationTimeout() { + return timestampAggregationTimeout; + } + + public void setTimestampAggregationTimeout(Long timestampAggregationTimeout) { + this.timestampAggregationTimeout = timestampAggregationTimeout; + } + + public UpdateStreamRequest 
rolloverSizeBytes(Long rolloverSizeBytes) { + this.rolloverSizeBytes = rolloverSizeBytes; + return this; + } + + /** + * Get rolloverSizeBytes + * @return rolloverSizeBytes + **/ + @JsonProperty("rolloverSizeBytes") + @ApiModelProperty(value = "") + public Long getRolloverSizeBytes() { + return rolloverSizeBytes; + } + + public void setRolloverSizeBytes(Long rolloverSizeBytes) { + this.rolloverSizeBytes = rolloverSizeBytes; + } + @Override public boolean equals(java.lang.Object o) { @@ -82,12 +149,15 @@ public boolean equals(java.lang.Object o) { } UpdateStreamRequest updateStreamRequest = (UpdateStreamRequest) o; return Objects.equals(this.scalingPolicy, updateStreamRequest.scalingPolicy) && - Objects.equals(this.retentionPolicy, updateStreamRequest.retentionPolicy); + Objects.equals(this.retentionPolicy, updateStreamRequest.retentionPolicy) && + Objects.equals(this.streamTags, updateStreamRequest.streamTags) && + Objects.equals(this.timestampAggregationTimeout, updateStreamRequest.timestampAggregationTimeout) && + Objects.equals(this.rolloverSizeBytes, updateStreamRequest.rolloverSizeBytes); } @Override public int hashCode() { - return Objects.hash(scalingPolicy, retentionPolicy); + return Objects.hash(scalingPolicy, retentionPolicy, streamTags, timestampAggregationTimeout, rolloverSizeBytes); } @@ -98,6 +168,9 @@ public String toString() { sb.append(" scalingPolicy: ").append(toIndentedString(scalingPolicy)).append("\n"); sb.append(" retentionPolicy: ").append(toIndentedString(retentionPolicy)).append("\n"); + sb.append(" streamTags: ").append(toIndentedString(streamTags)).append("\n"); + sb.append(" timestampAggregationTimeout: ").append(toIndentedString(timestampAggregationTimeout)).append("\n"); + sb.append(" rolloverSizeBytes: ").append(toIndentedString(rolloverSizeBytes)).append("\n"); sb.append("}"); return sb.toString(); } diff --git a/controller/src/main/java/io/pravega/controller/server/rest/resources/StreamMetadataResourceImpl.java b/controller/src/main/java/io/pravega/controller/server/rest/resources/StreamMetadataResourceImpl.java index f8135b5d1fd..4c14fc8fa0f 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/resources/StreamMetadataResourceImpl.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/resources/StreamMetadataResourceImpl.java @@ -22,9 +22,11 @@ import io.pravega.client.connection.impl.ConnectionFactory; import io.pravega.client.stream.ReaderGroup; import io.pravega.client.stream.ReaderGroupNotFoundException; +import io.pravega.client.stream.Stream; import io.pravega.client.stream.StreamConfiguration; import io.pravega.client.stream.impl.ClientFactoryImpl; import io.pravega.common.LoggerHelpers; +import io.pravega.common.concurrent.Futures; import io.pravega.common.tracing.TagLogger; import io.pravega.controller.server.ControllerService; import io.pravega.controller.server.eventProcessor.LocalController; @@ -58,12 +60,15 @@ import java.util.List; import java.util.Random; import java.util.concurrent.CompletableFuture; +import java.util.stream.Collectors; import javax.ws.rs.container.AsyncResponse; import javax.ws.rs.core.Context; import javax.ws.rs.core.HttpHeaders; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import javax.ws.rs.core.SecurityContext; + +import org.apache.commons.lang3.tuple.ImmutablePair; import org.slf4j.LoggerFactory; import static io.pravega.auth.AuthHandler.Permissions.READ; @@ -88,7 +93,7 @@ public class StreamMetadataResourceImpl implements ApiV1.ScopesApi { private final 
Random requestIdGenerator = new Random(); private final ClientConfig clientConfig; - public StreamMetadataResourceImpl(LocalController localController, ControllerService controllerService, + public StreamMetadataResourceImpl(LocalController localController, ControllerService controllerService, AuthHandlerManager pravegaAuthManager, ConnectionFactory connectionFactory, ClientConfig clientConfig) { this.localController = localController; this.controllerService = controllerService; @@ -325,7 +330,7 @@ public void getReaderGroup(final String scopeName, final String readerGroupName, restAuthHelper.authenticateAuthorize( getAuthorizationHeader(), authorizationResource.ofReaderGroupInScope(scopeName, readerGroupName), READ); } catch (AuthException e) { - log.warn(requestId, "Get reader group for {} failed due to authentication failure.", + log.warn(requestId, "Get reader group for {} failed due to authentication failure.", scopeName + "/" + readerGroupName); asyncResponse.resume(Response.status(Status.fromStatusCode(e.getResponseCode())).build()); LoggerHelpers.traceLeave(log, "getReaderGroup", traceId); @@ -417,7 +422,7 @@ public void getStream(final String scopeName, final String streamName, final Sec restAuthHelper.authenticateAuthorize(getAuthorizationHeader(), authorizationResource.ofStreamInScope(scopeName, streamName), READ); } catch (AuthException e) { - log.warn(requestId, "Get stream for {} failed due to authentication failure.", + log.warn(requestId, "Get stream for {} failed due to authentication failure.", scopeName + "/" + streamName); asyncResponse.resume(Response.status(Status.fromStatusCode(e.getResponseCode())).build()); LoggerHelpers.traceLeave(log, "getStream", traceId); @@ -434,7 +439,7 @@ public void getStream(final String scopeName, final String streamName, final Sec log.warn(requestId, "Stream: {}/{} not found", scopeName, streamName); return Response.status(Status.NOT_FOUND).build(); } else { - log.warn(requestId, "getStream for {}/{} failed with exception: {}", + log.warn(requestId, "getStream for {}/{} failed with exception: {}", scopeName, streamName, exception); return Response.status(Status.INTERNAL_SERVER_ERROR).build(); } @@ -545,7 +550,7 @@ public void listScopes(final SecurityContext securityContext, final AsyncRespons * @param asyncResponse AsyncResponse provides means for asynchronous server side response processing. 
*/ @Override - public void listStreams(final String scopeName, final String showInternalStreams, + public void listStreams(final String scopeName, final String filterType, final String filterValue, final SecurityContext securityContext, final AsyncResponse asyncResponse) { long traceId = LoggerHelpers.traceEnter(log, "listStreams"); long requestId = requestIdGenerator.nextLong(); @@ -563,11 +568,54 @@ public void listStreams(final String scopeName, final String showInternalStreams LoggerHelpers.traceLeave(log, "listStreams", traceId); return; } - boolean showOnlyInternalStreams = showInternalStreams != null && showInternalStreams.equals("true"); - controllerService.listStreamsInScope(scopeName, requestId) - .thenApply(streamsList -> { - StreamsList streams = new StreamsList(); - streamsList.forEach((stream, config) -> { + boolean showOnlyInternalStreams = filterType != null && filterType.equals("showInternalStreams"); + boolean showStreamsWithTag = filterType != null && filterType.equals("tag"); + String tag; + if (showStreamsWithTag && filterValue != null) { + tag = filterValue; + List streams = new ArrayList<>(); + String finalTag = tag; + localController.listStreamsForTag(scopeName, tag).collectRemaining(streams::add).thenCompose(v -> { + List>> streamConfigFutureList = streams.stream().filter(stream -> { + boolean isAuthorized = false; + try { + isAuthorized = restAuthHelper.isAuthorized(authHeader, authorizationResource.ofStreamInScope(scopeName, stream.getStreamName()), + principal, READ); + } catch (AuthException e) { + log.warn(requestId, "List Streams with tag {} for scope {} failed due to authentication failure.", + finalTag, scopeName); + // Ignore. This exception occurs under abnormal circumstances and not to determine + // whether the user is authorized. In case it does occur, we assume that the user + // is unauthorized. 
+ } + return isAuthorized; + }).map(stream -> localController.getStreamConfiguration(scopeName, stream.getStreamName()) + .thenApply(config -> new ImmutablePair<>(stream, config))) + .collect(Collectors.toList()); + return Futures.allOfWithResults(streamConfigFutureList); + }).thenApply(streamConfigPairs -> { + StreamsList responseStreams = new StreamsList(); + responseStreams.setStreams(new ArrayList<>()); + streamConfigPairs.forEach(pair -> responseStreams.addStreamsItem(ModelHelper.encodeStreamResponse(pair.left.getScope(), pair.left.getStreamName(), pair.right))); + log.info(requestId, "Successfully fetched streams for scope: {} with tag: {}", scopeName, finalTag); + return Response.status(Status.OK).entity(responseStreams).build(); + }).exceptionally(exception -> { + if (exception.getCause() instanceof StoreException.DataNotFoundException + || exception instanceof StoreException.DataNotFoundException) { + log.warn(requestId, "Scope name: {} not found", scopeName); + return Response.status(Status.NOT_FOUND).build(); + } else { + log.warn(requestId, "listStreams for {} with tag {} failed with exception: {}", scopeName, finalTag, exception); + return Response.status(Status.INTERNAL_SERVER_ERROR).build(); + } + }).thenApply(asyncResponse::resume) + .thenAccept(x -> LoggerHelpers.traceLeave(log, "listStreams", traceId)); + } else { + controllerService.listStreamsInScope(scopeName, requestId) + .thenApply(streamsList -> { + StreamsList streams = new StreamsList(); + streams.setStreams(new ArrayList<>()); + streamsList.forEach((stream, config) -> { try { if (restAuthHelper.isAuthorized(authHeader, authorizationResource.ofStreamInScope(scopeName, stream), @@ -579,7 +627,8 @@ public void listStreams(final String scopeName, final String showInternalStreams } } } catch (AuthException e) { - log.warn(e.getMessage(), e); + log.warn(requestId, "Read internal streams for scope {} failed due to authentication failure.", + scopeName); // Ignore. This exception occurs under abnormal circumstances and not to determine // whether the user is authorized. In case it does occur, we assume that the user // is unauthorized. 
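For illustration, a minimal client-side sketch of the new filter_type/filter_value query parameters handled above, using the standard JAX-RS client API. The endpoint URL, port, scope name, and tag value are assumptions for the example, not values taken from this patch:

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.core.Response;

    public class ListStreamsByTagExample {
        public static void main(String[] args) {
            Client client = ClientBuilder.newClient();
            // GET /v1/scopes/{scopeName}/streams?filter_type=tag&filter_value=<tag>
            // lists only the streams in the scope that carry the given tag.
            Response response = client.target("http://localhost:9091/v1/scopes/myScope/streams")
                    .queryParam("filter_type", "tag")
                    .queryParam("filter_value", "sensor")
                    .request("application/json")
                    .get();
            System.out.println(response.readEntity(String.class));
            client.close();
        }
    }

Passing filter_type=showInternalStreams instead preserves the old behavior of listing only system-created streams.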
@@ -598,6 +647,7 @@ public void listStreams(final String scopeName, final String showInternalStreams } }).thenApply(asyncResponse::resume) .thenAccept(x -> LoggerHelpers.traceLeave(log, "listStreams", traceId)); + } } /** @@ -621,7 +671,7 @@ public void updateStream(final String scopeName, final String streamName, authorizationResource.ofStreamInScope(scopeName, streamName), READ_UPDATE); } catch (AuthException e) { - log.warn(requestId, "Update stream for {} failed due to authentication failure.", + log.warn(requestId, "Update stream for {} failed due to authentication failure.", scopeName + "/" + streamName); asyncResponse.resume(Response.status(Status.fromStatusCode(e.getResponseCode())).build()); LoggerHelpers.traceLeave(log, "Update stream", traceId); @@ -670,7 +720,7 @@ public void updateStreamState(final String scopeName, final String streamName, getAuthorizationHeader(), authorizationResource.ofStreamInScope(scopeName, streamName), READ_UPDATE); } catch (AuthException e) { - log.warn(requestId, "Update stream for {} failed due to authentication failure.", + log.warn(requestId, "Update stream for {} failed due to authentication failure.", scopeName + "/" + streamName); asyncResponse.resume(Response.status(Status.fromStatusCode(e.getResponseCode())).build()); LoggerHelpers.traceLeave(log, "Update stream", traceId); @@ -733,7 +783,7 @@ public void getScalingEvents(final String scopeName, final String streamName, fi getAuthorizationHeader(), authorizationResource.ofStreamInScope(scopeName, streamName), READ); } catch (AuthException e) { - log.warn(requestId, "Get scaling events for {} failed due to authentication failure.", + log.warn(requestId, "Get scaling events for {} failed due to authentication failure.", scopeName + "/" + streamName); asyncResponse.resume(Response.status(Status.fromStatusCode(e.getResponseCode())).build()); LoggerHelpers.traceLeave(log, "Get scaling events", traceId); @@ -741,7 +791,7 @@ public void getScalingEvents(final String scopeName, final String streamName, fi } if (from < 0 || to < 0 || from > to) { - log.warn(requestId, "Received invalid request from client for scopeName/streamName: {}/{} ", + log.warn(requestId, "Received invalid request from client for scopeName/streamName: {}/{} ", scopeName, streamName); asyncResponse.resume(Response.status(Status.BAD_REQUEST).build()); LoggerHelpers.traceLeave(log, "getScalingEvents", traceId); @@ -769,7 +819,7 @@ public void getScalingEvents(final String scopeName, final String streamName, fi if (referenceEvent != null) { finalScaleMetadataList.add(0, referenceEvent); } - log.info(requestId, "Successfully fetched required scaling events for scope: {}, stream: {}", + log.info(requestId, "Successfully fetched required scaling events for scope: {}, stream: {}", scopeName, streamName); return Response.status(Status.OK).entity(finalScaleMetadataList).build(); }).exceptionally(exception -> { diff --git a/controller/src/main/java/io/pravega/controller/server/rest/v1/ApiV1.java b/controller/src/main/java/io/pravega/controller/server/rest/v1/ApiV1.java index f4ef0a2cd61..ab8b0ed944e 100644 --- a/controller/src/main/java/io/pravega/controller/server/rest/v1/ApiV1.java +++ b/controller/src/main/java/io/pravega/controller/server/rest/v1/ApiV1.java @@ -268,9 +268,9 @@ void listReaderGroups(@ApiParam(value = "Scope name", required = true) @PathPara @ApiResponse( code = 500, message = "Server error", response = StreamsList.class) }) void listStreams(@ApiParam(value = "Scope name", required = true) @PathParam("scopeName") String 
scopeName, - @ApiParam(value = "Flag whether to display only system created streams") - @QueryParam("showInternalStreams") String showInternalStreams, - @Context SecurityContext securityContext, @Suspended final AsyncResponse asyncResponse); + @ApiParam(value = "Filter options", allowableValues = "showInternalStreams, tag") @QueryParam("filter_type") String filterType, + @ApiParam(value = "Value of the filter; it must correspond to the specified filter_type.") @QueryParam("filter_value") String filterValue, + @Context SecurityContext securityContext, @Suspended final AsyncResponse asyncResponse); @PUT @Path("/{scopeName}/streams/{streamName}") diff --git a/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServer.java b/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServer.java index f34534db2d4..5de0800cb00 100644 --- a/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServer.java +++ b/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServer.java @@ -20,8 +20,10 @@ import io.grpc.Server; import io.grpc.ServerBuilder; import io.grpc.ServerInterceptors; +import io.grpc.netty.GrpcSslContexts; import io.grpc.netty.NettyServerBuilder; import io.netty.channel.ChannelOption; +import io.netty.handler.ssl.SslContext; import io.pravega.common.LoggerHelpers; import io.pravega.common.tracing.RequestTracker; import io.pravega.controller.server.ControllerService; @@ -31,8 +33,11 @@ import io.pravega.shared.controller.tracing.RPCTracingHelpers; import java.io.File; import lombok.Getter; +import lombok.SneakyThrows; import lombok.extern.slf4j.Slf4j; +import javax.net.ssl.SSLException; + /** * gRPC based RPC Server for the Controller. */ @@ -74,10 +79,19 @@ public GRPCServer(ControllerService controllerService, GRPCServerConfig serverCo if (serverConfig.isTlsEnabled() && !Strings.isNullOrEmpty(serverConfig.getTlsCertFile())) { builder = builder.useTransportSecurity(new File(serverConfig.getTlsCertFile()), new File(serverConfig.getTlsKeyFile())); + SslContext ctx = getSSLContext(serverConfig); + ((NettyServerBuilder) builder).sslContext(ctx); } this.server = builder.build(); } + @SneakyThrows(SSLException.class) + private SslContext getSSLContext(GRPCServerConfig serverConfig) { + return GrpcSslContexts.forServer(new File(serverConfig.getTlsCertFile()), new File(serverConfig.getTlsKeyFile())) + .protocols(serverConfig.getTlsProtocolVersion()) + .build(); + } + /** * Start gRPC server. */ diff --git a/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServerConfig.java b/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServerConfig.java index e0a62df9482..1034da86a3f 100644 --- a/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServerConfig.java +++ b/controller/src/main/java/io/pravega/controller/server/rpc/grpc/GRPCServerConfig.java @@ -48,6 +48,7 @@ public interface GRPCServerConfig extends ServerConfig { * * @return Whether this deployment has auth enabled. */ + @Override boolean isAuthorizationEnabled(); /** @@ -65,6 +66,12 @@ public interface GRPCServerConfig extends ServerConfig { */ boolean isTlsEnabled(); + /** + * The TLS protocol versions to be enabled on the server. + * @return The configured TLS protocol versions. + */ + String[] getTlsProtocolVersion(); + /** * The truststore to be used while talking to segmentstore over TLS.
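A minimal sketch of wiring the new tlsProtocolVersion setting through the Lombok builder of GRPCServerConfigImpl (shown in the change that follows). The port, certificate path, and key path are placeholders, and the omitted builder fields are assumed to tolerate their defaults:

    import io.pravega.controller.server.rpc.grpc.GRPCServerConfig;
    import io.pravega.controller.server.rpc.grpc.impl.GRPCServerConfigImpl;

    public class TlsProtocolConfigExample {
        public static GRPCServerConfig buildConfig() {
            return GRPCServerConfigImpl.builder()
                    .port(9090)
                    .tlsEnabled(true)
                    // Restrict the gRPC listener to these protocol versions; when left
                    // null, the constructor falls back to Config.TLS_PROTOCOL_VERSION.
                    .tlsProtocolVersion(new String[]{"TLSv1.2", "TLSv1.3"})
                    .tlsCertFile("/etc/pravega/tls/server-cert.crt")
                    .tlsKeyFile("/etc/pravega/tls/server-key.key")
                    .build();
        }
    }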
* diff --git a/controller/src/main/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImpl.java b/controller/src/main/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImpl.java index 67952c3a3ff..1c01501ec38 100644 --- a/controller/src/main/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImpl.java +++ b/controller/src/main/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImpl.java @@ -20,9 +20,12 @@ import io.pravega.auth.AuthPluginConfig; import io.pravega.common.Exceptions; import io.pravega.controller.server.rpc.grpc.GRPCServerConfig; + +import java.util.Arrays; import java.util.Optional; import java.util.Properties; +import io.pravega.controller.util.Config; import lombok.Builder; import lombok.Data; @@ -37,6 +40,7 @@ public class GRPCServerConfigImpl implements GRPCServerConfig { private final boolean authorizationEnabled; private final String userPasswordFile; private final boolean tlsEnabled; + private final String[] tlsProtocolVersion; private final String tlsCertFile; private final String tlsKeyFile; private final String tokenSigningKey; @@ -48,7 +52,7 @@ public class GRPCServerConfigImpl implements GRPCServerConfig { @Builder public GRPCServerConfigImpl(final int port, final String publishedRPCHost, final Integer publishedRPCPort, - boolean authorizationEnabled, String userPasswordFile, boolean tlsEnabled, + boolean authorizationEnabled, String userPasswordFile, boolean tlsEnabled, String[] tlsProtocolVersion, String tlsCertFile, String tlsKeyFile, String tokenSigningKey, Integer accessTokenTTLInSeconds, boolean isRGWritesWithReadPermEnabled, String tlsTrustStore, @@ -73,6 +77,11 @@ public GRPCServerConfigImpl(final int port, final String publishedRPCHost, final this.authorizationEnabled = authorizationEnabled; this.userPasswordFile = userPasswordFile; this.tlsEnabled = tlsEnabled; + if (tlsProtocolVersion == null) { + this.tlsProtocolVersion = Config.TLS_PROTOCOL_VERSION.toArray(new String[Config.TLS_PROTOCOL_VERSION.size()]); + } else { + this.tlsProtocolVersion = Arrays.copyOf(tlsProtocolVersion, tlsProtocolVersion.length); + } this.tlsCertFile = tlsCertFile; this.tlsKeyFile = tlsKeyFile; this.tlsTrustStore = tlsTrustStore; @@ -108,6 +117,7 @@ public String toString() { // TLS config .append(String.format("tlsEnabled: %b, ", tlsEnabled)) + .append(String.format("tlsProtocolVersion: %s, ", Arrays.toString(tlsProtocolVersion))) .append(String.format("tlsCertFile is %s, ", Strings.isNullOrEmpty(tlsCertFile) ? 
"unspecified" : "specified")) .append(String.format("tlsKeyFile is %s, ", diff --git a/controller/src/main/java/io/pravega/controller/store/InMemoryScope.java b/controller/src/main/java/io/pravega/controller/store/InMemoryScope.java index b1f3c4cf1f0..ba06c5e298a 100644 --- a/controller/src/main/java/io/pravega/controller/store/InMemoryScope.java +++ b/controller/src/main/java/io/pravega/controller/store/InMemoryScope.java @@ -183,7 +183,7 @@ public CompletableFuture, String>> listKeyValueTables(int limi @Synchronized public CompletableFuture getReaderGroupId(String rgName, OperationContext context) { if (this.readerGroupsMap.containsKey(rgName)) { - return CompletableFuture.completedFuture(((InMemoryReaderGroup) this.readerGroupsMap.get(rgName)).getId()); + return CompletableFuture.completedFuture(this.readerGroupsMap.get(rgName).getId()); } return Futures.failedFuture(StoreException.create(StoreException.Type.DATA_NOT_FOUND, "reader group not found in scope.")); } diff --git a/controller/src/main/java/io/pravega/controller/store/PravegaTablesScope.java b/controller/src/main/java/io/pravega/controller/store/PravegaTablesScope.java index 0537af505ba..3f9e7ab48ed 100644 --- a/controller/src/main/java/io/pravega/controller/store/PravegaTablesScope.java +++ b/controller/src/main/java/io/pravega/controller/store/PravegaTablesScope.java @@ -91,7 +91,7 @@ public CompletableFuture createScope(OperationContext context) { // We will first attempt to create the entry for the scope in scopes table. // If scopes table does not exist, we create the scopes table (idempotent) // followed by creating a new entry for this scope with a new unique id. - // We then retrive id from the store (in case someone concurrently created the entry or entry already existed. + // We then retrieve id from the store (in case someone concurrently created the entry or entry already existed. // This unique id is used to create scope specific table with unique id. // If scope entry exists in Scopes table, create the streamsInScope table before throwing DataExists exception return Futures.handleCompose(withCreateTableIfAbsent(() -> storeHelper.addNewEntry( @@ -252,7 +252,7 @@ public CompletableFuture, String>> listStreamsForTag(String ta Preconditions.checkNotNull(context, "Operation context cannot be null"); return getStreamsFromNextTagChunk(tag, continuationToken, context).thenCompose(pair -> { - if (pair.getLeft().isEmpty() && !pair.getRight().contains(LAST_TAG_CHUNK)) { + if (pair.getLeft().isEmpty() && !pair.getRight().endsWith(LAST_TAG_CHUNK)) { return listStreamsForTag(tag, pair.getRight(), executor, context); } else { return CompletableFuture.completedFuture(pair); @@ -269,20 +269,24 @@ public CompletableFuture, String>> listStreamsForTag(String ta * @return A future that returns a List of Streams and the token. */ CompletableFuture, String>> getStreamsFromNextTagChunk(String tag, String token, OperationContext context) { - return getAllStreamTagsInScopeTableNames(context).thenApply( - chunkTableList -> { - if (token.isEmpty()) { - // token is empty, try reading from the first tag table. 
- return chunkTableList.get(0); - } else { - // return next index - return chunkTableList.get(chunkTableList.indexOf(token) + 1); + if (token.endsWith(LAST_TAG_CHUNK)) { + return CompletableFuture.completedFuture(new ImmutablePair<>(Collections.emptyList(), token)); + } else { + return getAllStreamTagsInScopeTableNames(context).thenApply( + chunkTableList -> { + if (token.isEmpty()) { + // token is empty, try reading from the first tag table. + return chunkTableList.get(0); + } else { + // return next index + return chunkTableList.get(chunkTableList.indexOf(token) + 1); + } } - } - ).thenCompose(table -> storeHelper.expectingDataNotFound( - storeHelper.getEntry(table, tag, TagRecord::fromBytes, context.getRequestId()) - .thenApply(ver -> new ImmutablePair<>(new ArrayList<>(ver.getObject().getStreams()), table)), - new ImmutablePair<>(Collections.emptyList(), table))); + ).thenCompose(table -> storeHelper.expectingDataNotFound( + storeHelper.getEntry(table, tag, TagRecord::fromBytes, context.getRequestId()) + .thenApply(ver -> new ImmutablePair<>(new ArrayList<>(ver.getObject().getStreams()), table)), + new ImmutablePair<>(Collections.emptyList(), table))); + } } @Override @@ -319,25 +323,27 @@ public CompletableFuture addStreamToScope(String stream, OperationContext public CompletableFuture addTagsUnderScope(String stream, Set tags, OperationContext context) { return getAllStreamTagsInScopeTableNames(stream, context) .thenCompose(table -> Futures.allOf(tags.stream() - .parallel() - .map(key -> storeHelper.getAndUpdateEntry(table, key, - e -> appendStreamToEntry(stream, e), - e -> false, // no need of cleanup - context.getRequestId())) - .collect(Collectors.toList()))); - } - - private TableSegmentEntry appendStreamToEntry(String appendValue, TableSegmentEntry entry) { - String key = entry.getKey().getKey().toString(StandardCharsets.UTF_8); + .parallel() + .map(key -> { + log.debug(context.getRequestId(), "Adding stream {} to tag {} index on table {}", stream, key, table); + return storeHelper.getAndUpdateEntry(table, key, + e -> appendStreamToEntry(key, stream, e), + e -> false, // no need of cleanup + context.getRequestId()); + }) + .collect(Collectors.toList()))); + } + + private TableSegmentEntry appendStreamToEntry(String tag, String appendValue, TableSegmentEntry entry) { byte[] array = storeHelper.getArray(entry.getValue()); byte[] updatedBytes; if (array.length == 0) { - updatedBytes = TagRecord.builder().tagName(key).stream(appendValue).build().toBytes(); + updatedBytes = TagRecord.builder().tagName(tag).stream(appendValue).build().toBytes(); } else { TagRecord record = TagRecord.fromBytes(array); updatedBytes = record.toBuilder().stream(appendValue).build().toBytes(); } - return TableSegmentEntry.versioned(key.getBytes(StandardCharsets.UTF_8), updatedBytes, + return TableSegmentEntry.versioned(tag.getBytes(StandardCharsets.UTF_8), updatedBytes, entry.getKey().getVersion().getSegmentVersion()); } @@ -352,15 +358,17 @@ public CompletableFuture removeTagsUnderScope(String stream, Set t return getAllStreamTagsInScopeTableNames(stream, context) .thenCompose(table -> Futures.allOf(tags.stream() .parallel() - .map(key -> storeHelper.getAndUpdateEntry(table, key, - e -> removeStreamFromEntry(stream, e), - this::isEmptyTagRecord, - context.getRequestId())) + .map(key -> { + log.debug(context.getRequestId(), "Removing stream {} from tag {} index on table {}", stream, key, table); + return storeHelper.getAndUpdateEntry(table, key, + e -> removeStreamFromEntry(key, stream, e), + 
this::isEmptyTagRecord, + context.getRequestId()); + }) .collect(Collectors.toList()))); } - private TableSegmentEntry removeStreamFromEntry(String removeValue, TableSegmentEntry currentEntry) { - String k = currentEntry.getKey().getKey().toString(StandardCharsets.UTF_8); + private TableSegmentEntry removeStreamFromEntry(String tag, String removeValue, TableSegmentEntry currentEntry) { byte[] array = storeHelper.getArray(currentEntry.getValue()); byte[] updatedBytes = new byte[0]; if (array.length != 0) { @@ -369,14 +377,14 @@ private TableSegmentEntry removeStreamFromEntry(String removeValue, TableSegment TagRecord updatedRecord = record.toBuilder().removeStream(removeValue).build(); updatedBytes = updatedRecord.toBytes(); } - return TableSegmentEntry.versioned(k.getBytes(StandardCharsets.UTF_8), + return TableSegmentEntry.versioned(tag.getBytes(StandardCharsets.UTF_8), updatedBytes, currentEntry.getKey().getVersion().getSegmentVersion()); } private boolean isEmptyTagRecord(TableSegmentEntry entry) { byte[] array = storeHelper.getArray(entry.getValue()); - return array.length != 0 && TagRecord.fromBytes(array).getStreams().isEmpty(); + return array.length == 0 || TagRecord.fromBytes(array).getStreams().isEmpty(); } public CompletableFuture removeStreamFromScope(String stream, OperationContext context) { @@ -438,6 +446,7 @@ public CompletableFuture removeReaderGroupFromScope(String readerGroup, Op .thenCompose(tableName -> Futures.toVoid(storeHelper.removeEntry(tableName, readerGroup, context.getRequestId()))); } + @Override public CompletableFuture getReaderGroupId(String readerGroupName, OperationContext context) { Preconditions.checkNotNull(context, "Operation context cannot be null"); return getReaderGroupsInScopeTableName(context) @@ -476,6 +485,6 @@ private CompletableFuture, String>> readAll(int limit, String } token.set(Base64.getEncoder().encodeToString(result.getKey().array())); return new ImmutablePair<>(taken, token.get()); - }), DATA_NOT_FOUND_PREDICATE, new ImmutablePair<>(Collections.emptyList(), null)); + }), DATA_NOT_FOUND_PREDICATE, new ImmutablePair<>(Collections.emptyList(), token.get())); } } diff --git a/controller/src/main/java/io/pravega/controller/store/PravegaTablesStoreHelper.java b/controller/src/main/java/io/pravega/controller/store/PravegaTablesStoreHelper.java index 7dfd22e3ad7..331672afad1 100644 --- a/controller/src/main/java/io/pravega/controller/store/PravegaTablesStoreHelper.java +++ b/controller/src/main/java/io/pravega/controller/store/PravegaTablesStoreHelper.java @@ -97,7 +97,7 @@ public class PravegaTablesStoreHelper { private final int numOfRetries; @lombok.Data - private static class TableCacheKey implements Cache.CacheKey { + private static class TableCacheKey implements Cache.CacheKey { private final String table; private final String key; } @@ -124,7 +124,7 @@ public PravegaTablesStoreHelper(SegmentHelper segmentHelper, GrpcAuthHelper auth * @param Type of object to deserialize the response into. */ private void putInCache(String table, String key, VersionedMetadata value, long time) { - TableCacheKey cacheKey = new TableCacheKey<>(table, key); + TableCacheKey cacheKey = new TableCacheKey(table, key); cache.put(cacheKey, value, time); } @@ -138,7 +138,7 @@ private void putInCache(String table, String key, VersionedMetadata valu * @return Returns a completableFuture which when completed will have the deserialized value with its store key version. 
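The tag-index updates above follow an optimistic read-modify-write pattern: the entry is read together with its segment version, mutated (appendStreamToEntry / removeStreamFromEntry), and written back conditioned on that version, so concurrent writers conflict and retry instead of silently losing updates. A self-contained toy sketch of that flow; every name in it is illustrative, not a Pravega API:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.UnaryOperator;

    // A toy versioned key-value map illustrating the compare-and-swap flow that
    // getAndUpdateEntry relies on: read, mutate, then write conditioned on the
    // version (here, the object reference) observed at read time.
    public class OptimisticUpdateSketch {
        static final class Versioned {
            final byte[] value;
            final long version;
            Versioned(byte[] value, long version) { this.value = value; this.version = version; }
        }

        private final ConcurrentHashMap<String, Versioned> table = new ConcurrentHashMap<>();

        boolean getAndUpdate(String key, UnaryOperator<byte[]> mutator) {
            Versioned current = table.getOrDefault(key, new Versioned(new byte[0], -1L));
            Versioned updated = new Versioned(mutator.apply(current.value), current.version + 1);
            // The write succeeds only if nobody changed the entry since we read it;
            // a real caller would retry on failure, as the store helper does.
            if (current.version < 0) {
                return table.putIfAbsent(key, updated) == null;
            }
            return table.replace(key, current, updated);
        }
    }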
*/ private VersionedMetadata getCachedData(String table, String key, long time, long requestId) { - TableCacheKey cacheKey = new TableCacheKey<>(table, key); + TableCacheKey cacheKey = new TableCacheKey(table, key); VersionedMetadata cachedData = cache.getCachedData(cacheKey, time); if (cachedData == null) { return null; @@ -153,7 +153,7 @@ private VersionedMetadata getCachedData(String table, String key, long ti * @param key key to invalidate */ public void invalidateCache(String table, String key) { - cache.invalidateCache(new TableCacheKey<>(table, key)); + cache.invalidateCache(new TableCacheKey(table, key)); } @SuppressWarnings("unchecked") @@ -170,10 +170,21 @@ private VersionedMetadata getVersionedMetadata(VersionedMetadata v) { * @return CompletableFuture which when completed will indicate successful creation of table. */ public CompletableFuture createTable(String tableName, long requestId) { + return this.createTable(tableName, requestId, 0); + } + + /** + * Method to create a new Table. If the table already exists, segment helper responds with success. + * @param tableName table name + * @param requestId request id + * @param rolloverSizeBytes rollover size of the table segment + * @return CompletableFuture which when completed will indicate successful creation of table. + */ + public CompletableFuture createTable(String tableName, long requestId, long rolloverSizeBytes) { log.debug(requestId, "create table called for table: {}", tableName); return Futures.toVoid(withRetries(() -> segmentHelper.createTableSegment(tableName, authToken.get(), requestId, - false, 0), + false, 0, rolloverSizeBytes), () -> String.format("create table: %s", tableName), requestId)) .whenCompleteAsync((r, e) -> { if (e != null) { @@ -255,6 +266,26 @@ private CompletableFuture conditionalDeleteOfKey(String tableName, long re return expectingWriteConflict(removeEntry(tableName, key, new Version.LongVersion(keyVersion.getSegmentVersion()), requestId), null); } + /** + * Method to get the number of entries in a Table Segment. + * @param tableName Name of the Table Segment for which we want the entry count + * @param requestId request id + * @return CompletableFuture which when completed will return number of entries in table. + */ + public CompletableFuture getEntryCount(String tableName, long requestId) { + log.debug(requestId, "get entry count called for table: {}", tableName); + + return withRetries(() -> segmentHelper.getTableSegmentEntryCount(tableName, authToken.get(), requestId), + () -> String.format("GetInfo table: %s", tableName), requestId) + .whenCompleteAsync((r, e) -> { + if (e != null) { + log.warn(requestId, "Get Table Segment info for table {} threw exception", tableName, e); + } else { + log.debug(requestId, "Get Table Segment info for table {} completed successfully", tableName); + } + }, executor); + } + /** * Method to get and conditionally update value for the specified key. * @@ -284,6 +315,7 @@ public CompletableFuture getAndUpdateEntry(final String tableName, final S return segmentHelper.updateTableEntries(tableName, Collections.singletonList(updatedEntry), authToken.get(), requestId) .thenCompose(keyVersions -> { if (shouldAttemptCleanup) { + log.debug(requestId, "Attempting conditional delete of table key {} on table {}", tableKey, tableName); // attempt a conditional delete of the entry since there are zero entries.
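A short usage sketch for the two helper additions above: the createTable overload that forwards a rollover size to the segment helper, and getEntryCount. The table name and request id are placeholders, and getEntryCount is assumed to complete with a Long, as the underlying getTableSegmentEntryCount call suggests:

    import io.pravega.controller.store.PravegaTablesStoreHelper;
    import java.util.concurrent.CompletableFuture;

    public class TableEntryCountExample {
        // Idempotently creates a table segment that rolls over at ~1 GB, then
        // reads back the number of entries it currently holds.
        static CompletableFuture<Long> createAndCount(PravegaTablesStoreHelper storeHelper) {
            long requestId = 42L;
            String tableName = "myScope/streamsInScope"; // placeholder name
            return storeHelper.createTable(tableName, requestId, 1024L * 1024 * 1024)
                    .thenCompose(v -> storeHelper.getEntryCount(tableName, requestId));
        }
    }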
return conditionalDeleteOfKey(tableName, requestId, tableKey, keyVersions.get(0)); } else { diff --git a/controller/src/main/java/io/pravega/controller/store/ZKStoreHelper.java b/controller/src/main/java/io/pravega/controller/store/ZKStoreHelper.java index 37ec62213bb..598f7482cfb 100644 --- a/controller/src/main/java/io/pravega/controller/store/ZKStoreHelper.java +++ b/controller/src/main/java/io/pravega/controller/store/ZKStoreHelper.java @@ -19,6 +19,7 @@ import java.util.List; import java.util.concurrent.CompletableFuture; import java.util.concurrent.Executor; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.Consumer; import java.util.function.Function; @@ -47,11 +48,22 @@ public class ZKStoreHelper { @VisibleForTesting @Getter(AccessLevel.PUBLIC) private final Cache cache; + private final AtomicBoolean isZKConnected = new AtomicBoolean(false); public ZKStoreHelper(final CuratorFramework cf, Executor executor) { client = cf; this.executor = executor; this.cache = new Cache(); + this.isZKConnected.set(client.getZookeeperClient().isConnected()); + //Listen for any zookeeper connection state changes + client.getConnectionStateListenable().addListener( + (curatorClient, newState) -> { + this.isZKConnected.set(newState.isConnected()); + }); + } + + public boolean isZKConnected() { + return isZKConnected.get(); } /** @@ -442,7 +454,7 @@ public CompletableFuture> getCachedData(String path, St } } - @SuppressWarnings("unchecked") + @SuppressWarnings({ "unchecked", "rawtypes" }) private VersionedMetadata getVersionedMetadata(VersionedMetadata v) { // Since cache is untyped and holds all types of deserialized objects, we typecast it to the requested object type // based on the type in caller's supplied Deserialization function. @@ -479,6 +491,7 @@ public int hashCode() { } @Override + @SuppressWarnings("rawtypes") public boolean equals(Object obj) { return obj instanceof ZkCacheKey && path.equals(((ZkCacheKey) obj).path) diff --git a/controller/src/main/java/io/pravega/controller/store/checkpoint/CheckpointStore.java b/controller/src/main/java/io/pravega/controller/store/checkpoint/CheckpointStore.java index 8291c32ae13..9f0cb2ccd33 100644 --- a/controller/src/main/java/io/pravega/controller/store/checkpoint/CheckpointStore.java +++ b/controller/src/main/java/io/pravega/controller/store/checkpoint/CheckpointStore.java @@ -134,4 +134,11 @@ void removeReader(final String process, final String readerGroup, final String r throws CheckpointStoreException; Set getProcesses() throws CheckpointStoreException; + + /** + * Get the health status. + * + * @return true if the store is healthy, false otherwise. + */ + public boolean isHealthy(); } diff --git a/controller/src/main/java/io/pravega/controller/store/checkpoint/InMemoryCheckpointStore.java b/controller/src/main/java/io/pravega/controller/store/checkpoint/InMemoryCheckpointStore.java index caf2ccba380..fe1a8f6e71f 100644 --- a/controller/src/main/java/io/pravega/controller/store/checkpoint/InMemoryCheckpointStore.java +++ b/controller/src/main/java/io/pravega/controller/store/checkpoint/InMemoryCheckpointStore.java @@ -183,6 +183,16 @@ public Set getProcesses() throws CheckpointStoreException { return map.keySet().stream().map(this::getProcess).collect(Collectors.toSet()); } + /** + * Get the health status. + * + * @return true by default.
+ */ + @Override + public boolean isHealthy() { + return true; + } + private String getKey(final String process, final String readerGroup) { return process + SEPARATOR + readerGroup; } diff --git a/controller/src/main/java/io/pravega/controller/store/checkpoint/ZKCheckpointStore.java b/controller/src/main/java/io/pravega/controller/store/checkpoint/ZKCheckpointStore.java index ba48054b0c7..c996b8e404c 100644 --- a/controller/src/main/java/io/pravega/controller/store/checkpoint/ZKCheckpointStore.java +++ b/controller/src/main/java/io/pravega/controller/store/checkpoint/ZKCheckpointStore.java @@ -28,6 +28,7 @@ import java.util.List; import java.util.Map; import java.util.Set; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.Function; import java.util.stream.Collectors; import lombok.AllArgsConstructor; @@ -42,12 +43,13 @@ * Zookeeper based checkpoint store. */ @Slf4j -class ZKCheckpointStore implements CheckpointStore { +public class ZKCheckpointStore implements CheckpointStore { private static final String ROOT = "eventProcessors"; private final CuratorFramework client; private final Serializer positionSerializer; private final JavaSerializer groupDataSerializer; + private final AtomicBoolean isZKConnected = new AtomicBoolean(false); ZKCheckpointStore(CuratorFramework client) { this.client = client; @@ -63,6 +65,12 @@ public Position deserialize(ByteBuffer serializedValue) { } }; this.groupDataSerializer = new JavaSerializer<>(); + this.isZKConnected.set(client.getZookeeperClient().isConnected()); + //Listen for any zookeeper connection state changes + client.getConnectionStateListenable().addListener( + (curatorClient, newState) -> { + this.isZKConnected.set(newState.isConnected()); + }); } @Data @@ -77,6 +85,16 @@ enum State { private final List readerIds; } + /** + * Get the zookeeper health status. + * + * @return true if zookeeper is connected. 
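Both ZKStoreHelper and ZKCheckpointStore track liveness the same way: seed an AtomicBoolean from the current Curator connection state, then keep it fresh with a ConnectionStateListener. A standalone sketch of that pattern (the connection string and retry policy are placeholders):

    import java.util.concurrent.atomic.AtomicBoolean;
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class ZkHealthTracker {
        private final AtomicBoolean connected = new AtomicBoolean(false);

        public ZkHealthTracker(CuratorFramework client) {
            // Seed with the current state, then track every state transition.
            this.connected.set(client.getZookeeperClient().isConnected());
            client.getConnectionStateListenable().addListener(
                    (curatorClient, newState) -> connected.set(newState.isConnected()));
        }

        public boolean isHealthy() {
            return connected.get();
        }

        public static void main(String[] args) {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "localhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();
            System.out.println("healthy: " + new ZkHealthTracker(client).isHealthy());
            client.close();
        }
    }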
+ */ + @Override + public boolean isHealthy() { + return isZKConnected.get(); + } + @Override public void setPosition(String process, String readerGroup, String readerId, Position position) throws CheckpointStoreException { updateNode(getReaderPath(process, readerGroup, readerId), positionSerializer.serialize(position).array()); diff --git a/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableBase.java b/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableBase.java index 7fb436a8a2b..fbbac0d86ff 100644 --- a/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableBase.java +++ b/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableBase.java @@ -193,6 +193,7 @@ public CompletableFuture> getAllSegmentIds(OperationContext context) { } // region state + @Override abstract public CompletableFuture getId(OperationContext context); abstract CompletableFuture createStateIfAbsent(final KVTStateRecord state, OperationContext context); diff --git a/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableMetadataStore.java b/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableMetadataStore.java index 5a4d04f4255..a3a5fb966e1 100644 --- a/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableMetadataStore.java +++ b/controller/src/main/java/io/pravega/controller/store/kvtable/AbstractKVTableMetadataStore.java @@ -106,6 +106,7 @@ public KVTOperationContext createContext(String scopeName, String name, long req getKVTable(scopeName, name, null), requestId); } + @Override public KeyValueTable getKVTable(String scope, final String name, OperationContext context) { if (context instanceof KVTOperationContext) { return ((KVTOperationContext) context).getKvTable(); diff --git a/controller/src/main/java/io/pravega/controller/store/kvtable/KVTableMetadataStore.java b/controller/src/main/java/io/pravega/controller/store/kvtable/KVTableMetadataStore.java index 5567cda6b00..b001cc3478f 100644 --- a/controller/src/main/java/io/pravega/controller/store/kvtable/KVTableMetadataStore.java +++ b/controller/src/main/java/io/pravega/controller/store/kvtable/KVTableMetadataStore.java @@ -85,7 +85,7 @@ CompletableFuture createEntryForKVTable(final String scopeName, * Creates a new stream with the given name and configuration. 
* * @param scopeName scope name - * @param kvtName stream name + * @param kvtName stream name * @param configuration stream configuration * @param createTimestamp stream creation timestamp * @param context operation context diff --git a/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStore.java b/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStore.java index 622f6051407..b10c03af20f 100644 --- a/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStore.java +++ b/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStore.java @@ -65,6 +65,13 @@ public class PravegaTablesKVTMetadataStore extends AbstractKVTableMetadataStore this.executor = executor; } + @VisibleForTesting + public PravegaTablesKVTMetadataStore(CuratorFramework curatorClient, ScheduledExecutorService executor, PravegaTablesStoreHelper storeHelper) { + super(new ZKHostIndex(curatorClient, "/hostRequestIndex", executor)); + this.storeHelper = storeHelper; + this.executor = executor; + } + @Override PravegaTablesKVTable newKeyValueTable(final String scope, final String name) { log.debug("Fetching KV Table from PravegaTables store {}/{}", scope, name); @@ -82,6 +89,7 @@ public CompletableFuture deleteFromScope(final String scope, executor); } + @Override CompletableFuture recordLastKVTableSegment(final String scope, final String kvtable, int lastActiveSegment, OperationContext ctx, final Executor executor) { OperationContext context = getOperationContext(ctx); diff --git a/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTable.java b/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTable.java index a0be24518cf..5e7534f5be3 100644 --- a/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTable.java +++ b/controller/src/main/java/io/pravega/controller/store/kvtable/PravegaTablesKVTable.java @@ -71,6 +71,7 @@ class PravegaTablesKVTable extends AbstractKVTableBase { this.idRef = new AtomicReference<>(null); } + @Override public CompletableFuture getId(OperationContext context) { Preconditions.checkNotNull(context, "context cannot be null"); String id = idRef.get(); diff --git a/controller/src/main/java/io/pravega/controller/store/kvtable/records/KVTSegmentRecord.java b/controller/src/main/java/io/pravega/controller/store/kvtable/records/KVTSegmentRecord.java index d7dc2ef9570..b6b9cc6a99c 100644 --- a/controller/src/main/java/io/pravega/controller/store/kvtable/records/KVTSegmentRecord.java +++ b/controller/src/main/java/io/pravega/controller/store/kvtable/records/KVTSegmentRecord.java @@ -46,6 +46,7 @@ public class KVTSegmentRecord implements SegmentRecord { public static class KVTSegmentRecordBuilder implements ObjectBuilder { } + @Override public long segmentId() { return NameUtils.computeSegmentId(segmentNumber, creationEpoch); } diff --git a/controller/src/main/java/io/pravega/controller/store/stream/AbstractStreamMetadataStore.java b/controller/src/main/java/io/pravega/controller/store/stream/AbstractStreamMetadataStore.java index 21a64a81e0e..af24c2ed232 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/AbstractStreamMetadataStore.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/AbstractStreamMetadataStore.java @@ -950,7 +950,7 @@ public CompletableFuture getEpoch(final String scope, } @Override - public CompletableFuture> startCommitTransactions( + public 
CompletableFuture, List>> startCommitTransactions( String scope, String stream, int limit, OperationContext ctx, ScheduledExecutorService executor) { OperationContext context = getOperationContext(ctx); return Futures.completeOn(getStream(scope, stream, context).startCommittingTransactions(limit, context), executor); @@ -964,23 +964,16 @@ public CompletableFuture> getVer } @Override - public CompletableFuture recordCommitOffsets(String scope, String stream, UUID txnId, Map commitOffsets, - OperationContext ctx, ScheduledExecutorService executor) { - OperationContext context = getOperationContext(ctx); - return Futures.completeOn(getStream(scope, stream, context).recordCommitOffsets(txnId, commitOffsets, context), - executor); - } - - @Override - public CompletableFuture completeCommitTransactions(String scope, String stream, + public CompletableFuture completeCommitTransactions(String scope, String stream, VersionedMetadata record, - OperationContext ctx, ScheduledExecutorService executor) { + OperationContext ctx, ScheduledExecutorService executor, + Map writerMarks) { OperationContext context = getOperationContext(ctx); Stream streamObj = getStream(scope, stream, context); - return Futures.completeOn(streamObj.completeCommittingTransactions(record, context), executor) + return Futures.completeOn(streamObj.completeCommittingTransactions(record, context, writerMarks), executor) .thenAcceptAsync(result -> { streamObj.getNumberOfOngoingTransactions(context).thenAccept(count -> - TransactionMetrics.reportOpenTransactions(scope, stream, count)); + TransactionMetrics.reportOpenTransactions(scope, stream, count.intValue())); }, executor); } diff --git a/controller/src/main/java/io/pravega/controller/store/stream/InMemoryStream.java b/controller/src/main/java/io/pravega/controller/store/stream/InMemoryStream.java index a3b610ece54..179f8715015 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/InMemoryStream.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/InMemoryStream.java @@ -40,6 +40,7 @@ import io.pravega.controller.store.stream.records.WriterMark; import io.pravega.controller.store.stream.records.StreamSubscriber; import io.pravega.controller.util.Config; +import io.pravega.shared.NameUtils; import javax.annotation.concurrent.GuardedBy; import java.time.Duration; @@ -58,6 +59,7 @@ import java.util.concurrent.CompletableFuture; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentSkipListSet; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; @@ -144,9 +146,9 @@ public class InMemoryStream extends PersistentStreamBase { } @Override - public CompletableFuture getNumberOfOngoingTransactions(OperationContext context) { + public CompletableFuture getNumberOfOngoingTransactions(OperationContext context) { synchronized (txnsLock) { - return CompletableFuture.completedFuture(activeTxns.size()); + return CompletableFuture.completedFuture((long) activeTxns.size()); } } @@ -755,24 +757,24 @@ public CompletableFuture> getActiveTxns(OperationCont } @Override - CompletableFuture>> getOrderedCommittingTxnInLowestEpoch(int limit, OperationContext context) { + CompletableFuture> getOrderedCommittingTxnInLowestEpoch(int limit, OperationContext context) { List toPurge = new ArrayList<>(); - Map committing = new HashMap<>(); + ConcurrentSkipListSet committing = new 
ConcurrentSkipListSet<>(Comparator.comparingLong(VersionedTransactionData::getCommitOrder)); AtomicInteger smallestEpoch = new AtomicInteger(Integer.MAX_VALUE); // take smallest epoch and collect transactions from smallest epoch. transactionCommitOrder .forEach((order, txId) -> { int epoch = RecordHelper.getTransactionEpoch(txId); - ActiveTxnRecord record; + VersionedMetadata record; synchronized (txnsLock) { - record = activeTxns.containsKey(txId) ? activeTxns.get(txId).getObject() : - ActiveTxnRecord.EMPTY; + record = activeTxns.containsKey(txId) ? activeTxns.get(txId) : + new VersionedMetadata<>(ActiveTxnRecord.EMPTY, null); } - switch (record.getTxnStatus()) { + switch (record.getObject().getTxnStatus()) { case COMMITTING: - if (record.getCommitOrder() == order) { + if (record.getObject().getCommitOrder() == order) { // if entry matches record's position then include it - committing.put(txId, record); + committing.add(convertToVersionedMetadata(txId, record.getObject(), record.getVersion())); if (smallestEpoch.get() > epoch) { smallestEpoch.set(epoch); } @@ -796,14 +798,21 @@ record = activeTxns.containsKey(txId) ? activeTxns.get(txId).getObject() : // take smallest epoch from committing transactions. order transactions in this epoch by // ordered position - List> list = committing.entrySet().stream().filter(x -> RecordHelper.getTransactionEpoch(x.getKey()) == smallestEpoch.get()) - .sorted(Comparator.comparing(x -> x.getValue().getCommitOrder())) + List list = committing.stream().filter(x -> RecordHelper.getTransactionEpoch(x.getId()) == smallestEpoch.get()) + .sorted(Comparator.comparing(VersionedTransactionData::getCommitOrder)) .limit(limit) .collect(Collectors.toList()); return CompletableFuture.completedFuture(list); } + private VersionedTransactionData convertToVersionedMetadata(UUID id, ActiveTxnRecord record, Version version) { + int epoch = NameUtils.getEpoch(id); + return new VersionedTransactionData(epoch, id, version, record.getTxnStatus(), record.getLeaseExpiryTime(), + record.getMaxExecutionExpiryTime(), record.getWriterId(), record.getCommitTime(), record.getCommitOrder(), + record.getCommitOffsets()); + } + @Override CompletableFuture> getAllOrderedCommittingTxns(OperationContext context) { synchronized (txnsLock) { diff --git a/controller/src/main/java/io/pravega/controller/store/stream/PersistentStreamBase.java b/controller/src/main/java/io/pravega/controller/store/stream/PersistentStreamBase.java index 9f7352874a8..fc718abb38f 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/PersistentStreamBase.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/PersistentStreamBase.java @@ -20,7 +20,6 @@ import io.pravega.controller.store.Version; import io.pravega.controller.store.VersionedMetadata; import com.google.common.base.Preconditions; -import com.google.common.base.Strings; import com.google.common.collect.ImmutableList; import com.google.common.collect.ImmutableMap; import com.google.common.collect.ImmutableSet; @@ -57,6 +56,7 @@ import java.util.HashMap; import java.util.HashSet; import java.util.List; +import java.util.LinkedList; import java.util.LongSummaryStatistics; import java.util.Map; import java.util.Optional; @@ -65,7 +65,6 @@ import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletionException; import java.util.concurrent.CompletionStage; -import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentSkipListSet; import java.util.concurrent.Executor; import 
java.util.concurrent.atomic.AtomicInteger; @@ -1533,7 +1532,7 @@ public CompletableFuture pingTransaction(final Version final TxnStatus status = txnData.getStatus(); final String writerId = txnData.getWriterId(); final long commitTime = txnData.getCommitTime(); - final long position = txnData.getPosition(); + final long position = txnData.getCommitOrder(); final ImmutableMap commitOffsets = txnData.getCommitOffsets(); final ActiveTxnRecord newData = new ActiveTxnRecord(creationTime, System.currentTimeMillis() + lease, maxExecutionExpiryTime, status, writerId, commitTime, position, commitOffsets); @@ -1732,49 +1731,25 @@ public CompletableFuture recordCommitOffsets(final UUID txnId, final Map generateMarksForTransactions(CommittingTransactionsRecord committingTransactionsRecord, - OperationContext context) { + CompletableFuture generateMarksForTransactions(OperationContext context, + Map writerMarks) { Preconditions.checkNotNull(context, "Operation context cannot be null"); - val getTransactionsFuture = Futures.allOfWithResults( - committingTransactionsRecord.getTransactionsToCommit().stream().map(txId -> { - int epoch = RecordHelper.getTransactionEpoch(txId); - // Ignore data not found exceptions. DataNotFound Exceptions can be thrown because transaction record no longer - // exists and this is an idempotent case. DataNotFound can also be thrown because writer's mark was deleted - // as we attempted to update an existing record. Note: Delete can be triggered by writer explicitly calling - // removeWriter api. - return Futures.exceptionallyExpecting(getActiveTx(epoch, txId, context), DATA_NOT_FOUND_PREDICATE, null); - }).collect(Collectors.toList())); + Preconditions.checkArgument(writerMarks != null); - return getTransactionsFuture - .thenCompose(txnRecords -> { - // Filter transactions for which either writer id is not present of time/position is not reported - // Then group transactions by writer ids - val groupedByWriters = txnRecords.stream().filter(x -> - x != null && !Strings.isNullOrEmpty(x.getObject().getWriterId()) && - x.getObject().getCommitTime() >= 0L && !x.getObject().getCommitOffsets().isEmpty()) - .collect(Collectors.groupingBy(x -> x.getObject().getWriterId())); - - // For each writerId we will take the transaction with the time and position pair (which is to take - // max of all transactions for the said writer). - // Note: if multiple transactions from same writer have same time, we will take any one arbitrarily and - // use its position for watermarks. Other positions and times would be ignored. - val noteTimeFutures = groupedByWriters - .entrySet().stream().map(groupEntry -> { - ActiveTxnRecord latest = groupEntry.getValue().stream() - .max(Comparator.comparingLong(x -> x.getObject().getCommitTime())) - .get().getObject(); - return Futures.exceptionallyExpecting( - noteWriterMark(latest.getWriterId(), latest.getCommitTime(), latest.getCommitOffsets(), - context), - DATA_NOT_FOUND_PREDICATE, null); - }).collect(Collectors.toList()); - return Futures.allOf(noteTimeFutures); - }); + // For each writerId we will take the transaction with the time and position pair (which is to take + // max of all transactions for the said writer). + // Note: if multiple transactions from same writer have same time, we will take any one arbitrarily and + // use its position for watermarks. Other positions and times would be ignored. 
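The writerMarks map consumed above is assembled by the commit workflow before this method runs. Below is a minimal, JDK-only sketch of that per-writer aggregation; the Mark class is a local stand-in for the Controller's TxnWriterMark, not the real type, and the merge rule simply encodes the "take the latest commit time, break ties arbitrarily" policy described in the comment.

    import java.util.Map;
    import java.util.UUID;

    final class WriterMarkAggregation {
        // Stand-in for TxnWriterMark: commit time, segment->offset position, txn id.
        static final class Mark {
            final long timestamp;
            final Map<Long, Long> position;
            final UUID txnId;

            Mark(long timestamp, Map<Long, Long> position, UUID txnId) {
                this.timestamp = timestamp;
                this.position = position;
                this.txnId = txnId;
            }
        }

        // Keep, per writer, only the mark with the greatest commit time; ties are
        // broken arbitrarily, matching the comment above.
        static void record(Map<String, Mark> marks, String writerId, Mark candidate) {
            marks.merge(writerId, candidate,
                    (current, next) -> next.timestamp >= current.timestamp ? next : current);
        }
    }

With the map reduced this way, noteWriterMark is invoked at most once per writer, which keeps the watermarking path cheap and idempotent.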
+ val noteTimeFutures = writerMarks + .entrySet() + .stream().map(x -> Futures.exceptionallyExpecting( + noteWriterMark(x.getKey(), x.getValue().getTimestamp(), x.getValue().getPosition(), context), + DATA_NOT_FOUND_PREDICATE, null)).collect(Collectors.toList()); + return Futures.allOf(noteTimeFutures); } @VisibleForTesting @@ -1955,7 +1930,7 @@ public CompletableFuture deleteStreamCutBefore(StreamCutReferenceRecord re } @Override - public CompletableFuture> startCommittingTransactions( + public CompletableFuture, List>> startCommittingTransactions( int limit, OperationContext context) { Preconditions.checkNotNull(context, "Operation context cannot be null"); return getVersionedCommitTransactionsRecord(context) @@ -1964,14 +1939,16 @@ public CompletableFuture> startC return getOrderedCommittingTxnInLowestEpoch(limit, context) .thenCompose(list -> { if (list.isEmpty()) { - return CompletableFuture.completedFuture(versioned); + List emptyTransactionData = new LinkedList<>(); + return CompletableFuture.completedFuture(new SimpleEntry<>(versioned, emptyTransactionData)); } else { - Map.Entry firstEntry = list.get(0); - ImmutableList.Builder txIdList = ImmutableList.builder(); - list.forEach(x -> txIdList.add(x.getKey())); - List positions = list.stream().map(x -> x.getValue().getCommitOrder()) + ImmutableList.Builder txIdList = ImmutableList.builder(); + list.forEach(x -> { + txIdList.add(x.getId()); + }); + List positions = list.stream().map(VersionedTransactionData::getCommitOrder) .collect(Collectors.toList()); - int epoch = RecordHelper.getTransactionEpoch(firstEntry.getKey()); + int epoch = RecordHelper.getTransactionEpoch(list.get(0).getId()); CommittingTransactionsRecord record = new CommittingTransactionsRecord(epoch, txIdList.build()); return updateCommittingTxnRecord(new VersionedMetadata<>(record, versioned.getVersion()), @@ -1979,11 +1956,15 @@ public CompletableFuture> startC // now that we have included transactions from positions for commit, we // can safely remove the position references in the orderer. .thenCompose(version -> removeTxnsFromCommitOrder(positions, context) - .thenApply(v -> new VersionedMetadata<>(record, version))); + .thenApply(v -> new SimpleEntry<>( + new VersionedMetadata<>(record, version), list))); } }); } else { - return CompletableFuture.completedFuture(versioned); + List transactionsToCommit = versioned.getObject().getTransactionsToCommit() + .stream().map(UUID::toString).collect(Collectors.toList()); + return getVersionedTransactionRecords(versioned.getObject().getEpoch(), transactionsToCommit, context) + .thenApply(x -> new SimpleEntry<>(versioned, x)); } }); } @@ -1998,12 +1979,13 @@ public CompletableFuture> getVer @Override public CompletableFuture completeCommittingTransactions(VersionedMetadata record, - OperationContext context) { + OperationContext context, + Map writerMarks) { Preconditions.checkNotNull(context, "operation context cannot be null"); // Chain all transaction commit futures one after the other. This will ensure that order of commit // is honoured and is based on the order in the list.
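The ordering comment above is the heart of the commit loop that follows it. A self-contained sketch of the pattern, assuming nothing beyond the JDK: each commit is composed onto the previous future, so transaction i+1 cannot start committing before transaction i has finished.

    import java.util.Arrays;
    import java.util.List;
    import java.util.UUID;
    import java.util.concurrent.CompletableFuture;
    import java.util.function.Function;

    final class OrderedCommitChain {
        // Chains one future per transaction; the returned future completes only
        // after every commit has run, strictly in list order.
        static CompletableFuture<Void> commitInOrder(List<UUID> txnIds,
                Function<UUID, CompletableFuture<Void>> commitOne) {
            CompletableFuture<Void> future = CompletableFuture.completedFuture(null);
            for (UUID txnId : txnIds) {
                future = future.thenCompose(v -> commitOne.apply(txnId));
            }
            return future;
        }

        public static void main(String[] args) {
            List<UUID> txns = Arrays.asList(UUID.randomUUID(), UUID.randomUUID());
            commitInOrder(txns, id -> CompletableFuture.runAsync(
                    () -> System.out.println("committed " + id))).join();
        }
    }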
- CompletableFuture future = generateMarksForTransactions(record.getObject(), context); + CompletableFuture future = generateMarksForTransactions(context, writerMarks); for (UUID txnId : record.getObject().getTransactionsToCommit()) { log.debug(context.getRequestId(), "Committing transaction {} on stream {}/{}", txnId, scope, name); // commit transaction in segment store @@ -2180,7 +2162,7 @@ public CompletableFuture getWriterMark(String writer, OperationConte return getWriterMarkRecord(writer, context).thenApply(VersionedMetadata::getObject); } - protected CompletableFuture>> getOrderedCommittingTxnInLowestEpochHelper( + protected CompletableFuture> getOrderedCommittingTxnInLowestEpochHelper( ZkOrderedStore txnCommitOrderer, int limit, Executor executor, OperationContext context) { Preconditions.checkNotNull(context, "operation context cannot be null"); @@ -2202,19 +2184,17 @@ protected CompletableFuture>> getOrderedCo // is no longer active) // or its a duplicate entry or transaction is aborting. ConcurrentSkipListSet toPurge = new ConcurrentSkipListSet<>(); - ConcurrentHashMap transactionsMap = new ConcurrentHashMap<>(); + ConcurrentSkipListSet transactionsData = new ConcurrentSkipListSet<>( + Comparator.comparingLong(VersionedTransactionData::getCommitOrder)); // Collect transactions that are in committing state from smallest available epoch // smallest epoch has transactions in committing state, we should break, else continue. // also remove any transaction order references which are invalid. - return Futures.loop(() -> iterator.hasNext() && transactionsMap.isEmpty(), () -> { - return processTransactionsInEpoch(iterator.next(), toPurge, transactionsMap, - limit, executor, context); - }, executor).thenCompose(v -> txnCommitOrderer.removeEntities(getScope(), getName(), toPurge)) - .thenApply(v -> - transactionsMap.entrySet().stream().sorted( - Comparator.comparing(x -> x.getValue().getCommitOrder())) - .collect(Collectors.toList())); + return Futures.loop(() -> iterator.hasNext() && transactionsData.isEmpty(), + () -> processTransactionsInEpoch(iterator.next(), toPurge, transactionsData, + limit, executor, context), executor) + .thenCompose(v -> txnCommitOrderer.removeEntities(getScope(), getName(), toPurge)) + .thenApply(v -> transactionsData.stream().collect(Collectors.toList())); }); } @@ -2226,9 +2206,19 @@ protected CompletableFuture> getAllOrderedCommittingTxnsHelper(Z .collect(Collectors.toMap(Map.Entry::getKey, x -> UUID.fromString(x.getValue())))); } + CompletableFuture> getVersionedTransactionRecords(int epoch, List txnIds, OperationContext context) { + Preconditions.checkNotNull(context, "operation context cannot be null"); + + return Futures.allOfWithResults(txnIds.stream().map(txnIdStr -> { + UUID txnId = UUID.fromString(txnIdStr); + return Futures.exceptionallyExpecting(getTransactionData(txnId, context), + DATA_NOT_FOUND_PREDICATE, VersionedTransactionData.EMPTY); + }).collect(Collectors.toList())); + } + private CompletableFuture processTransactionsInEpoch(Map.Entry>> nextEpoch, ConcurrentSkipListSet toPurge, - ConcurrentHashMap transactionsMap, + ConcurrentSkipListSet transactionsMap, int limit, Executor executor, OperationContext context) { int epoch = nextEpoch.getKey(); @@ -2242,17 +2232,17 @@ private CompletableFuture processTransactionsInEpoch(Map.Entry from.get() < txnIds.size() && transactionsMap.size() < limit, - () -> getTransactionRecords(epoch, txnIds.subList(from.get(), till.get()), context).thenAccept(txns -> { - for (int i = 0; i < txns.size() && 
transactionsMap.size() < limit; i++) { - ActiveTxnRecord txnRecord = txns.get(i); + () -> getVersionedTransactionRecords(epoch, txnIds.subList(from.get(), till.get()), context).thenAccept(txns -> { + for (int i = 0; i < txns.size() && transactionsMap.size() < limit; i++) { + VersionedTransactionData txnRecord = txns.get(i); int index = from.get() + i; UUID txnId = UUID.fromString(txnIds.get(index)); long order = orders.get(index); - switch (txnRecord.getTxnStatus()) { + switch (txnRecord.getStatus()) { case COMMITTING: if (txnRecord.getCommitOrder() == order) { // if entry matches record's position then include it - transactionsMap.put(txnId, txnRecord); + transactionsMap.add(txnRecord); } else { log.debug(context.getRequestId(), "duplicate txn {} at position {}. removing {}", txnId, txnRecord.getCommitOrder(), order); @@ -2270,7 +2260,7 @@ private CompletableFuture processTransactionsInEpoch(Map.Entry updateActiveTx(final int epoch, * @param limit number of txns to fetch. * @return CompletableFuture which when completed will return ordered list of transaction ids and records. */ - abstract CompletableFuture>> getOrderedCommittingTxnInLowestEpoch( + abstract CompletableFuture> getOrderedCommittingTxnInLowestEpoch( int limit, OperationContext context); @VisibleForTesting diff --git a/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStream.java b/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStream.java index 1ebf7638c61..5abd00f2d50 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStream.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStream.java @@ -213,9 +213,9 @@ private String getWritersTableName(String id) { @Override public CompletableFuture completeCommittingTransactions(VersionedMetadata record, - OperationContext context) { + OperationContext context, + Map writerMarks) { Preconditions.checkNotNull(context, "operation context cannot be null"); - // create all transaction entries in committing txn list. // remove all entries from active txn in epoch. // reset CommittingTxnRecord @@ -235,12 +235,10 @@ public CompletableFuture completeCommittingTransactions(VersionedMetadata< if (record.getObject().getTransactionsToCommit().size() == 0) { future = CompletableFuture.completedFuture(null); } else { - future = generateMarksForTransactions(record.getObject(), context) + future = generateMarksForTransactions(context, writerMarks) .thenCompose(v -> createCompletedTxEntries(completedRecords, context)) .thenCompose(x -> getTransactionsInEpochTable(record.getObject().getEpoch(), context) - .thenCompose(table -> { - return storeHelper.removeEntries(table, txnIdStrings, context.getRequestId()); - })) + .thenCompose(table -> storeHelper.removeEntries(table, txnIdStrings, context.getRequestId()))) .thenCompose(x -> tryRemoveOlderTransactionsInEpochTables(epoch -> epoch < record.getObject().getEpoch(), context)); } @@ -834,10 +832,10 @@ private CompletableFuture> getEpochsWithTransactions(OperationCont } @Override - public CompletableFuture getNumberOfOngoingTransactions(OperationContext context) { + public CompletableFuture getNumberOfOngoingTransactions(OperationContext context) { Preconditions.checkNotNull(context, "operation context cannot be null"); - List> futures = new ArrayList<>(); + List> futures = new ArrayList<>(); // first get the number of ongoing transactions from the cache. 
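The hunks above widen the count from Integer to Long because per-epoch entry counts now come straight from the table store. The summation itself is a plain fan-in over futures; a compilable sketch using only the JDK:

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    final class OngoingTxnCount {
        // Waits for all per-epoch counts, then folds them with Long addition,
        // mirroring the reduce(0L, Long::sum) in the diff above.
        static CompletableFuture<Long> total(List<CompletableFuture<Long>> perEpoch) {
            return CompletableFuture.allOf(perEpoch.toArray(new CompletableFuture[0]))
                    .thenApply(v -> perEpoch.stream()
                            .map(CompletableFuture::join) // safe: allOf already completed them
                            .reduce(0L, Long::sum));
        }

        public static void main(String[] args) {
            System.out.println(total(Arrays.asList(
                    CompletableFuture.completedFuture(3L),
                    CompletableFuture.completedFuture(5L))).join()); // 8
        }
    }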
return getEpochsWithTransactionsTable(context) .thenCompose(epochsWithTxn -> storeHelper.getAllKeys(epochsWithTxn, context.getRequestId()) @@ -848,21 +846,19 @@ public CompletableFuture getNumberOfOngoingTransactions(OperationContex .thenCompose(v -> Futures.allOfWithResults(futures) .thenApply(list -> list .stream() - .reduce(0, Integer::sum)))); + .reduce(0L, Long::sum)))); } - private CompletableFuture getNumberOfOngoingTransactions(int epoch, OperationContext context) { + private CompletableFuture getNumberOfOngoingTransactions(int epoch, OperationContext context) { Preconditions.checkNotNull(context, "operation context cannot be null"); AtomicInteger count = new AtomicInteger(0); return getTransactionsInEpochTable(epoch, context) - .thenCompose(epochTableName -> storeHelper.getAllKeys(epochTableName, context.getRequestId()) - .forEachRemaining(x -> count.incrementAndGet(), executor) - .thenApply(x -> count.get())); + .thenCompose(epochTableName -> storeHelper.getEntryCount(epochTableName, context.getRequestId())); } @Override - public CompletableFuture>> getOrderedCommittingTxnInLowestEpoch( + public CompletableFuture> getOrderedCommittingTxnInLowestEpoch( int limit, OperationContext context) { Preconditions.checkNotNull(context, "operation context cannot be null"); @@ -894,6 +890,31 @@ CompletableFuture> getTransactionRecords(int epoch, List> getVersionedTransactionRecords(int epoch, List txnIds, OperationContext context) { + Preconditions.checkNotNull(context, "operation context cannot be null"); + + return getTransactionsInEpochTable(epoch, context) + .thenCompose(epochTxnTable -> storeHelper.getEntries(epochTxnTable, txnIds, + ActiveTxnRecord::fromBytes, NON_EXISTENT_TXN, context.getRequestId()) + .thenApply(res -> { + List list = new ArrayList<>(); + for (int i = 0; i < txnIds.size(); i++) { + VersionedMetadata txn = res.get(i); + ActiveTxnRecord activeTxnRecord = txn.getObject(); + if (!ActiveTxnRecord.EMPTY.equals(activeTxnRecord)) { + VersionedTransactionData vdata = new VersionedTransactionData(epoch, UUID.fromString(txnIds.get(i)), txn.getVersion(), + activeTxnRecord.getTxnStatus(), activeTxnRecord.getTxCreationTimestamp(), + activeTxnRecord.getMaxExecutionExpiryTime(), activeTxnRecord.getWriterId(), + activeTxnRecord.getCommitTime(), activeTxnRecord.getCommitOrder(), + activeTxnRecord.getCommitOffsets()); + list.add(vdata); + } + } + return list; + })); + } + @Override public CompletableFuture> getTxnInEpoch(int epoch, OperationContext context) { Preconditions.checkNotNull(context, "operation context cannot be null"); @@ -968,7 +989,6 @@ CompletableFuture addTxnToCommitOrder(UUID txId, OperationContext context) @Override CompletableFuture removeTxnsFromCommitOrder(List orderedPositions, OperationContext context) { Preconditions.checkNotNull(context, "operation context cannot be null"); - return txnCommitOrderer.removeEntities(getScope(), getName(), orderedPositions); } @@ -992,13 +1012,10 @@ CompletableFuture removeActiveTxEntry(final int epoch, final UUID txId, Op private CompletableFuture tryRemoveOlderTransactionsInEpochTables(Predicate epochPredicate, OperationContext context) { Preconditions.checkNotNull(context, "operation context cannot be null"); - return getEpochsWithTransactions(context) - .thenCompose(list -> { - return Futures.allOf(list.stream().filter(epochPredicate) - .map(x -> tryRemoveTransactionsInEpochTable(x, context)) - .collect(Collectors.toList())); - }); + .thenCompose(list -> Futures.allOf(list.stream().filter(epochPredicate) + .map(x -> 
tryRemoveTransactionsInEpochTable(x, context)) + .collect(Collectors.toList()))); } private CompletableFuture tryRemoveTransactionsInEpochTable(int epoch, OperationContext context) { diff --git a/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStore.java b/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStore.java index 6b9eb197d41..1326c85b38f 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStore.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStore.java @@ -102,7 +102,7 @@ public class PravegaTablesStreamMetadataStore extends AbstractStreamMetadataStor } @VisibleForTesting - PravegaTablesStreamMetadataStore(CuratorFramework curatorClient, ScheduledExecutorService executor, + public PravegaTablesStreamMetadataStore(CuratorFramework curatorClient, ScheduledExecutorService executor, Duration gcPeriod, PravegaTablesStoreHelper helper) { super(new ZKHostIndex(curatorClient, "/hostTxnIndex", executor), new ZKHostIndex(curatorClient, "/hostRequestIndex", executor)); ZKStoreHelper zkStoreHelper = new ZKStoreHelper(curatorClient, executor); diff --git a/controller/src/main/java/io/pravega/controller/store/stream/Stream.java b/controller/src/main/java/io/pravega/controller/store/stream/Stream.java index 42fca1a5613..8124394c171 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/Stream.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/Stream.java @@ -513,7 +513,7 @@ CompletableFuture> sealTransaction(final UUID tx * @return the number of transactions ongoing for the stream. */ - CompletableFuture getNumberOfOngoingTransactions(OperationContext context); + CompletableFuture getNumberOfOngoingTransactions(OperationContext context); /** * API to get all active transactions as a map of transaction id to Active transaction record @@ -586,7 +586,7 @@ CompletableFuture getSizeTillStreamCut(Map streamCut, Optional * @param limit maximum number of transactions to include in a commit batch * @return A completableFuture which, when completed, will contain committing transaction record if it exists, or null otherwise. */ - CompletableFuture> startCommittingTransactions(int limit, + CompletableFuture, List>> startCommittingTransactions(int limit, OperationContext context); /** @@ -608,7 +608,8 @@ CompletableFuture> startCommitti * @param record existing versioned record. */ CompletableFuture completeCommittingTransactions(VersionedMetadata record, - OperationContext context); + OperationContext context, + Map writerMarks); /** * Method to record commit offset for a transaction. This method stores the commit offset in ActiveTransaction record. diff --git a/controller/src/main/java/io/pravega/controller/store/stream/StreamMetadataStore.java b/controller/src/main/java/io/pravega/controller/store/stream/StreamMetadataStore.java index 8f6ecaa13ee..addcfc679e6 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/StreamMetadataStore.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/StreamMetadataStore.java @@ -1350,7 +1350,7 @@ CompletableFuture getSizeTillStreamCut(final String scope, final String st * @param executor executor * @return A completableFuture which, when completed, will mean that the record has been created successfully. 
*/ - CompletableFuture> startCommitTransactions(final String scope, + CompletableFuture, List>> startCommitTransactions(final String scope, final String stream, final int limit, final OperationContext context, @@ -1378,28 +1378,14 @@ CompletableFuture> getVersionedC * @param record versioned record * @param context operation context * @param executor executor + * @param writerMarks Mapping of WriterId to Transaction Offset. * @return A completableFuture which, when completed, will mean that deletion of txnCommitNode is complete. */ CompletableFuture completeCommitTransactions(final String scope, final String stream, final VersionedMetadata record, - final OperationContext context, final ScheduledExecutorService executor); - + final OperationContext context, final ScheduledExecutorService executor, + Map writerMarks); - /** - * Method to record commit offset for a transaction. This method stores the commit offset in ActiveTransaction record. - * Its behaviour is idempotent and if a transaction already has commitOffsets set earlier, they are not overwritten. - * @param scope scope name - * @param stream stream name - * @param txnId transaction id - * @param commitOffsets segment to offset position where transaction was committed - * @param context operation context - * @param executor executor - * @return A completableFuture which, when completed, will have transaction commit offset recorded successfully. - */ - CompletableFuture recordCommitOffsets(final String scope, final String stream, final UUID txnId, - final Map commitOffsets, - final OperationContext context, final ScheduledExecutorService executor); - /** * This method attempts to create a new Waiting Request node and set the processor's name in the node. * If a node already exists, this attempt is ignored. @@ -1608,4 +1594,5 @@ CompletableFuture checkReaderGroupExists(final String scope, final Stri */ CompletableFuture getReaderGroupId(final String scopeName, final String rgName, OperationContext context, Executor executor); + } diff --git a/controller/src/main/java/io/pravega/controller/store/stream/TxnWriterMark.java b/controller/src/main/java/io/pravega/controller/store/stream/TxnWriterMark.java new file mode 100644 index 00000000000..baab9608205 --- /dev/null +++ b/controller/src/main/java/io/pravega/controller/store/stream/TxnWriterMark.java @@ -0,0 +1,33 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.store.stream; + +import lombok.AllArgsConstructor; +import lombok.Data; + +import java.util.Map; +import java.util.UUID; + +/* This is a data class that represents a writer mark generated by a Txn commit. + * Writers send mark information for watermarking purposes, containing time and position. 
+ */ +@Data +@AllArgsConstructor +public class TxnWriterMark { + final private long timestamp; + final private Map position; + final private UUID transactionId; +} diff --git a/controller/src/main/java/io/pravega/controller/store/stream/VersionedTransactionData.java b/controller/src/main/java/io/pravega/controller/store/stream/VersionedTransactionData.java index 53665bb4aff..16bbfb0d8fc 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/VersionedTransactionData.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/VersionedTransactionData.java @@ -40,6 +40,6 @@ public class VersionedTransactionData { private final long maxExecutionExpiryTime; private final String writerId; private final Long commitTime; - private final Long position; + private final Long commitOrder; private final ImmutableMap commitOffsets; } diff --git a/controller/src/main/java/io/pravega/controller/store/stream/ZKGarbageCollector.java b/controller/src/main/java/io/pravega/controller/store/stream/ZKGarbageCollector.java index 2d98ff2dff8..a1f867db5cb 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/ZKGarbageCollector.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/ZKGarbageCollector.java @@ -51,7 +51,7 @@ * is updated, all watchers receive the latest update. */ @Slf4j -class ZKGarbageCollector extends AbstractService implements AutoCloseable { +class ZKGarbageCollector extends AbstractService { private static final String GC_ROOT = "/garbagecollection/%s"; private static final String GUARD_PATH = GC_ROOT + "/guard"; @@ -126,6 +126,19 @@ protected void doStop() { notifyStopped(); } }); + + watch.getAndUpdate(x -> { + if (x != null) { + try { + x.close(); + } catch (IOException e) { + throw Exceptions.sneakyThrow(e); + } + } + return x; + }); + + gcExecutor.shutdownNow(); } int getLatestBatch() { @@ -200,19 +213,4 @@ private NodeCache registerWatch(String watchPath) { return nodeCache; } - @Override - public void close() { - watch.getAndUpdate(x -> { - if (x != null) { - try { - x.close(); - } catch (IOException e) { - throw Exceptions.sneakyThrow(e); - } - } - return x; - }); - - gcExecutor.shutdownNow(); - } } diff --git a/controller/src/main/java/io/pravega/controller/store/stream/ZKStream.java b/controller/src/main/java/io/pravega/controller/store/stream/ZKStream.java index eae7464c7b5..6aef5aa3aad 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/ZKStream.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/ZKStream.java @@ -181,11 +181,11 @@ class ZKStream extends PersistentStreamBase { // region overrides @Override - public CompletableFuture getNumberOfOngoingTransactions(OperationContext context) { + public CompletableFuture getNumberOfOngoingTransactions(OperationContext context) { return store.getChildren(activeTxRoot).thenCompose(list -> Futures.allOfWithResults(list.stream().map(epoch -> getNumberOfOngoingTransactions(Integer.parseInt(epoch))).collect(Collectors.toList()))) - .thenApply(list -> list.stream().reduce(0, Integer::sum)); + .thenApply(list -> list.stream().reduce(0, Integer::sum).longValue()); } private CompletableFuture getNumberOfOngoingTransactions(int epoch) { @@ -553,7 +553,7 @@ public CompletableFuture> getTxnInEpoch(int epoch, Op } @Override - public CompletableFuture>> getOrderedCommittingTxnInLowestEpoch(int limit, + public CompletableFuture> getOrderedCommittingTxnInLowestEpoch(int limit, OperationContext context) { return 
super.getOrderedCommittingTxnInLowestEpochHelper(txnCommitOrderer, limit, executor, context); } diff --git a/controller/src/main/java/io/pravega/controller/store/stream/ZookeeperBucketStore.java b/controller/src/main/java/io/pravega/controller/store/stream/ZookeeperBucketStore.java index 879f3062f0e..86ff7698b14 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/ZookeeperBucketStore.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/ZookeeperBucketStore.java @@ -47,6 +47,10 @@ public class ZookeeperBucketStore implements BucketStore { storeHelper = new ZKStoreHelper(client, executor); } + public boolean isZKConnected() { + return storeHelper.isZKConnected(); + } + @Override public StoreType getStoreType() { return StoreType.Zookeeper; diff --git a/controller/src/main/java/io/pravega/controller/store/stream/records/ActiveTxnRecord.java b/controller/src/main/java/io/pravega/controller/store/stream/records/ActiveTxnRecord.java index 3b0fb166c32..e9a1a23dea4 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/records/ActiveTxnRecord.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/records/ActiveTxnRecord.java @@ -39,7 +39,7 @@ public class ActiveTxnRecord { public static final ActiveTxnRecord EMPTY = ActiveTxnRecord.builder().txCreationTimestamp(Long.MIN_VALUE) .leaseExpiryTime(Long.MIN_VALUE).maxExecutionExpiryTime(Long.MIN_VALUE).txnStatus(TxnStatus.UNKNOWN) - .writerId(Optional.empty()).commitTime(Optional.empty()).build(); + .writerId(Optional.empty()).commitTime(Optional.empty()).commitOrder(Optional.empty()).commitOffsets(ImmutableMap.of()).build(); public static final ActiveTxnRecordSerializer SERIALIZER = new ActiveTxnRecordSerializer(); diff --git a/controller/src/main/java/io/pravega/controller/store/stream/records/EpochRecord.java b/controller/src/main/java/io/pravega/controller/store/stream/records/EpochRecord.java index 349434e52a1..fe4e4b95b9b 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/records/EpochRecord.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/records/EpochRecord.java @@ -121,6 +121,7 @@ private EpochRecordBuilder merges(final long mergeCount) { return this; } + @Override public EpochRecord build() { return new EpochRecord(epoch, referenceEpoch, segments, creationTime, splits, merges); } diff --git a/controller/src/main/java/io/pravega/controller/store/stream/records/ReaderGroupConfigRecord.java b/controller/src/main/java/io/pravega/controller/store/stream/records/ReaderGroupConfigRecord.java index d5a5f6bbb3d..77d4ef8d4cb 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/records/ReaderGroupConfigRecord.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/records/ReaderGroupConfigRecord.java @@ -71,7 +71,8 @@ public static ReaderGroupConfigRecord update(ReaderGroupConfig rgConfig, long ge .retentionTypeOrdinal(rgConfig.getRetentionType().ordinal()) .startingStreamCuts(startStreamCuts) .endingStreamCuts(endStreamCuts) - .updating(isUpdating).build(); + .updating(isUpdating) + .build(); } public static ReaderGroupConfigRecord complete(ReaderGroupConfigRecord rgConfigRecord) { @@ -83,7 +84,8 @@ public static ReaderGroupConfigRecord complete(ReaderGroupConfigRecord rgConfigR .retentionTypeOrdinal(rgConfigRecord.getRetentionTypeOrdinal()) .startingStreamCuts(rgConfigRecord.getStartingStreamCuts()) .endingStreamCuts(rgConfigRecord.getEndingStreamCuts()) - .updating(false).build(); + .updating(false) + 
.build(); } @SneakyThrows(IOException.class) diff --git a/controller/src/main/java/io/pravega/controller/store/stream/records/StreamConfigurationRecord.java b/controller/src/main/java/io/pravega/controller/store/stream/records/StreamConfigurationRecord.java index a3d01233450..2db6d58eebb 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/records/StreamConfigurationRecord.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/records/StreamConfigurationRecord.java @@ -232,7 +232,8 @@ protected byte getWriteVersion() { protected void declareVersions() { version(0).revision(0, this::write00, this::read00) .revision(1, this::write01, this::read01) - .revision(2, this::write02, this::read02); + .revision(2, this::write02, this::read02) + .revision(3, this::write03, this::read03); } @Override @@ -301,6 +302,23 @@ private void write02(StreamConfigurationRecord streamConfigurationRecord, Revisi revisionDataOutput.writeCollection(streamConfigurationRecord.removeTags, stringSerializer); } + private void read03(RevisionDataInput revisionDataInput, + StreamConfigurationRecordBuilder configurationRecordBuilder) + throws IOException { + StreamConfiguration.StreamConfigurationBuilder streamConfigurationBuilder = StreamConfiguration.builder(); + streamConfigurationBuilder.scalingPolicy(configurationRecordBuilder.streamConfiguration.getScalingPolicy()) + .retentionPolicy(configurationRecordBuilder.streamConfiguration.getRetentionPolicy()) + .timestampAggregationTimeout(configurationRecordBuilder.streamConfiguration.getTimestampAggregationTimeout()) + .tags(configurationRecordBuilder.streamConfiguration.getTags()) + .rolloverSizeBytes(revisionDataInput.readLong()); + configurationRecordBuilder.streamConfiguration(streamConfigurationBuilder.build()); + } + + private void write03(StreamConfigurationRecord streamConfigurationRecord, RevisionDataOutput revisionDataOutput) + throws IOException { + revisionDataOutput.writeLong(streamConfigurationRecord.streamConfiguration.getRolloverSizeBytes()); + } + @Override protected StreamConfigurationRecordBuilder newBuilder() { return StreamConfigurationRecord.builder(); diff --git a/controller/src/main/java/io/pravega/controller/store/stream/records/StreamSegmentRecord.java b/controller/src/main/java/io/pravega/controller/store/stream/records/StreamSegmentRecord.java index 9aba71b2766..a6535ea7683 100644 --- a/controller/src/main/java/io/pravega/controller/store/stream/records/StreamSegmentRecord.java +++ b/controller/src/main/java/io/pravega/controller/store/stream/records/StreamSegmentRecord.java @@ -48,6 +48,7 @@ public static class StreamSegmentRecordBuilder implements ObjectBuilder createKeyValueTable(S CreateTableEvent event = new CreateTableEvent(scope, kvtName, kvtConfig.getPartitionCount(), kvtConfig.getPrimaryKeyLength(), kvtConfig.getSecondaryKeyLength(), - createTimestamp, requestId, id); + createTimestamp, requestId, id, kvtConfig.getRolloverSizeBytes()); //4. 
Update ScopeTable with the entry for this KVT and Publish the event for creation return eventHelper.addIndexAndSubmitTask(event, () -> kvtMetadataStore.createEntryForKVTable(scope, kvtName, id, @@ -264,21 +264,21 @@ public String retrieveDelegationToken() { return authHelper.retrieveMasterToken(); } - public CompletableFuture createNewSegments(String scope, String kvt, - List segmentIds, int keyLength, long requestId) { + public CompletableFuture createNewSegments(String scope, String kvt, List segmentIds, + int keyLength, long requestId, long rolloverSizeBytes) { return Futures.toVoid(Futures.allOfWithResults(segmentIds .stream() .parallel() - .map(segment -> createNewSegment(scope, kvt, segment, keyLength, retrieveDelegationToken(), requestId)) + .map(segment -> createNewSegment(scope, kvt, segment, keyLength, retrieveDelegationToken(), requestId, rolloverSizeBytes)) .collect(Collectors.toList()))); } private CompletableFuture createNewSegment(String scope, String kvt, long segmentId, int keyLength, String controllerToken, - long requestId) { + long requestId, long rolloverSizeBytes) { final String qualifiedTableSegmentName = getQualifiedTableSegmentName(scope, kvt, segmentId); log.debug("Creating segment {}", qualifiedTableSegmentName); return Futures.toVoid(withRetries(() -> segmentHelper.createTableSegment(qualifiedTableSegmentName, controllerToken, - requestId, false, keyLength), executor)); + requestId, false, keyLength, rolloverSizeBytes), executor)); } @Override diff --git a/controller/src/main/java/io/pravega/controller/task/Stream/StreamMetadataTasks.java b/controller/src/main/java/io/pravega/controller/task/Stream/StreamMetadataTasks.java index b0f7f9f221f..4a83aa8f95d 100644 --- a/controller/src/main/java/io/pravega/controller/task/Stream/StreamMetadataTasks.java +++ b/controller/src/main/java/io/pravega/controller/task/Stream/StreamMetadataTasks.java @@ -138,6 +138,7 @@ public class StreamMetadataTasks extends TaskBase { private static final TagLogger log = new TagLogger(LoggerFactory.getLogger(StreamMetadataTasks.class)); private static final int SUBSCRIBER_OPERATION_RETRIES = 10; private static final int READER_GROUP_OPERATION_MAX_RETRIES = 10; + private static final long READER_GROUP_SEGMENT_ROLLOVER_SIZE_BYTES = 4 * 1024 * 1024; // 4MB private final AtomicLong retentionFrequencyMillis; private final StreamMetadataStore streamMetadataStore; @@ -212,7 +213,7 @@ public void initializeStreamWriters(final EventStreamClientFactory clientFactory if (toSetEventHelper) { this.eventHelper = new EventHelper(clientFactory.createEventWriter(streamName, ControllerEventProcessors.CONTROLLER_EVENT_SERIALIZER, - EventWriterConfig.builder().retryAttempts(Integer.MAX_VALUE).build()), + EventWriterConfig.builder().enableConnectionPooling(true).retryAttempts(Integer.MAX_VALUE).build()), this.executor, this.eventExecutor, this.context.getHostId(), ((AbstractStreamMetadataStore) this.streamMetadataStore).getHostTaskIndex()); toSetEventHelper = false; @@ -454,7 +455,8 @@ public CompletableFuture createReaderGroupTask } return CompletableFuture.completedFuture(null); }).thenCompose(x -> createRGStream(scope, NameUtils.getStreamForReaderGroup(readerGroup), - StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build(), + StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)) + .rolloverSizeBytes(READER_GROUP_SEGMENT_ROLLOVER_SIZE_BYTES).build(), System.currentTimeMillis(), 10, getRequestId(context)) .thenCompose(createStatus -> { if 
(createStatus.equals(Controller.CreateStreamStatus.Status.STREAM_EXISTS) @@ -755,25 +757,32 @@ public CompletableFuture updateStream(String scope, S long requestId) { final OperationContext context = streamMetadataStore.createStreamContext(scope, stream, requestId); - // 1. get configuration - return streamMetadataStore.getConfigurationRecord(scope, stream, context, executor) - .thenCompose(configProperty -> { - // 2. post event to start update workflow - if (!configProperty.getObject().isUpdating()) { - return eventHelperFuture.thenCompose(eventHelper -> eventHelper.addIndexAndSubmitTask( - new UpdateStreamEvent(scope, stream, requestId), - // 3. update new configuration in the store with updating flag = true - // if attempt to update fails, we bail out with no harm done - () -> streamMetadataStore.startUpdateConfiguration(scope, stream, newConfig, - context, executor)) - // 4. wait for update to complete - .thenCompose(x -> eventHelper.checkDone(() -> isUpdated(scope, stream, newConfig, context)) - .thenApply(y -> UpdateStreamStatus.Status.SUCCESS))); - } else { - log.error(requestId, "Another update in progress for {}/{}", - scope, stream); - return CompletableFuture.completedFuture(UpdateStreamStatus.Status.FAILURE); + return streamMetadataStore.getState(scope, stream, true, context, executor) + .thenCompose(state -> { + if (state.equals(State.SEALED)) { + log.error(requestId, "Cannot update a sealed stream {}/{}", scope, stream); + return CompletableFuture.completedFuture(UpdateStreamStatus.Status.STREAM_SEALED); } + // 1. get configuration + return streamMetadataStore.getConfigurationRecord(scope, stream, context, executor) + .thenCompose(configProperty -> { + // 2. post event to start update workflow + if (!configProperty.getObject().isUpdating()) { + return eventHelperFuture.thenCompose(eventHelper -> eventHelper.addIndexAndSubmitTask( + new UpdateStreamEvent(scope, stream, requestId), + // 3. update new configuration in the store with updating flag = true + // if attempt to update fails, we bail out with no harm done + () -> streamMetadataStore.startUpdateConfiguration(scope, stream, newConfig, + context, executor)) + // 4. 
wait for update to complete + .thenCompose(y -> eventHelper.checkDone(() -> isUpdated(scope, stream, newConfig, context)) + .thenApply(z -> UpdateStreamStatus.Status.SUCCESS))); + } else { + log.error(requestId, "Another update in progress for {}/{}", + scope, stream); + return CompletableFuture.completedFuture(UpdateStreamStatus.Status.FAILURE); + } + }); }) .exceptionally(ex -> { final String message = "Exception updating stream configuration {}"; @@ -791,11 +800,15 @@ CompletableFuture isUpdated(String scope, String stream, StreamConfigur .thenApply(v -> { State state = stateFuture.join(); StreamConfigurationRecord configProperty = configPropertyFuture.join(); - // if property is updating and doesn't match our request, it's a subsequent update if (configProperty.isUpdating()) { return !configProperty.getStreamConfiguration().equals(newConfig); } else { + // if stream is sealed then update should not be allowed + if (state.equals(State.SEALED)) { + log.error("Cannot update a sealed stream {}/{}", scope, stream); + throw new UnsupportedOperationException("Cannot update a sealed stream: " + NameUtils.getScopedStreamName(scope, stream)); + } // if update-barrier is not updating, then update is complete if property matches our expectation // and state is not updating return !(configProperty.getStreamConfiguration().equals(newConfig) && @@ -1404,18 +1417,25 @@ public CompletableFuture truncateStream(final String final long requestId) { final OperationContext context = streamMetadataStore.createStreamContext(scope, stream, requestId); - // 1. get stream cut - return eventHelperFuture.thenCompose(eventHelper -> startTruncation(scope, stream, streamCut, context) - // 4. check for truncation to complete - .thenCompose(truncationStarted -> { - if (truncationStarted) { - return eventHelper.checkDone(() -> isTruncated(scope, stream, streamCut, context), 1000L) - .thenApply(y -> UpdateStreamStatus.Status.SUCCESS); - } else { - log.error(requestId, "Unable to start truncation for {}/{}", scope, stream); - return CompletableFuture.completedFuture(UpdateStreamStatus.Status.FAILURE); + return streamMetadataStore.getState(scope, stream, true, context, executor) + .thenCompose(state -> { + if (state.equals(State.SEALED)) { + log.error(requestId, "Cannot truncate a sealed stream {}/{}", scope, stream); + return CompletableFuture.completedFuture(UpdateStreamStatus.Status.STREAM_SEALED); } - })) + // 1. get stream cut + return eventHelperFuture.thenCompose(eventHelper -> startTruncation(scope, stream, streamCut, context) // 1. get stream cut + // 4. 
check for truncation to complete + .thenCompose(truncationStarted -> { + if (truncationStarted) { + return eventHelper.checkDone(() -> isTruncated(scope, stream, streamCut, context), 1000L) + .thenApply(y -> UpdateStreamStatus.Status.SUCCESS); + } else { + log.error(requestId, "Unable to start truncation for {}/{}", scope, stream); + return CompletableFuture.completedFuture(UpdateStreamStatus.Status.FAILURE); + } + })); + }) .exceptionally(ex -> { final String message = "Exception thrown in trying to truncate stream"; return handleUpdateStreamError(ex, requestId, message, NameUtils.getScopedStreamName(scope, stream)); @@ -1461,6 +1481,11 @@ CompletableFuture isTruncated(String scope, String stream, Map createMarkStream(String scope, String baseStream final long segmentId = NameUtils.computeSegmentId(response.getStartingSegmentNumber(), 0); return notifyNewSegment(scope, markStream, segmentId, response.getConfiguration().getScalingPolicy(), - this.retrieveDelegationToken(), requestId); + this.retrieveDelegationToken(), requestId, config.getRolloverSizeBytes()); }) .thenCompose(v -> { return streamMetadataStore.getVersionedState(scope, markStream, context, executor) @@ -1832,14 +1857,14 @@ public CompletableFuture notifyNewSegments(String scope, String stream, St .stream() .parallel() .map(segment -> notifyNewSegment(scope, stream, segment, configuration.getScalingPolicy(), controllerToken, - requestId)) + requestId, configuration.getRolloverSizeBytes())) .collect(Collectors.toList()))); } public CompletableFuture notifyNewSegment(String scope, String stream, long segmentId, ScalingPolicy policy, - String controllerToken, long requestId) { + String controllerToken, long requestId, long rolloverSize) { return Futures.toVoid(withRetries(() -> segmentHelper.createSegment(scope, - stream, segmentId, policy, controllerToken, requestId), executor)); + stream, segmentId, policy, controllerToken, requestId, rolloverSize), executor)); } public CompletableFuture notifyDeleteSegments(String scope, String stream, Set segmentsToDelete, @@ -1929,6 +1954,8 @@ private UpdateStreamStatus.Status handleUpdateStreamError(Throwable ex, long req log.error(requestId, "Exception updating Stream {}. 
Cause: {}.", streamFullName, logMessage, cause); if (cause instanceof StoreException.DataNotFoundException) { return UpdateStreamStatus.Status.STREAM_NOT_FOUND; + } else if (cause instanceof UnsupportedOperationException) { + return UpdateStreamStatus.Status.STREAM_SEALED; } else if (cause instanceof TimeoutException) { throw new CompletionException(cause); } else { diff --git a/controller/src/main/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasks.java b/controller/src/main/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasks.java index 273616d9a5f..03b785964a8 100644 --- a/controller/src/main/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasks.java +++ b/controller/src/main/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasks.java @@ -191,13 +191,13 @@ public void initializeStreamWriters(final EventStreamClientFactory clientFactory commitWriterFuture.complete(clientFactory.createEventWriter( config.getCommitStreamName(), ControllerEventProcessors.COMMIT_EVENT_SERIALIZER, - EventWriterConfig.builder().retryAttempts(Integer.MAX_VALUE).build())); + EventWriterConfig.builder().enableConnectionPooling(true).retryAttempts(Integer.MAX_VALUE).build())); } if (!abortWriterFuture.isDone()) { abortWriterFuture.complete(clientFactory.createEventWriter( config.getAbortStreamName(), ControllerEventProcessors.ABORT_EVENT_SERIALIZER, - EventWriterConfig.builder().retryAttempts(Integer.MAX_VALUE).build())); + EventWriterConfig.builder().enableConnectionPooling(true).retryAttempts(Integer.MAX_VALUE).build())); } this.setReady(); } @@ -219,14 +219,16 @@ public void initializeStreamWriters(final EventStreamWriter commitW * @param lease Time for which transaction shall remain open with sending any heartbeat. * the scaling operation is initiated on the txn stream. * @param requestId request id. + * @param rolloverSizeBytes rollover size for txn segment. * @return transaction id. */ public CompletableFuture>> createTxn(final String scope, final String stream, final long lease, - final long requestId) { + final long requestId, + final long rolloverSizeBytes) { final OperationContext context = streamMetadataStore.createStreamContext(scope, stream, requestId); - return createTxnBody(scope, stream, lease, context); + return createTxnBody(scope, stream, lease, context, rolloverSizeBytes); } /** @@ -310,7 +312,7 @@ public CompletableFuture commitTxn(final String scope, final String s * @param stream stream name. * @param txId transaction id. * @param writerId writer id - * @param timestamp commit time as recorded by writer + * @param timestamp commit time as recorded by writer. This is required for watermarking. * @param requestId requestid * @return true/false. */ @@ -348,12 +350,14 @@ public CompletableFuture commitTxn(final String scope, final String s * @param stream stream name. * @param lease txn lease. * @param ctx context. + * @param rolloverSizeBytes rollover size for txn segment. * @return identifier of the created txn. */ CompletableFuture>> createTxnBody(final String scope, final String stream, final long lease, - final OperationContext ctx) { + final OperationContext ctx, + final long rolloverSizeBytes) { Preconditions.checkNotNull(ctx, "Operation context is null"); // Step 1. Validate parameters. 
CompletableFuture validate = validate(lease); @@ -383,7 +387,7 @@ CompletableFuture>> cre streamMetadataStore.getSegmentsInEpoch(scope, stream, txnData.getEpoch(), ctx, executor), executor); CompletableFuture notify = segmentsFuture.thenComposeAsync(activeSegments -> - notifyTxnCreation(scope, stream, activeSegments, txnId, ctx.getRequestId()), executor) + notifyTxnCreation(scope, stream, activeSegments, txnId, ctx.getRequestId(), rolloverSizeBytes), executor) .whenComplete((v, e) -> // Method notifyTxnCreation ensures that notification completes // even in the presence of n/w or segment store failures. @@ -737,22 +741,25 @@ CompletableFuture writeAbortEvent(String scope, String stream, int ep private CompletableFuture notifyTxnCreation(final String scope, final String stream, final List segments, final UUID txnId, - long requestId) { + final long requestId, final long rolloverSizeBytes) { Timer timer = new Timer(); return Futures.allOf(segments.stream() .parallel() - .map(segment -> notifyTxnCreation(scope, stream, segment.segmentId(), txnId, requestId)) + .map(segment -> notifyTxnCreation(scope, stream, segment.segmentId(), txnId, requestId, rolloverSizeBytes)) .collect(Collectors.toList())) .thenRun(() -> TransactionMetrics.getInstance().createTransactionSegments(timer.getElapsed())); } private CompletableFuture notifyTxnCreation(final String scope, final String stream, - final long segmentId, final UUID txnId, long requestId) { + final long segmentId, final UUID txnId, final long requestId, + final long rolloverSizeBytes) { return TaskStepsRetryHelper.withRetries(() -> segmentHelper.createTransaction(scope, - stream, - segmentId, - txnId, - this.retrieveDelegationToken(), requestId), executor); + stream, + segmentId, + txnId, + this.retrieveDelegationToken(), + requestId, + rolloverSizeBytes), executor); } public String retrieveDelegationToken() { diff --git a/controller/src/main/java/io/pravega/controller/util/Config.java b/controller/src/main/java/io/pravega/controller/util/Config.java index 76e41693356..a39f5ec7667 100644 --- a/controller/src/main/java/io/pravega/controller/util/Config.java +++ b/controller/src/main/java/io/pravega/controller/util/Config.java @@ -16,20 +16,24 @@ package io.pravega.controller.util; import com.google.common.base.Strings; +import io.pravega.common.security.TLSProtocolVersion; import io.pravega.common.util.Property; import io.pravega.common.util.TypedProperties; import io.pravega.controller.server.rpc.grpc.GRPCServerConfig; import io.pravega.controller.server.rpc.grpc.impl.GRPCServerConfigImpl; import io.pravega.shared.metrics.MetricsConfig; +import lombok.SneakyThrows; +import lombok.val; +import lombok.extern.slf4j.Slf4j; + import java.io.File; import java.io.FileReader; import java.io.IOException; import java.net.URL; import java.util.Arrays; +import java.util.Collections; +import java.util.List; import java.util.Properties; -import lombok.SneakyThrows; -import lombok.extern.slf4j.Slf4j; -import lombok.val; /** * Utility class to supply Controller Configuration. 
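The hunk that follows introduces security.tls.protocolVersion, a comma-separated protocol list defaulting to "TLSv1.2,TLSv1.3". A rough sketch of the parsing it implies; the real work happens in io.pravega.common.security.TLSProtocolVersion, which presumably also validates the names, something this toy version omits:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    final class TlsProtocolConfig {
        // Splits "TLSv1.2,TLSv1.3" into an unmodifiable list, the shape the
        // TLS_PROTOCOL_VERSION constant below ends up holding.
        static List<String> parse(String propertyValue) {
            return Collections.unmodifiableList(Arrays.asList(propertyValue.split(",")));
        }

        public static void main(String[] args) {
            System.out.println(parse("TLSv1.2,TLSv1.3")); // [TLSv1.2, TLSv1.3]
        }
    }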
@@ -125,6 +129,9 @@ public final class Config { public static final Property PROPERTY_TLS_ENABLED = Property.named( "security.tls.enable", false, "auth.tlsEnabled"); + public static final Property PROPERTY_TLS_PROTOCOL_VERSION = Property.named( + "security.tls.protocolVersion", "TLSv1.2,TLSv1.3"); + public static final Property PROPERTY_TLS_CERT_FILE = Property.named( "security.tls.server.certificate.location", "", "auth.tlsCertFile"); @@ -174,7 +181,7 @@ public final class Config { "transaction.lease.count.min", 10000, "transaction.minLeaseValue"); public static final Property PROPERTY_TXN_MAX_LEASE = Property.named( - "transaction.lease.count.max", 120000L, "transaction.maxLeaseValue"); + "transaction.lease.count.max", 600000L, "transaction.maxLeaseValue"); public static final Property PROPERTY_TXN_MAX_EXECUTION_TIMEBOUND_DAYS = Property.named( "transaction.execution.timeBound.days", 1); @@ -228,6 +235,7 @@ public final class Config { public static final boolean AUTHORIZATION_ENABLED; public static final String USER_PASSWORD_FILE; public static final boolean TLS_ENABLED; + public static final List TLS_PROTOCOL_VERSION; public static final String TLS_KEY_FILE; public static final String TLS_CERT_FILE; public static final String TLS_TRUST_STORE; @@ -319,6 +327,8 @@ public final class Config { WRITES_TO_RGSTREAMS_WITH_READ_PERMISSIONS = p.getBoolean(PROPERTY_WRITES_TO_RGSTREAMS_WITH_READ_PERMISSIONS); TLS_ENABLED = p.getBoolean(PROPERTY_TLS_ENABLED); + String[] protocols = new TLSProtocolVersion(p.get(PROPERTY_TLS_PROTOCOL_VERSION)).getProtocols(); + TLS_PROTOCOL_VERSION = Collections.unmodifiableList(Arrays.asList(protocols)); TLS_KEY_FILE = p.get(PROPERTY_TLS_KEY_FILE); TLS_CERT_FILE = p.get(PROPERTY_TLS_CERT_FILE); TLS_TRUST_STORE = p.get(PROPERTY_TLS_TRUST_STORE); @@ -453,6 +463,7 @@ private static GRPCServerConfig createGrpcServerConfig() { .authorizationEnabled(Config.AUTHORIZATION_ENABLED) .userPasswordFile(Config.USER_PASSWORD_FILE) .tlsEnabled(Config.TLS_ENABLED) + .tlsProtocolVersion(Config.TLS_PROTOCOL_VERSION.toArray(new String[Config.TLS_PROTOCOL_VERSION.size()])) .tlsCertFile(Config.TLS_CERT_FILE) .tlsTrustStore(Config.TLS_TRUST_STORE) .tlsKeyFile(Config.TLS_KEY_FILE) diff --git a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/CheckpointStoreTests.java b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/CheckpointStoreTests.java index e5b31cda1a7..3d7aff4003d 100644 --- a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/CheckpointStoreTests.java +++ b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/CheckpointStoreTests.java @@ -41,7 +41,7 @@ public abstract class CheckpointStoreTests { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected CheckpointStore checkpointStore; @Before diff --git a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEPSerializedRHTest.java b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEPSerializedRHTest.java index eeb06985968..6b191aecbc2 100644 --- a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEPSerializedRHTest.java +++ b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEPSerializedRHTest.java @@ -45,7 +45,7 @@ public class ConcurrentEPSerializedRHTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new 
Timeout(30, TimeUnit.SECONDS); private LinkedBlockingQueue requestStream = new LinkedBlockingQueue<>(); private List history = Collections.synchronizedList(new ArrayList<>()); diff --git a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEventProcessorTest.java b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEventProcessorTest.java index a0a712500c0..023642aa651 100644 --- a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEventProcessorTest.java +++ b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ConcurrentEventProcessorTest.java @@ -49,7 +49,7 @@ public class ConcurrentEventProcessorTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @Data @AllArgsConstructor diff --git a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/EventProcessorTest.java b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/EventProcessorTest.java index 048e8bc475f..45722ebc821 100644 --- a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/EventProcessorTest.java +++ b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/EventProcessorTest.java @@ -234,7 +234,7 @@ public String getCheckpointName() { } @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private ScheduledExecutorService executor; @Before diff --git a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/SerializedRequestHandlerTest.java b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/SerializedRequestHandlerTest.java index 217a835a987..4796bcc0da3 100644 --- a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/SerializedRequestHandlerTest.java +++ b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/SerializedRequestHandlerTest.java @@ -50,7 +50,7 @@ public class SerializedRequestHandlerTest extends ThreadPooledTestSuite { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @Test(timeout = 10000) public void testProcessEvent() throws InterruptedException, ExecutionException { diff --git a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZKCheckpointStoreTests.java b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZKCheckpointStoreTests.java index 72a81ac0055..27942bdafb8 100644 --- a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZKCheckpointStoreTests.java +++ b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZKCheckpointStoreTests.java @@ -144,6 +144,7 @@ public void readerWithoutCheckpointTest() throws Exception { final String reader1 = "reader1"; final String reader2 = "reader2"; + Assert.assertFalse(checkpointStore.isHealthy()); Set processes = checkpointStore.getProcesses(); Assert.assertEquals(0, processes.size()); diff --git a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZkCheckpointStoreConnectivityTest.java b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZkCheckpointStoreConnectivityTest.java index 56b6b8cc811..fcb4812453a 100644 --- a/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZkCheckpointStoreConnectivityTest.java +++ 
b/controller/src/test/java/io/pravega/controller/eventProcessor/impl/ZkCheckpointStoreConnectivityTest.java @@ -37,7 +37,7 @@ public class ZkCheckpointStoreConnectivityTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private CuratorFramework cli; private CheckpointStore checkpointStore; diff --git a/controller/src/test/java/io/pravega/controller/fault/ControllerClusterListenerTest.java b/controller/src/test/java/io/pravega/controller/fault/ControllerClusterListenerTest.java index 3c4106216ff..3a6177ab69f 100644 --- a/controller/src/test/java/io/pravega/controller/fault/ControllerClusterListenerTest.java +++ b/controller/src/test/java/io/pravega/controller/fault/ControllerClusterListenerTest.java @@ -53,6 +53,7 @@ import org.junit.Test; import org.junit.ClassRule; import org.junit.rules.Timeout; +import org.junit.Assert; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; @@ -138,6 +139,8 @@ public void clusterListenerTest() throws Exception { clusterListener.awaitRunning(); + Assert.assertTrue(clusterListener.areAllSweepersReady()); + validateAddedNode(host.getHostId()); // Add a new host @@ -151,6 +154,7 @@ public void clusterListenerTest() throws Exception { clusterListener.stopAsync(); clusterListener.awaitTerminated(); + Assert.assertFalse(clusterListener.isRunning()); validateRemovedNode(host.getHostId()); } diff --git a/controller/src/test/java/io/pravega/controller/fault/SegmentContainerMonitorTest.java b/controller/src/test/java/io/pravega/controller/fault/SegmentContainerMonitorTest.java index 6ed5a847219..0aec5845916 100644 --- a/controller/src/test/java/io/pravega/controller/fault/SegmentContainerMonitorTest.java +++ b/controller/src/test/java/io/pravega/controller/fault/SegmentContainerMonitorTest.java @@ -140,7 +140,6 @@ public Host getHostForTableSegment(String table) { SegmentContainerMonitor monitor = new SegmentContainerMonitor(new MockHostControllerStore(), PRAVEGA_ZK_CURATOR_RESOURCE.client, new UniformContainerBalancer(), 2); monitor.startAsync().awaitRunning(); - assertEquals(hostStore.getContainerCount(), Config.HOST_STORE_CONTAINER_COUNT); //Rebalance should be triggered for the very first attempt. Verify that no hosts are added to the store. 
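Editor's note on the SegmentHelperMock changes that follow: createSegment, createTransaction, and createTableSegment each gained a trailing rolloverSizeBytes parameter, so every Mockito stub needs one more anyLong() matcher. A minimal sketch of the pattern, with a hypothetical SegmentClient interface standing in for the real SegmentHelper (Mockito assumed on the classpath):

import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;

import java.util.concurrent.CompletableFuture;

public class RolloverStubSketch {
    // Hypothetical collaborator whose createSegment() just gained a
    // trailing rolloverSizeBytes argument, as SegmentHelper did in this patch.
    interface SegmentClient {
        CompletableFuture<Void> createSegment(String scope, String stream,
                                              long segmentId, long rolloverSizeBytes);
    }

    public static void main(String[] args) {
        SegmentClient client = mock(SegmentClient.class);
        // Once any argument matcher is used, Mockito requires a matcher for
        // every argument, so the new parameter shows up as an extra anyLong()
        // in each stub rather than only in the tests that care about it.
        doReturn(CompletableFuture.completedFuture(null))
                .when(client)
                .createSegment(anyString(), anyString(), anyLong(), anyLong());
        client.createSegment("scope", "stream", 0L, 1024L).join();
    }
}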
diff --git a/controller/src/test/java/io/pravega/controller/mocks/SegmentHelperMock.java b/controller/src/test/java/io/pravega/controller/mocks/SegmentHelperMock.java index 6cfbdc33af7..55081475a68 100644 --- a/controller/src/test/java/io/pravega/controller/mocks/SegmentHelperMock.java +++ b/controller/src/test/java/io/pravega/controller/mocks/SegmentHelperMock.java @@ -27,6 +27,8 @@ import io.pravega.controller.server.SegmentHelper; import io.pravega.controller.server.WireCommandFailedException; import io.pravega.controller.store.host.HostControllerStore; +import io.pravega.controller.store.host.HostStoreFactory; +import io.pravega.controller.store.host.impl.HostMonitorConfigImpl; import io.pravega.controller.stream.api.grpc.v1.Controller.NodeUri; import io.pravega.controller.stream.api.grpc.v1.Controller.TxnStatus; import io.pravega.shared.protocol.netty.WireCommandType; @@ -57,7 +59,7 @@ public class SegmentHelperMock { private static final int SERVICE_PORT = 12345; public static SegmentHelper getSegmentHelperMock() { - SegmentHelper helper = spy(new SegmentHelper(mock(ConnectionPool.class), mock(HostControllerStore.class), mock(ScheduledExecutorService.class))); + SegmentHelper helper = spy(new SegmentHelper(mock(ConnectionPool.class), HostStoreFactory.createInMemoryStore(HostMonitorConfigImpl.dummyConfig()), mock(ScheduledExecutorService.class))); doReturn(NodeUri.newBuilder().setEndpoint("localhost").setPort(SERVICE_PORT).build()).when(helper).getSegmentUri( anyString(), anyString(), anyLong()); @@ -66,13 +68,13 @@ public static SegmentHelper getSegmentHelperMock() { anyString(), anyString(), anyLong(), any(), anyLong()); doReturn(CompletableFuture.completedFuture(null)).when(helper).createSegment( - anyString(), anyString(), anyLong(), any(), any(), anyLong()); + anyString(), anyString(), anyLong(), any(), any(), anyLong(), anyLong()); doReturn(CompletableFuture.completedFuture(null)).when(helper).deleteSegment( anyString(), anyString(), anyLong(), any(), anyLong()); doReturn(CompletableFuture.completedFuture(null)).when(helper).createTransaction( - anyString(), anyString(), anyLong(), any(), any(), anyLong()); + anyString(), anyString(), anyLong(), any(), any(), anyLong(), anyLong()); TxnStatus txnStatus = TxnStatus.newBuilder().setStatus(TxnStatus.Status.SUCCESS).build(); doReturn(CompletableFuture.completedFuture(txnStatus)).when(helper).abortTransaction( @@ -92,7 +94,7 @@ public static SegmentHelper getSegmentHelperMock() { .when(helper).getSegmentInfo(anyString(), anyString(), anyLong(), anyString(), anyLong()); doReturn(CompletableFuture.completedFuture(null)).when(helper).createTableSegment( - anyString(), anyString(), anyLong(), anyBoolean(), anyInt()); + anyString(), anyString(), anyLong(), anyBoolean(), anyInt(), anyLong()); doReturn(CompletableFuture.completedFuture(null)).when(helper).deleteTableSegment( anyString(), anyBoolean(), anyString(), anyLong()); @@ -109,13 +111,13 @@ public static SegmentHelper getFailingSegmentHelperMock() { anyString(), anyString(), anyLong(), any(), anyLong()); doReturn(Futures.failedFuture(new RuntimeException())).when(helper).createSegment( - anyString(), anyString(), anyLong(), any(), any(), anyLong()); + anyString(), anyString(), anyLong(), any(), any(), anyLong(), anyLong()); doReturn(Futures.failedFuture(new RuntimeException())).when(helper).deleteSegment( anyString(), anyString(), anyLong(), any(), anyLong()); doReturn(Futures.failedFuture(new RuntimeException())).when(helper).createTransaction( - anyString(), anyString(), anyLong(), 
any(), any(), anyLong()); + anyString(), anyString(), anyLong(), any(), any(), anyLong(), anyLong()); doReturn(Futures.failedFuture(new RuntimeException())).when(helper).abortTransaction( anyString(), anyString(), anyLong(), any(), any(), anyLong()); @@ -127,7 +129,7 @@ public static SegmentHelper getFailingSegmentHelperMock() { anyString(), anyString(), any(), anyLong(), any(), anyLong()); doReturn(Futures.failedFuture(new RuntimeException())).when(helper).createTableSegment( - anyString(), anyString(), anyLong(), anyBoolean(), anyInt()); + anyString(), anyString(), anyLong(), anyBoolean(), anyInt(), anyLong()); return helper; } @@ -147,7 +149,7 @@ public static SegmentHelper getSegmentHelperMockForTables(ScheduledExecutorServi mapOfTablesPosition.putIfAbsent(tableName, new HashMap<>()); } }, executor); - }).when(helper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt()); + }).when(helper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt(), anyLong()); // endregion // region delete table diff --git a/controller/src/test/java/io/pravega/controller/rest/v1/ModelHelperTest.java b/controller/src/test/java/io/pravega/controller/rest/v1/ModelHelperTest.java index 3df6b677012..ac55b21ccf3 100644 --- a/controller/src/test/java/io/pravega/controller/rest/v1/ModelHelperTest.java +++ b/controller/src/test/java/io/pravega/controller/rest/v1/ModelHelperTest.java @@ -54,11 +54,24 @@ public void testGetCreateStreamConfig() { createStreamRequest.setStreamName("stream"); createStreamRequest.setScalingPolicy(scalingConfig); - // Stream with Fixed Scaling Policy and no Retention Policy + // Stream with Fixed Scaling Policy and no Retention Policy and default rolloverSize StreamConfiguration streamConfig = getCreateStreamConfig(createStreamRequest); Assert.assertEquals(ScalingPolicy.ScaleType.FIXED_NUM_SEGMENTS, streamConfig.getScalingPolicy().getScaleType()); Assert.assertEquals(2, streamConfig.getScalingPolicy().getMinNumSegments()); Assert.assertNull(streamConfig.getRetentionPolicy()); + Assert.assertEquals(streamConfig.getTimestampAggregationTimeout(), 0); + Assert.assertEquals(streamConfig.getRolloverSizeBytes(), 0); + + // Stream with Fixed Scaling Policy and no Retention Policy and positive rolloverSize + createStreamRequest = new CreateStreamRequest(); + createStreamRequest.setStreamName("stream"); + createStreamRequest.setScalingPolicy(scalingConfig); + createStreamRequest.setTimestampAggregationTimeout(1000L); + createStreamRequest.setRolloverSizeBytes(1024L); + + streamConfig = getCreateStreamConfig(createStreamRequest); + Assert.assertEquals(streamConfig.getTimestampAggregationTimeout(), 1000L); + Assert.assertEquals(streamConfig.getRolloverSizeBytes(), 1024L); // Stream with Fixed Scaling Policy & Size based Retention Policy with min & max limits RetentionConfig retentionConfig = new RetentionConfig(); @@ -196,11 +209,15 @@ public void testGetUpdateStreamConfig() { scalingConfig.setMinSegments(2); UpdateStreamRequest updateStreamRequest = new UpdateStreamRequest(); updateStreamRequest.setScalingPolicy(scalingConfig); + updateStreamRequest.setTimestampAggregationTimeout(1000L); + updateStreamRequest.setRolloverSizeBytes(1024L); StreamConfiguration streamConfig = getUpdateStreamConfig(updateStreamRequest); Assert.assertEquals(ScalingPolicy.ScaleType.FIXED_NUM_SEGMENTS, streamConfig.getScalingPolicy().getScaleType()); Assert.assertEquals(2, streamConfig.getScalingPolicy().getMinNumSegments()); 
Assert.assertNull(streamConfig.getRetentionPolicy()); + Assert.assertEquals(streamConfig.getTimestampAggregationTimeout(), 1000L); + Assert.assertEquals(streamConfig.getRolloverSizeBytes(), 1024L); scalingConfig.setType(ScalingConfig.TypeEnum.BY_RATE_IN_EVENTS_PER_SEC); scalingConfig.setTargetRate(123); @@ -254,10 +271,14 @@ public void testEncodeStreamResponse() { Assert.assertEquals(ScalingConfig.TypeEnum.FIXED_NUM_SEGMENTS, streamProperty.getScalingPolicy().getType()); Assert.assertEquals((Integer) 1, streamProperty.getScalingPolicy().getMinSegments()); Assert.assertNull(streamProperty.getRetentionPolicy()); + Assert.assertEquals((long) streamProperty.getTimestampAggregationTimeout(), 0L); + Assert.assertEquals((long) streamProperty.getRolloverSizeBytes(), 0L); streamConfig = StreamConfiguration.builder() .scalingPolicy(ScalingPolicy.byDataRate(100, 200, 1)) .retentionPolicy(RetentionPolicy.byTime(Duration.ofDays(100L))) + .timestampAggregationTimeout(1000L) + .rolloverSizeBytes(1024L) .build(); streamProperty = encodeStreamResponse("scope", "stream", streamConfig); Assert.assertEquals(ScalingConfig.TypeEnum.BY_RATE_IN_KBYTES_PER_SEC, @@ -268,6 +289,8 @@ public void testEncodeStreamResponse() { Assert.assertEquals(RetentionConfig.TypeEnum.LIMITED_DAYS, streamProperty.getRetentionPolicy().getType()); Assert.assertEquals((Long) 100L, streamProperty.getRetentionPolicy().getValue()); + Assert.assertEquals((long) streamProperty.getTimestampAggregationTimeout(), 1000L); + Assert.assertEquals((long) streamProperty.getRolloverSizeBytes(), 1024L); streamConfig = StreamConfiguration.builder() .scalingPolicy(ScalingPolicy.byEventRate(100, 200, 1)) diff --git a/controller/src/test/java/io/pravega/controller/rest/v1/PingTest.java b/controller/src/test/java/io/pravega/controller/rest/v1/PingTest.java index 6a044eba376..da4e98d5b08 100644 --- a/controller/src/test/java/io/pravega/controller/rest/v1/PingTest.java +++ b/controller/src/test/java/io/pravega/controller/rest/v1/PingTest.java @@ -124,6 +124,7 @@ protected Client createJerseyClient() throws Exception { RESTServerConfig getServerConfig() throws Exception { return RESTServerConfigImpl.builder().host("localhost").port(TestUtils.getAvailableListenPort()) .tlsEnabled(true) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .keyFilePath(getResourcePath(SecurityConfigDefaults.TLS_SERVER_KEYSTORE_NAME)) .keyFilePasswordPath(getResourcePath(SecurityConfigDefaults.TLS_PASSWORD_FILE_NAME)) .build(); @@ -140,6 +141,7 @@ public static class FailingSecurePingTest extends SecurePingTest { RESTServerConfig getServerConfig() throws Exception { return RESTServerConfigImpl.builder().host("localhost").port(TestUtils.getAvailableListenPort()) .tlsEnabled(true) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .keyFilePath(getResourcePath(SecurityConfigDefaults.TLS_SERVER_KEYSTORE_NAME)) .keyFilePasswordPath("Wrong_Path") .build(); diff --git a/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataAuthFocusedTests.java b/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataAuthFocusedTests.java index 93fa315c1f2..cf073e5b56f 100644 --- a/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataAuthFocusedTests.java +++ b/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataAuthFocusedTests.java @@ -67,8 +67,10 @@ import javax.ws.rs.core.MultivaluedHashMap; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.Response; + import org.junit.After; import 
org.junit.AfterClass; +import org.junit.Assert; import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; @@ -442,7 +444,7 @@ public void testListStreamsReturnsEmptyListWhenUserHasNoStreamsAssigned() { // Assert assertEquals(HTTP_STATUS_OK, response.getStatus()); - assertEquals(null, listedStreams.getStreams()); + Assert.assertTrue(listedStreams.getStreams().isEmpty()); response.close(); } diff --git a/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataTests.java b/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataTests.java index f2751e702f2..8c28e4f3276 100644 --- a/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataTests.java +++ b/controller/src/test/java/io/pravega/controller/rest/v1/StreamMetaDataTests.java @@ -70,6 +70,8 @@ import javax.ws.rs.core.GenericType; import javax.ws.rs.core.Response; import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.tuple.ImmutablePair; +import org.apache.commons.lang3.tuple.Pair; import org.junit.After; import org.junit.Before; import org.junit.Rule; @@ -756,7 +758,7 @@ public void testListStreams() throws ExecutionException, InterruptedException { assertEquals("List element", "stream2", streamsListResp.getStreams().get(1).getStreamName()); response.close(); - response = addAuthHeaders(client.target(resourceURI).queryParam("showInternalStreams", "true").request()).buildGet().invoke(); + response = addAuthHeaders(client.target(resourceURI).queryParam("filter_type", "showInternalStreams").request()).buildGet().invoke(); assertEquals("List Streams response code", 200, response.getStatus()); assertTrue(response.bufferEntity()); streamsListResp = response.readEntity(StreamsList.class); @@ -765,6 +767,39 @@ public void testListStreams() throws ExecutionException, InterruptedException { streamsListResp.getStreams().get(0).getStreamName()); response.close(); + // Test for tags + final StreamConfiguration streamConfigurationForTags = StreamConfiguration.builder() + .scalingPolicy(ScalingPolicy.byEventRate(100, 2, 2)) + .retentionPolicy(RetentionPolicy.byTime(Duration.ofMillis(123L))).tag("testTag") + .build(); + List<String> tagStream = new ArrayList<>(); + tagStream.add("streamForTags"); + ImmutablePair<List<String>, String> tagPair = new ImmutablePair<>(tagStream, ""); + ImmutablePair<List<String>, String> emptyPair = new ImmutablePair<>(Collections.emptyList(), ""); + when(mockControllerService.listStreamsForTag(eq("scope1"), eq("testTag"), anyString(), anyLong())).thenReturn(CompletableFuture.completedFuture(tagPair)).thenReturn(CompletableFuture.completedFuture(emptyPair)); + when(mockControllerService.getStream(eq("scope1"), eq("streamForTags"), anyLong())).thenReturn(CompletableFuture.completedFuture(streamConfigurationForTags)); + response = addAuthHeaders(client.target(resourceURI).queryParam("filter_type", "tag").queryParam("filter_value", "testTag").request()).buildGet().invoke(); + assertEquals("List Streams response code", 200, response.getStatus()); + assertTrue(response.bufferEntity()); + final StreamsList streamsListForTags = response.readEntity(StreamsList.class); + assertEquals("List count", streamsListForTags.getStreams().size(), 1); + assertEquals("List element", streamsListForTags.getStreams().get(0).getStreamName(), "streamForTags"); + response.close(); + + final CompletableFuture<Pair<List<String>, String>> completableFutureForTag = new CompletableFuture<>(); + completableFutureForTag.completeExceptionally(StoreException.create(StoreException.Type.DATA_NOT_FOUND, "scope1")); +
when(mockControllerService.listStreamsForTag(eq("scope1"), eq("testTag"), anyString(), anyLong())).thenReturn(completableFutureForTag); + response = addAuthHeaders(client.target(resourceURI).queryParam("filter_type", "tag").queryParam("filter_value", "testTag").request()).buildGet().invoke(); + assertEquals("List Streams response code", 404, response.getStatus()); + response.close(); + + final CompletableFuture<Pair<List<String>, String>> completableFutureForTag1 = new CompletableFuture<>(); + completableFutureForTag1.completeExceptionally(new Exception()); + when(mockControllerService.listStreamsForTag(eq("scope1"), eq("testTag"), anyString(), anyLong())).thenReturn(completableFutureForTag1); + response = addAuthHeaders(client.target(resourceURI).queryParam("filter_type", "tag").queryParam("filter_value", "testTag").request()).buildGet().invoke(); + assertEquals("List Streams response code", 500, response.getStatus()); + response.close(); + // Test to list large number of streams. streamsList = new HashMap<>(); for (int i = 0; i < 50000; i++) { diff --git a/controller/src/test/java/io/pravega/controller/server/ControllerServiceConfigTest.java b/controller/src/test/java/io/pravega/controller/server/ControllerServiceConfigTest.java index 2007f290075..358994bdd6b 100644 --- a/controller/src/test/java/io/pravega/controller/server/ControllerServiceConfigTest.java +++ b/controller/src/test/java/io/pravega/controller/server/ControllerServiceConfigTest.java @@ -38,7 +38,7 @@ */ public class ControllerServiceConfigTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @Test public void configTests() { diff --git a/controller/src/test/java/io/pravega/controller/server/ControllerServiceMainTest.java b/controller/src/test/java/io/pravega/controller/server/ControllerServiceMainTest.java index e5c4c8fb9e4..e86f54227cf 100644 --- a/controller/src/test/java/io/pravega/controller/server/ControllerServiceMainTest.java +++ b/controller/src/test/java/io/pravega/controller/server/ControllerServiceMainTest.java @@ -25,23 +25,18 @@ import java.io.IOException; import java.util.Optional; import java.util.concurrent.CompletableFuture; -import java.util.concurrent.TimeUnit; - import lombok.Cleanup; import lombok.extern.slf4j.Slf4j; import org.junit.After; import org.junit.Before; -import org.junit.Rule; import org.junit.Test; -import org.junit.rules.Timeout; /** * ControllerServiceMain tests.
*/ public abstract class ControllerServiceMainTest { private static final CompletableFuture INVOKED = new CompletableFuture<>(); - @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + protected StoreClientConfig storeClientConfig; private final boolean disableControllerCluster; diff --git a/controller/src/test/java/io/pravega/controller/server/ControllerServiceStarterTest.java b/controller/src/test/java/io/pravega/controller/server/ControllerServiceStarterTest.java index 642293c5643..dbc978350d2 100644 --- a/controller/src/test/java/io/pravega/controller/server/ControllerServiceStarterTest.java +++ b/controller/src/test/java/io/pravega/controller/server/ControllerServiceStarterTest.java @@ -52,7 +52,7 @@ @Slf4j public abstract class ControllerServiceStarterTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected StoreClientConfig storeClientConfig; protected StoreClient storeClient; protected final int grpcPort; @@ -126,6 +126,7 @@ protected ControllerServiceConfig createControllerServiceConfig() { .port(grpcPort) .authorizationEnabled(enableAuth) .tlsEnabled(enableAuth) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .tlsCertFile(SecurityConfigDefaults.TLS_SERVER_CERT_PATH) .tlsKeyFile(SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_PATH) .userPasswordFile(SecurityConfigDefaults.AUTH_HANDLER_INPUT_PATH) diff --git a/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithKVTableTest.java b/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithKVTableTest.java index 0f669360d12..b4318a9e9af 100644 --- a/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithKVTableTest.java +++ b/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithKVTableTest.java @@ -73,7 +73,7 @@ public abstract class ControllerServiceWithKVTableTest { public static final PravegaZkCuratorResource PRAVEGA_ZK_CURATOR_RESOURCE = new PravegaZkCuratorResource(); private static final String SCOPE = "scope"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); protected SegmentHelper segmentHelperMock; diff --git a/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithPravegaTablesKVTableTest.java b/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithPravegaTablesKVTableTest.java index b9efb2a4ce4..3abbcbdc534 100644 --- a/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithPravegaTablesKVTableTest.java +++ b/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithPravegaTablesKVTableTest.java @@ -33,6 +33,6 @@ StreamMetadataStore getStore() { @Override KVTableMetadataStore getKVTStore() { return KVTableStoreFactory.createPravegaTablesStore(segmentHelperMock, - GrpcAuthHelper.getDisabledAuthHelper().getDisabledAuthHelper(), PRAVEGA_ZK_CURATOR_RESOURCE.client, executor); + GrpcAuthHelper.getDisabledAuthHelper(), PRAVEGA_ZK_CURATOR_RESOURCE.client, executor); } } diff --git a/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithStreamTest.java b/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithStreamTest.java index f5a12be061b..8393c7f3b3a 100644 --- 
a/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithStreamTest.java +++ b/controller/src/test/java/io/pravega/controller/server/ControllerServiceWithStreamTest.java @@ -104,7 +104,7 @@ public abstract class ControllerServiceWithStreamTest { private static final String STREAM = "stream"; private static final String STREAM1 = "stream1"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); protected CuratorFramework zkClient; @@ -381,7 +381,8 @@ public void createReaderGroupTest() throws Exception { .maxOutstandingCheckpointRequest(2) .retentionType(ReaderGroupConfig.StreamDataRetention.AUTOMATIC_RELEASE_AT_LAST_CHECKPOINT) .startingStreamCuts(startSC) - .endingStreamCuts(endSC).build(); + .endingStreamCuts(endSC) + .build(); Controller.CreateReaderGroupResponse rgStatus = consumer.createReaderGroup(SCOPE, "rg1", rgConfig, System.currentTimeMillis(), 0L).get(); assertEquals(Controller.CreateReaderGroupResponse.Status.SUCCESS, rgStatus.getStatus()); diff --git a/controller/src/test/java/io/pravega/controller/server/PravegaTablesControllerServiceStarterTest.java b/controller/src/test/java/io/pravega/controller/server/PravegaTablesControllerServiceStarterTest.java index 2d121f3c69f..4a33500fff3 100644 --- a/controller/src/test/java/io/pravega/controller/server/PravegaTablesControllerServiceStarterTest.java +++ b/controller/src/test/java/io/pravega/controller/server/PravegaTablesControllerServiceStarterTest.java @@ -41,12 +41,12 @@ StoreClientConfig getStoreConfig(ZKClientConfig zkClientConfig) { @Override StreamMetadataStore getStore(StoreClient storeClient) { return StreamStoreFactory.createPravegaTablesStore(SegmentHelperMock.getSegmentHelperMockForTables(executor), - GrpcAuthHelper.getDisabledAuthHelper().getDisabledAuthHelper(), (CuratorFramework) storeClient.getClient(), executor); + GrpcAuthHelper.getDisabledAuthHelper(), (CuratorFramework) storeClient.getClient(), executor); } @Override KVTableMetadataStore getKVTStore(StoreClient storeClient) { return KVTableStoreFactory.createPravegaTablesStore(SegmentHelperMock.getSegmentHelperMockForTables(executor), - GrpcAuthHelper.getDisabledAuthHelper().getDisabledAuthHelper(), (CuratorFramework) storeClient.getClient(), executor); + GrpcAuthHelper.getDisabledAuthHelper(), (CuratorFramework) storeClient.getClient(), executor); } } diff --git a/controller/src/test/java/io/pravega/controller/server/SegmentHelperTest.java b/controller/src/test/java/io/pravega/controller/server/SegmentHelperTest.java index 78502bbcc41..4699e7ec6a9 100644 --- a/controller/src/test/java/io/pravega/controller/server/SegmentHelperTest.java +++ b/controller/src/test/java/io/pravega/controller/server/SegmentHelperTest.java @@ -84,6 +84,7 @@ public class SegmentHelperTest extends ThreadPooledTestSuite { @Test public void getSegmentUri() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); helper.getSegmentUri("", "", 0); @@ -92,9 +93,10 @@ public void getSegmentUri() { @Test public void createSegment() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.createSegment("", "", - 0, 
ScalingPolicy.fixed(2), "", Long.MIN_VALUE); + 0, ScalingPolicy.fixed(2), "", Long.MIN_VALUE, 1024L); long requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.AuthTokenCheckFailed(requestId, "SomeException")); AssertExtensions.assertThrows("", @@ -104,18 +106,18 @@ public void createSegment() { ); // On receiving SegmentAlreadyExists true should be returned. - CompletableFuture result = helper.createSegment("", "", 0L, ScalingPolicy.fixed(2), "", requestId); + CompletableFuture result = helper.createSegment("", "", 0L, ScalingPolicy.fixed(2), "", requestId, 0L); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentCreated(requestId, getQualifiedStreamSegmentName("", "", 0L))); result.join(); - CompletableFuture ret = helper.createSegment("", "", 0L, ScalingPolicy.fixed(2), "", requestId); + CompletableFuture ret = helper.createSegment("", "", 0L, ScalingPolicy.fixed(2), "", requestId, -1024L); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentAlreadyExists(requestId, getQualifiedStreamSegmentName("", "", 0L), "")); ret.join(); // handleUnexpectedReply - CompletableFuture resultException = helper.createSegment("", "", 0L, ScalingPolicy.fixed(2), "", requestId); + CompletableFuture resultException = helper.createSegment("", "", 0L, ScalingPolicy.fixed(2), "", requestId, 0L); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentDeleted(requestId, getQualifiedStreamSegmentName("", "", 0L))); AssertExtensions.assertThrows("", @@ -124,7 +126,7 @@ public void createSegment() { ); Supplier> futureSupplier = () -> helper.createSegment("", "", - 0, ScalingPolicy.fixed(2), "", Long.MIN_VALUE); + 0, ScalingPolicy.fixed(2), "", Long.MIN_VALUE, 0L); validateProcessingFailureCFE(factory, futureSupplier); testConnectionFailure(factory, futureSupplier); } @@ -132,6 +134,7 @@ public void createSegment() { @Test public void truncateSegment() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.truncateSegment("", "", 0L, 0L, "", System.nanoTime()); @@ -166,6 +169,7 @@ public void truncateSegment() { @Test public void deleteSegment() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.deleteSegment("", "", 0L, "", System.nanoTime()); long requestId = ((MockConnection) (factory.connection)).getRequestId(); @@ -195,6 +199,7 @@ public void deleteSegment() { @Test public void sealSegment() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.sealSegment("", "", 0L, "", System.nanoTime()); @@ -228,10 +233,11 @@ public void sealSegment() { @Test public void createTransaction() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); UUID txId = new UUID(0, 0L); CompletableFuture retVal = helper.createTransaction("", "", 0L, txId, - "", System.nanoTime()); + "", System.nanoTime(), 1024 * 1024L); long requestId = 
((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.AuthTokenCheckFailed(requestId, "SomeException")); @@ -242,18 +248,18 @@ public void createTransaction() { ); CompletableFuture result = helper.createTransaction("", "", 0L, - new UUID(0L, 0L), "", System.nanoTime()); + new UUID(0L, 0L), "", System.nanoTime(), 1024 * 1024L); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentCreated(requestId, getQualifiedStreamSegmentName("", "", 0L))); result.join(); - result = helper.createTransaction("", "", 0L, new UUID(0L, 0L), "", System.nanoTime()); + result = helper.createTransaction("", "", 0L, new UUID(0L, 0L), "", System.nanoTime(), 1024 * 1024L); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentAlreadyExists(requestId, getQualifiedStreamSegmentName("", "", 0L), "")); result.join(); Supplier> futureSupplier = () -> helper.createTransaction("", "", 0L, txId, - "", System.nanoTime()); + "", System.nanoTime(), 1024 * 1024L); validateProcessingFailureCFE(factory, futureSupplier); testConnectionFailure(factory, futureSupplier); @@ -262,6 +268,7 @@ public void createTransaction() { @Test public void commitTransaction() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.commitTransaction("", "", 0L, 0L, new UUID(0, 0L), "", System.nanoTime()); @@ -295,6 +302,7 @@ public void commitTransaction() { @Test public void abortTransaction() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.abortTransaction("", "", 0L, new UUID(0, 0L), "", System.nanoTime()); @@ -330,6 +338,7 @@ public void abortTransaction() { @Test public void updatePolicy() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.updatePolicy("", "", ScalingPolicy.fixed(1), 0L, "", System.nanoTime()); @@ -357,6 +366,7 @@ public void updatePolicy() { @Test public void getSegmentInfo() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); CompletableFuture retVal = helper.getSegmentInfo("", "", 0L, "", System.nanoTime()); @@ -381,6 +391,37 @@ public void getSegmentInfo() { testConnectionFailure(factory, futureSupplier); } + @Test + public void testGetTableSegmentInfo() { + MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup + SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); + String tableName = "transactionsInEpoch-0.#.a4fa6f45-b49d-4364-b91e-597c6e9ff78e"; + CompletableFuture result = helper.getTableSegmentInfo(tableName, "", 0L); + long requestId = ((MockConnection) (factory.connection)).getRequestId(); + factory.rp.process(new WireCommands.TableSegmentInfo(requestId, tableName, 0L, 100L, 7L, 10)); + WireCommands.TableSegmentInfo tableInfo = result.join(); + assertEquals(7L, tableInfo.getEntryCount()); + assertEquals(0L, tableInfo.getStartOffset()); + assertEquals(100L, tableInfo.getLength()); + assertEquals(10, 
tableInfo.getKeyLength()); + + String tableNotExists = "tableNotExists"; + CompletableFuture result1 = helper.getTableSegmentInfo(tableNotExists, "", 1L); + requestId = ((MockConnection) (factory.connection)).getRequestId(); + factory.rp.process(new WireCommands.NoSuchSegment(requestId, tableNotExists, "", -1)); + AssertExtensions.assertThrows("", + () -> result1.join(), + ex -> Exceptions.unwrap(ex) instanceof WireCommandFailedException + && ((WireCommandFailedException) Exceptions.unwrap(ex)).getReason().equals(WireCommandFailedException.Reason.SegmentDoesNotExist) + ); + + final String exceptionTestTable = "testTable"; + Supplier> futureSupplier = () -> helper.getTableSegmentInfo(exceptionTestTable, "", 2L); + validateProcessingFailureCFE(factory, futureSupplier); + testConnectionFailure(factory, futureSupplier); + } + @Test public void testGetSegmentAttribute() { MockConnectionFactory factory = new MockConnectionFactory(); @@ -445,7 +486,7 @@ public void testReadSegment() { MockConnectionFactory factory = new MockConnectionFactory(); @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); - CompletableFuture retVal = helper.readSegment("", 0, 10, + CompletableFuture retVal = helper.readSegment("", 0L, 10, new PravegaNodeUri("localhost", 12345), ""); long requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.AuthTokenCheckFailed(requestId, "SomeException")); @@ -455,14 +496,14 @@ public void testReadSegment() { && ((WireCommandFailedException) ex).getReason().equals(WireCommandFailedException.Reason.AuthFailed) ); - CompletableFuture result = helper.readSegment("", 0, 10, + CompletableFuture result = helper.readSegment("", 0L, 10, new PravegaNodeUri("localhost", 12345), ""); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentRead("", 0, true, true, Unpooled.wrappedBuffer(new byte[10]), requestId)); result.join(); - Supplier> futureSupplier = () -> helper.readSegment("", 0, 10, + Supplier> futureSupplier = () -> helper.readSegment("", 0L, 10, new PravegaNodeUri("localhost", 12345), ""); validateProcessingFailureCFE(factory, futureSupplier); testConnectionFailure(factory, futureSupplier); @@ -471,24 +512,25 @@ public void testReadSegment() { @Test public void testCreateTableSegment() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); long requestId = Long.MIN_VALUE; // On receiving SegmentAlreadyExists true should be returned. - CompletableFuture result = helper.createTableSegment("", "", requestId, false, 0); + CompletableFuture result = helper.createTableSegment("", "", requestId, false, 0, 0L); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentAlreadyExists(requestId, getQualifiedStreamSegmentName("", "", 0L), "")); result.join(); // On Receiving SegmentCreated true should be returned. - result = helper.createTableSegment("", "", requestId, false, 0); + result = helper.createTableSegment("", "", requestId, false, 0, -1024L); requestId = ((MockConnection) (factory.connection)).getRequestId(); factory.rp.process(new WireCommands.SegmentCreated(requestId, getQualifiedStreamSegmentName("", "", 0L))); result.join(); // Validate failure conditions. 
- Supplier> futureSupplier = () -> helper.createTableSegment("", "", 0L, false, 0); + Supplier> futureSupplier = () -> helper.createTableSegment("", "", 0L, false, 0, 1024L); validateAuthTokenCheckFailed(factory, futureSupplier); validateWrongHost(factory, futureSupplier); validateConnectionDropped(factory, futureSupplier); @@ -500,6 +542,7 @@ public void testCreateTableSegment() { @Test public void testDeleteTableSegment() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); long requestId = System.nanoTime(); @@ -536,6 +579,7 @@ public void testDeleteTableSegment() { @Test public void testUpdateTableEntries() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); List entries = Arrays.asList(TableSegmentEntry.notExists("k".getBytes(), "v".getBytes()), TableSegmentEntry.unversioned("k1".getBytes(), "v".getBytes()), @@ -581,6 +625,7 @@ public void testUpdateTableEntries() { @Test public void testRemoveTableKeys() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); List keys = Arrays.asList(TableSegmentKey.notExists("k".getBytes()), TableSegmentKey.notExists("k1".getBytes())); @@ -621,6 +666,7 @@ public void testRemoveTableKeys() { @Test public void testReadTable() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); List keysToBeRead = Arrays.asList(TableSegmentKey.unversioned(key0), TableSegmentKey.unversioned(key1)); @@ -655,6 +701,7 @@ public void testReadTable() { @Test public void testReadTableKeys() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); final List keys1 = Arrays.asList( @@ -713,6 +760,7 @@ public void testReadTableKeys() { @Test public void testReadTableEntries() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); List entries1 = Arrays.asList( TableSegmentEntry.versioned(key0, value, 10L), @@ -768,6 +816,7 @@ public void testReadTableEntries() { @Test(timeout = 10000) public void testTimeout() { MockConnectionFactory factory = new MockConnectionFactory(); + @Cleanup SegmentHelper helper = new SegmentHelper(factory, new MockHostControllerStore(), executorService()); helper.setTimeout(Duration.ofMillis(100)); List keysToBeRead = Arrays.asList(TableSegmentKey.unversioned(key0), @@ -785,6 +834,7 @@ public void testTimeout() { public void testProcessAndRethrowExceptions() { // The wire-command itself we use for this test is immaterial, so we are using the simplest one here. 
WireCommands.Hello dummyRequest = new WireCommands.Hello(0, 0); + @SuppressWarnings("resource") SegmentHelper objectUnderTest = new SegmentHelper(null, null, null); AssertExtensions.assertThrows("Unexpected exception thrown", diff --git a/controller/src/test/java/io/pravega/controller/server/SegmentStoreConnectionManagerTest.java b/controller/src/test/java/io/pravega/controller/server/SegmentStoreConnectionManagerTest.java deleted file mode 100644 index 8fac4155b5c..00000000000 --- a/controller/src/test/java/io/pravega/controller/server/SegmentStoreConnectionManagerTest.java +++ /dev/null @@ -1,422 +0,0 @@ -/** - * Copyright Pravega Authors. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package io.pravega.controller.server; - -import io.pravega.client.connection.impl.ClientConnection; -import io.pravega.client.connection.impl.ConnectionFactory; -import io.pravega.common.Exceptions; -import io.pravega.controller.server.SegmentStoreConnectionManager.ConnectionWrapper; -import io.pravega.controller.server.SegmentStoreConnectionManager.ReusableReplyProcessor; -import io.pravega.controller.server.SegmentStoreConnectionManager.SegmentStoreConnectionPool; -import io.pravega.shared.protocol.netty.Append; -import io.pravega.shared.protocol.netty.ConnectionFailedException; -import io.pravega.shared.protocol.netty.PravegaNodeUri; -import io.pravega.shared.protocol.netty.ReplyProcessor; -import io.pravega.shared.protocol.netty.WireCommand; -import io.pravega.shared.protocol.netty.WireCommands; -import io.pravega.test.common.AssertExtensions; -import java.util.List; -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.ScheduledExecutorService; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicInteger; -import lombok.Getter; -import org.junit.Before; -import org.junit.Test; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; - -public class SegmentStoreConnectionManagerTest { - private AtomicInteger replyProcCounter; - private AtomicInteger connectionCounter; - - @Before - public void setUp() { - replyProcCounter = new AtomicInteger(); - connectionCounter = new AtomicInteger(); - } - - @Test(timeout = 30000) - public void connectionTest() { - PravegaNodeUri uri = new PravegaNodeUri("pravega", 1234); - ConnectionFactory cf = spy(new MockConnectionFactory()); - - SegmentStoreConnectionPool pool = new SegmentStoreConnectionPool(uri, cf, 2, 1); - ReplyProcessor myReplyProc = getReplyProcessor(); - - // we should be able to establish two connections safely - ConnectionWrapper connection1 = pool.getConnection(myReplyProc).join(); - // verify that the connection returned is of type MockConnection - 
assertTrue(connection1.getConnection() instanceof MockConnection); - assertTrue(((MockConnection) connection1.getConnection()).getRp() instanceof ReusableReplyProcessor); - assertEquals(connection1.getReplyProcessor(), myReplyProc); - verify(cf, times(1)).establishConnection(any(), any()); - - ReplyProcessor myReplyProc2 = getReplyProcessor(); - - ConnectionWrapper connection2 = pool.getConnection(myReplyProc2).join(); - assertEquals(connection2.getReplyProcessor(), myReplyProc2); - verify(cf, times(2)).establishConnection(any(), any()); - - // return these connections - connection1.close(); - connection2.close(); - - // idempotent connection close - connection1.close(); - connection2.close(); - - // verify that connections are reset - assertNull(connection1.getReplyProcessor()); - assertNull(connection2.getReplyProcessor()); - - // verify that one connection was closed - connection2.getConnection().close(); - - assertTrue(((MockConnection) connection2.getConnection()).isClosed.get()); - - // now create two more connections - // 1st should be delivered from available connections. - connection1 = pool.getConnection(getReplyProcessor()).join(); - // we should get back first connection - assertEquals(((MockConnection) connection1.getConnection()).uniqueId, 1); - - // 2nd request should result in creation of new connection - connection2 = pool.getConnection(getReplyProcessor()).join(); - assertEquals(((MockConnection) connection2.getConnection()).uniqueId, 3); - - // attempt to create a third connection - CompletableFuture connection3Future = pool.getConnection(getReplyProcessor()); - // this would not have completed. the waiting queue should have this entry - assertFalse(connection3Future.isDone()); - - CompletableFuture connection4Future = pool.getConnection(myReplyProc); - assertFalse(connection4Future.isDone()); - - // return connection1. it should be assigned to first waiting connection (connection3) - connection1.close(); - ConnectionWrapper connection3 = connection3Future.join(); - // verify that connection 3 received a connection object - assertEquals(((MockConnection) connection3.getConnection()).uniqueId, 1); - - // now fail connection 2 and return it. - connection2.failConnection(); - connection2.close(); - assertTrue(((MockConnection) connection2.getConnection()).isClosed.get()); - - // this should not be given to the waiting request. instead a new connection should be established. - ConnectionWrapper connection4 = connection4Future.join(); - assertEquals(((MockConnection) connection4.getConnection()).uniqueId, 4); - - // create another waiting request - CompletableFuture connection5Future = pool.getConnection(myReplyProc); - - // test shutdown - pool.shutdown(); - connection3.close(); - assertFalse(((MockConnection) connection3.getConnection()).isClosed.get()); - - // connection 5 should have been returned by using connection3 - ConnectionWrapper connection5 = connection5Future.join(); - // since returned connection served the waiting request no new connection should have been established - assertEquals(((MockConnection) connection5.getConnection()).uniqueId, 1); - - // return connection 4.. this should be closed as there is no one waiting - connection4.close(); - assertTrue(((MockConnection) connection4.getConnection()).isClosed.get()); - - // we should still be able to request new connections.. request connection 6.. 
this should be served immediately - // by way of new connection - ConnectionWrapper connection6 = pool.getConnection(myReplyProc).join(); - assertEquals(((MockConnection) connection6.getConnection()).uniqueId, 5); - - // request connect 7. this should wait as connection could is 2. - CompletableFuture connection7Future = pool.getConnection(myReplyProc); - assertFalse(connection7Future.isDone()); - - // return connection 5.. connection7 should get connection5's object and no new connection should be established - connection5.close(); - ConnectionWrapper connection7 = connection7Future.join(); - assertEquals(((MockConnection) connection7.getConnection()).uniqueId, 1); - - // return connection 6 and 7. they should be closed. - connection6.close(); - assertTrue(((MockConnection) connection6.getConnection()).isClosed.get()); - - connection7.close(); - assertTrue(((MockConnection) connection7.getConnection()).isClosed.get()); - - // create connection 8 - // close the connection explicitly - ConnectionWrapper connection8 = pool.getConnection(myReplyProc).join(); - assertEquals(((MockConnection) connection8.getConnection()).uniqueId, 6); - connection8.getConnection().close(); - - CompletableFuture future = new CompletableFuture<>(); - connection8.sendAsync(new WireCommands.Hello(0, 0), future); - AssertExtensions.assertFutureThrows("Connection should fail", - future, e -> { - Throwable unwrap = Exceptions.unwrap(e); - return unwrap instanceof WireCommandFailedException && - ((WireCommandFailedException) unwrap).getReason().equals(WireCommandFailedException.Reason.ConnectionFailed); - }); - } - - private ReplyProcessor getReplyProcessor() { - int uniqueId = replyProcCounter.incrementAndGet(); - return new ReplyProcessor() { - @Override - public void hello(WireCommands.Hello hello) { - - } - - @Override - public void wrongHost(WireCommands.WrongHost wrongHost) { - - } - - @Override - public void segmentAlreadyExists(WireCommands.SegmentAlreadyExists segmentAlreadyExists) { - - } - - @Override - public void segmentIsSealed(WireCommands.SegmentIsSealed segmentIsSealed) { - - } - - @Override - public void segmentIsTruncated(WireCommands.SegmentIsTruncated segmentIsTruncated) { - - } - - @Override - public void noSuchSegment(WireCommands.NoSuchSegment noSuchSegment) { - - } - - @Override - public void tableSegmentNotEmpty(WireCommands.TableSegmentNotEmpty tableSegmentNotEmpty) { - - } - - @Override - public void invalidEventNumber(WireCommands.InvalidEventNumber invalidEventNumber) { - - } - - @Override - public void appendSetup(WireCommands.AppendSetup appendSetup) { - - } - - @Override - public void dataAppended(WireCommands.DataAppended dataAppended) { - - } - - @Override - public void conditionalCheckFailed(WireCommands.ConditionalCheckFailed dataNotAppended) { - - } - - @Override - public void segmentRead(WireCommands.SegmentRead segmentRead) { - - } - - @Override - public void segmentAttributeUpdated(WireCommands.SegmentAttributeUpdated segmentAttributeUpdated) { - - } - - @Override - public void segmentAttribute(WireCommands.SegmentAttribute segmentAttribute) { - - } - - @Override - public void streamSegmentInfo(WireCommands.StreamSegmentInfo streamInfo) { - - } - - @Override - public void segmentCreated(WireCommands.SegmentCreated segmentCreated) { - - } - - @Override - public void segmentsMerged(WireCommands.SegmentsMerged segmentsMerged) { - - } - - @Override - public void segmentSealed(WireCommands.SegmentSealed segmentSealed) { - - } - - @Override - public void 
segmentTruncated(WireCommands.SegmentTruncated segmentTruncated) { - - } - - @Override - public void segmentDeleted(WireCommands.SegmentDeleted segmentDeleted) { - - } - - @Override - public void operationUnsupported(WireCommands.OperationUnsupported operationUnsupported) { - - } - - @Override - public void keepAlive(WireCommands.KeepAlive keepAlive) { - - } - - @Override - public void connectionDropped() { - - } - - @Override - public void segmentPolicyUpdated(WireCommands.SegmentPolicyUpdated segmentPolicyUpdated) { - - } - - @Override - public void processingFailure(Exception error) { - - } - - @Override - public void authTokenCheckFailed(WireCommands.AuthTokenCheckFailed authTokenCheckFailed) { - - } - - @Override - public void tableEntriesUpdated(WireCommands.TableEntriesUpdated tableEntriesUpdated) { - - } - - @Override - public void tableKeysRemoved(WireCommands.TableKeysRemoved tableKeysRemoved) { - - } - - @Override - public void tableRead(WireCommands.TableRead tableRead) { - - } - - @Override - public void tableKeyDoesNotExist(WireCommands.TableKeyDoesNotExist tableKeyDoesNotExist) { - - } - - @Override - public void tableKeyBadVersion(WireCommands.TableKeyBadVersion tableKeyBadVersion) { - - } - - @Override - public void tableKeysRead(WireCommands.TableKeysRead tableKeysRead) { - - } - - @Override - public void tableEntriesRead(WireCommands.TableEntriesRead tableEntriesRead) { - - } - - @Override - public void tableEntriesDeltaRead(WireCommands.TableEntriesDeltaRead tableEntriesDeltaRead) { - - } - - @Override - public void errorMessage(WireCommands.ErrorMessage errorMessage) { - - } - }; - } - - private class MockConnectionFactory implements ConnectionFactory { - @Getter - private ReplyProcessor rp; - - @Override - public CompletableFuture establishConnection(PravegaNodeUri endpoint, ReplyProcessor rp) { - this.rp = rp; - ClientConnection connection = new MockConnection(rp); - return CompletableFuture.completedFuture(connection); - } - - @Override - public ScheduledExecutorService getInternalExecutor() { - return null; - } - - @Override - public void close() { - - } - } - - private class MockConnection implements ClientConnection { - int uniqueId = connectionCounter.incrementAndGet(); - - @Getter - private final ReplyProcessor rp; - @Getter - private AtomicBoolean isClosed = new AtomicBoolean(false); - - public MockConnection(ReplyProcessor rp) { - this.rp = rp; - } - - @Override - public void send(WireCommand cmd) throws ConnectionFailedException { - if (isClosed.get()) { - throw new ConnectionFailedException(); - } - } - - @Override - public void send(Append append) throws ConnectionFailedException { - - } - - - @Override - public void sendAsync(List appends, CompletedCallback callback) { - - } - - @Override - public void close() { - isClosed.set(true); - } - } -} diff --git a/controller/src/test/java/io/pravega/controller/server/ZKBackedControllerServiceStarterTest.java b/controller/src/test/java/io/pravega/controller/server/ZKBackedControllerServiceStarterTest.java index dbc44d057f6..d43ac5dee0e 100644 --- a/controller/src/test/java/io/pravega/controller/server/ZKBackedControllerServiceStarterTest.java +++ b/controller/src/test/java/io/pravega/controller/server/ZKBackedControllerServiceStarterTest.java @@ -47,6 +47,7 @@ import io.pravega.shared.protocol.netty.WireCommand; import io.pravega.shared.protocol.netty.WireCommands; import io.pravega.test.common.AssertExtensions; +import io.pravega.test.common.SecurityConfigDefaults; import 
io.pravega.test.common.TestingServerStarter; import java.util.List; import java.util.Optional; @@ -262,6 +263,7 @@ protected ControllerServiceConfig createControllerServiceConfigWithEventProcesso .port(grpcPort) .authorizationEnabled(false) .tlsEnabled(false) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .build())) .restServerConfig(Optional.empty()) .build(); diff --git a/controller/src/test/java/io/pravega/controller/server/bucket/BucketServiceTest.java b/controller/src/test/java/io/pravega/controller/server/bucket/BucketServiceTest.java index 6fda6d71470..a3c2ff16466 100644 --- a/controller/src/test/java/io/pravega/controller/server/bucket/BucketServiceTest.java +++ b/controller/src/test/java/io/pravega/controller/server/bucket/BucketServiceTest.java @@ -52,7 +52,7 @@ public abstract class BucketServiceTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); StreamMetadataStore streamMetadataStore; BucketStore bucketStore; BucketManager retentionService; diff --git a/controller/src/test/java/io/pravega/controller/server/bucket/WatermarkWorkflowTest.java b/controller/src/test/java/io/pravega/controller/server/bucket/WatermarkWorkflowTest.java index 6e6fec085ec..8ef150753f2 100644 --- a/controller/src/test/java/io/pravega/controller/server/bucket/WatermarkWorkflowTest.java +++ b/controller/src/test/java/io/pravega/controller/server/bucket/WatermarkWorkflowTest.java @@ -91,7 +91,7 @@ public class WatermarkWorkflowTest { @ClassRule public static final PravegaZkCuratorResource PRAVEGA_ZK_CURATOR_RESOURCE = new PravegaZkCuratorResource(10000, 1000, RETRY_POLICY); @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); StreamMetadataStore streamMetadataStore; BucketStore bucketStore; @@ -128,9 +128,11 @@ public void testWatermarkClient() { Stream stream = new StreamImpl("scope", "stream"); SynchronizerClientFactory clientFactory = spy(SynchronizerClientFactory.class); + @Cleanup MockRevisionedStreamClient revisionedClient = new MockRevisionedStreamClient(); doAnswer(x -> revisionedClient).when(clientFactory).createRevisionedStreamClient(anyString(), any(), any()); + @Cleanup PeriodicWatermarking.WatermarkClient client = new PeriodicWatermarking.WatermarkClient(stream, clientFactory); // iteration 1 ==> null -> w1 client.reinitialize(); diff --git a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorPravegaTablesStreamTest.java b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorPravegaTablesStreamTest.java index 1ee54da4605..067bc6a417f 100644 --- a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorPravegaTablesStreamTest.java +++ b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorPravegaTablesStreamTest.java @@ -15,18 +15,116 @@ */ package io.pravega.controller.server.eventProcessor; +import io.pravega.client.stream.ScalingPolicy; +import io.pravega.client.stream.StreamConfiguration; +import io.pravega.common.Exceptions; +import io.pravega.common.concurrent.Futures; +import io.pravega.controller.mocks.EventHelperMock; +import io.pravega.controller.mocks.EventStreamWriterMock; import io.pravega.controller.mocks.SegmentHelperMock; +import io.pravega.controller.server.SegmentHelper; +import 
io.pravega.controller.server.eventProcessor.requesthandlers.CommitRequestHandler; import io.pravega.controller.server.security.auth.GrpcAuthHelper; +import io.pravega.controller.store.VersionedMetadata; import io.pravega.controller.store.stream.StreamMetadataStore; +import io.pravega.controller.store.stream.AbstractStreamMetadataStore; +import io.pravega.controller.store.stream.PravegaTablesStreamMetadataStore; +import io.pravega.controller.store.stream.VersionedTransactionData; +import io.pravega.controller.store.stream.State; +import io.pravega.controller.store.stream.OperationContext; +import io.pravega.controller.store.stream.TxnStatus; import io.pravega.controller.store.stream.StreamStoreFactory; +import io.pravega.controller.store.PravegaTablesStoreHelper; +import io.pravega.controller.store.stream.records.CommittingTransactionsRecord; +import io.pravega.controller.store.task.TaskStoreFactory; +import io.pravega.controller.task.EventHelper; +import io.pravega.controller.task.Stream.StreamMetadataTasks; +import io.pravega.controller.task.Stream.StreamTransactionMetadataTasks; +import io.pravega.controller.util.Config; +import io.pravega.shared.controller.event.CommitEvent; +import io.pravega.test.common.AssertExtensions; +import org.junit.Test; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; +import org.mockito.ArgumentMatchers; + +import java.time.Duration; +import java.util.List; +import java.util.function.Function; + +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.times; /** * Controller Event ProcessorTests. 
*/ public class ControllerEventProcessorPravegaTablesStreamTest extends ControllerEventProcessorTest { + @Override StreamMetadataStore createStore() { return StreamStoreFactory.createPravegaTablesStore( SegmentHelperMock.getSegmentHelperMockForTables(executor), GrpcAuthHelper.getDisabledAuthHelper(), PRAVEGA_ZK_CURATOR_RESOURCE.client, executor); } + + @Test(timeout = 10000) + public void testTxnPartialCommitRetry() { + PravegaTablesStoreHelper storeHelper = spy(new PravegaTablesStoreHelper(SegmentHelperMock.getSegmentHelperMockForTables(executor), GrpcAuthHelper.getDisabledAuthHelper(), executor)); + this.streamStore = new PravegaTablesStreamMetadataStore(PRAVEGA_ZK_CURATOR_RESOURCE.client, executor, Duration.ofHours(Config.COMPLETED_TRANSACTION_TTL_IN_HOURS), storeHelper); + SegmentHelper segmentHelperMock = SegmentHelperMock.getSegmentHelperMock(); + EventHelper eventHelperMock = EventHelperMock.getEventHelperMock(executor, "1", ((AbstractStreamMetadataStore) this.streamStore).getHostTaskIndex()); + StreamMetadataTasks streamMetadataTasks = new StreamMetadataTasks(streamStore, this.bucketStore, TaskStoreFactory.createInMemoryStore(executor), + segmentHelperMock, executor, "1", GrpcAuthHelper.getDisabledAuthHelper(), eventHelperMock); + StreamTransactionMetadataTasks streamTransactionMetadataTasks = new StreamTransactionMetadataTasks(this.streamStore, segmentHelperMock, + executor, "host", GrpcAuthHelper.getDisabledAuthHelper()); + streamTransactionMetadataTasks.initializeStreamWriters(new EventStreamWriterMock<>(), new EventStreamWriterMock<>()); + + String scope = "scope"; + String stream = "stream"; + // region createStream + final ScalingPolicy policy1 = ScalingPolicy.fixed(2); + final StreamConfiguration configuration1 = StreamConfiguration.builder().scalingPolicy(policy1).build(); + streamStore.createScope(scope, null, executor).join(); + long start = System.currentTimeMillis(); + streamStore.createStream(scope, stream, configuration1, start, null, executor).join(); + streamStore.setState(scope, stream, State.ACTIVE, null, executor).join(); + + StreamMetadataTasks spyStreamMetadataTasks = spy(streamMetadataTasks); + List<VersionedTransactionData> txnDataList = createAndCommitTransactions(3); + int epoch = txnDataList.get(0).getEpoch(); + spyStreamMetadataTasks.setRequestEventWriter(new EventStreamWriterMock<>()); + CommitRequestHandler commitEventProcessor = new CommitRequestHandler(streamStore, spyStreamMetadataTasks, streamTransactionMetadataTasks, bucketStore, executor); + + final String committingTxnsRecordKey = "committingTxns"; + long failingClientRequestId = 123L; + doReturn(failingClientRequestId).when(spyStreamMetadataTasks).getRequestId(any()); + + OperationContext context = this.streamStore.createStreamContext(scope, stream, failingClientRequestId); + streamStore.startCommitTransactions(scope, stream, 100, context, executor).join(); + + doReturn(Futures.failedFuture(new RuntimeException())).when(storeHelper).updateEntry(anyString(), eq(committingTxnsRecordKey), any(), ArgumentMatchers.<Function<CommittingTransactionsRecord, byte[]>>any(), any(), eq(failingClientRequestId)); + AssertExtensions.assertFutureThrows("Updating CommittingTxnRecord fails", commitEventProcessor.processEvent(new CommitEvent(scope, stream, epoch)), e -> Exceptions.unwrap(e) instanceof RuntimeException); + verify(storeHelper, times(1)).removeEntries(anyString(), any(), eq(failingClientRequestId)); + VersionedMetadata<CommittingTransactionsRecord> versionedCommitRecord = this.streamStore.getVersionedCommittingTransactionsRecord(scope, stream, context, executor).join(); + CommittingTransactionsRecord
commitRecord = versionedCommitRecord.getObject(); + assertFalse(CommittingTransactionsRecord.EMPTY.equals(commitRecord)); + for (VersionedTransactionData txnData : txnDataList) { + checkTransactionState(scope, stream, txnData.getId(), TxnStatus.COMMITTED); + } + + long goodClientRequestId = 4567L; + doReturn(goodClientRequestId).when(spyStreamMetadataTasks).getRequestId(any()); + commitEventProcessor.processEvent(new CommitEvent(scope, stream, epoch)).join(); + versionedCommitRecord = this.streamStore.getVersionedCommittingTransactionsRecord(scope, stream, context, executor).join(); + commitRecord = versionedCommitRecord.getObject(); + assertTrue(CommittingTransactionsRecord.EMPTY.equals(commitRecord)); + + for (VersionedTransactionData txnData : txnDataList) { + checkTransactionState(scope, stream, txnData.getId(), TxnStatus.COMMITTED); + } + } } diff --git a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorTest.java b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorTest.java index d74367fbfc2..21898c75f88 100644 --- a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorTest.java +++ b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorTest.java @@ -98,14 +98,14 @@ public abstract class ControllerEventProcessorTest { private static final String STREAM = "stream"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected CuratorFramework zkClient; protected ScheduledExecutorService executor; - private StreamMetadataStore streamStore; - private BucketStore bucketStore; - private StreamMetadataTasks streamMetadataTasks; - private StreamTransactionMetadataTasks streamTransactionMetadataTasks; + protected StreamMetadataStore streamStore; + protected BucketStore bucketStore; + protected StreamMetadataTasks streamMetadataTasks; + protected StreamTransactionMetadataTasks streamTransactionMetadataTasks; private HostControllerStore hostStore; private TestingServer zkServer; private SegmentHelper segmentHelperMock; @@ -325,7 +325,8 @@ public void testCommitAndStreamProcessorFairness() { assertNull(streamStore.getWaitingRequestProcessor(SCOPE, STREAM, null, executor).join()); streamStore.setState(SCOPE, STREAM, State.SCALING, null, executor).join(); - commitEventProcessor.processEvent(new CommitEvent(SCOPE, STREAM, epoch)).join(); + AssertExtensions.assertFutureThrows("Operation should be disallowed", commitEventProcessor.processEvent(new CommitEvent(SCOPE, STREAM, epoch)), + e -> Exceptions.unwrap(e) instanceof StoreException.OperationNotAllowedException); assertEquals(commitEventProcessor.getProcessorName(), streamStore.getWaitingRequestProcessor(SCOPE, STREAM, null, executor).join()); streamStore.setState(SCOPE, STREAM, State.ACTIVE, null, executor).join(); @@ -354,7 +355,7 @@ public void testCommitAndStreamProcessorFairness() { assertEquals(commitEventProcessor.getProcessorName(), streamStore.getWaitingRequestProcessor(SCOPE, STREAM, null, executor).join()); } - private List createAndCommitTransactions(int count) { + protected List createAndCommitTransactions(int count) { List retVal = new ArrayList<>(count); for (int i = 0; i < count; i++) { UUID txnId = streamStore.generateTransactionId(SCOPE, STREAM, null, executor).join(); @@ -387,7 +388,7 @@ public void testAbortEventProcessor() { checkTransactionState(SCOPE, STREAM, 
txnData.getId(), TxnStatus.ABORTED); } - private void checkTransactionState(String scope, String stream, UUID txnId, TxnStatus expectedStatus) { + protected void checkTransactionState(String scope, String stream, UUID txnId, TxnStatus expectedStatus) { TxnStatus txnStatus = streamStore.transactionStatus(scope, stream, txnId, null, executor).join(); assertEquals(expectedStatus, txnStatus); } diff --git a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorsTest.java b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorsTest.java index fbf84ffb3e4..38729955a8a 100644 --- a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorsTest.java +++ b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ControllerEventProcessorsTest.java @@ -24,12 +24,14 @@ import io.pravega.client.stream.EventStreamWriter; import io.pravega.client.stream.impl.PositionImpl; import io.pravega.common.concurrent.Futures; +import io.pravega.controller.eventProcessor.EventProcessorConfig; import io.pravega.controller.eventProcessor.EventProcessorGroup; import io.pravega.controller.eventProcessor.EventProcessorSystem; import io.pravega.controller.server.eventProcessor.impl.ControllerEventProcessorConfigImpl; import io.pravega.controller.store.checkpoint.CheckpointStore; import io.pravega.controller.store.checkpoint.CheckpointStoreException; -import io.pravega.controller.store.host.HostControllerStore; +import io.pravega.controller.store.checkpoint.CheckpointStoreFactory; +import io.pravega.controller.store.checkpoint.ZKCheckpointStore; import io.pravega.controller.store.kvtable.KVTableMetadataStore; import io.pravega.controller.store.stream.BucketStore; import io.pravega.controller.store.stream.StreamMetadataStore; @@ -50,6 +52,7 @@ import java.util.concurrent.CompletableFuture; import java.util.concurrent.CountDownLatch; import java.util.concurrent.Executor; +import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; @@ -58,20 +61,36 @@ import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.ThreadPooledTestSuite; import lombok.Cleanup; +import org.apache.curator.CuratorZookeeperClient; +import org.apache.curator.framework.CuratorFramework; +import org.apache.curator.framework.listen.Listenable; +import org.apache.curator.framework.state.ConnectionStateListener; +import org.junit.Assert; import org.junit.Rule; import org.junit.Test; import org.junit.rules.Timeout; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; +import static org.mockito.ArgumentMatchers.eq; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyString; -import static org.mockito.Mockito.*; +import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.doNothing; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.when; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.atLeast; +import static org.mockito.Mockito.atLeastOnce; +import static org.mockito.Mockito.never; public class ControllerEventProcessorsTest extends ThreadPooledTestSuite { @Rule - public 
Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @Override public int getThreadPoolSize() { @@ -89,13 +108,106 @@ public void testEventKey() { assertEquals(commitEvent.getKey(), "test/test"); } + @Test(timeout = 30000L) + public void testIsReady() throws Exception { + Controller controller = mock(Controller.class); + StreamMetadataStore streamStore = mock(StreamMetadataStore.class); + BucketStore bucketStore = mock(BucketStore.class); + ConnectionPool connectionPool = mock(ConnectionPool.class); + StreamMetadataTasks streamMetadataTasks = mock(StreamMetadataTasks.class); + StreamTransactionMetadataTasks streamTransactionMetadataTasks = mock(StreamTransactionMetadataTasks.class); + KVTableMetadataStore kvtStore = mock(KVTableMetadataStore.class); + TableMetadataTasks kvtTasks = mock(TableMetadataTasks.class); + ControllerEventProcessorConfig config = ControllerEventProcessorConfigImpl.withDefault(); + EventProcessorSystem system = mock(EventProcessorSystem.class); + CuratorZookeeperClient curatorZKClientMock = mock(CuratorZookeeperClient.class); + CuratorFramework client = mock(CuratorFramework.class); + Listenable listen = mock(Listenable.class); + doNothing().when(listen).addListener(any(ConnectionStateListener.class)); + doReturn(listen).when(client).getConnectionStateListenable(); + doReturn(curatorZKClientMock).when(client).getZookeeperClient(); + doReturn(true).when(curatorZKClientMock).isConnected(); + ZKCheckpointStore checkpointStore = (ZKCheckpointStore) CheckpointStoreFactory.createZKStore(client); + + doAnswer(x -> null).when(streamMetadataTasks).initializeStreamWriters(any(), any()); + doAnswer(x -> null).when(streamTransactionMetadataTasks).initializeStreamWriters(any(EventStreamClientFactory.class), + any(ControllerEventProcessorConfig.class)); + CompletableFuture createScopeResponseFuture = new CompletableFuture<>(); + CompletableFuture createScopeSignalFuture = new CompletableFuture<>(); + doAnswer(x -> { + createScopeSignalFuture.complete(null); + return createScopeResponseFuture; + }).when(controller).createScope(anyString()); + + LinkedBlockingQueue> createStreamResponses = new LinkedBlockingQueue<>(); + LinkedBlockingQueue> createStreamSignals = new LinkedBlockingQueue<>(); + List> createStreamResponsesList = new LinkedList<>(); + List> createStreamSignalsList = new LinkedList<>(); + for (int i = 0; i < 4; i++) { + CompletableFuture responseFuture = new CompletableFuture<>(); + CompletableFuture signalFuture = new CompletableFuture<>(); + createStreamResponsesList.add(responseFuture); + createStreamResponses.add(responseFuture); + createStreamSignalsList.add(signalFuture); + createStreamSignals.add(signalFuture); + } + + // return a future from latches queue + doAnswer(x -> { + createStreamSignals.take().complete(null); + return createStreamResponses.take(); + }).when(controller).createStream(anyString(), anyString(), any()); + + @Cleanup + ControllerEventProcessors processors = spy(new ControllerEventProcessors("host1", + config, controller, checkpointStore, streamStore, bucketStore, + connectionPool, streamMetadataTasks, streamTransactionMetadataTasks, + kvtStore, kvtTasks, system, executorService())); + + //Check isReady() method before invoking bootstrap + Assert.assertFalse(processors.getBootstrapCompleted().get()); + Assert.assertTrue(processors.isMetadataServiceConnected()); + Assert.assertFalse(processors.isRunning()); + Assert.assertFalse(processors.isReady()); + + // call bootstrap on 
ControllerEventProcessors + processors.bootstrap(streamTransactionMetadataTasks, streamMetadataTasks, kvtTasks); + + // wait on create scope being called. + createScopeSignalFuture.join(); + createScopeResponseFuture.complete(true); + createStreamSignalsList.get(0).join(); + createStreamSignalsList.get(1).join(); + createStreamSignalsList.get(2).join(); + createStreamSignalsList.get(3).join(); + createStreamResponsesList.get(0).complete(true); + createStreamResponsesList.get(1).complete(true); + createStreamResponsesList.get(2).complete(true); + createStreamResponsesList.get(3).complete(true); + + AssertExtensions.assertEventuallyEquals(true, () -> processors.getBootstrapCompleted().get(), 10000); + Assert.assertTrue(processors.isMetadataServiceConnected()); + Assert.assertFalse(processors.isRunning()); + Assert.assertFalse(processors.isReady()); + + EventProcessorGroup mockEventProcessorGroup = mock(EventProcessorGroup.class); + doNothing().when(mockEventProcessorGroup).awaitRunning(); + doReturn(mockEventProcessorGroup).when(system).createEventProcessorGroup(any(EventProcessorConfig.class), any(CheckpointStore.class), any(ScheduledExecutorService.class)); + + processors.startAsync(); + processors.awaitRunning(); + Assert.assertTrue(processors.isMetadataServiceConnected()); + Assert.assertTrue(processors.isBootstrapCompleted()); + Assert.assertTrue(processors.isRunning()); + Assert.assertTrue(processors.isReady()); + } + @Test(timeout = 10000) public void testHandleOrphaned() throws CheckpointStoreException { Controller localController = mock(Controller.class); CheckpointStore checkpointStore = mock(CheckpointStore.class); StreamMetadataStore streamStore = mock(StreamMetadataStore.class); BucketStore bucketStore = mock(BucketStore.class); - HostControllerStore hostStore = mock(HostControllerStore.class); ConnectionPool connectionPool = mock(ConnectionPool.class); StreamMetadataTasks streamMetadataTasks = mock(StreamMetadataTasks.class); StreamTransactionMetadataTasks streamTransactionMetadataTasks = mock(StreamTransactionMetadataTasks.class); @@ -107,7 +219,6 @@ public void testHandleOrphaned() throws CheckpointStoreException { EventProcessorGroup mockProcessor = spy(processor); doThrow(new CheckpointStoreException("host not found")).when(mockProcessor).notifyProcessFailure("host3"); - when(system.createEventProcessorGroup(any(), any(), any())).thenReturn(mockProcessor); @Cleanup @@ -115,8 +226,11 @@ public void testHandleOrphaned() throws CheckpointStoreException { config, localController, checkpointStore, streamStore, bucketStore, connectionPool, streamMetadataTasks, streamTransactionMetadataTasks, kvtStore, kvtTasks, system, executorService()); - //check for a case where init is not initalized so that kvtRequestProcessors don't get initialized and will be null + //check for a case where init is not initialized so that kvtRequestProcessors don't get initialized and will be null assertTrue(Futures.await(processors.sweepFailedProcesses(() -> Sets.newHashSet("host1")))); + Assert.assertFalse(processors.isReady()); + Assert.assertFalse(processors.isBootstrapCompleted()); + Assert.assertFalse(processors.isMetadataServiceConnected()); processors.startAsync(); processors.awaitRunning(); assertTrue(Futures.await(processors.sweepFailedProcesses(() -> Sets.newHashSet("host1")))); @@ -126,7 +240,7 @@ public void testHandleOrphaned() throws CheckpointStoreException { } @Test(timeout = 30000L) - public void testBootstrap() { + public void testBootstrap() throws Exception { Controller controller = 
mock(Controller.class); CheckpointStore checkpointStore = mock(CheckpointStore.class); StreamMetadataStore streamStore = mock(StreamMetadataStore.class); @@ -189,7 +303,7 @@ public void testBootstrap() { // call bootstrap on ControllerEventProcessors processors.bootstrap(streamTransactionMetadataTasks, streamMetadataTasks, kvtTasks); - + // wait on create scope being called. createScopeSignalsList.get(0).join(); @@ -237,6 +351,7 @@ public void testBootstrap() { createStreamResponsesList.get(5).complete(true); createStreamResponsesList.get(6).complete(true); createStreamResponsesList.get(7).complete(true); + AssertExtensions.assertEventuallyEquals(true, () -> processors.getBootstrapCompleted().get(), 10000); } @Test(timeout = 10000L) diff --git a/controller/src/test/java/io/pravega/controller/server/eventProcessor/PravegaTablesScaleRequestHandlerTest.java b/controller/src/test/java/io/pravega/controller/server/eventProcessor/PravegaTablesScaleRequestHandlerTest.java index 359ef05a4b2..7937ff08f19 100644 --- a/controller/src/test/java/io/pravega/controller/server/eventProcessor/PravegaTablesScaleRequestHandlerTest.java +++ b/controller/src/test/java/io/pravega/controller/server/eventProcessor/PravegaTablesScaleRequestHandlerTest.java @@ -60,7 +60,7 @@ Number getVersionNumber(VersionedMetadata versioned) { @Override StreamMetadataStore getStore() { storeHelper = spy(new PravegaTablesStoreHelper(segmentHelper, GrpcAuthHelper.getDisabledAuthHelper(), executor)); - return TestStreamStoreFactory.createPravegaTablesStore(zkClient, executor, storeHelper); + return TestStreamStoreFactory.createPravegaTablesStreamStore(zkClient, executor, storeHelper); } diff --git a/controller/src/test/java/io/pravega/controller/server/eventProcessor/RequestHandlersTest.java b/controller/src/test/java/io/pravega/controller/server/eventProcessor/RequestHandlersTest.java index 15caff4140e..aa3c4624d14 100644 --- a/controller/src/test/java/io/pravega/controller/server/eventProcessor/RequestHandlersTest.java +++ b/controller/src/test/java/io/pravega/controller/server/eventProcessor/RequestHandlersTest.java @@ -258,7 +258,7 @@ private void concurrentTxnCommit(String stream, String func, verify(streamStore1Spied, times(invocationCount.get("startCommitTransactions"))) .startCommitTransactions(anyString(), anyString(), anyInt(), any(), any()); verify(streamStore1Spied, times(invocationCount.get("completeCommitTransactions"))) - .completeCommitTransactions(anyString(), anyString(), any(), any(), any()); + .completeCommitTransactions(anyString(), anyString(), any(), any(), any(), any()); verify(streamStore1Spied, times(invocationCount.get("updateVersionedState"))) .updateVersionedState(anyString(), anyString(), any(), any(), any(), any()); @@ -270,6 +270,98 @@ private void concurrentTxnCommit(String stream, String func, streamStore2.close(); } + @SuppressWarnings("unchecked") + @Test(timeout = 300000) + public void updateSealedStream() throws Exception { + String stream = "updateSealed"; + StreamMetadataStore streamStore = getStore(); + StreamMetadataStore streamStoreSpied = spy(getStore()); + StreamConfiguration config = StreamConfiguration.builder().scalingPolicy( + ScalingPolicy.byEventRate(1, 2, 1)).build(); + streamStore.createStream(scope, stream, config, System.currentTimeMillis(), null, executor).join(); + + streamStore.setState(scope, stream, State.ACTIVE, null, executor).join(); + streamStore.setState(scope, stream, State.SEALED, null, executor).join(); + + UpdateStreamTask requestHandler = new 
UpdateStreamTask(streamMetadataTasks, streamStoreSpied, bucketStore, executor); + + CompletableFuture wait = new CompletableFuture<>(); + CompletableFuture signal = new CompletableFuture<>(); + + streamStore.startUpdateConfiguration(scope, stream, config, null, executor).join(); + + UpdateStreamEvent event = new UpdateStreamEvent(scope, stream, System.currentTimeMillis()); + + doAnswer(x -> { + signal.complete(null); + wait.join(); + return streamStore.completeUpdateConfiguration(x.getArgument(0), x.getArgument(1), + x.getArgument(2), x.getArgument(3), x.getArgument(4)); + }).when(streamStoreSpied).completeUpdateConfiguration(anyString(), anyString(), any(), any(), any()); + + CompletableFuture future = CompletableFuture.completedFuture(null) + .thenComposeAsync(v -> requestHandler.execute(event), executor); + signal.join(); + wait.complete(null); + + AssertExtensions.assertSuppliedFutureThrows("Updating sealed stream job should fail", () -> future, + e -> Exceptions.unwrap(e) instanceof UnsupportedOperationException); + + // validate + VersionedMetadata versioned = streamStore.getConfigurationRecord(scope, stream, null, executor).join(); + assertFalse(versioned.getObject().isUpdating()); + assertEquals(2, getVersionNumber(versioned.getVersion())); + assertEquals(State.SEALED, streamStore.getState(scope, stream, true, null, executor).join()); + streamStore.close(); + } + + @SuppressWarnings("unchecked") + @Test(timeout = 300000) + public void truncateSealedStream() throws Exception { + String stream = "truncateSealed"; + StreamMetadataStore streamStore = getStore(); + StreamMetadataStore streamStoreSpied = spy(getStore()); + StreamConfiguration config = StreamConfiguration.builder().scalingPolicy( + ScalingPolicy.byEventRate(1, 2, 1)).build(); + streamStore.createStream(scope, stream, config, System.currentTimeMillis(), null, executor).join(); + + streamStore.setState(scope, stream, State.ACTIVE, null, executor).join(); + streamStore.setState(scope, stream, State.SEALED, null, executor).join(); + + TruncateStreamTask requestHandler = new TruncateStreamTask(streamMetadataTasks, streamStoreSpied, executor); + + CompletableFuture wait = new CompletableFuture<>(); + CompletableFuture signal = new CompletableFuture<>(); + + Map map = new HashMap<>(); + map.put(0L, 100L); + + streamStore.startTruncation(scope, stream, map, null, executor).join(); + + TruncateStreamEvent event = new TruncateStreamEvent(scope, stream, System.currentTimeMillis()); + + doAnswer(x -> { + signal.complete(null); + wait.join(); + return streamStore.completeTruncation(x.getArgument(0), x.getArgument(1), + x.getArgument(2), x.getArgument(3), x.getArgument(4)); + }).when(streamStoreSpied).completeTruncation(anyString(), anyString(), any(), any(), any()); + + CompletableFuture future = CompletableFuture.completedFuture(null) + .thenComposeAsync(v -> requestHandler.execute(event), executor); + signal.join(); + wait.complete(null); + + AssertExtensions.assertSuppliedFutureThrows("Updating sealed stream job should fail", () -> future, + e -> Exceptions.unwrap(e) instanceof UnsupportedOperationException); + + // validate + VersionedMetadata versioned = streamStore.getTruncationRecord(scope, stream, null, executor).join(); + assertFalse(versioned.getObject().isUpdating()); + assertEquals(2, getVersionNumber(versioned.getVersion())); + assertEquals(State.SEALED, streamStore.getState(scope, stream, true, null, executor).join()); + streamStore.close(); + } @SuppressWarnings("unchecked") @Test(timeout = 300000) @@ -368,7 +460,7 @@ 
private void concurrentRollingTxnCommit(String stream, String func, verify(streamStore1Spied, times(invocationCount.get("completeRollingTxn"))) .completeRollingTxn(anyString(), anyString(), any(), any(), any(), any()); verify(streamStore1Spied, times(invocationCount.get("completeCommitTransactions"))) - .completeCommitTransactions(anyString(), anyString(), any(), any(), any()); + .completeCommitTransactions(anyString(), anyString(), any(), any(), any(), any()); verify(streamStore1Spied, times(invocationCount.get("updateVersionedState"))) .updateVersionedState(anyString(), anyString(), any(), any(), any(), any()); } else { @@ -398,8 +490,8 @@ private void setMockCommitTxnLatch(StreamMetadataStore store, StreamMetadataStor signal.complete(null); waitOn.join(); return store.completeCommitTransactions(x.getArgument(0), x.getArgument(1), - x.getArgument(2), x.getArgument(3), x.getArgument(4)); - }).when(spied).completeCommitTransactions(anyString(), anyString(), any(), any(), any()); + x.getArgument(2), x.getArgument(3), x.getArgument(4), Collections.emptyMap()); + }).when(spied).completeCommitTransactions(anyString(), anyString(), any(), any(), any(), any()); break; case "startRollingTxn": doAnswer(x -> { @@ -797,7 +889,8 @@ public void testScaleIgnoreFairness() { ScaleOpEvent scaleEvent = new ScaleOpEvent(fairness, fairness, Collections.singletonList(0L), Collections.singletonList(new AbstractMap.SimpleEntry<>(0.0, 1.0)), false, System.currentTimeMillis(), 0L); - streamRequestHandler.process(scaleEvent, () -> false).join(); + AssertExtensions.assertFutureThrows("", streamRequestHandler.process(scaleEvent, () -> false), + e -> Exceptions.unwrap(e) instanceof RuntimeException); // verify that scale was started assertEquals(State.SCALING, streamStore.getState(fairness, fairness, true, null, executor).join()); @@ -818,7 +911,8 @@ public void testScaleIgnoreFairness() { ScaleOpEvent scaleEvent2 = new ScaleOpEvent(fairness, fairness, Collections.singletonList(NameUtils.computeSegmentId(1, 1)), Collections.singletonList(new AbstractMap.SimpleEntry<>(0.0, 1.0)), false, System.currentTimeMillis(), 0L); - streamRequestHandler.process(scaleEvent2, () -> false).join(); + AssertExtensions.assertFutureThrows("", streamRequestHandler.process(scaleEvent2, () -> false), + e -> Exceptions.unwrap(e) instanceof StoreException.OperationNotAllowedException); streamStore.deleteWaitingRequestConditionally(fairness, fairness, "myProcessor", null, executor).join(); } @@ -841,7 +935,7 @@ public void testUpdateIgnoreFairness() { System.currentTimeMillis(), 0L).join(); // 1. set segment helper mock to throw exception - doAnswer(x -> Futures.failedFuture(new NullPointerException())) + doAnswer(x -> Futures.failedFuture(new RuntimeException())) .when(segmentHelper).updatePolicy(anyString(), anyString(), any(), anyLong(), anyString(), anyLong()); // 2. start process --> this should fail with a retryable exception while talking to segment store! @@ -852,7 +946,7 @@ public void testUpdateIgnoreFairness() { UpdateStreamEvent event = new UpdateStreamEvent(fairness, fairness, 0L); AssertExtensions.assertFutureThrows("", streamRequestHandler.process(event, () -> false), - e -> Exceptions.unwrap(e) instanceof NullPointerException); + e -> Exceptions.unwrap(e) instanceof RuntimeException); verify(segmentHelper, atLeastOnce()).updatePolicy(anyString(), anyString(), any(), anyLong(), anyString(), anyLong()); @@ -893,8 +987,7 @@ public void testTruncateIgnoreFairness() { System.currentTimeMillis(), 0L).join(); // 1. 
set segment helper mock to throw exception - Exception exception = StoreException.create(StoreException.Type.DATA_NOT_FOUND, "Some processing exception"); - doAnswer(x -> Futures.failedFuture(exception)) + doAnswer(x -> Futures.failedFuture(new RuntimeException())) .when(segmentHelper).truncateSegment(anyString(), anyString(), anyLong(), anyLong(), anyString(), anyLong()); // 2. start process --> this should fail with a retryable exception while talking to segment store! @@ -904,7 +997,7 @@ public void testTruncateIgnoreFairness() { TruncateStreamEvent event = new TruncateStreamEvent(fairness, fairness, 0L); AssertExtensions.assertFutureThrows("", streamRequestHandler.process(event, () -> false), - e -> Exceptions.unwrap(e) instanceof StoreException.DataNotFoundException); + e -> Exceptions.unwrap(e) instanceof RuntimeException); verify(segmentHelper, atLeastOnce()).truncateSegment(anyString(), anyString(), anyLong(), anyLong(), anyString(), anyLong()); @@ -935,13 +1028,12 @@ public void testCommitTxnIgnoreFairness() { streamMetadataTasks.createStream(fairness, fairness, StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build(), System.currentTimeMillis(), 0L).join(); - UUID txn = streamTransactionMetadataTasks.createTxn(fairness, fairness, 30000, 0L).join().getKey().getId(); + UUID txn = streamTransactionMetadataTasks.createTxn(fairness, fairness, 30000, 0L, 1024 * 1024L).join().getKey().getId(); streamStore.sealTransaction(fairness, fairness, txn, true, Optional.empty(), "", Long.MIN_VALUE, null, executor).join(); // 1. set segment helper mock to throw exception - Exception exception = StoreException.create(StoreException.Type.ILLEGAL_STATE, "Some processing exception"); - doAnswer(x -> Futures.failedFuture(exception)) + doAnswer(x -> Futures.failedFuture(new RuntimeException())) .when(segmentHelper).commitTransaction(anyString(), anyString(), anyLong(), anyLong(), any(), anyString(), anyLong()); @@ -954,7 +1046,7 @@ public void testCommitTxnIgnoreFairness() { CommitEvent event = new CommitEvent(fairness, fairness, 0); AssertExtensions.assertFutureThrows("", requestHandler.process(event, () -> false), - e -> Exceptions.unwrap(e) instanceof StoreException.IllegalStateException); + e -> Exceptions.unwrap(e) instanceof RuntimeException); verify(segmentHelper, atLeastOnce()).commitTransaction(anyString(), anyString(), anyLong(), anyLong(), any(), anyString(), anyLong()); @@ -997,8 +1089,7 @@ public void testSealIgnoreFairness() { System.currentTimeMillis(), 0L).join(); // 1. set segment helper mock to throw exception - Exception exception = StoreException.create(StoreException.Type.DATA_CONTAINER_NOT_FOUND, "Some processing exception"); - doAnswer(x -> Futures.failedFuture(exception)) + doAnswer(x -> Futures.failedFuture(new RuntimeException())) .when(segmentHelper).sealSegment(anyString(), anyString(), anyLong(), anyString(), anyLong()); // 2. start process --> this should fail with a retryable exception while talking to segment store! 
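The hunks above and below all apply the same change: a stubbed SegmentHelper call now fails with a plain RuntimeException, and a bare .join() is replaced by AssertExtensions.assertFutureThrows(...), which asserts that the returned future completes exceptionally with the expected root cause instead of letting join() throw unchecked. Below is a minimal, self-contained sketch of that assertion pattern in plain Java; the unwrap and assertFutureThrows helpers are illustrative stand-ins for Pravega's Exceptions.unwrap and AssertExtensions.assertFutureThrows, not their actual implementations.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutionException;
import java.util.function.Predicate;

public final class FutureAsserts {

    // Peels CompletionException/ExecutionException wrappers to reach the root cause,
    // similar in spirit to Pravega's Exceptions.unwrap().
    static Throwable unwrap(Throwable t) {
        while ((t instanceof CompletionException || t instanceof ExecutionException) && t.getCause() != null) {
            t = t.getCause();
        }
        return t;
    }

    // Fails unless the future completes exceptionally with a cause accepted by the tester,
    // mirroring how the tests in this patch use AssertExtensions.assertFutureThrows(...).
    static void assertFutureThrows(String message, CompletableFuture<?> future, Predicate<Throwable> tester) {
        try {
            future.join();
        } catch (CompletionException e) {
            if (tester.test(unwrap(e))) {
                return; // failed as expected
            }
            throw new AssertionError(message + ": unexpected cause " + unwrap(e), e);
        }
        throw new AssertionError(message + ": future completed normally");
    }

    public static void main(String[] args) {
        // Simulate a store/segment-helper call that fails with a RuntimeException.
        CompletableFuture<Void> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("simulated segment store failure"));
        assertFutureThrows("process should fail", failed, e -> e instanceof RuntimeException);
        System.out.println("future failed with the expected cause");
    }
}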
@@ -1058,7 +1149,8 @@ public void testDeleteIgnoreFairness() { assertEquals(State.SEALED, streamStore.getState(fairness, fairness, true, null, executor).join()); DeleteStreamEvent event = new DeleteStreamEvent(fairness, fairness, 0L, createTimestamp); - streamRequestHandler.process(event, () -> false).join(); + AssertExtensions.assertFutureThrows("", streamRequestHandler.process(event, () -> false), + e -> Exceptions.unwrap(e) instanceof RuntimeException); verify(segmentHelper, atLeastOnce()) .deleteSegment(anyString(), anyString(), anyLong(), anyString(), anyLong()); diff --git a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ScaleRequestHandlerTest.java b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ScaleRequestHandlerTest.java index 8f9ca62c9fe..51eeff532dd 100644 --- a/controller/src/test/java/io/pravega/controller/server/eventProcessor/ScaleRequestHandlerTest.java +++ b/controller/src/test/java/io/pravega/controller/server/eventProcessor/ScaleRequestHandlerTest.java @@ -110,7 +110,7 @@ public abstract class ScaleRequestHandlerTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); protected CuratorFramework zkClient; protected StreamMetadataStore streamStore; @@ -286,7 +286,7 @@ public void testScaleRequest() throws ExecutionException, InterruptedException { // This will bring down the test duration drastically because a retryable failure can keep retrying for few seconds. // And if someone changes retry durations and number of attempts in retry helper, it will impact this test's running time. // hence sending incorrect segmentsToSeal list which will result in a non retryable failure and this will fail immediately - assertTrue(Futures.await(multiplexer.process(new ScaleOpEvent(scope, stream, Lists.newArrayList(five), + assertFalse(Futures.await(multiplexer.process(new ScaleOpEvent(scope, stream, Lists.newArrayList(five), Lists.newArrayList(new AbstractMap.SimpleEntry<>(0.5, 1.0)), false, System.currentTimeMillis(), System.currentTimeMillis()), () -> false))); activeSegments = streamStore.getActiveSegments(scope, stream, null, executor).get(); assertTrue(activeSegments.stream().noneMatch(z -> z.segmentId() == three)); @@ -503,9 +503,10 @@ public void testInconsistentScaleRequestAfterRollingTxn() throws Exception { assertEquals(TxnStatus.COMMITTED, txnStatus); // 6. run scale. 
this should fail in scaleCreateNewEpochs with IllegalArgumentException with epochTransitionConsistent - requestHandler.process(new ScaleOpEvent(scope, stream, Lists.newArrayList(1L), + AssertExtensions.assertFutureThrows("epoch transition should be inconsistent", requestHandler.process(new ScaleOpEvent(scope, stream, Lists.newArrayList(1L), Lists.newArrayList(new AbstractMap.SimpleEntry<>(0.5, 0.75), new AbstractMap.SimpleEntry<>(0.75, 1.0)), - false, System.currentTimeMillis(), System.currentTimeMillis()), () -> false).join(); + false, System.currentTimeMillis(), System.currentTimeMillis()), () -> false), e -> Exceptions.unwrap(e) instanceof IllegalStateException); + state = streamStore.getState(scope, stream, true, null, executor).join(); assertEquals(State.ACTIVE, state); } diff --git a/controller/src/test/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestProcessorTest.java b/controller/src/test/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestProcessorTest.java index d4702b06be3..036fd63a20f 100644 --- a/controller/src/test/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestProcessorTest.java +++ b/controller/src/test/java/io/pravega/controller/server/eventProcessor/requesthandlers/StreamRequestProcessorTest.java @@ -46,7 +46,7 @@ public abstract class StreamRequestProcessorTest extends ThreadPooledTestSuite { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @Override public int getThreadPoolSize() { @@ -115,7 +115,7 @@ public TestRequestProcessor1(StreamMetadataStore streamMetadataStore, ScheduledE } public CompletableFuture testProcess(TestEvent1 event) { - return withCompletion(this, event, event.scope, event.stream, EVENT_RETRY_PREDICATE); + return withCompletion(this, event, event.scope, event.stream, OPERATION_NOT_ALLOWED_PREDICATE); } @Override @@ -144,7 +144,7 @@ public TestRequestProcessor2(StreamMetadataStore streamMetadataStore, ScheduledE } public CompletableFuture testProcess(TestEvent2 event) { - return withCompletion(this, event, event.scope, event.stream, EVENT_RETRY_PREDICATE); + return withCompletion(this, event, event.scope, event.stream, OPERATION_NOT_ALLOWED_PREDICATE); } @Override @@ -228,7 +228,8 @@ public void testRequestProcessor() throws InterruptedException { started1.join(); // 2. start test event2 processing on processor 2. Make this fail with OperationNotAllowed and verify that it gets postponed. - requestProcessor2.process(event21, () -> false).join(); + AssertExtensions.assertFutureThrows("Fail first processing with operation not allowed", requestProcessor2.process(event21, () -> false), + e -> Exceptions.unwrap(e) instanceof StoreException.OperationNotAllowedException); // also verify that store has set the processor name of processor 2. String waitingProcessor = getStore().getWaitingRequestProcessor(scope, stream, null, executorService()).join(); assertEquals(TestRequestProcessor2.class.getSimpleName(), waitingProcessor); @@ -298,7 +299,8 @@ public void testCompleteStartedTasks() throws InterruptedException { started1.join(); // 2. start test event2 processing on processor 2. Make this fail with OperationNotAllowed and verify that it gets postponed. 
- requestProcessor2.process(event2, () -> false).join(); + AssertExtensions.assertFutureThrows("Fail first processing with operation not allowed", requestProcessor2.process(event2, () -> false), + e -> Exceptions.unwrap(e) instanceof StoreException.OperationNotAllowedException); // also verify that store has set the processor name of processor 2. String waitingProcessor = getStore().getWaitingRequestProcessor(scope, stream, null, executorService()).join(); assertEquals(TestRequestProcessor2.class.getSimpleName(), waitingProcessor); @@ -306,10 +308,10 @@ public void testCompleteStartedTasks() throws InterruptedException { assertEquals(taken2, event2); // 3. Fail processing on processor 1 - waitForIt1.completeExceptionally(new IllegalArgumentException()); + waitForIt1.completeExceptionally(new RuntimeException()); // processing11 should complete successfully. - AssertExtensions.assertFutureThrows("", processing11, e -> Exceptions.unwrap(e) instanceof IllegalArgumentException); + AssertExtensions.assertFutureThrows("", processing11, e -> Exceptions.unwrap(e) instanceof RuntimeException); // set ignore started to true requestProcessor1.ignoreStarted = true; @@ -342,16 +344,18 @@ public void testFailingProcessor() { FailingEvent event1 = new FailingEvent("scope", "stream", Futures.failedFuture(new RuntimeException("hasStarted")), CompletableFuture.completedFuture(null)); - processor.process(event1, () -> false).join(); + AssertExtensions.assertFutureThrows("exception should be thrown after has started", processor.process(event1, () -> false), + e -> Exceptions.unwrap(e) instanceof RuntimeException && Exceptions.unwrap(e).getMessage().equals("hasStarted")); verify(processor, times(1)).hasTaskStarted(event1); - verify(processor, times(1)).writeBack(event1); + verify(processor, never()).writeBack(event1); verify(processor, never()).execute(event1); FailingEvent event2 = new FailingEvent("scope", "stream", CompletableFuture.completedFuture(true), Futures.failedFuture(new RuntimeException("execute"))); - processor.process(event2, () -> false).join(); + AssertExtensions.assertFutureThrows("exception should be thrown after execute", processor.process(event2, () -> false), + e -> Exceptions.unwrap(e) instanceof RuntimeException && Exceptions.unwrap(e).getMessage().equals("execute")); verify(processor, times(1)).writeBack(event2); verify(processor, times(1)).hasTaskStarted(event2); verify(processor, times(1)).execute(event2); diff --git a/controller/src/test/java/io/pravega/controller/server/health/ClusterListenerHealthContributorTest.java b/controller/src/test/java/io/pravega/controller/server/health/ClusterListenerHealthContributorTest.java new file mode 100644 index 00000000000..7df3ad41f27 --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/server/health/ClusterListenerHealthContributorTest.java @@ -0,0 +1,83 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.controller.server.health; + +import io.pravega.common.cluster.Cluster; +import io.pravega.common.cluster.Host; +import io.pravega.controller.fault.ControllerClusterListener; +import io.pravega.controller.fault.FailoverSweeper; +import io.pravega.controller.server.ControllerServiceConfig; +import io.pravega.controller.server.impl.ControllerServiceConfigImpl; +import io.pravega.controller.task.Stream.TxnSweeper; +import io.pravega.controller.task.TaskSweeper; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.ScheduledExecutorService; + +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; + +/** + * Unit tests for ClusterListenerHealthContributor + */ +public class ClusterListenerHealthContributorTest { + private ControllerClusterListener clusterListener; + private ClusterListenerHealthContributor contributor; + private Health.HealthBuilder builder; + + @Before + public void setup() { + Host host = mock(Host.class); + Cluster cluster = mock(Cluster.class); + ScheduledExecutorService executor = mock(ScheduledExecutorService.class); + ControllerServiceConfig serviceConfig = mock(ControllerServiceConfigImpl.class); + TaskSweeper taskSweeper = mock(TaskSweeper.class); + TxnSweeper txnSweeper = mock(TxnSweeper.class); + List failoverSweepers = new ArrayList<>(); + failoverSweepers.add(taskSweeper); + failoverSweepers.add(txnSweeper); + + doReturn(true).when(serviceConfig).isControllerClusterListenerEnabled(); + clusterListener = spy(new ControllerClusterListener(host, cluster, executor, failoverSweepers)); + doReturn(true).when(clusterListener).isReady(); + contributor = new ClusterListenerHealthContributor("clusterlistener", clusterListener); + builder = Health.builder().name("clusterlistener"); + } + + @After + public void tearDown() { + contributor.close(); + } + + @Test + public void testHealthCheck() throws Exception { + clusterListener.startAsync(); + clusterListener.awaitRunning(); + Status status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.UP, status); + clusterListener.stopAsync(); + clusterListener.awaitTerminated(); + status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.DOWN, status); + } +} diff --git a/controller/src/test/java/io/pravega/controller/server/health/EventProcessorHealthContributorTest.java b/controller/src/test/java/io/pravega/controller/server/health/EventProcessorHealthContributorTest.java new file mode 100644 index 00000000000..ba8533d3d74 --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/server/health/EventProcessorHealthContributorTest.java @@ -0,0 +1,193 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.controller.server.health; + +import com.google.common.collect.Sets; +import com.google.common.util.concurrent.Service; +import io.pravega.client.connection.impl.ConnectionPool; +import io.pravega.client.stream.EventStreamWriter; +import io.pravega.common.cluster.Host; +import io.pravega.controller.eventProcessor.EventProcessorGroup; +import io.pravega.controller.eventProcessor.EventProcessorSystem; +import io.pravega.controller.eventProcessor.impl.EventProcessorSystemImpl; +import io.pravega.controller.server.eventProcessor.ControllerEventProcessorConfig; +import io.pravega.controller.server.eventProcessor.ControllerEventProcessors; +import io.pravega.controller.server.eventProcessor.LocalController; +import io.pravega.controller.server.eventProcessor.impl.ControllerEventProcessorConfigImpl; +import io.pravega.controller.store.checkpoint.CheckpointStore; +import io.pravega.controller.store.checkpoint.CheckpointStoreException; +import io.pravega.controller.store.kvtable.KVTableMetadataStore; +import io.pravega.controller.store.stream.BucketStore; +import io.pravega.controller.store.stream.StreamMetadataStore; +import io.pravega.controller.task.KeyValueTable.TableMetadataTasks; +import io.pravega.controller.task.Stream.StreamMetadataTasks; +import io.pravega.controller.task.Stream.StreamTransactionMetadataTasks; +import io.pravega.shared.controller.event.ControllerEvent; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.test.common.ThreadPooledTestSuite; + +import lombok.SneakyThrows; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.util.Set; +import java.util.concurrent.Executor; +import java.util.concurrent.ScheduledExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.when; + +/** + * Unit tests for EventProcessorHealthContributor + */ +public class EventProcessorHealthContributorTest extends ThreadPooledTestSuite { + private ControllerEventProcessors eventProcessors; + private EventProcessorHealthContributor contributor; + private Health.HealthBuilder builder; + + @SneakyThrows + @Before + public void setup() { + Host host = mock(Host.class); + LocalController localController = mock(LocalController.class); + CheckpointStore checkpointStore = mock(CheckpointStore.class); + StreamMetadataStore streamStore = mock(StreamMetadataStore.class); + BucketStore bucketStore = mock(BucketStore.class); + ConnectionPool connectionPool = mock(ConnectionPool.class); + StreamMetadataTasks streamMetadataTasks = mock(StreamMetadataTasks.class); + StreamTransactionMetadataTasks streamTransactionMetadataTasks = mock(StreamTransactionMetadataTasks.class); + ScheduledExecutorService executor = mock(ScheduledExecutorService.class); + KVTableMetadataStore kvtMetadataStore = mock(KVTableMetadataStore.class); + TableMetadataTasks kvtMetadataTasks = mock(TableMetadataTasks.class); + EventProcessorSystem system = mock(EventProcessorSystemImpl.class); + ControllerEventProcessorConfig config = ControllerEventProcessorConfigImpl.withDefault(); + + EventProcessorGroup processor = getProcessor(); + EventProcessorGroup mockProcessor = spy(processor); + doThrow(new 
CheckpointStoreException("host not found")).when(mockProcessor).notifyProcessFailure("host3"); + when(system.createEventProcessorGroup(any(), any(), any())).thenReturn(mockProcessor); + + eventProcessors = spy(new ControllerEventProcessors(host.getHostId(), + config, localController, checkpointStore, streamStore, + bucketStore, connectionPool, streamMetadataTasks, streamTransactionMetadataTasks, kvtMetadataStore, + kvtMetadataTasks, system, executorService())); + doReturn(true).when(eventProcessors).isReady(); + + contributor = new EventProcessorHealthContributor("eventprocessors", eventProcessors); + builder = Health.builder().name("eventprocessors"); + } + + @After + public void tearDown() { + contributor.close(); + eventProcessors.close(); + } + + @Test + public void testHealthCheck() throws Exception { + eventProcessors.startAsync(); + eventProcessors.awaitRunning(); + Status status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.UP, status); + eventProcessors.stopAsync(); + eventProcessors.awaitTerminated(); + status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.DOWN, status); + } + + private EventProcessorGroup getProcessor() { + return new EventProcessorGroup() { + @Override + public void notifyProcessFailure(String process) throws CheckpointStoreException { + + } + + @Override + public EventStreamWriter getWriter() { + return null; + } + + @Override + public Set getProcesses() throws CheckpointStoreException { + return Sets.newHashSet("host1", "host2"); + } + + @Override + public Service startAsync() { + return null; + } + + @Override + public boolean isRunning() { + return false; + } + + @Override + public State state() { + return null; + } + + @Override + public Service stopAsync() { + return null; + } + + @Override + public void awaitRunning() { + + } + + @Override + public void awaitRunning(long timeout, TimeUnit unit) throws TimeoutException { + + } + + @Override + public void awaitTerminated() { + + } + + @Override + public void awaitTerminated(long timeout, TimeUnit unit) throws TimeoutException { + + } + + @Override + public Throwable failureCause() { + return null; + } + + @Override + public void addListener(Listener listener, Executor executor) { + + } + + @Override + public void close() throws Exception { + + } + }; + } +} diff --git a/controller/src/test/java/io/pravega/controller/server/health/GRPCServerHealthContributorTest.java b/controller/src/test/java/io/pravega/controller/server/health/GRPCServerHealthContributorTest.java new file mode 100644 index 00000000000..4a7728825cd --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/server/health/GRPCServerHealthContributorTest.java @@ -0,0 +1,66 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.controller.server.health; + +import io.pravega.common.tracing.RequestTracker; +import io.pravega.controller.server.ControllerService; +import io.pravega.controller.server.rpc.grpc.GRPCServer; +import io.pravega.controller.server.rpc.grpc.GRPCServerConfig; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; + +/** + * Unit tests for GRPCServerHealthContributor + */ +public class GRPCServerHealthContributorTest { + private GRPCServer grpcServer; + private GRPCServerHealthContributor contributor; + private Health.HealthBuilder builder; + + @Before + public void setup() { + ControllerService service = mock(ControllerService.class); + GRPCServerConfig config = mock(GRPCServerConfig.class); + RequestTracker requestTracker = new RequestTracker(config.isRequestTracingEnabled()); + grpcServer = spy(new GRPCServer(service, config, requestTracker)); + contributor = new GRPCServerHealthContributor("grpc", grpcServer); + builder = Health.builder().name("grpc"); + } + + @After + public void tearDown() { + contributor.close(); + } + + @Test + public void testHealthCheck() throws Exception { + grpcServer.startAsync(); + grpcServer.awaitRunning(); + Status status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.UP, status); + grpcServer.stopAsync(); + grpcServer.awaitTerminated(); + status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.DOWN, status); + } +} diff --git a/controller/src/test/java/io/pravega/controller/server/health/RetentionServiceHealthContributorTest.java b/controller/src/test/java/io/pravega/controller/server/health/RetentionServiceHealthContributorTest.java new file mode 100644 index 00000000000..5d189b9fcff --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/server/health/RetentionServiceHealthContributorTest.java @@ -0,0 +1,81 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
diff --git a/controller/src/test/java/io/pravega/controller/server/health/RetentionServiceHealthContributorTest.java b/controller/src/test/java/io/pravega/controller/server/health/RetentionServiceHealthContributorTest.java new file mode 100644 index 00000000000..5d189b9fcff --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/server/health/RetentionServiceHealthContributorTest.java @@ -0,0 +1,81 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.server.health; + +import io.pravega.controller.server.bucket.BucketManager; +import io.pravega.controller.server.bucket.BucketServiceFactory; +import io.pravega.controller.server.bucket.PeriodicRetention; +import io.pravega.controller.server.bucket.ZooKeeperBucketManager; +import io.pravega.controller.store.client.StoreType; +import io.pravega.controller.store.stream.BucketStore; +import io.pravega.controller.store.stream.ZookeeperBucketStore; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.time.Duration; +import java.util.UUID; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ScheduledExecutorService; + +import static org.mockito.Mockito.doNothing; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; + +/** + * Unit tests for RetentionServiceHealthContributor + */ +public class RetentionServiceHealthContributorTest { + private BucketManager retentionService; + private RetentionServiceHealthContributor contributor; + private Health.HealthBuilder builder; + + @Before + public void setup() { + BucketStore bucketStore = mock(ZookeeperBucketStore.class); + doReturn(StoreType.Zookeeper).when(bucketStore).getStoreType(); + String hostId = UUID.randomUUID().toString(); + BucketServiceFactory bucketStoreFactory = spy(new BucketServiceFactory(hostId, bucketStore, 2)); + ScheduledExecutorService executor = mock(ScheduledExecutorService.class); + PeriodicRetention periodicRetention = mock(PeriodicRetention.class); + retentionService = spy(bucketStoreFactory.createWatermarkingService(Duration.ofMillis(5), periodicRetention::retention, executor)); + doReturn(CompletableFuture.completedFuture(null)).when((ZooKeeperBucketManager) retentionService).initializeService(); + doNothing().when((ZooKeeperBucketManager) retentionService).startBucketOwnershipListener(); + doReturn(true).when(retentionService).isHealthy(); + + contributor = new RetentionServiceHealthContributor("retentionservice", retentionService); + builder = Health.builder().name("retentionservice"); + } + + @After + public void tearDown() { + contributor.close(); + } + + @Test + public void testHealthCheck() throws Exception { + retentionService.startAsync(); + Status status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.UP, status); + retentionService.stopAsync(); + status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.DOWN, status); + } +} diff --git a/controller/src/test/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributorTest.java b/controller/src/test/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributorTest.java new file mode 100644 index 00000000000..ba4067f63f4 --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/server/health/SegmentContainerMonitorHealthContributorTest.java @@ -0,0 +1,78 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.server.health; + +import io.pravega.controller.fault.ContainerBalancer; +import io.pravega.controller.fault.SegmentContainerMonitor; +import io.pravega.controller.store.host.HostControllerStore; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import org.apache.curator.CuratorZookeeperClient; +import org.apache.curator.framework.CuratorFramework; +import org.apache.curator.framework.listen.Listenable; +import org.apache.curator.framework.state.ConnectionStateListener; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.Assert; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.doNothing; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; + +/** + * Unit tests for SegmentContainerMonitorHealthContributor + */ +public class SegmentContainerMonitorHealthContributorTest { + private SegmentContainerMonitor monitor; + private SegmentContainerMonitorHealthContributor contributor; + private Health.HealthBuilder builder; + + @Before + public void setup() { + HostControllerStore hostStore = mock(HostControllerStore.class); + CuratorFramework client = mock(CuratorFramework.class); + ContainerBalancer balancer = mock(ContainerBalancer.class); + CuratorZookeeperClient curatorZKClientMock = mock(CuratorZookeeperClient.class); + Listenable listen = mock(Listenable.class); + doNothing().when(listen).addListener(any(ConnectionStateListener.class)); + doReturn(listen).when(client).getConnectionStateListenable(); + doReturn(curatorZKClientMock).when(client).getZookeeperClient(); + doReturn(true).when(curatorZKClientMock).isConnected(); + monitor = spy(new SegmentContainerMonitor(hostStore, client, balancer, 1)); + contributor = new SegmentContainerMonitorHealthContributor("segmentcontainermonitor", monitor); + builder = Health.builder().name("monitor"); + } + + @After + public void tearDown() { + contributor.close(); + } + + @Test + public void testHealthCheck() throws Exception { + monitor.startAsync(); + monitor.awaitRunning(); + Status status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.UP, status); + monitor.stopAsync(); + monitor.awaitTerminated(); + status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.DOWN, status); + } +} \ No newline at end of file diff --git a/controller/src/test/java/io/pravega/controller/server/health/WatermarkingServiceHealthContibutorTest.java b/controller/src/test/java/io/pravega/controller/server/health/WatermarkingServiceHealthContibutorTest.java new file mode 100644 index 00000000000..a67c05da643 --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/server/health/WatermarkingServiceHealthContibutorTest.java @@ -0,0 +1,80 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.server.health; + +import io.pravega.controller.server.bucket.BucketManager; +import io.pravega.controller.server.bucket.BucketServiceFactory; +import io.pravega.controller.server.bucket.PeriodicWatermarking; +import io.pravega.controller.server.bucket.ZooKeeperBucketManager; +import io.pravega.controller.store.client.StoreType; +import io.pravega.controller.store.stream.ZookeeperBucketStore; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; + +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; +import java.time.Duration; +import java.util.UUID; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ScheduledExecutorService; + +import static org.mockito.Mockito.doNothing; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; + +/** + * Unit tests for WatermarkingServiceHealthContributor + */ +public class WatermarkingServiceHealthContibutorTest { + private BucketManager watermarkingService; + private WatermarkingServiceHealthContributor contributor; + private Health.HealthBuilder builder; + + @Before + public void setup() { + ZookeeperBucketStore bucketStore = mock(ZookeeperBucketStore.class); + doReturn(StoreType.Zookeeper).when(bucketStore).getStoreType(); + String hostId = UUID.randomUUID().toString(); + BucketServiceFactory bucketStoreFactory = new BucketServiceFactory(hostId, bucketStore, 2); + ScheduledExecutorService executor = mock(ScheduledExecutorService.class); + PeriodicWatermarking periodicWatermarking = mock(PeriodicWatermarking.class); + watermarkingService = spy(bucketStoreFactory.createWatermarkingService(Duration.ofMillis(10), periodicWatermarking::watermark, executor)); + doReturn(CompletableFuture.completedFuture(null)).when((ZooKeeperBucketManager) watermarkingService).initializeService(); + doNothing().when((ZooKeeperBucketManager) watermarkingService).startBucketOwnershipListener(); + doReturn(true).when(watermarkingService).isHealthy(); + contributor = new WatermarkingServiceHealthContributor("watermarkingservice", watermarkingService); + builder = Health.builder().name("watermark"); + } + + @After + public void tearDown() { + contributor.close(); + } + + @Test + public void testHealthCheck() throws Exception { + watermarkingService.startAsync(); + Assert.assertTrue(watermarkingService.isRunning()); + Status status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.UP, status); + watermarkingService.stopAsync(); + status = contributor.doHealthCheck(builder); + Assert.assertEquals(Status.DOWN, status); + } +} diff --git a/controller/src/test/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImplTest.java b/controller/src/test/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImplTest.java index 6cbdb0fa933..c6acc960c95 100644 --- a/controller/src/test/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImplTest.java +++
b/controller/src/test/java/io/pravega/controller/server/rpc/grpc/impl/GRPCServerConfigImplTest.java @@ -35,7 +35,7 @@ public class GRPCServerConfigImplTest { // Note: It might seem odd that we are unit testing the toString() method of the code under test. The reason we are // doing that is that the method is hand-rolled and there is a bit of logic there that isn't entirely unlikely to fail. @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @Test public void testToStringReturnsSuccessfullyWithAllConfigSpecified() { diff --git a/controller/src/test/java/io/pravega/controller/server/rpc/grpc/v1/ControllerServiceImplTest.java b/controller/src/test/java/io/pravega/controller/server/rpc/grpc/v1/ControllerServiceImplTest.java index a5a2c7d75e0..73209e99802 100644 --- a/controller/src/test/java/io/pravega/controller/server/rpc/grpc/v1/ControllerServiceImplTest.java +++ b/controller/src/test/java/io/pravega/controller/server/rpc/grpc/v1/ControllerServiceImplTest.java @@ -837,6 +837,80 @@ public void sealStreamTests() { UpdateStreamStatus.Status.STREAM_NOT_FOUND, updateStreamStatus.getStatus()); } + @Test + public void updateSealedStreamTest() { + CreateScopeStatus createScopeStatus; + CreateStreamStatus createStreamStatus; + UpdateStreamStatus updateStreamStatus; + final StreamConfiguration configuration1 = + StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(4)).build(); + + // Create a test scope. + ResultObserver<CreateScopeStatus> result1 = new ResultObserver<>(); + this.controllerService.createScope(ModelHelper.createScopeInfo(SCOPE1), result1); + createScopeStatus = result1.get(); + assertEquals("Create Scope", CreateScopeStatus.Status.SUCCESS, createScopeStatus.getStatus()); + + // Create a test stream. + ResultObserver<CreateStreamStatus> result2 = new ResultObserver<>(); + this.controllerService.createStream(ModelHelper.decode(SCOPE1, STREAM1, configuration1), result2); + createStreamStatus = result2.get(); + Assert.assertEquals("Create stream", + CreateStreamStatus.Status.SUCCESS, createStreamStatus.getStatus()); + + // Seal the test stream. + ResultObserver<UpdateStreamStatus> result3 = new ResultObserver<>(); + this.controllerService.sealStream(ModelHelper.createStreamInfo(SCOPE1, STREAM1), result3); + updateStreamStatus = result3.get(); + assertEquals("Seal stream", UpdateStreamStatus.Status.SUCCESS, updateStreamStatus.getStatus()); + + // Update the sealed test stream. + ResultObserver<UpdateStreamStatus> result4 = new ResultObserver<>(); + final StreamConfiguration configuration = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build(); + this.controllerService.updateStream(ModelHelper.decode(SCOPE1, STREAM1, configuration), result4); + updateStreamStatus = result4.get(); + assertEquals("Update sealed stream", UpdateStreamStatus.Status.STREAM_SEALED, updateStreamStatus.getStatus()); + }
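Both sealed-stream tests here drive the gRPC endpoints through the ResultObserver helper that ControllerServiceImplTest defines elsewhere in the file. For readers of this excerpt, a minimal sketch of what such a single-response observer typically looks like (a hypothetical reconstruction, not the actual helper class):

    import io.grpc.stub.StreamObserver;
    import java.util.concurrent.CompletableFuture;

    // Captures the single response of a unary gRPC call so a test can block on it.
    class ResultObserverSketch<T> implements StreamObserver<T> {
        private final CompletableFuture<T> result = new CompletableFuture<>();

        @Override
        public void onNext(T value) {
            result.complete(value);
        }

        @Override
        public void onError(Throwable t) {
            result.completeExceptionally(t);
        }

        @Override
        public void onCompleted() {
            // Unary call: the value was already delivered via onNext.
        }

        T get() {
            return result.join(); // blocks until the RPC completes or fails
        }
    }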
+ + @Test + public void truncateSealedStreamTest() { + CreateScopeStatus createScopeStatus; + CreateStreamStatus createStreamStatus; + UpdateStreamStatus truncateStreamStatus; + final StreamConfiguration configuration1 = + StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(4)).build(); + + // Create a test scope. + ResultObserver<CreateScopeStatus> result1 = new ResultObserver<>(); + this.controllerService.createScope(ModelHelper.createScopeInfo(SCOPE1), result1); + createScopeStatus = result1.get(); + assertEquals("Create Scope", CreateScopeStatus.Status.SUCCESS, createScopeStatus.getStatus()); + + // Create a test stream. + ResultObserver<CreateStreamStatus> result2 = new ResultObserver<>(); + this.controllerService.createStream(ModelHelper.decode(SCOPE1, STREAM1, configuration1), result2); + createStreamStatus = result2.get(); + Assert.assertEquals("Create stream", + CreateStreamStatus.Status.SUCCESS, createStreamStatus.getStatus()); + + // Seal the test stream. + ResultObserver<UpdateStreamStatus> result3 = new ResultObserver<>(); + this.controllerService.sealStream(ModelHelper.createStreamInfo(SCOPE1, STREAM1), result3); + UpdateStreamStatus updateStreamStatus = result3.get(); + assertEquals("Seal stream", UpdateStreamStatus.Status.SUCCESS, updateStreamStatus.getStatus()); + + // Truncate the sealed test stream. + ResultObserver<UpdateStreamStatus> result4 = new ResultObserver<>(); + this.controllerService.truncateStream(Controller.StreamCut.newBuilder() + .setStreamInfo(StreamInfo.newBuilder() + .setScope(SCOPE1) + .setStream(STREAM1) + .build()) + .putCut(0, 0).putCut(1, 0).putCut(2, 0).putCut(3, 0).build(), result4); + truncateStreamStatus = result4.get(); + assertEquals("Truncate sealed stream", UpdateStreamStatus.Status.STREAM_SEALED, truncateStreamStatus.getStatus()); + } + @Test public void getCurrentSegmentsTest() { createScopeAndStream(SCOPE1, STREAM1, ScalingPolicy.fixed(2)); diff --git a/controller/src/test/java/io/pravega/controller/server/security/auth/StreamAuthParamsTest.java b/controller/src/test/java/io/pravega/controller/server/security/auth/StreamAuthParamsTest.java index d926c9d9999..d246794ecc6 100644 --- a/controller/src/test/java/io/pravega/controller/server/security/auth/StreamAuthParamsTest.java +++ b/controller/src/test/java/io/pravega/controller/server/security/auth/StreamAuthParamsTest.java @@ -31,7 +31,7 @@ public class StreamAuthParamsTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @Test public void rejectsConstructionWhenInputIsInvalid() { diff --git a/controller/src/test/java/io/pravega/controller/server/v1/ControllerServiceTest.java b/controller/src/test/java/io/pravega/controller/server/v1/ControllerServiceTest.java index 2a1eda4cb15..2b99ebe9b68 100644 --- a/controller/src/test/java/io/pravega/controller/server/v1/ControllerServiceTest.java +++ b/controller/src/test/java/io/pravega/controller/server/v1/ControllerServiceTest.java @@ -92,7 +92,7 @@ public class ControllerServiceTest { public static final PravegaZkCuratorResource PRAVEGA_ZK_CURATOR_RESOURCE = new PravegaZkCuratorResource(); private static final String SCOPE = "scope"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private final String stream1 = "stream1"; private final String stream2 = "stream2";
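A change that recurs throughout these hunks: per-test timeouts drop from 30 hours (effectively no timeout) to 30 seconds, so a hung test now fails fast instead of stalling the build. For reference, the JUnit 4 idiom all of these classes use:

    import java.util.concurrent.TimeUnit;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.Timeout;

    public class TimeoutRuleExample {
        // Applies to every @Test method in the class; any test exceeding
        // 30 seconds fails with a TestTimedOutException.
        @Rule
        public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS);

        @Test
        public void completesWellUnderTheLimit() throws Exception {
            Thread.sleep(10);
        }
    }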
diff --git a/controller/src/test/java/io/pravega/controller/store/PravegaTablesScopeTest.java b/controller/src/test/java/io/pravega/controller/store/PravegaTablesScopeTest.java new file mode 100644 index 00000000000..cdddce60e25 --- /dev/null +++ b/controller/src/test/java/io/pravega/controller/store/PravegaTablesScopeTest.java @@ -0,0 +1,85 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.controller.store; + +import io.pravega.client.tables.impl.TableSegmentEntry; +import io.pravega.client.tables.impl.TableSegmentKey; +import io.pravega.client.tables.impl.TableSegmentKeyVersion; +import io.pravega.controller.server.SegmentHelper; +import io.pravega.controller.server.security.auth.GrpcAuthHelper; +import io.pravega.controller.store.stream.OperationContext; +import io.pravega.test.common.ThreadPooledTestSuite; +import org.junit.Test; + +import java.nio.charset.StandardCharsets; +import java.util.List; +import java.util.Set; +import java.util.concurrent.CompletableFuture; + +import static java.util.Collections.singletonList; +import static org.junit.Assert.assertEquals; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyLong; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class PravegaTablesScopeTest extends ThreadPooledTestSuite { + private final String scope = "scope"; + private final String stream = "stream"; + private final String indexTable = "table"; + private final String tag = "tag"; + private final byte[] tagBytes = tag.getBytes(StandardCharsets.UTF_8); + private final OperationContext context = new TestOperationContext(); + private List<TableSegmentKey> keySnapshot; + + @Test(timeout = 5000) + @SuppressWarnings("unchecked") + public void testRemoveTagsUnderScope() { + // Setup Mocks. + GrpcAuthHelper authHelper = mock(GrpcAuthHelper.class); + when(authHelper.retrieveMasterToken()).thenReturn(""); + SegmentHelper segmentHelper = mock(SegmentHelper.class); + PravegaTablesStoreHelper storeHelper = new PravegaTablesStoreHelper(segmentHelper, authHelper, executorService()); + PravegaTablesScope tablesScope = spy(new PravegaTablesScope(scope, storeHelper)); + doReturn(CompletableFuture.completedFuture(indexTable)).when(tablesScope) + .getAllStreamTagsInScopeTableNames(stream, context); + // Simulate an empty value being returned. + TableSegmentEntry entry = TableSegmentEntry.versioned(tagBytes, new byte[0], 1L); + when(segmentHelper.readTable(eq(indexTable), any(), anyString(), anyLong())) + .thenReturn(CompletableFuture.completedFuture(singletonList(entry))); + when(segmentHelper.updateTableEntries(eq(indexTable), any(), anyString(), anyLong())) + .thenReturn(CompletableFuture.completedFuture(singletonList(TableSegmentKeyVersion.from(2L)))); + when(segmentHelper.removeTableKeys(eq(indexTable), any(), anyString(), anyLong())) + .thenAnswer(invocation -> { + // Capture the keys passed to removeTableKeys. + keySnapshot = (List<TableSegmentKey>) invocation.getArguments()[1]; + return CompletableFuture.completedFuture(null); + });
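The thenAnswer stub above doubles as an argument captor: it records the List<TableSegmentKey> handed to removeTableKeys so the assertions that follow can inspect the exact keys. Mockito's ArgumentCaptor is the more conventional way to express the same capture; a sketch for comparison, written as a method inside the test class and reusing the test's own names (segmentHelper, indexTable):

    import java.util.List;
    import org.mockito.ArgumentCaptor;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.ArgumentMatchers.anyLong;
    import static org.mockito.ArgumentMatchers.anyString;
    import static org.mockito.ArgumentMatchers.eq;
    import static org.mockito.Mockito.verify;

    // Hypothetical alternative: capture the keys after exercising the code under test.
    @SuppressWarnings("unchecked")
    private void verifyRemovedKeys(SegmentHelper segmentHelper, String indexTable) {
        ArgumentCaptor<List<TableSegmentKey>> keysCaptor = ArgumentCaptor.forClass(List.class);
        verify(segmentHelper).removeTableKeys(eq(indexTable), keysCaptor.capture(), anyString(), anyLong());
        List<TableSegmentKey> removedKeys = keysCaptor.getValue();
        assertEquals(2L, removedKeys.get(0).getVersion().getSegmentVersion());
    }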
+ + // Invoke the removeTags method. + tablesScope.removeTagsUnderScope(stream, Set.of(tag), context).join(); + // Verify that the empty value is correctly detected and the entry is cleaned up. + verify(segmentHelper, times(1)).removeTableKeys(eq(indexTable), eq(keySnapshot), anyString(), anyLong()); + // Verify that the version number is as expected. + assertEquals(2L, keySnapshot.get(0).getVersion().getSegmentVersion()); + } +} \ No newline at end of file diff --git a/controller/src/test/java/io/pravega/controller/store/PravegaTablesStoreHelperTest.java b/controller/src/test/java/io/pravega/controller/store/PravegaTablesStoreHelperTest.java index e890b6c0d35..2ead32b2ed8 100644 --- a/controller/src/test/java/io/pravega/controller/store/PravegaTablesStoreHelperTest.java +++ b/controller/src/test/java/io/pravega/controller/store/PravegaTablesStoreHelperTest.java @@ -199,19 +199,19 @@ public void testRetriesExhausted() { CompletableFuture connectionDropped = Futures.failedFuture( new WireCommandFailedException(WireCommandType.CREATE_TABLE_SEGMENT, WireCommandFailedException.Reason.ConnectionDropped)); - doAnswer(x -> connectionDropped).when(segmentHelper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt()); + doAnswer(x -> connectionDropped).when(segmentHelper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt(), anyLong()); AssertExtensions.assertFutureThrows("ConnectionDropped", storeHelper.createTable("table", 0L), e -> Exceptions.unwrap(e) instanceof StoreException.StoreConnectionException); CompletableFuture connectionFailed = Futures.failedFuture( new WireCommandFailedException(WireCommandType.CREATE_TABLE_SEGMENT, WireCommandFailedException.Reason.ConnectionFailed)); - doAnswer(x -> connectionFailed).when(segmentHelper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt()); + doAnswer(x -> connectionFailed).when(segmentHelper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt(), anyLong()); AssertExtensions.assertFutureThrows("ConnectionFailed", storeHelper.createTable("table", 0L), e -> Exceptions.unwrap(e) instanceof StoreException.StoreConnectionException); CompletableFuture authFailed = Futures.failedFuture( new WireCommandFailedException(WireCommandType.CREATE_TABLE_SEGMENT, WireCommandFailedException.Reason.AuthFailed)); - doAnswer(x -> connectionFailed).when(segmentHelper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt()); + doAnswer(x -> connectionFailed).when(segmentHelper).createTableSegment(anyString(), anyString(), anyLong(), anyBoolean(), anyInt(), anyLong()); AssertExtensions.assertFutureThrows("AuthFailed", storeHelper.createTable("table", 0L), e -> Exceptions.unwrap(e) instanceof StoreException.StoreConnectionException); } diff --git a/controller/src/test/java/io/pravega/controller/store/client/StoreClientFactoryTest.java b/controller/src/test/java/io/pravega/controller/store/client/StoreClientFactoryTest.java index 27f7dc9c634..19af4cd65c4 100644 --- a/controller/src/test/java/io/pravega/controller/store/client/StoreClientFactoryTest.java +++ b/controller/src/test/java/io/pravega/controller/store/client/StoreClientFactoryTest.java @@ -36,7 +36,7 @@ public class StoreClientFactoryTest extends ThreadPooledTestSuite { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); TestingServer zkServer; @Override diff --git
a/controller/src/test/java/io/pravega/controller/store/index/ZkHostIndexTest.java b/controller/src/test/java/io/pravega/controller/store/index/ZkHostIndexTest.java index d0feece18f7..9dc1521dcf6 100644 --- a/controller/src/test/java/io/pravega/controller/store/index/ZkHostIndexTest.java +++ b/controller/src/test/java/io/pravega/controller/store/index/ZkHostIndexTest.java @@ -46,7 +46,7 @@ public class ZkHostIndexTest { @ClassRule public static final PravegaZkCuratorResource PRAVEGA_ZK_CURATOR_RESOURCE = new PravegaZkCuratorResource(RETRY_POLICY); @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); @Before diff --git a/controller/src/test/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStoreTest.java b/controller/src/test/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStoreTest.java index 7750bd7c723..536bf32c42b 100644 --- a/controller/src/test/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStoreTest.java +++ b/controller/src/test/java/io/pravega/controller/store/kvtable/PravegaTablesKVTMetadataStoreTest.java @@ -18,18 +18,36 @@ import io.pravega.controller.PravegaZkCuratorResource; import io.pravega.controller.mocks.SegmentHelperMock; import io.pravega.controller.server.SegmentHelper; +import io.pravega.controller.server.WireCommandFailedException; import io.pravega.controller.server.security.auth.GrpcAuthHelper; +import io.pravega.controller.store.PravegaTablesStoreHelper; +import io.pravega.controller.store.stream.StreamMetadataStore; import io.pravega.controller.store.stream.StreamStoreFactory; +import io.pravega.controller.store.stream.OperationContext; import io.pravega.controller.store.stream.StoreException; +import io.pravega.controller.store.stream.TestStreamStoreFactory; import io.pravega.controller.stream.api.grpc.v1.Controller; -import io.pravega.test.common.AssertExtensions; +import io.pravega.shared.protocol.netty.WireCommandType; +import org.apache.commons.lang3.tuple.Pair; import org.apache.curator.RetryPolicy; import org.apache.curator.retry.RetryOneTime; +import org.junit.Assert; +import io.pravega.test.common.AssertExtensions; import org.junit.Test; import org.junit.ClassRule; import java.util.UUID; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionException; + +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyInt; +import static org.mockito.ArgumentMatchers.anyLong; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.when; /** * Zookeeper based stream metadata store tests. 
@@ -76,4 +94,25 @@ public void testInvalidOperation() throws Exception { store.getActiveSegments(scope, kvtable1, null, executor), (Throwable t) -> t instanceof StoreException.IllegalStateException); } -} + + @Test + public void testPartiallyDeletedScope() throws Exception { + final String scopeName = "partialScope"; + + PravegaTablesStoreHelper storeHelperSpy = spy(new PravegaTablesStoreHelper(segmentHelperMockForTables, GrpcAuthHelper.getDisabledAuthHelper(), executor)); + WireCommandFailedException wcfe = new WireCommandFailedException(WireCommandType.READ_TABLE_KEYS, WireCommandFailedException.Reason.TableKeyDoesNotExist); + when(storeHelperSpy.getKeysPaginated(anyString(), any(), anyInt(), anyLong())).thenReturn(CompletableFuture.failedFuture(new CompletionException(StoreException.create(StoreException.Type.DATA_NOT_FOUND, wcfe, "kvTablesInScope not found.")))); + StreamMetadataStore testStreamStore = TestStreamStoreFactory.createPravegaTablesStreamStore(PRAVEGA_ZK_CURATOR_RESOURCE.client, executor, storeHelperSpy); + KVTableMetadataStore testKVStore = TestStreamStoreFactory.createPravegaTablesKVStore(PRAVEGA_ZK_CURATOR_RESOURCE.client, executor, storeHelperSpy); + + OperationContext context = testStreamStore.createScopeContext(scopeName, 0L); + CompletableFuture<Controller.CreateScopeStatus> createScopeFuture = testStreamStore.createScope(scopeName, context, executor); + Controller.CreateScopeStatus status = createScopeFuture.get(); + Assert.assertEquals(Controller.CreateScopeStatus.Status.SUCCESS, status.getStatus()); + + String token = Controller.ContinuationToken.newBuilder().build().getToken(); + Pair<List<String>, String> kvtList = testKVStore.listKeyValueTables(scopeName, token, 2, context, executor).get(); + Assert.assertEquals(0, kvtList.getKey().size()); + Assert.assertEquals(token, kvtList.getValue()); + } +} \ No newline at end of file diff --git a/controller/src/test/java/io/pravega/controller/store/stream/BucketStoreTest.java b/controller/src/test/java/io/pravega/controller/store/stream/BucketStoreTest.java index 6209fba65f5..58448c2bdd8 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/BucketStoreTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/BucketStoreTest.java @@ -38,7 +38,7 @@ public abstract class BucketStoreTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); BucketStore bucketStore; ScheduledExecutorService executorService; diff --git a/controller/src/test/java/io/pravega/controller/store/stream/HostStoreTest.java b/controller/src/test/java/io/pravega/controller/store/stream/HostStoreTest.java index cbc67293191..b35c705fb5a 100.644 --- a/controller/src/test/java/io/pravega/controller/store/stream/HostStoreTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/HostStoreTest.java @@ -51,7 +51,7 @@ @Slf4j public class HostStoreTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private final String host = "localhost"; private final int controllerPort = 9090;
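testPartiallyDeletedScope above pins down a listing contract: when the scope's backing kvTablesInScope table is missing, listKeyValueTables must absorb the DATA_NOT_FOUND failure and return an empty page that echoes the caller's continuation token, rather than propagate the error. A generic sketch of that fallback shape using only JDK and commons-lang3 types (illustrative only; a production implementation would presumably match DataNotFoundException specifically before falling back):

    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import org.apache.commons.lang3.tuple.Pair;

    final class EmptyPageFallback {
        // If the underlying key listing fails (e.g. the backing table was never created or was
        // already deleted), surface an empty result page and echo the caller's token unchanged.
        static <T> CompletableFuture<Pair<List<T>, String>> listOrEmpty(
                CompletableFuture<List<T>> keysFromStore, String token) {
            return keysFromStore
                    .thenApply(keys -> Pair.of(keys, token))
                    .exceptionally(ex -> Pair.of(Collections.emptyList(), token));
        }
    }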
diff --git a/controller/src/test/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStoreTest.java b/controller/src/test/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStoreTest.java index 6079b831da8..d71502ce9e2 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStoreTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/PravegaTablesStreamMetadataStoreTest.java @@ -114,7 +114,29 @@ public void testInvalidOperation() throws Exception { store.getActiveSegments(scope, stream1, null, executor), (Throwable t) -> t instanceof StoreException.IllegalStateException); } - + + @Test + public void testInvalidTokenForListStreamWithTags() throws Exception { + + final String scope = "testListStreamTag"; + final String stream = "stream1"; + final String lastTagChunk = ".#.24"; + final StreamConfiguration streamConfig = StreamConfiguration.builder().build(); + + store.createScope(scope, null, executor).get(); + store.createStream(scope, stream, streamConfig, System.currentTimeMillis(), null, executor).get(); + store.setState(scope, stream, State.ACTIVE, null, executor).get(); + Pair<List<String>, String> result1 = store.listStreamsForTag(scope, "InvalidToken", "", executor, null).get(); + assertTrue(result1.getLeft().isEmpty()); + String token = result1.getRight(); + assertTrue(token.contains(lastTagChunk)); + + // Invoke the API by passing the last token. + Pair<List<String>, String> result2 = store.listStreamsForTag(scope, "InvalidTag", token, executor, null).get(); + assertTrue(result2.getLeft().isEmpty()); + assertTrue(result2.getRight().contains(lastTagChunk)); + } + @Test public void testScaleMetadata() throws Exception { String scope = "testScopeScale"; @@ -497,7 +519,7 @@ public void testPartiallyCreatedScope() { scopeObj.createScope(context), e -> Exceptions.unwrap(e).equals(unknown)); } - + @Test public void testDeleteScopeWithEntries() { PravegaTablesStreamMetadataStore store = (PravegaTablesStreamMetadataStore) this.store; @@ -542,12 +564,6 @@ public void testDeleteScopeWithEntries() { } - private byte[] getIdInBytes(UUID id) { - byte[] b = new byte[2 * Long.BYTES]; - BitConverter.writeUUID(new ByteArraySegment(b), id); - return b; - } - private Set getAllBatches(PravegaTablesStreamMetadataStore testStore) { Set batches = new ConcurrentSkipListSet<>(); testStore.getStoreHelper().getAllKeys(COMPLETED_TRANSACTIONS_BATCHES_TABLE, 0L) @@ -572,8 +588,8 @@ private Map getAllTransactionsInBatch(PravegaTablesS private void createAndCommitTransaction(String scope, String stream, UUID txnId, PravegaTablesStreamMetadataStore testStore) { testStore.createTransaction(scope, stream, txnId, 10000L, 10000L, null, executor).join(); testStore.sealTransaction(scope, stream, txnId, true, Optional.empty(), "", 0L, null, executor).join(); - VersionedMetadata<CommittingTransactionsRecord> record = testStore.startCommitTransactions(scope, stream, 100, null, executor).join(); - testStore.completeCommitTransactions(scope, stream, record, null, executor).join(); + VersionedMetadata<CommittingTransactionsRecord> record = testStore.startCommitTransactions(scope, stream, 100, null, executor).join().getKey(); + testStore.completeCommitTransactions(scope, stream, record, null, executor, Collections.emptyMap()).join(); } private SimpleEntry findSplitsAndMerges(String scope, String stream) throws InterruptedException, java.util.concurrent.ExecutionException { diff --git a/controller/src/test/java/io/pravega/controller/store/stream/StreamMetadataStoreTest.java b/controller/src/test/java/io/pravega/controller/store/stream/StreamMetadataStoreTest.java index 5476c120ccd..cdc712e42cb 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/StreamMetadataStoreTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/StreamMetadataStoreTest.java @@ -31,7 +31,6 @@ import io.pravega.common.concurrent.Futures; import 
io.pravega.controller.store.Version; import io.pravega.controller.store.VersionedMetadata; -import io.pravega.controller.store.stream.records.ActiveTxnRecord; import io.pravega.controller.store.stream.records.CommittingTransactionsRecord; import io.pravega.controller.store.stream.records.EpochRecord; import io.pravega.controller.store.stream.records.EpochTransitionRecord; @@ -97,7 +96,7 @@ public abstract class StreamMetadataStoreTest { //Ensure each test completes within 30 seconds. @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected TestStore store; protected BucketStore bucketStore; protected final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); @@ -339,7 +338,7 @@ public void partialStreamsInScope() throws Exception { return result; }).when(streamObjSpied).getConfiguration(any()); - ((TestStore) store).setStream(streamObjSpied); + store.setStream(streamObjSpied); // verify that when we do list stream in scope we do not get partial. streamInScope = store.listStreamsInScope("Scope", null, executor).get(); @@ -847,12 +846,12 @@ public void scaleWithTxTest() throws Exception { EpochTransitionRecord response2 = versioned2.getObject(); assertEquals(activeEpoch.getEpoch(), response2.getActiveEpoch()); - VersionedMetadata record = store.startCommitTransactions(scope, stream, 100, null, executor).join(); + VersionedMetadata record = store.startCommitTransactions(scope, stream, 100, null, executor).join().getKey(); store.setState(scope, stream, State.COMMITTING_TXN, null, executor).join(); record = store.startRollingTxn(scope, stream, activeEpoch.getEpoch(), record, null, executor).join(); store.rollingTxnCreateDuplicateEpochs(scope, stream, Collections.emptyMap(), System.currentTimeMillis(), record, null, executor).join(); store.completeRollingTxn(scope, stream, Collections.emptyMap(), record, null, executor).join(); - store.completeCommitTransactions(scope, stream, record, null, executor).join(); + store.completeCommitTransactions(scope, stream, record, null, executor, Collections.emptyMap()).join(); store.setState(scope, stream, State.ACTIVE, null, executor).join(); activeEpoch = store.getActiveEpoch(scope, stream, null, true, executor).join(); assertEquals(3, activeEpoch.getEpoch()); @@ -905,12 +904,12 @@ record = store.startRollingTxn(scope, stream, activeEpoch.getEpoch(), record, nu store.sealTransaction(scope, stream, tx15.getId(), true, Optional.of(tx15.getVersion()), "", Long.MIN_VALUE, null, executor).get(); - record = store.startCommitTransactions(scope, stream, 100, null, executor).join(); + record = store.startCommitTransactions(scope, stream, 100, null, executor).join().getKey(); store.setState(scope, stream, State.COMMITTING_TXN, null, executor).get(); record = store.startRollingTxn(scope, stream, activeEpoch.getEpoch(), record, null, executor).join(); store.rollingTxnCreateDuplicateEpochs(scope, stream, Collections.emptyMap(), System.currentTimeMillis(), record, null, executor).join(); store.completeRollingTxn(scope, stream, Collections.emptyMap(), record, null, executor).join(); - store.completeCommitTransactions(scope, stream, record, null, executor).join(); + store.completeCommitTransactions(scope, stream, record, null, executor, Collections.emptyMap()).join(); store.setState(scope, stream, State.ACTIVE, null, executor).join(); activeEpoch = store.getActiveEpoch(scope, stream, null, true, executor).join(); @@ -962,12 +961,12 @@ public 
void scaleWithTxnForInconsistentScanerios() throws Exception { assertEquals(1, response.getActiveEpoch()); EpochRecord activeEpoch = store.getActiveEpoch(scope, stream, null, true, executor).join(); - VersionedMetadata record = store.startCommitTransactions(scope, stream, 100, null, executor).join(); + VersionedMetadata record = store.startCommitTransactions(scope, stream, 100, null, executor).join().getKey(); store.setState(scope, stream, State.COMMITTING_TXN, null, executor).join(); record = store.startRollingTxn(scope, stream, activeEpoch.getEpoch(), record, null, executor).join(); store.rollingTxnCreateDuplicateEpochs(scope, stream, Collections.emptyMap(), System.currentTimeMillis(), record, null, executor).join(); store.completeRollingTxn(scope, stream, Collections.emptyMap(), record, null, executor).join(); - store.completeCommitTransactions(scope, stream, record, null, executor).join(); + store.completeCommitTransactions(scope, stream, record, null, executor, Collections.emptyMap()).join(); store.setState(scope, stream, State.ACTIVE, null, executor).join(); state = store.getVersionedState(scope, stream, null, executor).join(); @@ -1040,10 +1039,9 @@ public void txnOrderTest() throws Exception { assertEquals(positions.get(3L), tx02); // verify that when we retrieve transactions from lowest epoch we get tx00 - List> orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(100, context).join(); - List ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); - assertEquals(1, ordered.size()); - assertEquals(tx00, ordered.get(0)); + List orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(100, context).join(); + assertEquals(1, orderedRecords.size()); + assertEquals(tx00, orderedRecords.get(0).getId()); // verify that duplicates and stale entries are purged. 
entries for open transaction and committing are retained positions = streamObj.getAllOrderedCommittingTxns(context).join(); @@ -1075,10 +1073,9 @@ public void txnOrderTest() throws Exception { // verify that we still get tx00 only orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(100, context).join(); - ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); - assertEquals(1, ordered.size()); - assertEquals(tx00, ordered.get(0)); - assertEquals(0L, orderedRecords.get(0).getValue().getCommitOrder()); + assertEquals(1, orderedRecords.size()); + assertEquals(tx00, orderedRecords.get(0).getId()); + assertEquals(0L, orderedRecords.get(0).getCommitOrder().longValue()); // verify that positions has 3 new entries added though positions = streamObj.getAllOrderedCommittingTxns(context).join(); @@ -1090,7 +1087,7 @@ public void txnOrderTest() throws Exception { assertEquals(positions.get(6L), tx12); VersionedMetadata record = store.startCommitTransactions(scope, stream, 100, - null, executor).join(); + null, executor).join().getKey(); // verify that after including transaction tx00 in the record, we no longer keep its reference in the ordered positions = streamObj.getAllOrderedCommittingTxns(context).join(); @@ -1107,22 +1104,21 @@ public void txnOrderTest() throws Exception { assertEquals(0, record.getObject().getEpoch()); assertEquals(1, activeEpoch.getEpoch()); // also, transactions to commit match transactions in lowest epoch - assertEquals(record.getObject().getTransactionsToCommit(), ordered); + assertEquals(record.getObject().getTransactionsToCommit(), orderedRecords.stream().map(x -> x.getId()).collect(Collectors.toList())); record = store.startRollingTxn(scope, stream, activeEpoch.getEpoch(), record, null, executor).join(); store.rollingTxnCreateDuplicateEpochs(scope, stream, Collections.emptyMap(), System.currentTimeMillis(), record, null, executor).join(); store.completeRollingTxn(scope, stream, Collections.emptyMap(), record, null, executor).join(); - store.completeCommitTransactions(scope, stream, record, null, executor).join(); + store.completeCommitTransactions(scope, stream, record, null, executor, Collections.emptyMap()).join(); store.setState(scope, stream, State.ACTIVE, null, executor).join(); // after committing, we should have committed tx00 while having purged references for tx01 and tx02 // getting ordered list should return txn on epoch 1 in the order in which we issued commits orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(100, context).join(); - ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); - assertEquals(3, ordered.size()); - assertEquals(tx10, ordered.get(0)); - assertEquals(tx11, ordered.get(1)); - assertEquals(tx12, ordered.get(2)); + assertEquals(3, orderedRecords.size()); + assertEquals(tx10, orderedRecords.get(0).getId()); + assertEquals(tx11, orderedRecords.get(1).getId()); + assertEquals(tx12, orderedRecords.get(2).getId()); // verify that transactions are still present in position positions = streamObj.getAllOrderedCommittingTxns(context).join(); @@ -1134,26 +1130,26 @@ record = store.startRollingTxn(scope, stream, activeEpoch.getEpoch(), record, nu // we will issue next round of commit, which will commit txns on epoch 1. 
activeEpoch = store.getActiveEpoch(scope, stream, null, true, executor).join(); - record = store.startCommitTransactions(scope, stream, 100, null, executor).join(); + record = store.startCommitTransactions(scope, stream, 100, null, executor).join().getKey(); + List txnIdList = orderedRecords.stream().map(x -> x.getId()).collect(Collectors.toList()); // verify that the order in record is same - assertEquals(record.getObject().getTransactionsToCommit(), ordered); + assertEquals(record.getObject().getTransactionsToCommit(), txnIdList); // verify that transactions included for commit are removed from positions. positions = streamObj.getAllOrderedCommittingTxns(context).join(); assertEquals(1, positions.size()); assertEquals(positions.get(3L), tx02); - assertEquals(record.getObject().getTransactionsToCommit(), ordered); + assertEquals(record.getObject().getTransactionsToCommit(), txnIdList); store.setState(scope, stream, State.COMMITTING_TXN, null, executor).join(); // verify that it is committing transactions on epoch 1 - store.completeCommitTransactions(scope, stream, record, null, executor).join(); + store.completeCommitTransactions(scope, stream, record, null, executor, Collections.emptyMap()).join(); store.setState(scope, stream, State.ACTIVE, null, executor).join(); // references for tx00 should be removed from orderer orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(100, context).join(); - ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); - assertEquals(0, ordered.size()); + assertEquals(0, orderedRecords.size()); // verify that only reference to the open transaction is retained in position positions = streamObj.getAllOrderedCommittingTxns(context).join(); @@ -1174,11 +1170,6 @@ public void txnCommitBatchLimitTest() throws Exception { store.createStream(scope, stream, configuration, start, null, executor).get(); store.setState(scope, stream, State.ACTIVE, null, executor).get(); - long scaleTs = System.currentTimeMillis(); - SimpleEntry segment2 = new SimpleEntry<>(0.5, 0.75); - SimpleEntry segment3 = new SimpleEntry<>(0.75, 1.0); - List scale1SealedSegments = Collections.singletonList(1L); - // create 3 transactions on epoch 0 --> tx00, tx01, tx02 and mark them as committing.. 
UUID tx00 = store.generateTransactionId(scope, stream, null, executor).join(); store.createTransaction(scope, stream, tx00, @@ -1201,29 +1192,25 @@ public void txnCommitBatchLimitTest() throws Exception { PersistentStreamBase streamObj = (PersistentStreamBase) ((AbstractStreamMetadataStore) store).getStream(scope, stream, null); StreamOperationContext context = new StreamOperationContext(((AbstractStreamMetadataStore) store).getScope(scope, null), streamObj, 0L); // verify that when we retrieve transactions from lowest epoch we get tx00, tx01 - List> orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(2, context).join(); - List ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); - assertEquals(2, ordered.size()); - assertEquals(tx00, ordered.get(0)); - assertEquals(tx01, ordered.get(1)); + List orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(2, context).join(); + assertEquals(2, orderedRecords.size()); + assertEquals(tx00, orderedRecords.get(0).getId()); + assertEquals(tx01, orderedRecords.get(1).getId()); orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(1000, context).join(); - ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); - assertEquals(3, ordered.size()); - assertEquals(tx00, ordered.get(0)); - assertEquals(tx01, ordered.get(1)); - assertEquals(tx02, ordered.get(2)); + assertEquals(3, orderedRecords.size()); + assertEquals(tx00, orderedRecords.get(0).getId()); + assertEquals(tx01, orderedRecords.get(1).getId()); + assertEquals(tx02, orderedRecords.get(2).getId()); // commit tx00 and tx01 ((AbstractStreamMetadataStore) store).commitTransaction(scope, stream, tx00, null, executor).join(); ((AbstractStreamMetadataStore) store).commitTransaction(scope, stream, tx01, null, executor).join(); + streamObj.removeTxnsFromCommitOrder(List.of(0L, 1L), context); orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(1000, context).join(); - ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); - assertEquals(1, ordered.size()); - assertEquals(tx02, ordered.get(0)); - - // scale and create transaction on new epoch too. 
+ assertEquals(1, orderedRecords.size()); + assertEquals(tx02, orderedRecords.get(0).getId()); } @Test @@ -1265,11 +1252,10 @@ public void txnCommitBatchLimitMaxLimitExceedingTest() throws Exception { "", Long.MIN_VALUE, null, executor).get(); // verify that when we retrieve transactions from lowest epoch we get tx00, tx01 - List> orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(2, context).join(); - List ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); + List ordered = streamObj.getOrderedCommittingTxnInLowestEpoch(2, context).join(); assertEquals(2, ordered.size()); - assertEquals(tx00, ordered.get(0)); - assertEquals(tx01, ordered.get(1)); + assertEquals(tx00, ordered.get(0).getId()); + assertEquals(tx01, ordered.get(1).getId()); } @Test @@ -1284,29 +1270,29 @@ public void txnCommitBatchLimitOrderTest() throws Exception { store.createStream(scope, stream, configuration, start, null, executor).get(); store.setState(scope, stream, State.ACTIVE, null, executor).get(); - + + PersistentStreamBase streamObj = (PersistentStreamBase) ((AbstractStreamMetadataStore) store).getStream(scope, stream, null); + OperationContext context = new StreamOperationContext(((AbstractStreamMetadataStore) store).getScope(scope, null), streamObj, 0L); + // create 3 transactions on epoch 0 --> tx00, tx01, tx02 and mark them as committing.. List txns = new ArrayList<>(); for (int i = 0; i < 100; i++) { - UUID tx = store.generateTransactionId(scope, stream, null, executor).join(); + UUID tx = store.generateTransactionId(scope, stream, context, executor).join(); store.createTransaction(scope, stream, tx, - 100, 100, null, executor).join(); + 100, 100, context, executor).join(); store.sealTransaction(scope, stream, tx, true, Optional.empty(), - "", Long.MIN_VALUE, null, executor).join(); + "", Long.MIN_VALUE, context, executor).join(); txns.add(tx); } - PersistentStreamBase streamObj = (PersistentStreamBase) ((AbstractStreamMetadataStore) store).getStream(scope, stream, null); - - OperationContext context = new StreamOperationContext(((AbstractStreamMetadataStore) store).getScope(scope, null), streamObj, 0L); while (!txns.isEmpty()) { int limit = 5; - List> orderedRecords = streamObj.getOrderedCommittingTxnInLowestEpoch(limit, context).join(); - List ordered = orderedRecords.stream().map(Map.Entry::getKey).collect(Collectors.toList()); + List ordered = streamObj.getOrderedCommittingTxnInLowestEpoch(limit, context).join(); assertEquals(limit, ordered.size()); for (int i = 0; i < limit; i++) { - assertEquals(txns.remove(0), ordered.get(i)); - ((AbstractStreamMetadataStore) store).commitTransaction(scope, stream, ordered.get(i), null, executor).join(); + assertEquals(txns.remove(0), ordered.get(i).getId()); + ((AbstractStreamMetadataStore) store).commitTransaction(scope, stream, ordered.get(i).getId(), context, executor).join(); + streamObj.removeTxnsFromCommitOrder(ordered.stream().map(txn -> txn.getCommitOrder()).collect(Collectors.toList()), context); } } } @@ -1955,9 +1941,10 @@ public void testMarkOnTransactionCommit() { String writer1 = "writer1"; long time = 1L; store.sealTransaction(scope, stream, txnId, true, Optional.of(tx01.getVersion()), writer1, time, null, executor).join(); - VersionedMetadata record = store.startCommitTransactions(scope, stream, 100, null, executor).join(); - store.recordCommitOffsets(scope, stream, txnId, Collections.singletonMap(0L, 1L), null, executor).join(); - store.completeCommitTransactions(scope, stream, record, null, 
executor).join(); + VersionedMetadata record = store.startCommitTransactions(scope, stream, 100, null, executor).join().getKey(); + store.completeCommitTransactions(scope, stream, record, null, executor, + Collections.singletonMap(writer1, + new TxnWriterMark(time, Collections.singletonMap(0L, 1L), txnId))).join(); // verify that writer mark is created in the store WriterMark mark = store.getWriterMark(scope, stream, writer1, null, executor).join(); diff --git a/controller/src/test/java/io/pravega/controller/store/stream/StreamTest.java b/controller/src/test/java/io/pravega/controller/store/stream/StreamTest.java index d176c263710..4b25be0094e 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/StreamTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/StreamTest.java @@ -63,7 +63,7 @@ public class StreamTest extends ThreadPooledTestSuite { @ClassRule public static final PravegaZkCuratorResource PRAVEGA_ZK_CURATOR_RESOURCE = new PravegaZkCuratorResource(RETRY_POLICY); @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private ZkOrderedStore orderer; @Override diff --git a/controller/src/test/java/io/pravega/controller/store/stream/StreamTestBase.java b/controller/src/test/java/io/pravega/controller/store/stream/StreamTestBase.java index ee2cc3b3955..9f66b586fb6 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/StreamTestBase.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/StreamTestBase.java @@ -147,17 +147,16 @@ private UUID createAndCommitTransaction(Stream stream, int msb, long lsb) { private void rollTransactions(Stream stream, long time, int epoch, int activeEpoch, Map txnSizeMap, Map activeSizeMap) { OperationContext context = getContext(); - stream.startCommittingTransactions(100, context) .thenCompose(ctr -> stream.getVersionedState(context) .thenCompose(state -> stream.updateVersionedState(state, State.COMMITTING_TXN, context)) - .thenCompose(state -> stream.startRollingTxn(activeEpoch, ctr, context) + .thenCompose(state -> stream.startRollingTxn(activeEpoch, ctr.getKey(), context) .thenCompose(ctr2 -> stream.rollingTxnCreateDuplicateEpochs( txnSizeMap, time, ctr2, context) .thenCompose(v -> stream.completeRollingTxn(activeSizeMap, ctr2, context)) - .thenCompose(v -> stream.completeCommittingTransactions(ctr2, context)) + .thenCompose(v -> stream.completeCommittingTransactions(ctr2, context, Collections.emptyMap())) ))) .thenCompose(x -> stream.updateState(State.ACTIVE, context)).join(); } @@ -712,7 +711,7 @@ public void segmentQueriesDuringRollingTxn() { List activeSegmentsBefore = stream.getActiveSegments(context).join(); // start commit transactions - VersionedMetadata ctr = stream.startCommittingTransactions(100, context).join(); + VersionedMetadata ctr = stream.startCommittingTransactions(100, context).join().getKey(); stream.getVersionedState(context).thenCompose(s -> stream.updateVersionedState(s, State.COMMITTING_TXN, context)).join(); // start rolling transaction @@ -749,7 +748,7 @@ public void segmentQueriesDuringRollingTxn() { startingSegmentNumber + 5, 3))); stream.completeRollingTxn(Collections.emptyMap(), ctr, context).join(); - stream.completeCommittingTransactions(ctr, context).join(); + stream.completeCommittingTransactions(ctr, context, Collections.emptyMap()).join(); } @Test(timeout = 30000L) @@ -1615,16 +1614,17 @@ public void testTransactionMark() { String writer1 = "writer1"; long 
time = 1L; streamObj.sealTransaction(txnId, true, Optional.of(tx01.getVersion()), writer1, time, context).join(); - VersionedMetadata record = streamObj.startCommittingTransactions(100, context).join(); - streamObj.recordCommitOffsets(txnId, Collections.singletonMap(0L, 1L), context).join(); - streamObj.generateMarksForTransactions(record.getObject(), context).join(); + streamObj.startCommittingTransactions(100, context).join(); + TxnWriterMark writerMarks = new TxnWriterMark(time, Collections.singletonMap(0L, 1L), txnId); + Map marksForWriters = Collections.singletonMap(writer1, writerMarks); + streamObj.generateMarksForTransactions(context, marksForWriters).join(); // verify that writer mark is created in the store WriterMark mark = streamObj.getWriterMark(writer1, context).join(); assertEquals(mark.getTimestamp(), time); // idempotent call to generateMarksForTransactions - streamObj.generateMarksForTransactions(record.getObject(), context).join(); + streamObj.generateMarksForTransactions(context, marksForWriters).join(); mark = streamObj.getWriterMark(writer1, context).join(); assertEquals(mark.getTimestamp(), time); @@ -1633,9 +1633,10 @@ public void testTransactionMark() { AssertExtensions.assertFutureThrows("", streamObj.getActiveTx(0, txnId, context), e -> Exceptions.unwrap(e) instanceof StoreException.DataNotFoundException); - streamObj.generateMarksForTransactions(record.getObject(), context).join(); + streamObj.generateMarksForTransactions(context, marksForWriters).join(); mark = streamObj.getWriterMark(writer1, context).join(); assertEquals(mark.getTimestamp(), time); + } @Test(timeout = 30000L) @@ -1664,12 +1665,12 @@ public void testTransactionMarkFromSingleWriter() { VersionedTransactionData tx04 = streamObj.createTransaction(txnId4, 100, 100, context).join(); streamObj.sealTransaction(txnId4, true, Optional.of(tx04.getVersion()), writer, time + 4L, context).join(); - VersionedMetadata record = streamObj.startCommittingTransactions(100, context).join(); - streamObj.recordCommitOffsets(txnId1, Collections.singletonMap(0L, 1L), context).join(); - streamObj.recordCommitOffsets(txnId2, Collections.singletonMap(0L, 2L), context).join(); - streamObj.recordCommitOffsets(txnId3, Collections.singletonMap(0L, 3L), context).join(); - streamObj.recordCommitOffsets(txnId4, Collections.singletonMap(0L, 4L), context).join(); - streamObj.generateMarksForTransactions(record.getObject(), context).join(); + streamObj.startCommittingTransactions(100, context).join(); + TxnWriterMark writerMarks = new TxnWriterMark(time + 4L, + Collections.singletonMap(0L, 1L), txnId4); + Map marksForWriters = Collections.singletonMap(writer, writerMarks); + + streamObj.generateMarksForTransactions(context, marksForWriters).join(); // verify that writer mark is created in the store WriterMark mark = streamObj.getWriterMark(writer, context).join(); diff --git a/controller/src/test/java/io/pravega/controller/store/stream/TestStreamStoreFactory.java b/controller/src/test/java/io/pravega/controller/store/stream/TestStreamStoreFactory.java index 6a4017fa127..abe5cfec59c 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/TestStreamStoreFactory.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/TestStreamStoreFactory.java @@ -11,6 +11,8 @@ import com.google.common.annotations.VisibleForTesting; import io.pravega.controller.store.PravegaTablesStoreHelper; +import io.pravega.controller.store.kvtable.KVTableMetadataStore; +import 
io.pravega.controller.store.kvtable.PravegaTablesKVTMetadataStore; import io.pravega.controller.util.Config; import org.apache.curator.framework.CuratorFramework; @@ -19,8 +21,15 @@ public class TestStreamStoreFactory { @VisibleForTesting - public static StreamMetadataStore createPravegaTablesStore(final CuratorFramework client, - final ScheduledExecutorService executor, PravegaTablesStoreHelper helper) { + public static StreamMetadataStore createPravegaTablesStreamStore(final CuratorFramework client, + final ScheduledExecutorService executor, PravegaTablesStoreHelper helper) { return new PravegaTablesStreamMetadataStore(client, executor, Duration.ofHours(Config.COMPLETED_TRANSACTION_TTL_IN_HOURS), helper); } + + @VisibleForTesting + public static KVTableMetadataStore createPravegaTablesKVStore(final CuratorFramework client, + final ScheduledExecutorService executor, PravegaTablesStoreHelper helper) { + return new PravegaTablesKVTMetadataStore(client, executor, helper); + } + } diff --git a/controller/src/test/java/io/pravega/controller/store/stream/ZKCounterTest.java b/controller/src/test/java/io/pravega/controller/store/stream/ZKCounterTest.java index 7c25d7f6165..4f3b0942e8d 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/ZKCounterTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/ZKCounterTest.java @@ -51,7 +51,7 @@ */ public class ZKCounterTest { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private TestingServer zkServer; private CuratorFramework cli; private ScheduledExecutorService executor; diff --git a/controller/src/test/java/io/pravega/controller/store/stream/ZkGarbageCollectorTest.java b/controller/src/test/java/io/pravega/controller/store/stream/ZkGarbageCollectorTest.java index cb823e734e2..f76eda33ad9 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/ZkGarbageCollectorTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/ZkGarbageCollectorTest.java @@ -43,7 +43,7 @@ public class ZkGarbageCollectorTest extends ThreadPooledTestSuite { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private TestingServer zkServer; private CuratorFramework cli; diff --git a/controller/src/test/java/io/pravega/controller/store/stream/ZkStreamTest.java b/controller/src/test/java/io/pravega/controller/store/stream/ZkStreamTest.java index 72a2ea8ecef..de52626103c 100644 --- a/controller/src/test/java/io/pravega/controller/store/stream/ZkStreamTest.java +++ b/controller/src/test/java/io/pravega/controller/store/stream/ZkStreamTest.java @@ -47,6 +47,7 @@ import java.util.concurrent.TimeUnit; import java.util.function.Predicate; import java.util.stream.Collectors; +import lombok.Cleanup; import org.apache.curator.framework.CuratorFramework; import org.apache.curator.framework.CuratorFrameworkFactory; import org.apache.curator.retry.RetryOneTime; @@ -73,7 +74,7 @@ public class ZkStreamTest { private static final String SCOPE = "scope"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private TestingServer zkTestServer; private CuratorFramework cli; private StreamMetadataStore storePartialMock; @@ -119,6 +120,7 @@ public void testZkConnectionLoss() throws Exception { public void testCreateStreamState() throws Exception { final 
ScalingPolicy policy = ScalingPolicy.fixed(5); + @Cleanup final StreamMetadataStore store = new ZKStreamMetadataStore(cli, executor); final String streamName = "testfail"; @@ -141,6 +143,7 @@ public void testCreateStreamState() throws Exception { public void testZkCreateScope() throws Exception { // create new scope test + @Cleanup final StreamMetadataStore store = new ZKStreamMetadataStore(cli, executor); final String scopeName = "Scope1"; CompletableFuture createScopeStatus = store.createScope(scopeName, null, executor); @@ -174,6 +177,7 @@ public void testZkCreateScope() throws Exception { @Test public void testZkDeleteScope() throws Exception { // create new scope + @Cleanup final StreamMetadataStore store = new ZKStreamMetadataStore(cli, executor); final String scopeName = "Scope1"; store.createScope(scopeName, null, executor).get(); @@ -203,6 +207,7 @@ public void testZkDeleteScope() throws Exception { @Test public void testGetScope() throws Exception { + @Cleanup final StreamMetadataStore store = new ZKStreamMetadataStore(cli, executor); final String scope1 = "Scope1"; final String scope2 = "Scope2"; @@ -224,6 +229,7 @@ public void testGetScope() throws Exception { @Test public void testZkListScope() throws Exception { // list scope test + @Cleanup final StreamMetadataStore store = new ZKStreamMetadataStore(cli, executor); store.createScope("Scope1", null, executor).get(); store.createScope("Scope2", null, executor).get(); @@ -242,6 +248,7 @@ public void testZkStream() throws Exception { double keyChunk = 1.0 / 5; final ScalingPolicy policy = ScalingPolicy.fixed(5); + @Cleanup final StreamMetadataStore store = new ZKStreamMetadataStore(cli, executor); final String streamName = "test"; store.createScope(SCOPE, null, executor).get(); diff --git a/controller/src/test/java/io/pravega/controller/task/KeyValueTable/TableMetadataTasksTest.java b/controller/src/test/java/io/pravega/controller/task/KeyValueTable/TableMetadataTasksTest.java index 6b4151eccc2..1f82bf3431c 100644 --- a/controller/src/test/java/io/pravega/controller/task/KeyValueTable/TableMetadataTasksTest.java +++ b/controller/src/test/java/io/pravega/controller/task/KeyValueTable/TableMetadataTasksTest.java @@ -67,7 +67,7 @@ public abstract class TableMetadataTasksTest { protected static final String SCOPE = "taskscope"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected StreamMetadataStore streamStore; protected KVTableMetadataStore kvtStore; protected TableMetadataTasks kvtMetadataTasks; diff --git a/controller/src/test/java/io/pravega/controller/task/Stream/IntermittentCnxnFailureTest.java b/controller/src/test/java/io/pravega/controller/task/Stream/IntermittentCnxnFailureTest.java index 4b092206104..8cef30586b2 100644 --- a/controller/src/test/java/io/pravega/controller/task/Stream/IntermittentCnxnFailureTest.java +++ b/controller/src/test/java/io/pravega/controller/task/Stream/IntermittentCnxnFailureTest.java @@ -83,7 +83,7 @@ public class IntermittentCnxnFailureTest { public static final PravegaZkCuratorResource PRAVEGA_ZK_CURATOR_RESOURCE = new PravegaZkCuratorResource(); private static final String SCOPE = "scope"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); private final String stream1 = "stream1"; private final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); @@ -196,7 +196,7 @@ 
public void createStreamTest() throws Exception { // Mock createSegment to return success. doReturn(CompletableFuture.completedFuture(true)).when(segmentHelperMock).createSegment( - anyString(), anyString(), anyInt(), any(), any(), anyLong()); + anyString(), anyString(), anyInt(), any(), any(), anyLong(), anyLong()); AtomicBoolean result = new AtomicBoolean(false); Retry.withExpBackoff(10, 10, 4) diff --git a/controller/src/test/java/io/pravega/controller/task/Stream/RequestSweeperTest.java b/controller/src/test/java/io/pravega/controller/task/Stream/RequestSweeperTest.java index 4305df3f410..a79b9db9e9e 100644 --- a/controller/src/test/java/io/pravega/controller/task/Stream/RequestSweeperTest.java +++ b/controller/src/test/java/io/pravega/controller/task/Stream/RequestSweeperTest.java @@ -80,7 +80,7 @@ public abstract class RequestSweeperTest { public static final PravegaZkCuratorResource PRAVEGA_ZK_CURATOR_RESOURCE = new PravegaZkCuratorResource(RETRY_POLICY); @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); private final String stream1 = "stream1"; diff --git a/controller/src/test/java/io/pravega/controller/task/Stream/StreamMetadataTasksTest.java b/controller/src/test/java/io/pravega/controller/task/Stream/StreamMetadataTasksTest.java index 390dcd1033f..b43151e847c 100644 --- a/controller/src/test/java/io/pravega/controller/task/Stream/StreamMetadataTasksTest.java +++ b/controller/src/test/java/io/pravega/controller/task/Stream/StreamMetadataTasksTest.java @@ -16,10 +16,12 @@ package io.pravega.controller.task.Stream; import com.google.common.collect.ImmutableMap; +import com.google.common.collect.ImmutableSet; import com.google.common.collect.Lists; import io.pravega.client.ClientConfig; import io.pravega.client.connection.impl.ConnectionFactory; import io.pravega.client.connection.impl.SocketConnectionFactoryImpl; +import io.pravega.client.control.impl.ModelHelper; import io.pravega.client.segment.impl.Segment; import io.pravega.client.stream.EventStreamWriter; import io.pravega.client.stream.EventWriterConfig; @@ -96,6 +98,10 @@ import io.pravega.shared.controller.event.SealStreamEvent; import io.pravega.shared.controller.event.TruncateStreamEvent; import io.pravega.shared.controller.event.UpdateStreamEvent; +import io.pravega.shared.controller.event.CreateReaderGroupEvent; +import io.pravega.shared.controller.event.UpdateReaderGroupEvent; +import io.pravega.shared.controller.event.DeleteReaderGroupEvent; +import io.pravega.shared.controller.event.RGStreamCutRecord; import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.TestingServerStarter; import java.time.Duration; @@ -120,6 +126,7 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.function.Supplier; import java.util.stream.Collectors; +import lombok.Cleanup; import lombok.Data; import lombok.Getter; import org.apache.commons.lang3.NotImplementedException; @@ -135,6 +142,7 @@ import org.mockito.Mock; import static io.pravega.shared.NameUtils.computeSegmentId; +import static io.pravega.test.common.AssertExtensions.assertFutureThrows; import static org.junit.Assert.*; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyLong; @@ -146,7 +154,7 @@ public abstract class StreamMetadataTasksTest { private static final String SCOPE = "scope"; @Rule - public 
Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); protected boolean authEnabled = false; protected CuratorFramework zkClient; @@ -279,6 +287,7 @@ public void testeventHelperNPE() throws Exception { EventHelper helper = EventHelperMock.getEventHelperMock(executor, "host", ((AbstractStreamMetadataStore) streamMetadataStore).getHostTaskIndex()); + @Cleanup StreamMetadataTasks streamMetadataTasks = new StreamMetadataTasks(streamMetadataStore, bucketStore, taskMetadataStore, segmentHelperMock, executor, "host", new GrpcAuthHelper(authEnabled, "key", 300), helper); @@ -681,6 +690,43 @@ public void readerGroupsTest() throws InterruptedException, ExecutionException { assertEquals(subscriberToNonSubscriberConfig.getEndingStreamCuts().size(), responseRG3.getConfig().getEndingStreamCutsCount()); } + @Test(timeout = 30000) + public void readerGroupFailureTests() throws InterruptedException { + WriterMock requestEventWriter = new WriterMock(streamMetadataTasks, executor); + streamMetadataTasks.setRequestEventWriter(requestEventWriter); + UpdateReaderGroupEvent badUpdateEvent = new UpdateReaderGroupEvent(SCOPE, "rg3", 2L, UUID.randomUUID(), 0L, false, ImmutableSet.of()); + requestEventWriter.writeEvent(badUpdateEvent); + AssertExtensions.assertFutureThrows("DataNotFoundException", processFailingEvent(requestEventWriter), e -> Exceptions.unwrap(e) instanceof StoreException.DataNotFoundException); + + String scopedStreamName = "scope/stream"; + ReaderGroupConfig rgConf = ReaderGroupConfig.builder().disableAutomaticCheckpoints() + .stream(scopedStreamName) + .retentionType(ReaderGroupConfig.StreamDataRetention.NONE) + .build(); + CreateReaderGroupEvent badCreateEvent = buildCreateRGEvent(SCOPE, "rg", rgConf, 1L, System.currentTimeMillis()); + + requestEventWriter.writeEvent(badCreateEvent); + AssertExtensions.assertFutureThrows("DataNotFoundException", processFailingEvent(requestEventWriter), e -> Exceptions.unwrap(e) instanceof StoreException.DataNotFoundException); + + DeleteReaderGroupEvent badDeleteEvent = new DeleteReaderGroupEvent(SCOPE, "rg3", 1L, UUID.randomUUID()); + requestEventWriter.writeEvent(badDeleteEvent); + AssertExtensions.assertFutureThrows("DataNotFoundException", processFailingEvent(requestEventWriter), e -> Exceptions.unwrap(e) instanceof StoreException.DataNotFoundException); + } + + private CreateReaderGroupEvent buildCreateRGEvent(String scope, String rgName, ReaderGroupConfig config, + final long requestId, final long createTimestamp) { + Map startStreamCuts = config.getStartingStreamCuts().entrySet().stream() + .collect(Collectors.toMap(e -> e.getKey().getScopedName(), + e -> new RGStreamCutRecord(ImmutableMap.copyOf(ModelHelper.getStreamCutMap(e.getValue()))))); + Map endStreamCuts = config.getEndingStreamCuts().entrySet().stream() + .collect(Collectors.toMap(e -> e.getKey().getScopedName(), + e -> new RGStreamCutRecord(ImmutableMap.copyOf(ModelHelper.getStreamCutMap(e.getValue()))))); + return new CreateReaderGroupEvent(requestId, scope, rgName, config.getGroupRefreshTimeMillis(), + config.getAutomaticCheckpointIntervalMillis(), config.getMaxOutstandingCheckpointRequest(), + config.getRetentionType().ordinal(), config.getGeneration(), config.getReaderGroupId(), + startStreamCuts, endStreamCuts, createTimestamp); + } + @Test(timeout = 30000) public void updateSubscriberStreamCutTest() throws 
InterruptedException, ExecutionException { final String stream1ScopedName = NameUtils.getScopedStreamName(SCOPE, stream1); @@ -2429,13 +2475,13 @@ public void sealStreamWithTxnTest() throws Exception { streamStorePartialMock.setState(SCOPE, streamWithTxn, State.ACTIVE, null, executor).get(); // create txn - VersionedTransactionData openTxn = streamTransactionMetadataTasks.createTxn(SCOPE, streamWithTxn, 10000L, 0L) + VersionedTransactionData openTxn = streamTransactionMetadataTasks.createTxn(SCOPE, streamWithTxn, 10000L, 0L, 1024 * 1024L) .get().getKey(); - VersionedTransactionData committingTxn = streamTransactionMetadataTasks.createTxn(SCOPE, streamWithTxn, 10000L, 0L) + VersionedTransactionData committingTxn = streamTransactionMetadataTasks.createTxn(SCOPE, streamWithTxn, 10000L, 0L, 1024 * 1024L) .get().getKey(); - VersionedTransactionData abortingTxn = streamTransactionMetadataTasks.createTxn(SCOPE, streamWithTxn, 10000L, 0L) + VersionedTransactionData abortingTxn = streamTransactionMetadataTasks.createTxn(SCOPE, streamWithTxn, 10000L, 0L, 1024 * 1024L) .get().getKey(); // set transaction to committing @@ -2467,7 +2513,8 @@ public void sealStreamWithTxnTest() throws Exception { List abortListBefore = abortWriter.getEventList(); streamMetadataTasks.sealStream(SCOPE, streamWithTxn, 0L); - processEvent(requestEventWriter).join(); + AssertExtensions.assertFutureThrows("seal stream did not fail processing with correct exception", + processEvent(requestEventWriter), e -> Exceptions.unwrap(e) instanceof StoreException.OperationNotAllowedException); requestEventWriter.eventQueue.take(); reset(streamStorePartialMock); @@ -2475,7 +2522,7 @@ public void sealStreamWithTxnTest() throws Exception { // verify that the txn status is set to aborting VersionedTransactionData txnData = streamStorePartialMock.getTransactionData(SCOPE, streamWithTxn, openTxn.getId(), null, executor).join(); assertEquals(txnData.getStatus(), TxnStatus.ABORTING); - assertEquals(0, requestEventWriter.getEventQueue().size()); + assertEquals(requestEventWriter.getEventQueue().size(), 1); // verify that events are posted for the abort txn. List abortListAfter = abortWriter.getEventList(); @@ -2494,6 +2541,9 @@ public void sealStreamWithTxnTest() throws Exception { doReturn(CompletableFuture.completedFuture(retVal)).when(streamStorePartialMock).getActiveTxns( eq(SCOPE), eq(streamWithTxn), any(), any()); + AssertExtensions.assertFutureThrows("seal stream did not fail processing with correct exception", + processEvent(requestEventWriter), e -> Exceptions.unwrap(e) instanceof StoreException.OperationNotAllowedException); + reset(streamStorePartialMock); // Now complete all existing transactions and verify that seal completes @@ -2503,6 +2553,7 @@ public void sealStreamWithTxnTest() throws Exception { activeTxns = streamStorePartialMock.getActiveTxns(SCOPE, streamWithTxn, null, executor).join(); assertTrue(activeTxns.isEmpty()); + assertTrue(Futures.await(processEvent(requestEventWriter))); // endregion } @@ -2779,7 +2830,7 @@ public void checkUpdateCompleteTest() throws ExecutionException, InterruptedExce streamStorePartialMock.setState(SCOPE, test, State.ACTIVE, null, executor).join(); assertTrue(streamMetadataTasks.isUpdated(SCOPE, test, configuration2, null).get()); - // start next update with different configuration. + // start next update with different configuration. 
final StreamConfiguration configuration3 = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build(); streamMetadataTasks.updateStream(SCOPE, test, configuration3, 0L); Futures.loop(configUpdated, () -> Futures.delayedFuture(Duration.ofMillis(100), executor), executor).join(); @@ -2787,8 +2838,31 @@ public void checkUpdateCompleteTest() throws ExecutionException, InterruptedExce streamStorePartialMock.setState(SCOPE, test, State.UPDATING, null, executor).join(); // we should still get complete for previous configuration we attempted to update assertTrue(streamMetadataTasks.isUpdated(SCOPE, test, configuration2, null).get()); - + assertFalse(streamMetadataTasks.isUpdated(SCOPE, test, configuration3, null).get()); + + // test update on a sealed stream + String testStream = "testUpdateSealed"; + streamStorePartialMock.createStream(SCOPE, testStream, configuration, System.currentTimeMillis(), null, executor).get(); + streamStorePartialMock.setState(SCOPE, testStream, State.ACTIVE, null, executor).get(); + streamMetadataTasks.setRequestEventWriter(new EventStreamWriterMock<>()); + final StreamConfiguration configuration4 = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(4)).build(); + streamMetadataTasks.updateStream(SCOPE, testStream, configuration4, 0L); + + // wait till configuration is updated + configUpdated = () -> !streamStorePartialMock.getConfigurationRecord(SCOPE, testStream, null, + executor).join().getObject().isUpdating(); + Futures.loop(configUpdated, () -> Futures.delayedFuture(Duration.ofMillis(100), executor), executor).join(); + + configurationRecord = streamStorePartialMock.getConfigurationRecord(SCOPE, + testStream, null, executor).join(); + assertTrue(configurationRecord.getObject().isUpdating()); + streamStorePartialMock.completeUpdateConfiguration(SCOPE, testStream, configurationRecord, null, executor).join(); + + streamStorePartialMock.setState(SCOPE, testStream, State.SEALED, null, executor).join(); + assertFutureThrows("Should throw UnsupportedOperationException", + streamMetadataTasks.isUpdated(SCOPE, testStream, configuration4, null), + e -> UnsupportedOperationException.class.isAssignableFrom(e.getClass())); // end region } @@ -2836,6 +2910,30 @@ public void checkTruncateCompleteTest() throws ExecutionException, InterruptedEx // we should still get complete for previous configuration we attempted to update assertTrue(streamMetadataTasks.isTruncated(SCOPE, test, map, null).get()); assertFalse(streamMetadataTasks.isTruncated(SCOPE, test, map2, null).get()); + + // test truncate on a sealed stream + String testStream = "testTruncateSealed"; + streamStorePartialMock.createStream(SCOPE, testStream, configuration, System.currentTimeMillis(), null, executor).get(); + streamStorePartialMock.setState(SCOPE, testStream, State.ACTIVE, null, executor).get(); + streamMetadataTasks.setRequestEventWriter(new EventStreamWriterMock<>()); + + // region truncate + map = Collections.singletonMap(0L, 1L); + streamMetadataTasks.truncateStream(SCOPE, testStream, map, 0L); + + truncationStarted = () -> !streamStorePartialMock.getTruncationRecord(SCOPE, testStream, null, + executor).join().getObject().isUpdating(); + Futures.loop(truncationStarted, () -> Futures.delayedFuture(Duration.ofMillis(100), executor), executor).join(); + + truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, testStream, + null, executor).join(); + assertTrue(truncationRecord.getObject().isUpdating()); + streamStorePartialMock.completeTruncation(SCOPE, testStream, 
truncationRecord, null, executor).join(); + + streamStorePartialMock.setState(SCOPE, testStream, State.SEALED, null, executor).join(); + assertFutureThrows("Should throw UnsupportedOperationException", + streamMetadataTasks.isTruncated(SCOPE, testStream, map, null), + e -> UnsupportedOperationException.class.isAssignableFrom(e.getClass())); // end region } @@ -2883,9 +2981,10 @@ public void testAddIndexAndSubmitTask() { } @Test(timeout = 30000) - public void concurrentCreateStreamTest() { + public void concurrentCreateStreamTest() throws Exception { TaskMetadataStore taskMetadataStore = spy(TaskStoreFactory.createZKStore(zkClient, executor)); + @Cleanup StreamMetadataTasks metadataTask = new StreamMetadataTasks(streamStorePartialMock, bucketStore, taskMetadataStore, SegmentHelperMock.getSegmentHelperMock(), executor, "host", new GrpcAuthHelper(authEnabled, "key", 300)); @@ -3034,6 +3133,19 @@ private CompletableFuture processEvent(WriterMock requestEventWriter) thro }); } + private CompletableFuture processFailingEvent(WriterMock requestEventWriter) throws InterruptedException { + ControllerEvent event; + try { + event = requestEventWriter.getEventQueue().take(); + } catch (InterruptedException e) { + throw new RuntimeException(e); + } + return streamRequestHandler.processEvent(event) + .exceptionally(e -> { + throw new CompletionException(e); + }); + } + @Data public class WriterMock implements EventStreamWriter { private final StreamMetadataTasks streamMetadataTasks; diff --git a/controller/src/test/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasksTest.java b/controller/src/test/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasksTest.java index 1ea30de71e3..2768a9c2a9f 100644 --- a/controller/src/test/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasksTest.java +++ b/controller/src/test/java/io/pravega/controller/task/Stream/StreamTransactionMetadataTasksTest.java @@ -262,8 +262,8 @@ public void commitAbortTests() { // Create 2 transactions final long lease = 5000; - VersionedTransactionData txData1 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L).join().getKey(); - VersionedTransactionData txData2 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L).join().getKey(); + VersionedTransactionData txData1 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L, 1024 * 1024L).join().getKey(); + VersionedTransactionData txData2 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L, 1024 * 1024L).join().getKey(); // Commit the first one TxnStatus status = txnTasks.commitTxn(SCOPE, STREAM, txData1.getId(), 0L).join(); @@ -305,10 +305,10 @@ public void failOverTests() throws Exception { failedTxnTasks.initializeStreamWriters(new EventStreamWriterMock<>(), new EventStreamWriterMock<>()); // Create 3 transactions from failedHost. 
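The hunks that follow show the recurring `createTxn()` change in this patch: the method gains a fifth `long` argument. Given the large-appends theme and the matching extra `anyLong()` added to the `createSegment` mock earlier in these tests, this is presumably a rollover size in bytes for the Transaction's segments, with `0L` appearing to select the default. A minimal sketch of the new call shape, using `rolloverSizeBytes` as an assumed name for the parameter:

```java
// Sketch only: assumes the surrounding test fixtures (txnTasks, SCOPE, STREAM)
// seen in this diff. The fifth argument is the new one; its name here is assumed.
long lease = 10_000L;                    // transaction lease, in ms
long requestId = 0L;                     // request id, as elsewhere in these tests
long rolloverSizeBytes = 1024 * 1024L;   // assumed: txn segment rollover size; 0L = default

VersionedTransactionData txData =
        txnTasks.createTxn(SCOPE, STREAM, lease, requestId, rolloverSizeBytes)
                .join().getKey();
```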
- VersionedTransactionData tx1 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L).join().getKey(); - VersionedTransactionData tx2 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L).join().getKey(); - VersionedTransactionData tx3 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L).join().getKey(); - VersionedTransactionData tx4 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L).join().getKey(); + VersionedTransactionData tx1 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L, 0L).join().getKey(); + VersionedTransactionData tx2 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L, 0L).join().getKey(); + VersionedTransactionData tx3 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L, 0L).join().getKey(); + VersionedTransactionData tx4 = failedTxnTasks.createTxn(SCOPE, STREAM, 10000, 0L, 0L).join().getKey(); // Ping another txn from failedHost. PingTxnStatus pingStatus = failedTxnTasks.pingTxn(SCOPE, STREAM, tx4.getId(), 10000, 0L).join(); @@ -430,8 +430,8 @@ public void idempotentOperationsTests() throws CheckpointStoreException, Interru // Create 2 transactions final long lease = 5000; - VersionedTransactionData txData1 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L).join().getKey(); - VersionedTransactionData txData2 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L).join().getKey(); + VersionedTransactionData txData1 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L, 1024 * 1024L).join().getKey(); + VersionedTransactionData txData2 = txnTasks.createTxn(SCOPE, STREAM, lease, 0L, 1024 * 1024L).join().getKey(); UUID tx1 = txData1.getId(); UUID tx2 = txData2.getId(); @@ -508,7 +508,7 @@ public void partialTxnCreationTest() { final long lease = 10000; AssertExtensions.assertFutureThrows("Transaction creation fails, although a new txn id gets added to the store", - txnTasks.createTxn(SCOPE, STREAM, lease, 0L), + txnTasks.createTxn(SCOPE, STREAM, lease, 0L, 1024 * 1024L), e -> e instanceof RuntimeException); // Ensure that exactly one transaction is active on the stream. @@ -585,7 +585,7 @@ public CompletableFuture answer(InvocationOnMock invoc } }).when(streamStoreMock).createTransaction(any(), any(), any(), anyLong(), anyLong(), any(), any()); Pair> txn = txnTasks.createTxn(SCOPE, STREAM, 10000L, - 0L).join(); + 0L, 1024 * 1024L).join(); // verify that generate transaction id is called 3 times verify(streamStoreMock, times(3)).generateTransactionId(any(), any(), any(), any()); @@ -624,7 +624,7 @@ public void txnPingTest() throws Exception { // Verify Ping transaction on committing transaction. 
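A companion signature change appears a few lines below: `completeCommitTransactions()` now carries a map of per-writer marks, mirroring the `TxnWriterMark` plumbing added to the store tests earlier in this patch. A sketch of the non-empty form, reusing the shapes those tests use:

```java
// Sketch, with types as they appear in this diff: TxnWriterMark bundles one
// writer's (timestamp, segment -> offset map, txnId), so commit completion can
// record watermarking state in a single call rather than per-txn recordCommitOffsets().
Map<String, TxnWriterMark> marksForWriters = Collections.singletonMap(
        "writer1", new TxnWriterMark(time, Collections.singletonMap(0L, 1L), txnId));
store.completeCommitTransactions(scope, stream, record, null, executor, marksForWriters).join();
```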
Pair> txn = txnTasks.createTxn(SCOPE, STREAM, 10000L, - 0L).join(); + 0L, 0L).join(); UUID txnId = txn.getKey().getId(); txnTasks.commitTxn(SCOPE, STREAM, txnId, 0L).join(); assertEquals(PingTxnStatus.Status.COMMITTED, txnTasks.pingTxn(SCOPE, STREAM, txnId, 10000L, @@ -634,7 +634,7 @@ public void txnPingTest() throws Exception { streamStoreMock.startCommitTransactions(SCOPE, STREAM, 100, null, executor).join(); val record = streamStoreMock.getVersionedCommittingTransactionsRecord( SCOPE, STREAM, null, executor).join(); - streamStoreMock.completeCommitTransactions(SCOPE, STREAM, record, null, executor).join(); + streamStoreMock.completeCommitTransactions(SCOPE, STREAM, record, null, executor, Collections.emptyMap()).join(); // verify that transaction is removed from active txn AssertExtensions.assertFutureThrows("Fetching Active Txn record should throw DNF", @@ -645,7 +645,7 @@ val record = streamStoreMock.getVersionedCommittingTransactionsRecord( 0L).join().getStatus()); // Verify Ping transaction on an aborting transaction. - txn = txnTasks.createTxn(SCOPE, STREAM, 10000L, 0L).join(); + txn = txnTasks.createTxn(SCOPE, STREAM, 10000L, 0L, 1024 * 1024L).join(); txnId = txn.getKey().getId(); txnTasks.abortTxn(SCOPE, STREAM, txnId, null, 0L).join(); assertEquals(PingTxnStatus.Status.ABORTED, txnTasks.pingTxn(SCOPE, STREAM, txnId, 10000L, @@ -665,7 +665,7 @@ val record = streamStoreMock.getVersionedCommittingTransactionsRecord( // Verify max execution time. txnTasks.setMaxExecutionTime(1L); - txn = txnTasks.createTxn(SCOPE, STREAM, 10000L, 0L).join(); + txn = txnTasks.createTxn(SCOPE, STREAM, 10000L, 0L, 1024 * 1024L).join(); UUID tid = txn.getKey().getId(); AssertExtensions.assertEventuallyEquals(PingTxnStatus.Status.MAX_EXECUTION_TIME_EXCEEDED, () -> txnTasks.pingTxn(SCOPE, STREAM, tid, 10000L, 0L).join().getStatus(), 10000L); @@ -727,7 +727,7 @@ public void writerInitializationTest() throws Exception { streamStore.setState(SCOPE, STREAM, State.ACTIVE, null, executor).join(); CompletableFuture>> createFuture = txnTasks.createTxn( - SCOPE, STREAM, leasePeriod, 0L); + SCOPE, STREAM, leasePeriod, 0L, 0L); // create and ping transactions should not wait for writer initialization and complete immediately. 
createFuture.join(); @@ -741,7 +741,7 @@ public void writerInitializationTest() throws Exception { txnTasks.initializeStreamWriters(commitWriter, abortWriter); assertTrue(Futures.await(commitFuture)); - UUID txnId2 = txnTasks.createTxn(SCOPE, STREAM, leasePeriod, 0L).join().getKey().getId(); + UUID txnId2 = txnTasks.createTxn(SCOPE, STREAM, leasePeriod, 0L, 1024 * 1024L).join().getKey().getId(); assertTrue(Futures.await(txnTasks.abortTxn(SCOPE, STREAM, txnId2, null, 0L))); } diff --git a/controller/src/test/java/io/pravega/controller/task/Stream/ZkStreamMetadataTasksTest.java b/controller/src/test/java/io/pravega/controller/task/Stream/ZkStreamMetadataTasksTest.java index d0a53498bf1..8afd0bdddf8 100644 --- a/controller/src/test/java/io/pravega/controller/task/Stream/ZkStreamMetadataTasksTest.java +++ b/controller/src/test/java/io/pravega/controller/task/Stream/ZkStreamMetadataTasksTest.java @@ -40,12 +40,14 @@ public void removeSubscriberTest() throws InterruptedException, ExecutionExcepti assertTrue(true); } + @Override @Test public void updateSubscriberStreamCutTest() throws InterruptedException, ExecutionException { // skip ZK tests assertTrue(true); } + @Override @Test public void readerGroupsTest() throws InterruptedException, ExecutionException { // skip ZK tests @@ -105,4 +107,10 @@ public void consumptionBasedRetentionTimeLimitWithOverlappingMinTest() { public void sizeBasedRetentionStreamTest() { // no op } + + @Test + @Override + public void readerGroupFailureTests() { + // no op + } } diff --git a/controller/src/test/java/io/pravega/controller/task/TaskMetadataStoreTests.java b/controller/src/test/java/io/pravega/controller/task/TaskMetadataStoreTests.java index 62317172356..19a57f12981 100644 --- a/controller/src/test/java/io/pravega/controller/task/TaskMetadataStoreTests.java +++ b/controller/src/test/java/io/pravega/controller/task/TaskMetadataStoreTests.java @@ -46,7 +46,7 @@ public abstract class TaskMetadataStoreTests { @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected TaskMetadataStore taskMetadataStore; private final Resource resource = new Resource("scope", "stream1"); diff --git a/controller/src/test/java/io/pravega/controller/task/TaskTest.java b/controller/src/test/java/io/pravega/controller/task/TaskTest.java index 91c7659ab03..b1dfcbf09a2 100644 --- a/controller/src/test/java/io/pravega/controller/task/TaskTest.java +++ b/controller/src/test/java/io/pravega/controller/task/TaskTest.java @@ -81,7 +81,7 @@ public abstract class TaskTest { private static final String HOSTNAME = "host-1234"; private static final String SCOPE = "scope"; @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(10, "test"); protected CuratorFramework cli; diff --git a/controller/src/test/java/io/pravega/controller/timeout/TimeoutServiceTest.java b/controller/src/test/java/io/pravega/controller/timeout/TimeoutServiceTest.java index 292116e0dd0..07044fb9a9a 100644 --- a/controller/src/test/java/io/pravega/controller/timeout/TimeoutServiceTest.java +++ b/controller/src/test/java/io/pravega/controller/timeout/TimeoutServiceTest.java @@ -86,7 +86,7 @@ public abstract class TimeoutServiceTest { private final static long LEASE = 2000; private final static int RETRY_DELAY = 1000; @Rule - public Timeout globalTimeout = 
new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); protected ScheduledExecutorService executor; protected CuratorFramework client; diff --git a/docker/bookkeeper/Dockerfile b/docker/bookkeeper/Dockerfile index 7ecac1b3fb8..f2e39f99d70 100644 --- a/docker/bookkeeper/Dockerfile +++ b/docker/bookkeeper/Dockerfile @@ -27,6 +27,7 @@ FROM apache/bookkeeper:4.14.1 ARG BK_VERSION=4.14.1 ARG DISTRO_NAME=bookkeeper-all-${BK_VERSION}-bin +ENV JAVA_HOME=/usr/lib/jvm/java-11 RUN set -x \ && yum install -y iproute wget \ diff --git a/docker/bookkeeper/entrypoint.sh b/docker/bookkeeper/entrypoint.sh index 7bfa6f61d46..3e4ffdb50d9 100755 --- a/docker/bookkeeper/entrypoint.sh +++ b/docker/bookkeeper/entrypoint.sh @@ -42,7 +42,6 @@ BOOKKEEPER=${BINDIR}/bookkeeper SCRIPTS_DIR=${BK_HOME}/scripts export PATH=$PATH:/opt/bookkeeper/bin -export JAVA_HOME=/usr/lib/jvm/java-11 export BK_zkLedgersRootPath=${BK_LEDGERS_PATH} export BOOKIE_PORT=${BOOKIE_PORT} export SERVICE_PORT=${BOOKIE_PORT} @@ -74,6 +73,24 @@ create_bookie_dirs() { done } +# Create a Bookie ID if this is a newly added bookkeeper pod +# or read the Bookie ID if a cookie containing this value already exists +set_bookieid() { + IFS=',' read -ra journal_directories <<< $BK_journalDirectories + COOKIE="${journal_directories[0]}/current/VERSION" + if [ `find ${COOKIE} | wc -l` -gt 0 ]; then + # Reading the Bookie ID value from the existing cookie + bkHost=`cat ${COOKIE} | grep bookieHost` + IFS=" " read -ra id <<< $bkHost + BK_bookieId=${id[1]:1:-1} + else + # Creating a new Bookie ID following the latest nomenclature + BK_bookieId="`hostname -s`-${RANDOM}" + fi + echo "BookieID = $BK_bookieId" + sed -i "s|.*bookieId=.*\$|bookieId=${BK_bookieId}|" ${BK_HOME}/conf/bk_server.conf +} + wait_for_zookeeper() { echo "Waiting for zookeeper" until zk-shell --run-once "ls /" ${BK_zkServers}; do sleep 5; done @@ -145,7 +162,10 @@ initialize_cluster() { } format_bookie_data_and_metadata() { - if [ `find $BK_journalDirectory $BK_ledgerDirectories $BK_indexDirectories -type f 2> /dev/null | wc -l` -gt 0 ]; then + IFS=',' read -ra journal_directories <<< $BK_journalDirectories + IFS=',' read -ra ledger_directories <<< $BK_ledgerDirectories + IFS=" " eval 'directory_names="${journal_directories[*]} ${ledger_directories[*]}"' + if [ `find $directory_names $BK_indexDirectories -type f 2> /dev/null | wc -l` -gt 0 ]; then # The container already contains data in BK directories. Examples of when this can happen include: # - A container was restarted, say, in a non-Kubernetes deployment. # - A container running on Kubernetes was updated/evacuated, and @@ -171,6 +191,9 @@ echo "Creating directories for Bookkeeper journal and ledgers" create_bookie_dirs "${BK_journalDirectories}" create_bookie_dirs "${BK_ledgerDirectories}" +echo "Configuring the Bookie ID" +set_bookieid + echo "Sourcing ${SCRIPTS_DIR}/common.sh" source ${SCRIPTS_DIR}/common.sh @@ -190,4 +213,4 @@ echo "Initializing Cluster" initialize_cluster echo "Starting the bookie" -/opt/bookkeeper/bin/bookkeeper bookie \ No newline at end of file +/opt/bookkeeper/bin/bookkeeper bookie diff --git a/docker/pravega/Dockerfile b/docker/pravega/Dockerfile index 5a5ef45521c..eb04c45f1fc 100644 --- a/docker/pravega/Dockerfile +++ b/docker/pravega/Dockerfile @@ -13,27 +13,16 @@ # See the License for the specific language governing permissions and # limitations under the License. 
# -FROM openjdk:11.0.8-jre-slim +FROM adoptopenjdk/openjdk11:jre-11.0.11_9-alpine -RUN apt-get update && apt-get install -y -q \ - rpcbind \ - nfs-common \ - python \ - jq \ +RUN apk --update --no-cache add \ + #used in readiness and liveness probes curl \ - net-tools \ - iproute2 - -# Adding Java system truststore 'cacerts', as it is missing from the Java -# distribution installed by the base image and one of the storage bindings(ECS) -# depends on its presence. -# -# For installing ca-certificates for jre requires the presence of man folder -# otherwise it fails with the following error -# `Sub-process /usr/bin/dpkg returned an error code (1)` -RUN mkdir -p /usr/share/man/man1 && apt-get install -y -q ca-certificates-java \ - && /var/lib/dpkg/info/ca-certificates-java.postinst configure \ - && rm -rf /var/lib/apt/lists/* + #used in wait_for function + python3 \ + #used in init_kubernetes + jq \ + && ln -sf python3 /usr/bin/python EXPOSE 9090 9091 10000 12345 @@ -42,10 +31,6 @@ WORKDIR /opt/pravega COPY pravega/ /opt/pravega/ COPY scripts/ /opt/pravega/scripts/ -# Default shell of jdk11 image is dash -# Creating symlink to point to bash -RUN ln -sf /bin/bash /bin/sh - RUN chmod +x -R /opt/pravega/scripts/ ENTRYPOINT [ "/opt/pravega/scripts/entrypoint.sh" ] diff --git a/docker/pravega/scripts/common.sh b/docker/pravega/scripts/common.sh index d871a87fe97..a0b63549960 100755 --- a/docker/pravega/scripts/common.sh +++ b/docker/pravega/scripts/common.sh @@ -45,6 +45,6 @@ add_certs_into_truststore() { CERTS=/etc/secret-volume/ca-bundle/* for cert in $CERTS do - yes | keytool -importcert -storepass changeit -file "${cert}" -alias "${cert}" -keystore /usr/local/openjdk-11/lib/security/cacerts || true + yes | keytool -importcert -storepass changeit -file "${cert}" -alias "${cert}" -cacerts || true done } diff --git a/docker/pravega/scripts/init_controller.sh b/docker/pravega/scripts/init_controller.sh index 089648f2d67..d7534eb57e1 100755 --- a/docker/pravega/scripts/init_controller.sh +++ b/docker/pravega/scripts/init_controller.sh @@ -19,5 +19,10 @@ init_controller() { [ ! -z "$HOSTNAME" ] && add_system_property "controller.metrics.prefix" "${HOSTNAME}" add_system_property "controller.zk.connect.uri" "${ZK_URL}" add_system_property "controller.server.store.host.type" "Zookeeper" + [ -d "$INFLUX_DB_SECRET_MOUNT_PATH" ] \ + && [ -f "$INFLUX_DB_SECRET_MOUNT_PATH"/username ] && [ -f "$INFLUX_DB_SECRET_MOUNT_PATH"/password ] \ + && add_system_property "metrics.influxDB.connect.credentials.username" "$(cat "$INFLUX_DB_SECRET_MOUNT_PATH"/username)" \ + && add_system_property "metrics.influxDB.connect.credentials.pwd" "$(cat "$INFLUX_DB_SECRET_MOUNT_PATH"/password)" echo "JAVA_OPTS=${JAVA_OPTS}" + } diff --git a/docker/pravega/scripts/init_kubernetes.sh b/docker/pravega/scripts/init_kubernetes.sh index 3f481f26713..0355949f996 100755 --- a/docker/pravega/scripts/init_kubernetes.sh +++ b/docker/pravega/scripts/init_kubernetes.sh @@ -55,7 +55,7 @@ init_kubernetes() { export PUBLISHED_PORT="" local service=$( k8 "${ns}" "services" "${podname}" .kind ) - if [[ "$service" != "Service" ]]; + if [[ "${service}" != "Service" ]]; then echo "Failed to get External Service. Exiting..." 
exit 1 diff --git a/docker/pravega/scripts/init_segmentstore.sh b/docker/pravega/scripts/init_segmentstore.sh index b696ceba0e1..379de5c33a3 100755 --- a/docker/pravega/scripts/init_segmentstore.sh +++ b/docker/pravega/scripts/init_segmentstore.sh @@ -24,5 +24,9 @@ init_segmentstore() { add_system_property "autoScale.controller.connect.uri" "${CONTROLLER_URL}" add_system_property "bookkeeper.zk.connect.uri" "${BK_ZK_URL:-${ZK_URL}}" add_system_property "pravegaservice.storageThreadPool.size" "${STORAGE_THREAD_POOL_SIZE}" + [ -d "$INFLUX_DB_SECRET_MOUNT_PATH" ] \ + && [ -f "$INFLUX_DB_SECRET_MOUNT_PATH"/username ] && [ -f "$INFLUX_DB_SECRET_MOUNT_PATH"/password ] \ + && add_system_property "metrics.influxDB.connect.credentials.username" "$(cat "$INFLUX_DB_SECRET_MOUNT_PATH"/username)" \ + && add_system_property "metrics.influxDB.connect.credentials.pwd" "$(cat "$INFLUX_DB_SECRET_MOUNT_PATH"/password)" echo "JAVA_OPTS=${JAVA_OPTS}" } diff --git a/documentation/src/docs/controller-service.md b/documentation/src/docs/controller-service.md index c490967f3cf..81a1aa79917 100644 --- a/documentation/src/docs/controller-service.md +++ b/documentation/src/docs/controller-service.md @@ -128,7 +128,11 @@ have two policies that users can define, namely [**Scaling** **Policy**](https:/ - **Scaling policy** describes if and under what circumstances a Stream should automatically scale its number of Segments. - **Retention policy** describes a policy about how much data to retain within a Stream based on **time** (*Time-based Retention*) and data **size** (*Size-based Retention*). - 3. [**Transaction**](pravega-concepts.md#transactions) **Management**: Implementing Transactions requires the manipulation of Stream Segments. With + 3. **Tag Management**: The Controller is responsible for storing user-defined Tags for a given Stream. + Tags are labels that capture identifying attributes of a Stream that are meaningful and relevant to users. + The Controller indexes these Tags and provides APIs by which the user can organize and select a subset of Streams under a given Scope that share the same tag. + + 4. [**Transaction**](pravega-concepts.md#transactions) **Management**: Implementing Transactions requires the manipulation of Stream Segments. With each Transaction, Pravega creates a set of Transaction Segments, which are later merged onto the Stream Segments upon commit or discarded upon aborts. The Controller performs the role of Transaction manager and is @@ -247,7 +251,7 @@ to describe the change in generation of Segments in the Stream. The Controller maintains the Stream: it stores the information about all epochs that constitute a given Stream and also about their transitions. The metadata store is designed to persist the information pertaining to Stream Segments, and to enable queries over this information. Apart from the epoch information, it keeps some additional metadata, -such as [state](#stream-state) and its [policies](#stream-policy-manager) and ongoing Transactions on the Stream. Various sub-components of Controller access the stored metadata for each +such as [state](#stream-state), its [policies](#stream-policy-manager), its Tags, and ongoing Transactions on the Stream. Various sub-components of Controller access the stored metadata for each Stream via a well-defined [interface](https://github.com/pravega/pravega/blob/master/controller/src/main/java/io/pravega/controller/store/stream/StreamMetadataStore.java).
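Since the Tag Management text above describes tag-based Stream selection only abstractly, a short client-side sketch may help. It assumes a `tag(...)` option on the `StreamConfiguration` builder and a tag-filtered `listStreams` overload on `StreamManager`, consistent with the Tag index this documentation describes; the scope, stream, tag names, and the Controller URI are illustrative only:

```java
import java.net.URI;
import io.pravega.client.admin.StreamManager;
import io.pravega.client.stream.ScalingPolicy;
import io.pravega.client.stream.StreamConfiguration;

public class TagSketch {
    public static void main(String[] args) {
        // Illustrative Controller endpoint; adjust for the actual deployment.
        try (StreamManager streamManager = StreamManager.create(URI.create("tcp://localhost:9090"))) {
            streamManager.createScope("iot");
            // Assumed: StreamConfiguration now carries user-defined Tags.
            StreamConfiguration config = StreamConfiguration.builder()
                    .scalingPolicy(ScalingPolicy.fixed(2))
                    .tag("sensor")          // assumed builder option for a single Tag
                    .build();
            streamManager.createStream("iot", "temperatures", config);
            // Assumed tag-filtered listing, backed by the Controller's Tag index.
            streamManager.listStreams("iot", "sensor")
                    .forEachRemaining(s -> System.out.println(s.getScopedName()));
        }
    }
}
```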
We currently have two concrete implementations of the Stream store @@ -309,8 +313,9 @@ _Segment-info: ⟨segmentid, time, keySpace-start, keySpace-end⟩_. #### Stream Configuration Stream configuration is stored against the StreamConfiguration key in the metadata table. The value against this key has the Stream configuration serialized and persisted. A - Stream configuration contains Stream policies that need to be enforced. + Stream configuration contains Stream policies that need to be enforced and the Stream Tags associated with the Stream. [Scaling policy](https://github.com/pravega/pravega/blob/master/client/src/main/java/io/pravega/client/stream/ScalingPolicy.java) and [Retention policy](https://github.com/pravega/pravega/blob/master/client/src/main/java/io/pravega/client/stream/RetentionPolicy.java) are supplied by the application at the time of Stream creation and enforced by Controller by monitoring the rate and size of data in the Stream. + Stream Tags, too, can be specified by the application at the time of Stream creation. - The Scaling policy describes if and when to automatically scale is based on incoming traffic conditions into the Stream. The policy supports two flavors - _traffic as the rate of Events per second_ and _traffic as the rate of @@ -634,7 +639,7 @@ and `updateStream()` operation is performed. - Once the update Stream processing starts, it first sets the Stream state to *Updating*. - Then, the Stream configuration is updated in the metadata store followed by notifying Segment Stores for all _active_ Stream Segments of the Stream, about the change in -policy. Now the state is reset to *Active*. +policy. Any Tag-related change causes the Tag indexes (a reverse index from Tag to Streams within a Scope) to be updated. Now the state is reset to *Active*. ### Scale Stream diff --git a/documentation/src/docs/pravega-concepts.md b/documentation/src/docs/pravega-concepts.md index dc3e0e4bb93..0a64689471e 100644 --- a/documentation/src/docs/pravega-concepts.md +++ b/documentation/src/docs/pravega-concepts.md @@ -42,7 +42,7 @@ Next, we overview the key concepts in Pravega. For a concise definition of key t ## Streams -Pravega organizes data into Streams. A Stream is a durable, elastic, append-only, unbounded sequence of bytes having good performance and strong consistency.  A Pravega Stream is +Pravega organizes data into Streams. A Stream is a durable, elastic, append-only, unbounded sequence of bytes having good performance and strong consistency. A Pravega Stream is similar to but more flexible than a "topic" in popular message oriented middleware such as [RabbitMQ](https://www.rabbitmq.com/) or [Apache Kafka](https://kafka.apache.org/). Pravega Streams are based on an append-only log data structure. By using append-only logs, Pravega rapidly ingests data into durable storage. It supports a large variety of application use cases: @@ -79,15 +79,15 @@ machine). A Routing Key is important in defining the read and write semantics th ![Reader Client](img/producer.consumer.client.new.png) Pravega provides a client library, written in Java, that implements a convenient -API for Writer and Reader applications.  The Pravega Java Client Library +API for Writer and Reader applications. The Pravega Java Client Library encapsulates the Wire Protocol used to communicate Pravega clients and servers. - **Writer:** An application that creates Events and writes them into a Stream. All data is written by appending to the tail (front) of a Stream.
-- **Reader:** An application that reads Events from a Stream.  Readers can read -from any point in the Stream.  Many Readers will be reading Events from the tail +- **Reader:** An application that reads Events from a Stream. Readers can read +from any point in the Stream. Many Readers will be reading Events from the tail of the Stream. Tail reads corresponding to recently written Events are immediately delivered to Readers. Some Readers will read from earlier parts of the Stream (called **catch-up reads**). The application developer has control over the Reader's start position in the Stream. @@ -100,7 +100,7 @@ Reader is created through the Pravega data plane API, the developer includes the name of the Reader Group associated with it. Pravega guarantees that each Event published to a Stream is sent to exactly one Reader within the Reader Group. There could be one or more Readers in the Reader Group and there could be many different Reader Groups simultaneously reading from any given Stream. A Reader Group can be considered as a "composite Reader" or "distributed Reader", that allows a distributed application to read and process Stream data -in parallel. A large amount of Stream data can be consumed by a coordinated group of Readers in a Reader Group.  For example, a collection of Flink tasks processing Stream data in parallel using Reader Group. +in parallel. A large amount of Stream data can be consumed by a coordinated group of Readers in a Reader Group. For example, a collection of Flink tasks processing Stream data in parallel using Reader Group. For more details on the basics of working with Pravega Readers and Writers, please see [Working with Pravega: Basic Reader and Writer](basic-reader-and-writer.md#working-with-pravega-basic-reader-and-writer). @@ -135,7 +135,7 @@ and time. ![Stream Segment](img/segment.split.merge.overtime.new.png)  -- A Stream starts at time **t0** with a configurable number of Stream Segments.  If the +- A Stream starts at time **t0** with a configurable number of Stream Segments. If the rate of data written to the Stream is constant, there will be no change in the number of Stream Segments.  - At time **t1**, the system noted an increase in the ingestion rate and splits Stream **Segment 1** into two parts. This process is referred as **Scale-Up** Event. @@ -144,9 +144,9 @@ rate of data written to the Stream is constant, there will be no change in the n space (i.e., values ranging from **200-399**) would be placed in Stream **Segment 1** and those that hash into the lower part of the key space (i.e., values ranging from **0-199**) would be placed in Stream **Segment 0**. -- After **t1**, Stream **Segment 1** is split into Stream **Segment 2** and Stream **Segment 3**. The Stream **Segment 1** is sealed and stops accepting writes.  At this point in time, Events with Routing Key **300** and _above_ are written to Stream **Segment 3** and those between **200** and **299** would be written into Stream **Segment 2**. +- After **t1**, Stream **Segment 1** is split into Stream **Segment 2** and Stream **Segment 3**. The Stream **Segment 1** is sealed and stops accepting writes. At this point in time, Events with Routing Key **300** and _above_ are written to Stream **Segment 3** and those between **200** and **299** would be written into Stream **Segment 2**. -- Stream **Segment 0** continues accepting the same range of Events as before **t1**.   +- Stream **Segment 0** continues accepting the same range of Events as before **t1**. 
- Another scale-up Event occurs at time **t2**, as Stream **Segment 0**’s range of Routing Key is split into Stream **Segment 5** and Stream **Segment 4**. Also at this time, Stream **Segment 0** is sealed @@ -159,7 +159,7 @@ accommodate a decrease in the load on the Stream. When a Stream is created, it is configured with a **Scaling Policy** that determines how a Stream handles the varying changes in its load. Pravega has three kinds of Scaling Policy: -1. **Fixed**:  The number of Stream Segments does not vary with load. +1. **Fixed**: The number of Stream Segments does not vary with load. 2. **Data-based**: Pravega splits a Stream Segment into multiple ones (i.e., Scale-up Event) if the number of bytes per second written to that Stream Segment increases beyond a defined threshold. Similarly, Pravega merges two adjacent Stream Segments (i.e., Scale-down Event) if the number of bytes written to them fall below a defined threshold. Note that, even if the load for a Stream Segment reaches the defined threshold, Pravega does not immediately trigger a Scale-up/down Event. Instead, the load should be satisfying the scaling policy threshold for a [sufficient amount of time](https://github.com/pravega/pravega/blob/master/client/src/main/java/io/pravega/client/stream/ScalingPolicy.java). @@ -174,7 +174,7 @@ As mentioned earlier in this section, that an Event is written into one of the S It is also worth emphasizing that Events are written only on the active Stream Segments. Stream Segments that are sealed do not accept writes. In the figure above, -at time **now**, only Stream **Segments 3**, **6** and **4** are active and the entire key space is covered between those three Stream Segments.   +at time **now**, only Stream **Segments 3**, **6** and **4** are active and the entire key space is covered between those three Stream Segments. ### Stream Segments and Reader Groups @@ -220,7 +220,7 @@ This results in the following ordering guarantees: ## Reader Group Checkpoints Pravega provides the ability for an application to initiate a **Checkpoint** on a -Reader Group.  The idea with a Checkpoint is to create a consistent "point in +Reader Group. The idea with a Checkpoint is to create a consistent "point in time" persistence of the state of each Reader in the Reader Group, by using a specialized Event (_Checkpoint Event_) to signal each Reader to preserve its state. Once a Checkpoint has been completed, the application can use the @@ -241,9 +241,9 @@ time window, the Flink job can commit the Transaction and therefore make the results of the processing available for downstream processing, or in the case of an error, the Transaction is aborted and the results disappear. -A key difference between Pravega's Transactions and similar approaches (Kafka's producer-side batching) vary with the feature durability. Events added to a Transaction are durable when the Event is acknowledged back to the Writer. However, the Events in the Transaction are _not_ visible to Readers until the Transaction is committed by the Writer. A Transaction is a similar to a Stream and is associated with multiple Stream Segments.  When an Event is published into a -Transaction, the Event itself is appended to a Stream Segment of the -Transaction.  + +A key difference between Pravega's Transactions and similar approaches (e.g., Kafka's producer-side batching) is durability. Events added to a Transaction are durable when the Event is acknowledged back to the Writer.
However, the Events in the Transaction are _not_ visible to Readers until the Transaction is committed by the Writer. A Transaction is similar to a Stream and is associated with multiple Stream Segments.  When an Event is published into a +Transaction, the Event itself is appended to a Stream Segment of the Transaction.  For example, a Stream has five Stream Segments, when a Transaction is created on that Stream, conceptually that Transaction also has five Stream Segments. When an Event is @@ -265,7 +265,7 @@ mechanism for state shared between multiple processes running in a cluster and m State Synchronizer could be used to maintain a single, shared copy of an application's configuration property across all instances of that application in -a cloud.  State Synchronizer could also be used to store one piece of data or a +a cloud. State Synchronizer could also be used to store one piece of data or a map with thousands of different key value pairs. In Pravega, managing the state of Reader Groups and distribution of Readers throughout the network is implemented using State Synchronizer. An application developer creates a State Synchronizer on a Stream similar to the creation of a Writer. The State Synchronizer keeps a local copy of the shared state and allows faster access to the data for the application. State Synchronizer keeps track of all the changes happening in the shared state and it is responsible for performing any modification to the shared state in the Stream. Each application instance uses the State Synchronizer, to remain updated with the @@ -291,14 +291,14 @@ The following figure depicts the components deployed by Pravega: ![pravega high level architecture](img/pravega.arch.new.png) Pravega is deployed as a distributed system – a cluster of servers and storage -coordinated to run Pravega called a **Pravega cluster**.   +coordinated to run Pravega called a **Pravega cluster**. Pravega presents a software-defined storage (SDS) architecture formed by **Controller** instances (_control plane_) and Pravega Servers (_data plane_). The set of Pravega Servers is collectively known as the **Segment Store**.  The set of Controller instances together forms the control plane of Pravega, providing functionality to _create, update_ and _delete_ Streams. Further, it extends the functionality to retrieve information about the Streams, monitor the health of the Pravega cluster, gather metrics, etc. There -are usually multiple (recommended at least 3) Controller instances running in a running in a cluster for high availability.   +are usually multiple (recommended at least 3) Controller instances running in a cluster for high availability. The [Segment Store](segment-store-service.md) implements the Pravega data plane. Pravega Servers provide the API to read and write data in Streams. Data storage is comprised of two tiers: @@ -310,7 +310,7 @@ Tier 1 Storage. Tier 1 Storage typically runs _within_ the Pravega cluster. Storage tiering allows Pravega to achieve a sweet spot in the latency vs throughput trade-off. This makes Pravega an ideal storage substrate for serving data to both real-time and batch (analytics) applications. Moreover, as data in Tier 1 Storage ages, it is automatically moved into Tier 2 Storage. Thus, Pravega can store vasts amounts of Stream data and applications can read it at any time, while being oblivious to its actual location.
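The Transactions walkthrough above is purely conceptual; for the API shape, here is a minimal writer-side sketch using the public Java client (the scope, stream, writer id, and Controller URI are illustrative, not part of this patch). Events written under the Transaction become durable on acknowledgment but only visible to Readers after `commit()`:

```java
import java.net.URI;
import io.pravega.client.ClientConfig;
import io.pravega.client.EventStreamClientFactory;
import io.pravega.client.stream.EventWriterConfig;
import io.pravega.client.stream.Transaction;
import io.pravega.client.stream.TransactionalEventStreamWriter;
import io.pravega.client.stream.TxnFailedException;
import io.pravega.client.stream.impl.UTF8StringSerializer;

public class TxnSketch {
    public static void main(String[] args) throws TxnFailedException {
        ClientConfig config = ClientConfig.builder()
                .controllerURI(URI.create("tcp://localhost:9090")) // illustrative endpoint
                .build();
        try (EventStreamClientFactory factory =
                     EventStreamClientFactory.withScope("myScope", config);
             TransactionalEventStreamWriter<String> writer = factory.createTransactionalEventWriter(
                     "writer-1", "myStream", new UTF8StringSerializer(),
                     EventWriterConfig.builder().build())) {
            Transaction<String> txn = writer.beginTxn();
            // Durable once acknowledged, but not yet visible to Readers.
            txn.writeEvent("device-42", "reading-1");
            txn.writeEvent("device-42", "reading-2");
            // Merges the Transaction's Segments into the Stream's Segments atomically.
            txn.commit();
        }
    }
}
```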
Pravega uses [Apache Zookeeper](https://zookeeper.apache.org/) as the -coordination mechanism for the components in the Pravega cluster.   +coordination mechanism for the components in the Pravega cluster. Pravega is a distributed storage system providing the Stream primitive first and foremost. Pravega is carefully designed to take advantage of software-defined storage, so that the @@ -329,57 +329,56 @@ The concepts in Pravega are depicted in the following figure: ![State synchroner](img/putting.all.together.new.png)  -- Pravega clients are Writers and Readers.  Writers write Events into a +- Pravega clients are Writers and Readers. Writers write Events into a Stream. Readers read Events from a Stream. Readers are grouped into Reader Groups to read from a Stream in parallel. - The Controller is a server-side component that manages the control plane of - Pravega.  Streams are created, updated and listed using the Controller API. + Pravega. Streams are created, updated and listed using the Controller API. - The Pravega Server is a server-side component that implements reads, writes and other data plane operations. -- Streams are the fundamental storage primitive in Pravega.  Streams contain a - set of data elements called Events.  Events are appended to the “tail” of - the Stream by Writers.  Readers can read Events from anywhere in the Stream. +- Streams are the fundamental storage primitive in Pravega. Streams contain a + set of data elements called Events. Events are appended to the “tail” of + the Stream by Writers. Readers can read Events from anywhere in the Stream. - A Stream is partitioned into a set of Stream Segments. The number of Stream - Segments in a Stream can change over time.  Events are written into exactly - one of the Stream Segments based on Routing Key.  For any Reader Group reading a Stream, each Stream Segment is assigned to one Reader in that + Segments in a Stream can change over time. Events are written into exactly + one of the Stream Segments based on Routing Key. For any Reader Group reading a Stream, each Stream Segment is assigned to one Reader in that Reader Group.  -- Each Stream Segment is stored in a combination of Tier 1 and Tier 2 Storage.  - The tail of the Stream Segment is stored in Tier 1 providing low latency reads and +- Each Stream Segment is stored in a combination of Tier 1 and Tier 2 Storage. The tail of the Stream Segment is stored in Tier 1 providing low latency reads and writes. The rest of the Stream Segment is stored in Tier 2, providing high throughput read access with horizontal scalability and low cost.  ## A Note on Tiered Storage To deliver an efficient implementation of Streams, Pravega is based on a tiered -storage model.  Events are persisted in low latency/high IOPS storage (Tier 1 +storage model. Events are persisted in low latency/high IOPS storage (Tier 1 Storage, write-ahead log) and higher throughput Tier 2 storage (e.g., file system, object store). Writers and Readers are oblivious to the tiered storage model from an API perspective.  -In Pravega, Tier 1 is based on an append-only **Log** data structure.  As Leigh Stewart +In Pravega, Tier 1 is based on an append-only **Log** data structure. As Leigh Stewart [observed](https://blog.twitter.com/2015/building-distributedlog-twitter-s-high-performance-replicated-log-service), there are really three data access mechanisms in a Log: ![State synchroner](img/anatomy.of.log.png)  All of the write activity, and much of the read activity happens at the tail of -the log.  
Writes are appended to the log and many clients try to read data immediately as it is written to the log. These two data access mechanisms are dominated by the need for low latency – low latency writes by Writers and near real-time access to the published data by Readers. +the log. Writes are appended to the log and many clients try to read data immediately as it is written to the log. These two data access mechanisms are dominated by the need for low latency – low latency writes by Writers and near real-time access to the published data by Readers. Please note that not all Readers read from the tail of the log. Some Readers read -by starting at some arbitrary position in the log.  These reads are known as -**catch-up reads**.  Access to historical data traditionally was done by batch -analytics jobs, often using HDFS and Map/Reduce.  However with new streaming +by starting at some arbitrary position in the log. These reads are known as +**catch-up reads**. Access to historical data traditionally was done by batch +analytics jobs, often using HDFS and Map/Reduce. However with new streaming applications, we can access historical data as well as current data by just -accessing the log.  One approach would be to store all the historical data in +accessing the log. One approach would be to store all the historical data in SSDs similar to tail data operations, but that leads to an expensive task and force customers to economize by deleting historical data. Pravega offers a mechanism that allows customers to use cost-effective, highly-scalable, high-throughput storage for the historical part of the log, that way they won’t have to decide on -when to delete historical data.  Basically, if storage is cheap enough, why not +when to delete historical data. Basically, if storage is cheap enough, why not keep all of the history? Tier 1 Storage aids in faster writes to the Streams by assuring durability and makes reading from the tail of a Stream much quicker. Tier 1 Storage is based on the open source Apache BookKeeper Project. Though not essential, we presume that the Tier 1 Storage will be typically implemented on faster SSDs or diff --git a/documentation/src/docs/rest/restapis.md b/documentation/src/docs/rest/restapis.md index 1cc88dd595a..a76b5e24b63 100644 --- a/documentation/src/docs/rest/restapis.md +++ b/documentation/src/docs/rest/restapis.md @@ -1,9 +1,9 @@ -# Pravega Controller REST API +# Pravega Controller APIs ## Overview -List of admin REST APIs for the Pravega Controller service. +List of admin REST APIs for the Pravega controller service. ### Version information @@ -23,6 +23,7 @@ List of admin REST APIs for the Pravega Controller service. ### Tags +* Health : Health check related APIs * ReaderGroups : Reader group related APIs * Scopes : Scope related APIs * Streams : Stream related APIs @@ -33,6 +34,474 @@ List of admin REST APIs for the Pravega Controller service. ## Paths + +### GET /health + +#### Description +Return the Health of the Controller service. 
+ + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The Health result of the Controller.|[HealthResult](#healthresult)| +|**500**|Internal server error while fetching the Health.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health +``` + + +#### Example HTTP response + +##### Response 200 +```json +{ + "name" : "string", + "status" : { }, + "readiness" : true, + "liveness" : true, + "details" : { }, + "children" : { + "string" : "[healthresult](#healthresult)" + } +} +``` + + + +### GET /health/details + +#### Description +Fetch the details of the Controller service. + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The list of details.|[HealthDetails](#healthdetails)| +|**500**|Internal server error while fetching the health details of the Controller.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/details +``` + + +#### Example HTTP response + +##### Response 200 +```json +{ } +``` + + + +### GET /health/details/{id} + +#### Description +Fetch the details of a specific health contributor. + + +#### Parameters + +|Type|Name|Description|Schema| +|---|---|---|---| +|**Path**|**id**
*required*|The id of an existing health contributor.|string| + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The list of details for the health contributor with a given id.|[HealthDetails](#healthdetails)| +|**404**|The health details for the contributor with given id were not found.|No Content| +|**500**|Internal server error while fetching the health details for a given health contributor.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/details/string +``` + + +#### Example HTTP response + +##### Response 200 +```json +{ } +``` + + + +### GET /health/liveness + +#### Description +Fetch the liveness state of the Controller service. + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The alive status.|boolean| +|**500**|Internal server error while fetching the liveness state of the Controller.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/liveness +``` + + +#### Example HTTP response + +##### Response 200 +```json +true +``` + + + +### GET /health/liveness/{id} + +#### Description +Fetch the liveness state of the specified health contributor. + + +#### Parameters + +|Type|Name|Description|Schema| +|---|---|---|---| +|**Path**|**id**
*required*|The id of an existing health contributor.|string| + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The alive status for the specified health contributor.|boolean| +|**404**|The liveness status for the contributor with given id was not found.|No Content| +|**500**|Internal server error while fetching the liveness state for a given health contributor.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/liveness/string +``` + + +#### Example HTTP response + +##### Response 200 +```json +true +``` + + + +### GET /health/readiness + +#### Description +Fetch the ready state of the Controller service. + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The ready status.|boolean| +|**500**|Internal server error while fetching the ready state of the Controller.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/readiness +``` + + +#### Example HTTP response + +##### Response 200 +```json +true +``` + + + +### GET /health/readiness/{id} + +#### Description +Fetch the ready state of the health contributor. + + +#### Parameters + +|Type|Name|Description|Schema| +|---|---|---|---| +|**Path**|**id**
*required*|The id of an existing health contributor.|string| + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The readiness status for the health contributor with given id.|boolean| +|**404**|The readiness status for the contributor with given id was not found.|No Content| +|**500**|Internal server error while fetching the ready state for a given health contributor.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/readiness/string +``` + + +#### Example HTTP response + +##### Response 200 +```json +true +``` + + + +### GET /health/status + +#### Description +Fetch the status of the Controller service. + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The health status of the Controller.|[HealthStatus](#healthstatus)| +|**500**|Internal server error while fetching the health status of the Controller.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/status +``` + + +#### Example HTTP response + +##### Response 200 +```json +{ } +``` + + + +### GET /health/status/{id} + +#### Description +Fetch the status of a specific health contributor. + + +#### Parameters + +|Type|Name|Description|Schema| +|---|---|---|---| +|**Path**|**id**
*required*|The id of an existing health contributor.|string| + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The health status of the Controller.|[HealthStatus](#healthstatus)| +|**404**|The health status for the contributor with given id was not found.|No Content| +|**500**|Internal server error while fetching the health status of a given health contributor.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/status/string +``` + + +#### Example HTTP response + +##### Response 200 +```json +{ } +``` + + + +### GET /health/{id} + +#### Description +Return the Health of a health contributor with a given id. + + +#### Parameters + +|Type|Name|Description|Schema| +|---|---|---|---| +|**Path**|**id**
*required*|The id of an existing health contributor.|string| + + +#### Responses + +|HTTP Code|Description|Schema| +|---|---|---| +|**200**|The Health result of the Controller.|[HealthResult](#healthresult)| +|**404**|A health provider for the given id could not be found.|No Content| +|**500**|Internal server error while fetching the health for a given contributor.|No Content| + + +#### Produces + +* `application/json` + + +#### Tags + +* Health + + +#### Example HTTP request + +##### Request path +``` +/health/string +``` + + +#### Example HTTP response + +##### Response 200 +```json +{ + "name" : "string", + "status" : { }, + "readiness" : true, + "liveness" : true, + "details" : { }, + "children" : { + "string" : "[healthresult](#healthresult)" + } +} +``` + + ### POST /scopes @@ -108,7 +577,7 @@ Create a new scope ### GET /scopes #### Description -List all available scopes in pravega +List all available scopes in Pravega #### Responses @@ -363,8 +832,11 @@ Create a new stream |Name|Description|Schema| |---|---|---| |**retentionPolicy**
*optional*|**Example** : `"[retentionconfig](#retentionconfig)"`|[RetentionConfig](#retentionconfig)| +|**rolloverSizeBytes**
*optional*|**Example** : `"[rolloversizebytes](#rolloversizebytes)"`|[RolloverSizeBytes](#rolloversizebytes)| |**scalingPolicy**
*optional*|**Example** : `"[scalingconfig](#scalingconfig)"`|[ScalingConfig](#scalingconfig)| |**streamName**
*optional*|**Example** : `"string"`|string| +|**streamTags**
*optional*|**Example** : `"[tagslist](#tagslist)"`|[TagsList](#tagslist)| +|**timestampAggregationTimeout**
*optional*|**Example** : `"[timestampaggregationtimeout](#timestampaggregationtimeout)"`|[TimestampAggregationTimeout](#timestampaggregationtimeout)| #### Responses @@ -424,7 +896,10 @@ Create a new stream "hours" : 0, "minutes" : 0 } - } + }, + "streamTags" : { }, + "timestampAggregationTimeout" : { }, + "rolloverSizeBytes" : { } } ``` @@ -456,7 +931,8 @@ Create a new stream "hours" : 0, "minutes" : 0 } - } + }, + "tags" : { } } ``` @@ -473,7 +949,8 @@ List streams within the given scope |Type|Name|Description|Schema| |---|---|---|---| |**Path**|**scopeName**
*required*|Scope name|string| -|**Query**|**showInternalStreams**
*optional*|Optional flag whether to display system created streams. If not specified only user created streams will be returned|string| +|**Query**|**filter_type**
*optional*|Filter options|enum (showInternalStreams, tag)| +|**Query**|**filter_value**
*optional*|Value to be passed. It must match the type passed with it.|string| #### Responses @@ -531,7 +1008,8 @@ List streams within the given scope "hours" : 0, "minutes" : 0 } - } + }, + "tags" : { } } ] } ``` @@ -606,7 +1084,8 @@ Fetch the properties of an existing stream "hours" : 0, "minutes" : 0 } - } + }, + "tags" : { } } ``` @@ -632,7 +1111,10 @@ Update configuration of an existing stream |Name|Description|Schema| |---|---|---| |**retentionPolicy**
*optional*|**Example** : `"[retentionconfig](#retentionconfig)"`|[RetentionConfig](#retentionconfig)| +|**rolloverSizeBytes**
*optional*|**Example** : `"[rolloversizebytes](#rolloversizebytes)"`|[RolloverSizeBytes](#rolloversizebytes)| |**scalingPolicy**
*optional*|**Example** : `"[scalingconfig](#scalingconfig)"`|[ScalingConfig](#scalingconfig)| +|**streamTags**
*optional*|**Example** : `"[tagslist](#tagslist)"`|[TagsList](#tagslist)| +|**timestampAggregationTimeout**
*optional*|**Example** : `"[timestampaggregationtimeout](#timestampaggregationtimeout)"`|[TimestampAggregationTimeout](#timestampaggregationtimeout)| #### Responses @@ -690,7 +1172,10 @@ Update configuration of an existing stream "hours" : 0, "minutes" : 0 } - } + }, + "streamTags" : { }, + "timestampAggregationTimeout" : { }, + "rolloverSizeBytes" : { } } ``` @@ -722,7 +1207,8 @@ Update configuration of an existing stream "hours" : 0, "minutes" : 0 } - } + }, + "tags" : { } } ``` @@ -900,6 +1386,29 @@ Updates the current state of the stream ## Definitions + +### HealthDetails +*Type* : < string, string > map + + + +### HealthResult + +|Name|Description|Schema| +|---|---|---| +|**children**
*optional*|**Example** : `{
"string" : "[healthresult](#healthresult)"
}`|< string, [HealthResult](#healthresult) > map| +|**details**
*optional*|**Example** : `"[healthdetails](#healthdetails)"`|[HealthDetails](#healthdetails)| +|**liveness**
*optional*|**Example** : `true`|boolean| +|**name**
*optional*|**Example** : `"string"`|string| +|**readiness**
*optional*|**Example** : `true`|boolean| +|**status**
*optional*|**Example** : `"[healthstatus](#healthstatus)"`|[HealthStatus](#healthstatus)| + + + +### HealthStatus +*Type* : enum (UP, STARTING, NEW, UNKNOWN, FAILED, DOWN) + + ### ReaderGroupProperty @@ -938,6 +1447,11 @@ Updates the current state of the stream |**value**
*optional*|**Example** : `0`|integer (int64)| + +### RolloverSizeBytes +*Type* : long + + ### ScaleMetadata @@ -1004,6 +1518,7 @@ Updates the current state of the stream |**scalingPolicy**
*optional*|**Example** : `"[scalingconfig](#scalingconfig)"`|[ScalingConfig](#scalingconfig)| |**scopeName**
*optional*|**Example** : `"string"`|string| |**streamName**
*optional*|**Example** : `"string"`|string| +|**tags**
*optional*|**Example** : `"[tagslist](#tagslist)"`|[TagsList](#tagslist)| @@ -1022,6 +1537,11 @@ Updates the current state of the stream |**streams**
*optional*|**Example** : `[ "[streamproperty](#streamproperty)" ]`|< [StreamProperty](#streamproperty) > array| + +### TagsList +*Type* : < string > array + + ### TimeBasedRetention @@ -1032,6 +1552,11 @@ Updates the current state of the stream |**minutes**
*optional*|**Example** : `0`|integer (int64)| + +### TimestampAggregationTimeout +*Type* : long + + diff --git a/documentation/src/docs/security/pravega-security-configurations.md b/documentation/src/docs/security/pravega-security-configurations.md index 885eca130ec..e3e86b90c21 100644 --- a/documentation/src/docs/security/pravega-security-configurations.md +++ b/documentation/src/docs/security/pravega-security-configurations.md @@ -47,6 +47,16 @@ their Transport Layer Security (TLS) and auth (short for authentication and auth |Valid values:|{`true`, `false`} | |Old name:|`controller.auth.tlsEnabled` (deprecated) | +* __controller.security.tls.protocolVersion__ + + |Property| Value | + |---:|:----| + |Description: | Configurable versions for TLS Protocol | + |Type:|String| + |Default:| `TLSv1.2,TLSv1.3`| + |Valid values:|{`TLSv1.2`, `TLSv1.3`, `TLSv1.2,TLSv1.3`, `TLSv1.3,TLSv1.2`} | + + * __controller.security.tls.server.certificate.location__ |Property| Value | @@ -135,7 +145,7 @@ their Transport Layer Security (TLS) and auth (short for authentication and auth |Type:|string| |Default: | None | |Sample value:|`/path/to/client/zookeeper.truststore.pwd` | - |Old name:|`controller.zk.tlsTrustStoreFile` (deprecated) | + |Old name:|`controller.zk.tlsTrustStorePasswordFile` (deprecated) | ### Controller Authentication and Authorization Configuration Parameters @@ -181,6 +191,16 @@ their Transport Layer Security (TLS) and auth (short for authentication and auth |Default:| `false`| |Valid values:|{`true`, `false`} | |Old name:|`pravegaservice.enableTls` (deprecated) | + +* __pravegaservice.security.tls.protocolVersion__ + + |Property| Value | + |---:|:----| + |Description: | Configurable versions for TLS Protocol | + |Type:|String| + |Default:| `TLSv1.2,TLSv1.3`| + |Valid values:|{`TLSv1.2`, `TLSv1.3`, `TLSv1.2,TLSv1.3`, `TLSv1.3,TLSv1.2`} | + + * __pravegaservice.security.tls.certificate.autoReload.enable__ |Property| Value | @@ -211,6 +231,24 @@ their Transport Layer Security (TLS) and auth (short for authentication and auth |Default: | None | |Sample value:| `/path/to/server/server-privateKey.key` | |Old name:|`pravegaservice.keyFile` (deprecated) | + +* __pravegaservice.security.tls.server.keyStore.location__ + + |Property| Value | + |---:|:----| + |Description: | Path of the `.jks` file that contains the TLS material used for securing the Segment Store's REST interface. It contains the server's public key certificate and the associated private key, as well as the CA's certificate. | + |Type:|string| + |Default:| None | + |Sample value:|`/path/to/server/server-keystore.jks` | + +* __pravegaservice.security.tls.server.keyStore.pwd.location__ + + |Property| Value | + |---:|:----| + |Description: | Path of the file containing the password for the keystore specified via `pravegaservice.security.tls.server.keyStore.location`. | + |Type:|string| + |Default:| None | + |Sample value:|`/path/to/server/server-keystore.pwd` | * __autoScale.controller.connect.security.tls.enable__ |Property| Value | @@ -355,6 +393,7 @@ fewer security configuration parameters to configure. |Parameter|Details|Default |Feature| |---------|-------|-------------|-------| | `singlenode.security.tls.enable` | Whether to enable TLS for client-server communications. | `false` | TLS | +| `singlenode.security.tls.protocolVersion` | Version of the TLS Protocol. | `TLSv1.2,TLSv1.3` | TLS| | `singlenode.security.tls.certificate.location` | Path of the X.509 PEM-encoded server certificate file for the server.
| None | TLS | | `singlenode.security.tls.privateKey.location` | Path of the PEM-encoded private key file for the service. | None | TLS | | `singlenode.security.tls.keyStore.location` | Path of the keystore file in `.jks` for the REST interface. | None | TLS | diff --git a/documentation/src/docs/security/securing-distributed-mode-cluster.md b/documentation/src/docs/security/securing-distributed-mode-cluster.md index e2cee7e7a98..60e5bd8a8c2 100644 --- a/documentation/src/docs/security/securing-distributed-mode-cluster.md +++ b/documentation/src/docs/security/securing-distributed-mode-cluster.md @@ -89,6 +89,7 @@ Controller services can be configured in two different ways: ``` controller.security.tls.enable=true + controller.security.tls.protocolVersion=TLSv1.2,TLSv1.3 controller.security.tls.server.certificate.location=/etc/secrets/server-cert.crt ``` @@ -116,24 +117,27 @@ For a detailed description of these parameters, refer to the | Configuration Parameter| Example Value | |:-----------------------:|:-------------| | `controller.security.tls.enable` | `true` | + | `controller.security.tls.protocolVersion` | `TLSv1.2,TLSv1.3` 1| | `controller.security.tls.server.certificate.location` | `/etc/secrets/server-cert.crt` | | `controller.security.tls.server.privateKey.location` | `/etc/secrets/server-key.key` | | `controller.security.tls.trustStore.location` | `/etc/secrets/ca-cert.crt` | | `controller.security.tls.server.keyStore.location` | `/etc/secrets/server.keystore.jks` | - | `controller.security.tls.server.keyStore.pwd.location` | `/etc/secrets/server.keystore.jks.password` 1 | - | `controller.zk.connect.security.enable` | `false` 2 | - | `controller.zk.connect.security.tls.trustStore.location` | Unspecified 2| - | `controller.zk.connect.security.tls.trustStore.pwd.location` | Unspecified 2| + | `controller.security.tls.server.keyStore.pwd.location` | `/etc/secrets/server.keystore.jks.password` 2 | + | `controller.zk.connect.security.enable` | `false` 3 | + | `controller.zk.connect.security.tls.trustStore.location` | Unspecified 3| + | `controller.zk.connect.security.tls.trustStore.pwd.location` | Unspecified 3| | `controller.security.auth.enable` | `true` | - | `controller.security.pwdAuthHandler.accountsDb.location` 3 | `/etc/secrets/password-auth-handler.database` | + | `controller.security.pwdAuthHandler.accountsDb.location` 4 | `/etc/secrets/password-auth-handler.database` | | `controller.security.auth.delegationToken.signingKey.basis` | `a-secret-value` | - [1]: This and other `.password` files are text files containing the password for the corresponding store. + [1]: `TLSv1.2` and `TLSv1.3` strict modes are also allowed. - [2]: It is assumed here that Zookeeper TLS is disabled. You may enable it and specify the corresponding client-side + [2]: This and other `.password` files are text files containing the password for the corresponding store. + + [3]: It is assumed here that Zookeeper TLS is disabled. You may enable it and specify the corresponding client-side TLS configuration properties via the `controller.zk.*` properties. - [3]: This configuration property is required when using the default Password Auth Handler only. + [4]: This configuration property is required when using the default Password Auth Handler only. **Segment Store** @@ -144,24 +148,27 @@ below lists its TLS and auth parameters and sample values. 
For a detailed discri | Configuration Parameter| Example Value | |:-----------------------:|:-------------| | `pravegaservice.security.tls.enable` | `true` | + | `pravegaservice.security.tls.protocolVersion` | `TLSv1.2,TLSv1.3` 1 | | `pravegaservice.security.tls.server.certificate.location` | `/etc/secrets/server-cert.crt` | | `pravegaservice.security.tls.certificate.autoReload.enable` | `false` | | `pravegaservice.security.tls.server.privateKey.location` | `/etc/secrets/server-key.key` | - | `pravegaservice.zk.connect.security.enable` | `false` 1 | - | `pravegaservice.zk.connect.security.tls.trustStore.location` | Unspecified 1| - | `pravegaservice.zk.connect.security.tls.trustStore.pwd.location` | Unspecified 1| + | `pravegaservice.zk.connect.security.enable` | `false` 3 | + | `pravegaservice.zk.connect.security.tls.trustStore.location` | Unspecified 3| + | `pravegaservice.zk.connect.security.tls.trustStore.pwd.location` | Unspecified 3| | `autoScale.controller.connect.security.tls.enable` | `true` | | `autoScale.controller.connect.security.tls.truststore.location` | `/etc/secrets/ca-cert.crt` | | `autoScale.controller.connect.security.auth.enable` | `true` | - | `autoScale.security.auth.token.signingKey.basis` | `a-secret-value` 2| + | `autoScale.security.auth.token.signingKey.basis` | `a-secret-value` 2| | `autoScale.controller.connect.security.tls.validateHostName.enable` | `true` | | `pravega.client.auth.loadDynamic` | `false` | | `pravega.client.auth.method` | `Basic` | | `pravega.client.auth.token` | Base64-encoded value of 'username:password' string | -[1]: The secret value you use here must match the same value used for other Controller and Segment Store services. +[1]: `TLSv1.2` and `TLSv1.3` strict modes are also allowed. + +[2]: The secret value you use here must match the same value used for other Controller and Segment Store services. -[2]: It is assumed here that Zookeeper TLS is disabled. You may enable it and specify the corresponding client-side TLS +[3]: It is assumed here that Zookeeper TLS is disabled. You may enable it and specify the corresponding client-side TLS configuration properties via these properties. ### Configuring TLS and Credentials on Client Side diff --git a/documentation/src/docs/security/securing-standalone-mode-cluster.md b/documentation/src/docs/security/securing-standalone-mode-cluster.md index 188a4b3d13b..481a7c39684 100644 --- a/documentation/src/docs/security/securing-standalone-mode-cluster.md +++ b/documentation/src/docs/security/securing-standalone-mode-cluster.md @@ -29,6 +29,8 @@ For standalone mode servers, you may enable SSL/TLS, and/ `auth` (short for Auth The configuration parameter `singlenode.security.tls.enable` determines whether SSL/TLS is enabled in a standalone mode server. Its default value is `false`, and therefore, SSL/TLS is disabled by default. +The configuration parameter `singlenode.security.tls.protocolVersion` configures the TLS Protocol Version. Its default value is `TLSv1.2,TLSv1.3`, which is a mixed mode supporting both `TLSv1.2` and `TLSv1.3`. Pravega also supports strict `TLSv1.2` and strict `TLSv1.3` modes. + Similarly, the configuration parameter `singlenode.security.auth.enable` determines whether `auth` is enabled. It is disabled by default as well.
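For example, to restrict a standalone server to strict `TLSv1.3` mode (one of the valid values listed in the configuration tables above), the property would be set to a single version:

```
singlenode.security.tls.protocolVersion=TLSv1.3
```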
The following steps explain how to enable and configure SSL/TLS and/ `auth`: @@ -51,6 +53,7 @@ The following steps explain how to enable and configure SSL/TLS and/ `auth`: ```java singlenode.security.tls.enable=true + singlenode.security.tls.protocolVersion=TLSv1.2,TLSv1.3 singlenode.security.tls.privateKey.location=../config/server-key.key singlenode.security.tls.certificate.location=../config/server-cert.crt singlenode.security.tls.keyStore.location=../config/server.keystore.jks diff --git a/gradle.properties b/gradle.properties index 166af3e3c8d..e34b0214abe 100644 --- a/gradle.properties +++ b/gradle.properties @@ -17,16 +17,18 @@ dockerExecutable=/usr/bin/docker org.gradle.parallel=true +org.gradle.jvmargs=-Xms1g #3rd party Versions apacheCommonsCsvVersion=1.5 -apacheCommonsCompressVersion=1.20 +apacheCommonsCompressVersion=1.21 apacheCuratorVersion=4.0.1 apacheZookeeperVersion=3.5.9 +awsSdkVersion=2.17.43 checkstyleToolVersion=8.23 bookKeeperVersion=4.14.1 commonsBeanutilsVersion=1.9.4 -commonsioVersion=2.6 +commonsioVersion=2.11.0 commonsLang3Version=3.7 dockerClientVersion=8.16.0 ecsObjectClientVersion=3.0.6 @@ -34,20 +36,18 @@ spotbugsVersion=4.0.6 spotbugsAnnotationsVersion=4.0.6 spotbugsPluginVersion=4.4.4 jcipAnnotationsVersion=1.0 -gradleDockerPlugin=3.1.0 gradleLombokPluginVersion=4.0.0 gradleMkdocsPluginVersion=2.1.1 gradleSshPluginVersion=2.9.0 -grpcVersion=1.36.0 +grpcVersion=1.36.2 gsonVersion=2.8.6 guavaVersion=28.2-jre guiceVersion=4.0 -hadoopVersion=3.3.0 +hadoopVersion=3.3.1 javaxServletApiVersion=4.0.0 -jacksonVersion=2.9.10.6 +jacksonVersion=2.12.4 javaxwsrsApiVersion=2.1 jaxbVersion=2.3.1 -jsonSimpleVersion=1.1.1 activationVersion=1.2.0 javaxAnnotationVersion=1.3.2 jerseyVersion=2.29 @@ -62,15 +62,15 @@ nettyBoringSSLVersion=2.0.39.Final protobufGradlePlugin=0.8.15 protobufProtocVersion=3.14.0 qosLogbackVersion=1.2.3 -swaggerJersey2JaxrsVersion=1.5.22 +swaggerJersey2JaxrsVersion=1.6.2 slf4jApiVersion=1.7.25 gradleGitPluginVersion=4.1.0 k8ClientVersion=8.0.0 jjwtVersion=0.9.1 -bouncyCastleVersion=1.60 +bouncyCastleVersion=1.69 # Version and base tags can be overridden at build time -pravegaVersion=0.10.0-SNAPSHOT +pravegaVersion=0.11.0-SNAPSHOT pravegaBaseTag=pravega/pravega bookkeeperBaseTag=pravega/bookkeeper diff --git a/gradle/java.gradle b/gradle/java.gradle index 87da7e2fadf..3db1a0d9dd8 100644 --- a/gradle/java.gradle +++ b/gradle/java.gradle @@ -29,15 +29,23 @@ plugins.withId('java') { "-Xlint:finally", "-Xlint:overrides", "-Xlint:path", - "-Werror", "-Xlint:unchecked", + "-Werror", "--release", getDefaultJavaVersion() ]) } compileTestJava { - options.compilerArgs.addAll(["--release", getDefaultJavaVersion()]) + options.compilerArgs.addAll([ + "-Xlint:empty", + "-Xlint:fallthrough", + "-Xlint:finally", + "-Xlint:overrides", + "-Xlint:path", + "-Werror", + "--release", + getDefaultJavaVersion()]) } archivesBaseName = "pravega" + project.path.replace(':', '-') diff --git a/gradle/protobuf.gradle b/gradle/protobuf.gradle index 4bea272319a..d33757f655b 100644 --- a/gradle/protobuf.gradle +++ b/gradle/protobuf.gradle @@ -44,6 +44,8 @@ plugins.withId('com.google.protobuf') { compile group: 'io.grpc', name: 'grpc-netty', version: grpcVersion compile group: 'io.grpc', name: 'grpc-protobuf', version: grpcVersion compile group: 'io.grpc', name: 'grpc-stub', version: grpcVersion + // override grpc's transitive protobuf version with allprojects' force-upgraded version + compile group: 'com.google.protobuf', name: 'protobuf-java', version: protobufProtocVersion } idea { 
diff --git a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/BadSegmentTypeException.java b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/BadSegmentTypeException.java index a14993a977a..51de9c3e594 100644 --- a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/BadSegmentTypeException.java +++ b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/BadSegmentTypeException.java @@ -19,21 +19,21 @@ * Exception that is thrown whenever a segment of the wrong type is accessed (i.e., we want a StreamSegment but were given * the name of a Table Segment). */ -public class BadSegmentTypeException extends StreamSegmentException { +public class BadSegmentTypeException extends IllegalArgumentException { private static final long serialVersionUID = 1L; /** * Creates a new instance of the BadSegmentTypeException class. * - * @param streamSegmentName The name of the Segment. - * @param expectedType The expected type for the Segment. - * @param actualType The actual type. + * @param segmentName The name of the Segment. + * @param expectedType The expected type for the Segment. + * @param actualType The actual type. */ - public BadSegmentTypeException(String streamSegmentName, String expectedType, String actualType) { - super(streamSegmentName, getMessage(expectedType, actualType)); + public BadSegmentTypeException(String segmentName, SegmentType expectedType, SegmentType actualType) { + super(getMessage(segmentName, expectedType, actualType)); } - private static String getMessage(String expectedType, String actualType) { - return String.format("Bad Segment Type. Expected '%s', given '%s'.", expectedType, actualType); + private static String getMessage(String segmentName, SegmentType expectedType, SegmentType actualType) { + return String.format("Bad Segment Type for '%s'. Expected '%s', given '%s'.", segmentName, expectedType, actualType); } } diff --git a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/SegmentApi.java b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/SegmentApi.java new file mode 100644 index 00000000000..3a75ca92e12 --- /dev/null +++ b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/SegmentApi.java @@ -0,0 +1,242 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.contracts; + +import io.pravega.common.util.BufferView; + +import java.time.Duration; +import java.util.Collection; +import java.util.Map; +import java.util.concurrent.CompletableFuture; + +/** + * Defines all operations that are supported on a StreamSegment. + * + * Notes about all AttributeUpdates parameters in this interface's methods: + * * Only the Attributes contained in this collection will be touched; all other attributes will be left intact. + * * This can update both Core or Extended Attributes. 
If an Extended Attribute is updated, its latest value will be kept + * in memory for a while (based on Segment Metadata eviction or other rules), which allows for efficient pipelining. + * * If an Extended Attribute is not loaded, use getAttributes() to load its latest value up. + * * To delete an Attribute, set its value to Attributes.NULL_ATTRIBUTE_VALUE. + */ +public interface SegmentApi { + + /** + * Appends a range of bytes at the end of a StreamSegment and atomically updates the given + * attributes. The byte range will be appended as a contiguous block; however, there is no + * guarantee of ordering between different calls to this method. + * + * @param streamSegmentName The name of the StreamSegment to append to. + * @param data A {@link BufferView} representing the data to add. This {@link BufferView} should not be + * modified until the returned CompletableFuture from this method completes. + * @param attributeUpdates A Collection of Attribute-Values to set or update. May be null (which indicates no updates). + * See Notes about AttributeUpdates in the interface Javadoc. + * @param timeout Timeout for the operation + * @return A CompletableFuture that, when completed normally, will indicate the append completed successfully and + * contains the new length of the segment. If the operation failed, the future will be failed with the causing exception. + * (NOTE: the length is not necessarily the same as offset immediately following the data because the append may have + * been batched together with others internally.) Notable exceptions: + * - {@link BadAttributeUpdateException} If {@code attributeUpdates} is non-null and non-empty and at least one of + * the {@link AttributeUpdate} instances within that collection has {@link AttributeUpdate#getUpdateType()} equal to + * {@link AttributeUpdateType#ReplaceIfEquals} or {@link AttributeUpdateType#ReplaceIfGreater} and the condition for + * this update is rejected. + * @throws NullPointerException If any of the arguments are null, except attributeUpdates. + * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't + * check if the StreamSegment does not exist - that exception + * will be set in the returned CompletableFuture). + */ + CompletableFuture append(String streamSegmentName, BufferView data, AttributeUpdateCollection attributeUpdates, Duration timeout); + + /** + * Appends a range of bytes at the end of a StreamSegment and atomically updates the given + * attributes, but only if the current length of the StreamSegment equals a certain value. The + * byte range will be appended as a contiguous block. This method guarantees ordering (among + * subsequent calls). + * + * @param streamSegmentName The name of the StreamSegment to append to. + * @param offset The offset at which to append. If the current length of the StreamSegment does not equal + * this value, the operation will fail with a BadOffsetException. + * @param data A {@link BufferView} representing the data to add. This {@link BufferView} should not be + * modified until the returned CompletableFuture from this method completes. + * @param attributeUpdates A Collection of Attribute-Values to set or update. May be null (which indicates no updates). + * See Notes about AttributeUpdates in the interface Javadoc. + * @param timeout Timeout for the operation + * @return A CompletableFuture that, when completed normally, will indicate the append completed successfully and + contains the new length of the segment.
If the operation failed, the future will be failed with the causing exception. + * (NOTE: the length is not necessarily the same as offset immediately following the data because the append may have + * been batched together with others internally.) Notable exceptions: + * - {@link BadAttributeUpdateException} If {@code attributeUpdates} is non-null and non-empty and at least one of + * the {@link AttributeUpdate} instances within that collection has {@link AttributeUpdate#getUpdateType()} equal to + * {@link AttributeUpdateType#ReplaceIfEquals} or {@link AttributeUpdateType#ReplaceIfGreater} and the condition for + * this update is rejected. + * - {@link BadOffsetException} if the current length of the given Segment does not match the given {@code offset}. + * IMPORTANT: If the append fails validation due to both {@link BadAttributeUpdateException} and {@link BadOffsetException}, + * then {@link BadAttributeUpdateException} will take precedence. + * @throws NullPointerException If any of the arguments are null, except attributeUpdates. + * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't + * check if the StreamSegment does not exist - that exception + * will be set in the returned CompletableFuture). + */ + CompletableFuture append(String streamSegmentName, long offset, BufferView data, AttributeUpdateCollection attributeUpdates, Duration timeout); + + /** + * Performs an attribute update operation on the given Segment. + * + * @param streamSegmentName The name of the StreamSegment which will have its attributes updated. + * @param attributeUpdates A Collection of Attribute-Values to set or update. May be null (which indicates no updates). + * See Notes about AttributeUpdates in the interface Javadoc. + * @param timeout Timeout for the operation + * @return A CompletableFuture that, when completed normally, will indicate the update completed successfully. + * If the operation failed, the future will be failed with the causing exception. Notable exceptions: + * - {@link BadAttributeUpdateException} If at least one of the {@link AttributeUpdate} instances within {@code attributeUpdates} + * has {@link AttributeUpdate#getUpdateType()} equal to {@link AttributeUpdateType#ReplaceIfEquals} or + * {@link AttributeUpdateType#ReplaceIfGreater} and the condition for this update is rejected. + * @throws NullPointerException If any of the arguments are null. + * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't check if the StreamSegment + * does not exist - that exception will be set in the returned CompletableFuture). + */ + CompletableFuture updateAttributes(String streamSegmentName, AttributeUpdateCollection attributeUpdates, Duration timeout); + + /** + * Gets the values of the given Attributes (Core or Extended). + * + * Lookup order: + * 1. (Core or Extended) In-memory Segment Metadata cache (which always has the latest value of an attribute). + * 2. (Extended only) Backing Attribute Index for this Segment. + * + * @param streamSegmentName The name of the StreamSegment for which to get attributes. + * @param attributeIds A Collection of Attribute Ids to fetch. These may be Core or Extended Attributes. + * @param cache If set, then any Extended Attribute values that are not already in the in-memory Segment + * Metadata cache will be atomically added using a conditional update (comparing against a missing value). + * This argument will be ignored if the StreamSegment is currently Sealed. 
+ * @param timeout Timeout for the operation. + * @return A Completable future that, when completed, will contain a Map of Attribute Ids to their latest values. Any + * Attribute that is not set will also be returned (with a value equal to Attributes.NULL_ATTRIBUTE_VALUE). If the operation + * failed, the future will be failed with the causing exception. + * @throws NullPointerException If any of the arguments are null. + * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't check if the StreamSegment + * does not exist - that exception will be set in the returned CompletableFuture). + */ + CompletableFuture> getAttributes(String streamSegmentName, Collection attributeIds, boolean cache, Duration timeout); + + /** + * Initiates a Read operation on a particular StreamSegment and returns a ReadResult which can be used to consume the + * read data. + * + * @param streamSegmentName The name of the StreamSegment to read from. + * @param offset The offset within the stream to start reading at. + * @param maxLength The maximum number of bytes to read. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will contain a ReadResult instance that can be used to + * consume the read data. If the operation failed, the future will be failed with the causing exception. The future + * will be failed with a {@link java.util.concurrent.CancellationException} if the segment container is shutting down + * or the segment is evicted from memory. + * @throws NullPointerException If any of the arguments are null. + * @throws IllegalArgumentException If any of the arguments are invalid. + */ + CompletableFuture read(String streamSegmentName, long offset, int maxLength, Duration timeout); + + /** + * Gets information about a StreamSegment. + * + * @param streamSegmentName The name of the StreamSegment. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will contain the result. If the operation failed, the + * future will be failed with the causing exception. Note that this result will only contain those attributes that + * are loaded in memory (if any) or Core Attributes. To ensure that Extended Attributes are also included, you must use + * getAttributes(), which will fetch all attributes, regardless of where they are currently located. + * @throws IllegalArgumentException If any of the arguments are invalid. + */ + CompletableFuture getStreamSegmentInfo(String streamSegmentName, Duration timeout); + + /** + * Creates a new StreamSegment. + * + * @param streamSegmentName The name of the StreamSegment to create. + * @param attributes A Collection of Attribute-Values to set on the newly created StreamSegment. May be null. + * See Notes about AttributeUpdates in the interface Javadoc. + * @param segmentType Type of Segment to create. This cannot change after creation. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation + * failed, the future will be failed with the causing exception. + * @throws IllegalArgumentException If any of the arguments are invalid. + */ + CompletableFuture createStreamSegment(String streamSegmentName, SegmentType segmentType, Collection attributes, + Duration timeout); + + /** + * Merges a StreamSegment into another. If the StreamSegment is not already sealed, it will seal it. 
+ * + * @param targetSegmentName The name of the StreamSegment to merge into. + * @param sourceSegmentName The name of the StreamSegment to merge. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will contain a MergeStreamSegmentResult instance with information about the + * source and target Segments. If the operation failed, the future will be failed with the causing exception. + * @throws IllegalArgumentException If any of the arguments are invalid. + */ + CompletableFuture mergeStreamSegment(String targetSegmentName, String sourceSegmentName, Duration timeout); + + /** + * Merges a StreamSegment into another and atomically checks and updates a set of attributes on the target StreamSegment. + * If the StreamSegment is not already sealed, it will seal it. + * + * @param targetSegmentName The name of the StreamSegment to merge into. + * @param sourceSegmentName The name of the StreamSegment to merge. + * @param attributeUpdates A Collection of Attribute-Values to set on the target StreamSegment. May be null. + * See Notes about AttributeUpdates in the interface Javadoc. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will contain a MergeStreamSegmentResult instance with information about + * the source and target Segments. If the operation failed, the future will be failed with the causing exception. + * @throws IllegalArgumentException If any of the arguments are invalid. + */ + CompletableFuture mergeStreamSegment(String targetSegmentName, String sourceSegmentName, + AttributeUpdateCollection attributeUpdates, Duration timeout); + + /** + * Seals a StreamSegment for modifications. + * + * @param streamSegmentName The name of the StreamSegment to seal. + * @param timeout Timeout for the operation + * @return A CompletableFuture that, when completed normally, will contain the final length of the StreamSegment. + * If the operation failed, the future will be failed with the causing exception. + * @throws IllegalArgumentException If any of the arguments are invalid. + */ + CompletableFuture sealStreamSegment(String streamSegmentName, Duration timeout); + + /** + * Deletes a StreamSegment. + * + * @param streamSegmentName The name of the StreamSegment to delete. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation + * failed, the future will be failed with the causing exception. + * @throws IllegalArgumentException If any of the arguments are invalid. + */ + CompletableFuture deleteStreamSegment(String streamSegmentName, Duration timeout); + + /** + * Truncates a StreamSegment at a given offset. + * + * @param streamSegmentName The name of the StreamSegment to truncate. + * @param offset The offset at which to truncate. This must be at least equal to the existing truncation + * offset and no larger than the StreamSegment's length. After the operation is complete, + * no offsets below this one will be accessible anymore. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation + * failed, the future will be failed with the causing exception. 
+ */ + CompletableFuture truncateStreamSegment(String streamSegmentName, long offset, Duration timeout); +} diff --git a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/StreamSegmentStore.java b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/StreamSegmentStore.java index 70f7c29b884..9aeead1f633 100644 --- a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/StreamSegmentStore.java +++ b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/StreamSegmentStore.java @@ -15,211 +15,22 @@ */ package io.pravega.segmentstore.contracts; -import io.pravega.common.util.BufferView; import java.time.Duration; -import java.util.Collection; -import java.util.Map; import java.util.concurrent.CompletableFuture; /** - * Defines all operations that are supported on a StreamSegment. - * - * Notes about all AttributeUpdates parameters in this interface's methods: - * * Only the Attributes contained in this collection will be touched; all other attributes will be left intact. - * * This can update both Core or Extended Attributes. If an Extended Attribute is updated, its latest value will be kept - * in memory for a while (based on Segment Metadata eviction or other rules), which allow for efficient pipelining. - * * If an Extended Attribute is not loaded, use getAttributes() to load its latest value up. - * * To delete an Attribute, set its value to Attributes.NULL_ATTRIBUTE_VALUE. + * Defines the StreamSegmentStore which is responsible for delegating the various + * operations possible on a StreamSegment to their respective Container. */ -public interface StreamSegmentStore { - - /** - * Appends a range of bytes at the end of a StreamSegment and atomically updates the given - * attributes. The byte range will be appended as a contiguous block, however there is no - * guarantee of ordering between different calls to this method. - * - * @param streamSegmentName The name of the StreamSegment to append to. - * @param data A {@link BufferView} representing the data to add. This {@link BufferView} should not be - * modified until the returned CompletableFuture from this method completes. - * @param attributeUpdates A Collection of Attribute-Values to set or update. May be null (which indicates no updates). - * See Notes about AttributeUpdates in the interface Javadoc. - * @param timeout Timeout for the operation - * @return A CompletableFuture that, when completed normally, will indicate the append completed successfully and - * contains the new length of the segment. If the operation failed, the future will be failed with the causing exception. - * (NOTE: the length is not necessarily the same as offset immediately following the data because the append may have - * been batched together with others internally.) Notable exceptions: - * - {@link BadAttributeUpdateException} If {@code attributeUpdates} is non-null and non-empty and at least one of - * the {@link AttributeUpdate} instances within that collection has {@link AttributeUpdate#getUpdateType()} equal to - * {@link AttributeUpdateType#ReplaceIfEquals} or {@link AttributeUpdateType#ReplaceIfGreater} and the condition for - * this update is rejected. - * @throws NullPointerException If any of the arguments are null, except attributeUpdates. - * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't - * check if the StreamSegment does not exist - that exception - * will be set in the returned CompletableFuture). 
- */ - CompletableFuture append(String streamSegmentName, BufferView data, AttributeUpdateCollection attributeUpdates, Duration timeout); - - /** - * Appends a range of bytes at the end of a StreamSegment an atomically updates the given - * attributes, but only if the current length of the StreamSegment equals a certain value. The - * byte range will be appended as a contiguous block. This method guarantees ordering (among - * subsequent calls). - * - * @param streamSegmentName The name of the StreamSegment to append to. - * @param offset The offset at which to append. If the current length of the StreamSegment does not equal - * this value, the operation will fail with a BadOffsetException. - * @param data A {@link BufferView} representing the data to add. This {@link BufferView} should not be - * modified until the returned CompletableFuture from this method completes. - * @param attributeUpdates A Collection of Attribute-Values to set or update. May be null (which indicates no updates). - * See Notes about AttributeUpdates in the interface Javadoc. - * @param timeout Timeout for the operation - * @return A CompletableFuture that, when completed normally, will indicate the append completed successfully and - * contains the new length of the segment. If the operation failed, the future will be failed with the causing exception. - * (NOTE: the length is not necessarily the same as offset immediately following the data because the append may have - * been batched together with others internally.) Notable exceptions: - * - {@link BadAttributeUpdateException} If {@code attributeUpdates} is non-null and non-empty and at least one of - * the {@link AttributeUpdate} instances within that collection has {@link AttributeUpdate#getUpdateType()} equal to - * {@link AttributeUpdateType#ReplaceIfEquals} or {@link AttributeUpdateType#ReplaceIfGreater} and the condition for - * this update is rejected. - * - {@link BadOffsetException} if the current length of the given Segment does not match the given {@code offset}. - * IMPORTANT: If the append fails validation due to both {@link BadAttributeUpdateException} and {@link BadOffsetException}, - * then {@link BadAttributeUpdateException} will take precedence. - * @throws NullPointerException If any of the arguments are null, except attributeUpdates. - * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't - * check if the StreamSegment does not exist - that exception - * will be set in the returned CompletableFuture). - */ - CompletableFuture append(String streamSegmentName, long offset, BufferView data, AttributeUpdateCollection attributeUpdates, Duration timeout); - - /** - * Performs an attribute update operation on the given Segment. - * - * @param streamSegmentName The name of the StreamSegment which will have its attributes updated. - * @param attributeUpdates A Collection of Attribute-Values to set or update. May be null (which indicates no updates). - * See Notes about AttributeUpdates in the interface Javadoc. - * @param timeout Timeout for the operation - * @return A CompletableFuture that, when completed normally, will indicate the update completed successfully. - * If the operation failed, the future will be failed with the causing exception. 
Notable exceptions: - * - {@link BadAttributeUpdateException} If at least one of the {@link AttributeUpdate} instances within {@code attributeUpdates} - * has {@link AttributeUpdate#getUpdateType()} equal to {@link AttributeUpdateType#ReplaceIfEquals} or - * {@link AttributeUpdateType#ReplaceIfGreater} and the condition for this update is rejected. - * @throws NullPointerException If any of the arguments are null. - * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't check if the StreamSegment - * does not exist - that exception will be set in the returned CompletableFuture). - */ - CompletableFuture updateAttributes(String streamSegmentName, AttributeUpdateCollection attributeUpdates, Duration timeout); - - /** - * Gets the values of the given Attributes (Core or Extended). - * - * Lookup order: - * 1. (Core or Extended) In-memory Segment Metadata cache (which always has the latest value of an attribute). - * 2. (Extended only) Backing Attribute Index for this Segment. - * - * @param streamSegmentName The name of the StreamSegment for which to get attributes. - * @param attributeIds A Collection of Attribute Ids to fetch. These may be Core or Extended Attributes. - * @param cache If set, then any Extended Attribute values that are not already in the in-memory Segment - * Metadata cache will be atomically added using a conditional update (comparing against a missing value). - * This argument will be ignored if the StreamSegment is currently Sealed. - * @param timeout Timeout for the operation. - * @return A Completable future that, when completed, will contain a Map of Attribute Ids to their latest values. Any - * Attribute that is not set will also be returned (with a value equal to Attributes.NULL_ATTRIBUTE_VALUE). If the operation - * failed, the future will be failed with the causing exception. - * @throws NullPointerException If any of the arguments are null. - * @throws IllegalArgumentException If the StreamSegment Name is invalid (NOTE: this doesn't check if the StreamSegment - * does not exist - that exception will be set in the returned CompletableFuture). - */ - CompletableFuture> getAttributes(String streamSegmentName, Collection attributeIds, boolean cache, Duration timeout); - - /** - * Initiates a Read operation on a particular StreamSegment and returns a ReadResult which can be used to consume the - * read data. - * - * @param streamSegmentName The name of the StreamSegment to read from. - * @param offset The offset within the stream to start reading at. - * @param maxLength The maximum number of bytes to read. - * @param timeout Timeout for the operation. - * @return A CompletableFuture that, when completed normally, will contain a ReadResult instance that can be used to - * consume the read data. If the operation failed, the future will be failed with the causing exception. The future - * will be failed with a {@link java.util.concurrent.CancellationException} if the segment container is shutting down - * or the segment is evicted from memory. - * @throws NullPointerException If any of the arguments are null. - * @throws IllegalArgumentException If any of the arguments are invalid. - */ - CompletableFuture read(String streamSegmentName, long offset, int maxLength, Duration timeout); - - /** - * Gets information about a StreamSegment. - * - * @param streamSegmentName The name of the StreamSegment. - * @param timeout Timeout for the operation. - * @return A CompletableFuture that, when completed normally, will contain the result. 
If the operation failed, the - * future will be failed with the causing exception. Note that this result will only contain those attributes that - * are loaded in memory (if any) or Core Attributes. To ensure that Extended Attributes are also included, you must use - * getAttributes(), which will fetch all attributes, regardless of where they are currently located. - * @throws IllegalArgumentException If any of the arguments are invalid. - */ - CompletableFuture getStreamSegmentInfo(String streamSegmentName, Duration timeout); - - /** - * Creates a new StreamSegment. - * - * @param streamSegmentName The name of the StreamSegment to create. - * @param attributes A Collection of Attribute-Values to set on the newly created StreamSegment. May be null. - * See Notes about AttributeUpdates in the interface Javadoc. - * @param segmentType Type of Segment to create. This cannot change after creation. - * @param timeout Timeout for the operation. - * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation - * failed, the future will be failed with the causing exception. - * @throws IllegalArgumentException If any of the arguments are invalid. - */ - CompletableFuture createStreamSegment(String streamSegmentName, SegmentType segmentType, Collection attributes, - Duration timeout); - - /** - * Merges a StreamSegment into another. If the StreamSegment is not already sealed, it will seal it. - * - * @param targetSegmentName The name of the StreamSegment to merge into. - * @param sourceSegmentName The name of the StreamSegment to merge. - * @param timeout Timeout for the operation. - * @return A CompletableFuture that, when completed normally, will contain a MergeStreamSegmentResult instance with information about the - * source and target Segments. If the operation failed, the future will be failed with the causing exception. - * @throws IllegalArgumentException If any of the arguments are invalid. - */ - CompletableFuture mergeStreamSegment(String targetSegmentName, String sourceSegmentName, Duration timeout); - - /** - * Seals a StreamSegment for modifications. - * - * @param streamSegmentName The name of the StreamSegment to seal. - * @param timeout Timeout for the operation - * @return A CompletableFuture that, when completed normally, will contain the final length of the StreamSegment. - * If the operation failed, the future will be failed with the causing exception. - * @throws IllegalArgumentException If any of the arguments are invalid. - */ - CompletableFuture sealStreamSegment(String streamSegmentName, Duration timeout); - - /** - * Deletes a StreamSegment. - * - * @param streamSegmentName The name of the StreamSegment to delete. - * @param timeout Timeout for the operation. - * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation - * failed, the future will be failed with the causing exception. - * @throws IllegalArgumentException If any of the arguments are invalid. - */ - CompletableFuture deleteStreamSegment(String streamSegmentName, Duration timeout); +public interface StreamSegmentStore extends SegmentApi { /** - * Truncates a StreamSegment at a given offset. + * Applies all outstanding operations in a particular SegmentContainer from the DurableLog into the underlying Storage. * - * @param streamSegmentName The name of the StreamSegment to truncate. - * @param offset The offset at which to truncate. 
This must be at least equal to the existing truncation - * offset and no larger than the StreamSegment's length. After the operation is complete, - * no offsets below this one will be accessible anymore. - * @param timeout Timeout for the operation. - * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation - * failed, the future will be failed with the causing exception. + * @param containerId The Id of the container that needs to be persisted to storage. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will indicate that the operation completed successfully. + * If the operation failed, the future will be failed with the causing exception. */ - CompletableFuture<Void> truncateStreamSegment(String streamSegmentName, long offset, Duration timeout); + CompletableFuture<Void> flushToStorage(int containerId, Duration timeout); } diff --git a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentConfig.java b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentConfig.java index 792b9ba314b..b6192a70446 100644 --- a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentConfig.java +++ b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentConfig.java @@ -27,9 +27,11 @@ public class TableSegmentConfig { public static final TableSegmentConfig NO_CONFIG = TableSegmentConfig.builder().build(); @Builder.Default private final int keyLength = 0; + @Builder.Default + private final long rolloverSizeBytes = 0L; @Override public String toString() { - return String.format("KeyLength = %s", this.keyLength); + return String.format("KeyLength = %s, RolloverSizeBytes = %s", this.keyLength, this.rolloverSizeBytes); } } diff --git a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentInfo.java b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentInfo.java new file mode 100644 index 00000000000..ecaa121aae9 --- /dev/null +++ b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableSegmentInfo.java @@ -0,0 +1,68 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.contracts.tables; + +import io.pravega.segmentstore.contracts.SegmentType; +import lombok.AccessLevel; +import lombok.Builder; +import lombok.Getter; +import lombok.RequiredArgsConstructor; + +/** + * General Table Segment Information. + */ +@Getter +@RequiredArgsConstructor(access = AccessLevel.PRIVATE) +@Builder +public final class TableSegmentInfo { + /** + * The name of the Table Segment. + */ + private final String name; + /** + * The length of the Table Segment. + * NOTE: The actual size (in bytes) is {@link #getLength()} - {@link #getStartOffset()} (accounting for the truncation offset).
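+ * Example: if {@link #getLength()} is 100 and {@link #getStartOffset()} is 40, the Table Segment currently holds 60 bytes.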
+ */ + private final long length; + /** + * The Start Offset (Truncation Offset) of the Table Segment. + */ + private final long startOffset; + /** + * Gets the number of indexed entries (unique keys). + * + * NOTE: this is an "eventually consistent" value: + * - In-flight (not yet acknowledged) updates and removals are not included. + * - Recently acknowledged updates and removals may or may not be included (depending on whether they were conditional + * or not). As the index is updated (in the background), this value will converge towards the actual number of unique + * keys in the index. + */ + private final long entryCount; + /** + * Gets the Key Length. 0 means variable key length; any non-zero (positive) indicates a Fixed Key Length Table Segment. + */ + private final int keyLength; + /** + * The {@link SegmentType} for the segment. + */ + private final SegmentType type; + + @Override + public String toString() { + return String.format("Name = %s, Entries = %s, StartOffset = %d, Length = %d, Type = %s, KeyLength = %s", + getName(), getEntryCount(), getStartOffset(), getLength(), getType(), getKeyLength()); + } +} diff --git a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableStore.java b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableStore.java index dea10b6e069..b30d18f6466 100644 --- a/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableStore.java +++ b/segmentstore/contracts/src/main/java/io/pravega/segmentstore/contracts/tables/TableStore.java @@ -130,35 +130,6 @@ default CompletableFuture createSegment(String segmentName, SegmentType se */ CompletableFuture deleteSegment(String segmentName, boolean mustBeEmpty, Duration timeout); - /** - * Merges a Table Segment into another Table Segment. - * - * @param targetSegmentName The name of the Table Segment to merge into. - * @param sourceSegmentName The name of the Table Segment to merge. - * @param timeout Timeout for the operation. - * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation - * failed, the future will be failed with the causing exception. Notable Exceptions: - *

- * <ul>
- * <li>{@link StreamSegmentNotExistsException} If either the Source or Target Table Segment do not exist.
- * <li>{@link BadSegmentTypeException} If sourceSegmentName or targetSegmentName refer to non-Table Segments.
- * </ul>
- */ - CompletableFuture merge(String targetSegmentName, String sourceSegmentName, Duration timeout); - - /** - * Seals a Table Segment for modifications. - * - * @param segmentName The name of the Table Segment to seal. - * @param timeout Timeout for the operation - * @return A CompletableFuture that, when completed normally, will indicate the operation completed. If the operation - * failed, the future will be failed with the causing exception. Notable Exceptions: - *
- * <ul>
- * <li>{@link StreamSegmentNotExistsException} If the Table Segment does not exist.
- * <li>{@link BadSegmentTypeException} If segmentName refers to a non-Table Segment.
- * </ul>
- */ - CompletableFuture seal(String segmentName, Duration timeout); - /** * Inserts new or updates existing Table Entries into the given Table Segment. * @@ -338,4 +309,19 @@ default CompletableFuture createSegment(String segmentName, SegmentType se * @throws IllegalDataFormatException If serializedState is not null and cannot be deserialized. */ CompletableFuture>> entryDeltaIterator(String segmentName, long fromPosition, Duration fetchTimeout); + + /** + * Gets information about a Table Segment. + * + * @param segmentName The name of the Table Segment. + * @param timeout Timeout for the operation. + * @return A CompletableFuture that, when completed normally, will contain the result. If the operation failed, the + * future will be failed with the causing exception. Note that this result will only contain the Core Attributes + * for this Segment. Notable Exceptions: + *
+ * <ul>
+ * <li>{@link StreamSegmentNotExistsException} If the Table Segment does not exist.
+ * <li>{@link BadSegmentTypeException} If segmentName refers to a non-Table Segment.
+ * </ul>
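+ *
+ * Illustrative usage (a sketch only; assumes a live {@code tableStore} reference, and the segment name is hypothetical):
+ * <pre>{@code
+ * tableStore.getInfo("myTableSegment", Duration.ofSeconds(30))
+ *           .thenAccept(info -> System.out.println(info)); // prints name, entry count, offsets, key length, type
+ * }</pre>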
+ */ + CompletableFuture getInfo(String segmentName, Duration timeout); } diff --git a/segmentstore/contracts/src/test/java/io/pravega/segmentstore/contracts/SegmentTypeTests.java b/segmentstore/contracts/src/test/java/io/pravega/segmentstore/contracts/SegmentTypeTests.java index 44fb5155cea..82741ab140a 100644 --- a/segmentstore/contracts/src/test/java/io/pravega/segmentstore/contracts/SegmentTypeTests.java +++ b/segmentstore/contracts/src/test/java/io/pravega/segmentstore/contracts/SegmentTypeTests.java @@ -66,6 +66,7 @@ public void testToFromAttributes() { Assert.assertEquals("Simple Table Segment.", expectedSimpleSegment, simpleTableSegment); } + @SafeVarargs private void checkBuilder(SegmentType type, long expectedValue, Predicate... predicates) { check(type, expectedValue, predicates); val rebuilt = SegmentType.builder(type).build(); @@ -74,6 +75,7 @@ private void checkBuilder(SegmentType type, long expectedValue, Predicate... predicates) { Assert.assertEquals("Unexpected getValue() for " + type.toString(), expectedValue, type.getValue()); for (val p : predicates) { diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/ServiceStarter.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/ServiceStarter.java index e7aacbf36ed..632d84221c1 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/ServiceStarter.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/ServiceStarter.java @@ -17,15 +17,19 @@ import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Preconditions; +import com.sun.management.HotSpotDiagnosticMXBean; +import io.netty.util.internal.PlatformDependent; import io.pravega.common.Exceptions; import io.pravega.common.security.JKSHelper; import io.pravega.common.security.ZKTLSUtils; import io.pravega.common.cluster.Host; import io.pravega.segmentstore.contracts.StreamSegmentStore; import io.pravega.segmentstore.contracts.tables.TableStore; +import io.pravega.segmentstore.server.CacheManager.CacheManagerHealthContributor; import io.pravega.segmentstore.server.host.delegationtoken.TokenVerifierImpl; import io.pravega.segmentstore.server.host.handler.AdminConnectionListener; import io.pravega.segmentstore.server.host.handler.PravegaConnectionListener; +import io.pravega.segmentstore.server.host.health.ZKHealthContributor; import io.pravega.shared.health.bindings.resources.HealthImpl; import io.pravega.segmentstore.server.host.stat.AutoScaleMonitor; import io.pravega.segmentstore.server.host.stat.AutoScalerConfig; @@ -35,6 +39,7 @@ import io.pravega.segmentstore.storage.impl.bookkeeper.BookKeeperConfig; import io.pravega.segmentstore.storage.impl.bookkeeper.BookKeeperLogFactory; import io.pravega.segmentstore.storage.mocks.InMemoryDurableDataLogFactory; +import io.pravega.segmentstore.server.host.health.SegmentContainerRegistryHealthContributor; import io.pravega.shared.health.HealthServiceManager; import io.pravega.shared.metrics.MetricsConfig; import io.pravega.shared.metrics.MetricsProvider; @@ -42,8 +47,11 @@ import io.pravega.shared.rest.RESTServer; import io.pravega.shared.rest.security.AuthHandlerManager; +import java.lang.management.ManagementFactory; import java.util.Collections; import java.util.concurrent.atomic.AtomicReference; + +import lombok.Getter; import lombok.extern.slf4j.Slf4j; import org.apache.curator.framework.CuratorFramework; import org.apache.curator.framework.CuratorFrameworkFactory; @@ -61,16 
+69,21 @@ @Slf4j public final class ServiceStarter { //region Members + @VisibleForTesting + @Getter + private HealthServiceManager healthServiceManager; + + @VisibleForTesting + @Getter + private final ServiceBuilder serviceBuilder; private final ServiceBuilderConfig builderConfig; private final ServiceConfig serviceConfig; - private final ServiceBuilder serviceBuilder; private StatsProvider statsProvider; private PravegaConnectionListener listener; private AdminConnectionListener adminListener; private AutoScaleMonitor autoScaleMonitor; private CuratorFramework zkClient; - private HealthServiceManager healthServiceManager; private RESTServer restServer; private boolean closed; @@ -99,6 +112,10 @@ private ServiceBuilder createServiceBuilder() { public void start() throws Exception { Exceptions.checkNotClosed(this.closed, this); + healthServiceManager = new HealthServiceManager(serviceConfig.getHealthCheckInterval()); + healthServiceManager.start(); + log.info("Initializing HealthService ..."); + MetricsConfig metricsConfig = builderConfig.getConfig(MetricsConfig::builder); if (metricsConfig.isEnableStatistics()) { log.info("Initializing metrics provider ..."); @@ -107,19 +124,6 @@ public void start() throws Exception { statsProvider.start(); } - log.info("Initializing HealthService ..."); - healthServiceManager = new HealthServiceManager(serviceConfig.getHealthCheckInterval()); - healthServiceManager.start(); - - if (this.serviceConfig.isRestServerEnabled()) { - log.info("Initializing RESTServer ..."); - restServer = new RESTServer(serviceConfig.getRestServerConfig(), Collections.singleton(new HealthImpl( - new AuthHandlerManager(serviceConfig.getRestServerConfig()), - healthServiceManager.getEndpoint()))); - restServer.startAsync(); - restServer.awaitRunning(); - } - log.info("Initializing ZooKeeper Client ..."); this.zkClient = createZKClient(); @@ -150,7 +154,8 @@ public void start() throws Exception { this.serviceConfig.getListeningPort(), service, tableStoreService, autoScaleMonitor.getStatsRecorder(), autoScaleMonitor.getTableSegmentStatsRecorder(), tokenVerifier, this.serviceConfig.getCertFile(), this.serviceConfig.getKeyFile(), - this.serviceConfig.isReplyWithStackTraceOnError(), serviceBuilder.getLowPriorityExecutor()); + this.serviceConfig.isReplyWithStackTraceOnError(), serviceBuilder.getLowPriorityExecutor(), + this.serviceConfig.getTlsProtocolVersion(), healthServiceManager); this.listener.startListening(); log.info("PravegaConnectionListener started successfully."); @@ -158,11 +163,25 @@ public void start() throws Exception { if (serviceConfig.isEnableAdminGateway()) { this.adminListener = new AdminConnectionListener(this.serviceConfig.isEnableTls(), this.serviceConfig.isEnableTlsReload(), this.serviceConfig.getListeningIPAddress(), this.serviceConfig.getAdminGatewayPort(), service, tableStoreService, - tokenVerifier, this.serviceConfig.getCertFile(), this.serviceConfig.getKeyFile()); + tokenVerifier, this.serviceConfig.getCertFile(), this.serviceConfig.getKeyFile(), this.serviceConfig.getTlsProtocolVersion(), + healthServiceManager); this.adminListener.startListening(); log.info("AdminConnectionListener started successfully."); } log.info("StreamSegmentService started."); + + healthServiceManager.register(new ZKHealthContributor(zkClient)); + healthServiceManager.register(new CacheManagerHealthContributor(serviceBuilder.getCacheManager())); + healthServiceManager.register(new SegmentContainerRegistryHealthContributor(serviceBuilder.getSegmentContainerRegistry())); + + if 
(this.serviceConfig.isRestServerEnabled()) { + log.info("Initializing RESTServer ..."); + restServer = new RESTServer(serviceConfig.getRestServerConfig(), Collections.singleton(new HealthImpl( + new AuthHandlerManager(serviceConfig.getRestServerConfig()), + healthServiceManager.getEndpoint()))); + restServer.startAsync(); + restServer.awaitRunning(); + } } public void shutdown() { @@ -233,12 +252,34 @@ private void attachStorage(ServiceBuilder builder) { builder.withStorageFactory(setup -> { StorageLoader loader = new StorageLoader(); return loader.load(setup, - this.serviceConfig.getStorageImplementation().toString(), + this.serviceConfig.getStorageImplementation(), this.serviceConfig.getStorageLayout(), setup.getStorageExecutor()); }); } + @VisibleForTesting + static void validateConfig(ServiceBuilderConfig config) { + long xmx = Runtime.getRuntime().maxMemory(); + long nettyDirectMem = PlatformDependent.maxDirectMemory(); //Dio.netty.maxDirectMemory + long maxDirectMemorySize = Long.parseLong(ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class) + .getVMOption("MaxDirectMemorySize").getValue()); + maxDirectMemorySize = (maxDirectMemorySize == 0) ? xmx : maxDirectMemorySize; + long cacheSize = config.getConfig(ServiceConfig::builder).getCachePolicy().getMaxSize(); + log.info("MaxDirectMemorySize is {}, Cache size is {} and Netty DM is {}", maxDirectMemorySize, cacheSize, nettyDirectMem); + //run checks + validateConfig(cacheSize, xmx, maxDirectMemorySize, ((com.sun.management.OperatingSystemMXBean) ManagementFactory + .getOperatingSystemMXBean()).getTotalPhysicalMemorySize()); + } + + @VisibleForTesting + static void validateConfig(long cacheSize, long xmx, long maxDirectMem, long totalMem) { + Preconditions.checkState(totalMem > (maxDirectMem + xmx), String.format("MaxDirectMemorySize(%s B) along " + + "with JVM Xmx value(%s B) is greater than the available system memory!", maxDirectMem, xmx)); + Preconditions.checkState(maxDirectMem > cacheSize, String.format("Cache size (%s B) configured is more " + + "than the JVM MaxDirectMemory(%s B) value", cacheSize, maxDirectMem)); + } + private void attachZKSegmentManager(ServiceBuilder builder) { builder.withContainerManager(setup -> new ZKSegmentContainerManager(setup.getContainerRegistry(), @@ -321,6 +362,7 @@ public static void main(String[] args) throws Exception { // This will unfortunately include all System Properties as well, but knowing those can be useful too sometimes. 
log.info("Segment store configuration:"); config.forEach((key, value) -> log.info("{} = {}", key, value)); + validateConfig(config); serviceStarter.set(new ServiceStarter(config)); } catch (Throwable e) { log.error("Could not create a Service with default config, Aborting.", e); diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AbstractConnectionListener.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AbstractConnectionListener.java index a9bec10825b..ce095b3110a 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AbstractConnectionListener.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AbstractConnectionListener.java @@ -16,6 +16,7 @@ package io.pravega.segmentstore.server.host.handler; import com.google.common.annotations.VisibleForTesting; +import com.google.common.collect.ImmutableMap; import io.netty.bootstrap.ServerBootstrap; import io.netty.channel.Channel; import io.netty.channel.ChannelHandler; @@ -41,13 +42,19 @@ import io.pravega.segmentstore.server.host.security.TLSConfigChangeEventConsumer; import io.pravega.segmentstore.server.host.security.TLSConfigChangeFileConsumer; import io.pravega.segmentstore.server.host.security.TLSHelper; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.HealthServiceManager; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; import io.pravega.shared.protocol.netty.RequestProcessor; import lombok.Getter; +import lombok.NonNull; import lombok.extern.slf4j.Slf4j; import java.io.FileNotFoundException; import java.nio.file.Files; import java.nio.file.Paths; +import java.util.Arrays; import java.util.List; import java.util.concurrent.atomic.AtomicReference; @@ -58,10 +65,15 @@ public abstract class AbstractConnectionListener implements AutoCloseable { private final String host; private final int port; - private Channel serverChannel; + + private Channel serverChannel; // tracks the status of the connection + private EventLoopGroup bossGroup; private EventLoopGroup workerGroup; + @VisibleForTesting + @Getter + private final HealthServiceManager healthServiceManager; private final ConnectionTracker connectionTracker; // TLS related params @@ -73,19 +85,52 @@ public abstract class AbstractConnectionListener implements AutoCloseable { private final String pathToTlsCertFile; private final String pathToTlsKeyFile; + private final String[] tlsProtocolVersion; private FileModificationMonitor tlsCertFileModificationMonitor; // used only if tls reload is enabled + /** + * Creates a new instance of the AdminConnectionListener class. + * + * @param enableTls Whether to enable SSL/TLS. + * @param enableTlsReload Whether to reload TLS when the X.509 certificate file is replaced. + * @param host The name of the host to listen to. + * @param port The port to listen on. + * @param certFile Path to the certificate file to be used for TLS. + * @param keyFile Path to be key file to be used for TLS. 
+ * @param tlsProtocolVersion the version of the TLS protocol + */ + public AbstractConnectionListener(boolean enableTls, boolean enableTlsReload, String host, int port, + String certFile, String keyFile, String[] tlsProtocolVersion) { + + this(enableTls, enableTlsReload, host, port, certFile, keyFile, tlsProtocolVersion, null); + } + + /** + * Creates a new instance of the AdminConnectionListener class with HealthServiceManager. + * + * @param enableTls Whether to enable SSL/TLS. + * @param enableTlsReload Whether to reload TLS when the X.509 certificate file is replaced. + * @param host The name of the host to listen to. + * @param port The port to listen on. + * @param certFile Path to the certificate file to be used for TLS. + * @param keyFile Path to be key file to be used for TLS. + * @param tlsProtocolVersion The version of the TLS protocol + * @param healthServiceManager The healthService to register new health contributors related to the listeners. + */ public AbstractConnectionListener(boolean enableTls, boolean enableTlsReload, String host, int port, - String certFile, String keyFile) { + String certFile, String keyFile, String[] tlsProtocolVersion, + HealthServiceManager healthServiceManager) { this.enableTls = enableTls; this.enableTlsReload = this.enableTls && enableTlsReload; this.host = Exceptions.checkNotNullOrEmpty(host, "host"); this.port = port; this.pathToTlsCertFile = certFile; this.pathToTlsKeyFile = keyFile; + this.tlsProtocolVersion = Arrays.copyOf(tlsProtocolVersion, tlsProtocolVersion.length); InternalLoggerFactory.setDefaultFactory(Slf4JLoggerFactory.INSTANCE); this.connectionTracker = new ConnectionTracker(); + this.healthServiceManager = healthServiceManager; } /** @@ -112,7 +157,7 @@ public AbstractConnectionListener(boolean enableTls, boolean enableTlsReload, St */ public void startListening() { final AtomicReference sslCtx = this.enableTls ? - new AtomicReference<>(TLSHelper.newServerSslContext(pathToTlsCertFile, pathToTlsKeyFile)) : null; + new AtomicReference<>(TLSHelper.newServerSslContext(pathToTlsCertFile, pathToTlsKeyFile, tlsProtocolVersion)) : null; boolean nio = false; try { bossGroup = new EpollEventLoopGroup(1); @@ -156,6 +201,10 @@ public void initChannel(SocketChannel ch) { // Start the server. serverChannel = b.bind(host, port).awaitUninterruptibly().channel(); + + if (healthServiceManager != null) { + healthServiceManager.register(new ConnectionListenerHealthContributor(this)); + } } @VisibleForTesting @@ -185,12 +234,12 @@ FileModificationMonitor prepareCertificateMonitor(boolean isTLSCertPathSymLink, tlsCertificatePath, FileModificationPollingMonitor.class.getSimpleName()); result = new FileModificationPollingMonitor(Paths.get(tlsCertificatePath), - new TLSConfigChangeFileConsumer(sslCtx, tlsCertificatePath, tlsKeyPath)); + new TLSConfigChangeFileConsumer(sslCtx, tlsCertificatePath, tlsKeyPath, tlsProtocolVersion)); } else { // For non symbolic links we'll use the event-based watcher, which is more efficient than a // polling-based monitor. result = new FileModificationEventWatcher(Paths.get(tlsCertificatePath), - new TLSConfigChangeEventConsumer(sslCtx, tlsCertificatePath, tlsKeyPath)); + new TLSConfigChangeEventConsumer(sslCtx, tlsCertificatePath, tlsKeyPath, tlsProtocolVersion)); } return result; } catch (FileNotFoundException e) { @@ -221,4 +270,34 @@ public void close() { tlsCertFileModificationMonitor.stopMonitoring(); } } + + /** + * A contributor for managing health of a connection listener. 
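+ * Health is derived from the server channel state: an open channel maps to {@link Status#NEW}, an
+ * active (bound) channel maps to {@link Status#UP}, and anything else reports {@link Status#DOWN}.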
+ */ + private static class ConnectionListenerHealthContributor extends AbstractHealthContributor { + @NonNull + private final AbstractConnectionListener listener; + + private ConnectionListenerHealthContributor(AbstractConnectionListener listener) { + super(listener.getClass().getSimpleName()); + this.listener = listener; + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) { + Status status = Status.DOWN; + boolean running = listener.serverChannel.isOpen(); + if (running) { + status = Status.NEW; + } + + boolean ready = listener.serverChannel.isActive(); + if (ready) { + status = Status.UP; + } + + builder.details(ImmutableMap.of("host", listener.host, "port", listener.port)); + return status; + } + } } diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListener.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListener.java index 803a180ce06..6ecfab75b09 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListener.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListener.java @@ -22,6 +22,7 @@ import io.pravega.segmentstore.contracts.tables.TableStore; import io.pravega.segmentstore.server.host.delegationtoken.DelegationTokenVerifier; import io.pravega.segmentstore.server.host.delegationtoken.PassingTokenVerifier; +import io.pravega.shared.health.HealthServiceManager; import io.pravega.shared.protocol.netty.CommandDecoder; import io.pravega.shared.protocol.netty.CommandEncoder; import io.pravega.shared.protocol.netty.ExceptionLoggingHandler; @@ -40,7 +41,6 @@ @Slf4j public class AdminConnectionListener extends AbstractConnectionListener { //region Members - private final StreamSegmentStore store; private final TableStore tableStore; private final DelegationTokenVerifier tokenVerifier; @@ -57,11 +57,35 @@ public class AdminConnectionListener extends AbstractConnectionListener { * @param tokenVerifier The object to verify delegation token. * @param certFile Path to the certificate file to be used for TLS. * @param keyFile Path to be key file to be used for TLS. + * @param tlsProtocolVersion the version of the TLS protocol + */ + public AdminConnectionListener(boolean enableTls, boolean enableTlsReload, String host, int port, + StreamSegmentStore streamSegmentStore, TableStore tableStore, + DelegationTokenVerifier tokenVerifier, String certFile, String keyFile, String[] tlsProtocolVersion) { + this(enableTls, enableTlsReload, host, port, streamSegmentStore, tableStore, tokenVerifier, certFile, keyFile, + tlsProtocolVersion, null); + } + + /** + * Creates a new instance of the PravegaConnectionListener class with HealthServiceManager. + * + * @param enableTls Whether to enable SSL/TLS. + * @param enableTlsReload Whether to reload TLS when the X.509 certificate file is replaced. + * @param host The name of the host to listen to. + * @param port The port to listen on. + * @param streamSegmentStore The SegmentStore to delegate all requests to. + * @param tableStore The TableStore to delegate all requests to. + * @param tokenVerifier The object to verify delegation token. + * @param certFile Path to the certificate file to be used for TLS. + * @param keyFile Path to be key file to be used for TLS. 
+ * @param tlsProtocolVersion the version of the TLS protocol + * @param healthServiceManager The healService to register new health contributors related to the listeners. */ public AdminConnectionListener(boolean enableTls, boolean enableTlsReload, String host, int port, StreamSegmentStore streamSegmentStore, TableStore tableStore, - DelegationTokenVerifier tokenVerifier, String certFile, String keyFile) { - super(enableTls, enableTlsReload, host, port, certFile, keyFile); + DelegationTokenVerifier tokenVerifier, String certFile, String keyFile, String[] tlsProtocolVersion, + HealthServiceManager healthServiceManager) { + super(enableTls, enableTlsReload, host, port, certFile, keyFile, tlsProtocolVersion, healthServiceManager); this.store = Preconditions.checkNotNull(streamSegmentStore, "streamSegmentStore"); this.tableStore = Preconditions.checkNotNull(tableStore, "tableStore"); this.tokenVerifier = (tokenVerifier != null) ? tokenVerifier : new PassingTokenVerifier(); diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImpl.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImpl.java index 542f81ce904..4db8125b012 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImpl.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImpl.java @@ -15,9 +15,11 @@ */ package io.pravega.segmentstore.server.host.handler; +import io.pravega.common.LoggerHelpers; import io.pravega.segmentstore.contracts.StreamSegmentStore; import io.pravega.segmentstore.contracts.tables.TableStore; import io.pravega.segmentstore.server.host.delegationtoken.DelegationTokenVerifier; +import io.pravega.segmentstore.server.host.delegationtoken.PassingTokenVerifier; import io.pravega.segmentstore.server.host.stat.SegmentStatsRecorder; import io.pravega.segmentstore.server.host.stat.TableSegmentStatsRecorder; import io.pravega.shared.protocol.netty.AdminRequestProcessor; @@ -37,12 +39,49 @@ public class AdminRequestProcessorImpl extends PravegaRequestProcessor implement //region Constructor + /** + * Creates a new instance of the AdminRequestProcessor class with no Metrics StatsRecorder. + * + * @param segmentStore The StreamSegmentStore to attach to (and issue requests to). + * @param tableStore The TableStore to attach to (and issue requests to). + * @param connection The ServerConnection to attach to (and send responses to). + */ + public AdminRequestProcessorImpl(@NonNull StreamSegmentStore segmentStore, @NonNull TableStore tableStore, + @NonNull ServerConnection connection) { + this(segmentStore, tableStore, new TrackedConnection(connection, new ConnectionTracker()), new PassingTokenVerifier()); + } + + /** + * Creates a new instance of the AdminRequestProcessor class with no Metrics StatsRecorder. + * + * @param segmentStore The StreamSegmentStore to attach to (and issue requests to). + * @param tableStore The TableStore to attach to (and issue requests to). + * @param connection The ServerConnection to attach to (and send responses to). + * @param tokenVerifier Verifier class that verifies delegation token. 
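+ *
+ * Illustrative construction (a sketch only; assumes existing {@code segmentStore}, {@code tableStore}
+ * and {@code serverConnection} instances):
+ * <pre>{@code
+ * AdminRequestProcessor processor = new AdminRequestProcessorImpl(segmentStore, tableStore,
+ *         new TrackedConnection(serverConnection, new ConnectionTracker()), new PassingTokenVerifier());
+ * }</pre>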
+ */ public AdminRequestProcessorImpl(@NonNull StreamSegmentStore segmentStore, @NonNull TableStore tableStore, @NonNull TrackedConnection connection, @NonNull DelegationTokenVerifier tokenVerifier) { - super(segmentStore, tableStore, connection, SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), + this(segmentStore, tableStore, connection, SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), tokenVerifier, true); } + /** + * Creates a new instance of the AdminRequestProcessor class. + * + * @param segmentStore The StreamSegmentStore to attach to (and issue requests to). + * @param tableStore The TableStore to attach to (and issue requests to). + * @param connection The ServerConnection to attach to (and send responses to). + * @param statsRecorder A StatsRecorder for Metrics for Stream Segments. + * @param tableStatsRecorder A TableSegmentStatsRecorder for Metrics for Table Segments. + * @param tokenVerifier Verifier class that verifies delegation token. + * @param replyWithStackTraceOnError Whether client replies upon failed requests contain server-side stack traces or not. + */ + public AdminRequestProcessorImpl(@NonNull StreamSegmentStore segmentStore, @NonNull TableStore tableStore, @NonNull TrackedConnection connection, + @NonNull SegmentStatsRecorder statsRecorder, @NonNull TableSegmentStatsRecorder tableStatsRecorder, + @NonNull DelegationTokenVerifier tokenVerifier, boolean replyWithStackTraceOnError) { + super(segmentStore, tableStore, connection, statsRecorder, tableStatsRecorder, tokenVerifier, replyWithStackTraceOnError); + } + //endregion //region RequestProcessor Implementation @@ -63,5 +102,23 @@ public void keepAlive(WireCommands.KeepAlive keepAlive) { getConnection().send(keepAlive); } + @Override + public void flushToStorage(WireCommands.FlushToStorage flushToStorage) { + final String operation = "flushToStorage"; + final int containerId = flushToStorage.getContainerId(); + + if (!verifyToken(null, flushToStorage.getRequestId(), flushToStorage.getDelegationToken(), operation)) { + return; + } + + long trace = LoggerHelpers.traceEnter(log, operation, flushToStorage); + getSegmentStore().flushToStorage(containerId, TIMEOUT) + .thenAccept(v -> { + LoggerHelpers.traceLeave(log, operation, trace); + getConnection().send(new WireCommands.StorageFlushed(flushToStorage.getRequestId())); + }) + .exceptionally(ex -> handleException(flushToStorage.getRequestId(), null, operation, ex)); + } + //endregion } diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListener.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListener.java index dc9c2d6b674..7763f2177f9 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListener.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListener.java @@ -30,6 +30,7 @@ import java.util.List; import java.util.concurrent.ScheduledExecutorService; +import io.pravega.shared.health.HealthServiceManager; import io.pravega.shared.protocol.netty.AppendDecoder; import io.pravega.shared.protocol.netty.CommandDecoder; import io.pravega.shared.protocol.netty.CommandEncoder; @@ -37,6 +38,7 @@ import io.pravega.shared.protocol.netty.RequestProcessor; import lombok.extern.slf4j.Slf4j; +import static io.pravega.segmentstore.server.store.ServiceConfig.TLS_PROTOCOL_VERSION; import static 
io.pravega.shared.metrics.MetricNotifier.NO_OP_METRIC_NOTIFIER; import static io.pravega.shared.protocol.netty.WireCommands.MAX_WIRECOMMAND_SIZE; @@ -46,7 +48,6 @@ @Slf4j public final class PravegaConnectionListener extends AbstractConnectionListener { //region Members - private final StreamSegmentStore store; private final TableStore tableStore; private final SegmentStatsRecorder statsRecorder; @@ -69,17 +70,33 @@ public final class PravegaConnectionListener extends AbstractConnectionListener * @param streamSegmentStore The SegmentStore to delegate all requests to. * @param tableStore The SegmentStore to delegate all requests to. * @param tokenExpiryExecutor The executor to be used for running token expiration handling tasks. + * @param tlsProtocolVersion the version of the TLS protocol */ @VisibleForTesting public PravegaConnectionListener(boolean enableTls, int port, StreamSegmentStore streamSegmentStore, - TableStore tableStore, ScheduledExecutorService tokenExpiryExecutor) { + TableStore tableStore, ScheduledExecutorService tokenExpiryExecutor, String[] tlsProtocolVersion) { this(enableTls, false, "localhost", port, streamSegmentStore, tableStore, SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), null, - null, true, tokenExpiryExecutor); + null, true, tokenExpiryExecutor, tlsProtocolVersion, null); } /** - * Creates a new instance of the PravegaConnectionListener class. + * Creates a new instance of the PravegaConnectionListener class listening on localhost with no StatsRecorder. + * + * @param enableTls Whether to enable SSL/TLS. + * @param port The port to listen on. + * @param streamSegmentStore The SegmentStore to delegate all requests to. + * @param tableStore The SegmentStore to delegate all requests to. + * @param tokenExpiryExecutor The executor to be used for running token expiration handling tasks. + */ + @VisibleForTesting + public PravegaConnectionListener(boolean enableTls, int port, StreamSegmentStore streamSegmentStore, + TableStore tableStore, ScheduledExecutorService tokenExpiryExecutor) { + this(enableTls, port, streamSegmentStore, tableStore, tokenExpiryExecutor, TLS_PROTOCOL_VERSION.getDefaultValue().split(",")); + } + + /** + * Creates a new instance of the PravegaConnectionListener class with HealthServiceManager. * * @param enableTls Whether to enable SSL/TLS. * @param enableTlsReload Whether to reload TLS when the X.509 certificate file is replaced. @@ -94,12 +111,15 @@ public PravegaConnectionListener(boolean enableTls, int port, StreamSegmentStore * @param keyFile Path to be key file to be used for TLS. * @param replyWithStackTraceOnError Whether to send a server-side exceptions to the client in error messages. * @param executor The executor to be used for running token expiration handling tasks. + * @param tlsProtocolVersion the version of the TLS protocol + * @param healthServiceManager The healthService to register new health contributors related to the listeners. 
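+ * May be {@code null}, in which case no health contributor is registered for this listener.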
*/ public PravegaConnectionListener(boolean enableTls, boolean enableTlsReload, String host, int port, StreamSegmentStore streamSegmentStore, TableStore tableStore, SegmentStatsRecorder statsRecorder, TableSegmentStatsRecorder tableStatsRecorder, DelegationTokenVerifier tokenVerifier, String certFile, String keyFile, - boolean replyWithStackTraceOnError, ScheduledExecutorService executor) { - super(enableTls, enableTlsReload, host, port, certFile, keyFile); + boolean replyWithStackTraceOnError, ScheduledExecutorService executor, String[] tlsProtocolVersion, + HealthServiceManager healthServiceManager) { + super(enableTls, enableTlsReload, host, port, certFile, keyFile, tlsProtocolVersion, healthServiceManager); this.store = Preconditions.checkNotNull(streamSegmentStore, "streamSegmentStore"); this.tableStore = Preconditions.checkNotNull(tableStore, "tableStore"); this.statsRecorder = Preconditions.checkNotNull(statsRecorder, "statsRecorder"); @@ -109,6 +129,32 @@ public PravegaConnectionListener(boolean enableTls, boolean enableTlsReload, Str this.tokenExpiryHandlerExecutor = executor; } + /** + * Creates a new instance of the PravegaConnectionListener class. + * + * @param enableTls Whether to enable SSL/TLS. + * @param enableTlsReload Whether to reload TLS when the X.509 certificate file is replaced. + * @param host The name of the host to listen to. + * @param port The port to listen on. + * @param streamSegmentStore The SegmentStore to delegate all requests to. + * @param tableStore The TableStore to delegate all requests to. + * @param statsRecorder (Optional) A StatsRecorder for Metrics for Stream Segments. + * @param tableStatsRecorder (Optional) A Table StatsRecorder for Metrics for Table Segments. + * @param tokenVerifier The object to verify delegation token. + * @param certFile Path to the certificate file to be used for TLS. + * @param keyFile Path to be key file to be used for TLS. + * @param replyWithStackTraceOnError Whether to send a server-side exceptions to the client in error messages. + * @param executor The executor to be used for running token expiration handling tasks. 
+ * @param tlsProtocolVersion the version of the TLS protocol + */ + public PravegaConnectionListener(boolean enableTls, boolean enableTlsReload, String host, int port, StreamSegmentStore streamSegmentStore, TableStore tableStore, + SegmentStatsRecorder statsRecorder, TableSegmentStatsRecorder tableStatsRecorder, + DelegationTokenVerifier tokenVerifier, String certFile, String keyFile, + boolean replyWithStackTraceOnError, ScheduledExecutorService executor, String[] tlsProtocolVersion) { + this(enableTls, enableTlsReload, host, port, streamSegmentStore, tableStore, statsRecorder, tableStatsRecorder, + tokenVerifier, certFile, keyFile, replyWithStackTraceOnError, executor, tlsProtocolVersion, null); + } + @Override public RequestProcessor createRequestProcessor(TrackedConnection c) { PravegaRequestProcessor prp = new PravegaRequestProcessor(store, tableStore, c, statsRecorder, diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessor.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessor.java index e2b05c2966a..3468daaa312 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessor.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessor.java @@ -69,10 +69,10 @@ import io.pravega.shared.protocol.netty.WireCommands.CreateTableSegment; import io.pravega.shared.protocol.netty.WireCommands.DeleteSegment; import io.pravega.shared.protocol.netty.WireCommands.DeleteTableSegment; +import io.pravega.shared.protocol.netty.WireCommands.ErrorMessage.ErrorCode; import io.pravega.shared.protocol.netty.WireCommands.GetSegmentAttribute; import io.pravega.shared.protocol.netty.WireCommands.GetStreamSegmentInfo; import io.pravega.shared.protocol.netty.WireCommands.MergeSegments; -import io.pravega.shared.protocol.netty.WireCommands.MergeTableSegments; import io.pravega.shared.protocol.netty.WireCommands.NoSuchSegment; import io.pravega.shared.protocol.netty.WireCommands.OperationUnsupported; import io.pravega.shared.protocol.netty.WireCommands.ReadSegment; @@ -122,6 +122,7 @@ import static io.pravega.auth.AuthHandler.Permissions.READ; import static io.pravega.common.function.Callbacks.invokeSafely; import static io.pravega.segmentstore.contracts.Attributes.CREATION_TIME; +import static io.pravega.segmentstore.contracts.Attributes.ROLLOVER_SIZE; import static io.pravega.segmentstore.contracts.Attributes.SCALE_POLICY_RATE; import static io.pravega.segmentstore.contracts.Attributes.SCALE_POLICY_TYPE; import static io.pravega.segmentstore.contracts.ReadResultEntryType.Cache; @@ -144,6 +145,7 @@ public class PravegaRequestProcessor extends FailingRequestProcessor implements private static final TagLogger log = new TagLogger(LoggerFactory.getLogger(PravegaRequestProcessor.class)); private static final int MAX_READ_SIZE = 2 * 1024 * 1024; private static final String EMPTY_STACK_TRACE = ""; + @Getter(AccessLevel.PROTECTED) private final StreamSegmentStore segmentStore; private final TableStore tableStore; private final SegmentStatsRecorder statsRecorder; @@ -219,7 +221,7 @@ public void readSegment(ReadSegment readSegment) { wrapCancellationException(ex))); } - private boolean verifyToken(String segment, long requestId, String delegationToken, String operation) { + protected boolean verifyToken(String segment, long requestId, String delegationToken, String operation) { boolean 
isTokenValid = false; try { tokenVerifier.verifyToken(segment, delegationToken, READ); @@ -438,20 +440,26 @@ public void createSegment(CreateSegment createStreamSegment) { Timer timer = new Timer(); final String operation = "createSegment"; + if (createStreamSegment.getRolloverSizeBytes() < 0) { + log.warn("Segment rollover size bytes cannot be less than 0, actual is {}, fall back to default value", createStreamSegment.getRolloverSizeBytes()); + } + final long rolloverSizeBytes = createStreamSegment.getRolloverSizeBytes() < 0 ? 0 : createStreamSegment.getRolloverSizeBytes(); + Collection attributes = Arrays.asList( new AttributeUpdate(SCALE_POLICY_TYPE, AttributeUpdateType.Replace, ((Byte) createStreamSegment.getScaleType()).longValue()), new AttributeUpdate(SCALE_POLICY_RATE, AttributeUpdateType.Replace, ((Integer) createStreamSegment.getTargetRate()).longValue()), + new AttributeUpdate(ROLLOVER_SIZE, AttributeUpdateType.Replace, rolloverSizeBytes), new AttributeUpdate(CREATION_TIME, AttributeUpdateType.None, System.currentTimeMillis()) ); - if (!verifyToken(createStreamSegment.getSegment(), createStreamSegment.getRequestId(), createStreamSegment.getDelegationToken(), operation)) { + if (!verifyToken(createStreamSegment.getSegment(), createStreamSegment.getRequestId(), createStreamSegment.getDelegationToken(), operation)) { return; - } + } - log.info(createStreamSegment.getRequestId(), "Creating stream segment {}.", createStreamSegment); - segmentStore.createStreamSegment(createStreamSegment.getSegment(), SegmentType.STREAM_SEGMENT, attributes, TIMEOUT) - .thenAccept(v -> connection.send(new SegmentCreated(createStreamSegment.getRequestId(), createStreamSegment.getSegment()))) - .whenComplete((res, e) -> { + log.info(createStreamSegment.getRequestId(), "Creating stream segment {}.", createStreamSegment); + segmentStore.createStreamSegment(createStreamSegment.getSegment(), SegmentType.STREAM_SEGMENT, attributes, TIMEOUT) + .thenAccept(v -> connection.send(new SegmentCreated(createStreamSegment.getRequestId(), createStreamSegment.getSegment()))) + .whenComplete((res, e) -> { if (e == null) { statsRecorder.createSegment(createStreamSegment.getSegment(), createStreamSegment.getScaleType(), createStreamSegment.getTargetRate(), timer.getElapsed()); @@ -470,7 +478,17 @@ public void mergeSegments(MergeSegments mergeSegments) { } log.info(mergeSegments.getRequestId(), "Merging Segments {} ", mergeSegments); - segmentStore.mergeStreamSegment(mergeSegments.getTarget(), mergeSegments.getSource(), TIMEOUT) + + // Populate the AttributeUpdates for this mergeSegments operation, if any. 
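+ // Each wire-level ConditionalAttributeUpdate carries (attributeId, updateType, newValue, oldValue). If a
+ // conditional update fails validation, the merge completes exceptionally with BadAttributeUpdateException,
+ // which is reported back to the client as SegmentAttributeUpdated(success = false) below.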
+ AttributeUpdateCollection attributeUpdates = new AttributeUpdateCollection(); + if (mergeSegments.getAttributeUpdates() != null) { + for (WireCommands.ConditionalAttributeUpdate update : mergeSegments.getAttributeUpdates()) { + attributeUpdates.add(new AttributeUpdate(AttributeId.fromUUID(update.getAttributeId()), + AttributeUpdateType.get(update.getAttributeUpdateType()), update.getNewValue(), update.getOldValue())); + } + } + + segmentStore.mergeStreamSegment(mergeSegments.getTarget(), mergeSegments.getSource(), attributeUpdates, TIMEOUT) .thenAccept(mergeResult -> { recordStatForTransaction(mergeResult, mergeSegments.getTarget()); connection.send(new WireCommands.SegmentsMerged(mergeSegments.getRequestId(), @@ -490,6 +508,11 @@ public void mergeSegments(MergeSegments mergeSegments) { properties.getLength())); }); return null; + } else if (Exceptions.unwrap(e) instanceof BadAttributeUpdateException) { + log.debug(mergeSegments.getRequestId(), "Conditional merge failed (Source segment={}, " + + "Target segment={}): {}", mergeSegments.getSource(), mergeSegments.getTarget(), e.toString()); + connection.send(new SegmentAttributeUpdated(mergeSegments.getRequestId(), false)); + return null; } else { return handleException(mergeSegments.getRequestId(), mergeSegments.getSource(), operation, e); } @@ -578,6 +601,25 @@ public void updateSegmentPolicy(UpdateSegmentPolicy updateSegmentPolicy) { }); } + @Override + public void getTableSegmentInfo(WireCommands.GetTableSegmentInfo getInfo) { + final String operation = "getTableSegmentInfo"; + + if (!verifyToken(getInfo.getSegmentName(), getInfo.getRequestId(), getInfo.getDelegationToken(), operation)) { + return; + } + + val timer = new Timer(); + log.debug(getInfo.getRequestId(), "Get Table Segment Info {}.", getInfo.getSegmentName()); + tableStore.getInfo(getInfo.getSegmentName(), TIMEOUT) + .thenAccept(info -> { + connection.send(new WireCommands.TableSegmentInfo(getInfo.getRequestId(), getInfo.getSegmentName(), + info.getStartOffset(), info.getLength(), info.getEntryCount(), info.getKeyLength())); + this.tableStatsRecorder.getInfo(getInfo.getSegmentName(), timer.getElapsed()); + }) + .exceptionally(e -> handleException(getInfo.getRequestId(), getInfo.getSegmentName(), operation, e)); + } + @Override public void createTableSegment(final CreateTableSegment createTableSegment) { final String operation = "createTableSegment"; @@ -596,6 +638,12 @@ public void createTableSegment(final CreateTableSegment createTableSegment) { configBuilder.keyLength(createTableSegment.getKeyLength()); } + if (createTableSegment.getRolloverSizeBytes() < 0) { + log.warn("Table segment rollover size bytes cannot be less than 0, actual is {}, fall back to default value", createTableSegment.getRolloverSizeBytes()); + } + final long rolloverSizeByes = createTableSegment.getRolloverSizeBytes() < 0 ? 
0 : createTableSegment.getRolloverSizeBytes(); + configBuilder.rolloverSizeBytes(rolloverSizeByes); + tableStore.createSegment(createTableSegment.getSegment(), typeBuilder.build(), configBuilder.build(), TIMEOUT) .thenAccept(v -> { connection.send(new SegmentCreated(createTableSegment.getRequestId(), createTableSegment.getSegment())); @@ -623,37 +671,6 @@ public void deleteTableSegment(final DeleteTableSegment deleteTableSegment) { .exceptionally(e -> handleException(deleteTableSegment.getRequestId(), segment, operation, e)); } - @Override - public void mergeTableSegments(final MergeTableSegments mergeTableSegments) { - final String operation = "mergeTableSegments"; - - if (!verifyToken(mergeTableSegments.getSource(), mergeTableSegments.getRequestId(), mergeTableSegments.getDelegationToken(), operation)) { - return; - } - - log.info(mergeTableSegments.getRequestId(), "Merging table segments {}.", mergeTableSegments); - tableStore.merge(mergeTableSegments.getTarget(), mergeTableSegments.getSource(), TIMEOUT) - .thenRun(() -> connection.send(new WireCommands.SegmentsMerged(mergeTableSegments.getRequestId(), - mergeTableSegments.getTarget(), - mergeTableSegments.getSource(), -1))) - .exceptionally(e -> handleException(mergeTableSegments.getRequestId(), mergeTableSegments.getSource(), operation, e)); - } - - @Override - public void sealTableSegment(final WireCommands.SealTableSegment sealTableSegment) { - String segment = sealTableSegment.getSegment(); - final String operation = "sealTableSegment"; - - if (!verifyToken(segment, sealTableSegment.getRequestId(), sealTableSegment.getDelegationToken(), operation)) { - return; - } - - log.info(sealTableSegment.getRequestId(), "Sealing table segment {}.", sealTableSegment); - tableStore.seal(segment, TIMEOUT) - .thenRun(() -> connection.send(new SegmentSealed(sealTableSegment.getRequestId(), segment))) - .exceptionally(e -> handleException(sealTableSegment.getRequestId(), segment, operation, e)); - } - @Override public void updateTableEntries(final WireCommands.UpdateTableEntries updateTableEntries) { String segment = updateTableEntries.getSegment(); @@ -939,7 +956,7 @@ private WireCommands.TableEntries getTableEntriesCommand(final List //endregion - private Void handleException(long requestId, String segment, String operation, Throwable u) { + Void handleException(long requestId, String segment, String operation, Throwable u) { // use offset as -1L to handle exceptions when offset data is not available. 
return handleException(requestId, segment, -1L, operation, u); } @@ -962,7 +979,6 @@ private Void handleException(long requestId, String segment, long offset, String log.info(requestId, "Segment '{}' already exists and cannot perform operation '{}'.", segment, operation); invokeSafely(connection::send, new SegmentAlreadyExists(requestId, segment, clientReplyStackTrace), failureHandler); - } else if (u instanceof StreamSegmentNotExistsException) { log.warn(requestId, "Segment '{}' does not exist and cannot perform operation '{}'.", segment, operation); @@ -1024,7 +1040,7 @@ private Void handleException(long requestId, String segment, long offset, String } private boolean errorCodeExists(Throwable e) { - val errorCode = WireCommands.ErrorMessage.ErrorCode.valueOf(e.getClass()); + ErrorCode errorCode = WireCommands.ErrorMessage.ErrorCode.valueOf(e.getClass()); return errorCode != WireCommands.ErrorMessage.ErrorCode.UNSPECIFIED; } diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributor.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributor.java new file mode 100644 index 00000000000..60618ada404 --- /dev/null +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributor.java @@ -0,0 +1,57 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.host.health; + +import com.google.common.collect.ImmutableMap; +import com.google.common.util.concurrent.Service; +import io.pravega.segmentstore.server.SegmentContainer; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; +import lombok.NonNull; + + +/** + * A contributor to manage health of Segment Container. 
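+ * Maps the container's Service state to a health Status: NEW, STARTING and RUNNING map to
+ * {@link Status#NEW}, {@link Status#STARTING} and {@link Status#UP} respectively; any other state
+ * reports {@link Status#DOWN}.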
+ */ +public class SegmentContainerHealthContributor extends AbstractHealthContributor { + private final SegmentContainer segmentContainer; + + public SegmentContainerHealthContributor(@NonNull SegmentContainer segmentContainer) { + super("SegmentContainer"); + this.segmentContainer = segmentContainer; + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) { + Status status = Status.DOWN; + + if (segmentContainer.state() == Service.State.NEW) { + status = Status.NEW; + } + + if (segmentContainer.state() == Service.State.STARTING) { + status = Status.STARTING; + } + + if (segmentContainer.state() == Service.State.RUNNING) { + status = Status.UP; + } + + builder.details(ImmutableMap.of("Id", segmentContainer.getId(), "ActiveSegments", segmentContainer.getActiveSegments())); + return status; + } +} diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributor.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributor.java new file mode 100644 index 00000000000..a1553aff7e9 --- /dev/null +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributor.java @@ -0,0 +1,52 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package io.pravega.segmentstore.server.host.health; + +import io.pravega.segmentstore.server.SegmentContainer; +import io.pravega.segmentstore.server.SegmentContainerRegistry; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; +import lombok.NonNull; + +/** + * A contributor to manage the health of Segment Container Registry. 
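+ * Registers a child contributor for each active Segment Container and reports {@link Status#UP}
+ * while the registry itself is open, {@link Status#DOWN} otherwise.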
+ */ +public class SegmentContainerRegistryHealthContributor extends AbstractHealthContributor { + private final SegmentContainerRegistry segmentContainerRegistry; + + public SegmentContainerRegistryHealthContributor(@NonNull SegmentContainerRegistry segmentContainerRegistry) { + super("SegmentContainerRegistry"); + this.segmentContainerRegistry = segmentContainerRegistry; + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) { + for (SegmentContainer container: segmentContainerRegistry.getContainers()) { + this.register(new SegmentContainerHealthContributor(container)); + } + + Status status = Status.DOWN; + boolean ready = !segmentContainerRegistry.isClosed(); + + if (ready) { + status = Status.UP; + } + + return status; + } +} diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/ZKHealthContributor.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/ZKHealthContributor.java new file mode 100644 index 00000000000..80bce9fafc4 --- /dev/null +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/health/ZKHealthContributor.java @@ -0,0 +1,56 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package io.pravega.segmentstore.server.host.health; + +import com.google.common.collect.ImmutableMap; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; +import lombok.NonNull; +import org.apache.curator.framework.CuratorFramework; +import org.apache.curator.framework.imps.CuratorFrameworkState; + +/** + * A contributor to manage the health of zookeeper client connection. 
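+ * Reports {@link Status#NEW} once the Curator client has been started and {@link Status#UP} once the
+ * ZooKeeper session is connected; otherwise {@link Status#DOWN}. The current connection string is
+ * included in the health details.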
+ */ +public class ZKHealthContributor extends AbstractHealthContributor { + private final CuratorFramework zk; + + public ZKHealthContributor(@NonNull CuratorFramework zk) { + super("zookeeper"); + this.zk = zk; + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) { + Status status = Status.DOWN; + boolean running = this.zk.getState() == CuratorFrameworkState.STARTED; + if (running) { + status = Status.NEW; + } + + boolean ready = this.zk.getZookeeperClient().isConnected(); + if (ready) { + status = Status.UP; + } + + builder.details(ImmutableMap.of("zk-connection-url", this.zk.getZookeeperClient().getCurrentConnectionString())); + + return status; + } + +} diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumer.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumer.java index 4f2377d0829..8ebbfca047b 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumer.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumer.java @@ -34,8 +34,8 @@ public class TLSConfigChangeEventConsumer implements Consumer> { private final TLSConfigChangeHandler handler; public TLSConfigChangeEventConsumer(AtomicReference sslContext, String pathToCertificateFile, - String pathToKeyFile) { - handler = new TLSConfigChangeHandler(sslContext, pathToCertificateFile, pathToKeyFile); + String pathToKeyFile, String[] tlsProtocolVersion) { + handler = new TLSConfigChangeHandler(sslContext, pathToCertificateFile, pathToKeyFile, tlsProtocolVersion); } @Override diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumer.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumer.java index 9d963c2b45b..606307cd63a 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumer.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumer.java @@ -29,8 +29,8 @@ public class TLSConfigChangeFileConsumer implements Consumer { private final TLSConfigChangeHandler handler; public TLSConfigChangeFileConsumer(AtomicReference sslContext, String pathToCertificateFile, - String pathToKeyFile) { - handler = new TLSConfigChangeHandler(sslContext, pathToCertificateFile, pathToKeyFile); + String pathToKeyFile, String[] tlsProtocolVersion) { + handler = new TLSConfigChangeHandler(sslContext, pathToCertificateFile, pathToKeyFile, tlsProtocolVersion); } @Override diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeHandler.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeHandler.java index 87f038e9572..368af3802f4 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeHandler.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeHandler.java @@ -36,10 +36,11 @@ class TLSConfigChangeHandler { private @NonNull final AtomicReference sslContext; private @NonNull final String pathToCertificateFile; private @NonNull final String pathToKeyFile; + private @NonNull final String[] 
tlsProtocolVersion; public void handleTlsConfigChange() { log.info("Current reload count = {}", numOfConfigChangesSinceStart.incrementAndGet()); - sslContext.set(TLSHelper.newServerSslContext(pathToCertificateFile, pathToKeyFile)); + sslContext.set(TLSHelper.newServerSslContext(pathToCertificateFile, pathToKeyFile, tlsProtocolVersion)); } int getNumOfConfigChangesSinceStart() { diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSHelper.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSHelper.java index 1114e63d235..ca498a1e8be 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSHelper.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/security/TLSHelper.java @@ -37,15 +37,17 @@ public class TLSHelper { * * @param pathToCertificateFile the path to the PEM-encoded server certificate file * @param pathToServerKeyFile the path to the PEM-encoded file containing the server's encrypted private key + * @param tlsProtocolVersion the version of the TLS protocol * @return a {@link SslContext} built from the specified {@code pathToCertificateFile} and {@code pathToServerKeyFile} * @throws NullPointerException if either {@code pathToCertificateFile} or {@code pathToServerKeyFile} is null * @throws IllegalArgumentException if either {@code pathToCertificateFile} or {@code pathToServerKeyFile} is empty * @throws RuntimeException if there is a failure in building the {@link SslContext} */ - public static SslContext newServerSslContext(String pathToCertificateFile, String pathToServerKeyFile) { + public static SslContext newServerSslContext(String pathToCertificateFile, String pathToServerKeyFile, String[] tlsProtocolVersion) { Exceptions.checkNotNullOrEmpty(pathToCertificateFile, "pathToCertificateFile"); Exceptions.checkNotNullOrEmpty(pathToServerKeyFile, "pathToServerKeyFile"); - return newServerSslContext(new File(pathToCertificateFile), new File(pathToServerKeyFile)); + Exceptions.checkArgument(tlsProtocolVersion != null, "tlsProtocolVersion", "Invalid TLS Protocol Version"); + return newServerSslContext(new File(pathToCertificateFile), new File(pathToServerKeyFile), tlsProtocolVersion); } /** @@ -53,18 +55,22 @@ public static SslContext newServerSslContext(String pathToCertificateFile, Strin * * @param certificateFile the PEM-encoded server certificate file * @param serverKeyFile the PEM-encoded file containing the server's encrypted private key + * @param tlsProtocolVersion version of TLS protocol * @return a {@link SslContext} built from the specified {@code pathToCertificateFile} and {@code pathToServerKeyFile} * @throws NullPointerException if either {@code certificateFile} or {@code serverKeyFile} is null * @throws IllegalStateException if either {@code certificateFile} or {@code serverKeyFile} doesn't exist or is unreadable. 
* @throws RuntimeException if there is a failure in building the {@link SslContext} */ - public static SslContext newServerSslContext(File certificateFile, File serverKeyFile) { + public static SslContext newServerSslContext(File certificateFile, File serverKeyFile, String[] tlsProtocolVersion) { Preconditions.checkNotNull(certificateFile); Preconditions.checkNotNull(serverKeyFile); + Preconditions.checkNotNull(tlsProtocolVersion); ensureExistAndAreReadable(certificateFile, serverKeyFile); try { - SslContext result = SslContextBuilder.forServer(certificateFile, serverKeyFile).build(); + SslContext result = SslContextBuilder.forServer(certificateFile, serverKeyFile) + .protocols(tlsProtocolVersion) + .build(); log.debug("Done creating a new SSL Context for the server."); return result; } catch (SSLException e) { diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/SegmentAggregates.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/SegmentAggregates.java index 76556552887..c0cc961a805 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/SegmentAggregates.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/SegmentAggregates.java @@ -138,7 +138,7 @@ synchronized boolean update(long dataLength, int numOfEvents) { // reported update and current update by calling the decay function for all silent tick intervals // with event count as 0 for them. for (long i = 0; i < iterations - 1; i++) { - computeDecay(0, (double) TICK_INTERVAL / 1000.0); + computeDecay(0, TICK_INTERVAL / 1000.0); } double duration = (age - ((iterations - 1) * TICK_INTERVAL)) / 1000.0; computeDecay(count, duration); diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorder.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorder.java index 68e24f1a9c3..d354f8fda8e 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorder.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorder.java @@ -84,6 +84,14 @@ public interface TableSegmentStatsRecorder extends AutoCloseable { */ void iterateEntries(String tableSegmentName, int resultCount, Duration elapsed); + /** + * Notifies that a Get Table Segment Info was invoked. + * + * @param tableSegmentName Table Segment Name. + * @param elapsed Elapsed time. 
+ */ + void getInfo(String tableSegmentName, Duration elapsed); + @Override void close(); @@ -122,6 +130,11 @@ public void iterateKeys(String tableSegmentName, int resultCount, Duration elaps public void iterateEntries(String tableSegmentName, int resultCount, Duration elapsed) { } + @Override + public void getInfo(String tableSegmentName, Duration elapsed) { + + } + @Override public void close() { } diff --git a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderImpl.java b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderImpl.java index 77bc58c7a9a..fc9c5cee12e 100644 --- a/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderImpl.java +++ b/segmentstore/server/host/src/main/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderImpl.java @@ -46,6 +46,8 @@ class TableSegmentStatsRecorderImpl implements TableSegmentStatsRecorder { private final Counter iterateKeys = createCounter(MetricsNames.TABLE_SEGMENT_ITERATE_KEYS); private final OpStatsLogger iterateEntriesLatency = createLogger(MetricsNames.TABLE_SEGMENT_ITERATE_ENTRIES_LATENCY); private final Counter iterateEntries = createCounter(MetricsNames.TABLE_SEGMENT_ITERATE_ENTRIES); + private final OpStatsLogger getInfoLatency = createLogger(MetricsNames.TABLE_SEGMENT_GET_INFO_LATENCY); + private final Counter getInfo = createCounter(MetricsNames.TABLE_SEGMENT_GET_INFO); //region AutoCloseable Implementation @@ -67,6 +69,8 @@ public void close() { this.iterateKeys.close(); this.iterateEntriesLatency.close(); this.iterateEntries.close(); + this.getInfo.close(); + this.getInfoLatency.close(); } //endregion @@ -113,6 +117,12 @@ public void iterateEntries(String tableSegmentName, int resultCount, Duration el this.iterateEntries.add(resultCount); } + @Override + public void getInfo(String tableSegmentName, Duration elapsed) { + this.getInfoLatency.reportSuccessEvent(elapsed); + this.getInfo.inc(); + } + //endregion protected OpStatsLogger createLogger(String name) { diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ExtendedS3IntegrationTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ExtendedS3IntegrationTest.java index 4a40927b4ed..8e1638e0736 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ExtendedS3IntegrationTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ExtendedS3IntegrationTest.java @@ -125,6 +125,8 @@ private class LocalExtendedS3SimpleStorageFactory implements SimpleStorageFactor private final ChunkedSegmentStorageConfig chunkedSegmentStorageConfig = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .journalSnapshotInfoUpdateFrequency(Duration.ofMillis(10)) .maxJournalUpdatesPerSnapshot(5) + .garbageCollectionDelay(Duration.ofMillis(10)) + .garbageCollectionSleep(Duration.ofMillis(10)) .selfCheckEnabled(true) .build(); diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/FileSystemIntegrationTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/FileSystemIntegrationTest.java index 3f6bbd5f063..2bf465ae79e 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/FileSystemIntegrationTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/FileSystemIntegrationTest.java 
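The ExtendedS3 hunk above and the FileSystem, HDFS and S3 test hunks below all apply the same ChunkedSegmentStorageConfig tuning so that journal snapshotting and garbage collection run within test timeouts. A sketch of that shared tuning, using only builder methods that appear in these hunks (the variable name is illustrative):

    ChunkedSegmentStorageConfig testConfig = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
            .journalSnapshotInfoUpdateFrequency(Duration.ofMillis(10)) // publish journal snapshot info frequently
            .maxJournalUpdatesPerSnapshot(5)                           // snapshot after only a few journal updates
            .garbageCollectionDelay(Duration.ofMillis(10))             // start collecting garbage almost immediately
            .garbageCollectionSleep(Duration.ofMillis(10))             // short pause between GC iterations
            .selfCheckEnabled(true)
            .build();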
@@ -61,6 +61,8 @@ protected ServiceBuilder createBuilder(ServiceBuilderConfig.Builder configBuilde new FileSystemSimpleStorageFactory(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .journalSnapshotInfoUpdateFrequency(Duration.ofMillis(10)) .maxJournalUpdatesPerSnapshot(5) + .garbageCollectionDelay(Duration.ofMillis(10)) + .garbageCollectionSleep(Duration.ofMillis(10)) .selfCheckEnabled(true) .build(), setup.getConfig(FileSystemStorageConfig::builder), diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/HDFSIntegrationTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/HDFSIntegrationTest.java index d457d928c02..b793d4f2dac 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/HDFSIntegrationTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/HDFSIntegrationTest.java @@ -87,6 +87,8 @@ protected ServiceBuilder createBuilder(ServiceBuilderConfig.Builder configBuilde new HDFSSimpleStorageFactory(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .journalSnapshotInfoUpdateFrequency(Duration.ofMillis(10)) .maxJournalUpdatesPerSnapshot(5) + .garbageCollectionDelay(Duration.ofMillis(10)) + .garbageCollectionSleep(Duration.ofMillis(10)) .selfCheckEnabled(true) .build(), setup.getConfig(HDFSStorageConfig::builder), diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/NonAppendExtendedS3IntegrationTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/NonAppendExtendedS3IntegrationTest.java index 6546efb41a9..aa7fa880588 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/NonAppendExtendedS3IntegrationTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/NonAppendExtendedS3IntegrationTest.java @@ -126,6 +126,8 @@ private class LocalExtendedS3SimpleStorageFactory implements SimpleStorageFactor private final ChunkedSegmentStorageConfig chunkedSegmentStorageConfig = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .journalSnapshotInfoUpdateFrequency(Duration.ofMillis(10)) .maxJournalUpdatesPerSnapshot(5) + .garbageCollectionDelay(Duration.ofMillis(10)) + .garbageCollectionSleep(Duration.ofMillis(10)) .appendEnabled(false) .selfCheckEnabled(true) .build(); diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/S3IntegrationTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/S3IntegrationTest.java new file mode 100644 index 00000000000..c391acbbfa8 --- /dev/null +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/S3IntegrationTest.java @@ -0,0 +1,153 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.segmentstore.server.host; + +import com.google.common.base.Preconditions; +import io.pravega.segmentstore.server.store.ServiceBuilder; +import io.pravega.segmentstore.server.store.ServiceBuilderConfig; +import io.pravega.segmentstore.storage.SimpleStorageFactory; +import io.pravega.segmentstore.storage.Storage; +import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorage; +import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorageConfig; +import io.pravega.segmentstore.storage.impl.bookkeeper.BookKeeperConfig; +import io.pravega.segmentstore.storage.impl.bookkeeper.BookKeeperLogFactory; +import io.pravega.segmentstore.storage.metadata.ChunkMetadataStore; +import io.pravega.storage.s3.S3ChunkStorage; +import io.pravega.storage.s3.S3ClientMock; +import io.pravega.storage.s3.S3Mock; +import io.pravega.storage.s3.S3StorageConfig; +import lombok.Getter; +import org.junit.After; +import org.junit.Before; + +import java.net.URI; +import java.time.Duration; +import java.util.UUID; +import java.util.concurrent.ScheduledExecutorService; + +/** + * End-to-end tests for SegmentStore, with integrated AWS S3 Storage and DurableDataLog. + */ +public class S3IntegrationTest extends BookKeeperIntegrationTestBase { + //region Test Configuration and Setup + + private String s3ConfigUri; + private S3Mock s3Mock; + + @Getter + private final ChunkedSegmentStorageConfig chunkedSegmentStorageConfig = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .journalSnapshotInfoUpdateFrequency(Duration.ofMillis(10)) + .maxJournalUpdatesPerSnapshot(5) + .garbageCollectionDelay(Duration.ofMillis(10)) + .garbageCollectionSleep(Duration.ofMillis(10)) + .minSizeLimitForConcat(100) + .maxSizeLimitForConcat(1000) + .selfCheckEnabled(true) + .build(); + + /** + * Starts BookKeeper. 
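+     * Also wires a mock S3 storage configuration (bucket, prefix and credentials are test-only values).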
+ */ + @Override + @Before + public void setUp() throws Exception { + super.setUp(); + s3ConfigUri = "https://localhost"; + String bucketName = "test-bucket"; + String prefix = "Integration" + UUID.randomUUID(); + s3Mock = new S3Mock(); + this.configBuilder.include(S3StorageConfig.builder() + .with(S3StorageConfig.CONFIGURI, s3ConfigUri) + .with(S3StorageConfig.BUCKET, bucketName) + .with(S3StorageConfig.PREFIX, prefix) + .with(S3StorageConfig.ACCESS_KEY, "access") + .with(S3StorageConfig.SECRET_KEY, "secret")); + } + + @Override + @After + public void tearDown() throws Exception { + super.tearDown(); + } + + //endregion + + //region StreamSegmentStoreTestBase Implementation + + @Override + protected ServiceBuilder createBuilder(ServiceBuilderConfig.Builder configBuilder, int instanceId, boolean useChunkedSegmentStorage) { + Preconditions.checkState(useChunkedSegmentStorage); + ServiceBuilderConfig builderConfig = getBuilderConfig(configBuilder, instanceId); + return ServiceBuilder + .newInMemoryBuilder(builderConfig) + .withStorageFactory(setup -> new LocalS3SimpleStorageFactory(setup.getConfig(S3StorageConfig::builder), setup.getStorageExecutor())) + .withDataLogFactory(setup -> new BookKeeperLogFactory(setup.getConfig(BookKeeperConfig::builder), + getBookkeeper().getZkClient(), setup.getCoreExecutor())); + } + + @Override + public void testEndToEnd() { + } + + @Override + public void testFlushToStorage() { + } + + @Override + public void testEndToEndWithFencing() { + } + //endregion + + private class LocalS3SimpleStorageFactory implements SimpleStorageFactory { + private final S3StorageConfig config; + + @Getter + private final ChunkedSegmentStorageConfig chunkedSegmentStorageConfig = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .journalSnapshotInfoUpdateFrequency(Duration.ofMillis(10)) + .maxJournalUpdatesPerSnapshot(5) + .garbageCollectionDelay(Duration.ofMillis(10)) + .garbageCollectionSleep(Duration.ofMillis(10)) + .selfCheckEnabled(true) + .build(); + + @Getter + private final ScheduledExecutorService executor; + + LocalS3SimpleStorageFactory(S3StorageConfig config, ScheduledExecutorService executor) { + this.config = Preconditions.checkNotNull(config, "config"); + this.executor = Preconditions.checkNotNull(executor, "executor"); + } + + @Override + public Storage createStorageAdapter(int containerId, ChunkMetadataStore metadataStore) { + URI uri = URI.create(s3ConfigUri); + S3ClientMock client = new S3ClientMock(s3Mock); + return new ChunkedSegmentStorage(containerId, + new S3ChunkStorage(client, this.config, executorService(), true), + metadataStore, + this.executor, + this.chunkedSegmentStorageConfig); + } + + /** + * Creates a new instance of a Storage adapter. 
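+         * Always throws UnsupportedOperationException: SimpleStorageFactory requires a ChunkMetadataStore.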
+ */ + @Override + public Storage createStorageAdapter() { + throw new UnsupportedOperationException("SimpleStorageFactory requires ChunkMetadataStore"); + } + } +} diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ServiceStarterTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ServiceStarterTest.java index dae7d9ec792..ef504c548f1 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ServiceStarterTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/ServiceStarterTest.java @@ -15,10 +15,18 @@ */ package io.pravega.segmentstore.server.host; +import com.sun.management.HotSpotDiagnosticMXBean; +import io.pravega.segmentstore.server.host.health.SegmentContainerRegistryHealthContributor; +import io.pravega.segmentstore.server.host.health.ZKHealthContributor; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.segmentstore.server.store.ServiceConfig; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.SerializedClassRunner; import io.pravega.test.common.TestingServerStarter; +import java.lang.management.ManagementFactory; +import java.util.Properties; import lombok.Cleanup; import org.apache.curator.framework.CuratorFramework; import org.apache.curator.test.TestingServer; @@ -27,22 +35,35 @@ import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; -import java.io.IOException; +/** + * Test the functionality of ServiceStarter used to set up the Segment Store. + */ @RunWith(SerializedClassRunner.class) public class ServiceStarterTest { - private String zkUrl; + private ServiceStarter serviceStarter; private TestingServer zkTestServer; + private ServiceBuilderConfig.Builder configBuilder; @Before - public void startZookeeper() throws Exception { + public void setup() throws Exception { zkTestServer = new TestingServerStarter().start(); - zkUrl = zkTestServer.getConnectString(); + String zkUrl = zkTestServer.getConnectString(); + configBuilder = ServiceBuilderConfig + .builder() + .include(ServiceConfig.builder() + .with(ServiceConfig.CONTAINER_COUNT, 1) + .with(ServiceConfig.ZK_URL, zkUrl) + .with(ServiceConfig.HEALTH_CHECK_INTERVAL_SECONDS, 2) + ); + serviceStarter = new ServiceStarter(configBuilder.build()); + serviceStarter.start(); } @After - public void stopZookeeper() throws IOException { + public void stopZookeeper() throws Exception { + serviceStarter.shutdown(); zkTestServer.close(); } @@ -54,16 +75,73 @@ public void stopZookeeper() throws IOException { */ @Test public void testCuratorClientCreation() throws Exception { - ServiceBuilderConfig.Builder configBuilder = ServiceBuilderConfig - .builder() - .include(ServiceConfig.builder() - .with(ServiceConfig.CONTAINER_COUNT, 1) - .with(ServiceConfig.ZK_URL, zkUrl)); - @Cleanup("shutdown") - ServiceStarter serviceStarter = new ServiceStarter(configBuilder.build()); @Cleanup CuratorFramework zkClient = serviceStarter.createZKClient(); zkClient.blockUntilConnected(); + @Cleanup + ZKHealthContributor zkHealthContributor = new ZKHealthContributor(zkClient); + Health.HealthBuilder builder = Health.builder().name(zkHealthContributor.getName()); + Status zkStatus = zkHealthContributor.doHealthCheck(builder); Assert.assertTrue(zkClient.getZookeeperClient().isConnected()); + Assert.assertEquals("HealthContributor should report an 'UP' Status.",
Status.UP, zkStatus); + zkClient.close(); + zkStatus = zkHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report a 'DOWN' Status.", Status.DOWN, zkStatus); + } + + /** + * Check the health of the SegmentContainerRegistry. + */ + @Test + public void testSegmentContainerRegistryHealth() { + @Cleanup + SegmentContainerRegistryHealthContributor segmentContainerRegistryHealthContributor = new SegmentContainerRegistryHealthContributor(serviceStarter.getServiceBuilder().getSegmentContainerRegistry()); + Health.HealthBuilder builder = Health.builder().name(segmentContainerRegistryHealthContributor.getName()); + Status status = segmentContainerRegistryHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report an 'UP' Status.", Status.UP, status); + } + + /** + * Check the health status of ServiceStarter. + */ + @Test + public void testHealth() { + Health health = serviceStarter.getHealthServiceManager().getHealthSnapshot(); + Assert.assertEquals("HealthContributor should report an 'UP' Status.", Status.UP, health.getStatus()); + } + + /** + * Test for validating the Segment Store memory settings config. + */ + @Test + public void testMemoryConfig() { + // cache more than JVM MaxDirectMemory + AssertExtensions.assertThrows("Exception to be thrown for Cache size greater than JVM MaxDirectMemory", + () -> ServiceStarter.validateConfig(3013872542L, 1013872542L, 2013872542L, 8013872542L), + e -> e instanceof IllegalStateException); + + // MaxDirectMemory + Xmx > System Memory + AssertExtensions.assertThrows("Exception to be thrown for MaxDirectMemory + Xmx being greater than System memory.", + () -> ServiceStarter.validateConfig(3013872542L, 2013872542L, 7013872542L, 8013872542L), + e -> e instanceof IllegalStateException); + + // must not throw an exception + ServiceStarter.validateConfig(3013872542L, 1013872542L, 5013872542L, 8013872542L); + + // testing the parent config method. + long xmx = Runtime.getRuntime().maxMemory(); + Properties props = new Properties(); + props.setProperty(ServiceConfig.COMPONENT_CODE + "." + ServiceConfig.CACHE_POLICY_MAX_SIZE.getName(), String.valueOf(1013872542L + xmx)); + AssertExtensions.assertThrows("Exception to be thrown for Cache size greater than JVM MaxDirectMemory", + () -> ServiceStarter.validateConfig(this.configBuilder.include(props).build()), + e -> e instanceof IllegalStateException); + + long maxDirectMemorySize = Long.parseLong(ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class) + .getVMOption("MaxDirectMemorySize").getValue()); + maxDirectMemorySize = maxDirectMemorySize == 0 ? xmx : maxDirectMemorySize; + props.setProperty(ServiceConfig.COMPONENT_CODE + "." + ServiceConfig.CACHE_POLICY_MAX_SIZE.getName(), String.valueOf(maxDirectMemorySize - 100000)); + // must not throw an exception:
cache < JVM DM + ServiceStarter.validateConfig(this.configBuilder.include(props).build()); } } diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/StorageLoaderTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/StorageLoaderTest.java index 3f6f07a132c..0c198215dd4 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/StorageLoaderTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/StorageLoaderTest.java @@ -20,7 +20,6 @@ import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.segmentstore.server.store.ServiceConfig; import io.pravega.segmentstore.storage.ConfigSetup; -import io.pravega.segmentstore.storage.DurableDataLogException; import io.pravega.segmentstore.storage.StorageFactory; import io.pravega.segmentstore.storage.StorageLayoutType; import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorageConfig; @@ -60,7 +59,7 @@ public void testNoOpWithInMemoryStorage() throws Exception { .with(StorageExtraConfig.STORAGE_NO_OP_MODE, true)) .include(ServiceConfig.builder() .with(ServiceConfig.CONTAINER_COUNT, 1) - .with(ServiceConfig.STORAGE_IMPLEMENTATION, ServiceConfig.StorageType.INMEMORY)); + .with(ServiceConfig.STORAGE_IMPLEMENTATION, ServiceConfig.StorageType.INMEMORY.name())); ServiceBuilder builder = ServiceBuilder.newInMemoryBuilder(configBuilder.build()) .withStorageFactory(setup -> { @@ -173,7 +172,7 @@ public void testExtendedS3SimpleStorage() throws Exception { assertTrue(factory instanceof ExtendedS3SimpleStorageFactory); } - private StorageFactory getStorageFactory(ConfigSetup setup, ServiceConfig.StorageType storageType, String name, StorageLayoutType storageLayoutType) throws DurableDataLogException { + private StorageFactory getStorageFactory(ConfigSetup setup, ServiceConfig.StorageType storageType, String name, StorageLayoutType storageLayoutType) { @Cleanup("shutdownNow") ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(1, "test"); StorageLoader loader = new StorageLoader(); diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListenerTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListenerTest.java index 96375faf6d2..dff3f80c085 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListenerTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminConnectionListenerTest.java @@ -23,6 +23,7 @@ import io.pravega.shared.protocol.netty.CommandDecoder; import io.pravega.shared.protocol.netty.CommandEncoder; import io.pravega.shared.protocol.netty.ExceptionLoggingHandler; +import io.pravega.test.common.SecurityConfigDefaults; import lombok.Cleanup; import org.junit.Assert; import org.junit.Test; @@ -37,7 +38,7 @@ public class AdminConnectionListenerTest { public void testCreateEncodingStack() { @Cleanup AdminConnectionListener listener = new AdminConnectionListener(false, false, "localhost", - 6622, mock(StreamSegmentStore.class), mock(TableStore.class), new PassingTokenVerifier(), null, null); + 6622, mock(StreamSegmentStore.class), mock(TableStore.class), new PassingTokenVerifier(), null, null, SecurityConfigDefaults.TLS_PROTOCOL_VERSION); List stack = listener.createEncodingStack("connection"); // Check that the order of encoders is the right 
one. Assert.assertTrue(stack.get(0) instanceof ExceptionLoggingHandler); @@ -50,7 +51,7 @@ public void testCreateEncodingStack() { public void testCreateRequestProcessor() { @Cleanup AdminConnectionListener listener = new AdminConnectionListener(false, false, "localhost", - 6622, mock(StreamSegmentStore.class), mock(TableStore.class), new PassingTokenVerifier(), null, null); + 6622, mock(StreamSegmentStore.class), mock(TableStore.class), new PassingTokenVerifier(), null, null, SecurityConfigDefaults.TLS_PROTOCOL_VERSION); Assert.assertTrue(listener.createRequestProcessor(new TrackedConnection(new ServerConnectionInboundHandler())) instanceof AdminRequestProcessorImpl); } } diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorAuthFailedTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorAuthFailedTest.java new file mode 100644 index 00000000000..b8413992a97 --- /dev/null +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorAuthFailedTest.java @@ -0,0 +1,58 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.host.handler; + +import io.pravega.auth.InvalidTokenException; +import io.pravega.segmentstore.contracts.StreamSegmentStore; +import io.pravega.segmentstore.contracts.tables.TableStore; +import io.pravega.segmentstore.server.host.stat.SegmentStatsRecorder; +import io.pravega.segmentstore.server.host.stat.TableSegmentStatsRecorder; +import io.pravega.shared.protocol.netty.AdminRequestProcessor; +import io.pravega.shared.protocol.netty.WireCommands; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; + +import static io.pravega.shared.protocol.netty.WireCommands.AuthTokenCheckFailed.ErrorCode.TOKEN_CHECK_FAILED; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; + +public class AdminRequestProcessorAuthFailedTest { + + private AdminRequestProcessor processor; + private ServerConnection connection; + + @Before + public void setUp() throws Exception { + StreamSegmentStore store = mock(StreamSegmentStore.class); + connection = mock(ServerConnection.class); + processor = new AdminRequestProcessorImpl(store, mock(TableStore.class), new TrackedConnection(connection), + SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), + (resource, token, expectedLevel) -> { + throw new InvalidTokenException("Token verification failed."); + }, false); + } + + @After + public void tearDown() throws Exception { + } + + @Test + public void flushToStorage() { + processor.flushToStorage(new WireCommands.FlushToStorage(0, "", 1)); + verify(connection).send(new WireCommands.AuthTokenCheckFailed(1, "", TOKEN_CHECK_FAILED)); + } +} diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImplTest.java 
b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImplTest.java new file mode 100644 index 00000000000..46862edf3ce --- /dev/null +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AdminRequestProcessorImplTest.java @@ -0,0 +1,51 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.host.handler; + +import io.pravega.segmentstore.contracts.StreamSegmentStore; +import io.pravega.segmentstore.contracts.tables.TableStore; +import io.pravega.segmentstore.server.store.ServiceBuilder; +import io.pravega.shared.protocol.netty.AdminRequestProcessor; +import io.pravega.shared.protocol.netty.WireCommands; +import io.pravega.test.common.SerializedClassRunner; +import lombok.Cleanup; +import lombok.extern.slf4j.Slf4j; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.mockito.InOrder; + +import static org.mockito.Mockito.inOrder; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; + +@Slf4j +@RunWith(SerializedClassRunner.class) +public class AdminRequestProcessorImplTest extends PravegaRequestProcessorTest { + + @Test(timeout = 60000) + public void testFlushToStorage() throws Exception { + @Cleanup + ServiceBuilder serviceBuilder = newInlineExecutionInMemoryBuilder(getBuilderConfig()); + serviceBuilder.initialize(); + StreamSegmentStore store = spy(serviceBuilder.createStreamSegmentService()); + ServerConnection connection = mock(ServerConnection.class); + InOrder order = inOrder(connection); + AdminRequestProcessor processor = new AdminRequestProcessorImpl(store, mock(TableStore.class), connection); + + processor.flushToStorage(new WireCommands.FlushToStorage(0, "", 1)); + order.verify(connection).send(new WireCommands.StorageFlushed(1)); + } +} diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AppendProcessorTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AppendProcessorTest.java index e972d136a5a..f33d10c9488 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AppendProcessorTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/AppendProcessorTest.java @@ -1085,6 +1085,7 @@ public void testAppendAfterSealThrows() throws Exception { @Cleanup AppendProcessor processor = AppendProcessor.defaultBuilder().store(store).connection(new TrackedConnection(connection, tracker)).build(); InOrder connectionVerifier = Mockito.inOrder(connection); + @SuppressWarnings("unchecked") Map attributes = mock(Map.class); when(store.getAttributes(streamSegmentName, Collections.singleton(AttributeId.fromUUID(clientId)), true, AppendProcessor.TIMEOUT)) .thenReturn(CompletableFuture.completedFuture(attributes)); diff --git 
a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/ConnectionTrackerTests.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/ConnectionTrackerTests.java index ac0bc26939e..eda6f57fc12 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/ConnectionTrackerTests.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/ConnectionTrackerTests.java @@ -18,6 +18,7 @@ import io.pravega.shared.protocol.netty.RequestProcessor; import io.pravega.shared.protocol.netty.WireCommand; import io.pravega.test.common.AssertExtensions; +import lombok.Cleanup; import lombok.Getter; import lombok.val; import org.junit.Assert; @@ -104,8 +105,10 @@ public void testTrackedConnection() { val singleLimit = ConnectionTracker.LOW_WATERMARK * 2; val baseTracker = new ConnectionTracker(allLimit, singleLimit); val c1 = new MockConnection(); + @Cleanup val t1 = new TrackedConnection(c1, baseTracker); val c2 = new MockConnection(); + @Cleanup val t2 = new TrackedConnection(c2, baseTracker); // A connection increased, but it's under both the per-connection limit and total limit. @@ -161,7 +164,6 @@ public void setRequestProcessor(RequestProcessor cp) { @Override public void close() { - throw new UnsupportedOperationException(); } @Override diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListenerTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListenerTest.java index ed576761bcf..71b2555e818 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListenerTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaConnectionListenerTest.java @@ -26,6 +26,9 @@ import io.pravega.segmentstore.server.host.delegationtoken.PassingTokenVerifier; import io.pravega.segmentstore.server.host.stat.SegmentStatsRecorder; import io.pravega.segmentstore.server.host.stat.TableSegmentStatsRecorder; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.HealthServiceManager; +import io.pravega.shared.health.Status; import io.pravega.shared.protocol.netty.AppendDecoder; import io.pravega.shared.protocol.netty.CommandDecoder; import io.pravega.shared.protocol.netty.CommandEncoder; @@ -36,6 +39,7 @@ import java.io.FileNotFoundException; import java.io.IOException; import java.net.ServerSocket; +import java.time.Duration; import java.util.List; import java.util.concurrent.atomic.AtomicReference; @@ -44,6 +48,7 @@ import org.junit.Assert; import org.junit.Test; +import static io.pravega.segmentstore.server.store.ServiceConfig.TLS_PROTOCOL_VERSION; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import static org.mockito.Mockito.mock; @@ -52,6 +57,7 @@ public class PravegaConnectionListenerTest { @Test public void testCtorSetsTlsReloadFalseByDefault() { + @Cleanup PravegaConnectionListener listener = new PravegaConnectionListener(false, 6222, mock(StreamSegmentStore.class), mock(TableStore.class), NoOpScheduledExecutor.get()); assertFalse(listener.isEnableTlsReload()); @@ -59,10 +65,11 @@ public void testCtorSetsTlsReloadFalseByDefault() { @Test public void testCtorSetsTlsReloadFalseIfTlsIsDisabled() { + @Cleanup PravegaConnectionListener listener = new PravegaConnectionListener(false, true, "localhost", 6222, 
mock(StreamSegmentStore.class), mock(TableStore.class), SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), - null, null, true, NoOpScheduledExecutor.get()); + null, null, true, NoOpScheduledExecutor.get(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); assertFalse(listener.isEnableTlsReload()); } @@ -71,7 +78,7 @@ public void testCloseWithoutStartListeningThrowsNoException() { PravegaConnectionListener listener = new PravegaConnectionListener(true, true, "localhost", 6222, mock(StreamSegmentStore.class), mock(TableStore.class), SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), - null, null, true, NoOpScheduledExecutor.get()); + null, null, true, NoOpScheduledExecutor.get(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); // Note that we do not invoke startListening() here, which among other things instantiates some of the object // state that is cleaned up upon invocation of close() in this line. @@ -82,12 +89,12 @@ public void testCloseWithoutStartListeningThrowsNoException() { public void testUsesEventWatcherForNonSymbolicLinks() { String pathToCertificateFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME; String pathToKeyFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME; - + @Cleanup PravegaConnectionListener listener = new PravegaConnectionListener(true, true, "whatever", -1, mock(StreamSegmentStore.class), mock(TableStore.class), SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), "dummy-tls-certificate-path", "dummy-tls-key-path", true, - NoOpScheduledExecutor.get()); + NoOpScheduledExecutor.get(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); AtomicReference dummySslCtx = new AtomicReference<>(null); @@ -101,12 +108,12 @@ public void testUsesEventWatcherForNonSymbolicLinks() { public void testUsesPollingMonitorForSymbolicLinks() { String pathToCertificateFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME; String pathToKeyFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME; - + @Cleanup PravegaConnectionListener listener = new PravegaConnectionListener(true, true, "whatever", -1, mock(StreamSegmentStore.class), mock(TableStore.class), SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), "dummy-tls-certificate-path", "dummy-tls-key-path", true, - NoOpScheduledExecutor.get()); + NoOpScheduledExecutor.get(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); AtomicReference dummySslCtx = new AtomicReference<>(null); @@ -120,12 +127,12 @@ public void testUsesPollingMonitorForSymbolicLinks() { public void testPrepareCertificateMonitorThrowsExceptionWithNonExistentFile() { String pathToCertificateFile = SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME; String pathToKeyFile = SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME; - + @Cleanup PravegaConnectionListener listener = new PravegaConnectionListener(true, true, "whatever", -1, mock(StreamSegmentStore.class), mock(TableStore.class), SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), "dummy-tls-certificate-path", "dummy-tls-key-path", true, - NoOpScheduledExecutor.get()); + NoOpScheduledExecutor.get(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); AtomicReference dummySslCtx = new AtomicReference<>(null); try { @@ -145,11 +152,11 @@ public void testPrepareCertificateMonitorThrowsExceptionWithNonExistentFile() { public void 
testEnableTlsContextReloadWhenStateIsValid() { String pathToCertificateFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME; String pathToKeyFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME; - + @Cleanup PravegaConnectionListener listener = new PravegaConnectionListener(true, true, "whatever", -1, mock(StreamSegmentStore.class), mock(TableStore.class), SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), - pathToCertificateFile, pathToKeyFile, true, NoOpScheduledExecutor.get()); + pathToCertificateFile, pathToKeyFile, true, NoOpScheduledExecutor.get(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); AtomicReference dummySslCtx = new AtomicReference<>(null); listener.enableTlsContextReload(dummySslCtx); @@ -193,4 +200,25 @@ public void testCreateRequestProcessor() { mock(StreamSegmentStore.class), mock(TableStore.class), NoOpScheduledExecutor.get()); Assert.assertTrue(listener.createRequestProcessor(new TrackedConnection(new ServerConnectionInboundHandler())) instanceof AppendProcessor); } + + // Test the health status reported by the Pravega connection listener. + @Test + public void testHealth() { + @Cleanup + HealthServiceManager healthServiceManager = new HealthServiceManager(Duration.ofSeconds(2)); + healthServiceManager.start(); + int port = TestUtils.getAvailableListenPort(); + @Cleanup + PravegaConnectionListener listener = new PravegaConnectionListener(false, false, "localhost", + port, mock(StreamSegmentStore.class), mock(TableStore.class), SegmentStatsRecorder.noOp(), + TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), null, null, true, + NoOpScheduledExecutor.get(), TLS_PROTOCOL_VERSION.getDefaultValue().split(","), healthServiceManager); + + listener.startListening(); + Health health = listener.getHealthServiceManager().getHealthSnapshot(); + Assert.assertEquals("HealthContributor should report an 'UP' Status.", Status.UP, health.getStatus()); + listener.close(); + health = listener.getHealthServiceManager().getHealthSnapshot(); + Assert.assertEquals("HealthContributor should report a 'DOWN' Status.", Status.DOWN, health.getStatus()); + } } diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorAuthFailedTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorAuthFailedTest.java index d1aadfad921..58d29d624f5 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorAuthFailedTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorAuthFailedTest.java @@ -78,7 +78,7 @@ public void getStreamSegmentInfo() { @Test public void createSegment() { - processor.createSegment(new WireCommands.CreateSegment(100L, "segment", (byte) 0, 0, "token")); + processor.createSegment(new WireCommands.CreateSegment(100L, "segment", (byte) 0, 0, "token", 0)); verify(connection).send(new WireCommands.AuthTokenCheckFailed(100L, "", TOKEN_CHECK_FAILED)); } diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorTest.java index 36a17126ae5..842c77e38ec 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorTest.java +++
b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/handler/PravegaRequestProcessorTest.java @@ -29,7 +29,6 @@ import io.pravega.segmentstore.contracts.ReadResult; import io.pravega.segmentstore.contracts.ReadResultEntry; import io.pravega.segmentstore.contracts.ReadResultEntryType; -import io.pravega.segmentstore.contracts.SegmentProperties; import io.pravega.segmentstore.contracts.SegmentType; import io.pravega.segmentstore.contracts.StreamSegmentInformation; import io.pravega.segmentstore.contracts.StreamSegmentMergedException; @@ -49,6 +48,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.segmentstore.server.store.ServiceConfig; import io.pravega.segmentstore.server.store.StreamSegmentService; +import io.pravega.segmentstore.server.tables.TableExtensionConfig; import io.pravega.shared.NameUtils; import io.pravega.shared.metrics.MetricsConfig; import io.pravega.shared.metrics.MetricsProvider; @@ -85,7 +85,6 @@ import org.mockito.Mockito; import static io.netty.buffer.Unpooled.wrappedBuffer; -import static io.pravega.test.common.AssertExtensions.assertThrows; import static java.util.Arrays.asList; import static java.util.Collections.singletonList; import static java.util.stream.Collectors.toList; @@ -377,7 +376,7 @@ public void testCreateSegment() throws Exception { recorderMock, TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), false); // Execute and Verify createSegment/getStreamSegmentInfo calling stack is executed as designed. - processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); verify(recorderMock).createSegment(eq(streamSegmentName), eq(WireCommands.CreateSegment.NO_SCALE), eq(0), any()); assertTrue(append(streamSegmentName, 1, store)); processor.getStreamSegmentInfo(new WireCommands.GetStreamSegmentInfo(1, streamSegmentName, "")); @@ -390,6 +389,23 @@ public void testCreateSegment() throws Exception { val segmentType = SegmentType.fromAttributes(si.getAttributes()); Assert.assertFalse(segmentType.isInternal() || segmentType.isCritical() || segmentType.isSystem() || segmentType.isTableSegment()); + // Verify the correct rollover size is passed down to the metadata store. + // Verify default value + val attributes = si.getAttributes(); + Assert.assertEquals((long) attributes.get(Attributes.ROLLOVER_SIZE), 0L); + // Verify custom value + String streamSegmentName1 = "scope/stream/testCreateSegmentRolloverSizePositive"; + processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName1, WireCommands.CreateSegment.NO_SCALE, 0, "", 1024 * 1024L)); + val si1 = store.getStreamSegmentInfo(streamSegmentName1, PravegaRequestProcessor.TIMEOUT).join(); + val attributes1 = si1.getAttributes(); + Assert.assertEquals((long) attributes1.get(Attributes.ROLLOVER_SIZE), 1024 * 1024L); + // Verify invalid negative value + String streamSegmentName2 = "scope/stream/testCreateSegmentRolloverSizeNegative"; + processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName2, WireCommands.CreateSegment.NO_SCALE, 0, "", -1024L)); + val si2 = store.getStreamSegmentInfo(streamSegmentName2, PravegaRequestProcessor.TIMEOUT).join(); + val attributes2 = si2.getAttributes(); + Assert.assertEquals((long) attributes2.get(Attributes.ROLLOVER_SIZE), 0L); // fall back to default value + // TestCreateSealDelete may be executed before
this test case, // so createSegmentStats may record 1 or 2 createSegment operation here. } @@ -407,11 +423,11 @@ public void testTransaction() throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), connection); processor.createSegment(new WireCommands.CreateSegment(requestId, streamSegmentName, - WireCommands.CreateSegment.NO_SCALE, 0, "")); + WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(requestId, streamSegmentName)); String transactionName = NameUtils.getTransactionNameFromId(streamSegmentName, txnid); - processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); assertTrue(append(NameUtils.getTransactionNameFromId(streamSegmentName, txnid), 1, store)); processor.getStreamSegmentInfo(new WireCommands.GetStreamSegmentInfo(requestId, transactionName, "")); assertTrue(append(NameUtils.getTransactionNameFromId(streamSegmentName, txnid), 2, store)); @@ -428,7 +444,7 @@ public void testTransaction() throws Exception { txnid = UUID.randomUUID(); transactionName = NameUtils.getTransactionNameFromId(streamSegmentName, txnid); - processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); assertTrue(append(NameUtils.getTransactionNameFromId(streamSegmentName, txnid), 1, store)); order.verify(connection).send(new WireCommands.SegmentCreated(requestId, transactionName)); processor.getStreamSegmentInfo(new WireCommands.GetStreamSegmentInfo(requestId, transactionName, "")); @@ -446,7 +462,7 @@ public void testTransaction() throws Exception { txnid = UUID.randomUUID(); transactionName = NameUtils.getTransactionNameFromId(streamSegmentName, txnid); - processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); assertTrue(append(NameUtils.getTransactionNameFromId(streamSegmentName, txnid), 1, store)); processor.getStreamSegmentInfo(new WireCommands.GetStreamSegmentInfo(requestId, transactionName, "")); assertTrue(append(NameUtils.getTransactionNameFromId(streamSegmentName, txnid), 2, store)); @@ -482,12 +498,12 @@ public void testMergedTransaction() throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), connection); processor.createSegment(new WireCommands.CreateSegment(requestId, streamSegmentName, - WireCommands.CreateSegment.NO_SCALE, 0, "")); + WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(requestId, streamSegmentName)); String transactionName = NameUtils.getTransactionNameFromId(streamSegmentName, txnid); - processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(requestId, transactionName)); 
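// Note: the trailing 0 in the CreateSegment calls above is the new rolloverSize argument; 0 falls back to
// the default rollover size, as verified in testCreateSegment.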
processor.mergeSegments(new WireCommands.MergeSegments(requestId, streamSegmentName, transactionName, "")); order.verify(connection).send(new WireCommands.SegmentsMerged(requestId, streamSegmentName, transactionName, 0)); @@ -498,9 +514,9 @@ public void testMergedTransaction() throws Exception { doReturn(Futures.failedFuture(new StreamSegmentNotExistsException(streamSegmentName))).when(store).sealStreamSegment( anyString(), any()); doReturn(Futures.failedFuture(new StreamSegmentNotExistsException(streamSegmentName))).when(store).mergeStreamSegment( - anyString(), anyString(), any()); + anyString(), anyString(), any(), any()); - processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(requestId, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(requestId, transactionName)); processor.mergeSegments(new WireCommands.MergeSegments(requestId, streamSegmentName, transactionName, "")); @@ -522,16 +538,16 @@ public void testMetricsOnSegmentMerge() throws Exception { //test txn segment merge CompletableFuture txnFuture = CompletableFuture.completedFuture(createMergeStreamSegmentResult(streamSegmentName, txnId)); - doReturn(txnFuture).when(store).mergeStreamSegment(anyString(), anyString(), any()); + doReturn(txnFuture).when(store).mergeStreamSegment(anyString(), anyString(), any(), any()); SegmentStatsRecorder recorderMock = mock(SegmentStatsRecorder.class); PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), new TrackedConnection(connection), recorderMock, TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), false); - processor.createSegment(new WireCommands.CreateSegment(0, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(0, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); String transactionName = NameUtils.getTransactionNameFromId(streamSegmentName, txnId); - processor.createSegment(new WireCommands.CreateSegment(1, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(1, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); processor.mergeSegments(new WireCommands.MergeSegments(2, streamSegmentName, transactionName, "")); - verify(recorderMock).merge(streamSegmentName, 100L, 10, (long) streamSegmentName.hashCode()); + verify(recorderMock).merge(streamSegmentName, 100L, 10, streamSegmentName.hashCode()); } private MergeStreamSegmentResult createMergeStreamSegmentResult(String streamSegmentName, UUID txnId) { @@ -541,21 +557,110 @@ private MergeStreamSegmentResult createMergeStreamSegmentResult(String streamSeg return new MergeStreamSegmentResult(100, 100, attributes); } - private SegmentProperties createSegmentProperty(String streamSegmentName, UUID txnId) { + @Test(timeout = 20000) + public void testConditionalSegmentMergeReplaceIfEquals() throws Exception { + String streamSegmentName = "scope/stream/txnSegment"; + UUID txnId = UUID.randomUUID(); + @Cleanup + ServiceBuilder serviceBuilder = newInlineExecutionInMemoryBuilder(getBuilderConfig()); + serviceBuilder.initialize(); + StreamSegmentStore store = spy(serviceBuilder.createStreamSegmentService()); + ServerConnection connection = mock(ServerConnection.class); + InOrder order = inOrder(connection); + SegmentStatsRecorder 
recorderMock = mock(SegmentStatsRecorder.class); + PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), new TrackedConnection(connection), + recorderMock, TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), false); - Map attributes = new HashMap<>(); - attributes.put(Attributes.EVENT_COUNT, 10L); - attributes.put(Attributes.CREATION_TIME, (long) streamSegmentName.hashCode()); + processor.createSegment(new WireCommands.CreateSegment(0, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0L)); + order.verify(connection).send(new WireCommands.SegmentCreated(0, streamSegmentName)); + String transactionName = NameUtils.getTransactionNameFromId(streamSegmentName, txnId); + processor.createSegment(new WireCommands.CreateSegment(1, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0L)); + order.verify(connection).send(new WireCommands.SegmentCreated(1, transactionName)); + + // Try to merge the transaction conditionally, when the attributes on the parent segment do not match. + UUID randomAttribute1 = UUID.randomUUID(); + UUID randomAttribute2 = UUID.randomUUID(); + List attributeUpdates = asList( + new WireCommands.ConditionalAttributeUpdate(randomAttribute1, WireCommands.ConditionalAttributeUpdate.REPLACE_IF_EQUALS, 1, streamSegmentName.hashCode()), + new WireCommands.ConditionalAttributeUpdate(randomAttribute2, WireCommands.ConditionalAttributeUpdate.REPLACE_IF_EQUALS, 2, streamSegmentName.hashCode()) + ); + + // The first attempt should fail as the attribute update is not going to work. + assertTrue(append(transactionName, 1, store)); + processor.mergeSegments(new WireCommands.MergeSegments(2, streamSegmentName, transactionName, "", attributeUpdates)); + order.verify(connection).send(new WireCommands.SegmentAttributeUpdated(2, false)); + + // Now, set the right attributes in the parent segment. + processor.updateSegmentAttribute(new WireCommands.UpdateSegmentAttribute(3, streamSegmentName, randomAttribute1, + streamSegmentName.hashCode(), WireCommands.NULL_ATTRIBUTE_VALUE, "")); + order.verify(connection).send(new WireCommands.SegmentAttributeUpdated(3, true)); + processor.updateSegmentAttribute(new WireCommands.UpdateSegmentAttribute(4, streamSegmentName, randomAttribute2, + streamSegmentName.hashCode(), WireCommands.NULL_ATTRIBUTE_VALUE, "")); + order.verify(connection).send(new WireCommands.SegmentAttributeUpdated(4, true)); + + // Merge segments conditionally, now it should work. + processor.mergeSegments(new WireCommands.MergeSegments(5, streamSegmentName, transactionName, "", attributeUpdates)); + order.verify(connection).send(new WireCommands.SegmentsMerged(5, streamSegmentName, transactionName, 1)); + + // Check the value of attributes post merge. + processor.getSegmentAttribute(new WireCommands.GetSegmentAttribute(6, streamSegmentName, randomAttribute1, "")); + order.verify(connection).send(new WireCommands.SegmentAttribute(6, 1)); + processor.getSegmentAttribute(new WireCommands.GetSegmentAttribute(7, streamSegmentName, randomAttribute2, "")); + order.verify(connection).send(new WireCommands.SegmentAttribute(7, 2)); + } - return StreamSegmentInformation.builder() - .name(txnId == null ? streamSegmentName + "#." : streamSegmentName + "#transaction." 
+ txnId) - .sealed(true) - .deleted(false) - .lastModified(null) - .startOffset(0) - .length(100) - .attributes(attributes) - .build(); + @Test(timeout = 20000) + public void testConditionalSegmentMergeReplace() throws Exception { + String streamSegmentName = "scope/stream/txnSegment"; + UUID txnId = UUID.randomUUID(); + @Cleanup + ServiceBuilder serviceBuilder = newInlineExecutionInMemoryBuilder(getBuilderConfig()); + serviceBuilder.initialize(); + StreamSegmentStore store = spy(serviceBuilder.createStreamSegmentService()); + ServerConnection connection = mock(ServerConnection.class); + InOrder order = inOrder(connection); + SegmentStatsRecorder recorderMock = mock(SegmentStatsRecorder.class); + PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), new TrackedConnection(connection), + recorderMock, TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), false); + + processor.createSegment(new WireCommands.CreateSegment(0, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0L)); + order.verify(connection).send(new WireCommands.SegmentCreated(0, streamSegmentName)); + String transactionName = NameUtils.getTransactionNameFromId(streamSegmentName, txnId); + processor.createSegment(new WireCommands.CreateSegment(1, transactionName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0L)); + order.verify(connection).send(new WireCommands.SegmentCreated(1, transactionName)); + assertTrue(append(transactionName, 1, store)); + + // Updates to perform. + UUID randomAttribute1 = UUID.randomUUID(); + UUID randomAttribute2 = UUID.randomUUID(); + List<WireCommands.ConditionalAttributeUpdate> attributeUpdates = asList( + new WireCommands.ConditionalAttributeUpdate(randomAttribute1, WireCommands.ConditionalAttributeUpdate.REPLACE, 1, 0), + new WireCommands.ConditionalAttributeUpdate(randomAttribute2, WireCommands.ConditionalAttributeUpdate.REPLACE, 2, 0) + ); + + // Set attributes in the parent segment to a known value. + processor.updateSegmentAttribute(new WireCommands.UpdateSegmentAttribute(2, streamSegmentName, randomAttribute1, + streamSegmentName.hashCode(), WireCommands.NULL_ATTRIBUTE_VALUE, "")); + order.verify(connection).send(new WireCommands.SegmentAttributeUpdated(2, true)); + processor.updateSegmentAttribute(new WireCommands.UpdateSegmentAttribute(3, streamSegmentName, randomAttribute2, + streamSegmentName.hashCode(), WireCommands.NULL_ATTRIBUTE_VALUE, "")); + order.verify(connection).send(new WireCommands.SegmentAttributeUpdated(3, true)); + + // Check the value of the attributes before the merge. + processor.getSegmentAttribute(new WireCommands.GetSegmentAttribute(4, streamSegmentName, randomAttribute1, "")); + order.verify(connection).send(new WireCommands.SegmentAttribute(4, streamSegmentName.hashCode())); + processor.getSegmentAttribute(new WireCommands.GetSegmentAttribute(5, streamSegmentName, randomAttribute2, "")); + order.verify(connection).send(new WireCommands.SegmentAttribute(5, streamSegmentName.hashCode())); + + // Merge segments replacing the attributes; now it should work. + processor.mergeSegments(new WireCommands.MergeSegments(6, streamSegmentName, transactionName, "", attributeUpdates)); + order.verify(connection).send(new WireCommands.SegmentsMerged(6, streamSegmentName, transactionName, 1)); + + // Check the value of the attributes post merge.
+ processor.getSegmentAttribute(new WireCommands.GetSegmentAttribute(7, streamSegmentName, randomAttribute1, "")); + order.verify(connection).send(new WireCommands.SegmentAttribute(7, 1)); + processor.getSegmentAttribute(new WireCommands.GetSegmentAttribute(8, streamSegmentName, randomAttribute2, "")); + order.verify(connection).send(new WireCommands.SegmentAttribute(8, 2)); } @Test(timeout = 20000) @@ -571,7 +676,7 @@ public void testSegmentAttribute() throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), connection); // Execute and Verify createSegment/getStreamSegmentInfo calling stack is executed as design. - processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, streamSegmentName)); processor.getSegmentAttribute(new WireCommands.GetSegmentAttribute(2, streamSegmentName, attribute, "")); @@ -611,7 +716,7 @@ public void testCreateSealTruncateDelete() throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), connection); // Create a segment and append 2 bytes. - processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); assertTrue(append(streamSegmentName, 1, store)); assertTrue(append(streamSegmentName, 2, store)); @@ -662,7 +767,7 @@ public void testUnsupportedOperation() throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, mock(TableStore.class), connection); // Execute and Verify createSegment/getStreamSegmentInfo calling stack is executed as design. - processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "")); + processor.createSegment(new WireCommands.CreateSegment(1, streamSegmentName, WireCommands.CreateSegment.NO_SCALE, 0, "", 0)); order.verify(connection).send(new WireCommands.OperationUnsupported(1, "createSegment", "")); } @@ -690,11 +795,18 @@ private void testCreateTableSegment(int keyLength) throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, tableStore, new TrackedConnection(connection), SegmentStatsRecorder.noOp(), recorderMock, new PassingTokenVerifier(), false); + int requestId = 0; + + // GetInfo with non-existent Table Segment. + processor.getTableSegmentInfo(new WireCommands.GetTableSegmentInfo(++requestId, tableSegmentName, "")); + order.verify(connection).send(new WireCommands.NoSuchSegment(requestId, tableSegmentName, "", -1)); + // Execute and Verify createTableSegment calling stack is executed as design. 
- processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, keyLength, "")); - order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); - processor.createTableSegment(new WireCommands.CreateTableSegment(2, tableSegmentName, false, keyLength, "")); - order.verify(connection).send(new WireCommands.SegmentAlreadyExists(2, tableSegmentName, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(++requestId, tableSegmentName, false, keyLength, "", 0)); + order.verify(connection).send(new WireCommands.SegmentCreated(requestId, tableSegmentName)); + + processor.createTableSegment(new WireCommands.CreateTableSegment(++requestId, tableSegmentName, false, keyLength, "", 0)); + order.verify(connection).send(new WireCommands.SegmentAlreadyExists(requestId, tableSegmentName, "")); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); verifyNoMoreInteractions(recorderMock); @@ -706,30 +818,35 @@ private void testCreateTableSegment(int keyLength) throws Exception { Assert.assertEquals(keyLength > 0, segmentType.isFixedKeyLengthTableSegment()); val kl = si.getAttributes().getOrDefault(Attributes.ATTRIBUTE_ID_LENGTH, 0L); Assert.assertEquals(keyLength, (long) kl); - } - /** - * Verifies that the methods that are not yet implemented are not implemented by accident without unit tests. - * This test should be removed once every method tested in it is implemented. - */ - @Test(timeout = 20000) - public void testUnimplementedMethods() throws Exception { - // Set up PravegaRequestProcessor instance to execute requests against - String streamSegmentName = "scope/stream/test"; - @Cleanup - ServiceBuilder serviceBuilder = newInlineExecutionInMemoryBuilder(getBuilderConfig()); - serviceBuilder.initialize(); - StreamSegmentStore store = serviceBuilder.createStreamSegmentService(); - TableStore tableStore = serviceBuilder.createTableStoreService(); - ServerConnection connection = mock(ServerConnection.class); - PravegaRequestProcessor processor = new PravegaRequestProcessor(store, tableStore, connection); - - assertThrows("seal() is implemented.", - () -> processor.sealTableSegment(new WireCommands.SealTableSegment(1, streamSegmentName, "")), - ex -> ex instanceof UnsupportedOperationException); - assertThrows("merge() is implemented.", - () -> processor.mergeTableSegments(new WireCommands.MergeTableSegments(1, streamSegmentName, streamSegmentName, "")), - ex -> ex instanceof UnsupportedOperationException); + // Verify segment info can be retrieved. 
+ processor.getTableSegmentInfo(new WireCommands.GetTableSegmentInfo(++requestId, tableSegmentName, "")); + order.verify(connection).send(new WireCommands.TableSegmentInfo(requestId, tableSegmentName, 0, 0, 0, keyLength)); + + // Verify invoking GetTableSegmentInfo on a non-existing segment returns NoSuchSegment + String nonExistingSegment = "nonExistingSegment"; + processor.getTableSegmentInfo(new WireCommands.GetTableSegmentInfo(++requestId, nonExistingSegment, "")); + order.verify(connection).send(new WireCommands.NoSuchSegment(requestId, nonExistingSegment, "", -1)); + + // Verify table segment has correct rollover size + // Verify default value + val attributes = si.getAttributes(); + val config = TableExtensionConfig.builder().build(); + Assert.assertEquals((long) attributes.get(Attributes.ROLLOVER_SIZE), config.getDefaultRolloverSize()); + + // Verify custom value + val tableSegmentName1 = "testCreateTableSegmentRolloverSizePositive"; + processor.createTableSegment(new WireCommands.CreateTableSegment(++requestId, tableSegmentName1, false, keyLength, "", 1024 * 1024L)); + val si1 = store.getStreamSegmentInfo(tableSegmentName1, PravegaRequestProcessor.TIMEOUT).join(); + val attributes1 = si1.getAttributes(); + Assert.assertEquals((long) attributes1.get(Attributes.ROLLOVER_SIZE), 1024 * 1024L); + + // Verify invalid value + val tableSegmentName2 = "testCreateTableSegmentRolloverSizeNegative"; + processor.createTableSegment(new WireCommands.CreateTableSegment(++requestId, tableSegmentName2, false, keyLength, "", -1024L)); + val si2 = store.getStreamSegmentInfo(tableSegmentName2, PravegaRequestProcessor.TIMEOUT).join(); + val attributes2 = si2.getAttributes(); + Assert.assertEquals((long) attributes2.get(Attributes.ROLLOVER_SIZE), config.getDefaultRolloverSize()); // fall back to default value } @Test(timeout = 20000) @@ -753,7 +870,7 @@ public void testUpdateEntries() throws Exception { ArrayList keys = generateKeys(3, rnd); // Execute and Verify createSegment calling stack is executed as design. - processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); @@ -831,7 +948,7 @@ public void testRemoveKeys() throws Exception { ArrayList keys = generateKeys(3, rnd); // Create a table segment and add data. - processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); TableEntry e1 = TableEntry.unversioned(keys.get(0), generateValue(rnd)); processor.updateTableEntries(new WireCommands.UpdateTableEntries(2, tableSegmentName, "", getTableEntries(singletonList(e1)), WireCommands.NULL_TABLE_SEGMENT_OFFSET)); @@ -890,7 +1007,7 @@ public void testDeleteEmptyTable() throws Exception { SegmentStatsRecorder.noOp(), recorderMock, new PassingTokenVerifier(), false); // Create a table segment. 
- processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); @@ -919,7 +1036,7 @@ public void testDeleteNonEmptyTable() throws Exception { ArrayList keys = generateKeys(2, rnd); // Create a table segment and add data. - processor.createTableSegment(new WireCommands.CreateTableSegment(3, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(3, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(3, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); @@ -955,7 +1072,7 @@ public void testReadTable() throws Exception { ArrayList keys = generateKeys(2, rnd); // Create a table segment and add data. - processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); recorderMockOrder.verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); TableEntry entry = TableEntry.unversioned(keys.get(0), generateValue(rnd)); @@ -1009,7 +1126,7 @@ public void testGetTableKeys() throws Exception { TableEntry e3 = TableEntry.unversioned(keys.get(2), generateValue(rnd)); // Create a table segment and add data. - processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); processor.updateTableEntries(new WireCommands.UpdateTableEntries(2, tableSegmentName, "", getTableEntries(asList(e1, e2, e3)), WireCommands.NULL_TABLE_SEGMENT_OFFSET)); @@ -1083,7 +1200,7 @@ public void testGetTableEntries() throws Exception { TableEntry e3 = TableEntry.unversioned(keys.get(2), testValue); // Create a table segment and add data. 
- processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); processor.updateTableEntries(new WireCommands.UpdateTableEntries(2, tableSegmentName, "", getTableEntries(asList(e1, e2, e3)), WireCommands.NULL_TABLE_SEGMENT_OFFSET)); @@ -1157,7 +1274,7 @@ public void testReadTableEntriesDeltaEmpty() throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, tableStore, new TrackedConnection(connection), SegmentStatsRecorder.noOp(), recorderMock, new PassingTokenVerifier(), false); - processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); @@ -1183,7 +1300,7 @@ public void testReadTableEntriesDeltaOutOfBounds() throws Exception { PravegaRequestProcessor processor = new PravegaRequestProcessor(store, tableStore, new TrackedConnection(connection), SegmentStatsRecorder.noOp(), recorderMock, new PassingTokenVerifier(), false); - processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); @@ -1231,7 +1348,7 @@ public void testReadTableEntriesDelta() throws Exception { TableEntry e3 = TableEntry.unversioned(keys.get(2), testValue); // Create a table segment and add data. 
- processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "")); + processor.createTableSegment(new WireCommands.CreateTableSegment(1, tableSegmentName, false, 0, "", 0)); order.verify(connection).send(new WireCommands.SegmentCreated(1, tableSegmentName)); verify(recorderMock).createTableSegment(eq(tableSegmentName), any()); processor.updateTableEntries(new WireCommands.UpdateTableEntries(2, tableSegmentName, "", @@ -1383,7 +1500,7 @@ private boolean append(String streamSegmentName, int number, StreamSegmentStore PravegaRequestProcessor.TIMEOUT)); } - private static ServiceBuilderConfig getBuilderConfig() { + static ServiceBuilderConfig getBuilderConfig() { return ServiceBuilderConfig .builder() .include(ServiceConfig.builder() @@ -1404,7 +1521,7 @@ private static ServiceBuilderConfig getReadOnlyBuilderConfig() { .build(); } - private static ServiceBuilder newInlineExecutionInMemoryBuilder(ServiceBuilderConfig config) { + static ServiceBuilder newInlineExecutionInMemoryBuilder(ServiceBuilderConfig config) { return ServiceBuilder.newInMemoryBuilder(config, (size, name, threadPriority) -> new InlineExecutor()) .withStreamSegmentStore(setup -> new SynchronousStreamSegmentStore(new StreamSegmentService( setup.getContainerRegistry(), setup.getSegmentToContainerMapper()))); diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributorTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributorTest.java new file mode 100644 index 00000000000..8a7f79e40b2 --- /dev/null +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerHealthContributorTest.java @@ -0,0 +1,68 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.host.health; + +import com.google.common.util.concurrent.Service; +import io.pravega.segmentstore.server.SegmentContainer; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +/** + * Test health contributor for SegmentContainer. + */ +public class SegmentContainerHealthContributorTest { + SegmentContainer segmentContainer; + SegmentContainerHealthContributor segmentContainerHealthContributor; + + @Before + public void setup() { + segmentContainer = mock(SegmentContainer.class); + segmentContainerHealthContributor = new SegmentContainerHealthContributor(segmentContainer); + } + + @After + public void tearDown() { + segmentContainer.close(); + segmentContainerHealthContributor.close(); + } + + /** + * Check health of SegmentContainer with different states. 
+ */ + @Test + public void testSegmentContainerHealth() { + when(segmentContainer.state()).thenReturn(Service.State.NEW); + Health.HealthBuilder builder = Health.builder().name(segmentContainerHealthContributor.getName()); + Status status = segmentContainerHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report a 'NEW' Status.", Status.NEW, status); + when(segmentContainer.state()).thenReturn(Service.State.STARTING); + status = segmentContainerHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report a 'STARTING' Status.", Status.STARTING, status); + when(segmentContainer.state()).thenReturn(Service.State.RUNNING); + status = segmentContainerHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report an 'UP' Status.", Status.UP, status); + when(segmentContainer.state()).thenReturn(Service.State.TERMINATED); + status = segmentContainerHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report a 'DOWN' Status.", Status.DOWN, status); + } +} diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributorTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributorTest.java new file mode 100644 index 00000000000..e2031bd4ecf --- /dev/null +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/health/SegmentContainerRegistryHealthContributorTest.java @@ -0,0 +1,65 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.host.health; + +import io.pravega.segmentstore.server.SegmentContainer; +import io.pravega.segmentstore.server.SegmentContainerRegistry; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Test; + +import java.util.ArrayList; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +/** + * Test health contributor for SegmentContainerRegistry. + */ +public class SegmentContainerRegistryHealthContributorTest { + SegmentContainerRegistry segmentContainerRegistry; + SegmentContainerRegistryHealthContributor segmentContainerRegistryHealthContributor; + + @Before + public void setup() { + segmentContainerRegistry = mock(SegmentContainerRegistry.class); + segmentContainerRegistryHealthContributor = new SegmentContainerRegistryHealthContributor(segmentContainerRegistry); + when(segmentContainerRegistry.getContainers()).thenReturn(new ArrayList<SegmentContainer>()); + } + + @After + public void tearDown() { + segmentContainerRegistry.close(); + segmentContainerRegistryHealthContributor.close(); + } + + /** + * Check health of SegmentContainerRegistry with different states.
+ */ + @Test + public void testSegmentContainerHealth() { + when(segmentContainerRegistry.isClosed()).thenReturn(true); + Health.HealthBuilder builder = Health.builder().name(segmentContainerRegistryHealthContributor.getName()); + Status health = segmentContainerRegistryHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report a 'DOWN' Status.", Status.DOWN, health); + when(segmentContainerRegistry.isClosed()).thenReturn(false); + health = segmentContainerRegistryHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report an 'UP' Status.", Status.UP, health); + } +} diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/load/AttributeLoadTests.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/load/AttributeLoadTests.java index 5422930ee2b..e19e390d179 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/load/AttributeLoadTests.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/load/AttributeLoadTests.java @@ -337,7 +337,7 @@ private AttributeId getRandomKey(int attributeCount, Random rnd) { } private long getValue(int batchId, int indexInBatch, int batchSize) { - return (long) (batchId * batchSize + indexInBatch); + return batchId * batchSize + indexInBatch; } private double calculateExcessPercentage(long actualDataSize, long theoreticalDataSize) { diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumerTests.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumerTests.java index 9f2c7b7ad8c..7178741e39b 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumerTests.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeEventConsumerTests.java @@ -29,20 +29,20 @@ public class TLSConfigChangeEventConsumerTests { @Test (expected = NullPointerException.class) public void testNullCtorArgumentsAreRejected() { - new TLSConfigChangeEventConsumer(new AtomicReference<>(null), null, null); + new TLSConfigChangeEventConsumer(new AtomicReference<>(null), null, null, null); } @Test (expected = IllegalArgumentException.class) public void testEmptyPathToCertificateFileIsRejected() { TLSConfigChangeEventConsumer subjectUnderTest = new TLSConfigChangeEventConsumer(new AtomicReference<>(null), - "", "non-existent"); + "", "non-existent", SecurityConfigDefaults.TLS_PROTOCOL_VERSION); subjectUnderTest.accept(null); } @Test (expected = IllegalArgumentException.class) public void testEmptyPathToKeyFileIsRejected() { TLSConfigChangeEventConsumer subjectUnderTest = new TLSConfigChangeEventConsumer(new AtomicReference<>(null), - "non-existent", ""); + "non-existent", "", SecurityConfigDefaults.TLS_PROTOCOL_VERSION); subjectUnderTest.accept(null); } @@ -52,10 +52,10 @@ public void testInvocationIncrementsReloadCounter() { String pathToKeyFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME; AtomicReference sslCtx = new AtomicReference<>(TLSHelper.newServerSslContext( - new File(pathToCertificateFile), new File(pathToKeyFile))); + new File(pathToCertificateFile), new File(pathToKeyFile), SecurityConfigDefaults.TLS_PROTOCOL_VERSION)); TLSConfigChangeEventConsumer subjectUnderTest = new TLSConfigChangeEventConsumer(sslCtx,
pathToCertificateFile, - pathToKeyFile); + pathToKeyFile, SecurityConfigDefaults.TLS_PROTOCOL_VERSION); subjectUnderTest.accept(null); assertEquals(1, subjectUnderTest.getNumOfConfigChangesSinceStart()); diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumerTests.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumerTests.java index 92fe963f8be..34ccc3e8246 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumerTests.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSConfigChangeFileConsumerTests.java @@ -28,13 +28,13 @@ public class TLSConfigChangeFileConsumerTests { @Test(expected = NullPointerException.class) public void testNullCtorArgumentsAreRejected() { - new TLSConfigChangeFileConsumer(new AtomicReference<>(null), null, null); + new TLSConfigChangeFileConsumer(new AtomicReference<>(null), null, null, null); } @Test (expected = IllegalArgumentException.class) public void testEmptyPathToCertificateFileIsRejected() { TLSConfigChangeFileConsumer subjectUnderTest = new TLSConfigChangeFileConsumer(new AtomicReference<>(null), - "", "non-existent"); + "", "non-existent", SecurityConfigDefaults.TLS_PROTOCOL_VERSION); subjectUnderTest.accept(null); assertEquals(1, subjectUnderTest.getNumOfConfigChangesSinceStart()); @@ -43,7 +43,7 @@ public void testEmptyPathToCertificateFileIsRejected() { @Test (expected = IllegalArgumentException.class) public void testEmptyPathToKeyFileIsRejected() { TLSConfigChangeFileConsumer subjectUnderTest = new TLSConfigChangeFileConsumer(new AtomicReference<>(null), - "non-existent", ""); + "non-existent", "", SecurityConfigDefaults.TLS_PROTOCOL_VERSION); subjectUnderTest.accept(null); assertEquals(1, subjectUnderTest.getNumOfConfigChangesSinceStart()); } @@ -54,10 +54,10 @@ public void testInvocationIncrementsReloadCounter() { String pathToKeyFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME; AtomicReference sslCtx = new AtomicReference<>(TLSHelper.newServerSslContext( - new File(pathToCertificateFile), new File(pathToKeyFile))); + new File(pathToCertificateFile), new File(pathToKeyFile), SecurityConfigDefaults.TLS_PROTOCOL_VERSION)); TLSConfigChangeFileConsumer subjectUnderTest = new TLSConfigChangeFileConsumer(sslCtx, pathToCertificateFile, - pathToKeyFile); + pathToKeyFile, SecurityConfigDefaults.TLS_PROTOCOL_VERSION); subjectUnderTest.accept(null); assertEquals(1, subjectUnderTest.getNumOfConfigChangesSinceStart()); diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSHelperTests.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSHelperTests.java index 53afabc479a..588a8634291 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSHelperTests.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/security/TLSHelperTests.java @@ -36,7 +36,7 @@ public void testNewServerSslContextSucceedsWhenInputIsValid() { String pathToKeyFile = "../../../config/" + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME; SslContext sslCtx = TLSHelper.newServerSslContext(new File(pathToCertificateFile), - new File(pathToKeyFile)); + new File(pathToKeyFile), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); assertNotNull(sslCtx); } @@ 
-44,33 +44,33 @@ public void testNewServerSslContextSucceedsWhenInputIsValid() { @Test public void testNewServerSslContextFailsWhenInputIsNull() { assertThrows("Null pathToCertificateFile argument wasn't rejected.", - () -> TLSHelper.newServerSslContext(null, PATH_NONEMPTY), + () -> TLSHelper.newServerSslContext(null, PATH_NONEMPTY, SecurityConfigDefaults.TLS_PROTOCOL_VERSION), e -> e instanceof NullPointerException); assertThrows("Null pathToServerKeyFile argument wasn't rejected.", - () -> TLSHelper.newServerSslContext(PATH_NONEMPTY, null), + () -> TLSHelper.newServerSslContext(PATH_NONEMPTY, null, SecurityConfigDefaults.TLS_PROTOCOL_VERSION), e -> e instanceof NullPointerException); } @Test public void testNewServerSslContextFailsWhenInputIsEmpty() { assertThrows("Empty pathToCertificateFile argument wasn't rejected.", - () -> TLSHelper.newServerSslContext(PATH_EMPTY, PATH_NONEMPTY), + () -> TLSHelper.newServerSslContext(PATH_EMPTY, PATH_NONEMPTY, SecurityConfigDefaults.TLS_PROTOCOL_VERSION), e -> e instanceof IllegalArgumentException); assertThrows("Empty pathToServerKeyFile argument wasn't rejected.", - () -> TLSHelper.newServerSslContext(PATH_NONEMPTY, PATH_EMPTY), + () -> TLSHelper.newServerSslContext(PATH_NONEMPTY, PATH_EMPTY, SecurityConfigDefaults.TLS_PROTOCOL_VERSION), e -> e instanceof IllegalArgumentException); } @Test public void testNewServerSslContextFailsWhenInputFilesDontExist() { assertThrows("Non-existent pathToCertificateFile wasn't rejected.", - () -> TLSHelper.newServerSslContext(PATH_NONEXISTENT, PATH_NONEMPTY), + () -> TLSHelper.newServerSslContext(PATH_NONEXISTENT, PATH_NONEMPTY, SecurityConfigDefaults.TLS_PROTOCOL_VERSION), e -> e instanceof IllegalArgumentException); assertThrows("Non-existent pathToServerKeyFile argument wasn't rejected.", - () -> TLSHelper.newServerSslContext(PATH_NONEMPTY, PATH_NONEXISTENT), + () -> TLSHelper.newServerSslContext(PATH_NONEMPTY, PATH_NONEXISTENT, SecurityConfigDefaults.TLS_PROTOCOL_VERSION), e -> e instanceof IllegalArgumentException); } } diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/AutoScaleProcessorTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/AutoScaleProcessorTest.java index 20ee3533947..118a18a81e1 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/AutoScaleProcessorTest.java +++ b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/AutoScaleProcessorTest.java @@ -188,7 +188,7 @@ public void scaleTest() { result4.complete(null); } }); - + @Cleanup AutoScaleProcessor monitor = new AutoScaleProcessor(writer, AutoScalerConfig.builder().with(AutoScalerConfig.MUTE_IN_SECONDS, 0) .with(AutoScalerConfig.COOLDOWN_IN_SECONDS, 0) @@ -389,6 +389,7 @@ public void testSteadyStateExpiry() { HashMap> map = new HashMap<>(); HashMap lastAccessedTime = new HashMap<>(); List evicted = new ArrayList<>(); + @SuppressWarnings("unchecked") SimpleCache> simpleCache = mock(SimpleCache.class); AtomicLong clock = new AtomicLong(0L); Function cleanup = m -> { diff --git a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderTest.java b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderTest.java index 5b91b16f932..246272cad44 100644 --- a/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderTest.java +++ 
b/segmentstore/server/host/src/test/java/io/pravega/segmentstore/server/host/stat/TableSegmentStatsRecorderTest.java @@ -80,6 +80,11 @@ public void testMetrics() { r.iterateEntries(SEGMENT_NAME, 8, ELAPSED); verify(r.getIterateEntriesLatency()).reportSuccessEvent(ELAPSED); verify(r.getIterateEntries()).add(8); + + // GetInfo + r.getInfo(SEGMENT_NAME, ELAPSED); + verify(r.getGetInfoLatency()).reportSuccessEvent(ELAPSED); + verify(r.getGetInfo()).inc(); } @RequiredArgsConstructor diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/CacheManager.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/CacheManager.java index 6a68173beb5..1ab9cb327d4 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/CacheManager.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/CacheManager.java @@ -17,6 +17,7 @@ import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Preconditions; +import com.google.common.collect.ImmutableMap; import com.google.common.util.concurrent.AbstractScheduledService; import io.pravega.common.Exceptions; import io.pravega.common.ObjectClosedException; @@ -37,7 +38,12 @@ import java.util.concurrent.atomic.AtomicReference; import javax.annotation.concurrent.GuardedBy; import javax.annotation.concurrent.ThreadSafe; + +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; +import io.pravega.shared.health.impl.AbstractHealthContributor; import lombok.Getter; +import lombok.NonNull; import lombok.extern.slf4j.Slf4j; /** @@ -604,5 +610,37 @@ public String toString() { } } + /** + * A contributor that manages the health of the CacheManager. + */ + public static class CacheManagerHealthContributor extends AbstractHealthContributor { + + private final CacheManager cacheManager; + + public CacheManagerHealthContributor(@NonNull CacheManager cacheManager) { + super("CacheManager"); + this.cacheManager = cacheManager; + } + + @Override + public Status doHealthCheck(Health.HealthBuilder builder) { + Status status = Status.DOWN; + boolean running = !cacheManager.closed.get(); + if (running) { + status = Status.UP; + } + + builder.details(ImmutableMap.of( + "cacheState", this.cacheManager.lastCacheState.get(), + "numOfClients", this.cacheManager.clients.size(), + "currentGeneration", this.cacheManager.currentGeneration, + "oldGeneration", this.cacheManager.oldestGeneration, + "essentialEntriesOnly", this.cacheManager.essentialEntriesOnly + )); + + return status; + } + } + //endregion } diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainer.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainer.java index 69af239671d..d81c200f311 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainer.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainer.java @@ -17,8 +17,8 @@ import com.google.common.annotations.Beta; import com.google.common.annotations.VisibleForTesting; +import io.pravega.segmentstore.contracts.SegmentApi; import io.pravega.segmentstore.contracts.SegmentProperties; -import io.pravega.segmentstore.contracts.StreamSegmentStore; import io.pravega.segmentstore.server.logs.MetadataUpdateException; import io.pravega.segmentstore.server.logs.operations.OperationPriority; import java.time.Duration; @@ -29,7 +29,7 @@ /** * Defines a Container for StreamSegments.
*/ -public interface SegmentContainer extends StreamSegmentStore, Container { +public interface SegmentContainer extends SegmentApi, Container { /** * Gets a collection of SegmentProperties for all active Segments (Active Segment = a segment that is currently allocated * in the internal Container's Metadata (usually a segment with recent activity)). diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainerRegistry.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainerRegistry.java index 2b9d60d5a42..000583b5b62 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainerRegistry.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/SegmentContainerRegistry.java @@ -14,9 +14,9 @@ * limitations under the License. */ package io.pravega.segmentstore.server; - import io.pravega.segmentstore.contracts.ContainerNotFoundException; import java.time.Duration; +import java.util.Collection; import java.util.concurrent.CompletableFuture; /** @@ -38,6 +38,13 @@ public interface SegmentContainerRegistry extends AutoCloseable { */ SegmentContainer getContainer(int containerId) throws ContainerNotFoundException; + /** + * Gets references to all the SegmentContainers. + * + * @return A collection of the SegmentContainers within the registry. + */ + Collection<SegmentContainer> getContainers(); + /** * Starts processing the container with given Id. * @@ -50,6 +57,13 @@ public interface SegmentContainerRegistry extends AutoCloseable { */ CompletableFuture startContainer(int containerId, Duration timeout); + /** + * Tells whether the registry is closed. + * + * @return True if the registry is closed, false otherwise. + */ + boolean isClosed(); + /** * Starts processing the container associated with the given handle. * diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/attributes/AttributeIndexConfig.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/attributes/AttributeIndexConfig.java index 5c0f19ad020..fc85365431b 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/attributes/AttributeIndexConfig.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/attributes/AttributeIndexConfig.java @@ -29,7 +29,7 @@ public class AttributeIndexConfig { //region Config Names public static final Property ATTRIBUTE_SEGMENT_ROLLING_SIZE = Property.named("attributeSegment.rolling.size.bytes", 32 * 1024 * 1024, "attributeSegmentRollingSizeBytes"); - private static final int MAX_INDEX_PAGE_SIZE_VALUE = (int) Short.MAX_VALUE; // Max allowed by BTreeIndex. + private static final int MAX_INDEX_PAGE_SIZE_VALUE = Short.MAX_VALUE; // Max allowed by BTreeIndex.
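The getContainers() and isClosed() methods added to SegmentContainerRegistry above are the surface the new host-side health contributors consume: a closed registry maps to a DOWN status, an open one to UP, and per-container contributors hang off the returned collection. A minimal sketch of that mapping, using illustrative stand-in types rather than the actual Pravega health framework classes:

    import java.util.Collection;

    // Stand-ins for the registry surface added above; names here are illustrative only.
    interface RegistryView {
        boolean isClosed();
        Collection<?> getContainers();
    }

    enum SimpleStatus { UP, DOWN }

    final class RegistryHealthCheck {
        private final RegistryView registry;

        RegistryHealthCheck(RegistryView registry) {
            this.registry = registry;
        }

        // Mirrors the behavior asserted in SegmentContainerRegistryHealthContributorTest:
        // a closed registry reports DOWN, an open one reports UP.
        SimpleStatus check() {
            return registry.isClosed() ? SimpleStatus.DOWN : SimpleStatus.UP;
        }
    }

A registry-level check of this shape stays cheap because it only reads a flag; walking getContainers() is left to the per-container contributors.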
public static final Property MAX_INDEX_PAGE_SIZE = Property.named("indexPage.size.bytes.max", MAX_INDEX_PAGE_SIZE_VALUE, "maxIndexPageSizeBytes"); private static final int MIN_INDEX_PAGE_SIZE_VALUE = 1024; private static final String COMPONENT_CODE = "attributeindex"; diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ContainerEventProcessorImpl.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ContainerEventProcessorImpl.java index 40363ff3073..bb2cff08888 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ContainerEventProcessorImpl.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ContainerEventProcessorImpl.java @@ -22,6 +22,7 @@ import io.pravega.common.Timer; import io.pravega.common.concurrent.AbstractThreadPoolService; import io.pravega.common.concurrent.Futures; +import io.pravega.common.concurrent.Services; import io.pravega.common.io.BoundedInputStream; import io.pravega.common.io.serialization.RevisionDataInput; import io.pravega.common.io.serialization.RevisionDataOutput; @@ -29,7 +30,6 @@ import io.pravega.common.util.BufferView; import io.pravega.common.util.ByteArraySegment; import io.pravega.segmentstore.contracts.SegmentType; -import io.pravega.segmentstore.contracts.StreamSegmentNotExistsException; import io.pravega.segmentstore.server.ContainerEventProcessor; import io.pravega.segmentstore.server.DirectSegmentAccess; import io.pravega.segmentstore.server.SegmentContainer; @@ -122,15 +122,13 @@ class ContainerEventProcessorImpl implements ContainerEventProcessor { * @return A future that, when completed, contains reference to the Segment to be used by a given * {@link ContainerEventProcessor.EventProcessor} based on its name. */ - private static Function> getOrCreateInternalSegment(SegmentContainer container, + @VisibleForTesting + static Function> getOrCreateInternalSegment(SegmentContainer container, MetadataStore metadataStore, Duration timeout) { - return s -> Futures.exceptionallyComposeExpecting( - container.forSegment(getEventProcessorSegmentName(container.getId(), s), timeout), - e -> e instanceof StreamSegmentNotExistsException, - () -> metadataStore.registerPinnedSegment(getEventProcessorSegmentName(container.getId(), s), - SYSTEM_CRITICAL_SEGMENT, null, timeout) // Segment should be pinned. - .thenCompose(l -> container.forSegment(getEventProcessorSegmentName(container.getId(), s), timeout))); + return s -> metadataStore.registerPinnedSegment(getEventProcessorSegmentName(container.getId(), s), + SYSTEM_CRITICAL_SEGMENT, null, timeout) // Segment should be pinned. + .thenCompose(l -> container.forSegment(getEventProcessorSegmentName(container.getId(), s), timeout)); } //endregion @@ -329,22 +327,30 @@ long getOutstandingBytes() { //region AutoCloseable implementation /** - * This method stop the service (superclass), auto-unregisters from the existing set of active + * This method stops the service (superclass), auto-unregisters from the existing set of active * {@link EventProcessor} instances (via onClose callback), and closes the metrics. 
*/ @Override public void close() { if (!this.closed.getAndSet(true)) { log.info("{}: Closing EventProcessor.", this.traceObjectId); - super.close(); + Services.onStop(super.stopAsync(), + () -> log.info("{}: EventProcessor service shutdown complete.", this.traceObjectId), + this::failureCallback, + this.executor); this.metrics.close(); this.onClose.run(); } } + @VisibleForTesting + void failureCallback(Throwable ex) { + log.warn("{}: Problem shutting down EventProcessor service.", this.traceObjectId, ex); + } + //endregion - //region Runnable implementation + //region AbstractThreadPoolService implementation @Override protected Duration getShutdownTimeout() { diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/MetadataStore.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/MetadataStore.java index 7674b0d0b38..068073d9b81 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/MetadataStore.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/MetadataStore.java @@ -738,7 +738,7 @@ void completeExceptionally(Throwable ex) { @Data @Builder - protected static class SegmentInfo { + public static class SegmentInfo { private static final SegmentInfoSerializer SERIALIZER = new SegmentInfoSerializer(); private final long segmentId; private final SegmentProperties properties; @@ -800,10 +800,10 @@ static SegmentInfo deserialize(BufferView contents) { } } - static class SegmentInfoBuilder implements ObjectBuilder { + public static class SegmentInfoBuilder implements ObjectBuilder { } - private static class SegmentInfoSerializer extends VersionedSerializer.WithBuilder { + public static class SegmentInfoSerializer extends VersionedSerializer.WithBuilder { @Override protected SegmentInfo.SegmentInfoBuilder newBuilder() { return SegmentInfo.builder(); diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainer.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainer.java index f05b3455922..444906a6b80 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainer.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainer.java @@ -197,6 +197,12 @@ public CompletableFuture mergeStreamSegment(String tar return unsupported("mergeStreamSegment"); } + @Override + public CompletableFuture mergeStreamSegment(String targetStreamSegment, String sourceStreamSegment, + AttributeUpdateCollection attributes, Duration timeout) { + return unsupported("mergeStreamSegment"); + } + @Override public CompletableFuture sealStreamSegment(String streamSegmentName, Duration timeout) { return unsupported("sealStreamSegment"); diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StorageEventProcessor.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StorageEventProcessor.java new file mode 100644 index 00000000000..19408434940 --- /dev/null +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StorageEventProcessor.java @@ -0,0 +1,143 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.containers; + +import com.google.common.base.Function; +import com.google.common.base.Preconditions; +import io.pravega.common.concurrent.Futures; +import io.pravega.common.util.BufferView; +import io.pravega.segmentstore.server.ContainerEventProcessor; +import io.pravega.segmentstore.storage.chunklayer.AbstractTaskQueueManager; +import io.pravega.segmentstore.storage.chunklayer.GarbageCollector; +import lombok.Getter; +import lombok.extern.slf4j.Slf4j; +import lombok.val; + +import java.io.IOException; +import java.time.Duration; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ConcurrentHashMap; + +/** + * Implementation of {@link AbstractTaskQueueManager} that uses {@link io.pravega.segmentstore.server.ContainerEventProcessor.EventProcessor} + * as the underlying implementation. + * This class acts as an adaptor that converts calls on {@link io.pravega.segmentstore.server.ContainerEventProcessor.EventProcessor} + * into appropriate calls on {@link AbstractTaskQueueManager} and vice versa. + */ +@Slf4j +public class StorageEventProcessor implements AbstractTaskQueueManager<GarbageCollector.TaskInfo> { + + private static final GarbageCollector.TaskInfo.Serializer SERIALIZER = new GarbageCollector.TaskInfo.Serializer(); + + private final int containerID; + private final ContainerEventProcessor eventProcessor; + private final Function<List<GarbageCollector.TaskInfo>, CompletableFuture<Void>> callBack; + private final int maxItemsAtOnce; + private final String traceObjectId; + @Getter + private final ConcurrentHashMap<String, ContainerEventProcessor.EventProcessor> eventProcessorMap = new ConcurrentHashMap<>(); + + /** + * Constructor. + * + * @param containerID Container id. + * @param eventProcessor Instance of {@link ContainerEventProcessor} to use. + * @param callBack Function called to process a batch of events. + * @param maxItemsAtOnce Maximum number of events to process in a single batch. + */ + public StorageEventProcessor(int containerID, + ContainerEventProcessor eventProcessor, + Function<List<GarbageCollector.TaskInfo>, + CompletableFuture<Void>> callBack, + int maxItemsAtOnce) { + this.containerID = containerID; + this.eventProcessor = Preconditions.checkNotNull(eventProcessor, "eventProcessor"); + this.callBack = Preconditions.checkNotNull(callBack, "callBack"); + this.maxItemsAtOnce = maxItemsAtOnce; + this.traceObjectId = String.format("StorageEventProcessor[%d]", containerID); + } + + /** + * Adds a queue with the given name. + * + * @param queueName Name of the queue. + * @param ignoreProcessing Whether the processing should be ignored. + */ + @Override + public CompletableFuture<Void> addQueue(String queueName, Boolean ignoreProcessing) { + Preconditions.checkNotNull(queueName, "queueName"); + val config = new ContainerEventProcessor.EventProcessorConfig(maxItemsAtOnce, Long.MAX_VALUE); + val f = ignoreProcessing ? + eventProcessor.forDurableQueue(queueName) : + eventProcessor.forConsumer(queueName, this::processEvents, config); + return f.thenAccept(processor -> eventProcessorMap.put(queueName, processor)); + } + + /** + * Adds a task to the given queue. + * + * @param queueName Name of the queue. + * @param task Task to add.
+ */ + @Override + public CompletableFuture<Void> addTask(String queueName, GarbageCollector.TaskInfo task) { + Preconditions.checkNotNull(queueName, "queueName"); + Preconditions.checkNotNull(task, "task"); + try { + val processor = eventProcessorMap.get(queueName); + Preconditions.checkArgument(null != processor, "Attempt to add to non-existent queue (%s).", queueName); + return Futures.toVoid(processor.add(SERIALIZER.serialize(task), Duration.ofMillis(1000))); + } catch (Throwable e) { + return CompletableFuture.failedFuture(e); + } + } + + @Override + public void close() throws Exception { + for (val entry : eventProcessorMap.entrySet()) { + try { + entry.getValue().close(); + } catch (Exception e) { + log.error("{}: Error while closing event processor name={}.", traceObjectId, entry.getKey(), e); + } + } + } + + /** + * Callback invoked by {@link io.pravega.segmentstore.server.ContainerEventProcessor.EventProcessor} when one or more + * events have been read from the internal Segment. + * + * @param events List of events to process. + * @return A CompletableFuture that, when completed, will indicate the operation succeeded. + * If the operation failed, it will contain the cause of the failure. + */ + CompletableFuture<Void> processEvents(List<BufferView> events) { + Preconditions.checkNotNull(events, "events"); + log.debug("{}: processEvents called with {} events.", traceObjectId, events.size()); + ArrayList<GarbageCollector.TaskInfo> batch = new ArrayList<>(); + for (val event : events) { + try { + batch.add(SERIALIZER.deserialize(event)); + } catch (IOException e) { + log.error("{}: processEvents failed while deserializing batch.", traceObjectId, e); + return CompletableFuture.failedFuture(e); + } + } + return callBack.apply(batch); + } +} diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StreamSegmentContainer.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StreamSegmentContainer.java index 7ac4baf0caf..e2186a792e1 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StreamSegmentContainer.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/containers/StreamSegmentContainer.java @@ -95,6 +95,7 @@ import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.Consumer; +import java.util.function.Supplier; import java.util.stream.Collectors; import javax.annotation.Nullable; import lombok.Getter; @@ -226,10 +227,14 @@ private CompletableFuture initializeStorage() { this.storage.initialize(this.metadata.getContainerEpoch()); if (this.storage instanceof ChunkedSegmentStorage) { - ChunkedSegmentStorage chunkedStorage = (ChunkedSegmentStorage) this.storage; + ChunkedSegmentStorage chunkedSegmentStorage = (ChunkedSegmentStorage) this.storage; val snapshotInfoStore = getStorageSnapshotInfoStore(); // Bootstrap - return chunkedStorage.bootstrap(snapshotInfoStore); + StorageEventProcessor eventProcessor = new StorageEventProcessor(this.metadata.getContainerId(), + this.containerEventProcessor, + batch -> chunkedSegmentStorage.getGarbageCollector().processBatch(batch), + chunkedSegmentStorage.getConfig().getGarbageCollectionMaxConcurrency()); + return chunkedSegmentStorage.bootstrap(snapshotInfoStore, eventProcessor); } return CompletableFuture.completedFuture(null); } @@ -278,11 +283,7 @@ protected void doStart() { Services.startAsync(this.durableLog, this.executor) .thenComposeAsync(v -> startWhenDurableLogOnline(), this.executor)
.whenComplete((v, ex) -> { - if (ex == null) { - // We are started and ready to accept requests when DurableLog starts. All other (secondary) services - // are not required for accepting new operations and can still start in the background. - notifyStarted(); - } else { + if (ex != null) { doStop(ex); } }); @@ -295,18 +296,26 @@ private CompletableFuture startWhenDurableLogOnline() { // Attach a listener to the DurableLog's awaitOnline() Future and initiate the services' startup when that // completes successfully. log.info("{}: DurableLog is OFFLINE. Not starting secondary services yet.", this.traceObjectId); + notifyStarted(); isReady = CompletableFuture.completedFuture(null); delayedStart = this.durableLog.awaitOnline() .thenComposeAsync(v -> initializeSecondaryServices(), this.executor); } else { // DurableLog is already online. Immediately initialize secondary services. In this particular case, it needs // to be done synchronously since we need to initialize Storage before notifying that we are fully started. - isReady = initializeSecondaryServices(); + isReady = initializeSecondaryServices().thenRun(() -> notifyStarted()); delayedStart = isReady; } - // Delayed start. Secondary services need not be started in order for us to accept requests. - delayedStart.thenComposeAsync(v -> startSecondaryServicesAsync(), this.executor) + // We are started and ready to accept requests when DurableLog starts. All other (secondary) services + // are not required for accepting new operations and can still start in the background. + delayedStart.thenComposeAsync(v -> { + if (this.storage instanceof ChunkedSegmentStorage) { + return ((ChunkedSegmentStorage) this.storage).finishBootstrap(); + } + return CompletableFuture.completedFuture(null); + }, this.executor) + .thenComposeAsync(v -> startSecondaryServicesAsync(), this.executor) .whenComplete((v, ex) -> { if (ex == null) { // Successful start. @@ -546,7 +555,14 @@ public CompletableFuture truncateStreamSegment(String streamSegmentName, l } @Override - public CompletableFuture mergeStreamSegment(String targetStreamSegment, String sourceStreamSegment, Duration timeout) { + public CompletableFuture mergeStreamSegment(String targetStreamSegment, String sourceStreamSegment, + Duration timeout) { + return mergeStreamSegment(targetStreamSegment, sourceStreamSegment, null, timeout); + } + + @Override + public CompletableFuture mergeStreamSegment(String targetStreamSegment, String sourceStreamSegment, + AttributeUpdateCollection attributes, Duration timeout) { ensureRunning(); logRequest("mergeStreamSegment", targetStreamSegment, sourceStreamSegment); @@ -561,7 +577,7 @@ public CompletableFuture mergeStreamSegment(String tar return this.metadataStore .getOrAssignSegmentId(targetStreamSegment, timer.getRemaining(), targetSegmentId -> this.metadataStore.getOrAssignSegmentId(sourceStreamSegment, timer.getRemaining(), - sourceSegmentId -> mergeStreamSegment(targetSegmentId, sourceSegmentId, timer))) + sourceSegmentId -> mergeStreamSegment(targetSegmentId, sourceSegmentId, attributes, timer))) .handleAsync((msr, ex) -> { if (ex == null || Exceptions.unwrap(ex) instanceof StreamSegmentMergedException) { // No exception or segment was already merged. Need to clear SegmentInfo for source. 
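
The doStart()/startWhenDurableLogOnline() changes above follow a common async-startup pattern: report the service as started as soon as the primary dependency (the DurableLog) is online, then finish the storage bootstrap and secondary services in the background, stopping the service if any stage fails. A minimal sketch of that pattern using only the JDK (the stage names are illustrative, not the actual StreamSegmentContainer members):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DelayedStartSketch {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        CompletableFuture<Void> primaryOnline = CompletableFuture.runAsync(
                () -> System.out.println("primary dependency (e.g. DurableLog) online"), executor);
        primaryOnline
                // Signal readiness as soon as the primary dependency is up.
                .thenRun(() -> System.out.println("notifyStarted(): accepting requests"))
                // Remaining initialization continues in the background.
                .thenComposeAsync(v -> CompletableFuture.runAsync(
                        () -> System.out.println("finish storage bootstrap"), executor), executor)
                .thenComposeAsync(v -> CompletableFuture.runAsync(
                        () -> System.out.println("start secondary services"), executor), executor)
                .whenComplete((v, ex) -> {
                    if (ex != null) {
                        System.out.println("startup failed, stopping: " + ex);
                    }
                    executor.shutdown();
                })
                .join();
    }
}
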
@@ -579,7 +595,9 @@ public CompletableFuture mergeStreamSegment(String tar }, this.executor); } - private CompletableFuture mergeStreamSegment(long targetSegmentId, long sourceSegmentId, TimeoutTimer timer) { + private CompletableFuture mergeStreamSegment(long targetSegmentId, long sourceSegmentId, + AttributeUpdateCollection attributeUpdates, + TimeoutTimer timer) { // Get a reference to the source segment's metadata now, before the merge. It may not be accessible afterwards. SegmentMetadata sourceMetadata = this.metadata.getStreamSegmentMetadata(sourceSegmentId); @@ -593,14 +611,20 @@ private CompletableFuture mergeStreamSegment(long targ // to and including the seal, so if there were any writes outstanding before, they should now be reflected in it. if (sourceMetadata.getLength() == 0) { // Source is still empty after sealing - OK to delete. - log.debug("{}: Deleting empty source segment instead of merging {}.", this.traceObjectId, sourceMetadata.getName()); - return deleteStreamSegment(sourceMetadata.getName(), timer.getRemaining()).thenApply(v2 -> - new MergeStreamSegmentResult(this.metadata.getStreamSegmentMetadata(targetSegmentId).getLength(), - sourceMetadata.getLength(), sourceMetadata.getAttributes())); + log.debug("{}: Updating attributes (if any) and deleting empty source segment instead of merging {}.", + this.traceObjectId, sourceMetadata.getName()); + // Execute the attribute update on the target segment only if needed. + Supplier> updateAttributesIfNeeded = () -> attributeUpdates == null ? + CompletableFuture.completedFuture(null) : + updateAttributesForSegment(targetSegmentId, attributeUpdates, timer.getRemaining()); + return updateAttributesIfNeeded.get() + .thenCompose(v2 -> deleteStreamSegment(sourceMetadata.getName(), timer.getRemaining()) + .thenApply(v3 -> new MergeStreamSegmentResult(this.metadata.getStreamSegmentMetadata(targetSegmentId).getLength(), + sourceMetadata.getLength(), sourceMetadata.getAttributes()))); } else { // Source now has some data - we must merge the two. - MergeSegmentOperation operation = new MergeSegmentOperation(targetSegmentId, sourceSegmentId); - return addOperation(operation, timer.getRemaining()).thenApply(v2 -> + MergeSegmentOperation operation = new MergeSegmentOperation(targetSegmentId, sourceSegmentId, attributeUpdates); + return processAttributeUpdaterOperation(operation, timer).thenApply(v2 -> new MergeStreamSegmentResult(operation.getStreamSegmentOffset() + operation.getLength(), operation.getLength(), sourceMetadata.getAttributes())); } @@ -608,9 +632,9 @@ private CompletableFuture mergeStreamSegment(long targ } else { // Source is not empty, so we cannot delete. Make use of the DurableLog's pipelining abilities by queueing up // the Merge right after the Seal. 
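
Per the new overload introduced in this patch, a merge can now carry attribute updates that are applied to the target segment as part of the merge (or together with the deletion of an empty source, as above). A rough usage sketch; the attribute id, the update type chosen here, and the AttributeUpdateCollection.from(...) factory are assumptions for illustration:

import io.pravega.segmentstore.contracts.AttributeId;
import io.pravega.segmentstore.contracts.AttributeUpdate;
import io.pravega.segmentstore.contracts.AttributeUpdateCollection;
import io.pravega.segmentstore.contracts.AttributeUpdateType;
import io.pravega.segmentstore.contracts.StreamSegmentStore;

import java.time.Duration;

public class MergeWithAttributesSketch {

    // Merges 'source' into 'target' and, in the same operation, bumps a
    // (hypothetical) per-target counter attribute.
    static void mergeAndCount(StreamSegmentStore store, String target, String source) {
        AttributeId counterId = AttributeId.uuid(0L, 1L); // hypothetical attribute id
        AttributeUpdateCollection updates = AttributeUpdateCollection.from(
                new AttributeUpdate(counterId, AttributeUpdateType.Accumulate, 1L));
        store.mergeStreamSegment(target, source, updates, Duration.ofSeconds(30))
                .thenRun(() -> System.out.println("merge of " + source + " into " + target + " completed"));
    }
}
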
- MergeSegmentOperation operation = new MergeSegmentOperation(targetSegmentId, sourceSegmentId);
+ MergeSegmentOperation operation = new MergeSegmentOperation(targetSegmentId, sourceSegmentId, attributeUpdates);
return CompletableFuture.allOf(sealResult,
- addOperation(operation, timer.getRemaining())).thenApply(v2 ->
+ processAttributeUpdaterOperation(operation, timer)).thenApply(v2 ->
new MergeStreamSegmentResult(operation.getStreamSegmentOffset() + operation.getLength(),
operation.getLength(), sourceMetadata.getAttributes()));
}
diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/OperationProcessor.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/OperationProcessor.java
index 5fb6307e7ff..7ff8c24edfa 100644
--- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/OperationProcessor.java
+++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/OperationProcessor.java
@@ -48,6 +48,7 @@ import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeoutException;
import javax.annotation.concurrent.GuardedBy;
import javax.annotation.concurrent.ThreadSafe;
import lombok.Getter;
@@ -66,6 +67,7 @@ class OperationProcessor extends AbstractThreadPoolService implements AutoClosea
private static final Duration SHUTDOWN_TIMEOUT = Duration.ofSeconds(10);
private static final int MAX_READ_AT_ONCE = 1000;
private static final int MAX_COMMIT_QUEUE_SIZE = 50;
+ private static final Duration COMMIT_PROCESSOR_TIMEOUT = Duration.ofSeconds(10);
private final UpdateableContainerMetadata metadata;
private final MemoryStateUpdater stateUpdater;
@@ -147,11 +149,12 @@ protected CompletableFuture<Void> doRun() {
// As opposed to the QueueProcessor, this needs to process all pending commits and not discard them, even when
// we receive a stop signal (from doStop()), otherwise we could be left with an inconsistent in-memory state.
val commitProcessor = Futures
- .loop(() -> isRunning() || this.commitQueue.size() > 0,
- () -> this.commitQueue.take(MAX_COMMIT_QUEUE_SIZE)
- .thenAcceptAsync(this::processCommits, this.executor),
+ .loop(() -> (isRunning() && !queueProcessor.isCompletedExceptionally()) || this.commitQueue.size() > 0,
+ () -> this.commitQueue.take(MAX_COMMIT_QUEUE_SIZE, COMMIT_PROCESSOR_TIMEOUT, this.executor)
+ .handleAsync(this::handleProcessCommits, this.executor),
this.executor)
.whenComplete((r, ex) -> {
+ log.info("{}: Completing and closing commitProcessor. Is OperationProcessor running? {}", this.traceObjectId, isRunning());
// The CommitProcessor is done. Safe to close its queue now, regardless of whether it failed or
// shut down normally.
val uncommittedOperations = this.commitQueue.close();
@@ -159,6 +162,7 @@ protected CompletableFuture<Void> doRun() {
// Update the cacheUtilizationProvider with the fact that these operations are no longer pending for the cache.
uncommittedOperations.stream().flatMap(Collection::stream).forEach(this.state::notifyOperationCommitted);
if (ex != null) {
+ log.warn("{}: commitProcessor completed exceptionally {}.", this.traceObjectId, ex.toString());
throw new CompletionException(ex);
}
});
@@ -166,6 +170,20 @@
.exceptionally(this::iterationErrorHandler);
}
+ @SneakyThrows
+ private Void handleProcessCommits(Queue<List<CompletableOperation>> items, Throwable ex) {
+ // Check if there is an exception from taking elements from commitQueue. If we get a TimeoutException, it is
+ // expected, so do nothing. If any other exception comes from the commitQueue, then re-throw.
+ if (ex != null && Exceptions.unwrap(ex) instanceof TimeoutException) {
+ return null;
+ } else if (ex != null && !(Exceptions.unwrap(ex) instanceof TimeoutException)) {
+ throw ex;
+ }
+ // No exceptions and we got some elements to process.
+ processCommits(items);
+ return null;
+ }
+
@Override
protected void doStop() {
// We need to first stop the operation queue, which will prevent any new items from being processed.
diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/SegmentMetadataUpdateTransaction.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/SegmentMetadataUpdateTransaction.java
index 37719c06157..b8b31e81b8d 100644
--- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/SegmentMetadataUpdateTransaction.java
+++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/SegmentMetadataUpdateTransaction.java
@@ -403,9 +403,10 @@ void preProcessOperation(DeleteSegmentOperation operation) {
* @throws StreamSegmentNotSealedException If the source Segment is not sealed.
* @throws MetadataUpdateException If the operation cannot be processed because of the current state of the metadata.
* @throws IllegalArgumentException If the operation is for a different Segment.
+ * @throws BadAttributeUpdateException If any of the given AttributeUpdates is invalid given the current state of the segment.
*/
void preProcessAsTargetSegment(MergeSegmentOperation operation, SegmentMetadataUpdateTransaction sourceMetadata)
- throws StreamSegmentSealedException, StreamSegmentNotSealedException, MetadataUpdateException {
+ throws StreamSegmentSealedException, StreamSegmentNotSealedException, MetadataUpdateException, BadAttributeUpdateException {
ensureSegmentId(operation);
if (this.sealed) {
@@ -425,6 +426,8 @@ void preProcessAsTargetSegment(MergeSegmentOperation operation, SegmentMetadataU
}
if (!this.recoveryMode) {
+ // Update attributes first on the target Segment, if any.
+ preProcessAttributes(operation.getAttributeUpdates());
// Assign entry Segment offset and update Segment offset afterwards.
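
handleProcessCommits() above encodes a simple rule for timed dequeues: a TimeoutException just means "nothing arrived this round" and the loop should continue, while any other failure must propagate. The same rule, sketched with plain CompletableFuture (the unwrap logic here is simplified compared to Pravega's Exceptions.unwrap):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutTolerantTakeSketch {

    // Completes normally (an empty round) on timeout; rethrows anything else.
    static Void onTake(String item, Throwable ex) {
        if (ex != null) {
            Throwable cause = ex instanceof CompletionException ? ex.getCause() : ex;
            if (cause instanceof TimeoutException) {
                return null; // expected: the timed take woke up empty-handed
            }
            throw new CompletionException(cause); // unexpected: fail the loop
        }
        System.out.println("processing " + item);
        return null;
    }

    public static void main(String[] args) {
        // Simulate a queue take that times out after 100ms with nothing to return.
        CompletableFuture<String> timedTake = new CompletableFuture<String>()
                .orTimeout(100, TimeUnit.MILLISECONDS);
        timedTake.handle(TimeoutTolerantTakeSketch::onTake).join(); // completes normally
    }
}
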
operation.setStreamSegmentOffset(this.length); } @@ -668,7 +671,7 @@ void acceptAsTargetSegment(MergeSegmentOperation operation, SegmentMetadataUpdat throw new MetadataUpdateException(containerId, "MergeSegmentOperation does not seem to have been pre-processed: " + operation); } - + acceptAttributes(operation.getAttributeUpdates()); this.length += transLength; this.isChanged = true; } diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/operations/MergeSegmentOperation.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/operations/MergeSegmentOperation.java index ce7159ee619..888f819da64 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/operations/MergeSegmentOperation.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/logs/operations/MergeSegmentOperation.java @@ -18,17 +18,25 @@ import com.google.common.base.Preconditions; import io.pravega.common.io.serialization.RevisionDataInput; import io.pravega.common.io.serialization.RevisionDataOutput; +import io.pravega.segmentstore.contracts.AttributeId; +import io.pravega.segmentstore.contracts.AttributeUpdate; +import io.pravega.segmentstore.contracts.AttributeUpdateCollection; +import io.pravega.segmentstore.contracts.AttributeUpdateType; +import lombok.Getter; + import java.io.IOException; /** * Log Operation that indicates a Segment is to be merged into another Segment. */ -public class MergeSegmentOperation extends StorageOperation { +public class MergeSegmentOperation extends StorageOperation implements AttributeUpdaterOperation { //region Members private long streamSegmentOffset; private long length; private long sourceSegmentId; + @Getter + private AttributeUpdateCollection attributeUpdates; //endregion @@ -45,6 +53,12 @@ public MergeSegmentOperation(long targetSegmentId, long sourceSegmentId) { this.sourceSegmentId = sourceSegmentId; this.length = -1; this.streamSegmentOffset = -1; + this.attributeUpdates = null; + } + + public MergeSegmentOperation(long targetSegmentId, long sourceSegmentId, AttributeUpdateCollection attributeUpdates) { + this(targetSegmentId, sourceSegmentId); + this.attributeUpdates = attributeUpdates; } /** @@ -113,17 +127,23 @@ public long getLength() { @Override public String toString() { return String.format( - "%s, SourceSegmentId = %d, Length = %s, MergeOffset = %s", + "%s, SourceSegmentId = %d, Length = %s, MergeOffset = %s, Attributes = %d", super.toString(), getSourceSegmentId(), toString(getLength(), -1), - toString(getStreamSegmentOffset(), -1)); + toString(getStreamSegmentOffset(), -1), + this.attributeUpdates == null ? 0 : this.attributeUpdates.size()); } //endregion + //region Serializer + static class Serializer extends OperationSerializer { private static final int SERIALIZATION_LENGTH = 5 * Long.BYTES; + // Segment merges can be conditionally based on attributes. Each attribute update is serialized as a UUID + // (attributeId, 2 longs), attribute type (1 byte), old and new values (2 longs). 
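
The serializer above extends version 0 with a revision 1 instead of introducing a new version: entries written by older code simply have no revision-1 bytes, which is why the new read path checks source.getRemaining() > 0. A minimal sketch of this backward-compatible append-a-revision idea using plain Java streams (the two-longs-plus-optional-int layout is invented for illustration, not Pravega's RevisionDataOutput format):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RevisionedSerializationSketch {

    // Revision 0: two longs. Revision 1 (optional tail): one extra int.
    static byte[] write(long a, long b, Integer rev1Field) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeLong(a);            // revision 0 payload
        out.writeLong(b);
        if (rev1Field != null) {
            out.writeInt(rev1Field); // revision 1 payload, appended at the end
        }
        return bytes.toByteArray();
    }

    static void read(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        long a = in.readLong();
        long b = in.readLong();
        // Like source.getRemaining() > 0 above: only read the newer field if present.
        Integer rev1Field = in.available() > 0 ? in.readInt() : null;
        System.out.println(a + ", " + b + ", rev1=" + rev1Field);
    }

    public static void main(String[] args) throws IOException {
        read(write(1, 2, null)); // data written by an old writer
        read(write(1, 2, 42));   // data written by a new writer
    }
}
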
+ private static final int ATTRIBUTE_UUID_UPDATE_LENGTH = RevisionDataOutput.UUID_BYTES + Byte.BYTES + 2 * Long.BYTES;
@Override
protected OperationBuilder<MergeSegmentOperation> newBuilder() {
@@ -137,7 +157,8 @@ protected byte getWriteVersion() {
@Override
protected void declareVersions() {
- version(0).revision(0, this::write00, this::read00);
+ version(0).revision(0, this::write00, this::read00)
+ .revision(1, this::write01, this::read01);
}
@Override
@@ -156,6 +177,23 @@ private void write00(MergeSegmentOperation o, RevisionDataOutput target) throws
target.writeLong(o.streamSegmentOffset);
}
+ private void write01(MergeSegmentOperation o, RevisionDataOutput target) throws IOException {
+ if (o.attributeUpdates == null || o.attributeUpdates.isEmpty()) {
+ target.getCompactIntLength(0);
+ return;
+ }
+ target.length(target.getCollectionLength(o.attributeUpdates.size(), ATTRIBUTE_UUID_UPDATE_LENGTH));
+ target.writeCollection(o.attributeUpdates, this::writeAttributeUpdateUUID01);
+ }
+
+ private void writeAttributeUpdateUUID01(RevisionDataOutput target, AttributeUpdate au) throws IOException {
+ target.writeLong(au.getAttributeId().getBitGroup(0));
+ target.writeLong(au.getAttributeId().getBitGroup(1));
+ target.writeByte(au.getUpdateType().getTypeId());
+ target.writeLong(au.getValue());
+ target.writeLong(au.getComparisonValue());
+ }
+
private void read00(RevisionDataInput source, OperationBuilder<MergeSegmentOperation> b) throws IOException {
b.instance.setSequenceNumber(source.readLong());
b.instance.setStreamSegmentId(source.readLong());
@@ -163,5 +201,21 @@ private void read00(RevisionDataInput source, OperationBuilder<MergeSegmentOper
+ private void read01(RevisionDataInput source, OperationBuilder<MergeSegmentOperation> b) throws IOException {
+ if (source.getRemaining() > 0) {
+ b.instance.attributeUpdates = source.readCollection(this::readAttributeUpdateUUID01, AttributeUpdateCollection::new);
+ }
+ }
+
+ private AttributeUpdate readAttributeUpdateUUID01(RevisionDataInput source) throws IOException {
+ return new AttributeUpdate(
+ AttributeId.uuid(source.readLong(), source.readLong()),
+ AttributeUpdateType.get(source.readByte()),
+ source.readLong(),
+ source.readLong());
+ }
}
+
+ //endregion
}
diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/mocks/SynchronousStreamSegmentStore.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/mocks/SynchronousStreamSegmentStore.java
index 7c4b675e835..62d612c4a34 100644
--- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/mocks/SynchronousStreamSegmentStore.java
+++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/mocks/SynchronousStreamSegmentStore.java
@@ -70,6 +70,13 @@ public CompletableFuture<Map<AttributeId, Long>> getAttributes(String streamSegm
return result;
}
+ @Override
+ public CompletableFuture<Void> flushToStorage(int containerId, Duration timeout) {
+ CompletableFuture<Void> result = impl.flushToStorage(containerId, timeout);
+ Futures.await(result);
+ return result;
+ }
+
@Override
public CompletableFuture<ReadResult> read(String streamSegmentName, long offset, int maxLength, Duration timeout) {
CompletableFuture<ReadResult> result = impl.read(streamSegmentName, offset, maxLength, timeout);
@@ -99,6 +106,14 @@ public CompletableFuture<MergeStreamSegmentResult> mergeStreamSegment(String tar
return result;
}
+ @Override
+ public CompletableFuture<MergeStreamSegmentResult> mergeStreamSegment(String targetStreamSegment, String sourceStreamSegment,
+ AttributeUpdateCollection attributes, Duration timeout) {
+ CompletableFuture<MergeStreamSegmentResult> result = impl.mergeStreamSegment(targetStreamSegment, sourceStreamSegment, attributes, timeout);
+ Futures.await(result);
+ return result;
+ }
+
@Override
public
CompletableFuture sealStreamSegment(String streamSegmentName, Duration timeout) { CompletableFuture result = impl.sealStreamSegment(streamSegmentName, timeout); diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/reading/CacheIndexEntry.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/reading/CacheIndexEntry.java index 8c6c1986301..bc8f3183903 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/reading/CacheIndexEntry.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/reading/CacheIndexEntry.java @@ -65,7 +65,7 @@ boolean isDataEntry() { } @Override - public String toString() { + public synchronized String toString() { return String.format("%s, Address = %d", super.toString(), this.cacheAddress); } } diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/SegmentContainerCollection.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/SegmentContainerCollection.java index 6156ae066d5..fd4022e6525 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/SegmentContainerCollection.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/SegmentContainerCollection.java @@ -66,10 +66,26 @@ public SegmentContainerCollection(SegmentContainerRegistry segmentContainerRegis */ protected CompletableFuture invoke(String streamSegmentName, Function> toInvoke, String methodName, Object... logArgs) { + int containerId = this.segmentToContainerMapper.getContainerId(streamSegmentName); + return invoke(containerId, toInvoke, methodName, logArgs); + } + + /** + * Executes the given Function on the StreamSegmentContainer that the given Id maps to. + * + * @param containerId The Id to fetch the Container for. + * @param toInvoke A Function that will be invoked on the Container. + * @param methodName The name of the calling method (for logging purposes). + * @param logArgs (Optional) A vararg array of items to be logged. + * @param Resulting type. + * @return Either the result of toInvoke or a CompletableFuture completed exceptionally with a ContainerNotFoundException + * in case the SegmentContainer that the Id maps to does not exist in this StreamSegmentService. + */ + protected CompletableFuture invoke(int containerId, Function> toInvoke, + String methodName, Object... 
logArgs) { long traceId = LoggerHelpers.traceEnter(log, methodName, logArgs); SegmentContainer container; try { - int containerId = this.segmentToContainerMapper.getContainerId(streamSegmentName); container = this.segmentContainerRegistry.getContainer(containerId); } catch (ContainerNotFoundException ex) { return Futures.failedFuture(ex); diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceBuilder.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceBuilder.java index db4e457abf6..a26f6078466 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceBuilder.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceBuilder.java @@ -44,6 +44,7 @@ import io.pravega.segmentstore.server.reading.ReadIndexConfig; import io.pravega.segmentstore.server.tables.ContainerTableExtension; import io.pravega.segmentstore.server.tables.ContainerTableExtensionImpl; +import io.pravega.segmentstore.server.tables.TableExtensionConfig; import io.pravega.segmentstore.server.tables.TableService; import io.pravega.segmentstore.server.writer.StorageWriterFactory; import io.pravega.segmentstore.server.writer.WriterConfig; @@ -81,6 +82,7 @@ public class ServiceBuilder implements AutoCloseable { private final ScheduledExecutorService storageExecutor; @Getter(AccessLevel.PUBLIC) private final ScheduledExecutorService lowPriorityExecutor; + @Getter(AccessLevel.PUBLIC) private final CacheManager cacheManager; private final AtomicReference operationLogFactory; private final AtomicReference readIndexFactory; @@ -247,10 +249,12 @@ public void initialize() throws DurableDataLogException { getSingleton(this.containerManager, this.segmentContainerManagerCreator).initialize(); } + + /** * Creates or gets the instance of the SegmentContainerRegistry used throughout this ServiceBuilder. 
*/ - private SegmentContainerRegistry getSegmentContainerRegistry() { + public SegmentContainerRegistry getSegmentContainerRegistry() { return getSingleton(this.containerRegistry, this::createSegmentContainerRegistry); } @@ -294,7 +298,8 @@ protected SegmentContainerFactory createSegmentContainerFactory() { private Map, SegmentContainerExtension> createContainerExtensions( SegmentContainer container, ScheduledExecutorService executor) { - return Collections.singletonMap(ContainerTableExtension.class, new ContainerTableExtensionImpl(container, this.cacheManager, executor)); + TableExtensionConfig config = this.serviceBuilderConfig.getConfig(TableExtensionConfig::builder); + return Collections.singletonMap(ContainerTableExtension.class, new ContainerTableExtensionImpl(config, container, this.cacheManager, executor)); } private SegmentContainerRegistry createSegmentContainerRegistry() { diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceConfig.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceConfig.java index 78930940df6..2596493c877 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceConfig.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/ServiceConfig.java @@ -16,6 +16,7 @@ package io.pravega.segmentstore.server.store; import com.google.common.base.Strings; +import io.pravega.common.security.TLSProtocolVersion; import io.pravega.common.util.ConfigBuilder; import io.pravega.common.util.ConfigurationException; import io.pravega.common.util.Property; @@ -24,6 +25,7 @@ import java.net.Inet4Address; import java.net.UnknownHostException; import java.time.Duration; +import java.util.Arrays; import io.pravega.segmentstore.storage.StorageLayoutType; import io.pravega.shared.rest.RESTServerConfig; @@ -56,6 +58,8 @@ public class ServiceConfig { public static final Property REST_LISTENING_HOST = Property.named("rest.listener.host", "localhost"); public static final Property REST_LISTENING_PORT = Property.named("rest.listener.port", 6061); public static final Property REST_LISTENING_ENABLE = Property.named("rest.listener.enable", true); + public static final Property REST_KEYSTORE_FILE = Property.named("security.tls.server.keyStore.location", ""); + public static final Property REST_KEYSTORE_PASSWORD_FILE = Property.named("security.tls.server.keyStore.pwd.location", ""); public static final Property HEALTH_CHECK_INTERVAL_SECONDS = Property.named("health.interval.seconds", 10); // Not changing this configuration property (to "cluster.name"), as it is set by Pravega operator, and changing this @@ -65,7 +69,7 @@ public class ServiceConfig { // 3. Remove old property from the operator. 
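
With the ServiceBuilder change above, the Table Segment extension reads its configuration from the ServiceBuilderConfig under the new "tables" component code rather than using hard-coded defaults. A sketch of overriding one such property programmatically, assuming the ConfigBuilder/include wiring that Pravega's other configs follow:

import io.pravega.segmentstore.server.store.ServiceBuilderConfig;
import io.pravega.segmentstore.server.tables.TableExtensionConfig;

public class TableExtensionConfigSketch {
    public static void main(String[] args) {
        // Override a single "tables.*" property; all other values keep the defaults
        // declared in TableExtensionConfig.
        ServiceBuilderConfig builderConfig = ServiceBuilderConfig.builder()
                .include(TableExtensionConfig.builder()
                        .with(TableExtensionConfig.MAX_COMPACTION_SIZE, 4 * 1024 * 1024))
                .build();
        // Same lookup the ServiceBuilder performs when creating container extensions.
        TableExtensionConfig tableConfig = builderConfig.getConfig(TableExtensionConfig::builder);
        System.out.println("maxCompactionSize = " + tableConfig.getMaxCompactionSize());
    }
}
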
public static final Property CLUSTER_NAME = Property.named("clusterName", "pravega-cluster"); public static final Property DATALOG_IMPLEMENTATION = Property.named("dataLog.impl.name", DataLogType.INMEMORY, "dataLogImplementation"); - public static final Property STORAGE_IMPLEMENTATION = Property.named("storage.impl.name", StorageType.HDFS, "storageImplementation"); + public static final Property STORAGE_IMPLEMENTATION = Property.named("storage.impl.name", StorageType.HDFS.name(), "storageImplementation"); public static final Property STORAGE_LAYOUT = Property.named("storage.layout", StorageLayoutType.ROLLING_STORAGE); public static final Property READONLY_SEGMENT_STORE = Property.named("readOnly.enable", false, "readOnlySegmentStore"); public static final Property CACHE_POLICY_MAX_SIZE = Property.named("cache.size.max", 4L * 1024 * 1024 * 1024, "cacheMaxSize"); @@ -78,6 +82,7 @@ public class ServiceConfig { // TLS-related config for the service public static final Property ENABLE_TLS = Property.named("security.tls.enable", false, "enableTls"); + public static final Property TLS_PROTOCOL_VERSION = Property.named("security.tls.protocolVersion", "TLSv1.2,TLSv1.3"); public static final Property CERT_FILE = Property.named("security.tls.server.certificate.location", "", "certFile"); public static final Property KEY_FILE = Property.named("security.tls.server.privateKey.location", "", "keyFile"); public static final Property ENABLE_TLS_RELOAD = Property.named("security.tls.certificate.autoReload.enable", false, "enableTlsReload"); @@ -250,7 +255,7 @@ public enum StorageType { * The Type of Storage Implementation to use. */ @Getter - private final StorageType storageImplementation; + private final String storageImplementation; /** * The Type of Storage layout to use. @@ -272,6 +277,12 @@ public enum StorageType { @Getter private final boolean enableTls; + /** + * Tls Protocol Version + */ + @Getter + private final String[] tlsProtocolVersion; + /** * Represents the certificate file for the TLS server. 
*/ @@ -380,13 +391,15 @@ private ServiceConfig(TypedProperties properties) throws ConfigurationException this.zkSessionTimeoutMs = properties.getInt(ZK_SESSION_TIMEOUT_MS); this.clusterName = properties.get(CLUSTER_NAME); this.dataLogTypeImplementation = properties.getEnum(DATALOG_IMPLEMENTATION, DataLogType.class); - this.storageImplementation = properties.getEnum(STORAGE_IMPLEMENTATION, StorageType.class); + this.storageImplementation = properties.get(STORAGE_IMPLEMENTATION); this.storageLayout = properties.getEnum(STORAGE_LAYOUT, StorageLayoutType.class); this.readOnlySegmentStore = properties.getBoolean(READONLY_SEGMENT_STORE); this.secureZK = properties.getBoolean(SECURE_ZK); this.zkTrustStore = properties.get(ZK_TRUSTSTORE_LOCATION); this.zkTrustStorePasswordPath = properties.get(ZK_TRUST_STORE_PASSWORD_PATH); this.enableTls = properties.getBoolean(ENABLE_TLS); + TLSProtocolVersion tpr = new TLSProtocolVersion(properties.get(TLS_PROTOCOL_VERSION)); + this.tlsProtocolVersion = Arrays.copyOf(tpr.getProtocols(), tpr.getProtocols().length); this.keyFile = properties.get(KEY_FILE); this.certFile = properties.get(CERT_FILE); this.enableTlsReload = properties.getBoolean(ENABLE_TLS_RELOAD); @@ -405,8 +418,9 @@ private ServiceConfig(TypedProperties properties) throws ConfigurationException .host(properties.get(REST_LISTENING_HOST)) .port(properties.getInt(REST_LISTENING_PORT)) .tlsEnabled(properties.getBoolean(ENABLE_TLS)) - .keyFilePath(properties.get(KEY_FILE)) - .keyFilePasswordPath(properties.get(KEY_PASSWORD_FILE)) + .tlsProtocolVersion(TLSProtocolVersion.parse(properties.get(TLS_PROTOCOL_VERSION))) + .keyFilePath(properties.get(REST_KEYSTORE_FILE)) + .keyFilePasswordPath(properties.get(REST_KEYSTORE_PASSWORD_FILE)) .build(); this.restServerEnabled = properties.getBoolean(REST_LISTENING_ENABLE); this.healthCheckInterval = Duration.ofSeconds(properties.getInt(HEALTH_CHECK_INTERVAL_SECONDS)); @@ -450,9 +464,10 @@ public String toString() { Strings.isNullOrEmpty(zkTrustStorePasswordPath) ? "unspecified" : "specified")) .append(String.format("clusterName: %s, ", clusterName)) .append(String.format("dataLogTypeImplementation: %s, ", dataLogTypeImplementation.name())) - .append(String.format("storageImplementation: %s, ", storageImplementation.name())) + .append(String.format("storageImplementation: %s, ", storageImplementation)) .append(String.format("readOnlySegmentStore: %b, ", readOnlySegmentStore)) .append(String.format("enableTls: %b, ", enableTls)) + .append(String.format("tlsProtocolVersion: %s, ", Arrays.toString(tlsProtocolVersion))) .append(String.format("certFile is %s, ", Strings.isNullOrEmpty(certFile) ? 
"unspecified" : "specified")) .append(String.format("keyFile is %s, ", diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistry.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistry.java index 7c52ac2819d..628a11cbbd7 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistry.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistry.java @@ -27,11 +27,14 @@ import io.pravega.segmentstore.server.SegmentContainerRegistry; import java.time.Duration; import java.util.ArrayList; +import java.util.Collection; +import java.util.List; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.Executor; import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.Consumer; + import lombok.Getter; import lombok.RequiredArgsConstructor; import lombok.Setter; @@ -85,6 +88,12 @@ public void close() { } } + + @Override + public boolean isClosed() { + return this.closed.get(); + } + //endregion //region SegmentContainerRegistry Implementation @@ -105,6 +114,15 @@ public SegmentContainer getContainer(int containerId) throws ContainerNotFoundEx return result.container; } + @Override + public Collection getContainers() { + List segmentContainers = new ArrayList(); + for (ContainerWithHandle containerHandle: containers.values()) { + segmentContainers.add(containerHandle.container); + } + return segmentContainers; + } + @Override public CompletableFuture startContainer(int containerId, Duration timeout) { Exceptions.checkNotClosed(this.closed.get(), this); @@ -238,6 +256,5 @@ public String toString() { return String.format("SegmentContainerId = %d", this.containerId); } } - //endregion } diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentService.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentService.java index b81276ef748..4822904f338 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentService.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/store/StreamSegmentService.java @@ -86,6 +86,14 @@ public CompletableFuture> getAttributes(String streamSegm "getAttributes", streamSegmentName, attributeIds); } + @Override + public CompletableFuture flushToStorage(int containerId, Duration timeout) { + return invoke( + containerId, + container -> container.flushToStorage(timeout), + "flushToStorage"); + } + @Override public CompletableFuture read(String streamSegmentName, long offset, int maxLength, Duration timeout) { return invoke( @@ -116,7 +124,16 @@ public CompletableFuture mergeStreamSegment(String tar return invoke( sourceStreamSegment, container -> container.mergeStreamSegment(targetStreamSegment, sourceStreamSegment, timeout), - "mergeTransaction", targetStreamSegment, sourceStreamSegment); + "mergeStreamSegment", targetStreamSegment, sourceStreamSegment); + } + + @Override + public CompletableFuture mergeStreamSegment(String targetStreamSegment, String sourceStreamSegment, + AttributeUpdateCollection attributes, Duration timeout) { + return invoke( + sourceStreamSegment, + container -> container.mergeStreamSegment(targetStreamSegment, sourceStreamSegment, attributes, timeout), + "mergeStreamSegment", targetStreamSegment, sourceStreamSegment); } @Override 
diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyCache.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyCache.java index 7aa5494e93f..9228b4eb55e 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyCache.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyCache.java @@ -290,6 +290,17 @@ Map getTailHashes(long segmentId) { return forSegmentCache(segmentId, SegmentKeyCache::getTailBucketOffsets, Collections.emptyMap()); } + /** + * Gets a number representing the expected change in number of entries to the index once all the tail cache entries + * are included in it. + * + * @param segmentId The Id of the Segment to get the entry count delta. + * @return The tail entry update count delta. + */ + int getTailUpdateDelta(long segmentId) { + return forSegmentCache(segmentId, SegmentKeyCache::getTailEntryCountDelta, 0); + } + private T forSegmentCache(long segmentId, Function ifExists, T ifNotExists) { SegmentKeyCache cache; synchronized (this.segmentCaches) { diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyIndex.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyIndex.java index 2a80ff3cbd7..4ec127f8a2c 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyIndex.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerKeyIndex.java @@ -20,6 +20,7 @@ import io.pravega.common.Exceptions; import io.pravega.common.ObjectClosedException; import io.pravega.common.TimeoutTimer; +import io.pravega.common.Timer; import io.pravega.common.concurrent.AsyncSemaphore; import io.pravega.common.concurrent.Futures; import io.pravega.common.concurrent.MultiKeySequentialProcessor; @@ -35,6 +36,7 @@ import io.pravega.segmentstore.contracts.tables.TableSegmentNotEmptyException; import io.pravega.segmentstore.server.CacheManager; import io.pravega.segmentstore.server.DirectSegmentAccess; +import io.pravega.segmentstore.server.SegmentMetadata; import io.pravega.segmentstore.server.reading.AsyncReadResultProcessor; import java.io.IOException; import java.time.Duration; @@ -53,6 +55,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLong; import java.util.function.Function; import java.util.function.Supplier; import java.util.stream.Collectors; @@ -580,6 +583,26 @@ CompletableFuture> getUnindexedKeyHashes(DirectSegm ignored -> CompletableFuture.completedFuture(this.cache.getTailHashes(segment.getSegmentId()))); } + /** + * Gets the approximate number of unique entries in the Table Segment with given {@link SegmentMetadata}. + * + * The accuracy of this number depends on how the entries in the Tail Cache have been updated/removed. + * + * If unconditionally, then the Key Index cannot determine easily if those keys previously existed or not, so it will + * assume that every unconditional update adds a key and every unconditional removal removes a key. In this case, + * this value will be eventually consistent (it will converge once the background indexer processes the tail). + * + * If the values have been updated/removed using conditional operations, then this is an accurate representation of + * the number of entries in the index. 
Conditional updates/removals pre-validate the keys with the index so the + * Key Index can accurately tell how it will be modified. + * + * @param segmentMetadata The {@link SegmentMetadata} for the segment to query. + * @return The approximate number of unique entries. + */ + long getUniqueEntryCount(SegmentMetadata segmentMetadata) { + return IndexReader.getEntryCount(segmentMetadata) + this.cache.getTailUpdateDelta(segmentMetadata.getId()); + } + /** * Reads the tail section (beyond {@link TableAttributes#INDEX_OFFSET}) of the given segment and caches the latest * values recorded there in the tail index. @@ -596,45 +619,62 @@ CompletableFuture> getUnindexedKeyHashes(DirectSegm * * @param segment A {@link DirectSegmentAccess} representing the Segment for which to cache the tail index. */ - private void triggerCacheTailIndex(DirectSegmentAccess segment, long lastIndexedOffset, long segmentLength) { - long tailIndexLength = segmentLength - lastIndexedOffset; - if (lastIndexedOffset >= segmentLength) { + private void triggerCacheTailIndex(DirectSegmentAccess segment, long lastIndexedOffset, SegmentTracker.RecoveryTask task) { + long tailIndexLength = task.triggerIndexOffset - lastIndexedOffset; + if (lastIndexedOffset >= task.triggerIndexOffset) { // Fully caught up. Nothing else to do. log.debug("{}: Table Segment {} fully indexed.", this.traceObjectId, segment.getSegmentId()); return; } else if (tailIndexLength > this.config.getMaxTailCachePreIndexLength()) { - log.debug("{}: Table Segment {} cannot perform tail-caching because tail index too long ({}).", this.traceObjectId, + log.info("{}: Table Segment {} cannot perform tail-caching because tail index too long ({}).", this.traceObjectId, segment.getSegmentId(), tailIndexLength); return; } // Read the tail section of the segment and process its updates. All of this should already be in the cache so // we are not going to do any Storage reads. - SegmentProperties segmentInfo = segment.getInfo(); - log.debug("{}: Tail-caching started for Table Segment {}. LastIndexedOffset={}, SegmentLength={}.", - this.traceObjectId, segment.getSegmentId(), lastIndexedOffset, segmentLength); - ReadResult rr = segment.read(lastIndexedOffset, (int) tailIndexLength, this.config.getRecoveryTimeout()); - AsyncReadResultProcessor + log.info("{}: Tail-caching started for Table Segment {}. LastIndexedOffset={}, SegmentLength={}.", + this.traceObjectId, segment.getSegmentId(), lastIndexedOffset, task.triggerIndexOffset); + val preIndexOffset = new AtomicLong(lastIndexedOffset); + + // Begin a loop which will end when either we've reached the target offset (defined in the RecoveryTask) or when + // the recovery task itself has been cancelled (i.e., segment evicted, shutting down, etc.). 
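
The entry count returned by getUniqueEntryCount() above is therefore the persisted count plus a signed tail delta: each not-yet-indexed update counts as +1 and each removal as -1, which is what SegmentKeyCache.getTailEntryCountDelta() computes later in this patch. A toy illustration of that arithmetic (standalone, with the tail cache reduced to a list of removal flags):

import java.util.List;

public class EntryCountDeltaSketch {

    // Mirrors the tail delta idea: +1 per pending update, -1 per pending removal.
    static int tailDelta(List<Boolean> pendingRemovalFlags) {
        return pendingRemovalFlags.stream().mapToInt(isRemoval -> isRemoval ? -1 : 1).sum();
    }

    public static void main(String[] args) {
        long indexedEntryCount = 100; // what the durable index currently reports
        // Two unindexed updates and one unindexed removal sit in the tail cache.
        List<Boolean> tail = List.of(false, false, true);
        // Approximate until the background indexer catches up; exact for conditional updates.
        System.out.println("unique entries ~= " + (indexedEntryCount + tailDelta(tail))); // 101
    }
}
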
+ Futures.loop( + () -> !task.task.isDone() && preIndexOffset.get() < task.triggerIndexOffset, + () -> { + int maxLength = (int) Math.min(this.config.getMaxTailCachePreIndexBatchLength(), task.triggerIndexOffset - preIndexOffset.get()); + return preIndexBatch(segment, preIndexOffset.get(), maxLength) + .thenAccept(preIndexOffset::set); + }, + this.executor) + .exceptionally(ex -> { + log.warn("{}: Tail-caching failed for Table Segment {}; LastIndexedOffset={}, CurrentOffset={}, SegmentLength={}.", this.traceObjectId, segment.getSegmentId(), lastIndexedOffset, preIndexOffset, task.triggerIndexOffset, Exceptions.unwrap(ex)); + return null; + }); + } + + private CompletableFuture preIndexBatch(DirectSegmentAccess segment, long startOffset, int maxLength) { + log.trace("{}: Tail-caching batch started for Table Segment {}. StartOffset={}, MaxLength={}.", + this.traceObjectId, segment.getSegmentId(), startOffset, maxLength); + val timer = new Timer(); + ReadResult rr = segment.read(startOffset, maxLength, this.config.getRecoveryTimeout()); + return AsyncReadResultProcessor .processAll(rr, this.executor, this.config.getRecoveryTimeout()) - .thenAcceptAsync(inputData -> { + .thenApplyAsync(inputData -> { // Parse out all Table Keys and collect their latest offsets, as well as whether they were deleted. val updates = new TailUpdates(); - collectLatestOffsets(inputData, lastIndexedOffset, (int) tailIndexLength, updates); + collectLatestOffsets(inputData, startOffset, maxLength, updates); // Incorporate that into the cache. this.cache.includeTailCache(segment.getSegmentId(), updates.byBucket); - log.debug("{}: Tail-caching complete for Table Segment {}. Key Update Count={}, Bucket Update Count={}.", - this.traceObjectId, segment.getSegmentId(), updates.getKeyCount(), updates.byBucket.size()); + log.debug("{}: Tail-caching batch complete for Table Segment {}. StartOffset={}, EndOffset={}, Key Update Count={}, Bucket Update Count={}, Elapsed={}ms.", + this.traceObjectId, segment.getSegmentId(), startOffset, updates.getMaxOffset(), updates.getKeyCount(), updates.byBucket.size(), timer.getElapsedMillis()); - // Notify the Segment Tracker that this segment has been recovered, so it can unblock any calls it - // may have collected. - this.segmentTracker.updateSegmentIndexOffset(segment.getSegmentId(), segmentLength, 0, updates.byBucket.size() > 0); - }, this.executor) - .exceptionally(ex -> { - log.warn("{}: Tail-caching failed for Table Segment {}.", this.traceObjectId, segment.getSegmentId(), Exceptions.unwrap(ex)); - return null; - }); + // Notify the Segment Tracker that this segment has been recovered up to whatever offset we were able to process. 
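
The Futures.loop above caps each pre-indexing pass at the configured batch length and advances a cursor until the recovery target offset is reached (or the task is cancelled). The loop shape, reduced to a synchronous sketch with a hypothetical readBatch callback:

public class BatchedPreIndexSketch {

    interface BatchReader {
        // Processes up to maxLength bytes starting at startOffset and returns the
        // offset right after the last fully processed entry.
        long readBatch(long startOffset, int maxLength);
    }

    static void preIndex(BatchReader reader, long lastIndexedOffset, long targetOffset, int maxBatchLength) {
        long offset = lastIndexedOffset;
        while (offset < targetOffset) {
            int maxLength = (int) Math.min(maxBatchLength, targetOffset - offset);
            // The real code also stops early if the recovery task has been cancelled.
            offset = reader.readBatch(offset, maxLength);
        }
    }

    public static void main(String[] args) {
        preIndex((start, len) -> {
            System.out.println("pre-indexed [" + start + ", " + (start + len) + ")");
            return start + len; // this sketch always consumes the full batch
        }, 0, 1000, 300);
    }
}
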
+ this.segmentTracker.updateSegmentIndexOffset(segment.getSegmentId(), updates.getMaxOffset(), 0, updates.byBucket.size() > 0); + return updates.getMaxOffset(); + }, this.executor); } @SneakyThrows(IOException.class) @@ -643,11 +683,17 @@ private void collectLatestOffsets(BufferView input, long startOffset, int maxLen long nextOffset = startOffset; final long maxOffset = startOffset + maxLength; val inputReader = input.getBufferViewReader(); - while (nextOffset < maxOffset) { - val e = AsyncTableEntryReader.readEntryComponents(inputReader, nextOffset, serializer); - val hash = this.keyHasher.hash(e.getKey()); - result.add(hash, nextOffset, e.getHeader().isDeletion()); - nextOffset += e.getHeader().getTotalLength(); + try { + while (nextOffset < maxOffset) { + val e = AsyncTableEntryReader.readEntryComponents(inputReader, nextOffset, serializer); + val hash = this.keyHasher.hash(e.getKey()); + result.add(hash, nextOffset, e.getHeader().getTotalLength(), e.getHeader().isDeletion()); + nextOffset += e.getHeader().getTotalLength(); + } + } catch (BufferView.Reader.OutOfBoundsException ex) { + // We chose an arbitrary end offset, which may have been in the middle of an entry. As such, this exception + // is the only way we know when to stop. When this happens, the TailUpdate will be positioned on an entry + // boundary, which will be the first one to be read in the next iteration. } } @@ -668,11 +714,14 @@ private static class TailUpdates { final Map byBucket = new HashMap<>(); @Getter private int keyCount = 0; + @Getter + private long maxOffset = -1; - void add(UUID keyHash, long offset, boolean isDeletion) { + void add(UUID keyHash, long offset, int serializationLength, boolean isDeletion) { CacheBucketOffset cbo = new CacheBucketOffset(offset, isDeletion); this.byBucket.put(keyHash, cbo); this.keyCount++; + this.maxOffset = offset + serializationLength; } } @@ -758,11 +807,11 @@ void updateSegmentIndexOffset(long segmentId, long indexOffset, int processedSiz if (task != null) { if (removed) { // Normally nobody should be waiting on this, but in case they did, there's nothing we can do about it now. - log.debug("{}: TableSegment {} evicted; cancelling dependent tasks.", traceObjectId, segmentId); + log.info("{}: TableSegment {} evicted; cancelling dependent tasks.", traceObjectId, segmentId); task.task.cancel(true); - } else { + } else if (indexOffset >= task.triggerIndexOffset) { // Notify whoever is waiting that it's all clear to execute. - log.debug("{}: TableSegment {} fully recovered; triggering dependent tasks.", traceObjectId, segmentId); + log.info("{}: TableSegment {} fully recovered ({} ms); triggering dependent tasks.", traceObjectId, segmentId, task.timer.getElapsedMillis()); task.task.complete(cacheUpdated); } } @@ -867,7 +916,7 @@ CompletableFuture waitIfNeeded(DirectSegmentAccess segment, Function= 0; - triggerCacheTailIndex(segment, lastIndexedOffset, segmentLength); + triggerCacheTailIndex(segment, lastIndexedOffset, task); } // A recovery task is registered. Queue behind it. 
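
Note how the batch reader above treats BufferView.Reader.OutOfBoundsException as the normal loop terminator: the batch boundary is arbitrary, so the last entry may be cut in half, and the next batch resumes at that entry's start. The same stop-on-underflow idea, sketched with DataInputStream/EOFException and fixed 8-byte entries:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.Arrays;

public class StopOnUnderflowSketch {
    public static void main(String[] args) throws IOException {
        // Serialize three 8-byte entries, then cut the buffer at byte 20 to simulate
        // an arbitrary batch boundary that lands mid-entry.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        for (long v = 1; v <= 3; v++) {
            out.writeLong(v);
        }
        byte[] batch = Arrays.copyOf(bytes.toByteArray(), 20);

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(batch));
        long entriesRead = 0;
        try {
            while (true) {
                System.out.println("entry " + in.readLong());
                entriesRead++;
            }
        } catch (EOFException ex) {
            // Expected: the boundary fell inside entry 3. Nothing is lost; the next
            // batch simply starts on the last known entry boundary.
        }
        System.out.println("next batch starts at offset " + (entriesRead * 8)); // 16
    }
}
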
@@ -908,6 +957,7 @@ private class RecoveryTask { final long segmentId; final long triggerIndexOffset; final CompletableFuture task = new CompletableFuture<>(); + final Timer timer = new Timer(); } } diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerTableExtensionImpl.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerTableExtensionImpl.java index 3f7920c3719..034c4347b92 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerTableExtensionImpl.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/ContainerTableExtensionImpl.java @@ -22,6 +22,7 @@ import io.pravega.common.util.BufferView; import io.pravega.segmentstore.contracts.AttributeUpdate; import io.pravega.segmentstore.contracts.AttributeUpdateType; +import io.pravega.segmentstore.contracts.BadSegmentTypeException; import io.pravega.segmentstore.contracts.SegmentType; import io.pravega.segmentstore.contracts.tables.IteratorArgs; import io.pravega.segmentstore.contracts.tables.IteratorItem; @@ -29,6 +30,7 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.server.CacheManager; import io.pravega.segmentstore.server.SegmentContainer; import io.pravega.segmentstore.server.SegmentMetadata; @@ -71,17 +73,19 @@ public class ContainerTableExtensionImpl implements ContainerTableExtension { /** * Creates a new instance of the ContainerTableExtensionImpl class. * + * @param config Configuration. * @param segmentContainer The {@link SegmentContainer} to associate with. * @param cacheManager The {@link CacheManager} to use to manage the cache. * @param executor An Executor to use for async tasks. */ - public ContainerTableExtensionImpl(SegmentContainer segmentContainer, CacheManager cacheManager, ScheduledExecutorService executor) { - this(TableExtensionConfig.builder().build(), segmentContainer, cacheManager, KeyHasher.sha256(), executor); + public ContainerTableExtensionImpl(TableExtensionConfig config, SegmentContainer segmentContainer, CacheManager cacheManager, ScheduledExecutorService executor) { + this(config, segmentContainer, cacheManager, KeyHasher.sha256(), executor); } /** * Creates a new instance of the ContainerTableExtensionImpl class with custom {@link KeyHasher}. * + * @param config Configuration. * @param segmentContainer The {@link SegmentContainer} to associate with. * @param cacheManager The {@link CacheManager} to use to manage the cache. * @param hasher The {@link KeyHasher} to use. 
@@ -144,7 +148,7 @@ private TableSegmentLayout selectLayout(String segmentName, SegmentType segmentT return this.hashTableLayout; } - throw new IllegalArgumentException(String.format("Segment Type '%s' not supported (Segment = '%s').", segmentType, segmentName)); + throw new BadSegmentTypeException(segmentName, SegmentType.builder().tableSegment().build(), segmentType); } //endregion @@ -175,18 +179,6 @@ public CompletableFuture deleteSegment(@NonNull String segmentName, boolea .thenComposeAsync(segment -> selectLayout(segment.getInfo()).deleteSegment(segmentName, mustBeEmpty, timer.getRemaining()), this.executor); } - @Override - public CompletableFuture merge(@NonNull String targetSegmentName, @NonNull String sourceSegmentName, Duration timeout) { - Exceptions.checkNotClosed(this.closed.get(), this); - throw new UnsupportedOperationException("merge"); - } - - @Override - public CompletableFuture seal(String segmentName, Duration timeout) { - Exceptions.checkNotClosed(this.closed.get(), this); - throw new UnsupportedOperationException("seal"); - } - @Override public CompletableFuture> put(@NonNull String segmentName, @NonNull List entries, Duration timeout) { return put(segmentName, entries, TableSegmentLayout.NO_OFFSET, timeout); @@ -245,6 +237,16 @@ public CompletableFuture>> entryDeltaIter .thenApplyAsync(segment -> selectLayout(segment.getInfo()).entryDeltaIterator(segment, fromPosition, fetchTimeout), this.executor); } + @Override + public CompletableFuture getInfo(String segmentName, Duration timeout) { + Exceptions.checkNotClosed(this.closed.get(), this); + logRequest("getInfo", segmentName); + val timer = new TimeoutTimer(timeout); + return this.segmentContainer + .forSegment(segmentName, timer.getRemaining()) + .thenComposeAsync(segment -> selectLayout(segment.getInfo()).getInfo(segment, timer.getRemaining()), this.executor); + } + //endregion //region Helpers diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/DeltaIteratorState.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/DeltaIteratorState.java index 32b365da4b5..c5be25ed24f 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/DeltaIteratorState.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/DeltaIteratorState.java @@ -101,6 +101,7 @@ public static DeltaIteratorState deserialize(BufferView data) { * * @return The {@link ArrayView} that was used for serialization. 
*/ + @Override @SneakyThrows(IOException.class) public ArrayView serialize() { return SERIALIZER.serialize(this); diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayout.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayout.java index 1f64dd9a2c3..ca0183dd1b7 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayout.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayout.java @@ -49,6 +49,7 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.server.AttributeIterator; import io.pravega.segmentstore.server.DirectSegmentAccess; import io.pravega.segmentstore.server.UpdateableSegmentMetadata; @@ -125,6 +126,9 @@ Map getNewSegmentAttributes(@NonNull TableSegmentConfig confi val result = new HashMap(); result.put(Attributes.ATTRIBUTE_ID_LENGTH, (long) config.getKeyLength()); result.putAll(this.config.getDefaultCompactionAttributes()); + if (config.getRolloverSizeBytes() > 0) { + result.put(Attributes.ROLLOVER_SIZE, config.getRolloverSizeBytes()); + } return result; } @@ -272,6 +276,20 @@ AsyncIterator> entryDeltaIterator(@NonNull DirectSegmen throw new UnsupportedOperationException("entryDeltaIterator"); } + @Override + CompletableFuture getInfo(@NonNull DirectSegmentAccess segment, Duration timeout) { + val m = segment.getInfo(); + return segment.getExtendedAttributeCount(timeout) + .thenApply(entryCount -> TableSegmentInfo.builder() + .name(m.getName()) + .length(m.getLength()) + .startOffset(m.getStartOffset()) + .type(m.getType()) + .entryCount(entryCount) + .keyLength(getSegmentKeyLength(m)) + .build()); + } + //endregion //region Helpers @@ -476,6 +494,7 @@ static IteratorStateImpl deserialize(BufferView data) throws IOException { * * @return The {@link ArrayView} that was used for serialization. 
*/ + @Override @SneakyThrows(IOException.class) public ArrayView serialize() { return SERIALIZER.serialize(this); diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayout.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayout.java index 3219d0c1cd4..8aeef1abd6c 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayout.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayout.java @@ -28,6 +28,7 @@ import io.pravega.common.util.BufferView; import io.pravega.common.util.IllegalDataFormatException; import io.pravega.segmentstore.contracts.AttributeId; +import io.pravega.segmentstore.contracts.Attributes; import io.pravega.segmentstore.contracts.SegmentType; import io.pravega.segmentstore.contracts.tables.IteratorArgs; import io.pravega.segmentstore.contracts.tables.IteratorItem; @@ -36,6 +37,7 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.server.CacheManager; import io.pravega.segmentstore.server.DirectSegmentAccess; import io.pravega.segmentstore.server.SegmentMetadata; @@ -47,6 +49,7 @@ import java.util.ArrayList; import java.util.Collection; import java.util.Collections; +import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.UUID; @@ -103,7 +106,12 @@ Collection createWriterSegmentProcessors(UpdateableSegme @Override Map getNewSegmentAttributes(@NonNull TableSegmentConfig config) { Preconditions.checkArgument(config.getKeyLength() == 0, "Segment KeyLength must be 0 for HashTableSegments; actual %s.", config.getKeyLength()); - return this.config.getDefaultCompactionAttributes(); + val result = new HashMap(); + result.putAll(this.config.getDefaultCompactionAttributes()); + if (config.getRolloverSizeBytes() > 0) { + result.put(Attributes.ROLLOVER_SIZE, config.getRolloverSizeBytes()); + } + return result; } @Override @@ -223,6 +231,19 @@ AsyncIterator> entryDeltaIterator(@NonNull DirectSegmen .build(); } + @Override + CompletableFuture getInfo(@NonNull DirectSegmentAccess segment, Duration timeout) { + val m = segment.getInfo(); + return CompletableFuture.completedFuture(TableSegmentInfo.builder() + .name(m.getName()) + .length(m.getLength()) + .startOffset(m.getStartOffset()) + .type(m.getType()) + .entryCount(this.keyIndex.getUniqueEntryCount(m)) + .keyLength(0) // Variable key length. + .build()); + } + private CompletableFuture>> newIterator(@NonNull DirectSegmentAccess segment, @NonNull IteratorArgs args, @NonNull GetBucketReader createBucketReader) { Preconditions.checkArgument(args.getFrom() == null && args.getTo() == null, "Range Iterators not supported for HashTableSegments."); @@ -426,6 +447,7 @@ static IteratorStateImpl deserialize(BufferView data) throws IOException { * * @return The {@link ArrayView} that was used for serialization. 
*/ + @Override @SneakyThrows(IOException.class) public ArrayView serialize() { return SERIALIZER.serialize(this); diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/SegmentKeyCache.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/SegmentKeyCache.java index 73790238e0f..784833fe9b5 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/SegmentKeyCache.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/SegmentKeyCache.java @@ -324,6 +324,16 @@ synchronized Map getTailBucketOffsets() { return new HashMap<>(this.tailOffsets); } + /** + * Gets a number representing the expected change in number of entries to the index once all the tail cache entries + * are included in it. + * + * @return The tail entry update count delta. + */ + synchronized int getTailEntryCountDelta() { + return this.tailOffsets.values().stream().mapToInt(o -> o.isRemoval() ? -1 : 1).sum(); + } + @Override public synchronized String toString() { return String.format("LIO = %s, Entries = %s, Backpointers = %s, BucketOffsets = %s.", diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableExtensionConfig.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableExtensionConfig.java index fc850d05041..67cc57674a0 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableExtensionConfig.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableExtensionConfig.java @@ -17,28 +17,50 @@ import com.google.common.annotations.VisibleForTesting; import com.google.common.collect.ImmutableMap; +import io.pravega.common.util.ConfigBuilder; +import io.pravega.common.util.ConfigurationException; +import io.pravega.common.util.Property; +import io.pravega.common.util.TypedProperties; import io.pravega.segmentstore.contracts.AttributeId; import io.pravega.segmentstore.contracts.Attributes; import io.pravega.segmentstore.contracts.SegmentProperties; import io.pravega.segmentstore.contracts.tables.TableAttributes; import java.time.Duration; +import java.time.temporal.ChronoUnit; import java.util.Map; import java.util.concurrent.TimeoutException; -import lombok.Builder; -import lombok.Data; +import lombok.Getter; /** * Configuration for {@link ContainerTableExtensionImpl} and sub-components. + * NOTE: This should only be used for testing or cluster repair purposes. Even though these settings can be set externally, + * it is not recommended to expose them to users. The defaults are chosen conservatively to ensure proper system functioning + * under most circumstances. 
*/ -@Data -@Builder +@Getter public class TableExtensionConfig { + public static final Property MAX_TAIL_CACHE_PREINDEX_LENGTH = Property.named("preindex.bytes.max", (long) EntrySerializer.MAX_BATCH_SIZE * 4); + public static final Property MAX_TAIL_CACHE_PREINDEX_BATCH_SIZE = Property.named("preindex.batch.bytes.max", EntrySerializer.MAX_BATCH_SIZE * 4); + public static final Property RECOVERY_TIMEOUT = Property.named("recovery.timeout.millis", 60000); + public static final Property MAX_UNINDEXED_LENGTH = Property.named("unindexed.bytes.max", EntrySerializer.MAX_BATCH_SIZE * 4); + public static final Property MAX_COMPACTION_SIZE = Property.named("compaction.bytes.max", EntrySerializer.MAX_SERIALIZATION_LENGTH * 4); + public static final Property COMPACTION_FREQUENCY = Property.named("compaction.frequency.millis", 30000); + public static final Property DEFAULT_MIN_UTILIZATION = Property.named("utilization.min", 75); + public static final Property DEFAULT_ROLLOVER_SIZE = Property.named("rollover.size.bytes", (long) EntrySerializer.MAX_SERIALIZATION_LENGTH * 4 * 4); + public static final Property MAX_BATCH_SIZE = Property.named("batch.size.bytes", EntrySerializer.MAX_BATCH_SIZE); + private static final String COMPONENT_CODE = "tables"; + /** * The maximum unindexed length ({@link SegmentProperties#getLength() - {@link TableAttributes#INDEX_OFFSET}}) of a * Segment for which {@link ContainerKeyIndex} {@code triggerCacheTailIndex} can be invoked. */ - @Builder.Default - private int maxTailCachePreIndexLength = EntrySerializer.MAX_BATCH_SIZE * 4; + private final long maxTailCachePreIndexLength; + + /** + * The maximum number of bytes to read and process at once from the segment while performing preindexing. + * See {@link #getMaxTailCachePreIndexLength()}. + */ + private final int maxTailCachePreIndexBatchLength; /** * The maximum allowed unindexed length ({@link SegmentProperties#getLength() - {@link TableAttributes#INDEX_OFFSET}}) @@ -47,49 +69,70 @@ public class TableExtensionConfig { * {@link ContainerKeyIndex#notifyIndexOffsetChanged} indicates that this value has been reduced sufficiently in order * to allow it to proceed. */ - @Builder.Default - private final int maxUnindexedLength = EntrySerializer.MAX_BATCH_SIZE * 4; + private final int maxUnindexedLength; /** * The default value to supply to a {@link WriterTableProcessor} to indicate how big compactions need to be. * We need to return a value that is large enough to encompass the largest possible Table Entry (otherwise * compaction will stall), but not too big, as that will introduce larger indexing pauses when compaction is running. */ - @Builder.Default - private final int maxCompactionSize = EntrySerializer.MAX_SERIALIZATION_LENGTH * 4; + private final int maxCompactionSize; /** * The amount of time to wait between successive compaction attempts on the same Table Segment. This may not apply * to all Table Segment Layouts (i.e., it only applies to Fixed-Key-Length Table Segments). */ - @Builder.Default - private final Duration compactionFrequency = Duration.ofSeconds(30); + private final Duration compactionFrequency; /** * Default value to set for the {@link TableAttributes#MIN_UTILIZATION} for every new Table Segment. */ - @Builder.Default - private final long defaultMinUtilization = 75L; + private final long defaultMinUtilization; /** * Default value to set for the {@link Attributes#ROLLOVER_SIZE} for every new Table Segment. 
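* (A stored rollover value of 0 is treated as "not set"; per the SegmentAggregator.getRolloverSize() change later in this patch, the no-rolling default is substituted in that case.)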
*/ - @Builder.Default - private final long defaultRolloverSize = EntrySerializer.MAX_SERIALIZATION_LENGTH * 4 * 4; + private final long defaultRolloverSize; /** - * The maximum size of a single update batch. For unit test purposes only. Do not tinker with in production code. + * The maximum size of a single update batch. For unit test purposes only. + * IMPORTANT: Do not tinker with in production code. Enforced to be at most {@link EntrySerializer#MAX_BATCH_SIZE}. */ - @Builder.Default @VisibleForTesting - private final int maxBatchSize = EntrySerializer.MAX_BATCH_SIZE; + private final int maxBatchSize; /** * The maximum amount of time to wait for a Table Segment Recovery. If any recovery takes more than this amount of time, * all registered calls will be failed with a {@link TimeoutException}. */ - @Builder.Default - private Duration recoveryTimeout = Duration.ofSeconds(60); + private final Duration recoveryTimeout; + + private TableExtensionConfig(TypedProperties properties) throws ConfigurationException { + this.maxTailCachePreIndexLength = properties.getPositiveLong(MAX_TAIL_CACHE_PREINDEX_LENGTH); + this.maxTailCachePreIndexBatchLength = properties.getPositiveInt(MAX_TAIL_CACHE_PREINDEX_BATCH_SIZE); + this.maxUnindexedLength = properties.getPositiveInt(MAX_UNINDEXED_LENGTH); + this.maxCompactionSize = properties.getPositiveInt(MAX_COMPACTION_SIZE); + this.compactionFrequency = properties.getDuration(COMPACTION_FREQUENCY, ChronoUnit.MILLIS); + this.defaultMinUtilization = properties.getNonNegativeInt(DEFAULT_MIN_UTILIZATION); + if (this.defaultMinUtilization > 100) { + throw new ConfigurationException(String.format("Property '%s' must be a value within [0, 100].", DEFAULT_MIN_UTILIZATION)); + } + this.defaultRolloverSize = properties.getPositiveLong(DEFAULT_ROLLOVER_SIZE); + this.maxBatchSize = properties.getPositiveInt(MAX_BATCH_SIZE); + if (this.maxBatchSize > EntrySerializer.MAX_BATCH_SIZE) { + throw new ConfigurationException(String.format("Property '%s' must be a value within [0, %s].", MAX_BATCH_SIZE, EntrySerializer.MAX_BATCH_SIZE)); + } + this.recoveryTimeout = properties.getDuration(RECOVERY_TIMEOUT, ChronoUnit.MILLIS); + } + + /** + * Creates a new ConfigBuilder that can be used to create instances of this class. + * + * @return A new Builder for this class. + */ + public static ConfigBuilder<TableExtensionConfig> builder() { + return new ConfigBuilder<>(COMPONENT_CODE, TableExtensionConfig::new); + } /** * The default Segment Attributes to set for every new Table Segment.
These values will override the corresponding defaults from {@link TableAttributes#DEFAULT_VALUES}. */ diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableSegmentLayout.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableSegmentLayout.java index 698c4bbf396..3bda3c0435f 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableSegmentLayout.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableSegmentLayout.java @@ -24,6 +24,7 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.contracts.tables.TableStore; import io.pravega.segmentstore.server.DirectSegmentAccess; import io.pravega.segmentstore.server.UpdateableSegmentMetadata; @@ -174,6 +175,15 @@ protected TableSegmentLayout(@NonNull Connector connector, @NonNull TableExtensi */ abstract AsyncIterator<IteratorItem<TableEntry>> entryDeltaIterator(@NonNull DirectSegmentAccess segment, long fromPosition, Duration fetchTimeout); + /** + * Gets information about a Table Segment. + * + * @param segment A {@link DirectSegmentAccess} representing the segment to query. + * @param timeout Timeout for the operation. + * @return See {@link TableStore#getInfo}. + */ + abstract CompletableFuture<TableSegmentInfo> getInfo(@NonNull DirectSegmentAccess segment, Duration timeout); + //endregion //region Helpers diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableService.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableService.java index 20afdcc1ad3..b4699720261 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableService.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableService.java @@ -24,6 +24,7 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.contracts.tables.TableStore; import io.pravega.segmentstore.server.SegmentContainerRegistry; import io.pravega.segmentstore.server.store.SegmentContainerCollection; @@ -69,20 +70,6 @@ public CompletableFuture deleteSegment(String segmentName, boolean mustBeE "deleteSegment", segmentName, mustBeEmpty); } - @Override - public CompletableFuture merge(String targetSegmentName, String sourceSegmentName, Duration timeout) { - return invokeExtension(targetSegmentName, - e -> e.merge(targetSegmentName, sourceSegmentName, timeout), - "merge", targetSegmentName, sourceSegmentName); - } - - @Override - public CompletableFuture seal(String segmentName, Duration timeout) { - return invokeExtension(segmentName, - e -> e.seal(segmentName, timeout), - "seal", segmentName); - } - @Override public CompletableFuture> put(String segmentName, List entries, Duration timeout) { return invokeExtension(segmentName, @@ -139,6 +126,13 @@ public CompletableFuture>> entryDeltaIter "entryDeltaIterator", segmentName, fromPosition, fetchTimeout); } + @Override + public CompletableFuture<TableSegmentInfo> getInfo(String segmentName, Duration timeout) { + return invokeExtension(segmentName, + e -> e.getInfo(segmentName, timeout), + "getInfo", segmentName, timeout); + } + //endregion //region Helpers diff --git
a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableWriterConnector.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableWriterConnector.java index e9d1185f05a..8acac25515c 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableWriterConnector.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/TableWriterConnector.java @@ -72,6 +72,14 @@ interface TableWriterConnector extends AutoCloseable { */ int getMaxCompactionSize(); + /** + * Gets a value representing the maximum number of bytes to attempt to index (flush) at once. + * @return The maximum flush size. + */ + default int getMaxFlushSize() { + return 134217728; // 128MB + } + /** * This method will be invoked by the {@link WriterTableProcessor} when it is closed. */ diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/WriterTableProcessor.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/WriterTableProcessor.java index 14d4853bbf0..3dad0208f92 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/WriterTableProcessor.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/tables/WriterTableProcessor.java @@ -15,6 +15,7 @@ */ package io.pravega.segmentstore.server.tables; +import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Preconditions; import io.pravega.common.Exceptions; import io.pravega.common.TimeoutTimer; @@ -32,7 +33,7 @@ import io.pravega.segmentstore.server.logs.operations.CachedStreamSegmentAppendOperation; import io.pravega.segmentstore.server.logs.operations.Operation; import java.time.Duration; -import java.util.ArrayList; +import java.util.ArrayDeque; import java.util.Collection; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ScheduledExecutorService; @@ -274,7 +275,6 @@ private CompletableFuture flushWithSingleRetry(DirectSeg */ private void flushComplete(TableWriterFlushResult flushResult) { log.debug("{}: FlushComplete (State={}).", this.traceObjectId, this.aggregator); - this.aggregator.reset(); this.aggregator.setLastIndexedOffset(flushResult.lastIndexedOffset); this.connector.notifyIndexOffsetChanged(this.aggregator.getLastIndexedOffset(), flushResult.processedBytes); } @@ -292,7 +292,13 @@ private void flushComplete(TableWriterFlushResult flushResult) { */ private CompletableFuture flushOnce(DirectSegmentAccess segment, TimeoutTimer timer) { // Index all the keys in the segment range pointed to by the aggregator. - KeyUpdateCollection keyUpdates = readKeysFromSegment(segment, this.aggregator.getFirstOffset(), this.aggregator.getLastOffset(), timer); + long lastOffset = this.aggregator.getLastIndexToProcessAtOnce(this.connector.getMaxFlushSize()); + assert lastOffset - this.aggregator.getFirstOffset() <= this.connector.getMaxFlushSize(); + if (lastOffset < this.aggregator.getLastOffset()) { + log.info("{}: Partial flush initiated up to offset {}. 
State: {}.", this.traceObjectId, lastOffset, this.aggregator); + } + + KeyUpdateCollection keyUpdates = readKeysFromSegment(segment, this.aggregator.getFirstOffset(), lastOffset, timer); log.debug("{}: Flush.ReadFromSegment KeyCount={}, UpdateCount={}, HighestCopiedOffset={}, LastIndexedOffset={}.", this.traceObjectId, keyUpdates.getUpdates().size(), keyUpdates.getTotalUpdateCount(), keyUpdates.getHighestCopiedOffset(), keyUpdates.getLastIndexedOffset()); @@ -464,51 +470,36 @@ private void logBucketUpdates(Collection bucketUpdates) { //region Helper Classes @ThreadSafe - private static class OperationAggregator { - @GuardedBy("this") - private long firstSeqNo; - @GuardedBy("this") - private long lastOffset; + @VisibleForTesting + static class OperationAggregator { @GuardedBy("this") private long lastIndexedOffset; @GuardedBy("this") - private final ArrayList appendOffsets; + private final ArrayDeque appends; OperationAggregator(long lastIndexedOffset) { - this.appendOffsets = new ArrayList<>(); - reset(); + this.appends = new ArrayDeque<>(); this.lastIndexedOffset = lastIndexedOffset; } - synchronized void reset() { - this.firstSeqNo = Operation.NO_SEQUENCE_NUMBER; - this.lastOffset = -1; - this.appendOffsets.clear(); - } - synchronized void add(CachedStreamSegmentAppendOperation op) { - if (this.appendOffsets.size() == 0) { - this.firstSeqNo = op.getSequenceNumber(); - } - - this.lastOffset = op.getLastStreamSegmentOffset(); - this.appendOffsets.add(op.getStreamSegmentOffset()); + this.appends.add(op); } synchronized boolean isEmpty() { - return this.appendOffsets.isEmpty(); + return this.appends.isEmpty(); } synchronized long getFirstSequenceNumber() { - return this.firstSeqNo; + return this.appends.isEmpty() ? Operation.NO_SEQUENCE_NUMBER : this.appends.peekFirst().getSequenceNumber(); } synchronized long getFirstOffset() { - return this.appendOffsets.size() == 0 ? -1 : this.appendOffsets.get(0); + return this.appends.isEmpty() ? -1 : this.appends.peekFirst().getStreamSegmentOffset(); } synchronized long getLastOffset() { - return this.lastOffset; + return this.appends.isEmpty() ? -1 : this.appends.peekLast().getLastStreamSegmentOffset(); } synchronized long getLastIndexedOffset() { @@ -516,19 +507,22 @@ synchronized long getLastIndexedOffset() { } synchronized boolean setLastIndexedOffset(long value) { - if (this.appendOffsets.size() > 0) { + if (!this.appends.isEmpty()) { if (value >= getLastOffset()) { // Clear everything - anyway we do not have enough info to determine if this is valid or not. - reset(); + this.appends.clear(); } else { - // First, make sure we set this to a valid value. - int index = this.appendOffsets.indexOf(value); - if (index < 0) { - return false; + // Remove all appends whose entries have been fully indexed. + while (!this.appends.isEmpty() && this.appends.peekFirst().getLastStreamSegmentOffset() <= value) { + // All the entries in this append have been indexed. It's safe to remove it. + this.appends.removeFirst(); } - // Clear out smaller offsets. - this.appendOffsets.subList(0, index).clear(); + // If we have any leftover appends, check if the desired lastIndexedOffset falls on an append boundary. + // If not, do not change it and report back. 
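+ // Worked example (hypothetical offsets): with queued appends spanning [0, 100) and [100, 200),
+ // setLastIndexedOffset(100) prunes the first append and succeeds; setLastIndexedOffset(150) also prunes the
+ // fully-indexed first append, but 150 does not start the remaining append, so it returns false without advancing.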
+ if (!this.appends.isEmpty() && this.appends.peekFirst().getStreamSegmentOffset() != value) { + return false; + } } } @@ -539,14 +533,36 @@ synchronized boolean setLastIndexedOffset(long value) { return true; } + synchronized long getLastIndexToProcessAtOnce(int maxLength) { + val first = this.appends.peekFirst(); + if (first == null) { + return -1; // Nothing to process. + } + + // We are optimistic. The majority of our cases will fit in one batch, so we start from the end. + long maxOffset = first.getStreamSegmentOffset() + maxLength; + val i = this.appends.descendingIterator(); + while (i.hasNext()) { + val lastOffset = i.next().getLastStreamSegmentOffset(); + if (lastOffset <= maxOffset) { + // We found the last append which can fit wholly within the given maxLength + return lastOffset; + } + } + + // If we get here, then maxLength is smaller than the first append's length. In order to continue, we have + // no choice but to process that first append anyway. + return first.getLastStreamSegmentOffset(); + } + synchronized int size() { - return this.appendOffsets.size(); + return this.appends.size(); } @Override public synchronized String toString() { return String.format("Count = %d, FirstSN = %d, FirstOffset = %d, LastOffset = %d, LIdx = %s", - this.appendOffsets.size(), this.firstSeqNo, getFirstOffset(), getLastOffset(), getLastIndexedOffset()); + this.appends.size(), getFirstSequenceNumber(), getFirstOffset(), getLastOffset(), getLastIndexedOffset()); } } diff --git a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/writer/SegmentAggregator.java b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/writer/SegmentAggregator.java index 9680f9c1fc6..72465e209d5 100644 --- a/segmentstore/server/src/main/java/io/pravega/segmentstore/server/writer/SegmentAggregator.java +++ b/segmentstore/server/src/main/java/io/pravega/segmentstore/server/writer/SegmentAggregator.java @@ -1176,6 +1176,9 @@ private long getRolloverSize() { // Configured value. long rolloverSize = this.metadata.getAttributes().getOrDefault(Attributes.ROLLOVER_SIZE, SegmentRollingPolicy.NO_ROLLING.getMaxLength()); + // rolloverSize being zero means the default value should be used. + rolloverSize = rolloverSize == 0 ? SegmentRollingPolicy.NO_ROLLING.getMaxLength() : rolloverSize; + // Make sure it does not exceed configured max value. return Math.min(rolloverSize, this.config.getMaxRolloverSize()); } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/CacheManagerTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/CacheManagerTests.java index 5ffaf1bd9d1..93ed9c5b217 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/CacheManagerTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/CacheManagerTests.java @@ -23,8 +23,11 @@ import io.pravega.segmentstore.storage.cache.CacheState; import io.pravega.segmentstore.storage.cache.DirectMemoryCache; import io.pravega.segmentstore.storage.cache.NoOpCache; +import io.pravega.shared.health.Health; +import io.pravega.shared.health.Status; import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.ThreadPooledTestSuite; +import io.pravega.segmentstore.server.CacheManager.CacheManagerHealthContributor; import java.time.Duration; import java.util.ArrayList; import java.util.Collections; @@ -145,7 +148,7 @@ public void testIncrementCurrentGeneration() { // Fail the test if we get an unexpected value for currentGeneration. 
clients.forEach(c -> c.setUpdateGenerationsImpl((current, oldest, essentialOnly) -> { - Assert.assertEquals("Unexpected value for current generation.", currentGeneration.get(), (int) current); + Assert.assertEquals("Unexpected value for current generation.", currentGeneration.get(), current); updatedClients.add(c); return false; })); @@ -237,7 +240,7 @@ public void testIncrementOldestGeneration() { cache.setUsedBytes(policy.getEvictionThreshold() - 1); client.setCacheStatus(defaultOldestGeneration, currentGeneration.get()); client.setUpdateGenerationsImpl((current, oldest, essentialOnly) -> { - Assert.assertEquals("Not expecting a change for oldestGeneration", currentOldestGeneration.get(), (int) oldest); + Assert.assertEquals("Not expecting a change for oldestGeneration", currentOldestGeneration.get(), oldest); return false; }); } @@ -603,6 +606,28 @@ public void testCleanupListeners() { cm.getUtilizationProvider().registerCleanupListener(l2); // This should have no effect. } + /** + * Tests the health contributor associated with the CacheManager. + */ + @Test + public void testCacheHealth() { + final CachePolicy policy = new CachePolicy(Integer.MAX_VALUE, Duration.ofHours(10000), Duration.ofHours(1)); + + @Cleanup + val cache = new TestCache(policy.getMaxSize()); + cache.setStoredBytes(1); // The Cache Manager won't do anything if there's no stored data. + @Cleanup + TestCacheManager cm = new TestCacheManager(policy, cache, executorService()); + + CacheManagerHealthContributor cacheManagerHealthContributor = new CacheManagerHealthContributor(cm); + Health.HealthBuilder builder = Health.builder().name(cacheManagerHealthContributor.getName()); + Status status = cacheManagerHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report an 'UP' Status.", Status.UP, status); + cm.close(); + status = cacheManagerHealthContributor.doHealthCheck(builder); + Assert.assertEquals("HealthContributor should report a 'DOWN' Status.", Status.DOWN, status); + } + private static class TestCleanupListener implements ThrottleSourceListener { @Getter private int callCount = 0; diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/ReadResultMock.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/ReadResultMock.java index 2bfb2da0c3e..1c8885e6cd9 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/ReadResultMock.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/ReadResultMock.java @@ -63,12 +63,12 @@ private static CompletableReadResultEntry noopSupplier(long startOffset, int rem //region ReadResult Implementation @Override - public boolean hasNext() { + public synchronized boolean hasNext() { return this.consumedLength < getMaxResultLength(); } @Override - public ReadResultEntry next() { + public synchronized ReadResultEntry next() { if (!hasNext()) { return null; } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/SegmentStoreMetricsTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/SegmentStoreMetricsTests.java index ae4aee2ffd6..fd17fd010c9 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/SegmentStoreMetricsTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/SegmentStoreMetricsTests.java @@ -16,6 +16,7 @@ package io.pravega.segmentstore.server; import io.pravega.common.AbstractTimer; +import io.pravega.common.concurrent.ExecutorServiceHelpers; import
io.pravega.segmentstore.server.logs.operations.CompletableOperation; import io.pravega.segmentstore.server.logs.operations.MetadataCheckpointOperation; import io.pravega.segmentstore.server.logs.operations.OperationPriority; @@ -24,6 +25,7 @@ import io.pravega.shared.metrics.MetricRegistryUtils; import io.pravega.shared.metrics.MetricsConfig; import io.pravega.shared.metrics.MetricsProvider; +import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.SerializedClassRunner; import java.time.Duration; import java.util.Arrays; @@ -31,6 +33,7 @@ import java.util.Random; import java.util.concurrent.CompletableFuture; import java.util.concurrent.TimeUnit; +import java.util.concurrent.ScheduledExecutorService; import lombok.Cleanup; import lombok.extern.slf4j.Slf4j; import lombok.val; @@ -196,6 +199,22 @@ public void testCacheManagerMetrics() { assertNull(MetricRegistryUtils.getTimer(MetricsNames.CACHE_MANAGER_ITERATION_DURATION)); } + @Test + public void testThreadPoolMetrics() throws Exception { + @Cleanup("shutdown") + ScheduledExecutorService coreExecutor = ExecutorServiceHelpers.newScheduledThreadPool(30, "core", Thread.NORM_PRIORITY); + @Cleanup("shutdown") + ScheduledExecutorService storageExecutor = ExecutorServiceHelpers.newScheduledThreadPool(30, "storage-io", Thread.NORM_PRIORITY); + + @Cleanup + SegmentStoreMetrics.ThreadPool pool = new SegmentStoreMetrics.ThreadPool(coreExecutor, storageExecutor); + + AssertExtensions.assertEventuallyEquals(true, () -> MetricRegistryUtils.getTimer(MetricsNames.THREAD_POOL_QUEUE_SIZE).count() == 1, 2000); + AssertExtensions.assertEventuallyEquals(true, () -> MetricRegistryUtils.getTimer(MetricsNames.THREAD_POOL_ACTIVE_THREADS).count() == 1, 2000); + AssertExtensions.assertEventuallyEquals(true, () -> MetricRegistryUtils.getTimer(MetricsNames.STORAGE_THREAD_POOL_QUEUE_SIZE).count() == 1, 2000); + AssertExtensions.assertEventuallyEquals(true, () -> MetricRegistryUtils.getTimer(MetricsNames.STORAGE_THREAD_POOL_ACTIVE_THREADS).count() == 1, 2000); + } + @Test public void testOperationProcessorMetrics() { int containerId = 1; diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/TableStoreMock.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/TableStoreMock.java index 5358425c161..53c8e1ea80f 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/TableStoreMock.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/TableStoreMock.java @@ -30,6 +30,7 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.contracts.tables.TableStore; import java.time.Duration; import java.util.Collection; @@ -123,27 +124,22 @@ public CompletableFuture> get(String segmentName, List merge(String targetSegmentName, String sourceSegmentName, Duration timeout) { - throw new UnsupportedOperationException(); - } - - @Override - public CompletableFuture seal(String segmentName, Duration timeout) { + public CompletableFuture>> keyIterator(String segmentName, IteratorArgs args) { throw new UnsupportedOperationException(); } @Override - public CompletableFuture>> keyIterator(String segmentName, IteratorArgs args) { + public CompletableFuture>> entryIterator(String segmentName, IteratorArgs args) { throw new UnsupportedOperationException(); }
@Override - public CompletableFuture>> entryIterator(String segmentName, IteratorArgs args) { + public CompletableFuture>> entryDeltaIterator(String segmentName, long fromPosition, Duration fetchTimeout) { throw new UnsupportedOperationException(); } @Override - public CompletableFuture>> entryDeltaIterator(String segmentName, long fromPosition, Duration fetchTimeout) { + public CompletableFuture getInfo(String segmentName, Duration timeout) { throw new UnsupportedOperationException(); } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/DebugStreamSegmentContainerTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/DebugStreamSegmentContainerTests.java index 4ae77af9942..6d6e231825a 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/DebugStreamSegmentContainerTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/DebugStreamSegmentContainerTests.java @@ -40,6 +40,7 @@ import io.pravega.segmentstore.server.reading.ReadIndexConfig; import io.pravega.segmentstore.server.tables.ContainerTableExtension; import io.pravega.segmentstore.server.tables.ContainerTableExtensionImpl; +import io.pravega.segmentstore.server.tables.TableExtensionConfig; import io.pravega.segmentstore.server.writer.StorageWriterFactory; import io.pravega.segmentstore.server.writer.WriterConfig; import io.pravega.segmentstore.storage.AsyncStorageWrapper; @@ -151,6 +152,7 @@ public class DebugStreamSegmentContainerTests extends ThreadPooledTestSuite { @Rule public Timeout globalTimeout = Timeout.millis(TEST_TIMEOUT_MILLIS); + @Override protected int getThreadPoolSize() { return THREAD_POOL_COUNT; } @@ -533,7 +535,7 @@ public SegmentContainerFactory.CreateExtensions getDefaultExtensions() { } private ContainerTableExtension createTableExtension(SegmentContainer c, ScheduledExecutorService e) { - return new ContainerTableExtensionImpl(c, this.cacheManager, e); + return new ContainerTableExtensionImpl(TableExtensionConfig.builder().build(), c, this.cacheManager, e); } @Override diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/MetadataStoreTestBase.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/MetadataStoreTestBase.java index fe481af7900..4099254edf9 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/MetadataStoreTestBase.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/MetadataStoreTestBase.java @@ -407,7 +407,7 @@ public void testGetOrAssignStreamSegmentId() { final long baseSegmentId = 1000; final long minSegmentLength = 1; final int segmentCount = 50; - Function getSegmentLength = segmentName -> minSegmentLength + (long) MathHelpers.abs(segmentName.hashCode()); + Function getSegmentLength = segmentName -> minSegmentLength + MathHelpers.abs(segmentName.hashCode()); Function getSegmentStartOffset = segmentName -> getSegmentLength.apply(segmentName) / 2; @Cleanup diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainerTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainerTests.java index cf95621ca68..f75c860797e 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainerTests.java +++ 
b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/ReadOnlySegmentContainerTests.java @@ -18,6 +18,7 @@ import com.google.common.collect.ImmutableMap; import io.pravega.common.util.ByteArraySegment; import io.pravega.segmentstore.contracts.AttributeId; +import io.pravega.segmentstore.contracts.AttributeUpdateCollection; import io.pravega.segmentstore.contracts.Attributes; import io.pravega.segmentstore.contracts.SegmentType; import io.pravega.segmentstore.contracts.StreamSegmentInformation; @@ -132,6 +133,7 @@ public void testUnsupportedOperations() { assertUnsupported("truncateStreamSegment", () -> context.container.truncateStreamSegment(SEGMENT_NAME, 0, TIMEOUT)); assertUnsupported("deleteStreamSegment", () -> context.container.deleteStreamSegment(SEGMENT_NAME, TIMEOUT)); assertUnsupported("mergeTransaction", () -> context.container.mergeStreamSegment(SEGMENT_NAME, SEGMENT_NAME, TIMEOUT)); + assertUnsupported("mergeTransaction", () -> context.container.mergeStreamSegment(SEGMENT_NAME, SEGMENT_NAME, new AttributeUpdateCollection(), TIMEOUT)); } private byte[] populate(int length, int truncationOffset, TestContext context) { diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StorageEventProcessorTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StorageEventProcessorTests.java new file mode 100644 index 00000000000..1a546e1cd30 --- /dev/null +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StorageEventProcessorTests.java @@ -0,0 +1,147 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.segmentstore.server.containers; + + +import io.pravega.common.util.BufferView; +import io.pravega.common.util.ByteArraySegment; +import io.pravega.segmentstore.server.ContainerEventProcessor; +import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorageConfig; +import io.pravega.segmentstore.storage.chunklayer.GarbageCollector; +import io.pravega.test.common.AssertExtensions; +import io.pravega.test.common.ThreadPooledTestSuite; +import lombok.Cleanup; +import lombok.val; +import org.junit.Test; + +import java.io.EOFException; +import java.util.ArrayList; +import java.util.concurrent.CompletableFuture; + +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +public class StorageEventProcessorTests extends ThreadPooledTestSuite { + final static int CONTAINER_ID = 42; + @Test + public void testInvalidArgs() throws Exception { + val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG; + @Cleanup val mockEventProcessor = mock(ContainerEventProcessor.class); + AssertExtensions.assertThrows("Should not allow null eventProcessor", + () -> { + @Cleanup val x = new StorageEventProcessor(CONTAINER_ID, null, v -> CompletableFuture.completedFuture(null), 10); + }, + ex -> ex instanceof NullPointerException); + AssertExtensions.assertThrows("Should not allow null callBack", + () -> { + @Cleanup val x = new StorageEventProcessor(CONTAINER_ID, mockEventProcessor, null, 10); + }, + ex -> ex instanceof NullPointerException); + + // Create valid instance + @Cleanup val x = new StorageEventProcessor(CONTAINER_ID, mockEventProcessor, v -> CompletableFuture.completedFuture(null), 10); + + // Test the invalid parameters + AssertExtensions.assertThrows("Should not allow null queueName", + () -> { + x.addQueue(null, false); + }, + ex -> ex instanceof NullPointerException); + AssertExtensions.assertThrows("Should not allow null queueName", + () -> { + x.addTask(null, GarbageCollector.TaskInfo.builder().name("test").build()); + }, + ex -> ex instanceof NullPointerException); + AssertExtensions.assertThrows("Should not allow null task", + () -> { + x.addTask("test", null); + }, + ex -> ex instanceof NullPointerException); + AssertExtensions.assertFutureThrows("Should not allow a task for an unregistered queue", + x.addTask("nonExistent", GarbageCollector.TaskInfo.builder().name("test").build()), + ex -> ex instanceof IllegalArgumentException); + } + + @Test + public void testForConsumerRegistration() throws Exception { + val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG; + @Cleanup val mockContainerEventProcessor = mock(ContainerEventProcessor.class); + @Cleanup val mockEventProcessor = mock(ContainerEventProcessor.EventProcessor.class); + doReturn(CompletableFuture.completedFuture(mockEventProcessor)).when(mockContainerEventProcessor).forConsumer(eq("test"), any(), any()); + doReturn(CompletableFuture.completedFuture(111L)).when(mockEventProcessor).add(any(), any()); + @Cleanup val x = new StorageEventProcessor(CONTAINER_ID, mockContainerEventProcessor, batch -> CompletableFuture.completedFuture(null), 10); + // Test forConsumer + x.addQueue("test", false).join(); + verify(mockContainerEventProcessor, times(1)).forConsumer(eq("test"), any(), any()); + + x.addTask("test", GarbageCollector.TaskInfo.builder() + .name("task1") + .transactionId(1) + .taskType(2) + .scheduledTime(3) + .build()); +
verify(mockEventProcessor, times(1)).add(any(), any()); + + // Test some tasks + val serializer = new GarbageCollector.TaskInfo.Serializer(); + val data1 = new ArrayList(); + data1.add(serializer.serialize(GarbageCollector.TaskInfo.builder() + .name("task1") + .taskType(GarbageCollector.TaskInfo.DELETE_CHUNK) + .transactionId(1) + .build())); + data1.add(serializer.serialize(GarbageCollector.TaskInfo.builder() + .name("task2") + .taskType(GarbageCollector.TaskInfo.DELETE_CHUNK) + .transactionId(1) + .build())); + x.processEvents(data1).join(); + + // Invalid data + val data2 = new ArrayList(); + data2.add(new ByteArraySegment(new byte[0])); + + AssertExtensions.assertFutureThrows( "should throw parsing error", + x.processEvents(data2), + ex -> ex instanceof EOFException); + } + + @Test + public void testDurableQueueRegistration() throws Exception { + val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG; + @Cleanup val mockContainerEventProcessor = mock(ContainerEventProcessor.class); + @Cleanup val mockEventProcessor = mock(ContainerEventProcessor.EventProcessor.class); + doReturn(CompletableFuture.completedFuture(mockEventProcessor)).when(mockContainerEventProcessor).forDurableQueue(eq("durable")); + doReturn(CompletableFuture.completedFuture(222L)).when(mockEventProcessor).add(any(), any()); + @Cleanup val x = new StorageEventProcessor(CONTAINER_ID, mockContainerEventProcessor, batch -> CompletableFuture.completedFuture(null), 10); + + // Test forDurableQueue + x.addQueue("durable", true).join(); + verify(mockContainerEventProcessor, times(1)).forDurableQueue(eq("durable")); + x.addTask("durable", GarbageCollector.TaskInfo.builder() + .name("task1") + .transactionId(1) + .taskType(2) + .scheduledTime(3) + .build()); + verify(mockEventProcessor, times(1)).add(any(), any()); + } +} + diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentContainerTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentContainerTests.java index 6d584aedecf..5ca7465b51b 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentContainerTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentContainerTests.java @@ -45,7 +45,6 @@ import io.pravega.segmentstore.contracts.StreamSegmentMergedException; import io.pravega.segmentstore.contracts.StreamSegmentNotExistsException; import io.pravega.segmentstore.contracts.StreamSegmentSealedException; -import io.pravega.segmentstore.contracts.StreamSegmentStore; import io.pravega.segmentstore.contracts.TooManyActiveSegmentsException; import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.server.CacheManager; @@ -89,6 +88,7 @@ import io.pravega.segmentstore.server.reading.TestReadResultHandler; import io.pravega.segmentstore.server.tables.ContainerTableExtension; import io.pravega.segmentstore.server.tables.ContainerTableExtensionImpl; +import io.pravega.segmentstore.server.tables.TableExtensionConfig; import io.pravega.segmentstore.server.writer.StorageWriterFactory; import io.pravega.segmentstore.server.writer.WriterConfig; import io.pravega.segmentstore.storage.AsyncStorageWrapper; @@ -160,6 +160,7 @@ import static io.pravega.common.concurrent.ExecutorServiceHelpers.newScheduledThreadPool; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; /** * Tests for StreamSegmentContainer class. 
@@ -261,7 +262,7 @@ public class StreamSegmentContainerTests extends ThreadPooledTestSuite { @Override protected int getThreadPoolSize() { - return 5; + return 2; } /** @@ -1306,7 +1307,8 @@ public void testTransactionOperations() throws Exception { appendToParentsAndTransactions(segmentNames, transactionsBySegment, lengths, segmentContents, context); // 3. Merge all the Transaction. - mergeTransactions(transactionsBySegment, lengths, segmentContents, context); + Futures.allOf(mergeTransactions(transactionsBySegment, lengths, segmentContents, context, false)) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); // 4. Add more appends (to the parent segments) ArrayList> appendFutures = new ArrayList<>(); @@ -1326,7 +1328,6 @@ public void testTransactionOperations() throws Exception { } } } - Futures.allOf(appendFutures).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); // 5. Verify their contents. @@ -1339,6 +1340,213 @@ public void testTransactionOperations() throws Exception { context.container.stopAsync().awaitTerminated(); } + /** + * Test the createTransaction, append-to-Transaction, mergeTransaction methods with attribute updates. + */ + @Test + public void testConditionalTransactionOperations() throws Exception { + @Cleanup + TestContext context = createContext(); + context.container.startAsync().awaitRunning(); + + // 1. Create the StreamSegments. + ArrayList segmentNames = createSegments(context); + HashMap> transactionsBySegment = createTransactions(segmentNames, context); + activateAllSegments(segmentNames, context); + transactionsBySegment.values().forEach(s -> activateAllSegments(s, context)); + + // 2. Add some appends. + HashMap lengths = new HashMap<>(); + HashMap segmentContents = new HashMap<>(); + appendToParentsAndTransactions(segmentNames, transactionsBySegment, lengths, segmentContents, context); + + // 3. Correctly update attribute on parent Segments. Each source Segment will be initialized with a value and + // after the merge, that value should have been updated. + ArrayList> opFutures = new ArrayList<>(); + for (Map.Entry> e : transactionsBySegment.entrySet()) { + String parentName = e.getKey(); + for (String transactionName : e.getValue()) { + opFutures.add(context.container.updateAttributes( + parentName, + AttributeUpdateCollection.from(new AttributeUpdate(AttributeId.fromUUID(UUID.nameUUIDFromBytes(transactionName.getBytes())), + AttributeUpdateType.None, transactionName.hashCode())), + TIMEOUT)); + } + } + Futures.allOf(opFutures).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + + // 4. Merge all the Transactions. Now this should work. + Futures.allOf(mergeTransactions(transactionsBySegment, lengths, segmentContents, context, true)) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + + // 5. Add more appends (to the parent segments) + ArrayList> appendFutures = new ArrayList<>(); + HashMap>> getAttributeFutures = new HashMap<>(); + for (int i = 0; i < 5; i++) { + for (String segmentName : segmentNames) { + RefCountByteArraySegment appendData = getAppendData(segmentName, APPENDS_PER_SEGMENT + i); + appendFutures.add(context.container.append(segmentName, appendData, null, TIMEOUT)); + lengths.put(segmentName, lengths.getOrDefault(segmentName, 0L) + appendData.getLength()); + recordAppend(segmentName, appendData, segmentContents, null); + + // Verify that we can no longer append to Transaction. 
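+ // A merged transaction may reject the append with StreamSegmentMergedException, or with
+ // StreamSegmentNotExistsException if its metadata has already been cleaned up; both outcomes are acceptable here.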
+ for (String transactionName : transactionsBySegment.get(segmentName)) { + AssertExtensions.assertThrows( + "An append was allowed to a merged Transaction " + transactionName, + context.container.append(transactionName, new ByteArraySegment("foo".getBytes()), null, TIMEOUT)::join, + ex -> ex instanceof StreamSegmentMergedException || ex instanceof StreamSegmentNotExistsException); + getAttributeFutures.put(transactionName, context.container.getAttributes(segmentName, + Collections.singletonList(AttributeId.fromUUID(UUID.nameUUIDFromBytes(transactionName.getBytes()))), + true, TIMEOUT)); + } + } + } + + Futures.allOf(appendFutures).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + Futures.allOf(getAttributeFutures.values()).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + + // 6. Verify their contents. + checkReadIndex(segmentContents, lengths, context); + + // 7. Writer moving data to Storage. + waitForSegmentsInStorage(segmentNames, context).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + checkStorage(segmentContents, lengths, context); + + // 8. Verify that the parent Segment contains the expected updated attributes. + for (Map.Entry<String, CompletableFuture<Map<AttributeId, Long>>> transactionAndAttribute : getAttributeFutures.entrySet()) { + Map<AttributeId, Long> segmentAttributeUpdated = transactionAndAttribute.getValue().join(); + AttributeId transactionAttributeId = AttributeId.fromUUID(UUID.nameUUIDFromBytes(transactionAndAttribute.getKey().getBytes())); + // Conditional merges in mergeTransactions() update the attribute value by adding 1. + Assert.assertEquals(transactionAndAttribute.getKey().hashCode() + 1, segmentAttributeUpdated.get(transactionAttributeId).longValue()); + } + + context.container.stopAsync().awaitTerminated(); + } + + /** + * Test the createTransaction, append-to-Transaction, mergeTransaction methods with invalid attribute updates. + */ + @Test + public void testConditionalTransactionOperationsWithWrongAttributes() throws Exception { + @Cleanup + TestContext context = createContext(); + context.container.startAsync().awaitRunning(); + + // 1. Create the StreamSegments. + ArrayList<String> segmentNames = createSegments(context); + HashMap<String, ArrayList<String>> transactionsBySegment = createTransactions(segmentNames, context); + activateAllSegments(segmentNames, context); + transactionsBySegment.values().forEach(s -> activateAllSegments(s, context)); + + // 2. Add some appends. + HashMap<String, Long> lengths = new HashMap<>(); + HashMap<String, ByteArrayOutputStream> segmentContents = new HashMap<>(); + appendToParentsAndTransactions(segmentNames, transactionsBySegment, lengths, segmentContents, context); + + // 3. Wrongly update attributes on the parent Segments. First, we update the attributes with a wrong value to + // validate that Segments do not get merged when attribute updates fail. + ArrayList<CompletableFuture<Void>> opFutures = new ArrayList<>(); + for (Map.Entry<String, ArrayList<String>> e : transactionsBySegment.entrySet()) { + String parentName = e.getKey(); + for (String transactionName : e.getValue()) { + opFutures.add(context.container.updateAttributes( + parentName, + AttributeUpdateCollection.from(new AttributeUpdate(AttributeId.fromUUID(UUID.nameUUIDFromBytes(transactionName.getBytes())), + AttributeUpdateType.None, 0)), + TIMEOUT)); + } + } + Futures.allOf(opFutures).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + + // 4. Merge all the Transactions and expect this to fail.
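+ // (assertMayThrow is used because mergeTransactions() also returns seal futures that may complete normally;
+ // any exception that does surface, however, must be a BadAttributeUpdateException.)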
+ for (CompletableFuture<Void> mergeTransaction : mergeTransactions(transactionsBySegment, lengths, segmentContents, context, true)) { + AssertExtensions.assertMayThrow("If the transaction merge fails, it should be due to BadAttributeUpdateException", + () -> mergeTransaction, + ex -> ex instanceof BadAttributeUpdateException); + } + context.container.stopAsync().awaitTerminated(); + } + + /** + * Test in detail the basic situations that a conditional segment merge can face. + */ + @Test + public void testBasicConditionalMergeScenarios() throws Exception { + @Cleanup + TestContext context = createContext(); + context.container.startAsync().awaitRunning(); + final String parentSegment = "parentSegment"; + + // This will be the attribute update to execute against the parent segment. + Function<String, AttributeUpdateCollection> attributeUpdateForTxn = txnName -> AttributeUpdateCollection.from( + new AttributeUpdate(AttributeId.fromUUID(UUID.nameUUIDFromBytes(txnName.getBytes())), + AttributeUpdateType.ReplaceIfEquals, txnName.hashCode() + 1, txnName.hashCode())); + + Function<String, Long> getAttributeValue = txnName -> { + AttributeId attributeId = AttributeId.fromUUID(UUID.nameUUIDFromBytes(txnName.getBytes())); + return context.container.getAttributes(parentSegment, Collections.singletonList(attributeId), true, TIMEOUT) + .join().get(attributeId); + }; + + // Create a parent Segment. + context.container.createStreamSegment(parentSegment, getSegmentType(parentSegment), null, TIMEOUT) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + SegmentType segmentType = getSegmentType(parentSegment); + + // Case 1: Create an empty transaction that fails to merge conditionally due to bad attributes. + String txnName = NameUtils.getTransactionNameFromId(parentSegment, UUID.randomUUID()); + AttributeId txnAttributeId = AttributeId.fromUUID(UUID.nameUUIDFromBytes(txnName.getBytes())); + context.container.createStreamSegment(txnName, segmentType, null, TIMEOUT) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + AttributeUpdateCollection attributeUpdates = attributeUpdateForTxn.apply(txnName); + AssertExtensions.assertFutureThrows("Transaction was expected to fail on attribute update", + context.container.mergeStreamSegment(parentSegment, txnName, attributeUpdates, TIMEOUT), + ex -> ex instanceof BadAttributeUpdateException); + Assert.assertEquals(Attributes.NULL_ATTRIBUTE_VALUE, (long) getAttributeValue.apply(txnName)); + + // Case 2: Now, we prepare the attributes in the parent segment so the merge of the empty transaction succeeds. + context.container.updateAttributes( + parentSegment, + AttributeUpdateCollection.from(new AttributeUpdate(txnAttributeId, AttributeUpdateType.Replace, txnName.hashCode())), + TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + // As the source segment is empty, the amount of merged data should be 0. + Assert.assertEquals(0L, context.container.mergeStreamSegment(parentSegment, txnName, attributeUpdates, TIMEOUT) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS).getMergedDataLength()); + // But the attribute related to that transaction merge on the parent segment should have been updated. + Assert.assertEquals(txnName.hashCode() + 1L, (long) context.container.getAttributes(parentSegment, + Collections.singletonList(txnAttributeId), true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS).get(txnAttributeId)); + + // Case 3: Create a non-empty transaction that should fail due to a conditional attribute update failure.
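+ // Unlike Case 1, this transaction will contain data, exercising the same rejection path when actual bytes are
+ // pending; Case 4 below verifies that the merge succeeds once the parent's attribute value matches.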
+ txnName = NameUtils.getTransactionNameFromId(parentSegment, UUID.randomUUID()); + txnAttributeId = AttributeId.fromUUID(UUID.nameUUIDFromBytes(txnName.getBytes())); + attributeUpdates = attributeUpdateForTxn.apply(txnName); + context.container.createStreamSegment(txnName, segmentType, null, TIMEOUT) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + // Add some appends to the transaction. + RefCountByteArraySegment appendData = getAppendData(txnName, 1); + context.container.append(txnName, appendData, null, TIMEOUT) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + // Attempt the conditional merge. + AssertExtensions.assertFutureThrows("Transaction was expected to fail on attribute update", + context.container.mergeStreamSegment(parentSegment, txnName, attributeUpdates, TIMEOUT), + ex -> ex instanceof BadAttributeUpdateException); + Assert.assertEquals(Attributes.NULL_ATTRIBUTE_VALUE, (long) getAttributeValue.apply(txnName)); + + // Case 4: Now, we prepare the attributes in the parent segment so the merge of the non-empty transaction succeeds. + context.container.updateAttributes( + parentSegment, + AttributeUpdateCollection.from(new AttributeUpdate(txnAttributeId, AttributeUpdateType.Replace, txnName.hashCode())), + TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + // As the source segment is non-empty, the amount of merged data should be greater than 0. + Assert.assertTrue(context.container.mergeStreamSegment(parentSegment, txnName, attributeUpdates, TIMEOUT) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS).getMergedDataLength() > 0); + // The attribute related to that transaction merge on the parent segment should have been updated as well. + Assert.assertEquals(txnName.hashCode() + 1L, (long) context.container.getAttributes(parentSegment, + Collections.singletonList(txnAttributeId), true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS).get(txnAttributeId)); + + context.container.stopAsync().awaitTerminated(); + } + /** * Tests the ability to perform future (tail) reads. Scenarios tested include: * * Regular appends @@ -1396,7 +1604,8 @@ public void testFutureReads() throws Exception { appendToParentsAndTransactions(segmentNames, transactionsBySegment, lengths, segmentContents, context); // 4. Merge all the Transactions. - mergeTransactions(transactionsBySegment, lengths, segmentContents, context); + Futures.allOf(mergeTransactions(transactionsBySegment, lengths, segmentContents, context, false)) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); // 5. Add more appends (to the parent segments) ArrayList> operationFutures = new ArrayList<>(); @@ -2180,7 +2389,7 @@ public void testEventProcessorFaultyHandler() throws Exception { * @throws Exception */ @Test(timeout = 10000) - public void testEventProcessorMultiplePConsumers() throws Exception { + public void testEventProcessorMultipleConsumers() throws Exception { @Cleanup TestContext context = createContext(); val container = (StreamSegmentContainer) context.container; @@ -2238,6 +2447,8 @@ public void testEventProcessorDurableQueueAndSwitchToConsumer() throws Exception .get(TIMEOUT_FUTURE.toSeconds(), TimeUnit.SECONDS)); // Close the processor and unregister it. processor.close(); + // Make sure that EventProcessor eventually terminates. + ((ContainerEventProcessorImpl.EventProcessorImpl) processor).awaitTerminated(); // Now, re-create the Event Processor with a handler to consume the events. 
ContainerEventProcessor.EventProcessorConfig eventProcessorConfig = @@ -2258,6 +2469,12 @@ public void testEventProcessorDurableQueueAndSwitchToConsumer() throws Exception // Wait for all items to be processed. AssertExtensions.assertEventuallyEquals(true, () -> processorResults.size() == allEventsToProcess, 10000); Assert.assertArrayEquals(processorResults.toArray(), IntStream.iterate(0, v -> v + 1).limit(allEventsToProcess).boxed().toArray()); + // Just check failure callback. + ((ContainerEventProcessorImpl.EventProcessorImpl) processor).failureCallback(new IntentionalException()); + // Close the processor and unregister it. + processor.close(); + // Make sure that EventProcessor eventually terminates. + ((ContainerEventProcessorImpl.EventProcessorImpl) processor).awaitTerminated(); } /** @@ -2277,6 +2494,25 @@ public void testEventProcessorEventRejectionOnMaxOutstanding() throws Exception ContainerEventProcessorTests.testEventRejectionOnMaxOutstanding(containerEventProcessor); } + /** + * Tests that calls to getOrCreateInternalSegment are idempotent and always provide pinned segments. + */ + @Test(timeout = 30000) + public void testPinnedSegmentReload() { + @Cleanup + TestContext context = createContext(); + val container = (StreamSegmentContainer) context.container; + container.startAsync().awaitRunning(); + Function<String, CompletableFuture<DirectSegmentAccess>> segmentSupplier = + ContainerEventProcessorImpl.getOrCreateInternalSegment(container, container.metadataStore, TIMEOUT_EVENT_PROCESSOR_ITERATION); + long segmentId = segmentSupplier.apply("dummySegment").join().getSegmentId(); + for (int i = 0; i < 10; i++) { + DirectSegmentAccess segment = segmentSupplier.apply("dummySegment").join(); + assertTrue(segment.getInfo().isPinned()); + assertEquals(segmentId, segment.getSegmentId()); + } + } + /** * Attempts to activate the targetSegment in the given Container. Since we do not have access to the internals of the * Container, we need to trigger this somehow, hence the need for this complex code. We need to trigger a truncation, @@ -2506,7 +2742,9 @@ private void appendToParentsAndTransactions(Collection segmentNames, Has Futures.allOf(appendFutures).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); } - private void mergeTransactions(HashMap<String, ArrayList<String>> transactionsBySegment, HashMap<String, Long> lengths, HashMap<String, ByteArrayOutputStream> segmentContents, TestContext context) throws Exception { + private ArrayList<CompletableFuture<Void>> mergeTransactions(HashMap<String, ArrayList<String>> transactionsBySegment, HashMap<String, Long> lengths, + HashMap<String, ByteArrayOutputStream> segmentContents, TestContext context, + boolean conditionalMerge) throws Exception { ArrayList<CompletableFuture<Void>> mergeFutures = new ArrayList<>(); int i = 0; for (Map.Entry<String, ArrayList<String>> e : transactionsBySegment.entrySet()) { @@ -2517,7 +2755,15 @@ private void mergeTransactions(HashMap> transactionsBy mergeFutures.add(Futures.toVoid(context.container.sealStreamSegment(transactionName, TIMEOUT))); } - mergeFutures.add(Futures.toVoid(context.container.mergeStreamSegment(parentName, transactionName, TIMEOUT))); + // Use both calls, with and without attribute updates for mergeSegments.
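+ // When conditionalMerge is set, the ReplaceIfEquals update below bumps the per-transaction attribute from
+ // transactionName.hashCode() to hashCode() + 1, which is the value the conditional tests above assert on.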
+ if (conditionalMerge) { + AttributeUpdateCollection attributeUpdates = AttributeUpdateCollection.from( + new AttributeUpdate(AttributeId.fromUUID(UUID.nameUUIDFromBytes(transactionName.getBytes())), + AttributeUpdateType.ReplaceIfEquals, transactionName.hashCode() + 1, transactionName.hashCode())); + mergeFutures.add(Futures.toVoid(context.container.mergeStreamSegment(parentName, transactionName, attributeUpdates, TIMEOUT))); + } else { + mergeFutures.add(Futures.toVoid(context.container.mergeStreamSegment(parentName, transactionName, TIMEOUT))); + } // Update parent length. lengths.put(parentName, lengths.get(parentName) + lengths.get(transactionName)); @@ -2529,7 +2775,7 @@ private void mergeTransactions(HashMap> transactionsBy } } - Futures.allOf(mergeFutures).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + return mergeFutures; } private RefCountByteArraySegment getAppendData(String segmentName, int appendId) { @@ -2677,7 +2923,7 @@ private void activateAllSegments(Collection segmentNames, TestContext co Futures.allOf(futures).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); } - private CompletableFuture activateSegment(String segmentName, StreamSegmentStore container) { + private CompletableFuture activateSegment(String segmentName, SegmentContainer container) { return container.read(segmentName, 0, 1, TIMEOUT).thenAccept(ReadResult::close); } @@ -2759,7 +3005,7 @@ SegmentContainerFactory.CreateExtensions getDefaultExtensions() { } private ContainerTableExtension createTableExtension(SegmentContainer c, ScheduledExecutorService e) { - return new ContainerTableExtensionImpl(c, this.cacheManager, e); + return new ContainerTableExtensionImpl(TableExtensionConfig.builder().build(), c, this.cacheManager, e); } private SegmentContainerFactory.CreateExtensions createExtensions(SegmentContainerFactory.CreateExtensions additional) { @@ -2952,6 +3198,7 @@ public WatchableInMemoryStorageFactory(ScheduledExecutorService executor) { super(executor); } + @Override public Storage createStorageAdapter() { return new WatchableAsyncStorageWrapper(new RollingStorage(this.baseStorage), this.executor); } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentMetadataTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentMetadataTests.java index 9e180228fae..a075276918c 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentMetadataTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/containers/StreamSegmentMetadataTests.java @@ -23,15 +23,12 @@ import io.pravega.segmentstore.server.SegmentMetadataComparer; import io.pravega.segmentstore.server.UpdateableSegmentMetadata; import io.pravega.test.common.AssertExtensions; - import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.Map; import java.util.Random; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeoutException; import java.util.function.BiPredicate; import java.util.function.Consumer; import java.util.stream.Collectors; @@ -153,7 +150,7 @@ public void testCleanupAttributes() { expectedValues.put(coreAttributeId, 1000L); metadata.updateAttributes(Collections.singletonMap(coreAttributeId, 1000L)); for (int i = 0; i < attributeCount; i++) { - AttributeId attributeId = AttributeId.uuid(0, (long) i); + AttributeId attributeId = AttributeId.uuid(0, i); 
extendedAttributes.add(attributeId); metadata.setLastUsed(i); metadata.updateAttributes(Collections.singletonMap(attributeId, (long) i)); @@ -218,7 +215,7 @@ private void checkAttributesEqual(Map expected, Map streamSegmentIds = createStreamSegmentsInMetadata(streamSegmentCount, context.metadata); + AbstractMap transactions = createTransactionsInMetadata(streamSegmentIds, transactionsPerStreamSegment, context.metadata); + List operations = generateOperations(streamSegmentIds, transactions, appendsPerStreamSegment, + METADATA_CHECKPOINT_EVERY, mergeTransactions, sealStreamSegments); + + CompletableFuture failedFuture = CompletableFuture.failedFuture(new ObjectClosedException("Intentional")); + doReturn(failedFuture).when(dataLog).append(any(CompositeArrayView.class), any(Duration.class)); + List completionFutures = processOperations(operations, operationProcessor); + + // Wait for all such operations to complete. We are expecting exceptions, so verify that we do. + AssertExtensions.assertFutureThrows("No operations failed or failed with wrong exception.", + OperationWithCompletion.allOf(completionFutures), + ex -> ex instanceof ObjectClosedException); + + // Verify that the OperationProcessor automatically shuts down and that it has the right failure cause. + ServiceListeners.awaitShutdown(operationProcessor, TIMEOUT, false); + Assert.assertEquals("OperationProcessor is not in a failed state after ObjectClosedException detected.", + Service.State.FAILED, operationProcessor.state()); + Assert.assertFalse("OperationProcessor running when it should not.", operationProcessor.isRunning()); + } + private List processOperations(Collection operations, OperationProcessor operationProcessor) { List completionFutures = new ArrayList<>(); operations.forEach(op -> completionFutures.add(new OperationWithCompletion(op, operationProcessor.process(op, OperationPriority.Normal)))); diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/ThrottlerTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/ThrottlerTests.java index 63f069d7b92..c5dfe8b7fcc 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/ThrottlerTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/ThrottlerTests.java @@ -250,7 +250,7 @@ public void testInterruptedIncreasingDelayMetrics() throws Exception { // the duration supplied. AssertExtensions.assertLessThanOrEqual( "Throttler should be at most the first supplied delay", - (int) suppliedDelays.get(0), + suppliedDelays.get(0), (int) getThrottlerMetric(calculator.getName()) ); } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/operations/ConditionalMergeSegmentOperationTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/operations/ConditionalMergeSegmentOperationTests.java new file mode 100644 index 00000000000..dd3be943910 --- /dev/null +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/logs/operations/ConditionalMergeSegmentOperationTests.java @@ -0,0 +1,37 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.logs.operations; + +import io.pravega.segmentstore.contracts.AttributeId; +import io.pravega.segmentstore.contracts.AttributeUpdate; +import io.pravega.segmentstore.contracts.AttributeUpdateCollection; +import io.pravega.segmentstore.contracts.AttributeUpdateType; + +import java.util.Random; + +/** + * Unit tests for MergeSegmentOperation class when AttributeUpdates are used. + */ +public class ConditionalMergeSegmentOperationTests extends MergeSegmentOperationTests { + @Override + protected MergeSegmentOperation createOperation(Random random) { + AttributeUpdateCollection attributeUpdates = AttributeUpdateCollection.from( + new AttributeUpdate(AttributeId.randomUUID(), AttributeUpdateType.ReplaceIfEquals, 0, Long.MIN_VALUE), + new AttributeUpdate(AttributeId.randomUUID(), AttributeUpdateType.ReplaceIfEquals, 0, Long.MIN_VALUE)); + return new MergeSegmentOperation(random.nextLong(), random.nextLong(), attributeUpdates); + } +} + diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceBuilderConfigTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceBuilderConfigTests.java index 6bec997806a..a7c2496048c 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceBuilderConfigTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceBuilderConfigTests.java @@ -131,8 +131,13 @@ && isSupportedType(f.getGenericType().getTypeName())) { if (p.getDefaultValue() != null && p.getDefaultValue() instanceof Boolean) { configBuilder.with(p, nextValue.incrementAndGet() % 2 == 0); } else { - // Any number can be interpreted as a string or number. - configBuilder.with(p, Integer.toString(nextValue.incrementAndGet())); + //Property security.tls.protocolVersion cannot be an Integer. + if (p.equals(ServiceConfig.TLS_PROTOCOL_VERSION)) { + configBuilder.with(p, p.getDefaultValue()); + } else { + // Any number can be interpreted as a string or number. + configBuilder.with(p, Integer.toString(nextValue.incrementAndGet())); + } } } } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceConfigTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceConfigTests.java index 9c0460c1e41..190cc3e6977 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceConfigTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/ServiceConfigTests.java @@ -73,6 +73,7 @@ public void testDefaultSecurityConfigValues() { assertFalse(config.isEnableTlsReload()); assertEquals("", config.getCertFile()); assertEquals("", config.getKeyFile()); + Assert.assertArrayEquals(new String[]{"TLSv1.2", "TLSv1.3"}, config.getTlsProtocolVersion()); } // region Tests that verify the toString() method. 
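For context on the conditional-merge surface exercised above: both the `mergeTransactions` test helper and `ConditionalMergeSegmentOperationTests` drive the new `mergeStreamSegment` overload with `ReplaceIfEquals` attribute updates. Below is a minimal sketch of that pattern, assembled only from calls that appear in this patch; the wrapper method, segment names, and timeout value are illustrative, not part of the change itself.

```java
import io.pravega.common.concurrent.Futures;
import io.pravega.segmentstore.contracts.AttributeId;
import io.pravega.segmentstore.contracts.AttributeUpdate;
import io.pravega.segmentstore.contracts.AttributeUpdateCollection;
import io.pravega.segmentstore.contracts.AttributeUpdateType;
import io.pravega.segmentstore.server.SegmentContainer;
import java.time.Duration;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

// Sketch: merge a transaction segment into its parent only if a marker attribute
// still holds an expected value. ReplaceIfEquals compares the attribute's current
// value to the last argument and fails the merge (and its attribute update) on a mismatch.
CompletableFuture<Void> mergeIfUnchanged(SegmentContainer container, String parentName, String txnName) {
    AttributeUpdateCollection updates = AttributeUpdateCollection.from(
            new AttributeUpdate(
                    AttributeId.fromUUID(UUID.nameUUIDFromBytes(txnName.getBytes())), // attribute key derived from the txn name
                    AttributeUpdateType.ReplaceIfEquals,
                    txnName.hashCode() + 1,   // new value written on success
                    txnName.hashCode()));     // value the attribute must currently hold
    return Futures.toVoid(container.mergeStreamSegment(parentName, txnName, updates, Duration.ofSeconds(30)));
}
```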
diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistryTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistryTests.java index 19cabf99409..56cfd56c975 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistryTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentContainerRegistryTests.java @@ -104,6 +104,37 @@ public void testGetContainer() throws Exception { ex -> ex instanceof ContainerNotFoundException); } + /** + * Tests the getContainers method for registered and unregistered containers. + */ + @Test + public void testGetContainers() throws Exception { + final int containerCount = 1000; + TestContainerFactory factory = new TestContainerFactory(); + @Cleanup + StreamSegmentContainerRegistry registry = new StreamSegmentContainerRegistry(factory, executorService()); + + HashSet expectedContainerIds = new HashSet<>(); + for (int containerId = 0; containerId < containerCount; containerId++) { + registry.startContainer(containerId, TIMEOUT); + expectedContainerIds.add(containerId); + } + + HashSet actualHandleIds = new HashSet<>(); + for (SegmentContainer segmentContainer : registry.getContainers()) { + actualHandleIds.add(segmentContainer.getId()); + Assert.assertTrue("Wrong container Java type.", segmentContainer instanceof TestContainer); + segmentContainer.close(); + } + + AssertExtensions.assertContainsSameElements("Unexpected container ids registered.", expectedContainerIds, actualHandleIds); + + AssertExtensions.assertThrows( + "getContainer did not throw when passed an invalid container id.", + () -> registry.getContainer(containerCount + 1), + ex -> ex instanceof ContainerNotFoundException); + } + /** * Tests the ability to stop the container via the stopContainer() method. */ @@ -390,6 +421,12 @@ public CompletableFuture mergeStreamSegment(String tar return null; } + @Override + public CompletableFuture mergeStreamSegment(String targetStreamSegment, String sourceStreamSegment, + AttributeUpdateCollection attributes, Duration timeout) { + return null; + } + @Override public CompletableFuture sealStreamSegment(String streamSegmentName, Duration timeout) { return null; diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentStoreTestBase.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentStoreTestBase.java index cf1a5f2499c..327a411a722 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentStoreTestBase.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/store/StreamSegmentStoreTestBase.java @@ -424,6 +424,87 @@ void endToEndProcess(boolean verifySegmentContent, boolean useChunkedStorage) th log.info("Finished."); } + /** + * Tests an end-to-end scenario for the SegmentStore, using a read-write SegmentStore to add some segment data. + * Using another instance to verify that the segments have been successfully persisted to Storage. + * This test does not use ChunkedSegmentStorage. + * + * @throws Exception If an exception occurred. + */ + @Test(timeout = 120000) + public void testFlushToStorage() throws Exception { + endToEndFlushToStorage(false); + } + + /** + * Tests an end-to-end scenario for the SegmentStore, using a read-write SegmentStore to add some segment data. 
+ * Using another instance to verify that the segments have been successfully persisted to Storage. + * This test uses ChunkedSegmentStorage. + * + * @throws Exception If an exception occurred. + */ + @Test(timeout = 120000) + public void testFlushToStorageWithChunkedStorage() throws Exception { + endToEndFlushToStorage(true); + } + + /** + * End to end test to verify storage flush API. + * + * @param useChunkedStorage whether to use ChunkedSegmentStorage or instead use AsyncStorageWrapper. + * @throws Exception If an exception occurred. + */ + void endToEndFlushToStorage(boolean useChunkedStorage) throws Exception { + ArrayList segmentNames; + HashMap lengths = new HashMap<>(); + ArrayList appendBuffers = new ArrayList<>(); + HashMap startOffsets = new HashMap<>(); + HashMap segmentContents = new HashMap<>(); + long expectedAttributeValue = 0; + int instanceId = 0; + + // Phase 1: Create segments and add some appends. + log.info("Starting Phase 1."); + try (val builder = createBuilder(++instanceId, useChunkedStorage)) { + val segmentStore = builder.createStreamSegmentService(); + + // Create the StreamSegments. + segmentNames = createSegments(segmentStore); + log.info("Created Segments: {}.", String.join(", ", segmentNames)); + + // Add some appends. + ArrayList segmentsAndTransactions = new ArrayList<>(segmentNames); + appendData(segmentsAndTransactions, segmentContents, lengths, appendBuffers, segmentStore).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + expectedAttributeValue += ATTRIBUTE_UPDATE_DELTA; + log.info("Finished appending data."); + + checkSegmentStatus(lengths, startOffsets, false, false, expectedAttributeValue, segmentStore); + log.info("Finished Phase 1"); + } + + // Verify all buffers have been released. + checkAppendLeaks(appendBuffers); + appendBuffers.clear(); + + log.info("Starting Phase 2."); + try (val builder = createBuilder(++instanceId, useChunkedStorage);) { + val segmentStore = builder.createStreamSegmentService(); + for (int id = 1; id < CONTAINER_COUNT; id++) { + segmentStore.flushToStorage(id, TIMEOUT); + } + // Wait for all the data to move to Storage. + waitForSegmentsInStorage(segmentNames, segmentStore) + .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + log.info("Finished waiting for segments in Storage."); + + checkStorage(segmentContents, segmentStore); + log.info("Finished Storage check."); + log.info("Finished Phase 2."); + } + + log.info("Finished."); + } + /** * Tests an end-to-end scenario for the SegmentStore where operations are continuously executed while the SegmentStore * itself is being fenced out by new instances. 
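The `endToEndFlushToStorage` test above drives the new per-container storage-flush API. A condensed sketch of the call pattern it verifies is below, assuming (as this test base does for its other operations) that the returned `CompletableFuture` can be waited on; `segmentStore`, the container count, and the timeout stand in for the test's fixtures.

```java
// Sketch: ask each container to persist its outstanding data to Storage, waiting
// for each flush before verifying contents (as Phase 2 above does via
// waitForSegmentsInStorage and checkStorage).
void flushAllContainers(StreamSegmentStore segmentStore, int containerCount, Duration timeout) throws Exception {
    for (int containerId = 0; containerId < containerCount; containerId++) {
        segmentStore.flushToStorage(containerId, timeout)
                    .get(timeout.toMillis(), TimeUnit.MILLISECONDS);
    }
}
```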
The difference between this and testEndToEnd() is that this does not diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyCacheTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyCacheTests.java index b1e8677c2fc..3111a5f8024 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyCacheTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyCacheTests.java @@ -400,7 +400,7 @@ public void testCacheEviction() { keyCache.updateGenerations(i, 0, false); val keyHash = KEY_HASHER.hash(newTableKey(rnd).getKey()); for (long segmentId = 0; segmentId < segmentCount; segmentId++) { - keyCache.includeExistingKey(segmentId, keyHash, (long) i); + keyCache.includeExistingKey(segmentId, keyHash, i); expectedResult.put(new TestKey(segmentId, keyHash), new CacheBucketOffset(i, false)); } } @@ -684,7 +684,8 @@ private void applyBatches(HashMap batchesBySegment, long ba .collect(Collectors.toList()); // Fetch initial tail hashes now, before we apply the updates - val expectedTailHashes = new HashMap(keyCache.getTailHashes(segmentId)); + val expectedTailHashes = new HashMap<>(keyCache.getTailHashes(segmentId)); + Assert.assertEquals(getExpectedTailUpdateDelta(expectedTailHashes.values()), keyCache.getTailUpdateDelta(segmentId)); // Update the Cache. val batchUpdateResult = keyCache.includeUpdateBatch(segmentId, e.getValue(), batchOffset); @@ -714,9 +715,22 @@ private void applyBatches(HashMap batchesBySegment, long ba val actual = tailHashes.get(expected.getKey()); Assert.assertEquals("Unexpected tail hash.", expected.getValue(), actual); } + Assert.assertEquals(getExpectedTailUpdateDelta(expectedTailHashes.values()), keyCache.getTailUpdateDelta(segmentId)); } } + private int getExpectedTailUpdateDelta(Collection tailOffsets) { + int r = 0; + for (val c : tailOffsets) { + if (c.isRemoval()) { + r--; + } else { + r++; + } + } + return r; + } + private void updateSegmentIndexOffsets(ContainerKeyCache keyCache, long offset) { for (long segmentId = 0; segmentId < SEGMENT_COUNT; segmentId++) { keyCache.updateSegmentIndexOffset(segmentId, offset); diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyIndexTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyIndexTests.java index 97eea6afe4a..385b8f41c9d 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyIndexTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/ContainerKeyIndexTests.java @@ -19,9 +19,7 @@ import io.pravega.common.TimeoutTimer; import io.pravega.common.concurrent.Futures; import io.pravega.common.util.BufferView; -import io.pravega.common.util.BufferViewComparator; import io.pravega.common.util.ByteArraySegment; -import io.pravega.segmentstore.contracts.SegmentType; import io.pravega.segmentstore.contracts.tables.BadKeyVersionException; import io.pravega.segmentstore.contracts.tables.KeyNotExistsException; import io.pravega.segmentstore.contracts.tables.TableAttributes; @@ -31,7 +29,6 @@ import io.pravega.segmentstore.server.CacheManager; import io.pravega.segmentstore.server.CachePolicy; import io.pravega.segmentstore.server.SegmentMock; -import io.pravega.segmentstore.server.TableStoreMock; import io.pravega.segmentstore.storage.cache.CacheStorage; import io.pravega.segmentstore.storage.cache.DirectMemoryCache; 
import io.pravega.test.common.AssertExtensions; @@ -80,7 +77,6 @@ public class ContainerKeyIndexTests extends ThreadPooledTestSuite { private static final long SHORT_TIMEOUT_MILLIS = TIMEOUT.toMillis() / 3; private static final KeyHasher HASHER = KeyHashers.DEFAULT_HASHER; private static final int TEST_MAX_TAIL_CACHE_PRE_INDEX_LENGTH = 128 * 1024; - private static final Comparator KEY_COMPARATOR = BufferViewComparator.create()::compare; @Rule public Timeout globalTimeout = new Timeout(2 * TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); @@ -415,13 +411,36 @@ public void testGetBucketOffsetDirect() { /** * Checks the ability for the {@link ContainerKeyIndex} class to properly handle recovery situations where the Table - * Segment may not have been fully indexed when the first request for it is received. + * Segment may not have been fully indexed when the first request for it is received. This test verifies the case + * when we can do preindexing all at once. */ @Test - public void testRecovery() throws Exception { + public void testRecoveryOneBatch() throws Exception { + testRecovery(TableExtensionConfig.builder() + .with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_LENGTH, (long) TEST_MAX_TAIL_CACHE_PRE_INDEX_LENGTH) + .with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_BATCH_SIZE, Integer.MAX_VALUE) + .with(TableExtensionConfig.RECOVERY_TIMEOUT, (int) ContainerKeyIndexTests.SHORT_TIMEOUT_MILLIS) + .build()); + } + + /** + * Checks the ability for the {@link ContainerKeyIndex} class to properly handle recovery situations where the Table + * Segment may not have been fully indexed when the first request for it is received. This test verifies the case + * when the unindexed part is too large to process at once so it needs to be broken down into batches. + */ + @Test + public void testRecoveryMultipleBatches() throws Exception { + testRecovery(TableExtensionConfig.builder() + .with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_LENGTH, (long) TEST_MAX_TAIL_CACHE_PRE_INDEX_LENGTH) + .with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_BATCH_SIZE, 1024) + .with(TableExtensionConfig.RECOVERY_TIMEOUT, (int) ContainerKeyIndexTests.SHORT_TIMEOUT_MILLIS) + .build()); + } + + private void testRecovery(TableExtensionConfig config) throws Exception { val s = new EntrySerializer(); @Cleanup - val context = new TestContext(); + val context = new TestContext(config); // Setup the segment with initial attributes. val iw = new IndexWriter(HASHER, executorService()); @@ -796,8 +815,9 @@ private void checkPrevailingUpdate(List updates, TestContext context Assert.assertEquals("Unexpected offset.", expectedOffset, actualOffset); } - // Check sorted index. - val keys = highestUpdate.batch.getItems().stream().map(i -> i.getKey().getKey()).collect(Collectors.toList()); + val expectedUniqueEntryCount = highestUpdateHashes.size(); + val actualUniqueEntryCount = context.index.getUniqueEntryCount(context.segment.getMetadata()); + Assert.assertEquals("Unexpected value for getUniqueEntryCount", expectedUniqueEntryCount, actualUniqueEntryCount); } private void checkBackpointers(List updates, TestContext context) { @@ -936,7 +956,6 @@ private class TestContext implements AutoCloseable { final CacheStorage cacheStorage; final CacheManager cacheManager; final SegmentMock segment; - final TableStoreMock sortedKeyStorage; final ContainerKeyIndex index; final TimeoutTimer timer; final Random random; @@ -945,21 +964,22 @@ private class TestContext implements AutoCloseable { TestContext() { // This is for most tests. 
Due to variability in test environments, we do not want to set a very small value // for most tests; we will customize this only for those tests that we want to test this feature on. - this(TableExtensionConfig.builder().build().getMaxUnindexedLength()); + this(TableExtensionConfig.builder() + .with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_LENGTH, (long) TEST_MAX_TAIL_CACHE_PRE_INDEX_LENGTH) + .with(TableExtensionConfig.RECOVERY_TIMEOUT, (int) ContainerKeyIndexTests.SHORT_TIMEOUT_MILLIS) + .build()); } TestContext(int maxUnindexedSize) { + this(TableExtensionConfig.builder().with(TableExtensionConfig.MAX_UNINDEXED_LENGTH, maxUnindexedSize).build()); + } + + TestContext(TableExtensionConfig config) { this.cacheStorage = new DirectMemoryCache(Integer.MAX_VALUE); this.cacheManager = new CacheManager(CachePolicy.INFINITE, this.cacheStorage, executorService()); this.segment = new SegmentMock(executorService()); this.segment.updateAttributes(TableAttributes.DEFAULT_VALUES); - this.sortedKeyStorage = new TableStoreMock(executorService()); - this.sortedKeyStorage.createSegment(this.segment.getInfo().getName(), SegmentType.TABLE_SEGMENT_HASH, TIMEOUT).join(); - this.defaultConfig = TableExtensionConfig.builder() - .maxTailCachePreIndexLength(TEST_MAX_TAIL_CACHE_PRE_INDEX_LENGTH) - .maxUnindexedLength(maxUnindexedSize) - .recoveryTimeout(Duration.ofMillis(ContainerKeyIndexTests.SHORT_TIMEOUT_MILLIS)) - .build(); + this.defaultConfig = config; this.index = createIndex(this.defaultConfig, executorService()); this.timer = new TimeoutTimer(TIMEOUT); this.random = new Random(0); diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayoutTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayoutTests.java index 9fde8d53d16..38e458ab7a3 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayoutTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/FixedKeyLengthTableSegmentLayoutTests.java @@ -69,7 +69,7 @@ protected boolean supportsDeleteIfEmpty() { } @Override - protected WriterTableProcessor createWriterTableProcessor(TableContext context) { + protected WriterTableProcessor createWriterTableProcessor(ContainerTableExtension ext, TableContext context) { return null; } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayoutTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayoutTests.java index e473f00be34..323bf074abf 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayoutTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/HashTableSegmentLayoutTests.java @@ -73,8 +73,8 @@ protected boolean supportsDeleteIfEmpty() { } @Override - protected WriterTableProcessor createWriterTableProcessor(TableContext context) { - val p = (WriterTableProcessor) context.ext.createWriterSegmentProcessors(context.segment().getMetadata()).stream().findFirst().orElse(null); + protected WriterTableProcessor createWriterTableProcessor(ContainerTableExtension ext, TableContext context) { + val p = (WriterTableProcessor) ext.createWriterSegmentProcessors(context.segment().getMetadata()).stream().findFirst().orElse(null); Assert.assertNotNull(p); context.segment().setAppendCallback((offset, length) -> addToProcessor(offset, length, 
p)); return p; @@ -185,8 +185,11 @@ public void testRecovery() throws Exception { // Generate a set of TestEntryData (List, ExpectedResults. // Process each TestEntryData in turn. After each time, re-create the Extension. // Verify gets are blocked on indexing. Then index, verify unblocked and then re-create the Extension, and verify again. + val recoveryConfig = TableExtensionConfig.builder() + .with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_BATCH_SIZE, (MAX_KEY_LENGTH + MAX_VALUE_LENGTH) * 11) + .build(); @Cleanup - val context = new TableContext(executorService()); + val context = new TableContext(recoveryConfig, executorService()); // Create the Segment. context.ext.createSegment(SEGMENT_NAME, SegmentType.TABLE_SEGMENT_HASH, TIMEOUT).join(); @@ -258,7 +261,7 @@ public void testThrottling() throws Exception { // We set up throttling such that we allow 'unthrottledCount' through, but block (throttle) on the next one. val config = TableExtensionConfig.builder() - .maxUnindexedLength(unthrottledCount * (keyLength + valueLength + EntrySerializer.HEADER_LENGTH)) + .with(TableExtensionConfig.MAX_UNINDEXED_LENGTH, unthrottledCount * (keyLength + valueLength + EntrySerializer.HEADER_LENGTH)) .build(); val s = new EntrySerializer(); diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableContext.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableContext.java index 77b57776b37..56cc42e5b44 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableContext.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableContext.java @@ -299,6 +299,12 @@ public CompletableFuture getStreamSegmentInfo(String streamSe throw new UnsupportedOperationException("Not Expected"); } + @Override + public CompletableFuture mergeStreamSegment(String targetSegmentName, String sourceSegmentName, + AttributeUpdateCollection attributes, Duration timeout) { + throw new UnsupportedOperationException("Not Expected"); + } + @Override public CompletableFuture mergeStreamSegment(String targetSegmentName, String sourceSegmentName, Duration timeout) { throw new UnsupportedOperationException("Not Expected"); diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableEntryDeltaIteratorTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableEntryDeltaIteratorTests.java index 5ad6fe50a7d..b5d7f322ba9 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableEntryDeltaIteratorTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableEntryDeltaIteratorTests.java @@ -63,7 +63,9 @@ public class TableEntryDeltaIteratorTests extends ThreadPooledTestSuite { private static final int NUM_ENTRIES = 10; private static final int MAX_UPDATE_COUNT = 10000; - private static final TableExtensionConfig CONFIG = TableExtensionConfig.builder().maxCompactionSize(50000).build(); + private static final TableExtensionConfig CONFIG = TableExtensionConfig.builder() + .with(TableExtensionConfig.MAX_COMPACTION_SIZE, 50000) + .build(); private static final TableEntry NON_EXISTING_ENTRY = TableEntry.notExists( new ByteArraySegment("NULL".getBytes()), @@ -83,6 +85,7 @@ public void testEmptySegment() { @Test public void testUsingInvalidArgs() { + @Cleanup TableContext context = new TableContext(CONFIG, executorService()); context.ext.createSegment(SEGMENT_NAME, SegmentType.TABLE_SEGMENT_HASH, 
TIMEOUT).join(); AssertExtensions.assertSuppliedFutureThrows( diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableExtensionConfigTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableExtensionConfigTests.java new file mode 100644 index 00000000000..54f74009782 --- /dev/null +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableExtensionConfigTests.java @@ -0,0 +1,71 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.server.tables; + +import io.pravega.common.util.ConfigurationException; +import io.pravega.test.common.AssertExtensions; +import java.time.Duration; +import lombok.val; +import org.junit.Assert; +import org.junit.Test; + +/** + * Unit tests for the {@link TableExtensionConfig} class. + */ +public class TableExtensionConfigTests { + @Test + public void testDefaultValues() { + val b = TableExtensionConfig.builder(); + val defaultConfig = b.build(); + Assert.assertEquals(EntrySerializer.MAX_BATCH_SIZE * 4, defaultConfig.getMaxTailCachePreIndexLength()); + Assert.assertEquals(EntrySerializer.MAX_BATCH_SIZE * 4, defaultConfig.getMaxTailCachePreIndexBatchLength()); + Assert.assertEquals(Duration.ofSeconds(60), defaultConfig.getRecoveryTimeout()); + Assert.assertEquals(EntrySerializer.MAX_BATCH_SIZE * 4, defaultConfig.getMaxUnindexedLength()); + Assert.assertEquals(EntrySerializer.MAX_SERIALIZATION_LENGTH * 4, defaultConfig.getMaxCompactionSize()); + Assert.assertEquals(Duration.ofSeconds(30), defaultConfig.getCompactionFrequency()); + Assert.assertEquals(75, defaultConfig.getDefaultMinUtilization()); + Assert.assertEquals(EntrySerializer.MAX_SERIALIZATION_LENGTH * 4 * 4, defaultConfig.getDefaultRolloverSize()); + Assert.assertEquals(EntrySerializer.MAX_BATCH_SIZE, defaultConfig.getMaxBatchSize()); + } + + @Test + public void testBuilder() { + val b = TableExtensionConfig.builder(); + b.with(TableExtensionConfig.DEFAULT_MIN_UTILIZATION, 101); + AssertExtensions.assertThrows(ConfigurationException.class, b::build); // 101 is out of the range [0, 100] + + b.with(TableExtensionConfig.DEFAULT_MIN_UTILIZATION, 10); + b.with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_LENGTH, 11L); + b.with(TableExtensionConfig.MAX_TAIL_CACHE_PREINDEX_BATCH_SIZE, 111); + b.with(TableExtensionConfig.RECOVERY_TIMEOUT, 12); + b.with(TableExtensionConfig.MAX_UNINDEXED_LENGTH, 13); + b.with(TableExtensionConfig.MAX_COMPACTION_SIZE, 14); + b.with(TableExtensionConfig.COMPACTION_FREQUENCY, 15); + b.with(TableExtensionConfig.DEFAULT_ROLLOVER_SIZE, 16L); + b.with(TableExtensionConfig.MAX_BATCH_SIZE, 17); + + val c = b.build(); + Assert.assertEquals(10, c.getDefaultMinUtilization()); + Assert.assertEquals(11L, c.getMaxTailCachePreIndexLength()); + Assert.assertEquals(111, c.getMaxTailCachePreIndexBatchLength()); + Assert.assertEquals(Duration.ofMillis(12), c.getRecoveryTimeout()); + Assert.assertEquals(13, c.getMaxUnindexedLength()); 
+ Assert.assertEquals(14, c.getMaxCompactionSize()); + Assert.assertEquals(Duration.ofMillis(15), c.getCompactionFrequency()); + Assert.assertEquals(16, c.getDefaultRolloverSize()); + Assert.assertEquals(17, c.getMaxBatchSize()); + } +} diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableSegmentLayoutTestBase.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableSegmentLayoutTestBase.java index 083e98251be..acb776f6524 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableSegmentLayoutTestBase.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/TableSegmentLayoutTestBase.java @@ -108,13 +108,18 @@ protected int getThreadPoolSize() { */ protected abstract boolean supportsDeleteIfEmpty(); + protected WriterTableProcessor createWriterTableProcessor(TableContext context) { + return createWriterTableProcessor(context.ext, context); + } + /** * When implemented in a derived class, creates a {@link WriterTableProcessor} for async indexing. * + * @param ext The extension to attach to. * @param context TableContext. * @return A new {@link WriterTableProcessor}, or null if not supported ({@link #shouldExpectWriterTableProcessors()} == false). */ - protected abstract WriterTableProcessor createWriterTableProcessor(TableContext context); + protected abstract WriterTableProcessor createWriterTableProcessor(ContainerTableExtension ext, TableContext context); /** * When implemented in a derived class, determines if {@link WriterTableProcessor}s are expected/supported. @@ -210,24 +215,6 @@ public void testDeleteIfEmpty() { ex -> ex instanceof StreamSegmentNotExistsException); } - /** - * Verifies that the methods that are not yet implemented are not implemented by accident without unit tests. - * This test should be removed once every method tested in it is implemented. - */ - @Test - public void testUnimplementedMethods() { - @Cleanup - val context = new TableContext(executorService()); - AssertExtensions.assertThrows( - "merge() is implemented.", - () -> context.ext.merge(SEGMENT_NAME, SEGMENT_NAME, TIMEOUT), - ex -> ex instanceof UnsupportedOperationException); - AssertExtensions.assertThrows( - "seal() is implemented.", - () -> context.ext.seal(SEGMENT_NAME, TIMEOUT), - ex -> ex instanceof UnsupportedOperationException); - } - /** * Tests operations that currently accept an offset argument, and whether they fail expectedly. */ @@ -345,8 +332,8 @@ public void testCompactionWithIterators() { @SneakyThrows protected void testTableSegmentCompacted(KeyHasher keyHasher, CheckTable checkTable) { val config = TableExtensionConfig.builder() - .maxCompactionSize((MAX_KEY_LENGTH + MAX_VALUE_LENGTH) * BATCH_SIZE) - .compactionFrequency(Duration.ofMillis(1)) + .with(TableExtensionConfig.MAX_COMPACTION_SIZE, (MAX_KEY_LENGTH + MAX_VALUE_LENGTH) * BATCH_SIZE) + .with(TableExtensionConfig.COMPACTION_FREQUENCY, 1) .build(); @Cleanup val context = new TableContext(config, keyHasher, executorService()); @@ -542,6 +529,8 @@ protected void testBatchUpdates(int updateCount, int maxBatchSize, KeyHasher key @Cleanup val ext2 = context.createExtension(); + @Cleanup + val processor2 = createWriterTableProcessor(ext2, context); checkTable.accept(last.expectedEntries, removedKeys, ext2); // Finally, remove all data. 
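The table tests above all migrate from `TableExtensionConfig`'s old fluent setters (`maxCompactionSize(...)`, `compactionFrequency(...)`) to typed property keys passed through `builder().with(...)`, matching the now externally-configurable config. A small sketch of the new style follows; the values are taken from the tests, and note that time-valued properties such as `COMPACTION_FREQUENCY` and `RECOVERY_TIMEOUT` are supplied as integer milliseconds (testBuilder() above gets `Duration.ofMillis(15)` back for an input of 15).

```java
import io.pravega.segmentstore.server.tables.TableExtensionConfig;

// Sketch: build a TableExtensionConfig with typed property keys.
TableExtensionConfig config = TableExtensionConfig.builder()
        .with(TableExtensionConfig.MAX_COMPACTION_SIZE, 50000)   // max bytes per compaction pass
        .with(TableExtensionConfig.COMPACTION_FREQUENCY, 1)      // milliseconds, per the builder test above
        .build();
```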
@@ -550,6 +539,9 @@ protected void testBatchUpdates(int updateCount, int maxBatchSize, KeyHasher key .collect(Collectors.toList()); ext2.remove(SEGMENT_NAME, finalRemoval, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); removedKeys.addAll(last.expectedEntries.keySet()); + if (processor2 != null) { + processor2.flush(TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + } checkTable.accept(Collections.emptyMap(), removedKeys, ext2); deleteSegment(Collections.emptyList(), supportsDeleteIfEmpty(), ext2); } @@ -600,6 +592,9 @@ protected void check(Map expectedEntries, Collection keyInfo, TableStore tableStore val expectedResult = new ArrayList>(); for (val e : bySegment.entrySet()) { String segmentName = e.getKey(); + boolean fixedKeyLength = isFixedKeyLength(segmentName); + val info = tableStore.getInfo(segmentName, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + Assert.assertEquals(segmentName, info.getName()); + AssertExtensions.assertGreaterThan("Unexpected length for " + segmentName, 0, info.getLength()); + val expectedKeyLength = isFixedKeyLength(segmentName) ? getFixedKeyLength(segmentName) : 0; + Assert.assertEquals("Unexpected key length for " + segmentName, expectedKeyLength, info.getKeyLength()); + Assert.assertEquals(fixedKeyLength, info.getType().isFixedKeyLengthTableSegment()); + val keys = new ArrayList(); for (val se : e.getValue()) { keys.add(se.getKey()); @@ -288,20 +299,20 @@ private void check(HashMap keyInfo, TableStore tableStore val result = new ArrayList(); return ei.forEachRemaining(i -> result.addAll(i.getEntries()), executorService()) .thenApply(v -> { - if (isFixedKeyLength(segmentName)) { + if (fixedKeyLength) { checkSortedOrder(result); } return result; }); }); iteratorFutures.add(entryIteratorFuture); - if (!isFixedKeyLength(segmentName)) { + if (!fixedKeyLength) { unsortedIteratorFutures.add(entryIteratorFuture); // For simplicity, always start from beginning of TableSegment. 
offsetIteratorFutures.add(tableStore.entryDeltaIterator(segmentName, 0L, TIMEOUT) .thenCompose(ei -> { val result = new ArrayList>(); - return ei.forEachRemaining(i -> result.add(i), executorService()) + return ei.forEachRemaining(result::add, executorService()) .thenApply(v -> result); })); } diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/WriterTableProcessorTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/WriterTableProcessorTests.java index 4c849adb8de..1e266adba1c 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/WriterTableProcessorTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/tables/WriterTableProcessorTests.java @@ -23,7 +23,6 @@ import io.pravega.segmentstore.contracts.AttributeUpdate; import io.pravega.segmentstore.contracts.AttributeUpdateCollection; import io.pravega.segmentstore.contracts.AttributeUpdateType; -import io.pravega.segmentstore.contracts.SegmentType; import io.pravega.segmentstore.contracts.tables.TableAttributes; import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; @@ -31,7 +30,6 @@ import io.pravega.segmentstore.server.DirectSegmentAccess; import io.pravega.segmentstore.server.SegmentMetadata; import io.pravega.segmentstore.server.SegmentMock; -import io.pravega.segmentstore.server.TableStoreMock; import io.pravega.segmentstore.server.UpdateableSegmentMetadata; import io.pravega.segmentstore.server.containers.StreamSegmentMetadata; import io.pravega.segmentstore.server.logs.operations.CachedStreamSegmentAppendOperation; @@ -74,6 +72,8 @@ public class WriterTableProcessorTests extends ThreadPooledTestSuite { private static final int UPDATE_BATCH_SIZE = 689; private static final double REMOVE_FRACTION = 0.3; // 30% of generated operations are removes. private static final int MAX_COMPACT_LENGTH = (MAX_KEY_LENGTH + MAX_VALUE_LENGTH) * UPDATE_BATCH_SIZE; + private static final int DEFAULT_MAX_FLUSH_SIZE = 128 * 1024 * 1024; // Default from TableWriterConnector. + private static final int MAX_FLUSH_ATTEMPTS = 100; // To make sure we don't get stuck in an infinite flush loop. private static final Duration TIMEOUT = Duration.ofSeconds(30); @Rule public Timeout globalTimeout = new Timeout(TIMEOUT.toMillis() * 4, TimeUnit.MILLISECONDS); @@ -147,6 +147,15 @@ public void testFlush() throws Exception { testFlushWithHasher(KeyHashers.DEFAULT_HASHER); } + /** + * Tests the {@link WriterTableProcessor#flush} method using a non-collision-prone KeyHasher and a max flush size + * small enough to force flushing in multiple batches. + */ + @Test + public void testFlushSmallBatches() throws Exception { + int maxBatchSize = (MAX_KEY_LENGTH + MAX_VALUE_LENGTH) * 7; + testFlushWithHasher(KeyHashers.DEFAULT_HASHER, 0, maxBatchSize); + } + /** * Tests the {@link WriterTableProcessor#flush} method using a collision-prone KeyHasher. */ @@ -155,7 +164,6 @@ public void testFlushCollisions() throws Exception { testFlushWithHasher(KeyHashers.COLLISION_HASHER); } - /** * Tests the {@link WriterTableProcessor#flush} method using a non-collision-prone KeyHasher and forcing compactions.
*/ @@ -234,16 +242,85 @@ public void testReconcileTableIndexOffset() throws Exception { Assert.assertFalse("Unexpected result from mustFlush() after full reconciliation.", context.processor.mustFlush()); } + /** + * Tests {@link WriterTableProcessor.OperationAggregator} + */ + @Test + public void testOperationAggregator() { + @Cleanup + val context = new TestContext(); + val a = new WriterTableProcessor.OperationAggregator(123L); + + // Empty (nothing in it). + Assert.assertEquals(123L, a.getLastIndexedOffset()); + Assert.assertEquals(-1L, a.getFirstOffset()); + Assert.assertEquals(-1L, a.getLastOffset()); + Assert.assertEquals(Operation.NO_SEQUENCE_NUMBER, a.getFirstSequenceNumber()); + Assert.assertTrue(a.isEmpty()); + Assert.assertEquals(0, a.size()); + Assert.assertEquals(-1L, a.getLastIndexToProcessAtOnce(12345)); + + a.setLastIndexedOffset(124L); + Assert.assertEquals(124L, a.getLastIndexedOffset()); + + // Add one operation. + val op1 = generateSimulatedAppend(123L, 1000, context); + a.add(op1); + Assert.assertEquals(124L, a.getLastIndexedOffset()); + Assert.assertEquals(op1.getStreamSegmentOffset(), a.getFirstOffset()); + Assert.assertEquals(op1.getLastStreamSegmentOffset(), a.getLastOffset()); + Assert.assertEquals(op1.getSequenceNumber(), a.getFirstSequenceNumber()); + Assert.assertFalse(a.isEmpty()); + Assert.assertEquals(1, a.size()); + Assert.assertEquals(op1.getLastStreamSegmentOffset(), a.getLastIndexToProcessAtOnce(12)); + Assert.assertEquals(op1.getLastStreamSegmentOffset(), a.getLastIndexToProcessAtOnce(123456)); + + // Add a second operation. + val op2 = generateSimulatedAppend(op1.getLastStreamSegmentOffset() + 1, 1000, context); + a.add(op2); + Assert.assertEquals(124L, a.getLastIndexedOffset()); + Assert.assertEquals(op1.getStreamSegmentOffset(), a.getFirstOffset()); + Assert.assertEquals(op2.getLastStreamSegmentOffset(), a.getLastOffset()); + Assert.assertEquals(op1.getSequenceNumber(), a.getFirstSequenceNumber()); + Assert.assertFalse(a.isEmpty()); + Assert.assertEquals(2, a.size()); + Assert.assertEquals(op1.getLastStreamSegmentOffset(), a.getLastIndexToProcessAtOnce(12)); + Assert.assertEquals(op1.getLastStreamSegmentOffset(), a.getLastIndexToProcessAtOnce((int) op1.getLength() + 1)); + Assert.assertEquals(op2.getLastStreamSegmentOffset(), a.getLastIndexToProcessAtOnce(123456)); + + // Test setLastIndexedOffset. + boolean r = a.setLastIndexedOffset(op1.getStreamSegmentOffset() + 1); + Assert.assertFalse(r); + Assert.assertEquals(124L, a.getLastIndexedOffset()); + Assert.assertEquals(2, a.size()); + + r = a.setLastIndexedOffset(op2.getStreamSegmentOffset()); + Assert.assertTrue(r); + Assert.assertEquals(op2.getStreamSegmentOffset(), a.getLastIndexedOffset()); + Assert.assertEquals(1, a.size()); + + r = a.setLastIndexedOffset(op2.getLastStreamSegmentOffset() + 1); + Assert.assertTrue(r); + Assert.assertEquals(op2.getLastStreamSegmentOffset() + 1, a.getLastIndexedOffset()); + Assert.assertEquals(0, a.size()); + } + private void testFlushWithHasher(KeyHasher hasher) throws Exception { testFlushWithHasher(hasher, 0); } private void testFlushWithHasher(KeyHasher hasher, int minSegmentUtilization) throws Exception { + testFlushWithHasher(hasher, minSegmentUtilization, DEFAULT_MAX_FLUSH_SIZE); + } + + private void testFlushWithHasher(KeyHasher hasher, int minSegmentUtilization, int maxFlushSize) throws Exception { // Generate a set of operations, each containing one or more entries. Each entry is an update or a remove. 
// Towards the beginning we have more updates than removes, then removes will prevail. @Cleanup val context = new TestContext(hasher); context.setMinUtilization(minSegmentUtilization); + context.setMaxFlushSize(maxFlushSize); + val batches = generateAndPopulateEntries(context); val allKeys = new HashMap(); // All keys, whether added or removed. @@ -258,12 +335,16 @@ private void testFlushWithHasher(KeyHasher hasher, int minSegmentUtilization) th Assert.assertEquals("Unexpected LUSN before call to flush().", batch.operations.get(0).getSequenceNumber(), context.processor.getLowestUncommittedSequenceNumber()); - // Flush. - val initialNotifyCount = context.connector.notifyCount.get(); - val f1 = context.processor.flush(TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); - AssertExtensions.assertGreaterThan("No calls to notifyIndexOffsetChanged().", - initialNotifyCount, context.connector.notifyCount.get()); - Assert.assertTrue(f1.isAnythingFlushed()); + // Flush at least once. If maxFlushSize is not the default, then we're in a test that wants to verify repeated + // flushes; in that case we should flush until there's nothing more to flush. + int remainingFlushes = MAX_FLUSH_ATTEMPTS; + do { + val initialNotifyCount = context.connector.notifyCount.get(); + val f1 = context.processor.flush(TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS); + AssertExtensions.assertGreaterThan("No calls to notifyIndexOffsetChanged().", + initialNotifyCount, context.connector.notifyCount.get()); + Assert.assertTrue(f1.isAnythingFlushed()); + } while (maxFlushSize < DEFAULT_MAX_FLUSH_SIZE && --remainingFlushes > 0 && context.processor.mustFlush()); // Post-flush validation. Assert.assertFalse("Unexpected value from mustFlush() after call to flush().", context.processor.mustFlush()); @@ -498,7 +579,7 @@ private CachedStreamSegmentAppendOperation generateSimulatedAppend(long offset, } @RequiredArgsConstructor - private class TestBatchData { + private static class TestBatchData { final HashMap expectedEntries; final List operations = new ArrayList<>(); } @@ -513,9 +594,9 @@ private class TestContext implements AutoCloseable { final TableWriterConnectorImpl connector; final WriterTableProcessor processor; final IndexReader indexReader; - final TableStoreMock tableStoreMock; final Random random; final AtomicLong sequenceNumber; + final AtomicInteger maxFlushSize = new AtomicInteger(128 * 1024 * 1024); TestContext() { this(KeyHashers.DEFAULT_HASHER); @@ -526,7 +607,6 @@ private class TestContext implements AutoCloseable { this.serializer = new EntrySerializer(); this.keyHasher = hasher; this.segmentMock = new SegmentMock(this.metadata, executorService()); - this.tableStoreMock = new TableStoreMock(executorService()); this.random = new Random(0); this.sequenceNumber = new AtomicLong(0); initializeSegment(); @@ -545,6 +625,10 @@ long nextSequenceNumber() { return this.sequenceNumber.incrementAndGet(); } + void setMaxFlushSize(int value) { + this.maxFlushSize.set(value); + } + void setMinUtilization(int value) { Preconditions.checkArgument(value >= 0 && value <= 100); this.segmentMock.updateAttributes( @@ -562,9 +646,6 @@ private void initializeSegment() { new AttributeUpdate(TableAttributes.COMPACTION_OFFSET, AttributeUpdateType.Replace, INITIAL_LAST_INDEXED_OFFSET)), TIMEOUT).join(); this.segmentMock.append(new ByteArraySegment(new byte[(int) INITIAL_LAST_INDEXED_OFFSET]), null, TIMEOUT).join(); - - // Create the Table Segment Mock. 
- this.tableStoreMock.createSegment(SEGMENT_NAME, SegmentType.TABLE_SEGMENT_HASH, TIMEOUT).join(); } private class TableWriterConnectorImpl implements TableWriterConnector { @@ -617,6 +698,11 @@ public int getMaxCompactionSize() { return MAX_COMPACT_LENGTH; } + @Override + public int getMaxFlushSize() { + return maxFlushSize.get(); + } + @Override public void close() { this.closed.set(true); diff --git a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/writer/StorageWriterTests.java b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/writer/StorageWriterTests.java index 7282d480103..c50d0edc4db 100644 --- a/segmentstore/server/src/test/java/io/pravega/segmentstore/server/writer/StorageWriterTests.java +++ b/segmentstore/server/src/test/java/io/pravega/segmentstore/server/writer/StorageWriterTests.java @@ -1028,7 +1028,7 @@ private ArrayList createSegments(TestContext context) { // Add the operation to the log. StreamSegmentMapOperation mapOp = new StreamSegmentMapOperation(context.storage.getStreamSegmentInfo(name, TIMEOUT).join()); - mapOp.setStreamSegmentId((long) i); + mapOp.setStreamSegmentId(i); context.dataSource.add(mapOp); } diff --git a/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLog.java b/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLog.java index 8588150b2d6..1eed152fb43 100644 --- a/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLog.java +++ b/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLog.java @@ -181,7 +181,7 @@ public void close() { } // Close the write queue and cancel the pending writes. - this.writes.close().forEach(w -> w.fail(new ObjectClosedException("BookKeeperLog has been closed."), true)); + this.writes.close().forEach(w -> w.fail(new ObjectClosedException(this), true)); if (writeLedger != null) { try { diff --git a/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLogFactory.java b/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLogFactory.java index f3688d9be8c..84ee695d509 100644 --- a/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLogFactory.java +++ b/segmentstore/storage/impl/src/main/java/io/pravega/segmentstore/storage/impl/bookkeeper/BookKeeperLogFactory.java @@ -172,7 +172,7 @@ private BookKeeper startBookKeeperClient() throws Exception { .setZkTimeout((int) this.config.getZkConnectionTimeout().toMillis()); if (this.config.isTLSEnabled()) { - config = (ClientConfiguration) config.setTLSProvider("OpenSSL"); + config = config.setTLSProvider("OpenSSL"); config = config.setTLSTrustStore(this.config.getTlsTrustStore()); config.setTLSTrustStorePasswordPath(this.config.getTlsTrustStorePasswordPath()); } diff --git a/segmentstore/storage/impl/src/test/java/io/pravega/segmentstore/storage/impl/bookkeeper/SequentialAsyncProcessorTests.java b/segmentstore/storage/impl/src/test/java/io/pravega/segmentstore/storage/impl/bookkeeper/SequentialAsyncProcessorTests.java index 10306b75d8f..a8c3e7deb71 100644 --- a/segmentstore/storage/impl/src/test/java/io/pravega/segmentstore/storage/impl/bookkeeper/SequentialAsyncProcessorTests.java +++ b/segmentstore/storage/impl/src/test/java/io/pravega/segmentstore/storage/impl/bookkeeper/SequentialAsyncProcessorTests.java @@ -25,6 +25,7 @@ import 
java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; +import lombok.Cleanup; import lombok.val; import org.junit.Assert; import org.junit.Rule; @@ -57,6 +58,7 @@ public void testRunAsync() throws Exception { val retry = Retry.withExpBackoff(1, 2, 3) .retryWhen(t -> true); val error = new AtomicReference(); + @Cleanup val p = new SequentialAsyncProcessor( () -> { count.incrementAndGet(); @@ -98,6 +100,7 @@ public void testRunAsyncErrors() throws Exception { return Exceptions.unwrap(t) instanceof IntentionalException; }); val error = new CompletableFuture(); + @Cleanup val p = new SequentialAsyncProcessor( () -> { count.incrementAndGet(); diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/StorageFactoryInfo.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/StorageFactoryInfo.java index 7ce6a062062..f8515004c26 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/StorageFactoryInfo.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/StorageFactoryInfo.java @@ -26,6 +26,7 @@ public class StorageFactoryInfo { /** * Name of storage binding. + * This name is used in the config file to uniquely identify the storage binding to load. */ private final String name; diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/SyncStorage.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/SyncStorage.java index 1db0b930ecb..5094b208bb0 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/SyncStorage.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/SyncStorage.java @@ -237,6 +237,7 @@ default boolean supportsReplace() { * * @param segment A {@link SegmentHandle} representing the Segment to replace. * @param contents A {@link BufferView} representing the new contents of the Segment. + * @throws StreamSegmentException An error occurred; generally one of the below: * @throws StreamSegmentNotExistsException When the given Segment does not exist in Storage. * @throws StorageNotPrimaryException When this Storage instance is no longer primary for this Segment (it was * fenced out). diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/AbstractTaskQueueManager.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/AbstractTaskQueueManager.java new file mode 100644 index 00000000000..6121de02fb9 --- /dev/null +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/AbstractTaskQueueManager.java @@ -0,0 +1,42 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package io.pravega.segmentstore.storage.chunklayer; + +import java.util.concurrent.CompletableFuture; + +/** + * Manages a group of background task queues. + * + * @param <TaskType> Type of tasks.
+ */ +public interface AbstractTaskQueueManager<TaskType> extends AutoCloseable { + /** + * Adds a queue by the given name. + * + * @param queueName Name of the queue. + * @param ignoreProcessing Whether the processing should be ignored. + */ + CompletableFuture addQueue(String queueName, Boolean ignoreProcessing); + + /** + * Adds a task to the queue. + * + * @param queueName Name of the queue. + * @param task Task to add. + */ + CompletableFuture addTask(String queueName, TaskType task); +} diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/BaseChunkStorage.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/BaseChunkStorage.java index 1bcbb1e587a..8a3044cfc14 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/BaseChunkStorage.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/BaseChunkStorage.java @@ -129,6 +129,7 @@ protected CompletableFuture doSetReadOnlyAsync(ChunkHandle handle, boolean }, opContext); } + @Override protected CompletableFuture doTruncateAsync(ChunkHandle handle, long offset, OperationContext opContext) { return execute(() -> doTruncate(handle, offset), opContext); } diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkIterator.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkIterator.java index 3ef41bd31c2..e8e9b2ebede 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkIterator.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkIterator.java @@ -22,28 +22,29 @@ import io.pravega.segmentstore.storage.metadata.SegmentMetadata; import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Executor; import java.util.function.BiConsumer; /** * Helper class for iterating over list of chunks. */ class ChunkIterator { - private final ChunkedSegmentStorage chunkedSegmentStorage; + private final Executor executor; private final MetadataTransaction txn; private volatile String currentChunkName; private volatile String lastChunkName; private volatile ChunkMetadata currentMetadata; - ChunkIterator(ChunkedSegmentStorage chunkedSegmentStorage, MetadataTransaction txn, SegmentMetadata segmentMetadata) { - this.chunkedSegmentStorage = Preconditions.checkNotNull(chunkedSegmentStorage, "chunkedSegmentStorage"); + ChunkIterator(Executor executor, MetadataTransaction txn, SegmentMetadata segmentMetadata) { + this.executor = Preconditions.checkNotNull(executor, "executor"); this.txn = Preconditions.checkNotNull(txn, "txn"); Preconditions.checkNotNull(segmentMetadata, "segmentMetadata"); // The following can be null. this.currentChunkName = segmentMetadata.getFirstChunk(); } - ChunkIterator(ChunkedSegmentStorage chunkedSegmentStorage, MetadataTransaction txn, String startChunkName, String lastChunkName) { - this.chunkedSegmentStorage = Preconditions.checkNotNull(chunkedSegmentStorage, "chunkedSegmentStorage"); + ChunkIterator(Executor executor, MetadataTransaction txn, String startChunkName, String lastChunkName) { + this.executor = Preconditions.checkNotNull(executor, "executor"); + this.txn = Preconditions.checkNotNull(txn, "txn"); // The following can be null.
this.currentChunkName = startChunkName; @@ -59,7 +60,7 @@ public CompletableFuture forEach(BiConsumer consume consumer.accept(currentMetadata, currentChunkName); // Move next currentChunkName = currentMetadata.getNextChunk(); - }, chunkedSegmentStorage.getExecutor()), - chunkedSegmentStorage.getExecutor()); + }, executor), + executor); } } diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorage.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorage.java index 57ea38228ee..bcf11bc3616 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorage.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorage.java @@ -49,8 +49,22 @@ * * For concats, {@link ChunkStorage} supports both native and append, ChunkedSegmentStorage will invoke appropriate method depending * on size of target and source chunks. (Eg. ECS) - * - * It is recommended that the implementations should extend {@link BaseChunkStorage}. + *

+ * To implement custom {@link ChunkStorage} implementation:
+ * <ol>
+ * <li>Implement {@link ChunkStorage}. It is recommended that the implementations should extend {@link AsyncBaseChunkStorage}
+ * or {@link BaseChunkStorage}.</li>
+ * <li>Implement {@link io.pravega.segmentstore.storage.SimpleStorageFactory}.</li>
+ * <li>Implement {@link io.pravega.segmentstore.storage.StorageFactoryCreator}.
+ * <ul>
+ * <li>Return {@link io.pravega.segmentstore.storage.StorageFactoryInfo} object containing name to use to identify.
+ * This identifier is used in config.properties file to identify implementation to use.</li>
+ * <li>The bindings are loaded using ServiceLoader (https://docs.oracle.com/javase/7/docs/api/java/util/ServiceLoader.html);
+ * see the registration sketch just below.</li>
+ * <li>Add resource file named "io.pravega.segmentstore.storage.StorageFactoryCreator" to the implementation jar.
+ * That file contains name of the class implementing {@link io.pravega.segmentstore.storage.StorageFactoryCreator}.</li>
+ * </ul>
+ * </li>
+ * </ol>
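Referring to the registration steps above, a hedged sketch of the ServiceLoader binding. MyStorageFactoryCreator and com.example are hypothetical; only the resource-file name comes from the javadoc.

    import java.util.ServiceLoader;
    import io.pravega.segmentstore.storage.StorageFactoryCreator;

    // 1) Ship the implementation class in the jar:
    //        public class MyStorageFactoryCreator implements StorageFactoryCreator { ... }
    // 2) Add a provider-configuration file on the classpath:
    //        META-INF/services/io.pravega.segmentstore.storage.StorageFactoryCreator
    //    containing the single line:
    //        com.example.MyStorageFactoryCreator
    // 3) The host can then discover all bound implementations:
    ServiceLoader<StorageFactoryCreator> creators = ServiceLoader.load(StorageFactoryCreator.class);
    for (StorageFactoryCreator creator : creators) {
        System.out.println("Found storage binding: " + creator.getClass().getName());
    }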
*/ @Beta public interface ChunkStorage extends AutoCloseable, StatsReporter { diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageMetrics.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageMetrics.java index 09516305902..4b71505702d 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageMetrics.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageMetrics.java @@ -65,6 +65,19 @@ public class ChunkStorageMetrics { static final Counter SLTS_SYSTEM_WRITE_BYTES = STATS_LOGGER.createCounter(MetricsNames.SLTS_SYSTEM_WRITE_BYTES); static final Counter SLTS_CONCAT_BYTES = STATS_LOGGER.createCounter(MetricsNames.SLTS_CONCAT_BYTES); + static final Counter SLTS_GC_TASK_PROCESSED = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_TASK_PROCESSED); + + static final Counter SLTS_GC_CHUNK_NEW = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_CHUNK_NEW); + static final Counter SLTS_GC_CHUNK_QUEUED = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_CHUNK_QUEUED); + static final Counter SLTS_GC_CHUNK_DELETED = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_CHUNK_DELETED); + static final Counter SLTS_GC_CHUNK_RETRY = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_CHUNK_RETRY); + static final Counter SLTS_GC_CHUNK_FAILED = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_CHUNK_FAILED); + + static final Counter SLTS_GC_SEGMENT_QUEUED = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_SEGMENT_QUEUED); + static final Counter SLTS_GC_SEGMENT_PROCESSED = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_SEGMENT_PROCESSED); + static final Counter SLTS_GC_SEGMENT_RETRY = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_SEGMENT_RETRY); + static final Counter SLTS_GC_SEGMENT_FAILED = STATS_LOGGER.createCounter(MetricsNames.SLTS_GC_SEGMENT_FAILED); + static final OpStatsLogger SLTS_NUM_CHUNKS_READ = STATS_LOGGER.createStats(MetricsNames.SLTS_NUM_CHUNKS_READ); static final OpStatsLogger SLTS_SYSTEM_NUM_CHUNKS_READ = STATS_LOGGER.createStats(MetricsNames.SLTS_SYSTEM_NUM_CHUNKS_READ); static final OpStatsLogger SLTS_NUM_CHUNKS_ADDED = STATS_LOGGER.createStats(MetricsNames.SLTS_NUM_CHUNKS_ADDED); diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorage.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorage.java index 3195528cc40..2df63dc1e7e 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorage.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorage.java @@ -20,6 +20,7 @@ import io.pravega.common.Exceptions; import io.pravega.common.LoggerHelpers; import io.pravega.common.Timer; +import io.pravega.common.concurrent.Futures; import io.pravega.common.concurrent.MultiKeySequentialProcessor; import io.pravega.common.util.ImmutableDate; import io.pravega.segmentstore.contracts.SegmentProperties; @@ -46,7 +47,6 @@ import javax.annotation.concurrent.GuardedBy; import java.io.InputStream; import java.time.Duration; -import java.util.ArrayList; import java.util.Arrays; import java.util.ConcurrentModificationException; import java.util.HashSet; @@ -87,7 +87,7 @@ public class ChunkedSegmentStorage implements Storage, StatsReporter { /** * Metadata store containing all storage data. 
-     * Initialized by segment container via {@link ChunkedSegmentStorage#bootstrap(SnapshotInfoStore)} ()}.
+     * Initialized by segment container via {@link ChunkedSegmentStorage#bootstrap(SnapshotInfoStore, AbstractTaskQueueManager)}.
      */
     @Getter
     private final ChunkMetadataStore metadataStore;
@@ -118,7 +118,7 @@ public class ChunkedSegmentStorage implements Storage, StatsReporter {
 
     /**
      * Id of the current Container.
-     * Initialized by segment container via {@link ChunkedSegmentStorage#bootstrap(SnapshotInfoStore)} ()}.
+     * Initialized by segment container via {@link ChunkedSegmentStorage#bootstrap(SnapshotInfoStore, AbstractTaskQueueManager)}.
      */
     @Getter
     private final int containerId;
@@ -154,6 +154,8 @@ public class ChunkedSegmentStorage implements Storage, StatsReporter {
 
     private final ScheduledFuture<?> reporter;
 
+    private AbstractTaskQueueManager<GarbageCollector.TaskInfo> taskQueue;
+
     /**
      * Creates a new instance of the ChunkedSegmentStorage class.
     *
@@ -176,7 +178,10 @@ public ChunkedSegmentStorage(int containerId, ChunkStorage chunkStorage, ChunkMe
                 chunkStorage,
                 metadataStore,
                 config,
-                executor);
+                executor,
+                System::currentTimeMillis,
+                duration -> Futures.delayedFuture(duration, executor));
+
         this.systemJournal = new SystemJournal(containerId,
                 chunkStorage,
                 metadataStore,
@@ -191,14 +196,22 @@ public ChunkedSegmentStorage(int containerId, ChunkStorage chunkStorage, ChunkMe
      * Initializes the ChunkedSegmentStorage and bootstrap the metadata about storage metadata segments by reading and processing the journal.
      *
      * @param snapshotInfoStore Store that saves {@link SnapshotInfo}.
+     * @param taskQueue Task queue to use for garbage collection.
      */
-    public CompletableFuture<Void> bootstrap(SnapshotInfoStore snapshotInfoStore) {
+    public CompletableFuture<Void> bootstrap(SnapshotInfoStore snapshotInfoStore, AbstractTaskQueueManager<GarbageCollector.TaskInfo> taskQueue) {
         this.logPrefix = String.format("ChunkedSegmentStorage[%d]", containerId);
-
+        this.taskQueue = taskQueue;
         // Now bootstrap
-        return this.systemJournal.bootstrap(epoch, snapshotInfoStore)
-                .thenRun(() -> garbageCollector.initialize());
+        return this.systemJournal.bootstrap(epoch, snapshotInfoStore);
+    }
+
+    /**
+     * Concludes and finalizes the bootstrap.
+     * @return A CompletableFuture that, when completed, indicates that the bootstrap has concluded.
+     */
+    public CompletableFuture<Void> finishBootstrap() {
+        return garbageCollector.initialize(taskQueue);
     }
 
     @Override
@@ -211,7 +224,9 @@ public CompletableFuture<SegmentHandle> openWrite(String streamSegmentName) {
         checkInitialized();
         return executeSerialized(() -> {
             val traceId = LoggerHelpers.traceEnter(log, "openWrite", streamSegmentName);
+            val timer = new Timer();
             Preconditions.checkNotNull(streamSegmentName, "streamSegmentName");
+            log.debug("{} openWrite - started segment={}.", logPrefix, streamSegmentName);
             return tryWith(metadataStore.beginTransaction(false, streamSegmentName),
                     txn -> txn.get(streamSegmentName)
                             .thenComposeAsync(storageMetadata -> {
@@ -222,14 +237,7 @@ public CompletableFuture<SegmentHandle> openWrite(String streamSegmentName) {
                                 final CompletableFuture<Void> f;
                                 if (segmentMetadata.getOwnerEpoch() < this.epoch) {
                                     log.debug("{} openWrite - Segment needs ownership change - segment={}.", logPrefix, segmentMetadata.getName());
-                                    f = claimOwnership(txn, segmentMetadata)
-                                            .exceptionally(e -> {
-                                                val ex = Exceptions.unwrap(e);
-                                                if (ex instanceof StorageMetadataWritesFencedOutException) {
-                                                    throw new CompletionException(new StorageNotPrimaryException(streamSegmentName, ex));
-                                                }
-                                                throw new CompletionException(ex);
-                                            });
+                                    f = claimOwnership(txn, segmentMetadata);
                                 } else {
                                     f = CompletableFuture.completedFuture(null);
                                 }
@@ -239,11 +247,19 @@ public CompletableFuture<SegmentHandle> openWrite(String streamSegmentName) {
 
                                     // This instance is the owner, return a handle.
                                     val retValue = SegmentStorageHandle.writeHandle(streamSegmentName);
+                                    log.debug("{} openWrite - finished segment={} latency={}.", logPrefix, streamSegmentName, timer.getElapsedMillis());
                                     LoggerHelpers.traceLeave(log, "openWrite", traceId, retValue);
                                     return retValue;
                                 }, executor);
                             }, executor),
-                    executor);
+                    executor)
+                    .handleAsync((v, ex) -> {
+                        if (null != ex) {
+                            log.debug("{} openWrite - exception segment={} latency={}.", logPrefix, streamSegmentName, timer.getElapsedMillis(), ex);
+                            handleException(streamSegmentName, ex);
+                        }
+                        return v;
+                    }, executor);
         }, streamSegmentName);
     }
 
@@ -339,7 +355,7 @@ public CompletableFuture<Void> create(String streamSegmentName, Segment
         return executeSerialized(() -> {
             val traceId = LoggerHelpers.traceEnter(log, "create", streamSegmentName, rollingPolicy);
             val timer = new Timer();
-
+            log.debug("{} create - started segment={}, rollingPolicy={}.", logPrefix, streamSegmentName, rollingPolicy);
             return tryWith(metadataStore.beginTransaction(false, streamSegmentName), txn -> {
                 // Retrieve metadata and make sure it does not exist.
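The bootstrap path is now two-phase: bootstrap(...) replays the system journal and remembers the task queue, and finishBootstrap() registers the GC queues and starts collecting. A sketch of the expected call order from a segment container; 'storage', 'snapshotInfoStore' and 'queue' are assumed to be provided by the container and are not part of this patch.

    // Sketch of the two-phase bootstrap call order introduced by this patch.
    CompletableFuture<Void> ready =
            storage.bootstrap(snapshotInfoStore, queue)           // replay system journal, remember the task queue
                   .thenCompose(v -> storage.finishBootstrap());  // then register GC queues and start collecting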
                return txn.get(streamSegmentName)
@@ -365,27 +381,28 @@
                                 Duration elapsed = timer.getElapsed();
                                 SLTS_CREATE_LATENCY.reportSuccessEvent(elapsed);
                                 SLTS_CREATE_COUNT.inc();
-                                log.debug("{} create - segment={}, rollingPolicy={}, latency={}.", logPrefix, streamSegmentName, rollingPolicy, elapsed.toMillis());
+                                log.debug("{} create - finished segment={}, rollingPolicy={}, latency={}.", logPrefix, streamSegmentName, rollingPolicy, elapsed.toMillis());
                                 LoggerHelpers.traceLeave(log, "create", traceId, retValue);
                                 return retValue;
-                            }, executor)
-                            .handleAsync((v, e) -> {
-                                handleException(streamSegmentName, e);
-                                return v;
                             }, executor);
                 }, executor);
+            }, executor)
+            .handleAsync((v, e) -> {
+                if (null != e) {
+                    log.debug("{} create - exception segment={}, rollingPolicy={}, latency={}.", logPrefix, streamSegmentName, rollingPolicy, timer.getElapsedMillis(), e);
+                    handleException(streamSegmentName, e);
+                }
+                return v;
+            }, executor);
         }, streamSegmentName);
     }
 
     private void handleException(String streamSegmentName, Throwable e) {
-        if (null != e) {
         val ex = Exceptions.unwrap(e);
         if (ex instanceof StorageMetadataWritesFencedOutException) {
             throw new CompletionException(new StorageNotPrimaryException(streamSegmentName, ex));
         }
         throw new CompletionException(ex);
-        }
     }
 
     @Override
@@ -412,6 +429,8 @@ public CompletableFuture<Void> seal(SegmentHandle handle, Duration timeout) {
         checkInitialized();
         return executeSerialized(() -> {
             val traceId = LoggerHelpers.traceEnter(log, "seal", handle);
+            Timer timer = new Timer();
+            log.debug("{} seal - started segment={}.", logPrefix, handle.getSegmentName());
             Preconditions.checkNotNull(handle, "handle");
             String streamSegmentName = handle.getSegmentName();
             Preconditions.checkNotNull(streamSegmentName, "streamSegmentName");
@@ -435,16 +454,14 @@ public CompletableFuture<Void> seal(SegmentHandle handle, Duration timeout) {
                             }
                         }, executor)
                         .thenRunAsync(() -> {
-                            log.debug("{} seal - segment={}.", logPrefix, handle.getSegmentName());
+                            log.debug("{} seal - finished segment={} latency={}.", logPrefix, handle.getSegmentName(), timer.getElapsedMillis());
                             LoggerHelpers.traceLeave(log, "seal", traceId, handle);
-                        }, executor)
-                        .exceptionally(e -> {
-                            val ex = Exceptions.unwrap(e);
-                            if (ex instanceof StorageMetadataWritesFencedOutException) {
-                                throw new CompletionException(new StorageNotPrimaryException(streamSegmentName, ex));
-                            }
-                            throw new CompletionException(ex);
-                        }), executor);
+                        }, executor), executor)
+                    .exceptionally(ex -> {
+                        log.warn("{} seal - exception segment={} latency={}.", logPrefix, handle.getSegmentName(), timer.getElapsedMillis(), ex);
+                        handleException(streamSegmentName, ex);
+                        return null;
+                    });
         }, handle.getSegmentName());
     }
 
@@ -499,6 +516,7 @@ public CompletableFuture<Void> delete(SegmentHandle handle, Duration timeout) {
         }
         return executeSerialized(() -> {
             val traceId = LoggerHelpers.traceEnter(log, "delete", handle);
+            log.debug("{} delete - started segment={}.", logPrefix, handle.getSegmentName());
             val timer = new Timer();
             val streamSegmentName = handle.getSegmentName();
             return tryWith(metadataStore.beginTransaction(false, streamSegmentName), txn -> txn.get(streamSegmentName)
@@ -509,41 +527,29 @@ public CompletableFuture<Void> delete(SegmentHandle handle, Duration timeout) {
                         checkOwnership(streamSegmentName, segmentMetadata);
 
                         segmentMetadata.setActive(false);
-
-                        // Delete chunks
-                        val chunksToDelete = new ArrayList<String>();
-                        return new ChunkIterator(this, txn, segmentMetadata)
-
.forEach((metadata, name) -> { - metadata.setActive(false); - txn.update(metadata); - chunksToDelete.add(name); - }) - .thenRunAsync(() -> deleteBlockIndexEntriesForChunk(txn, streamSegmentName, segmentMetadata.getStartOffset(), segmentMetadata.getLength()), - executor) - .thenRunAsync(() -> txn.delete(streamSegmentName), executor) - .thenComposeAsync(v -> - txn.commit() - .thenRunAsync(() -> { - // Collect garbage - garbageCollector.addToGarbage(chunksToDelete); - - // Update the read index. - readIndexCache.remove(streamSegmentName); - - val elapsed = timer.getElapsed(); - SLTS_DELETE_LATENCY.reportSuccessEvent(elapsed); - SLTS_DELETE_COUNT.inc(); - log.debug("{} delete - segment={}, latency={}.", logPrefix, handle.getSegmentName(), elapsed.toMillis()); - LoggerHelpers.traceLeave(log, "delete", traceId, handle); - }, executor) - .exceptionally(e -> { - val ex = Exceptions.unwrap(e); - if (ex instanceof StorageMetadataWritesFencedOutException) { - throw new CompletionException(new StorageNotPrimaryException(streamSegmentName, ex)); - } - throw new CompletionException(ex); - }), executor); - }, executor), executor); + txn.update(segmentMetadata); + // Collect garbage + return garbageCollector.addSegmentToGarbage(txn.getVersion(), streamSegmentName) + .thenComposeAsync(vv -> { + // Commit metadata. + return txn.commit() + .thenRunAsync(() -> { + // Update the read index. + readIndexCache.remove(streamSegmentName); + + val elapsed = timer.getElapsed(); + SLTS_DELETE_LATENCY.reportSuccessEvent(elapsed); + SLTS_DELETE_COUNT.inc(); + log.debug("{} delete - finished segment={}, latency={}.", logPrefix, handle.getSegmentName(), elapsed.toMillis()); + LoggerHelpers.traceLeave(log, "delete", traceId, handle); + }, executor); + }, executor); + }, executor), executor) + .exceptionally( ex -> { + log.warn("{} delete - exception segment={}, latency={}.", logPrefix, handle.getSegmentName(), timer.getElapsedMillis(), ex); + handleException(streamSegmentName, ex); + return null; + }); }, handle.getSegmentName()); } @@ -579,8 +585,10 @@ public CompletableFuture openRead(String streamSegmentName) { checkInitialized(); return executeParallel(() -> { val traceId = LoggerHelpers.traceEnter(log, "openRead", streamSegmentName); + val timer = new Timer(); // Validate preconditions and return handle. Preconditions.checkNotNull(streamSegmentName, "streamSegmentName"); + log.debug("{} openRead - started segment={}.", logPrefix, streamSegmentName); return tryWith(metadataStore.beginTransaction(false, streamSegmentName), txn -> txn.get(streamSegmentName).thenComposeAsync(storageMetadata -> { val segmentMetadata = (SegmentMetadata) storageMetadata; @@ -593,24 +601,25 @@ public CompletableFuture openRead(String streamSegmentName) { // In case of a fail-over, length recorded in metadata will be lagging behind its actual length in the storage. // This can happen with lazy commits that were still not committed at the time of fail-over. 
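Note the shape of the new delete path above: delete() no longer walks and removes chunks inline; it tombstones the segment and defers to the garbage collector, keyed by the metadata transaction's version. A sketch of that handoff, using names from this patch; the surrounding wiring is assumed.

    // Deferred-delete handoff (names from this patch; wiring assumed).
    segmentMetadata.setActive(false);            // tombstone the segment in metadata
    txn.update(segmentMetadata);
    garbageCollector.addSegmentToGarbage(txn.getVersion(), streamSegmentName)
            .thenCompose(v -> txn.commit());     // enqueue first, then commit
    // GarbageCollector.processBatch() re-queues any task whose transaction id is
    // still active (metadataStore.isTransactionActive), so the GC never races a
    // transaction that has not yet committed.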
f = executeSerialized(() -> - claimOwnership(txn, segmentMetadata) - .exceptionally(e -> { - val ex = Exceptions.unwrap(e); - if (ex instanceof StorageMetadataWritesFencedOutException) { - throw new CompletionException(new StorageNotPrimaryException(streamSegmentName, ex)); - } - throw new CompletionException(ex); - }), streamSegmentName); + claimOwnership(txn, segmentMetadata), streamSegmentName); } else { f = CompletableFuture.completedFuture(null); } return f.thenApplyAsync(v -> { val retValue = SegmentStorageHandle.readHandle(streamSegmentName); + log.debug("{} openRead - finished segment={} latency={}.", logPrefix, streamSegmentName, timer.getElapsedMillis()); LoggerHelpers.traceLeave(log, "openRead", traceId, retValue); return retValue; }, executor); }, executor), - executor); + executor) + .handleAsync( (v, ex) -> { + if (null != ex) { + log.debug("{} openRead - exception segment={} latency={}.", logPrefix, streamSegmentName, timer.getElapsedMillis(), ex); + handleException(streamSegmentName, ex); + } + return v; + }, executor); }, streamSegmentName); } @@ -625,14 +634,14 @@ public CompletableFuture getStreamSegmentInfo(String streamSe checkInitialized(); return executeParallel(() -> { val traceId = LoggerHelpers.traceEnter(log, "getStreamSegmentInfo", streamSegmentName); + val timer = new Timer(); Preconditions.checkNotNull(streamSegmentName, "streamSegmentName"); + log.debug("{} getStreamSegmentInfo - started segment={}.", logPrefix, streamSegmentName); return tryWith(metadataStore.beginTransaction(true, streamSegmentName), txn -> txn.get(streamSegmentName) .thenApplyAsync(storageMetadata -> { SegmentMetadata segmentMetadata = (SegmentMetadata) storageMetadata; - if (null == segmentMetadata) { - throw new CompletionException(new StreamSegmentNotExistsException(streamSegmentName)); - } + checkSegmentExists(streamSegmentName, segmentMetadata); segmentMetadata.checkInvariants(); val retValue = StreamSegmentInformation.builder() @@ -642,9 +651,18 @@ public CompletableFuture getStreamSegmentInfo(String streamSe .startOffset(segmentMetadata.getStartOffset()) .lastModified(new ImmutableDate(segmentMetadata.getLastModified())) .build(); + log.debug("{} getStreamSegmentInfo - finished segment={} latency={}.", logPrefix, streamSegmentName, timer.getElapsedMillis()); LoggerHelpers.traceLeave(log, "getStreamSegmentInfo", traceId, retValue); return retValue; - }, executor), executor); + }, executor), + executor) + .handleAsync( (v, ex) -> { + if (null != ex) { + log.debug("{} getStreamSegmentInfo - exception segment={}.", logPrefix, streamSegmentName, ex); + handleException(streamSegmentName, ex); + } + return v; + }, executor); }, streamSegmentName); } @@ -678,7 +696,8 @@ public void report() { public void close() { close("metadataStore", this.metadataStore); close("garbageCollector", this.garbageCollector); - close("chunkStorage", this.chunkStorage); + // taskQueue is per instance so safe to close this here. + close("taskQueue", this.taskQueue); this.reporter.cancel(true); this.closed.set(true); } @@ -725,10 +744,7 @@ void addBlockIndexEntriesForChunk(MetadataTransaction txn, String segmentName, S * Delete block index entries for given chunk. 
*/ void deleteBlockIndexEntriesForChunk(MetadataTransaction txn, String segmentName, long startOffset, long endOffset) { - val firstBlock = startOffset / config.getIndexBlockSize(); - for (long offset = firstBlock * config.getIndexBlockSize(); offset < endOffset; offset += config.getIndexBlockSize()) { - txn.delete(NameUtils.getSegmentReadIndexBlockName(segmentName, offset)); - } + this.garbageCollector.deleteBlockIndexEntriesForChunk(txn, segmentName, startOffset, endOffset); } /** @@ -754,7 +770,14 @@ void reportMetricsForSystemSegment(SegmentMetadata segmentMetadata) { * */ private CompletableFuture executeSerialized(Callable> operation, String... segmentNames) { Exceptions.checkNotClosed(this.closed.get(), this); - return this.taskProcessor.add(Arrays.asList(segmentNames), () -> executeExclusive(operation, segmentNames)); + if (segmentNames.length == 1 && this.systemJournal.isStorageSystemSegment(segmentNames[0])) { + // To maintain consistency of snapshot, all operations on any of the storage system segments are linearized + // on the entire group. + val segments = this.systemJournal.getSystemSegments(); + return this.taskProcessor.add(Arrays.asList(segments), () -> executeExclusive(operation, segments)); + } else { + return this.taskProcessor.add(Arrays.asList(segmentNames), () -> executeExclusive(operation, segmentNames)); + } } /** diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfig.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfig.java index 0a1a442eeff..aea37825bce 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfig.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfig.java @@ -50,6 +50,7 @@ public class ChunkedSegmentStorageConfig { public static final Property GARBAGE_COLLECTION_MAX_QUEUE_SIZE = Property.named("garbage.collection.queue.size.max", 16 * 1024); public static final Property GARBAGE_COLLECTION_SLEEP = Property.named("garbage.collection.sleep.millis", 10); public static final Property GARBAGE_COLLECTION_MAX_ATTEMPTS = Property.named("garbage.collection.attempts.max", 3); + public static final Property GARBAGE_COLLECTION_MAX_TXN_BATCH_SIZE = Property.named("garbage.collection.txn.batch.size.max", 5000); public static final Property MAX_METADATA_ENTRIES_IN_BUFFER = Property.named("metadata.buffer.size.max", 1024); public static final Property MAX_METADATA_ENTRIES_IN_CACHE = Property.named("metadata.cache.size.max", 5000); @@ -79,6 +80,7 @@ public class ChunkedSegmentStorageConfig { .garbageCollectionMaxQueueSize(16 * 1024) .garbageCollectionSleep(Duration.ofMillis(10)) .garbageCollectionMaxAttempts(3) + .garbageCollectionTransactionBatchSize(5000) .indexBlockSize(1024 * 1024) .maxEntriesInCache(5000) .maxEntriesInTxnBuffer(1024) @@ -197,6 +199,12 @@ public class ChunkedSegmentStorageConfig { @Getter final private int garbageCollectionMaxAttempts; + /** + * Max number of metadata entries to update in a single transaction during garbage collection. + */ + @Getter + final private int garbageCollectionTransactionBatchSize; + /** * Maximum number of metadata entries to keep in recent transaction buffer. 
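deleteBlockIndexEntriesForChunk (delegated to the garbage collector above) walks read-index block entries at indexBlockSize-aligned offsets. A worked example of that arithmetic with illustrative values:

    // Worked example of the read-index block arithmetic (values are illustrative).
    long indexBlockSize = 1024 * 1024;                   // config.getIndexBlockSize()
    long startOffset = 2_621_440;                        // 2.5 MB into the segment
    long endOffset = 5_242_880;                          // 5 MB
    long firstBlock = startOffset / indexBlockSize;      // = 2, aligned down to the containing block
    for (long offset = firstBlock * indexBlockSize; offset < endOffset; offset += indexBlockSize) {
        // visits offsets 2 MB, 3 MB and 4 MB; one index entry is deleted per block
        System.out.println("delete read-index entry at offset " + offset);
    }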
*/ @@ -262,6 +270,7 @@ public class ChunkedSegmentStorageConfig { this.garbageCollectionMaxQueueSize = properties.getInt(GARBAGE_COLLECTION_MAX_QUEUE_SIZE); this.garbageCollectionSleep = Duration.ofMillis(properties.getInt(GARBAGE_COLLECTION_SLEEP)); this.garbageCollectionMaxAttempts = properties.getInt(GARBAGE_COLLECTION_MAX_ATTEMPTS); + this.garbageCollectionTransactionBatchSize = properties.getPositiveInt(GARBAGE_COLLECTION_MAX_TXN_BATCH_SIZE); this.journalSnapshotInfoUpdateFrequency = Duration.ofMinutes(properties.getInt(JOURNAL_SNAPSHOT_UPDATE_FREQUENCY)); this.maxJournalUpdatesPerSnapshot = properties.getInt(MAX_PER_SNAPSHOT_UPDATE_COUNT); this.maxJournalReadAttempts = properties.getInt(MAX_JOURNAL_READ_ATTEMPTS); diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ConcatOperation.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ConcatOperation.java index 43ff2e33269..c30356d63b4 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ConcatOperation.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ConcatOperation.java @@ -68,6 +68,7 @@ class ConcatOperation implements Callable> { traceId = LoggerHelpers.traceEnter(log, "concat", targetHandle, offset, sourceSegment); } + @Override public CompletableFuture call() { checkPreconditions(); log.debug("{} concat - started op={}, target={}, source={}, offset={}.", @@ -106,11 +107,14 @@ private CompletableFuture performConcat(MetadataTransaction txn) { } return f.thenComposeAsync(v2 -> { targetSegmentMetadata.checkInvariants(); - - // Finally commit transaction. - return txn.commit() - .exceptionally(this::handleException) - .thenRunAsync(this::postCommit, chunkedSegmentStorage.getExecutor()); + // Collect garbage. + return chunkedSegmentStorage.getGarbageCollector().addChunksToGarbage(txn.getVersion(), chunksToDelete) + .thenComposeAsync(v4 -> { + // Finally commit transaction. + return txn.commit() + .exceptionally(this::handleException) + .thenRunAsync(this::postCommit, chunkedSegmentStorage.getExecutor()); + }, chunkedSegmentStorage.getExecutor()); }, chunkedSegmentStorage.getExecutor()); }, chunkedSegmentStorage.getExecutor()); } @@ -126,8 +130,6 @@ private Void handleException(Throwable e) { } private void postCommit() { - // Collect garbage. - chunkedSegmentStorage.getGarbageCollector().addToGarbage(chunksToDelete); // Update the read index. chunkedSegmentStorage.getReadIndexCache().remove(sourceSegment); chunkedSegmentStorage.getReadIndexCache().addIndexEntries(targetHandle.getSegmentName(), newReadIndexEntries); @@ -177,8 +179,10 @@ private CompletableFuture updateMetadata(MetadataTransaction txn) { targetSegmentMetadata.setChunkCount(targetSegmentMetadata.getChunkCount() + sourceSegmentMetadata.getChunkCount()); // Delete read index block entries for source. - chunkedSegmentStorage.deleteBlockIndexEntriesForChunk(txn, sourceSegment, sourceSegmentMetadata.getStartOffset(), sourceSegmentMetadata.getLength()); - + // To avoid possibility of unintentional deadlock, skip this step for storage system segments. 
+ if (!sourceSegmentMetadata.isStorageSystemSegment()) { + chunkedSegmentStorage.deleteBlockIndexEntriesForChunk(txn, sourceSegment, sourceSegmentMetadata.getStartOffset(), sourceSegmentMetadata.getLength()); + } txn.update(targetSegmentMetadata); txn.delete(sourceSegment); diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/DefragmentOperation.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/DefragmentOperation.java index 49533126612..8e20f671b5f 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/DefragmentOperation.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/DefragmentOperation.java @@ -15,6 +15,7 @@ */ package io.pravega.segmentstore.storage.chunklayer; +import com.google.common.base.Preconditions; import io.pravega.common.Exceptions; import io.pravega.common.concurrent.Futures; import io.pravega.segmentstore.storage.metadata.ChunkMetadata; @@ -121,7 +122,6 @@ class DefragmentOperation implements Callable> { private final AtomicLong targetSizeAfterConcat = new AtomicLong(); private volatile String nextChunkName; private volatile ChunkMetadata next = null; - private final AtomicLong writeAtOffset = new AtomicLong(); private final AtomicLong readAtOffset = new AtomicLong(); private final AtomicLong bytesToRead = new AtomicLong(); @@ -146,6 +146,7 @@ class DefragmentOperation implements Callable> { this.currentIndexOffset.set(currentIndexOffset); } + @Override public CompletableFuture call() { // The algorithm is actually very simple. // It tries to concat all small chunks using appends first. @@ -154,6 +155,8 @@ public CompletableFuture call() { useAppend.set(true); targetChunkName = startChunkName; + val oldChunkCount = segmentMetadata.getChunkCount(); + // Iterate through chunk list // Make sure no invariants are broken. return Futures.loop( @@ -205,6 +208,9 @@ public CompletableFuture call() { }, chunkedSegmentStorage.getExecutor()), chunkedSegmentStorage.getExecutor()) .thenComposeAsync(vvv -> { + Preconditions.checkState(oldChunkCount - chunksToDelete.size() == segmentMetadata.getChunkCount(), + "Number of chunks do not match. old value (%s) - number of chunks deleted (%s) must match current chunk count(%s)", + oldChunkCount, chunksToDelete.size(), segmentMetadata.getChunkCount()); segmentMetadata.checkInvariants(); return updateReadIndex(); }, chunkedSegmentStorage.getExecutor()); @@ -216,10 +222,25 @@ private CompletableFuture concatChunks() { concatArgs[i] = ConcatArgument.fromChunkInfo(chunksToConcat.get(i)); } final CompletableFuture f; - if ((!useAppend.get() && chunkedSegmentStorage.getChunkStorage().supportsConcat()) || !chunkedSegmentStorage.shouldAppend()) { + + if (!useAppend.get() && chunkedSegmentStorage.getChunkStorage().supportsConcat()) { + for (int i = 0; i < chunksToConcat.size() - 1; i++) { + Preconditions.checkState(concatArgs[i].getLength() < chunkedSegmentStorage.getConfig().getMaxSizeLimitForConcat(), + "ConcatArgument out of bound. {}", concatArgs[i]); + Preconditions.checkState( concatArgs[i].getLength() > chunkedSegmentStorage.getConfig().getMinSizeLimitForConcat(), + "ConcatArgument out of bound. 
{}", concatArgs[i]); + } f = chunkedSegmentStorage.getChunkStorage().concat(concatArgs); } else { - f = concatUsingAppend(concatArgs); + if (chunkedSegmentStorage.shouldAppend()) { + f = concatUsingAppend(concatArgs); + } else { + Preconditions.checkState(chunkedSegmentStorage.getChunkStorage().supportsConcat(), + "ChunkStorage must support Concat."); + Preconditions.checkState(concatArgs[0].getLength() > chunkedSegmentStorage.getConfig().getMinSizeLimitForConcat(), + "ConcatArgument out of bound. {}", concatArgs[0]); + f = concatUsingTailConcat(concatArgs); + } } return f.thenComposeAsync(v -> { @@ -259,40 +280,94 @@ private CompletableFuture concatChunks() { } private CompletableFuture gatherChunks() { + chunksToConcat = Collections.synchronizedList(new ArrayList<>()); + return txn.get(targetChunkName) .thenComposeAsync(storageMetadata -> { target = (ChunkMetadata) storageMetadata; - chunksToConcat = Collections.synchronizedList(new ArrayList<>()); - targetSizeAfterConcat.set(target.getLength()); // Add target to the list of chunks + targetSizeAfterConcat.set(target.getLength()); chunksToConcat.add(new ChunkInfo(targetSizeAfterConcat.get(), targetChunkName)); nextChunkName = target.getNextChunk(); - return txn.get(nextChunkName) - .thenComposeAsync(storageMetadata1 -> { - - next = (ChunkMetadata) storageMetadata1; - // Gather list of chunks that can be appended together. - return Futures.loop( - () -> - null != nextChunkName - && !(useAppend.get() && chunkedSegmentStorage.getConfig().getMinSizeLimitForConcat() < next.getLength()) - && !(targetSizeAfterConcat.get() + next.getLength() > segmentMetadata.getMaxRollinglength() || next.getLength() > chunkedSegmentStorage.getConfig().getMaxSizeLimitForConcat()), - () -> txn.get(nextChunkName) - .thenAcceptAsync(storageMetadata2 -> { - next = (ChunkMetadata) storageMetadata2; - chunksToConcat.add(new ChunkInfo(next.getLength(), nextChunkName)); - targetSizeAfterConcat.addAndGet(next.getLength()); - - nextChunkName = next.getNextChunk(); - }, chunkedSegmentStorage.getExecutor()), - chunkedSegmentStorage.getExecutor()); - }, chunkedSegmentStorage.getExecutor()); + // Skip over when first chunk is smaller than min concat size or is greater than max concat size. + if (!chunkedSegmentStorage.shouldAppend()) { + if (target.getLength() <= chunkedSegmentStorage.getConfig().getMinSizeLimitForConcat() + || target.getLength() > chunkedSegmentStorage.getConfig().getMaxSizeLimitForConcat()) { + return CompletableFuture.completedFuture(null); + } + } + + val shouldContinueGathering = new AtomicBoolean(true); + return Futures.loop( + () -> shouldContinueGathering.get(), + () -> txn.get(nextChunkName) + .thenAcceptAsync(storageMetadata2 -> { + next = (ChunkMetadata) storageMetadata2; + if (shouldContinue()) { + chunksToConcat.add(new ChunkInfo(next.getLength(), nextChunkName)); + targetSizeAfterConcat.addAndGet(next.getLength()); + + nextChunkName = next.getNextChunk(); + } else { + shouldContinueGathering.set(false); + } + }, chunkedSegmentStorage.getExecutor()), + chunkedSegmentStorage.getExecutor()); }, chunkedSegmentStorage.getExecutor()); } + private boolean shouldContinue() { + if (null == nextChunkName) { + return false; + } + // Make sure target size is below max rolling size. 
+ if (targetSizeAfterConcat.get() > segmentMetadata.getMaxRollinglength() + || targetSizeAfterConcat.get() + next.getLength() > segmentMetadata.getMaxRollinglength() + || next.getLength() > chunkedSegmentStorage.getConfig().getMaxSizeLimitForConcat()) { + return false; + } + + // Make sure source chunk is greater than min concat size and smaller than max concat size allowed. + if (!chunkedSegmentStorage.shouldAppend()) { + if (targetSizeAfterConcat.get() > chunkedSegmentStorage.getConfig().getMaxSizeLimitForConcat()) { + return false; + } + } + return true; + } + + private CompletableFuture concatUsingTailConcat(ConcatArgument[] concatArgs) { + currentArgIndex.set(1); + val length = new AtomicLong(concatArgs[0].getLength()); + return Futures.loop(() -> currentArgIndex.get() < concatArgs.length, + () -> { + val args = new ConcatArgument[2]; + args[0] = ConcatArgument.builder() + .name(concatArgs[0].getName()) + .length(length.get()) + .build(); + args[1] = concatArgs[currentArgIndex.get()]; + + Preconditions.checkState(concatArgs[0].getLength() <= chunkedSegmentStorage.getConfig().getMaxSizeLimitForConcat(), + "ConcatArgument out of bound. {}", concatArgs[0]); + Preconditions.checkState( concatArgs[0].getLength() >= chunkedSegmentStorage.getConfig().getMinSizeLimitForConcat(), + "ConcatArgument out of bound. {}", concatArgs[0]); + Preconditions.checkState(concatArgs[1].getLength() <= chunkedSegmentStorage.getConfig().getMaxSizeLimitForConcat(), + "ConcatArgument out of bound. {}", concatArgs[1]); + + return chunkedSegmentStorage.getChunkStorage().concat(args) + .thenRunAsync(() -> { + length.addAndGet(concatArgs[currentArgIndex.get()].getLength()); + currentArgIndex.incrementAndGet(); + }, chunkedSegmentStorage.getExecutor()); + }, + chunkedSegmentStorage.getExecutor()) + .thenApplyAsync(v -> 0, chunkedSegmentStorage.getExecutor()); + } + private CompletableFuture concatUsingAppend(ConcatArgument[] concatArgs) { writeAtOffset.set(concatArgs[0].getLength()); val writeHandle = ChunkHandle.writeHandle(concatArgs[0].getName()); @@ -328,7 +403,7 @@ private CompletableFuture copyBytes(ChunkHandle writeHandle, ConcatArgumen } private CompletableFuture updateReadIndex() { - return new ChunkIterator(chunkedSegmentStorage, txn, startChunkName, lastChunkName) + return new ChunkIterator(chunkedSegmentStorage.getExecutor(), txn, startChunkName, lastChunkName) .forEach((metadata, name) -> { newReadIndexEntries.add(ChunkNameOffsetPair.builder() .chunkName(name) diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollector.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollector.java index 756ac4e15e0..2db6b8bfef8 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollector.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollector.java @@ -16,33 +16,57 @@ package io.pravega.segmentstore.storage.chunklayer; import com.google.common.base.Preconditions; -import com.google.common.primitives.Ints; import io.pravega.common.Exceptions; -import io.pravega.common.concurrent.AbstractThreadPoolService; -import io.pravega.common.concurrent.ExecutorServiceHelpers; +import io.pravega.common.ObjectBuilder; import io.pravega.common.concurrent.Futures; -import io.pravega.common.concurrent.Services; +import io.pravega.common.concurrent.MultiKeySequentialProcessor; +import io.pravega.common.io.serialization.RevisionDataInput; +import 
io.pravega.common.io.serialization.RevisionDataOutput; +import io.pravega.common.io.serialization.VersionedSerializer; import io.pravega.segmentstore.storage.metadata.ChunkMetadata; import io.pravega.segmentstore.storage.metadata.ChunkMetadataStore; +import io.pravega.segmentstore.storage.metadata.MetadataTransaction; +import io.pravega.segmentstore.storage.metadata.SegmentMetadata; +import io.pravega.shared.NameUtils; +import lombok.Builder; import lombok.Data; +import lombok.EqualsAndHashCode; import lombok.Getter; +import lombok.NonNull; import lombok.RequiredArgsConstructor; import lombok.extern.slf4j.Slf4j; import lombok.val; +import java.io.IOException; import java.time.Duration; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.concurrent.Callable; import java.util.concurrent.CompletableFuture; -import java.util.concurrent.DelayQueue; -import java.util.concurrent.Delayed; +import java.util.concurrent.CompletionException; import java.util.concurrent.ScheduledExecutorService; -import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Function; import java.util.function.Supplier; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_CHUNK_DELETED; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_CHUNK_FAILED; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_CHUNK_NEW; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_CHUNK_QUEUED; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_CHUNK_RETRY; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_SEGMENT_FAILED; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_SEGMENT_PROCESSED; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_SEGMENT_QUEUED; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_SEGMENT_RETRY; +import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_GC_TASK_PROCESSED; import static io.pravega.shared.MetricsNames.SLTS_GC_QUEUE_SIZE; /** @@ -62,13 +86,7 @@ * */ @Slf4j -public class GarbageCollector extends AbstractThreadPoolService implements AutoCloseable, StatsReporter { - private static final Duration SHUTDOWN_TIMEOUT = Duration.ofSeconds(10); - /** - * Set of garbage chunks. - */ - @Getter - private final DelayQueue garbageChunks = new DelayQueue<>(); +public class GarbageCollector implements AutoCloseable, StatsReporter { private final ChunkStorage chunkStorage; @@ -78,8 +96,6 @@ public class GarbageCollector extends AbstractThreadPoolService implements AutoC private final AtomicBoolean closed = new AtomicBoolean(); - private final AtomicBoolean suspended = new AtomicBoolean(); - /** * Keeps track of queue size. * Size is an expensive operation on DelayQueue. 
@@ -90,22 +106,36 @@ public class GarbageCollector extends AbstractThreadPoolService implements AutoC @Getter private final AtomicLong iterationId = new AtomicLong(); - private CompletableFuture loopFuture; - private final Supplier currentTimeSupplier; - private final Supplier> delaySupplier; + private final Function> delaySupplier; private final ScheduledExecutorService storageExecutor; + @Getter + private AbstractTaskQueueManager taskQueue; + + private final String traceObjectId; + + @Getter + private final String taskQueueName; + + @Getter + private final String failedQueueName; + + /** + * Instance of {@link MultiKeySequentialProcessor}. + */ + private final MultiKeySequentialProcessor taskScheduler; + /** * Constructs a new instance. * - * @param containerId Container id of the owner container. - * @param chunkStorage ChunkStorage instance to use for writing all logs. - * @param metadataStore ChunkMetadataStore for owner container. - * @param config Configuration options for this ChunkedSegmentStorage instance. - * @param executorService ScheduledExecutorService to use. + * @param containerId Container id of the owner container. + * @param chunkStorage ChunkStorage instance to use for writing all logs. + * @param metadataStore ChunkMetadataStore for owner container. + * @param config Configuration options for this ChunkedSegmentStorage instance. + * @param executorService ScheduledExecutorService to use. */ public GarbageCollector(int containerId, ChunkStorage chunkStorage, ChunkMetadataStore metadataStore, @@ -113,7 +143,7 @@ public GarbageCollector(int containerId, ChunkStorage chunkStorage, ScheduledExecutorService executorService) { this(containerId, chunkStorage, metadataStore, config, executorService, System::currentTimeMillis, - () -> Futures.delayedFuture(config.getGarbageCollectionSleep(), executorService)); + duration -> Futures.delayedFuture(duration, executorService)); } /** @@ -132,240 +162,443 @@ public GarbageCollector(int containerId, ChunkStorage chunkStorage, ChunkedSegmentStorageConfig config, ScheduledExecutorService storageExecutor, Supplier currentTimeSupplier, - Supplier> delaySupplier) { - super(String.format("GarbageCollector[%d]", containerId), ExecutorServiceHelpers.newScheduledThreadPool(1, "storage-gc")); - try { - this.chunkStorage = Preconditions.checkNotNull(chunkStorage, "chunkStorage"); - this.metadataStore = Preconditions.checkNotNull(metadataStore, "metadataStore"); - this.config = Preconditions.checkNotNull(config, "config"); - this.currentTimeSupplier = Preconditions.checkNotNull(currentTimeSupplier, "currentTimeSupplier"); - this.delaySupplier = Preconditions.checkNotNull(delaySupplier, "delaySupplier"); - this.storageExecutor = Preconditions.checkNotNull(storageExecutor, "storageExecutor"); - } catch (Exception ex) { - this.executor.shutdownNow(); - throw ex; - } + Function> delaySupplier) { + this.chunkStorage = Preconditions.checkNotNull(chunkStorage, "chunkStorage"); + this.metadataStore = Preconditions.checkNotNull(metadataStore, "metadataStore"); + this.config = Preconditions.checkNotNull(config, "config"); + this.currentTimeSupplier = Preconditions.checkNotNull(currentTimeSupplier, "currentTimeSupplier"); + this.delaySupplier = Preconditions.checkNotNull(delaySupplier, "delaySupplier"); + this.storageExecutor = Preconditions.checkNotNull(storageExecutor, "storageExecutor"); + this.traceObjectId = String.format("GarbageCollector[%d]", containerId); + this.taskQueueName = String.format("GC.queue.%d", containerId); + this.failedQueueName = 
String.format("GC.failed.queue.%d", containerId); + this.taskScheduler = new MultiKeySequentialProcessor<>(storageExecutor); } /** * Initializes this instance. + * + * @param taskQueue Task queue to use. */ - public void initialize() { - Services.startAsync(this, this.executor); + public CompletableFuture initialize(AbstractTaskQueueManager taskQueue) { + this.taskQueue = Preconditions.checkNotNull(taskQueue, "taskQueue"); + return taskQueue.addQueue(this.taskQueueName, false) + .thenComposeAsync(v -> taskQueue.addQueue(this.failedQueueName, true), storageExecutor); } /** - * Gets a value indicating how much to wait for the service to shut down, before failing it. + * Adds given chunks to list of garbage chunks. * - * @return The Duration. + * @param chunksToDelete List of chunks to delete. */ - @Override - protected Duration getShutdownTimeout() { - return SHUTDOWN_TIMEOUT; + CompletableFuture addChunksToGarbage(long transactionId, Collection chunksToDelete) { + Preconditions.checkState(null != taskQueue, "taskQueue must not be null."); + val futures = new ArrayList>(); + val startTime = currentTimeSupplier.get() + config.getGarbageCollectionDelay().toMillis(); + + chunksToDelete.forEach(chunkToDelete -> futures.add(addChunkToGarbage(transactionId, chunkToDelete, startTime, 0))); + return Futures.allOf(futures); } /** - * Main execution of the Service. When this Future completes, the service auto-shuts down. + * Adds given chunk to list of garbage chunks. * - * @return A CompletableFuture that, when completed, indicates the service is terminated. If the Future completed - * exceptionally, the Service will shut down with failure, otherwise it will terminate normally. + * @param chunkToDelete Name of the chunk to delete. + * @param startTime Start time. + * @param attempts Number of attempts to delete this chunk so far. */ - @Override - protected CompletableFuture doRun() { - loopFuture = Futures.loop( - this::canRun, - () -> delaySupplier.get() - .thenComposeAsync(v -> deleteGarbage(true, config.getGarbageCollectionMaxConcurrency()), executor) - .handleAsync((v, ex) -> { - if (null != ex) { - log.error("{}: Error during doRun.", traceObjectId, ex); - } - return null; - }, executor), - executor); - return loopFuture; + CompletableFuture addChunkToGarbage(long transactionId, String chunkToDelete, long startTime, int attempts) { + Preconditions.checkState(null != taskQueue, "taskQueue must not be null."); + return taskQueue.addTask(taskQueueName, new TaskInfo(chunkToDelete, startTime, attempts, TaskInfo.DELETE_CHUNK, transactionId)) + .thenRunAsync(() -> { + queueSize.incrementAndGet(); + SLTS_GC_CHUNK_QUEUED.inc(); + }, this.storageExecutor); } - private boolean canRun() { - return isRunning() && getStopException() == null && !closed.get(); + /** + * Adds segment to the GC. + * + * @param transactionId Transaction id. + * @param segmentToDelete Name of segment to delete. + * @return A CompletableFuture that, when completed, will indicate the operation succeeded. + * If the operation failed, it will contain the cause of the failure. 
+ */ + CompletableFuture addSegmentToGarbage(long transactionId, String segmentToDelete) { + Preconditions.checkState(null != taskQueue, "taskQueue must not be null."); + val startTime = currentTimeSupplier.get() + config.getGarbageCollectionDelay().toMillis(); + return taskQueue.addTask(taskQueueName, new TaskInfo(segmentToDelete, startTime, 0, TaskInfo.DELETE_SEGMENT, transactionId)) + .thenRunAsync(() -> { + queueSize.incrementAndGet(); + SLTS_GC_SEGMENT_QUEUED.inc(); + }, this.storageExecutor); } /** - * Sets whether background cleanup is suspended or not. + * Adds segment to the GC. * - * @param value Boolean indicating whether to suspend background processing or not. + * @param taskInfo Task info + * @return A CompletableFuture that, when completed, will indicate the operation succeeded. + * If the operation failed, it will contain the cause of the failure. */ - void setSuspended(boolean value) { - suspended.set(value); + CompletableFuture addSegmentToGarbage(TaskInfo taskInfo) { + Preconditions.checkState(null != taskQueue, "taskQueue must not be null."); + return taskQueue.addTask(taskQueueName, taskInfo) + .thenRunAsync(() -> { + queueSize.incrementAndGet(); + SLTS_GC_SEGMENT_QUEUED.inc(); + }, this.storageExecutor); } /** - * Adds given chunks to list of garbage chunks. + * Adds new chunk to track * - * @param chunksToDelete List of chunks to delete. + * @param transactionId TransactionId + * @param chunktoTrack Name of chunk to track. + * @return A CompletableFuture that, when completed, will indicate the operation succeeded. + * If the operation failed, it will contain the cause of the failure. */ - void addToGarbage(Collection chunksToDelete) { - val currentTime = currentTimeSupplier.get(); + CompletableFuture trackNewChunk(long transactionId, String chunktoTrack) { + Preconditions.checkState(null != taskQueue, "taskQueue must not be null."); + val startTime = currentTimeSupplier.get() + config.getGarbageCollectionDelay().toMillis(); + // Simply add delete chunk task for newly tracked chunk and update metrics. + return taskQueue.addTask(taskQueueName, new TaskInfo(chunktoTrack, startTime, 0, TaskInfo.DELETE_CHUNK, transactionId)) + .thenRunAsync(() -> { + queueSize.incrementAndGet(); + SLTS_GC_CHUNK_NEW.inc(); + }, this.storageExecutor); + } + + /** + * Add the task to failed queue. + */ + private CompletableFuture failTask(TaskInfo infoToRetire) { + Preconditions.checkState(null != taskQueue, "taskQueue must not be null."); + return taskQueue.addTask(failedQueueName, infoToRetire); + } - chunksToDelete.forEach(chunkToDelete -> addToGarbage(chunkToDelete, currentTime + config.getGarbageCollectionDelay().toMillis(), 0)); + /** + * Perform delete segment related tasks. + */ + private CompletableFuture deleteSegment(TaskInfo taskInfo) { + val streamSegmentName = taskInfo.getName(); + val txn = metadataStore.beginTransaction(true, streamSegmentName); + return txn.get(streamSegmentName) + .thenComposeAsync(storageMetadata -> { + val segmentMetadata = (SegmentMetadata) storageMetadata; + if (null == segmentMetadata) { + log.debug("{}: deleteGarbage - Segment metadata does not exist. segment={}.", traceObjectId, streamSegmentName); + return CompletableFuture.completedFuture(null); + } else if (segmentMetadata.isActive()) { + log.debug("{}: deleteGarbage - Segment is not marked as deleted. 
segment={}.", traceObjectId, streamSegmentName); + return CompletableFuture.completedFuture(null); + } else { + val chunksToDelete = Collections.synchronizedSet(new HashSet()); + val currentBatch = Collections.synchronizedSet(new HashSet()); + val currentChunkName = new AtomicReference(segmentMetadata.getFirstChunk()); + + return Futures.loop( + () -> null != currentChunkName.get(), + () -> txn.get(currentChunkName.get()) + .thenComposeAsync(metadata -> { + val chunkMetadata = (ChunkMetadata) metadata; + CompletableFuture retFuture = CompletableFuture.completedFuture(null); + + // Skip if metadata is possibly deleted in last attempt, we are done. + if (null == chunkMetadata) { + currentChunkName.set(null); + return retFuture; + } + + // Add to list of chunks to delete + chunksToDelete.add(chunkMetadata.getName()); + + // Add to batch and commit batch if required. + currentBatch.add(chunkMetadata); + if (chunkMetadata.isActive()) { + if (currentBatch.size() > config.getGarbageCollectionTransactionBatchSize()) { + // Commit batch + retFuture = addTransactionForUpdateBatch(currentBatch, streamSegmentName); + // Clear batch + currentBatch.clear(); + } + } + // Move next + currentChunkName.set(chunkMetadata.getNextChunk()); + return retFuture; + }, storageExecutor), + storageExecutor) + .thenComposeAsync( v -> { + if (currentBatch.size() > 0) { + return addTransactionForUpdateBatch(currentBatch, streamSegmentName); + } + return CompletableFuture.completedFuture(null); + }, storageExecutor) + .thenComposeAsync(v -> this.addChunksToGarbage(txn.getVersion(), chunksToDelete), storageExecutor) + .thenComposeAsync(v -> deleteBlockIndexEntriesForSegment(streamSegmentName, segmentMetadata.getStartOffset(), segmentMetadata.getLength())) + .thenComposeAsync(v -> { + val innerTxn = metadataStore.beginTransaction(false, segmentMetadata.getName()); + innerTxn.delete(segmentMetadata.getName()); + return innerTxn.commit() + .whenCompleteAsync((vv, ex) -> innerTxn.close(), storageExecutor); + }, storageExecutor) + .handleAsync((v, e) -> { + txn.close(); + if (null != e) { + log.error(String.format("%s deleteGarbage - Could not delete metadata for garbage segment=%s.", + traceObjectId, streamSegmentName), e); + return true; + } + return false; + }, storageExecutor) + .thenComposeAsync(failed -> { + if (failed) { + if (taskInfo.getAttempts() < config.getGarbageCollectionMaxAttempts()) { + val attempts = taskInfo.attempts + 1; + SLTS_GC_SEGMENT_RETRY.inc(); + return addSegmentToGarbage(taskInfo.toBuilder().attempts(attempts).build()); + } else { + SLTS_GC_SEGMENT_FAILED.inc(); + log.info("{}: deleteGarbage - could not delete after max attempts segment={}.", traceObjectId, taskInfo.getName()); + return failTask(taskInfo); + } + } else { + SLTS_GC_SEGMENT_PROCESSED.inc(); + return CompletableFuture.completedFuture(null); + } + }, storageExecutor); + } + }, storageExecutor); + } - if (queueSize.get() >= config.getGarbageCollectionMaxQueueSize()) { - log.warn("{}: deleteGarbage - Queue full. Could not delete garbage. Chunks skipped", traceObjectId); + private CompletableFuture addTransactionForUpdateBatch(Set batch, String name) { + // create a sub transaction for a batch. + val innerTxn = metadataStore.beginTransaction(false, name); + for (val chunkMetadata : batch) { + chunkMetadata.setActive(false); + innerTxn.update(chunkMetadata); } + return innerTxn.commit() + .whenCompleteAsync((vv, ex) -> innerTxn.close(), storageExecutor); } /** - * Adds given chunk to list of garbage chunks. 
- * - * @param chunkToDelete Name of the chunk to delete. - * @param startTime Start time. - * @param attempts Number of attempts to delete this chunk so far. + * Delete block index entries for given chunk. */ - void addToGarbage(String chunkToDelete, long startTime, int attempts) { - if (queueSize.get() < config.getGarbageCollectionMaxQueueSize()) { - garbageChunks.add(new GarbageChunkInfo(chunkToDelete, startTime, attempts)); - queueSize.incrementAndGet(); - } else { - log.debug("{}: deleteGarbage - Queue full. Could not delete garbage. chunk {}.", traceObjectId, chunkToDelete); + void deleteBlockIndexEntriesForChunk(MetadataTransaction txn, String segmentName, long startOffset, long endOffset) { + val firstBlock = startOffset / config.getIndexBlockSize(); + for (long offset = firstBlock * config.getIndexBlockSize(); offset < endOffset; offset += config.getIndexBlockSize()) { + txn.delete(NameUtils.getSegmentReadIndexBlockName(segmentName, offset)); } } /** - * Delete the garbage chunks. - * - * This method retrieves a few eligible chunks for deletion at a time. - * The chunk is deleted only if the metadata for it does not exist or is marked inactive. - * If there are any errors then failed chunk is enqueued back up to a max number of attempts. - * If suspended or there are no items then it "sleeps" for time specified by configuration. - * - * @param isBackground True if the caller is background task else False if called explicitly. - * @param maxItems Maximum number of items to delete at a time. - * @return CompletableFuture which is completed when garbage is deleted. + * Delete block index entries for given segment. */ - CompletableFuture deleteGarbage(boolean isBackground, int maxItems) { - log.debug("{}: Iteration {} started.", traceObjectId, iterationId.get()); - // Sleep if suspended. - if (suspended.get() && isBackground) { - log.info("{}: deleteGarbage - suspended - sleeping for {}.", traceObjectId, config.getGarbageCollectionDelay()); - return CompletableFuture.completedFuture(false); + CompletableFuture deleteBlockIndexEntriesForSegment(String segmentName, long startOffset, long endOffset) { + val firstBlock = startOffset / config.getIndexBlockSize(); + AtomicBoolean isDone = new AtomicBoolean(false); + AtomicLong offset = new AtomicLong(firstBlock * config.getIndexBlockSize()); + + return Futures.loop( + () -> !isDone.get(), + () -> { + val currentBatch = new HashSet(); + while (offset.get() < endOffset) { + val name = NameUtils.getSegmentReadIndexBlockName(segmentName, offset.get()); + if (currentBatch.size() >= config.getGarbageCollectionTransactionBatchSize()) { + return addTransactionForDeleteBatch(currentBatch, segmentName); + } + currentBatch.add(name); + offset.addAndGet(config.getIndexBlockSize()); + } + // We are done + isDone.set(true); + if (currentBatch.size() > 0) { + return addTransactionForDeleteBatch(currentBatch, segmentName); + } else { + return CompletableFuture.completedFuture(null); + } + }, + storageExecutor); + } + + private CompletableFuture addTransactionForDeleteBatch(Set batch, String segmentName) { + // create a sub transaction for a batch. + val innerTxn = metadataStore.beginTransaction(false, segmentName); + for (val entryName : batch) { + innerTxn.delete(entryName); } + return innerTxn.commit() + .whenCompleteAsync((vv, ex) -> innerTxn.close(), storageExecutor); + } - // Find chunks to delete. - val chunksToDelete = new ArrayList(); - int count = 0; + /** + * Process a batch of tasks. + * + * @param batch List of {@link TaskInfo} to process. 
+ * @return A CompletableFuture that, when completed, will indicate the operation succeeded. + * If the operation failed, it will contain the cause of the failure. + */ + public CompletableFuture processBatch(List batch) { + ArrayList> futures = new ArrayList<>(); + for (val infoToDelete : batch) { + if (metadataStore.isTransactionActive(infoToDelete.transactionId)) { + log.debug("{}: deleteGarbage - transaction is still active - re-queuing {}.", traceObjectId, infoToDelete.transactionId); + taskQueue.addTask(taskQueueName, infoToDelete); + } else { + val f = executeSerialized(() -> processTask(infoToDelete), infoToDelete.name); + val now = currentTimeSupplier.get(); + if (infoToDelete.scheduledTime > currentTimeSupplier.get()) { + futures.add(delaySupplier.apply(Duration.ofMillis(infoToDelete.scheduledTime - now)) + .thenComposeAsync(v -> f, storageExecutor)); + } else { + futures.add(f); + } + } + } + return Futures.allOf(futures) + .thenRunAsync(() -> { + queueSize.addAndGet(-batch.size()); + SLTS_GC_TASK_PROCESSED.add(batch.size()); + }, storageExecutor); + } - // Wait until you have at least one item or timeout expires. - GarbageChunkInfo info = Exceptions.handleInterruptedCall(() -> garbageChunks.poll(config.getGarbageCollectionDelay().toMillis(), TimeUnit.MILLISECONDS)); - log.trace("{}: deleteGarbage - retrieved {}", traceObjectId, info); - while (null != info ) { - queueSize.decrementAndGet(); - chunksToDelete.add(info); + /** + * Executes the given Callable asynchronously and returns a CompletableFuture that will be completed with the result. + * The operations are serialized on the segmentNames provided. + * + * @param operation The Callable to execute. + * @param Return type of the operation. + * @param keyNames The names of the keys involved in this operation (for sequencing purposes). + * @return A CompletableFuture that, when completed, will contain the result of the operation. + * If the operation failed, it will contain the cause of the failure. + */ + private CompletableFuture executeSerialized(Callable> operation, String... keyNames) { + Exceptions.checkNotClosed(this.closed.get(), this); + return this.taskScheduler.add(Arrays.asList(keyNames), () -> executeExclusive(operation, keyNames)); + } - count++; - if (count >= maxItems) { - break; + /** + * Executes the given Callable asynchronously and exclusively. + * It returns a CompletableFuture that will be completed with the result. + * The operations are not allowed to be concurrent. + * + * @param operation The Callable to execute. + * @param Return type of the operation. + * @param keyNames The names of the keys involved in this operation (for sequencing purposes). + * @return A CompletableFuture that, when completed, will contain the result of the operation. + * If the operation failed, it will contain the cause of the failure. + */ + private CompletableFuture executeExclusive(Callable> operation, String... keyNames) { + return CompletableFuture.completedFuture(null).thenComposeAsync(v -> { + Exceptions.checkNotClosed(this.closed.get(), this); + try { + return operation.call(); + } catch (Exception e) { + throw new CompletionException(Exceptions.unwrap(e)); } - // Do not block - info = garbageChunks.poll(); - log.trace("{}: deleteGarbage - retrieved {}", traceObjectId, info); - } + }, this.storageExecutor); + } - // Sleep if no chunks to delete. 
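How batches reach processBatch() is left to the AbstractTaskQueueManager implementation. A hypothetical polling driver, assuming an in-memory BlockingQueue and that GarbageCollector.TaskInfo is visible to the caller:

    // Hypothetical driver: feeding queued tasks into GarbageCollector.processBatch.
    // 'queue' (a BlockingQueue<GarbageCollector.TaskInfo>) and 'gc' are assumed.
    List<GarbageCollector.TaskInfo> batch = new ArrayList<>();
    queue.drainTo(batch, 100);            // take up to 100 queued tasks
    if (!batch.isEmpty()) {
        gc.processBatch(batch).join();    // tasks whose transaction is still active are re-queued
    }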
- if (count == 0) { - log.debug("{}: deleteGarbage - no work - sleeping for {}.", traceObjectId, config.getGarbageCollectionDelay()); - return CompletableFuture.completedFuture(false); + private CompletableFuture processTask(TaskInfo infoToDelete) { + if (infoToDelete.taskType == TaskInfo.DELETE_CHUNK) { + return deleteChunk(infoToDelete); } + if (infoToDelete.taskType == TaskInfo.DELETE_SEGMENT) { + return deleteSegment(infoToDelete); + } + if (infoToDelete.taskType == TaskInfo.DELETE_JOURNAL) { + return deleteChunk(infoToDelete); + } + log.info("{}: processTask - Ignoring unknown type of task {}.", traceObjectId, infoToDelete); + return CompletableFuture.completedFuture(null); + } - // For each chunk delete if the chunk is not present at all in the metadata or is present but marked as inactive. - ArrayList> futures = new ArrayList<>(); - for (val infoToDelete : chunksToDelete) { - val chunkToDelete = infoToDelete.name; - val failed = new AtomicBoolean(); - val txn = metadataStore.beginTransaction(false, chunkToDelete); - val future = - txn.get(infoToDelete.name) - .thenComposeAsync(metadata -> { - val chunkMetadata = (ChunkMetadata) metadata; - // Delete if the chunk is not present at all in the metadata or is present but marked as inactive. - val shouldDeleteChunk = null == chunkMetadata || !chunkMetadata.isActive(); - val shouldDeleteMetadata = new AtomicBoolean(null != metadata && !chunkMetadata.isActive()); - - // Delete chunk from storage. - if (shouldDeleteChunk) { - return chunkStorage.delete(ChunkHandle.writeHandle(chunkToDelete)) - .handleAsync((v, e) -> { - if (e != null) { - val ex = Exceptions.unwrap(e); - if (ex instanceof ChunkNotFoundException) { - // Ignore - nothing to do here. - log.debug("{}: deleteGarbage - Could not delete garbage chunk={}.", traceObjectId, chunkToDelete); - } else { - log.warn("{}: deleteGarbage - Could not delete garbage chunk={}.", traceObjectId, chunkToDelete); - shouldDeleteMetadata.set(false); - failed.set(true); - } - } else { - log.debug("{}: deleteGarbage - deleted chunk={}.", traceObjectId, chunkToDelete); - } - return v; - }, storageExecutor) - .thenRunAsync(() -> { - if (shouldDeleteMetadata.get()) { - txn.delete(chunkToDelete); - log.debug("{}: deleteGarbage - deleted metadata for chunk={}.", traceObjectId, chunkToDelete); - } - }, storageExecutor) - .thenComposeAsync(v -> txn.commit(), storageExecutor) - .handleAsync((v, e) -> { - if (e != null) { - log.error(String.format("%s deleteGarbage - Could not delete metadata for garbage chunk=%s.", - traceObjectId, chunkToDelete), e); - failed.set(true); - } - return v; - }, storageExecutor); - } else { - log.info("{}: deleteGarbage - Chunk is not marked as garbage chunk={}.", traceObjectId, chunkToDelete); - return CompletableFuture.completedFuture(null); - } - }, storageExecutor) - .whenCompleteAsync((v, ex) -> { - // Queue it back. 
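
The garbage collector re-queues a failed delete with attempts + 1 and a delay, until a configured maximum is reached. A simplified, self-contained sketch of that bounded retry-with-delay pattern (runWithRetry is an illustrative name, not a Pravega API):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class BoundedRetry {
        // Retries the task up to maxAttempts, waiting delayMillis before each attempt,
        // mirroring the "requeue with attempts + 1" pattern used by the garbage collector.
        static CompletableFuture<Void> runWithRetry(Runnable task, int attempt, int maxAttempts,
                                                    long delayMillis, ScheduledExecutorService scheduler) {
            CompletableFuture<Void> result = new CompletableFuture<>();
            scheduler.schedule(() -> {
                try {
                    task.run();
                    result.complete(null);
                } catch (Exception e) {
                    if (attempt + 1 < maxAttempts) {
                        // Requeue with an incremented attempt count, like TaskInfo.attempts + 1.
                        runWithRetry(task, attempt + 1, maxAttempts, delayMillis, scheduler)
                                .whenComplete((v, ex) -> {
                                    if (ex != null) {
                                        result.completeExceptionally(ex);
                                    } else {
                                        result.complete(v);
                                    }
                                });
                    } else {
                        result.completeExceptionally(e);   // give up after max attempts
                    }
                }
            }, delayMillis, TimeUnit.MILLISECONDS);
            return result;
        }
    }
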
- if (failed.get()) { - if (infoToDelete.getAttempts() < config.getGarbageCollectionMaxAttempts()) { - log.debug("{}: deleteGarbage - adding back chunk={}.", traceObjectId, chunkToDelete); - addToGarbage(chunkToDelete, - infoToDelete.getScheduledDeleteTime() + config.getGarbageCollectionDelay().toMillis(), - infoToDelete.getAttempts() + 1); + private CompletableFuture deleteChunk(TaskInfo infoToDelete) { + val chunkToDelete = infoToDelete.name; + val failed = new AtomicReference(); + val txn = metadataStore.beginTransaction(false, chunkToDelete); + return txn.get(infoToDelete.name) + .thenComposeAsync(metadata -> { + val chunkMetadata = (ChunkMetadata) metadata; + // Delete if the chunk is not present at all in the metadata or is present but marked as inactive. + val shouldDeleteChunk = null == chunkMetadata || !chunkMetadata.isActive(); + val shouldDeleteMetadata = new AtomicBoolean(null != metadata && !chunkMetadata.isActive()); + + // Delete chunk from storage. + if (shouldDeleteChunk) { + return chunkStorage.delete(ChunkHandle.writeHandle(chunkToDelete)) + .handleAsync((v, e) -> { + if (e != null) { + val ex = Exceptions.unwrap(e); + if (ex instanceof ChunkNotFoundException) { + // Ignore - nothing to do here. + log.debug("{}: deleteGarbage - Could not delete garbage chunk={}.", traceObjectId, chunkToDelete); + } else { + log.warn("{}: deleteGarbage - Could not delete garbage chunk={}.", traceObjectId, chunkToDelete); + shouldDeleteMetadata.set(false); + failed.set(e); + } } else { - log.info("{}: deleteGarbage - could not delete after max attempts chunk={}.", traceObjectId, chunkToDelete); + SLTS_GC_CHUNK_DELETED.inc(); + log.debug("{}: deleteGarbage - deleted chunk={}.", traceObjectId, chunkToDelete); } - } - if (ex != null) { - log.error(String.format("%s deleteGarbage - Could not find garbage chunk=%s.", - traceObjectId, chunkToDelete), ex); - } - txn.close(); - }, executor); - futures.add(future); - } - return Futures.allOf(futures) - .thenApplyAsync( v -> { - log.debug("{}: Iteration {} ended.", traceObjectId, iterationId.getAndIncrement()); - return true; - }, executor); + return v; + }, storageExecutor) + .thenRunAsync(() -> { + if (shouldDeleteMetadata.get()) { + txn.delete(chunkToDelete); + log.debug("{}: deleteGarbage - deleted metadata for chunk={}.", traceObjectId, chunkToDelete); + } + }, storageExecutor) + .thenComposeAsync(v -> txn.commit(), storageExecutor) + .handleAsync((v, e) -> { + if (e != null) { + log.error(String.format("%s deleteGarbage - Could not delete metadata for garbage chunk=%s.", + traceObjectId, chunkToDelete), e); + failed.set(e); + } + return v; + }, storageExecutor); + } else { + log.debug("{}: deleteGarbage - Chunk is not marked as garbage chunk={}.", traceObjectId, chunkToDelete); + return CompletableFuture.completedFuture(null); + } + }, storageExecutor) + .thenComposeAsync(v -> { + if (failed.get() != null) { + if (infoToDelete.getAttempts() < config.getGarbageCollectionMaxAttempts()) { + log.debug("{}: deleteGarbage - adding back chunk={}.", traceObjectId, chunkToDelete); + SLTS_GC_CHUNK_RETRY.inc(); + return addChunkToGarbage(txn.getVersion(), chunkToDelete, + infoToDelete.getScheduledTime() + config.getGarbageCollectionDelay().toMillis(), + infoToDelete.getAttempts() + 1); + } else { + SLTS_GC_CHUNK_FAILED.inc(); + log.info("{}: deleteGarbage - could not delete after max attempts chunk={}.", traceObjectId, chunkToDelete); + return failTask(infoToDelete); + + } + } + return CompletableFuture.completedFuture(null); + }, storageExecutor) + 
.whenCompleteAsync((v, ex) -> { + if (ex != null) { + log.error(String.format("%s deleteGarbage - Could not find garbage chunk=%s.", + traceObjectId, chunkToDelete), ex); + } + txn.close(); + }, storageExecutor); } @Override - public void close() { - Services.stopAsync(this, executor); + public void close() throws Exception { if (!this.closed.get()) { - if (null != loopFuture) { - loopFuture.cancel(true); + if (null != taskQueue) { + this.taskQueue.close(); } closed.set(true); - executor.shutdownNow(); - super.close(); } } @@ -374,22 +607,87 @@ public void report() { ChunkStorageMetrics.DYNAMIC_LOGGER.reportGaugeValue(SLTS_GC_QUEUE_SIZE, queueSize.get()); } - @RequiredArgsConstructor + /** + * Represents a Task info. + */ + public static abstract class AbstractTaskInfo { + public static final int DELETE_CHUNK = 1; + public static final int DELETE_SEGMENT = 2; + public static final int DELETE_JOURNAL = 3; + + /** + * Serializer that implements {@link VersionedSerializer}. + */ + public static class AbstractTaskInfoSerializer extends VersionedSerializer.MultiType { + /** + * Declare all supported serializers of subtypes. + * + * @param builder A MultiType.Builder that can be used to declare serializers. + */ + @Override + protected void declareSerializers(Builder builder) { + // Unused values (Do not repurpose!): + // - 0: Unsupported Serializer. + builder.serializer(TaskInfo.class, 1, new TaskInfo.Serializer()); + } + } + } + + /** + * Represents background task. + */ @Data - class GarbageChunkInfo implements Delayed { - @Getter + @RequiredArgsConstructor + @Builder(toBuilder = true) + @EqualsAndHashCode(callSuper = true) + public static class TaskInfo extends AbstractTaskInfo { + @NonNull private final String name; - private final long scheduledDeleteTime; + private final long scheduledTime; private final int attempts; + private final int taskType; + private final long transactionId; - @Override - public long getDelay(TimeUnit timeUnit) { - return timeUnit.convert(scheduledDeleteTime - currentTimeSupplier.get(), TimeUnit.MILLISECONDS); + /** + * Builder that implements {@link ObjectBuilder}. + */ + public static class TaskInfoBuilder implements ObjectBuilder { } - @Override - public int compareTo(Delayed delayed) { - return Ints.saturatedCast(scheduledDeleteTime - ((GarbageChunkInfo) delayed).scheduledDeleteTime); + /** + * Serializer that implements {@link VersionedSerializer}. 
+ */ + public static class Serializer extends VersionedSerializer.WithBuilder { + @Override + protected TaskInfo.TaskInfoBuilder newBuilder() { + return TaskInfo.builder(); + } + + @Override + protected byte getWriteVersion() { + return 0; + } + + @Override + protected void declareVersions() { + version(0).revision(0, this::write00, this::read00); + } + + private void write00(TaskInfo object, RevisionDataOutput output) throws IOException { + output.writeUTF(object.name); + output.writeCompactLong(object.scheduledTime); + output.writeCompactInt(object.attempts); + output.writeCompactInt(object.taskType); + output.writeLong(object.transactionId); + } + + private void read00(RevisionDataInput input, TaskInfo.TaskInfoBuilder b) throws IOException { + b.name(input.readUTF()); + b.scheduledTime(input.readCompactLong()); + b.attempts(input.readCompactInt()); + b.taskType(input.readCompactInt()); + b.transactionId(input.readLong()); + } } } } diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ReadOperation.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ReadOperation.java index 0e09f3531f3..dd0b7378bb1 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ReadOperation.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/ReadOperation.java @@ -87,6 +87,7 @@ class ReadOperation implements Callable> { timer = new Timer(); } + @Override public CompletableFuture call() { // Validate preconditions. checkPreconditions(); diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/SystemJournal.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/SystemJournal.java index 346387c209a..188b74f32ef 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/SystemJournal.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/SystemJournal.java @@ -38,7 +38,9 @@ import java.util.Collections; import java.util.HashMap; import java.util.HashSet; +import java.util.List; import java.util.Map; +import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletionException; @@ -181,6 +183,11 @@ public class SystemJournal { */ final private AtomicReference currentHandle = new AtomicReference<>(); + /** + * List of chunks (journals & snapshots) to delete after snapshot. + */ + final private List pendingGarbageChunks = Collections.synchronizedList(new ArrayList<>()); + /** * Configuration {@link ChunkedSegmentStorageConfig} for the {@link ChunkedSegmentStorage}. */ @@ -222,7 +229,7 @@ public SystemJournal(int containerId, ChunkStorage chunkStorage, ChunkMetadataSt this.garbageCollector = Preconditions.checkNotNull(garbageCollector, "garbageCollector"); this.containerId = containerId; this.systemSegments = getChunkStorageSystemSegments(containerId); - this.systemSegmentsPrefix = NameUtils.INTERNAL_SCOPE_NAME; + this.systemSegmentsPrefix = NameUtils.INTERNAL_CONTAINER_PREFIX; this.currentTimeSupplier = Preconditions.checkNotNull(currentTimeSupplier, "currentTimeSupplier"); this.executor = Preconditions.checkNotNull(executor, "executor"); this.taskProcessor = new MultiKeySequentialProcessor<>(this.executor); @@ -264,23 +271,23 @@ private static class BootstrapState { /** * Keep track of offsets at which chunks were added to the system segments. 
*/ - final private Map chunkStartOffsets = new HashMap<>(); + final private Map chunkStartOffsets = Collections.synchronizedMap(new HashMap<>()); /** * Keep track of offsets at which system segments were truncated. * We don't need to apply each truncate operation, only need to apply the final truncate offset. */ - final private Map finalTruncateOffsets = new HashMap<>(); + final private Map finalTruncateOffsets = Collections.synchronizedMap(new HashMap<>()); /** * Final first chunk start offsets for all segments. */ - final private Map finalFirstChunkStartsAtOffsets = new HashMap<>(); + final private Map finalFirstChunkStartsAtOffsets = Collections.synchronizedMap(new HashMap<>()); /** * Keep track of already processed records. */ - final private HashSet visitedRecords = new HashSet<>(); + final private Set visitedRecords = Collections.synchronizedSet(new HashSet<>()); /** * Number of journals processed. @@ -334,17 +341,16 @@ public CompletableFuture bootstrap(long epoch, SnapshotInfoStore snapshotI return applyFinalTruncateOffsets(txn, state); }, executor) .thenComposeAsync(v -> { - // Step 5: Create a snapshot record and validate it. However do not save it yet. - return createSystemSnapshotRecord(txn, true, config.isSelfCheckEnabled()) - .thenComposeAsync(systemSnapshotRecord -> checkInvariants(systemSnapshotRecord), executor); - }, executor) - .thenAcceptAsync(v -> { - // Step 6: Check invariants. These should never fail. + // Step 5: Check invariants. These should never fail. if (config.isSelfCheckEnabled()) { Preconditions.checkState(currentFileIndex.get() == 0, "currentFileIndex must be zero"); Preconditions.checkState(systemJournalOffset.get() == 0, "systemJournalOffset must be zero"); Preconditions.checkState(newChunkRequired.get(), "newChunkRequired must be true"); } + // Step 6: Create a snapshot record and validate it. Save it in journals. + return createSystemSnapshotRecord(txn, true, config.isSelfCheckEnabled()) + .thenComposeAsync(systemSnapshotRecord -> writeRecordBatch(Collections.singletonList(systemSnapshotRecord)), executor) + .thenRunAsync(() -> newChunkRequired.set(true), executor); }, executor) .thenComposeAsync(v -> { // Step 7: Finally commit all data. @@ -434,6 +440,9 @@ private CompletableFuture writeRecordBatch(Collection if (attempt.get() >= config.getMaxJournalWriteAttempts()) { throw new CompletionException(ex); } + log.warn("SystemJournal[{}] Error while writing journal {}. Attempt#{}", containerId, + getSystemJournalChunkName(containerId, epoch, currentFileIndex.get()), attempt.get(), e); + // In case of partial write during previous failure, this time we'll get InvalidOffsetException. // In that case we start a new journal file and retry. 
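
The BootstrapState fields above were switched to Collections.synchronized* wrappers, which make individual calls atomic but still require manual locking around iteration. A small sketch of both halves of that contract (SharedBootstrapState and its fields are illustrative names):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class SharedBootstrapState {
        // The synchronized wrapper gives per-call atomicity, which is what the
        // bootstrap state needs when journal-apply stages run on an executor.
        private final Map<String, Long> chunkStartOffsets = Collections.synchronizedMap(new HashMap<>());

        void record(String chunk, long offset) {
            chunkStartOffsets.putIfAbsent(chunk, offset);   // single calls are thread-safe
        }

        long total() {
            // Iteration still requires explicit synchronization on the wrapper itself.
            synchronized (chunkStartOffsets) {
                return chunkStartOffsets.values().stream().mapToLong(Long::longValue).sum();
            }
        }
    }
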
if (ex instanceof InvalidOffsetException) { @@ -531,8 +540,15 @@ private CompletableFuture writeSnapshotInfo(long snapshotId) { .build(); return snapshotInfoStore.writeSnapshotInfo(info) .thenAcceptAsync(v1 -> { + val oldSnapshotInfo = lastSavedSnapshotInfo.get(); log.info("SystemJournal[{}] Snapshot info saved.{}", containerId, info); lastSavedSnapshotInfo.set(info); + if (null != oldSnapshotInfo) { + val oldSnapshotFile = NameUtils.getSystemJournalSnapshotFileName(containerId, epoch, oldSnapshotInfo.getSnapshotId()); + pendingGarbageChunks.add(oldSnapshotFile); + } + garbageCollector.addChunksToGarbage(-1, pendingGarbageChunks); + pendingGarbageChunks.clear(); }, executor) .exceptionally(e -> { log.error("Unable to persist snapshot info.{}", currentSnapshotIndex, e); @@ -588,7 +604,8 @@ private SystemSnapshotRecord readSnapshotRecord(SnapshotInfo snapshotInfo, byte[ } catch (Exception e) { val ex = Exceptions.unwrap(e); if (ex instanceof EOFException) { - log.warn("SystemJournal[{}] Incomplete snapshot found, skipping {}.", containerId, snapshotInfo, e); + log.error("SystemJournal[{}] Incomplete snapshot found, skipping {}.", containerId, snapshotInfo, e); + throw new CompletionException(e); } else if (ex instanceof ChunkNotFoundException) { log.warn("SystemJournal[{}] Missing snapshot, skipping {}.", containerId, snapshotInfo, e); } else { @@ -605,12 +622,22 @@ private SystemSnapshotRecord readSnapshotRecord(SnapshotInfo snapshotInfo, byte[ private CompletableFuture applySystemSnapshotRecord(MetadataTransaction txn, BootstrapState state, SystemSnapshotRecord systemSnapshot) { + + //Now apply if (null != systemSnapshot) { + // validate + systemSnapshot.checkInvariants(); + // reset all state so far + txn.getData().clear(); + state.finalTruncateOffsets.clear(); + state.finalFirstChunkStartsAtOffsets.clear(); + state.chunkStartOffsets.clear(); + log.debug("SystemJournal[{}] Applying snapshot that includes journals up to epoch={} journal index={}", containerId, systemSnapshot.epoch, systemSnapshot.fileIndex); log.trace("SystemJournal[{}] Processing system log snapshot {}.", containerId, systemSnapshot); // Initialize the segments and their chunks. - for (SegmentSnapshotRecord segmentSnapshot : systemSnapshot.segmentSnapshotRecords) { + for (val segmentSnapshot : systemSnapshot.segmentSnapshotRecords) { // Update segment data. segmentSnapshot.segmentMetadata.setActive(true) .setOwnershipChanged(true) @@ -625,12 +652,11 @@ private CompletableFuture applySystemSnapshotRecord(Metada // Add chunk metadata and keep track of start offsets for each chunk. long offset = segmentSnapshot.segmentMetadata.getFirstChunkStartOffset(); - for (ChunkMetadata metadata : segmentSnapshot.chunkMetadataCollection) { + for (val metadata : segmentSnapshot.chunkMetadataCollection) { txn.create(metadata); // make sure that the record is marked pinned. txn.markPinned(metadata); - state.chunkStartOffsets.put(metadata.getName(), offset); offset += metadata.getLength(); } @@ -638,8 +664,8 @@ private CompletableFuture applySystemSnapshotRecord(Metada } else { log.debug("SystemJournal[{}] No previous snapshot present.", containerId); // Initialize with default values. 
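
applySystemSnapshotRecord() above now clears all partially applied state before loading a snapshot, which keeps recovery idempotent: a snapshot is a full baseline, not an increment. In miniature, assuming plain maps stand in for the transaction and bootstrap state:

    import java.util.HashMap;
    import java.util.Map;

    public class SnapshotReplay {
        // Wipe whatever partial state earlier journal records produced,
        // then load the snapshot wholesale; replaying it twice is then harmless.
        static void apply(Map<String, Long> state, Map<String, Long> snapshot) {
            state.clear();            // reset all state accumulated so far
            state.putAll(snapshot);   // the snapshot becomes the new baseline
        }

        public static void main(String[] args) {
            Map<String, Long> state = new HashMap<>(Map.of("partial", 1L));
            apply(state, Map.of("segmentA", 42L));
            System.out.println(state);   // {segmentA=42}
        }
    }
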
- for (String systemSegment : systemSegments) { - SegmentMetadata segmentMetadata = SegmentMetadata.builder() + for (val systemSegment : systemSegments) { + val segmentMetadata = SegmentMetadata.builder() .name(systemSegment) .ownerEpoch(epoch) .maxRollinglength(config.getStorageMetadataRollingPolicy().getMaxLength()) @@ -654,9 +680,7 @@ private CompletableFuture applySystemSnapshotRecord(Metada } // Validate - return checkInvariants(systemSnapshot) - .thenComposeAsync(v -> - validateSystemSnapshotExistsInTxn(txn, systemSnapshot), executor) + return validateSystemSnapshotExistsInTxn(txn, systemSnapshot) .thenApplyAsync(v -> { log.debug("SystemJournal[{}] Done applying snapshots.", containerId); return systemSnapshot; @@ -716,66 +740,39 @@ private CompletableFuture validateSegment(MetadataTransaction txn, String } /** - * Check invariants for given {@link SystemSnapshotRecord}. + * Read contents from file. */ - private CompletableFuture checkInvariants(SystemSnapshotRecord systemSnapshot) { - if (null != systemSnapshot) { - for (val segmentSnapshot : systemSnapshot.getSegmentSnapshotRecords()) { - segmentSnapshot.segmentMetadata.checkInvariants(); - Preconditions.checkState(segmentSnapshot.segmentMetadata.isStorageSystemSegment(), - "Segment must be storage segment. Segment snapshot= %s", segmentSnapshot); - Preconditions.checkState(segmentSnapshot.segmentMetadata.getChunkCount() == segmentSnapshot.chunkMetadataCollection.size(), - "Chunk count must match. Segment snapshot= %s", segmentSnapshot); - if (segmentSnapshot.chunkMetadataCollection.size() == 0) { - Preconditions.checkState(segmentSnapshot.segmentMetadata.getFirstChunk() == null, - "First chunk must be null. Segment snapshot= %s", segmentSnapshot); - Preconditions.checkState(segmentSnapshot.segmentMetadata.getLastChunk() == null, - "Last chunk must be null. Segment snapshot= %s", segmentSnapshot); - } else if (segmentSnapshot.chunkMetadataCollection.size() == 1) { - Preconditions.checkState(segmentSnapshot.segmentMetadata.getFirstChunk() != null, - "First chunk must not be null. Segment snapshot= %s", segmentSnapshot); - Preconditions.checkState(segmentSnapshot.segmentMetadata.getFirstChunk().equals(segmentSnapshot.segmentMetadata.getLastChunk()), - "First chunk and last chunk should be same. Segment snapshot= %s", segmentSnapshot); - } else { - Preconditions.checkState(segmentSnapshot.segmentMetadata.getFirstChunk() != null, - "First chunk must be not be null. Segment snapshot= %s", segmentSnapshot); - Preconditions.checkState(segmentSnapshot.segmentMetadata.getLastChunk() != null, - "Last chunk must not be null. Segment snapshot= %s", segmentSnapshot); - Preconditions.checkState(!segmentSnapshot.segmentMetadata.getFirstChunk().equals(segmentSnapshot.segmentMetadata.getLastChunk()), - "First chunk and last chunk should not match. Segment snapshot= %s", segmentSnapshot); - } - ChunkMetadata previous = null; - for (val metadata : segmentSnapshot.getChunkMetadataCollection()) { - if (previous != null) { - Preconditions.checkState(previous.getNextChunk().equals(metadata.getName()), - "In correct link . chunk %s must point to chunk %s. Segment snapshot= %s", - previous.getName(), metadata.getName(), segmentSnapshot); - } - previous = metadata; - } - } - } - return CompletableFuture.completedFuture(null); + private CompletableFuture getContents(String chunkPath) { + return getContents(chunkPath, false); } /** * Read contents from file. 
      */
-    private CompletableFuture<byte[]> getContents(String chunkPath) {
+    private CompletableFuture<byte[]> getContents(String chunkPath, boolean suppressExceptionWarning) {
         val isReadDone = new AtomicBoolean();
+        val shouldBreak = new AtomicBoolean();
         val attempt = new AtomicInteger();
         val lastException = new AtomicReference<Throwable>();
         val retValue = new AtomicReference<byte[]>();
         // Try config.getMaxJournalReadAttempts() times.
         return Futures.loop(
-                () -> attempt.get() < config.getMaxJournalReadAttempts() && !isReadDone.get(),
+                () -> attempt.get() < config.getMaxJournalReadAttempts() && !isReadDone.get() && !shouldBreak.get(),
                 () -> readFully(chunkPath, retValue)
                         .handleAsync((v, e) -> {
                             attempt.incrementAndGet();
                             if (e != null) {
                                 // record the exception
                                 lastException.set(e);
-                                log.warn("SystemJournal[{}] Error while reading journal {}.", containerId, chunkPath, lastException);
+                                val ex = Exceptions.unwrap(e);
+                                boolean shouldLog = true;
+                                if (!shouldRetry(ex)) {
+                                    shouldBreak.set(true);
+                                    shouldLog = !suppressExceptionWarning;
+                                }
+                                if (shouldLog) {
+                                    log.warn("SystemJournal[{}] Error while reading journal {}. Attempt#{}", containerId, chunkPath, attempt.get(), lastException.get());
+                                }
                                 return null;
                             } else {
                                 // no exception, we are done reading. Return the value.
@@ -786,7 +783,7 @@ private CompletableFuture<byte[]> getContents(String chunkPath) {
                 executor)
                 .handleAsync((v, e) -> {
                     // If read is not done and we have exception then throw.
-                    if (!isReadDone.get() && lastException.get() != null) {
+                    if (shouldBreak.get() || (!isReadDone.get() && lastException.get() != null)) {
                         throw new CompletionException(lastException.get());
                     }
                     return v;
@@ -794,6 +791,14 @@ private CompletableFuture<byte[]> getContents(String chunkPath) {
                 .thenApplyAsync(v -> retValue.get(), executor);
     }
 
+    /**
+     * Returns whether the operation should be retried after the given exception.
+     */
+    private boolean shouldRetry(Throwable ex) {
+        // Skip retry if we know the chunk does not exist.
+        return !(ex instanceof ChunkNotFoundException);
+    }
+
     /**
      * Read given chunk in its entirety.
      */
@@ -833,6 +838,7 @@ private CompletableFuture<Void> applySystemLogOperations(MetadataTransaction txn
         val epochToStartScanning = new AtomicLong();
         val fileIndexToRecover = new AtomicInteger(1);
+        val journalsProcessed = Collections.synchronizedList(new ArrayList<String>());
         // Starting with journal file after last snapshot,
         if (null != systemSnapshotRecord) {
             epochToStartScanning.set(systemSnapshotRecord.epoch);
@@ -850,37 +856,54 @@ private CompletableFuture<Void> applySystemLogOperations(MetadataTransaction txn
             fileIndexToRecover.set(1);
         }
 
-        // Process one file at a time.
+        // Process one journal at a time.
+        val scanAhead = new AtomicInteger();
         val isScanDone = new AtomicBoolean();
         return Futures.loop(
                 () -> !isScanDone.get(),
                 () -> {
                     val systemLogName = getSystemJournalChunkName(containerId, epochToRecover.get(), fileIndexToRecover.get());
-                    return chunkStorage.exists(systemLogName)
-                            .thenComposeAsync(exists -> {
-                                if (!exists) {
-                                    // File does not exist. We have reached end of our scanning.
-                                    isScanDone.set(true);
-                                    log.debug("SystemJournal[{}] Done applying journal operations for epoch={}. Last journal index={}",
-                                            containerId, epochToRecover.get(), fileIndexToRecover.get());
-                                    return CompletableFuture.completedFuture(null);
-                                } else {
-                                    // Read contents.
-                                    return getContents(systemLogName)
-                                            // Apply record batches from the file.
-                                            .thenComposeAsync(contents -> processJournalContents(txn, state, systemLogName, new ByteArrayInputStream(contents)), executor)
-                                            // Move to next file.
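
getContents() above now stops retrying as soon as shouldRetry() reports a permanent failure, such as a chunk that does not exist. The same short-circuit pattern in a self-contained form (RetryUntil and isRetryable are illustrative names):

    import java.util.concurrent.Callable;
    import java.util.function.Predicate;

    public class RetryUntil {
        // Retries up to maxAttempts (assumed >= 1), but stops immediately when the
        // failure is known to be permanent, mirroring shouldRetry() above.
        static <T> T call(Callable<T> op, int maxAttempts, Predicate<Exception> isRetryable) throws Exception {
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return op.call();
                } catch (Exception e) {
                    last = e;
                    if (!isRetryable.test(e)) {
                        break;   // permanent failure: further attempts cannot succeed
                    }
                }
            }
            throw last;
        }
    }
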
- .thenAcceptAsync(v -> { - fileIndexToRecover.incrementAndGet(); - state.filesProcessedCount.incrementAndGet(); - }, executor); + return getContents(systemLogName, true) + .thenApplyAsync(contents -> { + // We successfully read the contents. + journalsProcessed.add(systemLogName); + // Reset scan ahead counter. + scanAhead.set(0); + return contents; + }, executor) + // Apply record batches from the file. + .thenComposeAsync(contents -> processJournalContents(txn, state, systemLogName, new ByteArrayInputStream(contents)), executor) + .handleAsync((v, e) -> { + if (null != e) { + val ex = Exceptions.unwrap(e); + if (ex instanceof ChunkNotFoundException) { + // Journal chunk does not exist. + log.debug("SystemJournal[{}] Journal does not exist for epoch={}. Last journal index={}", + containerId, epochToRecover.get(), fileIndexToRecover.get()); + + // Check whether we have reached end of our scanning (including scan ahead). + if (scanAhead.incrementAndGet() > config.getMaxJournalWriteAttempts()) { + isScanDone.set(true); + log.debug("SystemJournal[{}] Done applying journal operations for epoch={}. Last journal index={}", + containerId, epochToRecover.get(), fileIndexToRecover.get()); + return null; + } + } else { + throw new CompletionException(e); + } } + + // Move to next journal. + fileIndexToRecover.incrementAndGet(); + state.filesProcessedCount.incrementAndGet(); + return v; }, executor); }, executor); }, v -> epochToRecover.incrementAndGet(), - executor); + executor) + .thenRunAsync(() -> pendingGarbageChunks.addAll(journalsProcessed), executor); } private CompletableFuture processJournalContents(MetadataTransaction txn, BootstrapState state, String systemLogName, ByteArrayInputStream input) { @@ -924,27 +947,30 @@ private CompletableFuture applyRecord(MetadataTransaction txn, } state.visitedRecords.add(record); state.recordsProcessedCount.incrementAndGet(); - + CompletableFuture retValue = null; // ChunkAddedRecord. if (record instanceof ChunkAddedRecord) { val chunkAddedRecord = (ChunkAddedRecord) record; - return applyChunkAddition(txn, state.chunkStartOffsets, + retValue = applyChunkAddition(txn, state.chunkStartOffsets, chunkAddedRecord.getSegmentName(), nullToEmpty(chunkAddedRecord.getOldChunkName()), chunkAddedRecord.getNewChunkName(), chunkAddedRecord.getOffset()); - } - - // TruncationRecord. - if (record instanceof TruncationRecord) { + } else if (record instanceof TruncationRecord) { + // TruncationRecord. val truncationRecord = (TruncationRecord) record; state.finalTruncateOffsets.put(truncationRecord.getSegmentName(), truncationRecord.getOffset()); state.finalFirstChunkStartsAtOffsets.put(truncationRecord.getSegmentName(), truncationRecord.getStartOffset()); - return CompletableFuture.completedFuture(null); + retValue = CompletableFuture.completedFuture(null); + } else if (record instanceof SystemSnapshotRecord) { + val snapshotRecord = (SystemSnapshotRecord) record; + retValue = Futures.toVoid(applySystemSnapshotRecord(txn, state, snapshotRecord)); } - - // Unknown record. - return CompletableFuture.failedFuture(new IllegalStateException(String.format("Unknown record type encountered. record = %s", record))); + if (null == retValue) { + // Unknown record. + retValue = CompletableFuture.failedFuture(new IllegalStateException(String.format("Unknown record type encountered. 
record = %s", record))); + } + return retValue; } /** @@ -952,10 +978,10 @@ private CompletableFuture applyRecord(MetadataTransaction txn, */ private CompletableFuture adjustLastChunkLengths(MetadataTransaction txn) { val futures = new ArrayList>(); - for (String systemSegment : systemSegments) { + for (val systemSegment : systemSegments) { val f = txn.get(systemSegment) .thenComposeAsync(m -> { - SegmentMetadata segmentMetadata = (SegmentMetadata) m; + val segmentMetadata = (SegmentMetadata) m; segmentMetadata.checkInvariants(); CompletableFuture ff; // Update length of last chunk in metadata to what we actually find on LTS. @@ -965,7 +991,7 @@ private CompletableFuture adjustLastChunkLengths(MetadataTransaction txn) long length = chunkInfo.getLength(); return txn.get(segmentMetadata.getLastChunk()) .thenAcceptAsync(mm -> { - ChunkMetadata lastChunk = (ChunkMetadata) mm; + val lastChunk = (ChunkMetadata) mm; Preconditions.checkState(null != lastChunk, "lastChunk must not be null. Segment=%s", segmentMetadata); lastChunk.setLength(length); txn.update(lastChunk); @@ -998,7 +1024,7 @@ private CompletableFuture adjustLastChunkLengths(MetadataTransaction txn) private CompletableFuture applyFinalTruncateOffsets(MetadataTransaction txn, BootstrapState state) { val futures = new ArrayList>(); - for (String systemSegment : systemSegments) { + for (val systemSegment : systemSegments) { if (state.finalTruncateOffsets.containsKey(systemSegment)) { val truncateAt = state.finalTruncateOffsets.get(systemSegment); val firstChunkStartsAt = state.finalFirstChunkStartsAtOffsets.get(systemSegment); @@ -1051,8 +1077,9 @@ private CompletableFuture applyChunkAddition(MetadataTransaction txn, Map< .thenAcceptAsync(mmm -> { val chunkToDelete = (ChunkMetadata) mmm; txn.delete(toDelete.get()); - toDelete.set(chunkToDelete.getNextChunk()); segmentMetadata.setChunkCount(segmentMetadata.getChunkCount() - 1); + // move to next chunk in list of now zombie chunks + toDelete.set(chunkToDelete.getNextChunk()); }, executor), executor) .thenAcceptAsync(v -> { @@ -1060,7 +1087,7 @@ private CompletableFuture applyChunkAddition(MetadataTransaction txn, Map< oldChunk.setNextChunk(newChunkName); // Set length - long oldLength = chunkStartOffsets.get(oldChunkName); + val oldLength = chunkStartOffsets.get(oldChunkName); oldChunk.setLength(offset - oldLength); txn.update(oldChunk); @@ -1069,6 +1096,7 @@ private CompletableFuture applyChunkAddition(MetadataTransaction txn, Map< } else { segmentMetadata.setFirstChunk(newChunkName); segmentMetadata.setStartOffset(offset); + Preconditions.checkState(segmentMetadata.getChunkCount() == 0, "Chunk count must be 0. 
%s", segmentMetadata); f = CompletableFuture.completedFuture(null); } return f.thenComposeAsync(v -> { @@ -1093,7 +1121,7 @@ private CompletableFuture applyChunkAddition(MetadataTransaction txn, Map< private CompletableFuture applyTruncate(MetadataTransaction txn, String segmentName, long truncateAt, long firstChunkStartsAt) { return txn.get(segmentName) .thenComposeAsync(metadata -> { - SegmentMetadata segmentMetadata = (SegmentMetadata) metadata; + val segmentMetadata = (SegmentMetadata) metadata; segmentMetadata.checkInvariants(); val currentChunkName = new AtomicReference<>(segmentMetadata.getFirstChunk()); val currentMetadata = new AtomicReference(); @@ -1149,7 +1177,7 @@ private CompletableFuture createSystemSnapshotRecord(Metad .build(); val futures = Collections.synchronizedList(new ArrayList>()); - for (String systemSegment : systemSegments) { + for (val systemSegment : systemSegments) { // Find segment metadata. val future = txn.get(systemSegment) .thenComposeAsync(metadata -> { @@ -1210,7 +1238,10 @@ private CompletableFuture createSystemSnapshotRecord(Metad futures.add(future); } return Futures.allOf(futures) - .thenApplyAsync(v -> systemSnapshot, executor); + .thenApplyAsync(vv -> { + systemSnapshot.checkInvariants(); + return systemSnapshot; + }, executor); } /** @@ -1241,7 +1272,7 @@ private CompletableFuture writeSystemSnapshotRecord(SystemSnapshotRecor try { val snapshotReadback = SYSTEM_SNAPSHOT_SERIALIZER.deserialize(contents); if (config.isSelfCheckEnabled()) { - checkInvariants(snapshotReadback); + snapshotReadback.checkInvariants(); } Preconditions.checkState(systemSnapshot.equals(snapshotReadback), "Records do not match %s != %s", snapshotReadback, systemSnapshot); // Record as successful. @@ -1259,6 +1290,8 @@ private CompletableFuture writeSystemSnapshotRecord(SystemSnapshotRecor attempt.incrementAndGet(); if (e != null) { lastException.set(Exceptions.unwrap(e)); + // Add failed file as garbage. + pendingGarbageChunks.add(snapshotFile); return null; } else { return v; @@ -1287,6 +1320,7 @@ private CompletableFuture writeToJournal(ByteArraySegment bytes) { new ByteArrayInputStream(bytes.array(), bytes.arrayOffset(), bytes.getLength())) .thenAcceptAsync(h -> { currentHandle.set(h); + pendingGarbageChunks.add(h.getChunkName()); systemJournalOffset.addAndGet(bytes.getLength()); newChunkRequired.set(false); }, executor); @@ -1590,6 +1624,45 @@ static class SegmentSnapshotRecord extends SystemJournalRecord { @NonNull private final Collection chunkMetadataCollection; + /** + * Check invariants. + */ + public void checkInvariants() { + segmentMetadata.checkInvariants(); + Preconditions.checkState(segmentMetadata.isStorageSystemSegment(), + "Segment must be storage segment. Segment snapshot= %s", this); + Preconditions.checkState(segmentMetadata.getChunkCount() == chunkMetadataCollection.size(), + "Chunk count must match. Segment snapshot= %s", this); + + long dataSize = 0; + ChunkMetadata previous = null; + ChunkMetadata firstChunk = null; + for (val metadata : getChunkMetadataCollection()) { + dataSize += metadata.getLength(); + if (previous != null) { + Preconditions.checkState(previous.getNextChunk().equals(metadata.getName()), + "In correct link . chunk %s must point to chunk %s. Segment snapshot= %s", + previous.getName(), metadata.getName(), this); + } else { + firstChunk = metadata; + } + previous = metadata; + } + Preconditions.checkState(dataSize == segmentMetadata.getLength() - segmentMetadata.getFirstChunkStartOffset(), + "Data size does not match dataSize (%s). 
Segment=%s", dataSize, segmentMetadata); + + if (chunkMetadataCollection.size() > 0) { + Preconditions.checkState(segmentMetadata.getFirstChunk().equals(firstChunk.getName()), + "First chunk name is wrong. Segment snapshot= %s", this); + Preconditions.checkState(segmentMetadata.getLastChunk().equals(previous.getName()), + "Last chunk name is wrong. Segment snapshot= %s", this); + Preconditions.checkState(previous.getNextChunk() == null, + "Invalid last chunk Segment snapshot= %s", this); + Preconditions.checkState(segmentMetadata.getLength() == segmentMetadata.getLastChunkStartOffset() + previous.getLength(), + "Last chunk start offset is wrong. snapshot= %s", this); + } + } + /** * Builder that implements {@link ObjectBuilder}. */ @@ -1656,6 +1729,15 @@ static class SystemSnapshotRecord extends SystemJournalRecord { @NonNull private final Collection segmentSnapshotRecords; + /** + * Check invariants. + */ + public void checkInvariants() { + for (val segmentSnapshot : getSegmentSnapshotRecords()) { + segmentSnapshot.checkInvariants(); + } + } + /** * Builder that implements {@link ObjectBuilder}. */ @@ -1668,7 +1750,7 @@ public static class SystemSnapshotRecordBuilder implements ObjectBuilder { private static final SegmentSnapshotRecord.Serializer CHUNK_METADATA_SERIALIZER = new SegmentSnapshotRecord.Serializer(); private static final RevisionDataOutput.ElementSerializer ELEMENT_SERIALIZER = CHUNK_METADATA_SERIALIZER::serialize; - private static final RevisionDataInput.ElementDeserializer ELEMENT_DESERIALIZER = dataInput -> (SegmentSnapshotRecord) CHUNK_METADATA_SERIALIZER.deserialize(dataInput.getBaseStream()); + private static final RevisionDataInput.ElementDeserializer ELEMENT_DESERIALIZER = dataInput -> CHUNK_METADATA_SERIALIZER.deserialize(dataInput.getBaseStream()); @Override protected SystemSnapshotRecord.SystemSnapshotRecordBuilder newBuilder() { diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/TruncateOperation.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/TruncateOperation.java index 374cb62ee8f..fe59cae3b64 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/TruncateOperation.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/TruncateOperation.java @@ -67,6 +67,7 @@ class TruncateOperation implements Callable> { timer = new Timer(); } + @Override public CompletableFuture call() { checkPreconditions(); log.debug("{} truncate - started op={}, segment={}, offset={}.", @@ -82,27 +83,50 @@ public CompletableFuture call() { if (segmentMetadata.getStartOffset() >= offset) { // Nothing to do + logEnd(); return CompletableFuture.completedFuture(null); } + val oldChunkCount = segmentMetadata.getChunkCount(); val oldStartOffset = segmentMetadata.getStartOffset(); return updateFirstChunk(txn) .thenComposeAsync(v -> deleteChunks(txn) .thenComposeAsync( vvv -> { - txn.update(segmentMetadata); // Check invariants. + segmentMetadata.checkInvariants(); Preconditions.checkState(segmentMetadata.getLength() == oldLength, "truncate should not change segment length. oldLength=%s Segment=%s", oldLength, segmentMetadata); - segmentMetadata.checkInvariants(); + Preconditions.checkState(oldChunkCount - chunksToDelete.size() == segmentMetadata.getChunkCount(), + "Number of chunks do not match. 
old value (%s) - number of chunks deleted (%s) must match current chunk count(%s)", + oldChunkCount, chunksToDelete.size(), segmentMetadata.getChunkCount()); + if (null != currentMetadata && null != segmentMetadata.getFirstChunk()) { + Preconditions.checkState(segmentMetadata.getFirstChunk().equals(currentMetadata.getName()), + "First chunk name must match current metadata. Expected = %s Actual = %s", segmentMetadata.getFirstChunk(), currentMetadata.getName()); + Preconditions.checkState(segmentMetadata.getStartOffset() <= segmentMetadata.getFirstChunkStartOffset() + currentMetadata.getLength(), + "segment start offset (%s) must be less than or equal to first chunk start offset (%s)+ first chunk length (%s)", + segmentMetadata.getStartOffset(), segmentMetadata.getFirstChunkStartOffset(), currentMetadata.getLength()); + if (segmentMetadata.getChunkCount() == 1) { + Preconditions.checkState(segmentMetadata.getLength() - segmentMetadata.getFirstChunkStartOffset() == currentMetadata.getLength(), + "Length of first chunk (%s) must match segment length (%s) - first chunk start offset (%s) when there is only one chunk", + currentMetadata.getLength(), segmentMetadata.getLength(), segmentMetadata.getFirstChunkStartOffset()); + } + } // Remove read index block entries. - chunkedSegmentStorage.deleteBlockIndexEntriesForChunk(txn, streamSegmentName, oldStartOffset, segmentMetadata.getStartOffset()); - - // Finally commit. - return commit(txn) - .handleAsync(this::handleException, chunkedSegmentStorage.getExecutor()) - .thenRunAsync(this::postCommit, chunkedSegmentStorage.getExecutor()); + // To avoid possibility of unintentional deadlock, skip this step for storage system segments. + if (!segmentMetadata.isStorageSystemSegment()) { + chunkedSegmentStorage.deleteBlockIndexEntriesForChunk(txn, streamSegmentName, oldStartOffset, segmentMetadata.getStartOffset()); + } + + // Collect garbage. + return chunkedSegmentStorage.getGarbageCollector().addChunksToGarbage(txn.getVersion(), chunksToDelete) + .thenComposeAsync( vv -> { + // Finally commit. + return commit(txn) + .handleAsync(this::handleException, chunkedSegmentStorage.getExecutor()) + .thenRunAsync(this::postCommit, chunkedSegmentStorage.getExecutor()); + }, chunkedSegmentStorage.getExecutor()); }, chunkedSegmentStorage.getExecutor()), chunkedSegmentStorage.getExecutor()); }, chunkedSegmentStorage.getExecutor()), @@ -110,11 +134,8 @@ public CompletableFuture call() { } private void postCommit() { - // Collect garbage. - chunkedSegmentStorage.getGarbageCollector().addToGarbage(chunksToDelete); // Update the read index by removing all entries below truncate offset. 
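
The strengthened truncate checks above amount to bookkeeping invariants. The chunk-count half of them, in isolation (illustrative names, assuming the deleted list is accurate):

    import java.util.List;

    public class TruncateInvariant {
        // The chunks removed from the metadata must account exactly for the change in chunk count.
        static void checkChunkCount(int oldCount, int newCount, List<String> deleted) {
            if (oldCount - deleted.size() != newCount) {
                throw new IllegalStateException("Number of chunks does not match: "
                        + oldCount + " - " + deleted.size() + " != " + newCount);
            }
        }

        public static void main(String[] args) {
            checkChunkCount(5, 3, List.of("chunk-0", "chunk-1"));   // passes: 5 - 2 == 3
        }
    }
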
chunkedSegmentStorage.getReadIndexCache().truncateReadIndex(handle.getSegmentName(), segmentMetadata.getStartOffset()); - logEnd(); } diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/WriteOperation.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/WriteOperation.java index 64085256357..fdc60c344be 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/WriteOperation.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/chunklayer/WriteOperation.java @@ -42,7 +42,6 @@ import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; -import java.util.stream.Collectors; import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_NUM_CHUNKS_ADDED; import static io.pravega.segmentstore.storage.chunklayer.ChunkStorageMetrics.SLTS_SYSTEM_NUM_CHUNKS_ADDED; @@ -93,6 +92,7 @@ class WriteOperation implements Callable> { timer = new Timer(); } + @Override public CompletableFuture call() { // Validate preconditions. checkPreconditions(); @@ -132,7 +132,6 @@ public CompletableFuture call() { postCommit(), chunkedSegmentStorage.getExecutor()) .exceptionally(this::handleException), chunkedSegmentStorage.getExecutor()) - .whenCompleteAsync((value, e) -> collectGarbage(), chunkedSegmentStorage.getExecutor()) .thenRunAsync(this::logEnd, chunkedSegmentStorage.getExecutor()), chunkedSegmentStorage.getExecutor()); }, chunkedSegmentStorage.getExecutor()); @@ -190,13 +189,6 @@ private void logEnd() { LoggerHelpers.traceLeave(log, "write", traceId, handle, offset); } - private void collectGarbage() { - if (!isCommitted && chunksAddedCount.get() > 0) { - // Collect garbage. - chunkedSegmentStorage.getGarbageCollector().addToGarbage(newReadIndexEntries.stream().map(ChunkNameOffsetPair::getChunkName).collect(Collectors.toList())); - } - } - private CompletableFuture commit(MetadataTransaction txn) { // commit all system log records if required. if (isSystemSegment && chunksAddedCount.get() > 0) { @@ -213,6 +205,8 @@ private CompletableFuture commit(MetadataTransaction txn) { } private CompletableFuture writeData(MetadataTransaction txn) { + val oldChunkCount = segmentMetadata.getChunkCount(); + val oldLength = segmentMetadata.getLength(); return Futures.loop( () -> bytesRemaining.get() > 0, () -> { @@ -249,6 +243,17 @@ private CompletableFuture writeData(MetadataTransaction txn) { .thenRunAsync(() -> { // Check invariants. segmentMetadata.checkInvariants(); + Preconditions.checkState(oldChunkCount + chunksAddedCount.get() == segmentMetadata.getChunkCount(), + "Number of chunks do not match. old value (%s) + number of chunks added (%s) must match current chunk count(%s)", + oldChunkCount, chunksAddedCount.get(), segmentMetadata.getChunkCount()); + Preconditions.checkState(oldLength + length == segmentMetadata.getLength(), + "New length must match. 
old value (%s) + length (%s) must match current chunk count(%s)", + oldLength, length, segmentMetadata.getLength()); + if (null != lastChunkMetadata.get()) { + Preconditions.checkState(segmentMetadata.getLastChunkStartOffset() + lastChunkMetadata.get().getLength() == segmentMetadata.getLength(), + "Last chunk start offset (%s) + Last chunk length (%s) must match segment length (%s)", + segmentMetadata.getLastChunkStartOffset(), lastChunkMetadata.get().getLength(), segmentMetadata.getLength()); + } }, chunkedSegmentStorage.getExecutor()); } @@ -271,44 +276,48 @@ private CompletableFuture addNewChunk(MetadataTransaction txn) { // Create new chunk String newChunkName = getNewChunkName(handle.getSegmentName(), segmentMetadata.getLength()); - CompletableFuture createdHandle; - if (chunkedSegmentStorage.shouldAppend()) { - createdHandle = chunkedSegmentStorage.getChunkStorage().create(newChunkName); - } else { - createdHandle = CompletableFuture.completedFuture(ChunkHandle.writeHandle(newChunkName)); - } - return createdHandle - .thenAcceptAsync(h -> { - chunkHandle = h; - String previousLastChunkName = lastChunkMetadata.get() == null ? null : lastChunkMetadata.get().getName(); - - // update first and last chunks. - lastChunkMetadata.set(updateMetadataForChunkAddition(txn, - segmentMetadata, - newChunkName, - isFirstWriteAfterFailover, - lastChunkMetadata.get())); - - // Record the creation of new chunk. - if (isSystemSegment) { - addSystemLogRecord(systemLogRecords, - handle.getSegmentName(), - segmentMetadata.getLength(), - previousLastChunkName, - newChunkName); - txn.markPinned(lastChunkMetadata.get()); - } - // Update read index. - newReadIndexEntries.add(new ChunkNameOffsetPair(segmentMetadata.getLength(), newChunkName)); - isFirstWriteAfterFailover = false; - skipOverFailedChunk = false; - didSegmentLayoutChange = true; - chunksAddedCount.incrementAndGet(); + return chunkedSegmentStorage.getGarbageCollector().trackNewChunk(txn.getVersion(), newChunkName) + .thenComposeAsync( v -> { + CompletableFuture createdHandle; + if (chunkedSegmentStorage.shouldAppend()) { + createdHandle = chunkedSegmentStorage.getChunkStorage().create(newChunkName); + } else { + createdHandle = CompletableFuture.completedFuture(ChunkHandle.writeHandle(newChunkName)); + } + return createdHandle + .thenAcceptAsync(h -> { + chunkHandle = h; + String previousLastChunkName = lastChunkMetadata.get() == null ? null : lastChunkMetadata.get().getName(); - log.debug("{} write - New chunk added - op={}, segment={}, chunk={}, offset={}.", - chunkedSegmentStorage.getLogPrefix(), System.identityHashCode(this), handle.getSegmentName(), newChunkName, segmentMetadata.getLength()); - }, chunkedSegmentStorage.getExecutor()); + // update first and last chunks. + lastChunkMetadata.set(updateMetadataForChunkAddition(txn, + segmentMetadata, + newChunkName, + isFirstWriteAfterFailover, + lastChunkMetadata.get())); + + // Record the creation of new chunk. + if (isSystemSegment) { + addSystemLogRecord(systemLogRecords, + handle.getSegmentName(), + segmentMetadata.getLength(), + previousLastChunkName, + newChunkName); + txn.markPinned(lastChunkMetadata.get()); + } + // Update read index. 
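
Note the ordering in the change above: trackNewChunk() runs before the chunk is physically created, so a crash between the two steps leaves a tracked orphan the collector can reap, never an untracked chunk that leaks forever. The ordering, in a minimal sketch with hypothetical names:

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class TrackThenCreate {
        private final Set<String> tracked = ConcurrentHashMap.newKeySet();

        // Step 1 records the name (cheap, and durable in the real system);
        // step 2 creates the chunk in storage. Failing between the two is safe.
        void createChunk(String name, Runnable create) {
            tracked.add(name);
            create.run();
        }

        boolean isTracked(String name) {
            return tracked.contains(name);
        }
    }
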
+ newReadIndexEntries.add(new ChunkNameOffsetPair(segmentMetadata.getLength(), newChunkName)); + + isFirstWriteAfterFailover = false; + skipOverFailedChunk = false; + didSegmentLayoutChange = true; + chunksAddedCount.incrementAndGet(); + + log.debug("{} write - New chunk added - op={}, segment={}, chunk={}, offset={}.", + chunkedSegmentStorage.getLogPrefix(), System.identityHashCode(this), handle.getSegmentName(), newChunkName, segmentMetadata.getLength()); + }, chunkedSegmentStorage.getExecutor()); + }, chunkedSegmentStorage.getExecutor()); } private void checkState() { diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/BaseMetadataStore.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/BaseMetadataStore.java index eef08fd88bb..bc28a681db3 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/BaseMetadataStore.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/BaseMetadataStore.java @@ -149,6 +149,8 @@ abstract public class BaseMetadataStore implements ChunkMetadataStore { */ private final ConcurrentHashMap bufferedTxnData; + private final ConcurrentHashMap activeTxns; + /** * Set of active records from commits that are in-flight. These records should not be evicted until the active commits finish. */ @@ -218,6 +220,7 @@ public BaseMetadataStore(ChunkedSegmentStorageConfig config, Executor executor) version = new AtomicLong(System.currentTimeMillis()); // Start with unique number. fenced = new AtomicBoolean(false); bufferedTxnData = new ConcurrentHashMap<>(); // Don't think we need anything fancy here. But we'll measure and see. + activeTxns = new ConcurrentHashMap<>(); activeKeys = ConcurrentHashMultiset.create(); maxEntriesInTxnBuffer = config.getMaxEntriesInTxnBuffer(); maxEntriesInCache = config.getMaxEntriesInCache(); @@ -235,7 +238,23 @@ public BaseMetadataStore(ChunkedSegmentStorageConfig config, Executor executor) @Override public MetadataTransaction beginTransaction(boolean isReadonly, String... keysToLock) { // Each transaction gets a unique number which is monotonically increasing. - return new MetadataTransaction(this, isReadonly, version.incrementAndGet(), keysToLock); + val txn = new MetadataTransaction(this, isReadonly, version.incrementAndGet(), keysToLock); + activeTxns.put(txn.getVersion(), txn); + return txn; + } + + /** + * Closes the transaction. + * @param txn transaction to close. + */ + @Override + public void closeTransaction(MetadataTransaction txn) { + activeTxns.remove(txn.getVersion()); + } + + @Override + public boolean isTransactionActive(long txnId) { + return activeTxns.containsKey(txnId); } /** @@ -553,6 +572,7 @@ private void evictFromBuffer(List keysToEvict) { * @param txn transaction to abort. * throws StorageMetadataException If there are any errors. */ + @Override public CompletableFuture abort(MetadataTransaction txn) { Preconditions.checkArgument(null != txn, "txn must not be null"); // Do nothing @@ -875,6 +895,7 @@ public void close() { * Explicitly marks the store as fenced. * Once marked fenced no modifications to data should be allowed. 
*/ + @Override public void markFenced() { this.fenced.set(true); } diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/ChunkMetadataStore.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/ChunkMetadataStore.java index b114926883d..3a228abce6b 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/ChunkMetadataStore.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/ChunkMetadataStore.java @@ -94,6 +94,18 @@ public interface ChunkMetadataStore extends AutoCloseable, StatsReporter { */ MetadataTransaction beginTransaction(boolean isReadonly, String... keysToLock); + /** + * Closes the transaction. + * @param txn transaction to close. + */ + void closeTransaction(MetadataTransaction txn); + + /** + * Returns whether give transaction is active or not. + * @param txnId transaction Id to check. + */ + boolean isTransactionActive(long txnId); + /** * Retrieves the metadata for given key. * diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/MetadataTransaction.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/MetadataTransaction.java index d0650389e77..34b46c8f378 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/MetadataTransaction.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/MetadataTransaction.java @@ -260,10 +260,12 @@ public CompletableFuture abort() { * {@link AutoCloseable#close()} implementation. * */ + @Override public void close() { if (!isCommitted || isAborted) { store.abort(this); } + store.closeTransaction(this); } } diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/SegmentMetadata.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/SegmentMetadata.java index c8398ce514a..1f2d9753ad0 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/SegmentMetadata.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/metadata/SegmentMetadata.java @@ -214,7 +214,7 @@ public void checkInvariants() { Preconditions.checkState(lastChunkStartOffset >= 0, "lastChunkStartOffset should be non-negative. %s", this); Preconditions.checkState(firstChunkStartOffset <= startOffset, "startOffset must not be smaller than firstChunkStartOffset. %s", this); Preconditions.checkState(length >= lastChunkStartOffset, "lastChunkStartOffset must not be greater than length. %s", this); - Preconditions.checkState(firstChunkStartOffset <= lastChunkStartOffset, "lastChunkStartOffset must not be greater than firstChunkStartOffset. %s", this); + Preconditions.checkState(firstChunkStartOffset <= lastChunkStartOffset, "firstChunkStartOffset must not be greater than lastChunkStartOffset. %s", this); Preconditions.checkState(chunkCount >= 0, "chunkCount should be non-negative. %s", this); Preconditions.checkState(length >= startOffset, "length must be greater or equal to startOffset. 
%s", this); if (null == firstChunk) { diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryChunkStorage.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryChunkStorage.java index 043ef6ef4bc..808ebaba7dd 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryChunkStorage.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryChunkStorage.java @@ -61,9 +61,28 @@ protected ChunkInfo doGetInfo(String chunkName) throws ChunkStorageException, Il .build(); } + @Override + protected ChunkHandle doCreateWithContent(String chunkName, int length, InputStream data) throws ChunkStorageException { + Preconditions.checkNotNull(chunkName); + if (null != chunks.putIfAbsent(chunkName, new InMemoryChunk(chunkName))) { + throw new ChunkAlreadyExistsException(chunkName, "InMemoryChunkStorage::doCreate"); + } + ChunkHandle handle = new ChunkHandle(chunkName, false); + int bytesWritten = doWriteInternal(handle, 0, length, data); + if (bytesWritten < length) { + doDelete(ChunkHandle.writeHandle(chunkName)); + throw new ChunkStorageException(chunkName, "doCreateWithContent - invalid length returned"); + } + return handle; + } + @Override protected ChunkHandle doCreate(String chunkName) throws ChunkStorageException, IllegalArgumentException { Preconditions.checkNotNull(chunkName); + if (!supportsAppend()) { + throw new UnsupportedOperationException("Attempt to create empty object when append is not supported."); + } + if (null != chunks.putIfAbsent(chunkName, new InMemoryChunk(chunkName))) { throw new ChunkAlreadyExistsException(chunkName, "InMemoryChunkStorage::doCreate"); } @@ -139,6 +158,14 @@ protected int doRead(ChunkHandle handle, long fromOffset, int length, byte[] buf @Override protected int doWrite(ChunkHandle handle, long offset, int length, InputStream data) throws ChunkStorageException { + if (!supportsAppend()) { + throw new UnsupportedOperationException("Attempt to create empty object when append is not supported."); + } + + return doWriteInternal(handle, offset, length, data); + } + + private int doWriteInternal(ChunkHandle handle, long offset, int length, InputStream data) throws ChunkStorageException { InMemoryChunk chunk = getInMemoryChunk(handle); long oldLength = chunk.getLength(); if (chunk.isReadOnly) { diff --git a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryMetadataStore.java b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryMetadataStore.java index 6085945ba0c..8bfef431008 100644 --- a/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryMetadataStore.java +++ b/segmentstore/storage/src/main/java/io/pravega/segmentstore/storage/mocks/InMemoryMetadataStore.java @@ -73,6 +73,7 @@ public InMemoryMetadataStore(ChunkedSegmentStorageConfig config, Executor execut * @param key Key for the metadata record. * @return Associated {@link io.pravega.segmentstore.storage.metadata.BaseMetadataStore.TransactionData}. */ + @Override protected CompletableFuture read(String key) { synchronized (this) { TransactionData data = backingStore.get(key); @@ -103,6 +104,7 @@ protected CompletableFuture read(String key) { * * @param dataList List of transaction data to write. 
*/ + @Override protected CompletableFuture writeAll(Collection dataList) { CompletableFuture f; if (writeCallback != null) { diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/IdempotentStorageTestBase.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/IdempotentStorageTestBase.java index 46f8069e4f0..762040e7aad 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/IdempotentStorageTestBase.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/IdempotentStorageTestBase.java @@ -240,7 +240,7 @@ public void testPartialConcat() throws Exception { Assert.assertEquals(String.format("Unexpected number of bytes read from offset %d.", offset), readBuffer.length, bytesRead); AssertExtensions.assertArrayEquals(String.format("Unexpected read result from offset %d.", offset), - readBuffer, (int) offset, readBuffer, 0, bytesRead); + readBuffer, offset, readBuffer, 0, bytesRead); } s1.delete(writeHandle1, TIMEOUT).join(); } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/StorageTestBase.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/StorageTestBase.java index dd8563979f0..f784c126055 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/StorageTestBase.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/StorageTestBase.java @@ -59,7 +59,7 @@ */ public abstract class StorageTestBase extends ThreadPooledTestSuite { //region General Test arguments - protected static final Duration TIMEOUT = Duration.ofSeconds(30); + protected static final Duration TIMEOUT = Duration.ofSeconds(60); protected static final long DEFAULT_EPOCH = 1; protected static final int APPENDS_PER_SEGMENT = 10; protected static final String APPEND_FORMAT = "Segment_%s_Append_%d"; @@ -93,11 +93,6 @@ public void testCreate() throws Exception { assertThrows("create() did not throw for existing StreamSegment.", () -> createSegment(segmentName, s), ex -> ex instanceof StreamSegmentExistsException); - - // Delete and make sure it can be recreated. - s.openWrite(segmentName).thenCompose(handle -> s.delete(handle, null)).join(); - createSegment(segmentName, s); - Assert.assertTrue("Expected the segment to exist.", s.exists(segmentName, null).join()); } } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageTests.java index abb2cd0c20a..a7860c452e9 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkStorageTests.java @@ -33,8 +33,10 @@ import java.util.concurrent.CompletionException; import java.util.concurrent.ExecutionException; -import static org.junit.Assert.*; +import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; /** * Unit tests specifically targeted at test {@link ChunkStorage} implementation. @@ -87,7 +89,7 @@ public void testChunkLifeCycle() throws Exception { testNotExists(chunkName); // Perform basic operations. 
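
Tying back to the BaseMetadataStore changes above: the store now keeps every open transaction in a map keyed by its monotonically increasing version, which is what isTransactionActive() consults before the garbage collector acts on a transaction's chunks. Stripped of Pravega types, the registry is essentially this sketch (TxnRegistry is an illustrative name):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    public class TxnRegistry {
        private final ConcurrentHashMap<Long, Object> active = new ConcurrentHashMap<>();
        private final AtomicLong ids = new AtomicLong(System.currentTimeMillis());   // start with a unique number

        long begin() {
            long id = ids.incrementAndGet();
            active.put(id, Boolean.TRUE);
            return id;
        }

        void close(long id) {
            active.remove(id);
        }

        boolean isActive(long id) {
            return active.containsKey(id);
        }
    }
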
- ChunkHandle chunkHandle = chunkStorage.create(chunkName).get(); + ChunkHandle chunkHandle = chunkStorage.createWithContent(chunkName, 1, new ByteArrayInputStream(new byte[1])).get(); assertEquals(chunkName, chunkHandle.getChunkName()); assertEquals(false, chunkHandle.isReadOnly()); @@ -190,16 +192,14 @@ public void testSimpleReadWriteCreateWithContent() throws Exception { public void testConsecutiveReads() throws Exception { String chunkName = "testchunk"; - // Create. - ChunkHandle chunkHandle = chunkStorage.create(chunkName).get(); - assertEquals(chunkName, chunkHandle.getChunkName()); - assertEquals(false, chunkHandle.isReadOnly()); - - // Write + // Create. Write byte[] writeBuffer = new byte[15]; populate(writeBuffer); - int bytesWritten = chunkStorage.write(chunkHandle, 0, writeBuffer.length, new ByteArrayInputStream(writeBuffer)).get(); - assertEquals(writeBuffer.length, bytesWritten); + ChunkHandle chunkHandle = chunkStorage.createWithContent(chunkName, writeBuffer.length, new ByteArrayInputStream(writeBuffer)).get(); + assertEquals(chunkName, chunkHandle.getChunkName()); + assertEquals(false, chunkHandle.isReadOnly()); + val chunkInfo = chunkStorage.getInfo(chunkName).get(); + assertEquals(writeBuffer.length, chunkInfo.getLength()); // Read back in multiple reads. byte[] readBuffer = new byte[writeBuffer.length]; @@ -360,6 +360,9 @@ public void testSimpleReadExceptions() throws Exception { */ @Test public void testSimpleWriteExceptions() throws Exception { + if (!chunkStorage.supportsAppend()) { + return; + } String chunkName = "testchunk"; byte[] writeBuffer = new byte[10]; @@ -456,7 +459,9 @@ public void testSimpleScenario() throws Exception { // Open writable handles ChunkHandle handleA; ChunkInfo chunkInfoA; - byte[] bytes = new byte[10]; + long multiple = getMinimumConcatSize(); + + byte[] bytes = new byte[Math.toIntExact(10 * multiple)]; populate(bytes); int totalBytesWritten = 0; @@ -473,22 +478,23 @@ public void testSimpleScenario() throws Exception { handleA = chunkStorage.openWrite(chunknameA).get(); assertFalse(handleA.isReadOnly()); for (int i = 1; i < 5; i++) { - try (BoundedInputStream bis = new BoundedInputStream(new ByteArrayInputStream(bytes), i)) { - int bytesWritten = chunkStorage.write(handleA, totalBytesWritten, i, bis).get(); - assertEquals(i, bytesWritten); + val size = i * multiple; + try (BoundedInputStream bis = new BoundedInputStream(new ByteArrayInputStream(bytes), Math.toIntExact(size))) { + int bytesWritten = chunkStorage.write(handleA, totalBytesWritten, Math.toIntExact(size), bis).get(); + assertEquals(size, bytesWritten); } - totalBytesWritten += i; + totalBytesWritten += size; } } else { handleA = chunkStorage.createWithContent(chunknameA, bytes.length, new ByteArrayInputStream(bytes)).get(); - totalBytesWritten = 10; + totalBytesWritten = Math.toIntExact(10 * multiple); } chunkInfoA = chunkStorage.getInfo(chunknameA).get(); assertEquals(totalBytesWritten, chunkInfoA.getLength()); // Write some data to segment B - byte[] bytes2 = new byte[5]; + byte[] bytes2 = new byte[Math.toIntExact(5 * multiple)]; populate(bytes2); ChunkHandle handleB = chunkStorage.createWithContent(chunknameB, bytes2.length, new ByteArrayInputStream(bytes2)).get(); totalBytesWritten += bytes.length; @@ -499,7 +505,7 @@ public void testSimpleScenario() throws Exception { assertFalse(handleB.isReadOnly()); // Read some data int totalBytesRead = 0; - byte[] buffer = new byte[10]; + byte[] buffer = new byte[Math.toIntExact(10 * multiple)]; for (int i = 1; i < 5; i++) { 
totalBytesRead += chunkStorage.read(handleA, totalBytesRead, i, buffer, totalBytesRead).get(); } @@ -523,19 +529,24 @@ public void testSimpleScenario() throws Exception { } } + protected int getMinimumConcatSize() { + return 1; + } + /** * Test concat operation for non-existent chunks. */ @Test public void testConcatNotExists() throws Exception { String existingChunkName = "test"; - ChunkHandle existingChunkHandle = chunkStorage.create(existingChunkName).get(); + val size = getMinimumConcatSize() + 1; + ChunkHandle existingChunkHandle = chunkStorage.createWithContent(existingChunkName, size, new ByteArrayInputStream(new byte[size])).get(); try { AssertExtensions.assertFutureThrows( " concat should throw ChunkNotFoundException.", chunkStorage.concat( new ConcatArgument[]{ - ConcatArgument.builder().name(existingChunkName).length(0).build(), + ConcatArgument.builder().name(existingChunkName).length(size).build(), ConcatArgument.builder().name("NonExistent").length(1).build() } ), @@ -544,8 +555,8 @@ public void testConcatNotExists() throws Exception { " concat should throw ChunkNotFoundException.", chunkStorage.concat( new ConcatArgument[]{ - ConcatArgument.builder().name("NonExistent").length(0).build(), - ConcatArgument.builder().name(existingChunkName).length(0).build(), + ConcatArgument.builder().name("NonExistent").length(1).build(), + ConcatArgument.builder().name(existingChunkName).length(size).build(), } ), ex -> ex instanceof ChunkNotFoundException); @@ -646,7 +657,7 @@ public void testConcatException() throws Exception { @Test public void testDeleteAfterOpen() throws Exception { String testChunkName = "test"; - ChunkHandle writeHandle = chunkStorage.create(testChunkName).get(); + ChunkHandle writeHandle = chunkStorage.createWithContent(testChunkName, 1, new ByteArrayInputStream(new byte[1])).get(); ChunkHandle readHandle = chunkStorage.openRead(testChunkName).get(); chunkStorage.delete(writeHandle).join(); byte[] bufferRead = new byte[10]; @@ -654,10 +665,12 @@ public void testDeleteAfterOpen() throws Exception { " read should throw ChunkNotFoundException.", chunkStorage.read(readHandle, 0, 10, bufferRead, 0), ex -> ex instanceof ChunkNotFoundException && ex.getMessage().contains(testChunkName)); - AssertExtensions.assertFutureThrows( - " write should throw ChunkNotFoundException.", - chunkStorage.write(writeHandle, 0, 1, new ByteArrayInputStream(new byte[1])), - ex -> ex instanceof ChunkNotFoundException && ex.getMessage().contains(testChunkName)); + if (chunkStorage.supportsAppend()) { + AssertExtensions.assertFutureThrows( + " write should throw ChunkNotFoundException.", + chunkStorage.write(writeHandle, 0, 1, new ByteArrayInputStream(new byte[1])), + ex -> ex instanceof ChunkNotFoundException && ex.getMessage().contains(testChunkName)); + } AssertExtensions.assertFutureThrows( " truncate should throw ChunkNotFoundException.", chunkStorage.truncate(writeHandle, 0), @@ -737,38 +750,37 @@ public void testConcat() throws Exception { public void testReadonly() throws Exception { String chunkName = "chunk"; // Create chunks - chunkStorage.create(chunkName).get(); + chunkStorage.createWithContent(chunkName, 1, new ByteArrayInputStream(new byte[1])).get(); assertTrue(chunkStorage.exists(chunkName).get()); - // Open writable handle - ChunkHandle hWrite = chunkStorage.openWrite(chunkName).get(); - assertFalse(hWrite.isReadOnly()); - - // Write some data - int bytesWritten = chunkStorage.write(hWrite, 0, 1, new ByteArrayInputStream(new byte[1])).get(); - assertEquals(1, bytesWritten); - - 
AssertExtensions.assertThrows( - " write should throw IllegalArgumentException.", - () -> chunkStorage.write(ChunkHandle.readHandle(chunkName), 0, 1, new ByteArrayInputStream(new byte[1])).get(), - ex -> ex instanceof IllegalArgumentException); - AssertExtensions.assertThrows( " delete should throw IllegalArgumentException.", () -> chunkStorage.delete(ChunkHandle.readHandle(chunkName)).get(), ex -> ex instanceof IllegalArgumentException); try { + // Open writable handle + ChunkHandle hWrite = chunkStorage.openWrite(chunkName).get(); + assertFalse(hWrite.isReadOnly()); + int bytesWritten = 0; + + // Write some data + AssertExtensions.assertThrows( + " write should throw IllegalArgumentException.", + () -> chunkStorage.write(ChunkHandle.readHandle(chunkName), 1, 1, new ByteArrayInputStream(new byte[1])).get(), + ex -> ex instanceof IllegalArgumentException); + // Make readonly and open. chunkStorage.setReadOnly(hWrite, true).join(); - chunkStorage.openWrite(chunkName); // Make writable and open again. chunkStorage.setReadOnly(hWrite, false).join(); + ChunkHandle hWrite2 = chunkStorage.openWrite(chunkName).get(); assertFalse(hWrite2.isReadOnly()); - - bytesWritten = chunkStorage.write(hWrite2, 1, 1, new ByteArrayInputStream(new byte[1])).get(); - assertEquals(1, bytesWritten); + if (chunkStorage.supportsAppend()) { + bytesWritten = chunkStorage.write(hWrite2, 1, 1, new ByteArrayInputStream(new byte[1])).get(); + assertEquals(1, bytesWritten); + } chunkStorage.delete(hWrite).join(); } catch (Exception e) { val ex = Exceptions.unwrap(e); @@ -815,7 +827,7 @@ public void testTruncateExceptions() throws Exception { try { String chunkName = "chunk"; // Create chunks - chunkStorage.create(chunkName).get(); + chunkStorage.createWithContent(chunkName, 1, new ByteArrayInputStream(new byte[1])).get(); assertTrue(chunkStorage.exists(chunkName).get()); // Open writable handle @@ -880,11 +892,12 @@ private void testNotExists(String chunkName) throws Exception { chunkStorage.getInfo(chunkName), ex -> ex instanceof ChunkNotFoundException && ex.getMessage().contains(chunkName)); - AssertExtensions.assertFutureThrows( - " write should throw exception.", - chunkStorage.write(ChunkHandle.writeHandle(chunkName), 0, 1, new ByteArrayInputStream(new byte[1])), - ex -> ex instanceof ChunkNotFoundException && ex.getMessage().contains(chunkName)); - + if (chunkStorage.supportsAppend()) { + AssertExtensions.assertFutureThrows( + " write should throw exception.", + chunkStorage.write(ChunkHandle.writeHandle(chunkName), 0, 1, new ByteArrayInputStream(new byte[1])), + ex -> ex instanceof ChunkNotFoundException && ex.getMessage().contains(chunkName)); + } AssertExtensions.assertFutureThrows( " setReadOnly should throw exception.", chunkStorage.setReadOnly(ChunkHandle.writeHandle(chunkName), false), diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedRollingStorageTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedRollingStorageTests.java index 3ae80ce3509..973216abb84 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedRollingStorageTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedRollingStorageTests.java @@ -19,7 +19,9 @@ import io.pravega.segmentstore.storage.metadata.ChunkMetadataStore; import io.pravega.segmentstore.storage.mocks.InMemoryChunkStorage; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; +import 
io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager; import io.pravega.segmentstore.storage.rolling.RollingStorageTestBase; +import lombok.val; import java.util.concurrent.ScheduledExecutorService; @@ -48,11 +50,17 @@ protected Storage createStorage() throws Exception { chunkStorage = getChunkStorage(); } } - return new ChunkedSegmentStorage(CONTAINER_ID, + val ret = new ChunkedSegmentStorage(CONTAINER_ID, chunkStorage, chunkMetadataStore, executor, - ChunkedSegmentStorageConfig.DEFAULT_CONFIG); + getDefaultConfig()); + ret.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); + return ret; + } + + protected ChunkedSegmentStorageConfig getDefaultConfig() { + return ChunkedSegmentStorageConfig.DEFAULT_CONFIG; } /** @@ -72,7 +80,7 @@ protected ChunkStorage getChunkStorage() throws Exception { * @throws Exception If any unexpected error occurred. */ protected ChunkMetadataStore getMetadataStore() throws Exception { - return new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); + return new InMemoryMetadataStore(getDefaultConfig(), executorService()); } @Override diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfigTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfigTests.java index 8e1f19b53a6..bbe3622a520 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfigTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageConfigTests.java @@ -47,6 +47,7 @@ public void testProvidedValues() { props.setProperty(ChunkedSegmentStorageConfig.READ_INDEX_BLOCK_SIZE.getFullName(ChunkedSegmentStorageConfig.COMPONENT_CODE), "14"); props.setProperty(ChunkedSegmentStorageConfig.MAX_METADATA_ENTRIES_IN_BUFFER.getFullName(ChunkedSegmentStorageConfig.COMPONENT_CODE), "15"); props.setProperty(ChunkedSegmentStorageConfig.MAX_METADATA_ENTRIES_IN_CACHE.getFullName(ChunkedSegmentStorageConfig.COMPONENT_CODE), "16"); + props.setProperty(ChunkedSegmentStorageConfig.GARBAGE_COLLECTION_MAX_TXN_BATCH_SIZE.getFullName(ChunkedSegmentStorageConfig.COMPONENT_CODE), "17"); TypedProperties typedProperties = new TypedProperties(props, "storage"); ChunkedSegmentStorageConfig config = new ChunkedSegmentStorageConfig(typedProperties); @@ -69,6 +70,7 @@ public void testProvidedValues() { Assert.assertEquals(config.getIndexBlockSize(), 14); Assert.assertEquals(config.getMaxEntriesInTxnBuffer(), 15); Assert.assertEquals(config.getMaxEntriesInCache(), 16); + Assert.assertEquals(config.getGarbageCollectionTransactionBatchSize(), 17); } @Test @@ -100,6 +102,7 @@ private void testDefaultValues(ChunkedSegmentStorageConfig config) { Assert.assertEquals(config.getIndexBlockSize(), ChunkedSegmentStorageConfig.DEFAULT_CONFIG.getIndexBlockSize()); Assert.assertEquals(config.getMaxEntriesInTxnBuffer(), ChunkedSegmentStorageConfig.DEFAULT_CONFIG.getMaxEntriesInTxnBuffer()); Assert.assertEquals(config.getMaxEntriesInCache(), ChunkedSegmentStorageConfig.DEFAULT_CONFIG.getMaxEntriesInCache()); + Assert.assertEquals(config.getGarbageCollectionTransactionBatchSize(), ChunkedSegmentStorageConfig.DEFAULT_CONFIG.getGarbageCollectionTransactionBatchSize()); } @Test diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageMockTests.java 
b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageMockTests.java index 718a159f32a..bbb0672ce65 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageMockTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageMockTests.java @@ -23,6 +23,7 @@ import io.pravega.segmentstore.storage.metadata.StorageMetadataVersionMismatchException; import io.pravega.segmentstore.storage.metadata.StorageMetadataWritesFencedOutException; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; +import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager; import io.pravega.segmentstore.storage.noop.NoOpChunkStorage; import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.ThreadPooledTestSuite; @@ -33,7 +34,6 @@ import java.io.ByteArrayInputStream; import java.io.IOException; -import java.time.Duration; import java.util.concurrent.CompletableFuture; import java.util.concurrent.CompletionException; @@ -82,6 +82,7 @@ public void testExceptionDuringCommit(Exception exceptionToThrow, Class clazz, b @Cleanup ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, spyChunkStorage, spyMetadataStore, executorService(), config); chunkedSegmentStorage.initialize(1); + chunkedSegmentStorage.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Step 1: Create segment and write some data. val h1 = chunkedSegmentStorage.create(testSegmentName, policy, null).get(); @@ -228,6 +229,7 @@ public void testExceptionDuringMetadataRead(Exception exceptionToThrow, Class cl @Cleanup ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, spyChunkStorage, spyMetadataStore, executorService(), config); chunkedSegmentStorage.initialize(1); + chunkedSegmentStorage.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Step 1: Create segment and write some data. val h1 = chunkedSegmentStorage.create(testSegmentName, policy, null).get(); @@ -318,7 +320,7 @@ public void testIOExceptionDuringWrite() throws Exception { @Cleanup ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, spyChunkStorage, spyMetadataStore, executorService(), config); chunkedSegmentStorage.initialize(1); - + chunkedSegmentStorage.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Step 1: Create segment and write some data. val h1 = chunkedSegmentStorage.create(testSegmentName, policy, null).get(); @@ -339,73 +341,6 @@ public void testIOExceptionDuringWrite() throws Exception { //verify(spyChunkStorage, times(1)).doDelete(any()); } - @Test - public void testFileNotFoundExceptionDuringGarbageCollection() throws Exception { - String testSegmentName = "test"; - SegmentRollingPolicy policy = new SegmentRollingPolicy(2); // Force rollover after every 2 byte. 
- val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() - .garbageCollectionDelay(Duration.ZERO) - .build(); - @Cleanup - BaseMetadataStore spyMetadataStore = spy(new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService())); - @Cleanup - BaseChunkStorage spyChunkStorage = spy(new NoOpChunkStorage(executorService())); - ((NoOpChunkStorage) spyChunkStorage).setShouldSupportConcat(false); - @Cleanup - ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, spyChunkStorage, spyMetadataStore, executorService(), config); - chunkedSegmentStorage.initialize(1); - chunkedSegmentStorage.getGarbageCollector().setSuspended(true); - - // Step 1: Create segment and write some data. - val h1 = chunkedSegmentStorage.create(testSegmentName, policy, null).get(); - chunkedSegmentStorage.write(h1, 0, new ByteArrayInputStream(new byte[10]), 10, null).get(); - - Assert.assertEquals(h1.getSegmentName(), testSegmentName); - Assert.assertFalse(h1.isReadOnly()); - // Step 2: Inject fault. - Exception exceptionToThrow = new ChunkNotFoundException("Test Exception", "Mock Exception", new Exception("Mock Exception")); - doThrow(exceptionToThrow).when(spyChunkStorage).doDelete(any()); - - chunkedSegmentStorage.delete(h1, null).get(); - Assert.assertEquals(5, chunkedSegmentStorage.getGarbageCollector().getGarbageChunks().size()); - chunkedSegmentStorage.getGarbageCollector().deleteGarbage(false, 100).get(); - verify(spyChunkStorage, times(5)).doDelete(any()); - } - - @Test - public void testExceptionDuringGarbageCollection() throws Exception { - String testSegmentName = "test"; - SegmentRollingPolicy policy = new SegmentRollingPolicy(2); // Force rollover after every 2 byte. - val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() - .storageMetadataRollingPolicy(policy) - .garbageCollectionDelay(Duration.ZERO) - .build(); - @Cleanup - BaseMetadataStore spyMetadataStore = spy(new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService())); - @Cleanup - BaseChunkStorage spyChunkStorage = spy(new NoOpChunkStorage(executorService())); - ((NoOpChunkStorage) spyChunkStorage).setShouldSupportConcat(false); - @Cleanup - ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, spyChunkStorage, spyMetadataStore, executorService(), config); - chunkedSegmentStorage.initialize(1); - chunkedSegmentStorage.getGarbageCollector().setSuspended(true); - - // Step 1: Create segment and write some data. - val h1 = chunkedSegmentStorage.create(testSegmentName, policy, null).get(); - chunkedSegmentStorage.write(h1, 0, new ByteArrayInputStream(new byte[10]), 10, null).get(); - - Assert.assertEquals(h1.getSegmentName(), testSegmentName); - Assert.assertFalse(h1.isReadOnly()); - // Step 2: Inject fault. 
- Exception exceptionToThrow = new IllegalStateException("Test Exception"); - doThrow(exceptionToThrow).when(spyChunkStorage).doDelete(any()); - - chunkedSegmentStorage.delete(h1, null).get(); - Assert.assertEquals(5, chunkedSegmentStorage.getGarbageCollector().getGarbageChunks().size()); - chunkedSegmentStorage.getGarbageCollector().deleteGarbage(false, 100).get(); - verify(spyChunkStorage, times(5)).doDelete(any()); - } - @Test public void testReport() { val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG; diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageTests.java index bd92f31c2e2..5c6d164df98 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/ChunkedSegmentStorageTests.java @@ -35,6 +35,7 @@ import io.pravega.segmentstore.storage.metadata.StatusFlags; import io.pravega.segmentstore.storage.mocks.AbstractInMemoryChunkStorage; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; +import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager; import io.pravega.segmentstore.storage.noop.NoOpChunkStorage; import io.pravega.shared.NameUtils; import io.pravega.test.common.AssertExtensions; @@ -44,6 +45,7 @@ import java.time.Duration; import java.util.ArrayList; import java.util.Arrays; +import java.util.HashSet; import java.util.Random; import java.util.TreeMap; import java.util.UUID; @@ -71,7 +73,7 @@ */ @Slf4j public class ChunkedSegmentStorageTests extends ThreadPooledTestSuite { - protected static final Duration TIMEOUT = Duration.ofSeconds(30); + protected static final Duration TIMEOUT = Duration.ofSeconds(3000); private static final int CONTAINER_ID = 42; private static final int OWNER_EPOCH = 100; protected final Random rnd = new Random(0); @@ -79,16 +81,19 @@ public class ChunkedSegmentStorageTests extends ThreadPooledTestSuite { @Rule public Timeout globalTimeout = Timeout.seconds(TIMEOUT.getSeconds()); + @Override @Before public void before() throws Exception { super.before(); } + @Override @After public void after() throws Exception { super.after(); } + @Override protected int getThreadPoolSize() { return 1; } @@ -499,6 +504,8 @@ public void testSimpleScenarioWithNonAppendProvider() throws Exception { val h = testContext.chunkedSegmentStorage.create(testSegmentName, policy, null).get(); Assert.assertEquals(h.getSegmentName(), testSegmentName); Assert.assertFalse(h.isReadOnly()); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); // Check metadata is stored. 
val segmentMetadata = TestUtils.getSegmentMetadata(testContext.metadataStore, testSegmentName); @@ -560,6 +567,9 @@ public void testSimpleScenarioWithNonAppendProvider() throws Exception { TestUtils.checkSegmentBounds(testContext.metadataStore, testSegmentName, 0, 14); TestUtils.checkReadIndexEntries(testContext.chunkedSegmentStorage, testContext.metadataStore, testSegmentName, 0, 14, true); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); info = testContext.chunkedSegmentStorage.getStreamSegmentInfo(testSegmentName, null).get(); Assert.assertFalse(info.isSealed()); @@ -596,6 +606,8 @@ private void testSimpleScenario(String testSegmentName, SegmentRollingPolicy pol val h = testContext.chunkedSegmentStorage.create(testSegmentName, policy, null).get(); Assert.assertEquals(h.getSegmentName(), testSegmentName); Assert.assertFalse(h.isReadOnly()); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); // Check metadata is stored. val segmentMetadata = TestUtils.getSegmentMetadata(testContext.metadataStore, testSegmentName); @@ -643,6 +655,9 @@ private void testSimpleScenario(String testSegmentName, SegmentRollingPolicy pol TestUtils.checkSegmentBounds(testContext.metadataStore, testSegmentName, 0, 14); TestUtils.checkReadIndexEntries(testContext.chunkedSegmentStorage, testContext.metadataStore, testSegmentName, 0, 14, true); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); info = testContext.chunkedSegmentStorage.getStreamSegmentInfo(testSegmentName, null).get(); Assert.assertFalse(info.isSealed()); @@ -987,6 +1002,8 @@ public void testWrite() throws Exception { // Create val hWrite = testContext.chunkedSegmentStorage.create(testSegmentName, policy, null).get(); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); // Write some data. long writeAt = 0; @@ -994,6 +1011,9 @@ public void testWrite() throws Exception { testContext.chunkedSegmentStorage.write(hWrite, writeAt, new ByteArrayInputStream(new byte[i]), i, null).join(); writeAt += i; } + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); int total = 10; @@ -1015,6 +1035,8 @@ public void testWriteAfterWriteFailure() throws Exception { // Create val hWrite = testContext.chunkedSegmentStorage.create(testSegmentName, policy, null).get(); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); // Write some data. 
long writeAt = 0; @@ -1032,6 +1054,9 @@ public void testWriteAfterWriteFailure() throws Exception { TestUtils.checkSegmentBounds(testContext.metadataStore, testSegmentName, 0, 10); TestUtils.checkReadIndexEntries(testContext.chunkedSegmentStorage, testContext.metadataStore, testSegmentName, 0, 10, true); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); } /** @@ -1048,6 +1073,8 @@ public void testWriteSequential() throws Exception { // Create val hWrite = testContext.chunkedSegmentStorage.create(testSegmentName, policy, null).get(); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); // Write some data sequentially. val bytes = populate(100); @@ -1057,7 +1084,9 @@ public void testWriteSequential() throws Exception { futures.add(testContext.chunkedSegmentStorage.write(hWrite, i, new ByteArrayInputStream(bytes, i, 1), 1, null)); } Futures.allOf(futures).join(); - + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); checkDataRead(testSegmentName, testContext, 0, bytes.length, bytes); } @@ -1159,7 +1188,7 @@ public void testSegmentNotExistsExceptionForDeleted() throws Exception { testContext.chunkedSegmentStorage.delete(h, null).join(); Assert.assertFalse(testContext.chunkedSegmentStorage.exists(testSegmentName, null).get()); val segmentMetadataAfterDelete = TestUtils.getSegmentMetadata(testContext.metadataStore, testSegmentName); - Assert.assertNull(segmentMetadataAfterDelete); + Assert.assertFalse(segmentMetadataAfterDelete.isActive()); AssertExtensions.assertFutureThrows( "getStreamSegmentInfo succeeded on missing segment.", @@ -1414,17 +1443,23 @@ private void testSimpleConcat(TestContext testContext, int maxChunkLength, int n // Populate segments. val h1 = populateSegment(testContext, targetSegmentName, maxChunkLength, nChunks1); val h2 = populateSegment(testContext, sourceSegmentName, maxChunkLength, nChunks2); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, targetSegmentName)); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, sourceSegmentName)); // Concat. testContext.chunkedSegmentStorage.seal(h2, null).join(); testContext.chunkedSegmentStorage.concat(h1, (long) nChunks1 * (long) maxChunkLength, sourceSegmentName, null).join(); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, targetSegmentName)); + // Validate.
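+ // Chunks present in chunksBefore but not in chunksAfter are the ones replaced by this concat/defrag; they are assumed to be handed to the garbage collector's task queue rather than deleted inline, which is what checkGarbageCollectionQueue verifies below.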
TestUtils.checkSegmentLayout(testContext.metadataStore, targetSegmentName, maxChunkLength, nChunks1 + nChunks2); TestUtils.checkSegmentBounds(testContext.metadataStore, targetSegmentName, 0, ((long) nChunks1 + (long) nChunks2) * maxChunkLength); TestUtils.checkReadIndexEntries(testContext.chunkedSegmentStorage, testContext.metadataStore, targetSegmentName, 0, ((long) nChunks1 + (long) nChunks2) * maxChunkLength, true); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, targetSegmentName); - + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); } @Test @@ -1446,6 +1481,9 @@ public void testSimpleConcatWithDefrag() throws Exception { private void testBaseConcat(TestContext testContext, long maxRollingLength, long[] targetLayout, long[] sourceLayout, long[] resultLayout) throws Exception { val source = testContext.insertMetadata("source", maxRollingLength, 1, sourceLayout); val target = testContext.insertMetadata("target", maxRollingLength, 1, targetLayout); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, "source")); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, "target")); // Concat. testContext.chunkedSegmentStorage.seal(SegmentStorageHandle.writeHandle("source"), null).join(); @@ -1461,7 +1499,9 @@ private void testBaseConcat(TestContext testContext, long maxRollingLength, long TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, "target"); TestUtils.checkSegmentBounds(testContext.metadataStore, "target", 0, Arrays.stream(resultLayout).sum()); TestUtils.checkReadIndexEntries(testContext.chunkedSegmentStorage, testContext.metadataStore, "target", 0, Arrays.stream(resultLayout).sum(), true); - + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, "target")); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); // Cleanup testContext.chunkedSegmentStorage.delete(SegmentStorageHandle.writeHandle("target"), null).join(); } @@ -1636,49 +1676,144 @@ public void testBasicConcatWithDefrag() throws Exception { @Test public void testBaseConcatWithDefragWithMinMaxLimits() throws Exception { // Set limits. + val maxRollingSize = 30; ChunkedSegmentStorageConfig config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() - .maxSizeLimitForConcat(12) - .minSizeLimitForConcat(2) + .maxSizeLimitForConcat(20) + .minSizeLimitForConcat(10) .build(); @Cleanup TestContext testContext = getTestContext(config); ((AbstractInMemoryChunkStorage) testContext.chunkStorage).setShouldSupportConcat(true); - // Populate segments - testBaseConcat(testContext, 1024, + // no-op. + testBaseConcat(testContext, maxRollingSize, + new long[]{1}, + new long[]{21, 21, 21}, + new long[]{1, 21, 21, 21}); + + // no-op - max rollover size. + testBaseConcat(testContext, maxRollingSize, + new long[]{30}, + new long[]{29, 2}, + new long[]{30, 29, 2}); + // no-op - max rollover size. + testBaseConcat(testContext, maxRollingSize, + new long[]{30}, + new long[]{1, 2, 3, 4}, + new long[]{30, 10}); + + // small chunks followed by normal chunks. + testBaseConcat(testContext, maxRollingSize, new long[]{10}, - new long[]{1, 1, 1, 3, 1, 1, 3, 1, 3}, // small chunks followed by normal chunks. 
+ new long[]{1, 1, 1, 3, 1, 1, 3, 1, 3}, new long[]{25}); - testBaseConcat(testContext, 1024, + // normal chunks followed by small chunks. + testBaseConcat(testContext, maxRollingSize, new long[]{10}, - new long[]{3, 1, 1, 1, 3, 1, 1, 3, 1}, // normal chunks followed by small chunks. + new long[]{3, 1, 1, 1, 3, 1, 1, 3, 1}, new long[]{25}); - testBaseConcat(testContext, 1024, + // consecutive normal. + testBaseConcat(testContext, maxRollingSize, new long[]{10}, - new long[]{1, 3, 3, 3, 1, 2, 2}, // consecutive normal. + new long[]{1, 3, 3, 3, 1, 2, 2}, new long[]{25}); - testBaseConcat(testContext, 1024, + testBaseConcat(testContext, maxRollingSize, new long[]{10}, - new long[]{5, 5, 5}, // all large chunks. + new long[]{5, 5, 5}, new long[]{25}); - testBaseConcat(testContext, 1024, + // all small chunks. + testBaseConcat(testContext, maxRollingSize, new long[]{10}, - new long[]{2, 2, 2, 2, 2, 2, 2, 1}, // all small chunks. + new long[]{2, 2, 2, 2, 2, 2, 2, 1}, new long[]{25}); - testBaseConcat(testContext, 1024, + testBaseConcat(testContext, maxRollingSize, new long[]{10}, - new long[]{12, 3}, // all concats possible. + new long[]{12, 3}, new long[]{25}); - testBaseConcat(testContext, 1024, + testBaseConcat(testContext, maxRollingSize, new long[]{10}, - new long[]{13, 2}, // not all concats possible. - new long[]{10, 15}); + new long[]{13, 2}, + new long[]{25}); + + // First chunk is greater than max concat size + testBaseConcat(testContext, maxRollingSize, + new long[]{13}, + new long[]{11, 1}, + new long[]{25}); + + // First chunk is greater than max concat size + testBaseConcat(testContext, maxRollingSize, + new long[]{13}, + new long[]{10, 2}, + new long[]{25}); + } + + @Test + public void testBaseConcatWithDefragWithMinMaxLimitsNoAppends() throws Exception { + // Set limits. + val maxRollingSize = 30; + ChunkedSegmentStorageConfig config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .maxSizeLimitForConcat(20) + .minSizeLimitForConcat(10) + .appendEnabled(false) + .build(); + @Cleanup + TestContext testContext = getTestContext(config); + ((AbstractInMemoryChunkStorage) testContext.chunkStorage).setShouldSupportConcat(true); + + // Normal case. + testBaseConcat(testContext, maxRollingSize, + new long[]{11}, + new long[]{12}, + new long[]{23}); + + // Bigger than max allowed. + testBaseConcat(testContext, maxRollingSize, + new long[]{10}, + new long[]{20}, + new long[]{10, 20}); + + // Target is bigger than max allowed after first concat. + testBaseConcat(testContext, maxRollingSize, + new long[]{11}, + new long[]{12, 13}, + new long[]{23, 13}); + + // One of the chunks in the middle is smaller than min size allowed. + testBaseConcat(testContext, maxRollingSize, + new long[]{11}, + new long[]{12, 5, 13}, + new long[]{23, 5, 13}); + + // All chunks are smaller, resultant chunk gets bigger than max size allowed. + testBaseConcat(testContext, maxRollingSize, + new long[]{11}, + new long[]{2, 2, 2, 2, 2, 2}, + new long[]{21, 2}); + + // Chunks are already at max rolling size. + testBaseConcat(testContext, maxRollingSize, + new long[]{30}, + new long[]{2, 30, 2, 30, 2, 30}, + new long[]{30, 2, 30, 2, 30, 2, 30}); + + // Test max rollover size. + testBaseConcat(testContext, maxRollingSize, + new long[]{11}, + new long[]{9, 10}, + new long[]{30}); + + // Test max rollover size. 
+ testBaseConcat(testContext, maxRollingSize, + new long[]{20}, + new long[]{10, 10}, + new long[]{30, 10}); } /** @@ -1890,11 +2025,17 @@ public void testRepeatedTruncates() throws Exception { // Perform series of truncates. for (int truncateAt = 0; truncateAt < 9; truncateAt++) { + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + testContext.chunkedSegmentStorage.truncate(h1, truncateAt, null).join(); TestUtils.checkSegmentLayout(testContext.metadataStore, testSegmentName, 3, 3 - (truncateAt / 3)); TestUtils.checkSegmentBounds(testContext.metadataStore, testSegmentName, truncateAt, 9); TestUtils.checkReadIndexEntries(testContext.chunkedSegmentStorage, testContext.metadataStore, testSegmentName, truncateAt, 9, true); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); val metadata = TestUtils.getSegmentMetadata(testContext.metadataStore, testSegmentName); // length doesn't change. @@ -1927,11 +2068,17 @@ public void testRepeatedTruncatesOnLargeChunkVaryingSizes() throws Exception { int truncateAt = 0; for (int i = 0; i < 4; i++) { + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + testContext.chunkedSegmentStorage.truncate(h1, truncateAt, null).join(); // Check layout. TestUtils.checkSegmentLayout(testContext.metadataStore, testSegmentName, new long[]{10}); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); // Validate. val metadata = TestUtils.getSegmentMetadata(testContext.metadataStore, testSegmentName); @@ -1955,6 +2102,8 @@ public void testRepeatedTruncatesOnLargeChunk() throws Exception { val h1 = populateSegment(testContext, testSegmentName, 10, 1); byte[] buffer = new byte[10]; for (int truncateAt = 0; truncateAt < 9; truncateAt++) { + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); testContext.chunkedSegmentStorage.truncate(h1, truncateAt, null).join(); // Check layout. @@ -1967,6 +2116,9 @@ public void testRepeatedTruncatesOnLargeChunk() throws Exception { Assert.assertEquals(truncateAt, metadata.getStartOffset()); Assert.assertEquals(0, metadata.getFirstChunkStartOffset()); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); // Validate read. 
val bytesRead = testContext.chunkedSegmentStorage.read(h1, truncateAt, buffer, 0, 10 - truncateAt, null).get().intValue(); @@ -1994,6 +2146,8 @@ public void testRepeatedTruncatesAtFullLength() throws Exception { val info2 = testContext.chunkedSegmentStorage.getStreamSegmentInfo(testSegmentName, null).get(); expectedLength += i; Assert.assertEquals(expectedLength, info2.getLength()); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); // Now truncate testContext.chunkedSegmentStorage.truncate(h1, info2.getLength(), null).join(); @@ -2007,6 +2161,9 @@ public void testRepeatedTruncatesAtFullLength() throws Exception { Assert.assertEquals(null, metadata.getLastChunk()); Assert.assertEquals(null, metadata.getFirstChunk()); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); // Validate Exceptions. val expectedLength2 = expectedLength; @@ -2050,6 +2207,8 @@ private void testTruncate(long maxChunkLength, long truncateAt, int chunksCountB // Populate val h1 = populateSegment(testContext, testSegmentName, maxChunkLength, chunksCountBefore); + HashSet chunksBefore = new HashSet<>(); + chunksBefore.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); // Perform truncate. testContext.chunkedSegmentStorage.truncate(h1, truncateAt, null).join(); @@ -2059,6 +2218,9 @@ private void testTruncate(long maxChunkLength, long truncateAt, int chunksCountB TestUtils.checkSegmentBounds(testContext.metadataStore, testSegmentName, truncateAt, expectedLength); TestUtils.checkReadIndexEntries(testContext.chunkedSegmentStorage, testContext.metadataStore, testSegmentName, truncateAt, expectedLength, true); TestUtils.checkChunksExistInStorage(testContext.chunkStorage, testContext.metadataStore, testSegmentName); + HashSet chunksAfter = new HashSet<>(); + chunksAfter.addAll(TestUtils.getChunkNameList(testContext.metadataStore, testSegmentName)); + TestUtils.checkGarbageCollectionQueue(testContext.chunkedSegmentStorage, chunksBefore, chunksAfter); } /** @@ -2291,9 +2453,7 @@ public void testTruncateWithFailover() throws Exception { // Make sure to open segment with new instance before writing garbage to old instance. hWrite = newTestContext.chunkedSegmentStorage.openWrite(testSegmentName).get(); newTestContext.chunkedSegmentStorage.truncate(hWrite, offset, null).get(); - newTestContext.chunkedSegmentStorage.getGarbageCollector().setSuspended(true); - newTestContext.chunkedSegmentStorage.getGarbageCollector().deleteGarbage(false, 100).get(); - //checkDataRead(testSegmentName, testContext, offset, 0); + TestUtils.checkSegmentBounds(newTestContext.metadataStore, testSegmentName, offset, offset); TestUtils.checkReadIndexEntries(newTestContext.chunkedSegmentStorage, newTestContext.metadataStore, testSegmentName, offset, offset, false); @@ -2558,6 +2718,53 @@ private void testParallelReadRequestsOnSingleSegmentWithReentry(int numberOfRequ CompletableFuture.allOf(futures).join(); } + /** + * Test concurrent writes to storage system segments by simulating concurrent writes. + * + * @throws Exception Throws exception in case of any error. 
+ */ + @Test + public void testSystemSegmentConcurrency() throws Exception { + + SegmentRollingPolicy policy = new SegmentRollingPolicy(2); // Force rollover after every 2 byte. + @Cleanup + TestContext testContext = getTestContext(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder().indexBlockSize(3).build()); + // Force parallel writes irrespective of thread pool size for tests themselves. + val writeSize = 10; + val numWrites = 10; + val numOfStorageSystemSegments = SystemJournal.getChunkStorageSystemSegments(CONTAINER_ID).length; + val data = new byte[numOfStorageSystemSegments][writeSize * numWrites]; + + var futures = new ArrayList>(); + // To make sure all write operations are concurrent. + @Cleanup("shutdownNow") + ExecutorService executor = Executors.newFixedThreadPool(numOfStorageSystemSegments); + + for (int i = 0; i < numOfStorageSystemSegments; i++) { + final int k = i; + futures.add(CompletableFuture.runAsync(() -> { + + populate(data[k]); + String systemSegmentName = SystemJournal.getChunkStorageSystemSegments(CONTAINER_ID)[k]; + val h = testContext.chunkedSegmentStorage.create(systemSegmentName, null).join(); + // Init + long offset = 0; + for (int j = 0; j < numWrites; j++) { + testContext.chunkedSegmentStorage.write(h, offset, new ByteArrayInputStream(data[k], writeSize * j, writeSize), writeSize, null).join(); + offset += writeSize; + } + val info = testContext.chunkedSegmentStorage.getStreamSegmentInfo(systemSegmentName, null).join(); + Assert.assertEquals(writeSize * numWrites, info.getLength()); + byte[] out = new byte[writeSize * numWrites]; + val hr = testContext.chunkedSegmentStorage.openRead(systemSegmentName).join(); + testContext.chunkedSegmentStorage.read(hr, 0, out, 0, writeSize * numWrites, null).join(); + Assert.assertArrayEquals(data[k], out); + }, executor)); + } + + Futures.allOf(futures).join(); + } + @Test public void testSimpleScenarioWithBlockIndexEntries() throws Exception { String testSegmentName = "foo"; @@ -2663,7 +2870,7 @@ public void testReadHugeChunks() throws Exception { public void testConcatHugeChunks() throws Exception { @Cleanup TestContext testContext = getTestContext(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() - .minSizeLimitForConcat(10L * Integer.MAX_VALUE) + .minSizeLimitForConcat(Integer.MAX_VALUE) .maxSizeLimitForConcat(100L * Integer.MAX_VALUE) .build()); testBaseConcat(testContext, 10L * Integer.MAX_VALUE, @@ -2849,6 +3056,9 @@ public static class TestContext implements AutoCloseable { @Getter protected ScheduledExecutorService executor; + @Getter + protected AbstractTaskQueueManager taskQueue; + protected TestContext() { } @@ -2861,8 +3071,10 @@ public TestContext(ScheduledExecutorService executor, ChunkedSegmentStorageConfi this.config = config; chunkStorage = createChunkStorage(); metadataStore = createMetadataStore(); + taskQueue = createTaskQueue(); chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, chunkStorage, metadataStore, this.executor, config); chunkedSegmentStorage.initialize(1); + chunkedSegmentStorage.getGarbageCollector().initialize(taskQueue).join(); } /** @@ -2885,6 +3097,8 @@ public TestContext fork(long epoch) throws Exception { this.executor, this.config); forkedContext.chunkedSegmentStorage.initialize(epoch); + forkedContext.taskQueue = createTaskQueue(); + forkedContext.chunkedSegmentStorage.getGarbageCollector().initialize(taskQueue).join(); return forkedContext; } @@ -2917,6 +3131,10 @@ public ChunkStorage createChunkStorage() throws Exception { return new 
NoOpChunkStorage(executor); } + public AbstractTaskQueueManager createTaskQueue() throws Exception { + return new InMemoryTaskQueueManager(); + } + /** * Creates and inserts metadata for a test segment. */ diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollectorTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollectorTests.java index 4966c33724e..849a49c239c 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollectorTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/GarbageCollectorTests.java @@ -16,15 +16,20 @@ package io.pravega.segmentstore.storage.chunklayer; -import io.pravega.common.concurrent.Futures; +import com.google.common.base.Preconditions; import io.pravega.segmentstore.storage.metadata.ChunkMetadata; import io.pravega.segmentstore.storage.metadata.ChunkMetadataStore; +import io.pravega.segmentstore.storage.metadata.ReadIndexBlockMetadata; +import io.pravega.segmentstore.storage.metadata.SegmentMetadata; +import io.pravega.segmentstore.storage.metadata.StatusFlags; +import io.pravega.segmentstore.storage.metadata.StorageMetadataException; import io.pravega.segmentstore.storage.mocks.InMemoryChunkStorage; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; +import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager; +import io.pravega.shared.NameUtils; import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.ThreadPooledTestSuite; import lombok.Cleanup; -import lombok.Getter; import lombok.extern.slf4j.Slf4j; import lombok.val; import org.junit.After; @@ -36,36 +41,43 @@ import java.io.ByteArrayInputStream; import java.time.Duration; -import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; +import java.util.HashSet; +import java.util.TreeMap; import java.util.concurrent.CompletableFuture; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.function.Supplier; +import java.util.function.Function; import java.util.stream.Collectors; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.spy; + /** * Tests for {@link GarbageCollector}. 
*/ @Slf4j public class GarbageCollectorTests extends ThreadPooledTestSuite { public static final int CONTAINER_ID = 42; + public static final long TXN_ID = 123; protected static final Duration TIMEOUT = Duration.ofSeconds(3000); @Rule public Timeout globalTimeout = Timeout.seconds(TIMEOUT.getSeconds()); + @Override @Before public void before() throws Exception { super.before(); } + @Override @After public void after() throws Exception { super.after(); } + @Override protected int getThreadPoolSize() { return 5; } @@ -95,9 +107,9 @@ public void testInitializationInvalidArgs() throws Exception { metadataStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); - garbageCollector.initialize(); + garbageCollector.initialize(new InMemoryTaskQueueManager()).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); } @@ -119,8 +131,8 @@ public void testInitialization() throws Exception { ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService(), System::currentTimeMillis, - CompletableFuture::new); - garbageCollector.initialize(); + d -> CompletableFuture.completedFuture(null)); + garbageCollector.initialize(new InMemoryTaskQueueManager()).join(); AssertExtensions.assertThrows("Should not allow null chunkStorage", () -> { @@ -130,7 +142,7 @@ public void testInitialization() throws Exception { ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService(), System::currentTimeMillis, - CompletableFuture::new); + d -> CompletableFuture.completedFuture(null)); }, ex -> ex instanceof NullPointerException); @@ -142,7 +154,7 @@ public void testInitialization() throws Exception { ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService(), System::currentTimeMillis, - CompletableFuture::new); + d -> CompletableFuture.completedFuture(null)); }, ex -> ex instanceof NullPointerException); @@ -154,7 +166,7 @@ public void testInitialization() throws Exception { null, executorService(), System::currentTimeMillis, - CompletableFuture::new); + d -> CompletableFuture.completedFuture(null)); }, ex -> ex instanceof NullPointerException); @@ -166,7 +178,7 @@ public void testInitialization() throws Exception { ChunkedSegmentStorageConfig.DEFAULT_CONFIG, null, System::currentTimeMillis, - CompletableFuture::new); + d -> CompletableFuture.completedFuture(null)); }, ex -> ex instanceof NullPointerException); AssertExtensions.assertThrows("Should not allow null currentTimeSupplier", @@ -177,7 +189,7 @@ public void testInitialization() throws Exception { ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService(), null, - CompletableFuture::new); + d -> CompletableFuture.completedFuture(null)); }, ex -> ex instanceof NullPointerException); AssertExtensions.assertThrows("Should not allow null delaySupplier", @@ -208,7 +220,8 @@ public void testActiveChunk() throws Exception { insertChunk(chunkStorage, "activeChunk", dataSize); insertChunkMetadata(metadataStore, "activeChunk", dataSize, 1); - val manualDelay = new ManualDelay(2); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, @@ -220,33 +233,33 @@ public void testActiveChunk() throws Exception { .build(), executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + 
garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); // Add some garbage - garbageCollector.addToGarbage(Collections.singleton("activeChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Collections.singleton("activeChunk")).join(); // Validate state before - Assert.assertEquals(1, garbageCollector.getGarbageChunks().size()); + Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - Assert.assertEquals("activeChunk", garbageCollector.getGarbageChunks().peek().getName()); + Assert.assertEquals("activeChunk", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. - manualDelay.waitForInvocation(1); + garbageCollector.processBatch(list).join(); // Validate state after Assert.assertEquals(0, garbageCollector.getQueueSize().get()); Assert.assertTrue(chunkStorage.exists("activeChunk").get()); Assert.assertNotNull(getChunkMetadata(metadataStore, "activeChunk")); + Assert.assertTrue(chunkStorage.exists("activeChunk").join()); } /** @@ -264,7 +277,8 @@ public void testDeletedChunk() throws Exception { insertChunk(chunkStorage, "deletedChunk", dataSize); insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); - val manualDelay = new ManualDelay(2); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, @@ -276,27 +290,26 @@ public void testDeletedChunk() throws Exception { .build(), executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); // Add some garbage - garbageCollector.addToGarbage(Collections.singleton("deletedChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Collections.singleton("deletedChunk")).join(); // Validate state before Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - Assert.assertEquals("deletedChunk", garbageCollector.getGarbageChunks().peek().getName()); + Assert.assertEquals("deletedChunk", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. 
- // Don't complete the delay. - manualDelay.waitForInvocation(1); + garbageCollector.processBatch(list).join(); // Validate state after Assert.assertEquals(0, garbageCollector.getQueueSize().get()); @@ -319,8 +332,8 @@ public void testDeletedChunkMissingFromStorage() throws Exception { int dataSize = 1; insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); - val manualDelay = new ManualDelay(2); - + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, chunkStorage, @@ -331,27 +344,27 @@ public void testDeletedChunkMissingFromStorage() throws Exception { .build(), executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + Assert.assertFalse(chunkStorage.exists("deletedChunk").join()); // Add some garbage - garbageCollector.addToGarbage(Collections.singleton("deletedChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Collections.singleton("deletedChunk")).join(); // Validate state before Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - Assert.assertEquals("deletedChunk", garbageCollector.getGarbageChunks().peek().getName()); + Assert.assertEquals("deletedChunk", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. 
- manualDelay.waitForInvocation(1); + garbageCollector.processBatch(list).join(); // Validate state after Assert.assertEquals(0, garbageCollector.getQueueSize().get()); @@ -371,7 +384,8 @@ public void testNonExistentChunk() throws Exception { ChunkMetadataStore metadataStore = getMetadataStore(); int containerId = CONTAINER_ID; - val manualDelay = new ManualDelay(2); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, @@ -383,37 +397,37 @@ public void testNonExistentChunk() throws Exception { .build(), executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + Assert.assertFalse(chunkStorage.exists("nonExistingChunk").join()); // Add some garbage - garbageCollector.addToGarbage(Collections.singleton("nonExistingChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Collections.singleton("nonExistingChunk")).join(); // Validate state before Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - Assert.assertEquals("nonExistingChunk", garbageCollector.getGarbageChunks().peek().getName()); + Assert.assertEquals("nonExistingChunk", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. - manualDelay.waitForInvocation(1); + garbageCollector.processBatch(list).join(); // Validate state after Assert.assertEquals(0, garbageCollector.getQueueSize().get()); } /** - * Test for a mix bag of chunks. + * Test for chunk that is marked active but added as garbage. 
*/ @Test - public void testMixedChunk() throws Exception { + public void testNewChunkOnSuccessful() throws Exception { @Cleanup ChunkStorage chunkStorage = getChunkStorage(); @Cleanup @@ -421,13 +435,64 @@ public void testMixedChunk() throws Exception { int containerId = CONTAINER_ID; int dataSize = 1; - insertChunk(chunkStorage, "deletedChunk", dataSize); - insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); - insertChunk(chunkStorage, "activeChunk", dataSize); - insertChunkMetadata(metadataStore, "activeChunk", dataSize, 1); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); - val manualDelay = new ManualDelay(2); + @Cleanup + GarbageCollector garbageCollector = new GarbageCollector(containerId, + chunkStorage, + metadataStore, + ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .garbageCollectionDelay(Duration.ofMillis(1)) + .garbageCollectionSleep(Duration.ofMillis(1)) + .build(), + executorService(), + System::currentTimeMillis, + noDelay); + + // Now actually start run + garbageCollector.initialize(testTaskQueue).join(); + + Assert.assertNotNull(garbageCollector.getTaskQueue()); + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + + // Add some garbage + insertChunk(chunkStorage, "newChunk", dataSize); + garbageCollector.trackNewChunk(TXN_ID, "newChunk").join(); + insertChunkMetadata(metadataStore, "newChunk", dataSize, 1); + + // Validate state before + Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + Assert.assertEquals("newChunk", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); + + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + + garbageCollector.processBatch(list).join(); + + // Validate state after + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + Assert.assertTrue(chunkStorage.exists("newChunk").get()); + Assert.assertNotNull(getChunkMetadata(metadataStore, "newChunk")); + } + + /** + * Test for chunk that does not exist in metadata but added as garbage. 
+     */
+    @Test
+    public void testNewChunkOnFailure() throws Exception {
+        @Cleanup
+        ChunkStorage chunkStorage = getChunkStorage();
+        @Cleanup
+        ChunkMetadataStore metadataStore = getMetadataStore();
+        int containerId = CONTAINER_ID;
+
+        Function<Duration, CompletableFuture<Void>> noDelay = d -> CompletableFuture.completedFuture(null);
+        val testTaskQueue = new InMemoryTaskQueueManager();
+        int dataSize = 1;

         @Cleanup
         GarbageCollector garbageCollector = new GarbageCollector(containerId,
@@ -439,38 +504,40 @@ public void testMixedChunk() throws Exception {
                 .build(),
                 executorService(),
                 System::currentTimeMillis,
-                manualDelay);
+                noDelay);

         // Now actually start run
-        garbageCollector.initialize();
+        garbageCollector.initialize(testTaskQueue).join();

-        Assert.assertNotNull(garbageCollector.getGarbageChunks());
+        Assert.assertNotNull(garbageCollector.getTaskQueue());
         Assert.assertEquals(0, garbageCollector.getQueueSize().get());

         // Add some garbage
-        garbageCollector.addToGarbage(Arrays.asList("activeChunk", "nonExistingChunk", "deletedChunk"));
+        insertChunk(chunkStorage, "newChunk", dataSize);
+        garbageCollector.trackNewChunk(TXN_ID, "newChunk").join();

         // Validate state before
-        assertQueueEquals(garbageCollector, new String[]{"activeChunk", "nonExistingChunk", "deletedChunk"});
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());
+        Assert.assertEquals("newChunk", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName());
+        Assert.assertNull(getChunkMetadata(metadataStore, "newChunk"));
+        Assert.assertTrue(chunkStorage.exists("newChunk").get());

-        // Return first delay - this will "unpause" the first iteration.
-        manualDelay.completeDelay(0);
+        val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1);
+        Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());

-        // Wait for "Delay" to be invoked again. This indicates that first iteration was complete.
-        // Don't complete the delay.
-        manualDelay.waitForInvocation(1);
+        garbageCollector.processBatch(list).join();

         // Validate state after
         Assert.assertEquals(0, garbageCollector.getQueueSize().get());
-        Assert.assertFalse(chunkStorage.exists("deletedChunk").get());
-        Assert.assertTrue(chunkStorage.exists("activeChunk").get());
+        Assert.assertFalse(chunkStorage.exists("newChunk").get());
     }

     /**
-     * Test setSuspended.
+     * Test for a mixed bag of chunks.
*/ @Test - public void testSuspended() throws Exception { + public void testMixedChunk() throws Exception { @Cleanup ChunkStorage chunkStorage = getChunkStorage(); @Cleanup @@ -484,7 +551,8 @@ public void testSuspended() throws Exception { insertChunk(chunkStorage, "activeChunk", dataSize); insertChunkMetadata(metadataStore, "activeChunk", dataSize, 1); - val manualDelay = new ManualDelay(3); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, @@ -496,38 +564,25 @@ public void testSuspended() throws Exception { .build(), executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); - garbageCollector.setSuspended(true); - // Add some garbage - garbageCollector.addToGarbage(Arrays.asList("activeChunk", "nonExistingChunk", "deletedChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Arrays.asList("activeChunk", "nonExistingChunk", "deletedChunk")).join(); // Validate state before - assertQueueEquals(garbageCollector, new String[]{"activeChunk", "nonExistingChunk", "deletedChunk"}); + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"activeChunk", "nonExistingChunk", "deletedChunk"}); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 3); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(3, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. - manualDelay.waitForInvocation(1); - - assertQueueEquals(garbageCollector, new String[]{"activeChunk", "nonExistingChunk", "deletedChunk"}); - - garbageCollector.setSuspended(false); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(1); - - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. 
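// ---------------------------------------------------------------------------
// A minimal sketch of the per-chunk outcomes the mixed-bag test being
// rewritten here (and the single-chunk tests above) assert. This is NOT the
// actual GarbageCollector code; deleteBlob/deleteMetadata are illustrative
// placeholders, but the blob-before-metadata ordering matches what the
// metadata-exception test below verifies.
// ---------------------------------------------------------------------------
// CompletableFuture<Void> collectOne(ChunkMetadata metadata, String chunkName) {
//     if (metadata == null) {
//         // No metadata row: nothing references the chunk; drop the blob if present.
//         return deleteBlob(chunkName);
//     }
//     if (metadata.isActive()) {
//         // Active chunks are never collected; the task just completes.
//         return CompletableFuture.completedFuture(null);
//     }
//     // Inactive chunks: delete the blob first, then the metadata row, so a
//     // crash in between leaves only an orphaned metadata entry, not a leaked blob.
//     return deleteBlob(chunkName).thenCompose(v -> deleteMetadata(chunkName));
// }
// ---------------------------------------------------------------------------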
- manualDelay.waitForInvocation(2); + garbageCollector.processBatch(list).join(); // Validate state after Assert.assertEquals(0, garbageCollector.getQueueSize().get()); @@ -550,7 +605,8 @@ public void testIOException() throws Exception { insertChunk(chunkStorage, "deletedChunk", dataSize); insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); - val manualDelay = new ManualDelay(2); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); chunkStorage.setReadOnly(chunkStorage.openWrite("deletedChunk").get(), true); @@ -564,36 +620,36 @@ public void testIOException() throws Exception { .build(), executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); // Add some garbage - garbageCollector.addToGarbage(Arrays.asList("deletedChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Arrays.asList("deletedChunk")).join(); // Validate state before - assertQueueEquals(garbageCollector, new String[]{"deletedChunk"}); + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"deletedChunk"}); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. - manualDelay.waitForInvocation(1); + garbageCollector.processBatch(list).join(); // Validate state after - assertQueueEquals(garbageCollector, new String[]{"deletedChunk"}); + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"deletedChunk"}); + Assert.assertTrue(chunkStorage.exists("deletedChunk").get()); } /** - * Test for metadata exception. + * Test for ChunkNotFound exception. 
*/ @Test - public void testMetadataException() throws Exception { + public void testChunkNotFound() throws Exception { @Cleanup ChunkStorage chunkStorage = getChunkStorage(); @Cleanup @@ -601,12 +657,11 @@ public void testMetadataException() throws Exception { int containerId = CONTAINER_ID; int dataSize = 1; - insertChunk(chunkStorage, "deletedChunk", dataSize); - insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); + insertChunkMetadata(metadataStore, "missingChunk", dataSize, 0); + Assert.assertFalse(chunkStorage.exists("missingChunk").get()); - val manualDelay = new ManualDelay(2); - - metadataStore.markFenced(); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, @@ -618,36 +673,37 @@ public void testMetadataException() throws Exception { .build(), executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); // Add some garbage - garbageCollector.addToGarbage(Arrays.asList("deletedChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Arrays.asList("missingChunk")).join(); // Validate state before - assertQueueEquals(garbageCollector, new String[]{"deletedChunk"}); + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"missingChunk"}); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. - manualDelay.waitForInvocation(1); + garbageCollector.processBatch(list).join(); // Validate state after - assertQueueEquals(garbageCollector, new String[]{"deletedChunk"}); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + } /** - * Test that loop continues after exception. + * Test for metadata exception. 
*/ @Test - public void testDelayException() throws Exception { + public void testMetadataException() throws Exception { @Cleanup ChunkStorage chunkStorage = getChunkStorage(); @Cleanup @@ -658,9 +714,11 @@ public void testDelayException() throws Exception { insertChunk(chunkStorage, "deletedChunk", dataSize); insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); - val manualDelay = new ManualDelay(2); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); + + metadataStore.markFenced(); - val thrown = new AtomicBoolean(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, chunkStorage, @@ -671,105 +729,178 @@ public void testDelayException() throws Exception { .build(), executorService(), System::currentTimeMillis, - () -> { - if (thrown.compareAndSet(false, true)) { - return Futures.failedFuture(new Exception("testException")); - } else { - return manualDelay.get(); - } - }); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); // Add some garbage - garbageCollector.addToGarbage(Arrays.asList("deletedChunk")); + garbageCollector.addChunksToGarbage(TXN_ID, Arrays.asList("deletedChunk")).join(); // Validate state before - assertQueueEquals(garbageCollector, new String[]{"deletedChunk"}); + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"deletedChunk"}); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. - manualDelay.waitForInvocation(1); + garbageCollector.processBatch(list).join(); - // Validate results - Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + // Validate state after + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"deletedChunk"}); + // The chunk is deleted, but we have failure while deleting metadata. Assert.assertFalse(chunkStorage.exists("deletedChunk").get()); - Assert.assertNull(getChunkMetadata(metadataStore, "deletedChunk")); } /** - * Test when queue is full. + * Test for segment that is marked inactive and added as garbage. 
     */
     @Test
-    public void testQueueFull() throws Exception {
+    public void testDeletedSegment() throws Exception {
         @Cleanup
         ChunkStorage chunkStorage = getChunkStorage();
         @Cleanup
         ChunkMetadataStore metadataStore = getMetadataStore();
         int containerId = CONTAINER_ID;
-        int dataSize = 1;
+        Function<Duration, CompletableFuture<Void>> noDelay = d -> CompletableFuture.completedFuture(null);
+        val testTaskQueue = new InMemoryTaskQueueManager();

-        insertChunk(chunkStorage, "activeChunk", dataSize);
-        insertChunkMetadata(metadataStore, "activeChunk", dataSize, 1);
+        val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
+                .garbageCollectionDelay(Duration.ofMillis(1))
+                .garbageCollectionSleep(Duration.ofMillis(1))
+                .build();
+        @Cleanup
+        GarbageCollector garbageCollector = new GarbageCollector(containerId,
+                chunkStorage,
+                metadataStore,
+                config,
+                executorService(),
+                System::currentTimeMillis,
+                noDelay);

-        insertChunk(chunkStorage, "deletedChunk", dataSize);
-        insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0);
+        // Now actually start run
+        garbageCollector.initialize(testTaskQueue).join();

-        val manualDelay = new ManualDelay(2);
+        Assert.assertNotNull(garbageCollector.getTaskQueue());
+        Assert.assertEquals(0, garbageCollector.getQueueSize().get());
+
+        insertSegment(metadataStore, chunkStorage, config, "testSegment", 10, 1,
+                new long[] {1, 2, 3, 4}, false, 0);
+        val chunkNames = TestUtils.getChunkNameList(metadataStore, "testSegment");
+        // Add some garbage
+        garbageCollector.addSegmentToGarbage(TXN_ID, "testSegment").join();
+
+        // Validate state before
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());
+        Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName());
+
+        garbageCollector.processBatch(testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1)).join();
+
+        // Validate state after
+        Assert.assertEquals(4, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals(4, garbageCollector.getQueueSize().get());
+
+        Assert.assertNull(getSegmentMetadata(metadataStore, "testSegment"));
+        garbageCollector.processBatch(testTaskQueue.drain(garbageCollector.getTaskQueueName(), 10)).join();
+        Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals(0, garbageCollector.getQueueSize().get());
+        chunkNames.stream().forEach( chunkName -> Assert.assertFalse(chunkName + " should not exist", chunkStorage.exists(chunkName).join()));
+    }
+
+    /**
+     * Test for segment that has lots of metadata.
+     */
+    @Test
+    public void testLargeSegment() throws Exception {
+        int numChunks = 1000;
+        int chunkSize = 10000;
+        int maxBatchSize = 100;
+        int indexBlockSize = 100;
+        testSegmentDelete(numChunks, chunkSize, maxBatchSize, indexBlockSize);
+    }
+
+    //
+    // Very useful test, but takes a couple of seconds.
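// ---------------------------------------------------------------------------
// Rough arithmetic for the two parameterizations, assuming (as the config name
// suggests) that garbageCollectionTransactionBatchSize caps how many chunks
// one GC metadata transaction may touch when a segment is deleted:
//
//     testLargeSegment: 1000 chunks / batch of 100 = ~10 transactions
//     testHugeSegment:  1000 chunks / batch of 4   = ~250 transactions
//
// which is why the huge variant is useful but left disabled by default.
// ---------------------------------------------------------------------------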
+ //@Test(timeout = 180000) + public void testHugeSegment() throws Exception { + int numChunks = 1000; + int chunkSize = 10000; + int maxBatchSize = 4; + int indexBlockSize = 2; + testSegmentDelete(numChunks, chunkSize, maxBatchSize, indexBlockSize); + } + + private void testSegmentDelete(int numChunks, int chunkSize, int maxBatchSize, int indexBlockSize) throws Exception { + @Cleanup + ChunkStorage chunkStorage = getChunkStorage(); + @Cleanup + ChunkMetadataStore metadataStore = getMetadataStore(); + int containerId = CONTAINER_ID; + + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); + + val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .garbageCollectionDelay(Duration.ofMillis(1)) + .garbageCollectionSleep(Duration.ofMillis(1)) + .indexBlockSize(indexBlockSize) // Create huge number of block index entries + .garbageCollectionTransactionBatchSize(maxBatchSize) // Keep batch size very low. + .build(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, chunkStorage, metadataStore, - ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() - .garbageCollectionDelay(Duration.ofMillis(1)) - .garbageCollectionSleep(Duration.ofMillis(1)) - .garbageCollectionMaxQueueSize(2) - .build(), + config, executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); - + // Large number of chunks with large number of index entries + long[] chunks = new long[numChunks]; + Arrays.fill(chunks, chunkSize); + insertSegment(metadataStore, chunkStorage, config, "testSegment", 10, 1, + chunks, false, 0); + val chunkNames = TestUtils.getChunkNameList(metadataStore, "testSegment"); // Add some garbage - garbageCollector.addToGarbage(Arrays.asList("activeChunk", "nonExistingChunk", "deletedChunk")); + garbageCollector.addSegmentToGarbage(TXN_ID, "testSegment").join(); // Validate state before - assertQueueEquals(garbageCollector, new String[]{"activeChunk", "nonExistingChunk"}); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(0); + chunkNames.stream().forEach( chunkName -> Assert.assertTrue(chunkStorage.exists(chunkName).join())); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. 
- manualDelay.waitForInvocation(1); + garbageCollector.processBatch(testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1)).join(); // Validate state after + Assert.assertEquals(numChunks, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(numChunks, garbageCollector.getQueueSize().get()); + + Assert.assertNull(getSegmentMetadata(metadataStore, "testSegment")); + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, chunkNames.toArray(new String[numChunks])); + garbageCollector.processBatch(testTaskQueue.drain(garbageCollector.getTaskQueueName(), numChunks)).join(); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); - Assert.assertTrue(chunkStorage.exists("activeChunk").get()); - Assert.assertNotNull(getChunkMetadata(metadataStore, "activeChunk")); - Assert.assertNotNull(getChunkMetadata(metadataStore, "deletedChunk")); + chunkNames.stream().forEach( chunkName -> Assert.assertFalse(chunkName + " should not exist", chunkStorage.exists(chunkName).join())); } - /** - * Test for Max Attempts. + * Test for segment that is marked active and added as garbage. */ @Test - public void testMaxAttempts() throws Exception { + public void testActiveSegment() throws Exception { @Cleanup ChunkStorage chunkStorage = getChunkStorage(); @Cleanup @@ -777,66 +908,188 @@ public void testMaxAttempts() throws Exception { int containerId = CONTAINER_ID; int dataSize = 1; - insertChunk(chunkStorage, "deletedChunk", dataSize); - insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); + + val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .garbageCollectionDelay(Duration.ofMillis(1)) + .garbageCollectionSleep(Duration.ofMillis(1)) + .build(); + @Cleanup + GarbageCollector garbageCollector = new GarbageCollector(containerId, + chunkStorage, + metadataStore, + config, + executorService(), + System::currentTimeMillis, + noDelay); - val manualDelay = new ManualDelay(5); + // Now actually start run + garbageCollector.initialize(testTaskQueue).join(); - chunkStorage.setReadOnly(chunkStorage.openWrite("deletedChunk").get(), true); + Assert.assertNotNull(garbageCollector.getTaskQueue()); + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + + insertSegment(metadataStore, chunkStorage, config, "testSegment", 10, 1, + new long[] {1, 2, 3, 4}, false, 1); + val chunkNames = TestUtils.getChunkNameList(metadataStore, "testSegment"); + // Add some garbage + garbageCollector.addSegmentToGarbage(TXN_ID, "testSegment").join(); + + // Validate state before + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); + + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + + garbageCollector.processBatch(list).join(); + + // Validate state after + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + 
+        Assert.assertNotNull(getSegmentMetadata(metadataStore, "testSegment"));
+        chunkNames.stream().forEach( chunkName -> Assert.assertTrue(chunkStorage.exists(chunkName).join()));
+    }
+
+    /**
+     * Test for metadata exception while processing a segment marked as garbage.
+     */
+    @Test
+    public void testMetadataExceptionWithSegment() throws Exception {
+        @Cleanup
+        ChunkStorage chunkStorage = getChunkStorage();
+        @Cleanup
+        ChunkMetadataStore metadataStore = spy(getMetadataStore());
+        int containerId = CONTAINER_ID;
+
+        int dataSize = 1;
+        Function<Duration, CompletableFuture<Void>> noDelay = d -> CompletableFuture.completedFuture(null);
+        val testTaskQueue = new InMemoryTaskQueueManager();
+
+        val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
+                .garbageCollectionDelay(Duration.ofMillis(1))
+                .garbageCollectionSleep(Duration.ofMillis(1))
+                .build();
         @Cleanup
         GarbageCollector garbageCollector = new GarbageCollector(containerId,
                 chunkStorage,
                 metadataStore,
-                ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
-                        .garbageCollectionDelay(Duration.ofMillis(1))
-                        .garbageCollectionSleep(Duration.ofMillis(1))
-                        .garbageCollectionMaxAttempts(3)
-                        .build(),
+                config,
                 executorService(),
                 System::currentTimeMillis,
-                manualDelay);
+                noDelay);

         // Now actually start run
-        garbageCollector.initialize();
+        garbageCollector.initialize(testTaskQueue).join();

-        Assert.assertNotNull(garbageCollector.getGarbageChunks());
+        Assert.assertNotNull(garbageCollector.getTaskQueue());
         Assert.assertEquals(0, garbageCollector.getQueueSize().get());

+        insertSegment(metadataStore, chunkStorage, config, "testSegment", 10, 1,
+                new long[] {1, 2, 3, 4}, false, 0);
+        val chunkNames = TestUtils.getChunkNameList(metadataStore, "testSegment");
+
         // Add some garbage
-        garbageCollector.addToGarbage(Arrays.asList("deletedChunk"));
+        garbageCollector.addSegmentToGarbage(TXN_ID, "testSegment").join();

         // Validate state before
-        assertQueueEquals(garbageCollector, new String[]{"deletedChunk"});
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());
+        Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName());

-        for (int i = 0; i < 3; i++) {
-            // Return first delay - this will "unpause" the first iteration.
-            manualDelay.completeDelay(i);
+        val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1);
+        Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());
+        // Step 2: Inject fault.
+        CompletableFuture<Void> f = new CompletableFuture<>();
+        f.completeExceptionally(new StorageMetadataException("Test Exception"));
+        doReturn(f).when(metadataStore).commit(any());

-            // Wait for "Delay" to be invoked again. This indicates that first iteration was complete.
-            // Don't complete the delay.
-            manualDelay.waitForInvocation(i + 1);
+        garbageCollector.processBatch(list).join();

-            // Validate state after
-            assertQueueEquals(garbageCollector, new String[]{"deletedChunk"});
-        }
+        // Validate state after
+        Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());

-        // Return first delay - this will "unpause" the first iteration.
-        manualDelay.completeDelay(3);
+        Assert.assertNotNull(getSegmentMetadata(metadataStore, "testSegment"));
+        chunkNames.stream().forEach( chunkName -> Assert.assertTrue(chunkStorage.exists(chunkName).join()));
+    }

-        // Wait for "Delay" to be invoked again. This indicates that first iteration was complete.
-        // Don't complete the delay.
-        manualDelay.waitForInvocation(4);
+    /**
+     * Test for a segment whose metadata was already partially updated in a previous attempt.
+     */
+    @Test
+    public void testMetadataExceptionForSegmentPartialMetadataUpdate() throws Exception {
+        @Cleanup
+        ChunkStorage chunkStorage = getChunkStorage();
+        @Cleanup
+        ChunkMetadataStore metadataStore = spy(getMetadataStore());
+        int containerId = CONTAINER_ID;
+
+        int dataSize = 1;
+        Function<Duration, CompletableFuture<Void>> noDelay = d -> CompletableFuture.completedFuture(null);
+        val testTaskQueue = new InMemoryTaskQueueManager();
+
+        val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
+                .garbageCollectionDelay(Duration.ofMillis(1))
+                .garbageCollectionSleep(Duration.ofMillis(1))
+                .build();
+        @Cleanup
+        GarbageCollector garbageCollector = new GarbageCollector(containerId,
+                chunkStorage,
+                metadataStore,
+                config,
+                executorService(),
+                System::currentTimeMillis,
+                noDelay);
+
+        // Now actually start run
+        garbageCollector.initialize(testTaskQueue).join();
+
+        Assert.assertNotNull(garbageCollector.getTaskQueue());
+        Assert.assertEquals(0, garbageCollector.getQueueSize().get());
+
+        insertSegment(metadataStore, chunkStorage, config, "testSegment", 10, 1,
+                new long[] {1, 2, 3, 4}, false, 0);
+        val chunkNames = TestUtils.getChunkNameList(metadataStore, "testSegment");
+
+        // Simulate partial update of metadata.
+        @Cleanup
+        val txn = metadataStore.beginTransaction(false, "testSegment");
+        val metadata = (ChunkMetadata) txn.get(chunkNames.stream().findFirst().get()).join();
+        metadata.setActive(false);
+        txn.update(metadata);
+        txn.commit().join();
+
+        // Add some garbage
+        garbageCollector.addSegmentToGarbage(TXN_ID, "testSegment").join();
+
+        // Validate state before
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());
+        Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName());
+
+        val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1);
+        Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals(1, garbageCollector.getQueueSize().get());
+        garbageCollector.processBatch(list).join();

         // Validate state after
+        Assert.assertEquals(4, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
+        Assert.assertEquals(4, garbageCollector.getQueueSize().get());
+
+        garbageCollector.processBatch(testTaskQueue.drain(garbageCollector.getTaskQueueName(), 4)).join();
+        Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size());
         Assert.assertEquals(0, garbageCollector.getQueueSize().get());
+        chunkNames.stream().forEach( chunkName -> Assert.assertFalse(chunkName + " should not exist", chunkStorage.exists(chunkName).join()));
     }

     /**
-     * Test for throttling.
+     * Test for segment that does not exist and added as garbage.
*/ @Test - public void testMaxConcurrency() throws Exception { + public void testNonExistentSegment() throws Exception { @Cleanup ChunkStorage chunkStorage = getChunkStorage(); @Cleanup @@ -844,75 +1097,53 @@ public void testMaxConcurrency() throws Exception { int containerId = CONTAINER_ID; int dataSize = 1; - ArrayList expected = new ArrayList<>(); - for (int i = 0; i < 10; i++) { - val chunkName = "chunk" + i; - insertChunk(chunkStorage, chunkName, dataSize); - insertChunkMetadata(metadataStore, chunkName, dataSize, 0); - expected.add(chunkName); - } - - val manualDelay = new ManualDelay(6); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); + val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .garbageCollectionDelay(Duration.ofMillis(1)) + .garbageCollectionSleep(Duration.ofMillis(1)) + .build(); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, chunkStorage, metadataStore, - ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() - .garbageCollectionDelay(Duration.ofMillis(1)) - .garbageCollectionSleep(Duration.ofMillis(1)) - .garbageCollectionMaxConcurrency(2) - .build(), + config, executorService(), System::currentTimeMillis, - manualDelay); + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); // Add some garbage - for (int i = 0; i < expected.size(); i++) { - garbageCollector.addToGarbage(expected.get(i), 1000 * i, 0); - } + Assert.assertNull(getSegmentMetadata(metadataStore, "testSegment")); + garbageCollector.addSegmentToGarbage(TXN_ID, "testSegment").join(); // Validate state before - assertQueueEquals(garbageCollector, toArray(expected)); - - ArrayList deletedChunks = new ArrayList<>(); - int iterations = 5; - for (int i = 0; i < iterations; i++) { - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(i); - - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. 
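// ---------------------------------------------------------------------------
// The unit of work on these queues is GarbageCollector.TaskInfo; its shape is
// confirmed by the serialization tests later in this file: the originating
// transaction id (TXN_ID in these tests), a numeric task type, an attempt
// counter used by the retry logic, and the chunk or segment name. For example:
//
// GarbageCollector.TaskInfo task = GarbageCollector.TaskInfo.builder()
//         .transactionId(TXN_ID)
//         .taskType(2)   // the meaning of the numeric tags is not shown in this diff
//         .attempts(0)
//         .name("testSegment")
//         .build();
// ---------------------------------------------------------------------------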
- manualDelay.waitForInvocation(i + 1); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); - // Validate state after - Assert.assertEquals(expected.size() - 2, garbageCollector.getQueueSize().get()); - // remove two elements - deletedChunks.add(expected.remove(0)); - deletedChunks.add(expected.remove(0)); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - assertQueueEquals(garbageCollector, toArray(expected)); + garbageCollector.processBatch(list).join(); - for (val deleted : deletedChunks) { - Assert.assertFalse(chunkStorage.exists("deleted").get()); - } - for (val remaining : expected) { - Assert.assertTrue(chunkStorage.exists(remaining).get()); - } - } + // Validate state after + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + Assert.assertNull(getSegmentMetadata(metadataStore, "testSegment")); } /** - * Test that chunks are deleted in chronological order. + * Test for Max Attempts. */ @Test - public void testOrderedDeletion() throws Exception { + public void testMaxAttempts() throws Exception { @Cleanup ChunkStorage chunkStorage = getChunkStorage(); @Cleanup @@ -920,19 +1151,13 @@ public void testOrderedDeletion() throws Exception { int containerId = CONTAINER_ID; int dataSize = 1; - ArrayList expected = new ArrayList<>(); - for (int i = 0; i < 10; i++) { - val chunkName = "chunk" + i; - insertChunk(chunkStorage, chunkName, dataSize); - insertChunkMetadata(metadataStore, chunkName, dataSize, 0); - expected.add(chunkName); - } + insertChunk(chunkStorage, "deletedChunk", dataSize); + insertChunkMetadata(metadataStore, "deletedChunk", dataSize, 0); - val manualDelay = new ManualDelay(11); + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); - val baseTime = System.currentTimeMillis(); - val currentIteration = new AtomicInteger(); - final Supplier timeSupplier = () -> baseTime + 10000 * currentIteration.get(); + chunkStorage.setReadOnly(chunkStorage.openWrite("deletedChunk").get(), true); @Cleanup GarbageCollector garbageCollector = new GarbageCollector(containerId, @@ -941,64 +1166,165 @@ public void testOrderedDeletion() throws Exception { ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .garbageCollectionDelay(Duration.ofMillis(1)) .garbageCollectionSleep(Duration.ofMillis(1)) + .garbageCollectionMaxAttempts(3) .build(), executorService(), - timeSupplier, - manualDelay); + System::currentTimeMillis, + noDelay); // Now actually start run - garbageCollector.initialize(); + garbageCollector.initialize(testTaskQueue).join(); - Assert.assertNotNull(garbageCollector.getGarbageChunks()); + Assert.assertNotNull(garbageCollector.getTaskQueue()); Assert.assertEquals(0, garbageCollector.getQueueSize().get()); // Add some garbage - for (int i = 0; i < expected.size(); i++) { - garbageCollector.addToGarbage(expected.get(i), baseTime + 10000 * (i + 1), 0); + garbageCollector.addChunksToGarbage(TXN_ID, Arrays.asList("deletedChunk")).join(); + + // Validate state before + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"deletedChunk"}); + 
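// ---------------------------------------------------------------------------
// A sketch of the retry rule the loop below exercises. Not the actual
// implementation: addTask(), toBuilder(), and maxAttempts are assumed names,
// but the two queues, the attempt counter, and the limit of 3 come from this
// patch's assertions.
//
// if (task.getAttempts() + 1 < maxAttempts) {
//     // A failed deletion goes back on the main queue with one more attempt...
//     taskQueue.addTask(garbageCollector.getTaskQueueName(),
//             task.toBuilder().attempts(task.getAttempts() + 1).build());
// } else {
//     // ...until the budget is spent; then it is parked on the failed-tasks
//     // queue for offline inspection instead of retrying forever.
//     taskQueue.addTask(garbageCollector.getFailedQueueName(), task);
// }
// ---------------------------------------------------------------------------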
+ for (int i = 0; i < 3; i++) { + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + + garbageCollector.processBatch(list).join(); + // Validate state after + assertQueueEquals(garbageCollector.getTaskQueueName(), testTaskQueue, new String[]{"deletedChunk"}); } + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + + garbageCollector.processBatch(list).join(); + + // Validate state after + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getFailedQueueName()).size()); + Assert.assertEquals("deletedChunk", testTaskQueue.getTaskQueueMap().get(garbageCollector.getFailedQueueName()).peek().getName()); + } + + /** + * Test for Max Attempts. + */ + @Test + public void testMaxAttemptsWithSegment() throws Exception { + @Cleanup + ChunkStorage chunkStorage = getChunkStorage(); + @Cleanup + ChunkMetadataStore metadataStore = spy(getMetadataStore()); + int containerId = CONTAINER_ID; + + int dataSize = 1; + Function> noDelay = d -> CompletableFuture.completedFuture(null); + val testTaskQueue = new InMemoryTaskQueueManager(); + + val config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .garbageCollectionDelay(Duration.ofMillis(1)) + .garbageCollectionSleep(Duration.ofMillis(1)) + .build(); + @Cleanup + GarbageCollector garbageCollector = new GarbageCollector(containerId, + chunkStorage, + metadataStore, + config, + executorService(), + System::currentTimeMillis, + noDelay); + + // Now actually start run + garbageCollector.initialize(testTaskQueue).join(); + + Assert.assertNotNull(garbageCollector.getTaskQueue()); + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + + insertSegment(metadataStore, chunkStorage, config, "testSegment", 10, 1, + new long[] {}, false, 0); + + // Add some garbage + garbageCollector.addSegmentToGarbage(TXN_ID, "testSegment").join(); + // Validate state before - assertQueueEquals(garbageCollector, toArray(expected)); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).peek().getName()); - ArrayList deletedChunks = new ArrayList<>(); - int iterations = 10; - for (int i = 0; i < iterations; i++) { - // move "timer ahead" - currentIteration.incrementAndGet(); - // Return first delay - this will "unpause" the first iteration. - manualDelay.completeDelay(i); + for (int i = 0; i < 3; i++) { + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + // Step 2: Inject fault. + CompletableFuture f = new CompletableFuture(); + f.completeExceptionally(new StorageMetadataException("Test Exception")); + doReturn(f).when(metadataStore).commit(any()); - // Wait for "Delay" to be invoked again. This indicates that first iteration was complete. - // Don't complete the delay. 
- manualDelay.waitForInvocation(i + 1); + garbageCollector.processBatch(list).join(); // Validate state after - Assert.assertEquals(expected.size() - 1, garbageCollector.getQueueSize().get()); - // remove two elements - deletedChunks.add(expected.remove(0)); - - assertQueueEquals(garbageCollector, toArray(expected)); + Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); - for (val deleted : deletedChunks) { - Assert.assertFalse(chunkStorage.exists("deleted").get()); - } - for (val remaining : expected) { - Assert.assertTrue(chunkStorage.exists(remaining).get()); - } + Assert.assertNotNull(getSegmentMetadata(metadataStore, "testSegment")); } + + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1); + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, garbageCollector.getQueueSize().get()); + garbageCollector.processBatch(list).join(); + + // Validate state after + Assert.assertEquals(0, testTaskQueue.getTaskQueueMap().get(garbageCollector.getTaskQueueName()).size()); + Assert.assertEquals(1, testTaskQueue.getTaskQueueMap().get(garbageCollector.getFailedQueueName()).size()); + Assert.assertEquals("testSegment", testTaskQueue.getTaskQueueMap().get(garbageCollector.getFailedQueueName()).peek().getName()); + + Assert.assertEquals(0, garbageCollector.getQueueSize().get()); + + Assert.assertNotNull(getSegmentMetadata(metadataStore, "testSegment")); + } + + @Test + public void testSerialization() throws Exception { + val serializer = new GarbageCollector.TaskInfo.Serializer(); + GarbageCollector.TaskInfo obj1 = GarbageCollector.TaskInfo.builder() + .transactionId(1) + .taskType(2) + .attempts(3) + .name("name") + .build(); + val bytes = serializer.serialize(obj1); + val obj2 = serializer.deserialize(bytes); + + Assert.assertEquals(1, obj2.getTransactionId()); + Assert.assertEquals(2, obj2.getTaskType()); + Assert.assertEquals(3, obj2.getAttempts()); + Assert.assertEquals("name", obj2.getName()); } - private String[] toArray(ArrayList expected) { - return expected.toArray(new String[expected.size()]); + @Test + public void testSerializationWithBaseClass() throws Exception { + val serializer = new GarbageCollector.AbstractTaskInfo.AbstractTaskInfoSerializer(); + GarbageCollector.TaskInfo obj1 = GarbageCollector.TaskInfo.builder() + .transactionId(1) + .taskType(2) + .attempts(3) + .name("name") + .build(); + val bytes = serializer.serialize(obj1); + val obj2 = (GarbageCollector.TaskInfo) serializer.deserialize(bytes); + + Assert.assertEquals(1, obj2.getTransactionId()); + Assert.assertEquals(2, obj2.getTaskType()); + Assert.assertEquals(3, obj2.getAttempts()); + Assert.assertEquals("name", obj2.getName()); } - private void assertQueueEquals(GarbageCollector garbageCollector, String[] expected) { - Assert.assertEquals(expected.length, garbageCollector.getQueueSize().get()); - val queue = Arrays.stream(garbageCollector.getGarbageChunks().toArray(new Object[expected.length])) - .map(info -> ((GarbageCollector.GarbageChunkInfo) info).getName()).collect(Collectors.toSet()); - Assert.assertEquals(expected.length, queue.size()); + private void assertQueueEquals(String queueName, InMemoryTaskQueueManager queueManager, String[] expected) { + HashSet visited = new HashSet<>(); + val queue = queueManager.getTaskQueueMap().get(queueName).stream().peek(info -> 
visited.add(info.getName())).collect(Collectors.counting()); + Assert.assertEquals(expected.length, visited.size()); for (String chunk : expected) { - Assert.assertTrue(queue.contains(chunk)); + Assert.assertTrue(visited.contains(chunk)); } } @@ -1025,64 +1351,92 @@ private void insertChunkMetadata(ChunkMetadataStore metadataStore, String chunkN } } - private ChunkMetadata getChunkMetadata(ChunkMetadataStore metadataStore, String chunkName) throws Exception { - try (val txn = metadataStore.beginTransaction(true, chunkName)) { - return (ChunkMetadata) txn.get(chunkName).get(); - } - } + public SegmentMetadata insertSegment(ChunkMetadataStore metadataStore, + ChunkStorage chunkStorage, + ChunkedSegmentStorageConfig config, + String testSegmentName, + long maxRollingLength, + int ownerEpoch, + long[] chunkLengths, + boolean addIndexMetadata, + int status) throws Exception { + Preconditions.checkArgument(maxRollingLength > 0, "maxRollingLength"); + Preconditions.checkArgument(ownerEpoch > 0, "ownerEpoch"); + try (val txn = metadataStore.beginTransaction(false, new String[]{testSegmentName})) { + String firstChunk = null; + String lastChunk = null; + TreeMap index = new TreeMap<>(); + // Add chunks. + long length = 0; + long startOfLast = 0; + long startOffset = 0; + int chunkCount = 0; + for (int i = 0; i < chunkLengths.length; i++) { + String chunkName = testSegmentName + "_chunk_" + Integer.toString(i); + ChunkMetadata chunkMetadata = ChunkMetadata.builder() + .name(chunkName) + .length(chunkLengths[i]) + .nextChunk(i == chunkLengths.length - 1 ? null : testSegmentName + "_chunk_" + Integer.toString(i + 1)) + .build(); + chunkMetadata.setActive(true); + index.put(startOffset, chunkName); + startOffset += chunkLengths[i]; + length += chunkLengths[i]; + txn.create(chunkMetadata); + + insertChunk(chunkStorage, chunkName, Math.toIntExact(chunkLengths[i])); + chunkCount++; + } - /** - * A utility test class that helps synchronize test code with iterations. - */ - static class ManualDelay implements Supplier> { - /** - * List of futures to return. - */ - @Getter - final ArrayList> toReturn = new ArrayList<>(); - - /** - * List of futures to track each invocation. - */ - @Getter - final ArrayList> invocations = new ArrayList<>(); - - /** - * Current index. - */ - final AtomicInteger currentIndex = new AtomicInteger(); - - /** - * Constructor. - * - * @param count Number of iterations to run. - */ - ManualDelay(int count) { - for (int i = 0; i < count; i++) { - toReturn.add(new CompletableFuture<>()); - invocations.add(new CompletableFuture<>()); + // Fix the first and last + if (chunkLengths.length > 0) { + firstChunk = testSegmentName + "_chunk_0"; + lastChunk = testSegmentName + "_chunk_" + Integer.toString(chunkLengths.length - 1); + startOfLast = length - chunkLengths[chunkLengths.length - 1]; } - } - /** - * Call back method which is called at the start of each request for delay. - * @return - */ - @Override - synchronized public CompletableFuture get() { - log.debug("Delay Invoked count = {}", currentIndex.get()); - // Trigger that call was made. - invocations.get(currentIndex.get()).complete(null); - // return next "delay" future. 
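// ---------------------------------------------------------------------------
// Shape of the metadata built by the insertSegment helper added around here:
// chunk rows form a singly linked list via nextChunk, and the segment row
// tracks both ends plus the aggregate length:
//
//     chunk_0 -> chunk_1 -> ... -> chunk_{n-1} -> null
//     segment: firstChunk = chunk_0, lastChunk = chunk_{n-1},
//              length = sum(chunkLengths),
//              lastChunkStartOffset = length - chunkLengths[n-1]
// ---------------------------------------------------------------------------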
- return toReturn.get(currentIndex.getAndIncrement()); + // Finally save + SegmentMetadata segmentMetadata = SegmentMetadata.builder() + .maxRollinglength(maxRollingLength) + .name(testSegmentName) + .ownerEpoch(ownerEpoch) + .firstChunk(firstChunk) + .lastChunk(lastChunk) + .length(length) + .lastChunkStartOffset(startOfLast) + .build(); + segmentMetadata.setStatus(status); + segmentMetadata.setChunkCount(chunkCount); + segmentMetadata.checkInvariants(); + txn.create(segmentMetadata); + + if (addIndexMetadata) { + for (long blockStartOffset = 0; blockStartOffset < segmentMetadata.getLength(); blockStartOffset += config.getIndexBlockSize()) { + val floor = index.floorEntry(blockStartOffset); + txn.create(ReadIndexBlockMetadata.builder() + .name(NameUtils.getSegmentReadIndexBlockName(segmentMetadata.getName(), blockStartOffset)) + .startOffset(floor.getKey()) + .chunkName(floor.getValue()) + .status(StatusFlags.ACTIVE) + .build()); + } + } + + txn.commit().join(); + return segmentMetadata; } + } - void completeDelay(int i) { - toReturn.get(i).complete(null); + private ChunkMetadata getChunkMetadata(ChunkMetadataStore metadataStore, String chunkName) throws Exception { + try (val txn = metadataStore.beginTransaction(true, chunkName)) { + return (ChunkMetadata) txn.get(chunkName).get(); } + } - void waitForInvocation(int i) { - invocations.get(i).join(); + private SegmentMetadata getSegmentMetadata(ChunkMetadataStore metadataStore, String chunkName) throws Exception { + try (val txn = metadataStore.beginTransaction(true, chunkName)) { + return (SegmentMetadata) txn.get(chunkName).get(); } } + } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/NoAppendSimpleStorageTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/NoAppendSimpleStorageTests.java index 02c9e70bab6..2d12e2a786f 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/NoAppendSimpleStorageTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/NoAppendSimpleStorageTests.java @@ -16,11 +16,9 @@ package io.pravega.segmentstore.storage.chunklayer; import io.pravega.segmentstore.storage.mocks.InMemoryChunkStorage; -import io.pravega.test.common.AssertExtensions; import lombok.val; import org.junit.Test; -import java.io.ByteArrayInputStream; import java.util.concurrent.Executor; import static org.junit.Assert.assertEquals; @@ -36,6 +34,7 @@ protected static InMemoryChunkStorage getNoAppendInMemoryChunkStorage(Executor e return ret; } + @Override protected ChunkStorage getChunkStorage() { return new InMemoryChunkStorage(executorService()); } @@ -44,6 +43,7 @@ protected ChunkStorage getChunkStorage() { * Unit tests for {@link InMemoryChunkStorage} using {@link ChunkedRollingStorageTests}. */ public static class NoAppendSimpleStorageRollingStorageTests extends ChunkedRollingStorageTests { + @Override protected ChunkStorage getChunkStorage() { return getNoAppendInMemoryChunkStorage(executorService()); } @@ -61,41 +61,13 @@ protected ChunkStorage createChunkStorage() { /** * Test default capabilities. */ + @Override @Test public void testCapabilities() { assertEquals(false, chunkStorage.supportsAppend()); assertEquals(true, chunkStorage.supportsTruncation()); assertEquals(false, chunkStorage.supportsConcat()); } - - /** - * Test simple reads and writes for exceptions. 
- */ - @Test - @Override - public void testSimpleWriteExceptions() throws Exception { - String chunkName = "testchunk"; - - byte[] writeBuffer = new byte[10]; - populate(writeBuffer); - int length = writeBuffer.length; - val chunkHandle = chunkStorage.createWithContent(chunkName, 10, new ByteArrayInputStream(writeBuffer)).get(); - int bytesWritten = Math.toIntExact(chunkStorage.getInfo(chunkName).get().getLength()); - assertEquals(length, bytesWritten); - assertEquals(chunkName, chunkHandle.getChunkName()); - assertEquals(false, chunkHandle.isReadOnly()); - - // Write exceptions. - AssertExtensions.assertThrows( - " write should throw exception.", - () -> chunkStorage.write(chunkHandle, 10, 1, new ByteArrayInputStream(writeBuffer)).get(), - ex -> ex instanceof IllegalArgumentException); - } - - @Test - @Override - public void testReadonly() throws Exception { - } } } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SimpleStorageTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SimpleStorageTests.java index 928a96924cc..23dafe55b90 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SimpleStorageTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SimpleStorageTests.java @@ -23,6 +23,7 @@ import io.pravega.segmentstore.storage.StorageTestBase; import io.pravega.segmentstore.storage.metadata.ChunkMetadataStore; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; +import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager; import io.pravega.test.common.AssertExtensions; import lombok.extern.slf4j.Slf4j; import lombok.val; @@ -62,10 +63,16 @@ protected Storage createStorage() throws Exception { chunkStorage = getChunkStorage(); } } - ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, chunkStorage, chunkMetadataStore, executor, ChunkedSegmentStorageConfig.DEFAULT_CONFIG); + ChunkedSegmentStorage chunkedSegmentStorage = new ChunkedSegmentStorage(CONTAINER_ID, + chunkStorage, chunkMetadataStore, executor, getDefaultConfig()); + chunkedSegmentStorage.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); return chunkedSegmentStorage; } + protected ChunkedSegmentStorageConfig getDefaultConfig() { + return ChunkedSegmentStorageConfig.DEFAULT_CONFIG; + } + abstract protected ChunkStorage getChunkStorage() throws Exception; /** @@ -83,6 +90,7 @@ protected Storage forkStorage(ChunkedSegmentStorage storage) throws Exception { getCloneMetadataStore(storage.getMetadataStore()), executor, storage.getConfig()); + forkedChunkedSegmentStorage.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); return forkedChunkedSegmentStorage; } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalOperationsTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalOperationsTests.java index 424a841ebd4..b36fe61b28c 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalOperationsTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalOperationsTests.java @@ -23,6 +23,7 @@ import io.pravega.segmentstore.storage.mocks.InMemoryChunkStorage; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; import 
io.pravega.segmentstore.storage.mocks.InMemorySnapshotInfoStore;
+import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager;
 import io.pravega.shared.NameUtils;
 import io.pravega.test.common.ThreadPooledTestSuite;
 import lombok.Builder;
@@ -61,17 +62,24 @@ public class SystemJournalOperationsTests extends ThreadPooledTestSuite {
     protected static final Duration TIMEOUT = Duration.ofSeconds(30);
     private static final int CONTAINER_ID = 42;
     private static final int[] PRIMES_1 = {2, 3, 5, 7};
+
+    // InMemoryChunkStorage internally saves each write as a separate array.
+    // This means that if read/write calls fail too often, no journal read will ever complete.
+    // So fail only on every 5th, 7th, or 11th call, etc., and not more often than that.
+    private static final int[] PRIMES_2 = {5, 7, 11};
     private static final int THREAD_POOL_SIZE = 10;

     @Rule
     public Timeout globalTimeout = Timeout.seconds(TIMEOUT.getSeconds());

+    @Override
     @Before
     public void before() throws Exception {
         super.before();
         FlakySnapshotInfoStore.clear();
     }

+    @Override
     @After
     public void after() throws Exception {
         super.after();
@@ -112,6 +120,17 @@ private TestAction[][] getSimpleScenarioActions(TestContext testContext, String
             }
         };
     }
+
+    private TestAction[][] getMultipleRestartScenarioActions(TestContext testContext, String testSegmentName) {
+        TestAction[][] ret = new TestAction[4][4];
+        for (int i = 0; i < 4; i++) {
+            ret[i] = new TestAction[4];
+            for (int j = 0; j < 4; j++) {
+                ret[i][j] = new AddChunkAction(testSegmentName, 4);
+            }
+        }
+        return ret;
+    }
     /// end region

     /**
@@ -126,6 +145,7 @@ private TestAction[][] getSimpleScenarioActions(TestContext testContext, String
      */
     @Test
     public void testSimpleScenario() throws Exception {
+        @Cleanup
         val testContext = new TestContext(CONTAINER_ID);
         val testSegmentName = testContext.segmentNames[0];
         @Cleanup
@@ -189,9 +209,11 @@ public void testSimpleScenario() throws Exception {
      */
     @Test
     public void testSimpleScenarioWithSnapshots() throws Exception {
+        @Cleanup
         val testContext = new TestContext(CONTAINER_ID);
         testContext.setConfig(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
                 .maxJournalUpdatesPerSnapshot(2)
+                .garbageCollectionDelay(Duration.ZERO)
                 .selfCheckEnabled(true)
                 .build());
@@ -266,9 +288,11 @@ public void testSimpleScenarioWithSnapshots() throws Exception {

     @Test
     public void testWithSnapshots() throws Exception {
+        @Cleanup
         val testContext = new TestContext(CONTAINER_ID);
         testContext.setConfig(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder()
                 .maxJournalUpdatesPerSnapshot(3)
+                .garbageCollectionDelay(Duration.ZERO)
                 .selfCheckEnabled(true)
                 .build());
@@ -278,31 +302,32 @@ public void testWithSnapshots() throws Exception {
         val instance = new TestInstance(testContext, 1);
         instance.bootstrap();
         instance.validate();
-        checkJournalsNotExist(testContext, instance, 1, 1, 1);

         // Add chunk.
         instance.append(testSegmentName, "A", 0, 1);
-        checkJournalsExist(testContext, instance, 1, 1, 1);
+        checkJournalsExist(testContext, instance, 1, 2, 2);

         // Add chunk.
         instance.append(testSegmentName, "B", 1, 2);
-        checkJournalsExist(testContext, instance, 1, 1, 2);
+        checkJournalsExist(testContext, instance, 1, 2, 3);

         // Add chunk.
         instance.append(testSegmentName, "C", 3, 3);
-        checkJournalsExist(testContext, instance, 1, 1, 3);
+        checkJournalsExist(testContext, instance, 1, 2, 4);

         // Add chunk.
         instance.append(testSegmentName, "D", 6, 4);
-        checkJournalsExist(testContext, instance, 1, 1, 4);
+        checkJournalsExist(testContext, instance, 1, 2, 5);

         // Add chunk.
instance.append(testSegmentName, "E", 10, 5); - checkJournalsExist(testContext, instance, 2, 2, 5); + instance.deleteGarbage(); + checkJournalsExist(testContext, instance, 2, 3, 6); + checkJournalsNotExistBefore(testContext, instance.epoch, 2, 3, 6); // Add chunk. instance.append(testSegmentName, "F", 15, 6); - checkJournalsExist(testContext, instance, 2, 2, 6); + checkJournalsExist(testContext, instance, 2, 3, 7); // Bootstrap new instance. @Cleanup @@ -342,11 +367,30 @@ private void checkJournalsNotExist(TestContext testContext, TestInstance instanc } } + private void checkJournalsNotExistBefore(TestContext testContext, long epoch, long snapshotId, long journalIndex, long changeNumber) throws Exception { + // check snapshots + for (int i = 0; i < snapshotId; i++) { + Assert.assertFalse(testContext.chunkStorage.exists(NameUtils.getSystemJournalSnapshotFileName(CONTAINER_ID, epoch, i)).get()); + } + // Check journals + if (testContext.config.isAppendEnabled() && testContext.chunkStorage.supportsAppend()) { + for (int i = 0; i < journalIndex; i++) { + Assert.assertFalse(testContext.chunkStorage.exists(NameUtils.getSystemJournalFileName(CONTAINER_ID, epoch, i)).get()); + } + } else { + for (int i = 0; i < changeNumber; i++) { + Assert.assertFalse(testContext.chunkStorage.exists(NameUtils.getSystemJournalFileName(CONTAINER_ID, epoch, i)).get()); + } + } + } + @Test public void testWithSnapshotsAndTime() throws Exception { + @Cleanup val testContext = new TestContext(CONTAINER_ID); testContext.setConfig(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .maxJournalUpdatesPerSnapshot(2) + .garbageCollectionDelay(Duration.ZERO) .selfCheckEnabled(true) .build()); @@ -356,30 +400,34 @@ public void testWithSnapshotsAndTime() throws Exception { val instance = new TestInstance(testContext, 1); instance.bootstrap(); instance.validate(); - checkJournalsNotExist(testContext, instance, 1, 1, 1); + //checkJournalsNotExist(testContext, instance, 1, 1, 1); // Add chunk. instance.append(testSegmentName, "A", 0, 1); - checkJournalsExist(testContext, instance, 1, 1, 1); + checkJournalsExist(testContext, instance, 1, 2, 2); // Add chunk. instance.append(testSegmentName, "B", 1, 2); - checkJournalsExist(testContext, instance, 1, 1, 2); + checkJournalsExist(testContext, instance, 1, 2, 3); // Trigger Time and add chunk testContext.addTime(testContext.config.getJournalSnapshotInfoUpdateFrequency().toMillis() + 1); instance.append(testSegmentName, "C", 3, 3); - checkJournalsExist(testContext, instance, 2, 2, 3); + instance.deleteGarbage(); + checkJournalsExist(testContext, instance, 2, 3, 4); + checkJournalsNotExistBefore(testContext, instance.epoch, 2, 3, 4); // Add chunk. instance.append(testSegmentName, "D", 6, 4); - checkJournalsExist(testContext, instance, 2, 2, 4); + checkJournalsExist(testContext, instance, 2, 3, 5); // Add chunk. instance.append(testSegmentName, "E", 10, 5); - checkJournalsExist(testContext, instance, 2, 2, 5); + checkJournalsExist(testContext, instance, 2, 3, 6); // Add chunk. instance.append(testSegmentName, "F", 15, 6); - checkJournalsExist(testContext, instance, 3, 3, 6); + instance.deleteGarbage(); + checkJournalsExist(testContext, instance, 3, 4, 7); + checkJournalsNotExistBefore(testContext, instance.epoch, 3, 4, 7); // Bootstrap new instance. 
@Cleanup @@ -418,6 +466,7 @@ public void testSimpleScenarioWithActions() throws Exception { val testContext = new TestContext(CONTAINER_ID); testContext.setConfig(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .maxJournalUpdatesPerSnapshot(2) + .garbageCollectionDelay(Duration.ZERO) .selfCheckEnabled(true) .build()); val testSegmentName = testContext.segmentNames[0]; @@ -428,56 +477,65 @@ public void testSimpleScenarioWithActions() throws Exception { public void testSimpleScenarioWithMultipleCombinations() throws Exception { for (String method1 : new String[] {"doRead.before", "doRead.after"}) { for (String method2 : new String[] {"doWrite.before", "doWrite.after"}) { - testWithFlakyChunkStorage(this::testScenario, this::getSimpleScenarioActions, method1, method2, PRIMES_1); + testWithFlakyChunkStorage(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, method1, method2, PRIMES_1); + } + } + } + + @Test + public void testMultipleRestartScenarioWithMultipleCombinations() throws Exception { + for (String method1 : new String[] {"doRead.before", "doRead.after"}) { + for (String method2 : new String[] {"doWrite.before", "doWrite.after"}) { + testWithFlakyChunkStorage(getTestConfig(100), this::testScenario, this::getMultipleRestartScenarioActions, method1, method2, PRIMES_2); } } } @Test public void testSimpleScenarioWithFlakyReadsBefore() throws Exception { - testWithFlakyChunkStorage(this::testScenario, this::getSimpleScenarioActions, "doRead.before", PRIMES_1); + testWithFlakyChunkStorage(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "doRead.before", PRIMES_1); } @Test public void testSimpleScenarioWithFlakyReadsAfter() throws Exception { - testWithFlakyChunkStorage(this::testScenario, this::getSimpleScenarioActions, "doRead.after", PRIMES_1); + testWithFlakyChunkStorage(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "doRead.after", PRIMES_1); } @Test public void testSimpleScenarioWithFlakyWriteBefore() throws Exception { - testWithFlakyChunkStorage(this::testScenario, this::getSimpleScenarioActions, "doWrite.before", PRIMES_1); + testWithFlakyChunkStorage(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "doWrite.before", PRIMES_1); } @Test public void testSimpleScenarioWithFlakyWriteAfter() throws Exception { - testWithFlakyChunkStorage(this::testScenario, this::getSimpleScenarioActions, "doWrite.after", PRIMES_1); + testWithFlakyChunkStorage(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "doWrite.after", PRIMES_1); } @Test public void testScenarioWithFlakySnapshotInfoStoreReadsBefore() throws Exception { - testScenarioWithFlakySnapshotInfoStore(this::testScenario, this::getSimpleScenarioActions, "getSnapshotId.before", PRIMES_1); + testScenarioWithFlakySnapshotInfoStore(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "getSnapshotId.before", PRIMES_1); } @Test public void testScenarioWithFlakySnapshotInfoStoreReadsAfter() throws Exception { - testScenarioWithFlakySnapshotInfoStore(this::testScenario, this::getSimpleScenarioActions, "getSnapshotId.after", PRIMES_1); + testScenarioWithFlakySnapshotInfoStore(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "getSnapshotId.after", PRIMES_1); } @Test public void testScenarioWithFlakySnapshotInfoStoreWriteBefore() throws Exception { - testScenarioWithFlakySnapshotInfoStore(this::testScenario, this::getSimpleScenarioActions, "setSnapshotId.before", PRIMES_1); + 
testScenarioWithFlakySnapshotInfoStore(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "setSnapshotId.before", PRIMES_1); } @Test public void testScenarioWithFlakySnapshotInfoStoreWriteAfter() throws Exception { - testScenarioWithFlakySnapshotInfoStore(this::testScenario, this::getSimpleScenarioActions, "setSnapshotId.after", PRIMES_1); + testScenarioWithFlakySnapshotInfoStore(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, "setSnapshotId.after", PRIMES_1); } @Test public void testScenarioWithFlakySnapshotInfoStoreMultiple() throws Exception { for (String method1 : new String[] {"getSnapshotId.before", "getSnapshotId.after"}) { for (String method2 : new String[] {"setSnapshotId.before", "setSnapshotId.after"}) { - testScenarioWithFlakySnapshotInfoStore(this::testScenario, this::getSimpleScenarioActions, method1, method2, PRIMES_1); + testScenarioWithFlakySnapshotInfoStore(getTestConfig(2), this::testScenario, this::getSimpleScenarioActions, method1, method2, PRIMES_1); } } } @@ -496,6 +554,7 @@ public void testTruncateVariousOffsets() throws Exception { val testContext = new TestContext(CONTAINER_ID); testContext.setConfig(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .maxJournalUpdatesPerSnapshot(2) + .garbageCollectionDelay(Duration.ZERO) .selfCheckEnabled(true) .build()); val testSegmentName = testContext.segmentNames[0]; @@ -529,6 +588,7 @@ private void testTruncate(int chunkSize, int chunkCount, int truncateAt) throws val testContext = new TestContext(CONTAINER_ID); testContext.setConfig(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .maxJournalUpdatesPerSnapshot(2) + .garbageCollectionDelay(Duration.ZERO) .selfCheckEnabled(true) .build()); val testSegmentName = testContext.segmentNames[0]; @@ -553,26 +613,21 @@ private void testTruncate(TestContext testContext, String testSegmentName, int[] }); } - void testScenario(ChunkStorage chunkStorage, TestScenarioProvider scenarioProvider) throws Exception { - @Cleanup - val testContext = new TestContext(CONTAINER_ID, chunkStorage); - testContext.setConfig(ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() - .maxJournalUpdatesPerSnapshot(2) + private ChunkedSegmentStorageConfig getTestConfig(int maxJournalUpdatesPerSnapshot) { + return ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .maxJournalUpdatesPerSnapshot(maxJournalUpdatesPerSnapshot) + .garbageCollectionDelay(Duration.ZERO) .selfCheckEnabled(true) - .build()); - val testSegmentName = testContext.segmentNames[0]; - val scenario = scenarioProvider.getScenario(testContext, testSegmentName); - testScenario(testContext, scenario); + .build(); } - /** - * Tests a scenario for given set of test actions. - * @throws Exception Exception if any. 
- */ - int testScenario(TestContext testContext, String segmentName, TestScenarioProvider scenarioProvider) throws Exception { + void testScenario(ChunkStorage chunkStorage, ChunkedSegmentStorageConfig config, TestScenarioProvider scenarioProvider) throws Exception { + @Cleanup + val testContext = new TestContext(CONTAINER_ID, chunkStorage); + testContext.setConfig(config); val testSegmentName = testContext.segmentNames[0]; val scenario = scenarioProvider.getScenario(testContext, testSegmentName); - return testScenario(testContext, scenario); + testScenario(testContext, scenario); } /** @@ -624,12 +679,12 @@ int testScenario(TestContext testContext, TestAction[][] actions) throws Excepti return chunkId; } - void testWithFlakyChunkStorage(TestMethod test, TestScenarioProvider scenarioProvider, String interceptMethod1, String interceptMethod2, int[] primes) throws Exception { + void testWithFlakyChunkStorage(ChunkedSegmentStorageConfig config, TestMethod test, TestScenarioProvider scenarioProvider, String interceptMethod1, String interceptMethod2, int[] primes) throws Exception { for (val prime1 : primes) { for (val prime2 : primes) { FlakyChunkStorage flakyChunkStorage = new FlakyChunkStorage(executorService()); flakyChunkStorage.interceptor.flakyPredicates.add(FlakinessPredicate.builder() - .method("doRead.before") + .method(interceptMethod1) .matchPredicate(n -> n % prime1 == 0) .matchRegEx("_sysjournal") .action(() -> { @@ -637,19 +692,19 @@ void testWithFlakyChunkStorage(TestMethod test, TestScenarioProvider scenarioPro }) .build()); flakyChunkStorage.interceptor.flakyPredicates.add(FlakinessPredicate.builder() - .method("doWrite.before") + .method(interceptMethod2) .matchPredicate(n -> n % prime2 == 0) .matchRegEx("_sysjournal") .action(() -> { throw new IOException("Intentional"); }) .build()); - test.test(flakyChunkStorage, scenarioProvider); + test.test(flakyChunkStorage, config, scenarioProvider); } } } - void testWithFlakyChunkStorage(TestMethod test, TestScenarioProvider scenarioProvider, String interceptMethod, int[] primes) throws Exception { + void testWithFlakyChunkStorage(ChunkedSegmentStorageConfig config, TestMethod test, TestScenarioProvider scenarioProvider, String interceptMethod, int[] primes) throws Exception { for (val prime : primes) { FlakyChunkStorage flakyChunkStorage = new FlakyChunkStorage(executorService()); flakyChunkStorage.interceptor.flakyPredicates.add(FlakinessPredicate.builder() @@ -660,11 +715,11 @@ void testWithFlakyChunkStorage(TestMethod test, TestScenarioProvider scenarioPro throw new IOException("Intentional"); }) .build()); - test.test(flakyChunkStorage, scenarioProvider); + test.test(flakyChunkStorage, config, scenarioProvider); } } - void testScenarioWithFlakySnapshotInfoStore(TestMethod test, TestScenarioProvider scenarioProvider, String interceptMethod, int[] primes) throws Exception { + void testScenarioWithFlakySnapshotInfoStore(ChunkedSegmentStorageConfig config, TestMethod test, TestScenarioProvider scenarioProvider, String interceptMethod, int[] primes) throws Exception { for (val prime : primes) { FlakyChunkStorage flakyChunkStorage = new FlakyChunkStorage(executorService()); val flakySnaphotInfoStore = new FlakySnapshotInfoStore(); @@ -677,11 +732,11 @@ void testScenarioWithFlakySnapshotInfoStore(TestMethod test, TestScenarioProvide throw new IOException("Intentional"); }) .build()); - test.test(flakyChunkStorage, scenarioProvider); + test.test(flakyChunkStorage, config, scenarioProvider); } } - void 
testScenarioWithFlakySnapshotInfoStore(TestMethod test, TestScenarioProvider scenarioProvider, + void testScenarioWithFlakySnapshotInfoStore(ChunkedSegmentStorageConfig config, TestMethod test, TestScenarioProvider scenarioProvider, String method1, String method2, int[] primes) throws Exception { for (val prime1 : primes) { @@ -706,16 +761,239 @@ void testScenarioWithFlakySnapshotInfoStore(TestMethod test, TestScenarioProvide throw new IOException("Intentional"); }) .build()); - test.test(flakyChunkStorage, scenarioProvider); + test.test(flakyChunkStorage, config, scenarioProvider); } } } + /** + * Test basic zombie scenario with truncate. + * @throws Exception Exception if any. + */ + @Test + public void testZombieScenario() throws Exception { + @Cleanup + val testContext = new TestContext(CONTAINER_ID); + val testSegmentName = testContext.segmentNames[0]; + @Cleanup + val instance = new TestInstance(testContext, 1); + instance.bootstrap(); + instance.validate(); + // Add a chunk + instance.append(testSegmentName, "A", 0, 10); + + // Bootstrap. + @Cleanup + val instance2 = new TestInstance(testContext, 2); + instance2.bootstrap(); + + // Validate. + instance2.validate(); + TestUtils.checkSegmentBounds(instance2.metadataStore, testSegmentName, 0, 10); + TestUtils.checkSegmentLayout(instance2.metadataStore, testSegmentName, new long[] { 10}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance2.metadataStore, testSegmentName); + val segmentMetadata2 = TestUtils.getSegmentMetadata(instance2.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata2.getFirstChunk()); + Assert.assertEquals("A", segmentMetadata2.getLastChunk()); + Assert.assertEquals(0, segmentMetadata2.getFirstChunkStartOffset()); + Assert.assertEquals(0, segmentMetadata2.getLastChunkStartOffset()); + + // Bootstrap a new instance. + @Cleanup + val instance3 = new TestInstance(testContext, 3); + instance3.bootstrap(); + instance3.validate(); + TestUtils.checkSegmentBounds(instance3.metadataStore, testSegmentName, 0, 10); + TestUtils.checkSegmentLayout(instance3.metadataStore, testSegmentName, new long[] { 10}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance3.metadataStore, testSegmentName); + val segmentMetadata3 = TestUtils.getSegmentMetadata(instance3.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata3.getFirstChunk()); + Assert.assertEquals("A", segmentMetadata3.getLastChunk()); + Assert.assertEquals(0, segmentMetadata3.getFirstChunkStartOffset()); + Assert.assertEquals(0, segmentMetadata3.getLastChunkStartOffset()); + + // Zombie Truncate + instance2.writeZombieRecord(SystemJournal.TruncationRecord.builder() + .offset(4) + .startOffset(0) + .segmentName(testSegmentName) + .firstChunkName("A") + .build()); + + // Bootstrap a new instance. 
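+ // The truncation record above was committed by instance2, which newer epochs have already
+ // superseded; the bootstrap below must treat it as a zombie write and discard it, leaving
+ // the segment bounds at [0, 10] with chunk "A" intact, as the assertions that follow verify.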
+ @Cleanup + val instance4 = new TestInstance(testContext, 4); + instance4.bootstrap(); + TestUtils.checkSegmentBounds(instance4.metadataStore, testSegmentName, 0, 10); + TestUtils.checkSegmentLayout(instance4.metadataStore, testSegmentName, new long[] { 10}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance4.metadataStore, testSegmentName); + val segmentMetadata4 = TestUtils.getSegmentMetadata(instance4.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata4.getFirstChunk()); + Assert.assertEquals("A", segmentMetadata4.getLastChunk()); + Assert.assertEquals(0, segmentMetadata4.getFirstChunkStartOffset()); + Assert.assertEquals(0, segmentMetadata4.getLastChunkStartOffset()); + } + + /** + * Test zombie scenario with multiple truncates. + * @throws Exception Exception if any. + */ + @Test + public void testZombieScenarioMultipleTruncates() throws Exception { + @Cleanup + val testContext = new TestContext(CONTAINER_ID); + val testSegmentName = testContext.segmentNames[0]; + @Cleanup + val instance = new TestInstance(testContext, 1); + instance.bootstrap(); + instance.validate(); + // Add a chunk + instance.append(testSegmentName, "A", 0, 10); + instance.truncate(testSegmentName, 2); + // Bootstrap. + @Cleanup + val instance2 = new TestInstance(testContext, 2); + instance2.bootstrap(); + + // Validate. + instance2.validate(); + TestUtils.checkSegmentBounds(instance2.metadataStore, testSegmentName, 2, 10); + TestUtils.checkSegmentLayout(instance2.metadataStore, testSegmentName, new long[] { 10}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance2.metadataStore, testSegmentName); + val segmentMetadata2 = TestUtils.getSegmentMetadata(instance2.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata2.getFirstChunk()); + Assert.assertEquals("A", segmentMetadata2.getLastChunk()); + Assert.assertEquals(0, segmentMetadata2.getFirstChunkStartOffset()); + Assert.assertEquals(0, segmentMetadata2.getLastChunkStartOffset()); + + // Bootstrap a new instance. + @Cleanup + val instance3 = new TestInstance(testContext, 3); + instance3.bootstrap(); + instance3.validate(); + TestUtils.checkSegmentBounds(instance3.metadataStore, testSegmentName, 2, 10); + TestUtils.checkSegmentLayout(instance3.metadataStore, testSegmentName, new long[] { 10}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance3.metadataStore, testSegmentName); + val segmentMetadata3 = TestUtils.getSegmentMetadata(instance3.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata3.getFirstChunk()); + Assert.assertEquals("A", segmentMetadata3.getLastChunk()); + Assert.assertEquals(0, segmentMetadata3.getFirstChunkStartOffset()); + Assert.assertEquals(0, segmentMetadata3.getLastChunkStartOffset()); + instance3.truncate(testSegmentName, 3); + + // Zombie Truncate + instance2.writeZombieRecord(SystemJournal.TruncationRecord.builder() + .offset(4) + .startOffset(0) + .segmentName(testSegmentName) + .firstChunkName("A") + .build()); + + // Bootstrap a new instance. 
+ @Cleanup + val instance4 = new TestInstance(testContext, 4); + instance4.bootstrap(); + TestUtils.checkSegmentBounds(instance4.metadataStore, testSegmentName, 3, 10); + TestUtils.checkSegmentLayout(instance4.metadataStore, testSegmentName, new long[] { 10}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance4.metadataStore, testSegmentName); + val segmentMetadata4 = TestUtils.getSegmentMetadata(instance4.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata4.getFirstChunk()); + Assert.assertEquals("A", segmentMetadata4.getLastChunk()); + Assert.assertEquals(0, segmentMetadata4.getFirstChunkStartOffset()); + Assert.assertEquals(0, segmentMetadata4.getLastChunkStartOffset()); + } + + /** + * Test zombie scenario with multiple chunks. + * @throws Exception Exception if any. + */ + @Test + public void testZombieScenarioMultipleChunks() throws Exception { + @Cleanup + val testContext = new TestContext(CONTAINER_ID); + val testSegmentName = testContext.segmentNames[0]; + @Cleanup + val instance = new TestInstance(testContext, 1); + instance.bootstrap(); + instance.validate(); + // Add a chunk + instance.append(testSegmentName, "A", 0, 10); + instance.append(testSegmentName, "B", 10, 20); + instance.append(testSegmentName, "C", 30, 30); + instance.truncate(testSegmentName, 2); + // Bootstrap. + @Cleanup + val instance2 = new TestInstance(testContext, 2); + instance2.bootstrap(); + + // Validate. + instance2.validate(); + TestUtils.checkSegmentBounds(instance2.metadataStore, testSegmentName, 2, 60); + TestUtils.checkSegmentLayout(instance2.metadataStore, testSegmentName, new long[] { 10, 20, 30}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance2.metadataStore, testSegmentName); + val segmentMetadata2 = TestUtils.getSegmentMetadata(instance2.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata2.getFirstChunk()); + Assert.assertEquals("C", segmentMetadata2.getLastChunk()); + Assert.assertEquals(0, segmentMetadata2.getFirstChunkStartOffset()); + Assert.assertEquals(30, segmentMetadata2.getLastChunkStartOffset()); + + // Bootstrap a new instance. 
+ @Cleanup + val instance3 = new TestInstance(testContext, 3); + instance3.bootstrap(); + instance3.validate(); + TestUtils.checkSegmentBounds(instance3.metadataStore, testSegmentName, 2, 60); + TestUtils.checkSegmentLayout(instance3.metadataStore, testSegmentName, new long[] { 10, 20, 30}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance3.metadataStore, testSegmentName); + val segmentMetadata3 = TestUtils.getSegmentMetadata(instance3.metadataStore, testSegmentName); + Assert.assertEquals("A", segmentMetadata3.getFirstChunk()); + Assert.assertEquals("C", segmentMetadata3.getLastChunk()); + Assert.assertEquals(0, segmentMetadata3.getFirstChunkStartOffset()); + Assert.assertEquals(30, segmentMetadata3.getLastChunkStartOffset()); + instance3.truncate(testSegmentName, 15); + instance3.append(testSegmentName, "D", 60, 100); + + // Zombie Truncate + instance2.writeZombieRecord(SystemJournal.TruncationRecord.builder() + .offset(40) + .startOffset(30) + .segmentName(testSegmentName) + .firstChunkName("C") + .build()); + instance2.writeZombieRecord(SystemJournal.ChunkAddedRecord.builder() + .offset(60) + .oldChunkName("C") + .newChunkName("X") + .segmentName(testSegmentName) + .build()); + instance2.writeZombieRecord(SystemJournal.ChunkAddedRecord.builder() + .offset(100) + .oldChunkName("X") + .newChunkName("Y") + .segmentName(testSegmentName) + .build()); + + // Bootstrap a new instance. + @Cleanup + val instance4 = new TestInstance(testContext, 4); + instance4.bootstrap(); + TestUtils.checkSegmentBounds(instance4.metadataStore, testSegmentName, 15, 160); + TestUtils.checkSegmentLayout(instance4.metadataStore, testSegmentName, new long[] { 20, 30, 100}); + TestUtils.checkChunksExistInStorage(testContext.chunkStorage, instance4.metadataStore, testSegmentName); + val segmentMetadata4 = TestUtils.getSegmentMetadata(instance4.metadataStore, testSegmentName); + Assert.assertEquals("B", segmentMetadata4.getFirstChunk()); + Assert.assertEquals("D", segmentMetadata4.getLastChunk()); + Assert.assertEquals(10, segmentMetadata4.getFirstChunkStartOffset()); + Assert.assertEquals(60, segmentMetadata4.getLastChunkStartOffset()); + // Keep + instance2.close(); + } + /** * Represents a test method. 
*/ interface TestMethod { - void test(ChunkStorage chunkStorage, TestScenarioProvider scenarioProvider) throws Exception; + void test(ChunkStorage chunkStorage, ChunkedSegmentStorageConfig config, TestScenarioProvider scenarioProvider) throws Exception; } /** @@ -816,6 +1094,7 @@ static class ExpectedSegmentInfo { @Data class TestContext implements AutoCloseable { ChunkedSegmentStorageConfig config = ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() + .garbageCollectionDelay(Duration.ZERO) .selfCheckEnabled(true) .build(); ChunkStorage chunkStorage; @@ -868,6 +1147,7 @@ class TestInstance implements AutoCloseable { SystemJournal systemJournal; SnapshotInfoStore snapshotInfoStore; long epoch; + boolean isZombie; TestInstance(TestContext testContext, long epoch) { this.testContext = testContext; @@ -887,14 +1167,37 @@ class TestInstance implements AutoCloseable { metadataStore, garbageCollector, () -> testContext.getTime(), testContext.config, executorService()); } + /** + * Bootstraps the system journal, then initializes the garbage collector and processes any garbage produced during bootstrap. + */ void bootstrap() throws Exception { systemJournal.bootstrap(epoch, snapshotInfoStore).join(); + garbageCollector.initialize(new InMemoryTaskQueueManager()); + deleteGarbage(); + } + + /** + * Drains the garbage collector's task queue and processes the drained batch inline. + */ + void deleteGarbage() throws Exception { + val testTaskQueue = (InMemoryTaskQueueManager) garbageCollector.getTaskQueue(); + val list = testTaskQueue.drain(garbageCollector.getTaskQueueName(), 1000); + garbageCollector.processBatch(list).join(); + } + + /** + * Commits a journal record as a zombie (superseded) instance. + */ + void writeZombieRecord(SystemJournal.SystemJournalRecord record) throws Exception { + isZombie = true; + systemJournal.commitRecord(record).join(); } /** * Append a chunk. */ void append(String segmentName, String chunkName, int offset, int length) throws Exception { + Assert.assertFalse("Attempt to use zombie instance", isZombie); append(segmentName, chunkName, offset, length, length); } @@ -985,6 +1288,7 @@ synchronized void append(String segmentName, String chunkName, int offset, int m * Truncate. */ synchronized void truncate(String segmentName, int offset) throws Exception { + Assert.assertFalse("Attempt to use zombie instance", isZombie); val list = testContext.expectedChunks.get(segmentName); val segmentInfo = testContext.expectedSegments.get(segmentName); @@ -1061,7 +1365,6 @@ synchronized void truncate(String segmentName, int offset) throws Exception { * Validates the metadata against expected results. */ void validate() throws Exception { - Assert.assertEquals(0, systemJournal.getCurrentFileIndex().get()); for (val expectedSegmentInfo : testContext.expectedSegments.values()) { // Check segment metadata. val expectedChunkInfoList = testContext.expectedChunks.get(expectedSegmentInfo.name); @@ -1179,14 +1482,42 @@ static class FlakyChunkStorage extends InMemoryChunkStorage { @Override protected int doWrite(ChunkHandle handle, long offset, int length, InputStream data) throws ChunkStorageException { + // Apply any interceptors with identifier 'doWrite.before' or 'doWrite.after' interceptor.intercept(handle.getChunkName(), "doWrite.before"); val ret = super.doWrite(handle, offset, length, data); interceptor.intercept(handle.getChunkName(), "doWrite.after"); return ret; } + @Override + protected ChunkHandle doCreateWithContent(String chunkName, int length, InputStream data) throws ChunkStorageException { + // Apply any interceptors with identifier 'doWrite.before' or 'doWrite.after' + interceptor.intercept(chunkName, "doWrite.before"); + // Call the super class methods directly so that these internal calls are not intercepted again.
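+ // createWithContent is emulated here as a create followed by a write, so it passes through
+ // the same 'doWrite.before'/'doWrite.after' interception points as a plain write; if the
+ // write comes up short, the partially created chunk is deleted before the failure is surfaced.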
+ ChunkHandle handle = super.doCreate(chunkName); + int bytesWritten = super.doWrite(handle, 0, length, data); + if (bytesWritten < length) { + super.doDelete(ChunkHandle.writeHandle(chunkName)); + throw new ChunkStorageException(chunkName, "doCreateWithContent - invalid length returned"); + } + val ret = handle; + interceptor.intercept(chunkName, "doWrite.after"); + return ret; + } + + @Override + protected ChunkHandle doCreate(String chunkName) throws ChunkStorageException, IllegalArgumentException { + // Apply any interceptors with identifier 'doWrite.before' or 'doWrite.after' + interceptor.intercept(chunkName, "doWrite.before"); + // Call the super class method directly so that this internal call is not intercepted again. + val ret = super.doCreate(chunkName); + interceptor.intercept(chunkName, "doWrite.after"); + return ret; + } + @Override protected int doRead(ChunkHandle handle, long fromOffset, int length, byte[] buffer, int bufferOffset) throws ChunkStorageException { + // Apply any interceptors with identifier 'doRead.before' or 'doRead.after' interceptor.intercept(handle.getChunkName(), "doRead.before"); val ret = super.doRead(handle, fromOffset, length, buffer, bufferOffset); interceptor.intercept(handle.getChunkName(), "doRead.after"); @@ -1197,6 +1528,7 @@ protected int doRead(ChunkHandle handle, long fromOffset, int length, byte[] buf static class FlakySnapshotInfoStore extends InMemorySnapshotInfoStore { final FlakyInterceptor interceptor = new FlakyInterceptor(); + @Override @SneakyThrows public CompletableFuture<SnapshotInfo> getSnapshotId(int containerId) { try { @@ -1209,6 +1541,7 @@ public CompletableFuture<SnapshotInfo> getSnapshotId(int containerId) { } + @Override @SneakyThrows public CompletableFuture<Void> setSnapshotId(int containerId, SnapshotInfo checkpoint) { try { @@ -1226,16 +1559,19 @@ public CompletableFuture<Void> setSnapshotId(int containerId, SnapshotInfo check * Runs {@link SystemJournalOperationsTests} for Non-appendable storage. */ public static class NonAppendableChunkStorageSystemJournalOperationsTests extends SystemJournalOperationsTests { + @Override @Before public void before() throws Exception { super.before(); } + @Override @After public void after() throws Exception { super.after(); } + @Override protected ChunkStorage createChunkStorage() throws Exception { val chunkStorage = new InMemoryChunkStorage(executorService()); chunkStorage.setShouldSupportAppend(false); diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalRecordsTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalRecordsTests.java new file mode 100644 index 00000000000..9c69a3630ed --- /dev/null +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalRecordsTests.java @@ -0,0 +1,430 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ +package io.pravega.segmentstore.storage.chunklayer; + +import io.pravega.segmentstore.storage.metadata.ChunkMetadata; +import io.pravega.segmentstore.storage.metadata.SegmentMetadata; +import io.pravega.test.common.AssertExtensions; +import lombok.val; +import org.junit.Assert; +import org.junit.Test; + +import java.util.ArrayList; +import java.util.Arrays; + +public class SystemJournalRecordsTests { + + @Test + public void testChunkAddedRecordSerialization() throws Exception { + testSystemJournalRecordSerialization(SystemJournal.ChunkAddedRecord.builder() + .segmentName("segmentName") + .newChunkName("newChunkName") + .oldChunkName("oldChunkName") + .offset(1) + .build()); + + // With nullable values + testSystemJournalRecordSerialization(SystemJournal.ChunkAddedRecord.builder() + .segmentName("segmentName") + .newChunkName("newChunkName") + .oldChunkName(null) + .offset(1) + .build()); + } + + @Test + public void testTruncationRecordSerialization() throws Exception { + testSystemJournalRecordSerialization(SystemJournal.TruncationRecord.builder() + .segmentName("segmentName") + .offset(1) + .firstChunkName("firstChunkName") + .startOffset(2) + .build()); + } + + private void testSystemJournalRecordSerialization(SystemJournal.SystemJournalRecord original) throws Exception { + val serializer = new SystemJournal.SystemJournalRecord.SystemJournalRecordSerializer(); + val bytes = serializer.serialize(original); + val obj = serializer.deserialize(bytes); + Assert.assertEquals(original, obj); + } + + @Test + public void testSystemJournalRecordBatchSerialization() throws Exception { + ArrayList<SystemJournal.SystemJournalRecord> lst = new ArrayList<>(); + testSystemJournalRecordBatchSerialization( + SystemJournal.SystemJournalRecordBatch.builder() + .systemJournalRecords(lst) + .build()); + + ArrayList<SystemJournal.SystemJournalRecord> lst2 = new ArrayList<>(); + lst2.add(SystemJournal.ChunkAddedRecord.builder() + .segmentName("segmentName") + .newChunkName("newChunkName") + .oldChunkName("oldChunkName") + .offset(1) + .build()); + lst2.add(SystemJournal.ChunkAddedRecord.builder() + .segmentName("segmentName") + .newChunkName("newChunkName") + .oldChunkName(null) + .offset(1) + .build()); + lst2.add(SystemJournal.TruncationRecord.builder() + .segmentName("segmentName") + .offset(1) + .firstChunkName("firstChunkName") + .startOffset(2) + .build()); + testSystemJournalRecordBatchSerialization( + SystemJournal.SystemJournalRecordBatch.builder() + .systemJournalRecords(lst2) + .build()); + } + + private void testSystemJournalRecordBatchSerialization(SystemJournal.SystemJournalRecordBatch original) throws Exception { + val serializer = new SystemJournal.SystemJournalRecordBatch.SystemJournalRecordBatchSerializer(); + val bytes = serializer.serialize(original); + val obj = serializer.deserialize(bytes); + Assert.assertEquals(original, obj); + } + + @Test + public void testSnapshotRecordSerialization() throws Exception { + + ArrayList<ChunkMetadata> list = new ArrayList<>(); + list.add(ChunkMetadata.builder() + .name("name") + .nextChunk("nextChunk") + .length(1) + .status(2) + .build()); + list.add(ChunkMetadata.builder() + .name("name") + .length(1) + .status(2) + .build()); + + testSegmentSnapshotRecordSerialization( + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata(SegmentMetadata.builder() + .name("name") + .length(1) + .chunkCount(2) + .startOffset(3) + .status(5) + .maxRollinglength(6) + .firstChunk("firstChunk") + .lastChunk("lastChunk") + .lastModified(7) + .firstChunkStartOffset(8) + .lastChunkStartOffset(9) + .ownerEpoch(10) + .build()) +
.chunkMetadataCollection(list) + .build()); + + testSegmentSnapshotRecordSerialization( + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata(SegmentMetadata.builder() + .name("name") + .length(1) + .chunkCount(2) + .startOffset(3) + .status(5) + .maxRollinglength(6) + .firstChunk(null) + .lastChunk(null) + .lastModified(7) + .firstChunkStartOffset(8) + .lastChunkStartOffset(9) + .ownerEpoch(10) + .build()) + .chunkMetadataCollection(list) + .build()); + } + + private void testSegmentSnapshotRecordSerialization(SystemJournal.SegmentSnapshotRecord original) throws Exception { + val serializer = new SystemJournal.SegmentSnapshotRecord.Serializer(); + val bytes = serializer.serialize(original); + val obj = serializer.deserialize(bytes); + Assert.assertEquals(original, obj); + } + + @Test + public void testSystemSnapshotRecordSerialization() throws Exception { + + ArrayList list1 = new ArrayList<>(); + list1.add(ChunkMetadata.builder() + .name("name1") + .nextChunk("nextChunk1") + .length(1) + .status(2) + .build()); + list1.add(ChunkMetadata.builder() + .name("name12") + .length(1) + .status(2) + .build()); + + ArrayList list2 = new ArrayList<>(); + list2.add(ChunkMetadata.builder() + .name("name2") + .nextChunk("nextChunk2") + .length(1) + .status(3) + .build()); + list2.add(ChunkMetadata.builder() + .name("name22") + .length(1) + .status(3) + .build()); + + ArrayList segmentlist = new ArrayList<>(); + + segmentlist.add( + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata(SegmentMetadata.builder() + .name("name1") + .length(1) + .chunkCount(2) + .startOffset(3) + .status(5) + .maxRollinglength(6) + .firstChunk("firstChunk111") + .lastChunk("lastChun111k") + .lastModified(7) + .firstChunkStartOffset(8) + .lastChunkStartOffset(9) + .ownerEpoch(10) + .build()) + .chunkMetadataCollection(list1) + .build()); + + segmentlist.add( + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata(SegmentMetadata.builder() + .name("name2") + .length(1) + .chunkCount(2) + .startOffset(3) + .status(5) + .maxRollinglength(6) + .firstChunk(null) + .lastChunk(null) + .lastModified(7) + .firstChunkStartOffset(8) + .lastChunkStartOffset(9) + .ownerEpoch(10) + .build()) + .chunkMetadataCollection(list2) + .build()); + val systemSnapshot = SystemJournal.SystemSnapshotRecord.builder() + .epoch(42) + .fileIndex(7) + .segmentSnapshotRecords(segmentlist) + .build(); + testSystemSnapshotRecordSerialization(systemSnapshot); + } + + private void testSystemSnapshotRecordSerialization(SystemJournal.SystemSnapshotRecord original) throws Exception { + val serializer = new SystemJournal.SystemSnapshotRecord.Serializer(); + val bytes = serializer.serialize(original); + val obj = serializer.deserialize(bytes); + Assert.assertEquals(original, obj); + } + + @Test + public void testValid() { + val valid = SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata( + SegmentMetadata.builder() + .name("test") + .chunkCount(4) + .firstChunk("A") + .firstChunkStartOffset(1) + .startOffset(2) + .lastChunk("D") + .lastChunkStartOffset(10) + .length(15) + .build() + .setActive(true) + .setStorageSystemSegment(true) + ) + .chunkMetadataCollection(Arrays.asList( + ChunkMetadata.builder() + .name("A") + .length(2) + .nextChunk("B") + .build(), + ChunkMetadata.builder() + .name("B") + .length(3) + .nextChunk("C") + .build(), + ChunkMetadata.builder() + .name("C") + .length(4) + .nextChunk("D") + .build(), + ChunkMetadata.builder() + .name("D") + .length(5) + .nextChunk(null) + .build() + )) + .build(); 
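+ // checkInvariants() cross-checks the segment metadata against the chunk list above: chunk
+ // count, first/last chunk names, start offsets and total length must all agree. Each of the
+ // malformed records in testInvalidRecords() below breaks one of these rules and is expected
+ // to fail with an IllegalStateException.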
+ valid.checkInvariants(); + } + + @Test + public void testInvalidRecords() { + // Create mal formed data + SystemJournal.SegmentSnapshotRecord[] invalidDataList = new SystemJournal.SegmentSnapshotRecord[] { + // Not system segment + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata( + SegmentMetadata.builder() + .name("test") + .build() + .setActive(true) + ) + .chunkMetadataCollection(Arrays.asList()) + .build(), + // Incorrect chunk count + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata( + SegmentMetadata.builder() + .name("test") + .firstChunk("A") + .lastChunk("A") + .chunkCount(1) + .build() + .setActive(true) + .setStorageSystemSegment(true) + ) + .chunkMetadataCollection(Arrays.asList( + ChunkMetadata.builder() + .name("A") + .length(2) + .nextChunk("B") + .build(), + ChunkMetadata.builder() + .name("B") + .length(3) + .nextChunk(null) + .build() + )) + .build(), + // Incorrect chunk count. + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata( + SegmentMetadata.builder() + .name("test") + .firstChunk("A") + .lastChunk("A") + .chunkCount(1) + .build() + .setActive(true) + .setStorageSystemSegment(true) + ) + .chunkMetadataCollection(Arrays.asList( + ChunkMetadata.builder() + .name("A") + .length(2) + .nextChunk(null) + .build() + )) + .build(), + // Incorrect chunks + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata( + SegmentMetadata.builder() + .name("test") + .firstChunk("A") + .lastChunk("A") + .chunkCount(1) + .length(2) + .build() + .setActive(true) + .setStorageSystemSegment(true) + ) + .chunkMetadataCollection(Arrays.asList( + ChunkMetadata.builder() + .name("Z") + .length(2) + .nextChunk(null) + .build() + )) + .build(), + // Wrong last chunk pointer + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata( + SegmentMetadata.builder() + .name("test") + .firstChunk("A") + .lastChunk("A") + .chunkCount(1) + .length(2) + .build() + .setActive(true) + .setStorageSystemSegment(true) + ) + .chunkMetadataCollection(Arrays.asList( + ChunkMetadata.builder() + .name("A") + .length(2) + .nextChunk("Z") + .build() + )) + .build(), + // Incorrect + SystemJournal.SegmentSnapshotRecord.builder() + .segmentMetadata( + SegmentMetadata.builder() + .name("test") + .firstChunk("A") + .lastChunk("B") + .chunkCount(2) + .length(10) + .lastChunkStartOffset(6) + .build() + .setActive(true) + .setStorageSystemSegment(true) + ) + .chunkMetadataCollection(Arrays.asList( + ChunkMetadata.builder() + .name("A") + .length(5) + .nextChunk("B") + .build(), + ChunkMetadata.builder() + .name("B") + .length(5) + .nextChunk(null) + .build() + )) + .build() + }; + + for (val invalidData: invalidDataList) { + AssertExtensions.assertThrows(invalidData.toString(), + () -> invalidData.checkInvariants(), + ex -> ex instanceof IllegalStateException); + } + } + +} diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalTests.java index f79f0c12f2d..7a55ba526aa 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/SystemJournalTests.java @@ -17,20 +17,22 @@ package io.pravega.segmentstore.storage.chunklayer; import io.pravega.common.Exceptions; +import io.pravega.common.concurrent.Futures; import io.pravega.segmentstore.storage.SegmentHandle; import 
io.pravega.segmentstore.storage.SegmentRollingPolicy; -import io.pravega.segmentstore.storage.metadata.ChunkMetadata; import io.pravega.segmentstore.storage.metadata.ChunkMetadataStore; -import io.pravega.segmentstore.storage.metadata.SegmentMetadata; -import io.pravega.segmentstore.storage.mocks.InMemorySnapshotInfoStore; import io.pravega.segmentstore.storage.mocks.InMemoryChunkStorage; import io.pravega.segmentstore.storage.mocks.InMemoryMetadataStore; +import io.pravega.segmentstore.storage.mocks.InMemorySnapshotInfoStore; +import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager; import io.pravega.shared.NameUtils; import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.ThreadPooledTestSuite; import java.io.ByteArrayInputStream; import java.time.Duration; import java.util.ArrayList; +import java.util.Random; +import java.util.concurrent.CompletableFuture; import java.util.function.Consumer; import lombok.Cleanup; import lombok.val; @@ -45,18 +47,20 @@ * Tests for testing bootstrap functionality with {@link SystemJournal}. */ public class SystemJournalTests extends ThreadPooledTestSuite { - protected static final Duration TIMEOUT = Duration.ofSeconds(30); + private static final int THREAD_POOL_SIZE = 10; @Rule - public Timeout globalTimeout = Timeout.seconds(TIMEOUT.getSeconds()); + public Timeout globalTimeout = Timeout.seconds(60); + @Override @Before public void before() throws Exception { super.before(); InMemorySnapshotInfoStore.clear(); } + @Override @After public void after() throws Exception { super.after(); @@ -68,7 +72,12 @@ protected int getThreadPoolSize() { } protected ChunkMetadataStore getMetadataStore() throws Exception { - return new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); + val metadataStore = new InMemoryMetadataStore(ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); + metadataStore.setReadCallback( transactionData -> { + Assert.assertFalse("Attempt to read pinned metadata from store", transactionData.getKey().contains("_system/containers")); + return CompletableFuture.completedFuture(null); + }); + return metadataStore; } protected ChunkStorage getChunkStorage() throws Exception { @@ -78,6 +87,7 @@ protected ChunkStorage getChunkStorage() throws Exception { private ChunkedSegmentStorageConfig.ChunkedSegmentStorageConfigBuilder getDefaultConfigBuilder(SegmentRollingPolicy policy) { return ChunkedSegmentStorageConfig.DEFAULT_CONFIG.toBuilder() .selfCheckEnabled(true) + .garbageCollectionDelay(Duration.ZERO) .storageMetadataRollingPolicy(policy); } @@ -112,7 +122,7 @@ public void testInitialization() throws Exception { //Assert.assertEquals(epoch, journal.getEpoch()); Assert.assertEquals(0, journal.getCurrentFileIndex().get()); - Assert.assertEquals(NameUtils.INTERNAL_SCOPE_NAME, journal.getSystemSegmentsPrefix()); + Assert.assertEquals(NameUtils.INTERNAL_CONTAINER_PREFIX, journal.getSystemSegmentsPrefix()); Assert.assertArrayEquals(SystemJournal.getChunkStorageSystemSegments(containerId), journal.getSystemSegments()); journal.initialize(); } @@ -245,7 +255,9 @@ public void testSystemSegmentNoConcatAllowed() throws Exception { ChunkedSegmentStorage segmentStorage = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStore, executorService(), config); segmentStorage.initialize(epoch); - segmentStorage.bootstrap(snapshotInfoStore).join(); + segmentStorage.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); + 
segmentStorage.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage); segmentStorage.create("test", null).get(); AssertExtensions.assertFutureThrows("concat() should throw", @@ -256,6 +268,12 @@ public void testSystemSegmentNoConcatAllowed() throws Exception { ex -> ex instanceof IllegalStateException); } + private void deleteGarbage(ChunkedSegmentStorage segmentStorage) { + val testTaskQueue = (InMemoryTaskQueueManager) segmentStorage.getGarbageCollector().getTaskQueue(); + val list = testTaskQueue.drain(segmentStorage.getGarbageCollector().getTaskQueueName(), 1000); + segmentStorage.getGarbageCollector().processBatch(list).join(); + } + /** * Tests a scenario when there is only one fail over. * The test adds a few chunks to the system segments and then fails over. @@ -292,9 +310,11 @@ public void testSimpleBootstrapWithOneFailover() throws Exception { ChunkedSegmentStorage segmentStorage1 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreBeforeCrash, executorService(), config); segmentStorage1.initialize(epoch); + segmentStorage1.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Bootstrap - segmentStorage1.bootstrap(snapshotInfoStore).join(); + segmentStorage1.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage1); checkSystemSegmentsLayout(segmentStorage1); // Simulate some writes to system segment, this should cause some new chunks being added. @@ -313,9 +333,11 @@ public void testSimpleBootstrapWithOneFailover() throws Exception { @Cleanup ChunkedSegmentStorage segmentStorage2 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreAfterCrash, executorService(), config); segmentStorage2.initialize(epoch); + segmentStorage2.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Bootstrap - segmentStorage2.bootstrap(snapshotInfoStore).join(); + segmentStorage2.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage2); checkSystemSegmentsLayout(segmentStorage2); // Validate @@ -359,9 +381,11 @@ public void testSimpleBootstrapWithTwoFailovers() throws Exception { ChunkedSegmentStorage segmentStorage1 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreBeforeCrash, executorService(), config); segmentStorage1.initialize(epoch); + segmentStorage1.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Bootstrap - segmentStorage1.bootstrap(snapshotInfoStore).join(); + segmentStorage1.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage1); checkSystemSegmentsLayout(segmentStorage1); // Simulate some writes to system segment, this should cause some new chunks being added. 
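The initialize-bootstrap-deleteGarbage sequence above repeats in every failover test below. Read in isolation, the drain-and-process idiom amounts to roughly the standalone helper sketched here; the class name is invented for illustration, but it only uses accessors that appear in this patch, and it runs deterministically because these test configs set garbageCollectionDelay to Duration.ZERO.

import io.pravega.segmentstore.storage.chunklayer.ChunkedSegmentStorage;
import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager;
import lombok.val;

final class GarbageCollectionTestSupport {
    // Drains up to maxTasks pending garbage-collection tasks and processes them inline,
    // rather than waiting for a background thread to pick them up.
    static void deleteGarbage(ChunkedSegmentStorage segmentStorage, int maxTasks) {
        val gc = segmentStorage.getGarbageCollector();
        val taskQueue = (InMemoryTaskQueueManager) gc.getTaskQueue();
        val batch = taskQueue.drain(gc.getTaskQueueName(), maxTasks);
        gc.processBatch(batch).join();
    }
}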
@@ -378,9 +402,11 @@ public void testSimpleBootstrapWithTwoFailovers() throws Exception { @Cleanup ChunkedSegmentStorage segmentStorage2 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreAfterCrash, executorService(), config); segmentStorage2.initialize(epoch); + segmentStorage2.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Bootstrap - segmentStorage2.bootstrap(snapshotInfoStore).join(); + segmentStorage2.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage2); checkSystemSegmentsLayout(segmentStorage2); val h2 = segmentStorage2.openWrite(systemSegmentName).join(); @@ -433,9 +459,11 @@ public void testSimpleBootstrapWithPartialDataWrite() throws Exception { ChunkedSegmentStorage segmentStorage1 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStore, executorService(), config); segmentStorage1.initialize(epoch); + segmentStorage1.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Bootstrap - segmentStorage1.bootstrap(snapshotInfoStore).join(); + segmentStorage1.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage1); checkSystemSegmentsLayout(segmentStorage1); // Simulate some writes to system segment, this should cause some new chunks being added. @@ -465,9 +493,11 @@ @Cleanup ChunkedSegmentStorage segmentStorage2 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreAfterCrash, executorService(), config); segmentStorage2.initialize(epoch); + segmentStorage2.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Bootstrap - segmentStorage2.bootstrap(snapshotInfoStore).join(); + segmentStorage2.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage2); checkSystemSegmentsLayout(segmentStorage2); // Validate @@ -479,7 +509,6 @@ public void testSimpleBootstrapWithPartialDataWrite() throws Exception { Assert.assertEquals("Hello World", new String(out)); } - /** * Tests a scenario when there are multiple failovers. * The test adds a few chunks to the system segments and then fails over.
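For context, the FlakyChunkStorage tests earlier in this patch fail every prime-th operation that touches a journal chunk. Stripped of the builder plumbing, the interception logic reduces to roughly the sketch below; PrimeFaultInjector is an invented name, while the match pattern and the exception message are taken from the patch.

import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

final class PrimeFaultInjector {
    private final AtomicLong matchingCalls = new AtomicLong();
    private final int prime;

    PrimeFaultInjector(int prime) {
        this.prime = prime;
    }

    // Fails deterministically on every prime-th call against a journal chunk, mirroring
    // matchPredicate(n -> n % prime == 0) combined with matchRegEx("_sysjournal").
    void intercept(String chunkName) throws IOException {
        if (chunkName.contains("_sysjournal") && matchingCalls.incrementAndGet() % prime == 0) {
            throw new IOException("Intentional");
        }
    }
}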
@@ -493,16 +522,19 @@ public void testSimpleBootstrapWithMultipleFailovers() throws Exception { val containerId = 42; @Cleanup ChunkStorage chunkStorage = getChunkStorage(); - testSimpleBootstrapWithMultipleFailovers(containerId, chunkStorage, null); + val policy = new SegmentRollingPolicy(100); + val config = getDefaultConfigBuilder(policy) + .selfCheckEnabled(true) + .build(); + + testSimpleBootstrapWithMultipleFailovers(containerId, chunkStorage, config, null); } - private void testSimpleBootstrapWithMultipleFailovers(int containerId, ChunkStorage chunkStorage, Consumer faultInjection) throws Exception { + private void testSimpleBootstrapWithMultipleFailovers(int containerId, ChunkStorage chunkStorage, ChunkedSegmentStorageConfig config, Consumer faultInjection) throws Exception { @Cleanup CleanupHelper cleanupHelper = new CleanupHelper(); String systemSegmentName = SystemJournal.getChunkStorageSystemSegments(containerId)[0]; long epoch = 0; - val policy = new SegmentRollingPolicy(100); - val config = getDefaultConfigBuilder(policy).build(); val data = new InMemorySnapshotInfoStore(); val snapshotInfoStore = new SnapshotInfoStore(containerId, snapshotId -> data.setSnapshotId(containerId, snapshotId), @@ -520,8 +552,10 @@ private void testSimpleBootstrapWithMultipleFailovers(int containerId, ChunkStor cleanupHelper.add(segmentStorageInLoop); segmentStorageInLoop.initialize(epoch); + segmentStorageInLoop.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); - segmentStorageInLoop.bootstrap(snapshotInfoStore).join(); + segmentStorageInLoop.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorageInLoop); checkSystemSegmentsLayout(segmentStorageInLoop); val h = segmentStorageInLoop.openWrite(systemSegmentName).join(); @@ -551,8 +585,10 @@ private void testSimpleBootstrapWithMultipleFailovers(int containerId, ChunkStor @Cleanup ChunkedSegmentStorage segmentStorageFinal = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreFinal, executorService(), config); segmentStorageFinal.initialize(epoch); + segmentStorageFinal.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); - segmentStorageFinal.bootstrap(snapshotInfoStore).join(); + segmentStorageFinal.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorageFinal); checkSystemSegmentsLayout(segmentStorageFinal); val info = segmentStorageFinal.getStreamSegmentInfo(systemSegmentName, null).join(); @@ -565,32 +601,18 @@ private void testSimpleBootstrapWithMultipleFailovers(int containerId, ChunkStor Assert.assertEquals(expected, actual); } - @Test - public void testSimpleBootstrapWithIncompleteSnapshot() throws Exception { - val containerId = 42; - @Cleanup - ChunkStorage chunkStorage = getChunkStorage(); - testSimpleBootstrapWithMultipleFailovers(containerId, chunkStorage, epoch -> { - val snapShotFile = NameUtils.getSystemJournalSnapshotFileName(containerId, epoch, 1); - val size = 1; - if (chunkStorage.supportsTruncation()) { - chunkStorage.truncate(ChunkHandle.writeHandle(snapShotFile), size).join(); - } else { - val bytes = new byte[size]; - chunkStorage.read(ChunkHandle.readHandle(snapShotFile), 0, size, bytes, 0).join(); - chunkStorage.delete(ChunkHandle.writeHandle(snapShotFile)).join(); - chunkStorage.createWithContent(snapShotFile, size, new ByteArrayInputStream(bytes)).join(); - } - }); - } - @Test public void testSimpleBootstrapWithMissingSnapshot() throws Exception { val containerId = 42; @Cleanup ChunkStorage chunkStorage = getChunkStorage(); + val 
policy = new SegmentRollingPolicy(100); + val config = getDefaultConfigBuilder(policy) + .selfCheckEnabled(true) + .build(); + try { - testSimpleBootstrapWithMultipleFailovers(containerId, chunkStorage, epoch -> { + testSimpleBootstrapWithMultipleFailovers(containerId, chunkStorage, config, epoch -> { val snapShotFile = NameUtils.getSystemJournalSnapshotFileName(containerId, epoch, 1); chunkStorage.delete(ChunkHandle.writeHandle(snapShotFile)).join(); }); @@ -641,8 +663,10 @@ public void testSimpleBootstrapWithMultipleFailoversWithTruncate() throws Except cleanupHelper.add(segmentStorageInLoop); segmentStorageInLoop.initialize(epoch); + segmentStorageInLoop.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); - segmentStorageInLoop.bootstrap(snapshotInfoStore).join(); + segmentStorageInLoop.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorageInLoop); checkSystemSegmentsLayout(segmentStorageInLoop); val h = segmentStorageInLoop.openWrite(systemSegmentName).join(); @@ -694,8 +718,10 @@ public void testSimpleBootstrapWithMultipleFailoversWithTruncate() throws Except ChunkedSegmentStorage segmentStorageFinal = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreFinal, executorService(), config); cleanupHelper.add(segmentStorageFinal); segmentStorageFinal.initialize(epoch); + segmentStorageFinal.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); - segmentStorageFinal.bootstrap(snapshotInfoStore).join(); + segmentStorageFinal.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorageFinal); checkSystemSegmentsLayout(segmentStorageFinal); val info = segmentStorageFinal.getStreamSegmentInfo(systemSegmentName, null).join(); @@ -788,9 +814,10 @@ private void testBootstrapWithTruncate(String initialGarbageThatIsTruncated, Str @Cleanup ChunkedSegmentStorage segmentStorage1 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreBeforeCrash, executorService(), config); segmentStorage1.initialize(epoch); - + segmentStorage1.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); // Bootstrap - segmentStorage1.bootstrap(snapshotInfoStore).join(); + segmentStorage1.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage1); checkSystemSegmentsLayout(segmentStorage1); // Simulate some writes to system segment, this should cause some new chunks being added. 
@@ -809,8 +836,10 @@ private void testBootstrapWithTruncate(String initialGarbageThatIsTruncated, Str @Cleanup ChunkedSegmentStorage segmentStorage2 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreAfterCrash, executorService(), config); segmentStorage2.initialize(epoch); + segmentStorage2.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); - segmentStorage2.bootstrap(snapshotInfoStore).join(); + segmentStorage2.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage2); checkSystemSegmentsLayout(segmentStorage2); val h2 = segmentStorage2.openWrite(systemSegmentName).join(); @@ -866,8 +895,11 @@ public void testSimpleBootstrapWithTwoTruncates() throws Exception { @Cleanup ChunkedSegmentStorage segmentStorage1 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreBeforeCrash, executorService(), config); segmentStorage1.initialize(epoch); + segmentStorage1.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); + // Bootstrap - segmentStorage1.bootstrap(snapshotInfoStore).join(); + segmentStorage1.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage1); checkSystemSegmentsLayout(segmentStorage1); // Simulate some writes to system segment, this should cause some new chunks being added. @@ -884,8 +916,10 @@ public void testSimpleBootstrapWithTwoTruncates() throws Exception { @Cleanup ChunkedSegmentStorage segmentStorage2 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreAfterCrash, executorService(), config); segmentStorage2.initialize(epoch); + segmentStorage2.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); - segmentStorage2.bootstrap(snapshotInfoStore).join(); + segmentStorage2.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage2); checkSystemSegmentsLayout(segmentStorage2); val h2 = segmentStorage2.openWrite(systemSegmentName).join(); @@ -1069,7 +1103,6 @@ public void testSimpleOperationSequence() throws Exception { TestUtils.checkSegmentLayout(metadataStoreAfterCrash2, systemSegmentName, policy.getMaxLength(), 10); } - /** * Test simple chunk truncation. * We failover two times to test correct interaction between snapshot and system logs. @@ -1130,7 +1163,7 @@ public void testSimpleTruncation() throws Exception { metadataStoreAfterCrash, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); - SystemJournal systemJournalAfter = new SystemJournal(containerId, chunkStorage, metadataStoreAfterCrash, garbageCollector2, config, executorService() ); + SystemJournal systemJournalAfter = new SystemJournal(containerId, chunkStorage, metadataStoreAfterCrash, garbageCollector2, config, executorService()); systemJournalAfter.bootstrap(2, snapshotInfoStore).join(); @@ -1191,251 +1224,126 @@ public void testSimpleTruncation() throws Exception { TestUtils.checkSegmentBounds(metadataStoreAfterCrash3, systemSegmentName, 20, 20); } - - /** - * Check system segment layout. + * Test concurrent writes to storage system segments by simulating multiple concurrent writers. + * + * @throws Exception Throws exception in case of any error.
*/ - private void checkSystemSegmentsLayout(ChunkedSegmentStorage segmentStorage) throws Exception { - for (String systemSegment : segmentStorage.getSystemJournal().getSystemSegments()) { - TestUtils.checkChunksExistInStorage(segmentStorage.getChunkStorage(), segmentStorage.getMetadataStore(), systemSegment); - } - } - @Test - public void testChunkAddedRecordSerialization() throws Exception { - testSystemJournalRecordSerialization(SystemJournal.ChunkAddedRecord.builder() - .segmentName("segmentName") - .newChunkName("newChunkName") - .oldChunkName("oldChunkName") - .offset(1) - .build()); - - // With nullable values - testSystemJournalRecordSerialization(SystemJournal.ChunkAddedRecord.builder() - .segmentName("segmentName") - .newChunkName("newChunkName") - .oldChunkName(null) - .offset(1) - .build()); - } + public void testSystemSegmentConcurrency() throws Exception { + @Cleanup + ChunkStorage chunkStorage = getChunkStorage(); + @Cleanup + ChunkMetadataStore metadataStoreBeforeCrash = getMetadataStore(); + @Cleanup + ChunkMetadataStore metadataStoreAfterCrash = getMetadataStore(); - @Test - public void testTruncationRecordSerialization() throws Exception { - testSystemJournalRecordSerialization(SystemJournal.TruncationRecord.builder() - .segmentName("segmentName") - .offset(1) - .firstChunkName("firstChunkName") - .startOffset(2) - .build()); - } + int containerId = 42; + int maxLength = 8; + long epoch = 1; + val policy = new SegmentRollingPolicy(maxLength); + val config = getDefaultConfigBuilder(policy).build(); - private void testSystemJournalRecordSerialization(SystemJournal.SystemJournalRecord original) throws Exception { - val serializer = new SystemJournal.SystemJournalRecord.SystemJournalRecordSerializer(); - val bytes = serializer.serialize(original); - val obj = serializer.deserialize(bytes); - Assert.assertEquals(original, obj); - } + val snapshotData = new InMemorySnapshotInfoStore(); + val snapshotInfoStore = new SnapshotInfoStore(containerId, + snapshotId -> snapshotData.setSnapshotId(containerId, snapshotId), + () -> snapshotData.getSnapshotId(containerId)); - @Test - public void testSystemJournalRecordBatchSerialization() throws Exception { - ArrayList lst = new ArrayList(); - testSystemJournalRecordBatchSerialization( - SystemJournal.SystemJournalRecordBatch.builder() - .systemJournalRecords(lst) - .build()); - - ArrayList lst2 = new ArrayList(); - lst2.add(SystemJournal.ChunkAddedRecord.builder() - .segmentName("segmentName") - .newChunkName("newChunkName") - .oldChunkName("oldChunkName") - .offset(1) - .build()); - lst2.add(SystemJournal.ChunkAddedRecord.builder() - .segmentName("segmentName") - .newChunkName("newChunkName") - .oldChunkName(null) - .offset(1) - .build()); - lst2.add(SystemJournal.TruncationRecord.builder() - .segmentName("segmentName") - .offset(1) - .firstChunkName("firstChunkName") - .startOffset(2) - .build()); - testSystemJournalRecordBatchSerialization( - SystemJournal.SystemJournalRecordBatch.builder() - .systemJournalRecords(lst) - .build()); - } + // Start container with epoch 1 + @Cleanup + ChunkedSegmentStorage segmentStorage1 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreBeforeCrash, executorService(), config); - private void testSystemJournalRecordBatchSerialization(SystemJournal.SystemJournalRecordBatch original) throws Exception { - val serializer = new SystemJournal.SystemJournalRecordBatch.SystemJournalRecordBatchSerializer(); - val bytes = serializer.serialize(original); - val obj = serializer.deserialize(bytes); - 
Assert.assertEquals(original, obj); - } + segmentStorage1.initialize(epoch); - @Test - public void testSnapshotRecordSerialization() throws Exception { - - ArrayList list = new ArrayList<>(); - list.add(ChunkMetadata.builder() - .name("name") - .nextChunk("nextChunk") - .length(1) - .status(2) - .build()); - list.add(ChunkMetadata.builder() - .name("name") - .length(1) - .status(2) - .build()); - - testSegmentSnapshotRecordSerialization( - SystemJournal.SegmentSnapshotRecord.builder() - .segmentMetadata(SegmentMetadata.builder() - .name("name") - .length(1) - .chunkCount(2) - .startOffset(3) - .status(5) - .maxRollinglength(6) - .firstChunk("firstChunk") - .lastChunk("lastChunk") - .lastModified(7) - .firstChunkStartOffset(8) - .lastChunkStartOffset(9) - .ownerEpoch(10) - .build()) - .chunkMetadataCollection(list) - .build()); - - testSegmentSnapshotRecordSerialization( - SystemJournal.SegmentSnapshotRecord.builder() - .segmentMetadata(SegmentMetadata.builder() - .name("name") - .length(1) - .chunkCount(2) - .startOffset(3) - .status(5) - .maxRollinglength(6) - .firstChunk(null) - .lastChunk(null) - .lastModified(7) - .firstChunkStartOffset(8) - .lastChunkStartOffset(9) - .ownerEpoch(10) - .build()) - .chunkMetadataCollection(list) - .build()); - } + // Bootstrap + segmentStorage1.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); + segmentStorage1.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage1); - private void testSegmentSnapshotRecordSerialization(SystemJournal.SegmentSnapshotRecord original) throws Exception { - val serializer = new SystemJournal.SegmentSnapshotRecord.Serializer(); - val bytes = serializer.serialize(original); - val obj = serializer.deserialize(bytes); - Assert.assertEquals(original, obj); - } + checkSystemSegmentsLayout(segmentStorage1); - @Test - public void testSystemSnapshotRecordSerialization() throws Exception { - - ArrayList list1 = new ArrayList<>(); - list1.add(ChunkMetadata.builder() - .name("name1") - .nextChunk("nextChunk1") - .length(1) - .status(2) - .build()); - list1.add(ChunkMetadata.builder() - .name("name12") - .length(1) - .status(2) - .build()); - - ArrayList list2 = new ArrayList<>(); - list2.add(ChunkMetadata.builder() - .name("name2") - .nextChunk("nextChunk2") - .length(1) - .status(3) - .build()); - list2.add(ChunkMetadata.builder() - .name("name22") - .length(1) - .status(3) - .build()); - - ArrayList segmentlist = new ArrayList<>(); - - segmentlist.add( - SystemJournal.SegmentSnapshotRecord.builder() - .segmentMetadata(SegmentMetadata.builder() - .name("name1") - .length(1) - .chunkCount(2) - .startOffset(3) - .status(5) - .maxRollinglength(6) - .firstChunk("firstChunk111") - .lastChunk("lastChun111k") - .lastModified(7) - .firstChunkStartOffset(8) - .lastChunkStartOffset(9) - .ownerEpoch(10) - .build()) - .chunkMetadataCollection(list1) - .build()); - - segmentlist.add( - SystemJournal.SegmentSnapshotRecord.builder() - .segmentMetadata(SegmentMetadata.builder() - .name("name2") - .length(1) - .chunkCount(2) - .startOffset(3) - .status(5) - .maxRollinglength(6) - .firstChunk(null) - .lastChunk(null) - .lastModified(7) - .firstChunkStartOffset(8) - .lastChunkStartOffset(9) - .ownerEpoch(10) - .build()) - .chunkMetadataCollection(list2) - .build()); - val systemSnapshot = SystemJournal.SystemSnapshotRecord.builder() - .epoch(42) - .fileIndex(7) - .segmentSnapshotRecords(segmentlist) - .build(); - testSystemSnapshotRecordSerialization(systemSnapshot); + // Simulate some writes to system segment, 
this should cause some new chunks being added. + val writeSize = 10; + val numWrites = 10; + val numOfStorageSystemSegments = SystemJournal.getChunkStorageSystemSegments(containerId).length; + val data = new byte[numOfStorageSystemSegments][writeSize * numWrites]; + + var futures = new ArrayList<CompletableFuture<Void>>(); + val rnd = new Random(0); + for (int i = 0; i < numOfStorageSystemSegments; i++) { + final int k = i; + futures.add(CompletableFuture.runAsync(() -> { + rnd.nextBytes(data[k]); + String systemSegmentName = SystemJournal.getChunkStorageSystemSegments(containerId)[k]; + val h = segmentStorage1.openWrite(systemSegmentName).join(); + // Init + long offset = 0; + for (int j = 0; j < numWrites; j++) { + segmentStorage1.write(h, offset, new ByteArrayInputStream(data[k], writeSize * j, writeSize), writeSize, null).join(); + offset += writeSize; + } + val info = segmentStorage1.getStreamSegmentInfo(systemSegmentName, null).join(); + Assert.assertEquals(writeSize * numWrites, info.getLength()); + byte[] out = new byte[writeSize * numWrites]; + val hr = segmentStorage1.openRead(systemSegmentName).join(); + segmentStorage1.read(hr, 0, out, 0, writeSize * numWrites, null).join(); + Assert.assertArrayEquals(data[k], out); + }, executorService())); + } + + Futures.allOf(futures).join(); + // Step 2 + // Start container with epoch 2 + epoch++; + + @Cleanup + ChunkedSegmentStorage segmentStorage2 = new ChunkedSegmentStorage(containerId, chunkStorage, metadataStoreAfterCrash, executorService(), config); + segmentStorage2.initialize(epoch); + + // Bootstrap + segmentStorage2.getGarbageCollector().initialize(new InMemoryTaskQueueManager()).join(); + segmentStorage2.bootstrap(snapshotInfoStore, null).join(); + deleteGarbage(segmentStorage2); + checkSystemSegmentsLayout(segmentStorage2); + + // Validate + for (int i = 0; i < numOfStorageSystemSegments; i++) { + String systemSegmentName = SystemJournal.getChunkStorageSystemSegments(containerId)[i]; + val info = segmentStorage2.getStreamSegmentInfo(systemSegmentName, null).join(); + Assert.assertEquals(writeSize * numWrites, info.getLength()); + byte[] out = new byte[writeSize * numWrites]; + val hr = segmentStorage2.openRead(systemSegmentName).join(); + segmentStorage2.read(hr, 0, out, 0, writeSize * numWrites, null).join(); + Assert.assertArrayEquals(data[i], out); + } } - private void testSystemSnapshotRecordSerialization(SystemJournal.SystemSnapshotRecord original) throws Exception { - val serializer = new SystemJournal.SystemSnapshotRecord.Serializer(); - val bytes = serializer.serialize(original); - val obj = serializer.deserialize(bytes); - Assert.assertEquals(original, obj); + /** + * Check system segment layout. + */ + private void checkSystemSegmentsLayout(ChunkedSegmentStorage segmentStorage) throws Exception { + for (String systemSegment : segmentStorage.getSystemJournal().getSystemSegments()) { + TestUtils.checkChunksExistInStorage(segmentStorage.getChunkStorage(), segmentStorage.getMetadataStore(), systemSegment); + } } /** * Tests {@link SystemJournal} with a non-appendable {@link ChunkStorage} using {@link SystemJournalTests}.
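The new testSystemSegmentConcurrency above uses a plain fan-out/join idiom. Reduced to its core (writeAndVerify is a hypothetical stand-in for the per-segment lambda body; executorService() comes from the test base class and Futures from io.pravega.common.concurrent):

    // One async task per system segment; join() propagates the first failure, if any.
    val futures = new ArrayList<CompletableFuture<Void>>();
    for (int i = 0; i < numOfStorageSystemSegments; i++) {
        final int k = i;   // lambdas may capture only (effectively) final locals
        futures.add(CompletableFuture.runAsync(() -> writeAndVerify(k), executorService()));
    }
    Futures.allOf(futures).join();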
*/ public static class NonAppendableChunkStorageSystemJournalTests extends SystemJournalTests { + @Override @Before public void before() throws Exception { super.before(); } + @Override @After public void after() throws Exception { super.after(); } + @Override protected ChunkStorage getChunkStorage() throws Exception { val chunkStorage = new InMemoryChunkStorage(executorService()); chunkStorage.setShouldSupportAppend(false); diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/TestUtils.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/TestUtils.java index cf41b533ce3..4ea9148fba7 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/TestUtils.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/chunklayer/TestUtils.java @@ -20,13 +20,17 @@ import io.pravega.segmentstore.storage.metadata.ReadIndexBlockMetadata; import io.pravega.segmentstore.storage.metadata.SegmentMetadata; import io.pravega.segmentstore.storage.metadata.StorageMetadata; +import io.pravega.segmentstore.storage.mocks.InMemoryTaskQueueManager; import io.pravega.shared.NameUtils; import lombok.val; import org.junit.Assert; import java.util.ArrayList; +import java.util.HashMap; import java.util.HashSet; +import java.util.Set; import java.util.TreeMap; +import java.util.stream.Collectors; /** * Test utility. @@ -101,7 +105,7 @@ public static void checkSegmentLayout(ChunkMetadataStore metadataStore, String s // Assert Assert.assertNotNull(segmentMetadata.getFirstChunk()); Assert.assertNotNull(segmentMetadata.getLastChunk()); - long expectedLength = 0; + long expectedLength = segmentMetadata.getFirstChunkStartOffset(); int i = 0; val chunks = getChunkList(metadataStore, segmentName); for (val chunk : chunks) { @@ -239,13 +243,61 @@ public static ArrayList getChunkList(ChunkMetadataStore metadataS while (null != current) { val chunk = (ChunkMetadata) txn.get(current).get(); Assert.assertNotNull(chunk); - chunkList.add(chunk); + chunkList.add((ChunkMetadata) chunk.deepCopy()); current = chunk.getNextChunk(); } return chunkList; } } + /** + * Gets the list of names of chunks for the given segment. + * + * @param metadataStore Metadata store to query. + * @param key Key. + * @return List of names of chunks for the segment. + * @throws Exception Exceptions are thrown in case of any errors. + */ + public static Set getChunkNameList(ChunkMetadataStore metadataStore, String key) throws Exception { + return getChunkList(metadataStore, key).stream().map( c -> c.getName()).collect(Collectors.toSet()); + } + + /** + * Checks garbage collection queue to ensure new chunks and truncated chunks are added to GC queue. + * + * @param chunkedSegmentStorage instance of {@link ChunkedSegmentStorage} + * @param beforeSet set of chunks before. + * @param afterSet set of chunks after. + */ + public static void checkGarbageCollectionQueue(ChunkedSegmentStorage chunkedSegmentStorage, Set beforeSet, Set afterSet) { + // Get the enqueued tasks. + // Need to de-dup + val tasks = new HashMap(); + val tasksList = ((InMemoryTaskQueueManager) chunkedSegmentStorage.getGarbageCollector().getTaskQueue()) + .drain(chunkedSegmentStorage.getGarbageCollector().getTaskQueueName(), Integer.MAX_VALUE).stream() + .collect(Collectors.toList()); + for (val task : tasksList) { + tasks.put(task.getName(), task); + } + + // All chunks not in new set must be enqueued for deletion. 
+ for ( val oldChunk: beforeSet) { + if (!afterSet.contains(oldChunk)) { + val task = tasks.get(oldChunk); + Assert.assertNotNull(task); + Assert.assertEquals(GarbageCollector.TaskInfo.DELETE_CHUNK, task.getTaskType() ); + } + } + // All chunks not in old set must be enqueued for deletion. + for ( val newChunk: afterSet) { + if (!beforeSet.contains(newChunk)) { + val task = tasks.get(newChunk); + Assert.assertNotNull(task); + Assert.assertEquals(GarbageCollector.TaskInfo.DELETE_CHUNK, task.getTaskType() ); + } + } + } + /** * Checks if all chunks actually exist in storage for given segment. * diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreMockTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreMockTests.java index e58b85d3747..16fdcfbc696 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreMockTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreMockTests.java @@ -26,6 +26,7 @@ import io.pravega.segmentstore.storage.mocks.MockStorageMetadata; import io.pravega.test.common.AssertExtensions; import io.pravega.test.common.ThreadPooledTestSuite; +import lombok.Cleanup; import lombok.val; import org.junit.Assert; import org.junit.Test; @@ -51,6 +52,7 @@ public class TableBasedMetadataStoreMockTests extends ThreadPooledTestSuite { @Test public void testIllegalStateExceptionDuringRead() { TableStore mockTableStore = mock(TableStore.class); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); @@ -94,13 +96,14 @@ public void testBadReadMissingDbObjectDuringRead() { @Test public void testBadReadMissingNoVersionDuringRead() { TableStore mockTableStore = mock(TableStore.class); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = spy(new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService())); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); when(tableBasedMetadataStore.read("test")).thenReturn(CompletableFuture.completedFuture(BaseMetadataStore.TransactionData.builder() .key("test") .value(new MockStorageMetadata("key", "value")) - .dbObject(new Long(10)) + .dbObject(Long.valueOf(10)) .build())); val txn = tableBasedMetadataStore.beginTransaction(true, "test"); AssertExtensions.assertFutureThrows( @@ -112,6 +115,7 @@ public void testBadReadMissingNoVersionDuringRead() { @Test public void testRandomExceptionDuringRead() { TableStore mockTableStore = mock(TableStore.class); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); @@ -129,6 +133,7 @@ public void testRandomExceptionDuringRead() { @Test public void testDataLogWriterNotPrimaryExceptionDuringWrite() { TableStore mockTableStore = mock(TableStore.class); + @Cleanup 
TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); @@ -149,6 +154,7 @@ public void testDataLogWriterNotPrimaryExceptionDuringWrite() { @Test public void testBadKeyVersionExceptionDuringWrite() { TableStore mockTableStore = mock(TableStore.class); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); @@ -169,6 +175,7 @@ public void testBadKeyVersionExceptionDuringWrite() { @Test public void testRandomRuntimeExceptionDuringWrite() { TableStore mockTableStore = mock(TableStore.class); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); @@ -189,6 +196,7 @@ public void testRandomRuntimeExceptionDuringWrite() { @Test public void testExceptionDuringRemove() throws Exception { TableStore mockTableStore = mock(TableStore.class); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); @@ -208,6 +216,7 @@ public void testExceptionDuringRemove() throws Exception { @Test public void testExceptionDuringRemoveWithSpy() throws Exception { TableStore mockTableStore = spy(new InMemoryTableStore(executorService())); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); // Step 1 - set up keys @@ -266,6 +275,7 @@ public void testExceptionDuringRemoveWithSpy() throws Exception { @Test public void testRandomExceptionDuringWrite() { TableStore mockTableStore = mock(TableStore.class); + @Cleanup TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); when(mockTableStore.createSegment(any(), any(), any())).thenReturn(Futures.failedFuture(new CompletionException(new StreamSegmentExistsException("test")))); diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreTests.java index 6633667d5e2..5760a333bf2 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/metadata/TableBasedMetadataStoreTests.java @@ -32,6 +32,7 @@ * Note that this is just a test for key-value store. Here the storage is NOT using this implementation. 
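The @Cleanup annotations added throughout these mock tests make sure each store is closed even when an assertion throws. Roughly, lombok expands every annotated local into a try/finally over the rest of the method (approximate expansion, not literal lombok output):

    TableBasedMetadataStore tableBasedMetadataStore = new TableBasedMetadataStore("test", mockTableStore,
            ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService());
    try {
        // ... remainder of the test method ...
    } finally {
        if (tableBasedMetadataStore != null) {
            tableBasedMetadataStore.close();   // runs on both success and failure paths
        }
    }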
*/ public class TableBasedMetadataStoreTests extends ChunkMetadataStoreTests { + @Override @Before public void setUp() throws Exception { val tableStore = new InMemoryTableStore(executorService()); @@ -43,10 +44,12 @@ public void setUp() throws Exception { */ public static class TableBasedMetadataSimpleStorageTests extends SimpleStorageTests { + @Override protected ChunkStorage getChunkStorage() throws Exception { return new InMemoryChunkStorage(executorService()); } + @Override protected ChunkMetadataStore getMetadataStore() throws Exception { TableStore tableStore = new InMemoryTableStore(executorService()); String tableName = "TableBasedMetadataSimpleStorageTests"; @@ -68,10 +71,12 @@ protected ChunkMetadataStore getCloneMetadataStore(ChunkMetadataStore metadataSt * Unit tests for {@link TableBasedMetadataStore} with {@link InMemoryChunkStorage} using {@link ChunkedRollingStorageTests}. */ public static class InMemorySimpleStorageRollingTests extends ChunkedRollingStorageTests { + @Override protected ChunkStorage getChunkStorage() throws Exception { return new InMemoryChunkStorage(executorService()); } + @Override protected ChunkMetadataStore getMetadataStore() throws Exception { TableStore tableStore = new InMemoryTableStore(executorService()); String tableName = "TableBasedMetadataSimpleStorageTests"; @@ -90,6 +95,7 @@ public ChunkMetadataStore createMetadataStore() throws Exception { return new TableBasedMetadataStore(tableName, tableStore, ChunkedSegmentStorageConfig.DEFAULT_CONFIG, executorService()); } + @Override public TestContext getTestContext() throws Exception { return new TableBasedMetadataTestContext(executorService()); } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemorySimpleStorageTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemorySimpleStorageTests.java index c98cdec73e7..32c8bdace9d 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemorySimpleStorageTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemorySimpleStorageTests.java @@ -30,6 +30,7 @@ * Unit tests for {@link InMemorySimpleStorage} using {@link SimpleStorageTests}. */ public class InMemorySimpleStorageTests extends SimpleStorageTests { + @Override protected ChunkStorage getChunkStorage() { return new InMemoryChunkStorage(executorService()); } @@ -38,6 +39,7 @@ protected ChunkStorage getChunkStorage() { * Unit tests for {@link InMemorySimpleStorage} using {@link ChunkedRollingStorageTests}. 
*/ public static class InMemorySimpleStorageRollingStorageTests extends ChunkedRollingStorageTests { + @Override protected ChunkStorage getChunkStorage() { return new InMemoryChunkStorage(executorService()); } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTableStore.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTableStore.java index 34d6e0d280b..b351dcbb792 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTableStore.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTableStore.java @@ -30,14 +30,8 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.contracts.tables.TableStore; -import lombok.NonNull; -import lombok.RequiredArgsConstructor; -import lombok.SneakyThrows; -import lombok.val; - -import javax.annotation.concurrent.GuardedBy; -import javax.annotation.concurrent.ThreadSafe; import java.time.Duration; import java.util.Collection; import java.util.Collections; @@ -50,6 +44,12 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.function.Function; import java.util.stream.Collectors; +import javax.annotation.concurrent.GuardedBy; +import javax.annotation.concurrent.ThreadSafe; +import lombok.NonNull; +import lombok.RequiredArgsConstructor; +import lombok.SneakyThrows; +import lombok.val; @RequiredArgsConstructor @ThreadSafe @@ -124,27 +124,22 @@ public CompletableFuture<List<TableEntry>> get(String segmentName, List - @Override - public CompletableFuture<Void> merge(String targetSegmentName, String sourceSegmentName, Duration timeout) { - throw new UnsupportedOperationException(); - } - - @Override - public CompletableFuture<Void> seal(String segmentName, Duration timeout) { + public CompletableFuture<AsyncIterator<IteratorItem<TableKey>>> keyIterator(String segmentName, IteratorArgs args) { throw new UnsupportedOperationException(); } @Override - public CompletableFuture<AsyncIterator<IteratorItem<TableKey>>> keyIterator(String segmentName, IteratorArgs args) { + public CompletableFuture<AsyncIterator<IteratorItem<TableEntry>>> entryIterator(String segmentName, IteratorArgs args) { throw new UnsupportedOperationException(); } @Override - public CompletableFuture<AsyncIterator<IteratorItem<TableEntry>>> entryIterator(String segmentName, IteratorArgs args) { + public CompletableFuture<AsyncIterator<IteratorItem<TableEntry>>> entryDeltaIterator(String segmentName, long fromPosition, Duration fetchTimeout) { throw new UnsupportedOperationException(); } @Override - public CompletableFuture<AsyncIterator<IteratorItem<TableEntry>>> entryDeltaIterator(String segmentName, long fromPosition, Duration fetchTimeout) { + public CompletableFuture<TableSegmentInfo> getInfo(String segmentName, Duration timeout) { throw new UnsupportedOperationException(); } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTaskQueueManager.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTaskQueueManager.java new file mode 100644 index 00000000000..9ad51f51165 --- /dev/null +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/mocks/InMemoryTaskQueueManager.java @@ -0,0 +1,59 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.segmentstore.storage.mocks; + +import com.google.common.base.Preconditions; +import io.pravega.segmentstore.storage.chunklayer.AbstractTaskQueueManager; +import io.pravega.segmentstore.storage.chunklayer.GarbageCollector; +import lombok.Getter; +import lombok.val; + +import java.util.ArrayList; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.LinkedBlockingQueue; + +public class InMemoryTaskQueueManager implements AbstractTaskQueueManager<GarbageCollector.TaskInfo> { + @Getter + private final ConcurrentHashMap<String, LinkedBlockingQueue<GarbageCollector.TaskInfo>> taskQueueMap = new ConcurrentHashMap<>(); + + @Override + public CompletableFuture<Void> addQueue(String queueName, Boolean ignoreProcessing) { + taskQueueMap.putIfAbsent(queueName, new LinkedBlockingQueue<GarbageCollector.TaskInfo>()); + return CompletableFuture.completedFuture(null); + } + + @Override + public synchronized CompletableFuture<Void> addTask(String queueName, GarbageCollector.TaskInfo task) { + val queue = taskQueueMap.get(queueName); + Preconditions.checkState(null != queue, "Attempt to access non existent queue."); + queue.add(task); + return CompletableFuture.completedFuture(null); + } + + public ArrayList<GarbageCollector.TaskInfo> drain(String queueName, int maxElements) { + val list = new ArrayList<GarbageCollector.TaskInfo>(); + val queue = taskQueueMap.get(queueName); + Preconditions.checkState(null != queue, "Attempt to access non existent queue."); + queue.drainTo(list, maxElements); + return list; + } + + @Override + public void close() throws Exception { + + } +} diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpSimpleStorageTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpSimpleStorageTests.java index 352d0c39d54..5d81b3043a2 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpSimpleStorageTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpSimpleStorageTests.java @@ -24,6 +24,7 @@ * Unit tests for {@link NoOpChunkStorage} using {@link SimpleStorageTests}. */ public class NoOpSimpleStorageTests extends SimpleStorageTests { + @Override protected ChunkStorage getChunkStorage() throws Exception { return new NoOpChunkStorage(executorService()); } @@ -37,6 +38,7 @@ protected void populate(byte[] data) { * Unit tests for {@link NoOpChunkStorage} using {@link ChunkedRollingStorageTests}.
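InMemoryTaskQueueManager above stands in for a real task queue so tests can inspect garbage-collection work synchronously. A short usage sketch built only from calls visible in this patch (segmentStorage stands for any ChunkedSegmentStorage under test):

    // Wire the mock queue into the garbage collector, run the scenario,
    // then drain the queue and inspect what was enqueued.
    val taskQueue = new InMemoryTaskQueueManager();
    val gc = segmentStorage.getGarbageCollector();
    gc.initialize(taskQueue).join();
    // ... operations that add or truncate chunks ...
    val tasks = taskQueue.drain(gc.getTaskQueueName(), Integer.MAX_VALUE);
    tasks.forEach(task -> System.out.println(task.getName()));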
*/ public static class NoOpRollingStorageTests extends ChunkedRollingStorageTests { + @Override protected ChunkStorage getChunkStorage() { return new NoOpChunkStorage(executorService()); } diff --git a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpStorageUserDataWriteOnlyTests.java b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpStorageUserDataWriteOnlyTests.java index 9b3a1fa39b3..6aef53c67e2 100644 --- a/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpStorageUserDataWriteOnlyTests.java +++ b/segmentstore/storage/src/test/java/io/pravega/segmentstore/storage/noop/NoOpStorageUserDataWriteOnlyTests.java @@ -21,6 +21,7 @@ import io.pravega.segmentstore.storage.StorageTestBase; import io.pravega.segmentstore.storage.SyncStorage; import io.pravega.segmentstore.storage.mocks.InMemoryStorageFactory; +import lombok.Cleanup; import lombok.val; import org.junit.Before; import org.junit.Test; @@ -113,6 +114,7 @@ public void testExist() { public void testUnseal() throws Exception { StorageExtraConfig config = StorageExtraConfig.builder().build(); NoOpStorage.NoOpSegmentHandle handle = new NoOpStorage.NoOpSegmentHandle("foo_unseal"); + @Cleanup NoOpStorage storage = new NoOpStorage(config, systemStorage, null); storage.unseal(handle); } @@ -121,6 +123,7 @@ public void testUnseal() throws Exception { public void testTruncate() throws Exception { StorageExtraConfig config = StorageExtraConfig.builder().build(); NoOpStorage.NoOpSegmentHandle handle = new NoOpStorage.NoOpSegmentHandle("foo_truncate"); + @Cleanup NoOpStorage storage = new NoOpStorage(config, systemStorage, null); storage.truncate(handle, 0); } @@ -129,6 +132,7 @@ public void testTruncate() throws Exception { public void testSupportTruncation() throws Exception { StorageExtraConfig config = StorageExtraConfig.builder().build(); NoOpStorage.NoOpSegmentHandle handle = new NoOpStorage.NoOpSegmentHandle("foo_supportTruncation"); + @Cleanup NoOpStorage storage = new NoOpStorage(config, systemStorage, null); assertEquals(systemStorage.supportsTruncation(), storage.supportsTruncation()); } diff --git a/shared/authplugin/src/test/java/io/pravega/auth/FakeAuthHandler.java b/shared/authplugin/src/test/java/io/pravega/auth/FakeAuthHandler.java index 87baf149b4c..611ba7ec5db 100644 --- a/shared/authplugin/src/test/java/io/pravega/auth/FakeAuthHandler.java +++ b/shared/authplugin/src/test/java/io/pravega/auth/FakeAuthHandler.java @@ -15,8 +15,6 @@ */ package io.pravega.auth; -import io.pravega.shared.security.auth.UserPrincipal; - import java.security.Principal; public class FakeAuthHandler implements AuthHandler { @@ -31,7 +29,7 @@ public String getHandlerName() { @Override public Principal authenticate(String token) { - return new UserPrincipal(token); + return new MockPrincipal(token); } @Override diff --git a/shared/authplugin/src/test/java/io/pravega/auth/MockPrincipal.java b/shared/authplugin/src/test/java/io/pravega/auth/MockPrincipal.java new file mode 100644 index 00000000000..57b8266f169 --- /dev/null +++ b/shared/authplugin/src/test/java/io/pravega/auth/MockPrincipal.java @@ -0,0 +1,26 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.auth; + +import java.io.Serializable; +import java.security.Principal; +import lombok.Data; + +@Data +public class MockPrincipal implements Principal, Serializable { + private static final long serialVersionUID = 1L; + private final String name; +} diff --git a/shared/authplugin/src/test/java/io/pravega/auth/TestAuthHandler.java b/shared/authplugin/src/test/java/io/pravega/auth/TestAuthHandler.java index af2177b09c2..247d30973e3 100644 --- a/shared/authplugin/src/test/java/io/pravega/auth/TestAuthHandler.java +++ b/shared/authplugin/src/test/java/io/pravega/auth/TestAuthHandler.java @@ -15,8 +15,6 @@ */ package io.pravega.auth; -import io.pravega.shared.security.auth.UserPrincipal; - import java.security.Principal; public class TestAuthHandler implements AuthHandler { @@ -31,7 +29,7 @@ public String getHandlerName() { @Override public Principal authenticate(String token) { - return new UserPrincipal(token); + return new MockPrincipal(token); } @Override diff --git a/shared/cluster/src/main/java/io/pravega/common/cluster/Cluster.java b/shared/cluster/src/main/java/io/pravega/common/cluster/Cluster.java index e73b17ce84e..27e640dfb40 100644 --- a/shared/cluster/src/main/java/io/pravega/common/cluster/Cluster.java +++ b/shared/cluster/src/main/java/io/pravega/common/cluster/Cluster.java @@ -60,4 +60,10 @@ public interface Cluster extends AutoCloseable { */ public Set<Host> getClusterMembers(); + /** + * Get the health status of the cluster connection. + * + * @return True if the cluster connection is healthy, false otherwise. + */ + public boolean isHealthy(); } diff --git a/shared/cluster/src/main/java/io/pravega/common/cluster/zkImpl/ClusterZKImpl.java b/shared/cluster/src/main/java/io/pravega/common/cluster/zkImpl/ClusterZKImpl.java index ac89357f3d7..f9af1d2fe0b 100644 --- a/shared/cluster/src/main/java/io/pravega/common/cluster/zkImpl/ClusterZKImpl.java +++ b/shared/cluster/src/main/java/io/pravega/common/cluster/zkImpl/ClusterZKImpl.java @@ -41,6 +41,7 @@ import java.util.Optional; import java.util.Set; import java.util.concurrent.Executor; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.stream.Collectors; import static io.pravega.common.cluster.ClusterListener.EventType.ERROR; @@ -64,6 +65,8 @@ public class ClusterZKImpl implements Cluster { private final CuratorFramework client; + private final AtomicBoolean isZKConnected = new AtomicBoolean(false); + private final Map<Host, PersistentNode> entryMap = new HashMap<>(INIT_SIZE); private Optional<PathChildrenCache> cache = Optional.empty(); @@ -73,6 +76,10 @@ public ClusterZKImpl(CuratorFramework zkClient, String clusterName) { if (client.getState().equals(CuratorFrameworkState.LATENT)) { client.start(); } + this.isZKConnected.set(client.getZookeeperClient().isConnected()); + // Listen for any ZooKeeper connection state changes. + client.getConnectionStateListenable().addListener( + (curatorClient, newState) -> this.isZKConnected.set(newState.isConnected())); } /** @@ -109,6 +116,11 @@ public void deregisterHost(Host host) { close(node); } + @Override + public boolean isHealthy() { + return isZKConnected.get(); + } + /** * Add Listener to the cluster.
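The ClusterZKImpl hunk above caches connectivity instead of querying ZooKeeper on every health probe. A standalone sketch of the same Curator pattern (class name, connection string, and retry policy are placeholders, not from the patch):

    import java.util.concurrent.atomic.AtomicBoolean;
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.RetryOneTime;

    public final class ZkHealthProbe implements AutoCloseable {
        private final CuratorFramework client;
        private final AtomicBoolean connected = new AtomicBoolean(false);

        public ZkHealthProbe(String connectString) {
            this.client = CuratorFrameworkFactory.newClient(connectString, new RetryOneTime(1000));
            // Refresh the cached flag whenever Curator reports a connection state change.
            this.client.getConnectionStateListenable().addListener(
                    (curatorClient, newState) -> connected.set(newState.isConnected()));
            this.client.start();
        }

        // Non-blocking: answers from the cached flag, no ZooKeeper round-trip.
        public boolean isHealthy() {
            return connected.get();
        }

        @Override
        public void close() {
            client.close();
        }
    }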
* diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AbortEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AbortEvent.java index bf79ba607b9..9ce46143aba 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AbortEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AbortEvent.java @@ -31,6 +31,7 @@ @Data @AllArgsConstructor public class AbortEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String stream; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AutoScaleEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AutoScaleEvent.java index 6111f74e033..fc3fa0a114e 100644 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AutoScaleEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/AutoScaleEvent.java @@ -32,6 +32,7 @@ public class AutoScaleEvent implements ControllerEvent { public static final byte UP = (byte) 0; public static final byte DOWN = (byte) 1; + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CommitEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CommitEvent.java index bfc9c16347c..371eebf04cc 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CommitEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CommitEvent.java @@ -30,6 +30,7 @@ @Data @AllArgsConstructor public class CommitEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String stream; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CreateReaderGroupEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CreateReaderGroupEvent.java index f8a3a549470..9728fcfc8c3 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CreateReaderGroupEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/CreateReaderGroupEvent.java @@ -35,6 +35,7 @@ @Data @AllArgsConstructor public class CreateReaderGroupEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final long requestId; private final String scope; @@ -111,7 +112,6 @@ private void read00(RevisionDataInput source, CreateReaderGroupEventBuilder eb) ImmutableMap.Builder endStreamCutBuilder = ImmutableMap.builder(); source.readMap(DataInput::readUTF, RGStreamCutRecord.SERIALIZER::deserialize, endStreamCutBuilder); eb.endingStreamCuts(endStreamCutBuilder.build()); - } } //endregion diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteReaderGroupEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteReaderGroupEvent.java index 796b5d485d1..7043246486e 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteReaderGroupEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteReaderGroupEvent.java @@ -31,6 +31,7 @@ @Data @AllArgsConstructor public class 
DeleteReaderGroupEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String rgName; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteStreamEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteStreamEvent.java index 1f63077bdc1..42f8857f91b 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteStreamEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/DeleteStreamEvent.java @@ -30,6 +30,7 @@ @Data @AllArgsConstructor public class DeleteStreamEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String stream; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/RGStreamCutRecord.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/RGStreamCutRecord.java index c99cd79cc77..c3bbb8b078e 100644 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/RGStreamCutRecord.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/RGStreamCutRecord.java @@ -20,23 +20,20 @@ import io.pravega.common.io.serialization.RevisionDataInput; import io.pravega.common.io.serialization.RevisionDataOutput; import io.pravega.common.io.serialization.VersionedSerializer; -import lombok.Builder; -import lombok.Data; -import lombok.NonNull; -import lombok.SneakyThrows; -import lombok.extern.slf4j.Slf4j; - import java.io.DataInput; import java.io.DataOutput; import java.io.IOException; import java.util.Collections; import java.util.Map; +import lombok.Builder; +import lombok.Data; +import lombok.NonNull; +import lombok.SneakyThrows; /** * This is data class for storing stream cuts (starting and ending) related to a ReaderGroup. 
*/ @Data -@Slf4j public class RGStreamCutRecord { public static final RGStreamCutRecordSerializer SERIALIZER = new RGStreamCutRecordSerializer(); /** diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/ScaleOpEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/ScaleOpEvent.java index 6e84b3e212b..d4e60bcb5c4 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/ScaleOpEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/ScaleOpEvent.java @@ -34,6 +34,7 @@ @Data @AllArgsConstructor public class ScaleOpEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String stream; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/SealStreamEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/SealStreamEvent.java index 926c1cfcfd2..f137d840de7 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/SealStreamEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/SealStreamEvent.java @@ -30,6 +30,7 @@ @Data @AllArgsConstructor public class SealStreamEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String stream; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/TruncateStreamEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/TruncateStreamEvent.java index d46812b67d8..37ccb343f1d 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/TruncateStreamEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/TruncateStreamEvent.java @@ -30,6 +30,7 @@ @Data @AllArgsConstructor public class TruncateStreamEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String stream; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateReaderGroupEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateReaderGroupEvent.java index a48bf8156f0..bcad2207290 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateReaderGroupEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateReaderGroupEvent.java @@ -34,6 +34,7 @@ @Data @AllArgsConstructor public class UpdateReaderGroupEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String rgName; diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateStreamEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateStreamEvent.java index 6513f70333c..02b8a565451 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateStreamEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/UpdateStreamEvent.java @@ -30,6 +30,7 @@ @Data @AllArgsConstructor public class UpdateStreamEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String stream; 
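The CreateTableEvent hunk just below adds rolloverSizeBytes by appending serialization revision 1 rather than editing revision 0; revisions are length-prefixed by design, so readers skip revisions they do not know and old data simply leaves the new field at its builder default. A sketch of the idiom with a hypothetical MyEvent (assumed to be a @Builder class whose builder implements ObjectBuilder<MyEvent>; only the revisioning pattern itself comes from the patch):

    // Illustrative revisioned serializer, modeled on the CreateTableEvent change below.
    private static class Serializer extends VersionedSerializer.WithBuilder<MyEvent, MyEvent.MyEventBuilder> {
        @Override
        protected MyEvent.MyEventBuilder newBuilder() {
            return MyEvent.builder();
        }

        @Override
        protected byte getWriteVersion() {
            return 0;
        }

        @Override
        protected void declareVersions() {
            version(0).revision(0, this::write00, this::read00);
            version(0).revision(1, this::write01, this::read01);   // additive; revision 0 is never edited
        }

        private void write00(MyEvent e, RevisionDataOutput target) throws IOException {
            target.writeUTF(e.getScope());
        }

        private void read00(RevisionDataInput source, MyEvent.MyEventBuilder b) throws IOException {
            b.scope(source.readUTF());
        }

        private void write01(MyEvent e, RevisionDataOutput target) throws IOException {
            target.writeLong(e.getRolloverSizeBytes());
        }

        private void read01(RevisionDataInput source, MyEvent.MyEventBuilder b) throws IOException {
            // Never invoked for data written before revision 1 existed,
            // so the builder keeps its default value for the new field.
            b.rolloverSizeBytes(source.readLong());
        }
    }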
diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/CreateTableEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/CreateTableEvent.java index dbb344da774..8e7c39c5dc5 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/CreateTableEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/CreateTableEvent.java @@ -33,6 +33,7 @@ @Data @AllArgsConstructor public class CreateTableEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scopeName; private final String kvtName; @@ -42,6 +43,7 @@ public class CreateTableEvent implements ControllerEvent { private final long timestamp; private final long requestId; private final UUID tableId; + private final long rolloverSizeBytes; @Override public String getKey() { @@ -71,6 +73,7 @@ protected byte getWriteVersion() { @Override protected void declareVersions() { version(0).revision(0, this::write00, this::read00); + version(0).revision(1, this::write01, this::read01); } private void write00(CreateTableEvent e, RevisionDataOutput target) throws IOException { @@ -94,6 +97,14 @@ private void read00(RevisionDataInput source, CreateTableEventBuilder eb) throws eb.primaryKeyLength(source.readInt()); eb.secondaryKeyLength(source.readInt()); } + + private void write01(CreateTableEvent e, RevisionDataOutput target) throws IOException { + target.writeLong(e.rolloverSizeBytes); + } + + private void read01(RevisionDataInput source, CreateTableEventBuilder eb) throws IOException { + eb.rolloverSizeBytes(source.readLong()); + } } //endregion } diff --git a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/DeleteTableEvent.java b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/DeleteTableEvent.java index 4dc62732b05..804395e3943 100755 --- a/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/DeleteTableEvent.java +++ b/shared/controller-api/src/main/java/io/pravega/shared/controller/event/kvtable/DeleteTableEvent.java @@ -33,6 +33,7 @@ @Data @AllArgsConstructor public class DeleteTableEvent implements ControllerEvent { + @SuppressWarnings("unused") private static final long serialVersionUID = 1L; private final String scope; private final String kvtName; diff --git a/shared/controller-api/src/main/proto/Controller.proto b/shared/controller-api/src/main/proto/Controller.proto index 41302a4bbe9..caafce433fc 100644 --- a/shared/controller-api/src/main/proto/Controller.proto +++ b/shared/controller-api/src/main/proto/Controller.proto @@ -93,6 +93,7 @@ message ReaderGroupConfiguration { string readerGroupId = 8; repeated StreamCut startingStreamCuts = 9; repeated StreamCut endingStreamCuts = 10; + int64 rolloverSizeBytes = 11; } message ReaderGroupConfigResponse { @@ -160,6 +161,7 @@ message KeyValueTableConfig { int32 partitionCount = 3; int32 primaryKeyLength = 4; int32 secondaryKeyLength = 5; + int64 rolloverSizeBytes = 6; } message KeyValueTableConfigResponse { @@ -219,6 +221,7 @@ message UpdateStreamStatus { FAILURE = 1; STREAM_NOT_FOUND = 2; SCOPE_NOT_FOUND = 3; + STREAM_SEALED = 4; } Status status = 1; } @@ -397,6 +400,8 @@ message StreamConfig { ScalingPolicy scalingPolicy = 2; RetentionPolicy retentionPolicy = 3; Tags tags = 4; + int64 timestampAggregationTimeout = 5; + int64 rolloverSizeBytes = 6; } message Tags { diff --git 
a/shared/controller-api/src/main/swagger/Controller.yaml b/shared/controller-api/src/main/swagger/Controller.yaml index 41ca4c41c1a..68a18ea3420 100644 --- a/shared/controller-api/src/main/swagger/Controller.yaml +++ b/shared/controller-api/src/main/swagger/Controller.yaml @@ -141,8 +141,14 @@ paths: - "Streams" parameters: - in: query - name: showInternalStreams - description: Optional flag whether to display system created streams. If not specified only user created streams will be returned + name: filter_type + description: Optional filter type; determines how filter_value is interpreted + required: false + type: string + enum: [showInternalStreams, tag] + - in: query + name: filter_value + description: Value to filter by; must match the filter_type passed with it. required: false type: string operationId: listStreams @@ -179,6 +185,12 @@ paths: $ref: "#/definitions/ScalingConfig" retentionPolicy: $ref: "#/definitions/RetentionConfig" + streamTags: + $ref: "#/definitions/TagsList" + timestampAggregationTimeout: + $ref: "#/definitions/TimestampAggregationTimeout" + rolloverSizeBytes: + $ref: "#/definitions/RolloverSizeBytes" produces: - application/json responses: @@ -239,6 +251,13 @@ paths: $ref: "#/definitions/ScalingConfig" retentionPolicy: $ref: "#/definitions/RetentionConfig" + streamTags: + $ref: "#/definitions/TagsList" + timestampAggregationTimeout: + $ref: "#/definitions/TimestampAggregationTimeout" + rolloverSizeBytes: + $ref: "#/definitions/RolloverSizeBytes" + produces: - application/json responses: @@ -584,6 +603,17 @@ paths: 500: description: Internal server error while fetching the health status of a given health contributor. definitions: + TimestampAggregationTimeout: + type: long + minimum: 0 + RolloverSizeBytes: + type: long + minimum: 0 + TagsList: + type: array + items: + type: string + maxLength: 256 ScalingEventList: type: object properties: @@ -621,6 +651,12 @@ definitions: $ref: "#/definitions/ScalingConfig" retentionPolicy: $ref: "#/definitions/RetentionConfig" + tags: + $ref: "#/definitions/TagsList" + timestampAggregationTimeout: + $ref: "#/definitions/TimestampAggregationTimeout" + rolloverSizeBytes: + $ref: "#/definitions/RolloverSizeBytes" ScalingConfig: type: object properties: diff --git a/shared/controller-api/src/test/java/io/pravega/shared/controller/event/ControllerEventSerializerTests.java b/shared/controller-api/src/test/java/io/pravega/shared/controller/event/ControllerEventSerializerTests.java index 1fe42d84c6b..1fdcf83b560 100644 --- a/shared/controller-api/src/test/java/io/pravega/shared/controller/event/ControllerEventSerializerTests.java +++ b/shared/controller-api/src/test/java/io/pravega/shared/controller/event/ControllerEventSerializerTests.java @@ -84,7 +84,7 @@ public void testUpdateStreamEvent() { @Test public void testCreateTableEvent() { testClass(() -> new CreateTableEvent(SCOPE, KVTABLE, 3, 4, 8, System.currentTimeMillis(), - 123L, UUID.randomUUID())); + 123L, UUID.randomUUID(), 0)); } @Test diff --git a/shared/controller-api/src/test/java/io/pravega/shared/controller/tracing/RPCTracingHelpersTest.java b/shared/controller-api/src/test/java/io/pravega/shared/controller/tracing/RPCTracingHelpersTest.java index 1e97b4671df..d593dee5032 100644 --- a/shared/controller-api/src/test/java/io/pravega/shared/controller/tracing/RPCTracingHelpersTest.java +++ b/shared/controller-api/src/test/java/io/pravega/shared/controller/tracing/RPCTracingHelpersTest.java @@ -27,7 +27,6 @@ import io.grpc.netty.NettyChannelBuilder; import io.pravega.common.tracing.RequestTracker; import lombok.Cleanup; -import
lombok.extern.slf4j.Slf4j; import org.junit.Test; import org.mockito.Mockito; @@ -38,11 +37,10 @@ /** * Test to check the correct management of tracing request headers by the client/server interceptors. */ -@Slf4j public class RPCTracingHelpersTest { @Test - @SuppressWarnings("unchecked") + @SuppressWarnings({ "unchecked", "rawtypes" }) public void testInterceptors() { String requestDescriptor = "createStream-myScope-myStream"; long requestId = 1234L; diff --git a/shared/health-bindings/src/main/java/io/pravega/shared/health/bindings/resources/HealthImpl.java b/shared/health-bindings/src/main/java/io/pravega/shared/health/bindings/resources/HealthImpl.java index 8a165091a33..8eecfe5d2ea 100644 --- a/shared/health-bindings/src/main/java/io/pravega/shared/health/bindings/resources/HealthImpl.java +++ b/shared/health-bindings/src/main/java/io/pravega/shared/health/bindings/resources/HealthImpl.java @@ -74,7 +74,6 @@ public void getHealth(SecurityContext securityContext, AsyncResponse asyncRespon private void getHealth(String id, SecurityContext securityContext, AsyncResponse asyncResponse, String method) { long traceId = LoggerHelpers.traceEnter(log, method); processRequest(() -> { - restAuthHelper.authenticateAuthorize(getAuthorizationHeader(), authorizationResource.ofScopes(), READ_UPDATE); Health health = endpoint.getHealth(id); Response response = Response.status(Response.Status.OK) .entity(adapter(health)) @@ -96,7 +95,6 @@ public void getLiveness(SecurityContext securityContext, AsyncResponse asyncResp private void getLiveness(String id, SecurityContext securityContext, AsyncResponse asyncResponse, String method) { long traceId = LoggerHelpers.traceEnter(log, method); processRequest(() -> { - restAuthHelper.authenticateAuthorize(getAuthorizationHeader(), authorizationResource.ofScopes(), READ_UPDATE); boolean alive = endpoint.isAlive(id); asyncResponse.resume(Response.status(Response.Status.OK) .entity(alive) @@ -138,7 +136,6 @@ public void getReadiness(SecurityContext securityContext, AsyncResponse asyncRes private void getReadiness(String id, SecurityContext securityContext, AsyncResponse asyncResponse, String method) { long traceId = LoggerHelpers.traceEnter(log, method); processRequest(() -> { - restAuthHelper.authenticateAuthorize(getAuthorizationHeader(), authorizationResource.ofScopes(), READ_UPDATE); boolean ready = endpoint.isReady(id); asyncResponse.resume(Response.status(Response.Status.OK) .entity(ready) @@ -159,7 +156,6 @@ public void getStatus(SecurityContext securityContext, AsyncResponse asyncRespon private void getStatus(String id, SecurityContext securityContext, AsyncResponse asyncResponse, String method) { long traceId = LoggerHelpers.traceEnter(log, method); processRequest(() -> { - restAuthHelper.authenticateAuthorize(getAuthorizationHeader(), authorizationResource.ofScopes(), READ_UPDATE); Status status = endpoint.getStatus(id); asyncResponse.resume(Response.status(Response.Status.OK) .entity(adapter(status)) diff --git a/shared/health-bindings/src/test/java/io/pravega/shared/health/bindings/HealthTests.java b/shared/health-bindings/src/test/java/io/pravega/shared/health/bindings/HealthTests.java index 5c2f3d2f024..5c3cf597450 100644 --- a/shared/health-bindings/src/test/java/io/pravega/shared/health/bindings/HealthTests.java +++ b/shared/health-bindings/src/test/java/io/pravega/shared/health/bindings/HealthTests.java @@ -55,12 +55,12 @@ public class HealthTests { private static final int INTERVAL = 1000; @Rule - public final Timeout globalTimeout = new Timeout(10 
* INTERVAL, TimeUnit.MILLISECONDS); + public final Timeout globalTimeout = new Timeout(20 * INTERVAL, TimeUnit.MILLISECONDS); private RESTServerConfig serverConfig; private RESTServer restServer; private Client client; - private HealthServiceManager healthServiceManager; + private HealthServiceManager healthServiceManager; @Before public void setup() throws Exception { @@ -108,7 +108,7 @@ protected URI getURI(String path) { @Test public void testHealth() throws Exception { // Register the HealthIndicator. - healthServiceManager.getRoot().register(new StaticHealthyContributor()); + healthServiceManager.register(new StaticHealthyContributor()); URI streamResourceURI = UriBuilder.fromUri(getURI("/v1/health")) .scheme(getURLScheme()).build(); @@ -134,7 +134,7 @@ public void testHealth() throws Exception { public void testContributorHealth() throws Exception { // Register the HealthIndicator. StaticHealthyContributor contributor = new StaticHealthyContributor(); - healthServiceManager.getRoot().register(contributor); + healthServiceManager.register(contributor); // Wait for contributor initialization. URI statusURI = UriBuilder.fromUri(getURI(String.format("/v1/health/status/%s", contributor.getName()))) @@ -164,7 +164,7 @@ public void testStatus() throws Exception { // Start with a HealthyIndicator. StaticHealthyContributor healthyIndicator = new StaticHealthyContributor(); - healthServiceManager.getRoot().register(healthyIndicator); + healthServiceManager.register(healthyIndicator); streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/status")) .scheme(getURLScheme()).build(); @@ -172,14 +172,14 @@ public void testStatus() throws Exception { // Adding an unhealthy indicator should change the Status. StaticFailingContributor failingIndicator = new StaticFailingContributor(); - healthServiceManager.getRoot().register(failingIndicator); + healthServiceManager.register(failingIndicator); streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/status")) .scheme(getURLScheme()).build(); assertStatus(streamResourceURI, HealthStatus.DOWN); // Make sure that even though we have a majority of healthy reports, we still are considered failing. - healthServiceManager.getRoot().register(new StaticHealthyContributor("sample-healthy-indicator-two")); + healthServiceManager.register(new StaticHealthyContributor("sample-healthy-indicator-two")); streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/status")) .scheme(getURLScheme()).build(); assertStatus(streamResourceURI, HealthStatus.DOWN); @@ -189,7 +189,7 @@ public void testStatus() throws Exception { public void testReadiness() throws Exception { // Start with a HealthyContributor. StaticHealthyContributor healthyIndicator = new StaticHealthyContributor(); - healthServiceManager.getRoot().register(healthyIndicator); + healthServiceManager.register(healthyIndicator); URI streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/readiness")) .scheme(getURLScheme()).build(); @@ -197,7 +197,7 @@ public void testReadiness() throws Exception { // Adding an unhealthy contributor should change the readiness status. StaticFailingContributor failingContributor = new StaticFailingContributor(); - healthServiceManager.getRoot().register(failingContributor); + healthServiceManager.register(failingContributor); streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/readiness")) .scheme(getURLScheme()).build(); @@ -208,7 +208,7 @@ public void testReadiness() throws Exception { public void testLiveness() throws Exception { // Start with a HealthyContributor. 
StaticHealthyContributor healthyIndicator = new StaticHealthyContributor(); - healthServiceManager.getRoot().register(healthyIndicator); + healthServiceManager.register(healthyIndicator); URI streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/liveness")) .scheme(getURLScheme()).build(); @@ -216,7 +216,7 @@ public void testLiveness() throws Exception { // Adding an unhealthy contributor should change the readiness. StaticFailingContributor failingIndicator = new StaticFailingContributor(); - healthServiceManager.getRoot().register(failingIndicator); + healthServiceManager.register(failingIndicator); streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/liveness")) .scheme(getURLScheme()).build(); @@ -226,7 +226,7 @@ public void testLiveness() throws Exception { @Test public void testDetails() { // Register the HealthIndicator. - healthServiceManager.getRoot().register(new StaticHealthyContributor()); + healthServiceManager.register(new StaticHealthyContributor()); URI streamResourceURI = UriBuilder.fromUri(getURI("/v1/health/details")) .scheme(getURLScheme()).build(); Response response = client.target(streamResourceURI).request().buildGet().invoke(); @@ -238,7 +238,7 @@ public void testDetails() { public void testContributorDetails() throws Exception { // Register the HealthIndicator. StaticHealthyContributor healthyIndicator = new StaticHealthyContributor(); - healthServiceManager.getRoot().register(healthyIndicator); + healthServiceManager.register(healthyIndicator); URI statusURI = UriBuilder.fromUri(getURI("/v1/health/status/" + StaticHealthyContributor.NAME)) .scheme(getURLScheme()).build(); @@ -328,6 +328,7 @@ public StaticHealthyContributor(String name) { super(name); } + @Override public Status doHealthCheck(Health.HealthBuilder builder) { Status status = Status.UP; Map details = new HashMap<>(); @@ -347,6 +348,7 @@ public StaticFailingContributor() { super("static-contributor-indicator"); } + @Override public Status doHealthCheck(Health.HealthBuilder builder) { Status status = Status.DOWN; Map details = new HashMap<>(); diff --git a/shared/health/src/main/java/io/pravega/shared/health/HealthContributor.java b/shared/health/src/main/java/io/pravega/shared/health/HealthContributor.java index 7916108fa19..d6730bcd00c 100644 --- a/shared/health/src/main/java/io/pravega/shared/health/HealthContributor.java +++ b/shared/health/src/main/java/io/pravega/shared/health/HealthContributor.java @@ -49,6 +49,7 @@ public interface HealthContributor extends AutoCloseable { /** * Closes the {@link HealthContributor} and forwards the closure to all its children. */ + @Override void close(); /** diff --git a/shared/health/src/main/java/io/pravega/shared/health/HealthServiceManager.java b/shared/health/src/main/java/io/pravega/shared/health/HealthServiceManager.java index a8cffacde7f..1df03eef8cd 100644 --- a/shared/health/src/main/java/io/pravega/shared/health/HealthServiceManager.java +++ b/shared/health/src/main/java/io/pravega/shared/health/HealthServiceManager.java @@ -19,7 +19,6 @@ import io.pravega.shared.health.impl.AbstractHealthContributor; import io.pravega.shared.health.impl.HealthEndpointImpl; import io.pravega.shared.health.impl.HealthServiceUpdaterImpl; -import lombok.Getter; import lombok.extern.slf4j.Slf4j; import java.time.Duration; @@ -32,7 +31,6 @@ public class HealthServiceManager implements AutoCloseable { * The root {@link HealthContributor} of the service. All {@link HealthContributor} objects are reachable from this * contributor. 
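* <p>Editorial sketch, not part of this change: typical use of the register/snapshot facade added
* below. The constructor argument and the contributor (borrowed from the tests above) are
* assumptions, shown only to illustrate that registration and snapshots now go through the manager:
* <pre>{@code
* HealthServiceManager manager = new HealthServiceManager(Duration.ofSeconds(10));
* manager.start();
* manager.register(new StaticHealthyContributor("my-component")); // attaches under the root
* Health snapshot = manager.getHealthSnapshot(); // aggregated over the root's children
* manager.close();
* }</pre>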
*/ - @Getter @VisibleForTesting private final HealthContributor root; @@ -78,16 +76,45 @@ public void start() { this.updater.awaitRunning(); } + /** + * Registers health contributors with the health manager. + * + * @param children The health contributors to register. + */ + public void register(HealthContributor... children) { + for (HealthContributor child : children) { + this.root.register(child); + } + } + @Override public void close() { if (!this.closed.getAndSet(true)) { - this.root.close(); this.updater.close(); this.updater.stopAsync(); this.updater.awaitTerminated(); + this.root.close(); } } + /** + * Gets the name of the health indicator. + * + * @return The name of the health indicator. + */ + public String getName() { + return this.root.getName(); + } + + /** + * Gets the health information summary. + * + * @return The health information summary. + */ + public Health getHealthSnapshot() { + return this.root.getHealthSnapshot(); + } + private static class RootHealthContributor extends AbstractHealthContributor { RootHealthContributor() { diff --git a/shared/health/src/main/java/io/pravega/shared/health/impl/AbstractHealthContributor.java b/shared/health/src/main/java/io/pravega/shared/health/impl/AbstractHealthContributor.java index 666d60aca26..9ecb7739f9f 100644 --- a/shared/health/src/main/java/io/pravega/shared/health/impl/AbstractHealthContributor.java +++ b/shared/health/src/main/java/io/pravega/shared/health/impl/AbstractHealthContributor.java @@ -100,13 +100,16 @@ synchronized final public Health getHealthSnapshot() { Collection<Status> statuses = new ArrayList<>(); Map<String, Health> children = new HashMap<>(); - for (val contributor : contributors.entrySet()) { - if (!contributor.getValue().isClosed()) { - Health health = contributor.getValue().getHealthSnapshot(); - children.put(contributor.getKey(), health); - statuses.add(health.getStatus()); - } else { - contributors.remove(name); + for (val entry : contributors.entrySet()) { + HealthContributor contributor = entry.getValue(); + synchronized (contributor) { + if (!contributor.isClosed()) { + Health health = contributor.getHealthSnapshot(); + children.put(entry.getKey(), health); + statuses.add(health.getStatus()); + } else { + contributors.remove(name); + } } } @@ -139,7 +142,7 @@ synchronized final public void register(HealthContributor... children) { } @Override - public final void close() { + synchronized public final void close() { if (!closed.getAndSet(true)) { for (val contributor : contributors.entrySet()) { contributor.getValue().close(); diff --git a/shared/health/src/main/java/io/pravega/shared/health/impl/HealthServiceUpdaterImpl.java b/shared/health/src/main/java/io/pravega/shared/health/impl/HealthServiceUpdaterImpl.java index 276689470ac..92436f4ee16 100644 --- a/shared/health/src/main/java/io/pravega/shared/health/impl/HealthServiceUpdaterImpl.java +++ b/shared/health/src/main/java/io/pravega/shared/health/impl/HealthServiceUpdaterImpl.java @@ -64,6 +64,7 @@ public class HealthServiceUpdaterImpl extends AbstractScheduledService implement * Provides the latest {@link Health} result of the recurring {@link io.pravega.shared.health.HealthEndpoint#getHealth()} calls. * @return The latest {@link Health} result.
*/ + @Override public Health getLatestHealth() { return latest.get(); } @@ -88,7 +89,7 @@ protected Scheduler scheduler() { */ @Override protected void startUp() { - log.info("Starting the HealthServiceUpdater, running at {} SECOND intervals.", interval); + log.info("Starting the HealthServiceUpdater, running at {} intervals.", interval); } /** diff --git a/shared/health/src/test/java/io/pravega/shared/health/HealthManagerTests.java b/shared/health/src/test/java/io/pravega/shared/health/HealthManagerTests.java index 78ee93be796..eff794a77fe 100644 --- a/shared/health/src/test/java/io/pravega/shared/health/HealthManagerTests.java +++ b/shared/health/src/test/java/io/pravega/shared/health/HealthManagerTests.java @@ -65,7 +65,7 @@ public void after() { public void testHealth() throws Exception { @Cleanup HealthyContributor contributor = new HealthyContributor(); - service.getRoot().register(contributor); + service.register(contributor); awaitHealthContributor(service, contributor.getName()); Assert.assertNotNull(service.getEndpoint().getHealth(contributor.getName())); @@ -92,7 +92,7 @@ public void testHealthInvalidName() { public void testDetailsEndpoints() throws TimeoutException { @Cleanup HealthyContributor contributor = new HealthyContributor("contributor"); - service.getRoot().register(contributor); + service.register(contributor); // Wait for the health result to be picked up by the HealthServiceUpdater. awaitHealthContributor(service, contributor.getName()); @@ -127,7 +127,7 @@ public void testDetailsEndpoints() throws TimeoutException { public void testStatusEndpoints() throws Exception { @Cleanup HealthyContributor contributor = new HealthyContributor("contributor"); - service.getRoot().register(contributor); + service.register(contributor); awaitHealthContributor(service, contributor.getName()); // Test the 'service level' endpoint. @@ -143,7 +143,7 @@ public void testStatusEndpoints() throws Exception { public void testLivenessEndpoints() throws Exception { @Cleanup HealthyContributor contributor = new HealthyContributor("contributor"); - service.getRoot().register(contributor); + service.register(contributor); awaitHealthContributor(service, contributor.getName()); Assert.assertEquals("The HealthServiceManager should produce an 'alive' result.", @@ -160,7 +160,7 @@ public void testLivenessEndpoints() throws Exception { public void testReadinessEndpoints() throws TimeoutException { @Cleanup HealthyContributor contributor = new HealthyContributor("contributor"); - service.getRoot().register(contributor); + service.register(contributor); // Wait for the HealthServiceUpdater to update the Health state. 
awaitHealthContributor(service, contributor.getName()); diff --git a/shared/health/src/test/java/io/pravega/shared/health/HealthServiceUpdaterTests.java b/shared/health/src/test/java/io/pravega/shared/health/HealthServiceUpdaterTests.java index 78995a3609b..f7df9110f1f 100644 --- a/shared/health/src/test/java/io/pravega/shared/health/HealthServiceUpdaterTests.java +++ b/shared/health/src/test/java/io/pravega/shared/health/HealthServiceUpdaterTests.java @@ -47,19 +47,19 @@ public void after() { public void testServiceUpdaterProperlyUpdates() throws Exception { @Cleanup HealthContributor contributor = new HealthyContributor("contributor"); - service.getRoot().register(contributor); + service.register(contributor); - TestHealthContributors.awaitHealthContributor(service, service.getRoot().getName()); + TestHealthContributors.awaitHealthContributor(service, service.getName()); Health health = service.getEndpoint().getHealth(); Assert.assertEquals(Status.UP, health.getStatus()); contributor.close(); Assert.assertEquals("Closed contributor should no longer be listed as a child.", 0, - service.getRoot().getHealthSnapshot().getChildren().size()); + service.getHealthSnapshot().getChildren().size()); // We register an indicator that will return a failing result, so the next health check should contain a 'DOWN' Status. contributor = new FailingContributor("failing"); - service.getRoot().register(contributor); + service.register(contributor); TestHealthContributors.awaitHealthContributor(service, contributor.getName()); health = service.getEndpoint().getHealth(); diff --git a/shared/health/src/test/java/io/pravega/shared/health/TestHealthContributors.java b/shared/health/src/test/java/io/pravega/shared/health/TestHealthContributors.java index 2c0638c6fda..1b7c0b4f95e 100644 --- a/shared/health/src/test/java/io/pravega/shared/health/TestHealthContributors.java +++ b/shared/health/src/test/java/io/pravega/shared/health/TestHealthContributors.java @@ -43,6 +43,7 @@ public HealthyContributor(String name) { super(name, StatusAggregator.UNANIMOUS); } + @Override public Status doHealthCheck(Health.HealthBuilder builder) { Status status = Status.UP; Map details = new HashMap<>(); @@ -64,6 +65,7 @@ public FailingContributor() { this("failing"); } + @Override public Status doHealthCheck(Health.HealthBuilder builder) { Status status = Status.DOWN; builder.status(status); @@ -80,6 +82,7 @@ public ThrowingContributor() { super("thrower"); } + @Override public Status doHealthCheck(Health.HealthBuilder builder) { Status status = Status.UNKNOWN; throw new RuntimeException(); diff --git a/shared/metrics/src/main/java/io/pravega/shared/MetricsNames.java b/shared/metrics/src/main/java/io/pravega/shared/MetricsNames.java index ce5b9e031de..1040bf9b83d 100644 --- a/shared/metrics/src/main/java/io/pravega/shared/MetricsNames.java +++ b/shared/metrics/src/main/java/io/pravega/shared/MetricsNames.java @@ -94,6 +94,7 @@ public final class MetricsNames { public static final String TABLE_SEGMENT_GET_LATENCY = PREFIX + "segmentstore.tablesegment.get_latency_ms"; // Histogram public static final String TABLE_SEGMENT_ITERATE_KEYS_LATENCY = PREFIX + "segmentstore.tablesegment.iterate_keys_latency_ms"; // Histogram public static final String TABLE_SEGMENT_ITERATE_ENTRIES_LATENCY = PREFIX + "segmentstore.tablesegment.iterate_entries_latency_ms"; // Histogram + public static final String TABLE_SEGMENT_GET_INFO_LATENCY = PREFIX + "segmentstore.tablesegment.get_info_latency_ms"; // Histogram public static final String 
TABLE_SEGMENT_UPDATE = PREFIX + "segmentstore.tablesegment.update"; // Counter and Per-segment Counter public static final String TABLE_SEGMENT_UPDATE_CONDITIONAL = PREFIX + "segmentstore.tablesegment.update_conditional"; // Counter and Per-segment Counter @@ -102,6 +103,7 @@ public final class MetricsNames { public static final String TABLE_SEGMENT_GET = PREFIX + "segmentstore.tablesegment.get"; // Counter and Per-segment Counter public static final String TABLE_SEGMENT_ITERATE_KEYS = PREFIX + "segmentstore.tablesegment.iterate_keys"; // Counter and Per-segment Counter public static final String TABLE_SEGMENT_ITERATE_ENTRIES = PREFIX + "segmentstore.tablesegment.iterate_entries"; // Counter and Per-segment Counter + public static final String TABLE_SEGMENT_GET_INFO = PREFIX + "segmentstore.tablesegment.get_info"; // Counter and Per-segment Counter // Storage stats public static final String STORAGE_READ_LATENCY = PREFIX + "segmentstore.storage.read_latency_ms"; // Histogram @@ -156,9 +158,22 @@ public final class MetricsNames { public static final String SLTS_DELETE_COUNT = PREFIX + "segmentstore.storage.slts.delete_count"; // Counter public static final String SLTS_CONCAT_COUNT = PREFIX + "segmentstore.storage.slts.concat_count"; // Counter public static final String SLTS_TRUNCATE_COUNT = PREFIX + "segmentstore.storage.slts.truncate_count"; // Counter - public static final String SLTS_SYSTEM_TRUNCATE_COUNT = PREFIX + "segmentstore.storage.slts.system_truncate_count"; // Counter + public static final String SLTS_SYSTEM_TRUNCATE_COUNT = PREFIX + "segmentstore.storage.slts.system_truncate_count"; // Counter - public static final String SLTS_GC_QUEUE_SIZE = PREFIX + "segmentstore.storage.slts.GC_queue_record_count"; // Counter + public static final String SLTS_GC_QUEUE_SIZE = PREFIX + "segmentstore.storage.slts.GC_queue_record_count"; // Counter + public static final String SLTS_GC_TASK_PROCESSED = PREFIX + "segmentstore.storage.slts.GC.task_processed_count"; // Counter + + public static final String SLTS_GC_CHUNK_NEW = PREFIX + "segmentstore.storage.slts.GC.chunk_new_count"; // Counter + public static final String SLTS_GC_CHUNK_QUEUED = PREFIX + "segmentstore.storage.slts.GC.chunk_queued_count"; // Counter + + public static final String SLTS_GC_CHUNK_DELETED = PREFIX + "segmentstore.storage.slts.GC.chunk_deleted_count"; // Counter + public static final String SLTS_GC_CHUNK_RETRY = PREFIX + "segmentstore.storage.slts.GC.chunk_retry_count"; // Counter + public static final String SLTS_GC_CHUNK_FAILED = PREFIX + "segmentstore.storage.slts.GC.chunk_failed_count"; // Counter + + public static final String SLTS_GC_SEGMENT_QUEUED = PREFIX + "segmentstore.storage.slts.GC.segment_queued_count"; // Counter + public static final String SLTS_GC_SEGMENT_PROCESSED = PREFIX + "segmentstore.storage.slts.GC.segment_deleted_count"; // Counter + public static final String SLTS_GC_SEGMENT_RETRY = PREFIX + "segmentstore.storage.slts.GC.segment_retry_count"; // Counter + public static final String SLTS_GC_SEGMENT_FAILED = PREFIX + "segmentstore.storage.slts.GC.segment_failed_count"; // Counter // SLTS Metadata stats public static final String STORAGE_METADATA_SIZE = PREFIX + "segmentstore.storage.size."; diff --git a/shared/metrics/src/main/java/io/pravega/shared/metrics/StatsLoggerImpl.java b/shared/metrics/src/main/java/io/pravega/shared/metrics/StatsLoggerImpl.java index 6e6af585a42..52576272d1b 100644 --- a/shared/metrics/src/main/java/io/pravega/shared/metrics/StatsLoggerImpl.java +++ 
b/shared/metrics/src/main/java/io/pravega/shared/metrics/StatsLoggerImpl.java @@ -63,7 +63,7 @@ public Counter createCounter(String statName, String... tags) { @Override public Gauge registerGauge(final String statName, final Supplier<Object> valueSupplier, String... tags) { try { - return new GaugeImpl<>(statName, Preconditions.checkNotNull(valueSupplier), tags); + return new GaugeImpl(statName, Preconditions.checkNotNull(valueSupplier), tags); } catch (Exception e) { log.warn("registerGauge failure: {}", statName, e); return NULLGAUGE; @@ -127,7 +127,7 @@ public synchronized void add(long delta) { } } - private class GaugeImpl<T> implements Gauge { + private class GaugeImpl implements Gauge { @Getter private final Id id; private final AtomicReference<Supplier<Object>> supplierReference = new AtomicReference<>(); diff --git a/shared/protocol/src/main/java/io/pravega/shared/NameUtils.java b/shared/protocol/src/main/java/io/pravega/shared/NameUtils.java index f7bca13ac1b..d2e2cff1415 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/NameUtils.java +++ b/shared/protocol/src/main/java/io/pravega/shared/NameUtils.java @@ -37,6 +37,9 @@ public final class NameUtils { // The scope name which has to be used when creating internally used pravega streams. public static final String INTERNAL_SCOPE_NAME = "_system"; + // The prefix used for internal container segments. + public static final String INTERNAL_CONTAINER_PREFIX = "_system/containers/"; + // The prefix which has to be appended to streams created internally for readerGroups. public static final String READER_GROUP_STREAM_PREFIX = INTERNAL_NAME_PREFIX + "RG"; @@ -108,7 +111,7 @@ public final class NameUtils { /** * Prefix for Container Metadata Segment name. */ - private static final String METADATA_SEGMENT_NAME_PREFIX = "_system/containers/metadata_"; + private static final String METADATA_SEGMENT_NAME_PREFIX = INTERNAL_CONTAINER_PREFIX + "metadata_"; /** * Format for Container Metadata Segment name. @@ -118,7 +121,7 @@ public final class NameUtils { /** * Prefix for Storage Metadata Segment name. */ - private static final String STORAGE_METADATA_SEGMENT_NAME_PREFIX = "_system/containers/storage_metadata_"; + private static final String STORAGE_METADATA_SEGMENT_NAME_PREFIX = INTERNAL_CONTAINER_PREFIX + "storage_metadata_"; /** * Format for Storage Metadata Segment name. @@ -128,12 +131,12 @@ public final class NameUtils { /** * Format for Container System Journal file name. */ - private static final String SYSJOURNAL_NAME_FORMAT = "_system/containers/_sysjournal.epoch%d.container%d.file%d"; + private static final String SYSJOURNAL_NAME_FORMAT = INTERNAL_CONTAINER_PREFIX + "_sysjournal.epoch%d.container%d.file%d"; /** * Format for Container System snapshot file name. */ - private static final String SYSJOURNAL_SNAPSHOT_NAME_FORMAT = "_system/containers/_sysjournal.epoch%d.container%d.snapshot%d"; + private static final String SYSJOURNAL_SNAPSHOT_NAME_FORMAT = INTERNAL_CONTAINER_PREFIX + "_sysjournal.epoch%d.container%d.snapshot%d"; /** * The Transaction unique identifier is made of two parts, each having a length of 16 bytes (64 bits in Hex). @@ -179,7 +182,7 @@ public final class NameUtils { /** * Formatting for internal Segments used for ContainerEventProcessor.
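* <p>For example (illustrative values, not part of this change), a processor named {@code "gc"}
* on container {@code 3} maps to {@code _system/containers/event_processor_gc_3}, with the prefix
* now coming from {@code INTERNAL_CONTAINER_PREFIX}:
* <pre>{@code
* String segmentName = String.format(CONTAINER_EVENT_PROCESSOR_SEGMENT_NAME, "gc", 3);
* }</pre>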
*/ - private static final String CONTAINER_EVENT_PROCESSOR_SEGMENT_NAME = "_system/containers/event_processor_%s_%d"; + private static final String CONTAINER_EVENT_PROCESSOR_SEGMENT_NAME = INTERNAL_CONTAINER_PREFIX + "event_processor_%s_%d"; //endregion diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/AdminRequestProcessor.java b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/AdminRequestProcessor.java index fc0500865e5..477958a53cc 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/AdminRequestProcessor.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/AdminRequestProcessor.java @@ -19,5 +19,6 @@ * A class that handles each type of Admin-specific Request. */ public interface AdminRequestProcessor extends RequestProcessor { - // Placeholder for new admin-specific requests to be added in the future. + + void flushToStorage(WireCommands.FlushToStorage flushToStorage); } diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessor.java b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessor.java index 613f1920565..c70a325cad1 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessor.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessor.java @@ -15,12 +15,12 @@ */ package io.pravega.shared.protocol.netty; -import io.pravega.shared.protocol.netty.WireCommands.MergeSegments; import io.pravega.shared.protocol.netty.WireCommands.CreateSegment; import io.pravega.shared.protocol.netty.WireCommands.DeleteSegment; import io.pravega.shared.protocol.netty.WireCommands.GetSegmentAttribute; import io.pravega.shared.protocol.netty.WireCommands.GetStreamSegmentInfo; import io.pravega.shared.protocol.netty.WireCommands.KeepAlive; +import io.pravega.shared.protocol.netty.WireCommands.MergeSegments; import io.pravega.shared.protocol.netty.WireCommands.ReadSegment; import io.pravega.shared.protocol.netty.WireCommands.SealSegment; import io.pravega.shared.protocol.netty.WireCommands.SetupAppend; @@ -102,13 +102,8 @@ public void keepAlive(KeepAlive keepAlive) { } @Override - public void mergeTableSegments(WireCommands.MergeTableSegments mergeSegments) { - getNextRequestProcessor().mergeTableSegments(mergeSegments); - } - - @Override - public void sealTableSegment(WireCommands.SealTableSegment sealTableSegment) { - getNextRequestProcessor().sealTableSegment(sealTableSegment); + public void getTableSegmentInfo(WireCommands.GetTableSegmentInfo request) { + getNextRequestProcessor().getTableSegmentInfo(request); } @Override diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingReplyProcessor.java b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingReplyProcessor.java index 94a1b33d61a..04bf9e5e41d 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingReplyProcessor.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingReplyProcessor.java @@ -115,7 +115,12 @@ public void segmentRead(SegmentRead data) { public void segmentAttributeUpdated(WireCommands.SegmentAttributeUpdated segmentAttributeUpdated) { throw new IllegalStateException("Unexpected operation: " + segmentAttributeUpdated); } - + + @Override + public void storageFlushed(WireCommands.StorageFlushed storageFlushed) { + throw new IllegalStateException("Unexpected operation: " + storageFlushed); + } + 
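/*
 * Editorial sketch, not part of this patch: the new FlushToStorage request is acknowledged by a
 * StorageFlushed reply carrying the same requestId, per the WireCommands additions below. The
 * container id, token, requestId, and the replyProcessor variable are arbitrary assumptions:
 *
 *     WireCommands.FlushToStorage request = new WireCommands.FlushToStorage(3, "", 42L);
 *     // A segment store that has flushed container 3 acknowledges with:
 *     WireCommands.StorageFlushed reply = new WireCommands.StorageFlushed(request.getRequestId());
 *     reply.process(replyProcessor); // dispatches to ReplyProcessor.storageFlushed(reply)
 */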
@Override public void segmentAttribute(WireCommands.SegmentAttribute segmentAttribute) { throw new IllegalStateException("Unexpected operation: " + segmentAttribute); @@ -155,7 +160,12 @@ public void segmentDeleted(SegmentDeleted segmentDeleted) { public void authTokenCheckFailed(WireCommands.AuthTokenCheckFailed authFailed) { throw new IllegalStateException("Unexpected operation: " + authFailed); } - + + @Override + public void tableSegmentInfo(WireCommands.TableSegmentInfo info) { + throw new IllegalStateException("Unexpected operation: " + info); + } + @Override public void tableEntriesUpdated(WireCommands.TableEntriesUpdated tableEntriesUpdated) { throw new IllegalStateException("Unexpected operation: " + tableEntriesUpdated); diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingRequestProcessor.java b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingRequestProcessor.java index 65498e3fa2e..b4a293ecebe 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingRequestProcessor.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/FailingRequestProcessor.java @@ -80,6 +80,11 @@ public void updateSegmentPolicy(UpdateSegmentPolicy updateSegmentPolicy) { throw new IllegalStateException("Unexpected operation"); } + @Override + public void getTableSegmentInfo(WireCommands.GetTableSegmentInfo getInfo) { + throw new IllegalStateException("Unexpected operation"); + } + @Override public void createTableSegment(WireCommands.CreateTableSegment createTableSegment) { throw new IllegalStateException("Unexpected operation"); @@ -120,21 +125,11 @@ public void mergeSegments(WireCommands.MergeSegments mergeSegments) { throw new IllegalStateException("Unexpected operation"); } - @Override - public void mergeTableSegments(WireCommands.MergeTableSegments mergeSegments) { - throw new IllegalStateException("Unexpected operation"); - } - @Override public void sealSegment(SealSegment sealSegment) { throw new IllegalStateException("Unexpected operation"); } - @Override - public void sealTableSegment(WireCommands.SealTableSegment sealTableSegment) { - throw new IllegalStateException("Unexpected operation"); - } - @Override public void truncateSegment(TruncateSegment truncateSegment) { throw new IllegalStateException("Unexpected operation"); diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/ReplyProcessor.java b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/ReplyProcessor.java index 085c8cfd18c..404bd5fa895 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/ReplyProcessor.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/ReplyProcessor.java @@ -48,6 +48,8 @@ default void process(Reply reply) { void conditionalCheckFailed(WireCommands.ConditionalCheckFailed dataNotAppended); + void storageFlushed(WireCommands.StorageFlushed storageFlushed); + void segmentRead(WireCommands.SegmentRead segmentRead); void segmentAttributeUpdated(WireCommands.SegmentAttributeUpdated segmentAttributeUpdated); @@ -78,6 +80,8 @@ default void process(Reply reply) { void authTokenCheckFailed(WireCommands.AuthTokenCheckFailed authTokenCheckFailed); + void tableSegmentInfo(WireCommands.TableSegmentInfo info); + void tableEntriesUpdated(WireCommands.TableEntriesUpdated tableEntriesUpdated); void tableKeysRemoved(WireCommands.TableKeysRemoved tableKeysRemoved); diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/RequestProcessor.java 
b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/RequestProcessor.java index 4034b562981..461926b2896 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/RequestProcessor.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/RequestProcessor.java @@ -24,12 +24,10 @@ import io.pravega.shared.protocol.netty.WireCommands.GetStreamSegmentInfo; import io.pravega.shared.protocol.netty.WireCommands.Hello; import io.pravega.shared.protocol.netty.WireCommands.KeepAlive; -import io.pravega.shared.protocol.netty.WireCommands.MergeTableSegments; import io.pravega.shared.protocol.netty.WireCommands.RemoveTableKeys; import io.pravega.shared.protocol.netty.WireCommands.UpdateTableEntries; import io.pravega.shared.protocol.netty.WireCommands.ReadSegment; import io.pravega.shared.protocol.netty.WireCommands.SealSegment; -import io.pravega.shared.protocol.netty.WireCommands.SealTableSegment; import io.pravega.shared.protocol.netty.WireCommands.SetupAppend; import io.pravega.shared.protocol.netty.WireCommands.TruncateSegment; import io.pravega.shared.protocol.netty.WireCommands.UpdateSegmentAttribute; @@ -57,12 +55,8 @@ public interface RequestProcessor { void mergeSegments(MergeSegments mergeSegments); - void mergeTableSegments(MergeTableSegments mergeSegments); - void sealSegment(SealSegment sealSegment); - void sealTableSegment(SealTableSegment sealTableSegment); - void truncateSegment(TruncateSegment truncateSegment); void deleteSegment(DeleteSegment deleteSegment); @@ -71,6 +65,8 @@ public interface RequestProcessor { void updateSegmentPolicy(UpdateSegmentPolicy updateSegmentPolicy); + void getTableSegmentInfo(WireCommands.GetTableSegmentInfo getInfo); + void createTableSegment(CreateTableSegment createTableSegment); void deleteTableSegment(DeleteTableSegment deleteSegment); diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommandType.java b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommandType.java index ee588c3c8f8..5ffad73f8f2 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommandType.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommandType.java @@ -32,6 +32,9 @@ public enum WireCommandType { PARTIAL_EVENT(-2, WireCommands.PartialEvent::readFrom), + FLUSH_TO_STORAGE(-3, WireCommands.FlushToStorage::readFrom), + FLUSHED_TO_STORAGE(-4, WireCommands.StorageFlushed::readFrom), + EVENT(0, null), // Is read manually. 
SETUP_APPEND(1, WireCommands.SetupAppend::readFrom), @@ -87,11 +90,10 @@ public enum WireCommandType { AUTH_TOKEN_CHECK_FAILED(60, WireCommands.AuthTokenCheckFailed::readFrom), ERROR_MESSAGE(61, WireCommands.ErrorMessage::readFrom), + GET_TABLE_SEGMENT_INFO(68, WireCommands.GetTableSegmentInfo::readFrom), + TABLE_SEGMENT_INFO(69, WireCommands.TableSegmentInfo::readFrom), CREATE_TABLE_SEGMENT(70, WireCommands.CreateTableSegment::readFrom), DELETE_TABLE_SEGMENT(71, WireCommands.DeleteTableSegment::readFrom), - MERGE_TABLE_SEGMENTS(72, WireCommands.MergeTableSegments::readFrom), - SEAL_TABLE_SEGMENT(73, WireCommands.SealTableSegment::readFrom), - UPDATE_TABLE_ENTRIES(74, WireCommands.UpdateTableEntries::readFrom), TABLE_ENTRIES_UPDATED(75, WireCommands.TableEntriesUpdated::readFrom), diff --git a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommands.java b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommands.java index adfd789e29a..89e4d6fa666 100644 --- a/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommands.java +++ b/shared/protocol/src/main/java/io/pravega/shared/protocol/netty/WireCommands.java @@ -36,7 +36,9 @@ import java.util.Map; import java.util.UUID; import javax.annotation.concurrent.NotThreadSafe; + import lombok.AccessLevel; +import lombok.AllArgsConstructor; import lombok.Data; import lombok.EqualsAndHashCode; import lombok.Getter; @@ -61,7 +63,7 @@ * Incompatible changes should instead create a new WireCommand object. */ public final class WireCommands { - public static final int WIRE_VERSION = 12; + public static final int WIRE_VERSION = 14; public static final int OLDEST_COMPATIBLE_VERSION = 5; public static final int TYPE_SIZE = 4; public static final int TYPE_PLUS_LENGTH_SIZE = 8; @@ -784,6 +786,55 @@ public long getRequestId() { } } + @Data + public static final class FlushToStorage implements Request, WireCommand { + final WireCommandType type = WireCommandType.FLUSH_TO_STORAGE; + final int containerId; + @ToString.Exclude + final String delegationToken; + final long requestId; + + @Override + public void process(RequestProcessor cp) { + ((AdminRequestProcessor) cp).flushToStorage(this); + } + + @Override + public void writeFields(DataOutput out) throws IOException { + out.writeInt(containerId); + out.writeUTF(delegationToken == null ? 
"" : delegationToken); + out.writeLong(requestId); + } + + public static WireCommand readFrom(ByteBufInputStream in, int length) throws IOException { + int containerId = in.readInt(); + String delegationToken = in.readUTF(); + long requestId = in.readLong(); + return new FlushToStorage(containerId, delegationToken, requestId); + } + } + + @Data + public static final class StorageFlushed implements Reply, WireCommand { + final WireCommandType type = WireCommandType.FLUSHED_TO_STORAGE; + final long requestId; + + @Override + public void process(ReplyProcessor cp) { + cp.storageFlushed(this); + } + + @Override + public void writeFields(DataOutput out) throws IOException { + out.writeLong(requestId); + } + + public static WireCommand readFrom(ByteBufInputStream in, int length) throws IOException { + long requestId = in.readLong(); + return new StorageFlushed(requestId); + } + } + @Data public static final class ReadSegment implements Request, WireCommand { final WireCommandType type = WireCommandType.READ_SEGMENT; @@ -1068,7 +1119,8 @@ public static WireCommand readFrom(T in, int // Versioning workaround until PDP-21 is implemented (https://github.com/pravega/pravega/issues/1948). startOffset = in.readLong(); } - return new StreamSegmentInfo(requestId, segmentName, exists, isSealed, isDeleted, lastModified, segmentLength, startOffset); + return new StreamSegmentInfo(requestId, segmentName, exists, isSealed, isDeleted, + lastModified, segmentLength, startOffset); } } @@ -1085,6 +1137,7 @@ public static final class CreateSegment implements Request, WireCommand { final int targetRate; @ToString.Exclude final String delegationToken; + final long rolloverSizeBytes; @Override public void process(RequestProcessor cp) { @@ -1098,16 +1151,85 @@ public void writeFields(DataOutput out) throws IOException { out.writeInt(targetRate); out.writeByte(scaleType); out.writeUTF(delegationToken == null ? "" : delegationToken); + out.writeLong(rolloverSizeBytes); } - public static WireCommand readFrom(DataInput in, int length) throws IOException { + public static WireCommand readFrom(T in, int length) throws IOException { long requestId = in.readLong(); String segment = in.readUTF(); int desiredRate = in.readInt(); byte scaleType = in.readByte(); String delegationToken = in.readUTF(); + long rolloverSizeBytes = 0; + if (in.available() >= Long.BYTES) { + rolloverSizeBytes = in.readLong(); + } + + return new CreateSegment(requestId, segment, scaleType, desiredRate, delegationToken, rolloverSizeBytes); + } + } + + @Data + public static final class GetTableSegmentInfo implements Request, WireCommand { + final WireCommandType type = WireCommandType.GET_TABLE_SEGMENT_INFO; + final long requestId; + final String segmentName; + @ToString.Exclude + final String delegationToken; + + @Override + public void process(RequestProcessor cp) { + cp.getTableSegmentInfo(this); + } + + @Override + public void writeFields(DataOutput out) throws IOException { + out.writeLong(requestId); + out.writeUTF(segmentName); + out.writeUTF(delegationToken == null ? 
"" : delegationToken); + } + + public static WireCommand readFrom(DataInput in, int length) throws IOException { + long requestId = in.readLong(); + String segment = in.readUTF(); + String delegationToken = in.readUTF(); + return new GetTableSegmentInfo(requestId, segment, delegationToken); + } + } + + @Data + public static final class TableSegmentInfo implements Reply, WireCommand { + final WireCommandType type = WireCommandType.TABLE_SEGMENT_INFO; + final long requestId; + final String segmentName; + final long startOffset; + final long length; + final long entryCount; + final int keyLength; + + @Override + public void process(ReplyProcessor cp) { + cp.tableSegmentInfo(this); + } + + @Override + public void writeFields(DataOutput out) throws IOException { + out.writeLong(requestId); + out.writeUTF(segmentName); + out.writeLong(startOffset); + out.writeLong(length); + out.writeLong(entryCount); + out.writeInt(keyLength); + } - return new CreateSegment(requestId, segment, scaleType, desiredRate, delegationToken); + public static WireCommand readFrom(T in, int length) throws IOException { + long requestId = in.readLong(); + String segmentName = in.readUTF(); + long startOffset = in.readLong(); + long segmentLength = in.readLong(); + long entryCount = in.readLong(); + int keyLength = in.readInt(); + return new TableSegmentInfo(requestId, segmentName, startOffset, segmentLength, entryCount, keyLength); } } @@ -1121,6 +1243,7 @@ public static final class CreateTableSegment implements Request, WireCommand { final int keyLength; @ToString.Exclude final String delegationToken; + final long rolloverSizeBytes; @Override public void process(RequestProcessor cp) { @@ -1134,6 +1257,7 @@ public void writeFields(DataOutput out) throws IOException { out.writeUTF(delegationToken == null ? "" : delegationToken); out.writeBoolean(sortedDeprecated); out.writeInt(keyLength); + out.writeLong(rolloverSizeBytes); } public static WireCommand readFrom(T in, int length) throws IOException { @@ -1142,14 +1266,18 @@ public static WireCommand readFrom(T in, int String delegationToken = in.readUTF(); boolean sorted = false; int keyLength = 0; + long rolloverSizeBytes = 0; if (in.available() >= 1) { sorted = in.readBoolean(); } if (in.available() >= Integer.BYTES) { keyLength = in.readInt(); } + if (in.available() >= Long.BYTES) { + rolloverSizeBytes = in.readLong(); + } - return new CreateTableSegment(requestId, segment, sorted, keyLength, delegationToken); + return new CreateTableSegment(requestId, segment, sorted, keyLength, delegationToken, rolloverSizeBytes); } } @@ -1272,6 +1400,7 @@ public static WireCommand readFrom(DataInput in, int length) throws IOException } @Data + @AllArgsConstructor public static final class MergeSegments implements Request, WireCommand { final WireCommandType type = WireCommandType.MERGE_SEGMENTS; final long requestId; @@ -1279,41 +1408,20 @@ public static final class MergeSegments implements Request, WireCommand { final String source; @ToString.Exclude final String delegationToken; + final List attributeUpdates; - @Override - public void process(RequestProcessor cp) { - cp.mergeSegments(this); - } - - @Override - public void writeFields(DataOutput out) throws IOException { - out.writeLong(requestId); - out.writeUTF(target); - out.writeUTF(source); - out.writeUTF(delegationToken == null ? 
"" : delegationToken); - } - - public static WireCommand readFrom(DataInput in, int length) throws IOException { - long requestId = in.readLong(); - String target = in.readUTF(); - String source = in.readUTF(); - String delegationToken = in.readUTF(); - return new MergeSegments(requestId, target, source, delegationToken); + // Constructor to keep compatibility with all the calls not requiring attributes to merge Segments. + public MergeSegments(long requestId, String target, String source, String delegationToken) { + this.requestId = requestId; + this.target = target; + this.source = source; + this.delegationToken = delegationToken; + this.attributeUpdates = Collections.emptyList(); } - } - - @Data - public static final class MergeTableSegments implements Request, WireCommand { - final WireCommandType type = WireCommandType.MERGE_TABLE_SEGMENTS; - final long requestId; - final String target; - final String source; - @ToString.Exclude - final String delegationToken; @Override public void process(RequestProcessor cp) { - cp.mergeTableSegments(this); + cp.mergeSegments(this); } @Override @@ -1322,14 +1430,27 @@ public void writeFields(DataOutput out) throws IOException { out.writeUTF(target); out.writeUTF(source); out.writeUTF(delegationToken == null ? "" : delegationToken); + out.writeInt(attributeUpdates.size()); + for (ConditionalAttributeUpdate entry : attributeUpdates) { + entry.writeFields(out); + } } - public static WireCommand readFrom(DataInput in, int length) throws IOException { + public static WireCommand readFrom(ByteBufInputStream in, int length) throws IOException { long requestId = in.readLong(); String target = in.readUTF(); String source = in.readUTF(); String delegationToken = in.readUTF(); - return new MergeTableSegments(requestId, target, source, delegationToken); + List attributeUpdates = new ArrayList<>(); + if (in.available() <= 0) { + // MergeSegment Commands prior v5 do not allow attributeUpdates, so we can return. + return new MergeSegments(requestId, target, source, delegationToken, attributeUpdates); + } + int numberOfEntries = in.readInt(); + for (int i = 0; i < numberOfEntries; i++) { + attributeUpdates.add(ConditionalAttributeUpdate.readFrom(in, length)); + } + return new MergeSegments(requestId, target, source, delegationToken, attributeUpdates); } } @@ -1391,34 +1512,6 @@ public static WireCommand readFrom(DataInput in, int length) throws IOException } } - @Data - public static final class SealTableSegment implements Request, WireCommand { - final WireCommandType type = WireCommandType.SEAL_TABLE_SEGMENT; - final long requestId; - final String segment; - @ToString.Exclude - final String delegationToken; - - @Override - public void process(RequestProcessor cp) { - cp.sealTableSegment(this); - } - - @Override - public void writeFields(DataOutput out) throws IOException { - out.writeLong(requestId); - out.writeUTF(segment); - out.writeUTF(delegationToken == null ? 
"" : delegationToken); - } - - public static WireCommand readFrom(DataInput in, int length) throws IOException { - long requestId = in.readLong(); - String segment = in.readUTF(); - String delegationToken = in.readUTF(); - return new SealTableSegment(requestId, segment, delegationToken); - } - } - @Data public static final class SegmentSealed implements Reply, WireCommand { final WireCommandType type = WireCommandType.SEGMENT_SEALED; @@ -1749,9 +1842,9 @@ public enum ErrorCode { } private final int code; - private final Class exception; + private final Class exception; - private ErrorCode(int code, Class exception) { + private ErrorCode(int code, Class exception) { this.code = code; this.exception = exception; } @@ -1760,7 +1853,7 @@ public static ErrorCode valueOf(int code) { return OBJECTS_BY_CODE.getOrDefault(code, ErrorCode.UNSPECIFIED); } - public static ErrorCode valueOf(Class exception) { + public static ErrorCode valueOf(Class exception) { return OBJECTS_BY_CLASS.getOrDefault(exception, ErrorCode.UNSPECIFIED); } @@ -1768,7 +1861,7 @@ public int getCode() { return this.code; } - public Class getExceptionType() { + public Class getExceptionType() { return this.exception; } @@ -2194,7 +2287,6 @@ public void writeFields(DataOutput out) throws IOException { } public static WireCommand readFrom(EnhancedByteBufInputStream in, int length) throws IOException { - final int initialAvailable = in.available(); long requestId = in.readLong(); String segment = in.readUTF(); TableEntries entries = TableEntries.readFrom(in, in.available()); @@ -2568,4 +2660,40 @@ public void release() { */ abstract void releaseInternal(); } + + /** + * Convenience class to encapsulate the contents of an attribute update when several should be serialized in the same + * WireCommand. + */ + @Data + public static final class ConditionalAttributeUpdate { + public static final byte REPLACE = (byte) 1; // AttributeUpdate of type AttributeUpdateType.Replace. + public static final byte REPLACE_IF_EQUALS = (byte) 4; // AttributeUpdate of type AttributeUpdateType.ReplaceIfEquals. 
+ public static final int LENGTH = 4 * Long.BYTES + 1; // UUID (2 longs) + oldValue + newValue + updateType (1 byte) + + private final UUID attributeId; + private final byte attributeUpdateType; + private final long newValue; + private final long oldValue; + + public void writeFields(DataOutput out) throws IOException { + out.writeLong(attributeId.getMostSignificantBits()); + out.writeLong(attributeId.getLeastSignificantBits()); + out.writeByte(attributeUpdateType); + out.writeLong(newValue); + out.writeLong(oldValue); + } + + public static ConditionalAttributeUpdate readFrom(DataInput in, int length) throws IOException { + UUID attributeId = new UUID(in.readLong(), in.readLong()); + byte attributeUpdateType = in.readByte(); + long newValue = in.readLong(); + long oldValue = in.readLong(); + return new ConditionalAttributeUpdate(attributeId, attributeUpdateType, newValue, oldValue); + } + + public int size() { + return LENGTH; + } + } } diff --git a/shared/protocol/src/test/java/io/pravega/shared/StreamSegmentNameUtilsTests.java b/shared/protocol/src/test/java/io/pravega/shared/StreamSegmentNameUtilsTests.java index 8c66219e757..9b277e413a8 100644 --- a/shared/protocol/src/test/java/io/pravega/shared/StreamSegmentNameUtilsTests.java +++ b/shared/protocol/src/test/java/io/pravega/shared/StreamSegmentNameUtilsTests.java @@ -221,7 +221,7 @@ public void testGetScopedStreamName() { @Test public void testComputeSegmentId() { long sid = NameUtils.computeSegmentId(1, 1); - Assert.assertEquals(sid, (long) (0x1L << 32) + 1); + Assert.assertEquals(sid, (0x1L << 32) + 1); AssertExtensions.assertThrows( "Accepted a negative epoch", diff --git a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessorTest.java b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessorTest.java index 1a2e73a6839..78e385fe7ab 100644 --- a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessorTest.java +++ b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/DelegatingRequestProcessorTest.java @@ -54,7 +54,7 @@ public void testEverythingCalled() { rp.updateSegmentAttribute(new WireCommands.UpdateSegmentAttribute(0, "", null, 0, 0, "")); rp.getSegmentAttribute(new WireCommands.GetSegmentAttribute(0, "", null, "")); rp.getStreamSegmentInfo(new WireCommands.GetStreamSegmentInfo(0, "", "")); - rp.createSegment(new WireCommands.CreateSegment(0, "", (byte) 0, 0, "")); + rp.createSegment(new WireCommands.CreateSegment(0, "", (byte) 0, 0, "", 0)); rp.updateSegmentPolicy(new WireCommands.UpdateSegmentPolicy(0, "", (byte) 0, 0, "")); rp.deleteTableSegment(new WireCommands.DeleteTableSegment(0, "", false, "")); rp.keepAlive(new WireCommands.KeepAlive()); @@ -64,12 +64,10 @@ public void testEverythingCalled() { rp.readTableKeys(new WireCommands.ReadTableKeys(0, "", "", 0, null)); rp.readTableEntries(new WireCommands.ReadTableEntries(0, "", "", 0, null)); rp.mergeSegments(new WireCommands.MergeSegments(0, "", "", "")); - rp.mergeTableSegments(new WireCommands.MergeTableSegments(0, "", "", "")); rp.sealSegment(new WireCommands.SealSegment(0, "", "")); - rp.sealTableSegment(new WireCommands.SealTableSegment(0, "", "")); rp.truncateSegment(new WireCommands.TruncateSegment(0, "", 0, "")); rp.deleteSegment(new WireCommands.DeleteSegment(0, "", "")); - rp.createTableSegment(new WireCommands.CreateTableSegment(0, "", false, 0, "")); + rp.createTableSegment(new WireCommands.CreateTableSegment(0, "", false, 0, "", 0)); 
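/*
 * Editorial note, not part of this patch: the trailing 0 passed to CreateSegment and
 * CreateTableSegment above is the new rolloverSizeBytes field. Decoding stays compatible with
 * older peers because, per the readFrom() implementations earlier in this patch, the field is
 * only consumed when bytes remain in the buffer:
 *
 *     long rolloverSizeBytes = 0;                // default for messages from older writers
 *     if (in.available() >= Long.BYTES) {
 *         rolloverSizeBytes = in.readLong();     // present only in newer payloads
 *     }
 */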
rp.readTableEntriesDelta(new WireCommands.ReadTableEntriesDelta(0, "", "", 0, 0)); rp.createTransientSegment(new WireCommands.CreateTransientSegment(0, new UUID(0, 0), "", null)); rp.connectionDropped(); @@ -91,9 +89,7 @@ public void testEverythingCalled() { verify(rp.getNextRequestProcessor(), times(1)).readTable(any()); verify(rp.getNextRequestProcessor(), times(1)).readTableKeys(any()); verify(rp.getNextRequestProcessor(), times(1)).mergeSegments(any()); - verify(rp.getNextRequestProcessor(), times(1)).mergeTableSegments(any()); verify(rp.getNextRequestProcessor(), times(1)).sealSegment(any()); - verify(rp.getNextRequestProcessor(), times(1)).sealTableSegment(any()); verify(rp.getNextRequestProcessor(), times(1)).truncateSegment(any()); verify(rp.getNextRequestProcessor(), times(1)).deleteSegment(any()); verify(rp.getNextRequestProcessor(), times(1)).readTableEntries(any()); diff --git a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingReplyProcessorTest.java b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingReplyProcessorTest.java index 4535919c641..2d9e20192f2 100644 --- a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingReplyProcessorTest.java +++ b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingReplyProcessorTest.java @@ -19,11 +19,13 @@ import io.pravega.shared.protocol.netty.WireCommands.AuthTokenCheckFailed; import io.pravega.shared.protocol.netty.WireCommands.ConditionalCheckFailed; import io.pravega.shared.protocol.netty.WireCommands.DataAppended; +import io.pravega.shared.protocol.netty.WireCommands.ErrorMessage; import io.pravega.shared.protocol.netty.WireCommands.InvalidEventNumber; import io.pravega.shared.protocol.netty.WireCommands.NoSuchSegment; import io.pravega.shared.protocol.netty.WireCommands.OperationUnsupported; import io.pravega.shared.protocol.netty.WireCommands.SegmentAlreadyExists; import io.pravega.shared.protocol.netty.WireCommands.SegmentAttribute; +import io.pravega.shared.protocol.netty.WireCommands.StorageFlushed; import io.pravega.shared.protocol.netty.WireCommands.SegmentAttributeUpdated; import io.pravega.shared.protocol.netty.WireCommands.SegmentCreated; import io.pravega.shared.protocol.netty.WireCommands.SegmentDeleted; @@ -45,7 +47,6 @@ import io.pravega.shared.protocol.netty.WireCommands.TableRead; import io.pravega.shared.protocol.netty.WireCommands.TableSegmentNotEmpty; import io.pravega.shared.protocol.netty.WireCommands.WrongHost; -import io.pravega.shared.protocol.netty.WireCommands.ErrorMessage; import org.junit.Test; import static io.pravega.test.common.AssertExtensions.assertThrows; @@ -84,6 +85,7 @@ public void testEverythingThrows() { assertThrows(IllegalStateException.class, () -> rp.segmentsMerged(new SegmentsMerged(0, "", "", 2))); assertThrows(IllegalStateException.class, () -> rp.segmentTruncated(new SegmentTruncated(0, ""))); assertThrows(IllegalStateException.class, () -> rp.streamSegmentInfo(new StreamSegmentInfo(0, "", false, false, false, 0, 0, 0))); + assertThrows(IllegalStateException.class, () -> rp.tableSegmentInfo(new WireCommands.TableSegmentInfo(0, "", 0, 0, 0, 0))); assertThrows(IllegalStateException.class, () -> rp.tableEntriesDeltaRead(new TableEntriesDeltaRead(0, "", null, false, true, 0))); assertThrows(IllegalStateException.class, () -> rp.tableEntriesRead(new TableEntriesRead(0, "", null, null))); assertThrows(IllegalStateException.class, () -> rp.tableEntriesUpdated(new TableEntriesUpdated(0, null))); @@ -95,6 +97,7 @@ public 
void testEverythingThrows() { assertThrows(IllegalStateException.class, () -> rp.tableSegmentNotEmpty(new TableSegmentNotEmpty(0, "", ""))); assertThrows(IllegalStateException.class, () -> rp.wrongHost(new WrongHost(0, "", "", ""))); assertThrows(IllegalStateException.class, () -> rp.errorMessage(new ErrorMessage(0, "", "", ErrorMessage.ErrorCode.UNSPECIFIED))); + assertThrows(IllegalStateException.class, () -> rp.storageFlushed(new StorageFlushed(0))); } } diff --git a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingRequestProcessorTest.java b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingRequestProcessorTest.java index c64887155cf..a5073e48b6e 100644 --- a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingRequestProcessorTest.java +++ b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/FailingRequestProcessorTest.java @@ -32,9 +32,7 @@ import io.pravega.shared.protocol.netty.WireCommands.ReadTableKeys; import io.pravega.shared.protocol.netty.WireCommands.ReadTableEntries; import io.pravega.shared.protocol.netty.WireCommands.MergeSegments; -import io.pravega.shared.protocol.netty.WireCommands.MergeTableSegments; import io.pravega.shared.protocol.netty.WireCommands.SealSegment; -import io.pravega.shared.protocol.netty.WireCommands.SealTableSegment; import io.pravega.shared.protocol.netty.WireCommands.TruncateSegment; import io.pravega.shared.protocol.netty.WireCommands.DeleteSegment; import io.pravega.shared.protocol.netty.WireCommands.ReadTableEntriesDelta; @@ -62,9 +60,9 @@ public void testEverythingThrows() { assertThrows(IllegalStateException.class, () -> rp.updateSegmentAttribute(new UpdateSegmentAttribute(0, "", null, 0, 0, ""))); assertThrows(IllegalStateException.class, () -> rp.getSegmentAttribute(new GetSegmentAttribute(0, "", null, ""))); assertThrows(IllegalStateException.class, () -> rp.getStreamSegmentInfo(new WireCommands.GetStreamSegmentInfo(0, "", ""))); - assertThrows(IllegalStateException.class, () -> rp.createSegment(new CreateSegment(0, "", (byte) 0, 0, ""))); + assertThrows(IllegalStateException.class, () -> rp.createSegment(new CreateSegment(0, "", (byte) 0, 0, "", 0))); assertThrows(IllegalStateException.class, () -> rp.updateSegmentPolicy(new UpdateSegmentPolicy(0, "", (byte) 0, 0, ""))); - assertThrows(IllegalStateException.class, () -> rp.createTableSegment(new CreateTableSegment(0, "", false, 0, ""))); + assertThrows(IllegalStateException.class, () -> rp.createTableSegment(new CreateTableSegment(0, "", false, 0, "", 0))); assertThrows(IllegalStateException.class, () -> rp.deleteTableSegment(new DeleteTableSegment(0, "", false, ""))); assertThrows(IllegalStateException.class, () -> rp.updateTableEntries(new UpdateTableEntries(0, "", "", null, 0))); assertThrows(IllegalStateException.class, () -> rp.removeTableKeys(new RemoveTableKeys(0, "", "", null, 0))); @@ -72,13 +70,11 @@ public void testEverythingThrows() { assertThrows(IllegalStateException.class, () -> rp.readTableKeys(new ReadTableKeys(0, "", "", 0, null))); assertThrows(IllegalStateException.class, () -> rp.readTableEntries(new ReadTableEntries(0, "", "", 0, null))); assertThrows(IllegalStateException.class, () -> rp.mergeSegments(new MergeSegments(0, "", "", ""))); - assertThrows(IllegalStateException.class, () -> rp.mergeTableSegments(new MergeTableSegments(0, "", "", ""))); assertThrows(IllegalStateException.class, () -> rp.sealSegment(new SealSegment(0, "", ""))); - assertThrows(IllegalStateException.class, () -> 
rp.sealTableSegment(new SealTableSegment(0, "", ""))); assertThrows(IllegalStateException.class, () -> rp.truncateSegment(new TruncateSegment(0, "", 0, ""))); assertThrows(IllegalStateException.class, () -> rp.deleteSegment(new DeleteSegment(0, "", ""))); assertThrows(IllegalStateException.class, () -> rp.readTableEntries(new ReadTableEntries(0, "", "", 0, null))); - assertThrows(IllegalStateException.class, () -> rp.createTableSegment(new CreateTableSegment(0, "", false, 0, ""))); + assertThrows(IllegalStateException.class, () -> rp.createTableSegment(new CreateTableSegment(0, "", false, 0, "", 0))); assertThrows(IllegalStateException.class, () -> rp.readTableEntriesDelta(new ReadTableEntriesDelta(0, "", "", 0, 0))); assertThrows(IllegalStateException.class, () -> rp.createTransientSegment(new CreateTransientSegment(0, new UUID(0, 0), "", ""))); assertThrows(IllegalStateException.class, () -> rp.connectionDropped()); diff --git a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/WireCommandsTest.java b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/WireCommandsTest.java index 3a07b9cc70e..b4df24089c5 100644 --- a/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/WireCommandsTest.java +++ b/shared/protocol/src/test/java/io/pravega/shared/protocol/netty/WireCommandsTest.java @@ -29,6 +29,7 @@ import java.nio.ByteBuffer; import java.util.AbstractMap.SimpleImmutableEntry; import java.util.Arrays; +import java.util.Collections; import java.util.List; import java.util.Map; import java.util.UUID; @@ -36,6 +37,7 @@ import java.util.function.Function; import java.util.function.Supplier; import lombok.Data; +import lombok.ToString; import org.junit.Test; import static io.netty.buffer.Unpooled.wrappedBuffer; @@ -551,6 +553,16 @@ public void testConditionalCheckFailed() throws IOException { testCommand(new WireCommands.ConditionalCheckFailed(uuid, l, l)); } + @Test + public void testFlushToStorage() throws IOException { + testCommand(new WireCommands.FlushToStorage(i, "", l)); + } + + @Test + public void testStorageFlushed() throws IOException { + testCommand(new WireCommands.StorageFlushed(l)); + } + @Test public void testReadSegment() throws IOException { testCommand(new WireCommands.ReadSegment(testString1, l, i, "", l)); @@ -601,12 +613,22 @@ public void testStreamSegmentInfo() throws IOException { @Test public void testCreateSegment() throws IOException { - testCommand(new WireCommands.CreateSegment(l, testString1, b, i, "")); + testCommand(new WireCommands.CreateSegment(l, testString1, b, i, "", 1024L)); + } + + @Test + public void testGetTableSegmentInfo() throws IOException { + testCommand(new WireCommands.GetTableSegmentInfo(l, testString1, "")); + } + + @Test + public void testTableSegmentInfo() throws IOException { + testCommand(new WireCommands.TableSegmentInfo(l, testString1, l + 1, l + 2, 3, 4)); } @Test public void testCreateTableSegment() throws IOException { - testCommand(new WireCommands.CreateTableSegment(l, testString1, true, 16, "")); + testCommand(new WireCommands.CreateTableSegment(l, testString1, true, 16, "", 1024L)); } @Test @@ -628,11 +650,6 @@ public void testMergeSegments() throws IOException { testCommand(new WireCommands.MergeSegments(l, testString1, testString2, "")); } - @Test - public void testMergeTableSegments() throws IOException { - testCommand(new WireCommands.MergeTableSegments(l, testString1, testString2, "")); - } - @Test public void testSegmentsMerged() throws IOException { testCommand(new 
WireCommands.SegmentsMerged(l, testString1, testString2, -l)); @@ -643,11 +660,6 @@ public void testSealSegment() throws IOException { testCommand(new WireCommands.SealSegment(l, testString1, "")); } - @Test - public void testSealTableSegment() throws IOException { - testCommand(new WireCommands.SealTableSegment(l, testString1, "")); - } - @Test public void testSegmentSealed() throws IOException { testCommand(new WireCommands.SegmentSealed(l, testString1)); @@ -905,14 +917,63 @@ public void testConditionalBlockEnd() throws IOException { ce -> ce.getData().refCnt()); } + @Test + public void testMergeSegmentsWithAttributes() throws IOException { + List attributeUpdates = Arrays.asList( + new WireCommands.ConditionalAttributeUpdate(UUID.randomUUID(), WireCommands.ConditionalAttributeUpdate.REPLACE, 0, Long.MIN_VALUE), + new WireCommands.ConditionalAttributeUpdate(UUID.randomUUID(), WireCommands.ConditionalAttributeUpdate.REPLACE_IF_EQUALS, 0, Long.MIN_VALUE)); + WireCommands.MergeSegments conditionalMergeSegments = new WireCommands.MergeSegments(l, testString1, testString2, + "", attributeUpdates); + testCommand(conditionalMergeSegments); + // Check the size of the ConditionalAttributeUpdate. + assertEquals(attributeUpdates.get(0).size(), 4 * Long.BYTES + 1); + } + + @Data + public static final class MergeSegmentsV5 implements Request, WireCommand { + final WireCommandType type = WireCommandType.MERGE_SEGMENTS; + final long requestId; + final String target; + final String source; + @ToString.Exclude + final String delegationToken; + + public MergeSegmentsV5(long requestId, String target, String source, String delegationToken) { + this.requestId = requestId; + this.target = target; + this.source = source; + this.delegationToken = delegationToken; + } + + @Override + public void process(RequestProcessor cp) {} + + @Override + public void writeFields(DataOutput out) throws IOException { + out.writeLong(requestId); + out.writeUTF(target); + out.writeUTF(source); + out.writeUTF(delegationToken == null ? 
"" : delegationToken); + } + } + + @Test + public void testCompatibilityMergeSegmentsV5() throws IOException { + // Test that we are able to decode a message with a previous version + ByteArrayOutputStream bout = new ByteArrayOutputStream(); + MergeSegmentsV5 commandV5 = new MergeSegmentsV5(l, testString1, testString2, ""); + commandV5.writeFields(new DataOutputStream(bout)); + testCommandFromByteArray(bout.toByteArray(), new WireCommands.MergeSegments(l, testString1, testString2, "", Collections.emptyList())); + } + @Test public void testErrorMessage() throws IOException { for (WireCommands.ErrorMessage.ErrorCode code : WireCommands.ErrorMessage.ErrorCode.values()) { - Class exceptionType = code.getExceptionType(); + Class exceptionType = code.getExceptionType(); WireCommands.ErrorMessage cmd = new WireCommands.ErrorMessage(1, "segment", testString1, code); testCommand(cmd); - assertTrue(cmd.getErrorCode().getExceptionType().equals(exceptionType)); - assertTrue(WireCommands.ErrorMessage.ErrorCode.valueOf(exceptionType).equals(code)); + assertEquals(cmd.getErrorCode().getExceptionType(), exceptionType); + assertEquals(WireCommands.ErrorMessage.ErrorCode.valueOf(exceptionType), code); RuntimeException exception = cmd.getThrowableException(); AssertExtensions.assertThrows(exceptionType, () -> { diff --git a/shared/protocol/src/test/java/io/pravega/shared/watermarks/WatermarksTest.java b/shared/protocol/src/test/java/io/pravega/shared/watermarks/WatermarksTest.java index b7d3d2e10ca..0f0c5cdb203 100644 --- a/shared/protocol/src/test/java/io/pravega/shared/watermarks/WatermarksTest.java +++ b/shared/protocol/src/test/java/io/pravega/shared/watermarks/WatermarksTest.java @@ -42,7 +42,7 @@ public void testSegmentWithRange() throws IOException { } @Test - public void testWatermark() throws IOException { + public void testWatermark() { SegmentWithRange segmentWithRange1 = new SegmentWithRange(0L, 0.0, 0.5); SegmentWithRange segmentWithRange2 = new SegmentWithRange(1L, 0.5, 1.0); ImmutableMap map = ImmutableMap.of(segmentWithRange1, 1L, segmentWithRange2, 1L); diff --git a/shared/rest/src/main/java/io/pravega/shared/rest/RESTServer.java b/shared/rest/src/main/java/io/pravega/shared/rest/RESTServer.java index 47f392f9c6f..da5632e81b2 100644 --- a/shared/rest/src/main/java/io/pravega/shared/rest/RESTServer.java +++ b/shared/rest/src/main/java/io/pravega/shared/rest/RESTServer.java @@ -71,7 +71,8 @@ protected void startUp() { contextConfigurator.setKeyStoreFile(restServerConfig.getKeyFilePath()); contextConfigurator.setKeyStorePass(JKSHelper.loadPasswordFrom(restServerConfig.getKeyFilePasswordPath())); httpServer = GrizzlyHttpServerFactory.createHttpServer(baseUri, resourceConfig, true, - new SSLEngineConfigurator(contextConfigurator, false, false, false)); + new SSLEngineConfigurator(contextConfigurator, false, false, false) + .setEnabledProtocols(restServerConfig.tlsProtocolVersion())); } else { httpServer = GrizzlyHttpServerFactory.createHttpServer(baseUri, resourceConfig, true); } diff --git a/shared/rest/src/main/java/io/pravega/shared/rest/RESTServerConfig.java b/shared/rest/src/main/java/io/pravega/shared/rest/RESTServerConfig.java index 3ea9d4c6c5a..40c103e5395 100644 --- a/shared/rest/src/main/java/io/pravega/shared/rest/RESTServerConfig.java +++ b/shared/rest/src/main/java/io/pravega/shared/rest/RESTServerConfig.java @@ -41,6 +41,12 @@ public interface RESTServerConfig extends ServerConfig { */ boolean isTlsEnabled(); + /** + * Version for the TLS protocol. 
+ * @return The TLS protocol versions to be enabled. + */ + String[] tlsProtocolVersion(); + /** * Path to a file which contains the key file for the TLS connection. * @return File which contains the key file for the TLS connection. diff --git a/shared/rest/src/main/java/io/pravega/shared/rest/impl/RESTServerConfigImpl.java b/shared/rest/src/main/java/io/pravega/shared/rest/impl/RESTServerConfigImpl.java index a53961bc614..701cc936ec5 100644 --- a/shared/rest/src/main/java/io/pravega/shared/rest/impl/RESTServerConfigImpl.java +++ b/shared/rest/src/main/java/io/pravega/shared/rest/impl/RESTServerConfigImpl.java @@ -22,24 +22,34 @@ import lombok.Builder; import lombok.Getter; +import java.util.Arrays; import java.util.Properties; /** * REST server config. */ @Getter +@Builder public class RESTServerConfigImpl implements RESTServerConfig { private final String host; private final int port; private final boolean authorizationEnabled; private final String userPasswordFile; private final boolean tlsEnabled; + private final String[] tlsProtocolVersion; private final String keyFilePath; private final String keyFilePasswordPath; - @Builder + public static final class RESTServerConfigImplBuilder { + private String[] tlsProtocolVersion = new String[] {"TLSv1.2", "TLSv1.3"}; + + public RESTServerConfigImpl build() { + return new RESTServerConfigImpl(host, port, authorizationEnabled, userPasswordFile, tlsEnabled, tlsProtocolVersion, keyFilePath, keyFilePasswordPath); + } + } + RESTServerConfigImpl(final String host, final int port, boolean authorizationEnabled, String userPasswordFile, - boolean tlsEnabled, String keyFilePath, String keyFilePasswordPath) { + boolean tlsEnabled, String[] tlsProtocolVersion, String keyFilePath, String keyFilePasswordPath) { Exceptions.checkNotNullOrEmpty(host, "host"); Exceptions.checkArgument(port > 0, "port", "Should be positive integer"); Exceptions.checkArgument(!tlsEnabled || !Strings.isNullOrEmpty(keyFilePath), @@ -48,6 +58,7 @@ public class RESTServerConfigImpl implements RESTServerConfig { this.host = host; this.port = port; this.tlsEnabled = tlsEnabled; + this.tlsProtocolVersion = Arrays.copyOf(tlsProtocolVersion, tlsProtocolVersion.length); this.keyFilePath = keyFilePath; this.keyFilePasswordPath = keyFilePasswordPath; this.authorizationEnabled = authorizationEnabled; @@ -63,6 +74,7 @@ public String toString() { .append(String.format("host: %s, ", host)) .append(String.format("port: %d, ", port)) .append(String.format("tlsEnabled: %b, ", tlsEnabled)) + .append(String.format("tlsProtocolVersion: %s, ", Arrays.toString(tlsProtocolVersion))) .append(String.format("keyFilePath is %s, ", Strings.isNullOrEmpty(keyFilePath) ?
"unspecified" : "specified")) .append(String.format("keyFilePasswordPath is %s", @@ -84,6 +96,11 @@ public boolean isTlsEnabled() { return this.tlsEnabled; } + @Override + public String[] tlsProtocolVersion() { + return Arrays.copyOf(this.tlsProtocolVersion, this.tlsProtocolVersion.length); + } + @Override public Properties toAuthHandlerProperties() { Properties props = new Properties(); diff --git a/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerConfigImplTests.java b/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerConfigImplTests.java index c0be3ec3aa3..c1221af8911 100644 --- a/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerConfigImplTests.java +++ b/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerConfigImplTests.java @@ -16,6 +16,7 @@ package io.pravega.shared.rest.impl; import io.pravega.shared.rest.RESTServerConfig; +import io.pravega.test.common.SecurityConfigDefaults; import org.junit.Test; import static org.junit.Assert.assertNotNull; @@ -28,15 +29,16 @@ public class RESTServerConfigImplTests { @Test public void testToStringIsSuccessfulWithAllConfigSpecified() { - RESTServerConfig config = new RESTServerConfigImpl("localhost", 2020, true, "/passwd", true, - "/rest.keystore.jks", "/keystore.jks.passwd"); + RESTServerConfig config = RESTServerConfigImpl.builder().host("localhost").port(2020).authorizationEnabled(true) + .userPasswordFile("/passwd").tlsEnabled(true).tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) + .keyFilePath("/rest.keystore.jks").keyFilePasswordPath("/keystore.jks.passwd").build(); assertNotNull(config.toString()); } @Test public void testToStringIsSuccessfulWithTlsDisabled() { - RESTServerConfig config = new RESTServerConfigImpl("localhost", 2020, false, null, false, - null, null); + RESTServerConfig config = RESTServerConfigImpl.builder().host("localhost").port(2020).authorizationEnabled(false) + .userPasswordFile(null).tlsEnabled(false).keyFilePath(null).keyFilePasswordPath(null).build(); assertNotNull(config.toString()); } diff --git a/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerTest.java b/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerTest.java index b46170f79fa..bac6e5af922 100644 --- a/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerTest.java +++ b/shared/rest/src/test/java/io/pravega/shared/rest/impl/RESTServerTest.java @@ -43,7 +43,7 @@ public abstract class RESTServerTest { @Rule - public final Timeout globalTimeout = new Timeout(10, TimeUnit.SECONDS); + public final Timeout globalTimeout = new Timeout(20, TimeUnit.SECONDS); private RESTServerConfig serverConfig; private RESTServer restServer; @@ -124,6 +124,7 @@ protected String getURLScheme() { RESTServerConfig getServerConfig() throws Exception { return RESTServerConfigImpl.builder().host("localhost").port(TestUtils.getAvailableListenPort()) .tlsEnabled(true) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .keyFilePath(getResourcePath(SecurityConfigDefaults.TLS_SERVER_KEYSTORE_NAME)) .keyFilePasswordPath(getResourcePath(SecurityConfigDefaults.TLS_PASSWORD_FILE_NAME)) .build(); @@ -136,6 +137,7 @@ public static class FailingSecureRESTServerTest extends SecureRESTServerTest { RESTServerConfig getServerConfig() throws Exception { return RESTServerConfigImpl.builder().host("localhost").port(TestUtils.getAvailableListenPort()) .tlsEnabled(true) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) 
.keyFilePath(getResourcePath(SecurityConfigDefaults.TLS_SERVER_KEYSTORE_NAME)) .keyFilePasswordPath("Wrong_Path") .build(); diff --git a/shared/rest/src/test/java/io/pravega/shared/rest/security/PravegaAuthManagerTest.java b/shared/rest/src/test/java/io/pravega/shared/rest/security/PravegaAuthManagerTest.java index 4f4fa7e5640..196c5b8252b 100644 --- a/shared/rest/src/test/java/io/pravega/shared/rest/security/PravegaAuthManagerTest.java +++ b/shared/rest/src/test/java/io/pravega/shared/rest/security/PravegaAuthManagerTest.java @@ -93,7 +93,7 @@ public class PravegaAuthManagerTest { } @Rule - public Timeout globalTimeout = new Timeout(30, TimeUnit.HOURS); + public Timeout globalTimeout = new Timeout(30, TimeUnit.SECONDS); @BeforeClass public static void before() { diff --git a/shared/security/src/main/java/io/pravega/shared/security/crypto/StrongPasswordProcessor.java b/shared/security/src/main/java/io/pravega/shared/security/crypto/StrongPasswordProcessor.java index ee7a80e844e..adbdd0f9c2a 100644 --- a/shared/security/src/main/java/io/pravega/shared/security/crypto/StrongPasswordProcessor.java +++ b/shared/security/src/main/java/io/pravega/shared/security/crypto/StrongPasswordProcessor.java @@ -101,7 +101,7 @@ private byte[] getSalt() throws NoSuchAlgorithmException { return salt; } - private String toHex(byte[] array) throws NoSuchAlgorithmException { + private String toHex(byte[] array) { BigInteger bi = new BigInteger(1, array); String hex = bi.toString(16); int paddingLength = (array.length * 2) - hex.length(); diff --git a/standalone/src/main/java/io/pravega/local/InProcPravegaCluster.java b/standalone/src/main/java/io/pravega/local/InProcPravegaCluster.java index bfbe33f555b..8e9796ca296 100644 --- a/standalone/src/main/java/io/pravega/local/InProcPravegaCluster.java +++ b/standalone/src/main/java/io/pravega/local/InProcPravegaCluster.java @@ -17,6 +17,7 @@ import com.google.common.base.Preconditions; import com.google.common.base.Strings; +import io.pravega.common.security.TLSProtocolVersion; import io.pravega.shared.security.auth.DefaultCredentials; import io.pravega.common.function.Callbacks; import io.pravega.common.security.ZKTLSUtils; @@ -52,6 +53,7 @@ import java.util.Arrays; import java.util.Optional; import java.util.UUID; +import java.util.stream.Collectors; import javax.annotation.concurrent.GuardedBy; import lombok.Builder; @@ -70,6 +72,7 @@ public class InProcPravegaCluster implements AutoCloseable { private static final String LOCALHOST = "localhost"; + private static final String ALL_INTERFACES = "0.0.0.0"; private static final int THREADPOOL_SIZE = 20; private boolean isInMemStorage; @@ -79,6 +82,7 @@ public class InProcPravegaCluster implements AutoCloseable { /*Enabling this will configure security for the singlenode with hardcoded cert files and creds.*/ private boolean enableAuth; private boolean enableTls; + private String[] tlsProtocolVersion; private boolean enableTlsReload; @@ -144,6 +148,7 @@ public static final class InProcPravegaClusterBuilder { private int containerCount = 4; private boolean enableRestServer = true; private boolean replyWithStackTraceOnError = true; + private String[] tlsProtocolVersion = new TLSProtocolVersion(SingleNodeConfig.TLS_PROTOCOL_VERSION.getDefaultValue()).getProtocols(); public InProcPravegaCluster build() { //Check for valid combinations of flags @@ -169,7 +174,7 @@ public InProcPravegaCluster build() { "TLS enabled, but not all parameters set"); this.isInProcHDFS = !this.isInMemStorage; - return new 
InProcPravegaCluster(isInMemStorage, enableAuth, enableTls, enableTlsReload, + return new InProcPravegaCluster(isInMemStorage, enableAuth, enableTls, tlsProtocolVersion, enableTlsReload, enableMetrics, enableInfluxDB, metricsReportInterval, isInProcController, controllerCount, controllerPorts, controllerURI, restServerPort, isInProcSegmentStore, segmentStoreCount, segmentStorePorts, isInProcZK, zkPort, zkHost, @@ -296,10 +301,13 @@ private void startLocalSegmentStore(int segmentStoreId) throws Exception { .with(ServiceConfig.LISTENING_PORT, this.segmentStorePorts[segmentStoreId]) .with(ServiceConfig.CLUSTER_NAME, this.clusterName) .with(ServiceConfig.ENABLE_TLS, this.enableTls) + .with(ServiceConfig.TLS_PROTOCOL_VERSION, Arrays.stream(this.tlsProtocolVersion).collect(Collectors.joining(","))) .with(ServiceConfig.KEY_FILE, this.keyFile) + .with(ServiceConfig.REST_KEYSTORE_FILE, this.jksKeyFile) + .with(ServiceConfig.REST_KEYSTORE_PASSWORD_FILE, this.keyPasswordFile) .with(ServiceConfig.CERT_FILE, this.certFile) .with(ServiceConfig.ENABLE_TLS_RELOAD, this.enableTlsReload) - .with(ServiceConfig.LISTENING_IP_ADDRESS, LOCALHOST) + .with(ServiceConfig.LISTENING_IP_ADDRESS, ALL_INTERFACES) .with(ServiceConfig.PUBLISHED_IP_ADDRESS, LOCALHOST) .with(ServiceConfig.CACHE_POLICY_MAX_TIME, 60) .with(ServiceConfig.CACHE_POLICY_MAX_SIZE, 128 * 1024 * 1024L) @@ -308,12 +316,12 @@ private void startLocalSegmentStore(int segmentStoreId) throws Exception { ServiceConfig.DataLogType.BOOKKEEPER) .with(ServiceConfig.STORAGE_LAYOUT, StorageLayoutType.ROLLING_STORAGE) .with(ServiceConfig.STORAGE_IMPLEMENTATION, isInMemStorage ? - ServiceConfig.StorageType.INMEMORY : - ServiceConfig.StorageType.FILESYSTEM) + ServiceConfig.StorageType.INMEMORY.name() : + ServiceConfig.StorageType.FILESYSTEM.name()) .with(ServiceConfig.ENABLE_ADMIN_GATEWAY, this.enableAdminGateway) .with(ServiceConfig.ADMIN_GATEWAY_PORT, this.adminGatewayPort) .with(ServiceConfig.REPLY_WITH_STACK_TRACE_ON_ERROR, this.replyWithStackTraceOnError) - .with(ServiceConfig.REST_LISTENING_PORT, this.restServerPort + segmentStoreId + 1) + .with(ServiceConfig.REST_LISTENING_PORT, ServiceConfig.REST_LISTENING_PORT.getDefaultValue() + segmentStoreId) .with(ServiceConfig.REST_LISTENING_ENABLE, this.enableRestServer)) .include(DurableLogConfig.builder() .with(DurableLogConfig.CHECKPOINT_COMMIT_COUNT, 100) @@ -407,6 +415,7 @@ private ControllerServiceMain startLocalController(int controllerId) { .publishedRPCPort(this.controllerPorts[controllerId]) .authorizationEnabled(this.enableAuth) .tlsEnabled(this.enableTls) + .tlsProtocolVersion(this.tlsProtocolVersion) .tlsTrustStore(this.certFile) .tlsCertFile(this.certFile) .tlsKeyFile(this.keyFile) @@ -423,6 +432,7 @@ private ControllerServiceMain startLocalController(int controllerId) { .host("0.0.0.0") .port(this.restServerPort) .tlsEnabled(this.enableTls) + .tlsProtocolVersion(this.tlsProtocolVersion) .keyFilePath(this.jksKeyFile) .keyFilePasswordPath(this.keyPasswordFile) .build(); diff --git a/standalone/src/main/java/io/pravega/local/LocalPravegaEmulator.java b/standalone/src/main/java/io/pravega/local/LocalPravegaEmulator.java index 77f4e95fb55..79b19ccc563 100644 --- a/standalone/src/main/java/io/pravega/local/LocalPravegaEmulator.java +++ b/standalone/src/main/java/io/pravega/local/LocalPravegaEmulator.java @@ -15,6 +15,7 @@ */ package io.pravega.local; +import io.pravega.common.security.TLSProtocolVersion; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import lombok.Builder; import 
lombok.Getter; @@ -37,6 +38,7 @@ public class LocalPravegaEmulator implements AutoCloseable { private boolean enableRestServer; private boolean enableAuth; private boolean enableTls; + private String[] tlsProtocolVersion; private String certFile; private String passwd; private String userName; @@ -56,6 +58,7 @@ public class LocalPravegaEmulator implements AutoCloseable { private final InProcPravegaCluster inProcPravegaCluster; public static final class LocalPravegaEmulatorBuilder { + private String[] tlsProtocolVersion = new TLSProtocolVersion(SingleNodeConfig.TLS_PROTOCOL_VERSION.getDefaultValue()).getProtocols(); public LocalPravegaEmulator build() { this.inProcPravegaCluster = InProcPravegaCluster .builder() @@ -73,6 +76,7 @@ public LocalPravegaEmulator build() { .enableRestServer(enableRestServer) .enableAuth(enableAuth) .enableTls(enableTls) + .tlsProtocolVersion(tlsProtocolVersion) .certFile(certFile) .keyFile(keyFile) .enableTlsReload(enableTlsReload) @@ -92,7 +96,7 @@ public LocalPravegaEmulator build() { this.inProcPravegaCluster.setControllerPorts(new int[]{controllerPort}); this.inProcPravegaCluster.setSegmentStorePorts(new int[]{segmentStorePort}); return new LocalPravegaEmulator(zkPort, controllerPort, segmentStorePort, restServerPort, enableRestServer, - enableAuth, enableTls, certFile, passwd, userName, passwdFile, keyFile, enableTlsReload, + enableAuth, enableTls, tlsProtocolVersion, certFile, passwd, userName, passwdFile, keyFile, enableTlsReload, jksKeyFile, jksTrustFile, keyPasswordFile, enableMetrics, enableInfluxDB, metricsReportInterval, enabledAdminGateway, adminGatewayPort, inProcPravegaCluster); } @@ -115,6 +119,7 @@ public static void main(String[] args) { .enableRestServer(conf.isEnableRestServer()) .enableAuth(conf.isEnableAuth()) .enableTls(conf.isEnableTls()) + .tlsProtocolVersion(conf.getTlsProtocolVersion()) .enableMetrics(conf.isEnableMetrics()) .enableInfluxDB(conf.isEnableInfluxDB()) .metricsReportInterval(conf.getMetricsReportInterval()) diff --git a/standalone/src/main/java/io/pravega/local/SingleNodeConfig.java b/standalone/src/main/java/io/pravega/local/SingleNodeConfig.java index e436d2c7fb3..1f2dadd0f6f 100644 --- a/standalone/src/main/java/io/pravega/local/SingleNodeConfig.java +++ b/standalone/src/main/java/io/pravega/local/SingleNodeConfig.java @@ -15,6 +15,7 @@ */ package io.pravega.local; +import io.pravega.common.security.TLSProtocolVersion; import io.pravega.common.util.ConfigBuilder; import io.pravega.common.util.Property; import io.pravega.common.util.TypedProperties; @@ -39,6 +40,7 @@ public class SingleNodeConfig { // TLS-related configurations public final static Property ENABLE_TLS = Property.named("security.tls.enable", false, "enableTls"); + public final static Property TLS_PROTOCOL_VERSION = Property.named("security.tls.protocolVersion", "TLSv1.2,TLSv1.3"); public final static Property KEY_FILE = Property.named("security.tls.privateKey.location", "", "keyFile"); public final static Property CERT_FILE = Property.named("security.tls.certificate.location", "", "certFile"); public final static Property KEYSTORE_JKS = Property.named("security.tls.keyStore.location", "", "keyStoreJKS"); @@ -146,6 +148,12 @@ public class SingleNodeConfig { @Getter private boolean enableTls; + /** + * The TLS protocol versions to enable. + */ + @Getter + private String[] tlsProtocolVersion; + /** * Flag to enable auth.
*/ @@ -198,6 +206,7 @@ private SingleNodeConfig(TypedProperties properties) { this.passwd = properties.get(PASSWD); this.enableRestServer = properties.getBoolean(ENABLE_REST_SERVER); this.enableTls = properties.getBoolean(ENABLE_TLS); + this.tlsProtocolVersion = new TLSProtocolVersion(properties.get(TLS_PROTOCOL_VERSION)).getProtocols(); this.enableAuth = properties.getBoolean(ENABLE_AUTH); this.keyStoreJKS = properties.get(KEYSTORE_JKS); this.keyStoreJKSPasswordFile = properties.get(KEYSTORE_JKS_PASSWORD_FILE); diff --git a/standalone/src/test/java/io/pravega/local/AuthEnabledInProcPravegaClusterTest.java b/standalone/src/test/java/io/pravega/local/AuthEnabledInProcPravegaClusterTest.java index 6d13b545d43..531b548df41 100644 --- a/standalone/src/test/java/io/pravega/local/AuthEnabledInProcPravegaClusterTest.java +++ b/standalone/src/test/java/io/pravega/local/AuthEnabledInProcPravegaClusterTest.java @@ -41,7 +41,7 @@ public class AuthEnabledInProcPravegaClusterTest { @ClassRule - public static final PravegaEmulatorResource EMULATOR = new PravegaEmulatorResource(true, false, false); + public static final PravegaEmulatorResource EMULATOR = PravegaEmulatorResource.builder().authEnabled(true).build(); final String scope = "AuthTestScope"; final String stream = "AuthTestStream"; final String msg = "Test message on the plaintext channel with auth credentials"; diff --git a/standalone/src/test/java/io/pravega/local/InProcPravegaClusterTest.java b/standalone/src/test/java/io/pravega/local/InProcPravegaClusterTest.java index 5004f279635..f77324b6150 100644 --- a/standalone/src/test/java/io/pravega/local/InProcPravegaClusterTest.java +++ b/standalone/src/test/java/io/pravega/local/InProcPravegaClusterTest.java @@ -32,7 +32,7 @@ public class InProcPravegaClusterTest { @ClassRule - public static final PravegaEmulatorResource EMULATOR = new PravegaEmulatorResource(false, false, false); + public static final PravegaEmulatorResource EMULATOR = PravegaEmulatorResource.builder().build(); final String msg = "Test message on the plaintext channel"; /** diff --git a/standalone/src/test/java/io/pravega/local/PravegaEmulatorResource.java b/standalone/src/test/java/io/pravega/local/PravegaEmulatorResource.java index bbf8f9c5976..e00cd373511 100644 --- a/standalone/src/test/java/io/pravega/local/PravegaEmulatorResource.java +++ b/standalone/src/test/java/io/pravega/local/PravegaEmulatorResource.java @@ -22,12 +22,14 @@ import io.pravega.shared.security.auth.DefaultCredentials; import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.common.TestUtils; +import lombok.Builder; import lombok.Cleanup; import lombok.SneakyThrows; import lombok.extern.slf4j.Slf4j; import org.junit.rules.ExternalResource; import java.net.URI; +import java.util.Arrays; import static org.junit.Assert.assertNotNull; @@ -49,20 +51,37 @@ * */ @Slf4j +@Builder public class PravegaEmulatorResource extends ExternalResource { final boolean authEnabled; final boolean tlsEnabled; + final boolean restEnabled; + final String[] tlsProtocolVersion; final LocalPravegaEmulator pravega; + public static final class PravegaEmulatorResourceBuilder { + boolean authEnabled = false; + boolean tlsEnabled = false; + boolean restEnabled = false; + String[] tlsProtocolVersion = SecurityConfigDefaults.TLS_PROTOCOL_VERSION; + + public PravegaEmulatorResource build() { + return new PravegaEmulatorResource(authEnabled, tlsEnabled, restEnabled, tlsProtocolVersion); + } + } /** * Create an instance of Pravega Emulator resource. 
* @param authEnabled Authorisation enable flag. * @param tlsEnabled Tls enable flag. * @param restEnabled REST endpoint enable flag. + * @param tlsProtocolVersion TLS protocol versions to enable. */ - public PravegaEmulatorResource(boolean authEnabled, boolean tlsEnabled, boolean restEnabled) { + public PravegaEmulatorResource(boolean authEnabled, boolean tlsEnabled, boolean restEnabled, String[] tlsProtocolVersion) { this.authEnabled = authEnabled; this.tlsEnabled = tlsEnabled; + this.restEnabled = restEnabled; + this.tlsProtocolVersion = Arrays.copyOf(tlsProtocolVersion, tlsProtocolVersion.length); LocalPravegaEmulator.LocalPravegaEmulatorBuilder emulatorBuilder = LocalPravegaEmulator.builder() .controllerPort(TestUtils.getAvailableListenPort()) .segmentStorePort(TestUtils.getAvailableListenPort()) @@ -71,6 +90,7 @@ public PravegaEmulatorResource(boolean authEnabled, boolean tlsEnabled, boolean .enableRestServer(restEnabled) .enableAuth(authEnabled) .enableTls(tlsEnabled) + .tlsProtocolVersion(tlsProtocolVersion) .enabledAdminGateway(true) .adminGatewayPort(TestUtils.getAvailableListenPort()); @@ -95,6 +115,8 @@ public PravegaEmulatorResource(boolean authEnabled, boolean tlsEnabled, boolean pravega = emulatorBuilder.build(); } + + @Override protected void before() throws Exception { pravega.start(); diff --git a/standalone/src/test/java/io/pravega/local/SecurePravegaClusterTest.java b/standalone/src/test/java/io/pravega/local/SecurePravegaClusterTest.java index 82f3c061381..e232be42c07 100644 --- a/standalone/src/test/java/io/pravega/local/SecurePravegaClusterTest.java +++ b/standalone/src/test/java/io/pravega/local/SecurePravegaClusterTest.java @@ -31,7 +31,7 @@ @RunWith(SerializedClassRunner.class) public class SecurePravegaClusterTest { @ClassRule - public static final PravegaEmulatorResource EMULATOR = new PravegaEmulatorResource(true, true, false); + public static final PravegaEmulatorResource EMULATOR = PravegaEmulatorResource.builder().authEnabled(true).tlsEnabled(true).build(); final String scope = "TlsAndAuthTestScope"; final String stream = "TlsAndAuthTestStream"; final String msg = "Test message on the encrypted channel with auth credentials"; diff --git a/standalone/src/test/java/io/pravega/local/TlsEnabledInProcPravegaClusterTest.java b/standalone/src/test/java/io/pravega/local/TlsEnabledInProcPravegaClusterTest.java index 175397ff57d..ebf83359522 100644 --- a/standalone/src/test/java/io/pravega/local/TlsEnabledInProcPravegaClusterTest.java +++ b/standalone/src/test/java/io/pravega/local/TlsEnabledInProcPravegaClusterTest.java @@ -41,7 +41,7 @@ public class TlsEnabledInProcPravegaClusterTest { @ClassRule - public static final PravegaEmulatorResource EMULATOR = new PravegaEmulatorResource(false, true, false); + public static final PravegaEmulatorResource EMULATOR = PravegaEmulatorResource.builder().tlsEnabled(true).build(); final String scope = "TlsTestScope"; final String stream = "TlsTestStream"; final String msg = "Test message on the encrypted channel"; diff --git a/standalone/src/test/java/io/pravega/local/TlsProtocolVersion12Test.java b/standalone/src/test/java/io/pravega/local/TlsProtocolVersion12Test.java new file mode 100644 index 00000000000..63df2c9fc9b --- /dev/null +++ b/standalone/src/test/java/io/pravega/local/TlsProtocolVersion12Test.java @@ -0,0 +1,33 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package io.pravega.local; + +import org.junit.ClassRule; +import org.junit.Test; + +import static io.pravega.local.PravegaSanityTests.testWriteAndReadAnEvent; + +public class TlsProtocolVersion12Test { + + @ClassRule + public static final PravegaEmulatorResource EMULATOR = PravegaEmulatorResource.builder().tlsEnabled(true).tlsProtocolVersion(new String[] {"TLSv1.2"}).build(); + + @Test(timeout = 30000) + public void testTlsProtocolVersiontls1_2() throws Exception { + testWriteAndReadAnEvent("tls12scope", "tls12stream", "Test message on the TLSv1.2 encrypted channel", + EMULATOR.getClientConfig()); + } +} diff --git a/standalone/src/test/java/io/pravega/local/TlsProtocolVersion13Test.java b/standalone/src/test/java/io/pravega/local/TlsProtocolVersion13Test.java new file mode 100644 index 00000000000..fb8482bfa1a --- /dev/null +++ b/standalone/src/test/java/io/pravega/local/TlsProtocolVersion13Test.java @@ -0,0 +1,33 @@ +/** + * Copyright Pravega Authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package io.pravega.local; + +import org.junit.ClassRule; +import org.junit.Test; + +import static io.pravega.local.PravegaSanityTests.testWriteAndReadAnEvent; + +public class TlsProtocolVersion13Test { + + @ClassRule + public static final PravegaEmulatorResource EMULATOR = PravegaEmulatorResource.builder().tlsEnabled(true).tlsProtocolVersion(new String[] {"TLSv1.3"}).build(); + + @Test(timeout = 30000) + public void testTlsProtocolVersiontls1_3() throws Exception { + testWriteAndReadAnEvent("tls13scope", "tls13stream", "Test message on the TLSv1.3 encrypted channel", + EMULATOR.getClientConfig()); + } +} diff --git a/test/integration/src/main/java/io/pravega/test/integration/demo/ClusterWrapper.java b/test/integration/src/main/java/io/pravega/test/integration/demo/ClusterWrapper.java index c763aeec1c2..915728ab600 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/demo/ClusterWrapper.java +++ b/test/integration/src/main/java/io/pravega/test/integration/demo/ClusterWrapper.java @@ -28,6 +28,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.segmentstore.server.store.ServiceConfig; import io.pravega.segmentstore.storage.DurableDataLogException; +import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.common.TestUtils; import io.pravega.test.common.TestingServerStarter; import io.pravega.shared.security.auth.PasswordAuthHandlerInput; @@ -42,6 +43,7 @@ import java.io.File; import java.security.NoSuchAlgorithmException; import java.security.spec.InvalidKeySpecException; +import java.time.Duration; import java.util.Arrays; import java.util.List; import java.util.UUID; @@ -122,6 +124,10 @@ public class ClusterWrapper implements AutoCloseable { @Builder.Default private boolean tlsEnabled = false; + @Getter + @Builder.Default + private String[] tlsProtocolVersion = SecurityConfigDefaults.TLS_PROTOCOL_VERSION; + @Builder.Default private boolean controllerRestEnabled = false; @@ -141,6 +147,10 @@ public class ClusterWrapper implements AutoCloseable { @Getter private String tlsServerKeystorePasswordPath; + @Getter + @Builder.Default + private Duration accessTokenTtl = Duration.ofSeconds(300); + private ClusterWrapper() {} @SneakyThrows @@ -217,7 +227,7 @@ private void startSegmentStore() throws DurableDataLogException { segmentStoreServer = new PravegaConnectionListener(this.tlsEnabled, false, "localhost", segmentStorePort, store, tableStore, SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), authEnabled ? 
new TokenVerifierImpl(tokenSigningKeyBasis) : null, - this.tlsServerCertificatePath, this.tlsServerKeyPath, true, serviceBuilder.getLowPriorityExecutor()); + this.tlsServerCertificatePath, this.tlsServerKeyPath, true, serviceBuilder.getLowPriorityExecutor(), tlsProtocolVersion); segmentStoreServer.startListening(); } @@ -256,6 +266,7 @@ private ControllerWrapper createControllerWrapper() { .isRGWritesWithReadPermEnabled(rgWritesWithReadPermEnabled) .accessTokenTtlInSeconds(tokenTtlInSeconds) .enableTls(tlsEnabled) + .tlsProtocolVersion(tlsProtocolVersion) .serverCertificatePath(tlsServerCertificatePath) .serverKeyPath(tlsServerKeyPath) .serverKeystorePath(tlsServerKeystorePath) diff --git a/test/integration/src/main/java/io/pravega/test/integration/demo/ControllerWrapper.java b/test/integration/src/main/java/io/pravega/test/integration/demo/ControllerWrapper.java index 3ddcd82c32e..84ce200371d 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/demo/ControllerWrapper.java +++ b/test/integration/src/main/java/io/pravega/test/integration/demo/ControllerWrapper.java @@ -44,6 +44,7 @@ import java.util.UUID; import java.util.concurrent.TimeUnit; +import io.pravega.test.common.SecurityConfigDefaults; import lombok.Builder; import lombok.SneakyThrows; import lombok.extern.slf4j.Slf4j; @@ -113,7 +114,7 @@ public ControllerWrapper(final String connectionString, final boolean disableEve int accessTokenTtlInSeconds) { this (connectionString, disableEventProcessor, disableControllerCluster, controllerPort, serviceHost, servicePort, containerCount, restPort, enableAuth, passwordAuthHandlerInputFilePath, tokenSigningKey, - isRGWritesWithReadPermEnabled, accessTokenTtlInSeconds, false, "", "", "", ""); + isRGWritesWithReadPermEnabled, accessTokenTtlInSeconds, false, SecurityConfigDefaults.TLS_PROTOCOL_VERSION, "", "", "", ""); } @Builder @@ -123,7 +124,7 @@ public ControllerWrapper(final String connectionString, final boolean disableEve final int containerCount, int restPort, boolean enableAuth, String passwordAuthHandlerInputFilePath, String tokenSigningKey, boolean isRGWritesWithReadPermEnabled, - int accessTokenTtlInSeconds, boolean enableTls, String serverCertificatePath, + int accessTokenTtlInSeconds, boolean enableTls, String[] tlsProtocolVersion, String serverCertificatePath, String serverKeyPath, String serverKeystorePath, String serverKeystorePasswordPath) { ZKClientConfig zkClientConfig = ZKClientConfigImpl.builder().connectionString(connectionString) @@ -181,6 +182,7 @@ public ControllerWrapper(final String connectionString, final boolean disableEve .isRGWritesWithReadPermEnabled(isRGWritesWithReadPermEnabled) .userPasswordFile(passwordAuthHandlerInputFilePath) .tlsEnabled(enableTls) + .tlsProtocolVersion(tlsProtocolVersion) .tlsTrustStore(serverCertificatePath) .tlsCertFile(serverCertificatePath) .tlsKeyFile(serverKeyPath) @@ -189,6 +191,7 @@ public ControllerWrapper(final String connectionString, final boolean disableEve Optional restServerConfig = restPort > 0 ? 
Optional.of(RESTServerConfigImpl.builder().host("localhost").port(restPort) .tlsEnabled(enableTls) + .tlsProtocolVersion(tlsProtocolVersion) .keyFilePath(serverKeystorePath) .keyFilePasswordPath(serverKeystorePasswordPath) .build()) : diff --git a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleDownTest.java b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleDownTest.java index 086a2e4be5b..996973eb490 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleDownTest.java +++ b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleDownTest.java @@ -35,6 +35,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.shared.NameUtils; import io.pravega.test.common.TestingServerStarter; + import java.util.Collections; import java.util.HashMap; import java.util.Map; @@ -77,7 +78,7 @@ public static void main(String[] args) throws Exception { @Cleanup PravegaConnectionListener server = new PravegaConnectionListener(false, false, "localhost", 12345, store, tableStore, autoScaleMonitor.getStatsRecorder(), autoScaleMonitor.getTableSegmentStatsRecorder(), null, null, null, true, - serviceBuilder.getLowPriorityExecutor()); + serviceBuilder.getLowPriorityExecutor(), Config.TLS_PROTOCOL_VERSION.toArray(new String[Config.TLS_PROTOCOL_VERSION.size()])); server.startListening(); controllerWrapper.awaitRunning(); controllerWrapper.getControllerService().createScope("test", 0L).get(); diff --git a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpTest.java b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpTest.java index f1c0ccf8495..20377dae8d7 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpTest.java +++ b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpTest.java @@ -78,7 +78,7 @@ public static void main(String[] args) throws Exception { @Cleanup PravegaConnectionListener server = new PravegaConnectionListener(false, false, "localhost", 12345, store, tableStore, autoScaleMonitor.getStatsRecorder(), autoScaleMonitor.getTableSegmentStatsRecorder(), null, null, null, true, - serviceBuilder.getLowPriorityExecutor()); + serviceBuilder.getLowPriorityExecutor(), Config.TLS_PROTOCOL_VERSION.toArray(new String[Config.TLS_PROTOCOL_VERSION.size()])); server.startListening(); controllerWrapper.awaitRunning(); diff --git a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpWithTxnTest.java b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpWithTxnTest.java index fcaae19191f..af1a8e71169 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpWithTxnTest.java +++ b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndAutoScaleUpWithTxnTest.java @@ -47,6 +47,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.shared.NameUtils; import io.pravega.test.common.TestingServerStarter; + import java.util.Collections; import java.util.HashMap; import java.util.Map; @@ -97,7 +98,7 @@ public static void main(String[] args) throws Exception { @Cleanup PravegaConnectionListener server = new PravegaConnectionListener(false, false, "localhost", 12345, store, tableStore, autoScaleMonitor.getStatsRecorder(), autoScaleMonitor.getTableSegmentStatsRecorder(), null, null, null, 
true, - serviceBuilder.getLowPriorityExecutor()); + serviceBuilder.getLowPriorityExecutor(), Config.TLS_PROTOCOL_VERSION.toArray(new String[Config.TLS_PROTOCOL_VERSION.size()])); server.startListening(); controllerWrapper.awaitRunning(); diff --git a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndTransactionTest.java b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndTransactionTest.java index e96842ee79d..4c033f13606 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndTransactionTest.java +++ b/test/integration/src/main/java/io/pravega/test/integration/demo/EndToEndTransactionTest.java @@ -33,6 +33,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilder; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.test.common.TestingServerStarter; + import java.util.concurrent.CompletableFuture; import lombok.Cleanup; import lombok.extern.slf4j.Slf4j; @@ -58,7 +59,7 @@ public static void main(String[] args) throws Exception { int port = Config.SERVICE_PORT; @Cleanup PravegaConnectionListener server = new PravegaConnectionListener(false, port, store, - serviceBuilder.createTableStoreService(), serviceBuilder.getLowPriorityExecutor()); + serviceBuilder.createTableStoreService(), serviceBuilder.getLowPriorityExecutor(), Config.TLS_PROTOCOL_VERSION.toArray(new String[Config.TLS_PROTOCOL_VERSION.size()])); server.startListening(); Thread.sleep(1000); diff --git a/test/integration/src/main/java/io/pravega/test/integration/selftest/Reporter.java b/test/integration/src/main/java/io/pravega/test/integration/selftest/Reporter.java index 274ad553214..4f8eee96ad7 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/selftest/Reporter.java +++ b/test/integration/src/main/java/io/pravega/test/integration/selftest/Reporter.java @@ -182,7 +182,7 @@ private void outputRow(Object opType, Object count, Object sum, Object lAvg, Obj } private double toMB(double bytes) { - return bytes / (double) ONE_MB; + return bytes / ONE_MB; } private double toSeconds(long nanos) { diff --git a/test/integration/src/main/java/io/pravega/test/integration/selftest/TestState.java b/test/integration/src/main/java/io/pravega/test/integration/selftest/TestState.java index b690b31b1eb..6bdc8f156bf 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/selftest/TestState.java +++ b/test/integration/src/main/java/io/pravega/test/integration/selftest/TestState.java @@ -588,7 +588,7 @@ synchronized double sum() { sum += (long) this.latencyCounts[i] * i; } - return (double) sum; + return sum; } /** diff --git a/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessListenerWithRealStoreAdapter.java b/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessListenerWithRealStoreAdapter.java index 21bd85c2c75..7fb0bcea195 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessListenerWithRealStoreAdapter.java +++ b/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessListenerWithRealStoreAdapter.java @@ -69,6 +69,7 @@ protected StreamSegmentStore getStreamSegmentStore() { return this.segmentStoreAdapter.getStreamSegmentStore(); } + @Override protected TableStore getTableStore() { return this.segmentStoreAdapter.getTableStore(); } diff --git a/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessMockClientAdapter.java 
b/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessMockClientAdapter.java index be086596593..9c93a583ccc 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessMockClientAdapter.java +++ b/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/InProcessMockClientAdapter.java @@ -44,6 +44,7 @@ import io.pravega.segmentstore.contracts.tables.TableEntry; import io.pravega.segmentstore.contracts.tables.TableKey; import io.pravega.segmentstore.contracts.tables.TableSegmentConfig; +import io.pravega.segmentstore.contracts.tables.TableSegmentInfo; import io.pravega.segmentstore.contracts.tables.TableStore; import io.pravega.segmentstore.server.host.delegationtoken.PassingTokenVerifier; import io.pravega.segmentstore.server.host.handler.PravegaConnectionListener; @@ -53,6 +54,7 @@ import io.pravega.segmentstore.server.host.stat.AutoScalerConfig; import io.pravega.segmentstore.server.host.stat.TableSegmentStatsRecorder; import io.pravega.test.common.NoOpScheduledExecutor; +import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.integration.selftest.TestConfig; import java.time.Duration; import java.util.AbstractMap; @@ -109,7 +111,7 @@ protected void startUp() throws Exception { val store = getStreamSegmentStore(); this.autoScaleMonitor = new AutoScaleMonitor(store, AutoScalerConfig.builder().build()); this.listener = new PravegaConnectionListener(false, false, "localhost", segmentStorePort, store, - getTableStore(), autoScaleMonitor.getStatsRecorder(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), null, null, false, NoOpScheduledExecutor.get()); + getTableStore(), autoScaleMonitor.getStatsRecorder(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), null, null, false, NoOpScheduledExecutor.get(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); this.listener.startListening(); this.streamManager = new MockStreamManager(SCOPE, LISTENING_ADDRESS, segmentStorePort); @@ -274,6 +276,11 @@ public CompletableFuture> getAttributes(String streamSegm }, executor); } + @Override + public CompletableFuture flushToStorage(int containerId, Duration timeout) { + throw new UnsupportedOperationException("flushToStorage"); + } + @Override public CompletableFuture read(String streamSegmentName, long offset, int maxLength, Duration timeout) { throw new UnsupportedOperationException("read"); @@ -284,6 +291,13 @@ public CompletableFuture mergeStreamSegment(String tar throw new UnsupportedOperationException("mergeStreamSegment"); } + @Override + public CompletableFuture mergeStreamSegment(String target, String source, + AttributeUpdateCollection attributeUpdates, + Duration timeout) { + throw new UnsupportedOperationException("mergeStreamSegment"); + } + @Override public CompletableFuture sealStreamSegment(String streamSegmentName, Duration timeout) { throw new UnsupportedOperationException("sealStreamSegment"); @@ -476,13 +490,8 @@ public CompletableFuture>> entryDeltaIter } @Override - public CompletableFuture merge(String targetSegmentName, String sourceSegmentName, Duration timeout) { - throw new UnsupportedOperationException("mergeTableSegments"); - } - - @Override - public CompletableFuture seal(String segmentName, Duration timeout) { - throw new UnsupportedOperationException("sealTableSegment"); + public CompletableFuture getInfo(String segmentName, Duration timeout) { + throw new UnsupportedOperationException("getInfo"); } } } diff --git 
a/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/OutOfProcessAdapter.java b/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/OutOfProcessAdapter.java index 396018c566a..0dc933840f1 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/OutOfProcessAdapter.java +++ b/test/integration/src/main/java/io/pravega/test/integration/selftest/adapters/OutOfProcessAdapter.java @@ -205,6 +205,7 @@ private Process startController(int controllerId) throws IOException { .sysProp(configProperty(Config.COMPONENT_CODE, Config.PROPERTY_PWD_AUTH_HANDLER_ACCOUNTS_STORE), pathOfConfigItem(SecurityConfigDefaults.AUTH_HANDLER_INPUT_FILE_NAME)) .sysProp(configProperty(Config.COMPONENT_CODE, Config.PROPERTY_TLS_ENABLED), this.testConfig.isEnableSecurity()) + .sysProp(configProperty(Config.COMPONENT_CODE, Config.PROPERTY_TLS_PROTOCOL_VERSION), Config.PROPERTY_TLS_PROTOCOL_VERSION.getDefaultValue()) .sysProp(configProperty(Config.COMPONENT_CODE, Config.PROPERTY_TLS_CERT_FILE), pathOfConfigItem(SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME)) .sysProp(configProperty(Config.COMPONENT_CODE, Config.PROPERTY_TLS_TRUST_STORE), pathOfConfigItem(SecurityConfigDefaults.TLS_CA_CERT_FILE_NAME)) .sysProp(configProperty(Config.COMPONENT_CODE, Config.PROPERTY_TLS_KEY_FILE), pathOfConfigItem(SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME)) @@ -238,6 +239,7 @@ private Process startSegmentStore(int segmentStoreId) throws IOException { .sysProp(ServiceBuilderConfig.CONFIG_FILE_PROPERTY_NAME, getSegmentStoreConfigFilePath()) .sysProp(configProperty(ServiceConfig.COMPONENT_CODE, ServiceConfig.ZK_URL), getZkUrl()) .sysProp(configProperty(ServiceConfig.COMPONENT_CODE, ServiceConfig.ENABLE_TLS), this.testConfig.isEnableSecurity()) + .sysProp(configProperty(ServiceConfig.COMPONENT_CODE, ServiceConfig.TLS_PROTOCOL_VERSION), Config.PROPERTY_TLS_PROTOCOL_VERSION.getDefaultValue()) .sysProp(configProperty(ServiceConfig.COMPONENT_CODE, ServiceConfig.KEY_FILE), pathOfConfigItem(SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME)) .sysProp(configProperty(ServiceConfig.COMPONENT_CODE, ServiceConfig.CERT_FILE), diff --git a/test/integration/src/main/java/io/pravega/test/integration/utils/SetupUtils.java b/test/integration/src/main/java/io/pravega/test/integration/utils/SetupUtils.java index 9a5e506f93b..a46621b2a8e 100644 --- a/test/integration/src/main/java/io/pravega/test/integration/utils/SetupUtils.java +++ b/test/integration/src/main/java/io/pravega/test/integration/utils/SetupUtils.java @@ -39,8 +39,12 @@ import io.pravega.segmentstore.server.host.delegationtoken.PassingTokenVerifier; import io.pravega.segmentstore.server.host.handler.AdminConnectionListener; import io.pravega.segmentstore.server.host.handler.PravegaConnectionListener; +import io.pravega.segmentstore.server.host.stat.SegmentStatsRecorder; +import io.pravega.segmentstore.server.host.stat.TableSegmentStatsRecorder; import io.pravega.segmentstore.server.store.ServiceBuilder; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; +import io.pravega.shared.security.auth.DefaultCredentials; +import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.common.TestUtils; import io.pravega.test.common.TestingServerStarter; import io.pravega.test.integration.demo.ControllerWrapper; @@ -54,6 +58,8 @@ import lombok.extern.slf4j.Slf4j; import org.apache.curator.test.TestingServer; +import static io.pravega.test.integration.utils.TestUtils.pathToConfig; + 
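The OutOfProcessAdapter changes above forward the new protocol-version setting to the forked Controller and SegmentStore JVMs as -D system properties. A minimal sketch of that mechanism follows; the property key and main class below are illustrative placeholders, not the adapter's real helpers or Pravega's actual property names.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    final class ForkWithTlsProperties {
        // Hands TLS settings to a child JVM as -Dkey=value pairs, in the spirit
        // of the sysProp(...) calls above. Key and main class are placeholders.
        static Process fork(String mainClass, String protocols) throws IOException {
            List<String> cmd = new ArrayList<>();
            cmd.add(System.getProperty("java.home") + "/bin/java");
            cmd.add("-Dexample.security.tls.protocolVersion=" + protocols); // e.g. "TLSv1.2,TLSv1.3"
            cmd.add("-cp");
            cmd.add(System.getProperty("java.class.path"));
            cmd.add(mainClass);
            return new ProcessBuilder(cmd).inheritIO().start();
        }
    }

The child process can then read the value back with System.getProperty and split the comma-separated list into a String[] of enabled protocols.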
/** * Utility functions for creating the test setup. */ @@ -86,18 +92,24 @@ public final class SetupUtils { private final int servicePort = TestUtils.getAvailableListenPort(); @Getter private final int adminPort = TestUtils.getAvailableListenPort(); - @Getter - private final ClientConfig clientConfig = ClientConfig.builder().controllerURI(URI.create("tcp://localhost:" + controllerRPCPort)).build(); - + private ClientConfig.ClientConfigBuilder clientConfigBuilder = ClientConfig.builder(); + + /** + * Returns the client config for this instance. + */ + public ClientConfig getClientConfig() { + return clientConfigBuilder.build(); + } + /** * Start all pravega related services required for the test deployment. * * @throws Exception on any errors. */ public void startAllServices() throws Exception { - startAllServices(null); + startAllServices(null, false, false); } - + /** * Start all pravega related services required for the test deployment. * @@ -105,14 +117,51 @@ public void startAllServices(Integer numThreads) throws Exception { + startAllServices(numThreads, false, false); + } + + /** + * Start all pravega related services required for the test deployment. + * + * @param enableAuth set to enable authentication + * @param enableTls set to enable tls + * @throws Exception on any errors. + */ + public void startAllServices(boolean enableAuth, boolean enableTls) throws Exception { + startAllServices(null, enableAuth, enableTls); + } + + /** + * Start all pravega related services required for the test deployment. + * + * @param numThreads the number of threads for the internal client threadpool. + * @param enableAuth set to enable authentication + * @param enableTls set to enable tls + * @throws Exception on any errors. + */ + public void startAllServices(Integer numThreads, boolean enableAuth, boolean enableTls) throws Exception { if (!this.started.compareAndSet(false, true)) { log.warn("Services already started, not attempting to start again"); return; } + + if (enableAuth) { + clientConfigBuilder = clientConfigBuilder.credentials(new DefaultCredentials(SecurityConfigDefaults.AUTH_ADMIN_PASSWORD, + SecurityConfigDefaults.AUTH_ADMIN_USERNAME)); + } + + if (enableTls) { + clientConfigBuilder = clientConfigBuilder.trustStore(pathToConfig() + SecurityConfigDefaults.TLS_CA_CERT_FILE_NAME) + .controllerURI(URI.create("tls://localhost:" + controllerRPCPort)) + .validateHostName(false); + } else { + clientConfigBuilder = clientConfigBuilder.controllerURI(URI.create("tcp://localhost:" + controllerRPCPort)); + } + this.executor = ExecutorServiceHelpers.newScheduledThreadPool(2, "Controller pool"); this.controller = new ControllerImpl(ControllerImplConfig.builder().clientConfig(getClientConfig()).build(), executor); - this.clientFactory = new ClientFactoryImpl(scope, controller, clientConfig); + this.clientFactory = new ClientFactoryImpl(scope, controller, getClientConfig()); // Start zookeeper.
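As a rough standalone sketch of the client-side wiring just shown (the credential and trust-store literals stand in for the SecurityConfigDefaults constants used in this patch and are placeholders):

    import io.pravega.client.ClientConfig;
    import io.pravega.shared.security.auth.DefaultCredentials;
    import java.net.URI;

    final class ClientConfigSketch {
        // Mirrors the startAllServices() branches above: TLS switches the controller
        // URI scheme to tls:// and trusts the test CA; auth adds default credentials.
        static ClientConfig forTlsAndAuth(int controllerRpcPort) {
            return ClientConfig.builder()
                    .credentials(new DefaultCredentials("1111_aaaa", "admin")) // placeholder password, then user
                    .trustStore("config/cert.pem")                             // placeholder CA certificate path
                    .controllerURI(URI.create("tls://localhost:" + controllerRpcPort))
                    .validateHostName(false)                                   // test certs carry no hostnames
                    .build();
        }
    }

Building the config lazily through a builder, as SetupUtils now does, lets the auth and TLS flags chosen at startAllServices(...) time shape every ClientConfig handed out later.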
this.zkTestServer = new TestingServerStarter().start(); @@ -124,19 +173,32 @@ public void startAllServices(Integer numThreads) throws Exception { serviceBuilder.initialize(); StreamSegmentStore store = serviceBuilder.createStreamSegmentService(); TableStore tableStore = serviceBuilder.createTableStoreService(); - this.server = new PravegaConnectionListener(false, servicePort, store, tableStore, serviceBuilder.getLowPriorityExecutor()); + this.server = new PravegaConnectionListener(enableTls, false, "localhost", + servicePort, store, tableStore, SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), new PassingTokenVerifier(), + pathToConfig() + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME, + pathToConfig() + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME, true, + serviceBuilder.getLowPriorityExecutor(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); + this.server.startListening(); log.info("Started Pravega Service"); - this.adminListener = new AdminConnectionListener(false, false, "localhost", adminPort, - store, tableStore, new PassingTokenVerifier(), null, null); + this.adminListener = new AdminConnectionListener(enableTls, false, "localhost", adminPort, + store, tableStore, new PassingTokenVerifier(), pathToConfig() + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME, + pathToConfig() + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME, SecurityConfigDefaults.TLS_PROTOCOL_VERSION); this.adminListener.startListening(); log.info("AdminConnectionListener started successfully."); // Start Controller. this.controllerWrapper = new ControllerWrapper( - this.zkTestServer.getConnectString(), false, true, controllerRPCPort, "localhost", servicePort, - Config.HOST_STORE_CONTAINER_COUNT, controllerRESTPort); + this.zkTestServer.getConnectString(), false, true, controllerRPCPort, + "localhost", servicePort, Config.HOST_STORE_CONTAINER_COUNT, controllerRESTPort, enableAuth, + pathToConfig() + SecurityConfigDefaults.AUTH_HANDLER_INPUT_FILE_NAME, + "secret", true, 600, enableTls, SecurityConfigDefaults.TLS_PROTOCOL_VERSION, + pathToConfig() + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME, + pathToConfig() + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME, + pathToConfig() + SecurityConfigDefaults.TLS_SERVER_KEYSTORE_NAME, + pathToConfig() + SecurityConfigDefaults.TLS_PASSWORD_FILE_NAME); + this.controllerWrapper.awaitRunning(); this.controllerWrapper.getController().createScope(scope).get(); log.info("Initialized Pravega Controller"); @@ -175,7 +237,7 @@ public void createTestStream(final String streamName, final int numSegments) { Preconditions.checkArgument(numSegments > 0); @Cleanup - StreamManager streamManager = StreamManager.create(clientConfig); + StreamManager streamManager = StreamManager.create(getClientConfig()); streamManager.createScope(scope); streamManager.createStream(scope, streamName, StreamConfiguration.builder() @@ -225,11 +287,11 @@ public ReaderGroupManager createReaderGroupManager(final String streamName) { Preconditions.checkState(this.started.get(), "Services not yet started"); Preconditions.checkNotNull(streamName); - return ReaderGroupManager.withScope(scope, clientConfig); + return ReaderGroupManager.withScope(scope, getClientConfig()); } public URI getControllerUri() { - return clientConfig.getControllerURI(); + return getClientConfig().getControllerURI(); } public URI getControllerRestUri() { diff --git a/test/integration/src/test/java/io/pravega/test/integration/AppendTest.java 
b/test/integration/src/test/java/io/pravega/test/integration/AppendTest.java index b3d8e47cfb6..2bed3666bac 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/AppendTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/AppendTest.java @@ -140,7 +140,7 @@ public void sendReceivingAppend() throws Exception { @Cleanup EmbeddedChannel channel = createChannel(store); - SegmentCreated created = (SegmentCreated) sendRequest(channel, new CreateSegment(1, segment, CreateSegment.NO_SCALE, 0, "")); + SegmentCreated created = (SegmentCreated) sendRequest(channel, new CreateSegment(1, segment, CreateSegment.NO_SCALE, 0, "", 1024L)); assertEquals(segment, created.getSegment()); UUID uuid = UUID.randomUUID(); @@ -165,7 +165,7 @@ public void sendLargeAppend() throws Exception { @Cleanup EmbeddedChannel channel = createChannel(store); - SegmentCreated created = (SegmentCreated) sendRequest(channel, new CreateSegment(1, segment, CreateSegment.NO_SCALE, 0, "")); + SegmentCreated created = (SegmentCreated) sendRequest(channel, new CreateSegment(1, segment, CreateSegment.NO_SCALE, 0, "", 1024L)); assertEquals(segment, created.getSegment()); UUID uuid = UUID.randomUUID(); @@ -189,7 +189,7 @@ public void testMultipleAppends() throws Exception { @Cleanup EmbeddedChannel channel = createChannel(store); - SegmentCreated created = (SegmentCreated) sendRequest(channel, new CreateSegment(1, segment, CreateSegment.NO_SCALE, 0, "")); + SegmentCreated created = (SegmentCreated) sendRequest(channel, new CreateSegment(1, segment, CreateSegment.NO_SCALE, 0, "", 1024L)); assertEquals(segment, created.getSegment()); UUID uuid = UUID.randomUUID(); diff --git a/test/integration/src/test/java/io/pravega/test/integration/BatchClientAuthTest.java b/test/integration/src/test/java/io/pravega/test/integration/BatchClientAuthTest.java index d29cfea3d0a..9e9c6c97061 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/BatchClientAuthTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/BatchClientAuthTest.java @@ -156,6 +156,7 @@ public void testListAndReadSegmentsWithUnauthorizedAccountViaSystemProperties() } private static File createAuthFile() { + @SuppressWarnings("resource") PasswordAuthHandlerInput result = new PasswordAuthHandlerInput("BatchClientAuth", ".txt"); StrongPasswordProcessor passwordProcessor = StrongPasswordProcessor.builder().build(); diff --git a/test/integration/src/test/java/io/pravega/test/integration/BatchClientTest.java b/test/integration/src/test/java/io/pravega/test/integration/BatchClientTest.java index 920357e8ec9..13d04d7e423 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/BatchClientTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/BatchClientTest.java @@ -300,7 +300,7 @@ protected void createTestStreamWithEvents(EventStreamClientFactory clientFactory write30ByteEvents(3, writer); } - private void createStream() throws InterruptedException { + private void createStream() { StreamConfiguration config = StreamConfiguration.builder() .scalingPolicy(ScalingPolicy.fixed(1)) .build(); diff --git a/test/integration/src/test/java/io/pravega/test/integration/ByteStreamTest.java b/test/integration/src/test/java/io/pravega/test/integration/ByteStreamTest.java index 78774829055..256f9a25ab3 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/ByteStreamTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/ByteStreamTest.java @@ -132,13 +132,13 @@ 
public void readWriteTestTruncate() throws IOException { //Truncate data before offset 5 writer.truncateDataBefore(5); - // seek to offset 4 and verify if truncation is successful. - reader.seekToOffset(4); + // seek to an offset below the new head and verify that reading truncated data fails. + reader.seekToOffset(reader.fetchHeadOffset() - 1); assertThrows(SegmentTruncatedException.class, reader::read); - // seek to offset 5 and verify if we are able to read the data. + // seek to the new head and verify that the data can be read. byte[] data = new byte[]{5, 6, 7, 8, 9}; - reader.seekToOffset(5); + reader.seekToOffset(reader.fetchHeadOffset()); byte[] readBuffer1 = new byte[5]; int bytesRead = reader.read(readBuffer1); assertEquals(5, bytesRead); diff --git a/test/integration/src/test/java/io/pravega/test/integration/ClusterWrapperTest.java b/test/integration/src/test/java/io/pravega/test/integration/ClusterWrapperTest.java index 406dde1551e..57d895bee6a 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/ClusterWrapperTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/ClusterWrapperTest.java @@ -115,6 +115,7 @@ public void writeAndReadBackAMessageWithTlsAndAuthOn() { // TLS related configs .tlsEnabled(true) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .tlsServerCertificatePath(TestUtils.pathToConfig() + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME) .tlsServerKeyPath(TestUtils.pathToConfig() + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME) .tlsHostVerificationEnabled(false) @@ -163,6 +164,7 @@ public void restApiInvocationWithSecurityEnabled() { // TLS related configs .tlsEnabled(true) + .tlsProtocolVersion(SecurityConfigDefaults.TLS_PROTOCOL_VERSION) .tlsServerCertificatePath(TestUtils.pathToConfig() + SecurityConfigDefaults.TLS_SERVER_CERT_FILE_NAME) .tlsServerKeyPath(TestUtils.pathToConfig() + SecurityConfigDefaults.TLS_SERVER_PRIVATE_KEY_FILE_NAME) .tlsHostVerificationEnabled(false) diff --git a/test/integration/src/test/java/io/pravega/test/integration/ControllerRestApiTest.java b/test/integration/src/test/java/io/pravega/test/integration/ControllerRestApiTest.java index d1ad80bc98f..8a5a5be3b3e 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/ControllerRestApiTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/ControllerRestApiTest.java @@ -44,6 +44,7 @@ import io.pravega.controller.server.rest.generated.model.StreamProperty; import io.pravega.controller.server.rest.generated.model.StreamState; import io.pravega.controller.server.rest.generated.model.StreamsList; +import io.pravega.controller.server.rest.generated.model.TagsList; import io.pravega.controller.server.rest.generated.model.UpdateStreamRequest; import io.pravega.controller.store.stream.ScaleMetadata; import io.pravega.test.common.InlineExecutor; @@ -146,6 +147,15 @@ public void restApiTests() { Assert.assertEquals("Create scope response", scope1, response.readEntity(ScopeProperty.class).getScopeName()); log.info("Create scope: {} successful ", scope1); + // Create another scope for the empty-scope listing test later.
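// Illustrative recap of the head-offset pattern adopted above (a sketch; assumes the
// reader/writer pair and the 10-byte payload set up earlier in ByteStreamTest):
//
//     writer.truncateDataBefore(5);             // head moves forward to offset 5
//     long head = reader.fetchHeadOffset();     // query the head instead of hard-coding 5
//     reader.seekToOffset(head - 1);            // below the head: read() throws SegmentTruncatedException
//     reader.seekToOffset(head);                // at the head: subsequent reads succeed
//
// Deriving offsets from fetchHeadOffset() keeps the assertions valid even if the truncation
// boundary ever lands somewhere other than the requested offset.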
+ final String scope2 = RandomStringUtils.randomAlphanumeric(10); + final CreateScopeRequest createScopeRequest1 = new CreateScopeRequest(); + createScopeRequest1.setScopeName(scope2); + builder = webTarget.request(MediaType.APPLICATION_JSON_TYPE); + response = builder.post(Entity.json(createScopeRequest1)); + assertEquals("Create scope status", CREATED.getStatusCode(), response.getStatus()); + Assert.assertEquals("Create scope response", scope2, response.readEntity(ScopeProperty.class).getScopeName()); + // TEST CreateStream POST http://controllerURI:Port/v1/scopes/{scopeName}/streams resourceURl = new StringBuilder(restServerURI).append("/v1/scopes/" + scope1 + "/streams").toString(); webTarget = client.target(resourceURl); @@ -161,9 +171,15 @@ public void restApiTests() { retentionConfig.setType(RetentionConfig.TypeEnum.LIMITED_DAYS); retentionConfig.setValue(123L); + TagsList tagsList = new TagsList(); + tagsList.add("testTag"); + createStreamRequest.setStreamName(stream1); createStreamRequest.setScalingPolicy(scalingConfig); createStreamRequest.setRetentionPolicy(retentionConfig); + createStreamRequest.setStreamTags(tagsList); + createStreamRequest.setTimestampAggregationTimeout(1000L); + createStreamRequest.setRolloverSizeBytes(1024L); builder = webTarget.request(MediaType.APPLICATION_JSON_TYPE); response = builder.post(Entity.json(createStreamRequest)); @@ -172,6 +188,9 @@ public void restApiTests() { final StreamProperty streamPropertyResponse = response.readEntity(StreamProperty.class); assertEquals("Scope name in response", scope1, streamPropertyResponse.getScopeName()); assertEquals("Stream name in response", stream1, streamPropertyResponse.getStreamName()); + assertEquals("TimestampAggregationTimeout in response", 1000L, (long) streamPropertyResponse.getTimestampAggregationTimeout()); + assertEquals("RolloverSizeBytes in response", 1024L, (long) streamPropertyResponse.getRolloverSizeBytes()); + log.info("Create stream: {} successful", stream1); // Test listScopes GET http://controllerURI:Port/v1/scopes/{scopeName}/streams @@ -191,6 +210,29 @@ public void restApiTests() { Assert.assertEquals("List streams size", 1, response.readEntity(StreamsList.class).getStreams().size()); log.info("List streams successful"); + // Test listStream GET /v1/scopes/scope1/streams for tags + response = client.target(resourceURl).queryParam("filter_type", "tag"). + queryParam("filter_value", "testTag").request().get(); + assertEquals("List streams", OK.getStatusCode(), response.getStatus()); + Assert.assertEquals("List streams size", 1, response.readEntity(StreamsList.class).getStreams().size()); + + response = client.target(resourceURl).queryParam("filter_type", "tag"). + queryParam("filter_value", "randomTag").request().get(); + assertEquals("List streams", OK.getStatusCode(), response.getStatus()); + Assert.assertEquals("List streams size", 0, response.readEntity(StreamsList.class).getStreams().size()); + log.info("List streams with tag successful"); + + response = client.target(resourceURl).queryParam("filter_type", "showInternalStreams").request().get(); + assertEquals("List streams", OK.getStatusCode(), response.getStatus()); + assertTrue(response.readEntity(StreamsList.class).getStreams().get(0).getStreamName().startsWith("_MARK")); + log.info("List streams with showInternalStreams successful"); + + // Test for the case when the scope is empty. 
+ resourceURl = new StringBuilder(restServerURI).append("/v1/scopes/" + scope2 + "/streams").toString(); + response = client.target(resourceURl).request().get(); + assertEquals("List streams", OK.getStatusCode(), response.getStatus()); + Assert.assertEquals("List streams size", 0, response.readEntity(StreamsList.class).getStreams().size()); + // Test getScope resourceURl = new StringBuilder(restServerURI).append("/v1/scopes/" + scope1).toString(); response = client.target(resourceURl).request().get(); @@ -210,6 +252,8 @@ public void restApiTests() { scalingConfig1.minSegments(4); // update existing minSegments from 2 to 4 updateStreamRequest.setScalingPolicy(scalingConfig1); updateStreamRequest.setRetentionPolicy(retentionConfig); + updateStreamRequest.setTimestampAggregationTimeout(2000L); + updateStreamRequest.setRolloverSizeBytes(2048L); response = client.target(resourceURl).request(MediaType.APPLICATION_JSON_TYPE) .put(Entity.json(updateStreamRequest)); @@ -234,7 +278,10 @@ public void restApiTests() { .toString(); response = client.target(resourceURl).request().get(); assertEquals("Get stream status", OK.getStatusCode(), response.getStatus()); - assertEquals("Get stream stream1 response", stream1, response.readEntity(StreamProperty.class).getStreamName()); + StreamProperty responseProperty = response.readEntity(StreamProperty.class); + assertEquals("Get stream stream1 response", stream1, responseProperty.getStreamName()); + assertEquals("Get stream stream1 response TimestampAggregationTimeout", 2000L, (long) responseProperty.getTimestampAggregationTimeout()); + assertEquals("Get stream stream1 response RolloverSizeBytes", 2048L, (long) responseProperty.getRolloverSizeBytes()); log.info("Get stream successful"); // Test updateStreamState diff --git a/test/integration/src/test/java/io/pravega/test/integration/KeyValueTableTest.java b/test/integration/src/test/java/io/pravega/test/integration/KeyValueTableTest.java index 68f1c9077b1..b9e5325d713 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/KeyValueTableTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/KeyValueTableTest.java @@ -80,6 +80,7 @@ public class KeyValueTableTest extends KeyValueTableTestBase { private final int servicePort = TestUtils.getAvailableListenPort(); private final int containerCount = 4; + @Override @Before public void setup() throws Exception { super.setup(); @@ -101,7 +102,7 @@ public void setup() throws Exception { this.controllerWrapper.awaitRunning(); this.controller = controllerWrapper.getController(); - //4. Create Scope + // 4.
Create Scope this.controller.createScope(SCOPE).get(); ClientConfig clientConfig = ClientConfig.builder().build(); SocketConnectionFactoryImpl connectionFactory = new SocketConnectionFactoryImpl(clientConfig); diff --git a/test/integration/src/test/java/io/pravega/test/integration/MetricsTest.java b/test/integration/src/test/java/io/pravega/test/integration/MetricsTest.java index 3c5aef54d18..50d2530aacf 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/MetricsTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/MetricsTest.java @@ -49,10 +49,11 @@ import io.pravega.shared.metrics.MetricsConfig; import io.pravega.shared.metrics.MetricsProvider; import io.pravega.shared.metrics.StatsProvider; +import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.common.SerializedClassRunner; import io.pravega.test.common.TestUtils; -import io.pravega.test.common.TestingServerStarter; import io.pravega.test.common.ThreadPooledTestSuite; +import io.pravega.test.common.TestingServerStarter; import io.pravega.test.integration.demo.ControllerWrapper; import java.time.Duration; import java.util.Collections; @@ -132,7 +133,7 @@ public void setup() throws Exception { this.server = new PravegaConnectionListener(false, false, "localhost", servicePort, store, tableStore, monitor.getStatsRecorder(), monitor.getTableSegmentStatsRecorder(), new PassingTokenVerifier(), - null, null, true, this.serviceBuilder.getLowPriorityExecutor()); + null, null, true, this.serviceBuilder.getLowPriorityExecutor(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); this.server.startListening(); // 4. Start Pravega Controller service diff --git a/test/integration/src/test/java/io/pravega/test/integration/ReadFromDeletedStreamTest.java b/test/integration/src/test/java/io/pravega/test/integration/ReadFromDeletedStreamTest.java index c38bf71664e..4c3b45144b2 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/ReadFromDeletedStreamTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/ReadFromDeletedStreamTest.java @@ -32,6 +32,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilder; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.test.common.AssertExtensions; +import io.pravega.test.common.SecurityConfigDefaults; import lombok.Cleanup; import lombok.extern.slf4j.Slf4j; import org.junit.Test; @@ -56,7 +57,7 @@ public void testDeletedAndRecreatedStream() throws Exception { @Cleanup PravegaConnectionListener server = new PravegaConnectionListener(false, false, "localhost", 12345, store, tableStore, SegmentStatsRecorder.noOp(), TableSegmentStatsRecorder.noOp(), null, null, null, true, - serviceBuilder.getLowPriorityExecutor()); + serviceBuilder.getLowPriorityExecutor(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); server.startListening(); streamManager.createScope("test"); diff --git a/test/integration/src/test/java/io/pravega/test/integration/ReaderGroupStreamCutUpdateTest.java b/test/integration/src/test/java/io/pravega/test/integration/ReaderGroupStreamCutUpdateTest.java index e48b88e615b..8699d2919c0 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/ReaderGroupStreamCutUpdateTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/ReaderGroupStreamCutUpdateTest.java @@ -135,7 +135,7 @@ public void testStreamcutsUpdateInReaderGroup() throws Exception { new JavaSerializer<>(), ReaderConfig.builder().build()); Map currentStreamcuts = 
readerGroup.getStreamCuts(); - EventRead eventRead; + EventRead eventRead; int lastIteration = 0, iteration = 0; int assertionFrequency = checkpointingIntervalMs / readerSleepInterval; do { diff --git a/test/integration/src/test/java/io/pravega/test/integration/RestoreBackUpDataRecoveryTest.java b/test/integration/src/test/java/io/pravega/test/integration/RestoreBackUpDataRecoveryTest.java index 9bb0b858b0e..9d1925412e2 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/RestoreBackUpDataRecoveryTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/RestoreBackUpDataRecoveryTest.java @@ -316,7 +316,7 @@ private static class ControllerRunner implements AutoCloseable { private final Controller controller; private final URI controllerURI = URI.create("tcp://" + serviceHost + ":" + controllerPort); - ControllerRunner(int bkPort, int servicePort, int containerCount) throws InterruptedException { + ControllerRunner(int bkPort, int servicePort, int containerCount) { this.controllerWrapper = new ControllerWrapper("localhost:" + bkPort, false, controllerPort, serviceHost, servicePort, containerCount); this.controllerWrapper.awaitRunning(); @@ -369,7 +369,7 @@ private static class PravegaRunner implements AutoCloseable { } public void restartControllerAndSegmentStore(StorageFactory storageFactory, InMemoryDurableDataLogFactory dataLogFactory) - throws DurableDataLogException, InterruptedException { + throws DurableDataLogException { this.segmentStoreRunner = new SegmentStoreRunner(storageFactory, dataLogFactory, this.containerCount); this.controllerRunner = new ControllerRunner(this.bookKeeperRunner.bkPort, this.segmentStoreRunner.servicePort, containerCount); } diff --git a/test/integration/src/test/java/io/pravega/test/integration/StreamMetricsTest.java b/test/integration/src/test/java/io/pravega/test/integration/StreamMetricsTest.java index 884a79aeda6..4b8dc2b0fed 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/StreamMetricsTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/StreamMetricsTest.java @@ -46,6 +46,7 @@ import io.pravega.shared.metrics.MetricsProvider; import io.pravega.shared.metrics.StatsProvider; import io.pravega.test.common.AssertExtensions; +import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.common.SerializedClassRunner; import io.pravega.test.common.TestUtils; import io.pravega.test.common.TestingServerStarter; @@ -120,7 +121,7 @@ public void setup() throws Exception { this.server = new PravegaConnectionListener(false, false, "localhost", servicePort, store, tableStore, monitor.getStatsRecorder(), monitor.getTableSegmentStatsRecorder(), new PassingTokenVerifier(), - null, null, true, this.serviceBuilder.getLowPriorityExecutor()); + null, null, true, this.serviceBuilder.getLowPriorityExecutor(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); this.server.startListening(); // 4. 
Start Pravega Controller service diff --git a/test/integration/src/test/java/io/pravega/test/integration/WatermarkingTest.java b/test/integration/src/test/java/io/pravega/test/integration/WatermarkingTest.java index 03aeebb4dfb..8f5d075af21 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/WatermarkingTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/WatermarkingTest.java @@ -89,6 +89,7 @@ public class WatermarkingTest extends ThreadPooledTestSuite { public static final PravegaResource PRAVEGA = new PravegaResource(); private final AtomicLong timer = new AtomicLong(); + @Override protected int getThreadPoolSize() { return 5; } diff --git a/test/integration/src/test/java/io/pravega/test/integration/controller/server/ControllerServiceTest.java b/test/integration/src/test/java/io/pravega/test/integration/controller/server/ControllerServiceTest.java index 0a51e69e87f..1435cfa18de 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/controller/server/ControllerServiceTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/controller/server/ControllerServiceTest.java @@ -318,7 +318,7 @@ private static void getSegmentsAtTime(Controller controller, final String scope, assertFalse("FAILURE: Fetching positions at given time stamp failed", segments.get().isEmpty()); } - private static void getActiveSegmentsForNonExistentStream(Controller controller) throws InterruptedException { + private static void getActiveSegmentsForNonExistentStream(Controller controller) { AssertExtensions.assertFutureThrows("", controller.getCurrentSegments("scope", "streamName"), e -> Exceptions.unwrap(e) instanceof StoreException.DataNotFoundException); diff --git a/test/integration/src/test/java/io/pravega/test/integration/controller/server/EventProcessorTest.java b/test/integration/src/test/java/io/pravega/test/integration/controller/server/EventProcessorTest.java index 5a6139291dd..78302c82c9b 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/controller/server/EventProcessorTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/controller/server/EventProcessorTest.java @@ -529,6 +529,7 @@ public void testEventProcessorRebalance() throws Exception { ConcurrentSkipListSet output2 = new ConcurrentSkipListSet<>(); // wait until rebalance may have happened. 
+ @Cleanup ReaderGroupManager groupManager = new ReaderGroupManagerImpl(scope, controller, clientFactory); ReaderGroup readerGroup = groupManager.getReaderGroup(readerGroupName); diff --git a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndStatsTest.java b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndStatsTest.java index 56844d892b2..c0449cca821 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndStatsTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndStatsTest.java @@ -37,6 +37,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilder; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.shared.NameUtils; +import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.common.TestUtils; import io.pravega.test.common.TestingServerStarter; import io.pravega.test.integration.demo.ControllerWrapper; @@ -83,7 +84,7 @@ public void setUp() throws Exception { server = new PravegaConnectionListener(false, false, "localhost", servicePort, store, tableStore, statsRecorder, TableSegmentStatsRecorder.noOp(), null, null, null, true, - serviceBuilder.getLowPriorityExecutor()); + serviceBuilder.getLowPriorityExecutor(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); server.startListening(); controllerWrapper = new ControllerWrapper(zkTestServer.getConnectString(), diff --git a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTransactionOrderTest.java b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTransactionOrderTest.java index 50f38779678..051526380de 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTransactionOrderTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTransactionOrderTest.java @@ -45,6 +45,7 @@ import io.pravega.segmentstore.server.store.ServiceBuilder; import io.pravega.segmentstore.server.store.ServiceBuilderConfig; import io.pravega.shared.NameUtils; +import io.pravega.test.common.SecurityConfigDefaults; import io.pravega.test.common.TestUtils; import io.pravega.test.common.TestingServerStarter; import io.pravega.test.integration.demo.ControllerWrapper; @@ -135,7 +136,7 @@ public void setUp() throws Exception { server = new PravegaConnectionListener(false, false, "localhost", servicePort, store, tableStore, autoScaleMonitor.getStatsRecorder(), autoScaleMonitor.getTableSegmentStatsRecorder(), null, null, null, - true, serviceBuilder.getLowPriorityExecutor()); + true, serviceBuilder.getLowPriorityExecutor(), SecurityConfigDefaults.TLS_PROTOCOL_VERSION); server.startListening(); controllerWrapper.awaitRunning(); diff --git a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTruncationTest.java b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTruncationTest.java index 3bac50290b6..46164884749 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTruncationTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTruncationTest.java @@ -281,6 +281,34 @@ public void testWriteDuringTruncationAndDeletion() throws Exception { assertThrows(RuntimeException.class, () -> writer.writeEvent("test")); } + @Test(timeout = 50000) + public void testTruncateOnSealedStream() throws Exception { + StreamConfiguration 
config = StreamConfiguration.builder() + .scalingPolicy(ScalingPolicy.fixed(4)) + .build(); + String streamName = "testTruncateOnSealedStream"; + @Cleanup + StreamManager streamManager = StreamManager.create(PRAVEGA.getControllerURI()); + String scope = "test"; + streamManager.createScope(scope); + streamManager.createStream(scope, streamName, config); + + LocalController controller = (LocalController) PRAVEGA.getLocalController(); + + // Seal Stream. + assertTrue(controller.sealStream(scope, streamName).get()); + + Map streamCutPositions = new HashMap<>(); + streamCutPositions.put(computeSegmentId(2, 1), 0L); + streamCutPositions.put(computeSegmentId(3, 1), 0L); + streamCutPositions.put(computeSegmentId(4, 1), 0L); + + // Attempt to truncate a sealed stream should complete exceptionally. + assertFutureThrows("Should throw UnsupportedOperationException", + controller.truncateStream(scope, streamName, streamCutPositions), + e -> UnsupportedOperationException.class.isAssignableFrom(e.getClass())); + } + @Test(timeout = 50000) public void testWriteOnSealedStream() throws Exception { JavaSerializer serializer = new JavaSerializer<>(); diff --git a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTxnWithTest.java b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTxnWithTest.java index 68e522f38a9..b1dc543b7cf 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTxnWithTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndTxnWithTest.java @@ -183,8 +183,8 @@ public void testTxnConfig() throws Exception { }, e -> Exceptions.unwrap(e) instanceof IllegalArgumentException); - EventWriterConfig highTimeoutConfig = EventWriterConfig.builder().transactionTimeoutTime(200 * 1000).build(); - AssertExtensions.assertThrows("lease value too large, max value is 120000", + EventWriterConfig highTimeoutConfig = EventWriterConfig.builder().transactionTimeoutTime(700 * 1000).build(); + AssertExtensions.assertThrows("lease value too large, max value is 600000", () -> createTxn(clientFactory, highTimeoutConfig, streamName), e -> e instanceof IllegalArgumentException); } diff --git a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndUpdateTest.java b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndUpdateTest.java index d89165dd5f3..bca1b5b0e24 100644 --- a/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndUpdateTest.java +++ b/test/integration/src/test/java/io/pravega/test/integration/endtoendtest/EndToEndUpdateTest.java @@ -22,14 +22,14 @@ import io.pravega.controller.server.eventProcessor.LocalController; import io.pravega.test.common.ThreadPooledTestSuite; import io.pravega.test.integration.PravegaResource; +import java.util.concurrent.ExecutionException; import lombok.extern.slf4j.Slf4j; import org.junit.ClassRule; import org.junit.Test; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeoutException; - +import static io.pravega.test.common.AssertExtensions.assertFutureThrows; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; @Slf4j public class EndToEndUpdateTest extends ThreadPooledTestSuite { @@ -43,8 +43,7 @@ protected int getThreadPoolSize() { } @Test(timeout = 30000) - public void testUpdateStream() throws InterruptedException, ExecutionException, TimeoutException, - TruncatedDataException, 
ReinitializationRequiredException { + public void testUpdateStream() throws InterruptedException, ExecutionException, TruncatedDataException, ReinitializationRequiredException { String scope = "scope"; String streamName = "updateStream"; @@ -91,5 +90,17 @@ public void testUpdateStream() throws InterruptedException, ExecutionException, // verify that stream is scaled to have 3 segments assertEquals(controller.getCurrentSegments(scope, streamName).join().getNumberOfSegments(), 5); + + // Seal Stream. + assertTrue(controller.sealStream(scope, streamName).get()); + + config = StreamConfiguration.builder() + .scalingPolicy(ScalingPolicy.fixed(3)) + .build(); + + // Attempt to update a sealed stream should complete exceptionally. + assertFutureThrows("Should throw UnsupportedOperationException", + controller.updateStream(scope, streamName, config), + e -> UnsupportedOperationException.class.isAssignableFrom(e.getClass())); } } diff --git a/test/system/kubernetes/fluentBitSetup.sh b/test/system/kubernetes/fluentBitSetup.sh index c558dad710d..42aa652278d 100755 --- a/test/system/kubernetes/fluentBitSetup.sh +++ b/test/system/kubernetes/fluentBitSetup.sh @@ -28,9 +28,11 @@ CONFIG_MAP_DATA=/etc/config KEEP_PVC=false NAMESPACE=${NAMESPACE:-"default"} NAME=${NAME:-"pravega-fluent-bit"} -TAR_NAME="pravega-logs-export.tar" +LOGS_DIR="pravega-logs-export" +TAR_NAME="${LOGS_DIR}.tar" SKIP_FORCE_ROTATE=${SKIP_FORCE_ROTATE:-"false"} ALPINE_IMAGE=${ALPINE_IMAGE:-"alpine:latest"} +SKIP_LOG_BUNDLE_COMPRESSION=${SKIP_LOG_BUNDLE_COMPRESSION:-"false"} RETRIES=3 # Configurable flag parameters. @@ -92,6 +94,9 @@ for i in "$@"; do -f | --skip-force-rotate) SKIP_FORCE_ROTATE="true" ;; + -b | --skip-bundle-compression) + SKIP_LOG_BUNDLE_COMPRESSION="true" + ;; esac done @@ -190,7 +195,6 @@ cp_log() { # Set of log files downloaded to $FLUENT_BIT_EXPORT_PATH. ###################################### cp_remote_logs() { - local output=$1; shift local remote_log_files=$@ if [ -z "$remote_log_files" ]; then echo "No remote files given to collect." @@ -198,9 +202,9 @@ cp_remote_logs() { remote_log_files=($remote_log_files) # Clean any previous instances of collected logs. - rm -rf "$TAR_NAME"{.gz,} + rm -rf "$TAR_NAME"{.gz,} "$LOGS_DIR"{.zip,} # Temporary directory to hold the log files. - local logs_dir=${TAR_NAME%.tar} + local logs_dir=$LOGS_DIR mkdir "$logs_dir" && cd "$logs_dir" local total=${#remote_log_files[@]} @@ -214,11 +218,11 @@ cp_remote_logs() { wait # Return from $logs_dir. cd ../ - # Validate log collection -- compare number of fetched logs to number of given logs. local actual_logs="$(find $logs_dir -type f)" local actual_log_count="$(echo "$actual_logs" | wc -l)" local expected_log_count="$total" + if [ "$expected_log_count" != "$actual_log_count" ]; then echo -e "\nFound mismatch between expected # of logs ($expected_log_count) and actual ($actual_log_count)." for log in "${remote_log_files[@]}"; do @@ -231,8 +235,15 @@ cp_remote_logs() { echo "" echo "Successfully downloaded a total of $actual_log_count log files." 
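# Usage note for the new bundle-compression switch (illustrative, not part of the script
# body): compression can be skipped either with the -b | --skip-bundle-compression flag or
# by pre-setting the environment variable it mirrors, e.g.
#
#   SKIP_LOG_BUNDLE_COMPRESSION=true ./fluentBitSetup.sh <log-collection-command>
#
# where <log-collection-command> stands for whichever fetch invocation is in use. When the
# switch is on, the raw "$LOGS_DIR" directory is left on disk instead of a .zip or .tar.gz
# bundle, as the branch below shows.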
fi - tar --remove-files -zcf "$TAR_NAME.gz" "$logs_dir" - rm -rf "$logs_dir" + + if [ "$SKIP_LOG_BUNDLE_COMPRESSION" != "true" ]; then + if command -v zip > /dev/null 2>&1; then + zip -r "$logs_dir.zip" "$logs_dir" > /dev/null + else + tar --remove-files -zcf "$TAR_NAME.gz" "$logs_dir" + fi + rm -rf "$logs_dir" + fi logs_fetched=1 } @@ -271,7 +282,7 @@ fetch_active_logs() { fi done <<< $pods pushd "$output" > /dev/null 2>&1 - cp_remote_logs "$output" "${log_files[@]}" + cp_remote_logs "${log_files[@]}" popd > /dev/null 2>&1 } @@ -303,7 +314,7 @@ fetch_stored_logs() { fi pushd "$output" > /dev/null 2>&1 - cp_remote_logs "$output" $logs + cp_remote_logs $logs popd > /dev/null 2>&1 } diff --git a/test/system/kubernetes/setupTestPod.sh b/test/system/kubernetes/setupTestPod.sh index d3ed5fcc6d2..70abe815125 100755 --- a/test/system/kubernetes/setupTestPod.sh +++ b/test/system/kubernetes/setupTestPod.sh @@ -83,7 +83,8 @@ if [ $skipServiceInstallation = false ]; then #Step 6: Creating ZK-OP echo "Creating ZK Operator" - helm install zkop $publishedChartName/zookeeper-operator --version=$zookeeperOperatorVersion + echo "helm install zkop $publishedChartName/zookeeper-operator --version=$zookeeperOperatorChartVersion --set image.repository=$dockerRegistryUrl/$imagePrefix/$zookeeperOperatorImageName --set image.tag=$zookeeperOperatorVersion --set hooks.image.repository=$helmHookImageName" + helm install zkop $publishedChartName/zookeeper-operator --version=$zookeeperOperatorChartVersion --set image.repository=$dockerRegistryUrl/$imagePrefix/$zookeeperOperatorImageName --set image.tag=$zookeeperOperatorVersion --set hooks.image.repository=$helmHookImageName zkOpName="$(kubectl get pod | grep "zookeeper-operator" | awk '{print $1}')" #kubectl wait --timeout=1m --for=condition=Ready pod/$zkOpName readyValueZk="$(kubectl get deploy | awk '$1 == "zkop-zookeeper-operator" { print $2 }')" @@ -96,7 +97,8 @@ if [ $skipServiceInstallation = false ]; then #Step 7: Creating BK-OP echo "Creating BK Operator" - helm install bkop $publishedChartName/bookkeeper-operator --version=$bookkeeperOperatorVersion --set testmode.enabled=true --set testmode.version=$desiredBookkeeperCMVersion --wait + echo "helm install bkop $publishedChartName/bookkeeper-operator --version=$bookkeeperOperatorChartVersion --set testmode.enabled=true --set image.repository=$dockerRegistryUrl/$imagePrefix/$bookkeeperOperatorImageName --set image.tag=$bookkeeperOperatorVersion --set hooks.image.repository=$helmHookImageName" + helm install bkop $publishedChartName/bookkeeper-operator --version=$bookkeeperOperatorChartVersion --set testmode.enabled=true --set image.repository=$dockerRegistryUrl/$imagePrefix/$bookkeeperOperatorImageName --set image.tag=$bookkeeperOperatorVersion --set hooks.image.repository=$helmHookImageName --wait bkOpName="$(kubectl get pod | grep "bookkeeper-operator" | awk '{print $1}')" #kubectl wait --timeout=1m --for=condition=Ready pod/$bkOpName @@ -110,8 +112,9 @@ if [ $skipServiceInstallation = false ]; then #Step 8: Creating Pravega-OP echo "Creating Pravega Operator" - CERT="$(kubectl get secret selfsigned-cert-tls -o yaml | grep tls.crt | awk '{print $2}')" - helm install prop $publishedChartName/pravega-operator --version=$pravegaOperatorVersion --set webhookCert.crt=$CERT --set testmode.enabled=true --set testmode.version=$desiredPravegaCMVersion --wait + CERT="$(kubectl get secret selfsigned-cert-tls -o yaml | grep tls.crt | head -1 | awk '{print $2}')" + echo "helm install prop $publishedChartName/pravega-operator
--version=$pravegaOperatorChartVersion --set webhookCert.crt=$CERT --set testmode.enabled=true --set image.repository=$dockerRegistryUrl/$imagePrefix/$pravegaOperatorImageName --set image.tag=$pravegaOperatorVersion --set hooks.image.repository=$helmHookImageName" + helm install prop $publishedChartName/pravega-operator --version=$pravegaOperatorChartVersion --set webhookCert.crt=$CERT --set testmode.enabled=true --set image.repository=$dockerRegistryUrl/$imagePrefix/$pravegaOperatorImageName --set image.tag=$pravegaOperatorVersion --set hooks.image.repository=$helmHookImageName --wait prOpName="$(kubectl get pod | grep "pravega-operator" | awk '{print $1}')" #kubectl wait --timeout=1m --for=condition=Ready pod/$prOpName readyValuePr="$(kubectl get deploy | awk '$1 == "prop-pravega-operator" { print $2 }')" diff --git a/test/system/src/main/java/io/pravega/test/system/SingleJUnitTestRunner.java b/test/system/src/main/java/io/pravega/test/system/SingleJUnitTestRunner.java index b73bdb9e1ce..f8d588c931e 100644 --- a/test/system/src/main/java/io/pravega/test/system/SingleJUnitTestRunner.java +++ b/test/system/src/main/java/io/pravega/test/system/SingleJUnitTestRunner.java @@ -80,7 +80,7 @@ public static boolean execute(String className, String methodName) { } } - public static void main(String... args) throws ClassNotFoundException { + public static void main(String... args) { String[] classAndMethod = args[0].split("#"); //The return value is used to update the mesos task execution status. The mesos task is set to failed state when // return value is non-zero. diff --git a/test/system/src/main/java/io/pravega/test/system/framework/DockerBasedTestExecutor.java b/test/system/src/main/java/io/pravega/test/system/framework/DockerBasedTestExecutor.java index db98f9c1fc8..ea3bd362240 100644 --- a/test/system/src/main/java/io/pravega/test/system/framework/DockerBasedTestExecutor.java +++ b/test/system/src/main/java/io/pravega/test/system/framework/DockerBasedTestExecutor.java @@ -50,7 +50,7 @@ public class DockerBasedTestExecutor implements TestExecutor { public static final int DOCKER_CLIENT_PORT = 2375; private final static String IMAGE = "java:8"; - private static final String LOG_LEVEL = System.getProperty("logLevel", "DEBUG"); + private static final String LOG_LEVEL = System.getProperty("log.level", "DEBUG"); private final AtomicReference id = new AtomicReference(); private final String masterIp = Utils.isAwsExecution() ? getConfig("awsMasterIP", "Invalid Master IP").trim() : getConfig("masterIP", "Invalid Master IP"); private final DockerClient client = DefaultDockerClient.builder().uri("http://" + masterIp diff --git a/test/system/src/main/java/io/pravega/test/system/framework/TestFrameworkException.java b/test/system/src/main/java/io/pravega/test/system/framework/TestFrameworkException.java index dadcdd4bd50..8c0fdaee0d9 100644 --- a/test/system/src/main/java/io/pravega/test/system/framework/TestFrameworkException.java +++ b/test/system/src/main/java/io/pravega/test/system/framework/TestFrameworkException.java @@ -37,12 +37,12 @@ public enum Type { public TestFrameworkException(Type type, String reason, Throwable cause) { super(reason, cause); this.type = type; - log.error("TestFramework Exception. Type: {}, Details: {}", type, reason, cause); + log.warn("TestFramework Exception. Type: {}, Details: {}", type, reason, cause); } public TestFrameworkException(Type type, String reason) { super(reason); this.type = type; - log.error("TestFramework Exception. 
Type: {}, Details: {}", type, reason); + log.warn("TestFramework Exception. Type: {}, Details: {}", type, reason); } } diff --git a/test/system/src/main/java/io/pravega/test/system/framework/kubernetes/K8sClient.java b/test/system/src/main/java/io/pravega/test/system/framework/kubernetes/K8sClient.java index 7b90585aead..cd0c0f07fcc 100644 --- a/test/system/src/main/java/io/pravega/test/system/framework/kubernetes/K8sClient.java +++ b/test/system/src/main/java/io/pravega/test/system/framework/kubernetes/K8sClient.java @@ -95,7 +95,7 @@ public class K8sClient { // When present, indicates that modifications should not be persisted. Only valid value is "All", or null. private static final String DRY_RUN = null; private static final String FIELD_MANAGER = "pravega-k8-client"; - private static final String PRETTY_PRINT = "false"; + private static final String PRETTY_PRINT = null; private final ApiClient client; private final PodLogs logUtility; // size of the executor is 3 (1 thread is used to watch the pod status, 2 threads for background log copy). @@ -177,7 +177,7 @@ public V1Namespace createNamespace(final String namespace) { @SneakyThrows(ApiException.class) public CompletableFuture deployPod(final String namespace, final V1Pod pod) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createPod"); + K8AsyncCallback callback = new K8AsyncCallback<>("createPod-" + pod.getMetadata().getName()); api.createNamespacedPodAsync(namespace, pod, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -213,7 +213,7 @@ public CompletableFuture> getStatusOfPodWithLabel(final String return getPodsWithLabel(namespace, labelName, labelValue) .thenApply(v1PodList -> { List podList = v1PodList.getItems(); - log.debug("{} pod(s) found with label {}={}.", podList.size(), labelName, labelValue); + log.info("{} pod(s) found with label {}={}.", podList.size(), labelName, labelValue); return podList.stream().map(V1Pod::getStatus).collect(Collectors.toList()); }); } @@ -242,7 +242,7 @@ public CompletableFuture getPodsWithLabels(String namespace, Map entry.getKey() + "=" + entry.getValue()).collect(Collectors.joining()); - K8AsyncCallback callback = new K8AsyncCallback<>("listPods"); + K8AsyncCallback callback = new K8AsyncCallback<>("listPods-" + labels); api.listNamespacedPodAsync(namespace, PRETTY_PRINT, ALLOW_WATCH_BOOKMARKS, null, null, labelSelector, null, null, null, false, callback); return callback.getFuture(); @@ -274,7 +274,7 @@ public CompletableFuture> getRestartedPods(String @SneakyThrows(ApiException.class) public CompletableFuture createDeployment(final String namespace, final V1Deployment deploy) { AppsV1Api api = new AppsV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("deployment"); + K8AsyncCallback callback = new K8AsyncCallback<>("deployment-" + deploy.getMetadata().getName()); api.createNamespacedDeploymentAsync(namespace, deploy, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -288,7 +288,7 @@ public CompletableFuture createDeployment(final String namespace, @SneakyThrows(ApiException.class) public CompletableFuture getDeploymentStatus(final String deploymentName, final String namespace) { AppsV1Api api = new AppsV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("readNamespacedDeployment"); + K8AsyncCallback callback = new K8AsyncCallback<>("readNamespacedDeployment-" + deploymentName); 
api.readNamespacedDeploymentStatusAsync(deploymentName, namespace, PRETTY_PRINT, callback); return callback.getFuture(); } @@ -307,7 +307,7 @@ public CompletableFuture getDeploymentStatus(final String deployme public CompletableFuture createCustomObject(String customResourceGroup, String version, String namespace, String plural, Map request) { CustomObjectsApi api = new CustomObjectsApi(); - K8AsyncCallback callback = new K8AsyncCallback<>("createCustomObject"); + K8AsyncCallback callback = new K8AsyncCallback<>("createCustomObject-" + customResourceGroup); api.createNamespacedCustomObjectAsync(customResourceGroup, version, namespace, plural, request, PRETTY_PRINT, callback); return callback.getFuture(); } @@ -375,7 +375,7 @@ public CompletableFuture createAndUpdateCustomObject(String customResour public CompletableFuture getCustomObject(String customResourceGroup, String version, String namespace, String plural, String name) { CustomObjectsApi api = new CustomObjectsApi(); - K8AsyncCallback callback = new K8AsyncCallback<>("getCustomObject"); + K8AsyncCallback callback = new K8AsyncCallback<>("getCustomObject-" + customResourceGroup); api.getNamespacedCustomObjectAsync(customResourceGroup, version, namespace, plural, name, callback); return callback.getFuture(); } @@ -396,7 +396,7 @@ public CompletableFuture deleteCustomObject(String customResourceGroup, CustomObjectsApi api = new CustomObjectsApi(); V1DeleteOptions options = new V1DeleteOptions(); options.setOrphanDependents(false); - K8AsyncCallback callback = new K8AsyncCallback<>("getCustomObject"); + K8AsyncCallback callback = new K8AsyncCallback<>("getCustomObject-" + customResourceGroup); api.deleteNamespacedCustomObjectAsync(customResourceGroup, version, namespace, plural, name, 0, false, null, options, callback); @@ -434,7 +434,7 @@ public void deletePVC(String namespace, String name) { @SneakyThrows(ApiException.class) public CompletableFuture createCRD(final V1beta1CustomResourceDefinition crd) { ApiextensionsV1beta1Api api = new ApiextensionsV1beta1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("create CRD"); + K8AsyncCallback callback = new K8AsyncCallback<>("create CRD-" + crd.getMetadata().getName()); api.createCustomResourceDefinitionAsync(crd, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -447,7 +447,7 @@ public CompletableFuture createCRD(final V1beta @SneakyThrows(ApiException.class) public CompletableFuture createClusterRole(V1beta1ClusterRole role) { RbacAuthorizationV1beta1Api api = new RbacAuthorizationV1beta1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createClusterRole"); + K8AsyncCallback callback = new K8AsyncCallback<>("createClusterRole-" + role.getMetadata().getName()); api.createClusterRoleAsync(role, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -461,7 +461,7 @@ public CompletableFuture createClusterRole(V1beta1ClusterRol @SneakyThrows(ApiException.class) public CompletableFuture createRole(String namespace, V1beta1Role role) { RbacAuthorizationV1beta1Api api = new RbacAuthorizationV1beta1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createRole"); + K8AsyncCallback callback = new K8AsyncCallback<>("createRole-" + role.getMetadata().getName()); api.createNamespacedRoleAsync(namespace, role, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ 
-474,7 +474,7 @@ public CompletableFuture createRole(String namespace, V1beta1Role r @SneakyThrows(ApiException.class) public CompletableFuture createClusterRoleBinding(V1beta1ClusterRoleBinding binding) { RbacAuthorizationV1beta1Api api = new RbacAuthorizationV1beta1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createClusterRoleBinding"); + K8AsyncCallback callback = new K8AsyncCallback<>("createClusterRoleBinding-" + binding.getMetadata().getName()); api.createClusterRoleBindingAsync(binding, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -488,7 +488,7 @@ public CompletableFuture createClusterRoleBinding(V1b @SneakyThrows(ApiException.class) public CompletableFuture createRoleBinding(String namespace, V1beta1RoleBinding binding) { RbacAuthorizationV1beta1Api api = new RbacAuthorizationV1beta1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createRoleBinding"); + K8AsyncCallback callback = new K8AsyncCallback<>("createRoleBinding-" + binding.getMetadata().getName()); api.createNamespacedRoleBindingAsync(namespace, binding, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -502,7 +502,7 @@ public CompletableFuture createRoleBinding(String namespace, @SneakyThrows(ApiException.class) public CompletableFuture createServiceAccount(String namespace, V1ServiceAccount account) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createServiceAccount"); + K8AsyncCallback callback = new K8AsyncCallback<>("createServiceAccount-" + account.getMetadata().getName()); api.createNamespacedServiceAccountAsync(namespace, account, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -545,7 +545,7 @@ public CompletableFuture waitUntilPodCompletes(final private Optional createAWatchAndReturnOnTermination(String namespace, String podName) { log.debug("Creating a watch for pod {}/{}", namespace, podName); CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createAWatchAndReturnOnTermination"); + K8AsyncCallback callback = new K8AsyncCallback<>("createAWatchAndReturnOnTermination-" + podName); @Cleanup Watch watch = Watch.createWatch( client, @@ -582,7 +582,7 @@ private Optional createAWatchAndReturnOnTermination( @SneakyThrows(ApiException.class) public CompletableFuture getConfigMap(String name, String namespace) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("readNamespacedConfigMap"); + K8AsyncCallback callback = new K8AsyncCallback<>("readNamespacedConfigMap-" + name); api.readNamespacedConfigMapAsync(name, namespace, PRETTY_PRINT, false, false, callback); return callback.getFuture(); } @@ -596,7 +596,7 @@ public CompletableFuture getConfigMap(String name, String namespace @SneakyThrows(ApiException.class) public CompletableFuture createConfigMap(String namespace, V1ConfigMap binding) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createConfigMap"); + K8AsyncCallback callback = new K8AsyncCallback<>("createConfigMap-" + binding.getMetadata().getName()); api.createNamespacedConfigMapAsync(namespace, binding, PRETTY_PRINT, DRY_RUN, FIELD_MANAGER, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -610,7 +610,7 @@ public CompletableFuture createConfigMap(String namespace, V1Config 
@SneakyThrows(ApiException.class) public CompletableFuture deleteConfigMap(String name, String namespace) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("deleteNamespacedConfigMap"); + K8AsyncCallback callback = new K8AsyncCallback<>("deleteNamespacedConfigMap-" + name); api.deleteNamespacedConfigMapAsync(name, namespace, PRETTY_PRINT, null, 0, false, null, null, callback); return callback.getFuture(); } @@ -624,7 +624,7 @@ public CompletableFuture deleteConfigMap(String name, String namespace @SneakyThrows(ApiException.class) public CompletableFuture createSecret(String namespace, V1Secret secret) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("createNamespacedSecret"); + K8AsyncCallback callback = new K8AsyncCallback<>("createNamespacedSecret-" + secret.getMetadata().getName()); api.createNamespacedSecretAsync(namespace, secret, PRETTY_PRINT, null, null, callback); return exceptionallyExpecting(callback.getFuture(), isConflict, null); } @@ -638,7 +638,7 @@ public CompletableFuture createSecret(String namespace, V1Secret secre @SneakyThrows(ApiException.class) public CompletableFuture getSecret(String name, String namespace) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("readNamespacedSecret"); + K8AsyncCallback callback = new K8AsyncCallback<>("readNamespacedSecret-" + name); api.readNamespacedSecretAsync(name, namespace, PRETTY_PRINT, false, false, callback); return callback.getFuture(); } @@ -652,7 +652,7 @@ public CompletableFuture getSecret(String name, String namespace) { @SneakyThrows(ApiException.class) public CompletableFuture deleteSecret(String name, String namespace) { CoreV1Api api = new CoreV1Api(); - K8AsyncCallback callback = new K8AsyncCallback<>("deleteNamespacedSecret"); + K8AsyncCallback callback = new K8AsyncCallback<>("deleteNamespacedSecret-" + name); api.deleteNamespacedSecretAsync(name, namespace, PRETTY_PRINT, null, 0, false, null, null, callback); return callback.getFuture(); } @@ -684,7 +684,7 @@ public CompletableFuture waitUntilPodIsRunning(String namespace, String la } }).count()), runCount -> { // Number of pods which are running - log.debug("Expected running pod count : {}, actual running pod count :{}.", expectedPodCount, runCount); + log.info("Expected running pod count of {}:{}, actual running pod count of {}:{}.", labelValue, expectedPodCount, labelValue, runCount); if (runCount == expectedPodCount) { shouldRetry.set(false); } @@ -716,7 +716,7 @@ public CompletableFuture downloadLogs(final V1Pod fromPod, final String to // the pod. String logFile = toFile + "-" + retryCount.incrementAndGet() + ".log"; Files.copy(logStream, Paths.get(logFile)); - log.debug("Logs downloaded from pod {} to {}", podName, logFile); + log.info("Logs downloaded from pod {} to {}", podName, logFile); } catch (ApiException | IOException e) { log.warn("Retryable error while downloading logs from pod {}. 
Error message: {} ", podName, e.getMessage()); throw new TestFrameworkException(TestFrameworkException.Type.RequestFailed, "Error while downloading logs"); diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/docker/DockerBasedService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/docker/DockerBasedService.java index f1ce5eea59b..a6433b27ab1 100644 --- a/test/system/src/main/java/io/pravega/test/system/framework/services/docker/DockerBasedService.java +++ b/test/system/src/main/java/io/pravega/test/system/framework/services/docker/DockerBasedService.java @@ -131,7 +131,7 @@ private boolean isSynced() { long replicas = getReplicas(); log.info("Replicas {}", replicas); log.info("Task running count {}", taskRunningCount); - if (((long) taskRunningCount) == replicas) { + if (taskRunningCount == replicas) { return true; } } catch (DockerException e) { diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/docker/PravegaControllerDockerService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/docker/PravegaControllerDockerService.java index f7d8275c45f..caf7cd461c1 100644 --- a/test/system/src/main/java/io/pravega/test/system/framework/services/docker/PravegaControllerDockerService.java +++ b/test/system/src/main/java/io/pravega/test/system/framework/services/docker/PravegaControllerDockerService.java @@ -83,7 +83,7 @@ private ServiceSpec setServiceSpec() { stringBuilderMap.put("log.level", "DEBUG"); stringBuilderMap.put("curator-default-session-timeout", String.valueOf(10 * 1000)); stringBuilderMap.put("controller.zk.connect.session.timeout.milliseconds", String.valueOf(30 * 1000)); - stringBuilderMap.put("controller.transaction.lease.count.max", String.valueOf(120 * 1000)); + stringBuilderMap.put("controller.transaction.lease.count.max", String.valueOf(600 * 1000)); stringBuilderMap.put("controller.retention.frequency.minutes", String.valueOf(2)); StringBuilder systemPropertyBuilder = new StringBuilder(); for (Map.Entry entry : stringBuilderMap.entrySet()) { diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/AbstractService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/AbstractService.java index c37210e17b9..1b83748ffa2 100644 --- a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/AbstractService.java +++ b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/AbstractService.java @@ -130,8 +130,8 @@ private Map getPravegaOnlyDeployment(String zkLocation, int cont .put("segmentStoreReplicas", segmentStoreCount) .put("debugLogging", true) .put("cacheVolumeClaimTemplate", pravegaPersistentVolumeSpec) - .put("controllerResources", getResources("2000m", "3Gi", "1000m", "1Gi")) - .put("segmentStoreResources", getResources("2000m", "5Gi", "1000m", "3Gi")) + .put("controllerResources", getResources("2000m", "2Gi", "1000m", "2Gi")) + .put("segmentStoreResources", getResources("2000m", "6Gi", "1000m", "6Gi")) .put("options", props) .put("image", pravegaImgSpec) .put("longtermStorage", tier2Spec()) @@ -197,9 +197,12 @@ protected static Map buildPatchedPravegaClusterSpec(String servi private Map tier2Spec() { final Map spec; + log.info("Loading tier2Type = {}", TIER2_TYPE); if (TIER2_TYPE.equalsIgnoreCase(TIER2_NFS)) { spec = ImmutableMap.of("filesystem", ImmutableMap.of("persistentVolumeClaim", ImmutableMap.of("claimName", "pravega-tier2"))); + } else if 
(TIER2_TYPE.equalsIgnoreCase("custom")) { + spec = getCustomTier2Config(); } else { // handle other types of tier2 like HDFS and Extended S3 Object Store. spec = ImmutableMap.of(TIER2_TYPE, getTier2Config()); @@ -207,10 +210,27 @@ private Map tier2Spec() { return spec; } + private Map getCustomTier2Config() { + return ImmutableMap.of("custom", + ImmutableMap.builder() + .put("options", getTier2Config()) + .put("env", getTier2Env()) + .build()); + } + private Map getTier2Config() { - String tier2Config = System.getProperty("tier2Config"); - checkNotNullOrEmpty(tier2Config, "tier2Config"); - Map split = Splitter.on(',').trimResults().withKeyValueSeparator("=").split(tier2Config); + return parseSystemPropertyAsMap("tier2Config"); + } + + private Map getTier2Env() { + return parseSystemPropertyAsMap("tier2Env"); + } + + private Map parseSystemPropertyAsMap(String systemProperty) { + String value = System.getProperty(systemProperty); + checkNotNullOrEmpty(value, systemProperty); + log.info("Parsing {} = {}", systemProperty, value); + Map split = Splitter.on(',').trimResults().withKeyValueSeparator("=").split(value); return split.entrySet().stream().collect(Collectors.toMap(Map.Entry::getKey, e -> { try { return Integer.parseInt(e.getValue()); @@ -224,15 +244,15 @@ private Map getTier2Config() { // Removal of the JVM option 'UseCGroupMemoryLimitForHeap' is required with JVM environments >= 10. This option // is supplied by default by the operators. We cannot 'deactivate' it using the XX:- counterpart as it is unrecognized. private String[] getSegmentStoreJVMOptions() { - return new String[]{"-XX:+UseContainerSupport", "-XX:+IgnoreUnrecognizedVMOptions"}; + return new String[]{"-XX:+UseContainerSupport", "-XX:+IgnoreUnrecognizedVMOptions", "-XX:MaxDirectMemorySize=4g", "-Xmx1024m"}; } private String[] getControllerJVMOptions() { - return new String[]{"-XX:+UseContainerSupport", "-XX:+IgnoreUnrecognizedVMOptions"}; + return new String[]{"-XX:+UseContainerSupport", "-XX:+IgnoreUnrecognizedVMOptions", "-Xmx1024m"}; } private String[] getBookkeeperMemoryOptions() { - return new String[]{"-XX:+UseContainerSupport", "-XX:+IgnoreUnrecognizedVMOptions"}; + return new String[]{"-XX:+UseContainerSupport", "-XX:+IgnoreUnrecognizedVMOptions", "-Xmx1024m"}; } @@ -251,6 +271,13 @@ protected Map getImageSpec(String imageName, String tag) { .build(); } + private Map getBookkeeperImageSpec(String imageName) { + return ImmutableMap.builder().put("imageSpec", ImmutableMap.builder() + .put("repository", imageName) + .put("pullPolicy", IMAGE_PULL_POLICY) + .build()).build(); + } + private Map getResources(String limitsCpu, String limitsMem, String requestsCpu, String requestsMem) { return ImmutableMap.builder() .put("limits", ImmutableMap.builder() @@ -271,7 +298,7 @@ private static V1Secret getTLSSecret() throws IOException { data = IOUtils.toString(inputStream, StandardCharsets.UTF_8); } Yaml.addModelMap("v1", "Secret", V1Secret.class); - V1Secret yamlSecret = (V1Secret) Yaml.loadAs(data, V1Secret.class); + V1Secret yamlSecret = Yaml.loadAs(data, V1Secret.class); return yamlSecret; } @@ -334,7 +361,7 @@ private V1ConfigMap getBookkeeperOperatorConfigMap() { private Map getBookkeeperDeployment(String zkLocation, int bookieCount, ImmutableMap props) { // generate BookkeeperSpec. 
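// Illustration of the input format parseSystemPropertyAsMap(...) above expects (a sketch;
// the property names are real, the key/value pairs are made up): a comma-separated
// key=value list supplied as a JVM system property, e.g.
//
//     -Dtier2Config=bucket=pravega-tier2,port=9020
//     -Dtier2Env=AWS_REGION=us-east-1
//
// Guava's Splitter turns each into a Map and, per the Integer.parseInt call above, values
// that parse as integers (port=9020) are stored as Integers while the rest stay Strings.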
diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/BookkeeperK8sService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/BookkeeperK8sService.java
index 133238f4f01..d1ac5e4e647 100644
--- a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/BookkeeperK8sService.java
+++ b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/BookkeeperK8sService.java
@@ -93,7 +93,7 @@ public CompletableFuture scaleService(int newInstanceCount) {
                 .thenCompose(o -> {
                     Map spec = (Map) (((Map) o).get("spec"));
                     int currentBookkeeperCount = ((Double) spec.get("replicas")).intValue();
-                    log.debug("Current instance counts : Bookkeeper {} .", currentBookkeeperCount);
+                    log.info("Expected instance count: Bookkeeper {}. Current instance count: Bookkeeper {}.", newInstanceCount, currentBookkeeperCount);
                     if (currentBookkeeperCount != newInstanceCount) {
                         final Map patchedSpec = buildPatchedBookkeeperClusterSpec("replicas", newInstanceCount);
                         return k8sClient.createAndUpdateCustomObject(CUSTOM_RESOURCE_GROUP_BOOKKEEPER, CUSTOM_RESOURCE_VERSION_BOOKKEEPER, NAMESPACE, CUSTOM_RESOURCE_PLURAL_BOOKKEEPER, patchedSpec)
diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/K8SequentialExecutor.java b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/K8SequentialExecutor.java
index d3922a024ff..897b3dd836b 100644
--- a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/K8SequentialExecutor.java
+++ b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/K8SequentialExecutor.java
@@ -37,6 +37,7 @@
 import io.pravega.test.system.framework.kubernetes.ClientFactory;
 import io.pravega.test.system.framework.kubernetes.K8sClient;
 import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.stream.Collectors;
 import lombok.extern.slf4j.Slf4j;
 import org.apache.commons.lang.NotImplementedException;
@@ -55,7 +56,6 @@ public class K8SequentialExecutor implements TestExecutor {
     private static final String SERVICE_ACCOUNT = System.getProperty("testServiceAccount", "test-framework"); //Service Account used by the test pod.
     private static final String CLUSTER_ROLE_BINDING = System.getProperty("testClusterRoleBinding", "cluster-admin-testFramework");
     private static final String TEST_POD_IMAGE = System.getProperty("testPodImage", "openjdk:8u181-jre-alpine");
-    private static final String LOG_LEVEL = System.getProperty("logLevel", "DEBUG");
 
     @Override
     public CompletableFuture startTestExecution(Method testMethod) {
@@ -70,17 +70,21 @@ public CompletableFuture startTestExecution(Method testMethod) {
         Map podStatusBeforeTest = getPravegaPodStatus(client);
 
         final V1Pod pod = getTestPod(className, methodName, podName.toLowerCase());
+        final AtomicReference> logDownload = new AtomicReference<>(CompletableFuture.completedFuture(null));
         return client.createServiceAccount(NAMESPACE, getServiceAccount()) // create service Account, ignore if already present.
                 .thenCompose(v -> client.createClusterRoleBinding(getClusterRoleBinding())) // ensure test pod has cluster admin rights.
                 .thenCompose(v -> client.deployPod(NAMESPACE, pod)) // deploy test pod.
                 .thenCompose(v -> {
-                    CompletableFuture logDownload = CompletableFuture.completedFuture(null);
                     // start download of logs.
                     if (!Utils.isSkipLogDownloadEnabled()) {
-                        logDownload = client.downloadLogs(pod, "./build/test-results/" + podName);
+                        logDownload.set(client.downloadLogs(pod, "./build/test-results/" + podName));
                     }
-                    return client.waitUntilPodCompletes(NAMESPACE, podName).thenCombine(logDownload, (status, v1) -> status);
+                    return client.waitUntilPodCompletes(NAMESPACE, podName);
                 }).handle((s, t) -> {
+                    Futures.getAndHandleExceptions(logDownload.get(), t1 -> {
+                        log.error("Failed to download logs for {}#{}", className, methodName, t1);
+                        return null;
+                    });
                     if (t == null) {
                         log.info("Test {}#{} execution completed with status {}", className, methodName, s);
                         verifyPravegaPodRestart(podStatusBeforeTest, getPravegaPodStatus(client));
@@ -171,8 +175,6 @@ private static String getArgs() {
                 "pravegaOperatorVersion",
                 "bookkeeperOperatorVersion",
                 "zookeeperOperatorVersion",
-                "desiredPravegaCMVersion",
-                "desiredBookkeeperCMVersion",
                 "publishedChartName",
                 "helmRepository",
                 "controllerLabel",
@@ -194,9 +196,9 @@ private static String getArgs() {
                 "imageVersion",
                 "securityEnabled",
                 "tlsEnabled",
-                "logLevel",
                 "configs",
-                "failFast"
+                "failFast",
+                "log.level"
         };
 
         StringBuilder builder = new StringBuilder();
@@ -239,4 +241,4 @@ private V1beta1ClusterRoleBinding getClusterRoleBinding() {
                         .withApiGroup("") // all core apis.
                         .build()).build();
     }
-}
\ No newline at end of file
+}
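The K8SequentialExecutor change above detaches the log download from the main completion chain: the future is parked in an AtomicReference, and the final handle() stage inspects it so a failed download is logged without failing the test result. A small sketch of that pattern using only standard-library types (the framework's Futures.getAndHandleExceptions helper is replaced here by a plain try/join; all names are illustrative):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.atomic.AtomicReference;

    public final class SideTaskPattern {
        public static void main(String[] args) {
            final AtomicReference<CompletableFuture<Void>> sideTask =
                    new AtomicReference<>(CompletableFuture.completedFuture(null));

            CompletableFuture<String> main = CompletableFuture.supplyAsync(() -> {
                // Kick off a side task (e.g. a log download) without chaining it into the result.
                sideTask.set(CompletableFuture.runAsync(() -> {
                    throw new RuntimeException("simulated download failure");
                }));
                return "pod-completed";
            });

            String status = main.handle((s, t) -> {
                try {
                    sideTask.get().join(); // surface the side task's outcome separately
                } catch (Exception e) {
                    System.err.println("side task failed: " + e.getCause());
                }
                return s; // the main status is reported regardless
            }).join();
            System.out.println(status);
        }
    }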
diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaControllerK8sService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaControllerK8sService.java
index 3484e2e4358..b1a8158fd7c 100644
--- a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaControllerK8sService.java
+++ b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaControllerK8sService.java
@@ -102,7 +102,7 @@ public CompletableFuture scaleService(int newInstanceCount) {
                     int currentControllerCount = ((Double) pravegaSpec.get("controllerReplicas")).intValue();
                     int currentSegmentStoreCount = ((Double) pravegaSpec.get("segmentStoreReplicas")).intValue();
 
-                    log.debug("Current instance counts : Controller {} SegmentStore {}.",
+                    log.info("Current instance counts : Controller {} SegmentStore {}.",
                             currentControllerCount, currentSegmentStoreCount);
                     if (currentControllerCount != newInstanceCount) {
                         final Map patchedSpec = buildPatchedPravegaClusterSpec("controllerReplicas", newInstanceCount, "pravega");
diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaSegmentStoreK8sService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaSegmentStoreK8sService.java
index 9e32011a159..023e19d57d7 100644
--- a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaSegmentStoreK8sService.java
+++ b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/PravegaSegmentStoreK8sService.java
@@ -98,7 +98,7 @@ public CompletableFuture scaleService(int newInstanceCount) {
                     int currentControllerCount = ((Double) pravegaSpec.get("controllerReplicas")).intValue();
                     int currentSegmentStoreCount = ((Double) pravegaSpec.get("segmentStoreReplicas")).intValue();
 
-                    log.debug("Current instance counts : Controller {} SegmentStore {}.",
+                    log.info("Current instance counts : Controller {} SegmentStore {}.",
                             currentControllerCount, currentSegmentStoreCount);
                     if (currentSegmentStoreCount != newInstanceCount) {
                         final Map patchedSpec = buildPatchedPravegaClusterSpec("segmentStoreReplicas", newInstanceCount, "pravega");
diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/ZookeeperK8sService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/ZookeeperK8sService.java
index 3772ac9ea64..325bc6d6e8e 100644
--- a/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/ZookeeperK8sService.java
+++ b/test/system/src/main/java/io/pravega/test/system/framework/services/kubernetes/ZookeeperK8sService.java
@@ -111,7 +111,21 @@ private Map getZookeeperDeployment(final String deploymentName,
                 .put("spec", ImmutableMap.builder().put("image", getImageSpec(DOCKER_REGISTRY + PREFIX + "/" + ZOOKEEPER_IMAGE_NAME, PRAVEGA_ZOOKEEPER_IMAGE_VERSION))
                         .put("replicas", clusterSize)
                         .put("persistence", ImmutableMap.of("reclaimPolicy", "Delete"))
+                        .put("pod", ImmutableMap.of("resources", getZookeeperResources()))
                         .build())
                 .build();
     }
+
+    private Map getZookeeperResources() {
+        return ImmutableMap.builder()
+                .put("limits", ImmutableMap.builder()
+                        .put("cpu", "400m")
+                        .put("memory", "2Gi")
+                        .build())
+                .put("requests", ImmutableMap.builder()
+                        .put("cpu", "200m")
+                        .put("memory", "1Gi")
+                        .build())
+                .build();
+    }
 }
diff --git a/test/system/src/main/java/io/pravega/test/system/framework/services/marathon/PravegaControllerService.java b/test/system/src/main/java/io/pravega/test/system/framework/services/marathon/PravegaControllerService.java
index 52c6b43896e..ef1256263a4 100644
--- a/test/system/src/main/java/io/pravega/test/system/framework/services/marathon/PravegaControllerService.java
+++ b/test/system/src/main/java/io/pravega/test/system/framework/services/marathon/PravegaControllerService.java
@@ -135,7 +135,7 @@ private App createPravegaControllerApp() {
                 buildSystemProperty("log.level", "DEBUG") +
                 buildSystemProperty("log.dir", "$MESOS_SANDBOX/pravegaLogs") +
                 buildSystemProperty("curator-default-session-timeout", String.valueOf(10 * 1000)) +
-                buildSystemProperty(propertyName("transaction.lease.count.max"), String.valueOf(120 * 1000)) +
+                buildSystemProperty(propertyName("transaction.lease.count.max"), String.valueOf(600 * 1000)) +
                 buildSystemProperty(propertyName("retention.frequency.minutes"), String.valueOf(2));
 
         Map map = new HashMap<>();
diff --git a/test/system/src/test/java/io/pravega/test/system/DynamicRestApiTest.java b/test/system/src/test/java/io/pravega/test/system/DynamicRestApiTest.java
index 283ac170967..e48a0a11660 100644
--- a/test/system/src/test/java/io/pravega/test/system/DynamicRestApiTest.java
+++ b/test/system/src/test/java/io/pravega/test/system/DynamicRestApiTest.java
@@ -93,8 +93,7 @@ public void listScopes() {
         URI controllerGRPCUri = controllerURIs.get(0);
         URI controllerRESTUri = controllerURIs.get(1);
         Invocation.Builder builder;
-        @Cleanup
-        Response response = null;
+
         String protocol = Utils.TLS_AND_AUTH_ENABLED ? "https://" : "http://";
         restServerURI = protocol + controllerRESTUri.getHost() + ":" + controllerRESTUri.getPort();
         log.info("REST Server URI: {}", restServerURI);
@@ -103,7 +102,8 @@ public void listScopes() {
         resourceURl = new StringBuilder(restServerURI).append("/ping").toString();
         webTarget = client.target(resourceURl);
         builder = webTarget.request();
-        response = builder.get();
+        @Cleanup
+        Response response = builder.get();
         assertEquals(String.format("Received unexpected status code: %s in response to 'ping' request.", response.getStatus()),
                 OK.getStatusCode(), response.getStatus());
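The DynamicRestApiTest fix above matters because Lombok's @Cleanup wraps the remainder of the variable's scope in try/finally and calls close() on the value assigned at the declaration; annotating a variable initialized to null and reassigned later closes the wrong (or no) instance. A minimal sketch of the corrected pattern with the JAX-RS client API (the endpoint URL is illustrative):

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.core.Response;
    import lombok.Cleanup;

    public final class CleanupExample {
        public static void main(String[] args) {
            @Cleanup
            Client client = ClientBuilder.newClient();
            // Annotate at initialization: @Cleanup generates a try/finally around the
            // rest of the scope and calls close() on this exact instance.
            @Cleanup
            Response response = client.target("http://localhost:9091/ping").request().get();
            System.out.println("status = " + response.getStatus());
        }
    }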
diff --git a/test/system/src/test/java/io/pravega/test/system/SingleSubscriberUpdateRetentionStreamCutTest.java b/test/system/src/test/java/io/pravega/test/system/SingleSubscriberUpdateRetentionStreamCutTest.java
new file mode 100644
index 00000000000..60fbc531877
--- /dev/null
+++ b/test/system/src/test/java/io/pravega/test/system/SingleSubscriberUpdateRetentionStreamCutTest.java
@@ -0,0 +1,210 @@
+/**
+ * Copyright Pravega Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.pravega.test.system;
+
+import io.pravega.client.ClientConfig;
+import io.pravega.client.EventStreamClientFactory;
+import io.pravega.client.admin.ReaderGroupManager;
+import io.pravega.client.admin.StreamManager;
+import io.pravega.client.control.impl.Controller;
+import io.pravega.client.control.impl.ControllerImpl;
+import io.pravega.client.control.impl.ControllerImplConfig;
+import io.pravega.client.stream.EventRead;
+import io.pravega.client.stream.EventStreamReader;
+import io.pravega.client.stream.EventStreamWriter;
+import io.pravega.client.stream.EventWriterConfig;
+import io.pravega.client.stream.ReaderConfig;
+import io.pravega.client.stream.ReaderGroup;
+import io.pravega.client.stream.ReaderGroupConfig;
+import io.pravega.client.stream.RetentionPolicy;
+import io.pravega.client.stream.ScalingPolicy;
+import io.pravega.client.stream.Stream;
+import io.pravega.client.stream.StreamConfiguration;
+import io.pravega.client.stream.StreamCut;
+import io.pravega.client.stream.impl.JavaSerializer;
+import io.pravega.client.stream.impl.StreamImpl;
+import io.pravega.common.Exceptions;
+import io.pravega.common.concurrent.ExecutorServiceHelpers;
+import io.pravega.common.concurrent.Futures;
+import io.pravega.common.hash.RandomFactory;
+import io.pravega.test.common.AssertExtensions;
+import io.pravega.test.system.framework.Environment;
+import io.pravega.test.system.framework.SystemTestRunner;
+import io.pravega.test.system.framework.Utils;
+import io.pravega.test.system.framework.services.Service;
+import lombok.Cleanup;
+import lombok.extern.slf4j.Slf4j;
+import mesosphere.marathon.client.MarathonException;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+
+import java.net.URI;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+@Slf4j
+@RunWith(SystemTestRunner.class)
+public class SingleSubscriberUpdateRetentionStreamCutTest extends AbstractReadWriteTest {
+
+    private static final String SCOPE = "testCBRScope" + RandomFactory.create().nextInt(Integer.MAX_VALUE);
+    private static final String STREAM = "testCBRStream" + RandomFactory.create().nextInt(Integer.MAX_VALUE);
+    private static final String READER_GROUP = "testCBRReaderGroup" + RandomFactory.create().nextInt(Integer.MAX_VALUE);
+    private static final String SIZE_30_EVENT = "data of size 30";
+
+    private static final int READ_TIMEOUT = 1000;
+    private static final int MAX_SIZE_IN_STREAM = 300;
+    private static final int MIN_SIZE_IN_STREAM = 30;
+
+    private final ReaderConfig readerConfig = ReaderConfig.builder().build();
+    private final ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(4, "executor");
+    private final ScheduledExecutorService streamCutExecutor = ExecutorServiceHelpers.newScheduledThreadPool(1, "streamCutExecutor");
+    private URI controllerURI = null;
+    private StreamManager streamManager = null;
+    private Controller controller = null;
+
+    /**
+     * This is used to set up the various services required by the system test framework.
+     *
+     * @throws MarathonException when there is an error in setup
+     */
+    @Environment
+    public static void initialize() throws MarathonException {
+        URI zkUri = startZookeeperInstance();
+        startBookkeeperInstances(zkUri);
+        URI controllerUri = ensureControllerRunning(zkUri);
+        ensureSegmentStoreRunning(zkUri, controllerUri);
+    }
+
+    @Before
+    public void setup() {
+        Service conService = Utils.createPravegaControllerService(null);
+        List ctlURIs = conService.getServiceDetails();
+        controllerURI = ctlURIs.get(0);
+
+        final ClientConfig clientConfig = Utils.buildClientConfig(controllerURI);
+
+        controller = new ControllerImpl(ControllerImplConfig.builder()
+                .clientConfig(clientConfig)
+                .maxBackoffMillis(5000).build(), executor);
+        streamManager = StreamManager.create(clientConfig);
+
+        assertTrue("Creating scope", streamManager.createScope(SCOPE));
+        assertTrue("Creating stream", streamManager.createStream(SCOPE, STREAM,
+                StreamConfiguration.builder()
+                        .scalingPolicy(ScalingPolicy.fixed(1))
+                        .retentionPolicy(RetentionPolicy.bySizeBytes(MIN_SIZE_IN_STREAM, MAX_SIZE_IN_STREAM)).build()));
+    }
+
+    @After
+    public void tearDown() {
+        streamManager.close();
+        ExecutorServiceHelpers.shutdown(executor);
+        ExecutorServiceHelpers.shutdown(streamCutExecutor);
+    }
+
+    @Test
+    public void singleSubscriberCBRTest() throws Exception {
+        final ClientConfig clientConfig = Utils.buildClientConfig(controllerURI);
+
+        @Cleanup
+        EventStreamClientFactory clientFactory = EventStreamClientFactory.withScope(SCOPE, clientConfig);
+        @Cleanup
+        EventStreamWriter writer = clientFactory.createEventWriter(STREAM, new JavaSerializer<>(),
+                EventWriterConfig.builder().build());
+
+        // Write a single event.
+        log.info("Writing event e1 to {}/{}", SCOPE, STREAM);
+        writer.writeEvent("e1", SIZE_30_EVENT).join();
+
+        @Cleanup
+        ReaderGroupManager readerGroupManager = ReaderGroupManager.withScope(SCOPE, clientConfig);
+        readerGroupManager.createReaderGroup(READER_GROUP, ReaderGroupConfig.builder()
+                .retentionType(ReaderGroupConfig.StreamDataRetention.MANUAL_RELEASE_AT_USER_STREAMCUT)
+                .disableAutomaticCheckpoints()
+                .stream(Stream.of(SCOPE, STREAM)).build());
+        ReaderGroup readerGroup = readerGroupManager.getReaderGroup(READER_GROUP);
+        @Cleanup
+        EventStreamReader reader = clientFactory.createReader(READER_GROUP + "-" + 1,
+                READER_GROUP, new JavaSerializer<>(), readerConfig);
+
+        // Read one event.
+        log.info("Reading event e1 from {}/{}", SCOPE, STREAM);
+        EventRead read = reader.readNextEvent(READ_TIMEOUT);
+        assertFalse(read.isCheckpoint());
+        assertEquals("data of size 30", read.getEvent());
+
+        // Update the retention stream-cut.
+        log.info("{} generating stream-cuts for {}/{}", READER_GROUP, SCOPE, STREAM);
+        CompletableFuture> futureCuts = readerGroup.generateStreamCuts(streamCutExecutor);
+        // Wait for 5 seconds to force reader group state update. This will allow for the silent
+        // checkpoint event generated as part of generateStreamCuts to be picked up and processed.
+        Exceptions.handleInterrupted(() -> TimeUnit.SECONDS.sleep(5));
+        EventRead emptyEvent = reader.readNextEvent(READ_TIMEOUT);
+        assertTrue("Stream-cut generation did not complete", Futures.await(futureCuts, 10_000));
+
+        Map streamCuts = futureCuts.join();
+        log.info("{} updating its retention stream-cut to {}", READER_GROUP, streamCuts);
+        readerGroup.updateRetentionStreamCut(streamCuts);
+
+        // Write two more events.
+        log.info("Writing event e2 to {}/{}", SCOPE, STREAM);
+        writer.writeEvent("e2", SIZE_30_EVENT).join();
+        log.info("Writing event e3 to {}/{}", SCOPE, STREAM);
+        writer.writeEvent("e3", SIZE_30_EVENT).join();
+
+        // Check to make sure truncation happened after the first event.
+        // The timeout is 5 minutes as the retention period is set to 2 minutes. We allow for 2 cycles to fully complete
+        // and a little longer in order to confirm that the retention has taken place.
+        AssertExtensions.assertEventuallyEquals("Truncation did not take place at offset 30.", true, () -> controller.getSegmentsAtTime(
+                new StreamImpl(SCOPE, STREAM), 0L).join().values().stream().anyMatch(off -> off >= 30),
+                1000, 5 * 60 * 1000L);
+
+        // Read next event.
+        log.info("Reading event e2 from {}/{}", SCOPE, STREAM);
+        read = reader.readNextEvent(READ_TIMEOUT);
+        assertFalse(read.isCheckpoint());
+        assertEquals("data of size 30", read.getEvent());
+
+        // Update the retention stream-cut.
+        log.info("{} generating stream-cuts for {}/{}", READER_GROUP, SCOPE, STREAM);
+        CompletableFuture> futureCuts2 = readerGroup.generateStreamCuts(streamCutExecutor);
+        // Wait for 5 seconds to force reader group state update. This will allow for the silent
+        // checkpoint event generated as part of generateStreamCuts to be picked up and processed.
+        Exceptions.handleInterrupted(() -> TimeUnit.SECONDS.sleep(5));
+        EventRead emptyEvent2 = reader.readNextEvent(READ_TIMEOUT);
+        assertTrue("Stream-cut generation did not complete", Futures.await(futureCuts2, 10_000));
+
+        Map streamCuts2 = futureCuts2.join();
+        log.info("{} updating its retention stream-cut to {}", READER_GROUP, streamCuts2);
+        readerGroup.updateRetentionStreamCut(streamCuts2);
+
+        // Check to make sure truncation happened after the second event.
+        // The timeout is 5 minutes as the retention period is set to 2 minutes. We allow for 2 cycles to fully complete
+        // and a little longer in order to confirm that the retention has taken place.
+        AssertExtensions.assertEventuallyEquals("Truncation did not take place at offset 60.", true, () -> controller.getSegmentsAtTime(
+                new StreamImpl(SCOPE, STREAM), 0L).join().values().stream().anyMatch(off -> off >= 60),
+                1000, 5 * 60 * 1000L);
+    }
+}
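For the offsets asserted above: the test's naming (SIZE_30_EVENT) implies each serialized event occupies roughly 30 bytes in the segment, so releasing the retention stream-cut past the first event should advance the head offset to at least 30, and past the second to at least 60. A trivial illustration of that arithmetic (the 30-byte-per-event figure is the test's assumption, not a guarantee of the serializer):

    public final class TruncationOffsets {
        public static void main(String[] args) {
            int eventSize = 30; // assumed on-segment size of each test event
            System.out.println("expected head offset after releasing e1: >= " + eventSize);     // 30
            System.out.println("expected head offset after releasing e2: >= " + 2 * eventSize); // 60
        }
    }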
diff --git a/test/system/src/test/java/io/pravega/test/system/StreamCutsTest.java b/test/system/src/test/java/io/pravega/test/system/StreamCutsTest.java
index a1b2bf9bd19..d73ec3e1585 100644
--- a/test/system/src/test/java/io/pravega/test/system/StreamCutsTest.java
+++ b/test/system/src/test/java/io/pravega/test/system/StreamCutsTest.java
@@ -19,6 +19,9 @@
 import io.pravega.client.EventStreamClientFactory;
 import io.pravega.client.admin.ReaderGroupManager;
 import io.pravega.client.admin.StreamManager;
+import io.pravega.client.control.impl.Controller;
+import io.pravega.client.control.impl.ControllerImpl;
+import io.pravega.client.control.impl.ControllerImplConfig;
 import io.pravega.client.stream.EventRead;
 import io.pravega.client.stream.EventStreamReader;
 import io.pravega.client.stream.ReaderConfig;
@@ -29,9 +32,6 @@
 import io.pravega.client.stream.Stream;
 import io.pravega.client.stream.StreamConfiguration;
 import io.pravega.client.stream.StreamCut;
-import io.pravega.client.control.impl.Controller;
-import io.pravega.client.control.impl.ControllerImpl;
-import io.pravega.client.control.impl.ControllerImplConfig;
 import io.pravega.client.stream.impl.JavaSerializer;
 import io.pravega.common.Exceptions;
 import io.pravega.common.concurrent.ExecutorServiceHelpers;
@@ -49,7 +49,6 @@
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
@@ -104,7 +103,7 @@ public class StreamCutsTest extends AbstractReadWriteTest {
      * @throws MarathonException When error in setup.
      */
     @Environment
-    public static void initialize() throws MarathonException, ExecutionException {
+    public static void initialize() throws MarathonException {
         URI zkUri = startZookeeperInstance();
         startBookkeeperInstances(zkUri);
         URI controllerUri = ensureControllerRunning(zkUri);
diff --git a/test/system/src/test/java/io/pravega/test/system/StreamsAndScopesManagementTest.java b/test/system/src/test/java/io/pravega/test/system/StreamsAndScopesManagementTest.java
index c9dae668735..341d7b3e361 100644
--- a/test/system/src/test/java/io/pravega/test/system/StreamsAndScopesManagementTest.java
+++ b/test/system/src/test/java/io/pravega/test/system/StreamsAndScopesManagementTest.java
@@ -15,14 +15,15 @@
  */
 package io.pravega.test.system;
 
+import com.google.common.collect.ImmutableSet;
 import io.pravega.client.ClientConfig;
 import io.pravega.client.EventStreamClientFactory;
 import io.pravega.client.admin.StreamManager;
-import io.pravega.client.stream.ScalingPolicy;
-import io.pravega.client.stream.StreamConfiguration;
 import io.pravega.client.control.impl.Controller;
 import io.pravega.client.control.impl.ControllerImpl;
 import io.pravega.client.control.impl.ControllerImplConfig;
+import io.pravega.client.stream.ScalingPolicy;
+import io.pravega.client.stream.StreamConfiguration;
 import io.pravega.common.concurrent.ExecutorServiceHelpers;
 import io.pravega.test.system.framework.Environment;
 import io.pravega.test.system.framework.SystemTestRunner;
@@ -33,7 +34,7 @@
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.concurrent.ExecutionException;
+import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.ScheduledExecutorService;
 import lombok.Cleanup;
 import lombok.extern.slf4j.Slf4j;
@@ -45,9 +46,11 @@
 import org.junit.rules.Timeout;
 import org.junit.runner.RunWith;
 
+import static com.google.common.collect.Lists.newArrayList;
 import static io.pravega.test.common.AssertExtensions.assertThrows;
-import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertEquals;
 
 @Slf4j
 @RunWith(SystemTestRunner.class)
@@ -57,6 +60,7 @@ public class StreamsAndScopesManagementTest extends AbstractReadWriteTest {
     private static final int NUM_STREAMS = 5;
     private static final int NUM_EVENTS = 100;
     private static final int TEST_ITERATIONS = 3;
+    private static final int TEST_MAX_STREAMS = 10;
 
     @Rule
     public Timeout globalTimeout = Timeout.seconds(20 * 60);
@@ -74,7 +78,7 @@ public class StreamsAndScopesManagementTest extends AbstractReadWriteTest {
      * @throws MarathonException When error in setup.
      */
     @Environment
-    public static void initialize() throws MarathonException, ExecutionException {
+    public static void initialize() throws MarathonException {
         URI zkUri = startZookeeperInstance();
         startBookkeeperInstances(zkUri);
         URI controllerUri = ensureControllerRunning(zkUri);
@@ -135,7 +139,7 @@ public void testStreamsAndScopesManagement() {
     private void testStreamScopeManagementIteration() {
         for (int i = 0; i < NUM_SCOPES; i++) {
-            final String scope = "testStreamsAndScopesManagement" + String.valueOf(i);
+            final String scope = "testStreamsAndScopesManagement" + i;
             testCreateScope(scope);
             testCreateSealAndDeleteStreams(scope);
             testDeleteScope(scope);
@@ -211,6 +215,66 @@ private void testCreateSealAndDeleteStreams(String scope) {
         }
     }
 
+    @Test
+    public void testStreamTags() {
+        // Exercise stream-tag management across several scopes.
+        for (int i = 0; i < TEST_MAX_STREAMS; i++) {
+            log.info("Stream tag test in iteration {}.", i);
+            final String scope = "testStreamsTags" + i;
+            testCreateScope(scope);
+            testCreateUpdateDeleteStreamTag(scope);
+            testDeleteScope(scope);
+        }
+    }
+
+    private void testCreateUpdateDeleteStreamTag(String scope) {
+        final ImmutableSet tagSet1 = ImmutableSet.of("t1", "t2", "t3");
+        final ImmutableSet tagSet2 = ImmutableSet.of("t3", "t4", "t5");
+        // Create and update the streams.
+        for (int j = 1; j <= TEST_MAX_STREAMS; j++) {
+            StreamConfiguration config = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(j)).build();
+            final String stream = "stream" + j;
+            log.info("Creating a new stream in scope {}/{}", scope, stream);
+            streamManager.createStream(scope, stream, config);
+            log.info("Updating the stream in scope {}/{}", scope, stream);
+            streamManager.updateStream(scope, stream, config.toBuilder().tags(tagSet1).build());
+            assertEquals(tagSet1, streamManager.getStreamTags(scope, stream));
+        }
+        // Check the number of streams carrying tag t1.
+        assertEquals(TEST_MAX_STREAMS, newArrayList(streamManager.listStreams(scope, "t1")).size());
+        // Check that the stream lists for tags t3 and t1 are equal.
+        assertEquals(newArrayList(streamManager.listStreams(scope, "t3")), newArrayList(streamManager.listStreams(scope, "t1")));
+
+        // Update the streams with the new tag set.
+        List> futures = new ArrayList<>();
+        for (int j = 1; j <= TEST_MAX_STREAMS; j++) {
+            StreamConfiguration config = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(j)).build();
+            final String stream = "stream" + j;
+            log.info("Updating the stream tags in scope {}/{}", scope, stream);
+            futures.add(CompletableFuture.runAsync(() -> streamManager.updateStream(scope, stream, config.toBuilder().clearTags().tags(tagSet2).build())));
+        }
+        assertEquals(TEST_MAX_STREAMS, futures.size());
+        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
+        // Check that the update completed successfully.
+        assertTrue(newArrayList(streamManager.listStreams(scope, "t1")).isEmpty());
+        assertEquals(TEST_MAX_STREAMS, newArrayList(streamManager.listStreams(scope, "t4")).size());
+        final int tagT3Size = newArrayList(streamManager.listStreams(scope, "t3")).size();
+        final int tagT4Size = newArrayList(streamManager.listStreams(scope, "t4")).size();
+        log.info("List sizes of tags t3 and t4 are {}/{}", tagT3Size, tagT4Size);
+        assertEquals(tagT3Size, tagT4Size);
+
+        // Seal and delete the streams.
+        for (int j = 1; j <= TEST_MAX_STREAMS; j++) {
+            final String stream = "stream" + j;
+            streamManager.sealStream(scope, stream);
+            log.info("Deleting the stream in scope {}/{}", scope, stream);
+            streamManager.deleteStream(scope, stream);
+        }
+        // Check that the stream listing is updated.
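The tag test above drives the new Stream Tag client API end to end. A condensed, hedged sketch of the core calls against a locally running Pravega, based on the methods the test uses (scope, stream, and controller URI are illustrative):

    import com.google.common.collect.ImmutableSet;
    import io.pravega.client.ClientConfig;
    import io.pravega.client.admin.StreamManager;
    import io.pravega.client.stream.ScalingPolicy;
    import io.pravega.client.stream.StreamConfiguration;
    import java.net.URI;

    public final class StreamTagExample {
        public static void main(String[] args) {
            ClientConfig config = ClientConfig.builder().controllerURI(URI.create("tcp://localhost:9090")).build();
            try (StreamManager manager = StreamManager.create(config)) {
                StreamConfiguration cfg = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build();
                manager.createScope("demoScope");
                manager.createStream("demoScope", "demoStream", cfg);
                // Attach tags by updating the stream's configuration.
                manager.updateStream("demoScope", "demoStream", cfg.toBuilder().tags(ImmutableSet.of("t1", "t2")).build());
                System.out.println(manager.getStreamTags("demoScope", "demoStream"));
                // List only the streams in the scope that carry tag "t1".
                manager.listStreams("demoScope", "t1").forEachRemaining(System.out::println);
            }
        }
    }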
+        assertTrue(newArrayList(streamManager.listStreams(scope)).isEmpty());
+    }
+
     private long timeDiffInMs(long iniTime) { return (System.nanoTime() - iniTime) / 1000000; }
diff --git a/test/system/src/test/resources/pravega.properties b/test/system/src/test/resources/pravega.properties
index d92d22f0dd5..b236a6eb642 100644
--- a/test/system/src/test/resources/pravega.properties
+++ b/test/system/src/test/resources/pravega.properties
@@ -21,11 +21,11 @@ curator-default-session-timeout=10000
 bookkeeper.ack.quorum.size=3
 bookkeeper.write.timeout.milliseconds=10000
 bookkeeper.write.attempts.count.max=3
-controller.transaction.lease.count.max=120000
+controller.transaction.lease.count.max=600000
 controller.retention.frequency.minutes=2
 log.level=DEBUG
 hdfs.replaceDataNodesOnFailure.enable=false
-#4GB of cache.
-pravegaservice.cache.size.max=4294967296
-#4GB of cache + a buffer for whatever else Netty might need it for.
+#3GB of cache.
+pravegaservice.cache.size.max=3094967296
+#3GB of cache + a buffer for whatever else Netty might need it for.
 io.netty.maxDirectMemory=5368709120
\ No newline at end of file
diff --git a/test/system/src/test/resources/pravega_withAuth.properties b/test/system/src/test/resources/pravega_withAuth.properties
index ed128c10d47..dcf74b379e9 100644
--- a/test/system/src/test/resources/pravega_withAuth.properties
+++ b/test/system/src/test/resources/pravega_withAuth.properties
@@ -21,7 +21,7 @@ curator-default-session-timeout=10000
 bookkeeper.ack.quorum.size=3
 bookkeeper.write.timeout.milliseconds=10000
 bookkeeper.write.attempts.count.max=3
-controller.transaction.lease.count.max=120000
+controller.transaction.lease.count.max=600000
 controller.retention.frequency.minutes=2
 log.level=DEBUG
 hdfs.replaceDataNodesOnFailure.enable=false
@@ -30,9 +30,9 @@ controller.security.pwdAuthHandler.accountsDb.location=/opt/pravega/conf/passwd
 controller.security.auth.delegationToken.signingKey.basis=secret
 autoscale.controller.connect.security.auth.enable=true
 autoscale.security.auth.token.signingKey.basis=secret
-#4GB of cache.
-pravegaservice.cache.size.max=4294967296
-#4GB of cache + a buffer for whatever else Netty might need it for.
+#3GB of cache.
+pravegaservice.cache.size.max=3094967296
+#3GB of cache + a buffer for whatever else Netty might need it for.
 io.netty.maxDirectMemory=5368709120
 pravega.client.auth.token=YWRtaW46MTExMV9hYWFh
 pravega.client.auth.method=Basic
diff --git a/test/system/src/test/resources/pravega_withTLS.properties b/test/system/src/test/resources/pravega_withTLS.properties
index 83b7f638ec3..fd732ca458c 100644
--- a/test/system/src/test/resources/pravega_withTLS.properties
+++ b/test/system/src/test/resources/pravega_withTLS.properties
@@ -21,7 +21,7 @@ curator-default-session-timeout=10000
 bookkeeper.ack.quorum.size=3
 bookkeeper.write.timeout.milliseconds=10000
 bookkeeper.write.attempts.count.max=3
-controller.transaction.lease.count.max=120000
+controller.transaction.lease.count.max=600000
 controller.retention.frequency.minutes=2
 log.level=DEBUG
 hdfs.replaceDataNodesOnFailure.enable=false
@@ -30,9 +30,9 @@ controller.security.pwdAuthHandler.accountsDb.location=/opt/pravega/conf/passwd
 controller.security.auth.delegationToken.signingKey.basis=secret
 autoScale.controller.connect.security.auth.enable=true
 autoScale.security.auth.token.signingKey.basis=secret
-#4GB of cache.
-pravegaservice.cache.size.max=4294967296
-#4GB of cache + a buffer for whatever else Netty might need it for.
+#3GB of cache.
+pravegaservice.cache.size.max=3094967296
+#3GB of cache + a buffer for whatever else Netty might need it for.
 io.netty.maxDirectMemory=5368709120
 pravega.client.auth.token=YWRtaW46MTExMV9hYWFh
 pravega.client.auth.method=Basic
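The property changes above shrink the SegmentStore cache while keeping io.netty.maxDirectMemory at 5 GiB, so Netty retains headroom above the cache. A quick check of the figures as plain arithmetic (values copied from the updated files):

    public final class MemoryBudget {
        public static void main(String[] args) {
            long cacheBytes = 3094967296L;  // pravegaservice.cache.size.max (~2.88 GiB)
            long nettyDirect = 5368709120L; // io.netty.maxDirectMemory (5 GiB)
            // Roughly 2.1 GiB is left for Netty buffers and other direct allocations.
            System.out.println("headroom bytes: " + (nettyDirect - cacheBytes));
        }
    }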
diff --git a/test/testcommon/src/main/java/io/pravega/test/common/AssertExtensions.java b/test/testcommon/src/main/java/io/pravega/test/common/AssertExtensions.java
index f71db374774..1843278c893 100644
--- a/test/testcommon/src/main/java/io/pravega/test/common/AssertExtensions.java
+++ b/test/testcommon/src/main/java/io/pravega/test/common/AssertExtensions.java
@@ -136,7 +136,7 @@ public static void assertEventuallyThrows(Class type, Runna
      * @param run  The Runnable to execute.
      * @param type The type of exception to expect.
      */
-    public static void assertThrows(Class type, RunnableWithException run) {
+    public static void assertThrows(Class type, RunnableWithException run) {
         try {
             run.run();
             Assert.fail("No exception thrown where: " + type.getName() + " was expected");
diff --git a/test/testcommon/src/main/java/io/pravega/test/common/SecurityConfigDefaults.java b/test/testcommon/src/main/java/io/pravega/test/common/SecurityConfigDefaults.java
index 8323718e21f..ad700b7e99c 100644
--- a/test/testcommon/src/main/java/io/pravega/test/common/SecurityConfigDefaults.java
+++ b/test/testcommon/src/main/java/io/pravega/test/common/SecurityConfigDefaults.java
@@ -19,6 +19,8 @@
  * Holds default security configuration values.
  */
 public class SecurityConfigDefaults {
+    public static final String[] TLS_PROTOCOL_VERSION = new String[]{"TLSv1.2", "TLSv1.3"};
+
     public static final String TLS_SERVER_CERT_FILE_NAME = "server-cert.crt";
     public static final String TLS_SERVER_CERT_PATH = "../config/" + TLS_SERVER_CERT_FILE_NAME;
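The new TLS_PROTOCOL_VERSION constant enumerates the protocol versions the test defaults should allow. As a hedged illustration of how such an array is typically applied, here is a minimal JDK sketch (the patch only defines the constant; this wiring is not part of it):

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLEngine;

    public final class TlsProtocolExample {
        public static void main(String[] args) throws Exception {
            String[] tlsProtocolVersion = new String[]{"TLSv1.2", "TLSv1.3"};
            SSLContext context = SSLContext.getDefault();
            SSLEngine engine = context.createSSLEngine();
            // Restrict the engine to the configured protocol versions only.
            engine.setEnabledProtocols(tlsProtocolVersion);
            System.out.println(String.join(", ", engine.getEnabledProtocols()));
        }
    }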