What happened?
It's a bug in the implementation of the SyntheticUnboundedSource class. By default, it splits the data into some number of bundles, and the number of records lost in the streaming write pipeline equals that number of bundles.
Various stress and load tests use this source, for example KafkaIOLT and PubSubIOLT. To reproduce the bug, run PubSubIOLT (for convenience, I recommend running it on the DirectRunner) and check the value of numRecords. It is typically lower than the value we initially set.
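The symptom (records lost equal to the bundle count) is consistent with each split reader dropping exactly one record, e.g. through an off-by-one in its range bounds. A minimal, language-agnostic sketch of that failure mode (this is illustrative only, not Beam code; the function and the specific off-by-one are hypothetical):

```python
def emitted_count(total_records: int, num_bundles: int) -> int:
    """Simulate reading the range [0, total_records) split into num_bundles,
    with a hypothetical off-by-one: each reader stops one record early."""
    emitted = 0
    bundle_size = total_records // num_bundles
    for i in range(num_bundles):
        start = i * bundle_size
        end = total_records if i == num_bundles - 1 else (i + 1) * bundle_size
        # Buggy reader: emits [start, end - 1) instead of [start, end),
        # so each bundle silently drops its last record.
        emitted += len(range(start, end - 1))
    return emitted

# Exactly one record is lost per bundle: emitted == total - num_bundles.
print(emitted_count(1000, 20))  # → 980
```

Under this assumption, numRecords would always come up short by exactly the number of bundles the source was split into, which matches the behavior described above.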
Issue Priority
Priority: 3 (minor)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam YAML
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner