Respect file extension setting for S3 destinations in Firehose provider #10651
Conversation
All contributors have signed the CLA ✍️ ✅
Welcome to LocalStack! Thanks for raising your first Pull Request and landing your first contribution. Our team will reach out shortly with any reviews or feedback. We recommend joining our Slack Community and sharing your PR on the #community channel. Please make sure you are following our contributing guidelines and our Code of Conduct.
I have read the CLA Document and I hereby sign the CLA
Hello @ozgenbaris1, thank you for taking the initiative to address an issue you encountered. In order to proceed, the solution needs to include an integration test.
Force-pushed from 0162d87 to 199af06
Hello @pinzon! I just added an integration test. Can you review?
@@ -532,3 +532,44 @@ def test_kinesis_firehose_kinesis_as_source_multiple_delivery_streams(
             ]
         )
         snapshot.match("kinesis-event-stream-multiple-delivery-streams", s3_data)

    @markers.aws.unknown
We're not accepting any more `unknown` tests. Please use the `TEST_TARGET` env variable to run the test against AWS; this will generate an extra snapshot file that you also need to submit in this PR. Also change the marker to `@markers.aws.validated` once you confirm that the test passes against AWS. Please check these docs for more info.
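For reference, running a LocalStack test against real AWS is done by setting the `TEST_TARGET` env variable before invoking pytest. A minimal sketch; the test path and `-k` filter below are illustrative, not the exact paths in this PR:

```shell
# Run the integration test against real AWS (requires AWS credentials in the
# environment) so the snapshot file is recorded from real service responses.
TEST_TARGET=AWS_CLOUD python -m pytest tests/aws/services/firehose/ -k file_extension
```

The recorded snapshot JSON then has to be committed alongside the test, and the marker switched to `@markers.aws.validated`.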
            Record=record,
        )

        s3_objects = list_s3_objects(s3_bucket, timeout=300)
- s3_objects = list_s3_objects(s3_bucket, timeout=300)
+ s3_objects = list_s3_objects(s3_bucket, timeout=300 if is_aws_cloud() else 60)
We should account for whether the test is running against real AWS.
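The reviewer's suggestion can be sketched as a conditional timeout: poll with a long deadline only when the test targets real AWS, since LocalStack responds near-instantly. A minimal self-contained sketch; `is_aws_cloud` is stubbed here (in LocalStack's test suite it checks the `TEST_TARGET` env variable), and `wait_for` stands in for the polling done by `list_s3_objects`:

```python
import time


def is_aws_cloud() -> bool:
    # Stub for the sketch: LocalStack's real helper checks TEST_TARGET.
    return False


def wait_for(condition, timeout: float, interval: float = 0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")


# AWS Firehose can take minutes to deliver records to S3; LocalStack does not.
timeout = 300 if is_aws_cloud() else 60
```

Keeping the short timeout for LocalStack avoids a five-minute hang on genuine local failures.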
Great PR. Just needs some small improvements.
Hi @ozgenbaris1, it's been some time since this PR was opened; would you like me to take over?
Hey @pinzon, I haven't had time to test against AWS. I'd appreciate it if you could take over and make the necessary adjustments.
The Firehose delivery stream implementation was not considering the file extension setting when the target was defined as an S3 bucket. This fix ensures that the specified file extension is appended to the S3 object's key when writing data. The `_get_s3_object_path` method now accepts the `file_extension` parameter to accommodate this enhancement.
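The fix described above can be sketched as follows. Names here (`get_s3_object_path`) are illustrative stand-ins for LocalStack's internals, and the key format is simplified (real Firehose keys embed a timestamp and delivery-stream metadata):

```python
def get_s3_object_path(stream_name: str, prefix: str, file_extension: str = "") -> str:
    """Build the S3 key for a Firehose delivery, appending the configured
    file extension if one was set on the S3 destination."""
    # Simplified key; the real implementation includes a timestamp/UUID.
    key = f"{prefix}{stream_name}-output"
    if file_extension:
        # Per the AWS API, FileExtension values start with a period, e.g. ".json".
        key += file_extension
    return key


print(get_s3_object_path("my-stream", "data/", ".json"))  # data/my-stream-output.json
```

When no `FileExtension` is configured, the key is left unchanged, preserving the previous behavior.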
Force-pushed from 199af06 to dd35352
Changes
This change introduces support for file extensions by extracting the `FileExtension` parameter from the S3 destination description. If a file extension is provided, it is appended to the S3 object's key, ensuring that files are created with the correct extension and improving flexibility for users who rely on file extensions for downstream processing or identification of data within S3 buckets.
Motivation
resolves #10650