
Add ability to transform file stream before sending it #171

Open · wants to merge 2 commits into master
Conversation

manifoldhiker

I have a case where I need to apply some transformations to a file before writing it to the response.
The way I do it now is by passing an array of transform streams via options.

Could this be useful for others? If so, maybe there are better ways of doing this?
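For illustration, a minimal sketch of how such an option might look, assuming a `transformStreams` option named after the PR description (it is not part of send's published API, and the transform shown is a toy example):

```js
const http = require('http');
const { Transform } = require('stream');
const send = require('send');

http.createServer((req, res) => {
  // One fresh transform per request: a stream instance cannot be
  // reused across responses. This one upper-cases text chunks.
  const upcase = new Transform({
    transform(chunk, encoding, callback) {
      callback(null, chunk.toString().toUpperCase());
    }
  });

  // `transformStreams` is the hypothetical option proposed here.
  send(req, '/hello.txt', { root: __dirname, transformStreams: [upcase] })
    .pipe(res);
}).listen(3000);
```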

@dougwilson
Contributor

I know there was a previous PR / issue on this, and I think it is desired. There are some issues to work out in this implementation, though: mainly, any transform stream that alters the length of the data will make the Content-Length header wrong and cause a transport error.
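To make that failure mode concrete, a sketch (not from the PR, file path illustrative) of what goes wrong when Content-Length is taken from the file on disk but a transform changes the byte count:

```js
const fs = require('fs');
const http = require('http');
const zlib = require('zlib');

http.createServer((req, res) => {
  const file = '/tmp/hello.txt';
  const stat = fs.statSync(file);

  // Wrong: this is the size of the file on disk, but the bytes that
  // actually reach the client are gzipped, so the count won't match.
  res.setHeader('Content-Length', stat.size);
  res.setHeader('Content-Encoding', 'gzip');

  fs.createReadStream(file)
    .pipe(zlib.createGzip()) // the transform changes the byte count
    .pipe(res);              // client sees a truncated or stalled response
}).listen(3000);
```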

@manifoldhiker
Author

Ok, I will try to fix these issues.

@manifoldhiker
Author

Hi @dougwilson! I am figuring out how to apply transformations to the file (like compressing it or adding an ID3 tag to an mp3) while still setting the proper Content-Length on the response.
Could you advise anything about it? As I understand it, headers must be sent before the response body is transferred. That means:

a) We need to apply a deterministic transformation (one where we know exactly how it will affect the size) so we can set the right header before it is sent
b) In the case of non-deterministic transformations, we need to load the full file into memory first, apply the mutation, calculate its size, and only then send it (see the sketch below)

Am I thinking correctly?
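As a concrete reading of option (b), a minimal sketch (not from the PR; `sendTransformed` is a hypothetical helper) that buffers the transformed output to learn its exact length before sending:

```js
const fs = require('fs');
const zlib = require('zlib');

// Option (b): run the non-deterministic transform to completion in
// memory, so the exact Content-Length is known before any byte is sent.
function sendTransformed(filePath, transform, res) {
  const chunks = [];
  fs.createReadStream(filePath)
    .pipe(transform)
    .on('data', (chunk) => chunks.push(chunk))
    .on('end', () => {
      const body = Buffer.concat(chunks);
      res.setHeader('Content-Length', body.length); // exact post-transform size
      res.end(body);
    })
    .on('error', (err) => res.destroy(err));
}

// Usage: gzip output size is unknown until the whole input is consumed.
// sendTransformed('/tmp/hello.txt', zlib.createGzip(), res);
```

The trade-off is exactly the memory cost discussed below: the whole transformed body lives in memory before the first byte goes out.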

@dougwilson
Contributor

I'm going to see if I can find the previous conversation(s) on this, which may help. You are thinking correctly, but of course the downside to reading the entire file into memory is that it's very memory-intensive: lots of concurrent requests can exhaust the Node.js process's memory until it simply dies. That's the main reason we are using streams right now. We've had bug reports about the handling of files >2GB a few times, so there is definitely large-file usage out there.

@manifoldhiker
Author

Hello @dougwilson! Any update on this?

@dougwilson
Contributor

No, sorry.

@dougwilson
Contributor

The conversation I'm thinking of is in a previous PR for this feature: #69
