Weird issue when piping chunked data from HTTP to a file in OS X #1567
It's worth noting that using the builtin …
I've also bisected the issue to 10246c8, the commit that implements gzip content decoding, but that didn't help much.
In case it also helps, avoiding the … Maybe there is a subtle bug in the way …
When the response content is piped through additional streams for decoding (e.g. for gzip decompression), pause and resume calls should be propagated to the last stream in the pipeline so that back pressure propagates correctly. This avoids an issue where simultaneous back pressure from the content decoding stream and from a stream to which Request is piped could cause the response stream to get stuck waiting for a drain event on the content decoding stream that never occurs. See request#1567 for an example. This commit also renames dataStream to responseContent to remedy my previous poor choice of name; since the name will be exposed on the Request instance, it should be clearer and closer to the name used to refer to this data in the relevant RFCs. Fixes request#1567 Signed-off-by: Kevin Locke <kevin@kevinlocke.name>
Thanks @jviotti, I think I've figured it out. It looks like the response stream can get into a stalled state due to how back pressure is handled on the streams. The issue appears to affect all platforms (or, at least, I was able to reproduce it on Linux) but only affects Node 0.11 and 0.12, not 0.10, due to the changes discussed in nodejs/node-v0.x-archive#8351. If you could give the changes in #1568 a try and let me know whether they fix the issue for you, I'd appreciate it!
@kevinoid I can confirm your fix works for me! Thanks a lot for the support! Can you let me know when this change is published to npm, please?
From http://stackoverflow.com/questions/30085943/nodejs-unexpected-behaviour-when-piping-chunked-http-data-to-a-file-in-os-x?noredirect=1#comment48289848_30085943.
I'm getting a weird issue when piping chunked data from HTTP to a file in OS X.
When triggering the GET HTTP request that initiates the download, my server responds with the following headers:
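The headers block from the original report didn't survive in this copy. Given the behaviour described below (chunked transfer plus gzip content coding), they presumably looked something like the following; the exact values are assumptions:

```
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Encoding: gzip
Content-Type: application/octet-stream
```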
My code goes like this:
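The original snippet wasn't preserved here. Based on the description (`gzip: true`, piping the request to a file), it presumably looked roughly like this sketch; the URL and filenames are placeholders, not values from the report, and it requires the third-party `request` module:

```javascript
// Reconstruction of the reported script (URL and filenames are
// placeholders). Needs: npm install request
const fs = require('fs');
const request = require('request');

const output = fs.createWriteStream('image.img');

request({
  url: 'http://example.com/image.img.gz', // placeholder URL
  gzip: true // decode the gzip content coding; removing this avoids the bug
}).pipe(output);

output.on('close', () => console.log('done')); // reportedly never fires
```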
If I run that script, I correctly receive the initial chunks (corresponding to about 10 MB of data), but the connection then suddenly ends and no other event (close, end, etc.) is triggered on any of the streams.
What's even more weird is that commenting out `gzip: true` leads to the desired behaviour (I of course get a `*.img.gz` file in that case), and it also works as expected when keeping `gzip: true` and piping to `process.stdout` instead of to a file.

Also notice the issue is not reproducible on Ubuntu. It only seems to happen on OS X and Windows.
`cURL` also works as expected.

I've proceeded to analyse the packet communication (you can get the pcap here):
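The report doesn't say how the capture was taken; one common way to produce such a pcap (interface name and port are assumptions) is:

```shell
# Capture HTTP traffic on the default OS X interface into a pcap file
# (en0 and port 80 are assumptions; adjust for your setup)
sudo tcpdump -i en0 -w capture.pcap 'tcp port 80'
```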
I can see my IP is suddenly sending an ACK/FIN while the server is still sending me chunks:
Followed by a long loop of RST and ACK:
And finally an agreed ACK, FIN from both sides.
After a lot of experimentation, the issue is gone if I write the chunks to the destination stream manually instead of piping: