stream.pipeline swallowing errors when read stream is empty #24517

Closed
coderaiser opened this issue Nov 20, 2018 · 14 comments
Labels
help wanted: Issues that need assistance from volunteers or PRs that need help to proceed.
stream: Issues and PRs related to the stream subsystem.

Comments

@coderaiser
Contributor

  • Version: 10.12.0, 11.2.0
  • Platform: Linux cloudcmd.io 4.4.0-122-generic #146-Ubuntu SMP Mon Apr 23 15:34:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Subsystem: Stream

pipeline has inconsistent behavior compared to pipe: it swallows writable errors when the readable stream is empty. For example, the following code will log undefined:

const {Readable} = require('stream');
const {createWriteStream} = require('fs');
const {pipeline} = require('stream');

const readStream = new Readable({
  read() {}
});

readStream.push(null);

pipeline(readStream, createWriteStream('/'), (e) => {
  console.log(e);
  // outputs: undefined
});

But if we use pipe with this code:

const {Readable} = require('stream');
const {createWriteStream} = require('fs');

const readStream = new Readable({
  read() {}
});

readStream.push(null);
readStream.pipe(createWriteStream('/'));

We will get this error:

events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: EISDIR: illegal operation on a directory, open '/'
Emitted 'error' event at:
    at lazyFs.open (internal/fs/streams.js:273:12)
    at FSReqWrap.oncomplete (fs.js:141:20)

It would be great if pipeline had the same behavior pipe has :).

@Fishrock123 added the stream (Issues and PRs related to the stream subsystem) label on Nov 20, 2018
@Fishrock123
Member

@nodejs/streams

@mcollina
Member

pipeline is behaving as expected. The whole pipeline is torn down before the file descriptor is actually opened, because the readable stream ends immediately.

@coderaiser
Contributor Author

pipeline works the same way pipe does, but adds error handling and end, finish, etc. event listeners.

The thing is, the callback is called once, when the stream has been read, and the write error goes nowhere.
But I need to know if something went wrong with the stream I'm writing to, even if the read stream is empty.
The write error just does not have time to be emitted, so it arrives after the read side has already invoked the once-only callback.

In pipe-io everything works as expected, but I want to use pipeline in environments where it exists.

@mcollina
Member

The thing is, the callback is called once, when the stream has been read, and the write error goes nowhere.
But I need to know if something went wrong with the stream I'm writing to, even if the read stream is empty.
The write error just does not have time to be emitted, so it arrives after the read side has already invoked the once-only callback.

It seems you are suggesting that the callback should be called more than once. The goal of that callback is to be definitive about when the actual pipeline is torn down, i.e. stream.destroy() has been called everywhere, so that we can be sure there are no memory or file descriptor leaks.
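For contrast, a minimal sketch of that contract when the readable does carry data: the callback fires exactly once, with the first error, after the chain has been torn down.

const {Readable, Writable, pipeline} = require('stream');

const source = new Readable({
  read() {}
});
source.push('some data');
source.push(null);

const sink = new Writable({
  write(chunk, encoding, callback) {
    callback(new Error('boom')); // simulated write failure
  }
});

pipeline(source, sink, (err) => {
  // called exactly once, with the first error, after teardown
  console.log(err && err.message); // 'boom'
});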

In pipe-io everything works as expected, but I want to use pipeline in environments where it exists.

It is still not clear to me what the expected behavior is. Can you provide an example?

@coderaiser
Contributor Author

It seems you are suggesting that the callback should be called more than once. The goal of that callback is to be definitive about when the actual pipeline is torn down,

The callback should definitely be called once, when all streams have closed or an error has occurred in one of them, but not in the middle of the way.

It is still not clear to me what the expected behavior is. Can you provide an example?

There is a code example above. The expected behavior is to pass the Error to the callback when there is an error, even if the read stream is empty, not to behave as if everything is OK.

I'm working on a file manager for the web, which uses restafary for file operations done via REST.
I use PUT for creating and updating files: when the body is empty, the file should be created anyway with an empty body, and that is OK. But when the user has no rights to create a file, they will not see any error when creating it (sending a PUT with an empty body), because pipeline says that everything is OK :). The file isn't created, no error is shown, but supposedly everything is OK.

As I said, pipe-io works as expected, but it uses pipeline on node >= 10, and that is where this error occurs.

@mcollina
Member

The callback should definitely be called once, when all streams have closed or an error has occurred in one of them, but not in the middle of the way.

It is not possible to ensure in a generic way that all streams have been torn down. Specifically, there is no guarantee that a 'close' event will be emitted, and it's not always possible to pass a callback to destroy() (it's supported in some of the internals, but not in all of the ecosystem).
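A hedged sketch of the kind of stream in question, a hypothetical writable that overrides destroy() without a callback and never emits 'close', so a generic helper cannot observe when teardown actually finished:

const {Writable} = require('stream');

// Hypothetical legacy-style destination, for illustration only.
class LegacySink extends Writable {
  _write(chunk, encoding, callback) {
    callback();
  }

  destroy() {
    // fire-and-forget cleanup: no (err, callback) signature, and no 'close'
    // event is emitted afterwards, so callers cannot tell when it is done
    return this;
  }
}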

Looking at the pipe-io code, it detects certain special cases (including fs.WriteStream). I'm very -1 on such logic in core, as we bundle require('stream') into 'readable-stream', and supporting it across Node.js versions is a guarantee that we cannot make.
Note that we have several issues where 'open' could come after the destroy event, and other similar things (#23133).

I'm happy to review a PR that implements such a feature, but I do not see how it would be possible in a generic way.

There is a code example above. The expected behavior is to pass the Error to the callback when there is an error, even if the read stream is empty, not to behave as if everything is OK.

That snippet shows the current behavior and what happens with pipe. I would like to understand what change you need in pipeline. Feel free to add an assert in there.

I'm working on a file manager for the web, which uses restafary for file operations done via REST.
I use PUT for creating and updating files: when the body is empty, the file should be created anyway with an empty body, and that is OK. But when the user has no rights to create a file, they will not see any error when creating it (sending a PUT with an empty body), because pipeline says that everything is OK :). The file isn't created, no error is shown, but supposedly everything is OK.

I think the safest approach is to wait for an 'open' event on your destination before piping, to ensure that the destination is in fact open.

@coderaiser
Contributor Author

coderaiser commented Nov 21, 2018

That snippet shows the current behavior and what happens with pipe. I would like to understand what change you need in pipeline. Feel free to add an assert in there.

I expect to receive the first error in the callback when one of the streams emits an error, or to receive nothing when everything completes with no errors.

pipeline(readStream, createWriteStream('/'), (e) => {
  console.log('here should be an error:', e);
});

Looking at the pipe-io code, it detects certain special cases (including fs.WriteStream). I'm very -1 on such logic in core, as we bundle require('stream') into 'readable-stream', and supporting it across Node.js versions is a guarantee that we cannot make.

fs.WriteStream emits 'open' in all versions of node.js, so why can't this check be done inside pipeline?

Is it better to use code like this in userland:

const {Readable} = require('stream');
const {createWriteStream} = require('fs');
const {pipeline} = require('stream');

const readStream = new Readable({
  read() {}
});

readStream.push(null);
const writeStream = createWriteStream('/');

// 'open' receives a file descriptor; failures are emitted as 'error'
writeStream.once('error', (e) => console.error(e));
writeStream.once('open', () => {
    pipeline(readStream, writeStream, (e) => {
        console.log(e);
    });
});

I think the safest approach is to wait for an 'open' event on your destination before piping, to ensure that the destination is in fact open.

That's what pipe-io is doing. It helps you avoid writing code like this every time you want to work with a stream.

pump and pipeline do the same thing in a similar but better, more universal way, and this is great. I want to use pipeline where possible, but I can't do that right now because of this behavior with errors.
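For reference, a sketch of the pipe-io usage being described, assuming a pipe(streams, callback) form; the exact pipe-io signature is an assumption here, not quoted from this thread.

const pipe = require('pipe-io'); // assumed callback-style API
const {Readable} = require('stream');
const {createWriteStream} = require('fs');

const readStream = new Readable({
  read() {}
});
readStream.push(null);

// pipe-io waits for the destination to be ready before piping, so the EISDIR
// error is expected to reach the callback even though the readable is empty.
pipe([readStream, createWriteStream('/')], (error) => {
  console.error(error);
});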

@mcollina
Member

fs.WriteStream emits 'open' in all versions of node.js, so why can't this check be done inside pipeline?

How could we detect in a generic way whether a stream needs to wait for 'open' or some other event before erroring? This mechanism should be something available for all streams in the ecosystem; adding a reference to a specific stream class (like fs.WriteStream) is a no-go solution here.

@coderaiser
Contributor Author

This mechanism should be something available for all streams in the ecosystem; adding a reference to a specific stream class (like fs.WriteStream) is a no-go solution here.

Actually there is already a special-case check for request streams in pipeline :).
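Roughly, that special case looks like the following (an approximation of the pump/pipeline check, not an exact quote of the core source):

// Approximate shape of the request special case in pump / internal pipeline:
const isRequest = (stream) =>
  stream.setHeader && typeof stream.abort === 'function';

// ...and on teardown such a stream gets stream.abort() instead of
// stream.destroy().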

The thing is, the special cases are mostly built-in streams, and pipeline itself is an ad-hoc solution which doesn't fix all the strange cases of streams, it just handles them.
It would be great if the strange cases of built-in streams were gradually brought into a general shape through a chain of major releases.

If that is also a no-go solution, I see no reason why a built-in node.js method like pipeline can't handle built-in stream cases like fs.WriteStream. It is not about all the strange, buggy userland streams, it's about core functionality of node.js.

I agree with you that in the long run checking every case is a bad, non-generic solution. But expecting developers of userland modules to remember every strange case like this one:

writeStream.once('error', (e) => console.error(e));
writeStream.once('open', () => {
    pipeline(readStream, writeStream, (e) => {
        console.log(e);
    });
});

is also not a solution at all. For now, pipe-io does its thing and does not swallow errors. Maybe in the future a better solution will come up and we will be able to just use streams without strange workarounds :).

@jasnell
Member

jasnell commented Jun 26, 2020

@nodejs/stream ... it's not clear if this is actionable or not.

@jasnell added the help wanted (Issues that need assistance from volunteers or PRs that need help to proceed) label on Jun 26, 2020
@mcollina
Member

@ronag is #29656 addressing this?

@mcollina
Member

Overall I don't think this is fully fixable if we want to retain compatibility with older streams. We need to wait until:

  1. all streams have been successfully opened
  2. all streams have been successfully closed with no error

None of this is possible with older streams.
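As a rough sketch of what those two conditions mean for a destination that does expose them, an fs.WriteStream (the path here is hypothetical; generic streams give no such guarantees):

const {createWriteStream} = require('fs');

const dest = createWriteStream('/tmp/out.txt'); // hypothetical path

// 1. successfully opened
dest.once('open', (fd) => console.log('opened, fd =', fd));

// 2. successfully closed with no error
dest.once('close', () => console.log('closed'));
dest.once('error', (err) => console.error('failed:', err));

dest.end(); // finish the stream so that 'close' can actually fire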

@ronag
Member

ronag commented Jun 26, 2020

@ronag is #29656 addressing this?

Haven't confirmed, but based on what I've read in this issue I believe so. The problem is that the fs write stream is destroyed before the open call has completed, and that has been fixed.
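If so, on a release that includes that fix, the original snippet would be expected to report the error rather than undefined (an expectation drawn from this thread, not verified here):

const {Readable, pipeline} = require('stream');
const {createWriteStream} = require('fs');

const readStream = new Readable({
  read() {}
});
readStream.push(null);

pipeline(readStream, createWriteStream('/'), (err) => {
  console.log(err && err.code); // expected: 'EISDIR' instead of undefined
});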

@mcollina
Member

That’s my read as well, closing.
