fs: add stream utilities to FileHandle #40009

Closed
wants to merge 6 commits
107 changes: 98 additions & 9 deletions doc/api/fs.md
@@ -230,6 +230,99 @@ try {
}
```

#### `filehandle.createReadStream([options])`
<!-- YAML
added: REPLACEME
-->

* `options` {Object}
* `encoding` {string} **Default:** `null`
* `autoClose` {boolean} **Default:** `true`
* `emitClose` {boolean} **Default:** `true`
* `start` {integer}
* `end` {integer} **Default:** `Infinity`
* `highWaterMark` {integer} **Default:** `64 * 1024`
* Returns: {fs.ReadStream}

Unlike the 16 kb default `highWaterMark` for a {stream.Readable}, the stream
returned by this method has a default `highWaterMark` of 64 kb.

`options` can include `start` and `end` values to read a range of bytes from
the file instead of the entire file. Both `start` and `end` are inclusive and
start counting at 0; allowed values are in the
[0, [`Number.MAX_SAFE_INTEGER`][]] range. If `start` is
omitted or `undefined`, `filehandle.createReadStream()` reads sequentially from
the current file position. The `encoding` can be any one of those accepted by
{Buffer}.
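
For instance, a minimal sketch of a ranged read decoded as text; the file name
`sample.txt` is only a placeholder:

```mjs
import { open } from 'fs/promises';

const fd = await open('sample.txt');
// Read bytes 0 through 9 (inclusive) and decode the chunks as UTF-8.
const stream = fd.createReadStream({ start: 0, end: 9, encoding: 'utf8' });
for await (const chunk of stream) {
  console.log(chunk);
}
```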

If the `FileHandle` points to a character device that only supports blocking
reads (such as a keyboard or sound card), read operations do not finish until data
is available. This can prevent the process from exiting and the stream from
closing naturally.

By default, the stream will emit a `'close'` event after it has been
destroyed. Set the `emitClose` option to `false` to change this behavior.

```mjs
import { open } from 'fs/promises';

const fd = await open('/dev/input/event0');
// Create a stream from some character device.
const stream = fd.createReadStream();
setTimeout(() => {
stream.close(); // This may not close the stream.
// Artificially marking end-of-stream, as if the underlying resource had
// indicated end-of-file by itself, allows the stream to close.
// This does not cancel pending read operations, and if there is such an
// operation, the process may still not be able to exit successfully
// until it finishes.
stream.push(null);
stream.read(0);
}, 100);
```

If `autoClose` is false, then the file descriptor won't be closed, even if
there's an error. It is the application's responsibility to close it and make
sure there's no file descriptor leak. If `autoClose` is set to true (default
behavior), on `'error'` or `'end'` the file descriptor will be closed
automatically.
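
As a rough sketch of the non-default case, with `autoClose: false` the handle
has to be closed by hand (again, `sample.txt` is only a placeholder):

```mjs
import { open } from 'fs/promises';

const fd = await open('sample.txt');
const stream = fd.createReadStream({ autoClose: false });

stream.on('end', async () => {
  // The stream does not close the FileHandle on its own, so close it
  // explicitly to avoid leaking the underlying file descriptor.
  await fd.close();
});

stream.resume(); // Drain the data; only the close behavior matters here.
```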

An example of reading the last 10 bytes of a file that is 100 bytes long:

```mjs
import { open } from 'fs/promises';

const fd = await open('sample.txt');
fd.createReadStream({ start: 90, end: 99 });
```

#### `filehandle.createWriteStream([options])`
<!-- YAML
added: REPLACEME
-->

* `options` {Object}
* `encoding` {string} **Default:** `'utf8'`
* `autoClose` {boolean} **Default:** `true`
* `emitClose` {boolean} **Default:** `true`
* `start` {integer}
* Returns: {fs.WriteStream}

`options` may also include a `start` option to allow writing data at some
position past the beginning of the file; allowed values are in the
[0, [`Number.MAX_SAFE_INTEGER`][]] range. Modifying a file rather than replacing
it may require the `flags` `open` option to be set to `r+` rather than the
default `r`. The `encoding` can be any one of those accepted by {Buffer}.
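
A minimal sketch of writing at an offset; it assumes `sample.txt` already
exists and was opened with the `'r+'` flag:

```mjs
import { open } from 'fs/promises';
import { finished } from 'stream/promises';

// 'r+' keeps the existing contents, so writing at `start` modifies the file
// in place instead of truncating it.
const fd = await open('sample.txt', 'r+');
const stream = fd.createWriteStream({ start: 10 });
stream.end('patched');
await finished(stream); // autoClose (the default) then closes the handle.
```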

If `autoClose` is set to true (default behavior), on `'error'` or `'finish'`
the file descriptor will be closed automatically. If `autoClose` is false,
then the file descriptor won't be closed, even if there's an error.
It is the application's responsibility to close it and make sure there's no
file descriptor leak.

By default, the stream will emit a `'close'` event after it has been
destroyed. Set the `emitClose` option to `false` to change this behavior.

#### `filehandle.datasync()`
<!-- YAML
added: v10.0.0
@@ -1977,9 +2070,9 @@ changes:
* `end` {integer} **Default:** `Infinity`
* `highWaterMark` {integer} **Default:** `64 * 1024`
* `fs` {Object|null} **Default:** `null`
* Returns: {fs.ReadStream} See [Readable Stream][].
* Returns: {fs.ReadStream}

Unlike the 16 kb default `highWaterMark` for a readable stream, the stream
Unlike the 16 kb default `highWaterMark` for a {stream.Readable}, the stream
returned by this method has a default `highWaterMark` of 64 kb.

`options` can include `start` and `end` values to read a range of bytes from
@@ -2001,8 +2094,7 @@ available. This can prevent the process from exiting and the stream from
closing naturally.

By default, the stream will emit a `'close'` event after it has been
destroyed, like most `Readable` streams. Set the `emitClose` option to
`false` to change this behavior.
destroyed. Set the `emitClose` option to `false` to change this behavior.

By providing the `fs` option, it is possible to override the corresponding `fs`
implementations for `open`, `read`, and `close`. When providing the `fs` option,
@@ -2090,7 +2182,7 @@ changes:
* `emitClose` {boolean} **Default:** `true`
* `start` {integer}
* `fs` {Object|null} **Default:** `null`
* Returns: {fs.WriteStream} See [Writable Stream][].
* Returns: {fs.WriteStream}

`options` may also include a `start` option to allow writing data at some
position past the beginning of the file, allowed values are in the
@@ -2105,8 +2197,7 @@ It is the application's responsibility to close it and make sure there's no
file descriptor leak.

By default, the stream will emit a `'close'` event after it has been
destroyed, like most `Writable` streams. Set the `emitClose` option to
`false` to change this behavior.
destroyed. Set the `emitClose` option to `false` to change this behavior.

By providing the `fs` option it is possible to override the corresponding `fs`
implementations for `open`, `write`, `writev` and `close`. Overriding `write()`
@@ -6916,8 +7007,6 @@ the file contents.
[MSDN-Rel-Path]: https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file#fully-qualified-vs-relative-paths
[MSDN-Using-Streams]: https://docs.microsoft.com/en-us/windows/desktop/FileIO/using-streams
[Naming Files, Paths, and Namespaces]: https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file
[Readable Stream]: stream.md#class-streamreadable
[Writable Stream]: stream.md#class-streamwritable
[`AHAFS`]: https://developer.ibm.com/articles/au-aix_event_infrastructure/
[`Buffer.byteLength`]: buffer.md#static-method-bufferbytelengthstring-encoding
[`FSEvents`]: https://developer.apple.com/documentation/coreservices/file_system_events
40 changes: 40 additions & 0 deletions lib/internal/fs/promises.js
@@ -115,6 +115,12 @@ function lazyLoadCpPromises() {
return cpPromises ??= require('internal/fs/cp/cp').cpFn;
}

// Lazy loaded to avoid circular dependency.
let fsStreams;
function lazyFsStreams() {
return fsStreams ??= require('internal/fs/streams');
}

class FileHandle extends EventEmitterMixin(JSTransferable) {
/**
* @param {InternalFSBinding.FileHandle | undefined} filehandle
@@ -252,6 +258,40 @@ class FileHandle extends EventEmitterMixin(JSTransferable) {
return readable;
}

/**
* @typedef {import('./streams').ReadStream
* } ReadStream
* @param {{
* encoding?: string;
* autoClose?: boolean;
* emitClose?: boolean;

Member: Can we remove `autoClose` & `emitClose` from here?

Contributor Author: Could you explain why?

Member: They are legacy. The only reason we still have them is to avoid breakage. Since this is a new API, they are not necessary.

Contributor Author: It's a new way of exposing the same old API, so removing them would be harder than keeping them. Also, I don't think they're documented as legacy anywhere, so it's probably best to leave that discussion for another PR.

* start: number;
* end?: number;
* highWaterMark?: number;
* }} [options]
* @returns {ReadStream}
*/
createReadStream(options = undefined) {
const { ReadStream } = lazyFsStreams();
return new ReadStream(undefined, { ...options, fd: this });
}

/**
* @typedef {import('./streams').WriteStream
* } WriteStream
* @param {{
* encoding?: string;
* autoClose?: boolean;
* emitClose?: boolean;

Member: Can we remove `autoClose` & `emitClose` from here?

* start: number;
* }} [options]
* @returns {WriteStream}
*/
createWriteStream(options = undefined) {
const { WriteStream } = lazyFsStreams();
return new WriteStream(undefined, { ...options, fd: this });
}

[kTransfer]() {
if (this[kClosePromise] || this[kRefs] > 1) {
throw lazyDOMException('Cannot transfer FileHandle while in use',
48 changes: 48 additions & 0 deletions test/parallel/test-fs-promises-file-handle-stream.js
@@ -0,0 +1,48 @@
'use strict';

const common = require('../common');

// The following tests validate base functionality for the fs.promises
// FileHandle.createReadStream and FileHandle.createWriteStream methods.

const fs = require('fs');
const { open } = fs.promises;
const path = require('path');
const tmpdir = require('../common/tmpdir');
const assert = require('assert');
const { finished } = require('stream/promises');
const { buffer } = require('stream/consumers');
const tmpDir = tmpdir.path;

tmpdir.refresh();

async function validateWrite() {
const filePathForHandle = path.resolve(tmpDir, 'tmp-write.txt');
const fileHandle = await open(filePathForHandle, 'w');
const buffer = Buffer.from('Hello world'.repeat(100), 'utf8');

const stream = fileHandle.createWriteStream();
stream.end(buffer);
await finished(stream);

const readFileData = fs.readFileSync(filePathForHandle);
assert.deepStrictEqual(buffer, readFileData);
}

async function validateRead() {
const filePathForHandle = path.resolve(tmpDir, 'tmp-read.txt');
const buf = Buffer.from('Hello world'.repeat(100), 'utf8');

fs.writeFileSync(filePathForHandle, buf);

const fileHandle = await open(filePathForHandle);
assert.deepStrictEqual(
await buffer(fileHandle.createReadStream()),
buf
);
}

Promise.all([
validateWrite(),
validateRead(),
]).then(common.mustCall());
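
For reference, a minimal end-to-end sketch of the two new methods outside the
test harness; the file name `notes.txt` is only a placeholder:

```mjs
import { open } from 'fs/promises';
import { finished } from 'stream/promises';
import { buffer } from 'stream/consumers';

// Write through a FileHandle-backed WriteStream.
const writeHandle = await open('notes.txt', 'w');
const out = writeHandle.createWriteStream();
out.end('hello from a FileHandle stream');
await finished(out); // autoClose closes writeHandle once the stream finishes.

// Read the data back through a FileHandle-backed ReadStream.
const readHandle = await open('notes.txt');
console.log((await buffer(readHandle.createReadStream())).toString('utf8'));
```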