signal-neon-futures: Poll a future once before queuing it #283
Conversation
Previously, when using the JavaScript context to run a Rust future, the future would be put into the event queue immediately, without starting it first. The new implementation runs the future synchronously to its first suspension point. There are pros and cons to each approach (and I left the "delayed start" version behind as `queue_future`), but in practice we're not worried about, say, blocking other work behind this one.
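The "run to first suspension point, then queue" idea can be sketched in plain std Rust. This is not Neon's actual API — `run_future_eagerly`, `noop_waker`, and the `queue` closure are illustrative stand-ins — but it shows the core move: poll once synchronously, and only hand the future to the event queue if it returns `Pending`. (A real executor would install a waker that re-enqueues the future; the no-op waker here is enough for the single eager poll.)

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: sufficient for one eager poll, where we don't yet
// care about being woken. (A real executor's waker would reschedule.)
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn no_op(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Hypothetical sketch: run the future synchronously to its first
// suspension point; only if it suspends do we put it on the queue
// (represented here by a plain closure).
fn run_future_eagerly<F>(mut future: Pin<Box<F>>, queue: impl FnOnce(Pin<Box<F>>))
where
    F: Future<Output = ()> + ?Sized,
{
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match future.as_mut().poll(&mut cx) {
        Poll::Ready(()) => {}           // completed synchronously; never touches the queue
        Poll::Pending => queue(future), // suspended; caller schedules the rest
    }
}

fn main() {
    // An already-ready future finishes without a queue round-trip...
    let mut queued = false;
    run_future_eagerly(Box::pin(async {}), |_| queued = true);
    assert!(!queued);

    // ...while one that suspends is handed off as before.
    let mut queued2 = false;
    run_future_eagerly(Box::pin(std::future::pending::<()>()), |_| queued2 = true);
    assert!(queued2);
}
```

The trade-off the description mentions falls out directly: the eager poll runs arbitrary future code before the calling function returns, whereas the old behavior (kept as `queue_future`) always deferred to the next queue tick.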
@kjvalencik, you might be interested in this for future Neon API design. @indutny-signal found that the Node / uv event queue is actually fairly slow to tick through a loop, so I'm going through my futures bridge and JS-based executor and seeing which …

I've mentioned LocalEventQueue in the past as a non-Send EventQueue, but what would be really nice is a non-Send …
```diff
@@ -43,29 +39,44 @@ impl<T: Future> Future for AssertSendSafe<T> {
     }
 }
 
-/// Adds support for executing closures and futures on the JavaScript main thread's microtask queue.
+/// Adds support for executing closures and futures on the JavaScript main thread's event queue.
```
This was my mistake; we're using Node's event queue rather than JavaScript's microtask queue.
```diff
 ///
 /// Equivalent to `cx.queue().send_future(f)` except that `f` doesn't need to be `Send`.
-fn run_future_on_queue(&mut self, f: impl Future<Output = ()> + 'static) {
+fn queue_future(&mut self, f: impl Future<Output = ()> + 'static) {
```
No one calls `queue_task` and `queue_future`, right?
Right; they're there for "completeness" but I could just drop them.
I wonder if we can actually continue to run the future without rescheduling it on the queue if we are already on the queue. All microtasks should complete when the JS call returns, so from the perspective of JS it will look like different event loop ticks. This might happen in cases where the JS callback returns a resolved promise.
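That optimization can be sketched with a waker that records whether it fired during the poll itself. Everything here (`WokeFlag`, `poll_draining_sync_wakes`, `YieldOnce`) is hypothetical illustration, not Neon code: if the future's waker was invoked synchronously while polling — the "JS callback returned an already-resolved promise" case — we poll again on the same tick instead of bouncing through the event queue.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A waker that just records that a wake happened.
struct WokeFlag(AtomicBool);

impl Wake for WokeFlag {
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::SeqCst);
    }
}

// Poll a future; if its waker fired synchronously during the poll,
// poll again on the same tick rather than re-queueing. Returns the
// future only if it is genuinely suspended (caller re-queues it).
fn poll_draining_sync_wakes<F: Future<Output = ()>>(
    mut fut: Pin<Box<F>>,
) -> Option<Pin<Box<F>>> {
    let flag = Arc::new(WokeFlag(AtomicBool::new(false)));
    let waker = Waker::from(flag.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(()) => return None,
            // Woken during the poll itself: keep running this tick.
            Poll::Pending if flag.0.swap(false, Ordering::SeqCst) => continue,
            // Genuinely suspended: hand back for re-queueing.
            Poll::Pending => return Some(fut),
        }
    }
}

// A future that suspends once but wakes itself immediately — the
// shape of awaiting an already-resolved promise.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn main() {
    // Completes in a single call, with no queue round-trip.
    assert!(poll_draining_sync_wakes(Box::pin(YieldOnce(false))).is_none());
}
```

As the re-entrancy caveat below notes, this is only safe when the future's own code tolerates being resumed before control returns to the queue.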
@jrose-signal That's interesting! Do you have any detail on which part of it is slow? There's a fair amount of coordination in the …

I'm also not sure how callbacks in …

I don't think I fully understand what you mean by a …
Yeah, even though this isn't okay generally (because of re-entrancy), we know that …
Yeah, I hadn't realized creating a queue was itself expensive, but Fedor found that out as well. That'll be my immediate next step: cut down on …
The original thing I meant by it was just "…
So maybe it would come down to a thread-local Context after all. (Which is what the original version of this PR used, sort of, and it felt a lot more hacky.)
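For illustration, a thread-local Context of the kind alluded to above might look like the sketch below. `ContextHandle` and the function names are invented placeholders, not Neon types; the point is only the shape: one slot per thread, populated only on the JS main thread, consulted by code that wants the "current" context without threading it through every call.

```rust
use std::cell::RefCell;

// Hypothetical stand-in for a real JS context handle; illustrative only.
struct ContextHandle(&'static str);

thread_local! {
    // One slot per thread; only the JS main thread would ever fill it.
    static CURRENT_CONTEXT: RefCell<Option<ContextHandle>> = RefCell::new(None);
}

// Borrow the current thread's context, if one has been installed.
fn with_current_context<R>(f: impl FnOnce(Option<&ContextHandle>) -> R) -> R {
    CURRENT_CONTEXT.with(|slot| f(slot.borrow().as_ref()))
}

fn main() {
    // Off the main thread (or before installation), the slot is empty.
    assert!(with_current_context(|cx| cx.is_none()));

    // "Entering" the JS main thread installs a handle.
    CURRENT_CONTEXT.with(|slot| *slot.borrow_mut() = Some(ContextHandle("main")));
    let name = with_current_context(|cx| cx.map(|c| c.0));
    assert_eq!(name, Some("main"));
}
```

The hackiness the comment mentions is visible even here: correctness depends on install/clear discipline around every entry into JS, which the type system can't enforce.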
Closing in favor of #290, which is a more drastic change that incorporates this. |