
Remove Tokio requirement #385

Draft · wants to merge 12 commits into master

Conversation

stevefan1999-personal

@stevefan1999-personal stevefan1999-personal commented Dec 17, 2022

I wanted to get rid of tokio so we can run this on our own runtime of choice (async-std, smol, whatever). This works by removing the tokio mpsc channels and instead using https://github.com/zesterer/flume or https://github.com/smol-rs/async-channel, both of which are executor-agnostic; some oneshot channels are also replaced with the futures implementation.

Unit tests are passing, but I doubt this will be enough to be accepted by the team at this rough stage. I'd rather sort out the following questions first before we officially submit this draft:

  1. We have mixed in flume and async_channel, and those increase binary size a little (a synthetic example shows it grows by 15%). Some people, especially those building special programs that embed into a system, require a small binary, and this may not be acceptable. In my case, the target is a special embedded system that runs on only 2 MB of SPI flash but, out of curiosity, can run std. So, can we unify on one channel crate?
  2. What about DelayQueue? This is more of a Tokio-exclusive feature because the timing is handled by the global reactor system. We want to detach from Tokio as much as possible, but right now this is a hard requirement for the deadline feature to compile and pass the tests.
  3. Is stream().fuse().poll_next_unpin really the equivalent of Tokio's poll_recv? This is a rather tricky question; as far as I can tell, they are functionally near-equivalent, except that poll_recv is noticeably more efficient for some reason...
  4. We also need to finish some examples that use other runtimes before submitting the draft.

One of the main selling points is that we could run tarpc on the ESP32 thanks to https://github.com/esp-rs/esp-idf-sys, which provides a semi-std target that should be implemented enough to run tarpc once it no longer depends on Tokio. That said, I hope this can be the start of a longer-term goal of decoupling from Tokio; we still have a looooong way to go.

Regarding why I have to use both async-channel and flume: some unit tests failed with either async-channel or the futures mpsc channel, in particular for channels_per_key. I have no idea yet why this is happening, but it seems to be the only place where I needed flume. It may be related to point 3, since I did change some unit test instructions there.

@google-cla

google-cla bot commented Dec 17, 2022

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@tikue
Collaborator

tikue commented Dec 17, 2022

Hey! Thanks so much for working on this contribution. I'd be happy to explore options for removing tokio and getting tarpc running on ESP32. Given upcoming holidays, it may be some time before I can dive into this.

Is tokio's channel really not executor agnostic? That's somewhat surprising to me. And if so, is it intentional or a bug in the tokio channel?

Yeah, the timer-related functionality does seem to be a sticking point. I think someone looked at removing tokio a few years ago, and that was the hardest part to replace.

@stevefan1999-personal
Author

@tikue the problem is in the dependencies. If you pull in tokio, you pull in a lot of other irrelevant stuff that hinders compilation. Also, most Tokio packages are bound to the reactor, which the ESP32 clearly doesn't have; I wouldn't be surprised if their channel implementation is bound to the reactor as well.

@axos88

axos88 commented Dec 22, 2022

Hmmm. It seems to me that DelayQueue could be replaced by a bunch of ::spawn(async { sleep(duration).await; do_thing().await }) calls. Rescheduling is a bit tricky, but I'd say not impossible. We are using a LUT keyed by the request id anyway; the values could be the task handles returned by spawn to enable cancellation, and rescheduling is basically a cancellation followed by scheduling with the new timeout. Although I'm not sure rescheduling is really needed in tarpc. Not as nice as having them in a stream, but it should work. Or it could also be implemented for smol, I guess.

Not sure about performance of this though.

@stevefan1999-personal
Author

stevefan1999-personal commented Dec 23, 2022

@axos88 flume even has a send_deadline function, but there are some issues with its semantics, and in some places, for some reason, I can't use flume; it just doesn't pass the unit tests.

@multivac61

Is this being developed? It would be amazing to use this in embedded applications, e.g., in embassy-rs.

@stevefan1999-personal
Author

> Is this being developed? Would be amazing to use this in embedded applications, e.g., in embassy-rs.

The biggest problem is that the deadline resolver currently has a hard dependency on Tokio, and it looks like we would need a proactor-based async runtime to get deadline notifications (otherwise you have to busy-wait/poll), so a generic solution remains a tough challenge. I have tested tarpc on the ESP32 with the tokio hack, and it worked amazingly well.

@multivac61

> Is this being developed? Would be amazing to use this in embedded applications, e.g., in embassy-rs.
>
> The biggest problem is that the deadline resolver currently has a hard reliance on Tokio, and it looks like we need to have a proactor based async runtime instead to have the deadline notification (otherwise you have to busy-wait/poll), so it is not really a generic solution and also being a tough challenge. I do have tested tarpc with esp32 and the tokio hack, and it worked amazing

Amazing! I would love to try that. I also stumbled upon a new-ish RPC layer, postcard-rpc, made especially for MCUs and no_std runtimes; I'm trying that out too.
