
[DRAFT] "Stack" Tracing #355

Open
jswrenn opened this issue May 24, 2022 · 9 comments
Labels: S-feature (Severity: feature. This is adding a new feature.)

Comments

jswrenn commented May 24, 2022

What problem are you trying to solve?

What is my program busy doing right now? For synchronous applications, this question can be answered by capturing stack traces from the running program. For asynchronous applications, this approach is unsatisfactory, since tasks that are idle — waiting to be rescheduled — will not appear.

How should the problem be solved?

Tokio Console could answer this question by displaying the causal tree of Spans that stems from each task in the task detail view. The precise user interface would draw inspiration from existing tooling for displaying process trees and span trees. At minimum, the UI should display an htop-like tree view of Span names, targets, and locations. My primary goal with this issue is to settle on an implementation strategy.

Presently, the tracing family of crates does not provide any mechanism for querying the consequences of a span. I've implemented this functionality in the tracing-causality library, which provides a tracing Layer that extends each Span in a Registry with a Set encoding that span's direct and indirect consequences. This Layer can then be queried by Span ID to get the full causal tree of consequences that stemmed from that span, plus a stream supplying subsequent, incremental updates to that causal tree.
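
To illustrate the general shape of the idea (this is a simplified sketch, not tracing-causality's actual API, which also tracks indirect consequences and hands back an update stream), a layer along these lines records which spans each span directly caused:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

use tracing::span::{Attributes, Id};
use tracing_subscriber::layer::{Context, Layer};
use tracing_subscriber::registry::LookupSpan;

/// Records, for each span, the spans it directly caused.
#[derive(Default)]
struct CausalityLayer {
    // parent span ID -> IDs of spans opened beneath it
    children: Mutex<HashMap<u64, Vec<u64>>>,
}

impl<S> Layer<S> for CausalityLayer
where
    S: tracing::Subscriber + for<'a> LookupSpan<'a>,
{
    fn on_new_span(&self, _attrs: &Attributes<'_>, id: &Id, ctx: Context<'_, S>) {
        // If the new span has a parent, record it as a consequence of that parent.
        if let Some(parent) = ctx.span(id).and_then(|span| span.parent()) {
            self.children
                .lock()
                .unwrap()
                .entry(parent.id().into_u64())
                .or_default()
                .push(id.into_u64());
        }
    }
}
```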

I propose beginning by modifying the console-api crate to define a Consequences RPC service that permits clients to:

  • query the server for the consequences of a span, represented as an adjacency list of span IDs
  • receive incremental updates on subsequent changes to those consequences
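
The message shapes might look roughly like the following (hypothetical Rust equivalents; the real definitions would live in console-api's protobuf schema and will differ in detail):

```rust
/// A causal tree, represented as an adjacency list of span IDs.
struct CausalTree {
    /// (parent span ID, IDs of spans it directly caused), rooted at the queried span.
    edges: Vec<(u64, Vec<u64>)>,
}

/// One incremental change to a previously returned causal tree.
enum CausalTreeUpdate {
    /// `child` was opened as a direct consequence of `parent`.
    Opened { parent: u64, child: u64 },
    /// The span with this ID closed and should be pruned from the tree.
    Closed { id: u64 },
}
```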

Then, in console-subscriber:

  • install the tracing-causality layer
  • implement this service, using that layer.
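
Installation in an instrumented application might then look something like this (a sketch; `causality_layer()` stands in for whatever constructor the causality-tracking layer ends up exposing):

```rust
use tracing_subscriber::prelude::*;

fn init_tracing() {
    tracing_subscriber::registry()
        // console-subscriber's existing layer, which also starts the gRPC server.
        .with(console_subscriber::spawn())
        // The causality-tracking layer that the Consequences service would query
        // (hypothetical constructor).
        .with(causality_layer())
        .init();
}
```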

Then, in console:

  • upon entering the task detail view, request and display the causal tree of that task
  • listen for subsequent updates to the causal tree, and apply them
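
On the console side, maintaining the tree could be as simple as the following sketch, which reuses the hypothetical CausalTreeUpdate type from above:

```rust
use std::collections::HashMap;

/// The console's local, incrementally updated copy of one task's causal tree.
#[derive(Default)]
struct TaskCausalTree {
    children: HashMap<u64, Vec<u64>>,
}

impl TaskCausalTree {
    /// Seed the tree from the adjacency list returned for the task detail view.
    fn from_snapshot(edges: Vec<(u64, Vec<u64>)>) -> Self {
        Self { children: edges.into_iter().collect() }
    }

    /// Apply one streamed update.
    fn apply(&mut self, update: CausalTreeUpdate) {
        match update {
            CausalTreeUpdate::Opened { parent, child } => {
                self.children.entry(parent).or_default().push(child);
            }
            CausalTreeUpdate::Closed { id } => {
                self.children.remove(&id);
                for siblings in self.children.values_mut() {
                    siblings.retain(|&child| child != id);
                }
            }
        }
    }
}
```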

Unresolved questions:

  • The UI should display span names, targets, and locations, but the Consequences RPC service only provides Span IDs. Where should this additional metadata come from?

Any alternatives you've considered?

In the above proposal, the console-subscriber does the work of keeping just enough metadata around so that causality trees can be requested on demand for any Span. This responsibility could instead fall on console itself.

In this alternative approach, console-api would provide a service that streamed most tracing layer on_* events to clients. (Some of this functionality is already described in console-api, but not yet implemented in console-subscriber.) The console application would then listen to this stream and maintain just enough data to reconstruct causality trees on demand.
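
To make that concrete, the kind of layer this alternative implies could look roughly like this (a sketch; the event type and field names are illustrative, and a real implementation would also resolve contextual parents through the registry):

```rust
use tokio::sync::mpsc::UnboundedSender;
use tracing::span::{Attributes, Id};
use tracing_subscriber::layer::{Context, Layer};

/// Span lifecycle events forwarded to the gRPC task (not shown) for streaming.
enum SpanEvent {
    New { id: u64, parent: Option<u64>, name: &'static str, target: &'static str },
    Closed { id: u64 },
}

struct ForwardingLayer {
    tx: UnboundedSender<SpanEvent>,
}

impl<S: tracing::Subscriber> Layer<S> for ForwardingLayer {
    fn on_new_span(&self, attrs: &Attributes<'_>, id: &Id, _ctx: Context<'_, S>) {
        let meta = attrs.metadata();
        let _ = self.tx.send(SpanEvent::New {
            id: id.into_u64(),
            // Only the explicitly assigned parent; contextual parents would need
            // to be looked up via the registry.
            parent: attrs.parent().map(Id::into_u64),
            name: meta.name(),
            target: meta.target(),
        });
    }

    fn on_close(&self, id: Id, _ctx: Context<'_, S>) {
        let _ = self.tx.send(SpanEvent::Closed { id: id.into_u64() });
    }
}
```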

We should seriously consider this option, as it:

  1. solves the aforementioned unresolved question of tracking span names, targets and locations
  2. would lay the groundwork for additional functionality, like:
    • distinguishing between idle/busy spans in the causal tree
    • showing the stream of events occurring within a task
  3. lays the groundwork for general-purpose remote subscribers

How would users interact with this feature?

This feature shall be implemented as a component within the task detail view. The precise user interface would draw inspiration from existing tooling for displaying process trees and span trees. At minimum, the UI should display an htop-like tree view of Span names, targets, and locations.

Would you like to work on this feature?

yes

jswrenn added the S-feature label May 24, 2022

hawkw (Member) commented May 24, 2022

In general, I think we should try to prefer whichever approach makes the console CLI do more of the work, and avoid making the application do anything that we can move out of process. In this case, I'm not actually sure which approach is the one that puts less work on the application. Streaming raw span events seems conceptually much simpler, and theoretically means that the application is doing less computation...but it would mean sending significantly more data on the wire, including a lot of data that may not be actively consumed. So, although the application would no longer have to do the work of tracking causality internally, it might, in exchange, end up doing a lot of additional protobuf encoding. We should work out which approach actually means less work done in process, as I think that's an important factor in the decision-making process here.

hawkw (Member) commented May 24, 2022

We may want to do some actual benchmarking, here. It could be good to measure the performance impact of adding the tracing-causality library to a busy application (i.e. in a load test). Then, we could even prototype a simple layer that just streams span events over gRPC as described in the alternative approach, measure the performance impact of that, and compare the results.
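
Even something as crude as the following would give a first data point (a sketch only; a real comparison should be a proper load test or a criterion benchmark, run once with and once without the layer under test installed):

```rust
use std::time::Instant;

// Spawn a large number of short-lived tasks that each open a span.
async fn workload() {
    let mut handles = Vec::with_capacity(100_000);
    for _ in 0..100_000 {
        handles.push(tokio::spawn(async {
            tracing::info_span!("unit_of_work").in_scope(|| {
                // simulated work
            });
            tokio::task::yield_now().await;
        }));
    }
    for handle in handles {
        handle.await.unwrap();
    }
}

#[tokio::main]
async fn main() {
    // Install (or don't install) the layer under test here before running.
    let start = Instant::now();
    workload().await;
    println!("workload took {:?}", start.elapsed());
}
```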

exFalso commented Oct 13, 2023

I have to say, without this feature this project is of little use. What am I supposed to learn from scheduler park latencies per task? How is this helping any "debugging" effort?

Having a grasp of where the tasks are parked would be the very first thing I'd expect this debugger to implement. The next thing would be values captured by the futures. You know, how every actual debugger works? Without this, we have no way to understand where wall time is actually spent. And of course regular debugging tools don't work with async because all the thread stack traces point into the scheduler.

Side note: of course the span-based approach will not scale properly in terms of performance. What we'd actually need is a sampler for the task states, but I don't even know where one would begin doing that; you'd need to map parked future states back to un-desugared code. So you'd need... well, something like DWARF, surprise surprise.

hds (Collaborator) commented Oct 14, 2023

You might be interested in looking at the task dump feature in tokio, which the author of this draft implemented: https://docs.rs/tokio/latest/tokio/runtime/struct.Handle.html#method.dump
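
Roughly, usage looks like this (adapted from the tokio docs; note that task dumps are an unstable feature that, as far as I know, currently requires building with RUSTFLAGS="--cfg tokio_unstable --cfg tokio_taskdump" and only works on a few Linux targets):

```rust
use std::time::Duration;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    let handle = tokio::runtime::Handle::current();
    // dump() has to wait for tasks to yield, so bound how long we'll wait.
    if let Ok(dump) = timeout(Duration::from_secs(2), handle.dump()).await {
        for (i, task) in dump.tasks().iter().enumerate() {
            println!("TASK {i}:");
            println!("{}\n", task.trace());
        }
    }
}
```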

We have no integration for this feature with tokio-console, although it would definitely be nice to have.

Having said that, if you want a low-level profiler, tokio-console does not currently offer that functionality. Instead, it visualises the instrumentation present in tokio, which is something of a different use case and not really a traditional debugger.

exFalso commented Oct 14, 2023

Thank you, this runtime dump does look useful! If it's fast enough, perhaps there's a way to dump periodically and aggregate into some kind of perf-like output.

exFalso commented Oct 17, 2023

Hmm, so I had some time to experiment with Handle::dump(); unfortunately:

  1. It's very slow; it essentially waits for a yield and re-polls the futures in a special mode (so it needs to schedule all futures).
  2. It doesn't expose the included frames; it only allows printing of the traces, so there's no way to aggregate them into a proper performance graph.
  3. Not sure what I was doing wrong, but the frames printed for me included no information outside of this: BacktraceFrame { ip: 0x556054b78a0f, symbol_address: 0x556054b789e0 }.

This all feels like trying to reinvent the wheel :(

hds (Collaborator) commented Oct 17, 2023

I think that for sampling your application, task dumps are also not the right tool.

Right now, there is not so much that works "out of the box" to profile your async Rust application.

The best possibility would be to instrument your own application with tracing and then collect the results in a tool that can sum up the timings of each span. As you can see, this isn't really out of the box either, but I'm not aware of anything else at the moment.
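
For example, something as minimal as this (just a sketch) already produces per-span timing data that such a tool could sum up:

```rust
use tracing::instrument;

// Each #[instrument] call site opens a span whose busy/idle time a
// suitable subscriber can aggregate.
#[instrument]
async fn handle_request(id: u64) {
    fetch_from_db(id).await;
    render(id).await;
}

#[instrument]
async fn fetch_from_db(_id: u64) { /* ... */ }

#[instrument]
async fn render(_id: u64) { /* ... */ }
```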

exFalso commented Oct 17, 2023

I've tried to go down the tracing rabbit hole too in the past, for a different reason (distributed tracing), but it was nearly impossible to get it working consistently: the tracing context kept jumping around unless it was very carefully threaded through task spawns, causing misaligned/wrongly parented spans, and oftentimes I had no control over the spawns as they happened in dependency libraries. I ended up dropping it altogether in favour of "just" the opentelemetry libraries. Gotta say, this side of the ecosystem just feels half-baked, to be frank.

Edit: sorry, this is an off-topic rant, not related to the draft.

hds (Collaborator) commented Oct 17, 2023

Thanks for the edit. This is the point where I say that we're open to contributions. (-;

In all seriousness, I've got some ideas for how it would be at least possible to get child span timings for each task, so while you'd have to instrument functions and futures yourself, you'd at least get a visualization for it in tokio-console. But it's going to take some time to implement, and all work on tokio-console is done by contributors in their spare time, so these things don't move so fast.
