
Node CPU Profiling Roadmap #148

Closed
yunong opened this issue Feb 9, 2018 · 79 comments


@yunong
Member

yunong commented Feb 9, 2018

We’d like to get feedback on the state of the profiling-tools landscape in Node.js today. In particular, we want to get alignment on a roadmap that provides a free, open source, and cross-platform set of tools as part of the Node/V8 API -- i.e., maintained across LTS versions -- forming a powerful suite for debugging and diagnosing Node.js issues in production.

Production Challenges

There are some challenges that are unique to debugging and diagnosing issues in production, particularly for large, critical deployments. The production nature of these deployments imposes the following constraints:

  1. Generally, most solutions need to be low-impact on the process. Attaching debuggers or tracers is often impractical when the application is taking production traffic: debuggers pause the process, causing all in-flight requests to hang, and tracers and debuggers often introduce large performance penalties, which can impact customers.
  2. Tools need to be safe and not cause kernel panics or process crashes.
  3. It’s often hard to reproduce issues seen at production scale -- therefore programmatic access to tooling via APIs can help diagnose problems as they occur, e.g. starting a profile when some condition is met, since the triggering state is often impossible to reproduce on demand.

Profiling

One of the most useful methodologies to optimize CPU performance in a running application is by sampling the CPU stack frames (CPU profiling) and then visualizing the samples, typically using a flamegraph. This technique will show hot code paths on CPU -- which gives you the opportunity to optimize the relevant source code.

The approach can be used in production with OS-level profilers (perf, DTrace, SystemTap, eBPF) at very low overhead. On their own, these profilers lack the information needed to resolve JS frames, leaving unhelpful memory addresses in their place. V8 solves this problem by dumping a mapping from native frame addresses to JS source files and line numbers.
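Concretely, the mapping dump referred to here is the --perf-basic-prof flag. A sketch of the conventional Linux workflow follows; the perf invocations and the map-file location are standard perf/V8 conventions, shown as comments so the snippet stays runnable:

```js
// Conventional workflow (Linux):
//   node --perf-basic-prof app.js    # V8 appends JIT symbols to a map file
//   perf record -F 99 -g -p <pid>    # sample stacks at 99 Hz with Linux perf
//   perf script > stacks.txt         # perf resolves JS frames via the map
//
// V8 writes one line per code object to the map file:
//   <hex start address> <hex size> <name>
const mapPath = `/tmp/perf-${process.pid}.map`;
console.log(`V8 JIT symbol map (with --perf-basic-prof): ${mapPath}`);
```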

It’s important to mention here that having access to all stack frames, whether native (V8, libc, syscalls, libuv, native modules) or JS, is essential. Problems can occur anywhere in the stack, and we want to be able to profile Node with complete stack frames. For example, we heavily use gRPC -- a native module -- so without access to native frames we would have no visibility into this critical part of our application.

There are a few issues with this implementation:

  1. perf(1) support is now deprecated in V8 and will not be supported starting in Node 8 -- which effectively means we’re losing the ability to profile JS stacks.
  2. Lack of cross-platform tooling. Even if perf support wasn’t deprecated, this solution only works on Linux.

We’d like to contribute and collaborate on a set of comprehensive, cross-platform, and open source CPU profiling tools with the Node and V8 team. The V8 team has advised us that they plan to support the v8 profiler and the v8 cpu profiler API going forward, and we want to unlock CPU profiling capabilities in Node using these supported frameworks.

Roadmap:

  1. Determine which tools -- v8 profiler (--prof), cpu profiler or something else -- to support and maintain for profiling Node applications.
  2. Currently, these tools do not support native C/C++ frames, which includes V8, native-module, libuv, and syscall stack frames. Note that because these tools run in-process, they will never be able to show syscalls, as those can only be sampled from within the kernel. Supporting native frames will require implementing a stack walker in the v8 profiler or the cpu profiler.
  3. Determine the overhead of profiling using these tools. With the now deprecated OS level profilers, the overhead was quite low -- and thus suitable to use in a production environment without outsized impact to customers.
  4. Dynamically toggle profiling in a production environment. It’s not practical to profile all the time in production due to the performance impact -- being able to dynamically enable and disable profiling via an API would be ideal.
  5. Provide an API to programmatically consume profiling data once enabled (see the sketch after this list).
  6. Add documentation so these tools can be adopted by users.
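For items 4 and 5, a minimal sketch of what this can look like today with the built-in inspector module (Node 8+); the trigger for starting a profile is application-specific and purely illustrative here:

```js
// Sketch: toggling the V8 CpuProfiler at runtime from inside the process.
const inspector = require('inspector');
const fs = require('fs');

const session = new inspector.Session();
session.connect();

function startCpuProfile() {
  session.post('Profiler.enable', () => session.post('Profiler.start'));
}

function stopCpuProfile(path) {
  session.post('Profiler.stop', (err, result) => {
    if (!err) {
      // .cpuprofile files can be opened directly in Chrome DevTools.
      fs.writeFileSync(path, JSON.stringify(result.profile));
    }
    session.post('Profiler.disable');
  });
}

// Illustrative trigger: profile for 10 seconds when some condition fires.
startCpuProfile();
setTimeout(() => stopCpuProfile('app.cpuprofile'), 10000);
```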

We’re looking for feedback and alignment with the community on this subject before proceeding with the design and implementation -- please let us know your thoughts.

@yunong yunong self-assigned this Feb 9, 2018
@mmarchini
Contributor

Great explanation and summary of the current scenario.

Suggestion: we could add "Write tests for these profiling tools" to the roadmap, as I think one of the goals is to have these tools as first-class citizens in the future.

@joyeecheung
Member

Another suggestion: a better standardized format for the profiles, plus tools to convert legacy formats to it. The current V8 CPU profile format is quite limiting: since it's a single JSON document, profiles cannot be concatenated easily and must be held in memory as a whole before being serialized.
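For reference, the rough shape of that format (abridged; field names follow the DevTools Profiler.Profile type). It is one self-contained JSON object, which is why it cannot be appended to or streamed incrementally:

```js
const profile = {
  nodes: [
    { id: 1, callFrame: { functionName: '(root)', url: '', lineNumber: -1 }, children: [2] },
    { id: 2, callFrame: { functionName: 'handler', url: 'file:///app.js', lineNumber: 10 } },
  ],
  startTime: 0,                 // microseconds
  endTime: 1000000,
  samples: [2, 2, 1],           // node ids, one entry per sample
  timeDeltas: [100, 125, 100],  // microseconds between consecutive samples
};
```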

@davidmarkclements
Member

davidmarkclements commented Feb 9, 2018

In 0x (a flamegraph tool) I've re-implemented stack tracing on top of the v8 profiler (--prof) and the v8 tick processor (--prof-process --preprocess) – see davidmarkclements/0x#99. It's currently behind a flag, but in the next major version it will be the default, with a --kernel-tracing flag for OS-level tracing.

Pros

  • no need for sudo
  • totally cross platform - runs on anything that Node does
  • zero config to make it work in vms and containers
  • single thing to maintain (if I wasn't keeping kernel tracing support)
  • works around the problem with kernel tracing and TurboFan -- namely that interpreted functions (functions which have not been optimized) are opaque behind bytecode handlers (the trace output shows a series of Bytecode-related stack frames instead of the actual function the bytecode is handling). V8's prof does the memory-address lookup needed to resolve these stack frames. This is considered a V8 bug (https://bugs.chromium.org/p/v8/issues/detail?id=7155), so hopefully it will be fixed in the future... but we still have the whole Node 8 line to date without it.

Cons

  • no I/O stacks and no third-party C – if there's a hot path in libuv or some C/C++ module, it's not going to show
  • higher overhead
  • more difficult if you want realtime processing – while the isolate logfile is written to incrementally, you can't fully control the file name (isolate-ADDR-v8.log, where ADDR is determined internally), and it's incomplete JSON until the process finishes
  • the output format isn't great – the best route is the v8 tick processor, but that's an all-at-once operation, which again blocks incremental processing
  • platform/configuration/hardware-dependent timestamps – the prof output uses TimeTicks::HighResolutionNow, an internal V8 function whose output differs by system. In a particular configuration of OS X, for instance, TimeTicks::HighResolutionNow is based on a tick count (it just adds one for each tick) – this makes it incredibly difficult to reliably cross-reference stacks captured by --prof with stacks captured by anything else. And even then, how do you match the sample rates so that samples line up one-to-one?

What I would love is the ability to cross-reference prof stacks with OS-level tracing stacks. If we can do that, then the perf deprecation, the bytecode-handler problem, and any other problems would be resolved by --prof, while lower-level tracing could be used to complement --prof.

@davidmarkclements
Member

In terms of Node internal implementation, absolutely agree on tests.

The way we currently expose --prof-process isn't ideal: the stack processor is concatenated JavaScript built from JS files in deps/v8/tools, with additional injected code to make the log output work with Node (see nodejs/node#14966).

@danielkhan
Contributor

danielkhan commented Feb 9, 2018

We recently added production-grade continuous sampling to our product. For that we had to find a sweet spot between sampling frequency and duration. Additionally, we discovered a memory leak in v8::CpuProfiler (nodejs/node#14894).
Here is a blog post with screenshots: https://www.dynatrace.com/news/blog/introducing-code-level-visibility-for-node-js/

As already mentioned, this approach won't give you native frames -- still, we considered it sufficient for many use cases.

@Hollerberg, who implemented it, will be at the summit, and we are happy to discuss the validity of our approach and to be part of an initiative to improve CPU sampling capabilities in Node.

@aalexand

aalexand commented Feb 9, 2018

We use the CPU profiler in our profiling agent, an experimental way to continuously collect data from production Node.js apps. Some notes on that experience so far:

  • Testing and better overhead guarantees from the runtime would be very much desired. "How much will it cost my prod?" is a common question, and it's hard to answer without an implementation backing the claim and tests protecting it.
  • Heap profiling is no less important than CPU profiling, so it would be good to discuss that as well, maybe separately. The heap state is interesting in production both for investigating occasional OOMs and for understanding the causes of excessive pressure on the heap (which in GC'd languages turns into wasted CPU cycles).
  • As a format, we use the proto-based profile.proto format heavily, including in the open-source pprof tool, so it would be one possible candidate for the unified format (a sketch follows this list).
  • Native stacks would be good, yes. It is also desirable for the profiler to be able to profile the CPU time of all threads in the process. The profiler we use now only samples the Node thread, and when that thread is doing nothing it's hard to say whether there is no work to do or it's waiting for something in libuv to finish. Having at least some visibility into what happens in libuv would be highly desirable, and profiling all threads in the process is one way to get it.
  • Regarding perf support: OS profilers are nice, but there are types of virtualized environments where perf hardware events are not available, so the benefits of OS-level profiling are smaller there. Also, things like heap profiling are only possible within the process, since tight cooperation with the runtime is necessary.
  • Regarding being able to turn profiling on and off: this is actually possible today via the StartProfiling / StopProfiling interface.
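As an illustration of the profile.proto route mentioned above, a sketch using the open-source pprof package for Node -- the package and its time.profile/encode signatures are an assumption here, not something specified in this thread:

```js
// Sketch: collect a wall-time profile and serialize it as gzipped
// profile.proto, the format the pprof toolchain consumes.
const fs = require('fs');
const pprof = require('pprof'); // assumed: google/pprof-nodejs

async function captureWallProfile() {
  const profile = await pprof.time.profile({ durationMillis: 10000 });
  const buf = await pprof.encode(profile); // serialized profile.proto bytes
  fs.writeFileSync('wall.pb.gz', buf);
  // Inspect with the pprof CLI, e.g.: pprof -http=: wall.pb.gz
}

captureWallProfile().catch(console.error);
```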

@mhdawson
Member

mhdawson commented Feb 9, 2018

I believe it is important that Node.js have first-class tooling for problem investigation; CPU profiling is part of that, along with tracing, debugging, heap dumps, node-report, and the ability to introspect core files. Ideally, a base level of core tooling would be available with the runtime, without having to install additional components, as that can often be an issue in production.

That's a long way of saying that I support your effort. I will look at how I can help with getting CI testing and other supporting pieces in place, and at how I might be able to pull in people from IBM to collaborate.

@mmarchini
Contributor

For reference: #150

@hashseed
Member

Another issue that @bmeurer mentioned to me is that there is no way for Linux perf or any other perf tool outside of V8 to discern an inlined function from its caller, since both occupy a single stack frame. The CpuProfiler doesn't provide this either, but @FranziH is addressing this.

@mmarchini
Contributor

mmarchini commented Feb 28, 2018

Updates from the Diagnostics Summit

External profilers

External profilers work well on Node 6, even though they are not officially supported. They also work on Node 8+, but the information they collect can be misleading due to the introduction of TurboFan and Ignition. Also, the current method used by most external profilers to resolve JITed functions (--perf-basic-prof) is not officially supported by V8 and might break or be removed in the future.

To support external profilers in the future, we need two things:

Interpreted Frames

After the introduction of TurboFan and Ignition, interpreted frames on the stack don’t reflect JavaScript function calls, since only the interpreter appears in the stack, as the figure below illustrates.

[figure: call stack in which interpreted JS functions appear only as InterpreterEntryTrampoline frames]

As a consequence, understanding the data collected by external profilers can be tricky, and the data can be misleading, since there's no way to distinguish between different JavaScript function calls while they run in interpreted mode. As soon as those functions are compiled and optimized by TurboFan, they appear in the stack as before.

During the Summit, we came up with three different approaches to get more meaningful information onto the call stack for external profilers. All of them must be implemented in the V8 interpreter, and they all change the current flow to add a unique stack frame for each JS function.

Intermediate function before calling the interpreter

Add an intermediate frame to the stack which points to a JIT function whose purpose is to keep track of which JS function is being called.


Duplicate the interpreter code for each JS function

Copy the InterpreterEntryTrampoline code for each interpreted function; this way each interpreted frame will have a unique entry point. Apparently, ChakraCore is implemented this way.


Change the stack on runtime to replace the interpreter with a unique address representing the JS function

Hack the call stack at runtime, replacing InterpreterEntryTrampoline's return address with the address of a small JIT function which will return to InterpreterEntryTrampoline later.


API with information to resolve JIT function addresses

External profilers can’t resolve names for JITed functions (including V8 builtins) without help from the runtime, and most of the time those names are resolved with post-processing tools. Today we have --perf-basic-prof, which a variety of tools use to post-process profiler output and extract useful information.

As --perf-basic-prof is very limited and not officially supported by V8, the suggestion is to create an API that listens to code-creation events and exposes all the information external tools need to resolve names for JITed functions.
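To make the data requirement concrete, here is a sketch of the per-code-object information such an API would need to surface -- it is exactly what tools recover today by parsing the --perf-basic-prof map file. The line format is the standard perf convention; the helper names are illustrative:

```js
const fs = require('fs');

// Each line of /tmp/perf-<pid>.map is "<hex start> <hex size> <symbol name>".
function loadPerfMap(pid) {
  return fs.readFileSync(`/tmp/perf-${pid}.map`, 'utf8')
    .split('\n')
    .filter(Boolean)
    .map((line) => {
      const [start, size, ...name] = line.split(' ');
      return {
        start: parseInt(start, 16),
        size: parseInt(size, 16),
        name: name.join(' '), // e.g. "LazyCompile:~handler /app/server.js:10"
      };
    });
}

// Resolve one sampled address to a JIT function name.
function resolveAddress(map, addr) {
  const entry = map.find((e) => addr >= e.start && addr < e.start + e.size);
  return entry ? entry.name : '[unknown]';
}
```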

V8 Profiler & CpuProfiler

V8's builtin profilers are officially supported, but they can only sample JavaScript frames. We discussed the possibility of also adding native frames to those profilers in the future. Some challenges: 1) sampling frames from other threads; 2) sampling syscall frames.

Roadmap

@mmarchini
Contributor

Related issue: #150

@mhdawson
Member

mhdawson commented Mar 1, 2018

Thanks for the great write up of the discussion from the summit.

@ofrobots
Contributor

ofrobots commented Mar 2, 2018

OP said:

> perf(1) support is now deprecated in V8 and will not be supported starting in Node 8 -- which effectively means we’re losing the ability to profile JS stacks.

AFAIK, perf(1) support is not deprecated. Like any of the other profiling APIs, it doesn't come with an official support expectation from the V8 team. I do not think this perf support is going anywhere anytime soon.

@yunong
Member Author

yunong commented Mar 6, 2018

@ofrobots the decision to deprecate perf(1) support came from discussions with @bmeurer, @hashseed, and @fhinkel. If this has changed, then great -- but I'd like to ensure we're all aligned.

@hashseed
Member

hashseed commented Mar 6, 2018

Let me try to rephrase this: perf-based tools are not officially supported because they were never intended for public use. As a result, they did not get any attention when we launched TurboFan and Ignition in V8 5.8.

That being said, they worked fine for a long time before 5.8 and if fixed, I expect that they will stay functional for a long time. I don't expect major changes like TurboFan/Ignition to occur often.

@mmarchini
Contributor

In addition to what @hashseed said:

> I expect that they will stay functional for a long time

The time span here is in years, since a new compiler/interpreter won't replace TurboFan/Ignition overnight. Also, it won't happen by surprise: we knew TurboFan and Ignition were coming years before they landed in Node.js core.

> I don't expect major changes like TurboFan/Ignition to occur often.

I believe when this happens it will take the same upgrade path as in the past: the new compiler/interpreter is introduced into the codebase, and the old one is replaced in steps until it can be removed. That gives us time to help make the new compiler/interpreter work well with external profilers (especially if we want to provide Tier 1 support for external profilers in Node.js, which I think we should).

@mhdawson
Member

mhdawson commented Mar 6, 2018

@hashseed so is the recommended path forward from the V8 team to use the perf-based tools as opposed to creating a new API? If so, is there some level of commitment to keeping them working (along the lines of what we have now with V8 running Node.js tests)?

@hashseed
Member

hashseed commented Mar 6, 2018

I recommend using the CpuProfiler, since it is the one that both Chrome DevTools and GCP Stackdriver use. But I can see that the current feature set makes it unattractive for some use cases.

Perf-based tools were not designed with public use in mind, and I cannot make a hard commitment to support them in the long run. What I want to avoid is being in the position of having to block changes to V8 because they might break perf-based profiling in some way. I don't expect this to happen soon or often, though. Also, the breakage we are talking about, introduced by Ignition, is something I hear no other popular interpreted-language runtime supports either.

That being said, we currently don't have a single test case for perf-based profiling, again because these are ad-hoc tools not designed for public use. I would welcome test cases in Node.js so that we would at least notice when they break and could make an informed decision on whether fixing them is reasonable effort. As things stand, we cannot honestly claim to offer official support even if we wanted to.
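A minimal sketch of the kind of smoke test being invited here, assuming Linux and V8's documented --perf-basic-prof behavior of writing /tmp/perf-<pid>.map; everything else is illustrative:

```js
// Spawn a child with --perf-basic-prof and assert V8 produced its perf
// map file, so CI notices if the integration silently breaks.
const { spawnSync } = require('child_process');
const fs = require('fs');
const assert = require('assert');

const child = spawnSync(process.execPath, [
  '--perf-basic-prof',
  '-e', 'for (let i = 0; i < 1e6; i++) Math.sqrt(i);',
]);

assert.strictEqual(child.status, 0);
assert(fs.existsSync(`/tmp/perf-${child.pid}.map`),
  'expected V8 to write a perf map for the child process');
```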

@hekike
Member

hekike commented Mar 7, 2018

CPU Profiling deep dive WG meeting

Time

UTC Thu 15-Mar-2018 19:00 (07:00 PM):

| Timezone | Date/Time |
| --- | --- |
| US / Pacific | Thu 15-Mar-2018 12:00 (12:00 PM) |
| US / Mountain | Thu 15-Mar-2018 13:00 (01:00 PM) |
| US / Central | Thu 15-Mar-2018 14:00 (02:00 PM) |
| US / Eastern | Thu 15-Mar-2018 15:00 (03:00 PM) |
| London | Thu 15-Mar-2018 19:00 (07:00 PM) |
| Amsterdam | Thu 15-Mar-2018 20:00 (08:00 PM) |
| Moscow | Thu 15-Mar-2018 22:00 (10:00 PM) |
| Chennai | Fri 16-Mar-2018 00:30 (12:30 AM) |
| Hangzhou | Fri 16-Mar-2018 03:00 (03:00 AM) |
| Tokyo | Fri 16-Mar-2018 04:00 (04:00 AM) |
| Sydney | Fri 16-Mar-2018 06:00 (06:00 AM) |

Agenda

  • Explaining implementation [optional, 5-10m]
  • Demo solution [optional, 5m]
  • Integration with Node: testing, support, release, and timeline for these
  • Testing use-cases
  • What to do when it breaks due to a V8 update
  • CodeEventListener: API, integration, and timeline
  • Future plans (extending CPUProfiler?) [only if we have time]

@Flarna
Member

Flarna commented Mar 8, 2018

@hekike I expect this deep dive will focus only on external profiling (e.g. via perf) -- not in-process profiling (e.g. using the V8 CpuProfiler APIs).

@mhdawson
Member

mhdawson commented Mar 8, 2018

I'm travelling so I won't be able to make that time. Any chance we could do it Thurs/Friday instead?

@mmarchini
Contributor

> I definitely agree with this, especially nowadays where you are often limited to a single core per container

What I've heard from former Chrome DevTools developers is that the current version of V8 expects two cores to perform well, since GC, parsing, compilation, etc. all happen on separate threads now (not to mention the threads spawned by Node.js). So folks running on a single core might already be taking a real performance hit.

> It is a sidecar process if you want to continuously take profiles, say every 10 seconds, which is my use case.

IMO, based on how things are implemented today, this should be done on a worker thread (see the sketch below).

> One thing to note is that all examples in the Node documentation are connecting to the inspector session from the same process. This should probably be updated if not recommended.

Agreed.

> Just tried connecting from a different process and I get the same result: max latency is around 15-30ms without any Profiler.start command, and jumps to >100-200ms with calls to Profiler.start on a simple "Hello World!" Express app

Apparently we ran similar tests but got very different results. I'll try with express instead of fastify to see if there are any changes. Which hardware did you use to run these tests? What parameters on autocannon?
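On the worker-thread suggestion above, a minimal sketch of what that could look like: inspector.Session#connectToMainThread() is a real API (Node 12.11+); the profile duration and file name are illustrative.

```js
// worker-profiler.js -- intended to run inside a worker_threads Worker.
// connectToMainThread() attaches this session to the main thread's
// inspector, keeping protocol handling off the main event loop.
const inspector = require('inspector');
const fs = require('fs');

const session = new inspector.Session();
session.connectToMainThread();

session.post('Profiler.enable', () => {
  session.post('Profiler.start');
  setTimeout(() => {
    session.post('Profiler.stop', (err, result) => {
      if (!err) {
        fs.writeFileSync('main-thread.cpuprofile', JSON.stringify(result.profile));
      }
    });
  }, 10000);
});
```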

@rochdev

rochdev commented Apr 5, 2020

hello.js

```js
const inspector = require('inspector')
const express = require('express')
const app = express()

inspector.open(3333, '127.0.0.1', false)

app.get('/hello', (req, res) => {
  res.send('Hello World!')
})

app.listen(8080)
```
profile.js

```js
const WebSocket = require('isomorphic-ws')
const ws = new WebSocket(process.env.WS)

ws.onmessage = ({ data }) => {
  // console.log(data) // only used to validate everything works
}

let id = 1 // DevTools protocol message ids must be unique per command

ws.onopen = () => {
  ws.send(JSON.stringify({
    id: id++,
    method: 'Profiler.enable'
  }))

  setInterval(() => {
    ws.send(JSON.stringify({
      id: id++,
      method: 'Profiler.start'
    }))

    ws.send(JSON.stringify({
      id: id++,
      method: 'Profiler.stop'
    }))
  }, 5000)
}
```
  • System: MacBook Pro 13-inch 2017, macOS 10.15.2
  • Node: 12.16.1 and 13.12.0
  • Autocannon: autocannon http://localhost:8080/hello (CLI)

Out of curiosity, why do you recommend using a worker thread for this? So far my tests have shown no difference between using a worker thread or not. This is especially true if the underlying implementation is already using a thread.

@Flarna
Member

Flarna commented Apr 5, 2020

> Looking at your example though, how realistic is it for an application to have one million functions?

The problem I reported in #23070 was based on a real crash with a real application from a customer. That application had ~230,000 functions. I increased the count to 1M in my reproducer to get a memory pattern similar to the real application, which already requires a few hundred MB without the profiler.
In that application, quite a few functions were GCed once the app's startup initialization was done.

At that time the memory overhead was the main problem, and that looks far better now. But it still seems that the amount of work done by the V8 main thread depends on the number of functions, and that work is synchronous -- most likely because functions could otherwise move in memory.

I agree that this many functions is not the typical setup. The main issue for us was that there is no mechanism to estimate the overhead, nor to limit it, so we stopped using the profiler.
Our use case is to profile continuously at a low interval to get a long-term pattern, not to profile/optimize something or analyze an app in a bad state.

@mmarchini
Contributor

Those are valid use cases, but as I mentioned above, most of the improvements should be implemented in V8. So I guess we need to start discussing how to get this upstream.

@mmarchini
Contributor

Added to the agenda so we can discuss how to proceed.

@mmarchini
Contributor

Removed from the agenda until someone has time to work on it (reach out to V8 folks, gather data, etc.)

@github-actions

This issue is stale because it has been open many days with no activity. It will be closed soon unless the stale label is removed or a comment is made.

@RafaelGSS
Member

I would like to discuss it and help. Re-adding to the agenda

@RafaelGSS
Member

Hi folks.

As previously discussed in WG meetings: since this issue was created in 2018, a lot of work has been done and a lot of context has been gathered here.

It seems fair to close this issue and create another one about the performance of Profiler.start() and Profiler.stop().
We are also working on exposing the Deep Dive documents through the documentation folder. You can see it here.

@RafaelGSS
Member

Closing in favor of #444
