
Timestamps in nanoseconds #2

Open
3 tasks done
aexklon opened this issue Mar 25, 2018 · 3 comments
Labels
enhancement New feature or request

Comments

@aexklon

aexklon commented Mar 25, 2018

I'm submitting a

  • feature request

Checklist

  • Searched both open and closed issues for duplicates of this issue
  • Title adequately and concisely reflects the feature or the bug

Information

sympact@0.0.3

The Problem

Timestamps are in ms (milliseconds), so under certain circumstances the difference between end and start is always zero ms, e.g. when profiling code that does not take long to execute.
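To illustrate the problem, here is a minimal sketch (the loop is a stand-in for any fast code under profile; the exact result depends on the machine):

```javascript
const start = Date.now();
for (let i = 0; i < 1000; i++) {} // fast work: finishes well under a millisecond
const end = Date.now();
console.log(end - start); // frequently prints 0 for code this fast
```

With millisecond resolution, any work shorter than one tick is indistinguishable from zero.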

Suggested solution

I would like to suggest timestamps in ns (nanoseconds). That could be achieved by replacing Date.now() with process.hrtime() in ./lib/profile.js. It would also require converting the hrtime format with a function like this one:

function parseTime(hrtime) {
  // hrtime is a [seconds, nanoseconds] tuple.
  return (hrtime[0] * 1e9) + hrtime[1];
}
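For reference, a usage sketch of how the start/end measurement could look with process.hrtime() (illustrative only, not the actual ./lib/profile.js code):

```javascript
// process.hrtime(start) returns the elapsed time relative to start
// as a [seconds, nanoseconds] tuple, with nanosecond resolution.
const start = process.hrtime();
for (let i = 0; i < 1e6; i++) {} // code under measurement
const [s, ns] = process.hrtime(start);
const elapsedNs = s * 1e9 + ns;
console.log(`${elapsedNs} ns`);
```

Unlike Date.now(), this yields a non-zero duration even for very fast code. Newer Node versions also expose process.hrtime.bigint(), which returns the nanosecond count directly.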
@simonepri
Owner

simonepri commented Mar 25, 2018

@al-lopes Yes, that sounds reasonable.

The real problem here is that each reading takes about 5-10 ms. While we can report the correct execution time, we cannot take enough readings if the code is too fast.

Anyway, the execution time is computed inside the ./lib/vm.js file.

@simonepri simonepri added the enhancement New feature or request label Apr 1, 2018
@ranisalt

The microtime package has great precision, and it's fast.

@aexklon
Author

aexklon commented May 3, 2018

@simonepri I am not sure why each reading takes 5~10 ms. But assuming it is because of the time required to generate a tempy file and set up each fork, there may be a solution, although I have not spent much time thinking about it, nor have I validated it:

Something along the lines of recording start/end timestamps for each reading could work. With those in memory, you could compare the current reading's start with the previous reading's end to know how much of that time was spent not executing. Likewise, you could compare the current execution's start with the previous execution's end to know the time spent between executions.
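A minimal sketch of that idea (the readings array and its start/end fields are hypothetical, not sympact's actual internals):

```javascript
// Each hypothetical reading records when it started and ended (here in ns).
// The gap between one reading's end and the next reading's start is the
// time spent not taking readings, i.e. the per-reading overhead window.
function gapsBetweenReadings(readings) {
  const gaps = [];
  for (let i = 1; i < readings.length; i++) {
    gaps.push(readings[i].start - readings[i - 1].end);
  }
  return gaps;
}

const readings = [
  { start: 0, end: 5e6 },    // reading takes ~5 ms
  { start: 12e6, end: 17e6 } // next reading starts 7 ms after the first ends
];
console.log(gapsBetweenReadings(readings)); // [ 7000000 ]
```

Summing those gaps would give a rough estimate of how much of the total wall time was overhead rather than measurement.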
