
reuse processes using a process pool #195

Open
fabiosantoscode wants to merge 7 commits into master

Conversation

fabiosantoscode (Author):

No description provided.

1999 (Collaborator) commented Oct 15, 2018:

Wow, it seems like a huge amount of work has gone into this! 😮

fabiosantoscode (Author):

I've created my own, much simpler package that replaces mocha-parallel-tests and is able to reuse child processes: https://www.npmjs.com/package/mochallel

1999 (Collaborator) left a comment:

@fabiosantoscode I assume you didn't run the tests for your changes, did you? Also, it looks like there's some debugging code in your PR. Could you please fix that and update the branch?

mocha.reporter(Reporter).run((code) => {
    onComplete(code);
    throw new Error('foo!');
});

1999 (Collaborator) commented Oct 31, 2018:

@fabiosantoscode I'm closing this PR as inactive. If you plan to update it, feel free to re-open and ping me :)

@1999 1999 closed this Oct 31, 2018
fabiosantoscode (Author):

Sorry. Can you reopen? I've just pushed a removal of that throw statement.

fabiosantoscode (Author):

I've released an alternative to this module that already does process reuse. It's quite fast! Can you match its speed? http://npmjs.com/mochallel

fabiosantoscode (Author):

@1999

1999 (Collaborator) commented Oct 31, 2018:

@fabiosantoscode I noticed your message above. It's great that it's so fast 👍 Can it be used in mocha-parallel-tests somehow, or what do you mean by that?

Also, it looks like the build is red, so you'll need to fix the errors before we can merge this PR.

@1999 1999 reopened this Oct 31, 2018
fabiosantoscode (Author) commented Nov 2, 2018:

You can use the multiprocess-map module that I created for mochallel, if you like.

fabiosantoscode (Author):

I meant it as a challenge :) If you can make a faster mochallel, then everybody wins!

fabiosantoscode (Author):

The builds are timing out. Do you know what might be causing that?

fabiosantoscode (Author):

Ok, I think I fixed it.

1999 (Collaborator) commented Nov 2, 2018:

You're almost there: 1 test is failing for node@11.

@@ -149,6 +149,7 @@ export default class RunnerMain extends Runner {
  private emitSubprocessEvents() {
    for (const { event, data, type } of this.subprocessTestResults.events) {
      if (type === 'runner') {
        assert(event);
Collaborator:

This looks like debug code.

Author:

It's just to ensure event exists. It might become undefined if a bug is introduced elsewhere.

Collaborator:

Looks like a premature optimization. Let's either add a test for it or remove this.


-mocha.reporter(Reporter).run(onComplete);
+mocha.reporter(Reporter).run((code) => {
Collaborator:

You can just write mocha.reporter(Reporter).run(onComplete) I guess?

Author:

Fixed

1999 (Collaborator) commented Nov 2, 2018:

Nice job! Can you think of any tests we might add here that would be useful for this PR?

fabiosantoscode (Author):

No, I don't think so.

1999 (Collaborator) commented Nov 2, 2018:

Why not? This PR is a huge improvement to the way mocha-parallel-tests works. For instance, if one of the tests hangs, this will stop others from re-using the same process (see #155 for more info). I would like to see all changes covered by tests; here, for instance, we could check that process.pid is the same across different files.

fabiosantoscode (Author):

OK, let's do that test.
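A minimal sketch of the kind of fixture that idea implies (the file name and suite name are hypothetical, not the PR's actual code): each spec file prints its process.pid, and the outer test then asserts that both files reported the same pid.

// pid.spec.js (hypothetical fixture)
describe('pid reporting suite', () => {
  it('prints its pid', () => {
    // the outer test collects these lines from stdout and compares them
    console.log(`pid ${process.pid}`);
  });
});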

1999 (Collaborator) left a comment:

I looked through the code, and it looks like a good first step toward process re-use. However, there are at least two pieces that aren't covered by tests and require your attention. Hopefully my comments will be useful to you.

@@ -0,0 +1 @@
Support mocha --delay option
Collaborator:

It looks like you didn't change the copied README file's contents.


const [pid1, pid2] = stdout.split(/\n/g).filter(line => /^pid/.test(line));

if (pid1 && pid2) assert.strictEqual(pid1, pid2);
Collaborator:

This looks a bit fragile: if one of the tests doesn't output anything, this test will still pass. Do we actually need the existence check for pid1 and pid2? Isn't it safe to just remove that condition and run the assertion unconditionally?
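That suggestion would presumably reduce the check to something like this (a sketch of the reviewer's idea, not code from the PR):

const [pid1, pid2] = stdout.split(/\n/g).filter(line => /^pid/.test(line));

// no existence guard: if either pid line is missing, the assertion fails loudly
assert.strictEqual(pid1, pid2);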

Author:

Sometimes they don't both come out...

Collaborator:

Well, that means the test is flaky, and we should investigate why. Let's investigate that or remove the test altogether.

Author:

I'd still posit that the test has value. It works 90% of the time and could catch a bug in that 90%.

Collaborator:

I have a very strong opinion on that. If in 10% of runs one of pid1 or pid2 is empty, the test hasn't really been executed at all: the main process finished, but one of the files that was supposed to run never did. That looks like a major bug, and I'd spend more time on it. You can skip it, but I will look into it later.

Collaborator:

Oh, by the way, I think I know why this can be flaky: standard streams are asynchronous (console.log uses process.stdout.write, which is asynchronous), which is why listening to their output can be tricky. What you can do is use a synchronous function to write the process ID, for instance fs.writeFileSync. That would guarantee the PID is saved before the test file finishes executing.
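A minimal sketch of that suggestion, assuming the fixture appends its PID to a scratch file (the path is arbitrary, and appendFileSync is used so two spec files don't overwrite each other):

const fs = require('fs');

describe('pid reporting suite', () => {
  it('records its pid synchronously', () => {
    // appendFileSync blocks until the write completes, so the PID is on disk
    // before this spec file finishes, unlike buffered console.log output
    fs.appendFileSync('/tmp/mocha-parallel-pids.txt', `${process.pid}\n`);
  });
});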

const [pid1, pid2] = stdout.split(/\n/g).filter(line => /^pid/.test(line));

if (pid1 && pid2) assert.strictEqual(pid1, pid2);
});
Collaborator:

Thanks for adding the tests. mocha-parallel-tests itself is highly covered by tests, and that's why it's so stable. This PR, however, introduces a very important piece of logic, and I can think of at least two additional tests that would make sense here:

  1. Let's assume we have asynchronous test suite A which runs for 3 seconds, asynchronous test suite B which runs for 2 seconds, and synchronous test case C; --max-parallel is equal to 2. For this configuration I expect test case C to re-use the process from suite B, while A is still running in its own process (a fixture sketch follows below).
  2. The test case from Investigate re-using forked processes #155: what should the behaviour be when the forked process is blocked from exiting (or from running the next suite) by some code, for instance an http server launched in a "before" hook? I assume we will need to forget about this process and fork new ones, but maybe you have a different idea. The reason we need this is that mocha-parallel-tests tries to act exactly like mocha; when it can't, it's a shocking surprise for our users.

Could you please also add these tests, and maybe you can think of any others?
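A sketch of what the fixtures for scenario 1 could look like (file names and exact timings are illustrative assumptions):

// a.spec.js: holds its worker process for ~3 seconds
describe('suite A', () => {
  it('takes 3 seconds', (done) => {
    setTimeout(done, 3000);
  });
});

// b.spec.js: finishes after ~2 seconds, freeing its worker first
describe('suite B', () => {
  it('takes 2 seconds', (done) => {
    setTimeout(done, 2000);
  });
});

// c.spec.js: synchronous; with --max-parallel 2 it should re-use B's
// process, because B releases its worker while A is still running
describe('suite C', () => {
  it('finishes immediately', () => {});
});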

Author:

  1. OK, I'll try to write that test.
  2. If some test hoards the process, I suppose it would eventually time out. I don't think I've implemented that. I'll do it now.

Author:

Looks like 2. already times out because we use mocha in the CP as well.

Collaborator:

  1. Could you please point me to the test you've written?

What's the CP?

.travis.yml (resolved)
@@ -98,6 +100,9 @@ export default class MochaWrapper extends Mocha {

  taskManager
    .on('taskFinished', (testResults: ISubprocessResult) => {
      if (!testResults) {
        throw new Error('No output from test');
      }
Collaborator:

I'm not convinced that we need this behaviour:

  1. mocha doesn't have it, so if it happens it will be something weird for users;
  2. it's not covered by a test.

Author:

Removed

@@ -149,6 +149,7 @@ export default class RunnerMain extends Runner {
  private emitSubprocessEvents() {
    for (const { event, data, type } of this.subprocessTestResults.events) {
      if (type === 'runner') {
        assert(event);
Collaborator:

Looks like a premature optimization. Let's either add a test for it or remove this.

  };

  return process;
}
Collaborator:

One more thing to consider here is mocha's --exit option. It looks like you call this method right after the end event arrives from the subprocess. But in fact that's not when the process is finished: it can still have asynchronous logic which can, for instance, stop listening for some kind of events, and the process can exit with a non-zero exit code. In that case, although the test didn't have any errors, mocha will show it as failed. So please take a closer look at the #155 and maybe #169 comments. Maybe the thing we should do is use the process pool only when mocha is launched with the --exit option.
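For reference, a minimal sketch of the failure mode #155 describes (the fixture and port number are hypothetical): the suite passes, but a handle opened in a before hook keeps the subprocess's event loop alive, so the process never exits on its own.

const http = require('http');

describe('suite with a lingering handle', () => {
  before((done) => {
    // the server is never closed, so the event loop stays alive after the
    // suite finishes; without --exit semantics this subprocess would hang
    http.createServer().listen(8123, done);
  });

  it('passes', () => {});
});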

Author:

What do we do then?

Collaborator:

That's a question for you. It looks to me like we can re-use processes only when the --exit option is specified; otherwise we break mocha's behaviour.

Collaborator:

@fabiosantoscode ^ any ideas? If not, let's enable this behaviour only for runs with --exit and add a test for that.

@@ -217,42 +184,60 @@ class Reporter extends reporters.Base {
      event: 'sync',
    });

    assert(event);
Collaborator:

Looks like a premature optimization. Let's either add a test for it or remove this.

function x() {
  throw new Error('foo');
}
x();
Collaborator:

Let's not change existing tests; that's bad practice. Better to create a separate test and add it.

Author:

The test was breaking for some reason. I figured adding another stack frame wouldn't hurt.

Collaborator:

> The test was breaking for some reason

It is not failing on the master branch. In any case, changing existing test cases is a last resort. Please revert it.

insert_final_newline = true
charset = utf-8
indent_style = space
indent_size = 4
Collaborator:

👍

fabiosantoscode (Author):

Done @1999

1999 (Collaborator) commented Nov 3, 2018:

> Done @1999

Sorry, done what?

fabiosantoscode (Author):

Never mind.

fabiosantoscode (Author):

Implemented all the ideas for tests. Any more tests you can think of?

@@ -0,0 +1,5 @@
describe('suite 1', () => {
  it('hoard the process', (done) => {
    setTimeout(done, 10000);
Collaborator:

mocha-parallel-tests' own tests are executed sequentially, so if you need an asynchronous test, it's better to use a 3-second timeout.

Author:

This big timeout is intentional: it's meant to make the process time out.

const { exec } = require('child_process');
const { resolve } = require('path');
const libExecutable = resolve(__dirname, '../../dist/bin/cli.js');

exec(`${libExecutable} --max-parallel 1 test/reusing-processes-2/index.1.spec.js test/reusing-processes-2/index.2.spec.js`, {
Collaborator:

Our common rule is that each test uses only its own test cases, so it would be great to copy these test cases into the test/process-timeouts folder.

Author:

My mistake, pushing a fix.

exec(`${libExecutable} --max-parallel 1 test/reusing-processes-2/index.1.spec.js test/reusing-processes-2/index.2.spec.js`, {
  cwd: resolve(__dirname, '../../'),
}, err => {
  assert(err);
Collaborator:

This is a bit generic. It would be great to add a second argument saying what kind of error happened.

Author:

It always exits with code 1, I think. It would tell us nothing more about the error.

Collaborator:

I mean that the error you will get if this test fails currently looks like this:

AssertionError [ERR_ASSERTION]: null == true
    at b (repl:1:11)

And what I think is that the test error should ideally be understandable even without debugging the test. So a simple change like this would make things much easier to debug:

assert(err, 'CLI was supposed to exit with an error but it didn\'t')

Author:

Ok, changed it

Author:

Fixed

@@ -0,0 +1 @@
Check if processes are being reused
Collaborator:

It would be better if the README files were more precise, or at least not copies of each other.

Author:

Fixed


fabiosantoscode (Author):

I think you can merge now

1999 (Collaborator) left a comment:

@fabiosantoscode thanks for fixing some of the issues. However, there are still some unresolved comments from our previous discussion, and I've added some more. Please take a look at these and fix them so we can merge your PR.

}

debugLog('Process spawned. You can run it manually with this command:');
debugLog(`node ${nodeFlags.join(' ')} ${runnerPath} ${forkArgs.concat([DEBUG_SUBPROCESS.argument]).join(' ')}`);
test.send(JSON.stringify({ type: 'start', testOptions }));
Collaborator:

You did, but what's the point of passing a string if your type definition says that msg can be any? I'd prefer hiding this logic inside the "send" function (i.e. do the JSON.stringify there if it's needed).
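A sketch of that suggestion (method and field names are assumptions based on the snippet above): callers pass plain objects and send() serializes internally.

// inside the pooled-process wrapper (hypothetical shape)
send(msg: any): void {
  // serialize here so no caller has to remember to JSON.stringify
  const payload = typeof msg === 'string' ? msg : JSON.stringify(msg);
  this.process.send(payload);
}

// the call site above then becomes:
// test.send({ type: 'start', testOptions });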

try {
  test = await this.pool.getOrCreate(this.isTypescriptRunMode);
} catch (e) {
  throw e;
Collaborator:

} else {
  assert(event);
Collaborator:

kill: () => void;
destroy: () => void;
on: (event: string, fn: eventFn) => void;
removeListener: (event: string, fn: eventFn) => void;
Collaborator:

These two methods look very similar to node's EventEmitter, but they're doing different things. Could you please rename them to be more explicit? For instance "on" -> "addProcessListener" and "removeListener" -> "removeProcessListener".

Another approach would be for IMochaProcess objects to extend node's EventEmitter, but in that case you'd have to re-implement all the EventEmitter methods related to adding/removing listeners.
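The rename would presumably leave the interface looking like this (a sketch; the eventFn signature is an assumption, the other members are taken from the snippet above):

type eventFn = (...args: any[]) => void;

interface IMochaProcess {
  kill: () => void;
  destroy: () => void;
  // explicit names avoid confusion with EventEmitter's own on/removeListener
  addProcessListener: (event: string, fn: eventFn) => void;
  removeProcessListener: (event: string, fn: eventFn) => void;
}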

});
}

create(runnerPath: string, opt: object): IMochaProcess {
Collaborator:

opt is too vague. Could you add an interface for it? As far as I can see, it should be ForkOptions from "child_process".
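That is, something like this signature sketch (assuming Node's built-in child_process typings):

import { ForkOptions } from 'child_process';

// opt now documents exactly what fork() accepts (cwd, env, execArgv, ...)
create(runnerPath: string, opt: ForkOptions): IMochaProcess {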

@@ -29,6 +30,9 @@ function test {
MOCHA_VERSION=`mocha --version`
echo "You're running tests with mocha version $MOCHA_VERSION"

test 'process reuse' test/reusing-processes/index.js
test 'process reuse (2)' test/reusing-processes-2/index.js
Collaborator:

Let's have a more explicit name for this test.


