feat(options): make "perTest" the default for "coverageAnalysis" (#2881)
Since we now have support for `coverageAnalysis` in all supported test runners, we're finally making `perTest` the default. 

BREAKING CHANGE: `"perTest"` is now the default value for "coverageAnalysis" when the configured test runner is not "command". Set `"coverageAnalysis": "off"` explicitly to opt out of this behavior.
nicojs committed May 8, 2021
1 parent dd29f88 commit 518ebe6
Showing 19 changed files with 84 additions and 85 deletions.
24 changes: 11 additions & 13 deletions docs/configuration.md
@@ -53,27 +53,25 @@ With `commandRunner`, you can specify the command to execute for running tests.

### `coverageAnalysis` [`string`]

Default: `off`<br />
Default: `perTest`<br />
Command line: `--coverageAnalysis perTest`<br />
Config file: `"coverageAnalysis": "perTest"`

With `coverageAnalysis` you specify which coverage analysis strategy you want to use.
_Note:_ The default changed from `"off"` to `"perTest"` in Stryker v5.

Stryker can analyse code coverage results. This can potentially speed up mutation testing a lot, as only the tests covering a
particular mutation are tested for each mutant.
This does *not* influence the resulting mutation testing score. It only improves performance.
With `coverageAnalysis` you specify which coverage analysis strategy you want to use.

The possible values are:
Stryker can analyze mutant coverage results. Doing this can speed up mutation testing because Stryker then decides to run only the exact tests covering the mutant it is testing (instead of running them all).
This performance optimization does *not* influence the resulting mutation testing score but does allow Stryker to distinguish between "Survived" and "NoCoverage".

* **off**: Stryker will not determine the code covered by tests during the initial test run phase. All tests will be executed for each mutant
during the mutation testing phase.
All official test runner plugins (`@stryker-mutator/mocha-runner`, `@stryker-mutator/jasmine-runner`, `@stryker-mutator/karma-runner` and `@stryker-mutator/jest-runner`)
support coverage analysis, except for the `command` test runner, since Stryker just runs your command and has no way of knowing more about your tests.

* **all**: Stryker will determine the code covered by your tests during the initial test run phase. Mutants without code coverage will be reported with `NoCoverage` and are not tested during the mutation testing phase. This requires your test runner plugin to report code coverage back to Stryker.
All official test runner plugins support this (`@stryker-mutator/mocha-runner`, `@stryker-mutator/jasmine-runner`, `@stryker-mutator/karma-runner` and `@stryker-mutator/jest-runner`), but the `command` test runner does not, since it just runs a command and has no way of knowing more information about your tests.
The possible values are:

* **perTest**: Stryker will determine the code covered by your test per executed test during the initial test run phase. Only mutants actually covered by your
test suite are tested during the mutation testing phase.
Only the tests that cover a particular mutant are tested for each one. This requires your tests to be able to run independently of each other and in random order.
* **off**: Stryker does no optimization. All tests are executed for each mutant.
* **all**: Stryker will determine the mutants covered by your tests during the initial test run phase. Mutants without code coverage will be reported with `NoCoverage` and will not be tested. This requires your test runner plugin to report code coverage back to Stryker.
* **perTest**: Stryker will determine which tests cover which mutant during the initial test run phase. Only the tests that cover a specific mutant are executed for each mutant. Your tests should be _able to run independently of each other and in random order_. Stryker will determine which mutants are _static_ and will run all tests for them during mutation testing. A mutant is 'static' when it is executed during the loading of the file rather than during a test.
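To opt out of the new default described above, set the option explicitly in your config file. A minimal sketch of a `stryker.conf.json` (the schema path and test runner shown here are illustrative, not part of this commit):

```json
{
  "$schema": "./node_modules/@stryker-mutator/core/schema/stryker-schema.json",
  "testRunner": "karma",
  "coverageAnalysis": "off"
}
```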

### `dashboard` [`DashboardOptions`]

3 changes: 2 additions & 1 deletion e2e/package.json
@@ -42,8 +42,9 @@
"webpack-cli": "~3.3.12"
},
"scripts": {
"postinstall": "node ../node_modules/lerna/cli.js bootstrap && npm run install-local-dependencies && link-parent-bin -c test --link-local-dependencies true",
"postinstall": "node ../node_modules/lerna/cli.js bootstrap && npm run install-local-dependencies && npm run link-parent-bin",
"install-local-dependencies": "ts-node tasks/install-local-dependencies",
"link-parent-bin": "link-parent-bin -c test --link-local-dependencies true",
"test": "ts-node tasks/run-e2e-tests.ts"
},
"localDependencies": {
8 changes: 7 additions & 1 deletion e2e/tasks/install-local-dependencies.ts
@@ -1,6 +1,12 @@
import { bootstrapLocalDependencies } from '../../helpers/bootstrap-local-dependencies';
import path = require('path');

bootstrapLocalDependencies(path.resolve(__dirname, '..'))
/**
* Installs the Stryker dependencies inside the correct test packages (using "localDependencies")
 * Installing a single e2e test package is possible with `npm run install-local-dependencies -- path/to/package.json path/to/other/package.json`
* Example: `npm run install-local-dependencies -- test/jest-with-ts/package.json`
*/
const globs = process.argv.slice(2);
bootstrapLocalDependencies(path.resolve(__dirname, '..'), globs.length ? globs : undefined)
.then(() => console.log('Installed local dependencies'))
.catch(err => console.error(err));
2 changes: 1 addition & 1 deletion e2e/test/karma-jasmine/package.json
@@ -6,7 +6,7 @@
"main": "index.js",
"scripts": {
"pretest": "rimraf \"reports\"",
"test": "stryker run stryker.conf.js",
"test": "stryker run",
"posttest": "mocha --require ../../tasks/ts-node-register.js verify/*.ts"
},
"author": "",
17 changes: 0 additions & 17 deletions e2e/test/karma-jasmine/stryker.conf.js

This file was deleted.

16 changes: 16 additions & 0 deletions e2e/test/karma-jasmine/stryker.conf.json
@@ -0,0 +1,16 @@
{
"$schema": "../../node_modules/@stryker-mutator/core/schema/stryker-schema.json",
"mutate": ["src/*.js"],
"testRunner": "karma",
"reporters": ["clear-text", "html", "event-recorder"],
"concurrency": 2,
"karma": {
"config": {
"files": ["src/*.js", "test/*.js"],
"client": {
"clearContext": false
}
}
},
"timeoutMS": 60000
}
7 changes: 4 additions & 3 deletions e2e/test/karma-jasmine/verify/verify.ts
@@ -7,9 +7,10 @@ describe('Verify stryker has ran correctly', () => {
metrics: produceMetrics({
killed: 16,
mutationScore: 64,
mutationScoreBasedOnCoveredCode: 64,
survived: 9,
totalCovered: 25,
mutationScoreBasedOnCoveredCode: 94.12,
survived: 1,
noCoverage: 8,
totalCovered: 17,
totalDetected: 16,
totalMutants: 25,
totalUndetected: 9,
9 changes: 5 additions & 4 deletions helpers/bootstrap-local-dependencies.ts
@@ -33,12 +33,13 @@ interface Package {

/**
* Installs local dependencies in one go,
* reads package.json and test/* /package.json files and installs the local dependencies marked there
* reads the package.json files and installs the local dependencies marked there.
* The packageJsonGlobs default to ./package.json and test/* /package.json files
* @param directory the directory where the tests live
*/
export async function bootstrapLocalDependencies(directory: string) {
console.log('bootstrap ' + path.resolve(directory));
const files = await globAsPromised('{package.json,test/*/package.json}', { cwd: path.resolve(directory) });
export async function bootstrapLocalDependencies(directory: string, packageJsonGlobs = ['package.json', 'test/*/package.json']) {
console.log(`bootstrap ${path.resolve(directory)} (using ${packageJsonGlobs.join(',')})`);
const files = (await Promise.all(packageJsonGlobs.map((globPattern) => globAsPromised(globPattern, { cwd: path.resolve(directory) })))).flat();
const packages = await Promise.all(files.map(fileName => readFile(fileName)
.then(content => ({ dir: path.dirname(fileName), content: JSON.parse(content) as Package }))));
const sourcesByTarget: ListByPackage = {};
4 changes: 2 additions & 2 deletions packages/api/schema/stryker-core.json
@@ -269,8 +269,8 @@
},
"coverageAnalysis": {
"$ref": "#/definitions/coverageAnalysis",
"description": "Indicates which coverage analysis strategy to use. During mutation testing, stryker will try to only run the tests that cover a particular line of code.\n\n'perTest': Analyse coverage per test.\n'all': Analyse the coverage for the entire test suite.\n'off' (default): Don't use coverage analysis",
"default": "off"
"description": "Indicates which coverage analysis strategy to use. During mutation testing, Stryker will try to only run the tests that cover a particular mutant.\n\n'perTest' (default): Analyse coverage per test.\n'all': Analyse the coverage for the entire test suite.\n'off': Don't use coverage analysis",
"default": "perTest"
},
"clearTextReporter": {
"description": "The options for the clear text reporter.",
10 changes: 6 additions & 4 deletions packages/core/src/config/options-validator.ts
@@ -100,10 +100,12 @@ export class OptionsValidator {
options.concurrency = options.maxConcurrentTestRunners;
}
}
if (CommandTestRunner.is(options.testRunner) && options.testRunnerNodeArgs.length) {
this.log.warn(
'Using "testRunnerNodeArgs" together with the "command" test runner is not supported, these arguments will be ignored. You can add your custom arguments by setting the "commandRunner.command" option.'
);
if (CommandTestRunner.is(options.testRunner)) {
if (options.testRunnerNodeArgs.length) {
this.log.warn(
'Using "testRunnerNodeArgs" together with the "command" test runner is not supported, these arguments will be ignored. You can add your custom arguments by setting the "commandRunner.command" option.'
);
}
}
options.mutate.forEach((mutateString, index) => {
const match = MUTATION_RANGE_REGEX.exec(mutateString);
4 changes: 3 additions & 1 deletion packages/core/src/process/3-dry-run-executor.ts
@@ -125,7 +125,9 @@ export class DryRunExecutor {
private async timeDryRun(testRunner: TestRunner): Promise<{ dryRunResult: CompleteDryRunResult; timing: Timing }> {
const dryRunTimeout = this.options.dryRunTimeoutMinutes * 1000 * 60;
this.timer.mark(INITIAL_TEST_RUN_MARKER);
this.log.info('Starting initial test run. This may take a while.');
this.log.info(
`Starting initial test run (${this.options.testRunner} test runner with "${this.options.coverageAnalysis}" coverage analysis). This may take a while.`
);
this.log.debug(`Using timeout of ${dryRunTimeout} ms.`);
const dryRunResult = await testRunner.dryRun({ timeout: dryRunTimeout, coverageAnalysis: this.options.coverageAnalysis });
const grossTimeMS = this.timer.elapsedMs(INITIAL_TEST_RUN_MARKER);
7 changes: 3 additions & 4 deletions packages/core/src/stryker-cli.ts
@@ -79,8 +79,7 @@ export class StrykerCli {
" Only configure this if your test runner doesn't take care of this already and you're not using just-in-time transpiler like `babel/register` or `ts-node`."
)
.option(
'--coverageAnalysis <perTest|all|off>',
`The coverage analysis strategy you want to use. Default value: "${defaultValues.coverageAnalysis}"`
'--coverageAnalysis <perTest|all|off>',
`The coverage analysis strategy you want to use. Default value: "${defaultValues.coverageAnalysis}"`
)
.option('--testRunner <name>', 'The name of the test runner you want to use')
.option(
@@ -153,7 +152,7 @@
)
.option(
'--cleanTempDir <true/false>',
'Choose whether or not to clean the temp dir (which is ".stryker-tmp" inside the current working directory by default) after a successful run. The temp dir will never be removed when the run failed for some reason (for debugging purposes).',
`Choose whether or not to clean the temp dir (which is "${defaultValues.tempDirName}" inside the current working directory by default) after a successful run. The temp dir will never be removed when the run failed for some reason (for debugging purposes).`,
parseBoolean
)
.parse(this.argv);
@@ -182,7 +181,7 @@

if (Object.keys(commands).includes(this.command)) {
const promise: Promise<MutantResult[] | void> = commands[this.command as keyof typeof commands]();
promise.catch((err) => {
promise.catch(() => {
process.exitCode = 1;
});
} else {
10 changes: 2 additions & 8 deletions packages/core/src/test-runner/command-test-runner.ts
@@ -5,7 +5,6 @@ import { StrykerOptions, CommandRunnerOptions, INSTRUMENTER_CONSTANTS } from '@s
import {
TestRunner,
TestStatus,
DryRunOptions,
MutantRunOptions,
DryRunResult,
MutantRunResult,
@@ -14,7 +13,7 @@
CompleteDryRunResult,
toMutantRunResult,
} from '@stryker-mutator/api/test-runner';
import { errorToString, StrykerError } from '@stryker-mutator/util';
import { errorToString } from '@stryker-mutator/util';

import { kill } from '../utils/object-utils';
import { Timer } from '../utils/timer';
@@ -47,12 +46,7 @@ export class CommandTestRunner implements TestRunner {
this.settings = options.commandRunner;
}

public async dryRun({ coverageAnalysis }: Pick<DryRunOptions, 'coverageAnalysis'>): Promise<DryRunResult> {
if (coverageAnalysis !== 'off') {
throw new StrykerError(
`The "${CommandTestRunner.runnerName}" test runner does not support coverageAnalysis "${coverageAnalysis}". Please set "coverageAnalysis": "off".`
);
}
public async dryRun(): Promise<DryRunResult> {
return this.run({});
}

@@ -14,7 +14,7 @@ describe(`${CommandTestRunner.name} integration`, () => {
describe(CommandTestRunner.prototype.dryRun.name, () => {
it('should report test as successful', async () => {
const sut = createSut();
const result = await sut.dryRun({ coverageAnalysis: 'off' });
const result = await sut.dryRun();
assertions.expectCompleted(result);
expect(result.tests).lengthOf(1);
expect(result.tests[0].status).eq(TestStatus.Success);
@@ -25,7 +25,7 @@

it('should report test as failed if exit code != 0', async () => {
const sut = createSut({ command: 'npm run fail' });
const result = await sut.dryRun({ coverageAnalysis: 'off' });
const result = await sut.dryRun();
assertions.expectCompleted(result);
expect(result.tests).lengthOf(1);
expect(TestStatus[result.tests[0].status]).eq(TestStatus[TestStatus.Failed]);
@@ -39,7 +39,7 @@
// Arrange
const killSpy = sinon.spy(objectUtils, 'kill');
const sut = createSut({ command: 'npm run wait' });
const runPromise = sut.dryRun({ coverageAnalysis: 'off' });
const runPromise = sut.dryRun();

// Act
await sut.dispose();
@@ -79,7 +79,7 @@ describe(ConfigReader.name, () => {
expect(testInjector.logger.warn).not.called;
});

it('should use the default config if no stryker.conf file was found', () => {
it.only('should use the default config if no stryker.conf file was found', () => {
process.chdir(resolveTestResource('no-config'));

sut = createSut({});
2 changes: 1 addition & 1 deletion packages/core/test/unit/config/options-validator.spec.ts
@@ -43,7 +43,7 @@ describe(OptionsValidator.name, () => {
commandRunner: {
command: 'npm test',
},
coverageAnalysis: 'off',
coverageAnalysis: 'perTest',
dashboard: {
baseUrl: 'https://dashboard.stryker-mutator.io/api/reports',
reportType: ReportType.Full,
4 changes: 3 additions & 1 deletion packages/core/test/unit/process/3-dry-run-executor.spec.ts
@@ -93,7 +93,9 @@ describe(DryRunExecutor.name, () => {
it('should log about that this might take a while', async () => {
runResult.tests.push(factory.successTestResult());
await sut.execute();
expect(testInjector.logger.info).calledWith('Starting initial test run. This may take a while.');
expect(testInjector.logger.info).calledWith(
'Starting initial test run (command test runner with "perTest" coverage analysis). This may take a while.'
);
});

describe('with successful tests', () => {
20 changes: 6 additions & 14 deletions packages/core/test/unit/test-runner/command-test-runner.spec.ts
@@ -2,7 +2,7 @@ import childProcess from 'child_process';
import os from 'os';

import { DryRunResult, DryRunStatus, TestStatus } from '@stryker-mutator/api/test-runner';
import { errorToString, StrykerError } from '@stryker-mutator/util';
import { errorToString } from '@stryker-mutator/util';
import { expect } from 'chai';
import sinon from 'sinon';
import { factory, assertions } from '@stryker-mutator/test-helpers';
@@ -51,7 +51,7 @@
it('should report failed test when the exit code != 0', async () => {
timerMock.elapsedMs.returns(42);
const sut = createSut();
const resultPromise = sut.dryRun({ coverageAnalysis: 'off' });
const resultPromise = sut.dryRun();
await tick();
childProcessMock.stdout.emit('data', 'x Test 1 failed');
childProcessMock.stderr.emit('data', '1 != 2');
@@ -68,7 +68,7 @@
killStub.resolves();
const expectedError = new Error('foobar error');
const sut = createSut();
const resultPromise = sut.dryRun({ coverageAnalysis: 'off' });
const resultPromise = sut.dryRun();
await tick();
childProcessMock.emit('error', expectedError);
const result = await resultPromise;
@@ -86,14 +86,6 @@
expect(childProcessMock.stdout.listenerCount('data')).eq(0);
expect(childProcessMock.stderr.listenerCount('data')).eq(0);
});

it('should reject if coverageAnalysis !== "off"', async () => {
const sut = createSut();
await expect(sut.dryRun({ coverageAnalysis: 'all' })).rejectedWith(
StrykerError,
'The "command" test runner does not support coverageAnalysis "all".'
);
});
});

describe(CommandTestRunner.prototype.mutantRun.name, () => {
@@ -119,14 +111,14 @@
it('should kill any running process', async () => {
killStub.resolves();
const sut = createSut();
sut.dryRun({ coverageAnalysis: 'off' });
sut.dryRun();
await sut.dispose();
expect(killStub).calledWith(childProcessMock.pid);
});

it('should resolve running processes in a timeout', async () => {
const sut = createSut();
const resultPromise = sut.dryRun({ coverageAnalysis: 'off' });
const resultPromise = sut.dryRun();
await sut.dispose();
const result = await resultPromise;
expect(result.status).eq(DryRunStatus.Timeout);
Expand All @@ -141,7 +133,7 @@ describe(CommandTestRunner.name, () => {
});

async function actDryRun(sut: CommandTestRunner = createSut(), exitCode = 0) {
const resultPromise = sut.dryRun({ coverageAnalysis: 'off' });
const resultPromise = sut.dryRun();
await actTestProcessEnds(exitCode);
return resultPromise;
}
