
Mark tests that test overall behaviour other tests expand on in more detail #5823

Closed

dantman opened this issue Mar 18, 2018 · 47 comments

@dantman

dantman commented Mar 18, 2018

It should be possible to mark a test as testing overall behaviour that other tests expand on in more specific detail. i.e. We should be able to let Jest know that if a certain test fails, a bunch of other tests are expected to fail and aren't really relevant.

We could do this either with a way to mark a test in a suite as important, or some way of marking a bunch of tests as dependent on another test.

Imagine this abstract scenario where we have a foo() that returns 1 and then it's changed to return "string".

// Old implementation
function foo() {
	return 1;
}

// New implementation
function foo() {
	return "string";
}

describe('foo()', () => {
	it('should be a number', () => {
		expect(typeof foo()).toBe('number');
	});

	it('should be a positive number', () => {
		expect(foo()).toBeGreaterThan(0);
	});

	it('should be a finite number', () => {
		expect(foo()).toBeLessThan(Infinity);
		expect(foo()).toBeGreaterThan(-Infinity);
	});

	it('should not be a fraction', () => {
		expect(Number.isInteger(foo())).toBe(true);
	});
});

Every one of these tests is going to fail because the return of foo() is a string instead of a number. But should be a number is the only test that matters here; the moment should be a number fails, every other test is irrelevant. e.g. should be a finite number telling you that a string is not an infinite number does nothing to help solve the fact that a string is not a number, which is already tested.

However, when Jest runs a test suite like this, it will output all 4 failures to the console with a long diff of details for each and every one of them. But the only one that actually tells us what we want is the first test. In fact, in a complex application where the more detailed failures give overly specific errors, the failures at the end will just make it difficult to figure out what has failed unless we scroll all the way back up.

If we are able to mark a test in some way as a parent of, a dependency of, or more important than other tests, then Jest can try to improve the test results in this case. We could ignore the other tests entirely, or we could report that they failed but not output the detailed diffs for any tests we are expecting to fail, so the developer can go straight to figuring out why the parent test failed.


As a practical example, I am writing tests for a Redux store that represents a form template. Instead of hardcoding the structure of this template in my test suite, I use a snapshot test to make sure that the newTemplate() action is behaving correctly. Then I test the other actions (the ones that just modify the template) by using newTemplate() to create the empty template that the modification actions modify.

This keeps my tests sane; however, if newTemplate() is broken I know that every other test is going to fail. And their failures are all irrelevant, since the real failure is in newTemplate() and that is what I have to fix, rather than the individual modification actions.
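
A rough sketch of that setup (the reducer and action names here are hypothetical):

const { reducer, newTemplate, setTitle } = require('./formTemplate');

test('newTemplate() produces the expected empty template', () => {
	// the snapshot guards the template structure instead of hardcoding it
	expect(reducer(undefined, newTemplate())).toMatchSnapshot();
});

test('setTitle() updates the template title', () => {
	// uses newTemplate() as setup; if newTemplate() is broken, this
	// failure is just noise next to the snapshot failure above
	const template = reducer(undefined, newTemplate());
	expect(reducer(template, setTitle('Intake Form')).title).toBe('Intake Form');
});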

@jamonholmgren

jamonholmgren commented Mar 18, 2018

Basically, have a series of "smoke tests" that are run first, then go deeper if those pass? Interesting idea.

@dantman You can also run jest with the --bail option, which will stop execution if any test fails.

@SimenB
Member

SimenB commented Mar 18, 2018

I like this idea! Maybe we could have a flag that marks later its and describes within the same describe as skipped. So sorta like --bail, but it will still run all test files, and potentially sibling describes, regardless of a bail or not.

This will totally mess with test.concurrent, but that's experimental anyways. This behaviour should not be the default, but should it be per test (programmatic API) or in configuration (either as a global or matching some pattern)?

@ipodppod

ipodppod commented May 6, 2018

Another choice could be to describe this as part of the test itself.
Maybe something like:

describe('foo()', () => {

	testBeforeAll('should be a number', () => {
		expect(typeof foo()).toBe('number');
	});

	test('should be a positive number', () => {
		expect(foo()).toBeGreaterThan(0);
	});
});

@SimenB
Member

SimenB commented May 6, 2018

@cpojer any thoughts on this feature?

@cpojer
Member

cpojer commented May 6, 2018

I'm not sure I see the value of making tests dependent on each other. Worst case, every test depends on every test before it. In that case, --bail on a per test basis makes more sense than adding new APIs imho.

@jamonholmgren

You could achieve this without any changes to Jest by setting up your tests like this:

[screenshot: package.json test scripts that run the smoke tests before the rest]

That way running yarn test will run the smoke test first, then if that passes it'll run everything else.
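
(The screenshot hasn't survived; presumably it showed npm scripts roughly like this, where the exact script names and file patterns are guesses:)

{
  "scripts": {
    "test": "yarn test:smoke && jest --testPathIgnorePatterns=smoke",
    "test:smoke": "jest smoke"
  }
}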

@ipodppod

ipodppod commented May 7, 2018

@jamonholmgren Nice idea but less practical for the example in the OP, where you want to bail only a certain "describe".

Validating some value before using it for the rest of a test is very common imo.

@dantman
Author

dantman commented May 7, 2018

It also completely falls apart when you have tests in different test folders grouped with relevant code and do not want to have to put all your smoke tests in a single spot disconnected from the code they are testing.

@ipodppod

ipodppod commented May 7, 2018

I'm starting to like this idea:

test.essential(name, fn, timeout)     // if fails, other tests within the scope will not be executed
describe.essential(name, fn, timeout) // if fails, other tests within the scope will not be executed

It gives lots of flexibility in designing the test suite.
And it reads well.

describe('foo()', () => {

	test.essential('should be a number', () => {
		expect(typeof foo()).toBe('number');
	});

	test('should be a positive number', () => {
		expect(foo()).toBeGreaterThan(0);
	});
});

@dantman
Author

dantman commented May 25, 2018

I like test.essential.

Though here's another idea to throw into the mix: what about defining preconditions for tests? I often do setup work in beforeEach that may fail because it uses methods that other tests cover. We could mark these as "preconditions", i.e. a part of a test that, if it fails, indicates not that the test itself has failed, but that a condition required for the test to work has failed instead.

describe('foo()', () => {
	test('should be a number', () => {
		expect(typeof foo()).toBe('number');
	});

	test('should be a positive number', () => {
		precondition(() => {
			// Greater than/less than tests require the result to be a number
			// If this fails, then the test isn't actually valid
			expect(typeof foo()).toBe('number');
		});

		expect(foo()).toBeGreaterThan(0);
	});
});

Test "dependencies" could be written by writing the test inside of a function, executing that function as a test once on its own, and executing that function as a precondition inside other tests.

describe('foo()', () => {
	function fooIsANumber() {
		expect(typeof foo()).toBe('number');
	}

	test('should be a number', () => {
		fooIsANumber();
	});

	test('should be a positive number', () => {
		precondition(fooIsANumber);

		expect(foo()).toBeGreaterThan(0);
	});
});

Though perhaps essential and precondition are features for separate use cases that are both useful and both would be good to implement. Preconditions are more useful for tests that use functions from other tests, but those functions aren't themselves 'essential' to an entire test suite.

@rickhanlonii
Member

I'm not saying we would do this, but from an API standpoint, what about:

describe('foo()', () => {
	test('should be a number', () => {
		expect(typeof foo()).toBe('number');
	});

	test.depends(['should be a number'], 'should be a positive number', () => {
		expect(foo()).toBeGreaterThan(0);
	});
});

@SimenB
Member

SimenB commented May 26, 2018

I'm more fan of just "all other tests in the same and descendent scopes are skipped" than being able to point to some tests directly from other tests

@rickhanlonii
Member

@SimenB agreed, what about:

describe('foo()', () => {
	test.required('should be a number', () => {
		expect(typeof foo()).toBe('number');
	});

	test('should be a positive number', () => {
		expect(foo()).toBeGreaterThan(0);
	});
});

@dantman
Author

dantman commented May 26, 2018

test.required looks like the test.essential described before, just under a different name.

I kind of like essential over required, because I think required would be confusing if someone comes along with a valid use case for an optional that does something else: say, failing when run on its own but only warning when run as part of a test suite, for when a test has become unstable and in the meantime you don't want it failing your test suite and CI systems, but you still want to be able to run it directly to try and fix the issue.

I too don't like the idea of having to repeat the name of a test, which is normally a human-readable sentence, not an identifier key intended to be used multiple times.

And of course like I said, I think it would be good to support both test.essential and precondition.

@geolunalg

I like the idea of both test.depends and test.essential. Having them both provides more flexibility and allows us to truly test behavior, not just units of code.

@mkeller-upb

I like test.essential (test.required).
Is this feature scheduled for development, or is it waiting for a pull request?

test.essential appears to be superior to other proposals:

  • simple, clean interface, few brackets, easy to understand at a quick look
  • use cases of the proposed alternatives depends and precondition can be mapped to test.essential or solved otherwise; see below for details.

My motivation/urge for this feature: I will have nested describes, and my current plan is to have the deepest describe contain a list of dependent tests. These tests verify UI element behaviors for all the various cases, e.g. (in)valid inputs. For the sake of speed, the behavior tests are dependent on each other, i.e. on test order. Doing so, I do not need to restore UI state (with before/afterEach) for each of the 3000 tests. The acceptable downside of this speedup is that complete test sequences fail (inside the deepest describe).

(If you have any other idea, e.g. with a customEnvironment or custom globals, please drop a note.)

Argument for using test.essential to realize the other solutions:

  • depends use cases can be mapped to test.essential:

    • a dependency on other tests can be defined implicitly by test order, e.g. test.essential, then test. The latter test is only executed if the first succeeded
    • for many dependency relations you can use nested describes -- but I wouldn't recommend that due to maintenance issues
  • precondition use cases can be mapped to test.essential:

    • with the order test.essential, test, the first test verifies the precondition of the second (when the second shouldn't run if the first fails; dantman's example)
    • inside a test, you can use an if-condition to skip the assertions (expect...); the test is then still listed (see the sketch after this list)
    • you can define a test (test(...)) inside an if-condition, but then the condition is evaluated at describe-execution time (and cannot depend on previous test changes or test setups)
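
To illustrate the second precondition bullet, a minimal sketch reusing foo() from the original example:

test('should be a positive number', () => {
	// the precondition is checked without asserting: if it doesn't hold,
	// the test still runs and is listed, but passes vacuously
	if (typeof foo() !== 'number') return;
	expect(foo()).toBeGreaterThan(0);
});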

@apples

apples commented Apr 23, 2020

Has there been any progress on this?

I just ran into this today, here's my testing scenario:

First, I need to test to ensure that some foo is not undefined.
Then, I need to run a bunch of tests that use foo in some way.

Obviously all the tests using foo will necessarily fail if the first test failed, so there's no reason to run them.

test.essential is perfect for this use case.

Expected behavior would be:
If the essential test fails, all subsequent test cases within the same describe will fail or be skipped. The skipped tests should still show up in the output somehow, with a failure description along the lines of "skipped due to essential test failure". Multiple sequential essential tests should be executed sequentially, and if one fails, the rest will be skipped.

@wmertens
Contributor

wmertens commented Apr 29, 2020

I'm also encountering this problem, and I worked around it by creating a gating promise that tests depend on and that wraps the required test. However, this is cumbersome and ugly, plus it lists all tests as failing.
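
Roughly, that workaround looks like this (getExpensiveThing is a placeholder):

let gate
const gatePromise = new Promise((resolve, reject) => {
  gate = { resolve, reject }
})

let thing
test('creation works', async () => {
  try {
    thing = await getExpensiveThing()
    gate.resolve()
  } catch (err) {
    gate.reject(err)
    throw err
  }
})

test('bar is 5', async () => {
  await gatePromise // rejects when creation failed, so this test also fails
  expect(thing.bar).toBe(5)
})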

I would prefer explicitly marking a certain test as a requirement; that way when you run individual tests it can run the required tests too.

E.g.

let thing
test('creation works', async () => {
  thing = await getExpensiveThing()
  expect(thing).toHaveProperty('bar')
})

test.requirePrevious('bar is 5', () => {
  expect(thing.bar).toBe(5)
})

test.require('creation works')('can cleanup', async () => {
  await expect(thing.cleanup()).resolves.toBeTruthy()
})

and if you would run only the 'can cleanup' test, it would first run 'creation works'.

If creation fails, it would just skip the other tests.

Perhaps the requirement methods could instead be named needs and needsPrev to be shorter.

@flaviostutz

flaviostutz commented May 21, 2020

I used this feature a lot in TestNG for large projects, and it was good for avoiding too many false positives and showing the root cause more easily. With "depends", the test framework can more easily auto-organize the order in which tests are performed. Currently there is no "ordering", so tests tend to be large in order to perform various steps in sequence.

See an example at http://websystique.com/java/testing/testng-dependsonmethods-example/

We could have something like:

let thing
let one = it('do one', async () => {
  thing = await doOne()
})

let two = it('do two', async () => {
  await thing.doSomething()
}, one) // here "one" is the "depends on" argument

So when Jest is run, it will organize the execution graph according to the dependencies and if one node fails, all dependants nodes won't run.

@wmertens
Contributor

Actually, that seems very intuitive. How about this refinement:

let thing

let one = it('do one', async () => {
  thing = await doOne()
})

it('do two', async () => {
  await one
  await thing.doSomething()
})

So it and test return a Promise for completion, and you simply await it.

If do one fails, it will throw a subclass of Error so that other tests can detect that they failed due to a failed dependency, and skip instead of erroring.

It means that you can't easily analyze the code to find a dependency graph, but that's not necessary either.


@wmertens
Contributor

Actually, I just realized that this way, you can't force a parent test to run; it needs to be like this:

let thing

let testOne = it('do one', async () => {
  thing = await doOne()
  return thing
})

it('do two', async () => {
  const thing = await testOne()
  await thing.doSomething()
})

When you run do two, you call the test handle testOne(), which will:

  • check for test dependency loops
  • run the test if it wasn't run yet
  • return the return value of the test

@wmertens
Contributor

wmertens commented May 22, 2020

When I import this at the top of a test file, it does exactly what I want:

// https://github.com/facebook/jest/issues/5823
class TestError extends Error {
	constructor(title, error) {
		super(title)
		this.message = `test "${title}" failed: ${error.message}`
		this.stack = error.stack
	}
}
const origTest = global.test
global.test = (title, fn) => {
	let alreadyRan = false
	let result = undefined
	let fnError = undefined
	const handleError = (error, saveError) => {
		let newError
		if (error instanceof TestError) {
			error.message = `Dependency: ${error.message}`
			newError = error
		} else {
			newError = new TestError(title, error)
		}
		if (saveError) fnError = newError
		throw newError
	}

	const runFnOnce = () => {
		if (alreadyRan) {
			if (fnError) throw fnError
			else return result
		}
		alreadyRan = true
		try {
			result = fn()
		} catch (err) {
			// synchronous error
			handleError(err, true)
		}
		// async error
	if (result && result.catch) result = result.catch(err => handleError(err, false))
		return result
	}
	origTest(title, runFnOnce)
	return runFnOnce
}
// add .each etc
Object.assign(test, origTest)

It wraps test so it returns a handle you can call to await it :-D

I tried adding it to the global test setup but it seems that global.test gets injected after the setup runs. In any case, this works, but the real implementation would be in test() and it would properly show that parent tests ran etc.
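
Usage then looks like this (makeThing is a placeholder):

let thing
const createsThing = test('creates thing', async () => {
	thing = await makeThing()
	expect(thing).toBeDefined()
})

test('thing has a bar', async () => {
	// doesn't rerun 'creates thing' if it already ran; if it failed,
	// this rethrows its TestError prefixed with "Dependency:"
	await createsThing()
	expect(thing.bar).toBeDefined()
})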

@flaviostutz

That's interesting. Do you know if Jest always runs the tests in the order they appear (are registered), or do they run in parallel in some cases?
I know it runs multiple test files in parallel, but does it run the tests inside a file in parallel too?


@wmertens
Contributor

wmertens commented Sep 9, 2020

I updated #10077 with a new API which I've been using for a while and works great:

const result = test.result('test name')

It doesn't need any big changes, it enforces dependencies exactly, it is robust when you move tests around, it's easy to read, it can do sync and async results etc. I really like it.
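
A sketch of how it reads in a test file (semantics inferred from the PR, since this isn't a shipped Jest API):

let thing
test('creation works', async () => {
  thing = await getExpensiveThing()
  expect(thing).toHaveProperty('bar')
})

test('bar is 5', async () => {
  // waits for / reuses the outcome of the named test; if it failed,
  // this test fails as a dependency instead of with its own noise
  await test.result('creation works')
  expect(thing.bar).toBe(5)
})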

@apples

apples commented Sep 10, 2020

const result = test.result('test name')

I really like this. It's nice and explicit, which I like.

However, my team has hundreds of tests this would need to be added to, and many developers from different teams and of different skill levels adding new tests every day. An implicit solution like test.essential mentioned above would help a lot in this regard, since we'd only need to add it to one test instead of modifying hundreds.

I am 100% for adding test.result, but I think a way to create implicit dependencies is also needed for this issue to be fully resolved.

@enriqu3l

I also like the idea of adding the test.essential solution. There are many ways to tackle this issue, and thanks @wmertens for your contribution, which also looks very nice, but I do think the best and most flexible one is the essential method.

@wmertens
Contributor

@enriqu3l I don't like the test.essential because it's extra information you need to provide and maintain.

With test.result, you enforce a dependency when you need it and only when you need it, and as you stop needing it the dependency will be removed as well. You can even wait for multiple tests in a single test.

@apples

apples commented Sep 23, 2020

@wmertens I like test.result, but I just think it's solving a slightly different problem.

Conceptually, using test.essential for the first test of a file would be exactly the same as placing a test.result in the first line of all the other tests in the file and discarding the actual result.

In my case, and I believe the case of others in this thread, our "dependent" test cases don't actually depend on the first test case or its result. They would run just fine without the first test case. The problem is simply that they might run unnecessarily, and always fail, when the first test fails. This pollutes the test output and makes it slightly more difficult to spot the actual failure condition, and also makes things a bit slower.

@wmertens
Contributor

@apples would your use-case not be served already with beforeAll? With perhaps a test on a global that beforeAll sets up?

@apples

apples commented Sep 24, 2020

@wmertens No, it's not that there is some kind of setup required.

Consider a scenario in which you have a function foo() that you expect to return an object that looks like { a: 1, b: 2 }.

You could test it like this:

it('is not null', () => {
    expect(foo()).not.toBeNull();
});

it('A', () => {
    expect(foo().a).toBe(1);
});

it('B', () => {
    expect(foo().b).toBe(2);
});

In this case, tests A and B are completely irrelevant if the first test fails, but will still be run anyway.

With your test.result, you could do something like this for each test:

it('A', () => {
    test.result('is not null'); // don't care about actual result, as there is none
    expect(foo().a).toBe(1);
});

But this would need to be carefully maintained for each test (although, admittedly, it's not a big deal if it is forgotten for one or two tests).

Currently, the best solution (afaik) is to simply let these tests fail. But this pollutes the output with failed tests, which makes it difficult to see that the issue is accurately identified by the first test alone. Ideally, tests A and B would be skipped entirely and show up in the output as "skipped".

@wmertens
Contributor

It may read a little bit weird, but that's exactly what beforeAll does - if it fails, none of the tests run.

@apples

apples commented Sep 29, 2020

No, beforeAll behaves a bit differently:

  • If it fails, it won't prevent other beforeAll blocks from running.
  • All beforeAll in a file/describe will be executed before any tests are run, not just the ones below them.
  • It doesn't show up in the output as its own failed test.
  • The dependent tests will still show up with errors in the output, but the error will be identical for all of them.

If you have 2 beforeAll that fail, and 3 actual tests, you will get 6 errors in the output, 2 per test.

@wmertens
Contributor

@apples so beforeAll is almost what you want, but it needs better error behavior?

Except that you would also want test.essential to apply halfway into a file; I wonder if that would not be better expressed by beforeAll in a describe block (not sure if possible, but it sounds nice)
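
For what it's worth, beforeAll scoped inside a describe block does work in Jest; a minimal sketch (checkPrecondition is a placeholder):

describe('detailed tests', () => {
	beforeAll(() => {
		// runs once, only for the tests in this describe block; if it
		// throws, each test in the block is reported failed with this error
		if (!checkPrecondition()) {
			throw new Error('precondition failed')
		}
	})

	test('detail A', () => {
		expect(foo()).toBeGreaterThan(0)
	})
})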

@shirohana

shirohana commented Dec 31, 2020

I'm looking for this feature today (´-ωก`)

In my case, the tests look up JS modules under directories and ensure each index.js exposes specific named exports.

As I expect it, it would look like this:

describe('~store/*/index.js', () => {
  let moduleEntries

  // if it throws, report the title and suspend execution of the other tests
  test.ensure('every store contains index.js', () => {
    moduleEntries = resolveModuleEntries()
    return true
  })

  describe.each(
    moduleEntries /* already ensured */
  )('~store/%s/index.js', (name, module) => {
    it('should expose { reducer }', () => {
      expect(typeof module.reducer).toBe('function')
    })
  })
})

// example that uses `fs` and `require()`
const fs = require('fs')
const path = require('path')

function resolveModuleEntries () {
  const basedir = path.join(__dirname, '../store')

  return fs.readdirSync(basedir, { withFileTypes: true })
    .filter(dirent => dirent.isDirectory())
    .map(dirent => {
      const moduleId = path.join(basedir, dirent.name, 'index.js')

      // require() will throw an exception when moduleId cannot be found
      return [dirent.name, require(moduleId)]
    })
}

beforeAll and beforeEach are not designed for handling user errors; I hope Jest can provide something similar to test for cases like test.ensure(). That would be useful to me (ノω・。)

@wmertens
Contributor

@shirohana I think what you want won't be helped by what is requested in this issue.

In any case, you could save errors during test lookup and then throw them in beforeAll(). That will integrate well with Jest.
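
Something along these lines, reusing resolveModuleEntries() from above:

let lookupError = null
let moduleEntries = []
try {
  // do the lookup eagerly so describe.each still gets its data
  moduleEntries = resolveModuleEntries()
} catch (err) {
  lookupError = err
}

beforeAll(() => {
  // surface the lookup failure as a suite-level error
  if (lookupError) throw lookupError
})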

@shirohana

@wmertens Thank you for your advice,

I've tried initializing variables (the argument for describe.each()) in a beforeAll() hook, but test cases are still executed even when an exception is thrown in beforeAll(). (Maybe related to #2713 and Jasmine.)

Also, beforeAll() has no way to carry a message, so I think it can't solve this problem delightfully (´-ωก`)

@shirohana

These hooks are not designed for assertions and for controlling test flow through exceptions, I guess.

@jackHedaya

Is this a feature still under consideration?

@ottokruse

Another scenario that calls for being able to make tests dependent on one another:


We are (maybe wrongfully so) using Jest for some end-to-end integration tests, where we chain multiple steps.

For example:

  1. create user
  2. login as user
  3. perform some action
  4. store some audit data about the action in database
  5. logout

Currently we model this as one Jest test, "end to end test", to ensure that if e.g. login fails, we don't proceed with the remainder, as that would be useless and wrong (we don't want to store the audit data).
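
Roughly, with placeholder helpers:

test('end to end test', async () => {
  const user = await createUser()
  const session = await login(user) // if this throws, none of the later steps run
  const action = await performAction(session)
  await storeAudit(action) // must not happen when an earlier step failed
  await logout(session)
})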

I think it would be nicer for test failure reporting to model this as 5 tests.

I know you can also create a test sequencer, but that's poorly documented, and I don't like having an out-of-band file with the sequence; I'd like to define it in the same place as my tests, e.g. by being able to specify the dependency between tests (the test.essential() proposal would work for us).

@wmertens
Contributor

wmertens commented Feb 8, 2022

@ottokruse we've been using our test.result() prototype at #10077 for a couple of years now and it works fine.

@perymimon

perymimon commented May 17, 2022

What about making tests inside other tests, or using then to register more tests?

If the upper test fails, the tests inside will not run, or will not be registered as tests.

// Old implementation
function foo() {
	return 1;
}

// New implementation
function foo() {
	return "string";
}

describe('foo()', () => {
	it('should be a number', () => {
		expect(typeof foo()).toBe('number');

		it('should be a positive number', () => {
			expect(foo()).toBeGreaterThan(0);
		});

		it('should be a finite number', () => {
			expect(foo()).toBeLessThan(Infinity);
			expect(foo()).toBeGreaterThan(-Infinity);
		});

		it('should not be a fraction', () => {
			expect(Number.isInteger(foo())).toBe(true);
		});
	});
});

or

describe('foo()', () => {
	it('should be a number', () => {
		expect(typeof foo()).toBe('number');
	}).then(() => {
		it('should be a positive number', () => {
			expect(foo()).toBeGreaterThan(0);
		});

		it('should be a finite number', () => {
			expect(foo()).toBeLessThan(Infinity);
			expect(foo()).toBeGreaterThan(-Infinity);
		});

		it('should not be a fraction', () => {
			expect(Number.isInteger(foo())).toBe(true);
		});
	});
});

Or, in more elegant syntax:

describe('foo()', async () => {
	await it('should be a number', () => {
		expect(typeof foo()).toBe('number');
	});

	it('should be a positive number', () => {
		expect(foo()).toBeGreaterThan(0);
	});

	it('should be a finite number', () => {
		expect(foo()).toBeLessThan(Infinity);
		expect(foo()).toBeGreaterThan(-Infinity);
	});

	it('should not be a fraction', () => {
		expect(Number.isInteger(foo())).toBe(true);
	});
});

@wmertens
Contributor

@perymimon it's nice, but then the inner tests won't be known until the outer test runs, and then running only certain tests won't work.

That could be resolved by looking at the AST, though 🤔 would be cool...

@apples

apples commented May 19, 2022

@wmertens With the .then() design, the block passed to .then() could always be executed itself so the tests are known, but the actual tests within could be suppressed.

@github-actions

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

github-actions bot added the Stale label May 19, 2023
@github-actions

This issue was closed because it has been stalled for 30 days with no activity. Please open a new issue if the issue is still relevant, linking to this one.

github-actions bot closed this as not planned Jun 18, 2023