
[RFC] Rewrite all the tests with testing-library #1933

Closed
oliviertassinari opened this issue Jun 24, 2020 · 3 comments
@oliviertassinari
Member

oliviertassinari commented Jun 24, 2020

This is a follow-up on the discussion started in #1730 (comment) between @eps1lon, @dmtrKovalenko, and @oliviertassinari.

The arguments in favor that we have explored so far:

  1. It's fast. For comparison: with Cypress, we need ~26s to run 28 tests; with testing-library, ~1s (60s / 1956 tests * 28 tests).
  2. It's end-to-end and reliable. Running the tests in JSDOM and ChromeHeadless probably yields 80% of the value, but we can also easily run them in BrowserStack.
  3. It works well at scale. The problem is already mostly solved in the main repository; we don't need to complicate the stack with different tools. We can apply what we have spent months refining and move on to solving more user-facing problems. This has a significant compounding implication: 1. Material-UI's engineers can more easily contribute across the different parts of the product, 2. any improvement we make to the stack benefits all the components.

The concerns against it that we have explored so far:

  • What about TDD? TDD can be done by watching the source and scoping the tests that run with either .only or a filter (see the mocha sketch after this list).
  • What about visual regression tests? I believe they work great in the repository's stack. By not taking screenshots of the documentation pages, we don't get false positives when updating the wording. By using Argos-CI, we pay $0/month instead of $800/month to keep the infra running. It's also going to get better after [test] Use playwright instead of vrtest material-ui#21449.
  • Tests are hard to debug. I believe using these three levers makes debugging easy: 1. dead-simple test cases, 2. the debug() utils, 3. the assertion playground http://testing-playground.com/ (see the sketch after this list). In the event that's not enough, I copy and paste the test case and run it in the documentation to see how it behaves. Now, there is definitely a concern around getting used to the tooling; it takes time, which echoes back to the value of point 3 and why it's important to share the stack between all the components.
  • JSDOM isn't reliable. We have faced this problem in a couple of cases in the codebase. Of the 2,000 existing tests, there are some that need to be skipped. For instance: https://github.com/mui-org/material-ui/blob/d0bfcb4f11fd3904b1e235d80fbfe23758de8372/packages/material-ui/src/Select/Select.test.js#L1018. The solution we have found to this problem is to run the same tests against different targets: JSDOM, ChromeHeadless, Firefox, Safari, Edge (a conditional-skip sketch follows the list).
  • Are the tests end-to-end? Yes, as much as they can be. We could even introduce feat: make several API and implementation improvements testing-library/user-event#348 if necessary (a user-event sketch follows the conclusion below).
  • We have too many tests to migrate. It seems we have ~100 tests to migrate; for comparison, that's about the number of tests the Autocomplete has, so it should be manageable.
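
To illustrate the TDD point, a rough sketch of how a watch-mode run can be scoped, assuming a mocha setup like the main repository's (`.only` and `--grep` are standard mocha features):

```js
// Scope the run to the block under active development with `.only`,
// or filter by title from the CLI, e.g. `mocha --watch --grep "DatePicker"`.
describe('<DatePicker />', () => {
  it.only('opens the calendar view when the text field is clicked', () => {
    // the one test being iterated on
  });

  it('renders the selected date', () => {
    // not run while an `.only` is present
  });
});
```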
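
On the debugging aids: `screen.debug()` from @testing-library/react prints the rendered DOM, which can be pasted into the testing-playground sandbox to experiment with queries. A minimal sketch, assuming chai assertions; the plain input stands in for a picker's text field:

```js
import * as React from 'react';
import { expect } from 'chai';
import { render, screen } from '@testing-library/react';

it('renders an accessible text field', () => {
  render(<input type="text" aria-label="Pick a date" />);

  // Print the current DOM to the console; the output can be pasted into
  // https://testing-playground.com/ to try out queries and assertions.
  screen.debug();

  expect(screen.getByLabelText('Pick a date')).not.to.equal(null);
});
```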
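
On the JSDOM point: one way to handle this is to detect the environment at runtime and skip the cases JSDOM cannot express, while the same file still runs unchanged in real browsers. A sketch, assuming mocha hooks and relying on JSDOM reporting "jsdom" in its default user agent:

```js
describe('<DatePicker /> layout', () => {
  before(function beforeHook() {
    if (/jsdom/.test(window.navigator.userAgent)) {
      // JSDOM has no layout engine, so skip anything that measures geometry.
      this.skip();
    }
  });

  it('positions the popover below the text field', () => {
    // runs only in the ChromeHeadless / BrowserStack targets
  });
});
```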

I believe the conclusion is that we should migrate all the tests to testing-library until we hit a roadblock.
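
For illustration only, a rough sketch of the style of test this migration targets, assuming chai assertions and user-event; the labelled input stands in for a picker's text field, and the real component import is omitted:

```js
import * as React from 'react';
import { expect } from 'chai';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';

it('lets the user type a date into the text field', () => {
  render(<input type="text" aria-label="Pick a date" />);
  const input = screen.getByLabelText('Pick a date');

  // user-event replays a realistic pointer + keyboard sequence instead of
  // dispatching a single synthetic change event.
  userEvent.click(input);
  userEvent.type(input, '06/24/2020');

  expect(input.value).to.equal('06/24/2020');
});
```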

@eps1lon
Member

eps1lon commented Jun 24, 2020

For tests that don't replay any user interaction, I'd always suggest testing-library in a browser. It's reliable, and faster if you have many tests (a browser requires bundling, which takes time, but it runs each test faster; there's a break-even point where the faster tests outweigh the upfront cost of bundling). If you don't have many tests and don't mind diving deep into DOM specs, browser quirks, and JSDOM, then use JSDOM.

To test user interactions, there's no silver bullet that I know of for deciding when to run automated e2e tests versus sticking with testing-library and dispatching events manually. To me it comes down to past experience: does the test break on unrelated changes? Do you still encounter bugs in that scenario even though you have written tests? Which gives the better debugging experience? And so on. So I would trust the pickers maintainers to make that judgement.
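
To make the "dispatching events manually" distinction concrete, a small illustrative sketch (chai assertions assumed): fireEvent fires a single synthetic event, user-event approximates the sequence a real user produces, and a browser-based e2e runner goes further still with real, trusted events.

```js
import * as React from 'react';
import { expect } from 'chai';
import { fireEvent, render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';

it('contrasts manual event dispatch with simulated user input', () => {
  render(<input type="text" aria-label="Pick a date" />);
  const input = screen.getByLabelText('Pick a date');

  // Manual dispatch: one synthetic `change` event, no keyboard events, no
  // focus management. Deterministic and fast, but further from a real user.
  fireEvent.change(input, { target: { value: '06/24/2020' } });
  expect(input.value).to.equal('06/24/2020');

  // Simulated user: click to focus, then type key by key.
  userEvent.clear(input);
  userEvent.type(input, '07/01/2020');
  expect(input.value).to.equal('07/01/2020');
});
```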

@oliviertassinari
Member Author

After internal discussion, we will apply this strategy:

We will use testing-library exclusively for all future pull requests. This will allow us to better assess the quality of this solution. After a couple of weeks, if we don't find any blockers, we will migrate all the tests. Hopefully, this will allow Material-UI to simplify its stack (fewer tools).

@oliviertassinari
Member Author

oliviertassinari commented Nov 15, 2020

Done in https://next.material-ui.com/components/pickers. However, we had to skip ~50% of the tests because they are unreliable: they fail in Safari, fail depending on the time of day, etc. We will keep polishing them. The failures could come from flaws in the logic of the components or from how the tests are written; we have seen both so far.
