
chore(flushMicroTasks): Use the mocked scheduler when available #743

Conversation


@MatanBobi commented Jul 12, 2020

This PR follows a conversation with @eps1lon, who pointed out that it might be better to mock the Scheduler (based on this PR by @acdlite).

This solution handles both the mocked and the original versions of the scheduler.

What: I've mocked the scheduler package in our setup-env and used the flushAll function that the mocked scheduler package exposes.

Why: The React core team recommends using a mocked version of the scheduler in test environments. We need to decide whether we also want to recommend that our users mock the scheduler package in their test environments. I think that if our users mock the scheduler package, we won't need to know whether someone is using fake timers or not and can just call flushAll.

How: I checked for the presence of unstable_flushAll and used it if it exists.
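
For reference, mocking the scheduler in a Jest setup file looks roughly like this (the file name here is illustrative, wired up via Jest's setupFilesAfterEnv):

// setup-env.js
// Swap the real scheduler for React's mock build so tests can flush
// scheduled work synchronously through Scheduler.unstable_flushAll().
jest.mock('scheduler', () => require('scheduler/unstable_mock'))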

Checklist:

  • Documentation added to the
    docs site - N/A
  • Tests - N/A
  • Typescript definitions updated - N/A
  • Ready to be merged

I'm opening this one to create the conversation here and see what you think about it.


@kentcdodds left a comment


I think we could simplify this a bit :)

Comment on lines +90 to 95
} else if (Scheduler && Scheduler.unstable_flushAll) {
  Scheduler.unstable_flushAll()
  enqueueTask(resolve)
} else {
  scheduleCallback(() => enqueueTask(resolve))
}
Member


Suggested change
-} else if (Scheduler && Scheduler.unstable_flushAll) {
-  Scheduler.unstable_flushAll()
-  enqueueTask(resolve)
-} else {
-  scheduleCallback(() => enqueueTask(resolve))
-}
+} else {
+  Scheduler?.unstable_flushAll?.()
+  enqueueTask(resolve)
+}

Member Author


I think that in that case we won't call scheduleCallback when we're not using the mocked scheduler, right?

@MatanBobi
Member Author

Thanks @kentcdodds!
Do you have any thoughts on whether we should recommend that our users mock the scheduler package or not?

@@ -86,6 +87,9 @@ export default function flushMicroTasks() {
       // reproduce the problem, so there's no test for it. But it works!
       jest.advanceTimersByTime(0)
       resolve()
+    } else if (Scheduler && Scheduler.unstable_flushAll) {
+      Scheduler.unstable_flushAll()
+      enqueueTask(resolve)
Member


We don't have to queue the task. We should be able to resolve immediately.
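
In code, that would mean something like this (a sketch of the suggestion, not a tested change):

} else if (Scheduler && Scheduler.unstable_flushAll) {
  Scheduler.unstable_flushAll()
  resolve() // no enqueueTask(resolve) once all scheduled work is flushed
}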

Member Author

@MatanBobi commented Jul 13, 2020


I tried that, but I wasn't able to understand why it didn't work, and I didn't get time to look into it yesterday.
I thought it might be related to the fact that we're enqueuing a microtask in the destroy callback of the useEffect, and flushAll doesn't wait for that.
I'll look into it today.
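
To illustrate what I mean, a contrived sketch (the hook and its names are hypothetical, not code from this repo):

import {useEffect} from 'react'

// The effect's destroy callback enqueues a microtask. The mocked
// scheduler's unstable_flushAll() flushes scheduled callbacks, but it
// doesn't await a microtask queued like this one.
function useSubscription(source) {
  useEffect(() => {
    const subscription = source.subscribe()
    return () => {
      Promise.resolve().then(() => subscription.unsubscribe())
    }
  }, [source])
}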

Member


Are we supposed to wait for microtasks in cleanup functions? Does it even make sense to run async code in the cleanup function?

Member


If we can avoid this being async, that would be awesome, but I don't think we can, because we can't assume people will be mocking the scheduler (especially when they're testing in environments with poor support for mocking).

I assumed that calling flushAll would mean we wouldn't need to worry about scheduling a callback. I sure wish we had a test that reproduces these timing issues. I have one in my bookshelf app that gave me trouble before I made all these changes. I can try these changes out there. I just couldn't simplify it to a reproducible example for inclusion in this project 😬

Member


One suspicion I have is that you clean up your timers after the cleanup from testing-library runs. At least, this is one issue I encountered in my mocha tests. Basically, if you import from testing-library, we trigger an afterEach hook with cleanup, and then in your test you call afterEach to clean up your timers. But this results in testing-library's cleanup being called with fake timers enabled, which is fundamentally flawed (we can't be expected to check for fake timers, since they're not standardized).

If you can point me to the test in question I could check out if that is indeed the case.
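
The problematic ordering looks roughly like this (a sketch; it assumes hooks at the same level run in registration order):

// Importing from testing-library auto-registers afterEach(cleanup) first.
import {render} from '@testing-library/react'

beforeEach(() => {
  jest.useFakeTimers()
})

// This hook is registered second, so testing-library's cleanup has
// already run by now, while fake timers were still installed.
afterEach(() => {
  jest.useRealTimers()
})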

Member


😅 yeah, that's what I meant 👍👍

Member Author


Thanks @eps1lon, I wasn't aware of the infinite loop that runAllTimers can cause. Using runOnlyPendingTimers sounds like a good approach, but I'm not sure all testing frameworks have that ability (I think jasmine only has a tick function, but I've never really used it). Anyway, my examples will use jest.
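
For anyone following along, the infinite-loop case looks roughly like this (a contrived sketch):

// A timer that reschedules itself never leaves the fake-timer queue empty,
// so jest.runAllTimers() keeps running until Jest aborts with an error.
function poll() {
  setTimeout(poll, 1000)
}
poll()

jest.runOnlyPendingTimers() // runs only the timers that are pending right now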

Member


Using runOnlyPendingTimers sounds like a good approach, but I'm not sure all testing frameworks have that ability (I think jasmine only has a tick function, but I've never really used it). Anyway, my examples will use jest.

Which is why I don't think we should handle this in testing-library, or at least we should provide an entry point that makes no assumptions about the testing framework.

Member Author

@MatanBobi commented Jul 20, 2020


Which is why I don't think we should handle this in testing-library, or at least we should provide an entry point that makes no assumptions about the testing framework.

You're right. Our plan wasn't to handle this within testing-library, just to explain what our users should do in a new documentation page. Do you think we shouldn't do that either?

Member


Let's definitely do that 👍

@kentcdodds
Member

I think this is the right approach: #744

@MatanBobi
Member Author

I think this is the right approach: #744

Yes, definitely. I'm closing this one. Thanks!

@MatanBobi closed this Jul 13, 2020
@kentcdodds
Member

🎉 This issue has been resolved in version 10.4.6 🎉

The release is available on:

Your semantic-release bot 📦🚀

@codesandbox-ci

This pull request is automatically built and testable in CodeSandbox.

Build info for the built libraries is available via the icon next to each commit SHA.

Latest deployment of this branch, based on commit 6d56815:

Sandbox | Source
React | Configuration
kentcdodds/react-testing-library-examples | Configuration
