
chore: add test runner for testing pptr in a browser #6200

Merged
merged 1 commit into puppeteer:main from feat/web-test-runner on Jul 22, 2020

Conversation

@LarsDenBakker (Contributor) commented Jul 10, 2020:

@jackfranklin @TimvdLippe
As discussed, this is a basic example setup for web test runner. I added an example test using mocha and expect, similar to the node tests.

Reference: #6125

The config is an ES module (this requires a recent Node 12 or 14), but it can be CommonJS as well.
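For illustration, a minimal config along these lines (the file name and the `files` glob are assumptions, not this PR's exact config):

```js
// web-test-runner.config.mjs: a minimal sketch of an ES-module config.
// The glob below is illustrative; the real config in this PR may differ.
export default {
  // run every browser spec under test-browser/
  files: 'test-browser/**/*.spec.js',
};
```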

Things to be done:

**Make the tests actually work**
Ideally we can reuse the existing unit tests, or at least a subset of them.

**TS or JS tests**
The node tests are written in TS and run in mocha with ts-node. I followed a similar approach, using esbuild to compile TS to JS on the fly (sketched below). We could also run the compiled JS in the browser.
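A sketch of what that on-the-fly compilation looks like, using the `esbuildPlugin` from `@web/dev-server-esbuild` (the glob is illustrative):

```js
// web-test-runner.config.mjs: sketch of compiling TS specs on the fly.
import { esbuildPlugin } from '@web/dev-server-esbuild';

export default {
  files: 'test-browser/**/*.spec.ts',
  plugins: [
    // transpile TypeScript to JavaScript as each file is served to the browser
    esbuildPlugin({ ts: true }),
  ],
};
```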

**Handle expect errors**
I created a small UMD wrapper to make expect work in the browser (the general shape is sketched below). However, the error stack traces aren't handled properly by web test runner yet (I've only been testing with chai until now). Something to look into on the WTR side.
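The wrapper follows the standard UMD pattern; here is a self-contained sketch of that shape (the stand-in matcher only exists to keep the example runnable, it is not the real expect implementation):

```js
// Sketch of the general UMD wrapper shape; the actual wrapper in this PR
// inlines a bundled build of the expect library instead of the stand-in below.
(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    define([], factory); // AMD loaders
  } else if (typeof module === 'object' && module.exports) {
    module.exports = factory(); // CommonJS / Node
  } else {
    root.expect = factory(); // plain browser: attach to the global object
  }
})(typeof self !== 'undefined' ? self : this, function () {
  // stand-in so the sketch runs on its own; the real factory returns expect
  return function expect(actual) {
    return {
      toBe(expected) {
        if (actual !== expected) {
          throw new Error(`expected ${actual} to be ${expected}`);
        }
      },
    };
  };
});
```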

**Remove caniuse dependency**
caniuse has a license which isn't whitelisted, and it's problematic in general. I thought I had gotten rid of it, but it turns out we still need babel for test coverage. I want to drop this dependency and use V8's test coverage information instead. I got that working with playwright, but for puppeteer we need something like #2136 (a sketch of the general direction follows below).
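For illustration, raw V8 coverage can be collected through a CDP session in puppeteer today; this sketches the general direction only (it is not the mechanism #2136 proposes, and the URL is a placeholder):

```js
// coverage.mjs: sketch of collecting raw V8 coverage via the DevTools Protocol.
// Uses top-level await, so this needs Node >= 14.8 as an ES module.
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const session = await page.target().createCDPSession();

await session.send('Profiler.enable');
await session.send('Profiler.startPreciseCoverage', {
  callCount: true, // count invocations, not just reached/not-reached
  detailed: true,  // block-level granularity
});

await page.goto('http://localhost:8000/'); // placeholder: the test-runner URL
// ... the tests would run here ...

const { result } = await session.send('Profiler.takePreciseCoverage');
console.log(result); // per-script coverage ranges in V8's native format
await browser.close();
```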

**puppeteer dependency**
Web test runner uses puppeteer-core to control the browser when running tests, so we have puppeteer-ception. I'm not sure whether that would be a problem here?

The test runner is brand new, so I'm sure we'll find some rough edges.

@TimvdLippe (Contributor) left a comment:

For my own understanding, this PR serves as an example of how your web-test-runner uses Puppeteer to run tests in a browser, right? Or do you want it to be used as part of the test suite for Puppeteer itself?

If the former, it probably makes more sense to put this in a folder like examples/ or something. But I am not sure what the best format is and where best to place it.

@LarsDenBakker (Contributor, Author):

@TimvdLippe the goal here is to use WTR to test the puppeteer browser distribution.

@TimvdLippe (Contributor):

Ah I see. Could you please reference #6125 in that case?

@LarsDenBakker (Contributor, Author):

Ah I didn't know about that. I added it now.

@jackfranklin (Collaborator):

@LarsDenBakker thanks for putting this together! We should see if we can get it running on Travis to see what happens, but I think the ESM build isn't truly functioning yet: we need to add file extensions (#6202) and also figure out what to do with mitt (#6203).

I'd also be quite keen not to compile via the test runner, but instead run the tests directly against the output in lib/esm. That will let us test the same ESM build we're shipping to users. WDYT?

@LarsDenBakker (Contributor, Author):

@jackfranklin yeah that makes sense, we can run the tests on the .js files. We don't do source maps in test reporting yet, but we'll pick that up very soon.

Regarding mitt, the test runner has a --node-resolve flag to resolve imports server-side. But that doesn't help after publishing, of course :)

@jackfranklin (Collaborator):

Yeah, I'd rather not rely on the --node-resolve flag.

I think a good step here would be to get a test written for the changes to our debug module in #6210 - that's a self-contained function that we can run and test in the browser. Probably a good candidate for the very first test, as it's tiny and doesn't rely on any big Puppeteer objects.

@LarsDenBakker (Contributor, Author) commented Jul 14, 2020:

@jackfranklin I removed the node resolve and TS compilation, and added a test for the current debug module.

I kept the UMD -> ESM transform for the expect library since it doesn't ship an ES module. You could also use it from the window, though I'm not sure how well that works for IDE code completion.
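For reference, the shape of such a browser spec; the import paths and the assertion here are assumptions, not the actual spec in this PR:

```js
// test-browser/debug.spec.js: sketch of a browser test for the debug module.
// Both import paths are assumptions about the repo layout; mocha's describe/it
// globals are provided by web test runner.
import expect from './expect.js'; // the UMD-to-ESM shim discussed above
import { debug } from '../lib/esm/puppeteer/common/Debug.js';

describe('debug', () => {
  it('returns a logging function for a given prefix', () => {
    const log = debug('puppeteer:test');
    expect(typeof log).toBe('function');
  });
});
```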

@jackfranklin (Collaborator):

@LarsDenBakker thanks for your efforts here, sorry for my slow reply.

Looks like CI has a problem with the new spec file:

`Parsing error: ImportDeclaration should appear when the mode is ES6 and in the module context` at test-browser/debug.spec.js

But this is looking good to go. Could you also try adding this as a step to .travis.yml so we can check CI is happy and stable?

Once we get this all green I'd be happy to ship this. @mathiasbynens what say you? :)

@LarsDenBakker force-pushed the feat/web-test-runner branch 3 times, most recently from 3928baf to d11baf4 on July 22, 2020 08:08
@LarsDenBakker (Contributor, Author):

No worries. I updated the linting configuration to lint the browser tests in a browser context. I also updated the travis configuration, but it doesn't seem to affect the CI. Does it always pull from master? Or do I need to update something else as well?
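The linting change was presumably along these lines; a sketch of an ESLint override for the browser tests (the file name, glob, and option values are assumptions, not the exact diff):

```js
// .eslintrc.js: sketch of an override that lints the browser tests as
// ES modules in a browser/mocha environment, which resolves parsing errors
// like "ImportDeclaration should appear when the mode is ES6 and in the
// module context".
module.exports = {
  overrides: [
    {
      files: ['test-browser/**/*.js'],
      env: { browser: true, mocha: true },
      parserOptions: { sourceType: 'module', ecmaVersion: 2020 },
    },
  ],
};
```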

@LarsDenBakker (Contributor, Author):

Looks like it's picking up the changes now, not sure why it wasn't before. I updated the tests to match the changes in the debug module on master. All green now 👍

@jackfranklin changed the title from "feat: add web test runner" to "chore: add test runner for testing pptr in a browser" on Jul 22, 2020
@mathiasbynens merged commit 15d1906 into puppeteer:main on Jul 22, 2020
@mathiasbynens (Member):

Thanks, Lars!
