tests(smoke): use DevTools runner through node directly #14205
Conversation
@@ -77,37 +61,19 @@ async function buildDevtools(logs) {
 * @return {Promise<{lhr: LH.Result, artifacts: LH.Artifacts, log: string}>}
 */
async function runLighthouse(url, configJson, testRunnerOptions = {}) {
  /** @type {string[]} */
  const logs = [];
  if (!buildDevtoolsPromise) buildDevtoolsPromise = buildDevtools();
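The cached-promise pattern being reviewed here can be sketched in isolation. This is an illustrative stand-in, not smokehouse code; `buildCount` and the string result are invented to make the once-only behavior observable:

```javascript
// Sketch of the cached-promise pattern under discussion: however many smoke
// tests await setup(), the expensive build step runs exactly once per process.
let buildCount = 0;
let buildPromise = null;

async function build() {
  buildCount++; // side effect lets us observe how many times the build ran
  return 'devtools-built';
}

function setup() {
  if (!buildPromise) buildPromise = build();
  return buildPromise;
}

// Simulate several smoke tests awaiting the same setup concurrently.
const ready = Promise.all([setup(), setup(), setup()]).then(results => {
  console.log(results[0], '/ buildCount =', buildCount); // → devtools-built / buildCount = 1
  return buildCount;
});
```

Every caller gets the same promise, so a build failure also rejects for every caller, which is what the rest of this thread is about.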
Since you've removed the logs here, can you confirm that issues with the devtools build are still visible in some way in the output? Like, is it now hidden before tons of other output whereas before it was more prominent?
Example: make download-devtools.sh fail with exit code 1. Would running yarn smoke --runner=devtools alert us to the issue?
Yes, for reference this is the output when I did your suggestion (after switching to execFileSync):
❯❯❯ yarn smoke --runner devtools --debug oopif-scripts
yarn run v1.22.18
$ node lighthouse-cli/test/smokehouse/frontends/smokehouse-bin.js --runner devtools --debug oopif-scripts
Running ONLY smoketests for: oopif-scripts
oopif-scripts smoketest starting…
+ exit 1
oopif-scripts: testing 'http://localhost:10200/oopif-scripts.html'…
nullnullError: Command failed: bash lighthouse-core/test/devtools-tests/download-devtools.sh
at checkExecSyncError (child_process.js:790:11)
at execFileSync (child_process.js:827:15)
at buildDevtools (file:///Users/asraine/lighthouse/lighthouse-cli/test/smokehouse/lighthouse-runners/devtools.js:30:3)
at runLighthouse (file:///Users/asraine/lighthouse/lighthouse-cli/test/smokehouse/lighthouse-runners/devtools.js:51:53)
at runSmokeTest (file:///Users/asraine/lighthouse/lighthouse-cli/test/smokehouse/smokehouse.js:180:18)
at concurrentMapper.runInPool.concurrency.concurrency (file:///Users/asraine/lighthouse/lighthouse-cli/test/smokehouse/smokehouse.js:78:45)
at ConcurrentMapper.pooledMap (file:///Users/asraine/lighthouse/lighthouse-cli/test/smokehouse/lib/concurrent-mapper.js:99:28)
at ConcurrentMapper.runInPool (file:///Users/asraine/lighthouse/lighthouse-cli/test/smokehouse/lib/concurrent-mapper.js:120:31)
at file:///Users/asraine/lighthouse/lighthouse-cli/test/smokehouse/smokehouse.js:78:29
at Array.map (<anonymous>)
smoketest results:
Correctly passed 0 assertions
Failed 1 assertion
oopif-scripts smoketest complete.
1 assertion failing in total
We have 1 failing smoketest(s): oopif-scripts
internal/process/esm_loader.js:74
internalBinding('errors').triggerUncaughtException(
^
Error [ERR_SERVER_NOT_RUNNING]: Server is not running.
at new NodeError (internal/errors.js:322:7)
at Server.close (net.js:1624:12)
at Object.onceWrapper (events.js:519:28)
at Server.emit (events.js:400:28)
at emitCloseNT (net.js:1677:8)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
code: 'ERR_SERVER_NOT_RUNNING'
}
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
And if you run dozens of smoke tests, not just one?
See #14205 (comment)
I'm gonna go with your setup idea
if (!buildDevtoolsPromise) buildDevtoolsPromise = buildDevtools(logs);
if (!await buildDevtoolsPromise) {
This bit was important before to prevent continuing with the smokes if previous smoke test definitions failed to build devtools... is that not the case anymore?
It may be a good reason to have an actual "setup" callback for a runner now. If you check the PR history you'll see it was removed because it seemed like we could get equivalent behavior using a cached promise, but it seems to make error reporting (of the setup code) harder to reason about / log.
If the build fails, an error is thrown and the entire script quits.
btw am fine with doing a "runner setup" fn in a new PR, I'll take that on
Clarification: buildDevtools is only run once, but the output promise is awaited for every test × # of retries, so the output error is pretty noisy.
Setup callback sounds good to me.
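A hypothetical shape for the "setup callback" idea (names here are illustrative, not the actual smokehouse API): the harness awaits setup once, reports its failure a single time, and only then fans the tests out.

```javascript
// Hypothetical runner shape: `setup` runs once up front, so a build failure
// is reported once instead of once per test * retries.
async function runAll(runner, tests) {
  if (runner.setup) {
    try {
      await runner.setup();
    } catch (err) {
      console.error('runner setup failed:', err.message);
      return []; // skip all tests rather than failing each one noisily
    }
  }
  return Promise.all(tests.map(t => runner.run(t)));
}

// Demo runner whose one-time setup fails.
const failingRunner = {
  setup: async () => { throw new Error('download-devtools.sh exited with 1'); },
  run: async t => `${t}: pass`,
};

runAll(failingRunner, ['a', 'b']).then(r => console.log('ran', r.length, 'tests')); // → ran 0 tests
```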
throw new Error(`Lighthouse did not start correctly. Got unexpected value for url: ${
  JSON.stringify(lhStartedResponse.result.value)}`);
}
const inspectorTarget = await browser.waitForTarget(t => t.url().includes('devtools'));
inspectorTarget._targetInfo.type = 'page';
Add that and then inspectorTarget.page() should work fine. I didn't test any APIs, but... the devtools UI is a page as far as the browser is concerned, so I think this'll be fine.
After figuring this out I saw Tony from the Edge devtools team figured out the same hack: puppeteer/puppeteer#4247
Also in the current cdt e2e tests they create a new tab that inspects another tab: https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/test/conductor/frontend_tab.ts;l=50-73;drc=db0dd9428654edf2c71c2979ce9b4ee8d0191eff
That approach would also work, I guess. But it relies on the weird hosted mode approach too much for my tastes.
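To make the hack concrete without a browser, here is a simplified stand-in for the puppeteer internals in question (FakeTarget is not the real class; in real puppeteer, Target.page() only attaches to targets whose type is 'page', which is what rewriting the private _targetInfo exploits):

```javascript
// Model of the behavior being discussed: page() refuses non-'page' targets,
// so mutating _targetInfo.type tricks it into attaching to the DevTools UI.
class FakeTarget {
  constructor(type, url) {
    this._targetInfo = {type, url};
  }
  async page() {
    if (this._targetInfo.type !== 'page') return null; // won't attach
    return {url: () => this._targetInfo.url};
  }
}

async function demo() {
  const inspectorTarget = new FakeTarget('other', 'devtools://devtools/bundled/devtools_app.html');
  console.log(await inspectorTarget.page()); // → null (before the hack)

  inspectorTarget._targetInfo.type = 'page'; // the hack from the comment
  const page = await inspectorTarget.page();
  console.log(page.url());
}
demo();
```

Since _targetInfo is a private field, this is exactly the kind of fragile coupling the "circle back in a follow-up" reply is worried about.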
So this does seem to work, but I would rather circle back to this idea in a follow-up for a few reasons:
- We don't really know what puppeteer expectations we are breaking.
- Creating a puppeteer page for the inspector messed with the emulated size of the inspector. The code still worked fine, but I thought I would mention it.
- I actually like our waitForFunction better. With page.waitForFunction we would need to wrap everything in a try-catch block and return false on an error.
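The try-catch wrapping being referred to can be sketched as a polling helper (a simplified stand-in for the smokehouse waitForFunction, not puppeteer's page.waitForFunction; timings and messages are invented):

```javascript
// Simplified polling waitForFunction: the predicate may throw while the
// inspector is mid-load, so errors are swallowed and treated as "not yet".
async function waitForFunction(fn, {timeout = 1000, interval = 10} = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    try {
      const result = await fn();
      if (result) return result;
    } catch {
      // Condition not met yet; poll again.
    }
    await new Promise(r => setTimeout(r, interval));
  }
  throw new Error('waitForFunction timed out');
}

// Demo: the predicate throws twice before the thing it inspects exists.
let attempts = 0;
waitForFunction(() => {
  attempts++;
  if (attempts < 3) throw new Error('panel not loaded yet');
  return 'ready';
}).then(value => console.log(value, 'after', attempts, 'attempts')); // → ready after 3 attempts
```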
return true;
} catch {
return false;
async function installCustomLighthouseConfig(browser, inspectorSession, config) {
We own the code on the other side. IMO we should add a registerCustomConfigForTesting method in the frontend so we don't need to do any of this.
Doing this as part of this PR isn't hugely important to me, but... in general, let's not monkeypatch around our own code. :p
 * @param {puppeteer.CDPSession} inspectorSession
 * @param {string[]} logs
 */
async function installConsoleListener(inspectorSession, logs) {
With a page obj we have proper events for these, right?
Yeah but see #14205 (comment)
/** @type {Promise<{lhr: LH.Result, artifacts: LH.Artifacts}>} */
const resultPromise = new Promise((resolve, reject) => {
const methodName = panel.__proto__.buildReportUI ? 'buildReportUI' : '_buildReportUI';
Nowadays we can follow the same pattern as we use for the e2e tests: https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/test/e2e/helpers/lighthouse-helpers.ts;l=28-36;drc=e9062d8c32a4147c2e212041f79269a2e04ecc80
There is no rejection setup though, so I guess we need the renderBugReport sniffer until we fix that.
We would need to add one for artifacts too. I think the sniffer is fine for now.
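The "sniffer" pattern in isolation looks like this. The panel object and buildReportUI body here are illustrative stand-ins for the real DevTools panel; the pattern is to monkeypatch the method so a promise resolves with its arguments the first time DevTools calls it:

```javascript
// Wrap an existing method so callers can await its first invocation's
// arguments while the original behavior is preserved.
function sniffCall(obj, methodName) {
  const original = obj[methodName];
  return new Promise(resolve => {
    obj[methodName] = function(...args) {
      resolve(args); // a promise resolves only once; later calls just pass through
      return original.apply(this, args);
    };
  });
}

// Illustrative stand-in for the DevTools panel.
const panel = {
  buildReportUI(lhr, artifacts) { return 'rendered'; },
};

const sniffed = sniffCall(panel, 'buildReportUI');
// Somewhere inside DevTools the panel method eventually fires:
const returned = panel.buildReportUI({finalUrl: 'http://example.com'}, {});
sniffed.then(([lhr]) => console.log(returned, '/ sniffed lhr for', lhr.finalUrl));
```

As the thread notes, rejection (the error path) needs its own sniffer on something like the bug-report renderer, since a single wrapped method only observes the success call.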
Split out from #14199 since this should benefit our smokes on macOS as well.
See my comments in #14199 (review) for the highlights.