Update "Using Lighthouse as a trace processor" docs #15790

Open
johnbenz13 opened this issue Jan 30, 2024 · 3 comments

@johnbenz13

Summary

I want to leverage my e2e tests, written with Jest and Puppeteer, to run Lighthouse audits.
When running each test, I save the Trace and DevtoolsLog to defaultPass.trace.json and defaultPass.devtoolslog.json. I then try to run Lighthouse as a trace processor as explained in the docs, but it fails because artifacts.json is missing.

So I understand I need to somehow generate the artifacts.json file:

  • What is the official/proper way to generate it?
  • Can it be done using the Trace and DevToolsLog only? I would like to avoid running Lighthouse during my e2e tests.

Thanks 🙏

@adamraine
Member

What is the official/proper way to generate it?

artifacts.json is generated by Lighthouse itself using gatherMode on the config or -G on the CLI:

lighthouse https://example.com -G path/to/artifacts/folder
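
If you're driving Lighthouse from Node instead of the CLI, I believe the same setting can be passed programmatically; here's a rough sketch (the gatherMode flag mirrors -G, but double-check the exact flag shape against the version you're on):

```js
// Sketch only: gather artifacts without running audits, via the Node API.
// Assumes Lighthouse v10+ (ESM) and Puppeteer; `gatherMode` mirrors the CLI's -G flag.
import puppeteer from 'puppeteer';
import lighthouse from 'lighthouse';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Writes artifacts.json (plus the trace and devtools log) to ./artifacts instead of auditing.
await lighthouse('https://example.com', {gatherMode: './artifacts'}, undefined, page);

await browser.close();
```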

Can it be done using the Trace and DevToolsLog only? I would like to avoid running Lighthouse during my e2e tests.

It can't be recreated from just the Trace and DevtoolsLog, unfortunately. Honestly, I think this doc section is either misleading or outdated, since Lighthouse doesn't really work with just the trace and dtlog as input (at least not anymore).

@adamraine changed the title from "Using Lighthouse as a trace processor" to "Update "Using Lighthouse as a trace processor" docs" on Jan 30, 2024
@connorjclark
Collaborator

connorjclark commented Jan 30, 2024

It sounds like you may want to save all the artifacts and run the LH audits later? Look at the -G and -A options.
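
For example, something along these lines should also work from Node (a sketch; auditMode mirrors -A and reads previously saved artifacts instead of gathering new ones):

```js
// Sketch: re-run audits against artifacts saved earlier with -G / gatherMode,
// without gathering again. Assumes ./artifacts contains a valid artifacts.json.
import lighthouse from 'lighthouse';

const result = await lighthouse('https://example.com', {
  auditMode: './artifacts',        // read artifacts from disk instead of gathering
  onlyCategories: ['performance'], // optional: skip a11y/SEO/best-practices
  output: 'json',
});
console.log(result.lhr.categories.performance.score);
```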

If you really want to run Lighthouse audits directly, there is this (old) documentation: https://github.com/GoogleChrome/lighthouse/blob/main/docs/hacking-tips.md#using-audit-classes-directly-providing-your-own-artifacts

Additionally, you could run Lighthouse with Puppeteer directly. Consult our docs for more: https://github.com/GoogleChrome/lighthouse/blob/main/docs/user-flows.md
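
A rough sketch of what that looks like with the user flow API (assuming Lighthouse v10+ and an existing Puppeteer page; the selector below is made up):

```js
// Sketch: reuse a Puppeteer page from an e2e test to measure a user flow.
import fs from 'fs';
import puppeteer from 'puppeteer';
import {startFlow} from 'lighthouse';

const browser = await puppeteer.launch();
const page = await browser.newPage();

const flow = await startFlow(page, {name: 'Checkout flow'});

// Navigation step: full page load, yields the classic performance metrics (FCP, LCP, ...).
await flow.navigate('https://example.com');

// Timespan step: user interactions, yields responsiveness metrics such as INP.
await flow.startTimespan({name: 'Add to cart'});
await page.click('#add-to-cart'); // hypothetical selector
await flow.endTimespan();

// Write an HTML report covering all steps.
fs.writeFileSync('flow-report.html', await flow.generateReport());

await browser.close();
```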

If you share more about your use case we can provide better advice.

@johnbenz13
Author

Thanks for your input.
Regarding using Lighthouse in gatherMode: this is an option I've considered, but I would like to see if there is a less "invasive" option that does not require running Lighthouse during the test phase.

If you share more about your use case we can provide better advice.

I'll explain my use case so you can better understand why I'm trying to avoid running Lighthouse during the test phase.

We're a large company (hundreds of developers), and one team provides the infrastructure for running e2e tests for all the devs. It is a custom test-running platform that leverages Jest and Puppeteer.
I'm part of another team that builds performance measurement tools (mainly based on the web-vitals and Lighthouse libraries), and we want to give our developers the ability to generate Lighthouse reports for a chosen subset of their tests. This makes sense to us because they have already written hundreds of thousands of tests using custom drivers and boilerplate, and asking them to rewrite everything on a new platform just for performance tests would be a waste.

Given that we are two different teams (my team and the team providing the test infra), I'm trying to be as unobtrusive as possible. Asking them to integrate Lighthouse, even in gatherMode, would require more cooperation and more involvement from my team in their platform's code, and would ultimately affect time to delivery.

Note that my focus is only on the performance metrics used to calculate the performance category score, plus INP, which I want to capture using a "User Flow". I'm not interested in accessibility, SEO, or even recommendations on how to fix performance issues; the latter can be generated on demand, only for specific projects where a regression has been identified.

That's why I'm currently looking at the minimum data I need to gather during the test phase, so I can send it to my service running Lighthouse, where I can "re-calculate offline" only the artifacts required for the audits I'm interested in and then run Lighthouse in auditMode. For example, the artifacts required for FCP are entirely "re-calculable offline" once I have the DevToolsLog and Trace.
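
To make the idea concrete, the offline side would look roughly like this (a hypothetical sketch only; the assembleArtifacts helper does not exist in Lighthouse and stands in for exactly the reconstruction step discussed above, while auditMode and onlyAudits are real Lighthouse flags):

```js
// Hypothetical sketch of the offline service: reconstruct a minimal artifacts folder
// from the Trace/DevtoolsLog captured during the e2e run, then run only the metric
// audits in auditMode. `assembleArtifacts` is NOT a Lighthouse API.
import lighthouse from 'lighthouse';
import {assembleArtifacts} from './my-artifact-assembler.js'; // hypothetical helper

await assembleArtifacts({
  trace: './defaultPass.trace.json',
  devtoolsLog: './defaultPass.devtoolslog.json',
  outDir: './saved-run', // must end up containing an artifacts.json Lighthouse can read
});

const result = await lighthouse('https://example.com', {
  auditMode: './saved-run',
  onlyAudits: ['first-contentful-paint', 'largest-contentful-paint', 'total-blocking-time'],
});
console.log(result.lhr.audits['first-contentful-paint'].numericValue);
```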

Of course, I'm not ruling out the possibility that I will eventually have to run Lighthouse in gatherMode during the test phase, but that has to be my last resort.
