@foo-software/lighthouse-check
An NPM module and CLI to run Lighthouse audits programmatically. This project aims to add bells and whistles to automated Lighthouse testing for DevOps workflows. Easily implement in your Continuous Integration or Continuous Delivery pipeline.
This project provides two ways of running audits - locally in your own environment or remotely via Foo's Automated Lighthouse Check API. For basic usage, running locally will suffice, but if you'd like to maintain a historical record of Lighthouse audits and utilize other features, you can run audits remotely by following the steps and examples.
`lighthouse-check` will send results and can optionally include versioning data like branch, author, PR, etc (typically from GitHub).

Install via NPM:

```bash
npm install @foo-software/lighthouse-check
```
`@foo-software/lighthouse-check` provides several functionalities beyond standard Lighthouse audits. It's recommended to start with a basic implementation and expand on it as needed.
Calling `lighthouseCheck` will run Lighthouse audits against https://www.foo.software/lighthouse and https://www.foo.software/contact.
```javascript
import { lighthouseCheck } from '@foo-software/lighthouse-check';

(async () => {
  const response = await lighthouseCheck({
    urls: [
      'https://www.foo.software/lighthouse',
      'https://www.foo.software/contact',
    ],
  });

  console.log('response', response);
})();
```
Or via CLI:

```bash
$ lighthouse-check --urls "https://www.foo.software/lighthouse,https://www.foo.software/contact"
```

The CLI will log the results.
Foo's Automated Lighthouse Check can monitor your website's quality by running audits automatically! It provides a historical record of audits over time to track progression and degradation of website quality. Create a free account to get started. With this, you'll have automatic audits as well as any that you trigger manually. Below are steps to trigger audits on URLs that you've created in your account.
To trigger audits on the URLs of your account, provide the `apiToken` option. Basic example with the CLI:

```bash
$ lighthouse-check --apiToken "abcdefg"
```
To run audits on specific pages, provide the `apiToken` option and a page token (or group of page tokens) as the `urls` option. Basic example with the CLI:

```bash
$ lighthouse-check --apiToken "abcdefg" \
  --urls "hijklmnop,qrstuv"
```
You can combine usage with other options for a more advanced setup. The example below runs audits remotely and posts results as comments in a PR.

```bash
$ lighthouse-check --apiToken "abcdefg" \
  --urls "hijklmnop,qrstuv" \
  --prCommentAccessToken "abcpersonaltoken" \
  --prCommentUrl "https://api.github.com/repos/foo-software/lighthouse-check/pulls/3/reviews"
```
You may notice above we had two lines of output: `Report` and `Local Report`. These values are populated when options are provided to save the report locally and to S3. These options are not required and can be used together or alone. Below is an example of saving a report locally.
```javascript
import { lighthouseCheck } from '@foo-software/lighthouse-check';

(async () => {
  const response = await lighthouseCheck({
    // relative to the file. NOTE: when using the CLI `--outputDirectory` is relative
    // to where the command is being run from.
    outputDirectory: '../artifacts',
    urls: [
      'https://www.foo.software/lighthouse',
      'https://www.foo.software/contact',
    ],
  });

  console.log('response', response);
})();
```
Or via CLI:

```bash
$ lighthouse-check --urls "https://www.foo.software/lighthouse,https://www.foo.software/contact" \
  --outputDirectory "./artifacts"
```
Below is an example of saving a report to S3.

```javascript
import { lighthouseCheck } from '@foo-software/lighthouse-check';

(async () => {
  const response = await lighthouseCheck({
    awsAccessKeyId: 'abc123',
    awsBucket: 'my-bucket',
    awsRegion: 'us-east-1',
    awsSecretAccessKey: 'def456',
    urls: [
      'https://www.foo.software/lighthouse',
      'https://www.foo.software/contact',
    ],
  });

  console.log('response', response);
})();
```
Or via CLI:

```bash
$ lighthouse-check --urls "https://www.foo.software/lighthouse,https://www.foo.software/contact" \
  --awsAccessKeyId abc123 \
  --awsBucket my-bucket \
  --awsRegion us-east-1 \
  --awsSecretAccessKey def456
```
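The local-save and S3 options are independent and can also be combined in a single call. A sketch of such a combined options object, reusing the placeholder credential values from the examples above:

```javascript
// Sketch only: combines local report saving with S3 upload in one run.
// All credential values are placeholders from the examples above.
const options = {
  outputDirectory: '../artifacts', // save reports locally...
  awsAccessKeyId: 'abc123', // ...and also upload them to S3
  awsBucket: 'my-bucket',
  awsRegion: 'us-east-1',
  awsSecretAccessKey: 'def456',
  urls: [
    'https://www.foo.software/lighthouse',
    'https://www.foo.software/contact',
  ],
};
```

Pass this object to `lighthouseCheck` exactly as in the examples above.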
Below is a basic Slack implementation. To see how you can accomplish notifications with code versioning data (i.e. GitHub authors, PRs, branches, etc.), see the CircleCI example.
```javascript
import { lighthouseCheck } from '@foo-software/lighthouse-check';

(async () => {
  const response = await lighthouseCheck({
    slackWebhookUrl: 'https://www.my-slack-webhook-url.com',
    urls: [
      'https://www.foo.software/lighthouse',
      'https://www.foo.software/contact',
    ],
  });

  console.log('response', response);
})();
```
Or via CLI:

```bash
$ lighthouse-check --urls "https://www.foo.software/lighthouse,https://www.foo.software/contact" \
  --slackWebhookUrl "https://www.my-slack-webhook-url.com"
```
The below screenshot shows an advanced implementation as detailed in the CircleCI example.

Populate the `prCommentAccessToken` and `prCommentUrl` options to enable comments on pull requests.
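With the NPM module, the same values used in the earlier CLI example could be supplied as an options object. A sketch (the token and URL below are the placeholders from that CLI example):

```javascript
// Sketch only: options enabling PR comments, using the placeholder
// values from the CLI example earlier in this document.
const options = {
  prCommentAccessToken: 'abcpersonaltoken', // a GitHub personal access token
  prCommentUrl:
    'https://api.github.com/repos/foo-software/lighthouse-check/pulls/3/reviews',
  urls: ['https://www.foo.software/lighthouse'],
};
```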
You can use `validateStatus` to enforce minimum scores. This could be handy in a DevOps workflow, for example.
```javascript
import {
  lighthouseCheck,
  validateStatus,
} from '@foo-software/lighthouse-check';

(async () => {
  try {
    const response = await lighthouseCheck({
      awsAccessKeyId: 'abc123',
      awsBucket: 'my-bucket',
      awsRegion: 'us-east-1',
      awsSecretAccessKey: 'def456',
      urls: [
        'https://www.foo.software/lighthouse',
        'https://www.foo.software/contact',
      ],
    });

    const status = await validateStatus({
      minAccessibilityScore: 90,
      minBestPracticesScore: 90,
      minPerformanceScore: 70,
      minProgressiveWebAppScore: 70,
      minSeoScore: 80,
      results: response,
    });

    console.log('all good?', status); // 'all good? true'
  } catch (error) {
    console.log('error', error.message);

    // log would look like:
    // Minimum score requirements failed:
    // https://www.foo.software/lighthouse: Performance: minimum score: 70, actual score: 64
    // https://www.foo.software/contact: Performance: minimum score: 70, actual score: 44
  }
})();
```
Or via CLI. Important: the `outputDirectory` value must be defined and identical in both commands.

```bash
$ lighthouse-check --urls "https://www.foo.software/lighthouse,https://www.foo.software/contact" \
  --outputDirectory /tmp/artifacts
$ lighthouse-check-status --outputDirectory /tmp/artifacts \
  --minAccessibilityScore 90 \
  --minBestPracticesScore 90 \
  --minPerformanceScore 70 \
  --minProgressiveWebAppScore 70 \
  --minSeoScore 80
```
In the below example we run Lighthouse audits on two URLs, save reports as artifacts, deploy reports to S3, and send a Slack notification with GitHub info. We defined environment variables like `LIGHTHOUSE_CHECK_AWS_BUCKET` in the CircleCI project settings.

This implementation utilizes a CircleCI Orb - lighthouse-check-orb.
```yaml
version: 2.1
orbs:
  # replace `x.y.z` with the latest published version of the orb
  lighthouse-check: foo-software/lighthouse-check@x.y.z
jobs:
  test:
    executor: lighthouse-check/default
    steps:
      - lighthouse-check/audit:
          urls: https://www.foo.software/lighthouse,https://www.foo.software/contact
          # this serves as an example, however if the below environment variables
          # are set - the below params aren't even necessary. for example - if
          # LIGHTHOUSE_CHECK_AWS_ACCESS_KEY_ID is already set - you don't need
          # the line below.
          awsAccessKeyId: $LIGHTHOUSE_CHECK_AWS_ACCESS_KEY_ID
          awsBucket: $LIGHTHOUSE_CHECK_AWS_BUCKET
          awsRegion: $LIGHTHOUSE_CHECK_AWS_REGION
          awsSecretAccessKey: $LIGHTHOUSE_CHECK_AWS_SECRET_ACCESS_KEY
          slackWebhookUrl: $LIGHTHOUSE_CHECK_SLACK_WEBHOOK_URL
workflows:
  test:
    jobs:
      - test
```
Reports are saved as "artifacts". Upon clicking the HTML file artifacts, we can see the full report!

In the example above we also uploaded reports to S3. Why would we do this? CI artifact storage is temporary, so uploading to S3 lets us persist a historical record.
Similar to the CircleCI implementation, we can also create a workflow implementation with GitHub Actions using lighthouse-check-action. Example below.
`.github/workflows/test.yml`

```yaml
name: Test Lighthouse Check
on: [push]
jobs:
  lighthouse-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - run: mkdir /tmp/artifacts
      - name: Run Lighthouse
        uses: foo-software/lighthouse-check-action@master
        with:
          accessToken: ${{ secrets.LIGHTHOUSE_CHECK_GITHUB_ACCESS_TOKEN }}
          author: ${{ github.actor }}
          awsAccessKeyId: ${{ secrets.LIGHTHOUSE_CHECK_AWS_ACCESS_KEY_ID }}
          awsBucket: ${{ secrets.LIGHTHOUSE_CHECK_AWS_BUCKET }}
          awsRegion: ${{ secrets.LIGHTHOUSE_CHECK_AWS_REGION }}
          awsSecretAccessKey: ${{ secrets.LIGHTHOUSE_CHECK_AWS_SECRET_ACCESS_KEY }}
          branch: ${{ github.ref }}
          outputDirectory: /tmp/artifacts
          urls: 'https://www.foo.software/lighthouse,https://www.foo.software/contact'
          sha: ${{ github.sha }}
          slackWebhookUrl: ${{ secrets.LIGHTHOUSE_CHECK_WEBHOOK_URL }}
      - name: Upload artifacts
        uses: actions/upload-artifact@master
        with:
          name: Lighthouse reports
          path: /tmp/artifacts
```
You can override default config and options by specifying the `overridesJsonFile` option. The contents of this overrides JSON file can have two possible fields: `options` and `config`. These two fields are eventually used by Lighthouse to populate its `opts` and `config` arguments respectively, as illustrated in Using programmatically. The two objects populating this JSON file are merged shallowly with the default config and options.

Example content of `overridesJsonFile`:
```json
{
  "config": {
    "settings": {
      "onlyCategories": ["performance"]
    }
  },
  "options": {
    "chromeFlags": ["--disable-dev-shm-usage"]
  }
}
```
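Because the merge is shallow, a top-level field from the overrides file replaces the corresponding default wholly rather than being deep-merged. A minimal sketch of what shallow merging implies (illustrative only - not the library's actual code, and the default values below are made up):

```javascript
// Illustrative defaults - not the library's real defaults.
const defaultConfig = {
  extends: 'lighthouse:default',
  settings: { onlyCategories: ['performance', 'seo'] },
};
const overrides = {
  settings: { onlyCategories: ['performance'] },
};

// A shallow merge replaces the entire top-level `settings` object.
const merged = { ...defaultConfig, ...overrides };

console.log(merged.extends); // 'lighthouse:default' - untouched fields survive
console.log(merged.settings.onlyCategories); // [ 'performance' ] - replaced wholly
```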
Running `lighthouse-check` in the example below will run Lighthouse audits against https://www.foo.software/lighthouse and https://www.foo.software/contact and output a report in the `/tmp/artifacts` directory.

The format is `--option <argument>`. Example below.
```bash
$ lighthouse-check --urls "https://www.foo.software/lighthouse,https://www.foo.software/contact" \
  --outputDirectory /tmp/artifacts
```
`lighthouse-check-status` example:

```bash
$ lighthouse-check-status --outputDirectory /tmp/artifacts \
  --minAccessibilityScore 90 \
  --minBestPracticesScore 90 \
  --minPerformanceScore 70 \
  --minProgressiveWebAppScore 70 \
  --minSeoScore 80
```
All options mirror the NPM module. The only difference is that array options like `urls` are passed as a single comma-separated string when using the CLI.
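In other words, a comma-separated CLI value corresponds to the array the module expects - a sketch of the mapping (not the library's actual argument parsing):

```javascript
// Illustrative only: how a comma-separated CLI value maps to the
// module's array option.
const cliValue =
  'https://www.foo.software/lighthouse,https://www.foo.software/contact';
const urls = cliValue.split(',');

console.log(urls.length); // 2
console.log(urls[0]); // 'https://www.foo.software/lighthouse'
```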
You can also run `lighthouse-check` with Docker:

```bash
$ docker pull foosoftware/lighthouse-check:latest
$ docker run foosoftware/lighthouse-check:latest \
  lighthouse-check --verbose \
  --urls "https://www.foo.software/lighthouse,https://www.foo.software/contact"
```
`lighthouse-check` functions accept a single configuration object.
lighthouseCheck
You can choose from two ways of running audits - locally in your own environment or remotely via Foo's Automated Lighthouse Check API. You can think of local runs as the default implementation. For directions on how to run remotely, see the Foo's Automated Lighthouse Check API Usage section. We denote which options are available to a run type with Run Type values of either `local`, `remote`, or `both`.
Below are options for the exported `lighthouseCheck` function or the `lighthouse-check` CLI command.
`validateStatus`

Either the `results` parameter or, alternatively, `outputDirectory` is required. To utilize `outputDirectory`, the same value would also need to be specified when calling `lighthouseCheck`.
The `lighthouseCheck` function returns a promise which either resolves with an object or rejects with an error object. In both cases the payload will be of the same shape, documented below.
This package was brought to you by Foo - a website performance monitoring tool. Create a free account for standard performance testing: automatic website performance testing, uptime checks, and charts showing performance metrics by day, month, and year. Foo also provides real-time notifications when changes in performance or uptime are detected. Users can integrate email, Slack, and PagerDuty notifications.