
Contributing

This is the JavaScript track, one of the many tracks on Exercism. It holds all the exercises that are currently implemented and available for students to complete. The track consists of concept exercises, which teach the JavaScript syllabus, and practice exercises, which are unlocked by progressing through the syllabus and can be used to practise the concepts learned. Both are listed in config.json. It's not uncommon for people to discover incorrectly implemented tests, suggest track-specific hints to help students with JavaScript specifics, see optimisations in the configuration of jest, eslint or other dependencies, report missing edge cases, factual errors or logical errors, or implement and develop new exercises.

We welcome contributions of all sorts and sizes, from reporting issues to submitting patches, as well as joining the current discussions 💬.

Warning

This guide is slightly outdated and does not yet reflect the V3 changes.



This guide covers several common scenarios for improving the JavaScript track. There are similar guides for contributing to other parts of the Exercism ecosystem.

Code of Conduct

Help us keep Exercism welcoming. Please read and abide by the Code of Conduct.

Exercises

Before contributing code to any existing exercise or any new exercise, please have a thorough look at the current exercises and dive into open issues.

New exercise

There are two ways to implement new exercises (exercises that don't exist in this track).

  1. Pick one from the list of exercises (implemented in other tracks).
  2. Create a new, track-specific exercise from scratch.

Implementing an existing exercise

The majority of exercises are practice exercises. These exercises are not part of the syllabus (they are not concept exercises), and often have canonical data that is shared between tracks.

Let's say you want to implement a new exercise from the list of exercises, because you've noticed that this track could benefit from it, really liked it in another track, or simply find it interesting. The first step is to check for an open issue. If there is one, make sure no one is already working on it, and above all that there is no open Pull Request for the exercise.

If there is no such issue, you may open one. The baseline of work is as follows:

  1. Open a new issue; we'll label it with new exercise ✨.
  2. We'll assign the issue to you, so you get to work on this exercise.
  3. Create a new folder in /exercises.
  4. You'll need to sync this folder with the matching config files. You can use scripts/sync to do this: ASSIGNMENT=slug npx babel-node scripts/sync.
  5. Create a <slug>.js stub file.
  6. Create a <slug>.spec.js test file. Add the tests here, following the canonical data where possible (more on canonical data below).
  7. Create an example.js file. Place a working implementation in it, written as if the file were named <slug>.js (see the sketch after this list).
  8. Create .meta/tests.toml. If the exercise being implemented has test data in the problem specifications repository, this file must list the UUIDs of those tests and mark each one as implemented or not. Scroll down to Tools to find configlet, which helps generate this file interactively.
  9. Run the tests locally, using scripts/test: ASSIGNMENT=slug npx babel-node scripts/test.
  10. Run the linter locally, using scripts/lint: ASSIGNMENT=slug npx babel-node scripts/lint.
  11. Create an entry in config.json: give it a unique new UUID (you can use the configlet uuid tool to generate one; scroll down to Tools to see how to get it) and a difficulty comparable to similar exercises, and make sure it is placed sensibly in the file. Currently, the file lists concept exercises first, then the "original core" exercises, then everything else, ordered by difficulty from low to high and finally lexicographically.
  12. Format the files, using scripts/format: npx babel-node scripts/format.
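
As a rough illustration of steps 5 to 7, here is what the three files could look like for a hypothetical exercise with the slug say-hello (the slug, function name and tests are invented for this sketch; the "Remove this statement and implement this function" message mirrors the convention used by existing stubs in this track):

```javascript
// say-hello.js: the stub handed to the student (hypothetical exercise)
export const sayHello = () => {
  throw new Error('Remove this statement and implement this function');
};
```

```javascript
// example.js: a working implementation, written as if it were say-hello.js
export const sayHello = (name = 'you') => `Hello, ${name}!`;
```

```javascript
// say-hello.spec.js: the jest test suite, following canonical data where it exists
import { sayHello } from './say-hello';

describe('sayHello()', () => {
  test('greets a named person', () => {
    expect(sayHello('Alice')).toBe('Hello, Alice!');
  });

  test('greets generically when no name is given', () => {
    expect(sayHello()).toBe('Hello, you!');
  });
});
```

With these files in place, ASSIGNMENT=say-hello npx babel-node scripts/test should run the spec against example.js.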

The final step is opening a Pull Request with all these items checked off. Make sure the tests run and the linter is happy; both run automatically on your PR.

If you want to work on a new concept exercise, please talk to the current maintainers of the track.

Creating a track-specific exercise

The steps for a track-specific exercise are similar to those of implementing an established, existing exercise. The differences are:

  • You'll have to write the README.md and test suite from scratch.
  • You'll have to come up with a unique slug.
  • A new icon needs to be created for it.
  • Generate a UUID, for example using configlet.

Open a new issue with your proposal, and we'll make sure all these steps are correctly taken. Don't worry! You're not alone in this.

Existing exercises

There are always improvements possible on existing exercises.

Improving the README.md

For practice exercises, README.md is generated from a canonical source.

README.md: the description that shows up on the student's exercise page when they are ready to start. It's also downloaded as part of the exercise's data. The README.md, together with the <slug>.spec.js file, forms the contract for the implementation of the exercise. No test should force a specific implementation, and no README.md explanation should give away a particular implementation. The README.md files are generated, which is explained here.

  • This file may need to be regenerated to sync with the latest canonical data.
  • You may contribute a track-specific hints.md, as described in that document.
  • You may improve the track-specific exercise-readme-insert.md, and regenerate all the READMEs.

For concept exercises, README.md is generated from the various docs inside the exercise .docs directory.

  • introduction.md: introduces the concept. This is placed at the top of the file.
  • instructions.md: the actual exercise instructions. These follow the introduction.
  • hints.md: hints that are hidden behind a button, one set per task listed in instructions.md.
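
For reference, assuming the V3 layout in which concept exercises live under exercises/concept/<slug> (an assumption, since this guide predates the V3 changes), these files sit roughly like this:

```
exercises/concept/<slug>/
├── .docs/
│   ├── introduction.md
│   ├── instructions.md
│   └── hints.md
└── ...
```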

Syncing the exercise

Only practice exercises require syncing.

Syncing an exercise with canonical data: There is a problem-specifications repository that holds test data in a standardised format. These tests are occasionally fixed, improved, added, removed or otherwise changed. Each change also changes the version of that canonical data. Syncing an exercise consists of:

  • updating the <slug>.spec.js file;
  • updating the .meta/tests.toml file, if the exercise being updated has test data in the problem specifications repository (the contents of this file can be updated interactively using configlet);
  • updating the example.js file so it still works with the new tests; and
  • regenerating the README.md, should there be any changes.
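
After updating those files, the regular scripts (described under Tools below) can be used to verify the result locally. For example, for the two-fer exercise:

```
ASSIGNMENT=two-fer npx babel-node scripts/test
ASSIGNMENT=two-fer npx babel-node scripts/lint
npx babel-node scripts/format
```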

Improving or adding mentor notes

Mentor notes are the notes given to the mentors to guide them with mentoring. These notes do not live in this repository, but instead in the website-copy repository. Find their contributing guidelines here.

Improving or adding automated test analyzers

Some exercises already have automated mentoring support. These automations don't live in this repository, but instead in the javascript-analyzer repository. Find their contributing guidelines here.

Documentation

There is quite a bit of student-facing documentation, which can be found in the docs folder. You may improve these files by making the required changes and opening a new Pull Request.

Tools

You'll need an LTS (or newer) version of Node.js to contribute to the code in this repository. Run npm install in the root to be able to run the scripts listed below. We use the following dependencies:

  • shelljs to provide shell interface to scripts
  • eslint for linting all code in the stub, test file and example file
  • jest to run all the test files on all example implementations
  • babel to transpile everything so it works regardless of your version of Node.js.

We also use prettier to format the files. Prettier is installed as part of npm install, and you can run it with npx babel-node scripts/format. If you want to auto-format using your editor, install the dependencies via npm install and it will Just Work™.
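
In short, a typical first-time setup, assuming a recent LTS version of Node.js is already installed, looks like this:

```
npm install                     # installs jest, eslint, babel, prettier and shelljs
npx babel-node scripts/format   # formats all files with the same prettier version CI uses
```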

Fetch configlet

If you'd like to download configlet, you can use the fetch-configlet binary. It will run on Linux, macOS and Windows, and download configlet to your local drive. Find more information about configlet here.

If a track implements an exercise for which test data exists, the exercise must contain a .meta/tests.toml file. The goal of the tests.toml file is to keep track of which tests are implemented by the exercise. Tests in this file are identified by their UUID and each test has a boolean value that indicates if it is implemented by that exercise. A tests.toml file for a track's two-fer exercise looks like this:

[canonical-tests]
# no name given
"19709124-b82e-4e86-a722-9e5c5ebf3952" = true
# a name given
"3451eebd-123f-4256-b667-7b109affce32" = true
# another name given
"653611c6-be9f-4935-ab42-978e25fe9a10" = false

To make it easy to keep the tests.toml files up to date, contributors can use the configlet application's sync command. This command will compare the tests specified in the tests.toml files against the tests that are defined in the exercise's canonical data. It then interactively gives the maintainer the option to include or exclude test cases that are currently missing, updating the tests.toml file accordingly.
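
For example, from the repository root, a plausible sequence is the following (assuming fetch-configlet places the binary in bin/; adjust the path if yours differs):

```
./bin/fetch-configlet   # download configlet for your platform
./bin/configlet uuid    # generate a fresh UUID, e.g. for a new config.json entry
./bin/configlet sync    # interactively update the .meta/tests.toml files
```

The exact flags accepted by configlet sync depend on the configlet version you fetched; see the configlet documentation referenced above.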

Scripts

We have various scripts to aid with maintaining and contributing to this repository.

Important

If you encounter the following error:

SyntaxError: Unexpected token 'export'

It's because your local Node.js version does not support ES6 import and export statements in regular .js files, or in files without an extension. This is one of the reasons why these scripts are meant to be run through babel-node:

npx babel-node scripts/the-script

Additionally, this ensures that the code written in the scripts and their dependencies can be executed by your current Node.js version, which may differ from the version used by the person who contributed the script.

format

/*
 * Run this script (from root directory): npx babel-node scripts/format
 *
 * This runs `prettier` on all applicable files, FORCES using the same version
 * as the CI uses to check if the files have been formatted.
 */

Use this script to format all the files using the correct version of prettier. If you want your editor to do this automatically, install the project development dependencies (npm i), which include prettier. CI checks that files are formatted with this same version via .github/workflows/verify-code-formatting.yml.

lint

/*
 * Run this script (from root directory): npx babel-node scripts/lint
 *
 * This runs `eslint` on all sample solutions (and test) files
 */

If the ASSIGNMENT environment variable is set, only that exercise is linted. For example, if you only want to lint two-fer, you may, depending on your environment, use:

ASSIGNMENT=two-fer npx babel-node scripts/lint

test

/**
 * Run this script (from root directory): npx babel-node scripts/test
 *
 * This runs `jest` tests for all sample solutions
 */

If the ASSIGNMENT environment variable is set, only that exercise is tested. For example, if you only want to test the example.js for two-fer, you may, depending on your environment, use:

ASSIGNMENT=two-fer npx babel-node scripts/test

sync

/**
 * Run this script (from root directory): npx babel-node scripts/sync
 *
 * This script is used to propagate any change to root package.json to
 * all exercises and keep them in sync.
 * There is a CI step which checks that package.json in root & exercises match
 * (see checksum script for more info).
 */

If the ASSIGNMENT environment variable is set, only that exercise is synced. For example, if you only want to sync the files for two-fer, you may, depending on your environment, use:

ASSIGNMENT=two-fer npx babel-node scripts/sync

checksum

/*
 * Run this script (from root directory): npx babel-node scripts/checksum
 *
 * This will check root `package.json` matches each exercise's `package.json`.
 * But the catch is there are some dependencies that are only used at build-time and not served to end-users
 * We skip those dependencies while performing the checksum.
 * See `SKIP_PACKAGES_FOR_CHECKSUM` in helpers.js for a list of skipped packages.
 */

ci-check

/**
 * Run this script (from root directory): npx babel-node scripts/ci-check
 *
 * This will run the following checks:
 *
 * 1. Check config in all exercises matches
 * 2. Checks stubs exist
 * 3. Run eslint to check code-style
 */

Run this script to check the stubs and configuration integrity, and to lint the code.

ci

/**
 * Run this script (from root directory): npx babel-node scripts/ci
 *
 * This will run the following checks:
 *
 * 1. Find the exercises
 * 2. Run tests against sample solutions
 */

Run this script to test all exercises.

name-check

/**
 * Run this script (from root directory): npx babel-node scripts/name-check
 *
 * This will run the following checks:
 *
 * 1. Package name is of the format "@exercism/javascript-<exercise>"
 *
 * This script also allows fixing these names: npx babel-node scripts/name-check --fix
 */

Run this script to check that the package name in each exercise's package.json follows the expected format, or to fix it with --fix.

name-uniq

/**
 * Run this script (from root directory): npx babel-node scripts/name-uniq
 *
 * This will run the following checks:
 *
 * 1. All exercises have unique package names in their package.json files.
 */

Run this script to check if there is any duplicate package name.

directory-check

/**
 * Run this script (from root directory): npx babel-node scripts/directory-check
 *
 * This will run the following checks:
 *
 * 1. The package has the correct directory based on the path to the exercise.
 *
 * This script also allows fixing these directories: npx babel-node scripts/directory-check --fix
 */

Run this script to check that the repository directory in each exercise's package.json matches the exercise's path, or to fix it with --fix. If the ASSIGNMENT environment variable is set, only that exercise is checked. For example, if you only want to check the directory for concept/closures, you may, depending on your environment, use:

ASSIGNMENT=concept/closures npx babel-node scripts/directory-check