
HaoZeke/asv-numpy

About

This is a simple repository to set up and run the NumPy asv benchmarks. The results are stored with dvc on DagsHub.

Usage

Since the data is stored with dvc, there are a few additional steps:

micromamba create -f environment.yml
micromamba activate numpy-bench
# Follow the dvc repo setup instructions
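
Once the dvc remote is configured, the stored results can be fetched. This is a minimal sketch, assuming the DagsHub remote has already been set up per the instructions above:

# Fetch the dvc-tracked benchmark results (assumes the remote is configured)
dvc pull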

Contributing

We welcome contributed results as PRs.

Setup

Please ensure, at a minimum, that the machine being used is tuned for benchmarking:

export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
export OMP_NUM_THREADS=1
sudo python -m pyperf system tune

If running within Docker, keep the seccomp issues (among others) in mind; it is best to use an unloaded bare-metal machine.

Additionally, try to have at least a few isolated CPUs via the isolcpus kernel parameter.
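
As an illustrative sketch (the CPU numbers are assumptions, not values from this repository), isolate a couple of CPUs on the kernel command line and pin the benchmark run to them with taskset:

# e.g. append to GRUB_CMDLINE_LINUX in /etc/default/grub, regenerate grub config, reboot
isolcpus=2,3
# pin the benchmark process to the isolated CPUs ("..." stands for the usual asv run arguments)
taskset -c 2,3 asv run ...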

Generating Subsets

For generating subsets of interest, e.g. the commits for the last few release tags:

# Get the commit for each release tag
# delete tag_commits_only.txt before re-runs
for gtag in $(git tag --list --sort=taggerdate | grep "^v"); do
    git log "$gtag" --oneline -n1 --decorate=no | awk '{print $1;}' >> tag_commits_only.txt
done
# Use the last 20
tail --lines=20 tag_commits_only.txt > 20_vers.txt
asv run -j -m "$(hostnamectl hostname)" HASHFILE:20_vers.txt

Note that each tag and its corresponding commit are already shipped under subsets/, so this can simply be:

awk '{print $2;}' subsets/tag_commits.txt | tail --lines=20 > 20_vers.txt
asv run -j -m "$(hostnamectl hostname)" HASHFILE:20_vers.txt

When new benchmarks are added, skip the results that already exist by running:

asv run HASHFILE:20_vers.txt -k

Committing Results

We use dvc and DagsHub for storing the per-machine benchmark results.
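
A minimal sketch of the flow, assuming the asv results directory (results/ by default) is the dvc-tracked path in this repository; the exact path and .dvc file name depend on the repo layout:

# illustrative only; adjust the tracked path to this repo's layout
dvc add results
dvc push                        # upload the result data to the DagsHub remote
git add results.dvc .gitignore
git commit -m "Add benchmark results"
git push                        # then open a PR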

License

MIT.
