Bluespec Compiler - Test suite


This is a test suite for the compiler, simulator, and tools for the Bluespec Hardware Description Language as found in the bsc repository.


Requirements

This test suite uses the DejaGnu testing framework, which is written in Expect, which uses Tcl. The following command will install those packages on Debian or Ubuntu:

$ apt-get install dejagnu

DejaGnu version 1.6.3 and later requires a POSIX shell to run one of its scripts. If your default shell is not POSIX-compatible, you may need to set CONFIG_SHELL in the environment to a POSIX shell, such as CONFIG_SHELL=/bin/sh.
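
For example, to set this in the environment before running the suite (assuming /bin/sh is a POSIX shell on your system):

$ export CONFIG_SHELL=/bin/sh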

The test suite also uses a number of other tools to run and test programs:

$ apt-get install csh grep m4 make perl pkg-config time

To simulate the Verilog generated by BSC, a Verilog simulator is required. By default, the test suite expects Icarus Verilog, which can be installed on Debian or Ubuntu with the following command:

$ apt-get install iverilog

When a Verilog simulator is not available, simulation tests can be disabled by assigning VTEST=0 in the environment or on the command line when running the make commands shown below.

Tests for BSC's SystemC generation require SystemC headers and libraries for compilation and linking.

SystemC can be installed on Debian 10 (buster) or later with the following command:

$ apt-get install libsystemc-dev

When SystemC is not available, these tests can be disabled by assigning SYSTEMCTEST=0 in the environment or on the command line when running the make commands shown below.


Running the test suite

All of the following commands are executed from the testsuite subdirectory of the bsc repository.

Specifying the BSC installation to test

There are many ways to run tests in the suite, but the simplest is:

$ make TEST_RELEASE=/path/to/bsc/inst check

This will run the suite on the BSC installation pointed to by TEST_RELEASE.

Actually, an even simpler command is possible. If an inst subdirectory exists in the bsc repository containing this test suite (that is, ../inst), the Makefile can detect that and implicitly assign TEST_RELEASE if you have omitted it:

$ make check

If you omit TEST_RELEASE and there is no inst subdirectory of the parent bsc repo, the Makefile will report an error.

Extra tools

By default, the Makefile in bsc/src/comp/ will not build and install the tool showrules or the developer tools. These tools can all be installed with the install-extra target in bsc/src/comp/ (or separately with install-showrules and install-utils).
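
For example, assuming you are in the testsuite subdirectory of the bsc repository and the compiler was built with its default settings, the extra tools might be built and installed with a command along these lines:

$ make -C ../src/comp install-extra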

showrules

The test suite has tests for the showrules tool in bsc.showrules. These tests are disabled if showrules is not found in the TEST_RELEASE bin directory.

Developer tools

The test suite is currently able to use the following developer tools, if they exist in the TEST_RELEASE bin directory: dumpbo, vcdcheck, and bsc2bsv. There is also a dumpba tool, which is not currently used by the test suite but could usefully be added.

After each compilation that generates a .bo file, the suite can perform a sanity check by running dumpbo on the file and checking for an error exit code. There are also a small number of tests that explicitly run dumpbo as part of their testing.

The suite does not currently perform a sanity check on generated .ba files, but that would be possible with dumpba as a future extension.

After each Bluesim simulation that generates a .vcd file, the suite can perform a sanity check by running vcdcheck on the file and checking for an error exit code. There are also a small number of tests that explicitly run vcdcheck as part of their testing.

One test (bsc.bugs/b611/) checks for a bug in bsc2bsv but otherwise this tool is not used. There are no tests for the related tool, bsv2bsc.

Running some or all tests

If you want to perform a quick check, the smoke target will run a small subset of tests:

$ make smoke

The set of tests that are run is specified in the top-level Makefile.

On a clean repository, the check target runs most tests, but not all. Several tests have been deemed long-running and not essential to run; these have been placed in bsc.long_tests/ and are disabled by default. There is a Makefile in that directory which has targets for enabling and disabling those tests, as well as for showing the currently enabled tests. At the top level of the suite, the fullcheck target will enable these tests and then run check:

$ make fullcheck

After that command has been run, a subsequent make check will also run the long tests, so remember to disable them afterward if you don't want to run them (or use git's clean command to restore the repo).
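
For example, one way to restore the repository state is with git's clean command (be careful: this removes all untracked files under that directory):

$ git clean -fdx bsc.long_tests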

Running tests in a subdirectory

If you want to run only the tests in a given subdirectory, you can change to that directory and do:

$ make localcheck

The localcheck target will only run the tests in the current directory.

If you want to test the current directory and recurse into its subdirectories, you can do:

$ make check

This will run the tests in the current directory and print a report; then, if there are no failures in that report, it will immediately run the tests in the subdirectories and print a report for those. (So be aware that there will be two reports, if you want to see all the stats.)

Running tests in parallel

There are also checkparallel and fullparallel targets, which are versions of check and fullcheck that run the tests in parallel.

NOTE: These targets may have issues to be fixed.

Disabling types of tests

If you do not want to run Bluesim simulations, you can disable those tests (or parts of tests) by assigning CTEST=0. For example:

$ make CTEST=0 check

If you do not want to run Verilog simulations, you can disable those tests (or parts of tests) by assigning VTEST=0. For example:

$ make VTEST=0 check

If you do not want to run tests that require SystemC, you can disable those tests by assigning SYSTEMCTEST=0. For example:

$ make SYSTEMCTEST=0 check

Specifying the Verilog simulator

BSC can be used to link Verilog files into a simulation executable. When run this way, the choice of Verilog simulator is specified with the -vsim flag. For example, to specify Icarus Verilog as the simulator, the flag would be -vsim iverilog.
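
For reference, a direct invocation of BSC to link generated Verilog using Icarus might look like the following sketch (the module and file names are hypothetical; within the test suite this step is performed by the test procedures rather than by hand):

$ bsc -verilog -vsim iverilog -e mkTop -o mkTop.exe mkTop.v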

Linking through BSC with -vsim is how all Verilog simulation is done in the test suite. The argument to -vsim is provided with the TEST_BSC_VERILOG_SIM environment variable:

$ make TEST_BSC_VERILOG_SIM=cvc64 check

The default value is iverilog.

Providing additional options to BSC

If you want to provide additional options on the command line for all invocations of BSC in the test suite, that can be specified with the TEST_BSC_OPTIONS environment variable. For example:

$ make TEST_BSC_OPTIONS="-use-dpi" check

Specifying C++ options for SystemC tests

SystemC tests involve compiling SystemC C++ files and linking with the SystemC library. The test suite expects that it will need to extend the include and library paths of the C++ compiler with the directories where SystemC header and library files can be found. The Makefile will attempt to determine the appropriate directories using pkg-config. If this does not work or if you wish to override the values, the directories can be specified with the TEST_SYSTEMC_INC and TEST_SYSTEMC_LIB environment variables. For example:

$ make check \
    TEST_SYSTEMC_INC=/opt/systemc/include \
    TEST_SYSTEMC_LIB=/opt/systemc/lib

If compiling or linking with SystemC on your system requires additional flags to the C++ compiler, that can be provided with the TEST_SYSTEMC_CXXFLAGS environment variable. For example:

$ make TEST_SYSTEMC_CXXFLAGS=-std=c++11 check

Test suite structure

In the top directory, you will find:

  • Makefiles: Makefile and supporting *.mk
  • Test infrastructure: config/unix.tcl and supporting config/*.tcl (plus a few other scripts)
  • Directories containing individual tests: bsc.* (named bsc.<category>, indicating the category of tests in that directory)

When you run make from the top directory, the system looks for all directories named bsc.* and expects to find tests to run there. You'll notice that there are many such directories, with names like bsc.scheduler and bsc.typechecker (grouping tests by the compiler stage that they exercise); bsc.arrays and bsc.real (grouping tests by the feature they exercise); and bsc.bugs, bsc.lib, and bsc.bsv_examples (grouping tests related to GitHub issues, to the libraries, and to realistic examples). These directories can also contain subdirectories, to further group related tests; for example, bsc.typechecker contains subdirectories such as literals, numeric, and string. Each directory and subdirectory must contain a Makefile (see bsc.codegen/Makefile for an example of the bare minimum). The system will assume that each subdirectory containing a Makefile is also a test directory, and will recurse into it; however, the directory's Makefile can override this by setting SUBDIRS to an explicit list of subdirectories to recurse into, as in the sketch below.
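
For example, a sketch of the SUBDIRS override in a directory Makefile (the rest of the Makefile boilerplate should be copied from an existing directory such as bsc.codegen/Makefile; the subdirectory names here are just for illustration):

# recurse only into these subdirectories, even if others contain Makefiles
SUBDIRS = literals numeric string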

In the directory tree under bsc.*, any file named *.exp is assumed to be a test script to run. Our convention is to have only one .exp file per directory, named after that directory. For example, in bsc.typechecker, there is a script typechecker.exp, which contains the commands for running tests using the other files in that directory. The subdirectory bsc.typechecker/string/ has a file string.exp, containing the commands for running the tests in that subdirectory.

It is not a commonly used feature, but note that the system allows running just a single <name>.exp script using the Makefile target <name>.check. For example:

 $ make string.check

The .exp files are Tcl scripts that call test procedures defined in config/*.tcl. These procedures are high-level, so a single command can cause a source file to be compiled, linked to Verilog, simulated, and its output compared with an expected output, and then do the same again for Bluesim. Typically, a test consists of a source file, an expected output, and a single line in the .exp file (or a few lines, in some cases). Targeted tests for certain stages (or backends or bugs) may only execute the compile step, or may dump intermediate results to examine.
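
As an illustrative sketch only (the procedure names and arguments below are assumptions; the actual procedures and their arguments are defined in config/*.tcl and may differ), a minimal .exp file might look something like this:

# NOTE: procedure names here are illustrative; see config/*.tcl for the
# procedures that actually exist and their arguments.

# Compile Foo.bsv, link, and simulate with both the Verilog and Bluesim
# backends, comparing the output against the expected output file
test_c_veri_bsv Foo

# Check only that Bar.bsv compiles successfully to Verilog
compile_verilog_pass Bar.bsv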

The .exp scripts are executed in place, so the files generated by running BSC (.bo, .ba, .cxx, .v, .exe, etc.) are written to those directories. The system will automatically clean all directories, as the first step of make, before executing the tests. You can also manually run make clean if you want to remove the generated files (or make localclean to not clean subdirectories). The cleaning process removes files whose names match a set of patterns (defined in norealclean.mk); the Makefile in a test directory can specify additional files to delete by setting DONTKEEPFILES and can exempt files from removal by setting KEEPFILES (see bsc.bsv_examples/FloatingPoint/Makefile for an example, or the sketch below).
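
As a sketch (the file names are hypothetical; check an existing Makefile such as bsc.bsv_examples/FloatingPoint/Makefile for the exact usage), a test directory Makefile might add lines such as:

# additional generated files to delete during cleaning
DONTKEEPFILES = dumped_table.txt
# files that match the removal patterns but should nevertheless be kept
KEEPFILES = HandWritten.v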

A new test can be added to an existing directory by adding a procedure call to the .exp file and the necessary source and expected output files to the directory. A new subdirectory can be added by creating the subdirectory and adding a Makefile and <dirname>.exp. A new top-level bsc.<category> can be added in the same way, with a Makefile and <category>.exp file.

TBD: Document the common test procedures, from config/*.tcl


Diagnosing test failures

When you run the test suite, it will print some information to the terminal. This will give you high-level information about which .exp files were run, which tests failed (or unexpectedly passed), and a breakdown of time spent in test procedures. For example, you should see something like this (highly abridged):

cleaning /path/bsc/testsuite
cleaning /path/bsc/testsuite/bsc.arrays
cleaning /path/bsc/testsuite/bsc.arrays/bounds
...

MAKEFLAGS= BSCTEST=1 BSC=...
...

		=== bsc tests ===

Schedule of variations:
    unix

Running target unix
Using /usr/local/share/dejagnu/baseboards/unix.exp as board description file for target.
Using /usr/local/share/dejagnu/config/unix.exp as generic interface file for target.
Using /path/bsc/testsuite/config/unix.exp as tool-and-target-specific interface file.
testconfig dir is: /path/bsc/testsuite/config
Sourcing: /path/bsc/testsuite/config/verilog.tcl
testconfig dir is: /path/bsc/testsuite/config
Sourcing: /path/bsc/testsuite/config/bluetcl.tcl
Running /path/bsc/testsuite/bsc.arrays/arrays.exp ...
Running /path/bsc/testsuite/bsc.arrays/bounds/select/select.exp ...
...
Running /path/bsc/testsuite/bsc.interra/operators/Arith/arith.exp ...
NOTE: Random seed = 3427
...
Running /path/bsc/testsuite/bsc.verilog/vcd/vcd.exp ...
Running /path/bsc/testsuite/bsc.verilog/verilog.exp ...

       === Test Distribution Summary === 
   bluetcl_exec_fail                        --     6
   bluetcl_exec_pass                        --    23
   bluetcl_pass                             --    39
   ...
   string_does_not_occur                    --   148
   string_occurs                            --   121
Total:                                      -- 17692

                === Timing by Category ===

  WALLCLOCK sec        CPU sec      SYSTEM sec   #calls  CATEGORY
     0.1 (  0%)      0.0 (  0%)      0.0 (  0%)    3122  sed
     0.3 (  0%)      0.1 (  0%)      0.1 (  0%)       7  run_systemc_executable
...
  3761.3 ( 26%)   3412.5 ( 27%)    335.2 ( 21%)    1186  bsc_link_objects
  5807.1 ( 40%)   5106.6 ( 41%)    584.5 ( 37%)    2775  bsc_compile_verilog
 14402.6 (100%)  12576.5 (100%)   1585.8 (100%)   15249  TOTAL

		=== bsc Summary ===

# of expected passes		17235
# of unexpected failures	75
# of unexpected successes	1
# of expected failures		123

The first lines are cleaning the subdirectories; then there is a dump of the various Makefile options, to verify how the test suite is being run; and then the execution of tests begins. Each time the system enters a subdirectory and executes an .exp script, there is a line:

Running /path/bsc/testsuite/bsc.interra/operators/Arith/arith.exp ...

That .exp file may call procedures like note to print informational messages:

NOTE: Random seed = 3427

But otherwise, the system won't print any message unless a test fails (FAIL:) or unexpectedly passes (XPASS:). The latter usually indicates that a bug or missing feature has been fixed and the test for it is now passing, so the expected behavior just needs to be updated.

A test usually has multiple steps, such as compile, link, simulate, and compare the simulation output to the expected output. If the compile step fails, then you'll also see failures for the other steps:

FAIL: module `mkTestbench' in `Testbench.bsv' should compile to Verilog
FAIL: `mkTestbench.v mkDut.v' should link to executable `mkTestbench'
FAIL: Verilog simulation `mkTestbench' should execute: child process exited abnormally
FAIL: `mkTestbench.v.out' differs from `arith.out.expected'

Thus, you only need to consider the first failure message for a given test, and can ignore the rest (that share the same name). Tests are also generally repeated for both Verilog and Bluesim backends. Often, the compile step is done once, for Verilog, and the generated .ba files are reused for Bluesim linking, so that the compile step doesn't have to be run twice. In that case, if the compile step fails, you would observe not only Verilog failures but also Bluesim failures that follow. (This may not always be the case, as some tests are written to separately compile Bluesim and Verilog.) Verilog and Bluesim simulations are also generally repeated a second time, with VCD dumping turned on. All of this is to explain that you might see the following additional failures, as well:

FAIL: Verilog simulation `mkTestbench' should execute with VCD dump: child process exited abnormally
FAIL: `mkTestbench mkDut' should link to executable `mkTestbench'
FAIL: Bluesim simulation `mkTestbench' should execute: child process exited abnormally
FAIL: `mkTestbench.c.out' differs from `arith.out.expected'
FAIL: Bluesim simulation `mkTestbench' should execute: child process exited abnormally

All of these failures would be due to the initial BSC compile failure, reported in the first line.

The output seen on the terminal is recorded in a file testrun.sum ("sum" for summary). A more verbose log is written to the file testrun.log. That file records the commands that were executed, so you can see which specific command failed. For the above example, that might look like the following:

Executing (bsc_compile_verilog): /path/inst/bin/bsc  -elab -no-show-timestamps -no-show-version -u -verilog -g mkTestbench Testbench.bsv >& Testbench.bsv.bsc-vcomp-out
FAIL: module `mkTestbench' in `Testbench.bsv' should compile to Verilog

This is reporting that the test procedure bsc_compile_verilog is being run, and it gives the command that was executed, including the file that stdout and stderr were piped to. The output of every command is recorded to a file that can be examined afterwards. The testrun.log file will tell you the name, but once you learn the pattern, you'll know what to look for without having to open testrun.log. The files have a suffix of the form <description>-out; so if you're in a test directory, you can list all the files *out to see what output has been recorded. Sometimes there are further extensions -- for example, before comparing the output to an expected file, the test procedure might filter the file, and the result goes to <description>-out.filtered. So listing all the files *out* might show you more.
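
Since the command line is recorded immediately before the corresponding result, one quick way to find the failing commands is to search testrun.log for FAIL lines with one line of leading context; for example:

$ grep -B 1 'FAIL:' testrun.log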

For a source file named Foo.bsv, the output of the BSC compile step is Foo.bsv.bsc-vcomp-out for Verilog or Foo.bsv.bsc-ccomp-out for Bluesim, or just Foo.bsv.bsc-out for compilation with no backend flags. For the output of the linking step, a suffix is added to the name of the top module in the design; for a top module named sysFoo, the output from Verilog linking would be in sysFoo.bsc-vcomp-out. The simulation output would be in the file sysFoo.c.out (for Bluesim) or sysFoo.v.out (for Verilog). If the test compares an output to a golden expected output, that is usually stored as <generatedfilename>.expected. So the expected output of Verilog simulation might be called sysFoo.v.out.expected. Generally, the output of Verilog and Bluesim simulation should be the same, so there is only one file for the expected output, named sysFoo.out.expected.
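
Putting these conventions together, for a hypothetical source file Foo.bsv with top module sysFoo, the files in a test directory might include:

Foo.bsv.bsc-vcomp-out    output of the compile step (Verilog backend)
sysFoo.bsc-vcomp-out     output of the Verilog link step
sysFoo.v.out             output of the Verilog simulation
sysFoo.c.out             output of the Bluesim simulation
sysFoo.out.expected      expected output, shared by both backends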

Instead of running the whole test suite, you can also change into an individual directory and run just the tests in that directory (with make localcheck) or just the tests in that directory and below (with make check). If you do that, there will be a testrun.log (and testrun.sum) file in the directory you're in.

If a failure occurs in the GitHub CI, most of the out files will be collected into a tar archive, which can be downloaded from the GitHub Actions page. Look for the downloadable "artifacts" -- these will mostly be tar files of the built compiler on each OS, but will also include logs.tar for failing test suite jobs.

NOTE: Because of space limits, not all files are included. If you run into problems, you can push commits on a branch that archive additional files, use note to print additional info, etc. The script archive_logs.sh is where you would add more wildcard patterns.

To give another example, if a failure occurs during Verilog linking, the terminal output might look like this:

Running /path/bsc/testsuite/bsc.verilog/positivereset/ClockDividers/ClockDividers.exp ...
FAIL: sysClockDiv.v ClockDiv.v should link to executable sysClockDiv

A common convention used in tests in the suite is that a file named Foo would have a top-level module named sysFoo and submodules (if any) would be prefixed with mkFoo_. Knowing that convention, the above failure suggests that we look at the source file ClockDiv.bsv. To see what error messages were output during Verilog linking, we would look in the file sysClockDiv.bsc-vcomp-out, in the directory bsc.verilog/positivereset/ClockDividers/.
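
For example, that output could be viewed with:

$ cat bsc.verilog/positivereset/ClockDividers/sysClockDiv.bsc-vcomp-out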