
Add configuration support #785

Open · wants to merge 13 commits into base: master
Conversation

LarsAsplund
Collaborator

Related to #772 and #179

Supports automatic discovery of VHDL configurations of testbench entities. These are added as VUnit configurations to the testbench. These VUnit configurations have a newly introduced vhdl_configuration_name attribute that, if set, makes VUnit call the simulator with the VHDL configuration name rather than the testbench entity + architecture names.
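The selection logic described above can be sketched in plain Python. This is a hypothetical illustration, not VUnit's actual implementation: the `SimTarget` class and `select_top` function are invented names, modeling only the described behavior of the `vhdl_configuration_name` attribute.

```python
# Hypothetical sketch (not VUnit's actual code): how the simulator call
# target could be chosen based on the vhdl_configuration_name attribute
# described in this PR. SimTarget and select_top are illustrative names.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SimTarget:
    entity: str
    architecture: str
    vhdl_configuration_name: Optional[str] = None  # attribute introduced by this PR


def select_top(target: SimTarget) -> str:
    """Return what the simulator would be asked to elaborate and run."""
    if target.vhdl_configuration_name is not None:
        # Attribute set: call the simulator with the VHDL configuration name.
        return target.vhdl_configuration_name
    # Attribute unset: fall back to testbench entity + architecture names.
    return f"{target.entity}({target.architecture})"


print(select_top(SimTarget("tb_example", "tb")))                     # tb_example(tb)
print(select_top(SimTarget("tb_example", "tb", "cfg_example_rtl")))  # cfg_example_rtl
```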

@LarsAsplund LarsAsplund marked this pull request as draft January 1, 2022 21:52
@LarsAsplund LarsAsplund marked this pull request as ready for review January 2, 2022 22:18
@eine eine added this to the v4.7.0 milestone Jan 9, 2022
@eine eine added Enhancement ThirdParty: OSVVM Related to OSVVM and/or OSVVMLibraries. labels Jan 9, 2022
@LarsAsplund LarsAsplund force-pushed the add-configuration-support branch 5 times, most recently from a97884b to 35e8438 Compare February 6, 2022 11:06
@eine eine modified the milestones: v4.7.0, v4.8.0, v5.0.0 Apr 19, 2023
@sjoerss

sjoerss commented Jul 7, 2023

vunit_config_example.zip

I'm providing a minimal reproducible example of using a VHDL configuration together with test cases.
The two do not work together: either the VHDL configuration is used but no test cases are found, or the test cases are found but the VHDL configuration is ignored.

I hope that helps.
As soon as there is a solution to test, I am ready.

best regards,
Sebastian Joerss

@LarsAsplund
Collaborator Author

@sjoerss I'm returning to this and had a look at your use case. It's not the style of a classic OSVVM testbench, but it is nevertheless something reasonable for us to support.

This PR will also take a new direction. It was built on the assumption that top-level generics can't be combined with configurations. That no longer seems to be the case when testing with a number of simulators.

The previous approach also introduced reading the runner configuration from file and running parallel threads in different root directories. These features still have value, since they support other use cases, but I will release them later in separate PRs.

@LarsAsplund
Collaborator Author

@sjoerss I think my original approach was too OSVVM use case focused. Your use case is one of a family of use cases where the expressiveness of VHDL is too limited. In VHDL, the configuration is bound to an entity, i.e. a VUnit testbench. However, we also want to be able to bind the configuration to a subset of test cases in the testbench. In your case the subset is the test cases in the entity selected by the configuration but there are also other examples. For example, we can have a standard VUnit testbench with the test cases in the same file and then a set of VHDL configurations selecting what implementation of the DUT to use. We may not want to run all tests on every DUT.

With standard VUnit configurations we express the creation of a configuration and the binding in a single line, for example my_testbench.add_config(...) or my_test_case.add_config(...) but with VHDL configurations we should think of it as two steps. The VHDL code creates the configuration but then we need to bind it to a testbench or a set of test cases. For example, my_test_case.bind_config(my_vhdl_configuration).

This would also allow us to scan for test cases first, before we do the binding. It would also allow us to set other properties of the configurations before binding, such as what generics to use.
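The proposed two-step flow (create the configuration in VHDL, then bind it from Python) can be modeled with a minimal sketch. Note that `bind_config` and both class names here are hypothetical, mirroring the proposal above; they are not part of VUnit's current API.

```python
# Minimal sketch of the proposed two-step flow (create, then bind), using
# plain Python classes. VhdlConfiguration, TestCase, and bind_config are
# hypothetical names modeling the proposal, not VUnit's actual API.
class VhdlConfiguration:
    def __init__(self, name, generics=None):
        self.name = name
        # Properties such as generics can be set before binding.
        self.generics = dict(generics or {})


class TestCase:
    def __init__(self, name):
        self.name = name
        self.configs = []

    def bind_config(self, cfg):
        # Step 2: bind an already-existing VHDL configuration to this
        # test case (or, analogously, to a whole testbench).
        self.configs.append(cfg)


# Step 1: the VHDL code creates the configuration; here we only model it.
cfg = VhdlConfiguration("cfg_dut_rtl", generics={"data_width": 8})

# Test cases can be scanned first, then bound to a subset of configurations.
test = TestCase("Test something basic")
test.bind_config(cfg)
print([c.name for c in test.configs])  # ['cfg_dut_rtl']
```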

@LarsAsplund
Collaborator Author

@sjoerss Thinking about this a bit more made me realize that your use case is more than just allowing us to bind a VHDL configuration to a test case. Today a testbench file is scanned for test cases or, if the test suite is located elsewhere, VUnit can be instructed to scan another file by using the testbench method scan_tests_from_file. In either case there is only one test suite in each testbench/simulation. In your case there are two test suites in different files, and if we simply extend scan_tests_from_file to allow scanning of several files, the testbench would "see" several test suites. You would use configurations to allow only one test suite to run in each simulation, but internally there would be two test suites for the testbench. Unless we restrict that and keep the clean/standard structure of unit tests, I fear that we're opening Pandora's box.

Rather than going down that road I would prefer that this use case is solved in other ways. What is your reason for partitioning the test cases in two test suites? I know of at least two reasons why splitting test suites can be a good idea:

  1. Putting all test cases for a DUT in the same file can make the file very large and unmanageable. However, you may still want to reuse the "test fixture" of the testbench (DUT and surrounding verification components such as BFMs and clock generators). In that case I create two testbenches, one for each test suite, and put the test fixture in a separate entity which I instantiate in both testbenches.
  2. All test cases may fit in a single testbench but some of them may be slow, especially if the DUT is at the system level. In those cases it can be nice to have a basic and fast subset of the test suite (that you run frequently) and then execute the full test suite once every night. You can achieve that by using VUnit attributes. For example, tag the basic test cases with .basic (attribute names without a leading dot are reserved for VUnit):
if run("Test something basic") then
    -- vunit: .basic
    ...
end if;

and then you call:

python run.py --with-attributes .basic
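The tagging-and-filtering flow above can be sketched with a small scanner. This is an illustrative toy, not VUnit's actual test scanner: `tagged_tests` is an invented helper that finds test cases followed by a `-- vunit: .basic` comment, mimicking what `--with-attributes .basic` selects.

```python
import re

# Illustrative toy (not VUnit's actual scanner): collect test-case names
# that carry a given "-- vunit: <attribute>" comment, mimicking the
# selection done by `python run.py --with-attributes .basic`.
SOURCE = '''
if run("Test something basic") then
    -- vunit: .basic
elsif run("Test something slow") then
'''


def tagged_tests(source, attribute):
    tests = []
    current = None
    for line in source.splitlines():
        m = re.search(r'run\("([^"]+)"\)', line)
        if m:
            # Entering a new test case branch.
            current = m.group(1)
        elif current and re.search(r'--\s*vunit:\s*' + re.escape(attribute) + r'\b', line):
            # Attribute comment found inside the current test case.
            tests.append(current)
    return tests


print(tagged_tests(SOURCE, ".basic"))  # ['Test something basic']
```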

What is your use case and would any of these solutions work?

@LarsAsplund
Collaborator Author

This PR is replaced by #951 when it comes to adding support for top-level configurations. This PR is kept alive since it also provides functionality for running concurrent simulator threads in different directories. This capability is needed to solve #877 and people are using it as a temporary solution. Once that split of simulator directories has been merged (via another PR) this PR will be closed.

@sjoerss

sjoerss commented Aug 10, 2023

@LarsAsplund
Thanks a lot for your good thoughts on how to handle configurations and test cases.

I have attached a simple diagram of our system-level testbench.
The VHDL configuration is used to instantiate the test suite and different DUT (design under test) combinations.
There is a different test suite for each DUT combination.
Each test suite should contain one or several test cases.
At the moment no individual test cases are possible; the test suite can only be executed as a single test-case run.

The testbench method scan_tests_from_file is an option for me: you can define the name of one (and only one!) test suite file in run.py.
Then the test cases in the test suite are found, and they must be bound to the VHDL configuration.

Could the changes you made in this branch work with my testbench setup?
Shall I test it?

pcbl_testbench.drawio (attached diagram)

@LarsAsplund
Collaborator Author

It is not possible to have test cases in test suites the way you suggest. My recommendation is to have a testbench for each test suite and then put all your BFMs in a single entity that you instantiate in each testbench to reduce copy-and-paste.
