As the Infinispan project grows, there is a need to expand the tests so that they can run under different setups, e.g. with uber jars, with client modules running inside WildFly, with plain jars, etc.

Beyond the different setups, a subset of the tests also needs to be run with different configurations, such as compatibility mode, with/without distribution, with/without replication, with/without transactions, with/without cross-site (xsite) replication, etc.
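As a rough illustration (not code from the existing testsuite), a single test body could be driven across several such configurations, for example with a TestNG data provider. The class name and the particular cache modes below are assumptions made for the sketch:

```java
// Minimal sketch: the same test body runs once per configuration row.
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class PutGetSmokeTest {

   @DataProvider(name = "configurations")
   public Object[][] configurations() {
      // Local, replicated and distributed variants of the same cache; transactional,
      // compatibility mode, xsite, etc. could be added as further rows.
      return new Object[][] {
            {CacheMode.LOCAL},
            {CacheMode.REPL_SYNC},
            {CacheMode.DIST_SYNC}
      };
   }

   @Test(dataProvider = "configurations")
   public void testPutGet(CacheMode mode) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.clustering().cacheMode(mode);
      EmbeddedCacheManager cm = new DefaultCacheManager(
            GlobalConfigurationBuilder.defaultClusteredBuilder().build(), builder.build());
      try {
         Cache<String, String> cache = cm.getCache();
         cache.put("k", "v");
         assert "v".equals(cache.get("k"));
      } finally {
         cm.stop();
      }
   }
}
```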

In the current test setup each module has its own set of tests, so duplication happens easily, and adding new setups to test against means duplicating tests or extending them in ways they were not originally designed for.

On top of the test duplication, a lot of resources are wasted by the sheer number of times caches, cache managers, clusters and sites are started and stopped across the testsuite, which slows the whole testsuite down.

During a recent meeting it was agreed that the following improvements are needed:

  • Move the most relevant of the functional tests into a single testsuite project. Some tooling could be used to determine which tests are the most relevant (perhaps a student project).
  • Define suites for which caches/cache managers/clusters/sites are started once and then run a large batch of tests against that particular setup. The aim is to reduce the time needed to run the tests (see the first sketch after this list).
  • Running all tests in all configurations/setups would likely take too long, so we should consider randomising the number of configurations/setups run. Any randomisation applied must make it possible to backtrack to the exact configuration/setup that was run, so that failures can be debugged. Other open source projects such as Lucene already randomise their tests, so their approach is worth studying (see the second sketch after this list).
  • Randomising could also be used to inject failures and observe how the tests behave under those conditions. Again, being able to backtrack to the cause of a failure would be necessary.
  • Incremental testing possibilities should be considered if available. In essence, incremental testing means that when a change happens, only the tests affected by that change are run. It is unclear how well this would work in the presence of reflection.
  • Randomising test data would also help discover failures, e.g. primitive data, non-serializable data, serializable data, externalizer-marshallable data, nulls, etc. (see the third sketch after this list).
  • It could be desirable to make all of this independent of the test framework used, by creating our own DSL and defining tests in plain text. Sanne has done similar work with ANTLR for one of the Hibernate OGM parsers.
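First, a minimal sketch (assuming TestNG suite-level hooks; the class name, cluster size and holder list are hypothetical) of starting a small cluster once per suite and reusing it across many tests, instead of starting and stopping it per test class:

```java
// Sketch: the cluster lives for the whole suite, shared by every test in it.
import java.util.ArrayList;
import java.util.List;

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public class SharedClusterSetup {

   // Hypothetical holder shared by every test in the suite.
   public static final List<EmbeddedCacheManager> CLUSTER = new ArrayList<>();

   @BeforeSuite
   public void startCluster() {
      ConfigurationBuilder cache = new ConfigurationBuilder();
      cache.clustering().cacheMode(CacheMode.DIST_SYNC);
      for (int i = 0; i < 2; i++) {
         EmbeddedCacheManager cm = new DefaultCacheManager(
               GlobalConfigurationBuilder.defaultClusteredBuilder().build(), cache.build());
         cm.getCache(); // start the default cache so the node joins the cluster
         CLUSTER.add(cm);
      }
   }

   @AfterSuite
   public void stopCluster() {
      CLUSTER.forEach(EmbeddedCacheManager::stop);
      CLUSTER.clear();
   }
}
```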
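Second, a sketch of seed-based randomisation of which configurations/setups get run. Logging the seed (and allowing it to be fixed, e.g. via a hypothetical `test.seed` system property) is what makes a failing run reproducible, which is the backtracking requirement mentioned above:

```java
// Sketch: pick a random subset of configurations, reproducible from a logged seed.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RandomizedSelection {

   public static <T> List<T> pickConfigurations(List<T> all, int howMany) {
      // Fixing -Dtest.seed=<value> replays exactly the same selection.
      long seed = Long.getLong("test.seed", System.nanoTime());
      System.out.println("Configuration selection seed: " + seed);
      List<T> shuffled = new ArrayList<>(all);
      Collections.shuffle(shuffled, new Random(seed));
      return shuffled.subList(0, Math.min(howMany, shuffled.size()));
   }
}
```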
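Third, a sketch of seed-driven test data covering the data categories mentioned in the list (the value classes are invented purely for illustration); the same logged seed would again allow a failure to be reproduced:

```java
// Sketch: each call returns a value from one of several data categories.
import java.io.Serializable;
import java.util.Random;

public class RandomTestData {

   // Non-serializable value type, for illustration only.
   public static class NotSerializable {
      final int value;
      NotSerializable(int value) { this.value = value; }
   }

   // Serializable value type, for illustration only.
   public static class SerializableValue implements Serializable {
      final int value;
      SerializableValue(int value) { this.value = value; }
   }

   public static Object nextValue(Random random) {
      switch (random.nextInt(4)) {
         case 0: return random.nextInt();                        // primitive (boxed)
         case 1: return new SerializableValue(random.nextInt()); // serializable
         case 2: return new NotSerializable(random.nextInt());   // non-serializable
         default: return null;                                    // null
      }
   }
}
```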

Some of the projects that could help with this are: