
Test Project


About

This is a test demo project for experiments with CI integration, test reports, and metrics from testing.

QA Community - https://t.me/AutomationUA (Tests are written by people)

CI/CD Integration Examples (we took two CI systems just to compare):

  • Jenkins
  • GitHub Actions


Unit Testing

Unit Testing Best Practices:

1️⃣ Coverage:

  • The main metric for unit tests: what percentage of the code is covered by tests.
  • It should always be calculated and visible.
  • In my case, coverage is 96.49%.
  • Please add a badge to the repository if possible. It's very important to see coverage WITHOUT diving into build logs 🙏 (a sketch of measuring it follows this list).
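
A minimal sketch of how that number is produced, assuming pytest with the pytest-cov plugin (the module and test below are hypothetical):

```python
# calculator.py -- hypothetical module under test
def add(a: int, b: int) -> int:
    return a + b


# test_calculator.py
def test_add():
    assert add(2, 3) == 5

# Measure coverage and list uncovered lines in the terminal:
#   pytest --cov=calculator --cov-report=term-missing
# Export it too (e.g. --cov-report=xml) so CI can turn the number into a badge.
```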

2️⃣ Mocks:

  • Mocks are often used in unit tests to check the interaction with an external service without actually calling that service.
  • I added an example (see the Mocked Service Test); a sketch of the idea follows this list.
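
As a minimal sketch of the same idea with Python's unittest.mock (the weather service and its client are hypothetical, not the repository's actual Mocked Service Test):

```python
from unittest.mock import Mock


def describe_weather(api) -> str:
    """Code under test: formats data from an external weather API."""
    data = api.get_current("Kyiv")  # would normally be an HTTP call
    return f"{data['city']}: {data['temp_c']}°C"


def test_describe_weather_uses_api_response():
    # Replace the external service with a mock that returns canned data.
    fake_api = Mock()
    fake_api.get_current.return_value = {"city": "Kyiv", "temp_c": 21}

    assert describe_weather(fake_api) == "Kyiv: 21°C"
    # Check the interaction with the service, without ever calling it.
    fake_api.get_current.assert_called_once_with("Kyiv")
```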

3️⃣ Very important:

  • Break the code and check that the test fails. If you've covered the code with a test, broken the code, and the test is still green, you're doing something wrong. An illustration follows below.
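
A hypothetical illustration of the difference between a weak test and one that actually catches breakage:

```python
def discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)


# Weak: stays green even if discount() is broken to `return price`.
def test_discount_runs():
    discount(100, 10)


# Strong: flip the `-` to `+` inside discount() and this goes red, as it should.
def test_discount_value():
    assert discount(100, 10) == 90
```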

API Testing (Example with Trello API)

API Testing Best Practices

  • Design your tests so that the slightest change in the code does NOT require updating ALL of them.
  • Avoid making requests directly in the tests to prevent duplicate code (noise).
  • Instead of returning just a JSON response in the test, parse it into an object.

Example: 🙏

Here we created a separate class responsible for sending requests, and all the logic of the Trello API lives there.
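
A minimal sketch of that shape, assuming the requests library and Trello's REST API (the TrelloClient and Board names are illustrative, not the repository's actual code):

```python
from dataclasses import dataclass

import requests


@dataclass
class Board:
    """A parsed response object, so tests assert against fields, not raw JSON."""
    id: str
    name: str


class TrelloClient:
    """One place for request logic; tests never call requests directly."""

    BASE_URL = "https://api.trello.com/1"

    def __init__(self, key: str, token: str):
        self.auth = {"key": key, "token": token}

    def get_board(self, board_id: str) -> Board:
        response = requests.get(f"{self.BASE_URL}/boards/{board_id}", params=self.auth)
        response.raise_for_status()
        data = response.json()
        return Board(id=data["id"], name=data["name"])
```

A test then reads against the object, e.g. `board = client.get_board("abc123")` followed by `assert board.name == "My Board"`.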


Grafana for Metrics from Testing (In Progress)

Of course, when a project is small and the team is small, metrics may not be needed. But when the team is large and there are many tests, we need to understand how long each test takes, which tests are flaky, and which errors occur in which tests.

I really love DataDog, and we had it in our company, so it was easy to set up there. https://safo-bora-katerina.blogspot.com/2024/02/datadoggrafana-and-metrics.html

For this demo, I took Grafana as an example to experiment with.

What were we interested in? (A sketch of collecting these metrics follows the list.)

  • Which tests take the longest.
  • How often we run tests per day (we knew how much an hour of CI costs, so the time was important to us).
  • How long we wait for the build (and all the tests).
  • The number of tests at each level (unit, API, end-to-end, etc.).
  • PASSED/FAILED/SKIPPED counts.
  • Whether there are flaky tests, and which ones specifically.
  • Whether we rerun tests, and how many times.
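
A minimal sketch of the per-test half of this, assuming pytest: a conftest.py hook records each test's name, outcome, and duration as JSON lines, which some collector would then ship into Grafana's data source (the file name and the shipping step are assumptions):

```python
# conftest.py
import json
import time

METRICS_FILE = "test_metrics.jsonl"  # hypothetical; ship this to your data source


def pytest_runtest_logreport(report):
    """Record one line per test: id, outcome, duration."""
    # Results land in the "call" phase; skipped tests are reported at "setup".
    is_skip = report.when == "setup" and report.outcome == "skipped"
    if report.when != "call" and not is_skip:
        return
    record = {
        "test": report.nodeid,
        "outcome": report.outcome,          # passed / failed / skipped
        "duration_s": round(report.duration, 4),
        "timestamp": time.time(),
    }
    with open(METRICS_FILE, "a") as fh:
        fh.write(json.dumps(record) + "\n")
```

From these records you can chart the longest tests, daily run counts, and rerun/flaky patterns; build-level duration comes from the CI system itself.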
