What is this document? What will I find in it? How is it organized? What topics does it cover? If possible, include anchor links for navigation within the document.
This documentation contains the following sections:
- [Test Approach](#test-approach)
- [Test Deliverables](#test-deliverables)
- [Test Scenarios](#test-scenarios)
- [Test Environment](#test-environment)
- [Approach to Test Automation](#approach-to-test-automation)
- [Running the Test Scripts](#running-the-test-scripts)
- [Test Report](#test-report)
- [Dependencies](#dependencies)
- [Bug Report](#bug-report)
- [References](#references)
How are the tests described (Gherkin, step by step, etc.)?
Where can detailed test cases be found?
How were the tests prioritized? What criteria were used?
What levels of testing will be performed? What were the criteria for this decision?
What types of testing will be performed (load testing, security testing, performance testing, cross-browser testing, etc.)? What were the criteria for this decision?
The test deliverables are:
- Feature files, with descriptions of the test cases and scenarios;
- [LANGUAGE] codebase using [FRAMEWORK];
- [FORMAT] reports located in [PATH];
- Test evidence in [FORMAT] located in [PATH];
- Execution scripts using [SCRIPT LANGUAGE USED (MAKE, RAKE, ETC.)], so that the tests can be integrated into a pipeline;
- Defect reports, tracked in [BUG TRACKING TOOL]. Link to access the bug tracking tool here.
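As an illustration of the feature-file deliverable, a minimal Gherkin sketch for the sign-in scenarios could look like the following. The file layout and step wording are assumptions for illustration only, not the project's actual feature file:

```gherkin
Feature: Sign in
  Scenarios covering the sign-in behavior described in CT-001 and CT-002.

  Scenario: Valid email and valid password (CT-001)
    Given I am on the sign-in page
    When I enter a valid email and a valid password
    And I click "Sign in"
    Then I should be signed in successfully

  Scenario: Invalid email and valid password (CT-002)
    Given I am on the sign-in page
    When I enter an invalid email and a valid password
    And I click "Sign in"
    Then I should see an authentication error message
```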
The test scenarios were modeled based on the following techniques:
- Equivalence Partitioning
- Boundary Testing
- Decision Table
- State Transition Testing
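To make the techniques concrete, the sketch below applies equivalence partitioning and boundary value analysis to a hypothetical password-length rule. The 8- and 64-character limits are assumptions for illustration only:

```python
# Hypothetical validation rule (assumed for illustration): a password is
# accepted when its length is between 8 and 64 characters, inclusive.
MIN_LEN, MAX_LEN = 8, 64


def password_length_ok(password: str) -> bool:
    """Equivalence classes: too short / valid / too long."""
    return MIN_LEN <= len(password) <= MAX_LEN


# Boundary value analysis: test just below, on, and just above each boundary.
boundary_cases = {
    "a" * (MIN_LEN - 1): False,  # just below the lower boundary
    "a" * MIN_LEN: True,         # on the lower boundary
    "a" * (MIN_LEN + 1): True,   # just above the lower boundary
    "a" * (MAX_LEN - 1): True,   # just below the upper boundary
    "a" * MAX_LEN: True,         # on the upper boundary
    "a" * (MAX_LEN + 1): False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert password_length_ok(value) == expected, f"failed at length {len(value)}"
```

Each equivalence class needs only one representative value, while the boundaries get three checks each, which is where off-by-one defects usually hide.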
ID | Description | Technique |
---|---|---|
[CT-001] | Check system behavior when a valid email and password are entered. | |
[CT-002] | Check system behavior when an invalid email and a valid password are entered. | |
[CT-003] | Check system behavior when an invalid email and an invalid password are entered. | |
[CT-004] | Check system behavior when the email and password are left blank and Sign in is clicked. | |
[CT-005] | Check that "Forgot your password" works as expected. | |
Below are the tools and platforms used to develop the testing strategy and the automated test code:
Item | Description | Version | Note |
---|---|---|---|
IntelliJ IDEA | IDE | Latest | N/A |
Google Chrome | Testing browser | 87.0.4280.88 | N/A |
Pa11y | Accessibility testing automation tool | 5.0.0 | N/A |
GitHub | Code repository and documentation | N/A | N/A |
The automation tool was chosen based on the following criteria:
(Template note: delete the criteria that do not apply and keep only those that were actually used.)
- Ease of Developing and Maintaining the Scripts: developing and maintaining test scripts should be as simple as possible, to reduce the human and time resources required.
- Ease of Test Execution for Non-Technical Users: running the test suite should be simple enough for any project member to do as and when required, including manual testers with little or no technical knowledge.
- Support for Web, Desktop & Mobile Applications: juggling three different tools for three types of platform is complicated to manage, so it is better to select a tool that supports all three.
- Intuitive Test Reports: test reports build confidence, so they need to be intuitive and simple enough for the management team to understand easily.
- Cross-Browser Testing: support for cross-browser testing is a must when there are multiple end users and no particular browser restriction.
- Support for Keyword- & Data-Driven Testing: keyword-driven testing acts as an extension of the data-driven framework; as a project becomes more complex, the test framework needs to be extended.
- Technical Support and Assistance: automation engineers will need help when handling critical problems in a project, so a tool that provides technical support and assistance is a great help.
- Language Support (C#, Java, Python, and others): not every test scenario can be recorded; in some cases the tester must write code, so a tool that supports the required language for customized scripts is helpful.
- TFS/DevOps Integration with Builds: support for integration with continuous integration tools, for automated builds and deployments, is necessary.
- Pricing: given the qualities above and the project cost estimates, evaluate the cost difference among the available automation tools.
Below are the criteria used to decide which tests are good candidates for automation.
(Template note: delete the criteria that do not apply and keep only those that were actually used. How do you choose which tests to automate and which to leave for manual testing?)
Tests that should be automated:
- Business-critical paths: the features or user flows that, if they fail, cause considerable damage to the business.
- Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
- Tests that need to run against multiple configurations: different OS and browser combinations.
- Tests that execute the same workflow with different input data on each run (data-driven tests).
- Tests that involve inputting large volumes of data, such as filling in very long forms.
- Tests that can be used for performance testing, like stress and load tests.
- Tests that take a long time to perform and may need to be run during breaks or overnight.
- Tests during which images must be captured to prove that the application behaved as expected, or to check that many web pages look the same across multiple browsers.
- Generally speaking, the more repetitive the test run, the better it is for automation.
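The data-driven case above can be sketched as a parametrized test: one workflow, several input rows. The `sign_in` stub and its credentials are placeholders standing in for the real application call, and pytest is assumed as the runner:

```python
import pytest


def sign_in(email: str, password: str) -> bool:
    """Stub for the real sign-in call -- credentials assumed for illustration."""
    return email == "user@example.com" and password == "s3cret"


# One test function, four data rows: each row becomes a separate test case,
# so a failure report pinpoints exactly which input combination broke.
@pytest.mark.parametrize(
    "email, password, expected",
    [
        ("user@example.com", "s3cret", True),    # valid email, valid password
        ("wrong@example.com", "s3cret", False),  # invalid email, valid password
        ("wrong@example.com", "nope", False),    # invalid email, invalid password
        ("", "", False),                          # both fields left blank
    ],
)
def test_sign_in(email, password, expected):
    assert sign_in(email, password) is expected
```

Adding a new input combination then means adding one row of data, not one more copy of the workflow.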
Tests that should not be automated:
- Tests that you will run only once. The only exception is a test that exercises a very large set of data: even if it runs only once, it makes sense to automate it.
- User experience tests for usability (tests that require a user to judge how easy the app is to use).
- Tests that need to be run as soon as possible. A newly developed feature usually requires quick feedback, so it is faster to test it manually at first.
- Tests that require ad hoc/random testing based on domain knowledge/expertise (exploratory testing).
- Intermittent tests. Tests without predictable results cause more noise than value; to get the best value out of automation, tests must produce predictable and reliable pass/fail results.
- Tests that require visual confirmation; however, we can capture page images during automated testing and then have the images checked manually.
- Tests that cannot be 100% automated should not be automated at all, unless doing so saves a considerable amount of time.
- CT-001 - Check system behavior when a valid email and password are entered.
- CT-002 - Check system behavior when an invalid email and a valid password are entered.
- CT-003 - Check system behavior when an invalid email and an invalid password are entered.
- CT-004 - Check system behavior when the email and password are left blank and Sign in is clicked.
- CT-005 - Check that "Forgot your password" works as expected.
The scripts below perform the following tasks:
Description | Command |
---|---|
Install dependencies | `make setup` |
Run tests on Chrome | `make e2e_tests_on_chrome` |
Run tests on Firefox | `make e2e_tests_on_firefox` |
Run tests on Safari | `make e2e_tests_on_safari` |
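For reference, a minimal sketch of how such a Makefile might be laid out, assuming a Python/pytest stack; the underlying commands, the `BROWSER` variable, and the `tests/e2e` path are assumptions to be adapted to the project's actual framework:

```makefile
# Install project dependencies (assumed Python stack)
setup:
	pip install -r requirements.txt

# Run the end-to-end suite against a specific browser; the BROWSER
# variable is read by the (assumed) test configuration.
e2e_tests_on_chrome:
	BROWSER=chrome pytest tests/e2e

e2e_tests_on_firefox:
	BROWSER=firefox pytest tests/e2e

e2e_tests_on_safari:
	BROWSER=safari pytest tests/e2e
```

Keeping one target per browser lets a CI pipeline invoke each configuration as a separate job.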
Below is an image of the test execution report:
The dependencies used in the functional testing automation project are:
Tool | Version |
---|---|
TOOL1 | 2.7.0 |
TOOL2 | 3.1.2 |
TOOL3 | 3.32.1 |
TOOL4 | 3.4.2 |
TOOL5 | 3.142.7 |
Bug ID: [001]
- Severity: High
- Priority: High
- Reported By: me
- Reported On: 10/01/2021
- Status: New
- Environment: Safari
- Description: Describe here a brief summary of the problem.
- Steps to reproduce the problem: Describe here the sequence of steps necessary to reproduce the problem:
- Step 1
- Step 2
- Step 3
- Step N
- Obtained Result: following the steps described above, how did the system actually behave? What is the evidence of the malfunction? Add screenshots, videos, logs, and anything else that documents the current wrong state of the system.
- Expected Result: following the steps described above, what should have happened? How should the system have behaved? Include evidence that justifies this conclusion (e.g., the text of the documentation you used to reach it).
Bug ID: [002]
- Severity: High
- Priority: High
- Reported By: me
- Reported On: 10/01/2021
- Status: Fixed
- Environment: Chrome
- Description: Describe here a brief summary of the problem.
- Steps to reproduce the problem: Describe here the sequence of steps necessary to reproduce the problem:
- Step 1
- Step 2
- Step 3
- Step N
- Obtained Result: following the steps described above, how did the system actually behave? What is the evidence of the malfunction? Add screenshots, videos, logs, and anything else that documents the current wrong state of the system.
- Expected Result: following the steps described above, what should have happened? How should the system have behaved? Include evidence that justifies this conclusion (e.g., the text of the documentation you used to reach it).
Important links that supported the development of this testing strategy:
- Make a README
- How to create Test Strategy Document (Sample Template)
- How To Write Test Strategy Document (With Sample Test Strategy Template)
- Online Markdown Editor - Dillinger, the Last Markdown Editor ever
- What is Test Scenario? Template with Examples
- Boundary Value Analysis and Equivalence Partitioning Testing
- Decision Table Testing: Learn with Example
- What is State Transition Testing? Diagram, Technique, Example
- How to Choose Which Test to Automate?
- Markdown Cheatsheet
- 4 Simple Steps to Select the Right Test Automation tool for your Project
- Template used in this document